Theory of Cooperation

We study the cognitive and mathematical principles of cooperation and their application to systems in a wide range of fields.

Real-Imaginary Loops

Mutually Understanding Intelligence

Developing Intelligence

Language Acquisition as Mutual Belief Forming

Equilibrium Selective Role Coordination for Autonomous Driving

Driving a car is a social activity with egocentric and cooperative aspects. The egocentric aspect is characterized by self-adaptive behaviors toward the surrounding environment while following traffic rules. The cooperative aspect is characterized by mutually adaptive behaviors with other agents (e.g., vehicles and bicycles). While automated driving is rapidly improving in the egocentric aspect, problems remain to be solved in the cooperative aspect. This study presents a mutually cooperative motion control method.
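Role coordination can be illustrated as equilibrium selection in a simple game. The sketch below is a hypothetical 2x2 merge scenario with invented payoffs and an invented tie-breaking convention, not the method of the paper: it enumerates the pure-strategy Nash equilibria ("go"/"yield" role assignments) and selects one by a convention both vehicles share.

```python
# Illustrative two-vehicle role-coordination game (hypothetical payoffs).
ACTIONS = ("go", "yield")

# PAYOFF[(a, b)] = (payoff to vehicle A, payoff to vehicle B)
PAYOFF = {
    ("go", "go"):       (-10, -10),  # collision risk
    ("go", "yield"):    (2, 1),
    ("yield", "go"):    (1, 2),
    ("yield", "yield"): (-1, -1),    # mutual hesitation / deadlock
}

def pure_nash_equilibria():
    """Enumerate pure-strategy Nash equilibria: action pairs where
    neither vehicle can gain by unilaterally changing its role."""
    eqs = []
    for a in ACTIONS:
        for b in ACTIONS:
            ua, ub = PAYOFF[(a, b)]
            best_a = all(ua >= PAYOFF[(a2, b)][0] for a2 in ACTIONS)
            best_b = all(ub >= PAYOFF[(a, b2)][1] for b2 in ACTIONS)
            if best_a and best_b:
                eqs.append((a, b))
    return eqs

def select_equilibrium(eqs, a_closer_to_merge):
    """Shared selection convention (assumed here): the vehicle closer
    to the merge point takes the 'go' role."""
    preferred = ("go", "yield") if a_closer_to_merge else ("yield", "go")
    return preferred if preferred in eqs else eqs[0]
```

The game has two pure equilibria, ("go", "yield") and ("yield", "go"); coordination fails unless both agents select the same one, which is why a shared selection rule is needed.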


  • Naoto Iwahashi. “Equilibrium Selective Role Coordination for Autonomous Driving,” IEEE Int. Conf. Awareness Science and Technology, 2019.

Physics Projection

This paper presents a new approach named physics projection, through which robots can learn the physical world and predict the effects of their actions actively and online. Physics projection consists of three components: a robot, a physical world model, and a physics engine. The process of physics projection has a double-loop structure comprising (1) a learning loop of the physical world model and (2) a simulation search loop. Experiments were performed using the TurtleBot3 mobile robot and the Unity graphics engine. The results clearly showed that the robot predicted the effects of its various actions under the given physical conditions and successfully executed the tasks of carrying a wine glass without dropping it and a cup filled with water without spilling it. The robot could predict a catastrophic effect that could not be predicted by a human operator.
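The double-loop structure can be sketched in miniature. This toy example assumes a 1D "carry a glass" task with a single unknown tipping threshold; it is not the actual TurtleBot3/Unity implementation. An inner simulation-search loop selects the fastest action the current world model predicts is safe, and an outer learning loop refines the model parameter from the outcomes observed in the real world.

```python
def simulate(accel, tip_threshold):
    """Physics-engine stand-in: the glass survives if the commanded
    acceleration stays below the tipping threshold."""
    return accel <= tip_threshold

def plan(candidate_accels, est_threshold, margin=0.0):
    """Simulation-search loop: try each candidate action in the learned
    model and pick the fastest one predicted to be safe."""
    safe = [a for a in candidate_accels if simulate(a, est_threshold - margin)]
    return max(safe) if safe else min(candidate_accels)

def physics_projection(true_threshold, steps=20):
    """Learning loop: act, observe the real outcome, and shrink the
    bounds on the unknown model parameter (the tipping threshold)."""
    lo, hi = 0.0, 2.0                    # prior bounds on the threshold
    actions = [0.2, 0.5, 1.0, 1.5]
    for _ in range(steps):
        est = (lo + hi) / 2              # current world-model estimate
        a = plan(actions, est)
        if simulate(a, true_threshold):  # outcome in the "real" world
            lo = max(lo, a)              # action was safe: raise lower bound
        else:
            hi = min(hi, a)              # glass tipped: lower upper bound
    return (lo + hi) / 2
```

The estimate converges to a bracket around the true threshold that is as tight as the discrete action set allows, after which the robot reliably picks the fastest safe action.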


  • Naoto Iwahashi, Hideaki Negoro, Soichiro Kawano. “Physics Projection,” The 33rd Annual Conference of the Japanese Society for Artificial Intelligence, June 2019. PDF

Developing Intelligence to Mutually Understand

For a robot to engage in everyday linguistic communication that shares experiences with humans, the challenge is how to realize a mechanism by which the robot adaptively learns, through interaction with humans and the environment, a language system as a whole, including its relations to cognitive functions such as the sensorimotor system. To address this challenge, our laboratory is pursuing a developmental approach to human-robot interaction research, an approach entirely different from conventional ones. Based on this approach, we have developed methods for learning multimodal communication abilities combining action and language. With these methods, a robot that initially possesses no symbolic information such as language learns words, object concepts, and actions online and incrementally through joint activities with humans involving utterances and actions. As a result, the robot becomes able, according to the situation, to interpret human utterances, manipulate everyday objects and stuffed toys, make manipulation requests and confirmation utterances to humans, and answer questions.
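As a minimal illustration of learning words online and incrementally with no initial symbols, consider a toy cross-situational learner. This sketch is far simpler than the lab's multimodal method: it merely accumulates word-object co-occurrence counts over shared scenes and reads off the most strongly associated object as a word's meaning.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy cross-situational word learner (illustrative only): starts
    with no symbolic knowledge and updates online from each shared
    scene paired with an utterance."""

    def __init__(self):
        # counts[word][object] = number of co-occurrences observed so far
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance_words, visible_objects):
        """One joint-attention episode: every word in the utterance
        co-occurs with every object currently in view."""
        for w in utterance_words:
            for o in visible_objects:
                self.counts[w][o] += 1

    def meaning(self, word):
        """Current best guess: the object most often seen with the word."""
        objs = self.counts[word]
        return max(objs, key=objs.get) if objs else None
```

Ambiguity in any single scene (several objects in view) is resolved across scenes, since only the true referent co-occurs with the word consistently.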


  • 岩橋直人. “Language Acquisition by Robots: Toward a New Paradigm of Language Processing,” Journal of the Japanese Society for Artificial Intelligence, vol.18, no.1, pp.49-58, 2003. PDF
  • N. Iwahashi. “Robots That Learn Language: A Developmental Approach to Situated Human-Robot Conversations” in Human-Robot Interaction, N. Sarkar, Ed. Vienna: I-Tech Education and Publishing, 2007, pp.95-118.
  • N. Iwahashi, K. Sugiura, R. Taguchi, T. Nagai and T. Taniguchi, “Robots That Learn to Communicate: A Developmental Approach to Personally and Physically Situated Human-Robot Conversations,” in Proc. AAAI Fall Symposium on Dialog with Robots, 2010, pp.38-43.
  • 杉浦孔明, 岩橋直人, 柏岡秀紀, 中村哲. “Object Manipulation Dialogue Based on Estimation of Utterance Understanding Probability by a Language-Acquiring Robot,” Journal of the Robotics Society of Japan, vol.28, no.8, pp.978-988, 2010.

Neural Network Model of Human-Robot Multimodal Language Interaction




  • 守屋綾祐, 高渕健太, 岩橋直人. “Neural Network Model for Human-Robot Multimodal Linguistic Interaction,” HAI Symposium, 2017. PDF

Recurrent Grad-CAM: Spatiotemporal Localization for Explaining Videos

Artificial intelligence systems are outperforming humans in an increasing number of tasks. Effective collaboration between artificial intelligence systems and humans is increasingly being pursued, for which mechanisms that enable human understanding of and trust in such systems are important. However, because the systems are constructed through machine learning in a highly abstracted manner, it is difficult for humans to understand why the systems make particular decisions in individual situations. The understanding, visualization, and interpretation of artificial intelligence systems have therefore been attracting attention.
In this context, Selvaraju et al. proposed a visual explanation technique called gradient-weighted class activation mapping (Grad-CAM), which enables a system built on a convolutional neural network (CNN) to present humans with the spatial portions of the input that are important for its decisions. However, because Grad-CAM had previously been applied only to feedforward CNNs, it is difficult for the technique to handle tasks with time-series input.
We proposed recurrent Grad-CAM, an extension of conventional Grad-CAM that applies the gradient-based localization principle to a recurrent CNN, addressing the aforementioned difficulty.
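The core localization computation can be sketched in a few lines. In this toy version, plain Python lists stand in for tensors; the real technique operates on CNN feature maps and gradients back-propagated from the class score, through the recurrent connections in the recurrent case. Each channel's weight is the global average of its gradient, and the heatmap is the rectified weighted sum of the activation maps, applied per time step.

```python
def grad_cam(activations, gradients):
    """Grad-CAM for one time step. `activations` and `gradients` are
    [K][H][W] nested lists (K channels of an H x W feature map).
    Channel weight = global average of its gradient; heatmap = ReLU of
    the weighted sum of activation maps."""
    K, H, W = len(activations), len(activations[0]), len(activations[0][0])
    heat = [[0.0] * W for _ in range(H)]
    for k in range(K):
        w = sum(sum(row) for row in gradients[k]) / (H * W)
        for i in range(H):
            for j in range(W):
                heat[i][j] += w * activations[k][i][j]
    return [[max(0.0, v) for v in row] for row in heat]

def recurrent_grad_cam(activation_seq, gradient_seq):
    """Recurrent extension (sketch): the same localization is applied at
    every time step of the video, yielding a spatiotemporal heatmap."""
    return [grad_cam(a, g) for a, g in zip(activation_seq, gradient_seq)]
```

The ReLU keeps only regions whose activation increases the class score, which is what makes the resulting heatmaps class-discriminative.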

Figure: Resultant heatmap examples: original video and recurrent Grad-CAM


  • N. Yamashita, N. Iwahashi, S. Nakano, T. Sakai and M. Hamano. “Visually Explaining Videos using Recurrent Neural Networks with Gradient-Based Localization,” The 80th National Convention of Information Processing Society of Japan, Mar. 2018.

Synchronizing Intelligence

Time synchronization is an important aspect of human cognitive capability. We developed a machine learning method by which a robot, by observing cooperative actions between humans, understands the rules and goals underlying their behavior and becomes able to perform the same cooperative actions with humans.


  • 押川慧, 中村友昭, 長井隆行, 岩橋直人, 船越孝太郎, 金子正秀. “Unsupervised Rule Learning from Human Interactions,” Annual Conference of the Japanese Society for Artificial Intelligence, Mar. 2017.
  • 佐々木友弥, 岩橋直人, 船越孝太郎, 中野幹生, 押川慧, 中村友昭, 長井隆行. “Learning and Generation of Cooperative Actions with MDL Coupled HMMs,” National Convention of the Information Processing Society of Japan, Mar. 2018. PDF

Neural Network Model of Human Mental Disorder


Individual and Social Mutual Beliefs Models


Cloud Based Developmental Robotics



Unsupervised Natural Communicative Learning



  • 山本一馬, 石田卓也, 岩橋直人, Ye Kyaw Thu, 國島丈生. “A Method for Detecting Utterances Symbol-Grounded to Objects in View from Human-Robot Chat,” HAI Symposium, P-36, 2016.
  • 石田卓也, 山本一馬, 岩橋直人, Ye Kyaw Thu, 中村友昭, 長井隆行, 國島丈生. “Symbol Grounding in Chat: Detecting Symbol-Grounded Utterances with SVM and Grounding Symbols to Objects Using MHDP and CRF,” Annual Conference of the Japanese Society for Artificial Intelligence, Mar. 2017.
  • Ye Kyaw Thu, T. Ishida, N. Iwahashi, T. Nakamura, T. Nagai. “Symbol Grounding from Natural Conversation for Human-Robot Communication,” Int. Conf. Human Agent Interaction, Nov. 2017.

Context Dependent Intentional Motion

In the image shown below, the motion of a stuffed toy being moved by a human is an example of the concept "jump over" if the stationary stuffed toy at the center of the frame is taken as the reference point, and an example of the concept "ride on" if the box on the right is taken as the reference point. Some concepts of spatial motion thus depend on a reference point. We proposed the Reference-Point-Dependent hidden Markov model (RPD-HMM), a method for learning, recognizing, and generating such reference-point-dependent motions, and further developed a higher-performance deep learning technique.
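The dependence on the reference point can be made concrete with a small sketch: the trajectory is first re-expressed in reference-point-relative coordinates, and the label then depends on which reference point was chosen. Here a toy geometric rule stands in for the RPD-HMM likelihood, and the coordinates and thresholds are invented for illustration.

```python
def to_reference_frame(trajectory, reference_point):
    """Express an observed (x, y) trajectory relative to a candidate
    reference point -- the preprocessing step behind reference-point-
    dependent motion models."""
    rx, ry = reference_point
    return [(x - rx, y - ry) for x, y in trajectory]

def classify(trajectory, reference_point):
    """Toy stand-in for model likelihoods: 'ride-on' if the relative
    path ends at the reference point, 'jump-over' if it passes above it."""
    rel = to_reference_frame(trajectory, reference_point)
    end_x, end_y = rel[-1]
    if abs(end_x) < 0.5 and abs(end_y) < 0.5:
        return "ride-on"
    crossed = min(x for x, _ in rel) < 0 < max(x for x, _ in rel)
    above = all(y > 0 for x, y in rel if abs(x) < 0.5)
    if crossed and above:
        return "jump-over"
    return "other"
```

The same physical motion receives different concept labels purely because the relative coordinates change with the chosen reference point, which is exactly the ambiguity the RPD-HMM is designed to model.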


  • 羽岡哲郎, 岩橋直人. “Learning Reference-Point-Dependent Concepts of Spatial Motion for Language Acquisition,” IEICE Technical Report PRMU2000-105, 2000, pp.39-46. PDF
  • K. Sugiura, N. Iwahashi, H. Kashioka and S. Nakamura. “Learning, Generation and Recognition of Motions by Reference-Point-Dependent Probabilistic Models,” Advanced Robotics, vol.25, no.6-7, pp.825-848, 2011.
  • 深井海生, 武井豪介, 高渕健太, 岩橋直人, Ye Kyaw Thu, 國島丈生. “Recognition of Reference-Point-Dependent Motions Using LRCN,” Annual Conference of the Japanese Society for Artificial Intelligence, Mar. 2017. PDF