TERAIS seminar on human-robot interaction - Omar Eldardeer, Marco Matarese (11.4.2025)
Friday, 11 April 2025, at 11:30 in room I 9
Speaker: Dr. Omar Eldardeer (Italian Institute of Technology, Genoa, Italy)
Title: Towards Human-Robot Shared Perception
Abstract:
With the increasing presence of robots in our daily lives, the need for seamless human-robot collaboration has become more critical than ever. A fundamental aspect of this collaboration is shared perception—the ability of both humans and robots to develop a mutual understanding of their environment to support effective joint action. In this talk, we explore the theoretical foundations necessary for achieving shared perception and discuss its key requirements.
Furthermore, we delve into biologically inspired cognitive architectures that enhance robotic perception, drawing insights from human cognition to improve how robots interpret, learn from, and respond to dynamic environments. By integrating principles from neuroscience and artificial intelligence, these architectures pave the way for more intuitive and natural human-robot interactions. Through this discussion, we aim to highlight the challenges, opportunities, and future directions in the quest for more intelligent and collaborative robotic systems.
Speaker: Dr. Marco Matarese (Italian Institute of Technology, Genoa, Italy)
Title: Friend or Foe? The Role of Socially Skilled Robots in Explainable AI
Abstract:
As artificial intelligence (AI) systems become more prevalent in high-stakes decision-making and collaborative environments, the explainability of such systems remains a critical challenge. Integrating social skills into explainable AI (XAI) represents a promising step toward more effective human-AI collaboration. Socially capable robots can enhance trust, engagement, and comprehension by delivering explanations in ways that align with human cognitive and social processes.
In this talk, we examine the potential of socially skilled robots in XAI by highlighting the positive effects of social explanations on human-robot collaboration. By leveraging natural communication patterns, such as verbal and non-verbal cues, these systems can facilitate better decision-making and performance in complex tasks.
However, we also explore the potential downsides of training assisted by explainable artificial agents, since implementing such agents is not without risks. Excessive reliance on AI explanations can reduce learning and problem-solving effort, fostering user complacency and diminishing long-term learning outcomes. Additionally, social reciprocity mechanisms may lead users to accept explanations uncritically, even when they are flawed or misleading.
By examining these dynamics, we aim to foster a discussion on how to design AI systems that balance clarity, engagement, and user autonomy, ensuring that explanations support rather than hinder human learning and decision-making.