International Workshop on Ontologies for Autonomous Robotics (RobOntics) @ AIxRobotics, Naples, Italy (hybrid event – online participation also possible)
We encourage researchers interested in the fields of robotics and knowledge engineering to submit research and position papers by October 15th. Researchers with accepted papers will be invited to present at the RobOntics 2025 workshop.
WORKSHOP MOTIVATION
RobOntics focuses on autonomous agents informed by knowledge-driven approaches, with particular emphasis on formal systems like ontologies and knowledge graphs, and their integration with neural architectures. The workshop aims to foster interaction across robotics, ontology, knowledge representation and reasoning, and neural computing, in order to investigate promising cognitively-inspired knowledge-based architectures and neuro-symbolic approaches, and to review progress in knowledge-driven robotics.
Today, the convergence of symbolic knowledge representation and neural learning is reshaping robotics and standardization efforts for intelligent robotic systems. Many open problems involve autonomous agents that must seamlessly integrate symbolic reasoning with neural perception and learning while operating in natural, artificial, or socio-technical environments. Research projects in healthcare assistance, logistics, autonomous driving, and human-robot collaboration increasingly require robots that can both learn from experience and reason with explicit knowledge in realistic human environments.
One of the key challenges is developing cognitively-inspired architectures that combine the flexibility of neural learning with the interpretability and structure of symbolic knowledge. Robotic agents need reactive world models that can rapidly adapt to dynamic environments while maintaining coherent knowledge representations. Further, interaction capabilities driven by explicit and commonsense knowledge must enable robots to understand, predict, and respond to human intentions and behaviors in natural, contextually appropriate ways. Such knowledge should be reusable across different agents and scenarios while remaining accessible and modifiable by human operators.
To garner trust, ensure dependability, and enable effective human-robot collaboration, hybrid intelligence systems must provide transparent explanations of their reasoning processes and offer intuitive interfaces for knowledge inspection and modification.
Special Topic of RobOntics 2025
This edition of RobOntics is particularly interested in three interconnected themes:
Neuro-symbolic architectures for autonomous agents: Exploring architectures that integrate neural networks with symbolic reasoning systems, enabling robots to learn from data while maintaining interpretable knowledge structures and logical reasoning capabilities.
Cognitive-inspired world models for knowledge-driven reactive agents: Investigating how cognitive principles can inform the development of dynamic world models that support rapid, knowledge-informed reactions to environmental changes while maintaining consistency with learned and encoded knowledge.
Knowledge-driven human-robot interaction: Examining how ontological knowledge representation can enhance robots’ understanding of human behavior, intentions, and social contexts to enable more natural, effective, and trustworthy human-robot collaboration.
IMPORTANT DATES
- Submission deadline: October 15th, 2025
- Notification: November 7th, 2025
- Camera ready version: November 28th, 2025
- Workshop: TBA (December 8th-10th, 2025)
LIST OF TOPICS (partial)
Participants are invited to submit original papers (5-8 pages + references) on the topics of particular interest described above, but contributions are also welcome on topics such as:
- Foundational issues:
- Are some ontological approaches better suited than others for autonomous robotics? Why?
- How should we ontologically model notions like capability, action, interaction, context, etc. in robotics?
- How can ontology be used to model culture, cultural knowledge and cultural behavior?
- Robustness:
- How can ontologies be used to help robots cope with the variety and relatively fluid structure of human environments?
- Ontologies in the perception-action loop:
- What roles can ontology play in autonomous manipulation?
- How can ontology be used to support machine learning for object classification?
- Interactivity:
- How can knowledge about other agents present in the environment be modelled?
- How can ontology be used to model the flow of an interaction, e.g., in the case of shared tasks?
- Normed behavior:
- How can we ontologically represent norms and cultural expectations?
- How can expectations be acquired? Would they be the same for robots and for humans?
- Explainability:
- Decision chains are very complex; how can they be organized and presented at various levels of detail for the benefit of a human user? What is an explanation? What is a good explanation? How can it be generated from a collection of knowledge items?
WORKSHOP CO-CHAIRS (alphabetical order)
- Daniel Beßler, Institute for Artificial Intelligence, University of Bremen, Germany
- Stefano De Giorgis, Department of Artificial Intelligence, Vrije Universiteit Amsterdam, Netherlands
- Mihai Pomarlan, Digital Media Lab, University of Bremen, Germany
- Robin Nolte, Digital Media Lab, University of Bremen, Germany
- Nikos Tsiogkas, KU Leuven, Belgium
SUBMISSION INFORMATION
Besides regular papers, position and survey papers are also welcome. All contributions to the workshop must be submitted according to the CEUR format (LaTeX template available on Overleaf). More information about CEUR-style submissions is available here.
Papers will be refereed and accepted on the basis of their merit, originality, and relevance. Each paper will be reviewed by at least two Program Committee members. Papers must be submitted electronically in PDF, using this link:
PUBLICATION
Accepted works will be published in an open access CEUR volume as part of the new IAOA series (see http://ceur-ws.org/iaoa.html).
(Banner image by Frédéric Bisson, uploaded to Wikimedia Commons under the Creative Commons Attribution 2.0 Generic license.)