Trustworthy Multimodal, Generative, and Reinforcement Learning for Intelligent Systems
Organizers
- Yushuai Li, Aalborg University, yusli@cs.aau.dk
- Zhongming Yao, Aalborg University, zyao@cs.aau.dk
- Min Zhang, University of Waikato, zhang@waikato.ac.nz
Description
Recent advances in artificial intelligence are transforming intelligent systems by enabling the integration of perception, computation, and decision-making in complex environments. Representative domains include cyber-physical energy systems and intelligent healthcare systems, which require high levels of reliability, adaptability, and safety. These systems often rely on heterogeneous multimodal data, such as sensor signals, images, and text, and are increasingly deployed in safety-critical scenarios. However, several challenges remain. Heterogeneous data and semantic gaps across modalities hinder effective information fusion, while the dynamic and non-stationary nature of real-world environments often leads to performance degradation and limited generalization under evolving tasks or data distributions. In addition, stringent requirements for reliability, robustness, interpretability, and security further complicate deployment in high-stakes domains.
To address these challenges, trustworthy learning paradigms are needed to enhance perception, modeling, and decision-making. In this context, multimodal learning, generative modeling, and reinforcement learning have emerged as key approaches. This special session focuses on recent advances in these areas for trustworthy intelligent systems, including novel algorithms, robust and secure learning methods, interpretable models, and real-world applications in complex and safety-critical environments.
