KEYNOTE SPEECH 1

Spear and Shield: AIGC, Disinformation, and Detection

Jiebo Luo

Albert Arendt Hopeman Professor of Engineering, University of Rochester

Abstract: Social media has become a major platform for news dissemination and shaping public opinion. A notable example is the recent U.S. presidential elections, during which candidates and their supporters actively utilized these platforms to campaign and voice their views. However, the accessibility and openness of social media have also made it a fertile ground for the spread of misinformation, rumors, and fabricated news, raising significant public concern. At the same time, rapid developments in AI-generated content (AIGC) have drastically lowered the barriers to producing multimedia-based disinformation. To enhance the credibility of information shared online and curb the circulation of fake content, it is essential to develop automated methods for detecting both misinformation and disinformation. In this context, we present our related research and examine the challenges associated with content generation and detection.

Biography: Professor Jiebo Luo has had a distinguished career, spanning both the University of Rochester and Kodak Research Laboratories. He has published over 600 technical papers and holds more than 90 U.S. patents. His research interests encompass computer vision, natural language processing, machine learning, data mining, computational social science, and digital health. Professor Luo has served on the editorial boards of numerous leading scientific journals, and notably held the position of Editor-in-Chief of IEEE Transactions on Multimedia (TMM) from 2020 to 2022. He has also served as general chair and program chair of major conferences, including IEEE CVPR (2012) and ACM Multimedia (2010, 2018). Professor Luo is recognized as a Fellow of ACM, AAAI, IEEE, AIMBE, IAPR, and SPIE, and is a Member of Academia Europaea as well as the U.S. National Academy of Inventors.


KEYNOTE SPEECH 2

AI for the Social Sciences

Jonathan Zhu

Chair Professor, City University of Hong Kong

Abstract: Social computing serves as a bridge between computing sciences and social sciences. The emergence of AI for social sciences marks a new phase in the evolution of social computing. This talk will explore the transformative impact of large language models (LLMs) and other AI technologies on social science research—spanning research design, hypothesis generation, data collection, analysis, and interpretation. The presentation will also address the significant benefits and challenges associated with applying AI in the social sciences, and discuss possible future directions for the field.

Biography: Jonathan Zhu is a Chair Professor of Computational Social Science at City University of Hong Kong, with a joint appointment between the Department of Media and Communication and the Department of Data Science. His current research focuses on developing, using, and evaluating AI technologies for social scientists. He has published in leading journals across various fields, including communication, political science, sociology, computer science, information science, and medical informatics. He is an elected Fellow of the International Communication Association (ICA).


KEYNOTE SPEECH 3

Empowering Software Security with Large Language Models

Ho Chen

Professor, University of Hong Kong

Abstract: Fuzz testing, or fuzzing, is a mainstay technique in software security because it can find vulnerabilities automatically. However, fuzzing struggles to achieve high coverage of the vast state space of complex software, such as library APIs. I will describe how we use large language models (LLMs) to explore library code comprehensively. We iteratively use coverage feedback to mutate the prompts to the LLM to generate fuzz drivers, thereby improving coverage of the library code. This approach has proven effective at finding real vulnerabilities in complex, popular libraries.
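The coverage-feedback loop described in the abstract can be sketched as follows. This is a minimal illustration, not the speaker's actual system: `query_llm`, `ALL_APIS`, and the toy coverage measure are all hypothetical stand-ins (a real system would query an LLM for fuzz-driver source code and measure code coverage from instrumented runs).

```python
import random

# Hypothetical library API surface; real systems measure branch/line coverage.
ALL_APIS = {"parse", "encode", "decode", "compress"}


def query_llm(prompt, rng):
    """Stand-in for an LLM call that returns a fuzz driver.

    Here a "driver" is just the set of library APIs it exercises; the
    richer the prompt's coverage hints, the more APIs it reaches.
    """
    apis = sorted(ALL_APIS)
    k = min(len(apis), 1 + prompt.count("cover"))
    return rng.sample(apis, k)


def coverage(driver):
    """Toy coverage measure: fraction of library APIs the driver reaches."""
    return len(set(driver) & ALL_APIS) / len(ALL_APIS)


def coverage_guided_driver_generation(rounds=8, seed=0):
    """Iteratively mutate the LLM prompt using coverage feedback."""
    rng = random.Random(seed)
    prompt = "Write a fuzz driver for the library."
    best_driver, best_cov = [], 0.0
    for _ in range(rounds):
        driver = query_llm(prompt, rng)
        cov = coverage(driver)
        if cov > best_cov:
            best_driver, best_cov = driver, cov
        missing = ALL_APIS - set(best_driver)
        if not missing:
            break  # full coverage reached
        # Feedback step: ask the LLM to cover APIs not yet exercised.
        prompt += " Also cover: " + ", ".join(sorted(missing)) + "."
    return best_driver, best_cov
```

The key design point mirrored here is that coverage feedback flows back into the prompt rather than into input mutation, so each LLM round is steered toward the still-unexercised parts of the library.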

Biography: Ho Chen is a professor at the University of Hong Kong. His current research interests include AI-driven security and software engineering, and AI security and robustness. He is a Fellow of IEEE.

Important Dates

Submission Open: February 1, 2025

Submission Deadline: March 31, 2025, 11:59 PM (Pacific Time)

Acceptance Notification: May 1, 2025

Final Paper Deadline: May 31, 2025

Early Registration Deadline: May 31, 2025

Conference Dates: July 12-13, 2025