AI systems are increasingly expected to make autonomous decisions, often well hidden from plain view. This is not without danger. In this talk I'll give a high-level overview of some common pitfalls in AI-powered applications that may require more thought than many AI gurus would admit. On the menu: bias and fairness, confounding variables, adversarial attacks, ethics, explainability, ... and let's not forget the necessary security and privacy concerns for individual citizens and society as a whole.

Joachim Ganseman is a computer scientist with a past as a Ph.D. student at the University of Antwerp's Visionlab, with visiting stays at Queen Mary University of London and Stanford University, where he focused on digital signal processing, machine learning and audio analysis. Since 2018 he has worked in the research division of Smals on AI-related topics, including Natural Language Processing and Conversational Interfaces, studying their potential in the context of government and social security administration. In his spare time he is an excellent pianist, and for his role as co-founder of the Belgian Olympiad in Informatics he was awarded a science communication prize by the Royal Flemish Academy of Sciences of Belgium.