IMPORTANT NOTE:
The safety and well-being of all conference participants is our priority. After evaluating the uncertainty caused by COVID-19, the decision has been made to transform the in-person component of IntelliSys 2020 into a virtual format. The Intelligent Systems Conference (IntelliSys) 2020 will now be held as an online/virtual event on 3 & 4 September 2020.

Keynote Speakers

Toby Walsh (Professor, University of New South Wales)

Toby Walsh is a world-renowned professor of artificial intelligence at the University of New South Wales and Data61. He has been described by the media as a "rock star" of the digital revolution and included on the list of the 100 most important digital innovators in Australia. Toby is the author of four books: ‘IT’S ALIVE’ (published in English, Chinese, Korean, German and Polish), ‘Android Dreams,’ ‘Machines That Think’ and ‘2062: The World that AI Made’ (published in August 2018). He is a sought-after keynote speaker on how artificial intelligence influences business, education, warfare, personal development, and finance, among other fields. His talks have been featured at TED, the Adobe Pacific Summit, the Future Shapers Forum, CEBIT and Quest Future of AI. Toby has advised a number of leading organizations on their AI strategy, including McKinsey & Co, Bertelsmann, TATA Consulting, ServiceNow, Clayton Utz, the NSW Department of Education, and Penguin Random House, and is often invited onto TV shows to talk about the subject. Prof. Walsh is a passionate advocate for limits on AI to ensure it is used to improve, not hurt, our lives. Alongside Pope Francis, Toby was voted runner-up for the Arms Control Association's Person of the Year Award, in recognition of his work on ensuring the safe use of AI in warfare.

Amy Greenwald (Professor, Brown University)

Amy Greenwald is Professor of Computer Science at Brown University in Providence, Rhode Island. Her research focus is on game-theoretic and economic interactions among computational agents, applied to areas like autonomous bidding in wireless spectrum auctions and ad exchanges. Before joining Brown, Greenwald was a postdoc at IBM’s T.J. Watson Research Center, where her “Shopbots and Pricebots” paper was named Best Paper at IBM Research. Her honors include the Presidential Early Career Award for Scientists and Engineers (PECASE), a Fulbright nomination, and a Sloan Fellowship. Finally, Greenwald is active in promoting diversity in Computer Science, leading multiple K-12 initiatives in which Brown undergraduates teach computer science to Providence public school students.

Michael Bronstein (Professor, Imperial College London)

Michael Bronstein is a professor at Imperial College London, where he holds the Chair in Machine Learning and Pattern Recognition, and is Head of Graph Learning Research at Twitter. Michael received his PhD from the Technion in 2007. He has held visiting appointments at Stanford, MIT, Harvard, and Tel Aviv University, and has also been affiliated with three Institutes for Advanced Study (at TU Munich as a Rudolf Diesel Fellow (2017-), at Harvard as a Radcliffe Fellow (2017-2018), and at Princeton (2020)). Michael is the recipient of five ERC grants, a Fellow of the IEEE, IAPR, and ELLIS, an ACM Distinguished Speaker, and a World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019). He previously served as Principal Engineer at Intel Perceptual Computing and was one of the key developers of the Intel RealSense technology.

Keynote Title: Deep learning on graphs: successes, challenges, and next steps

Abstract: Deep learning on graphs and network-structured data has recently become one of the hottest topics in machine learning. Graphs are powerful mathematical abstractions that can describe complex systems of relations and interactions in fields ranging from biology and high-energy physics to social science and economics. In this talk, I will outline the basic methods, applications, challenges and possible future directions in the field.

Max Welling (University of Amsterdam (UvA)/ Qualcomm)

Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and VP Technologies at Qualcomm. He has a secondary appointment as a fellow at the Canadian Institute for Advanced Research (CIFAR). Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015. He has served on the board of the NeurIPS Foundation since 2015 and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He is a founding board member of ELLIS. Max Welling received the ECCV Koenderink Prize in 2010. He directs the Amsterdam Machine Learning Lab (AMLAB) and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). He has over 300 publications in machine learning and an h-index of 66.

Freddy Lecue (Chief Artificial Intelligence (AI) Scientist at CortAIx)

Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada. He is also a research associate at INRIA, in the WIMMICS team, Sophia Antipolis, France. Before joining Thales's new R&T lab dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008. His research expertise is in explainable machine learning, particularly explainable artificial neural networks.

Keynote Title: Explainable Machine Learning: Mind the Users and their Knowledge

Abstract: The term XAI refers to a set of tools for explaining AI systems of any kind, beyond Machine Learning. Even though these tools aim at addressing explanation in the broader sense, they are not designed for all users, tasks, contexts and applications. This presentation will describe progress to date on XAI, particularly for Machine Learning, by reviewing its approaches, motivation, best practices, industrial applications, and limitations. We will also highlight the importance of users, their knowledge, and the underlying semantics in explainable Machine Learning.