IMPORTANT NOTE: We hope to offer a hybrid (in-person and virtual) format. However, as the situation remains volatile, the event may be held in a fully virtual format.

Keynote Speakers

Amy Bruckman (Professor, Georgia Institute of Technology)

Amy Bruckman is Professor and Senior Associate Chair in the School of Interactive Computing at the Georgia Institute of Technology. Her research focuses on social computing, with interests in content moderation, collaboration, social movements, and internet research ethics. Bruckman is an ACM Fellow and a member of the ACM CHI Academy. She received her Ph.D. from the MIT Media Lab in 1997 and a B.A. in physics from Harvard University in 1987. Her book “Should You Believe Wikipedia?” is forthcoming from Cambridge University Press in 2021.

Keynote Title: Should You Believe Wikipedia? Virtue Epistemology and the Future of Intelligent Systems to Promote Knowledge

Abstract: How does anyone know what to believe anymore? In this talk, I’ll argue that learning a bit of epistemology and the sociology of knowledge can help us design more effective intelligent systems. I will introduce fundamental ideas about the nature of truth and the social construction of knowledge, and then present the field of virtue epistemology. Virtue epistemology suggests that knowledge is a collaborative achievement, and that we can all work to achieve knowledge (justified, true belief) by aspiring to epistemic virtues: “curiosity, intellectual autonomy, intellectual humility, attentiveness, intellectual carefulness, intellectual thoroughness, open-mindedness, intellectual courage and intellectual tenacity” (Heersmink 2017). Finally, I’ll lay out an agenda for leveraging these concepts in the design of intelligent systems to promote knowledge.

Wojciech Szpankowski (Director, Center for Science of Information; Purdue University)

Wojciech Szpankowski is the Saul Rosen Distinguished Professor of Computer Science at Purdue University, where he teaches and conducts research in the analysis of algorithms, information theory, analytic combinatorics, random structures, and stability problems of distributed systems. He has held several visiting professor and visiting scholar positions, including at McGill University, INRIA, Stanford, Hewlett-Packard Labs, Universite de Versailles, the University of Canterbury (New Zealand), Ecole Polytechnique (France), the Newton Institute (Cambridge, UK), ETH Zurich, the University of Hawaii, and the Gdansk University of Technology (Poland). He is a Fellow of the IEEE and an Erskine Fellow. In 2010 he received the Humboldt Research Award, and in 2015 the inaugural Arden L. Bement Jr. Award. In 2020 he was the recipient of the Flajolet Lecture Prize. He has published two books: "Average Case Analysis of Algorithms on Sequences" (John Wiley & Sons, 2001) and "Analytic Pattern Matching: From DNA to Twitter" (Cambridge University Press, 2015). In 2008 he launched the interdisciplinary Institute for Science of Information, and in 2010 he became the Director of the NSF Science and Technology Center for Science of Information.

Keynote Title: Science of AI

Abstract: AI systems have demonstrated immense potential for fundamentally reshaping virtually all technological processes and artifacts. What is perhaps less evident, but equally important, is the realization that current AI systems have fundamental limitations that are not well understood, even in domains where their use is commonplace. In terms of performance, issues of bias, confidence, and introspection (assessing the performance of inference) are only now being investigated. In safety-critical domains such as autonomous systems, learning-enabled systems must be verifiable and constrained by safety properties. In systems with humans in the loop, such as healthcare, inferences must be interpretable and explainable in terms of concepts accessible to human experts. In social science and government policy, formal measures and guarantees of fairness are important. When relying on sensitive data, rigorous definitions and guarantees of privacy, and suitable tradeoffs between privacy and the accuracy of inference, are paramount. These remain fundamental challenges, and their solutions are essential for the wide deployment of AI technologies. In this talk, after reviewing some of the challenges facing AI, we argue for a holistic framework for a Science of AI based on the following pillars: data, information, complexity, logic, and inference. Building on our work over the past decade at the NSF-sponsored Center for Science of Information (CSoI), we characterize information in different data abstractions and representations. We illustrate this thinking with a few examples. We start with temporal inference of brain evolution based on structural information. We then move to AI in agriculture and discuss provenance in the food supply chain and the use of provenance data in optimization, equitable distribution, and safety. Finally, we briefly discuss challenges in AI and quantum science. If time allows, we cover recent results on regret for logistic regression and muse on misinformation.
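For readers unfamiliar with the regret notion mentioned at the end of the abstract, a generic online-learning formulation for logistic regression (an illustrative definition, not necessarily the exact setting of the talk) is

\[
\mathrm{Reg}_T(\mathcal{W}) \;=\; \sum_{t=1}^{T} \ell\big(y_t, \langle w_t, x_t \rangle\big) \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \ell\big(y_t, \langle w, x_t \rangle\big),
\qquad \ell(y, z) = \log\!\left(1 + e^{-y z}\right),
\]

where \(x_t\) are feature vectors, \(y_t \in \{-1, +1\}\) are labels, \(w_t\) are the learner's weights before round \(t\), and \(\mathcal{W}\) is a comparator class of fixed predictors.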

Nikola Kasabov (Auckland University of Technology, NZ)

Professor Nikola Kasabov is a Fellow of the IEEE, a Fellow of the Royal Society of New Zealand, a Fellow of the INNS College of Fellows, and a DVF of the Royal Academy of Engineering, UK. He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology, New Zealand. He also holds the George Moore Chair of Data Analytics at the University of Ulster, UK. Kasabov is a Past President of the Asia Pacific Neural Network Society (APNNS) and of the International Neural Network Society (INNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and was an IEEE Distinguished Lecturer (2012-2014). He is Editor of the Springer Handbook of Bio-Neuroinformatics, the Springer Series of Bio- and Neuro-systems, and the Springer journal Evolving Systems, and Associate Editor of several journals, including Neural Networks, IEEE TrNN, Tr CDS, Information Sciences, and Applied Soft Computing. Kasabov holds MSc and PhD degrees from TU Sofia, Bulgaria. His main research interests are in neural networks, intelligent information systems, soft computing, bioinformatics, and neuroinformatics. He has authored more than 650 publications. He has extensive experience at academic and research organisations in Europe and Asia, including TU Sofia, Bulgaria; the University of Essex, UK; and the University of Otago, NZ, and has served as Advisory Professor at Shanghai Jiao Tong University and CASIA, Beijing; Visiting Professor at ETH/University of Zurich; and Honorary Professor at Teesside University, UK.

Keynote Title: Brain-Inspired Computation for Intelligent Systems

Abstract: The talk first presents some background on the third generation of artificial neural networks, spiking neural networks (SNN), also known as neuromorphic systems. SNN are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from incoming data, thus allowing the development of new types of brain-inspired evolving intelligent systems (BI-EIS). Similarly to how the brain learns, these SNN models need not be restricted in the number of layers or the number of neurons per layer, as they adopt the self-organising learning principles of the brain. This is illustrated with an exemplar SNN architecture, NeuCube, for the creation of BI-EIS (free and open-source software, along with a cloud-based version, is available from www.kedri.aut.ac.nz/neucube and www.neucube.io). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of brain and cognitive data; predicting environmental hazards and extreme events; and image processing and computer vision. Hardware realisations of neuromorphic computational platforms are presented: massively parallel computers of thousands to millions of artificial neurons with low power consumption and ultra-high processing speed. It is demonstrated that brain-inspired SNN architectures, such as NeuCube, allow for explainable knowledge transfer between humans and machines through the building of brain-inspired Brain-Computer Interfaces (BI-BCI). Future directions of neuromorphic computation for building BI-EIS are outlined.
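To make the spiking-neuron principle behind SNN concrete, here is a minimal leaky integrate-and-fire (LIF) neuron sketch in Python. It illustrates the general concept only and is not NeuCube code; the function name and all parameter values are assumptions chosen for demonstration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of the
# spiking-neuron principle behind SNN, not NeuCube code. All names and
# parameter values here are hypothetical choices for demonstration.
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v = v_rest
    voltages, spike_times = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: the membrane decays toward rest and charges with input.
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_threshold:             # threshold crossing -> emit a spike
            spike_times.append(t * dt)
            v = v_reset                  # reset the membrane potential
        voltages.append(v)
    return np.array(voltages), spike_times

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = rng.uniform(0.8, 1.8, size=500)   # noisy input drive (arbitrary units)
    trace, spikes = lif_simulate(current)
    print(f"{len(spikes)} spikes over {len(current)} time steps")
```

In a full SNN architecture such as NeuCube, many such neurons are connected in a 3D structure and typically learn via mechanisms such as spike-timing-dependent plasticity, but the integrate-and-fire dynamic above is the basic building block.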

Elias Fallon (Engineering Group Director, Cadence Design Systems)

Elias Fallon is currently Engineering Group Director at Cadence Design Systems, a leading electronic design automation (EDA) company. He has been involved in EDA for more than 20 years, since the founding of Neolinear, Inc., which Cadence acquired in 2004. Elias was co-Principal Investigator on the MAGESTIC project, funded by DARPA to investigate the application of machine learning to EDA for package/PCB and analog IC design. He also leads an innovation incubation team within the Custom IC R&D group, as well as other traditional EDA product teams. Beyond his work developing electronic design automation tools, he has led software quality improvement initiatives within Cadence, partnering with the Carnegie Mellon Software Engineering Institute. Elias graduated from Carnegie Mellon University with an M.S. and B.S. in Electrical and Computer Engineering. Elias, his wife, and two children live north of Pittsburgh, PA, USA.

Keynote Title: Accelerating Computational Intelligence with Machine Learning for Electronic Design Automation

Abstract: Electronic Design Automation (EDA) software has delivered semiconductor design productivity improvements for decades. The next generation of intelligent systems in the cloud and at the edge will require another leap in design productivity. EDA software utilizes an advanced toolkit of computational software to provide intelligent-system design productivity. The next leap in productivity will come from the addition of machine learning (ML) techniques to the toolbox of computational software capabilities employed by EDA developers. Recent research and development in machine learning for EDA points to clear patterns in how ML impacts EDA tools, flows, and design challenges. This development shows how computational intelligence is reshaping EDA flows, which in turn will drive the development of new intelligent systems.
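As a concrete, hedged illustration of the kind of ML technique the abstract alludes to, the sketch below trains a surrogate model that predicts an expensive-to-compute design metric from cheap design features, so a flow could prune candidates before a full run. It is a toy example with hypothetical feature names and synthetic data, not a Cadence tool or API.

```python
# Toy ML-for-EDA surrogate model (illustrative only, not a Cadence tool or API).
# It predicts a hypothetical "routed wirelength" metric from cheap per-design
# features, standing in for a slow place-and-route run. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_designs = 200

# Hypothetical per-design features: cell count, net count, utilization, aspect ratio.
X = np.column_stack([
    rng.integers(1_000, 50_000, n_designs),   # cell_count
    rng.integers(1_500, 60_000, n_designs),   # net_count
    rng.uniform(0.4, 0.9, n_designs),         # placement_utilization
    rng.uniform(0.5, 2.0, n_designs),         # aspect_ratio
])
# Synthetic "ground truth" wirelength with noise, standing in for real flow results.
y = 0.8 * X[:, 1] * (1.0 + X[:, 2]) + rng.normal(0.0, 2_000.0, n_designs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out designs: {model.score(X_test, y_test):.2f}")
```

In a real flow, the features would come from the design database and the labels from prior tool runs; the surrogate would then rank or filter candidate configurations before the expensive step is invoked.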