IMPORTANT NOTE: In view of the COVID-19 uncertainty, and in particular the rising number of infections worldwide, Computing 2021 has been moved to a fully virtual conference.

Keynote Speakers

Jack Dongarra

University of Tennessee, Oak Ridge National Laboratory, and University of Manchester

Jack Dongarra holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sid Fernbach Award in 2004; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he received the IEEE Charles Babbage Award; in 2013 he received the ACM/IEEE Ken Kennedy Award; in 2019 he received the ACM/SIAM Computational Science and Engineering Prize; and in 2020 he received the IEEE Computer Pioneer Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM; a foreign member of the Russian Academy of Sciences and of the British Royal Society; and a member of the US National Academy of Engineering.

Keynote Title: An Overview of High Performance Computing and Future Requirements

Abstract: In this talk we examine how high performance computing has changed over the last ten years and look toward future trends. These changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

Stephen Grossberg

Boston University

Stephen Grossberg is Wang Professor of Cognitive and Neural Systems; Director of the Center for Adaptive Systems; and Emeritus Professor of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering at Boston University. He is a principal founder and current research leader of the fields of computational neuroscience, theoretical psychology and cognitive science, and biologically inspired engineering, technology, and AI. In 1957-1958, he introduced the paradigm of using systems of nonlinear differential equations to develop neural network models that link brain mechanisms to mental functions, including widely used equations for short-term memory (STM), or neuronal activation; medium-term memory (MTM), or activity-dependent habituation; and long-term memory (LTM), or neuronal learning. His work focuses on how individuals, algorithms, or machines adapt autonomously in real time to unexpected environmental challenges. These discoveries together provide a blueprint for developing autonomous adaptive intelligence. They include models of vision and visual cognition; object, scene, and event learning and recognition; audition, speech, and language learning and recognition; brain development; cognitive information processing; reinforcement learning, motivation, and cognitive-emotional interactions; multiple kinds of consciousness; learning to navigate using vision and path integration; social cognition and imitation learning; sensory-motor learning, control, and planning; brain dysfunctions that cause symptoms of Alzheimer’s disease, autism, medial temporal amnesia, visual and auditory neglect, and sleep disorders; mathematical analysis of neural networks; and large-scale applications of these discoveries. Grossberg founded key infrastructure of the field of neural networks, including the International Neural Network Society and the journal Neural Networks, and has served on the editorial boards of 30 journals. His lecture series at MIT Lincoln Laboratory led to the national DARPA Study of Neural Networks. He is a fellow of AERA, APA, APS, IEEE, INNS, MDRS, and SEP. He has published 17 books or journal special issues and over 550 research articles, and holds 7 patents. He was most recently awarded the 2015 Norman Anderson Lifetime Achievement Award of the Society of Experimental Psychologists (SEP), the 2017 Frank Rosenblatt computational neuroscience award of the Institute of Electrical and Electronics Engineers (IEEE), and the 2019 Donald O. Hebb award of the International Neural Network Society (INNS) for his work in biological learning.

Keynote Title: Explainable and Reliable AI: Comparing Deep Learning with Adaptive Resonance

Abstract: This lecture compares and contrasts Deep Learning with Adaptive Resonance Theory, or ART. Deep Learning is often used to classify data. However, Deep Learning can experience catastrophic forgetting: at any stage of learning, an unpredictable part of its memory can collapse. It is thus unreliable. Even when it makes accurate classifications, they are not explainable. It is thus untrustworthy. Deep Learning has these properties because it uses the back propagation algorithm, whose computational problems, due to nonlocal weight transport during mismatch learning, were described in the 1980s. Deep Learning became popular after very fast computers and huge online databases became available that enabled new applications despite these problems. ART models overcome 17 foundational computational problems of back propagation and Deep Learning. ART is a self-organizing, explainable production system that incrementally learns, using arbitrary combinations of unsupervised and supervised learning, to rapidly attend, classify, and predict objects and events in a changing world, without experiencing catastrophic forgetting. ART has also successfully explained and predicted a wide range of psychological and neurobiological data, and can be derived from a thought experiment about how any system can autonomously learn to correct predictive errors in a changing world. It hereby forms a foundation for designing algorithms for any adaptively intelligent system that is truly autonomous. ART has been successfully used in hundreds of large-scale real-world applications, including remote sensing, medical database prediction, and social media data clustering.
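For readers unfamiliar with how ART's match-and-resonance cycle avoids overwriting earlier memories, the short Python sketch below illustrates one widely used member of the ART family, Fuzzy ART (Carpenter, Grossberg, and Rosen, 1991). It is an illustrative sketch only, not code from the talk; the class structure and parameter names (rho, alpha, beta) are our own simplifying assumptions following common Fuzzy ART notation.

    # Minimal, illustrative Fuzzy ART clusterer: inputs in [0, 1] are complement
    # coded, matched against stored categories, and only the resonating category
    # is updated, so previously learned categories remain intact.
    import numpy as np

    class FuzzyART:
        def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
            self.rho = rho      # vigilance: how closely an input must match a category
            self.alpha = alpha  # small choice parameter
            self.beta = beta    # learning rate (1.0 = fast, one-shot learning)
            self.w = []         # one weight vector per learned category

        def _complement_code(self, a):
            # Complement coding keeps the input norm constant and limits category proliferation.
            return np.concatenate([a, 1.0 - a])

        def train_one(self, a):
            I = self._complement_code(np.asarray(a, dtype=float))
            # Bottom-up choice function ranks the stored categories.
            scores = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
            for j in np.argsort(scores)[::-1]:
                match = np.minimum(I, self.w[j]).sum() / I.sum()
                if match >= self.rho:
                    # Resonance: refine only the winning category's memory.
                    self.w[j] = self.beta * np.minimum(I, self.w[j]) + (1.0 - self.beta) * self.w[j]
                    return j
                # Otherwise: mismatch reset; search the next-best category.
            # No existing category matches well enough: commit a new one.
            self.w.append(I.copy())
            return len(self.w) - 1

    net = FuzzyART(rho=0.8)
    for x in np.random.rand(100, 2):   # 2-D inputs scaled to [0, 1]
        net.train_one(x)
    print("learned", len(net.w), "stable categories")

Because learning touches only the category that passes the vigilance test, adding new inputs later refines or creates categories without erasing old ones, which is the sense in which ART-style learning sidesteps catastrophic forgetting.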

Nadia Magnenat-Thalmann

University of Geneva & Nanyang Technological University

Professor Thalmann joined NTU in August 2009 as the Director of the interdisciplinary Institute for Media Innovation. She has authored dozens of books, published more than 700 papers on virtual humans/virtual worlds and social robots (jointly with her PhD students), organized major conferences such as CGI and CASA, and delivered more than 300 keynote addresses, some of them at global events such as the World Economic Forum in Davos. During her illustrious career, she also established MIRALab in Switzerland, a ground-breaking interdisciplinary multimedia research institute. More recently, at NTU in Singapore, she revolutionized social robotics by unveiling Nadine, the first social robot that can show mood and emotions and remember people and actions. Besides holding bachelor's and master's degrees in disciplines such as psychology, biology, chemistry, and computer science, Professor Thalmann completed her PhD in quantum physics at the University of Geneva. She has received honorary doctorates from Leibniz University of Hannover and the University of Ottawa in Canada, as well as several other prestigious awards such as the Humboldt Research Award in Germany.

Keynote Title: What Can a Humanoid Robot Do Today to Support Impaired or Elderly People?

Abstract: In my talk, I will describe the technical capabilities of a humanoid robot as it exists today and what this kind of social robot can currently do for impaired or elderly people, at home or in a nursing home. I will present a case study of our humanoid robot Nadine, who has recently assisted nurses in an elderly care home. I will conclude with an analysis of the real needs of elderly or impaired people and how a humanoid robot can genuinely help them. I will discuss the technical features that are still missing today to provide a perfect helper, and propose a timeline for further research and development as well as a prioritization of development tasks according to actual needs.