Keynote Speakers

Edward Ashford Lee (EECS, UC Berkeley)

Edward Ashford Lee has been working on software systems for 40 years. He currently divides his time between software systems research and studies of the philosophical and societal implications of technology. After an education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His software research focuses on cyber-physical systems, which integrate computing with the physical world. He is the author of several textbooks and two general-audience books, The Coevolution: The Entwined Futures of Humans and Machines (2020) and Plato and the Nerd: The Creative Partnership of Humans and Technology (2017).

Keynote Title: Do We Really Want Explainable AI?

Abstract: "Rationality" is the principle that humans make decisions on the basis of step-by-step (algorithmic) reasoning using systematic rules of logic. An ideal "explanation" for a decision is a chronicle of the steps used to arrive at it. Herb Simon's "bounded rationality" is the observation that the human brain's ability to handle algorithmic complexity and data is limited. As a consequence, human decision making in complex cases mixes some rationality with a great deal of intuition, relying more on Daniel Kahneman's "System 1" than "System 2." A DNN-based AI, similarly, does not arrive at a decision through a rational process in this sense. An understanding of the mechanisms of the DNN yields little or no insight into any rational explanation for its decisions. The DNN is operating in a manner more like System 1 than System 2. Humans, however, are quite good at constructing post-facto rationalizations of their intuitive decisions. If we demand rational explanations for AI decisions, engineers will inevitably develop AIs that are very effective at constructing such post-facto rationalizations. With their ability to handle vast amounts of data, the AIs will learn to build rationalizations using many more precedents than any human could, thereby constructing rationalizations for ANY decision that will be very hard to refute. The demand for explanations, therefore, could backfire, effectively ceding much more power to the AIs. In this talk, I will discuss similarities and differences between human and AI decision making and will speculate on how, as a society, we might proceed to leverage AIs in ways that benefit humans.

Andy D. Pimentel (University of Amsterdam)

Andy D. Pimentel is a full professor at the University of Amsterdam, where he chairs the Parallel Computing Systems group. His research centers on the system-level modeling, simulation, and exploration of (embedded) multicore and manycore computer systems, with the purpose of efficiently and effectively designing and programming these systems. He holds an MSc and a PhD in computer science from the University of Amsterdam. He is a cofounder of the International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS). He has (co-)authored more than 130 scientific publications and is an Associate Editor of the Simulation Modelling Practice and Theory journal as well as the Journal of Signal Processing Systems. He served as General Chair of the HIPEAC 2015 conference, as Local Organization Co-Chair of Embedded Systems Week 2015, and as Program Chair of CODES+ISSS in 2016 and 2017. Furthermore, he has served on the TPCs of many leading (embedded) computer systems design conferences, such as DAC, DATE, CODES+ISSS, ICCD, ICCAD, FPL, and LCTES.

Keynote Title: Systems for AI, and AI for Systems

Paul Lukowicz (DFKI and TU Kaiserslautern)

Prof. Dr. Paul Lukowicz is Full Professor of AI at the Technical University of Kaiserslautern in Germany, where he heads the Embedded Intelligence group at DFKI. His research focuses on context-aware ubiquitous and wearable systems, including sensing, pattern recognition, system architectures, models of large-scale self-organized systems, and applications. Paul Lukowicz coordinates the HumanE AI project, acts as editor for various scientific publications, and has served on the TPCs (including as TPC Chair) of all the main conferences within his research area.

Jochen Cremer (TU Delft)

Dr. Jochen Cremer is the Co-Director of the TU Delft AI Energy Lab and an Assistant Professor at the Faculty of Electrical Engineering, Mathematics, and Computer Science. He is also a Research Associate at Imperial College London. Jochen's research focuses on applying machine learning and data analytics to energy systems operation and control. The TU Delft AI Energy Lab team develops new computational methods for system operation and control that combine statistical machine learning and mathematical optimisation.

Keynote Title: AI for Distributed Energy Systems

Abstract: The energy transition to net-zero must happen imminently. A distributed paradigm for more flexible operation of the electricity system may find its foundation in the active, real-time self-management of the many distributed energy resources. Interestingly, recent progress on AI-based algorithms for distributed, self-organized agents within the system allows these agents to respect some system stability constraints and to manage network congestion through their participation in energy markets. However, AI-based control learned through reinforcement may lack verifiable guarantees, transparency, and privacy, and can have inherent inaccuracies. In response, the system operators who remain ultimately responsible for the reliable operation of the grid need advanced state estimation techniques based on collected measurements so that they can prevent instabilities. Processing these measurements is unfortunately not straightforward, as the amount of data is growing exponentially, which is why advanced data processing techniques are currently in development. Here, too, AI-based algorithms are very promising for system operators: for estimating system states, for detecting active devices without intrusion, and for forecasting state variables such as demand or distributed renewable power injection. In this context, this keynote address will put forward a vision for developing tools for the energy transition with AI methods such as multi-agent reinforcement learning and deep learning.