IMPORTANT NOTE: Due to the ongoing COVID-19 pandemic, Computing 2022 has been moved to a fully virtual conference.

Keynote Speakers

Craig Knoblock

University of Southern California

Craig Knoblock is the Keston Executive Director of the Information Sciences Institute, Research Professor of both Computer Science and Spatial Sciences, and Vice Dean of Engineering at the University of Southern California. He received his Bachelor of Science degree from Syracuse University and his Master’s and Ph.D. from Carnegie Mellon University in computer science. His research focuses on techniques for describing, acquiring, and exploiting the semantics of data. He has worked extensively on source modeling, schema and ontology alignment, entity and record linkage, data cleaning and normalization, extracting data from the web, and combining these techniques to build knowledge graphs. He has published more than 400 journal articles, book chapters, and conference and workshop papers on these topics and has received 7 best paper awards on this work. He also co-authored a recent book titled Knowledge Graphs: Fundamentals, Techniques, and Applications, which was published in 2021 by MIT Press. Dr. Knoblock is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE). He is also past President of the International Joint Conference on Artificial Intelligence (IJCAI) and winner of the Robert S. Engelmore Award.

Keynote Title: Building and Using Knowledge Graphs to Turn Data into Knowledge

Abstract: Creating knowledge graphs from data provides a way of combining sources of information that can then be exploited to solve various real-world problems. However, the challenge in building knowledge graphs is getting the data into a usable form. In this talk I will highlight some of the techniques we have developed for ingesting data into a knowledge graph, including automatic techniques for finding errors in tables and methods for understanding the content of a given data source. I will also describe some of the applications we have developed using knowledge graphs and how we were able to transform challenging tasks into ones that could be addressed, including combating human trafficking, identifying illegal arms sales, and predicting cyber attacks.

Matt Fyles

Graphcore

Matt Fyles is a computer scientist with over 20 years’ experience in the design, development, delivery, and support of software and hardware for the microprocessor market, spanning a wide range of applications from consumer electronics to high-performance computing, with a particular focus on parallel processors. He began his career at STMicroelectronics, Europe’s largest semiconductor company, followed by SuperH, ClearSpeed, and XMOS. He is currently Senior Vice President of Software at Graphcore, a Bristol-based artificial intelligence hardware and software company. Matt holds a degree in Computer Science from the University of Exeter.

Keynote Title: Computers for the Age of Machine Intelligence

Abstract: In 2011, machines started to demonstrate superhuman capabilities in areas such as image and pattern recognition for the very first time. The capability went largely unnoticed by the wider world at the time, but the last decade has driven one of the largest disruptive changes that has ever happened in technology. Since 2011 the semiconductor industry, driven by these early demonstrations, has been racing to develop and build new computer architectures and to redesign software abstractions around machines that program themselves using data. This is radically reshaping all aspects of technology across industry and everyday life. Graphcore has been at the forefront of building these new computing platforms with our IPU processor architecture. In this talk we share what we have learnt from building a complete hardware and software platform for this purpose, how the applications that run on these platforms are reshaping the world, and what still needs to be achieved to make this technology more accessible.

Lin William Cong

Cornell SC Johnson College of Business

Lin William Cong is the Rudd Family Professor of Management and Associate Professor of Finance at the Johnson Graduate School of Management at Cornell University, where he serves as the founding faculty director for the FinTech Initiative. He is also a Kauffman Foundation Junior Faculty Fellow and a Poets & Quants World’s Best Business School Professor, and serves on the editorial boards of journals such as Management Science. He also co-founded two global forums, on Crypto and Blockchain Economics Research and on AI and Big Data in Finance Research. Professor Cong received his Ph.D. in Finance and MS in Statistics from Stanford University, where he was the president of the Ph.D. Students Association and won the Liberman Fellowship and Asian American Award for leadership. He graduated as the top student in Physics from Harvard University, where he completed an A.M. in Physics jointly with an A.B. in Math and Physics, an Economics minor, and a language citation in French. Professor Cong’s research spans financial economics, information economics, FinTech, economic data science, and entrepreneurship (theory and intersection with digitization and development). He has received numerous accolades such as the Asseth-Kaiko Prize for Research in Cryptoeconomics, the International Centre for Pension Management Research Award, the AAM-CAMRI-CFA Institute Prize in Asset Management, the CME Best Paper Award, the Finance Theory Group Best Paper Award, the Shmuel Kandel Award in Financial Economics, and the Yihong Xia Best Paper Award. He has been invited to speak, teach, and advise at hundreds of world-renowned universities, venture funds, investment and trading shops, and organizations such as the IMF, BlackRock, the Asset Management Association of China, Ant Financial, the SEC, ChainLink, and Federal Reserve banks. He has also consulted on several FinTech regulatory litigation cases.

Keynote Title: AI Applications in Investments and Managerial Decision-making

Abstract: In this talk, I discuss applications of deep reinforcement learning (DRL) in portfolio management and corporate finance. The first application directly optimizes the objectives of portfolio management via DRL instead of the conventional supervised-learning-based paradigms that entail first-step estimations of return distributions or risk premia. Our multi-sequence neural network model, AlphaPortfolio, is tailored to the distinguishing features of financial data and allows for potential market interactions and training without labels. AlphaPortfolio yields stellar out-of-sample performance that is robust under various economic restrictions and market conditions. Moreover, we project AlphaPortfolio onto simpler modeling spaces to uncover key drivers of investment performance, including their rotation and nonlinearity. The "economic distillation" tools we invent can be used for interpreting AI and big data models in general. In the second application, we build a DRL framework to find the most effective combination of managerial actions for a given business objective and to use historical actions to back out managers' objectives in practice, be it long versus short horizon, enterprise value versus equity value maximization, or ESG considerations. DRL derives the optimal control/action trajectory under a known reward structure; once combined with an inverse reinforcement learning module, our model is equivalent to popular generative adversarial networks and reveals managers' various considerations when making decisions.
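The contrast the abstract draws, optimizing the investment objective directly rather than first predicting returns and then forming a portfolio, can be illustrated with a toy sketch. This is a pedagogical example under invented data and a simple mean-minus-risk objective, not the multi-sequence AlphaPortfolio architecture from the talk:

```python
import numpy as np

# Direct objective optimization: parameterize portfolio weights and
# ascend the gradient of the investment criterion itself, with no
# intermediate return-forecasting step. Data and objective are illustrative.
rng = np.random.default_rng(0)
T, n_assets = 500, 4
mu = np.array([0.001, 0.002, 0.0005, 0.0015])      # simulated mean excess returns
returns = rng.normal(mu, 0.02, size=(T, n_assets))  # "historical" sample

def objective(theta):
    w = np.exp(theta) / np.exp(theta).sum()   # softmax -> long-only weights
    port = returns @ w                        # realized portfolio returns
    return port.mean() - 0.5 * port.std()     # mean-risk trade-off

def grad(theta, eps=1e-6):
    # numerical gradient; a real implementation would backpropagate
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (objective(theta + e) - objective(theta - e)) / (2 * eps)
    return g

theta = np.zeros(n_assets)                    # start from equal weights
j_start = objective(theta)
for _ in range(200):
    theta += 5.0 * grad(theta)                # gradient ascent on the objective
j_end = objective(theta)
```

The training loop never estimates expected returns or risk premia explicitly; the objective value itself is the learning signal, which is the essence of the direct-optimization paradigm the talk describes.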

Zahari Zlatev

Aarhus University, Denmark

I studied mathematics at Sofia University (Bulgaria) and received my Ph.D. from the University of Saint Petersburg (Russia). I worked for many years at the Department of Environmental Science at Aarhus University (Denmark), where I am still affiliated as an emeritus researcher. I spent a sabbatical year at the University of Illinois at Urbana-Champaign (USA). My major areas of research are numerical analysis, mathematical modelling, large-scale computations, air pollution, and climatic changes. I have published many papers and several monographs with scientific results obtained in these five fields.

Keynote Title: Computational Investigation of the Influence of Future Climatic Changes on Some Potentially Dangerous Pollution Levels in Europe

Abstract: Complex three-dimensional mathematical models described by systems of non-linear partial differential equations (PDEs) can successfully be used to study the pollution levels in different European countries over long time-periods consisting of many consecutive years. Four major physical and chemical processes are coupled in these models:

  • (a) transport of air pollutants in the atmosphere (advection),
  • (b) diffusion,
  • (c) wet and dry deposition, and
  • (d) chemical reactions.

The discretization of the large-scale air pollution models (of the systems of non-linear PDEs describing these models) results in huge computational tasks. Non-linear algebraic systems containing millions of equations must be handled at every time-step by applying efficient numerical methods on powerful parallel computers. The computational requirements grow even further because it is also necessary to develop different scenarios, to perform many runs, normally over long time-intervals of many consecutive years, to visualize and carefully compare the obtained results, and to draw the relevant conclusions.
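The interplay of the four processes listed above can be sketched in one spatial dimension with a simple operator-splitting time step: advection, diffusion, and loss terms (deposition plus linearized chemistry) are applied in sequence. All parameter values here are illustrative and not taken from UNI-DEM, whose operational grids and chemistry are far larger:

```python
import numpy as np

# 1-D operator-splitting sketch of an air pollution model time step:
# (a) advection, (b) diffusion, (c) deposition, (d) chemistry (linearized).
# Periodic boundaries; explicit schemes, so stability conditions apply.
nx, dx, dt = 100, 1.0, 0.1
u = 2.0          # wind speed (advection velocity), illustrative
K = 0.5          # diffusion coefficient, illustrative
k_dep = 0.01     # deposition rate, illustrative
k_chem = 0.02    # linearized chemical loss rate, illustrative

# CFL-type stability checks for the explicit schemes used below
assert u * dt / dx <= 1.0 and 2 * K * dt / dx**2 <= 1.0

c = np.exp(-0.5 * ((np.arange(nx) - 30) / 5.0) ** 2)  # initial pollutant plume
m0 = c.sum()                                          # initial total mass

def step(c):
    # (a) advection: first-order upwind scheme (u > 0)
    c = c - u * dt / dx * (c - np.roll(c, 1))
    # (b) diffusion: explicit central differences
    c = c + K * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
    # (c) + (d) deposition and chemistry: pointwise exponential decay
    c = c * np.exp(-(k_dep + k_chem) * dt)
    return c

for _ in range(200):
    c = step(c)
```

With periodic boundaries, the advection and diffusion sub-steps conserve mass, so the total pollutant mass decays exactly by the deposition and chemistry factor; checking such invariants is a standard way to validate each split operator before coupling them in a full-scale run.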

The numerical solution of this complex physical problem with the Unified Danish Eulerian Model (UNI-DEM) will be discussed in this talk. Several climatic scenarios will be applied, and the results will be visualized with powerful plotting routines. Conclusions will be drawn about the influence of future climatic changes on some potentially dangerous pollution levels in several European countries.