Keynote Speakers

Marina L. Gavrilova

University of Calgary, Canada

Prof. Gavrilova is a Full Professor with tenure in the Department of Computer Science at the University of Calgary, Canada. Her research interests lie in the areas of machine intelligence, biometric recognition, image processing, and GIS. Her publication list includes over 300 journal and conference papers, edited special issues, books, and book chapters, including the World Scientific Bestseller of the Month (2007) “Image Pattern Recognition: Synthesis and Analysis in Biometrics,” the Springer book (2009) “Computational Intelligence: A Geometry-Based Approach,” and the IGI book (2013) “Multimodal Biometrics and Intelligent Image Processing for Security Systems”. She has received support from CFI, NSERC, GEOIDE, MITACS, PIMS, Alberta Ingenuity, NATO, and other funding agencies. She is Editor-in-Chief of the Springer-Verlag journal series Transactions on Computational Science and serves on the editorial boards of seven journals. Prof. Gavrilova has received numerous awards, and her research has been profiled in newspaper and TV interviews; most recently she was chosen, together with five other outstanding Canadian scientists, to be featured in the National Museum of Civilization, a National Film Board of Canada production, and on Discovery Channel Canada.

Keynote Title: A New Frontier: Deep Machine Learning for Biometric Privacy and Security

Abstract: Current scientific discourse identifies human identity recognition as one of the crucial tasks performed by government, social-services, consumer, financial, and health institutions worldwide. Biometric image and signal processing is increasingly used in a variety of applications to mitigate vulnerabilities, to predict risks, and to allow for richer and more intelligent data analytics. But there is an inherent conflict between enforcing stronger security and ensuring the protection of privacy rights. This keynote lecture looks at the new horizons currently being explored through the integration of deep learning techniques with computer vision and biometric security research. It discusses how multi-modal biometric systems can benefit from the integration of advanced machine learning methods based on both supervised (SVM, KNN, decision trees) and deep learning (NN, CNN, SNN) approaches for image and signal processing. It also describes developed prototype systems that can extract and analyze not only traditional but also emerging social behavioral patterns, such as spatial, temporal, contextual, linguistic, relational, and even aesthetic data. Finally, it touches on the challenges that uncontrolled data mining and sharing present to privacy and suggests some ways to mitigate them.
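One common way a multi-modal biometric system combines modalities is score-level fusion: each modality's matcher produces a similarity score, and the scores are combined into a single decision. The following minimal sketch illustrates that idea only; the feature vectors, modality names, weights, and acceptance threshold are all hypothetical stand-ins, not the systems described in the talk.

```python
# Minimal sketch of score-level fusion in a multimodal biometric system.
# Each modality yields a match score in [0, 1]; the fused score is a
# weighted sum. All templates, weights, and thresholds are hypothetical.

import math

def match_score(probe, template):
    """Similarity in [0, 1] derived from Euclidean distance between feature vectors."""
    return 1.0 / (1.0 + math.dist(probe, template))

def fuse(scores, weights):
    """Weighted-sum (score-level) fusion of per-modality match scores."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical enrolled templates: one face-feature and one gait-feature vector.
enrolled = {"face": [0.2, 0.8, 0.5], "gait": [1.1, 0.3]}
probe    = {"face": [0.25, 0.75, 0.5], "gait": [1.0, 0.4]}

scores = {m: match_score(probe[m], enrolled[m]) for m in enrolled}
fused = fuse([scores["face"], scores["gait"]], weights=[0.6, 0.4])
decision = fused > 0.5  # hypothetical acceptance threshold
print(round(fused, 3), decision)
```

In a real system the hand-set weights would typically be replaced by a trained combiner, e.g. one of the supervised classifiers (SVM, KNN, decision trees) named in the abstract, operating on the vector of per-modality scores.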

Jon G Peddie

Jon Peddie Research (JPR)

Dr. Jon Peddie is a recognized pioneer in the graphics industry, President of Jon Peddie Research, and has been named one of the most influential analysts in the world. He lectures at numerous conferences and universities on topics pertaining to graphics technology and emerging trends in digital media technology. A former President of the Siggraph Pioneers, he serves on the advisory boards of several conferences, organizations, and companies, and contributes articles to numerous publications. In 2015, he received the Lifetime Achievement award from the CAAD Society. Peddie has published hundreds of papers to date and has authored or contributed to eleven books.

Keynote Title: Augmented Reality - where we all will live - Current problems/challenges, future benefits and applications

Abstract: Augmented reality is a far-reaching subject, with applications stretching from simple entertainment such as Pokémon to life-or-death uses by first responders and in telemedicine. AR apparatus spans from smartphones to smart glasses, and from HUDs to helmets for pilots and emergency workers. Everyone will use AR, from consumers of all ages to professionals in all industries. At the CVC conference, Dr. Peddie will focus on consumer smart glasses and discuss the technical challenges and social implications of everyone wearing them. Peddie believes AR glasses, worn as easily as we now wear prescription glasses and sunglasses, will make the world a safer place.

Nasseh Tabrizi

East Carolina University

Tabrizi received his B.S. degree in Computer Science from Manchester University, UK. He then completed his M.S. and Ph.D. in the Automatic Control and Systems Engineering Department at Sheffield University, UK. Tabrizi worked at Manchester University for two years prior to his appointment at East Carolina University in 1984. He is the Graduate Program Director of Computer Science and the founder and director of the Software Engineering graduate program at East Carolina University. His research interests are in the areas of Medical Imaging, Big Data Analytics, Computer Vision, Signal and Image Processing, Software Engineering, Machine Learning, and Computer Science Education. Tabrizi and his research team have prototyped various projects, including the Archival Data Extraction and Assessment (ADEAP) system, an agent- and virtual-reality-based course delivery system, an RFID-based learning assessment system, and a virtual-reality-based home inspection and training system. Tabrizi has participated in several major grants. His research team is involved in prototyping innovative technologies, including one on brain-computer interfaces for communicating with individuals with severe or profound intellectual disabilities. At CVC 2019, Dr. Tabrizi's talk will focus on archival document processing, from computer vision to archival big data analytics.

Keynote Title: From Computer Vision to Archival Big Data Analytics

Abstract: The preservation and digitization of handwritten documents presents many challenges to archivists. After such collections are amassed, however, an even greater challenge remains: how may the knowledge contained in the image-based materials be discovered by researchers, scholars, and ordinary citizens? A typical approach is to store the materials as scanned images in a database whose indices are searchable by pre-defined keywords. In recent years, a number of libraries and digital projects have used crowd-sourcing to help transcribe handwritten documents from their collections in order to preserve them in digital form. However, this labor-intensive work is not feasible for large volumes of documents, which often use archaic script or do not exist as fair copies and are therefore extremely difficult to decipher, even for experts. In short, we need trained expert systems to help produce transcriptions that are meticulously prepared and accurately rendered. To enable automated knowledge discovery in such source materials, a flexible, centralized system of computer servers and algorithms is required. Accordingly, the Archival Big Data Extraction, Assessment, and Preservation (ABDEAP) system must be built, with its dedicated computer application (servers, databases, and algorithms), by the collaborating researchers to preserve and study difficult-to-search and difficult-to-analyze datasets such as handwritten historical documents. The collaborators will use the application to (1) store the archival data and process it using currently available algorithms, and (2) use these databases as test beds for the development and benchmarking of new and enhanced algorithms in computer vision, feature and pattern recognition, data mining, and computational intelligence.
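The "searchable by pre-defined keywords" model the abstract describes is, at its core, an inverted index: each keyword extracted from a transcription maps to the set of scanned documents containing it. A minimal sketch of that model follows; the document identifiers and keywords are hypothetical examples, not actual ABDEAP data or APIs.

```python
# Minimal sketch of keyword-based search over an archive of scanned documents.
# An inverted index maps each transcribed keyword to the ids of the documents
# that contain it. All records and keywords below are hypothetical.

from collections import defaultdict

def build_index(documents):
    """Map each lower-cased keyword to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, keywords in documents.items():
        for kw in keywords:
            index[kw.lower()].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents matching ALL query terms (AND semantics)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Hypothetical archival records: id -> keywords extracted by transcription.
docs = {
    "letter_1823_04": ["plantation", "ledger", "april"],
    "deed_1831_02":   ["deed", "survey", "april"],
    "ledger_1840_11": ["plantation", "ledger", "november"],
}

index = build_index(docs)
print(sorted(search(index, "ledger", "april")))
```

The abstract's point is precisely that this scheme only works once keywords exist, which is why automated transcription of hard-to-read handwriting is the bottleneck the ABDEAP effort targets.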

Juan P. Wachs

Purdue University

Juan Wachs is the James A. and Sharon M. Tompkins Rising Star Associate Professor in the School of Industrial Engineering at Purdue University. He is also an Adjunct Associate Professor of Surgery at the Indiana University School of Medicine. He directs the Intelligent Systems and Assistive Technologies (ISAT) Lab at Purdue and is affiliated with the Regenstrief Center for Healthcare Engineering. Dr. Wachs received his B.Ed.Tech in Electrical Education from ORT Academic College at the Hebrew University of Jerusalem campus, and his M.Sc. and Ph.D. in Industrial Engineering and Management from Ben-Gurion University of the Negev, Israel. He completed postdoctoral training at the Naval Postgraduate School’s MOVES Institute under a National Research Council Fellowship from the National Academies of Sciences. He pioneered the field of gesture interaction in healthcare, with applications in the operating room and austere environments. He co-authored a Best Paper Award finalist at the IEEE International Conference on Systems, Man, and Cybernetics; was awarded the 2012 Air Force Summer Faculty Fellowship Program (SFFP); and received an IEEE Appreciation Award for outstanding contribution to the success of the Spring 2012 Section Conference. He is the recipient of the 2013 Air Force Young Investigator Award and co-authored the poster presentation award winner at AAAI 2015. He was also awarded the 2015 Helmsley Senior Scientist Fellowship, and he is a 2016 Fulbright Scholar and the 2017 Rising Star Professor. He is the technical advisor to Prehensile Technologies, which develops technologies to improve the well-being of people with disabilities. His research interests include human-machine interaction, gesture recognition, and assistive robotics.

Keynote Title: Towards Lifelong Learning Machines (L2L): How can zero-shot learning lead to L2L?

Abstract: One-shot learning is a paradigm in learning theory that explores the ability of machines to recognize a certain class or category of objects from observing only a single instance of it. That means the system must generalize well enough to correctly categorize future observations of the same “thing,” based on the fact that the new observation shares fundamental commonalities with the previously observed example. Classical machine learning approaches study this problem as a pure numerical challenge, in which the “better” algorithm is judged purely on higher classification accuracy. But this trivializes the power of the technique in real-world applications, in which the context of the observation is absolutely critical to efficient generalization. In fact, without a context-dependent model of what is to be observed, one-shot learning would be an oxymoron. One-shot gesture recognition is one such challenge, in which teams compete to recognize hand and arm gestures after only one training instance. My work proposes a novel solution to the problem of one-shot recognition applied to human action, and specifically human gestures, using an integrative approach: a method to capture the variance of a gesture by looking at both the process of human cognition and the execution of the movement, rather than just the outcome (the gesture itself). We achieve this from the perspectives of neuroscience and linguistics by employing EEG sensors on human observers to try to capture what they actually remember from the gesture. Further, we propose to leverage one-shot learning (OSL) approaches coupled with conventional zero-shot learning (ZSL) approaches to address and solve the problem of Hard Zero-Shot Learning (HZSL). The main aim of HZSL is to recognize unseen classes (zero examples) with limited (one or a few examples per class) training information.
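In its simplest form, one-shot recognition can be framed as nearest-neighbor matching in a feature space: each class is represented by the single example seen so far, and a new observation is assigned to the class whose exemplar it is closest to. The sketch below shows only that baseline framing, not the integrative EEG-based method of the talk; the gesture names and feature vectors are hypothetical stand-ins for learned embeddings.

```python
# Minimal sketch of one-shot recognition as nearest-neighbor matching:
# each class is represented by exactly one stored example, and a probe is
# assigned to the class with the closest exemplar. Vectors are hypothetical.

import math

def one_shot_classify(probe, exemplars):
    """Return the label whose single training example is nearest to the probe."""
    return min(exemplars, key=lambda label: math.dist(probe, exemplars[label]))

# One training instance per gesture class (hypothetical embeddings).
exemplars = {
    "wave":  [0.9, 0.1, 0.0],
    "point": [0.1, 0.8, 0.2],
    "grasp": [0.0, 0.2, 0.9],
}

print(one_shot_classify([0.8, 0.2, 0.1], exemplars))
```

The quality of such a classifier hinges entirely on the embedding that produces the vectors, which is where the abstract's context-dependent modeling of what humans actually remember about a gesture comes in.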