Keynote Speakers

Title: Online Materials and Teaching for CS/CE: Research, Experiences, and Recommendations for Going Online due to COVID-19

Speaker: Roman Lysecky, Professor, Electrical and Computer Engineering, University of Arizona; Head of Content, zyBooks - A Wiley Brand

Abstract: 

Online active-learning content and program auto-grading with immediate feedback have enabled new approaches to teaching lower-division computer science/engineering courses. Having started with the goal of reducing failure rates in lower-division CS/CE courses by replacing existing textbooks/homework with web-native, integrated, active-learning content, zyBooks now cover more than 18 CS/CE courses and have been used by more than 700 universities and 1 million students. This talk briefly introduces the web-native, active-learning content, which consists of aggressively minimized text, animations, interactive learning questions, auto-graded homework, and auto-graded programming labs. We summarize published research findings on student learning outcomes, student earnestness in completing reading activities, student struggle rates and stress, and student engagement in class.
Many faculty are being asked to quickly move their courses online due to the COVID-19 situation. Instructors are scrambling to produce videos and online assignments, and to figure out how to give students feedback remotely. Because a zyBook already provides extensive interactive learning with automated, instant feedback outside of class, there is little to no need to create additional content or feedback mechanisms. We further highlight best practices for teaching courses online and provide recommendations for quickly switching a class to online delivery using zyBooks.

Speaker:

Roman Lysecky is a Professor of Electrical and Computer Engineering at the University of Arizona and Head of Content at zyBooks - A Wiley Brand. He received his Ph.D. in Computer Science from the University of California, Riverside in 2005. His research focuses on embedded systems with emphasis on medical device security and on computer science/engineering education. He is an inventor on one US patent. He has authored more than 10 textbooks and contributed to several more on topics including C, C++, Java, Data Structures, Digital Design, VHDL, Verilog, Web Programming, and Computer Systems. His recent books with zyBooks utilize a web-native, active-learning approach that has shown measurable increases in student learning and course grades. He has also authored more than 100 research publications in top journals and conferences. His research has been supported by the National Science Foundation (including a CAREER award in 2009), the Army Research Office, the Air Force Office of Scientific Research, and companies such as Toyota. He received the Outstanding Ph.D. Dissertation Award from the European Design and Automation Association (EDAA) in 2006, nine Best Paper Awards, and multiple awards for Excellence at the Student Interface from the College of Engineering at the University of Arizona.


Title: Towards Open World Video Event Understanding and Convolutional Neural Networks Implicitly Learn Object Size

Abstract: 

This talk will provide a very brief overview of the USF Institute for Artificial Intelligence + X and then discuss the two projects of the title.
Events are central to the content of human experience. From the constant onslaught of sensory input, the brain segments, extracts, and represents aspects related to events, and stores them in memory for future comparison, retrieval, and re-storage. The contents of events consist of objects/people (who), location (where), time (when), actions (what), activities (how), and intent (why). Many deep learning-based approaches extract this information from videos. However, most methods cannot adapt much beyond their training data and are incapable of recognizing events outside those they were explicitly programmed or trained for. The main limitation of current event analysis approaches is the implicit closed-world assumption. The ability to support open-world inference is limited by three main aspects: the underlying representation, the source of semantics, and the ability to continuously learn or adapt. This part of the talk will focus on flexible representations, amenable to open-world and self-supervised learning, that do not depend on the existence of a large amount of training data.
There are very good convolutional neural network (CNN) models for predicting which lung nodules in computed tomography (CT) images will become malignant in the future (>90% accurate). Size is an important indicator of potential malignancy (72% accurate by itself). However, the variable-size nodules in lung screening CT images must be resized to a standard size for CNN training and testing. So, we examined whether the networks had learned a concept of nodule size. Experiments using both lung CT images and natural images of animals from the COCO dataset show that they can generally learn object size.
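To illustrate the preprocessing step mentioned above, here is a minimal sketch of resizing variable-size nodule crops to a fixed CNN input size. It assumes a PyTorch-style pipeline; the 224x224 input size, bilinear interpolation, and crop dimensions are illustrative assumptions, not details from the talk.

import torch
import torch.nn.functional as F

def resize_crop(crop: torch.Tensor, size: int = 224) -> torch.Tensor:
    # Resize a single-channel crop of arbitrary H x W to a fixed size x size input.
    x = crop.unsqueeze(0).unsqueeze(0).float()  # (H, W) -> (1, 1, H, W)
    x = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return x.squeeze(0)  # (1, size, size)

# Two hypothetical nodule crops with different original sizes map to identical shapes,
# so any size cue the CNN uses must be inferred from the image content itself.
small = torch.rand(18, 18)
large = torch.rand(60, 60)
batch = torch.stack([resize_crop(small), resize_crop(large)])
print(batch.shape)  # torch.Size([2, 1, 224, 224])

Because every crop ends up the same shape after this step, explicit size information is discarded, which is what motivates asking whether the trained networks nonetheless learn object size.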

Speakers: Dr. Sudeep Sarkar and Dr. Lawrence O. Hall, University of South Florida

Speaker: Dr. Sudeep Sarkar

Sudeep Sarkar is a Professor and Chair of Computer Science and Engineering and the Associate Vice President for Special Projects at the University of South Florida in Tampa. He is a Fellow of the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), the International Association for Pattern Recognition (IAPR), and the American Institute for Medical and Biological Engineering (AIMBE), and a Fellow and member of the Board of Directors of the National Academy of Inventors (NAI). He has served on many journal editorial boards and is currently the Editor-in-Chief of Pattern Recognition Letters. He has 25 years of expertise in computer vision and pattern recognition algorithms and systems, holds ten U.S. patents, has licensed technologies, and has published high-impact journal and conference papers.


Speaker: Dr. Lawrence O. Hall

Lawrence O. Hall is a Distinguished University Professor in the Department of Computer Science and Engineering at the University of South Florida and the co-Director of the Institute for Artificial Intelligence + X. He received his Ph.D. in Computer Science from Florida State University in 1986 and a B.S. in Applied Mathematics from the Florida Institute of Technology in 1980. He is a Fellow of the IEEE, AAAS, AIMBE, and IAPR. He received the Norbert Wiener Award in 2012 and the Joseph Wohl Award in 2017 from the IEEE SMC Society. He is a past President of the IEEE Systems, Man, and Cybernetics Society and a former Editor-in-Chief of what is now the IEEE Transactions on Cybernetics. His research interests lie in learning from big data, distributed machine learning, medical image understanding, bioinformatics, pattern recognition, modeling imprecision in decision making, and integrating AI into image processing. He continues to explore unsupervised and semi-supervised learning using scalable fuzzy approaches.