SCML Meeting: January 23, 2019

Professor Adam Smith (Computational Media) will present on “Deep Learning and the Future of Search: Objects, Apps, and Beyond”. Here is his abstract:

Web search has matured over the past 20 years, but it still leans on the traditions of textual document retrieval: you give me a textual query, and I’ll give you some related textual documents. Most audio and visual search technologies, only about ten years younger, rely on hand-engineered feature extraction to stand in for the role that words play in documents. Recent sweeping advances in perceptual artificial intelligence are now making it possible for search systems to radically extend the space of queries and documents. We can now search by photograph or by voice to find physical objects or moments reachable within interactive media.

In this talk, you will learn about the key concepts behind cutting-edge, cross-modal search engines. You will learn how to train semantic embeddings that map items onto vectors in an abstract space where distances and directions are meaningful. You will learn strategies for turning piles of data into a useful index that can be queried efficiently. The examples shown will draw on interactive media indexing and retrieval systems produced by the Design Reasoning Lab at UC Santa Cruz, which map out the space of interesting content within apps and games by directly interacting with them.
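
To make the embedding-and-index idea from the abstract concrete, here is a minimal sketch (mine, not the speaker’s) of the query side of such a system. It assumes a trained encoder already exists; the embeddings below are random vectors standing in for real ones, and all names are illustrative:

```python
import numpy as np

# Toy "semantic embedding" index: items (screenshots, audio clips, game
# moments, etc.) are represented as vectors in a shared abstract space.
# In a real cross-modal system these vectors come from trained encoders;
# here they are random, purely to illustrate the indexing/query flow.

rng = np.random.default_rng(0)
DIM = 64  # embedding dimensionality (illustrative choice)

# Pretend catalog: item ids plus embeddings, L2-normalized so that a
# dot product equals cosine similarity.
item_ids = [f"item_{i}" for i in range(1000)]
embeddings = rng.normal(size=(len(item_ids), DIM))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def query(query_vec: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
    """Return the k nearest catalog items to a query embedding.

    The query vector could come from any modality (a photograph, a
    voice snippet) as long as its encoder maps into the same space.
    """
    q = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ q            # cosine similarity to every item
    top = np.argsort(-scores)[:k]      # indices of the k best matches
    return [(item_ids[i], float(scores[i])) for i in top]

# Example: search with a made-up query embedding.
for item, score in query(rng.normal(size=DIM)):
    print(f"{item}\t{score:.3f}")
```

The brute-force scan above is fine for a small catalog; at scale, production systems typically swap it for an approximate nearest-neighbor index (e.g., FAISS or an HNSW graph) that trades a little recall for much faster queries.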