03 Jun 2019
AAII Seminar: June 5, 2019
categories: Seminar
This week we will have two presentations from Computational Media:
Speaker 1:
Mahika Dubey, Graduate Student in Computational Media, UCSC Creative Coding Lab
Title:
Data Brushes: Interactive Neural Style Transfer for Data Art
Abstract:
We introduce in-browser applications that apply data-art style-transfer brushes to an image, inviting casual creators and other non-technical users to interact with deep convolutional neural networks to co-create custom artworks. In addition to enabling a novel creative workflow, the process of interactively modifying an image via multiple style-transfer neural networks reveals meaningful features encoded within the networks and provides insight into the effect particular networks have on different images, or even different regions of an image, such as border artifacts and spatial stability or instability. Our data brushes provide new perspectives on the creative use of neural style transfer for data art and enhance intuition for the expressive range of particular style-transfer features.
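For readers unfamiliar with the brush metaphor, here is a minimal sketch of the underlying compositing step: run a style-transfer model over the image and blend its output back in only where the user has painted. This is an illustrative assumption about the mechanism, not the talk's actual in-browser tools; `stylize` is a placeholder for any image-to-image style-transfer network.

```python
# Sketch: apply a style-transfer "brush" only inside a user-painted mask.
import numpy as np

def apply_brush(image: np.ndarray, mask: np.ndarray, stylize) -> np.ndarray:
    """Composite stylize(image) into image wherever mask is nonzero.

    image: H x W x 3 float array in [0, 1]
    mask:  H x W float array in [0, 1] (soft brush strokes allowed)
    """
    styled = stylize(image)                        # run the full image through the network
    alpha = mask[..., None]                        # broadcast the mask over color channels
    return alpha * styled + (1.0 - alpha) * image  # keep original pixels outside the stroke

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    brush = np.zeros((64, 64))
    brush[16:48, 16:48] = 1.0                      # a square "brush stroke"
    # Stand-in for a neural style-transfer model: simply invert colors.
    out = apply_brush(img, brush, stylize=lambda x: 1.0 - x)
    print(out.shape)
```

Swapping different networks into `stylize` and painting different masks is, in spirit, how the brushes expose what each network does to particular regions of an image.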
Speaker 2:
Oskar Elek, Postdoctoral Researcher in Computational Media, UCSC Creative Coding Lab
Title:
Learning Patterns in Sample Distributions for Monte Carlo Variance Reduction
Abstract:
This ongoing project investigates the prediction of unknown distributions from stochastic, sparse samples. This problem is relevant in many domains (especially forecasting); we address it from the perspective of stochastic Monte Carlo integration in physically based image synthesis (rendering). Because the sample distributions obtained in rendering are complex, chaotic, and do not conform to known statistical distributions, simple estimation methods usually yield results with high variance. To tackle this issue, we systematically study these sample distributions to understand common patterns and archetypes, and propose to use deep neural networks to learn them. I will present our current results, with an open discussion centered on the main challenge: How can we use the knowledge of characteristic sample patterns to bootstrap the network and get better predictions?
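To make the variance problem concrete, here is a small sketch of Monte Carlo integration with few samples, comparing a naive uniform estimator against an importance-sampled one. The integrand, the sampling density, and the sample count are illustrative assumptions; importance sampling merely stands in for the learned predictors discussed in the talk.

```python
# Sketch: variance of a Monte Carlo estimator with sparse samples,
# and how a better sampling distribution reduces it.
import numpy as np

def f(x):
    # A peaked integrand, loosely analogous to a spiky light-transport signal.
    return np.exp(-50.0 * (x - 0.5) ** 2)

rng = np.random.default_rng(1)
n = 64  # few samples, as in per-pixel rendering budgets

# Naive estimator: uniform samples on [0, 1] (pdf = 1).
x_uni = rng.random(n)
est_uni = f(x_uni)

# Importance sampling: draw near the peak from N(0.5, 0.1) and reweight by its pdf
# (boundary effects are negligible for this integrand).
x_imp = np.clip(rng.normal(0.5, 0.1, n), 1e-6, 1 - 1e-6)
pdf = np.exp(-0.5 * ((x_imp - 0.5) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))
est_imp = f(x_imp) / pdf

for name, est in [("uniform", est_uni), ("importance", est_imp)]:
    print(f"{name}: mean={est.mean():.4f}, estimator variance={est.var() / n:.6f}")
```

The gap between the two estimator variances is exactly the kind of gain the project hopes to obtain automatically, by learning the characteristic sample patterns instead of hand-picking a sampling density.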