September 13, 2016

Thesis ready!

In this thesis we focus on providing practical solutions to different problems related to the capture and manipulation of data, and the simulation of processes from the real world.

We dedicate one part of this work to exploring novel use cases for tablet devices, leveraging the natural user interaction they enable to prototype more engaging image capture and editing tools. We present a novel framework for simulating the craftsmanship involved in analog photography techniques and other interesting optical manipulations.

A second part is devoted to image reconstruction algorithms that share the use of perceptual cues to compensate for missing data. First, we introduce our SMAA anti-aliasing filter for real-time applications, which performs a comprehensive morphological analysis to produce smooth yet sharp results. Next, we describe a simple procedure to capture extended dynamic range images with mobile devices, using computational photography techniques.

The last part deals with the capture of 3D data from the real world. We present a new depth-from-defocus algorithm for obtaining detailed depth maps of scenes. Finally, we describe the first system for stylized capture of hair, inspired by the abstraction process performed by sculptors, producing results suitable for 3D fabrication.

September 10, 2016

"Artificial Creativity" Documentary

I recently worked on a short documentary with some colleagues from the Graphics and Imaging Lab, aimed mainly at showcasing to the general public the state of the art in some subfields of computer graphics. It was recorded entirely in Spanish, with subtitles in additional languages coming in the near future.

Human beings, as visual creatures, have leveraged different tools throughout history to communicate visually, often through works with a strong creative and artistic component. In this area, the arrival of computers has meant a revolution, providing new media for their creation and distribution. But in contrast to other tools, computers keep getting more powerful and more capable… Will they one day be able to create art autonomously?

September 8, 2016

Intrinsic Light Fields

Intrinsic Light Fields  

Elena Garces1      Jose I. Echevarria1      Wen Zhang2      Hongzhi Wu2

1Universidad de Zaragoza       2Zhejiang University


We present the first method to automatically decompose a light field into its intrinsic shading and albedo components. In contrast to previous work targeting single 2D images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality renders previous state-of-the-art algorithms impractical, either due to their cost of processing even a single 2D slice, or their inability to enforce proper coherence across the additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working in the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions that are difficult to achieve with previous state-of-the-art algorithms.
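For intuition, here is a minimal sketch of the classic Retinex reasoning the method builds on. The threshold form below is the textbook heuristic, not the paper's exact energy, which adds the albedo and occlusion terms and joint 4D angular coherence mentioned above. Each view factors into shading and albedo, so the product becomes a sum in the log domain and gradients can be attributed to one layer or the other:

```latex
% Classic Retinex in the gradient domain (a sketch; \tau is a
% user-chosen threshold, not a parameter from the paper).
I(x) = S(x)\,A(x)
\;\;\Longrightarrow\;\;
\log I = \log S + \log A
% Heuristic: large gradients belong to albedo, the rest to shading:
\nabla \log A(x) =
\begin{cases}
  \nabla \log I(x) & \text{if } \lVert \nabla \log I(x)\rVert > \tau,\\
  0 & \text{otherwise},
\end{cases}
\qquad
\nabla \log S = \nabla \log I - \nabla \log A
```

Integrating the classified gradients back recovers log A and log S per view; the difficulty the paper addresses is keeping that decomposition consistent across the angular dimensions as well.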

September 4, 2016

Systems and methods for simulating the effects of liquids on a camera lens

Systems and methods for simulating the effects of liquids on a camera lens

Adobe Systems, Inc.

Patent US 9176662 B2 (2015)

Systems and methods for simulating liquid-on-lens effects may provide an interface through which users can add and/or manipulate fluids on a virtual camera lens. A physically based fluid simulation may simulate the behavior of the fluid as it is deposited on and/or manipulated on the virtual lens, and determine the distribution of the fluid across the lens. A ray tracing technique may be employed to determine how light is refracted through the virtual lens and the fluid, and to render a distorted output image as seen through the lens and the fluid. As the fluid is manipulated, corresponding changes in the image may be displayed in real time. The input image may be an existing single image or a direct camera feed (e.g., from a tablet-type device). The user may select a fluid type and/or various fluid properties for the image editing operation.
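The optics step is standard ray tracing. As a rough illustration of the refraction happening at each air/fluid/glass interface (a minimal sketch under my own naming and NumPy setting, not the patented implementation), Snell's law in vector form looks like this:

```python
import numpy as np

def refract(incident, normal, eta):
    """Bend a unit ray direction at an interface with unit normal,
    where eta = n1 / n2 is the relative index of refraction
    (Snell's law in vector form). Returns None on total internal
    reflection."""
    cos_i = -np.dot(incident, normal)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

# Example: a ray entering a water drop (n ~ 1.33) from air (n ~ 1.0).
ray = np.array([0.6, -0.8, 0.0])    # unit incident direction
n = np.array([0.0, 1.0, 0.0])       # surface normal at the hit point
bent = refract(ray, n, 1.0 / 1.33)  # refracted direction inside the fluid
```

Tracing such rays per output pixel, through every interface the fluid distribution creates on the lens, is what produces the distorted image described in the abstract.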


February 12, 2015

Fast depth from defocus from focal stacks [TVCJ]

Fast Depth from Defocus from Focal Stacks 

Stephen W. Bailey1      Jose I. Echevarria2      Bobby Bodenheimer3      Diego Gutierrez2      

1University of California at Berkeley       2Universidad de Zaragoza       3Vanderbilt University

The Visual Computer

We present a new depth-from-defocus method based on the assumption that a per-pixel blur estimate (related to the circle of confusion), while ambiguous for a single image, behaves in a consistent way over a focal stack of two or more images. This allows us to fit a simple analytical description of the circle of confusion to the different per-pixel measures, obtaining approximate depth values up to scale. Our results are comparable to previous work, while offering a faster and more flexible pipeline.
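For reference, the standard thin-lens model of the circle of confusion is of the flavor the method fits (the exact parametrization in the paper may differ): with aperture diameter A, focal length f, focus distance d_f and scene depth d,

```latex
% Thin-lens circle of confusion (standard model, stated for reference).
c(d) \;=\; A \,\frac{f}{d_f - f}\;\frac{\lvert\, d - d_f \,\rvert}{d}
```

A single image cannot tell whether a blurred pixel lies in front of or behind the focus plane; measuring the blur across two or more slices, each with its own focus distance d_f, resolves that ambiguity and lets a simple fit recover d up to scale.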