Research

Currently working in the Machine Intelligence Design group in Adobe Design

I graduated with a PhD from Carnegie Mellon University, advised by Kayvon Fatahalian and Jim McCann. I research Creativity Support Tools, which help artists of all skill levels work more efficiently, more easily, and more creatively. My research projects include a system for creating theatrical lighting designs from a single non-realistic image through a technique we call "visual objectives," an interface paradigm designed to make layer selection in complex image compositions easier, and a high-dimensional parameterized design exploration system called "design adjectives."

I received my Bachelor of Computer Science and Arts (Lighting Design) from Carnegie Mellon University in 2014, after spending many late nights in the theater and coding up graphics projects.

Current CV

Thesis

Improving Parameterized Design with Interactive User-Guided Sampling and Parameter Identification Tools

Committee: Kayvon Fatahalian (Co-Chair), James McCann (Co-Chair), Brad Myers, Sylvain Paris (Adobe)

Thesis Document | CMU Department Archive

Modern computer graphics design tasks often take place in high-dimensional parameterized design spaces. In these spaces, the design is specified by the values of tens to hundreds of parameters which often interact in ways that are difficult to predict. For instance, a parametric font may have tens of parameters controlling everything from stroke thickness to serif appearance, while a parameterized material may have hundreds of parameters specific to the material, such as a brick material providing controls for the number of bricks per row and column, but no single brick-size parameter. In these parameterized domains, creating a design often follows a coarse-to-fine iterative process in which a designer creates a set of initial designs that are gradually refined until one meets all constraints. The per-parameter interfaces commonly used for parameterized design are not well aligned with this process. One common method of providing higher-level navigation is to enable visual exploration through a design gallery, which presents an organized collection of samples selected from the design space for the user to browse. Gallery interfaces provide a solid overview of a design space, but are difficult to direct to specific regions based on the user's current design goal.

This thesis presents a collection of software systems and interface techniques that support productive design in high-dimensional parameter spaces through interactive user-guided sampling. A major component of this work is Design Adjectives, a domain-agnostic framework for creating parameterized design tools that use machine-learned models of user intent to guide exploration through high-dimensional design spaces. Combining a design gallery with a model of intent creates an interactive exploration interface that is more closely aligned with how the design process works. An implementation of the design adjectives system based on Gaussian process regression is presented. This implementation rapidly learns user intent from only a few examples, and can generate samples of desirable designs for gallery viewing at interactive rates. Components of this framework can be improved on a domain-specific basis, and an example of such an improvement is examined in a theatrical lighting design context. Information gleaned from these exploratory design methods can be used in conjunction with parameter identification tools, such as the Hover Visualization tool presented in this thesis, to support fine-tuning. In user studies evaluating these systems, users felt that they were able to explore the design space more quickly and easily than with existing per-parameter interfaces.
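The core loop described above can be illustrated with a minimal sketch: fit a Gaussian process regressor to a handful of user ratings, then score random candidate designs and keep the highest-scoring ones for a gallery. This is an illustrative stand-in rather than the thesis implementation; the data, parameter space, and function names are hypothetical, and it uses a basic RBF kernel in plain NumPy.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_query, noise=1e-4):
    """Posterior mean of a GP regressor at the query points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# A few user-rated designs in a toy 3-parameter space (hypothetical data):
# positive scores mean "more like this," negative mean "less like this."
rated = np.array([[0.2, 0.8, 0.1], [0.3, 0.7, 0.2], [0.9, 0.1, 0.9]])
scores = np.array([1.0, 0.8, -1.0])

# Score random candidates under the learned intent model and keep
# the top few as a design gallery.
rng = np.random.default_rng(0)
candidates = rng.random((500, 3))
pred = gp_predict(rated, scores, candidates)
gallery = candidates[np.argsort(pred)[::-1][:8]]
```

Because both the GP fit and candidate scoring here are a few small matrix operations, this kind of loop can run at interactive rates even as the user adds ratings.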

Projects

Design Adjectives: A Framework for Interactive Model-Guided Exploration of Parameterized Design Spaces

Evan Shimizu, Matt Fisher, Sylvain Paris, James McCann, Kayvon Fatahalian
To Appear at UIST 2020

Project Website | Paper | Overview Video | Walkthrough Video

Many digital design tasks require a user to set a large number of parameters. Gallery-based interfaces provide a way to quickly evaluate examples and explore the space of potential designs, but require systems to predict which designs from a high-dimensional space are the right ones to present to the user. In this paper we describe design adjectives, a domain agnostic framework for creating parameterized design tools that use machine learned models of user intent to guide exploration through high-dimensional design spaces. We provide an implementation of the design adjectives framework based on Gaussian process regression, which is able to rapidly learn user intent from only a few examples. We use these models to generate samples of desirable designs for gallery viewing, and enhance slider-based interfaces to assist design fine tuning. Both learning and sampling occur at interactive rates, making the system suitable for iterative design workflows. We demonstrate use of the design adjectives framework to create design tools for three domains: materials, fonts, and particle systems. We evaluate these tools in a user study showing that participants were able to easily explore the design space and find designs that they liked, and in professional case studies that demonstrate the framework’s ability to support professional design concepting workflows.

Finding Layers Using Hover Visualizations

Evan Shimizu, Sylvain Paris, Matt Fisher, Kayvon Fatahalian
Presented at Graphics Interface 2019

Project Website | Paper | Code

In 2D digital art software, it is common to organize documents into layers that are composited to create the final image, mimicking the traditional technique of creating an image by drawing on stacked transparent celluloid sheets. While intuitive, the layer stack suffers from problems of scale and organization. Documents with many layers are unwieldy to edit, and finding a specific layer is akin to finding a needle in a haystack. This paper presents a click-and-hover interaction that visualizes the impact of a layer in the context of the full-resolution composited image, providing an easier and faster way to identify layers in complex compositions. Through a user study, we find that users prefer to use hover visualizations in all cases, and that in compositions with many overlapping, semi-transparent layers, users are able to locate layers twice as fast with the click-and-hover interface.
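The underlying idea can be sketched as a per-pixel difference between the composite with and without a given layer, assuming a simple back-to-front "over" compositing model; the function names and toy data below are illustrative, not the system's actual code.

```python
import numpy as np

def composite(layers):
    """Back-to-front 'over' compositing of RGBA float layers in [0, 1]."""
    out = np.zeros_like(layers[0])
    for rgba in layers:
        a = rgba[..., 3:4]
        out[..., :3] = rgba[..., :3] * a + out[..., :3] * (1 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1 - a)
    return out

def layer_impact(layers, i):
    """Per-pixel impact of layer i: how much the final image changes
    when that layer is removed from the stack."""
    full = composite(layers)
    without = composite(layers[:i] + layers[i + 1:])
    return np.abs(full[..., :3] - without[..., :3]).sum(-1)

# Toy document: an opaque red background with a semi-transparent
# blue layer covering only the top half.
bg = np.zeros((4, 4, 4)); bg[..., 0] = 1.0; bg[..., 3] = 1.0
top = np.zeros((4, 4, 4)); top[:2, :, 2] = 1.0; top[:2, :, 3] = 0.5
impact = layer_impact([bg, top], 1)
```

An impact map like this one is nonzero exactly where the hovered layer actually changes the composite, which is the information a hover highlight needs even when the layer is buried under others.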

Exploratory Stage Lighting Design using Visual Objectives

Evan Shimizu, Sylvain Paris, Matt Fisher, Ersin Yumer, Kayvon Fatahalian
Presented at Eurographics 2019

Project Website | Paper | Code | CGF Page

Lighting is a critical element of theater. A lighting designer is responsible for drawing the audience's attention to a specific part of the stage, setting time of day, creating a mood, and conveying emotions. Designers often begin the lighting design process by collecting reference visual imagery that captures different aspects of their artistic intent. Then, they experiment with various lighting options to determine which ideas work best on stage. However, modern stages contain tens to hundreds of lights, and setting each light source's parameters individually to realize an idea is tedious and requires expert skill. In this paper, we describe an exploratory lighting design tool based on feedback from professional designers. The system extracts abstract visual objectives from reference imagery and applies them to target regions of the stage. Our system can rapidly generate plausible design candidates that embody the visual objectives through a Gibbs sampling method, and present them as a design gallery for rapid exploration and iterative refinement. We demonstrate that the resulting system allows lighting designers of all skill levels to quickly create and communicate complex designs, even for scenes containing many color-changing lights.
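As a rough illustration of generating candidates against a visual objective, the sketch below repeatedly resamples one light at a time and keeps proposals that improve a score function. This is a greatly simplified, greedy coordinate-resampling stand-in for the paper's actual Gibbs sampling method, and the brightness objective is hypothetical.

```python
import numpy as np

def resample_design(score, n_lights, n_iters=200, n_proposals=20, rng=None):
    """Coordinate-at-a-time resampling: on each iteration, pick one
    light, draw candidate intensities for it, and keep the candidate
    that best improves the overall design score."""
    if rng is None:
        rng = np.random.default_rng()
    state = rng.random(n_lights)          # each light's intensity in [0, 1]
    for _ in range(n_iters):
        i = rng.integers(n_lights)        # light to resample this step
        best_s = score(state)
        for p in rng.random(n_proposals):
            cand = state.copy()
            cand[i] = p
            s = score(cand)
            if s > best_s:
                best_s, state = s, cand
    return state

# Hypothetical visual objective: total stage brightness near a target.
target = 3.0
score = lambda x: -abs(x.sum() - target)
design = resample_design(score, n_lights=10, rng=np.random.default_rng(1))
```

Running many such chains from different starting states yields a spread of distinct candidates that all satisfy the objective, which is what makes a gallery of plausible alternatives possible.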

Lumiverse

Lumiverse is a cross-platform framework for creating scalable, interactive lighting applications. Lumiverse provides abstractions for organizing, selecting, and animating lighting devices, allowing real-world devices to be manipulated with programming concepts similar to those found in traditional 2D graphics and web applications.

Lumiverse was developed at Carnegie Mellon University by Evan Shimizu and Chenxi Liu. This project was supported in part by funding from the Carnegie Mellon University Frank-Ratchye Fund for Art @ the Frontier.

You can download and learn more about the framework at the project website.