Arvind Satyanarayan
Hi, I'm Arvind Satyanarayan.

I'm a Computer Science Ph.D. candidate at Stanford University, working with Jeff Heer and the Interactive Data Lab. My research seeks to lower the threshold for design, with a focus on data visualization. I'm also an advisor for Apropose, Inc., a Bay Area startup I co-founded to build data-driven web design tools.

I graduated from UC San Diego, where I worked with Jim Hollan to study interaction with wall-sized displays. At UCSD's Revelle College, I helped create an internship program and served as a Senator, a Resident Advisor, and an Orientation Leader.

Years ago, I wrote plugins and tutorials for Movable Type, and co-founded Melody.

Papers and Notes

Brushing Schematic

Declarative Interaction Design for Data Visualization

Declarative visualization grammars can accelerate development, facilitate retargeting across platforms, and allow language-level optimizations. However, existing declarative visualization languages are primarily concerned with visual encoding, and rely on imperative event handlers for interactive behaviors. In response, we introduce a model of declarative interaction design for data visualizations. [...] Adopting methods from reactive programming, we model low-level events as composable data streams from which we form higher-level semantic signals. Signals feed predicates and scale inversions, which allow us to generalize interactive selections at the level of item geometry (pixels) into interactive queries over the data domain. Production rules then use these queries to manipulate the visualization’s appearance. To facilitate reuse and sharing, these constructs can be encapsulated as named interactors: standalone, purely declarative specifications of interaction techniques. We assess our model’s feasibility and expressivity by instantiating it with extensions to the Vega visualization grammar. Through a diverse range of examples, we demonstrate coverage over an established taxonomy of visualization interaction techniques.
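The pipeline the abstract describes (low-level events composed into semantic signals, scale inversions and predicates lifting pixel selections into data-domain queries, and production rules updating the view) can be sketched in miniature. The following is a hypothetical TypeScript illustration only, not the actual Vega implementation or API; the `scale.invert` object, `inBrush` predicate, and data values are all invented for the example.

```typescript
// Hypothetical sketch of the events → signals → query → update pipeline.

type Point = { x: number };

// A linear scale mapping the data domain [0, 100] onto the pixel range [0, 500].
const scale = {
  invert: (pixel: number) => (pixel / 500) * 100, // pixels → data domain
};

// Low-level events modeled as a composable stream (here, a recorded drag).
const dragEvents: Point[] = [{ x: 50 }, { x: 250 }];

// A higher-level semantic signal derived from the stream: the brush extent in pixels.
const brushPixels: [number, number] = [
  Math.min(...dragEvents.map(e => e.x)),
  Math.max(...dragEvents.map(e => e.x)),
];

// Scale inversion generalizes the pixel-level selection into a data-domain interval.
const brushDomain = brushPixels.map(scale.invert) as [number, number];

// A predicate over the data domain: an interactive query, independent of pixels.
const inBrush = (datum: { value: number }) =>
  datum.value >= brushDomain[0] && datum.value <= brushDomain[1];

// A "production rule": select the marks whose data satisfy the query.
const data = [{ value: 5 }, { value: 20 }, { value: 80 }];
const highlighted = data.filter(inBrush).map(d => d.value);
console.log(brushDomain, highlighted);
```

Because the selection is expressed over the data domain rather than item geometry, the same query remains meaningful if the visualization is resized or re-encoded, which is what makes the interaction specification reusable.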

The Lyra VDE

Lyra: An Interactive Visualization Design Environment

We present Lyra, an interactive environment for designing customized visualizations without writing code. Using drag-and-drop interactions, designers can bind data to the properties of graphical marks to author expressive visualization designs. Marks can be moved, rotated and resized using handles; relatively positioned using connectors; and parameterized by data fields using property drop zones. Lyra also provides a data pipeline interface for iterative, visual specification of data transformations and layout algorithms. [...] Visualizations created with Lyra are represented as specifications in Vega, a declarative visualization grammar that enables sharing and reuse. We evaluate Lyra’s expressivity and accessibility through diverse examples and studies with journalists and visualization designers. We find that Lyra enables users to rapidly develop customized visualizations, covering a design space comparable to existing programming-based tools.

The Ellipsis Interface

Authoring Narrative Visualizations with Ellipsis

Data visualization is now a popular medium for journalistic storytelling. However, current visualization tools either lack support for storytelling or require significant technical expertise. Informed by interviews with journalists, we introduce a model of storytelling abstractions that includes state-based scene structure, dynamic annotations and decoupled coordination of multiple visualization components. We instantiate our model in Ellipsis: a system that combines a domain-specific language (DSL) for storytelling with a graphical interface for story authoring. [...] User interactions are automatically translated into statements in the Ellipsis DSL. By enabling storytelling without programming, the Ellipsis interface lowers the threshold for authoring narrative visualizations. We evaluate Ellipsis through example applications and user studies with award-winning journalists. Study participants find Ellipsis to be a valuable prototyping tool that can empower journalists in the creation of interactive narratives.
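The storytelling abstractions above (state-based scenes carrying dynamic annotations over a visualization) can be illustrated with a small sketch. This is a hypothetical TypeScript model, not the actual Ellipsis DSL; the `Scene` shape, `play` function, and story content are invented for illustration.

```typescript
// Hypothetical model of state-based scene structure: each scene pairs a
// visualization state with the annotations shown while that scene is active.
interface Scene {
  name: string;
  state: Record<string, number | string>; // parameters applied to the visualization
  annotations: string[];                  // dynamic annotations for this scene
}

const story: Scene[] = [
  { name: "intro",  state: { year: 2010 },                  annotations: ["Overall trend"] },
  { name: "detail", state: { year: 2013, region: "West" },  annotations: ["Regional spike"] },
];

// Playing the story applies each scene's state in order; a real system would
// push these states into the coordinated visualization components.
function play(scenes: Scene[]): string[] {
  return scenes.map(s => `${s.name}: year=${s.state["year"]}`);
}

console.log(play(story));
```

Keeping scenes as declarative state (rather than imperative transition code) is what lets a graphical authoring interface emit them automatically as DSL statements.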

Webzeitgeist Pipeline

Webzeitgeist: Design Mining the Web

Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. [...] This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.

Multiple overlays controlled via a touchscreen

Using Overlays to Support Collaborative Interaction with Display Walls

Large-scale display walls, and the high-resolution visualizations they support, promise to become ubiquitous. Natural interaction with them, especially in collaborative environments, is increasingly important and yet remains an ongoing challenge. Part of the problem is a resolution mismatch between low-resolution input devices and high-resolution display walls. In addition, enabling concurrent use by multiple users is difficult. In this paper, we present an overlay interface element superimposed on wall-display applications to help constrain interaction, focus attention on subsections of a display wall, and facilitate collaborative multi-user workflow.

Posters, Demos, and Technical Reports

The CHI 2013 Interactive Schedule

CHI iSchedule Interface

CHI 2013 features 30-second "Video Preview" summaries for each of its 500+ events. The Interactive Schedule uses large display screens and mobile applications to help attendees navigate this wealth of video previews and identify events they would like to attend.

Learning Structural Semantics for the Web

Webzeitgeist Components

Researchers have long envisioned a Semantic Web, where unstructured Web content is replaced by documents with rich semantic annotations. This paper introduces a method for automatically "semantifying" structural page elements: using machine learning to train classifiers that can be applied post-hoc.

A Platform for Large Scale Machine Learning on Web Design

Webzeitgeist Components

We present a platform for large-scale machine learning on Web designs, which consists of: a Web crawler and proxy server that store a lossless, immutable snapshot of the Web; a page segmenter that codifies a page's visual layout; and crowdsourced metadata that augments the segmentations.