Building a Better Lyra

Last year we had a successful Knight Foundation Prototype Grant-funded collaboration with the University of Washington Interactive Data Lab (IDL) to improve their Voyager data exploration tool. At the end of our collaboration we knew we wanted to work with the amazing team from the IDL again, so we were thrilled when Jeff Heer & Arvind Satyanarayan approached us to help build the next version of Lyra. Lyra is an interactive, open-source visualization environment built on top of the IDL’s Vega visualization specification language.

Screenshot of Lyra

Lyra was originally built as an Angular application, and supports the creation of a rich range of graphics through intuitive drag-and-drop interactions. (Check out this demo video for a brief overview of Lyra's capabilities!) However, users of the original application encountered some persistent user experience issues, and Arvind felt that the Angular architecture was becoming a barrier to the improvements they hoped to make to the interface. He had begun to re-implement Lyra from scratch as a React application, and we joined the project just in time to help him tackle some gnarly architectural and UX conundrums.

Lyra, Vega, and Application State

The first pivotal question we had to answer was how best to represent and store the Lyra application state; specifically, how to capture that state as a single application model object.

To understand why this was necessary, it's important to understand the relationship between Lyra and Vega. Lyra is an interactive user interface that permits a user to bind marks to data. To be rendered by the Vega library, the visualization state must be expressed as a JSON object conforming to the Vega spec, which Vega then uses to render the visualization itself. Every time data is bound or unbound, or marks are added or removed, that Vega model object must be recomputed.

The Vega specification object itself is generated by parsing a subset of the data within the larger Lyra application model. For example, if there are three datasets loaded, but only one of them is bound to marks, then all three datasets would be represented in Lyra's application state, but the Vega spec would only contain a representation of those marks and that one bound dataset. (If "data binding" is new to you, check out how it works in D3; Vega leverages aspects of D3 internally and operates on the same principles.)
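To make that filtering concrete, here is a minimal sketch of deriving a spec from a subset of application state. The shapes of `lyraState` and the output object are illustrative stand-ins, not Lyra's real model or the full Vega spec format:

```javascript
// Hypothetical sketch: only datasets actually bound to marks are exported.
function exportVegaSpec(lyraState) {
  // Collect the names of datasets referenced by at least one mark.
  const boundNames = new Set(
    lyraState.marks
      .map((mark) => mark.from)
      .filter((name) => name != null)
  );
  return {
    // Only bound datasets make it into the spec...
    data: lyraState.datasets
      .filter((d) => boundNames.has(d.name))
      .map((d) => ({ name: d.name, values: d.values })),
    // ...alongside the marks that reference them.
    marks: lyraState.marks.map((mark) => ({
      type: mark.type,
      from: mark.from ? { data: mark.from } : undefined,
    })),
  };
}

// Three datasets loaded, one bound: the spec contains only that one.
const state = {
  datasets: [
    { name: 'cars', values: [] },
    { name: 'movies', values: [] },
    { name: 'stocks', values: [] },
  ],
  marks: [{ type: 'symbol', from: 'cars' }],
};
const spec = exportVegaSpec(state);
// spec.data contains only the 'cars' dataset
```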

Lyra Architecture Diagram, showing the flow of data from the Lyra model to the Vega view and the feedback loop from the Vega signals back into the Lyra model

Once rendered, the Vega view may emit signal events which change our application state, such as when a mark is dragged to a new position. Vega signals are dynamic variables that update in response to user interaction. They enable Lyra to support the direct-manipulation interactions users are accustomed to from vector graphics editors like Illustrator, such as clicking and dragging handles to change shapes, but by default their state is self-contained within the Vega view. (To learn more about signals, check out Arvind’s talk on Vega from OpenVis Conf 2016.) Any changes to those signal values have to be propagated back to the parent Lyra application model so that the Vega specification can be properly recreated at will.

Efficiently recreating a view on demand from an application model is a perfect match for React, but Lyra's application model was spread across interrelated objects instantiated from constructors representing marks, datasets, and other "primitive" types within Lyra. The values of those objects were mutated in place, and interdependencies between these primitive types made it difficult to trace how the application model was actually being created or mutated. There wasn't any one-stop shop for making a model update.
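The feedback loop can be sketched as a handler that folds a reported signal value back into the model, so the spec can be regenerated on demand. The model shape, signal name, and handler registration here are all hypothetical, not Lyra's real internals:

```javascript
// Hypothetical sketch of the signal feedback loop: when the Vega view reports
// a new signal value (e.g. a mark dragged to a new x position), we fold it
// back into the application model rather than letting it live only in the view.
function applySignal(lyraModel, signalName, value) {
  // Return a new model object instead of mutating in place.
  return {
    ...lyraModel,
    signals: { ...lyraModel.signals, [signalName]: value },
  };
}

// With a real Vega view, a listener registered on the view would call
// applySignal whenever the signal fires (API details elided here).
let model = { signals: { rect_x: 0 } };
const before = model;
model = applySignal(model, 'rect_x', 120);
// model.signals.rect_x is now 120, and `before` is untouched
```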

A single JavaScript specification object that is used to instantiate Vega was created by traversing this network of interrelated objects. To go the other direction (recreating that network of application state models when given a Vega spec itself) was feasible, but complex, and being able to recreate a given Lyra application state at will was a prerequisite for the implementation of features like undo/redo. In order to move forward, we wanted a system where our state could be represented by a single JavaScript object, which could then be used to generate the full application instance object collection.

Enter: Redux, which provides exactly the sort of state container we needed. While migrating to Redux deserves an article of its own, implementing the Lyra application state as a Redux store gave us the single immutable model we needed: Any UI change updates the top-most application model and that model only, and any change to the application model is used to create, update, or re-render the instantiated primitives used to derive the Vega specification. Where before state was updated through an uncertain series of calls between instance objects, now any change to the state must be dispatched as a Redux action, making it easier to trace the flow of control in the application and greatly simplifying the design required to add new features. By structuring our Redux model with Immutable.js, we now had an immutable application state and one-way data flow that greatly simplified the architecture of key features like undo/redo, saving & loading, and the training features described below.
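The pattern is easiest to see in miniature. This is a self-contained sketch of the dispatch-through-a-reducer flow described above, with a tiny stand-in for Redux's store; the action types and state shape are illustrative, not Lyra's actual reducers:

```javascript
// A reducer: a pure function from (state, action) to new state.
function marksReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_MARK':
      // Never mutate: return a new array with the new mark appended.
      return [...state, { id: action.id, type: action.markType }];
    case 'REMOVE_MARK':
      return state.filter((mark) => mark.id !== action.id);
    default:
      return state;
  }
}

// A tiny createStore stand-in: every change flows through dispatch,
// so the flow of control is traceable from a single place.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

const store = createStore(marksReducer);
store.dispatch({ type: 'ADD_MARK', id: 1, markType: 'rect' });
store.dispatch({ type: 'ADD_MARK', id: 2, markType: 'symbol' });
store.dispatch({ type: 'REMOVE_MARK', id: 1 });
// store.getState() is [{ id: 2, type: 'symbol' }]
```

Because every state transition is an action passing through a reducer, replaying or rewinding a list of actions (or snapshots of the resulting state) is what makes features like undo/redo straightforward.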

User Experience

While I began to tackle the Redux migration, Sue and Arvind began to dig into the UX side of the application. The original Angular version of Lyra was quite powerful, but Arvind found that non-technical users had trouble adopting Lyra as a data visualization tool—the interface assumed a high level of experience with visualization, and could be more than a little overwhelming as a result! Lyra's flexibility and power could only be properly tapped if you knew how to use that interface, and the cognitive load required to properly learn that UI made unguided discovery of Lyra's features extremely difficult.

Our UX work fell into two major categories:

  1. Cleaning up the UI to make the relationships between data and visualization more obvious, to facilitate more successful unguided discovery
  2. Implementing guided learning tools, such as hints and tutorial walkthroughs

UI Clarity

Sue interviewed the three types of potential Lyra users Arvind had identified—students, developers, and journalists—to learn their goals and needs. These interviews were distilled into personas, and we determined that the Journalist persona represented our primary target user. This is the individual who has data, and needs to tell a story with it, but may not have the proficiency (or, more likely, the time) to build something custom with D3 or other lower-level tools. Our interviews validated many of Arvind's original assumptions about Lyra's target audience, but we still needed to clean up the interface to make it easier for them to get started.

The original version of Lyra modeled its interface on graphical editors like Photoshop, with tools on the left and layers on the right. A data binding operation could affect elements in both of those sidebars, however, which users found jarring: the UI did not clearly communicate the interconnections between the elements that make up a data visualization.

Lyra 2 iterated on the original Lyra's panel structure, clarifying the relationships between different interface elements. Sue cleaned up those panels even further to emphasize the information flow within the application, where a user creates marks and then binds them to data to generate the final visualization. Lyra's primitives are stacked on the left; moving right, we pass through their property inspector (now emphasized with its own column) and any loaded datasets, and finally arrive at the rendered chart view. Because data can also be dragged directly onto a property in the inspector, this layout places the data pipelines directly between the two panes onto which data can be dropped, speeding up the data binding flow.

Screenshot of the redesigned Lyra interface

Our next hurdle was that Lyra supports such a wide range of drag-and-drop actions that we didn't have a good way to show users how to bind data to certain mark properties. For example, dragging and dropping a data field onto a scatterplot is comparatively straightforward:

Animation of the Lyra data binding interaction flow for a basic scatterplot

But how could you bind that field to color, or mark size? Sue brainstormed a number of solutions to this problem, and we're excited to continue collaborating with Arvind and the IDL as this interface is further fleshed out.

Welcome to Lyra

A tool like D3 can be overwhelming at first blush, but there is a rich ecosystem of code examples, tutorials, and in-depth guides to support users with a wide range of learning styles and skill levels. Lyra 1, however, included only a handful of examples, all of which were highly complex, customized graphics. This demonstrated the potential of the tool, but showing a user the end goal isn’t the same thing as leading them along the path: we knew we had to create features within Lyra to help with guided discovery of the app. We broke that onboarding down into two specific interfaces, Hints and Walkthroughs.

Hints are prompts you can turn on or off while you explore the app. Where a tooltip would give only textual information, hints are more interactive—they can offer suggestions, highlight other similar actions in the app, and reveal hidden UX features, such as the extra drop-zones that become available when you hold shift while dragging on the canvas.

Animation showing how the Lyra hints interface appears on certain user actions

Hints leverage our Redux architecture by listening for the same events that trigger UI state changes—one “reducer” (state change listener) would respond to an add-rectangle event by adding the mark to the visualization, while the hints reducer uses that same event to pop up a contextual dialogue box. Hints are designed to help a user learn more about Lyra, but still give them the freedom to play around with the app. Advanced users can choose to toggle them off once they are more familiar with Lyra’s interface.
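The one-event, two-reducers pattern looks roughly like this sketch. The action type, hint text, and state slices are illustrative stand-ins, not Lyra's actual reducers:

```javascript
// The marks reducer responds to the event by adding the mark.
function marksReducer(state = [], action) {
  if (action.type === 'ADD_RECTANGLE') {
    return [...state, { type: 'rect' }];
  }
  return state;
}

// The hints reducer responds to the very same event by surfacing a hint.
function hintsReducer(state = { visible: false, text: null }, action) {
  if (action.type === 'ADD_RECTANGLE') {
    return { visible: true, text: 'Drag a data field onto the rectangle to bind it.' };
  }
  return state;
}

// Combined Redux-style: each reducer owns one slice of the state tree,
// and a single dispatched action flows through both.
function rootReducer(state = {}, action) {
  return {
    marks: marksReducer(state.marks, action),
    hints: hintsReducer(state.hints, action),
  };
}

const next = rootReducer(undefined, { type: 'ADD_RECTANGLE' });
// next.marks gains a rect, and next.hints becomes visible
```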

Where Hints are designed to support free-form exploration of the application interface, Walkthroughs are full-on tutorials that help a user learn how to achieve a specific goal, whether it's to make a basic bar chart or to recreate Charles Minard’s Napoleon’s March. After the user chooses which walkthrough they want to complete, a series of prompts appears on screen guiding them to interact with the app in specific ways. Clicking the “next step” button performs a bare minimum of validation (inspecting the application state to ensure the user successfully completed the specified task) before moving on to the next step in the walkthrough. This minimal validation preserves the user's agency to play around with the app, whether they want to use their own data or customize their mark colors or text. At the end of a walkthrough, they can end up with exactly what the guide's thumbnail shows, or with their own creative take on the assignment.
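That style of minimal, state-inspecting validation can be sketched as a predicate per step: "next step" only advances when the predicate passes. The step contents and state shape here are hypothetical, not Lyra's actual walkthroughs:

```javascript
// Hypothetical walkthrough definition: each step pairs a prompt with a
// predicate over the application state.
const barChartSteps = [
  {
    prompt: 'Add a rectangle mark',
    isComplete: (state) => state.marks.some((m) => m.type === 'rect'),
  },
  {
    prompt: 'Bind a field to the rectangle height',
    isComplete: (state) =>
      state.marks.some((m) => m.type === 'rect' && m.bindings.height),
  },
];

// Advance only if the current step's check passes; otherwise stay put,
// leaving the user free to keep experimenting.
function tryAdvance(steps, walkthrough, appState) {
  const step = steps[walkthrough.stepIndex];
  return step.isComplete(appState)
    ? { ...walkthrough, stepIndex: walkthrough.stepIndex + 1 }
    : walkthrough;
}

let walkthrough = { stepIndex: 0 };
const appState = { marks: [{ type: 'rect', bindings: {} }] };
walkthrough = tryAdvance(barChartSteps, walkthrough, appState); // rect exists: advance
walkthrough = tryAdvance(barChartSteps, walkthrough, appState); // height unbound: stay
// walkthrough.stepIndex is 1
```

Because the check only inspects state, it doesn't care *how* the user got there: their own data, their own colors, and any detours along the way all validate the same.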

We’re excited to have partnered with the Interactive Data Lab on tackling these challenges, and we continue to stay involved in the development of Lyra and the other applications in the Vega ecosystem—an open platform that has enabled, and will continue to enable, open data visualization tools for our communities.

