Creating Reproducible Analysis Workflows with Data Pipelines



Case

UX Strategy and Product Design

Colin Shiner

To keep up with more complex data workflows, our users needed a tool that simplified their work without limiting their options. 

Synopsis

By conducting market research and user interviews with a mix of 30 data scientists, data engineers, analysts, and system administrators, my team at InterSystems and I discovered an opportunity to improve how users collect data from multiple sources and keep track of the transformations they perform on it to prepare it for analysis. 

I worked with a cross-functional team to bring the project from exploratory research, to concept, to reality. 

Background

Our exploratory user research had yielded several important findings. Most critically, users in our target client organizations were increasingly being asked to wear multiple hats. Clearly defined roles were becoming less common, and businesses were looking for ways to help their teams stay productive amid employee churn, changing technologies, and more incoming raw data than ever before. 

Working with a consulting team, we had tested some potential solution areas and found an opportunity to improve the way data analytics teams processed data inputs and turned them into actionable information for their business units. Specifically, many users relied on datasets pulled from SQL-like data sources, yet were not SQL experts themselves. As a result, much of their time went to moving data out of the SQL environment and into another one (where they could use Python, for example) just to clean and analyze it. This process was time-consuming and resource-intensive, and we wanted to provide an alternative. 
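
For readers curious what that hand-off typically looked like, here is an illustrative sketch of the manual workflow our users described. The connection string, table, and column names are made up for the example; any SQL database a team already uses would slot in.

    # Illustrative only: the manual SQL-to-Python hand-off our users described.
    # Connection details, table, and column names are hypothetical.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://analyst:password@db-host/warehouse")

    # Step 1: hand-written SQL just to get the raw rows out of the database.
    raw = pd.read_sql("SELECT * FROM sales_2024", engine)

    # Step 2: the actual cleaning happens in Python, disconnected from the
    # query above, and has to be re-run (and re-explained) by hand whenever
    # the source data changes.
    raw = raw.dropna(subset=["revenue", "units"])
    raw["unit_price"] = raw["revenue"] / raw["units"]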


Process

Building on these research insights, I began sketching ideas for potential product solutions while, in parallel, a colleague from the product management team secured approval to develop a proof of concept. 

Working in iterative design cycles, I used wireframes and Figma prototypes to demonstrate how different sets of features for the new product might work. Then, collaborating with the development and product teams, we determined which mix of features and user flows we could build quickly while still demonstrating the value of the tools to potential clients as a minimum viable product (MVP). 

As we began to crystallize the vision and scope of the MVP, I started building higher-fidelity Figma mockups to clarify how specific features and flows could mesh into a cohesive product. 

Some of the tools our users might use throughout the week to conduct an analysis. 
Low fidelity sketch of a potential UI: Option A
Low fidelity sketch of a potential UI: Option B

Product Concept

These sketches converged on the idea of a data "pipeline" where users could ingest and transform data from thousands of databases using a no-code/low-code interface and still toggle back to a SQL-focused interface whenever they chose to.

The pipeline UI gives users a visual way to trace how their data is being transformed and used, and also lets them create their own reproducible data transformations using Excel-like formulas that we translate into SQL transformations behind the scenes. 
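
For the curious, here is a deliberately simplified sketch of that translation idea. It is not the production implementation (which has to handle operator precedence, type checking, and dialect differences); it just shows the gist: bracketed column references become SQL identifiers, and spreadsheet functions like ROUND map onto their SQL counterparts.

    import re

    def formula_to_sql(formula: str) -> str:
        """Toy translation of a spreadsheet-style formula into a SQL expression.

        Column references like [revenue] become quoted identifiers; functions
        such as ROUND share a name with SQL and pass through unchanged.
        """
        return re.sub(r"\[([A-Za-z_][A-Za-z0-9_]*)\]", r'"\1"', formula)

    # A user types this into a pipeline card...
    formula = "ROUND([revenue] / [units], 2)"
    # ...and the engine runs the SQL equivalent instead:
    print(formula_to_sql(formula))   # ROUND("revenue" / "units", 2)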

Higher fidelity mockup: Screen 1
Higher fidelity mockup: Screen 2
Higher fidelity mockup: Screen 3

Extending the Design System

One additional consideration for the project involved our company design system. In recent years, the marketing and UX teams had been investing enormous effort in establishing a company-wide design system to speed up development times and give our products a consistent look. Therefore, as the Pipeline wireframes progressed to higher levels of fidelity, I took care to emulate the visual style of the design system while also extending it to include new types of elements and interactions. 

Design system guidelines for a "Chartbook" component
Higher fidelity mockup of Pipelines components
Each card on the left-hand side of the screen represents a data transformation (for example: rename a column, multiply two columns together, filter the data based on some criteria, and so on), and clicking a card opens the configuration options for that transformation. Cards can be reordered vertically by dragging and dropping, which allows the user to reorganize how their data is being transformed while minimizing the amount of rewriting they have to do. 
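
In data terms, one way to picture the card stack (again, a simplified sketch rather than the shipped implementation): each card is an ordered transformation step, and the ordered list composes into a single query, so dragging a card changes the composition order instead of forcing the user to rewrite SQL. The step names and SQL fragments below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Step:
        """One card in the stack: a single, named transformation."""
        label: str
        sql: str  # fragment that reads from the previous step via {src}

    def compose(steps: list[Step], source_table: str) -> str:
        """Chain the ordered steps into one query using CTEs, so reordering
        cards only changes the nesting order; nothing is rewritten by hand."""
        if not steps:
            return f"SELECT * FROM {source_table}"
        ctes, src = [], source_table
        for i, step in enumerate(steps):
            name = f"step_{i}"
            ctes.append(f"{name} AS ({step.sql.format(src=src)})")
            src = name
        return "WITH " + ",\n     ".join(ctes) + f"\nSELECT * FROM {src}"

    pipeline = [
        Step("Filter to 2024", 'SELECT * FROM {src} WHERE "year" = 2024'),
        Step("Add unit price", 'SELECT *, "revenue" / "units" AS unit_price FROM {src}'),
    ]
    print(compose(pipeline, "sales_raw"))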

Outcomes and Current State

To date, the product has been extremely well received by initial testers and sales teams, and early demos prompted several user research participants to ask, “how do I sign up for advance access?” The product is currently in development. 

Want to know more? 

Let's talk!

Get in touch on LinkedIn