Increasing the Value of Your Data with Interactive Reports
By Jesh Harbaugh, Assistant Director of Business Intelligence
Development, Evaluation, and Strategic Initiatives (DESI)
SENECA FAMILY OF AGENCIES
This is the second post in a three-part series.
We expect a lot from front-line staff and treatment teams who work with our clients and families every day. In addition to planning and providing high-quality clinical interventions, direct-care staff are usually required to document this work in a timely manner within an electronic health record. There’s not much time left at the end of the day for a typical direct-care provider to analyze data—unless they experience the benefits of engaging with their data as outweighing the effort and time it takes them to access and consume it.
As I discussed in the previous post in this series, Data is Only Useful When it’s Used, it’s our responsibility to create systems that make it easy and intuitive for stakeholders to consume high-quality, relevant information. It’s paramount that these systems are designed with the user in mind and cater to their preferences, habits, and workflows.
Interactive, web-based reports and dashboards offer an ideal mix of characteristics that aligns with the established habits of most people living in our modern, connected society. People are accustomed to accessing content on mobile devices and have little patience for slow-loading pages. We are used to rich, colorful visuals that draw us in and dynamic motion that keeps us engaged. The more we can replicate that experience in our presentation of data, the more likely people are to engage with it.
By nature, the value of the information a user consumes from an interactive report will generally be greater than that of a static report, because they can easily explore new analyses and perspectives that are of particular interest to their role and skill set. Let’s take a look at a very simple example that provides an opportunity to explore the percent of clients actionable on a given CANS item by their primary diagnosis. (This was created in Power BI, a free business intelligence (BI) software program.)
This snapshot doesn’t initially look any different from a static report. Along the top, we have a simple visual that primarily serves as a slicer, but also provides information by displaying how many clients were assigned each diagnosis. The bar graph shows the percent of clients who were actionable on each CANS item on their initial assessment. So far, so good, but nothing revolutionary.
Here’s what happens when I click on the “Neurodevelopmental Disorders” diagnosis category:
The report instantly shows us how clients with this specific diagnosis differ from the full sample. You can see that the bar graph still displays a “shadow” of the original value, which serves as a benchmark to compare the new values against. In this sample data, you can see that clients with Neurodevelopmental Disorders are roughly twice as likely to be scored as actionable on the “Impulse/Hyperactivity” and “Regulatory: Body/Emotional Control” items, and less than half as likely to have an actionable item score on the “Depression” item (compared to the full sample).
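The logic driving this interaction is simpler than it might appear. Here is a rough sketch in plain Python (rather than Power BI) of what the slicer is doing behind the scenes; the data, field names, and function names are all hypothetical, invented for illustration:

```python
# Sketch of the slicer logic: compare the percent of clients actionable
# on a CANS item within a diagnosis subgroup against the full-sample
# benchmark (the "shadow" bar). All data here is made up.

records = [
    # (diagnosis_category, cans_item, actionable)
    ("Neurodevelopmental Disorders", "Impulse/Hyperactivity", True),
    ("Neurodevelopmental Disorders", "Depression", False),
    ("Depressive Disorders", "Impulse/Hyperactivity", False),
    ("Depressive Disorders", "Depression", True),
]

def percent_actionable(rows, item):
    """Percent of assessments scored actionable on one CANS item."""
    scores = [r[2] for r in rows if r[1] == item]
    return 100 * sum(scores) / len(scores) if scores else 0.0

def sliced(rows, diagnosis):
    """Apply the slicer: keep only clients with the selected diagnosis."""
    return [r for r in rows if r[0] == diagnosis]

item = "Impulse/Hyperactivity"
benchmark = percent_actionable(records, item)  # full sample ("shadow" bar)
subgroup = percent_actionable(
    sliced(records, "Neurodevelopmental Disorders"), item)
print(f"{item}: {subgroup:.0f}% vs. {benchmark:.0f}% benchmark")
```

In the real report, Power BI performs this filtering and benchmarking automatically when a slicer selection is made; the sketch just makes the underlying comparison explicit.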
I’ll let you draw your own conclusions about how the approach above compares to presenting the same data in table form:
One of the core benefits of interactive reports is that they enable and encourage exploration, not just passive consumption of information. They are learning tools, not just attempts at information transfer. Oftentimes, people expect data to give them answers, but as report designers, we should resist the temptation to think this way, and instead think of how we can guide the user to ask the right questions. The scope of information contained in reports is often specific and provides a single perspective—not the multi-faceted view that’s required to understand the nuances of a complex issue. A report serves as a supplement to a user’s existing subject matter expertise, and it should amplify that expertise by helping them apply it more effectively.
In the example report above, we are initially guiding the user to ask the question, “what areas of need are most prevalent for clients with a diagnosis of [x]?” Armed with that information, we ultimately want the user to ask, “what am I doing (and what should I do) to address the specific needs of clients with a diagnosis of [x]?” This report certainly can’t provide that information, but it points the user down a path of deeper inquiry and action. We can design other reports that can give insight into the effectiveness of specific interventions the user might decide to employ—for example, a report that shows changes in the percent of clients actionable on a given item over time.
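The trend report mentioned above boils down to the same percent-actionable calculation, grouped by time period instead of by diagnosis. A minimal sketch, again in Python with invented data and a hypothetical grouping by assessment quarter:

```python
# Hypothetical sketch of a trend report: percent of clients actionable
# on one CANS item, grouped by assessment quarter. Data is made up.
from collections import defaultdict

assessments = [
    # (assessment_quarter, actionable_on_item)
    ("2023-Q1", True), ("2023-Q1", True), ("2023-Q1", False),
    ("2023-Q2", True), ("2023-Q2", False), ("2023-Q2", False),
]

def trend(rows):
    """Percent actionable per quarter, rounded to one decimal place."""
    by_quarter = defaultdict(list)
    for quarter, actionable in rows:
        by_quarter[quarter].append(actionable)
    return {q: round(100 * sum(v) / len(v), 1)
            for q, v in sorted(by_quarter.items())}

print(trend(assessments))
```

A declining line in a chart built from these numbers could suggest that an intervention is reducing the prevalence of a given need over time, which is exactly the kind of follow-up question the diagnosis report is meant to prompt.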
Depending on your role and area of expertise, I’m sure you can think of other ways you might make use of this information or additional ways you’d be interested in slicing the data (in addition to diagnosis).
Most of us will likely agree that interactive, web-based reports are an ideal tool to have at our disposal—but how do you go about building reports like this for your organization?
The next blog post in this series will detail one organization’s approach to successfully implementing interactive reports and outline some key considerations you should be aware of as you set out to build your own reports and dashboards.
For more information on this post and the work being done at Seneca, email Jesh Harbaugh at firstname.lastname@example.org