March 30, 2017

Preparing to Leap: Big Data and the Agile Analytical Platform

by Ben Steverman in Health Analytics

The potential associated with big data has healthcare organizations chomping at the bit. Indeed, the promise of performance and scalability – reducing processing times by 50% to 75% or more – typically prompts leaders to want to jump right into the fray. The problems of healthcare data aggregation and analytics are good candidates for big data. Healthcare analytical data certainly fits the three V's – volume, variety, and velocity – commonly used to describe the data big data platforms are ideally suited for. There is data from claims, electronic medical records (EMRs), labs, pharmacy, accounting, demographics, and real-time devices . . . creating a nearly unquenchable need for more and more varieties of data.

However, many healthcare system professionals hesitate or turn away from the opportunities offered by big data. They have first-hand experience with inconsistent data and the vexing challenge of maintaining the solid baseline of structured data required to deliver meaningful (or even correct!) data over time in a scalable way. How would a flexible, more varied, less structured platform help? It sounds more like a nightmare.

At the same time, there are the data scientists, statisticians, and clinical staff who need to do their research. This analytical process is inherently an ad hoc, bottom-up endeavor. They want to explore behaviors and correlations, combining data sources as they home in on the critical factors pertinent to their investigation.

The two perspectives are quite different: production analytics requires rigor, while discovery analytics requires flexibility and agility.

Production analytical processes need to be robust and scalable. They need to deliver results consistently and have sufficient quality validation that data anomalies are automatically detected. The processes they create need to be operationally scalable so that, as more and more insights become automated and utilized across the enterprise, data quality doesn't suffer.
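
To make the idea concrete, here is a minimal sketch of what automated anomaly detection on an incoming data batch might look like. The column names, thresholds, and helper function are hypothetical illustrations, not drawn from any particular system.

```python
# A sketch of automated quality surveillance for a production pipeline.
# Column names and thresholds below are illustrative assumptions.
import pandas as pd

def validate_batch(batch: pd.DataFrame, baseline_rows: int) -> list[str]:
    """Return a list of anomalies detected in an incoming data batch."""
    anomalies = []

    # Volume check: flag batches that deviate sharply from the baseline.
    if abs(len(batch) - baseline_rows) / baseline_rows > 0.25:
        anomalies.append(f"row count {len(batch)} deviates >25% from baseline")

    # Completeness check: key identifiers should rarely be null.
    for col in ("member_id", "claim_id", "service_date"):
        null_rate = batch[col].isna().mean()
        if null_rate > 0.01:
            anomalies.append(f"{col} null rate {null_rate:.1%} exceeds 1%")

    # Plausibility check: paid amounts should never be negative.
    if (batch["paid_amount"] < 0).any():
        anomalies.append("negative paid_amount values found")

    return anomalies
```

Checks like these run on every load, so an anomalous feed is caught before it reaches downstream consumers rather than after a report goes out.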

Discovery analytics requires fast access to data: being able to map and read data with minimal effort and to process it quickly. Visualization tools need to be available so the team can explore the shape and structure of relationships. Preliminary results need to be publishable and savable so that graphs, reports, and insights in progress can be shared with users and consumers of the data.
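
In practice, discovery work often starts with nothing more than a quick load-and-look. The sketch below, with hypothetical file and column names, shows the kind of low-friction access the discovery team needs: read an extract, combine sources on the fly, and save a preliminary visual to share.

```python
# A discovery-side sketch: load extracts, join them with minimal effort,
# and visualize a relationship. File and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

claims = pd.read_parquet("claims_extract.parquet")   # fast columnar read
labs = pd.read_parquet("lab_results.parquet")

# Combine data sources on the fly: join claims to lab results by member.
merged = claims.merge(labs, on="member_id", how="inner")

# Explore the shape of a relationship before committing to a model.
merged.plot.scatter(x="a1c_value", y="paid_amount", alpha=0.3)
plt.title("Paid amount vs. A1c (exploratory)")
plt.savefig("a1c_vs_cost.png")   # save a preliminary result to share
```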

Some organizations have leaned toward a production analytics environment and have problems keeping up with an expanding backlog. Others have leaned toward a discovery analytics environment and have problems with repeatability and sustaining quality. Some have both environments, and the two bear little in common: insights uncovered on the discovery side have to be completely remodeled and rewritten before they make it into production.

The key to succeeding with a push to big data is to understand and support both sets of users in a way that unifies the two: a unified data strategy in which the tools, governance, and the promotion model from discovery to production release are the same. The application stack contains visual tools, discovery tools, data quality surveillance, source code management, standards . . . all of the things needed to support the continuum of usage. Some tools are relevant only to discovery, others are used only in production, but they are integrated. There is no wholesale rework of the discovery solution, just a graceful handoff of code and data sources, as the sketch below illustrates. Big data gives us the flexibility to apply both the rigor and the agility via the same application stack.
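
One way to picture that handoff, as a sketch under the assumption of a simple two-tier governance model (the pipeline, tier names, and checks here are all illustrative), is a pipeline definition that keeps its code and data sources and simply gains production controls when promoted.

```python
# A sketch of a graceful discovery-to-production handoff: the same
# pipeline definition moves between tiers, and the tier determines
# which controls apply. All names and tiers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    name: str
    source_tables: list[str]
    tier: str = "discovery"            # "discovery" or "production"
    quality_checks: list[str] = field(default_factory=list)

def promote(p: Pipeline) -> Pipeline:
    """Promote a discovery pipeline without rewriting it: attach the
    production controls the governance model requires."""
    p.tier = "production"
    p.quality_checks += ["row_count_baseline", "null_rate_limits"]
    return p

experiment = Pipeline("readmission_risk", ["claims", "labs"])
production = promote(experiment)   # same code and data sources carry over
```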

In a subsequent posting, I will discuss the different aspects of the optimal analytic stack: the roles metadata and governance play in enabling the integration of new, experimental sources with well-established ones, the art of software quality management and operational scalability, the inclusion of non-production insights and graphics in the user experience, and the way a social media/conversational metaphor can be used to enhance and extend the usability of insights at any level of development.


Author
Ben Steverman
Chief Technology Officer

Ben has been a leader in software development and architecture for over 30 years and brings breadth and depth of experience designing large-scale server architectures. Ben has led technical organizations in Healthcare, Financial Services, and Manufacturing.
