This is the second part of the article Three Ways Your Analytics Platform Is Sabotaging Your Analytics (and how to fix it). The first part of this article can be found here.

3. Don’t Get Used; Use Your Measures

Measures. They’re in your statutory reporting requirements; they’re in your incentive program; they’re in your internal quality-improvement program. But if your analytics platform is locked into only certain standards, you lack the ability to expand, to ask new questions, and to test new ideas. If measures cannot be retrieved or calculated when you need them, then you’re working for the platform, and not the other way around. And if you can’t adjust the granularity or specificity of the measure – allowing your teams to focus on specific sets of facilities, providers, or patients – you just lost a major tool in coordinating your quality-improvement program.

And once your measures are in the platform, do you trust them? Trust goes beyond simply believing the numbers: are they reliable and traceable? Do you know which standards your measures are based on, which data are being used to calculate them, and what granularity, filters, and caveats are being applied to the specific results? The proliferation of local measures, on top of national standards like HEDIS and Meaningful Use (MU), means that any one organization may have hundreds or thousands of measures. Is it clear what each measure is doing, so you know whether it is still serving its purpose?

To get real power out of the reporting framework in your analytics platform, your measures must be available, useful, and transparent. You need to be able to get at the measures when you need them and in a form that’s useful to you, whether that’s a spreadsheet, a feed to a reporting system, or the tablets in the hands of your care coordinators. Your measures need to be useful to you, not just in satisfying others’ reporting requirements, but for your own needs, from the reports that fuel the morning huddles to the data that drive your quality-improvement program. And don’t forget the transparency. You should be able to find out where the data used for measure calculation came from, and even how the measure functions. It helps to have a readable measure syntax, but clear documentation explaining not just what the measure is but where it came from gives you considerable power over and confidence in your reporting process.
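To make the idea of a transparent, traceable measure concrete, here is a minimal sketch in Python. The `Measure` class, its field names, and the simplified blood-pressure-control example are all hypothetical illustrations, not any particular platform's API or an official measure specification; the point is that a measure can carry its own provenance (source standard, data sources, filters) alongside its calculation logic.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Measure:
    """A hypothetical measure definition that carries its own provenance."""
    name: str
    source_standard: str            # e.g., "HEDIS", "MU", or a local definition
    data_sources: list[str]         # where the underlying data come from
    filters: dict[str, str]         # granularity and caveats applied to results
    numerator: Callable[[dict], bool]
    denominator: Callable[[dict], bool]

    def rate(self, patients: list[dict]) -> float:
        """Share of denominator-eligible patients who meet the numerator."""
        eligible = [p for p in patients if self.denominator(p)]
        if not eligible:
            return 0.0
        return sum(self.numerator(p) for p in eligible) / len(eligible)

    def provenance(self) -> str:
        """Human-readable trace of where this measure comes from."""
        return (f"{self.name} [{self.source_standard}] "
                f"sources={self.data_sources} filters={self.filters}")

# Illustrative only: a simplified blood-pressure-control measure.
bp_control = Measure(
    name="Controlling High Blood Pressure",
    source_standard="local adaptation of a HEDIS-style measure",
    data_sources=["EHR vitals feed", "billing diagnoses"],
    filters={"age": "18-85", "diagnosis": "hypertension"},
    numerator=lambda p: p["systolic"] < 140 and p["diastolic"] < 90,
    denominator=lambda p: p["has_hypertension"] and 18 <= p["age"] <= 85,
)

patients = [
    {"age": 60, "has_hypertension": True, "systolic": 130, "diastolic": 80},
    {"age": 70, "has_hypertension": True, "systolic": 150, "diastolic": 95},
    {"age": 40, "has_hypertension": False, "systolic": 120, "diastolic": 75},
]
print(bp_control.rate(patients))    # 1 of 2 eligible patients controlled
print(bp_control.provenance())
```

Because the provenance travels with the definition, anyone reviewing a result can see at a glance which standard, data sources, and filters produced it.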

From our perspective as data aggregators and integrators, we see it as our job to ensure success through effective communication and productive tools. However, there are a number of ways for organizations to transform how they use their analytics, although the particulars – and likelihood of success – will greatly depend on the organization.

  • Data integration depends on many factors (ask any organization that has gone through such a transformation), but our experience suggests that the top three promoters of success are:
    a) Leadership: a champion or champions in the C-suite who have a clear vision of what integration will bring to the organization, and the willingness to promote the organizational changes and investment needed to bring integration to fruition.
    b) Training throughout the entire organization, to ensure that users understand the value and impact of integration, and that workflow reflects the assumptions and definitions of the integration process.
    c) Quality and validation, defined both by intrinsic data quality and by user requirements, so that end users, administrators, and executives have confidence in the outcomes.
  • Connectivity depends on both a transport mechanism and the datasets to be connected. The transport mechanism most obviously includes the specification of the transport protocol, but it also relies on the definitions of the source and destination dataset schemas. When you look at the definition of your transport protocol, can it reflect the source data reliably and faithfully? Does it cover your use cases completely? Are the structure rules and constraints on your inbound and outbound feeds sufficiently exact for your analytic needs? This can be an extremely demanding process for any organization, especially one with multiple legacy systems and diverging contracts or incentive programs. Organizations with complex sets of feeds and connections may want to bring in additional assistance in this area; at the least, an outside resource without affiliations to particular vendors can provide an unbiased perspective on how to improve connectivity.
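To illustrate what "structure rules and constraints on your inbound feeds" can look like in practice, the sketch below validates records from a hypothetical flat feed against a simple schema before loading. The schema, field names, and rules are invented for the example; a real pipeline would check against the actual source and destination schemas (HL7 messages, CSV extracts, and so on).

```python
# A minimal sketch of inbound-feed validation; the schema and field
# names are hypothetical, not drawn from any specific standard.
FEED_SCHEMA = {
    "patient_id": {"type": str, "required": True},
    "visit_date": {"type": str, "required": True},   # e.g., "YYYY-MM-DD"
    "systolic":   {"type": int, "required": False},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of constraint violations for one inbound record."""
    errors = []
    for name, rule in FEED_SCHEMA.items():
        if name not in record or record[name] is None:
            if rule["required"]:
                errors.append(f"missing required field: {name}")
            continue
        if not isinstance(record[name], rule["type"]):
            errors.append(f"wrong type for {name}")
    return errors

good = {"patient_id": "P001", "visit_date": "2015-06-01", "systolic": 132}
bad  = {"visit_date": "2015-06-01", "systolic": "high"}

print(validate_record(good))   # no violations
print(validate_record(bad))    # missing patient_id; wrong type for systolic
```

Rejecting or flagging malformed records at the boundary, rather than discovering them downstream in a report, is what makes the later measure calculations trustworthy.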
  • To ensure that their measures are available, useful, and transparent, organizations should focus on meeting at least the initial requirements of the points above – integration and connectivity. Organizations can start by identifying crucial weak links in system-to-system data flow, such as between clinical data sources and appointment-tracking or billing systems. Equally crucial is identifying weak links in human-interface data flow, such as how often data must be exported from one system, manually manipulated and transported by one or more individuals, and then manually loaded into or presented to another system for further analysis or distribution. (A good metric is how many Excel spreadsheets your organization generates, modifies, and consumes each month.) By identifying these links and aligning them with the most actionable and impactful measures – for example, those that are most frequently and closely missed – an organization can begin its data transformation with an eye toward immediate improvement in measure performance and use, along with a plan for the long-term improvements in measure and reporting productivity discussed in this article.
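One way to operationalize "most frequently and closely missed" is to rank measures by how far their current performance sits below target, keeping only the near misses. The sketch below uses invented measure names, rates, and targets purely for illustration; the gap threshold is an assumption an organization would tune for itself.

```python
# Hypothetical measure performance vs. target; all numbers are invented.
performance = {
    "Diabetes: HbA1c testing":  {"rate": 0.86, "target": 0.90},
    "Breast cancer screening":  {"rate": 0.55, "target": 0.75},
    "Hypertension: BP control": {"rate": 0.68, "target": 0.70},
    "Tobacco use screening":    {"rate": 0.95, "target": 0.90},
}

def closely_missed(perf: dict, max_gap: float = 0.05) -> list[str]:
    """Measures below target by no more than max_gap -- the most
    actionable candidates for near-term improvement efforts."""
    gaps = {
        name: vals["target"] - vals["rate"]
        for name, vals in perf.items()
        if 0 < vals["target"] - vals["rate"] <= max_gap
    }
    return sorted(gaps, key=gaps.get)  # smallest gap first

print(closely_missed(performance))
```

Here the screening measure that is far below target and the measure already above target both drop out, leaving the two near misses where a focused effort could tip performance over the line.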

Finally, these are some questions I would like to raise, and I look forward to your contributions.

  • How many crucial reports are transferred between systems, departments, and entities using spreadsheets? How many different people are responsible for manually gathering data for, assembling, and then distributing these reports? How much information is gathered from or captured into systems that require person-by-person access, such as an EHR? And how closely is the quality of this process monitored and understood?
  • When was the last time you said, “If only we knew…?” about some facet of your business? What was it you wanted to know? What could you have done if only you had been able to answer that question?
  • How much of your budget do you allocate to improving and integrating your data assets? How much of your revenue stream, profits, and general business workflow depend on these assets? How far apart are these numbers?
  • Are the IT tools used throughout your organization able to talk to one another? If so, when was the last time you checked to make sure that they are all receiving all the information they need to come to correct conclusions? If not, are there cases where some systems might become much more valuable if they could “know” what the other systems “know”?
  • How often do you find that you are “waiting” for results? For example, how long do you have to wait to find out how a provider group performed on some requirement on a given day? Can you know by the next morning, or does it take several days, weeks, or months? What difference would it make to your productivity and business efficacy if you could know the next day?


Michael Simon, Ph.D., is a principal data scientist at Arcadia Healthcare Solutions, with experience in data analytics, process efficiency, and experimental design. Since joining Arcadia in 2011, Michael has focused on demonstrating and enhancing the value of client data through exploratory data analysis and advanced statistical methodologies, and on the analysis of healthcare data quality. Michael is a former National Science Foundation AAAS policy fellow and earned his doctorate in biology from Tufts University in Medford, Mass. He also holds a Bachelor of Arts degree in economics and a Bachelor of Science in electrical engineering from Rice University in Houston, Texas.