With the passage of the HITECH Act of 2009, and subsequent, well-intentioned regulatory and legal changes, there’s been an explosion in the availability of standardized personal health data across EHRs and various data aggregators. As a result, reams of health data now exist in a lot of places.
This growth in health data tracks with the creation, storage, and shareability of other consumer-generated data, including digital behavioral data, social activity, and health device data (like that from fitness trackers).
And as data becomes more available, there is an opposing momentum to regulate it.
We’re experiencing two equal and opposite forces: one liberalizing the availability of health data, the other tightening the strings around its use. As a result, health data is becoming both more and less available.
While there’s a lot of chatter across government and industry about health data and how we can or cannot use it, we wanted to talk to consumers directly: as the world around us becomes more data-driven, where do they draw the line?
As an industry, how should we behave?
When it comes to using data to tailor and target health information and advertising, do consumers think that’s a fair exchange of value?
Consumers definitely have an appetite for getting targeted, relevant information about health-related products and services without having to go to the doctor.
Share of consumers who say they want information about relevant products, medicines and treatments without having to see the doctor
But when the information or notifications they get are based on their search histories, they report being less favorable.
Share of consumers who say it would be useful to receive notifications on the Web or in an app based on their searches or clicks about health concerns
While consumers claim not to like the idea of behavioral targeting, their comfort recovers when they’re assured that their data isn’t being saved. This can be a key consideration for marketers looking into ways to target condition-specific information and advertising.
Share of consumers who say it would be useful to receive notifications on the Web or in an app based on their searches or clicks about health concerns IF it’s clear that no identifying information is being saved about them
In summary, people want value, and will share some personal information for that value, but there’s an intangible line out there that they won’t cross.
When asked about the usefulness of targeted information a) overall, b) based on searches and clicks, and c) based on data that may or may not be saved, about a quarter of respondents expressed no opinion: “neither agree nor disagree.” Perhaps they’re not really bothered by behavioral targeting but aren’t comfortable endorsing it. Or perhaps this is people saying “maybe” or “it depends”: as long as there is a real value exchange in how we innovate, we can be bold about it.
Beyond information, what kind of value justifies the sharing of health data?
We gave respondents a series of hypothetical situations and asked whether each made them more or less likely to support sharing their personal health data.
Two ‘no-brainer’ situations emerged.
Drug Interaction Management: 79% said they are more likely to support a doctor being informed by the patient’s EHR that a drug they are about to prescribe could interact dangerously with another drug the patient is already taking but forgot to mention.
Emergency Access to Health Profile: 75% said they would support access to their health data if they were brought to an ER unconscious and their medical records kept the ER doctor from making a dangerous decision.
Two other scenarios were a bit more intrusive and spurred some reticence.
Use of Genetic Data: 62% supported getting a more complete diagnosis and treatment plan using genetic information gleaned from what had been understood to be a limited genetic test around a pregnancy.
Artificial Intelligence: Only 50% supported AI-driven gathering and processing of data from far-flung family histories to pinpoint potential health risks.
While people seem to understand that more and more personal data can drive better health interventions, they have their guard up around certain new techniques and technologies.
Finally, we explored how motivational incentives and discounts could justify increased sharing of personal health data.
Straight Discounts: About half of the respondents said they would be moved to share their data with their insurance company in order to get a meaningful discount.
Would share personal health data to get meaningful discounts from health insurance company
Discounts for Healthy Behaviors: Only 39% were willing to take on required behaviors tied to the data they shared in order to receive discounts, even understanding that those behaviors would be intended to improve their health.
Given that respondents expressed some concerns about sharing their data across scenarios, we asked them to rank the top three risks they saw in increased sharing of personal health data.
Following are the risks people most often ranked among their top three reasons for not wanting to share their data:
Discrimination | Insurance companies will use this data to discriminate (50%); employers may use health data to deny jobs to people in certain categories (35%)
Security | Even closely guarded data is not as secure as intended (41%)
Exploitation | Health data will inevitably be used for commercial purposes (31%)
Overall, people want to know that their data is protected and won’t fall into the wrong hands or be used against their interests in any way. There is a perception that the data might be used to discriminate, either in the provision of health insurance or in employment. Security was another risk often ranked #1 among the three. And there was significant concern that the data would be used for commercial purposes.
You have an opportunity to accelerate data innovation and be bold in delivering explicit utility and value.
Be aware of how you frame the use of consumer data. If it’s for value to the consumer, it counts as utility or a service and is accepted. If it’s for value to the brand, the healthcare professional or the health system, it counts as commercial exploitation and becomes a risk.
Innovation is only half the battle; education and communication will be vital when opening up new areas of value. If we take an “if you launch it, they will come” approach, as technology companies often do, we may encounter a backlash. So, educate early, ideally well before rolling out any new product that might raise fears of intrusive new technology.
People want things on their own terms: Avoid if-then scenarios beyond the simplest of exchanges.
As you move to a greater reliance on first-party data, be aware of people’s sentiment against personal data storage. Be clear about what data you’re saving and why, and get people to actively consent; a minimal sketch of what that can look like follows these recommendations.
There’s a fine line between service and exploitation – don’t cross it.
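To make “be clear about what data you’re saving and why” concrete, here is a minimal sketch, in TypeScript, of a purpose-scoped, explicitly granted consent record that is checked before any first-party data is stored. The ConsentRecord type and the mayStore helper are hypothetical illustrations under these assumptions, not part of any PulsePoint product or API.

```typescript
// Hypothetical sketch: an explicit, purpose-scoped consent record.
// Nothing here is a real PulsePoint (or any vendor) API.

type DataCategory = "searches" | "clicks" | "conditions";

interface ConsentRecord {
  userId: string;
  // Exactly what may be saved, stated in consumer-facing terms.
  dataCategories: DataCategory[];
  // Why it is being saved: the value exchange offered in return.
  purpose: string;
  // Active, explicit opt-in; false (or no record at all) means do not store.
  explicitlyGranted: boolean;
  grantedAt?: Date;
}

// Store a data point only if the person actively agreed to that category.
function mayStore(record: ConsentRecord, category: DataCategory): boolean {
  return record.explicitlyGranted && record.dataCategories.includes(category);
}

// Example: a consumer who opted in to condition-level targeting only.
const consent: ConsentRecord = {
  userId: "anon-123",
  dataCategories: ["conditions"],
  purpose: "Receive relevant treatment information without a doctor visit",
  explicitlyGranted: true,
  grantedAt: new Date(),
};

console.log(mayStore(consent, "conditions")); // true
console.log(mayStore(consent, "searches"));   // false: not consented
```

The design point mirrors the survey findings above: storage is opt-in per data category and tied to a stated value exchange, so it is always clear what is and isn’t being saved.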
Data in this report comes from PulsePoint’s Consumer Perceptions survey, conducted in November 2020. These data represent a national sample of 1,264 respondents across age, income, gender and education bands in the U.S.
PulsePoint is a technology company using real-time data to transform healthcare. Through machine learning and programmatic automation, we interpret the hard-to-read signals of the health journey to understand the connection points between relevance and engagement.
We do this by unifying real-time Digital Determinants of Health™, offline and clinical data to create a unique and precise view of health audiences that refines, improves and expands over time.
Learn more about how we can support your business with our programmatic marketing platform.