Businesses have a consumer “trust” problem, and it is getting worse. Every day, news headlines are filled with organizations selling, abusing, and mishandling consumer data. And with the gray areas that come with advanced data-driven capabilities such as artificial intelligence (AI) and machine learning (ML), we are headed toward some interesting times.
According to a recent Deloitte survey of 4,000 global consumers, 53% said they would never use a company’s products if the company was selling consumer data for profit, and 40% believed none of an organization’s profits should be generated from selling data. However, 27% of respondents acknowledged that they never consider how a company uses their data when selecting vendors. Herein lies the challenge. While some data abuse is malicious, a large portion of trust and data privacy problems stems from companies lacking a strategy or an understanding of the risks and rewards.
In a data-driven, digital-first world, trust and transparency are essential ingredients to bake into every strategy, touchpoint, system, policy, and process. That’s why business leaders can’t afford to wait for regulators or government bureaucrats to pass new data and privacy laws. Rather than waiting, businesses can thrive by taking control of their own data trust, transparency, and privacy agenda.
What does “trust” mean in the digital world?
Let’s define how consumers and business buyers think about trust. In the digital world, trust boils down to confidence that an organization will collect, use, share, and protect data in the customer’s interest.
Bottom line: Both the acquisition and retention of customers and shareholder value rely on positive digital trust. But what happens when trust and transparency are not essential ingredients in the way you do business? Exhibit A is the difference between two digital tech juggernauts: Apple and Meta (formerly Facebook).
Apple draws a hard line on the use of customer data, how data is shared, and how data is protected. The trust Apple has built not only drives enterprise valuation ($2.5 trillion) but also allows the company to enter new markets more easily. For example, Apple is using software and data to enter the electric car market, which is projected to hit $358 billion by 2027.
On the other hand, Facebook’s data policies, and how member data is handled and protected, are largely opaque. Leadership and brand experts at Meta ($948 billion valuation) did not feel the company’s brand trust was strong enough to enter new markets. As a result, they changed the company name last month. It is a dramatic example of a brand suffering from a data trust and transparency problem.
AI accelerates the trust + transparency imperative
Investing in AI or ML is another reason to rethink and prioritize trust and transparency strategies. With advanced, data science-driven capabilities exploding, the trust and transparency gap will only grow. Millions of businesses, universities, and tech companies around the globe are racing to unleash AI and ML to predict, think, and learn faster and smarter.
Many privacy advocates fear the shortcuts being taken and point to already mounting examples of AI going rogue. To create trustworthy human-machine interactions built on hundreds of millions of data points, we must pair transparency with proactive education on how AI and ML are being used. “Open up your AI black box” is the rallying cry from leading trust and privacy advocates, who warn of the fallout of not having safeguards in place.
Data privacy everywhere is the smart move
While AI and ML are advanced capabilities, we should not ignore the trust and transparency required for handling all data types. We must earn customer, market, and investor trust every day, in every interaction. Doing this right requires both hard-and-fast data policies and a company culture of trust. Here are practices companies and brands can enact to build a trust and transparency culture:
Ways to Build a Data Trust and Transparency Culture
- Align data policies with company mission and purpose to guide your people and processes
- Proactively share your policies and processes versus burying details in the fine print
- Disclose how data is collected and shared, including naming third-party data sources and partners
- Provide clear communications on how customers, consumers, and users benefit from sharing their data and/or the use of AI
- Partner with digital trust and safety organizations, such as Georgetown’s Center on Privacy and Technology, to help create and adopt a trust culture and policies
- Design for relevance, not just personalization, so communications are not offensive or “creepy” when messaging misses the mark. Remember, data and AI models aren’t always perfect!
- Use data in aggregate rather than only at the individual level (see the sketch after this list)
- Form a cross-functional team that ensures legal and industry compliance. It’s important to proactively monitor and prepare for new data regulations and privacy laws.
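To make the aggregation practice above concrete, here is a minimal sketch in Python. It assumes hypothetical field names (user_id, region) and an illustrative minimum cohort size; the idea is that individual-level records go in, but only counts for sufficiently large groups come out.

```python
from collections import defaultdict

MIN_COHORT_SIZE = 50  # illustrative threshold; tune to your privacy policy

def aggregate_by_region(events):
    """Count events per region, suppressing any cohort below the threshold."""
    counts = defaultdict(int)
    for event in events:
        counts[event["region"]] += 1  # only the coarse attribute is counted
    # Suppress small cohorts rather than risk exposing identifiable users.
    return {region: n for region, n in counts.items() if n >= MIN_COHORT_SIZE}

# Individual-level events go in; only group-level counts come out.
events = [{"user_id": i, "region": "EMEA" if i % 3 else "APAC"} for i in range(120)]
print(aggregate_by_region(events))  # {'EMEA': 80} (APAC, at 40, is suppressed)
```

Suppressing small cohorts is the key design choice here: a count of three users in a niche segment can identify individuals almost as surely as a name can, so reporting only groups above a minimum size keeps analytics useful without exposing anyone.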
Take control with a proactive strategy
Complying with data protection and privacy laws is table stakes. As more regulations are passed, such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States, companies can use them as a catalyst to build trust. Proactively review emerging and existing data laws and regulations, and transparently share the resulting data processes, practices, and policies.
It is business leaders’ responsibility to protect consumers and guide employees in creating a transparent, trusted environment. If you still aren’t convinced that establishing trust is a smart business investment, consider that digital brand and safety tracking organizations report that the most highly trusted brands perform 5 to 7 times better than their counterparts. Now that’s data you can trust!