Do you think the impact of bad data is just a minor inconvenience? Think again.

Bad data cost Unity, a publicly traded video game software development company, $110 million.

And that’s only the tip of the iceberg.

The impact of bad data: a case study on Unity

Unity stock dropped 37% on 11 May 2022, after the company announced its first-quarter earnings, despite strong revenue growth, decent margins, good customer growth and continued high performance in dollar-based net expansion.

But one data point in Unity’s earnings was not as positive.

The company also shared that its Operate Solutions revenue was still growing but had slowed due to a fault in its platform that reduced the accuracy of its Audience Pinpointer tool.

The fault in Unity’s platform? Bad data.

Unity ingested bad data from a large customer into its machine learning algorithm, which helps place ads and allows users to monetize their games. This not only slowed growth but also corrupted the algorithm itself, forcing the company to repair it to remedy the problem going forward.

The company’s management estimated the impact on the business at approximately $110 million in 2022.

Unity isn’t alone: the impact of bad data is everywhere

Unity isn’t the only company that has felt the impact of bad data deeply.

Take X.

On 25 April 2022, X accepted a deal to be purchased by Tesla CEO and SpaceX founder Elon Musk. A mere 18 days later, Musk shared that the deal was “on hold” while he sought to confirm the number of fake accounts and bots on the platform.

What ensued demonstrates the deep impact of bad data on this extremely high-profile deal for one of the world’s most widely used speech platforms. Notably, X has battled this data problem for years. In 2017, X admitted to overstating its user base for several years, and in 2016 a troll farm used more than 50,000 bots to try to sway the US presidential election. X first acknowledged fake accounts during its 2013 IPO.

Now, this data issue is coming to a head, with Musk investigating X’s claim that fake accounts represent less than 5% of the company’s user base and angling to reduce the previously agreed upon purchase price as a result.

X, like Unity, is another high-profile example of the impact of bad data. But examples like these are everywhere, and they cost companies millions of dollars.

Gartner estimates that bad data costs companies nearly $13 million per year, although many don’t even realize the extent of the impact. Meanwhile, Harvard Business Review finds that knowledge workers spend about half of their time fixing data issues. Just imagine how much effort they could devote elsewhere if issues weren’t so prevalent.

Overall, bad data can lead to missed revenue opportunities, inefficient operations and poor customer experiences, among other issues that add up to that multi-million dollar price tag.

Why observability is now imperative for the C-suite

The fact that bad data costs companies millions of dollars each year is bad enough. That many companies don’t even realize it, because they never measure the impact, is potentially even worse. After all, how can you ever fix something you’re not fully aware of?

Getting ahead of bad data issues requires data observability, which encompasses the ability to understand the health of data in your systems. Data observability is the only way that organizations can truly understand not only the impact of any bad data but also the causes of it—both of which are imperative to fixing the situation and stemming the impact.

It’s also important to embed data observability at as many points as possible, with the goal of finding issues sooner in the pipeline rather than later: the further those issues travel downstream, the more difficult (and more expensive) they become to fix.
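To make “finding issues sooner in the pipeline” concrete, here is a minimal sketch of what a validation step at the ingestion stage might look like in a hypothetical Python pipeline. The schema, column names and thresholds are all assumptions for illustration; this is plain Python standing in for the idea, not IBM Databand’s API.

```python
# Minimal sketch of an early-pipeline data check (hypothetical example).
# The idea: validate data at ingestion, before it reaches downstream models
# or dashboards where fixes become far more expensive.

import pandas as pd

def validate_ingested_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality issues found in an ingested batch."""
    issues = []

    # Volume check: an empty batch often signals an upstream outage.
    if len(df) == 0:
        issues.append("batch is empty")
        return issues

    # Schema check: fail fast if expected columns are missing.
    expected = {"user_id", "event_time", "revenue"}  # assumed schema
    missing = expected - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues  # later checks depend on these columns

    # Null check: key fields should always be populated.
    null_rate = df["user_id"].isna().mean()
    if null_rate > 0.01:  # assumed tolerance
        issues.append(f"user_id null rate {null_rate:.1%} exceeds 1%")

    # Range check: flag obviously invalid values, like negative revenue.
    if (df["revenue"] < 0).any():
        issues.append("negative revenue values found")

    return issues

if __name__ == "__main__":
    batch = pd.DataFrame({
        "user_id": [1, 2, None],
        "event_time": pd.to_datetime(["2022-05-01"] * 3),
        "revenue": [10.0, -5.0, 3.0],
    })
    problems = validate_ingested_batch(batch)
    if problems:
        # In a real pipeline this would raise an alert or quarantine the
        # batch rather than let bad records flow into models or reports.
        print("Blocking batch:", problems)
```

Catching a batch like this at ingestion is cheap; catching it after it has already fed a production model, as Unity learned, is not.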

Critically, this observability must be an imperative for C-suite leaders, as bad data can have a serious impact on company revenue (just ask Unity and X). Making data observability a priority for the C-suite will help the entire organization, not just data teams, rally around this all-important initiative and make sure it becomes everyone’s responsibility.

This focus on end-to-end data observability can ultimately help:

  • Identify data issues earlier in the data pipeline to stem their impact on other areas of the platform and/or business (a simple sketch of this kind of check follows this list)
  • Pinpoint data issues more quickly after they pop up to help arrive at solutions faster
  • Understand the extent of the data issues that exist to get a complete picture of the business impact
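As a simple illustration of the first point above, here is a hedged sketch of the kind of trend check an observability layer can run: compare today’s pipeline output volume against its recent history and flag sharp deviations. The metric, window size and threshold are illustrative assumptions, not a specific IBM Databand feature.

```python
# Illustrative sketch: flag a pipeline run whose output row count deviates
# sharply from recent history. Window and threshold are assumed values.

from statistics import mean, stdev

def is_volume_anomaly(history: list[int], today: int,
                      z_threshold: float = 3.0) -> bool:
    """Return True if today's row count is a statistical outlier."""
    if len(history) < 7:   # not enough history to judge (assumed minimum)
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:         # perfectly stable history: any change is notable
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Example: two weeks of daily row counts, then a sudden drop.
baseline = [1_000_000, 1_020_000, 990_000, 1_005_000, 998_000,
            1_010_000, 1_002_000, 995_000, 1_008_000, 1_001_000,
            999_000, 1_003_000, 1_006_000, 997_000]
print(is_volume_anomaly(baseline, 400_000))  # True: investigate upstream
```

The same pattern extends to any metric a team cares about: null rates, schema changes, freshness or distribution drift.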

In turn, this visibility can help companies recover more revenue faster by taking the necessary steps to mitigate bad data. Hopefully, the end result is a fix before the issues end up costing millions of dollars. And the only way to make that happen is if everyone, starting with the C-suite, prioritizes data observability.

Learn more about IBM Databand’s continuous data observability platform and how it helps detect data incidents earlier, resolve them faster and deliver more trustworthy data to the business. If you’re ready to take a deeper look, book a demo today.

