Are you creating data assets or a data dump?


In 2011, analyst firm IDC predicted that by 2020 overall data would grow by 50 times, while the number of IT professionals available to manage it would grow only 1.5 times. Fast forward to 2017 and the enterprise storage systems market was up 14%, reaching $18 billion (£10 billion) in Q3 2017, according to IDC.

Our appetite to store more data is becoming insatiable. Yet look around your organisation at all the extra money spent on big data, whether on more storage or better reporting: most organisations still do not have good quality information actively driving better outcomes within their operational processes.

At the beginning of my career, data storage was limited. We had to justify and optimise the data we stored to ensure the business value outweighed the cost. Today we have lost that discipline, and it is not uncommon to find outdated, inaccurate, misinterpreted, duplicated or even non-standardised data running our businesses.

Processes that were once encapsulated within a single core system are now spread across multiple systems as we diversify from centralised applications to distributed cloud computing and Software-as-a-Service (SaaS) applications, creating divergent data silos.

Most professionals who work with data daily know the data is bad, but how bad? According to a 2016 IBM study, bad data costs the US economy $3.1 trillion a year.

In Australia, the Australian Securities and Investments Commission (ASIC) has identified around AU$1 billion (£557 million) waiting to be claimed from lost bank accounts, shares, investments and life insurance policies. The primary reason given for this is “when people move house and forget to update their details with the company who holds the money”.

So the trillion dollar question is: are you creating data assets for your organisation or a data dump?

Over the past few decades, managers have studied how to optimise business processes by spending millions of dollars on software and hardware while ignoring data quality. Yet data quality alone can significantly improve business processes. Take the ASIC example above: if addresses were cleansed against the National Change of Address database and only 1% of that AU$1 billion pool was reunited with its owners, that would yield AU$10 million (£5.5 million).
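A back-of-envelope calculation is enough to test this kind of business case. The sketch below simply restates the ASIC figures quoted above; the 1% match rate is an illustrative assumption, not a measured outcome:

```python
# Back-of-envelope value of an address cleanse.
# Pool size is the ASIC figure quoted above; match_rate is an illustrative assumption.
unclaimed_pool_aud = 1_000_000_000  # ~AU$1 billion in unclaimed money
match_rate = 0.01                   # assume only 1% of addresses can be corrected and claimed

recoverable_aud = unclaimed_pool_aud * match_rate
print(f"Estimated recoverable funds: AU${recoverable_aud:,.0f}")  # AU$10,000,000
```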

Organisations have spent millions of dollars on big data and on gaining insights from that data. But without addressing the root causes, they are missing what could deliver significant benefit: good quality data that can be shared across both operational and analytical processes.

Identifying data quality issues

Outdated, inaccurate, misinterpreted, duplicated or even non-standardised data are easily identifiable when you look in the right places:

  1. Understand your critical business process.
  2. Identify the data required to execute and support the business process.
  3. Identify key data issues (see the profiling sketch after this list).
  4. Review options to increase data quality and understand the financial impact.
  5. Understand where this data resides.
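As a minimal sketch of step 3, a first pass at profiling a customer extract might look like the following; the file name, column names and pandas-based approach are illustrative assumptions rather than a prescribed toolset:

```python
import pandas as pd

# Illustrative profiling pass over a hypothetical customer extract.
df = pd.read_csv("customer_extract.csv")  # assumed file and column names

report = {
    "rows": len(df),
    # Incomplete records: customers we cannot contact.
    "missing_email": int(df["email"].isna().sum()),
    # Potential duplicates: the same person captured more than once.
    "duplicate_customers": int(df.duplicated(["first_name", "last_name", "dob"]).sum()),
    # Non-standardised values: postcodes that are not four digits (Australian format).
    "non_standard_postcodes": int((~df["postcode"].astype(str).str.fullmatch(r"\d{4}")).sum()),
    # Outdated records: addresses not confirmed in over two years.
    "stale_addresses": int(
        (pd.Timestamp.now() - pd.to_datetime(df["address_updated"])).dt.days.gt(730).sum()
    ),
}
print(report)
```

Even a crude report like this is usually enough to show where the outdated, duplicated and non-standardised records sit before any cleansing effort is costed.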

These issues account for the additional cost, effort and inefficiency institutions incur as a result of poor data quality. Data quality affects both customers and organisations: bad data can cause serious problems with regulatory compliance and reporting, and it has a ripple effect across the entire value chain. Data quality errors move from the front office to the back office to partners, and on through compliance processes, affecting everything from the integration of solutions and systems to the customer experience.

Organisations such as UPS, which integrate data across their processes, have benefited from this approach for a number of years, delivering data that keeps the customer informed every step of the way. More advanced organisations, such as Netflix, which use information to drive additional customer usage and engagement, are reaping the rewards of better quality, more accurate and more relevant information.

Organisations that have ignored data quality and data redundancy end up with very complex processes that require the customer to enter data multiple times, and still suffer high rework rates. Take, for example, home loans sourced through brokers: the rework due to data issues can be as high as 70% in some financial services organisations.

Okay, so we have data quality issues. What next?

The amount of data we collect has grown exponentially over the past five years and will continue to do so over the next five. We are essentially hoarding data, with little or no understanding of how we will use it. Quite often the default answer is “if we capture everything then we cannot miss anything”. While technically correct, this approach increases costs significantly, and with no plan in place to ever review the data and its value, the problem can only grow as our data expands.

So is all data equal? Put another way, does all the data we collect provide the same business value and opportunity, or does some data provide more value and more opportunity than the rest? We can distinguish between core data, which drives business processes, and operational data, which supports them.

Core data is where the business opportunity lies; these are the data assets that can drive revenue or reduce costs in the business model. Finding that value requires understanding the data and what it means to the organisation. Collaborating with different departments within the business often reveals why the information is important and how it can be cleansed. As owners of business processes, it is our responsibility to understand the inputs and outputs of those processes, data included.

For example, data collected by an online order form can be used downstream for marketing and financial activities. Inaccurate data entry or aged data can cause significant rework or miscommunication with the customer.

Understanding the financial impact in these situations helps build the business case for improving data quality: bringing in service providers to enhance the data, implementing business rules to standardise information, and sharing core data across the value chain so it is updated appropriately and continuously. A simple example of such a standardisation rule is sketched below.
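As a minimal sketch of what such a business rule might look like in practice, assuming a hypothetical online order form with email, phone and address fields (the field names and formats are illustrative, not a reference to any particular platform):

```python
import re
from datetime import datetime, timezone

# Illustrative standardisation rules applied at the point of capture.
def standardise_order(record: dict) -> dict:
    clean = dict(record)
    # Normalise email: trim whitespace and lowercase before storing.
    clean["email"] = record.get("email", "").strip().lower()
    # Standardise Australian mobile numbers to a single +61 format.
    digits = re.sub(r"\D", "", record.get("phone", ""))
    if digits.startswith("04") and len(digits) == 10:
        clean["phone"] = "+61" + digits[1:]
    # Collapse repeated spaces and title-case the street address.
    clean["address"] = re.sub(r"\s+", " ", record.get("address", "")).strip().title()
    # Record when the data was captured so downstream users can judge its age.
    clean["captured_at"] = datetime.now(timezone.utc).isoformat()
    return clean

print(standardise_order({
    "email": " Jane.Doe@Example.COM ",
    "phone": "0412 345 678",
    "address": "12  smith  street,  perth",
}))
```

Rules like these are far cheaper to apply once at the point of capture than to retrofit across every downstream marketing and finance system.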

The decreasing cost of data storage has given us a false sense of comfort, allowing us to capture and store ever larger amounts of information. Organisations that now start to understand their data, and the way it adds value, have an opportunity to increase the quality and value of that data, driving enhanced business models.

It is not enough, however, to run impressive initiatives with new technologies such as big data and machine learning (ML) if the outcomes are not driven back into the business processes and the data itself. Rather than creating data assets, such silos generate costly data dumps that will eventually affect your bottom line.

By John Heaton, CTO of Moneycatcha

Heaton is an IT and banking technology veteran, having worked in various senior roles with Oracle, Heritage Bank and Xstrata Copper.
Connect with him on LinkedIn.

This article was first published by BankingTech