We need “Data-as-a-service” standards


Data-as-a-service (DaaS) has been around as a concept for a few years, but it’s still a new term for many organisations.

It’s about breaking down the silos of separate data repositories and creating a ubiquitous data service available to everyone across the organisation.

If we consider that technology – in terms of computers – has only been around for just over 70 years, it is still a very young discipline.

We’ve gone through various iterations from centralised to decentralised and back again. And now we’re decentralising once more, with technology and apps being spread across a number of devices, systems and platforms.

When technology was centralised, it was relatively easy to manage information and manage data, because it resided under a single entity: a data centre or some legal entity within that organisation’s control.

As organisations mature, adopting commercial packages (buy over build) and migrating to cloud solutions, we’re starting to see data becoming a lot more fragmented.

It sits in many different silos across a now virtual organisation, and there is mass duplication of data across different organisational systems and platforms.

Over the past five years, we’ve really started to break technology down into smaller components and smaller modules. Data is the last piece of the puzzle. We haven’t yet made DaaS the cousin to Software-as-a-service (SaaS).

With SaaS, organisations can simply get apps or solutions from a cloud provider: they don’t need to procure software, they can just use it. But then the data is locked into that environment. So how does the organisation extract that data from SaaS and build a cohesive, consistent view of information across all of its services?

In the past, people tried to extract data and move it around various repositories and place it in a common location.

The problem is that moving it around involves integrating many systems and components together. The integration world solved this with an enterprise service bus that sits across multiple systems.

DaaS builds on that concept. It tries to define core information by creating a data bus across your organisation which stores data and makes it available to all the other areas of your organisation, so you can deliver data to different applications.

This data bus can be updated, managed and maintained in real time or near real time. DaaS is also cost-effective, as it creates a layer of data that can be used across different parts of the organisation without creating redundancy, such as multiple outdated copies.
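As an illustration only (not any particular product), the data-bus idea can be sketched as a small publish/subscribe registry, where every part of the organisation reads the same current copy of a record instead of keeping its own duplicate:

```python
from datetime import datetime, timezone

class DataBus:
    """Toy in-memory data bus: one current copy per key, shared by all consumers."""

    def __init__(self):
        self._store = {}        # key -> (value, last updated)
        self._subscribers = {}  # key -> list of callbacks

    def publish(self, key, value):
        """Update the single shared copy and notify every subscriber."""
        self._store[key] = (value, datetime.now(timezone.utc))
        for callback in self._subscribers.get(key, []):
            callback(value)

    def get(self, key):
        """Every consumer reads the same current record -- no stale duplicates."""
        value, _ = self._store[key]
        return value

    def subscribe(self, key, callback):
        """Register a consumer to be told about updates in (near) real time."""
        self._subscribers.setdefault(key, []).append(callback)

# Two departments share one customer record instead of each copying it.
bus = DataBus()
updates_seen = []
bus.subscribe("customer:42", updates_seen.append)
bus.publish("customer:42", {"name": "Jane Doe", "status": "active"})
```

Because there is only one copy, an update by any stakeholder is immediately the version everyone else sees.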

DaaS allows relevant stakeholders to continually update that information as it moves across each part of the value chain, keeping it relevant and up to date.

In the data world, data is going the same way as the rest of technology. It’s starting to be compartmentalised, and made more self-contained and descriptive, in smaller and more digestible parts.

The challenge is that there’s no common language yet. There’s no consistent taxonomy adopted by the industry in terms of what the different types of data are, and without commonality, there’s no common definition or understanding.

It’s very difficult to communicate without some kind of translator, and that’s the journey we’re on from a data perspective.

We have all these different languages, with only one or two systems speaking the same one. We have to extract, transform and then consume information to convert from one standard to another.
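That translation step can be pictured as a small extract-and-transform mapping. The field names below are hypothetical, standing in for two systems that describe the same customer in different vocabularies:

```python
# Hypothetical field mappings: each source system names the same facts differently.
CRM_TO_CANONICAL = {"cust_name": "customer_name", "acct_no": "account_id"}
LOANS_TO_CANONICAL = {"applicant": "customer_name", "account": "account_id"}

def to_canonical(record, mapping):
    """Extract and transform a source record into a shared (canonical) standard."""
    return {mapping[field]: value for field, value in record.items() if field in mapping}

crm_record = {"cust_name": "Jane Doe", "acct_no": "A-1001"}
loan_record = {"applicant": "Jane Doe", "account": "A-1001"}

# Both systems converge on one common definition of the customer.
canonical_from_crm = to_canonical(crm_record, CRM_TO_CANONICAL)
canonical_from_loans = to_canonical(loan_record, LOANS_TO_CANONICAL)
```

Every pair of systems that wants to talk needs a mapping like this; a shared standard replaces many pairwise translators with one common language.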

Standard definitions will develop. Just like with languages, a common point of reference evolves, usually informally, and is adopted as a standard. One example from technology is the @ sign used for email addresses.

Computing pioneer Ray Tomlinson used the @ sign to separate the user name from the name of their machine when sending messages across the ARPANET network (a precursor to the Internet), and it has been used in email addresses ever since.

Initiatives like LIXI for loan origination and the ISO 20022 standard for payments go a long way towards creating this shared taxonomy for data, one that will eventually be ubiquitous across industry.

With open data, the standards and definitions of data will start to firm up. As openness evolves, definitions will move from being siloed in one organisation, to being used across an industry, then multiple industries, and then the whole ecosystem. Regulatory and industry bodies can influence them, but won’t dictate them.

Once we have a proper taxonomy of data, and common terms and understanding of what data is, then DaaS can truly become a reality. Because when you’re talking about a loan application or an account or a customer, that definition is then standardised across multiple industries. It makes moving and sharing the data much easier.
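Once such a taxonomy exists, a standardised definition can be written down once and validated everywhere. This sketch uses an invented, deliberately minimal “loan application” schema purely for illustration:

```python
# A hypothetical shared definition: required fields and types for a loan application.
LOAN_APPLICATION_SCHEMA = {
    "applicant_name": str,
    "amount": float,
    "term_months": int,
}

def conforms(record, schema):
    """Check a record against the shared definition before moving or sharing it."""
    return all(
        field in record and isinstance(record[field], expected_type)
        for field, expected_type in schema.items()
    )

application = {"applicant_name": "Jane Doe", "amount": 350000.0, "term_months": 360}
incomplete = {"applicant_name": "Jane Doe"}
```

Any system that validates against the same schema can accept a record from any other, which is exactly what makes moving and sharing the data easier.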

The NPP (New Payments Platform) is one example: it moves payment messages around the country and connects to SWIFT, which links to the rest of the world.

If we start to create common standards and definitions, even internally within organisations, we can start to create Data-as-a-Service.

John Heaton is the chief technology officer of Moneycatcha.

This article was first published by FinTech Business