Tokenization of Data, Data Products, and Why You Should Care

Ian Tick

Head of Content, K2View

What is tokenization of data and how does it prevent data breaches, support compliance, and empower users? This article answers these questions, and explains why a data product approach to tokenization is the optimal solution.

Table of Contents

Tokenization of Data: Not Just “Nice-to-Have”
What is Tokenization of Data?
How Tokenization of Data Works
What Information Needs to be Tokenized?
How Data Products are Transforming the Tokenization of Data
Now’s the Time to Care about Data Tokenization

Tokenization of Data: Not Just “Nice-to-Have”

Why should you care about tokenization of data?

Let’s start with the fact that both the number and the cost of data breaches are rising every year.

In 2021, there were 1,291 publicly reported data breaches in the US, up 17% from the 1,108 reported in 2020. A recent report by IBM found that the average cost of a data breach rose 10% over the same timeframe, from $3.86 million to $4.24 million. Notably, remote working and pandemic-driven digital transformation increased the total cost of a data breach by an average of $1.07 million.

As companies press forward in their pursuit of digital transformation, and as remote work becomes a permanent feature of modern business, the risk and potential impact of a data breach are rising.

At the same time, the regulatory landscape around data privacy and protection is becoming increasingly stringent. Failing to comply with sensitive data protection standards could lead to a business-killing mix of consequences, including steep penalties, litigation, and reputation damage.

Fortunately, there are ways to protect sensitive data and prevent data breaches. Today, data tokenization is one of the most effective options.

What is Tokenization of Data?

Tokenization of data is the process of replacing sensitive data with a non-sensitive equivalent – a token – that has no exploitable value. Unlike data masking, which permanently obscures the original values, tokenization is reversible: authorized systems can always retrieve the original data.

It’s a method of data obfuscation that helps businesses safely store and use sensitive data while complying with data protection and privacy regulations.

When data is tokenized, sensitive values – in databases, data repositories, internal systems, and applications – are replaced with non-sensitive data elements, such as a randomized string of characters. Unlike sensitive data, which can be easily exploited by a malicious actor, a token has no value in the event of a breach.


Original sensitive data is usually stored in a centralized token vault.

How Tokenization of Data Works

Unlike encryption, which uses a mathematical algorithm to obfuscate sensitive data, tokenization of data does not use a key or algorithm by which the original data could be reverse-engineered. Instead, it replaces meaningful values with randomly generated alphanumeric IDs.

Tokens can be configured to retain the same format as the original string of data, for use in operational and analytical workloads. Tokenization also supports data sharing, and enables compliant testing and development, while keeping data protected.

For example, tokenization allows a company that processes credit card payments to replace each credit card number with a string of randomized alphanumeric characters. The token has no value of its own, and is not connected to the individual who made the purchase or to their account. The actual credit card number is stored in an encrypted virtual vault.
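To make this concrete, here’s a minimal Python sketch of the flow. The function name, the in-memory dict standing in for the encrypted vault, and the format rule are all illustrative assumptions, not a specific vendor’s API:

```python
# Minimal sketch: format-preserving tokenization of a card number.
# The dict below is a stand-in for an encrypted token vault.
import secrets

vault: dict[str, str] = {}  # token -> original value (the "vault")

def tokenize_card(card_number: str) -> str:
    """Replace each digit with a random one, preserving length and grouping."""
    token = "".join(
        str(secrets.randbelow(10)) if ch.isdigit() else ch
        for ch in card_number
    )
    vault[token] = card_number  # keep the original for later detokenization
    return token

token = tokenize_card("4111-1111-1111-1111")
print(token)         # e.g. "7302-9948-1175-0266" -- no exploitable value
print(vault[token])  # the real number, recoverable only via the vault
```

Because the token keeps the 16-digit shape of a real card number, downstream applications can process it without modification – which is exactly what makes tokenization so attractive for PCI DSS compliance.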

This tokenization of data helps the business comply with the Payment Card Industry Data Security Standard (PCI DSS), without having to retrofit the existing business applications to become PCI DSS-compliant. It therefore reduces the cost for companies to accept, transmit, or store cardholder information, while safeguarding sensitive data against fraud and data breaches.

To reverse the process, the original data is retrieved securely over HTTPS using the token, along with authorization credentials. This is called detokenization.
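In code, detokenization is essentially an authenticated HTTPS request. The endpoint URL, payload shape, and bearer-token scheme below are hypothetical – real token-vault APIs vary by vendor – but the flow (token plus credentials in, original value out) matches the process described above:

```python
# Hedged sketch of detokenization over HTTPS (endpoint and payload
# are hypothetical; real token-vault APIs differ by vendor).
import requests

def detokenize(token: str, access_token: str) -> str:
    resp = requests.post(
        "https://vault.example.com/v1/detokenize",  # hypothetical endpoint
        json={"token": token},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly if the caller isn't authorized
    return resp.json()["value"]  # the original sensitive value
```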

What Information Needs to be Tokenized?

When we refer to sensitive data, we usually mean Personally Identifiable Information (PII) and Protected Health Information (PHI).

Bank accounts, financial statements, credit card numbers, social security numbers, medical records, criminal records, driver’s license numbers, stock trades, voter registrations, and more fall into this category.


COVID-19 emphasized the need to tokenize personal vaccination data.

While healthcare and financial institutions are currently the biggest users of data tokenization, organizations across all sectors are beginning to recognize the value of this data obfuscation method. As data privacy laws expand, and as the consequences for noncompliance get harsher, smart businesses are proactively seeking advanced solutions to protect sensitive data, while maintaining its full business utility.

How Data Products are Transforming the Tokenization of Data

Data products are revolutionizing the tools used for data masking and data tokenization. A data product delivers a complete set of data on a specific business entity, such as a customer, claim, credit card, payment, or store. Every piece of data about a business entity is managed within its own encrypted Micro-Database™.

The data product manages one Micro-Database for every instance of a business entity. It can be deployed in a data mesh or data fabric.

With a data product approach to tokenization, all of the sensitive data for a business entity is tokenized in its corresponding Micro-Database, alongside its original content. And each Micro-Database is secured by a unique 256-bit encryption key.
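The sketch below captures the per-entity idea in Python, using the `cryptography` package’s AES-GCM primitives. It’s a conceptual stand-in – the class name and storage layout are assumptions, not K2View’s Micro-Database implementation – but it shows why a unique 256-bit key per entity limits the blast radius of any single compromise:

```python
# Conceptual stand-in for per-entity encryption: every business entity
# gets its own 256-bit AES-GCM key, so one compromised key exposes
# exactly one entity's data. Not a real Micro-Database implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EntityStore:
    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}     # entity_id -> unique 256-bit key
        self._records: dict[str, bytes] = {}  # entity_id -> nonce + ciphertext

    def put(self, entity_id: str, payload: bytes) -> None:
        key = AESGCM.generate_key(bit_length=256)  # one key per entity
        nonce = os.urandom(12)                     # fresh nonce per write
        self._keys[entity_id] = key
        self._records[entity_id] = nonce + AESGCM(key).encrypt(nonce, payload, None)

    def get(self, entity_id: str) -> bytes:
        blob = self._records[entity_id]
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(self._keys[entity_id]).decrypt(nonce, ciphertext, None)

store = EntityStore()
store.put("customer-42", b'{"ssn": "078-05-1120"}')
print(store.get("customer-42"))  # only customer-42's key can decrypt this
```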

This per-entity design resolves one of the biggest risks associated with conventional data tokenization solutions: storing all of an organization’s sensitive data in one token vault. A centralized vault – even one outside the IT environment – increases the risk of a mass breach. Such vaults might be harder to penetrate, but when an attacker succeeds, the results can be catastrophic for businesses and their customers.

Moreover, a centralized token vault can lead to bottlenecks for scaling up data and make it harder to ensure referential and format integrity of tokens across systems.

In comparison, data products:

  • Continually ingest fresh data from source systems

  • Identify, unify, and transform data into Micro-Databases, without impacting underlying systems

  • Tokenize data in real time, or in batches, for a specific data product

  • Secure each Micro-Database with its own encryption key and access controls

  • Preserve format and maintain data consistency based on hashed values (see the sketch after this list)

  • Provide tokenization and detokenization APIs

  • Make tokens accessible in milliseconds

  • Ensure ACID compliance for token management
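The consistency point above deserves a concrete illustration. One common way to keep tokens consistent across systems – sketched below with Python’s standard hmac module and an assumed out-of-band secret key – is to derive each token deterministically from a keyed hash of the original value, so the same input always yields the same token:

```python
# Sketch of hash-based token consistency: an HMAC of the value, mapped
# to digits, yields the same format-preserving token on every system
# that shares the secret. The key below is an assumption -- in practice
# it would be provisioned and rotated by a key-management service.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"

def consistent_token(value: str, length: int = 9) -> str:
    """Derive a repeatable numeric token of the given length from a value."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    digits = "".join(str(b % 10) for b in digest)  # map bytes to digits
    return digits[:length]

# The same SSN tokenizes identically everywhere, preserving joins
# and referential integrity across tokenized systems.
print(consistent_token("078051120"))  # same 9-digit token on every run
```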

Now’s the Time to Care about Data Tokenization

The stakes are rising around data protection. Malicious actors are eager to exploit data security vulnerabilities, while internal breaches – even resulting from accidents – remain a monumental challenge for enterprises.

Businesses today need a data protection solution that covers 3 key components:

  1. Protection from breach

  2. Compliance with regulatory standards

  3. Support for operational and analytical workloads

With a data product approach to data tokenization, you enhance the security of your sensitive data while still allowing data consumers to use it, with minimal risk of an internal breach. As the scale, cost, and damage of data breaches and noncompliance rise, there’s no room to be a laggard: the time for a data product approach to tokenization is now.

Tokenize your data now,
before it’s too late