
Is Age Assurance Being Hijacked by Identity Assurance?

Governments around the world are rightly focused on protecting children online, preventing underage access to alcohol, tobacco, risky content and gambling and improving safety in age-restricted environments. These are legitimate policy goals and they deserve serious attention. But alongside this effort, a quieter trend is emerging: the use of age assurance as a mechanism to accelerate the adoption of government digital identity systems.

This is not a conspiracy theory and it does not require malicious intent. It is a structural risk created when two fundamentally different functions (proving who you are and proving that you are old enough) are treated as if they are the same thing.

They are not.


Age Assurance Is Not Identity Assurance

To prove your identity, you must reveal who you are to whoever is asking. To prove your age, you only need to demonstrate that you meet a threshold that is applicable for an age-related eligibility requirement. You can prove that you are over eighteen without disclosing your name, your address, your national identity number or any persistent identifier.

In the physical world, we intuitively understand this distinction. A bartender takes a look at you and says 'yep, you're old enough'. Even if they ask for your ID, they only check the date of birth and they don't keep a persistent record of it. Digitally, the same principle should apply. Yet in many regulatory environments, governments are increasingly encouraging (and in some cases requiring) age assurance to be performed using government digital ID systems.

That is not a technical necessity. It is a policy choice. And it has nothing to do with trust.


Scale Changes the Risk

The difference matters because of volume. Identity checks are relatively infrequent. Most people prove who they are only a handful of times a year: when opening a bank account, applying for credit, starting a new job, buying a house or voting.

Age checks, by contrast, are ubiquitous. They occur when people buy alcohol, access social media, watch age-restricted content, enter venues, gamble, use dating apps or participate in online communities. These interactions happen daily and at enormous scale.

If governments can position their digital ID systems as the default or trusted mechanism for these routine checks, they gain far more than a useful service. They gain continuous transaction volume, behavioural metadata and the normalisation of identity-linked access to everyday life. It means that they can quietly make digital ID a de facto reality rather than having to impose unpopular mandates.

This is where age assurance quietly becomes something else.


Surveillance Capacity Without Surveillance Intent

This does not require sinister motives. Most governments already have legal powers to obtain metadata from service providers, including logs of use and compliance records. If age assurance is tied to identity, then information about who accessed what, when and how often becomes technically trivial to collect.

This creates surveillance capacity even if it is never actively exploited. The risk is therefore structural rather than intentional.

If access to online platforms or content under the banner of child protection requires the use of government digital ID, anonymous speech becomes harder. Political criticism becomes more easily traceable. Journalistic sources become riskier to protect. In countries where same-sex relationships are illegal, sexual orientation becomes fraught with danger to reveal. Not because someone is necessarily watching, but because the capability exists.

The same logic applies offline. If digital ID is required to enter venues in order to prove age, those systems can easily become records of presence. In the event of an incident, they may also become witness lists or suspect pools. What begins as age verification quietly becomes behavioural logging, population tracking and a feed for police intelligence.


Metadata Is More Powerful Than It Looks

At a population level, metadata derived from age-assurance systems can be highly revealing. Patterns of alcohol consumption, gambling frequency, content access or time-of-day behaviour can all be inferred from logs. In most jurisdictions, governments already have legal pathways to obtain such data from regulated providers. These datasets do not have to reveal the behaviour of named individuals to have a powerful impact on people's day-to-day lives.

When age assurance is identity-linked, this behavioural dataset becomes far richer than necessary for the original policy goal. For example, age-assurance logs could show when and where people drink, move around towns or go online most often and that insight can easily turn into new rules about when those activities are allowed.

The issue is not that this will be abused. The issue is that it does not need to be abused to be powerful.


Age Assurance Does Not Require Identity

None of this means that age assurance is wrong or that digital identity is inherently dangerous. Both have legitimate roles. Digital ID can simplify public services and reduce fraud. It can be more secure and practically usable. It can form a verified and trusted source of data for an age assurance system.

Age assurance can protect children and help businesses comply with the law. It can smooth out the delivery of age-appropriate experiences. It can prevent bad actors from entering child-only spaces. It can give organisations privacy-preserving methods for making age-related eligibility decisions.

The danger arises when the two are fused by default rather than by necessity.

There is no technical requirement for age assurance to rely on identity. Privacy-preserving alternatives already exist and continue to improve. These include age estimation based on physical features that vary with age, attribute-based credentials, anonymous age tokens, zero-knowledge proofs and probabilistic inference models. These are not so different from how offline age assurance has worked for decades.

All of these answer the same question: is this person old enough? None of them need to answer the second, far more invasive question: who is this person?
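To make the separation concrete, here is a minimal sketch of the anonymous-age-token idea in Python. It is a simplification, not any particular production scheme: a hypothetical issuer checks a date of birth once, then hands back a signed claim containing only "over the threshold" and a random nonce. Real deployments use blind signatures or zero-knowledge proofs so that the issuer and the relying party cannot link tokens to each other or to a person; the HMAC here simply stands in for the issuer's signature.

```python
import hmac
import hashlib
import json
import secrets
from datetime import date

# Hypothetical signing key, held only by the trusted issuer.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(date_of_birth: date, threshold: int = 18):
    """Check the date of birth once, then return a token carrying only
    the claim 'over threshold' plus a random nonce -- no name, no DOB,
    no persistent identifier. Returns None if the check fails."""
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    if age < threshold:
        return None
    claim = {"over": threshold, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_age_token(token: dict) -> bool:
    """The relying party learns only that the issuer vouched for the
    threshold -- nothing about who the holder is."""
    payload = json.dumps(
        {"over": token["over"], "nonce": token["nonce"]}, sort_keys=True
    ).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token.get("sig", ""), expected)
```

The design point is in what the token omits: the verifier can confirm the eligibility claim without ever seeing a name or date of birth. Tokens should also be single-use, since a reused nonce would itself become a persistent identifier.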

Yet in many regulatory environments, these approaches struggle to gain recognition. They are excluded from trust frameworks, dismissed as insufficiently robust or simply left uncertified. Meanwhile, government digital ID systems are presented as the only trusted means, publicly funded and embedded into regulation.

The effect is not neutral. It shapes the market. Age assurance becomes a funnel into identity assurance and innovation that would minimise data collection is quietly squeezed out.


Policy Choices, Not Technical Ones

This creates a policy tension that deserves open discussion. A healthy approach would treat age assurance as a standalone function and encourage multiple methods to achieve it, particularly those that minimise personal data. It would require proportionality, limit metadata retention and separate identity from access wherever possible.

The guiding principle should be simple: use the least identifying method that achieves the policy goal.

Not: use the most powerful system available.

This is exactly what the new ISO/IEC 27566-1:2025 – Age Assurance Systems – Part 1: Framework is aiming to achieve.


Incentives Matter

The strategic question is whether the slow progress of privacy-preserving age assurance is accidental. Governments have strong incentives to ensure that their digital ID systems achieve widespread adoption. Age checks offer a uniquely high-volume use case. Trust frameworks and regulatory controls can determine which technologies are allowed to operate. Child safety provides a compelling public narrative.

These are not accusations. They are incentives. And good governance requires examining incentives as well as intentions.


Keep the Functions Separate

This does not mean rejecting digital identity. It means resisting the idea that it must be the solution to every problem.

Age assurance should protect people. Digital identity should empower them.

When one becomes a mechanism to force the other, both lose legitimacy.

It is entirely possible to protect children, respect adult privacy, enable innovation and avoid unnecessary surveillance at the same time. But only if we stop pretending that proving you are old enough requires proving who you are.

It does not. And policy should reflect that reality before the distinction is quietly erased.
