The ethical dilemma posed by Decentralized Identity

And how to solve it, in theory and in practice

Identity systems have traditionally been hierarchical directories. In organizations, central administrators define the rights that each user (or group of users) has on the system. And so, they need to know who the user is.

On the internet, nobody knows you’re a dog

That’s a big problem to solve, famously cartooned by Steiner in 1993: “on the internet, nobody knows you’re a dog.”

Since 1993, the internet has taken over the world. Identity and Access Management (IAM) systems now span a wide variety of uses, including customer-facing ones. Privacy regulations define what is and isn’t allowed as far as processing and storing individuals’ data is concerned.

Most of these systems are still very much centralized. Because passwords create large security gaps, protocols such as OAuth2 enabled the reuse of social accounts: people log in through Facebook, Google, GitHub and the like. The obvious downside is that those large networks get to know everything you authorize.
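To see why the social network learns so much, consider how a social login begins: the app redirects the user to the provider’s authorization endpoint, identifying itself and the data it wants. The sketch below builds such a request URL; the endpoint, client ID and redirect URI are made-up illustrative values, not any real provider’s.

```python
from urllib.parse import urlencode

# Hypothetical OAuth2 authorization-code request. The identity provider
# receives this URL, so it learns which service the user is logging in
# to, when, and what data that service requested.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"

params = {
    "response_type": "code",           # authorization-code flow
    "client_id": "relying-party-123",  # identifies the app to the provider
    "redirect_uri": "https://app.example.org/callback",
    "scope": "openid profile email",   # what the app asks to know about you
    "state": "opaque-anti-csrf-token",
}

login_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(login_url)
```

Every login thus leaves a trace at the provider: the `scope` and `client_id` parameters alone are enough to build a profile of which services a user frequents.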

As a result, an internet of behaviors (IoB) is emerging, as many technologies capture and use the “digital dust” of people’s daily lives. The IoB combines existing technologies that focus on the individual directly — facial recognition, location tracking and big data for example — and connects the resulting data to associated behavioral events, such as cash purchases or device usage.

Gartner predicts that by year-end 2025, over half of the world’s population will be subject to at least one IoB program, whether it be commercial or governmental. As we already discussed in a previous article on surveillance capitalism, one can expect extensive ethical and societal debates about the different methods employed to affect behavior, and whether that’s even a legitimate approach in the first place.

Technologists should embed those new issues into their identity work. As an example, GNAP, a new IETF protocol currently being specified, embeds a privacy-by-design approach to mitigate those issues (disclaimer: I’m one of the co-editors). End-user identity claims typically come from OpenID Connect providers, but there’s also a new kid in town that might reduce the risk of surveillance: decentralized identity.

Self-sovereign identity

The Sovrin Foundation has defined the concept of self-sovereign identity (SSI): end-users alone should fully own their identity data, without intervention from any external administrative authority.

The idea grew out of public blockchains, but focuses on very specific types of data: decentralized identifiers (DIDs) and verifiable credentials (VCs), standardized by the W3C with substantial input from the Decentralized Identity Foundation (DIF). Timothy Ruff explains these new concepts through a transportation metaphor that is worth a read. It explains why experts and organizations such as Microsoft put so much effort into this line of work, including a semantic layer related to identity.
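Concretely, a DID resolves to a document listing the public keys its owner controls, and a verifiable credential binds claims to that DID under an issuer’s signature. The shapes below loosely follow the W3C DID Core and Verifiable Credentials data models; all identifiers and key values are invented for illustration.

```python
import json

# Illustrative DID document: maps a decentralized identifier to the
# public key material used to authenticate its controller.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk-example-key",  # made-up key value
    }],
}

# Illustrative verifiable credential: claims about the DID subject,
# issued by another party. A real credential also carries a
# cryptographic proof from the issuer.
verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {
        "id": "did:example:123456789abcdefghi",
        "degree": "Bachelor of Science",
    },
}

print(json.dumps(did_document, indent=2))
```

The key point is the separation of roles: the subject controls the identifier and its keys, while issuers vouch for specific claims — no central directory needs to hold the whole picture.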

The objective everyone agrees on is to give control back to individuals, so that their data cannot be shared without their consent. People could perhaps even own their identity and be paid by corporations such as Facebook for the use of their profile data, as suggested by authors such as Gaspard Koenig in France or Jaron Lanier in the United States.

Or generative identity?

That idea is strongly opposed by legal scholar Elizabeth M. Renieris as well as Philip Sheldrake, who argues that the real issue should be the regulation of big-tech companies.

In centralising identity on the individual, as SSI does, it removes some identification, authentication, and claims processes from being subject to law and organisational governance (e.g. the GDPR does not apply to individuals), and into the chaos of social groups and the formation and reformation of social norms and other societal structures. It’s worth noting that social norms form without any real and widespread understanding of the technology and with little if any appreciation for potential emergent consequences — P. Sheldrake

The very philosophical underpinnings of the SSI movement are indeed questionable. Renieris calls for a contextual identity, while Sheldrake introduces the notion of generative identity. Sheldrake argues, with good reason, that SSI is a technical movement that should introspect its potential for social dystopia (exemplified by Aadhaar, India’s centralized biometric identification system, from which citizens cannot opt out; a problem that any identity system, decentralized or not, must be purposely designed to avoid). Badly designed identity systems facilitate exclusion. The arguments are well grounded in social theory; here’s a visual summary comparing the self-sovereign (noun-like) and generative (verb-like) approaches.

Source: [Akasha Foundation](https://cdn.hashnode.com/res/hashnode/image/upload/v1618573527677/7hKucAllY.html)

What Sheldrake tells us is that we shouldn’t talk about “our data”, but about “data about us”. For instance, if you take a DNA test, the results also reveal a lot about your relatives. Data is interpersonal by nature, and therefore cannot be owned by an individual.

This echoes Édouard Glissant’s definition of rhizomatic identity, an epistemological shift he laid out in Poetics of Relation: “each and every identity is extended through a relationship with the Other”.

What technologists should do

What matters is that identity specialists take into account the potential consequences of their work on society. “Decentralization” or “self-sovereignty” shouldn’t become marketing slogans chasing category dominance in a technology arms race. These concepts shouldn’t be taken at face value; they’re not intrinsically good; they’re merely tools that can transform our world and relationships, for better or worse.

That leaves us with a question. What should technologists change in their approach?

The few answers available so far hint that nobody has a clue:

  • Phil Windley’s answer to the criticism remains very focused on technology and, in my view, doesn’t address the core concerns. Likewise, INATBA, a European blockchain association, published a position paper in November 2020 explaining “what’s at stake” with decentralized identity. But the stakes it lists are purely technical, focused on issues such as interoperability; there’s not a single word on the related ethical issues.

  • On the opposite side, the generative identity charter itself says the technological aspects are out of its scope. This isn’t an issue per se; their role is to ask the right questions and provide a reflexive approach that transcends any technical framework. But still… one would expect some direction.

So I’ll give my take on the issue. I would suggest a divide between identity for machines and identity for humans.

  1. Identity for machines would greatly benefit from decentralized identifiers, as detailed in Sovrin’s whitepaper on SSI for IoT. That’s the line of work I’ve been pursuing in my work on cybersecurity for connected medical devices (the mediam EU project).

  2. Identity for humans should remain a socio-technical construct, mediated by organisations subject to the laws of the places where they operate. The technical artefact that should be decentralized is thus the controlling key (as DIF’s KERI project attempts), which governs access to derived identity information; that information may well rely on trusted parties to provide verified identities (for instance, to comply with AML and KYC obligations). This highlights a terminology issue, as DIDs may not necessarily be decentralized. Whether identifiers themselves are centralized or decentralized should be a thoughtful architectural choice driven by the use case and objectives, not a fundamental property.
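The idea of decentralizing the controlling key rather than the identifier can be illustrated with key pre-rotation: each event commits in advance to a digest of the next key, so the key can rotate while the identifier stays stable. The toy sketch below is inspired by KERI’s pre-rotation concept but is emphatically not its actual protocol; the `did:example:` identifier scheme and the random-bytes “keys” are stand-ins.

```python
import hashlib
import secrets

def digest(data: bytes) -> str:
    """Hex digest used both for commitments and the identifier."""
    return hashlib.sha256(data).hexdigest()

def new_key() -> bytes:
    return secrets.token_bytes(32)  # stand-in for a real keypair

# Inception: the identifier is derived from the first key, and the
# event commits to a digest of the *next* (not yet revealed) key.
current_key, next_key = new_key(), new_key()
identifier = "did:example:" + digest(current_key)[:16]
log = [{"event": "inception",
        "key": digest(current_key),
        "next_commitment": digest(next_key)}]

# Rotation: reveal the pre-committed key and commit to a new next key.
# The identifier itself never changes.
current_key, next_key = next_key, new_key()
log.append({"event": "rotation",
            "key": digest(current_key),
            "next_commitment": digest(next_key)})

# A verifier replays the log, checking that each revealed key matches
# the previous event's commitment, to find the authoritative key.
assert log[0]["next_commitment"] == log[1]["key"]
print(identifier, "->", log[-1]["key"][:8])
```

This is what “decentralizing the controlling key” buys: key compromise or loss can be handled by rotation, without re-issuing the identifier or asking any central registry for permission.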

Whether you agree or not with this idea, please let me know what you think.

Disclaimer: the mediam project has received funding from the European Union’s Horizon 2020 research and innovation programme under the NGI_TRUST grant agreement no 825618

Image credit: generative-identity.org