Disclaimer: This will not be a typical post from No Kill Switch. You know: one that follows one of two patterns - either "hypothesis -> answer -> summary" or "thesis -> confirmation/denial -> summary". Instead, expect something much more chaotic - a vague consideration over the rough draft of an idea. But treat it as an open invitation to some speculative brainstorming. It may get us completely nowhere, but maybe there's some potential to uncover :)

If you're up for that, let's get rolling.


Let's sketch some context first: the 3rd party cookie is a dead man walking. It's just a matter of time until it vanishes completely. The daring attempt to replace it with FLoC has apparently failed as well (at least that's how it looks in late April 2021). That means the digital advertising market will eventually lose one of its primary vehicles for classifying and targeting individuals, managing reach & frequency, etc.

That's actually quite a big deal (not because I'm a fan of ads - quite the contrary). Whether we like it or not, ads are the main currency we use to "pay" for content on the Internet. That mechanism is not inherently bad (content creators should be fairly compensated) - but it should respect an individual's right to privacy and data ownership.

Hmm, so maybe it is possible to fix the existing mechanism in a way that makes all the parties (mostly) happy? What could that look like?

Currently, various entities collect and aggregate (match) our behavioral data practically beyond our control (despite all the ineffective regulations). The majority of popular sites place hundreds of 3rd party cookies on our computers. Even if we could control each of them separately (which doesn't seem practical), we wouldn't know precisely what each of them is used for.

But what about inverting the control? And merging the uncontrolled behavioral snapshots into controlled, unified records of deliberately shared properties?

The idea

Imagine that ...

  1. First of all, an individual exclusively owns her/his personal data and deliberately leases it (as a sealed, internally consistent piece of information) in exchange for some service - e.g., to get the content (s)he wants.
  2. The service provider would use this (leased) information to generate a relevant response (an ad, personalized content) and forget it immediately afterward.
  3. It should be up to the individual whom to share the data with, what range of data to share, and for what purpose (kind of usage). But it should also be up to the service provider to specify what data it is interested in and whether to accept a given piece of data or not.
  4. The data consumer (service provider) should not be able to keep (cache) the data and associate it with the same individual "next time". (A rough sketch of such an exchange follows this list.)
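
To make this less abstract, here's a minimal sketch (in TypeScript, purely illustrative - every type and function name below is my own assumption, not an existing API or standard) of what such a request/lease exchange could look like:

```typescript
// Illustrative types only - nothing here is a spec.

// What the service provider asks for (published openly as its "contract").
interface DataRequest {
  provider: string;                            // e.g. "news.example.com"
  purpose: "ads" | "personalization" | "recommendations";
  requestedFields: string[];                   // e.g. ["ageRange", "interests"]
}

// What the individual's browser/agent hands over - a sealed, scoped lease.
interface DataLease {
  grantedFields: Record<string, unknown>;      // only the fields the user agreed to share
  purpose: DataRequest["purpose"];             // the usage the lease is valid for
  expiresAt: number;                           // epoch ms - the lease is worthless afterwards
}

// The provider's side: use the lease once to build a response, then drop it.
function serveWithLease(
  lease: DataLease,
  render: (data: Record<string, unknown>) => string
): string {
  if (Date.now() > lease.expiresAt) {
    return render({});                         // expired lease -> generic, non-personalized response
  }
  const response = render(lease.grantedFields);
  // No storage, no association with an identifier - the lease is "forgotten" here.
  return response;
}
```

The key point is the last function: the leased data flows through a single response and is never persisted or keyed to the individual.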

The data

When it comes to the data an individual shares with the service provider ...

  1. An individual's data should reside in a secure container, bound to her/his digital identity (even if it's just a local browser profile). The individual should define that data deliberately and manually - we're not talking about collected browsing history or anything like that. The format should be at least partially standardized, but flexible and adaptable (JSON on steroids, with a rigid top-level structure and some rules regarding the grammar).
  2. The data should be divided into two conceptual categories - permanent characteristics and temporary preferences/interests. While the former tells more about the demographic profile (gender, age, nationality, place of living, etc.), the latter exposes the non-permanent information one would like to communicate to service providers (e.g.: "I'm looking for car insurance", "I'm expecting a baby", "I'm looking forward to my holiday trip"). An illustrative sketch of such a record follows this list.
  3. Why would one want to expose such information? To receive more useful commercial offers, to get inspired by more relevant content, even to have one's newsfeed prioritized with her/his (assumed) expectations in mind. One could call it: personalization based on declaration.
  4. An individual should be able to change her/his data whenever (s)he wants.
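
Just to visualize the split between permanent characteristics and temporary interests, here's an illustrative sketch of such a record (the field names and the `PersonalDataRecord` shape are made up - the actual "JSON on steroids" schema would need a real specification):

```typescript
// A hypothetical shape for the personal data container.
interface PersonalDataRecord {
  permanent: {                     // rigid, standardized top-level structure ...
    ageRange?: string;
    gender?: string;
    country?: string;
    city?: string;
  };
  temporary: Array<{               // ... with flexible, user-declared interests underneath
    interest: string;
    validUntil?: string;           // ISO date - interests naturally expire
  }>;
}

const myData: PersonalDataRecord = {
  permanent: { ageRange: "30-39", country: "PL" },
  temporary: [
    { interest: "car insurance", validUntil: "2021-06-30" },
    { interest: "holiday trip" },
  ],
};
```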

The truth

What about the trustworthiness of such information? No, I don't want any mechanism to verify/sign/audit it. The rules would be simple:

  • if you don't share the data, you don't get the service (or you get a limited service) from the service provider (according to the open "contract" it publishes)
  • if you share the data, remember that "garbage in" results in "garbage out" - if your input data is inaccurate, false, or outdated, you'll get ads/recommendations/content that will be nothing but annoying focus dispellers

The goal here is simple: such inversion of control should incentivize you to be honest.

The obstacles

Of course, there are plenty of risks/challenges that are not hard to spot at this stage:

  1. Would we be able to create a format that is both flexible enough to reflect highly varied information and structured enough to make sense of it?
  2. The data format is one thing, but crafting tools so everyone can adjust her/his personal information in an easy and accessible way also seems non-trivial.
  3. Flexibility to accept or deny leasing information (for every site independently) sounds like a good plan, but it has to have a very unobtrusive UX, so browsing across hundreds of sites doesn't become an endless, frantic checkbox click-fest.
  4. The whole model appears to work better for an unknown (not logged in) user; if the user is already registered and logged in, what would prevent the service provider from caching and processing the user's data offline?
  5. Web communication could gradually become even chattier than it is now - the website opens the negotiation by sending meta-queries (for the information it expects), which the user's browser denies or accepts by sending back the requested subset of data (which can't be cached), so the service provider can respond with personalized content, and so on. A rough sketch of such a round-trip follows this list.
  6. Inevitably, some people would (more or less consciously) tell the service provider who they'd like to be instead of who they really are; some bad actors would intentionally craft artificial, unattractive & unrealistic "profiles" to confuse recommendation systems. And so on - people are such wicked creatures ... :)
  7. An inverted system could work for recommendations or content personalization, but it would not provide some key functionalities of the old model (the one powered by 3rd party cookies) - like objectively estimating the overall audience size across various service providers.
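
To illustrate obstacle no. 5, here's a minimal sketch of how a single negotiation round-trip could look on the browser side. Again: this is purely hypothetical - the message shapes, the `answerMetaQuery` function, and the consent callback are my own assumptions, not part of any existing protocol.

```typescript
// Hypothetical message flow for a single page load.
type MetaQuery = { requestedFields: string[]; purpose: string };      // sent by the site
type LeaseOffer = { grantedFields: Record<string, unknown> } | null;  // sent back by the browser

// Browser side: check the user's per-site preferences and answer the meta-query.
function answerMetaQuery(
  query: MetaQuery,
  userConsents: (field: string, purpose: string) => boolean,
  userData: Record<string, unknown>
): LeaseOffer {
  const granted: Record<string, unknown> = {};
  for (const field of query.requestedFields) {
    if (userConsents(field, query.purpose) && field in userData) {
      granted[field] = userData[field];
    }
  }
  // null means "no deal" - the site falls back to its non-personalized (or limited) response.
  return Object.keys(granted).length > 0 ? { grantedFields: granted } : null;
}
```

Every page load would need at least this extra exchange on top of the regular request/response - hence the worry about chattiness.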

But ...

The price

Frankly, none of these obstacles feels like a real show-stopper.

The majority of them appear entirely solvable, one way or another. In fact, the whole idea appears easier and less ambitious than, e.g., the Solid project (yes, the one associated with Tim Berners-Lee, the father of the original World Wide Web).

Yes, some information will not be available and/or 100% trustworthy, but that is the price of privacy. I believe that at some point not far ahead, all the involved parties will be more than willing to pay such a price.
