
No cam. No mic. We found other ways to surveil your children.

Projection is not only a defense mechanism where we rationalize the world by identifying the behavior of others as being motivated by what motivates us, say trauma or abuse. It’s also how marketing works. When you do it deliberately, it’s called “advertising” or “business development” or “advertainment” or whatever the tech news calls itself these days.

However, even when it’s done deliberately, the mechanism that fuels the intention and the enthusiasm for an idea still comes from somewhere in your brain that is not easily understood, and is desperately hungry, all the time. Your id breaks through and tells us what’s really going on, and because you think you’re using your rational brain – you know, to make an ad campaign for a smart speaker for children that supposedly avoids the problems of surveillance capitalism by having no mic, no camera, etc. – you don’t know you’re telling on yourself.

The Yoto smart speaker is a device that connects to the cloud to deliver content to pre-verbal children. “No cam. No mic. No funny business,” is an interesting claim if you believe they’re projecting what they believe when they’re asking you to believe something about them. What funny business do you mean? Are you saying it’s a completely offline device that delivers new content without having to purchase cartridges or tapes or CDs? Because that’s awesome.

In fact, I had one myself and I loved it. It trained me to handle and fetishize my parents’ objects so I could learn to consume them, but that’s cool. I like music.

No, Yoto just wants to collect, store and monitor your child’s behavioral data, just like everyone else. “Parents can also upload content they select (say, songs from a playlist, or a certain audio book) to blank cards using a parent app; the cards work using NFC technology, like a contactless credit card, that link to content stored on Yoto’s servers.”
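The pattern the quote describes – a dumb tag that is nothing but a key into server-side content – can be sketched in a few lines. This is only an illustrative guess at the general shape of such a system; every name and value here is hypothetical, not Yoto’s actual implementation. The point is that the tap itself is necessarily a data point:

```python
# Hypothetical sketch of an NFC-card-to-cloud lookup.
# The card stores only an identifier; the content -- and the record
# of every tap -- lives on the server. (Illustrative names only.)

CONTENT_DB = {
    "04:a3:5f:12": {
        "title": "Bedtime Stories Vol. 1",
        "url": "https://example.com/audio/1",
    },
}

PLAY_LOG = []  # every lookup is also a behavioral data point


def resolve_card(tag_uid: str) -> str:
    """Map an NFC tag UID to a content URL, logging the tap."""
    entry = CONTENT_DB.get(tag_uid)
    if entry is None:
        raise KeyError(f"unknown card: {tag_uid}")
    PLAY_LOG.append(tag_uid)  # who played what, recorded server-side
    return entry["url"]


print(resolve_card("04:a3:5f:12"))  # → https://example.com/audio/1
```

Notice that the server can’t serve the audio without also learning which card was tapped; the logging isn’t an add-on, it falls out of the architecture.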

Probably sell it too, since many companies that do the former do the latter; some only do the former to enable the latter. But we haven’t even looked up the founders of the company yet.

Elizabeth Bodiford has a nice way of describing this kind of behavior in her poem, We Tell On Ourselves:

We tell on ourselves by the way that we walk.

Even by the things of which we talk.


You Sure As Shit Can Ban Technology

In this week’s Sunday New York Times there’s an article about Clearview AI, a company taking advantage of a lack of regulation in personal and personally-identifying data to market a facial recognition application to law enforcement agencies. The basic function of their product is to input a picture of anyone and output potential identity matches, including photos, confidence scores and links to the matched data.
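The picture-in, ranked-matches-out function described above is, in essence, a nearest-neighbor search over face embeddings. A toy sketch of that general pattern – with made-up three-dimensional vectors standing in for real embeddings, and no claim to resemble Clearview’s actual system – looks like this:

```python
# Toy sketch of search-by-face: compare a query embedding against a
# database of precomputed embeddings and return the closest identities
# with similarity scores. (Hypothetical data; not Clearview's system.)
import math


def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


DATABASE = {  # identity -> precomputed face embedding (toy vectors)
    "person_a": [0.9, 0.1, 0.0],
    "person_b": [0.1, 0.9, 0.2],
}


def search(query_embedding, top_k=2):
    """Rank database identities by similarity to the query face."""
    scores = [(name, cosine(query_embedding, emb))
              for name, emb in DATABASE.items()]
    return sorted(scores, key=lambda t: t[1], reverse=True)[:top_k]


matches = search([0.85, 0.15, 0.05])  # best match first
```

The mechanism is mundane – it’s ranking by similarity, the same shape as text search – which is exactly why the scale of the scraped database, not the algorithm, is where the harm lives.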

The article quotes David Scalzo, founder of a venture capital firm that was an early investor:

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

AI is particularly ripe for this kind of ploy, the type that master manipulators who throw up their hands and say, “We never broke the law!” depend on for their arguments to have something soft to land on. Even if it’s bullshit, it’s softer than the cold hard ground.

More importantly, it’s a long-term strategy, where that something soft is this: if you say it often enough in public, and other people are saying it too, it has the potential to become true. Our human brains have a hard time dismissing information that was not in focus at the time we took it in. We absorb background information just the same as anything else; we just don’t act on it the same way.

Like mercury in fish, it accumulates familiarity in journalism’s fatty tissues when you’re not paying too much attention, and only when the accumulation reaches a critical level is there any discussion, after it’s too late. Polluters, bankers, human traffickers, fossil fuel purveyors, that kind of character (Scalzo runs a private equity firm). The argument goes something like this:

  1. State as fact a state of the world that has to be true for your product to be acceptable.
  2. State as fact that this state of the world is inevitable.
  3. State that while there are always bad apples, it’s for the courts to decide, later, maybe.
  4. Restate 1) and 2) as a final coda.

In the case of this statement, it’s demonstrably bullshit. Clearview AI claims to have scraped over 3 billion photos for their database. Yes, with a b. As widely reported in 2019, scraping sites for training data is not legal, except in certain circumstances involving Creative Commons licenses. (And even CC is speaking up to say, “woah, that’s not what CC is for, Yahoo and IBM.”)

There’s nothing about information a priori that creates a privacy threat for individuals, only the value of individuals’ personal data to commercial and institutional interests.  It doesn’t matter what Scalzo’s opinion is anyway, since he has a vested interest.  

The important thing, though, is that you can absolutely ban technology. You can ban it all day, and it’s done all the time. Nobody is allowed to own a silencer for a pistol, a cruise missile, or a radar detector, or to set up a microphone, x-ray, and camera array around your house to monitor your movements. The thing that bans this is laws. The law states “this is not allowed,” and almost everyone who would have done it for some anti-social purpose, just because it was allowed, will now not do it, because if they got caught they would get in trouble.

You can’t ban the idea of a cruise missile, but that’s fine. The idea might be used for all kinds of things. The idea of matching a picture to a database of other pictures as a form of search already has all kinds of miraculous uses, and in essence isn’t that different from searching for text, whether you do it online or open a phone book to P to figure out what pizza place you always call that you only know by seeing the ad on the page.

Is it difficult to ban the nefarious and anti-social use of software for the purposes of making money? Probably. (I’d say ask China, but I don’t want you to.) But the first step is doing it, and then it’s not okay to do it anymore. David Scalzo and Clearview AI want it to be okay.

It’s not, and it never will be. But we have to pay attention and talk about these issues, and demand that our politicians and law enforcement agencies be principled and thoughtful, and push back against any technology that infringes on any individual’s personal liberty or safety, even when we haven’t codified laws against it yet. Laws codify what people like us think, and when we say we think that this kind of behavior is not okay, laws can ban technology just fine, thank you.