Critics are wrong to slam iPhone X’s new face tech

Apple’s new iPhone X reads faces. And privacy pundits are gnashing their teeth over it.

The phone’s complex TrueDepth image system includes an infrared projector, which casts 30,000 invisible dots, and an infrared camera, which checks where in three-dimensional space those dots land. With a face in view, on-device artificial intelligence figures out what’s going on with that face by processing the locations of the dots.

Biometrics in general and face recognition in particular are touchy subjects among privacy campaigners. Unlike a password, you can’t change your fingerprints — or face.

Out of the box, the iPhone X’s face-reading system does three jobs: Face ID (secure access), Animoji (avatars that mimic users’ facial expressions) and something you might call “eye contact,” which figures out whether the user is looking at the phone (to prevent sleep mode during active use).

A.I. looks at the iPhone X’s projected infrared dots and, depending on the circumstances, can check: Is this the authorized user? Is the user smiling? Is the user looking at the phone?

Privacy advocates rightly applaud Apple because Face ID happens securely on the phone — face data isn’t uploaded to the cloud where it could be hacked and used for other purposes. And Animoji and “eye contact” don’t involve face recognition.

Criticism is reserved for Apple’s policy of granting face-data access to third-party developers, according to a Reuters piece published this week.

That data includes roughly where parts of the face are (the eyes, mouth and so on), as well as coarse changes in the state of those parts (eyebrows raised, eyes closed and the like). Developers can program apps to use this data in real time, and they can also store it on remote servers.
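
To make that concrete, here’s a minimal Swift sketch of the kind of ARKit face-tracking session a third-party app would run. The class and property names are my own illustrative choices, and a real app would do something more interesting than print the values.

    import ARKit

    // A sketch of how a third-party app receives TrueDepth face data through
    // ARKit. Class and property names are illustrative, not Apple's.
    final class FaceDataReader: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            // Face tracking requires the TrueDepth camera.
            guard ARFaceTrackingConfiguration.isSupported else { return }
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        // Called continuously as the TrueDepth camera tracks the face.
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

            // Expression data arrives as coefficients between 0.0 and 1.0,
            // not as an image of the face.
            let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
            print("smile: \(smile), jaw open: \(jawOpen)")
        }
    }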

The controversy raises a new question in the world of biometric security: Do facial expressions and movements constitute user data or personal information that should be protected in the same way that, say, location data or financial records are?

I’ll give you my answer below. But first, here’s why it really matters.

The coming age of face recognition

The rise of machine learning and A.I. means that over time, face recognition, which is already very accurate, will become close to perfect. As a result, it will be used everywhere, possibly replacing passwords, fingerprints and even driver’s licenses and passports as the way we determine or verify who’s who.

That’s why it’s important that we start rejecting muddy thinking about face-detection technologies, and instead learn to think clearly about them.

Here’s how to think clearly about face tech.

Face recognition is one way to identify exactly who somebody is.

As I detailed in this space, face recognition is potentially dangerous because people can be recognized at great distances and also online through posted photographs. That’s a potentially privacy-violating combination: Take a picture of someone in public from 50 yards away, then run that photo through online face-recognition services to find out who they are and get their home address, phone number and a list of their relatives. It takes a couple of minutes, and anybody can do it for free. This already exists.

Major Silicon Valley companies such as Facebook and Google routinely scan the faces in hundreds of billions of photos and allow any user to identify or “tag” family and friends without permission of the person tagged.

In general, people should be far more concerned about face-recognition technologies than any other kind.

It’s important to understand that other technologies, processes or applications are almost always used in tandem with face recognition. And this is also true of Apple’s iPhone X.

For example, Face ID won’t unlock an iPhone unless the user’s eyes are open. That’s not because the system can’t recognize a person whose eyes are closed. It can. Rather, the A.I. that figures out whether the eyes are open is separate from the system that matches the face of the authorized user against the face of the current user. Apple deliberately chose to disable Face ID unlocking when the eyes are closed, to prevent someone from unlocking the phone by holding it in front of a sleeping or unconscious authorized user.

Apple also uses this eye detector to prevent sleep mode on the phone during active use, and that feature has nothing to do with recognizing the user (it will work for anyone using the phone).

In other words, the ability to authorize a user and the ability to know whether a person’s eyes are open are completely separate and unrelated abilities that use the same hardware.
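
Face ID’s internal attention check isn’t exposed to apps, but ARKit offers an analogous, identity-free signal, which is roughly what that separation looks like in code. This is a minimal sketch; the function name and the 0.5 threshold are illustrative choices, not part of ARKit.

    import ARKit

    // Decide whether the eyes are open, with no idea whose eyes they are.
    // eyeBlink coefficients run from 0.0 (eye open) to 1.0 (eye closed).
    func eyesAreOpen(_ face: ARFaceAnchor) -> Bool {
        let leftClosed = face.blendShapes[.eyeBlinkLeft]?.floatValue ?? 1
        let rightClosed = face.blendShapes[.eyeBlinkRight]?.floatValue ?? 1
        return leftClosed < 0.5 && rightClosed < 0.5
    }

ARKit also reports an estimated gaze target on the face anchor (lookAtPoint), which is the kind of signal an app-level “is the user looking at the phone” feature would build on, again with no idea who the user is.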

Which brings us back to the point of controversy: Is Apple allowing app developers to violate user privacy by sharing face data?

Raising eyebrows

Critics lament Apple’s policy of letting third-party developers receive face data harvested by the TrueDepth image sensors. Developers gain that access in their apps by using Apple’s ARKit and, specifically, its new face-tracking tools.

Those tools let developers build apps that know the position of the face, the direction of the light falling on it, and the user’s facial expression.

The purpose of this policy is to let developers create apps that place goofy virtual glasses on your face (or fashionable ones to try on at an online eyewear store), or any number of other apps that react to head motion and facial expression. Characters in multiplayer games will appear to frown, smile and talk in instant reflection of the players’ actual facial activity. Smiling while texting may surface the option to post a smiley-face emoji.
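
The try-on-glasses case, for instance, amounts to little more than attaching a 3D model to the face anchor that ARKit provides. Here’s a rough Swift sketch; the glasses model itself is a hypothetical asset.

    import UIKit
    import ARKit
    import SceneKit

    // Attach a virtual-glasses model to the tracked face so it follows head
    // position and orientation automatically. The glassesNode asset is
    // hypothetical; ARKit supplies the anchor, SceneKit renders the scene.
    final class GlassesViewController: UIViewController, ARSCNViewDelegate {
        @IBOutlet var sceneView: ARSCNView!
        let glassesNode = SCNNode() // would be loaded from a 3D asset in a real app

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            sceneView.delegate = self
            sceneView.automaticallyUpdatesLighting = true // apply ARKit's lighting estimate
            sceneView.session.run(ARFaceTrackingConfiguration())
        }

        // ARKit asks for a node to track the face; anything parented to that
        // node rides along with the head.
        func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
            guard anchor is ARFaceAnchor else { return nil }
            let node = SCNNode()
            node.addChildNode(glassesNode)
            return node
        }
    }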

Apple’s policies are restrictive. App developers can’t use the face features without user permission, nor can they use face data for advertising or marketing, or sell it to third-party companies. They can’t use face data to create profiles that could identify otherwise anonymous users.

The facial expression data is pretty crude. It can’t tell apps what the person looks like. For example, it can’t convey the relative size and position of resting facial features such as the eyes, eyebrows, nose and mouth. It can, however, report changes in position. For example, if both eyebrows rise, it can send a crude, binary indication that, yes, both eyebrows went up.
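
From the developer’s side, that yes/no is something the app derives itself from per-expression coefficients. A minimal Swift sketch, in which the 0.6 cutoff is an arbitrary illustrative choice:

    import ARKit

    // Reduce ARKit's eyebrow coefficients to the crude yes/no described above.
    func eyebrowsRaised(_ face: ARFaceAnchor) -> Bool {
        let left = face.blendShapes[.browOuterUpLeft]?.floatValue ?? 0
        let right = face.blendShapes[.browOuterUpRight]?.floatValue ?? 0
        return left > 0.6 && right > 0.6
    }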

The question to be answered here is: Does a change in the elevation of eyebrows constitute personal user data? For example, if an app developer leaks the fact that on Nov. 4, 2017, Mike Elgan raised his left eyebrow, has my privacy been violated? What if they added that the eyebrow raising was associated with a news headline I just read or a tweet by a politician?

That sounds like the beginning of a privacy violation. There’s just one problem. They can’t really know it’s me — they just know that someone claiming my name registered for their app, and that, later, a human face raised an eyebrow. I might have handed my phone to a nearby 5-year-old, for all they know. Also, they don’t know what the eyebrow was reacting to. Was it something on screen? Or did somebody in the room say something to elicit that reaction?

The eyebrow data is not only useless, it’s also unassociated with both an individual person and the source of the reaction. Oh, and it’s boring. Nobody would care. It’s junk data for anyone interested in profiling or exploiting the public.

Technopanic about leaked eyebrow-raising obscures the real threat of privacy violation by irresponsible or malicious face recognition.

That’s why I come not to bury Apple, but to praise it.

Turn that frown upside down

Face recognition will prove massively useful and convenient for corporate security. The most obvious use is replacing keycard door access with face recognition. Instead of swiping a card, just saunter right in with even better security (keycards can be stolen and spoofed).

This security can be extended to vehicles, machinery and mobile devices as well as to individual apps or specific corporate datasets.

Best of all, face recognition can be accompanied by peripheral A.I. checks that make it genuinely robust. For example: Is a second, unauthorized person trying to slip in when the door opens? Is the user under duress, under the influence of drugs, or falling asleep?

I believe great, secure face recognition could be one answer to the BYOD security problem, which still hasn’t been solved. Someday soon enterprises could forget about authorizing devices, and instead authorize users on an extremely granular basis (down to individual documents and applications).

Face recognition will benefit everyone, if done right. Or it will contribute to a world without privacy, if done wrong.

Apple is doing it right.

Apple’s approach is to radically separate the parts of face scanning. Face ID deals not in “pictures,” but in math. The face scan generates numbers, which are crunched by A.I. to determine whether the person now facing the camera is the same person who registered with Face ID. That’s all it does.

The scanning, the generation of numbers, the A.I. that judges whether there’s a match and all the rest happen on the phone itself, and the data is encrypted and locked on the device.
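
That separation shows up in how apps use Face ID at all: they go through the LocalAuthentication framework and get back nothing but a yes or a no. A minimal Swift sketch, with an illustrative reason string:

    import LocalAuthentication

    // Ask the system to authenticate the user with Face ID (or Touch ID on
    // older hardware). The app never sees face data, only the verdict.
    func unlockSensitiveFeature(completion: @escaping (Bool) -> Void) {
        let context = LAContext()
        var error: NSError?
        guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
            completion(false) // biometrics unavailable or not enrolled
            return
        }
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your documents") { success, _ in
            completion(success) // true only if the enrolled user matched
        }
    }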

There’s no need to trust that Apple would refuse to let a government or hacker use Face ID to identify a suspect, dissident or target. Apple is simply unable to hand that data over.

Meanwhile, the features that track changes in facial expression and whether the eyes are open are super useful, and users can enjoy apps that implement them without fear of privacy violation.

Instead of slamming Apple for its new face tech, privacy advocates should be raising awareness about the risks we face with irresponsible face recognition.

This story, “Critics are wrong to slam iPhone X’s new face tech,” was originally published by Computerworld.