
Removing the camera indicator light from smart glasses: what on earth do you want to film?

Guokr · 2025-12-08 16:27
The era of AI hardware: Privacy changes from a "wall" to a "fog"...

Do you still remember the product that set off this wave of smart-glasses mania? Ray-Ban Meta (the smart glasses co-branded by Meta and Ray-Ban) starts at $299; as of this spring, more than 2 million units have been sold.

Self-described digital geeks almost all own a pair. Many use them as a new tool for shooting lifestyle vlogs, recording their travels and wearing them around exhibitions. At one point the glasses were sold out on every e-commerce platform.

Image source: Tom’s Guide

An engineer named Bong Kim upended the intended way of using the glasses by modifying a single function: he disabled the recording indicator light on the Meta glasses.

“When people know they're being filmed, they become unnatural.”

Kim uploaded a video on YouTube with a straightforward title: "How to Make the Indicator Light of Meta Glasses (Including This Year's New Version) Stop Lighting Up".

He demonstrated how to use a small drill to physically sever the LED's circuit and permanently disable it. The system still believes the light is working normally, but it never turns on.

All the functions of the modified glasses work normally and they look as good as new. The only difference is that there is no longer a light reminder when recording.

Bong Kim's YouTube homepage | Image source: 404 Media

The video is only a few minutes long, but its content is enough to make Meta engineers break out in a cold sweat.

Years ago, Google Glass, the pioneer of smart glasses, was forced off the market amid a "peeping panic". Meta clearly did not want to repeat that mistake, so it built a safety baseline into the glasses: whenever the user takes a photo or records a video, a small white light on the frame turns on to tell the people nearby, "you are being filmed".

If someone covers the light with tape or a finger, the glasses immediately stop recording and display a notification: "Please remove the obstruction". This anti-cheating mechanism is also written into the privacy policy.
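Meta has not published its implementation, but the behavior described above, and the gap Kim's drill exploits, can be modeled in a minimal sketch. The `Recorder` class and `light_sensor` callback are illustrative names, not Meta's actual API:

```python
class Recorder:
    """Toy model of the tamper check described above; not Meta's actual code."""

    def __init__(self, light_sensor):
        # light_sensor() returns True when the LED appears obstructed.
        self.light_sensor = light_sensor
        self.recording = False
        self.notification = None

    def start_recording(self):
        if self.light_sensor():
            # Light covered with tape or a finger: refuse to record.
            self.recording = False
            self.notification = "Please remove the obstruction"
        else:
            self.recording = True
            self.notification = None
        return self.recording

# The gap Kim exploits: the check reads an obstruction sensor, not the LED
# circuit itself. Destroy the LED and the sensor still reports "all clear",
# so recording proceeds with no visible light.
modified = Recorder(light_sensor=lambda: False)
print(modified.start_recording())  # prints True: recording, yet no light shows
```

The point of the sketch: software can only verify what its sensors report, so a check that trusts the sensor while the LED is physically dead is satisfied even though no light is visible.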

That is the baseline Bong Kim broke. He also offers a modification service for $60 a pair; if a customer does not own the glasses, he will buy a pair, modify it, and ship it to them. According to him, his customers come from all over the world (though eBay recently removed his listing).

A 404 Media reporter who tried a modified pair found the result striking: in a dark room, only the wearer can see a very faint internal reflection, and others see no light at all.

The modified Meta glasses while recording | Image source: YouTube

The Perfect “Pretender”

In the Meta glasses modification community, buyers offer all kinds of justifications: some say the light is distracting when filming concerts; some say their baby cries whenever the light flashes during everyday videos; others claim they use the glasses to record consultations with their lawyers...

But when a reporter from 404 Media tried to contact several users who bought the "modified version" to ask about their specific reasons for purchase, none of them replied.

In the relevant discussion area on Reddit, opinions are divided into two camps. One is the "hardware freedom camp": they believe that since users have bought the device, they have the right to decide how it works and can modify it as they like.

The other is the "privacy realist camp": they worry that this kind of freedom will ultimately lead to a more dangerous reality, that is, all public places may be silently recorded.

Events have shown that the worry is not unfounded.

Just a few weeks ago, the University of San Francisco issued an internal warning: a man used Meta smart glasses to secretly film women on campus and uploaded the videos to a "PUA-style" social account.

The man was caught while using an unmodified pair. If the indicator light disappears entirely, can you tell whether someone is looking at you or filming you?

The person being watched can't tell. In September, a TikTok influencer named Navarro noticed that her beautician wore Meta glasses throughout a waxing service. She felt deeply uneasy the entire time, wondering, "Is she filming me?"

Navarro telling her story on TikTok | Image source: TikTok

Although the beautician explained that the glasses were not turned on at that time, Navarro still couldn't let it go. For weeks after the service, she was worried that the waxing process might have been secretly filmed or leaked.

The “Wall” Turns into “Fog”

Last year, on the eve of the "battle of a hundred smart glasses", we wrote an article arguing that as AI hardware spreads through daily life, internet-era privacy issues are evolving into deeper social problems.

We have long been living in a "digitized" era.

Algorithms have long collected our data "in the open". The privacy policy is right there, just too long for anyone to bother reading. Deliberately posting "I'm too broke to order takeout this month" on social media to bait the system into pushing coupons reflects both an understanding of, and a numbness toward, algorithmic surveillance.

Offline, however, unlike in our online lives, most people prefer to stay invisible: using a fake name on courier packages, hiding their faces in delivery-app profile photos, instinctively dodging street-photography cameras.

Wearable AI hardware is quietly piercing that thin barrier: the subject of data collection and analysis is shifting from the device's own user to everyone and everything around the user.

We are not surprised by cameras on street corners, and we check hotel rooms for hidden cameras first thing. But few people would suspect a pair of "sunglasses" walking toward them. The disguise is so perfect that data collection becomes hidden and barely noticeable.

News from the University of San Francisco | Image source: KRON4

"When people know they're being filmed, they become unnatural. Without the light, their behavior is more real." That is how Bong Kim himself explains the modification.

Some in the "hardware freedom camp" argue: "Thousands of people point their phone cameras at others without permission. Why isn't anyone doing anything about that?"

Because the perfect "pretender" raises the bar for privacy protection from "staying out of the frame" to "not even knowing you are already in it".

Take a real-life case. Two Harvard students, AnhPhu Nguyen and Caine Ardayfio, connected Meta smart glasses to the facial recognition service PimEyes and then to the people-search website FastPeopleSearch.

They simply pointed the glasses at strangers on the Boston subway, and the AI could instantly match a person's home address, phone number, and list of relatives. The passers-by never consented and never even noticed.

An experiment conducted by two Harvard students | Image source: 404 Media

Do you feel a bit powerless?

In the AI era, "digital doubles" that are modeled, synthesized, and reshaped can show up on adult websites and social media, and can even call your parents on the phone.

What's more troublesome is that once data is collected and uploaded to the Internet for circulation, it's very difficult to truly disappear.

A Hot Potato

Current privacy regulations (such as the well-known GDPR in the EU and the CCPA in California) mainly target the software and data-processing layers, governing how enterprises handle personal data. Data collection in public spaces by new hardware such as smart glasses, AI earphones, and AI rings sits in the same awkward position as secret filming with a phone: morally unacceptable, but if it goes undetected, the person being filmed has no recourse.

The hot potato is ultimately thrown to the manufacturers.

After the story broke, Meta responded that such modifications violate its terms of service. The company said the latest batch of glasses increases the LED's brightness, enlarges it from 1 mm to 2 mm for visibility, replaces the flashing light with a constant one, and adds built-in tamper detection.

But this hasn't stopped Kim from continuing to accept orders to modify the Meta glasses that have already been released.

"Recording by default", "less friction", "always on", and "smart but unobtrusive" are the main selling points of new-era AI hardware. The drawbacks are emerging just as fast.

Regarding the privacy mechanism design of digital products, the industry is not without more "hardcore" solutions.

Facing the same risk of unauthorized hardware modification, Apple's approach on the MacBook is to wire the camera's power circuit in series with the green indicator light at the physical level. If the indicator does not light up, whether it burned out on its own or was deliberately destroyed, the camera loses power entirely: a genuine all-or-nothing design in hardware. (By contrast, the indicator light on Meta glasses is on an independent circuit; Meta's fix is merely to make the light more visible.)
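The difference between the two designs can be captured in a toy model. The function name and `design` labels below are illustrative, not vendor terminology:

```python
def camera_powered(indicator_led_works: bool, design: str) -> bool:
    """Toy model of the two indicator designs described above (illustrative).

    "series":      MacBook-style; camera power runs through the LED circuit,
                   so a dead LED also means a dead camera.
    "independent": the LED sits on its own circuit, so destroying it leaves
                   the camera fully functional.
    """
    if design == "series":
        return indicator_led_works  # no light, no power, no footage
    if design == "independent":
        return True  # the camera works regardless of the LED
    raise ValueError(f"unknown design: {design}")

# What happens after someone drills out the LED on each design:
print(camera_powered(False, "series"))       # prints False: camera dies with the LED
print(camera_powered(False, "independent"))  # prints True: silent recording possible
```

The series design makes Kim's drill attack self-defeating: destroying the light destroys the camera along with it, so no software check is needed at all.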

Second, users should be given sufficient warning. Take mobile phones: early in their spread, some countries required that camera devices emit a perceptible shooting signal, such as a shutter sound that cannot be turned off. The sound is hard-coded in firmware, persists in silent mode, cannot be masked by plugging in headphones, and cannot be bypassed by software or jailbreaking. Crude as it is, this mandatory signal effectively reduces the concealment of secret filming.

iOS and Android pursue the same goal with a warning the system draws itself: a forced indicator dot in the status bar whenever the camera or microphone is active.

The last layer of protection is data localization. Some hardware products focused on safety and privacy keep core sensitive data on the local chip and never connect to the internet. The GDPR likewise pushes the principles of data minimization and "no upload unless necessary".

Trend reports from multiple market research firms show that in recent years users place far more trust in products with visible safety designs, such as physical camera covers and hardware switches, than in devices that rely on software promises alone.

Because in the AI era, people need certainty more than ever: the certainty of being able to prove their innocence and protect themselves.

The German writer Sascha Lobo proposed the concept of a "post-privacy society" at the re:publica conference in 2009 to describe the social state after the spread of social media and smart devices, in which the boundaries of privacy have been completely rewritten.

Privacy is no longer a "wall" but a "fog": vague, everywhere, and hard to defend against.

We have to admit that we're stepping into this era.

This article is from the WeChat official account “Guokr” (ID: Guokr42). Author: Gaoji Dongwu. Republished by 36Kr with permission.