This hidden feature of the Pixel 10 will tell you whether a photo has been edited by AI.
Google's Pixel, billed as the world's smartest phone, is now taking photo editing off your hands.
The newly released Pixel 10 series doesn't just use AI to help you shoot or to sharpen long-range shots; it also ships with a brand-new AI photo-editing tool. All you need to do is tell Gemini what kind of photo you want, and the AI edits it for you.
Is AI photography an irresistible trend?
When they first heard about this feature, several photography enthusiasts on the ifanr editorial team dismissed it as dispensable: they are already fluent in all sorts of post-processing software and can manually fine-tune a photo into whatever look they want, with no need to wrestle with an AI.
Many novice users, however, look at an unappealing photo and can't even tell what's wrong with it, let alone how to fix it.
Photos ruined by lens flare or harsh overhead light are extremely difficult to salvage in traditional editing software: keeping the overall tone and brightness of the frame consistent takes delicate, fiddly adjustments that leave many people at a loss.
Now all you have to do is tell Gemini you want the lighting in a photo fixed, and after a short wait you get back a natural-looking, optimized image.
Fixing lighting and composition
Even requests like removing background clutter, adding clouds to the sky, or a general and abstract instruction like "restore this old photo" can be understood by Gemini.
An AI assistant that helps with photo editing is nothing new. Some Chinese manufacturers' voice assistants can also adjust photos on command, and Apple demonstrated AI photo editing with Siri last year, although it has yet to ship.
But those are fundamentally different from Gemini's photo editing. Most of these assistants only understand single-purpose instructions such as "beautify" or "brighten"; the more capable ones can remove passers-by or apply preset styles.
Gemini, by contrast, uses its own image-generation model to redraw the original photo according to the user's prompt.
The approach itself is not entirely new. Posts about using Doubao or ChatGPT to touch up and beautify photos are common on social platforms, and ifanr has published a review and tutorial on Doubao photo editing, which is essentially the same thing as the instruction-based editing on the Pixel 10.
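For readers who want to try the same idea outside the Pixel, here is a minimal sketch of prompt-based photo editing through the public Gemini API, assuming the google-genai Python SDK and an image-capable model. The model name and output handling below are illustrative assumptions; the Pixel's built-in camera integration is not exposed this way.

```python
# Minimal sketch: prompt-based photo editing via the public Gemini API.
# Assumptions: `pip install google-genai pillow`, an API key in the environment,
# and an image-capable model; the model name is a placeholder and may change.
from io import BytesIO

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()  # reads the API key from the environment

original = Image.open("photo.jpg")

response = client.models.generate_content(
    model="gemini-2.0-flash-preview-image-generation",  # placeholder model name
    contents=[original, "Fix the harsh overhead lighting and balance the exposure."],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response can contain both text commentary and the redrawn image.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        edited = Image.open(BytesIO(part.inline_data.data))
        edited.save("photo_edited.jpg")
```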
The biggest advantage is that the feature is built into the Pixel 10's default photo app, so you don't have to hop between apps after taking a shot; the whole flow is very convenient.
Instruction: Remove the plastic bag. Image source: MKBHD
Since AI photo editing relies mainly on cloud-based large models to generate images, it is an easy AI feature to roll out widely. Chinese manufacturers could readily partner with large models such as Doubao to build instruction-based editing into their own systems.
As the only company in the world that designs its own chips and phones while also owning a top-tier large model, Google can give the Pixel more than just that.
The new Tensor G5 chip, boosted by TSMC's manufacturing process, runs the Gemini Nano model far more efficiently on-device, which lets Google build AI directly into the camera while keeping photo processing very fast.
This brings a killer photography feature to the Pixel 10 camera: the enhanced 100x zoom function, Pro Res Zoom.
Photos taken with 100x zoom. Left: Before AI optimization; Right: After AI optimization. Image source: CNET
For an ordinary phone camera, a shot at 100x zoom is little more than a blocky, mosaic-like blur, barely usable at all.
After the shot is taken, the Pixel 10 Pro uses on-device AI to reconstruct the details the sensor couldn't capture, producing a reasonably clear image in just 4-5 seconds.
Currently, some of the Pixel 10's AI imaging functions seem more like "AI for the sake of AI" and have limited value for users.
For example, the new Camera Coach feature uses on-device AI to analyze the viewfinder in real time, offers several options for the final shot, and then walks you step by step through taking a more professionally composed photo.
By contrast, Huawei's similar AI-assisted composition feature only asks the user to line up with markers in the viewfinder, and the phone zooms automatically to complete the composition. That ability to go straight to a result is closer to what we expect from artificial intelligence.
The boundary between expression and reality
From composing the shot to the actual imaging to post-editing, AI now permeates every stage of the Pixel's photography, and the debate over photo authenticity has become impossible to ignore.
Compared with last year's Pixel Studio, which stirred huge controversy by letting users freely alter ordinary photos, this year's Pixel 10 AI imaging features are noticeably more restrained: they focus on improving the final output and leave users less room for free-form creation.
Google has also brought the C2PA content credential standard to the Google Photos app. Going forward, every photo taken on a Pixel 10 will carry detailed information about which device shot it and whether it has been edited with AI.
Image source: The Verge
Isaac Reynolds, product manager for the Pixel camera, argues that the C2PA label, which is hard to tamper with once applied, is more effective than a traditional watermark. Google says the labeling scheme is being rolled out gradually and is still in an "education period".
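As a rough illustration of what such a label carries, here is a sketch that inspects a C2PA manifest already exported to JSON (for example with the open-source c2patool CLI) and looks for AI-related actions. The field names follow the public C2PA conventions, but the exact structure of Google's manifests is an assumption here.

```python
# Sketch: check an exported C2PA manifest (JSON) for signs of AI editing.
# Assumptions: the manifest was dumped to JSON beforehand (e.g. with the
# open-source `c2patool` CLI), and it uses the common "c2pa.actions" assertion
# whose actions may carry an IPTC digitalSourceType value.
import json

# IPTC digital source types commonly used in the C2PA ecosystem for AI content.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def photo_mentions_ai(manifest_path: str) -> bool:
    """Return True if any action in the active manifest looks AI-related."""
    with open(manifest_path, encoding="utf-8") as f:
        store = json.load(f)

    active = store.get("manifests", {}).get(store.get("active_manifest", ""), {})
    for assertion in active.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                return True
    return False

if __name__ == "__main__":
    print(photo_mentions_ai("manifest.json"))
```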
This raises a question: What kind of photos are considered "AI - edited"?
Adding a fire to a street scene or dropping a cockroach into a food photo is unquestionably "AI-edited". That may be one of Google's main reasons for adopting C2PA: to keep people from being fooled by these highly realistic AI-altered photos, and to answer the controversy around the Pixel 9 last year.
On where the line for AI-edited photos lies, Apple executive Craig Federighi described his own dilemma last year: Apple had always treated photos as a reliable record of reality rather than an "illusion", so it was initially reluctant to ship AI editing at all.
Under strong user demand, however, Apple was willing to take a small step and let users clean up insignificant details in their photos.
Photos edited this way are labeled as modified with the clean-up tool and can be reverted at any time.
So, does the 100x zoom on the Pixel 10 this year count as AI editing?
Phone photography has long relied on algorithms to recognize the subject and enhance the detail of the frame; Pro Res Zoom simply pushes that process further, so that an image magnified 100 times is still usable.
The "moon shot" modes once promoted by some camera phones likewise leaned heavily on algorithms to fill in detail, and were jokingly accused by netizens of "painting the moon".
The ultra-long-range telephoto features that Chinese phones tout today also depend heavily on reconstructing detail beyond roughly 30x zoom, which is essentially the same idea as Pro Res Zoom.
In Google's view, such a process already falls into the category of AI editing and is marked accordingly in C2PA.
Moreover, the multi-frame synthesis that many manufacturers use for imaging (sketched in the toy example below) is also counted as "AI editing" by Google.
Which raises a problem: what we thought of as "straight out of camera" phone photography in fact involves a great deal of computation, and there is as yet no agreed standard for what counts as "AI editing".
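For context, "multi-frame synthesis" broadly means merging a burst of frames into one cleaner image. The sketch below shows only the simplest form of that idea, naive frame averaging with NumPy and Pillow; it is not the pipeline any manufacturer actually ships, which adds alignment, ghost rejection, and tone mapping.

```python
# Toy sketch of the idea behind multi-frame synthesis: average a burst of
# same-sized frames to reduce random sensor noise. Real pipelines are far
# more involved; this only illustrates the basic principle.
import numpy as np
from PIL import Image

def merge_burst(paths: list[str]) -> Image.Image:
    """Average a burst of equally sized frames to suppress random noise."""
    stack = np.stack([np.asarray(Image.open(p), dtype=np.float32) for p in paths])
    merged = stack.mean(axis=0)  # noise falls roughly with the square root of the frame count
    return Image.fromarray(np.clip(merged, 0, 255).astype(np.uint8))

if __name__ == "__main__":
    merge_burst(["frame1.jpg", "frame2.jpg", "frame3.jpg"]).save("merged.jpg")
```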
After all, there is a significant difference in authenticity between a photo whose quality is simply enhanced by AI and a photo with a cockroach added by AI.
Once this standard spreads across the web and social platforms start surfacing the information, the act of taking and sharing photos may change in nature.
Most of the time, the photos we take and share are more about preserving and sharing memories and emotions, with expression outweighing reality.
C2PA, however, objectively records how that expression was put together and makes the record public.
It's like posting a carefully retouched photo on Xiaohongshu, only for anyone who taps the top-right corner to see a label saying the photo has been "retouched". Doesn't that instantly dampen the urge to post?
In fact, C2PA certification first appeared on the Leica M11-P, a camera that has little to do with AI.
Every photo it takes embeds details such as the camera name, the capture date, the changes made, and the tools used, so anyone can trace where the file came from.
For a camera like this, built for documentary photography, C2PA makes perfect sense: it can back up the authenticity of news images and protect the digital copyright of their creators.
In an era when everyone holds both a camera and a megaphone, the line between personal expression and photographic reality keeps blurring, and even a casually taken everyday photo can ignite a heated public debate.
AI can beautify our captured memories and give us more freedom in expression, but at the same time, it dilutes the sense of reality in images.
Finding a balance between "expression" and "reality" is an issue that the entire industry and society cannot avoid.