Before sales began, before the first reviews of the production Google Pixel 6 and 6 Pro, the leading IT publications, as usual, shared with the public their sweet dreams, dreamt to the whispered accompaniment of Google's marketers. I watched with amazement as the crude bulge of the camera bar was extolled as a design refinement, read arguments that "now the Google Pixels will show everyone" and that the Pixel 6 Pro would humiliate and punish the competition, and much more, eerily reminiscent of the ritual of shamans trying to reshape objective reality with the sound waves of their own voices. Since we are on the subject of mobile photography, let us recall where the Pixels stood before the sixth generation. To do this, I will quote an excerpt from a translation of an article published on XDA, which I very much liked for its objectivity in the first part and did not like at all in the second, where it dreams of the unattainable.
Google Pixel 6 Pro camera review: hardware finally worthy of Google's software smarts.
Since their inception, Google Pixel phones have embodied the principle of prioritizing software over hardware. This philosophy applied to every function of the smartphone, but it was especially noticeable in photography, where past Pixels did their best to ignore new trends in camera hardware altogether. When in 2017 many phone manufacturers began adding two or even three specialized photo modules alongside the main camera, Google kept just one. When in 2019 even Apple, known for its conservatism, offered an ultrawide sensor, the Pixel 4 said: "no thanks." A year later, in 2020, when Android brands were locked in an arms race over sensor size, Google stuck with the same aging main sensor it had been using since the Pixel 3.
At its core, Google's idea was that "our software handles images so well that we do not really need modern hardware." At first it worked, and worked very well. The first two or three generations of the Google Pixel were undeniably among the best camera phones available at the time of their release. Powered by the magic of Google's machine learning, the early Pixels offered flawless HDR, lifelike digital bokeh, and an industry-leading night mode. But even the world's most advanced software (which Google arguably has) cannot overcome mediocre, aging hardware, especially when competitors such as Apple, Samsung, Huawei and many others have followed in Google's footsteps, prioritizing computational photography while also updating their camera hardware.
This policy meant that after the release of the Pixel 4, Google's cameras stopped topping the authoritative rankings. It may even have happened a little earlier, when the Pixel 3's camera ceded the throne to the Huawei Mate 20 Pro, but that is debatable.
After this obvious failure, Google finally sensed that something was wrong and tried to fix the situation with the Pixel 6 and 6 Pro generation, which brings a significant hardware upgrade. Of course, the camera hardware upgrade does not mean Google has abandoned its "computational photography is king" philosophy. The new camera hardware simply complements Google's machine-learning imaging software, which itself has received a major hardware boost in the form of the new custom-built Tensor SoC.
The main question everyone wants answered is this: "Will the Google Pixel 6 Pro reclaim the throne of the best camera phone?" The answer is not so simple, because computational photography is everywhere these days and the competition is not standing still. Let's take a closer look at the cameras of the Pixel 6 and 6 Pro.
Google Pixel 6 Pro: camera hardware
The Google Pixel 6 Pro's updated camera hardware is topped by Samsung's 50-megapixel GN1 sensor, with a 1.2-micron pixel size and a 1/1.31-inch optical format. This is a huge leap from the previous Sony IMX363 sensor that Google used from the Pixel 3 all the way up to the Pixel 5a. The Pixel 6 Pro also features a periscope zoom lens for the first time in the series, delivering lossless 4x optical zoom.
These hardware improvements are huge. The GN1 has a significantly larger sensor area than the Sony IMX363, which means it naturally gathers far more light and also produces more natural bokeh thanks to its shallower depth of field.
Caption: Comparison of the main camera sensor areas of the Pixel 5a and Pixel 6 Pro.
This is where the excerpt from the article ends.
We continue the conversation
So, before the Google Pixel 6 and 6 Pro went on sale, Google smartphones were no longer perceived as camera flagships, and the new Google phones were expected to restore their former glory. In other words, the previous Pixel cameras were mediocre (for the money): despite all the claims and promises, Google's "augmented reality" camera software, running on outdated hardware, could not carry the load. After sales began, after the first tests and exchanges of impressions, it suddenly turned out that the new Pixel cameras, despite a roughly fourfold increase in sensor area, were at best on par with the flagships of other manufacturers already on sale. The most accurate and objective assessment of the new Pixel cameras was given by the DxOMark laboratory (whatever criticism the site draws, it tends to come from those ranked lower, never from the leaders), which effectively demolished all the shamanic chants about superiority:
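As a sanity check on that "roughly fourfold" figure, the area ratio can be estimated from the two sensors' optical formats, using the common convention that a 1-inch type sensor has a diagonal of about 16 mm (not 25.4 mm). This is a back-of-the-envelope sketch; the format values are the widely reported ones for each sensor, not figures taken from this article:

```python
# Rough sensor-area comparison from the "1/x-inch" optical format,
# assuming the usual ~16 mm diagonal per nominal inch and a 4:3 sensor.
import math

def area_mm2(optical_format: float, aspect=(4, 3)) -> float:
    """Approximate active area (mm^2) of a sensor given its 1/x-inch format."""
    diag = 16.0 / optical_format            # diagonal in mm
    w, h = aspect
    scale = diag / math.hypot(w, h)         # mm per aspect-ratio unit
    return (w * scale) * (h * scale)

imx363 = area_mm2(2.55)   # Sony IMX363 (Pixel 3 to 5a), 1/2.55" type
gn1    = area_mm2(1.31)   # Samsung GN1 (Pixel 6 Pro), 1/1.31" type
print(f"IMX363 ≈ {imx363:.1f} mm², GN1 ≈ {gn1:.1f} mm², "
      f"ratio ≈ {gn1 / imx363:.1f}x")
```

The ratio works out to about 3.8x, which matches the "roughly fourfold" claim above.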
And Google's last hopes for leadership were (for me personally) buried by the recent blind comparison of the Galaxy S21 Ultra and Pixel 6/6 Pro cameras in the fishmonger scene, where I was sincerely amazed that the majority chose the Pixel. I do not know what guided the people who voted for yellow snow, ice and rotten octopus (or perhaps squid), but it happened: the surströmming fans won:
The indignation stems from a misunderstanding of a simple thing: photos of fish in such a tonal palette will never make it into advertising, they will not be printed in newspapers, they will not appear on the Instagram accounts of pop divas with millions of followers and hired graphic designers policing quality. Even the people who chose the wrong octopus will, looking at the photo again after a while, be surprised by their own choice. This raises a question about Google's camera AI: did it know it was photographing fish? The answer is obviously no; it applied a built-in settings profile created for indoor scenes with artificial light, the same one for an octopus as for a mop. It is for reasons like these, among others, that the Pixel 6 currently trails the iPhones and Chinese brands such as Xiaomi and Huawei, which take a more balanced approach to handling light. The Pixel 6 and 6 Pro cameras now do the very thing Huawei's flagships were recently accused of, with epithets like "eye-searing": they produce "poisonous," oversaturated colors. Let's compare the current camera capabilities of the Google Pixel 6 with the DxOMark leader, the Huawei P50 Pro:
This can only be called a complete defeat for the Pixel, given, in addition to its murky light-analysis algorithms (gray instead of white), the problem of lower detail, which Google solves by "smearing" and color fill. Then again, everyone does this, and in ten years the photos of that same Huawei P50 Pro will look just as flawed.
The Google Pixel 6 and 6 Pro have only just hit the market; their software is still raw, many bugs are being fixed right now, and updates arrive one after another. Will this affect the camera? Can Google's programmers squeeze anything more out of the Samsung GN1 sensor? Let's wait and see, but for now Google has nothing to brag about. Then again, the company should not be blamed too harshly: there are about ten smartphone models on the market with a GN1 sensor, and none of them appears near the top of the DxOMark rankings.
Finally, I would like to lighten the gloomy picture a little and talk about the first collision of the judicial system with the new technologies of digital photography and visualization. On Mobile-Review and in the comments, we have been noting for several years that in the final photo we do not see the very objects we photographed. They are redrawn by the algorithm, and in many cases individual details are simply invented: they are not really there. This happens, for example, when photographing the Moon from the surface of the planet; without software processing it simply cannot be captured in detail with smartphone optics. Similar considerations probably guided Kenosha County Judge Bruce Schroeder (Kenosha is a city in the northern United States), who prohibited prosecutor Binger from using pinch-to-zoom when showing photos from his iPad. This happened after defense attorney Richards remarked that when the image on the iPad screen is enlarged, Apple's software shows not what is actually there, but what it thinks is there.
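The defense's objection has a kernel of technical truth: any digital zoom must interpolate, that is, compute pixel values that the sensor never captured. The sketch below is not Apple's actual algorithm (which is unknown to us); it is a minimal bilinear-interpolation example showing that most pixels in an upscaled image are calculated, not recorded:

```python
# Illustrative only: upscale a tiny 2x2 "image" of brightness values.
# Every value that is not at a corner of the grid is invented by the math,
# which is the essence of the courtroom objection to pinch-to-zoom.

def bilinear_upscale(img, factor):
    """Upscale a 2D grid of brightness values by bilinear interpolation."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = []
    for Y in range(H):
        y = Y * (h - 1) / (H - 1)        # map output row to source coords
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for X in range(W):
            x = X * (w - 1) / (W - 1)    # map output column to source coords
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

tiny = [[0, 100], [100, 0]]          # four pixels actually "captured"
big = bilinear_upscale(tiny, 4)      # an 8x8 grid: 60 of 64 values are computed
print(big[3][3])                     # an intermediate value never in the source
```

Real resampling filters (bicubic, Lanczos, ML-based upscalers) are more sophisticated, but the principle is the same: zooming in shows an estimate, not a recording.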
The world is sliding into an abyss of digital fantasy that emerges after software processing of digital photos. Against this background, is it not time to stop calling digital pictures "photographs"? Is it not time to start using terms like "a picture painted by the landscape artist Huawei," "by the portraitist Samsung," "by the Cubist Digma"? After all, we no longer get what we saw.