You may have noticed a trend that places several camera lenses on the backs of popular smartphones. The iPhone 7 Plus, LG G6, and many other top-of-the-line smartphones (with the notable exception of the Samsung Galaxy S8) carry two rear camera lenses, and more lenses may be coming to future flagship models. Dedicated camera devices like the sold-out (until late 2017) Light L16 pack a whopping 16 lenses into a relatively small footprint.
The question that many find themselves asking these days is, "Why?"
In this Multinewmedia exclusive, we'll walk through brief explanations of the many reasons for adding additional lenses and how they impact camera performance.
One of the most straightforward advantages of having multiple lenses is the ability to pack more data into a digital image. This data is counted pixel by pixel: pixels are the individual dots of color that combine to form an image. Because so many pixels are required to make a photograph, they are counted in groups of one million, and these groups are referred to as megapixels (MP). Instead of cramming a single large lens and sensor into a phone (as the Nokia Lumia 1020 did years ago to achieve 41MP resolution), it is possible to use several smaller lenses to take independent shots that software stitches together after the photo is taken. The result? An image with a higher megapixel count than any of the individual lenses could produce on its own.
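The pixel arithmetic above can be sketched in a couple of lines (a minimal illustration; the 4032×3024 resolution is simply a typical 12MP sensor layout, not a figure from any particular phone):

```python
# Megapixels are just total pixels divided by one million.
def megapixels(width, height):
    return width * height / 1_000_000

# A typical 12MP smartphone sensor (illustrative numbers):
print(megapixels(4032, 3024))  # → 12.192768
```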
The result isn't necessarily an image with the sum of all lens megapixels, however, because some data from various lenses overlaps. But with current technology it takes significantly less hardware space to capture a high-resolution photograph using several lower-resolution lenses than one large high-resolution lens. Just how far can we go in the pursuit of achieving greater resolutions? A 3.2 gigapixel (3,200MP) camera is currently under construction at the Large Synoptic Survey Telescope project, but it certainly won't fit on your phone, or likely in the room you're currently occupying.
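To see why the totals don't simply add up, here's a rough back-of-the-envelope sketch (the per-lens resolution and overlap fraction are illustrative assumptions, not specs from any real device):

```python
# Two overlapping frames cover less than the sum of their pixels,
# because the shared region of the scene is counted once, not twice.
def stitched_megapixels(mp_per_lens, num_lenses, overlap_fraction):
    """Rough estimate: each lens beyond the first contributes only
    its non-overlapping portion of the scene."""
    return mp_per_lens + (num_lenses - 1) * mp_per_lens * (1 - overlap_fraction)

# Two 12MP frames sharing 25% of their field of view:
print(stitched_megapixels(12, 2, 0.25))  # → 21.0, not 24
```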
Some lenses perform well in low light, and others do not, but the lenses built for low-light performance aren't always the type you'd want for every application. Therefore, a second lens is sometimes added specifically to improve low-light performance. After all, software can boost the brightness of what a lens observes to make pictures taken in near-dark environments viewable, but doing so creates that "underwater" effect that distorts colors, pixelates the image, and adds a surreal glow.
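A tiny numeric sketch shows why a software brightness boost makes a photo grainy rather than cleaner (all numbers are illustrative):

```python
# A pixel reading is scene signal plus sensor noise. A digital
# brightness boost multiplies the whole reading, so the noise
# grows right along with the signal.
signal = 10.0   # dim scene (illustrative units)
noise = 1.5     # sensor read noise (illustrative)
gain = 8        # software exposure boost

boosted_signal = gain * signal   # 80.0
boosted_noise = gain * noise     # 12.0 -- amplified too
# The signal-to-noise ratio is unchanged (10/1.5 == 80/12),
# which is why the image looks brighter but just as noisy.
```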
You take lively group photos, you take breathtaking landscape shots, and you take socially questionable selfies. In other words, you're an expert at cramming a lot of action into an individual photograph. You were likely relieved when 4:3 (standard) sensors gave way to 16:9 (wide) in step with display standards, but now you're ready to move into a whole new dimension of super-wide photographs. Sure, you could stitch images together manually (or have software do it for you, as panorama apps do), or you could just use an appropriate wide-angle lens built into your device.
While wide-angle lenses are beginning to appear on the backs of high-end smartphones, many more models are adopting them as the front-facing lens. Any selfie expert can attest that it is difficult to fit every desired element into the frame while working as both subject and photographer with an outstretched arm. For this reason, expect wide-angle lenses to become common on the front of phones first, and to appear on both sides of devices over time.
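The "wide" in wide-angle comes down to focal length. Under a simple thin-lens approximation, the horizontal field of view follows from sensor width and focal length (the millimeter values below are illustrative, not taken from any specific phone):

```python
import math

# Horizontal field of view from sensor width and focal length
# (thin-lens approximation).
def fov_degrees(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A shorter focal length over the same sensor yields a wider view:
normal = fov_degrees(4.8, 4.0)   # ~62 degrees
wide = fov_degrees(4.8, 2.2)     # ~95 degrees
```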
Humans, like most animals, rely on two light sensors (err... eyes) to perceive depth in the natural world. Your digital devices aren't much different. Lenses channel photons (light) so that your camera can create a two-dimensional representation of the three-dimensional world. It is the offset between eyes, or lenses in this case, that allows distances to be calculated and depth to be accurately portrayed. If the variance between what your left eye and right eye see is minuscule, then the object is far away. If the variance is large, then the object (maybe a predator!) is very near. While not an extremely common use case in today's cellphones, multiple camera inputs do allow 3D, or stereoscopic, images to be captured. The offset in perspective between the two lenses allows 3D display technologies to show an image with perceived depth.
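That left-eye/right-eye variance is what computer vision calls disparity, and the relationship between disparity and distance can be sketched as Z = f·B/d. A minimal illustration, with made-up numbers rather than real device parameters:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal
# length in pixels, B is the baseline (distance between the lenses),
# and d is the disparity (pixel offset of the same object between
# the two images).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000px focal length, 1cm lens separation.
near = depth_from_disparity(1000, 0.01, 50)   # 0.2 m -- big offset, close
far = depth_from_disparity(1000, 0.01, 2)     # 5.0 m -- small offset, distant
```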
Due to the nature of lenses themselves, they can be engineered or adjusted to alter their focal point and magnification. We've all heard of zoom lenses, right? Camera lenses, telescopes, and microscopes all operate under the same optical principles, so focus and zoom are inherent properties of their design. Adding a second lens with different focus or zoom characteristics expands the settings available beyond those of the first lens. Additionally, the stereoscopic effect detailed previously can be used to calculate the appropriate focus for a photographed object: the data provided by two lenses makes it possible to determine how far away the object is. Think of this as using 3D, or stereoscopic, technology to improve two-dimensional image quality rather than to create a three-dimensional image.
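For the dual-focal-length case, the "zoom" a second lens adds is simply the ratio of the two focal lengths (the 28mm/56mm pairing below is an illustrative example, not a specific phone's specs):

```python
# Optical zoom between two fixed lenses is the ratio of their
# focal lengths (35mm-equivalent values are illustrative).
def zoom_ratio(tele_focal_mm, wide_focal_mm):
    return tele_focal_mm / wide_focal_mm

# A 28mm wide lens paired with a 56mm telephoto gives 2x zoom:
print(zoom_ratio(56, 28))  # → 2.0
```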
We hope that this information has been helpful, but now it's your turn to be heard. Tell us what we missed, or what questions you still have after reading. Type your replies in the comments section below. And don't forget to Like and Share this article.
# # #