Rather than relying purely on optical processes, computational photography uses digital computation to augment the image-making process. If you use a smartphone camera, you’re almost certainly already benefiting from computational processes.
Did you know that when you open the camera app on your smartphone, it starts capturing images from the very first moment it opens? Even before you have pressed the button to take your photo, your phone has already captured multiple images at varying exposures. It does this so that it can automatically stack several images together to increase the dynamic range of the final photo. See my video on exposure bracketing to see what I mean. Photographers use this technique all the time in their software of choice to ensure they have adequate detail in both their highlights and shadows. Using computation, your smartphone does all this automatically without you even realising. It also performs a whole host of other processes, such as removing red eye, sharpening the image, correcting colours and minimising motion blur. Modern phone cameras do a fantastic job of improving your photos, although they take away your ability to control the process.
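To make the idea of exposure stacking concrete, here is a deliberately simplified sketch in Python. It is not how any phone manufacturer actually implements HDR; real pipelines align frames, handle motion and tone-map the result. The function name, the mid-grey weighting and the example gain values are all my own illustrative assumptions. Each pixel of the output is a weighted average across the bracketed frames, favouring whichever frame is best exposed at that pixel.

```python
import numpy as np

def fuse_exposures(frames):
    """Naively fuse a bracketed exposure stack (toy sketch).

    frames: list of float arrays in [0, 1], same shape, captured
    at different exposures. Each output pixel is a weighted
    average of the frames, with more weight given to frames
    whose value there sits near mid-grey (i.e. well exposed).
    """
    stack = np.stack(frames)                       # shape (n, ...)
    # Gaussian weighting around mid-grey (0.5); sigma chosen arbitrarily
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)

# Simulate three exposures of the same scene: true radiance exceeds
# the 0-1 range a single frame can record, so each frame clips somewhere
scene = np.linspace(0.0, 2.0, 5)
frames = [np.clip(scene * gain, 0, 1) for gain in (0.4, 1.0, 2.5)]
fused = fuse_exposures(frames)
```

The point of the sketch is only that detail clipped to white in the bright frame can be recovered from the dark frame, and vice versa, which is exactly what bracketing achieves in editing software.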
Because phones need to be compact and lightweight, they can’t fit in all the optical hardware that a DSLR or mirrorless camera can. This is part of the reason behind the drive to improve computation in photography. As digital camera users, we’re all familiar with fast-aperture lenses that can often be very large and heavy, but we use them because of their ability to create shallow depth of field and brilliantly blurry backgrounds. To create the same effect, smartphone cameras detect what the main subject of the scene is and then artificially blur the background to mimic bokeh. They have other tricks up their sleeves too: for example, they can simulate long exposures by taking a series of shorter-exposure images and stacking them together to create an image that looks like a much longer exposure.
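The long-exposure trick is the easiest of these to sketch. Assuming the burst frames are already aligned (a big assumption; real phones do this with gyroscope data and image registration), simulating a long exposure is essentially just averaging the frames. The helper name and example values below are mine, for illustration only.

```python
import numpy as np

def simulate_long_exposure(frames):
    """Average a burst of short exposures (toy sketch).

    Averaging N aligned frames smooths anything that moved between
    them (water, clouds, traffic), much as one long exposure would,
    while static detail stays sharp and random sensor noise falls
    by roughly a factor of sqrt(N).
    """
    return np.mean(np.stack(frames), axis=0)

# A tiny two-pixel example across an 8-frame burst:
# pixel 0 is static, pixel 1 'moves' (its value changes every frame)
rng = np.random.default_rng(0)
burst = [np.array([0.5, rng.uniform(0.0, 1.0)]) for _ in range(8)]
result = simulate_long_exposure(burst)
```

The static pixel comes out unchanged, while the moving pixel is smeared towards its average, which is why stacked bursts give that silky-water look without a tripod or a neutral-density filter.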
The development of computation in photography seems to be heading towards enhanced manipulation of the scenes we see before us. Augmented reality is already a feature we have come to expect, even if right now it is mainly used for novelty effects, such as adding new hair or a pair of glasses to a selfie. Progress is fast though, and it may not be long before we see more serious features, such as the ability to alter the point of focus in a photo after it has been taken. It could even become possible to change the lighting in a scene, so that the light appears to come from a different direction from the one it actually came from when the photograph was taken. Indeed, we are already seeing tools in editing software that can replace skies with completely new ones to give the scene a different appearance.
Disruptive technology always divides opinion, and many photographers are particularly opposed to computational photography. It is often seen as a low-effort, inferior alternative to optical photography, and when it comes to effort this is certainly true. It’s hard to describe the results as inferior, though: more artificial, yes, but in terms of quality just as good as, if not better than, an image processed in editing software. There will always be genres of photography that rely on the image being an accurate representation of the subject. It’s not hard to see that journalism, documentary, record and street photography, for example, would be diminished by augmented-reality manipulation, but if we are making photos as art, then there should be no limits on the tools and processes we can use to create that art. There is certainly something to be said about the lack of control, though. Creating an image and processing it is often a very personal process, and styles can vary wildly from one individual to another. I certainly wouldn’t want to lose that control, so I would always choose to have the option to turn off any automation.
I think ultimately there is a place for both optical and computational photography. We have seen multiple technological advances in music formats over the last few decades, and although we can now listen to music that has no physical format at all, there is growing demand from a section of the public to buy and listen to music on vinyl records. Similarly, film photography is still popular with a number of photographers who enjoy the analogue process, and as we move towards a more automated future of computational photography, there will always be some people who prefer to have full control over their creative process. For many people, this will add a value to their work that a computer algorithm or neural network simply cannot replicate.