I’ve been wondering for a while when silicon (i.e., computation) is going to start substituting for glass (i.e., high-quality optics) in photography. It seems we may be getting close.
By using special optics that render the image with a uniform, predictable blur, engineers can computationally recover an image with a much greater depth of field than a comparable lens system could deliver on its own. It’s already finding application in various surveillance scenarios and may come to camera phones.
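The recovery step, at least conceptually, amounts to deconvolving the capture with the known, depth-invariant blur kernel of the optics. Here’s a minimal sketch in Python of that idea using a Wiener filter; the kernel and noise figures are placeholders, and real systems use carefully engineered phase masks rather than anything this simple:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Recover a sharper image from a uniformly blurred capture,
    given the known point-spread function (PSF) of the optics.

    blurred         : 2-D array, the blurry capture
    psf             : 2-D array, the known blur kernel, same shape as
                      blurred (pad a smaller kernel with zeros first)
    noise_to_signal : regularization term; larger values suppress noise
                      at the cost of sharpness
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the optics
    B = np.fft.fft2(blurred)                 # spectrum of the blurred image
    # Wiener filter: invert the blur where signal dominates the noise
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * B))
```

Because the engineered blur is the same at every subject distance, one deconvolution pass sharpens the whole scene, which is where the extended depth of field comes from.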
Of course, what I’d really like is the opposite. The small sensors in digital cameras allow the use of smaller, less expensive lens systems to achieve high levels of quality compared to a 35mm camera, but the short focal lengths of those lenses result in excessive depth of field, even at wide-open apertures. As a result, it’s difficult to blur the background behind your subject, a common and useful technique. You can certainly do it with Photoshop, but that requires masking the subject separately from the background, which is a pain.
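To put some rough, back-of-the-envelope numbers on that, the standard thin-lens depth-of-field formulas show how lopsided the comparison is. The focal lengths, apertures, and circle-of-confusion values below are assumptions chosen to stand in for a typical compact digicam versus a 35mm camera framing the same shot:

```python
def depth_of_field(focal_mm, f_number, coc_mm, subject_mm):
    """Approximate near/far limits of acceptable focus (thin-lens model)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm)
    far = (hyperfocal * subject_mm / (hyperfocal - subject_mm)
           if subject_mm < hyperfocal else float("inf"))
    return near, far

subject = 2000  # subject 2 m away, in mm

# Hypothetical compact digicam: 7.7 mm lens at f/2.8, ~0.006 mm circle of confusion
print(depth_of_field(7.7, 2.8, 0.006, subject))  # roughly 1.3 m to 4.6 m

# 35mm camera framing the same shot: 50 mm at f/2.8, ~0.03 mm circle of confusion
print(depth_of_field(50, 2.8, 0.03, subject))    # roughly 1.9 m to 2.1 m
```

Wide open, the little camera keeps several metres acceptably sharp while the 35mm frame isolates a band of about 25 cm, which is exactly why the background refuses to blur.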
It would be really cool if I could adjust the depth of field virtually, right on the camera. Even better would be getting the raw blurred sensor data and tweaking the effective focal plane and depth of field in Photoshop.
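If a camera ever did hand over enough data, the post-processing step might look something like this sketch, which assumes you somehow have a per-pixel depth map (the part today’s cameras don’t give you) and blurs each pixel in proportion to its distance from a chosen focal plane:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth_m, focal_plane_m, max_blur_px=8, levels=6):
    """Simulate a shallower depth of field after the fact.

    image         : H x W x 3 float array, the all-in-focus capture
    depth_m       : H x W array of per-pixel distances (hypothetical input)
    focal_plane_m : distance to keep sharp
    max_blur_px   : blur applied at the largest depth deviation
    """
    # Desired blur radius grows with distance from the chosen focal plane
    deviation = np.abs(depth_m - focal_plane_m)
    blur = max_blur_px * deviation / max(deviation.max(), 1e-6)

    # Build a small stack of progressively blurrier copies and pick,
    # per pixel, the level closest to that pixel's desired blur
    sigmas = np.linspace(0, max_blur_px, levels)
    level = np.abs(blur[..., None] - sigmas).argmin(axis=-1)

    out = np.zeros_like(image)
    for i, sigma in enumerate(sigmas):
        layer = (image if sigma == 0
                 else gaussian_filter(image, sigma=(sigma, sigma, 0)))
        out[level == i] = layer[level == i]
    return out
```

A real implementation would blend smoothly between levels and handle occlusion edges rather than hard-selecting them, but the principle is the same: with depth information in hand, the focal plane becomes a slider instead of a mechanical commitment made at exposure time.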