Google computer scientist Sam Hasinoff has developed a new technique that he claims will radically cut exposure times when shooting at small apertures, reducing the risk of blurry pictures. The software will be published by, you guessed it, Google next month. It uses focus stacking to shorten the exposure a shot requires. In traditional photography, increasing depth of field means narrowing the aperture, which reduces the amount of light reaching the sensor and therefore lengthens the exposure time needed to capture the photo properly. Longer exposures mean a greater risk of blur.
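To put a number on that trade-off: light gathered scales with aperture area, which falls off with the square of the f-number, so stopping down from f/2.8 to f/8.0 costs roughly 8x in shutter time. Here's a quick back-of-the-envelope calculation (my own illustration, not part of Google's software):

```python
def exposure_scale(f_from, f_to):
    """How much longer the shutter must stay open when stopping
    down from f/f_from to f/f_to (same ISO, same scene light).
    Light gathered scales as 1 / f-number squared."""
    return (f_to / f_from) ** 2

# Stopping down from f/2.8 to f/8.0 needs roughly 8x the
# exposure time, which is where the blur risk comes from.
print(round(exposure_scale(2.8, 8.0), 1))  # → 8.2
```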
Hasinoff spoke with New Scientist magazine and explained that the method automatically calculates which photos produce the desired picture for any particular exposure. He is quoted as saying, “If either the scene or camera is moving, our method will record less motion blur, leading to a sharper and more pleasing photo.”
To me, this sounds like the software is using a combination of HDR and focus stacking, and I’m not quite sure how well that would work with a rapidly moving subject. It seems everything would happen behind the scenes: the camera would take several photos at a wider aperture and merge them to simulate the effect of shooting at f/8.0. Its use would be ideal in compact cameras, phones, and the like.
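For a feel of what the merge step in focus stacking involves, here's a minimal sketch (my own illustration, not Hasinoff's method): take several frames focused at different depths, score each pixel by local contrast, and keep the sharpest source frame at every location. It assumes aligned grayscale frames as NumPy arrays.

```python
import numpy as np

def focus_stack(frames):
    """Merge frames focused at different depths into one image
    by keeping, at each pixel, the value from the frame that is
    locally sharpest there. frames: 2-D float arrays, same shape."""
    stack = np.stack(frames)                       # (n, H, W)
    # Local sharpness proxy: magnitude of the image gradient.
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = np.hypot(gx, gy)
    # Index of the sharpest frame at every pixel.
    best = np.argmax(sharpness, axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real implementation would align the frames first and smooth the per-pixel decision to avoid seams, but the core idea is this per-pixel "pick the sharpest" selection.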