

WYSIWYG Computational Photography via Viewfinder Editing
Dissertation, Stanford University, December 2013.

The past decade witnessed a rise in the ubiquity and capability of digital photography, paced by advances in embedded devices, image processing, and social media. Along with it, the popularity of computational photography also grew. Many computational photography techniques work by first capturing a coded representation of the scene (a stack of photographs with different settings, an image obtained via a modified optical path, et cetera) and then computationally decoding it later as a post-process according to the user's specification. However, the coded representation, which is what is available to the user at the time of capture, is often not sufficiently indicative of the decoded output that will be produced later. Depending on the computational photography technique involved, the coded representation may appear to be a distorted image, or may not be an image at all. Consequently, these techniques discard one of the most significant attractions of digital photography: the what-you-see-is-what-you-get (WYSIWYG) experience.

In response, this dissertation explores a new kind of interface for manipulating images in computational photography applications, called viewfinder editing. With viewfinder editing, the viewfinder more accurately reflects the final image the user intends to create: the user can alter the local or global appearance of the photograph via stroke-based input on a touch-enabled digital viewfinder, and the edits are propagated spatiotemporally. Furthermore, the user specifies via the interface how the coded representation should be decoded, guiding the acquisition and composition of photographs and giving immediate visual feedback. Thus, the WYSIWYG aspect is reclaimed, enriching the user's photographing experience and helping them make artistic decisions before or during capture, instead of after. This dissertation realizes and presents a real-time implementation of viewfinder editing on a mobile platform, the first of its kind. The implementation is enabled by a new spatiotemporal edit propagation method that meaningfully combines and improves existing algorithms, achieving an order-of-magnitude speed-up over existing methods; the new method trades away spatial locality for efficiency and robustness against camera or scene motion. In particular, new camera control algorithms for stack metering and focusing are presented, which take advantage of the user's intent, as indicated via the viewfinder editing interface, and optimize the camera parameters accordingly. Finally, several applications of the framework are demonstrated, such as high-dynamic-range (HDR) multi-exposure photography, focal stack composition, selective colorization, and general tonal editing.

Focal Stack Compositing for Depth of Field Control
Stanford Computer Science Tech Report CSTR 2012-01, 2012.
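The dissertation's spatiotemporal edit propagation method is not reproduced here, but the general idea behind stroke-based edit propagation can be illustrated with a minimal single-frame sketch: a user stroke marks a few pixels, the edit spreads to other pixels with similar color, and the resulting per-pixel weight drives an edit such as selective colorization. Everything below (the function names, the Gaussian color-affinity weighting, the desaturation edit) is an illustrative assumption, not the thesis algorithm:

```python
import numpy as np

def propagate_edit(image, stroke_mask, sigma=0.15):
    """Spread a stroke to color-similar pixels (toy sketch, not the thesis method).
    image: float array (H, W, 3) in [0, 1]; stroke_mask: bool array (H, W)."""
    ref = image[stroke_mask].mean(axis=0)       # mean color under the stroke
    d2 = ((image - ref) ** 2).sum(axis=-1)      # squared color distance per pixel
    return np.exp(-d2 / (2 * sigma ** 2))       # per-pixel edit strength in [0, 1]

def selective_colorize(image, weight):
    """Keep color where the edit weight is high; desaturate elsewhere."""
    gray = image.mean(axis=-1, keepdims=True)   # crude luminance proxy
    return weight[..., None] * image + (1 - weight[..., None]) * gray

# Demo: red square on a gray background; the stroke touches the red square,
# so the "keep color" edit propagates to all red pixels and grays out the rest.
img = np.full((4, 4, 3), 0.5)
img[:2, :2] = [1.0, 0.0, 0.0]
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
w = propagate_edit(img, mask)
out = selective_colorize(img, w)
```

A real system would also weight by spatial and temporal proximity and run on a video stream; this sketch keeps only the color-affinity term to stay short.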

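As a rough illustration of the HDR multi-exposure application mentioned above, the sketch below composites an exposure stack with per-pixel "well-exposedness" weights, in the spirit of classic exposure fusion. This is an assumed toy formulation, not the dissertation's camera pipeline; the function name and the mid-tone Gaussian weighting are hypothetical choices:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Blend an exposure stack with weights favoring well-exposed (mid-tone) pixels.
    Toy sketch of exposure fusion, not the thesis pipeline.
    stack: float array (N, H, W) of grayscale exposures in [0, 1]."""
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # favor values near 0.5
    w = w / w.sum(axis=0, keepdims=True)                  # normalize over the stack
    return (w * stack).sum(axis=0)                        # weighted per-pixel blend

# Demo: an underexposed and an overexposed frame, symmetric about mid-gray,
# receive equal weights, so the fused result sits at 0.5.
dark = np.full((2, 2), 0.1)
bright = np.full((2, 2), 0.9)
fused = fuse_exposures(np.stack([dark, bright]))  # → 0.5 everywhere
```

Production exposure fusion (e.g. Mertens-style) adds contrast and saturation terms and blends across a multi-scale pyramid; the single well-exposedness term suffices to show the idea.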