Computational photography is replacing many of a camera's physical components with image manipulation, but surely the final step is to remove the need for a lens altogether. Why bother focusing all that light when you can simply compute the image:
Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras.
Here, we report a simple demonstration of imaging using a bare CMOS sensor that utilizes computation. The technique relies on the space variant point-spread functions resulting from the interaction of a point source in the field of view with the image sensor.
These space-variant point-spread functions are combined with a reconstruction algorithm in order to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging at the native frame rate of the sensor. Finally, we performed experiments to analyze the parametric impact of the object distance. Improving the sensor designs and reconstruction algorithms can lead to useful cameras without optics.
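The reconstruction step can be viewed as a linear inverse problem. Below is a minimal sketch, not the authors' actual algorithm: it assumes each scene point (e.g. one LED in the array) has a calibrated point-spread function on the bare sensor, stacks those PSFs as columns of a matrix, and recovers the scene by Tikhonov-regularized least squares. The random matrix stands in for real calibration data.

```python
import numpy as np

# Hypothetical sketch of lensless imaging as a linear inverse problem.
# Each scene point i produces a measured space-variant PSF on the bare
# sensor; a scene x then yields the sensor measurement
#     y = A @ x,   where column i of A is PSF i, flattened.
rng = np.random.default_rng(0)

n_scene = 16      # number of scene points (e.g. LEDs in a discrete array)
n_pixels = 400    # sensor pixels, flattened

# Calibration: one measured PSF per scene point (random stand-ins here)
A = rng.normal(size=(n_pixels, n_scene))

# Ground-truth scene and a simulated noisy sensor reading
x_true = rng.uniform(size=n_scene)
y = A @ x_true + 0.01 * rng.normal(size=n_pixels)

# Tikhonov-regularized reconstruction: solve (A^T A + lam*I) x = A^T y
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print(np.max(np.abs(x_hat - x_true)))  # small reconstruction error
```

Because the PSFs are space-variant, no single deconvolution kernel works across the field of view; treating every scene point's PSF as its own column of the forward matrix is what makes the bare-sensor approach tractable, at the cost of a per-sensor calibration step.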
This is another level of computational photography. Usually we see the technology helping the photographer do a better job, but this automated studio can do away with the need for a photographer altogether:
StyleShoots Live is an all-in-one “smart studio” designed to produce both stills and video for brands shooting their latest apparel on models, all within one large steel enclosure. With advanced robotics and AI technology, the machine handles all of the technical duties that would usually be performed by a camera crew - such as setting up shots and lighting. It allows for instant review of stills and video with incredible production speed.
A motorized camera head with three-axis movement uses a 4K-capable Canon 1DX Mk II and a 3D depth sensor, controlled by the system’s Style Engine™. The proprietary software controls the movements, camera and lights to produce the desired footage based on fully customizable styles.
Neural networks can be trained to fill in missing details in images but what about improving astronomical imaging?
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data.
Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results.
We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.
Basically the network is trained to improve the images using a training set that has been artificially degraded by adding the sort of noise you find in a telescope/atmosphere system. Is this crossing the line of objectivity? With deconvolution or speckle interferometry the astronomer knows what the process is, but in the case of a neural network it is necessary to trust that performance on the test set transfers to the real data. This is a problem common to the introduction of neural networks as a statistical technique - there are no confidence intervals or significance levels.
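The degradation step that produces those training pairs can be sketched as follows. This is a toy illustration, not the paper's pipeline: it convolves a clean image with a Gaussian PSF to mimic worse seeing and adds pixel noise; a GAN would then be trained to map the degraded image back to the clean one.

```python
import numpy as np

# Toy sketch of making (degraded, clean) training pairs:
# convolve with a Gaussian PSF (worse "seeing") and add pixel noise.

def gaussian_psf(size=9, sigma=2.0):
    """Normalized 2D Gaussian kernel standing in for atmospheric blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(img, sigma=2.0, noise=0.05, seed=0):
    """Blur the image with a Gaussian PSF, then add Gaussian pixel noise."""
    rng = np.random.default_rng(seed)
    psf = gaussian_psf(sigma=sigma)
    pad = psf.shape[0] // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):          # direct 2D convolution (fine for a toy image)
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + psf.shape[0],
                                      j:j + psf.shape[1]] * psf)
    return out + noise * rng.normal(size=img.shape)

clean = np.zeros((32, 32))
clean[12:20, 12:20] = 1.0       # toy "galaxy"
blurred = degrade(clean)        # one degraded training input
```

The GAN's advantage over deconvolution is that it learns a prior over what galaxies look like from the clean half of these pairs; the concern raised above is precisely that this prior is baked into every reconstruction without an accompanying error estimate.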
It is tough being a programmer - you have to put up with so much stuff from people who aren't programmers and even other programmers turn up and spoil your wonderful code. Is there enough that is posi [ ... ]