A Look at the Latest—Futuristic Photography

Every year, innovative imaging technology impresses us all. While not everything comes to market, it shows that scientists much smarter than most of us continue to push past the boundaries of what we previously considered photographically possible.

Fresh back from the 2014 CES, let’s take a look at some of these impressive developments.

Tiny camera captures wide angles and details.
When taking a photo, you sometimes have to choose between a wide-angle view and zooming in close. With a new system, you might not have to. A miniature camera promises a big-picture view without sacrificing high resolution, claim researchers from the University of California, San Diego.

To capture all the details of a crime scene, you might take many photos at close range, the researchers say. To get the whole scene at once, you could use a wide-angle or fisheye lens, but without an especially large lens you would sacrifice the fine resolution that catches details you might otherwise miss.

The new imager achieves the optical performance of a full-size wide-angle lens in a device less than one-tenth the volume. Its focal range spans half a meter to 500 meters, and it resolves 0.2 milliradians, equivalent to 20/10 human vision.
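To put that resolution figure in perspective, here is a quick back-of-the-envelope calculation (simple geometry on my part, not the researchers’ numbers beyond the 0.2-milliradian spec) of the smallest feature the imager could resolve at various distances:

```python
# Back-of-the-envelope: angular resolution (radians) times distance
# gives the smallest resolvable feature size, per the small-angle
# approximation. The resolution spec is from the article; the rest
# is just geometry.

ANGULAR_RESOLUTION_RAD = 0.2e-3  # 0.2 milliradians

for distance_m in (0.5, 50.0, 500.0):
    feature_mm = ANGULAR_RESOLUTION_RAD * distance_m * 1000
    print(f"At {distance_m:>5} m: resolves features ~{feature_mm:.1f} mm across")
```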

The researchers worked with monocentric lenses made of perfectly round concentric glass shells. Their symmetry lets them produce wide-angle images with high resolution and hardly any of the geometric distortion common to fisheye lenses. A dense array of glass optical-fiber bundles, polished to a concave curve on one side so that they align perfectly with the lens’s surface, conveys the light collected by the lens to electronic sensors. A 5MP prototype already works; a 30MP version is in development.

Bug-eyed: new optics for deeper wide-angle images.
To better capture wide images with depth, a new lens design combines the human eye’s focusing ability with an insect eye’s wide-angle view. Ohio State University says its researchers’ work could yield future smartphones that rival the photo quality of current stand-alone cameras “and surgical imaging that enables doctors to see inside the human body like never before.”

Human eyes change focus; insect eyes comprise many small optical components that can’t change focus but together give a wide view. “We can combine the two,” the scientists say. “What we get is a wide-angle lens with depth of field.”

The prototype consists of several separate dome-shaped pockets of flexible, transparent polymer, each filled with a gelatinous fluid similar to that inside the human eye. Pumps adjust each dome, altering the direction and focus of the lens. The current model is 5mm across; the design will be shrunk for use in mobile imaging and surgical laparoscopy.
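As a rough illustration of why pumping fluid retunes focus, consider textbook thin-lens optics (a general sketch, not the Ohio State team’s model, and the refractive index is an assumed placeholder): a plano-convex fluid dome of radius R and index n focuses at f = R/(n - 1), so changing the curvature changes the focal length.

```python
# Illustrative thin-lens sketch: a plano-convex fluid dome of radius R
# and refractive index n focuses at f = R / (n - 1). Pumping fluid in
# or out changes R, and with it the focus. The index is an assumed
# placeholder (close to the human eye's vitreous), not a measured value.

N_FLUID = 1.34  # assumed refractive index of the gelatinous fluid

def focal_length_mm(radius_mm: float) -> float:
    """Focal length of a thin plano-convex liquid lens."""
    return radius_mm / (N_FLUID - 1)

for r_mm in (2.0, 3.5, 5.0):  # dome radii of curvature, mm
    print(f"R = {r_mm} mm  ->  f = {focal_length_mm(r_mm):.1f} mm")
```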

Computational imaging gets high-quality photos from poor lenses.
Good pictures require quality lenses and sensors, right? Maybe not. Computational imaging has long promised better pictures through software, and a new technique demonstrates just how much better might be possible.

Researchers at the University of British Columbia used a simple single-element lens coupled with complex math to generate sharp results. “Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur and color fringing,” the researchers say. “We propose a set of computational photography techniques that remove these artifacts, and thus allow for post-capture correction of images captured through uncompensated, simple optics which are lighter and significantly less expensive.”
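The UBC method itself is more sophisticated, but the core intuition is classical deconvolution. Here is a minimal sketch of that general idea (a textbook Wiener filter, not the authors’ algorithm): if you know the lens’s point-spread function, software can undo much of the blur in the frequency domain.

```python
# A minimal sketch of the general idea (a textbook Wiener filter, not
# the UBC authors' algorithm): if the lens blur is known as a
# point-spread function (PSF), software can undo much of it in the
# frequency domain.

import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e6):
    """Deconvolve a 2D image given its PSF; 1/snr damps noise blow-up."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy demo: blur a random image with a centered Gaussian PSF, recover it.
# (The demo is noise-free, so a huge SNR works; real captures need less.)
n, sigma = 64, 0.8
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * sigma**2))
psf /= psf.sum()
sharp = np.random.rand(n, n)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))
print("error before:", np.abs(blurred - sharp).max())
print("error after: ", np.abs(wiener_deconvolve(blurred, psf) - sharp).max())
```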

Software adjusts and enhances photo lighting post-capture.
A new technique lets amateurs emulate, in five minutes, the pro-level lighting an experienced shooter can take hours to set up. Scientists at Cornell University report an imaging technique that combines multiple images of a scene into one better picture. But it’s not HDR, which merges many exposures: instead, you can dramatically alter the lighting in the shots and get the best effects of each capture in the final composition. It’s called “computational lighting design.”

“What often separates professional photographers from amateurs is their mastery of lighting,” says a write-up from Cornell. “Lighting can control what parts of an image draw your attention, or whether an object looks expensive or cheesy. And even for pros, getting the lights set up for the desired effect can be time-consuming.”

How does the process work? The established part is to “set the camera on a tripod and walk around with a flash, firing it from many different angles to shoot a hundred or more exposures, each with slightly different lighting. Load the images into Photoshop as layers and superimpose, mix and chop the layers to get the lighting you want.”

Still sound like too much work? Cornell’s got you covered: its app combines those layers for you and provides sliders and other tools to mix and match for the best lighting, working from three main basis lights: Edge Lighting, which emphasizes the shapes of objects and their shadows; Fill Lighting, which illuminates everything uniformly; and Diffuse Color Lighting, which brings out the color and texture of every object. The results: “Three professional photographers were asked to compare the system to the traditional procedure of working with tens of layers. All were enthusiastic and reported that working with the basis lights gave them a good starting point and greatly reduced the time spent,” Cornell reported. “In another test, seven novice users with little experience in photography were able to produce professional-quality results in an average of 15 minutes.”
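Here is a minimal sketch of what the mixing step might look like (an assumed interface for illustration, not Cornell’s actual app): each basis light is a precomputed combination of the flash exposures, and the final image is just a slider-weighted sum of the bases.

```python
# Hypothetical relighting mixer: blend precomputed basis-light images
# (edge, fill, diffuse) with user-chosen slider weights. A concept
# sketch only; Cornell's tool derives its bases from ~100 real flash
# exposures rather than the random stand-in arrays used here.

import numpy as np

def relight(basis_images, weights):
    """Weighted sum of basis-light images, clipped to the valid range."""
    out = np.zeros_like(next(iter(basis_images.values())))
    for name, image in basis_images.items():
        out += weights.get(name, 0.0) * image
    return np.clip(out, 0.0, 1.0)

# Stand-in basis images (normally built from the flash exposures):
h, w = 480, 640
bases = {name: np.random.rand(h, w, 3) for name in ("edge", "fill", "diffuse")}
final = relight(bases, {"edge": 0.6, "fill": 0.3, "diffuse": 0.4})
print(final.shape)
```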

Full-frame sensor captures video in very low light.
A new sensor captures video even when you can’t see anything: at “a level of brightness in which it is difficult for the naked eye to perceive objects.” Canon developed the high-sensitivity 35mm full-frame sensor, but it’s exclusively for video recording, at least in this first iteration. Why video? Well, even Full HD video is only about a 2MP frame. By giving a large sensor so few pixels, Canon can devote its area to the light-gathering capability of much larger pixels, or photosites.

“Delivering high-sensitivity, low-noise imaging performance, the new 35mm CMOS sensor enables the capture of Full HD video even in exceptionally low-light environments,” the company says. The sensor’s pixels measure 19 microns square, more than 7.5 times the surface area of the pixels on the CMOS sensor in Canon’s top-of-the-line EOS-1D X DSLR.

The sensor’s pixels and readout circuitry employ new technologies that reduce noise, which tends to increase as pixel size grows. “Thanks to these technologies, the sensor facilitates the shooting of clearly visible video images even in dimly lit environments with as little as 0.03 lux of illumination, or approximately the brightness of a crescent moon.” Canon has captured test video in a room illuminated only by burning incense sticks (approximately 0.01–0.05 lux).
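That surface-area claim is easy to sanity-check. The EOS-1D X pixel pitch below is the commonly cited figure of roughly 6.9 microns for that camera, an assumption on my part rather than a number from Canon’s announcement:

```python
# Sanity check of "more than 7.5 times the surface area." The 1D X
# pitch is the commonly cited ~6.9 microns (assumed, not from the
# announcement); pixel area scales with the square of the pitch.

video_pitch_um = 19.0  # new low-light video sensor
dslr_pitch_um = 6.9    # EOS-1D X, approximate

ratio = (video_pitch_um / dslr_pitch_um) ** 2
print(f"Area ratio: {ratio:.1f}x")  # ~7.6x, consistent with Canon's claim
```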

Camera creates 3D from a kilometer away.
To get 3D information such as the distance to a faraway object, scientists bounce a laser beam off the object and measure how long it takes the light to travel back to a detector. The technique, called time-of-flight, is used in machine vision, navigation systems for autonomous vehicles, and other applications, but most systems have a relatively short range and struggle to image objects that do not reflect laser light well.

That’s according to a team of Scotland-based physicists who say they’ve tackled these limitations with a system that can gather high-resolution 3D information about typically hard-to-image objects from up to a kilometer away. The system sweeps a low-power infrared laser beam rapidly over an object, then records, pixel by pixel, the round-trip flight time of the photons as they bounce off the object and return to the source. Using a detector sensitive enough to count individual photons, the system can resolve depth on the millimeter scale over long distances. Its primary use is likely to be scanning static, man-made targets, such as vehicles.
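The arithmetic behind time-of-flight is simple; the picosecond timing is the hard part. A minimal sketch of the basic relation (elementary physics, not the team’s processing pipeline):

```python
# Time-of-flight ranging: light travels to the target and back, so
# distance is speed of light times round-trip time, divided by two.

C = 299_792_458.0  # speed of light, m/s

def range_m(round_trip_s: float) -> float:
    """Target distance from a photon's round-trip flight time."""
    return C * round_trip_s / 2.0

print(range_m(6.67e-6))  # ~1000 m: a 1 km target answers in ~6.7 us
print(2 * 1e-3 / C)      # ~6.7e-12 s: millimeter depth needs picosecond timing
```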

Inexpensive “nano-camera” captures translucent 3D.
New imaging technology developed at MIT captures 3D images with only one photon per pixel, and could be used in medical imaging, collision-avoidance detectors for cars, and interactive gaming, the university reports.

Unlike existing time-of-flight systems, the new “nanophotography” method “is not fooled by rain, fog or even translucent objects,” MIT says. The key change is information encoding and calculations “that are very common in the telecommunications world” but new to imaging. The “nano-camera” prototype cost only $500 in equipment.
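As an illustration of that telecom flavor (a representative technique chosen by me, not MIT’s published method): in communications, a known code buried in a noisy, delayed signal is recovered by cross-correlation, and the correlation peak marks the delay.

```python
# Illustrative matched filtering, a staple of telecommunications:
# cross-correlate a known transmitted code with the received signal;
# the correlation peak lands at the signal's delay. (A representative
# technique only, not MIT's published nano-camera method.)

import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=256)  # known binary code

true_delay = 37
received = np.zeros(512)
received[true_delay:true_delay + code.size] = code
received += 0.5 * rng.standard_normal(received.size)  # add noise

correlation = np.correlate(received, code, mode="valid")
print("estimated delay:", int(np.argmax(correlation)))  # prints 37
```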

Thinnest lens ever.
Physicists at Harvard have created an ultrathin lens that focuses light without the distortions of conventional lenses. The lens is only 60 nanometers thick, “essentially two-dimensional,” they say. “Yet its focusing power approaches the ultimate physical limit set by the laws of diffraction.” The design is scalable from near-infrared to terahertz wavelengths and is simple to manufacture. The flat lens eliminates optical aberrations such as the fisheye effect that conventional wide-angle lenses produce; astigmatism and coma do not occur either, so the resulting image or signal is accurate without complex corrective techniques.
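For the curious, the standard recipe for such a flat lens (a general flat-optics result, not parameters from the Harvard device) is to imprint a radially varying phase so that every ray arrives at the focus in step:

```python
# The textbook aberration-free phase profile for a flat lens (a general
# flat-optics result, not parameters from the Harvard device): at radius
# r, the imposed phase must compensate the extra path to the focal point.

import numpy as np

def flat_lens_phase(r_um, focal_um, wavelength_um):
    """Required phase (radians, mod 2*pi) at radius r for focal length f."""
    return (2 * np.pi / wavelength_um) * (focal_um - np.sqrt(r_um**2 + focal_um**2)) % (2 * np.pi)

radii = np.linspace(0, 400, 5)  # radial positions across the lens, microns
print(flat_lens_phase(radii, focal_um=3000.0, wavelength_um=1.55))
```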

Lens-free camera.
Hey, who needs lenses anyway? Not the researchers at Bell Labs. A new system built there uses only an LCD-based multi-aperture opening and a sensor, not only capturing photos but creating images from far less data than today’s cameras require. The “compressive sensing” technology constructs an image by comparing the differences in the light coming through each aperture. The lensless compressive imaging architecture “consists of two components, an aperture assembly and a sensor. No lens is used,” the researchers say. “The aperture assembly consists of a two-dimensional array of aperture elements. The transmittance of each aperture element is independently controllable. The sensor is a single-detection element. A compressive sensing matrix is implemented by adjusting the transmittance of the individual aperture elements according to the values of the sensing matrix.” The proposed architecture “is simple and reliable because no lens is used,” and the prototype was built with an off-the-shelf LCD panel and a photoelectric sensor.
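Here is the compressive-sensing idea in miniature (illustrative code, not Bell Labs’ implementation): the LCD displays a series of random patterns, the single detector records one number per pattern, and a sparse image is recovered from far fewer measurements than pixels. The ±1 patterns below stand in for differences between complementary on/off LCD masks, echoing the “comparing the differences” description above.

```python
# Compressive sensing in miniature (illustrative, not Bell Labs' code):
# measure a sparse 16x16 scene with 96 random patterns via a single
# detector, then recover it with iterative soft thresholding (ISTA).

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_measurements = 256, 96  # ~2.7x fewer measurements than pixels

# Sparse "scene": a few bright pixels on a dark background.
x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, size=8, replace=False)] = 1.0

# Each row is one pattern; +/-1 stands in for the difference between
# two complementary on/off LCD masks.
A = rng.choice([-1.0, 1.0], size=(n_measurements, n_pixels)) / np.sqrt(n_measurements)
y = A @ x_true  # one detector reading per displayed pattern

# ISTA: gradient step on ||Ax - y||^2, then soft-threshold for sparsity.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n_pixels)
for _ in range(500):
    x -= step * (A.T @ (A @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("max reconstruction error:", np.abs(x - x_true).max())  # small if recovered
```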

What else is being worked on today? I can’t wait to find out what’s next.
