A recently granted Apple (NASDAQ:AAPL) patent demonstrates how the Cupertino, California-based company is striving to make the picture quality of its digital cameras even better. The patent, titled “Image capture using luminance and chrominance sensor,” covers a new type of camera sensor array that uses three separate image sensing devices, reports AppleInsider.
According to the patent abstract, one of the sensors captures a luminance image, while the other two sensors capture chrominance images. The apparatus may also include an “image processing module” that combines the separate images into a final composite image.
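The patent abstract does not spell out how the image processing module combines the images, but recombining a luminance (Y) channel with two chrominance channels resembles a standard YCbCr-to-RGB conversion. A minimal illustrative sketch in Python with NumPy, assuming BT.601 coefficients and inputs normalized to [0, 1] (none of these specifics come from the patent):

```python
import numpy as np

def compose_ycbcr(y, cb, cr):
    """Combine one luminance image (Y) and two chrominance images
    (Cb, Cr) into an RGB composite using BT.601 coefficients.
    All inputs are float arrays in [0, 1]; Cb and Cr are centered
    on 0.5 (the zero-chroma point)."""
    r = y + 1.402 * (cr - 0.5)
    g = y - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5)
    b = y + 1.772 * (cb - 0.5)
    # Stack channels into an H x W x 3 image and clamp to valid range.
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

With zero chroma (Cb = Cr = 0.5), the result is a neutral gray image whose brightness matches the luminance channel alone, which is why the single high-resolution luminance sensor dominates perceived detail.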
Chrominance conveys the color information for an image, while luminance conveys the light intensity. In addition to improving the overall color quality and resolution of digital images, the use of multiple sensors enables cameras to overcome the so-called “blind region” problem.
Apple notes in the patent that typical digital cameras “employ a photosensor made of a grid of light-sensitive pixels” that detect both luminance and chrominance. Generally this system works well unless a blind region is created for either type of sensor.
The blind region phenomenon occurs when a luminance sensor detects an area that a chrominance sensor is unable to detect, AppleInsider reports. For example, an object in the foreground may block a chrominance sensor from seeing an area that is still detected by the luminance sensor. As a result, the composite image created by the camera may be distorted by this lack of complete information.
However, Apple’s new patent resolves this issue by using two chrominance sensors whose blind regions are offset, so that at least one chrominance sensor always provides accurate color information for the final composite image. Although it’s not known when Apple might begin to implement this technology in its devices, the patent could give the iPhone maker a technological edge over its smartphone rivals.
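The patent describes this offset arrangement only at a high level; one hypothetical way a pipeline could merge two offset chrominance sensors is a per-pixel fallback. The sketch below (Python with NumPy; the function name, the boolean blind-region masks, and the averaging choice are all assumptions, not details from the patent) illustrates the idea:

```python
import numpy as np

def merge_chrominance(chroma_a, chroma_b, blind_a, blind_b):
    """Merge two chrominance images whose blind regions are offset.
    blind_a / blind_b are boolean masks that are True where that
    sensor could not see the scene. Where sensor A is blind, fall
    back to sensor B; where both sensors see the pixel, average
    their values for a lower-noise estimate."""
    both_visible = ~blind_a & ~blind_b
    # Start from A, substituting B wherever A is blind.
    merged = np.where(blind_a, chroma_b, chroma_a)
    # Where both sensors have data, blend them.
    merged = np.where(both_visible, 0.5 * (chroma_a + chroma_b), merged)
    return merged
```

Because the two blind regions never coincide in this scheme, every pixel of the merged chrominance image is backed by at least one real measurement, which is the property the patent claims prevents the distorted composite.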
Follow Nathanael on Twitter @ArnoldEtan_WSCS