
John Compton and John Hamilton
Technology Troublemaker and Algorithm Agitator

June 14, 2007

Color Filter Array 2.0

Have you ever noticed that digital cameras don't seem to work as well as you'd like in low-light conditions? Images often come out dark, noisy, or blurry - simply because the sensor isn't sensitive enough.

Kodak has been working to improve sensor performance - so that image sensors can make better use of the light that comes through the lens. And after several years of work (involving a whole team of Kodak scientists), earlier today, Kodak announced a new image sensor technology that addresses this need directly, giving image sensors a 2x to 4x increase in sensitivity - the equivalent of one to two photographic stops.
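As a quick check on the arithmetic (each photographic stop is a doubling of light), the claimed 2x to 4x sensitivity gain maps to stops as a base-2 logarithm:

```python
import math

# Each photographic stop is a doubling of light, so a sensitivity
# gain g corresponds to log2(g) stops.
def gain_to_stops(gain):
    return math.log2(gain)

print(gain_to_stops(2.0))  # 2x sensitivity is one stop
print(gain_to_stops(4.0))  # 4x sensitivity is two stops
```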

There's a lot to cover in explaining how this really works. Thinking it through, we decided a good way to talk about it was to ask the inventors, John Compton and John Hamilton, a few questions. We just hope we asked some of the questions you would have wanted to ask.

Conventional Bayer Color Filter Array

The Interview:

What did you set out to do when you began this work?

JC: We wanted to enhance the low-light performance of digital sensors, to see what it would take to get better sensitivity.

JH: For years the industry has been using the Bayer filter array, invented by Kodak researcher Bryce Bayer back in 1976 - we wanted to see if we could improve on that.

The Bayer filter is found in most of today's sensors. What has kept other designs from displacing it?

JH: The simplest answer is that the Bayer pattern works so well - it provides excellent color reproduction from a single image sensor. And while the overall pattern used has stayed the same, there have been some minor tweaks along the way. The green channel has been opened up a bit to get more photographic speed, and the processing software continues to be optimized for computational speed and color accuracy.

JC: Another thing that has helped has been the broad acceptance of this pattern in the industry. To support video, for example, most consumer cameras today use a hardware-accelerated path to process this data and retain fast frame rates. Because this hardware solution is based on the Bayer pattern, there is a lot of inertia against displacing the pattern gratuitously.

Do you have more than one pattern that you can use in this approach?

JH: Yes, we've developed several patterns, allowing us to adapt this approach to different sensor architectures and applications. For example, different patterns might have different levels of image processing associated with them - you might use a pattern with lower image processing requirements in an application where you don't have access to as powerful a processor as you might need for a different pattern.

Different patterns that can be used

These new designs use both panchromatic1 ("clear") and color filters. What is the advantage of adding panchromatic pixels to the sensor?

JC: The real advantage is that the panchromatic pixels are more sensitive, since they detect all wavelengths of visible light (rather than filtering light to detect color information).

JH: One helpful way to think about this is in terms of luminance and chrominance. In the original Bayer design, the green pixels are used to recover most of the luminance information from the image. Now we are using panchromatic pixels - which are more sensitive than green pixels, because none of the photons get filtered out or wasted - to carry the luminance. This gives us a more sensitive luminance channel in the final image, which raises the sensitivity of the entire sensor.

Composing an Image

You could say that a sensor system is composed of roughly three parts: a pattern of color filters, the hardwired photoreceptors, and software to interpolate the data and reconstruct the image. Which of these are modified in this new technology?

JH: Clearly the color filter pattern and the software interpolation are different with this approach. The arrangement of the photoreceptors can also be changed, but that's not a requirement. Depending on the application and light levels, you may want to combine neighboring color pixels (of the same color) to match the more sensitive panchromatic pixels. While you can always do this after the entire sensor has been read out, you might want to do it by "binning"2 pixels directly on the sensor - which sometimes requires different wiring of the photoreceptors.
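As a rough illustration of the binning step mentioned here, this sketch sums each 2x2 block of same-color samples into one larger signal. The numpy code and toy values are assumptions for the example; on a real CCD the combination happens in the charge domain before readout.

```python
import numpy as np

# Toy illustration of binning: sum the signal from each 2x2 block
# of same-color pixels into one larger signal, trading resolution
# for signal strength.
def bin_2x2(pixels):
    h, w = pixels.shape
    return pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.array([[ 1,  2,  3,  4],
                [ 5,  6,  7,  8],
                [ 9, 10, 11, 12],
                [13, 14, 15, 16]])
print(bin_2x2(raw))
# Each output value is the sum of one 2x2 block, e.g. 1+2+5+6 = 14
```

The larger binned signal is more robust against read noise, at the cost of resolution, which matches the bucket analogy in footnote 2 below.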

Does this new technology work for both CCD and CMOS image sensors?

JC: Yes, it does - for the most part, we are only changing the color filter layer of the sensor, so we can use this broadly across both sensor technologies. CMOS does have some advantages, however, including some different ways to apply binning as well as the opportunity to include the new image processing algorithms directly on the image sensor itself.

Do sensors need to be completely redesigned to incorporate this new technology?

JC: The filter sets certainly need to be changed and this has some implications for the underlying process. But it is pretty straightforward to implement a change from Bayer Pattern sensors to these new designs.

It sounds like the software used to reconstruct these images is complex. What are some of the problems you needed to work through?

JH: The Bayer filter pattern has a very tight 2x2 repeat, so for a red pixel you're never more than two pixels away from another red pixel. One of the new patterns uses a pan checkerboard, and on the complement of that checkerboard there is a pair of reds, a pair of greens, another pair of greens, and a pair of blues.

Bayer Pattern | Sample pattern for new KODAK Technology
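To make the two layouts concrete, here is a small sketch of the repeat cells as character grids. The Bayer cell is the standard 2x2 GRBG repeat; the 4x4 cell is one plausible arrangement consistent with the interview's description (pan "P" on a checkerboard, paired same-color pixels on the complement), not necessarily Kodak's exact pattern.

```python
# Repeat cells as character grids. BAYER is the standard 2x2 repeat;
# PAN_CHECKERBOARD is a hypothetical 4x4 cell matching the interview's
# description: pan on a checkerboard, with a pair of reds, two pairs
# of greens, and a pair of blues on the complement.
BAYER = [
    "GR",
    "BG",
]

PAN_CHECKERBOARD = [
    "PGPR",
    "GPRP",
    "PBPG",
    "BPGP",
]

def fraction(pattern, letter):
    """Fraction of pixels in the repeat cell carrying a given filter."""
    cells = "".join(pattern)
    return cells.count(letter) / len(cells)

print(fraction(PAN_CHECKERBOARD, "P"))  # half the pixels are panchromatic
print(fraction(PAN_CHECKERBOARD, "G"))  # a quarter are green
```

Counting the cell this way also matches a reader's observation below: half the light reaches pan pixels, a quarter reaches green, and red and blue get an eighth each.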

Finding the right color edge of something can be a challenge. You've got to tie the color edge to the pan image, which gives you a good idea of where that edge is. What you'd like to do is bring the color out to the edge, but keep it from going any further. If you hold to these edges, it's hard to do the noise cleaning because that is done by averaging pixels that you expect to have about the same value. If you're not careful, you'll be averaging pixels on either side of the edge and you'll get what we call "color bleed." For instance, if you have skin next to blue jeans, you'll see a cyan halo on the hand.

And if you overclean the image, it looks like plastic, because it is just too smooth. So it's hard to get the right amount of cleaning: enough to reduce the noise while keeping the edge definition reasonably good. A lot of work has gone into finding the best way to do that.
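A toy one-dimensional sketch of the edge-aware cleaning trade-off described above: average only those neighbors whose pan values are close to the center pixel's, so smoothing never crosses a pan-detected edge. This is purely illustrative, not Kodak's actual algorithm, and the function name, threshold, and sample values are invented for the example.

```python
import numpy as np

# Average only neighbors on the same side of a pan-detected edge,
# so noise cleaning does not produce "color bleed" across the edge.
def edge_aware_smooth(color, pan, radius=2, edge_thresh=0.2):
    out = np.empty_like(color, dtype=float)
    for i in range(len(color)):
        lo, hi = max(0, i - radius), min(len(color), i + radius + 1)
        # Keep neighbors whose pan value is close to pixel i's.
        mask = np.abs(pan[lo:hi] - pan[i]) < edge_thresh
        out[i] = color[lo:hi][mask].mean()
    return out

# A hard edge at index 4: skin-like values next to jeans-like values.
pan   = np.array([0.8, 0.80, 0.8, 0.80, 0.2, 0.20, 0.2, 0.20])
color = np.array([0.7, 0.75, 0.72, 0.74, 0.1, 0.12, 0.09, 0.11])
smoothed = edge_aware_smooth(color, pan)
# Each side is smoothed, but no cyan-halo "color bleed" crosses the
# edge between index 3 and index 4.
```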

Do you get a more detailed image by using panchromatic pixels?

JH: Not really. Image detail comes primarily from the luminance channel of the image. In a Bayer pattern sensor, half of the total pixels are arranged in a green checkerboard and are used for luminance. In these new designs, half of the total pixels are arranged in a panchromatic checkerboard and used for luminance. We still have the same amount of luminance data - we're just getting it with higher sensitivity than before.

The color information comes along in a similar way. In a Bayer pattern, you have red and blue to help put the rest of the color together with the green edge. In the new design, the red, blue and green help to put the color back together for the pan record. Instead of two chrominance channels (red minus green and blue minus green), we really have three (red minus pan, green minus pan and blue minus pan).
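The chrominance bookkeeping described here is easy to sketch, assuming fully interpolated per-pixel pan, red, green, and blue planes already exist. The function names and sample values are invented for illustration.

```python
import numpy as np

# Three chrominance channels formed against the pan record, as
# described above; adding pan back recovers the color planes exactly.
def to_chroma(r, g, b, pan):
    return r - pan, g - pan, b - pan

def from_chroma(cr, cg, cb, pan):
    return cr + pan, cg + pan, cb + pan

r = np.array([[0.8, 0.2]]); g = np.array([[0.4, 0.6]])
b = np.array([[0.1, 0.9]]); pan = np.array([[0.5, 0.5]])

cr, cg, cb = to_chroma(r, g, b, pan)
r2, g2, b2 = from_chroma(cr, cg, cb, pan)
# The round trip is lossless: r2 == r, g2 == g, b2 == b.
```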

In what situations does this design offer the biggest improvements?

JH: In situations where you want more sensitivity to light. In a low-light situation, these new patterns will produce a lot less color noise than a Bayer pattern sensor. You can run the shutter faster, which gets rid of a lot of motion artifacts. It will cut down on camera shake or, if you're taking a picture of a moving object there will be less blur. Both situations are illustrated in the following image pairs.

Example of Noise Reduction:

Bayer Pattern | New KODAK Technology

Example of Blur Reduction:

Bayer Pattern | New KODAK Technology

JC: Another way to think of this is that you have the same number of photons coming into the new sensor as you would with the Bayer pattern. It's just that the new filter arrays waste fewer of the photons since fewer of them end up absorbed in a color filter.

This new technology seems to work more like human vision - a combination of color pixels and panchromatic pixels, just like rods and cones in the eye.

JC: Actually, the human retina has its best color perception in the fovea, the tight little area at the center of the retina. That is where most of the cones are. Elsewhere in the eye you find mostly rods, which let you see in black and white - so your peripheral vision relies mostly on black and white. These sensor patterns are really different in that the color and panchromatic pixels are distributed uniformly across the sensor.

JH: But in terms of luminance, the human visual system has better resolution than it has color acuity. We take advantage of that in JPEG, we take advantage of that in NTSC, and now we're taking advantage of it in the design of image sensors as well. We see more shades of gray than we see different colors. We're also very adaptive to light - it's a logarithmic system, which is how we cope with so many different ranges of brightness.

This sounds pretty cool - when will cameras be available that use sensors with this new technology?

JC: Samples of the first sensor with this technology should be available in the first quarter of 2008. Once that is available, some additional time will be needed by camera manufacturers to design, develop and manufacture a camera using this sensor. So we're hoping it's not too much longer after that.

So - what are you guys working on next?

JH & JC: Sorry - we can't answer just yet, but we've got plenty of ideas in the hopper. So stay tuned.


1. Panchromatic - Simply refers to light with all colors, which is another way of saying white light.

2. Binning - Pixels work by converting incoming photons into electrons, which are accumulated in each pixel during the exposure (like holding a bucket under a hose for a period of time). After that, the electrons provide a signal for each pixel that can be read out (like measuring the depth of water in the bucket). Alternatively, the electrons from two or more pixels can be combined, or binned, to provide a larger signal to read out (like dumping the water from several buckets into a single bucket and then measuring the depth of water in that bucket). This binning provides a more robust signal, but obviously the contributions of the individual pixels (like the water in the individual buckets) cannot be determined after binning, so there is a loss of resolution.


Posted By: neil (7/2/2007)

Comment: Hi, is it theoretically possible to set the ISO of individual pixels? I figured if you could, then you could increase the dynamic range of the sensor. Thanks, Neil

Posted By: John Compton (6/25/2007)

Comment: I thought I'd take a moment to comment on how the sample images were captured. The images are not simulations, but are captured with real hardware. We used two prototype image sensors that were identical in every way except for the color filter array: one was Bayer and one was our new technology checkerboard pattern (see "Sample pattern for new KODAK Technology" in the blog). These sensors were mounted on identical test boards with identical optics, shutter, etc. The two units were then mounted together on a tripod; two laptops were used to control the two units. The images were captured by pressing simultaneously the "go" buttons on the two laptops (not a real sophisticated arrangement, but it worked!). Sometimes the exposure times for the two units were different (to show reduced motion blur in the image of the juggler, for example) and sometimes the exposure times were the same (to show better low light capability, for example). Camera motion was provided when needed by whacking the side of the tripod during exposure. The Bayer images were processed using conventional Bayer processing techniques, and the new technology images were processed using our new algorithms. Beyond that, no additional processing was done. There was no additional tweaking or modification of the images using Photoshop. The resulting images provide a good comparison of the benefits of this new technology.

Posted By: Dave Gilblom (6/23/2007)

Comment: Regarding the comment by dc2141: Since the RGB pixels are stacked in the Foveon X3 sensors, any stack can be converted to monochrome just by adding the three pixels at that location together. This could be done from any Sigma raw file.

Posted By: Walt Sorensen (6/19/2007)

Comment: Kudos on the brilliant idea of LRGB. This becomes a low-cost alternative to the Fuji R-pixel design. Shocking that no one has really done this before. What's left? Find a low-cost solution to reduce noise when packing the sensors close together and during ultra-long exposures, which would allow for increased sharpness without getting a plastic look. (Too much emphasis has been put on "processing out noise" rather than building sensors that have less noise - try copper conductors on gallium-arsenide or amorphous-silicon.) Then create a super-sensitive solution with a range from ISO 50 to a realistically usable super-high ISO (ISO 12800+) with an IR-UV option, and you'll be a hit. Low/no noise with sharpness, or usable super-high ISOs, is what is needed. Again, kudos for the low-cost solution to squeeze some more sensitivity out of existing chips.

Posted By: nytecam (6/18/2007)

Comment: Here's my Pan-Bayer mix eg 50% pan luminance + 50% RGB chroma at Nytecam

Posted By: Peter (6/18/2007)

Comment: Please, Mr. JC and JH, help: the "Stop Action Performance" photo has some irregular mottling along the bottom hem of the shirt (noticeable when the photo is magnified to 100-200% or more). (The shirt's bottom hem looks natural in the left photo, the Bayer Pattern.) The image:

Posted By: Peter (6/18/2007)

Comment: Will the X3 CMOS be successful? I don't see it. At present, besides Bayer Pattern and Color Filter Array 2.0 CCD/CMOS sensors, FUJIFILM is developing O-FILM (an RGB multilayer organic semiconductor sensor). I hope O-FILM will be successful in the next few years. O-FILM can move away from the poor spectral response of silicon sensors (CCD/CMOS, X3, etc. are most sensitive to IR - not a feature we want!). O-FILM sensors have layers that match the RGB spectral response - an ideal color image sensor.

Posted By: dc2141 (6/18/2007)

Comment: Maybe Kodak should approach Foveon about a partnership. Foveon's problem has always been noise at higher ISOs, but few people dispute the better resolution (given the megapixels) that their sensors capture. Their current SD14 (with 4.6 megapixel output, despite Sigma's marketing hype that it's 14 MP) still resolves like an 8-10 MP sensor. If they kept the same basic concept but interspliced their layered pixels with luma-only pixels, they could boost the sensitivity of the sensor, reducing noise at higher ISOs, and still reap at least a good deal of the concept's other benefits.

Posted By: nytecam (6/18/2007)

Comment: The two sample images, e.g. 'new' and 'old' in each example, seem to me to be identical shots, maybe doctored in Photoshop? Kodak wouldn't do that, would they!

Posted By: D. Hoskins (6/18/2007)

Comment: Why not put the full size images of the woman and the juggler up as a "click for full size image" popups? That way people can judge the image quality better, but it won't cause changes to your web page layout.

Posted By: Jan Egil Kristiansen (6/18/2007)

Comment: Please let the blog page display the images in the size they have when they leave the server. The girl is only 121x184 in the blog, while the image itself is 187x283. What I see now, is not only a result of two different filter arrays, but also of poor resampling in my web browser. (Could this be Mattias Hutter's problem with the shirt border?)

Posted By: Jan Egil Kristiansen (6/18/2007)

Comment: I need more explanation about how a 2.5x increase in photons reaching the sensors translates to up to 4x increase in sensitivity?

Posted By: Jan Egil Kristiansen (6/18/2007)

Comment: With the white cells closer spaced than the cells of each color, you really need different Nyquist diffusion low pass filters for white and for RGB. That is something of a brain twister, isn't it? I wonder if there is any fast way to reconfigure a low-pass filter without shaking the camera?

Posted By: 3 patterns - 3 names (6/18/2007)

Comment: A nobrainer - you have 3 patterns and 3 names: the "John pattern", the "Compton pattern" and the "Hamilton pattern".

Posted By: Jan Egil Kristiansen (6/18/2007)

Comment: Yes, please give these patterns names, before Marketing comes up with "Kodak Improved Sensor Technology". Got home with some rock and roll shots this Saturday. Lots of pure red and blue. That should give 2.5x more light than the Bayer. (Pure green is rare on stage, the Bayer pattern with all its green is anti rock?)

Posted By: Bruce McL (6/18/2007)

Comment: It looks to me like there are implications here for getting higher dynamic range from a sensor. Can you sacrifice one out of four clear pixels? I'm thinking about having a neutral gray pixel that is less sensitive than the clear or even than the RGB pixels.

Posted By: Matthias Hutter (6/17/2007)

Comment: Though this is great work on new filter technology, I do not really fancy the new filter order. Take a look at the big photo. Comparing the two photos, it is true that the new filter has less blur and/or less color noise. Great work in any case. I have never been very fond of the lack of color precision in the Bayer filter due to interpolation, and your new filter technology contains more color artifacts due to even worse color precision than Bayer. Just look at the bottom border of the shirt or at the trousers: they contain colored spots. The whole picture looks as if it had been JPEG compressed (with color subsampling). Sorry to tell you that, and sorry for my bad English, because you really did great work on filter technology.

Posted By: WR (6/16/2007)

Comment: When I read the first announcement I thought that you were using a 2x2 pixel RGBC mask. IMO the "diagonal green" and "vertical clear" patterns might not be the best configuration, as neither of them is symmetrical, which complicates interpolation for upsampling. The distance to a sub-pixel of another color can vary from 1 to 2.83 pixels depending on direction. Did you also think about using: a) a 2x2 RGBC mask, or b) a classical Bayer pattern, replacing green by clear and using "just" the red and blue to derive the color?

Posted By: Peter (6/16/2007)

Comment: Honorable Mr. John Compton and John Hamilton: I checked your C.F.A. 2.0 experimental photos, and in the "Stop Action Performance" photo I found some irregular mottling along the bottom hem of the shirt (noticeable when the photo is magnified to 100-200% or more). (The shirt's bottom hem looks natural in the left photo, the Bayer Pattern.) Is this a C.F.A. 2.0 problem, or can it be improved in the future? Image:

Posted By: Zarko (6/15/2007)

Comment: Was there some overwhelming reason for choosing diagonal pattern and the 4x4 sampling cell instead of 3x3? That diagonal 4x4 pattern has 14 different and quite non-uniform sampling cells. Out of curiosity I tried a homogenous 4x4 pattern with 3x3 sampling cells and got 5 very simple patterns, reducible to 4, plus the increased precision of the smaller cell. I'd almost be tempted to write the pattern here in ASCII, but don't have colors :-)

Posted By: John Compton (6/15/2007)

Comment: Great comments! "Why not a 2x2 LRGB pattern?" Our patterns provide a higher proportion of pan pixels in order to provide a very robust pan interpolation. "Why not mix white into the red, green, and blue?" This is one way to improve photographic speed, but if you want to get back to properly saturated colors the required math amplifies the noise, so you generally get no net benefit. "How about using filters like 'everything but red', etc.?" This is what happens with complementary color filters: cyan, magenta, and yellow. It is true that this lets in more light, and such filters have commonly been used for video. For still imaging, however, the math used to get from CMY to RGB amplifies the noise (see my previous comment) and there is generally no net benefit. I'm not in a good position to comment on Foveon technology. However, I can say that the color filter pattern of our new technology involves no significant changes to sensor design or manufacturing, so it is straightforward to apply it to different types of conventional image sensors. While I can't comment on specifics of our algorithms, I can say that development of this new technology will be ongoing. After all, improvements to processing Bayer pattern image sensors are still being developed more than 30 years after Kodak scientist Bryce Bayer invented his well-known pattern. Although our new technology will likely be seen first in consumer digital still cameras and cellphone cameras, this does not preclude the possibility of seeing our new technology in other applications in the future, potentially including professional photography. Thanks for all the interest!

Posted By: Chris (6/15/2007)

Comment: It seems to me that an unavoidable consequence of your approach is greater susceptibility to aliasing artifacts - after all, your spatial sampling of chroma information is much sparser, and I doubt that you are willing to blur the image enough to satisfy Nyquist. So I'll guess and ask: are you using some fancy algorithm to detect color aliasing and "undo" it? What you're doing may be a welcome improvement for popular lines of cameras - visual quality may be improved most of the time - but I have some reservations about whether this approach will meet with the approval of the pro market. Please comment.

Posted By: JasonF (6/15/2007)

Comment: Actually, in this pattern half the light goes to the luminance sensors, a quarter goes to the green, and red and blue get an eighth each. This should reduce the amount of color noise I see in photographs, and I'm excited to see this technology in some of the higher-end digital cameras down the line.

Posted By: steve (6/15/2007)

Comment: Hi, I have a question I hope you might discuss, since I know nothing about CMOS or CCD design, and would like to know why another simple idea is impractical: if you want each photosite to catch more photons, why are any of the colored sites sensitive to red-only or blue-only? If you have a panchromatic site to compare to, then sites sensitive to "everything but red" and "everything but blue" should serve just as well, but would catch many more photons. Is there a hardware difficulty in making a mask with this kind of frequency response?

Posted By: MikeJ (6/14/2007)

Comment: Does the new array have a name?

Posted By: AaronM (6/14/2007)

Comment: I wonder how this compares to a layered sensor like the Foveon in terms of speed and image accuracy

Posted By: johnnyliu (6/14/2007)

Comment: how about keeping traditional CFA, but using white-mixed light-red/light-green/light-blue, to provide the same high ISO performance, and reduce color diffusing

Posted By: Jeff Phillips (6/14/2007)

Comment: Very cool. The amateur astro community has been doing this sort of thing with mono cameras and filters, shooting LRGB. I'm curious, though, why the LL,GG,R,B pattern vs. a 2x2 LRGB pattern?

Posted By: Flavio (6/14/2007)

Comment: Interestingly, I was thinking a week ago, "how can we reorganize the Bayer pattern?" It's so great to hear you've finally come up with a practical solution. You are truly making photography go further, keeping up Kodak's R&D reputation. Congratulations!