When visible light strikes an image sensor, electrons are generated by the photoelectric effect occurring in the silicon from which the sensor is made. Sensitivity is increased by capturing light without waste and efficiently converting it into electrons. Pixels in an image sensor must incorporate a photodiode and provide a number of functions, including starting and ending signal accumulation, selecting the pixel to be read out, and reading out the selected pixel's signal. Each pixel therefore consists of a photodiode plus transistors that perform these tasks. Sensitivity can be enhanced by modifying the transistor array and layout, and by making the photodiode as large as possible. Sony's first priority when it began to develop CMOS sensors was to improve the pixels.
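The relationship between captured light and generated electrons can be sketched as a simple quantum-efficiency calculation. This is a minimal illustrative model, not a Sony specification; the 60% efficiency and photon count below are assumed values.

```python
# Minimal sketch of photon-to-electron conversion via the photoelectric
# effect in a pixel. All numeric values are illustrative assumptions.

def photoelectrons(photons_incident: float, quantum_efficiency: float) -> float:
    """Mean number of electrons generated from incident photons.

    quantum_efficiency is the fraction of incident photons converted
    into electrons (0..1); raising it raises sensitivity.
    """
    if not 0.0 <= quantum_efficiency <= 1.0:
        raise ValueError("quantum efficiency must be between 0 and 1")
    return photons_incident * quantum_efficiency

# Example: 10,000 photons striking a pixel with an assumed 60% QE
print(photoelectrons(10_000, 0.60))  # 6000.0
```

In this model, enlarging the photodiode increases `photons_incident`, while improving the conversion process increases `quantum_efficiency`; both raise the output signal.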
In addition to using a larger photodiode, enhancing sensitivity also requires a system to guide the light accurately to the photodiode. Figure 1 is a cross-sectional photograph of a pixel. From top to bottom, each pixel consists of an on-chip micro-lens, an on-chip color filter, an inner lens, wiring, and a photodiode. The on-chip micro-lens and inner lens guide the light precisely onto a single pixel measuring less than 2 µm across, while the color filter enables color images to be captured by transmitting only red, green, or blue light. Between the inner lens and the photodiode is the wiring layer. If this layer is too thick, it may prevent the light that has passed through the micro-lens, color filter, and inner lens from reaching the photodiode, or cause it to collect in a location other than the photodiode. By using copper (Cu) wiring, Sony was able to reduce the thickness of the wiring layer, allowing the light to be collected effectively in the photodiode.
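The optical path described above can be sketched as a chain of lossy elements: each layer passes only a fraction of the incoming light, so the overall collection efficiency is the product of the per-layer transmittances. The layer names follow Figure 1, but every transmittance value below is an assumption for illustration only.

```python
# Toy model of the pixel's optical stack: light passes through the
# on-chip micro-lens, color filter, inner lens, and wiring aperture
# before reaching the photodiode. Overall efficiency is the product
# of per-layer transmittances. All values are assumed, not measured.

from math import prod

def collection_efficiency(transmittances: dict[str, float]) -> float:
    """Fraction of incident light that reaches the photodiode."""
    return prod(transmittances.values())

stack = {
    "on_chip_micro_lens": 0.95,  # assumed
    "color_filter": 0.90,        # assumed; passes one of R, G, B
    "inner_lens": 0.95,          # assumed
    "wiring_aperture": 0.85,     # assumed; thinner Cu wiring raises this
}
print(round(collection_efficiency(stack), 3))  # 0.69
```

In this toy model, thinning the wiring layer corresponds to raising the `wiring_aperture` term, which directly raises the overall efficiency; this mirrors the benefit the text attributes to Cu wiring.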