Algorithmic antialiasing techniques "sample" the content of each pixel at multiple locations: the color is computed at more than one point inside the area the pixel covers, and the results of these samples are combined to determine the pixel's final color. The samples are, in effect, additional pixels that increase the effective resolution of the rendered image. If the edge of an object falls partway across a pixel, its color and the color of another object covering the rest of the pixel's area can both contribute to the final color. The result is a smoother transition from one row of pixels to the next along the edges of objects, where aliasing is most obvious.
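The idea can be sketched in a few lines. This is a hedged illustration, not NVIDIA's actual sample pattern: the toy `shade` scene and the uniform 2x2 sub-pixel grid are assumptions chosen for clarity.

```python
def shade(x, y):
    """Toy scene: a hypothetical edge at x = 0.5 separating a red
    object (left) from a blue background (right)."""
    return (1.0, 0.0, 0.0) if x < 0.5 else (0.0, 0.0, 1.0)

def pixel_color(px, py, grid=2):
    """Average grid x grid color samples spread across the pixel's area."""
    samples = []
    for i in range(grid):
        for j in range(grid):
            # Sample at the center of each sub-cell inside the pixel.
            sx = px + (i + 0.5) / grid
            sy = py + (j + 0.5) / grid
            samples.append(shade(sx, sy))
    n = len(samples)
    return tuple(sum(c[k] for c in samples) / n for k in range(3))

# A pixel straddling the edge blends the two colors instead of
# snapping to one of them, smoothing the visible transition.
print(pixel_color(0, 0))  # -> (0.5, 0.0, 0.5)
```

A pixel entirely inside one object still gets that object's pure color; only edge pixels are blended, which is exactly where aliasing appears.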
"Supersampling" is an antialiasing technique that is simply a brute force approach and is used in NVIDIA's GeForce2 GPUs and other modern graphics processors. A graphics processor that uses supersampling renders the screen image at a much higher resolution than the current display mode, and then scales and filters the image to the final resolution before it is sent to the display. A variety of methods exist for performing this operation, but each requires the graphics processor to render as many additional pixels as required by the supersampling method. Additionally, because the graphics processor is rendering more actual pixels than will be displayed, it must scale and filter those pixels down to the resolution for final display. This scaling and filtering can further reduce performance.
The degree of scaling in a specific supersampling mode is often identified by the ratio of pixels in the unscaled image to the number of pixels in the final, scaled output. For example, 2x supersampling writes twice as many pixels to the frame buffer as would be required without antialiasing; 4x writes four times as many. As you might guess, supersampling causes a substantial drop in performance as measured by frame rate. If the graphics processor renders four times as many pixels, the frame rate of a fill-rate-limited scene drops to roughly one fourth of what it was in the standard display mode. In fact, the drop can be even worse than the "x" multiple of the supersampling setting suggests, because of the scaling step mentioned in the previous paragraph.
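The arithmetic above can be made explicit. This is a rough, fill-rate-bound estimate under assumed names: `ss_factor` is the "x" multiple and `scale_overhead` is a hypothetical multiplier for the cost of the scale-and-filter pass.

```python
def estimated_fps(base_fps, ss_factor, scale_overhead=1.0):
    """Rough fill-rate-bound estimate: rendering ss_factor times as
    many pixels divides the frame rate by that factor, and
    scale_overhead (>= 1.0) models the extra cost of the
    scale-and-filter pass."""
    return base_fps / (ss_factor * scale_overhead)

# At 60 fps without antialiasing, 4x supersampling alone caps a
# fill-rate-limited scene near 15 fps; filtering overhead pushes
# the result lower still.
print(estimated_fps(60.0, 4))        # -> 15.0
print(estimated_fps(60.0, 4, 1.25))  # -> 12.0
```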
Quoted from NVIDIA Corporation