11/12 What's the difference between a pixel value and an RGBA value? My
understanding is that RGBA is a 32-bit int w/ 8 bits each for red,
green, blue, and alpha, but I don't know how that maps to a pixel
value.
\_ A pixel "value" is your perception of the light that comes
out of the pixel on the screen. Look at the pixel; you see
white. That's the "value". In color monitors, this corresponds
to the combination of the contributions of the red, green, and
blue guns. Out of RGBA, only RGB contribute (A is opacity and
is used for intermediate computation). RGB can be in any format -
OpenGL supports float or 8-bit int (among others) per channel -
but in general there's a maximum value (in 8 bits, 0xff) that
maps to the maximum brightness. For more details:
http://www.faqs.org/faqs/graphics/colorspace-faq
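If it helps to see the mapping in code, here's a rough C sketch of
pulling the channels out of one packed 32-bit value. (The 0xAARRGGBB
byte order here is just one common convention; real framebuffers also
use RGBA, BGRA, etc.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* one packed 32-bit pixel, assumed 0xAARRGGBB layout */
        uint32_t pixel = 0x80FF8000u;
        uint8_t a = (pixel >> 24) & 0xFF;  /* opacity, used only for compositing */
        uint8_t r = (pixel >> 16) & 0xFF;  /* only R, G, B drive the screen */
        uint8_t g = (pixel >>  8) & 0xFF;
        uint8_t b =  pixel        & 0xFF;
        printf("R=%u G=%u B=%u A=%u\n", r, g, b, a);  /* R=255 G=128 B=0 A=128 */
        return 0;
    }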
\_ A pixel 'value' is actually an integer, not a perception of
the light. -- nitpick.
\_ Great Answer -!op
\_ So a pixel is simply a set of color values at the appropriate
transparency?
\_ No, it's the sum of red light, blue light, and green light.
(Why RGB? Your eyes have three kinds of color receptors, roughly
tuned to red, green, and blue.) It has nothing to do with transparency.
Transparency is only used for intermediate calculations
(google "Porter-Duff Compositing")
\_ Yermom told me all about your "pixel"
\_ By the same token, why do graphics cards have 32-bit color and not
just 24-bit ("true") color? Also, I always run windows/games in
16-bit, because it's good enough for most things. When does 32-bit
color really make things look better?
\_ A graphics card's "32 bit color" is RGBA (8 bits for each).
The monitor only shows RGB. So they are the same. Your eye
can distinguish roughly 1% gradations in intensity, so you
might think you only need about 100 color values per channel, not
256. But because of "gamma" (the eye's nonlinear response, which is
partially corrected for in the monitor; see the FAQ above), the
possible values of each channel are not distributed evenly
across what you can perceive. The only real place more than
8 bits/channel is used is in medical applications like X-rays,
where radiologists usually use about 12 bits/channel (and that's
just 12 bits of gray, not color at all). Summary: 8 bits
per channel is about all that's needed; you'd hardly notice,
if at all, if you had more bits per channel.
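To put rough numbers on that (a toy calculation assuming a simple
power-2.2 gamma curve, not a real model of vision): near a dark
luminance, linearly spread 8-bit codes jump in huge relative steps,
while gamma-coded 8-bit codes land much closer to the ~1% the eye
can see, which is why 8 gamma-coded bits are roughly enough.

    #include <math.h>
    #include <stdio.h>

    /* How big is the brightness jump between adjacent 8-bit codes near a
       dark luminance (1% of max), for linear coding vs gamma-2.2 coding? */
    int main(void) {
        double target = 0.01, gamma = 2.2;
        double lin_code = target * 255.0;                    /* ~2.6: almost no codes down here */
        double gam_code = 255.0 * pow(target, 1.0 / gamma);  /* ~31 */
        double lin_step = 1.0 / lin_code;                    /* relative jump per code step */
        double gam_step = gamma / gam_code;
        printf("linear: %.0f%% per step, gamma: %.0f%% per step\n",
               100.0 * lin_step, 100.0 * gam_step);          /* roughly 39% vs 7% */
        return 0;
    }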
\_ I can tell the difference between 16-bit and 24-bit color. (It's
easier with some images than others.) Even when transparency
effects aren't used, 32-bit sometimes is preferred over 24-bit
simply because the hardware can shuttle around double-words more
readily than 1.5 words.
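In code terms (a sketch, assuming a simple linear framebuffer): with
32-bit pixels every pixel starts on a 4-byte boundary and the offset
math is a shift, while packed 24-bit needs a multiply by 3 and pixels
straddle word boundaries.

    #include <stdio.h>

    int main(void) {
        /* byte offset of pixel x within one scanline */
        for (int x = 0; x < 4; x++)
            printf("x=%d  32bpp offset=%d (x<<2)   24bpp offset=%d (x*3)\n",
                   x, x << 2, x * 3);
        /* 32bpp offsets: 0,4,8,12 (aligned); 24bpp offsets: 0,3,6,9 */
        return 0;
    }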
\_ Also, the card is advertising framebuffer and GPU features
that affect intermediate calculations as well as final
presentation. Extra bits help prevent a lot of visible
artifacts from poorly normalized colors during steps like
alpha compositing, texture mapping, and multi-pass rendering.
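A toy illustration of why extra precision in intermediate steps
matters (not how any particular card does it): apply the same 10%
darkening ten times, rounding to 8 bits every pass versus once at
the end.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t q = 200;    /* rounded down to 8 bits after every pass */
        double  f = 200.0;  /* kept at higher precision, rounded once at the end */
        for (int pass = 0; pass < 10; pass++) {
            q = (uint8_t)(q * 9 / 10);   /* 10% darkening in 8-bit integer math */
            f = f * 0.9;                 /* same darkening, no per-pass rounding */
        }
        /* prints 67 vs 70: the per-pass truncation drifts, and that kind of
           drift is what shows up on screen as banding or color shifts */
        printf("rounded each pass: %u   rounded once: %u\n",
               q, (unsigned)(f + 0.5));
        return 0;
    }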