If you are just starting out learning photography, this section may be of more use later, after you master things like exposure, depth of field, ISO, shutter speed, sharpness and histogram.
Light is captured by your sensor, measured, and converted into numbers that identify its location, intensity and color. How exactly this is done is hard-core science and is not useful in the art of photography, so happily I'll ignore that part of the process. There are, however, some points that can assist us with processing images.
A pixel (aka a picture element) is a point on the camera's sensor that represents one color at a specific level of brightness. Pixels are laid out in a grid pattern of columns and rows, and each pixel records light information (brightness and color) for a point in this grid. Together, these points of color, at varying intensities, make up the tones and colors of a digital image. More pixels equals more resolution, and the higher the resolution, the bigger an image can be blown up and retain sharpness. Physically larger pixels are said to be better able to gather light and produce better color with less noise. Hence I have often heard it claimed that larger sensors with larger pixels will give files with more colors and less noise. All the major manufacturers of high-end digital single lens reflex (DSLR) cameras offer very expensive cameras with lots of large pixels, rendering breathtaking color and detail. Medium format cameras go even further. The prices reflect the quality and are generally way out of reach for most photographers, even professional photographers.
Cameras are often marketed by proclaiming this year's model has even more pixels. I am currently stuck with a 10 megapixel camera, when the majority of what I shoot would be fine with a 6 megapixel camera. More megapixels have not given me more quality, just bigger files which take longer to capture, download and process, and take up many hard drives to store and back up! I shoot people and have made clients happy with 8 megapixel files that have been used by major magazines, graphic design firms and ad agencies. When I look at files on my computer screen or make an 8×10 print, I can't tell if I shot them with my Canon 20D (8.2 megapixels), 30D (8.2 MP), 40D (10.1 MP) or Mark II (8 MP). While my cameras may not produce files with the best possible color, greatest latitude and lowest noise, they are excellent for my purposes. I am not a detail freak; ultimate color and absolutely pure, rich files are not a major concern for me. However, you may have a fetish for the most dazzling color and detail so deep your viewer is hypnotized and rendered speechless. The top-of-the-line cameras may be what you are after to achieve your goals. Check out http://www.dxomark.com/ for camera comparisons and see how your camera ranks and which cameras offer the best image quality.
Binary – Digital Data
All computer data is based on the binary system. The smallest unit of digital information is a bit. A bit can be on or off, 1 or 0. Whether it's a music file, a text document or a photo, what we see on our computers comes from strings of 0's and 1's.
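For the curious, here's a tiny Python sketch (purely my own illustration, nothing you need for photography) of how a string of 0's and 1's becomes a number and back again:

```python
# Each extra bit doubles the number of values a string of bits can hold.
value = 200                     # one 8-bit brightness value
bits = format(value, '08b')     # the same value as a string of 0's and 1's
print(bits)                     # -> 11001000
print(int(bits, 2))             # convert the bit string back -> 200
```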
An image that has more bit depth, has more colors:
2 bit color = 4 colors
4 bit = 16 colors
8 bit = 256 colors
16 bit = 65,536 colors
24 bit = 16,777,216 colors
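The pattern behind that table is simply doubling: n bits give 2 to the power of n colors. A quick Python sketch (purely illustrative) reproduces the numbers above:

```python
# n bits can represent 2**n distinct values, i.e. 2**n colors.
for bits in (2, 4, 8, 16, 24):
    print(f"{bits} bit = {2**bits:,} colors")
# -> 2 bit = 4 colors ... 24 bit = 16,777,216 colors
```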
Bit Depth = Color Depth
Bit depth refers to how many colors your computer system can display, or the number of colors a pixel can capture. Greater bit depth allows for greater tonal range, rendering images with more subtlety of color and more shades of any given color.
Viewing a natural scene or a well-printed color print, our eyes are able to see millions of colors, roughly 16 million. Computer systems, monitors and cameras vary in their capability to display color. Monitors can be set to display more or less color. Current computer systems usually have a monitor and video card that can display true color, around 16 million colors. Cameras also vary in their ability to capture color. A camera's bit depth is what determines its ability to capture color.
Color in a digital image is created through the combination of the 3 primary colors. Each color, red, green and blue, has its own color channel. If your camera captures at 8 bits per channel (and most do), then for each of the 3 channels the color is defined as a number from 0 to 255. If, for example, you are looking at the red channel of a given digital image, red will have a value somewhere from 0 to 255, where 0 = no red at all and 255 = red at full intensity. If the value for red is a low number, it will be a darker red; with a high number, closer to 255, the red will be a brighter shade. Each of the 3 channels will have a value for its primary color, either red, green or blue. Combined, these 3 values define the color of a given pixel. An image with 8 bits per channel is often referred to as a 24 bit image because 8 bits × 3 channels = 24 bits.
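If you like to see the math, here is a small Python sketch of how 3 channel values combine into one 24 bit pixel color. The specific values are hypothetical, chosen to make a fairly bright, saturated red:

```python
# One pixel in an 8-bits-per-channel image: each channel is 0-255.
red, green, blue = 200, 30, 30
# Packed together, these 3 channel values pick one of 2**24 possible
# colors, often written as a hex triplet (the web's #rrggbb notation).
hex_color = f"#{red:02x}{green:02x}{blue:02x}"
print(hex_color)                # -> #c81e1e
```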
The more data we have to work with, the less likely we are to run into problems when we make changes to the image in digital editing software. When we change color or tonal values, we are actually throwing away data. In Photoshop we have the ability to edit our raw files in 16 bit instead of 8 bit. With 65,536 levels per channel, we have quite a bit more to work with than the usual 256 per channel. When making very dramatic changes to tone or color, you will run into fewer complications like banding, solarization in tonal transition areas, and images that look 'digital'.
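To make the banding problem concrete, here is a rough Python sketch. A simple multiply-and-round stands in for a real curves or levels adjustment (an assumption on my part, but the effect is the same): once an 8-bit edit collapses tones together, a second edit cannot bring them back.

```python
# Darken an 8-bit channel severely, then try to brighten it back up.
levels = list(range(256))                        # all 256 possible tones
darkened = [round(v * 0.3) for v in levels]      # strong darkening edit
restored = [min(255, round(v / 0.3)) for v in darkened]  # attempt to undo
# Many original tones now share the same value: gaps open up in the
# histogram, and smooth gradients turn into visible bands.
print(len(set(restored)))                        # far fewer than 256 survive
```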