The lens in a camera is like an eye that lets light into the camera body and focuses it on the sensor, where the image is recorded. The larger the optical components (known as ‘elements’) in the lens, the more light gets in. The quality of the elements influences the camera’s picture quality.
The cost of producing large, high-quality lens elements represents a significant percentage of the price you pay for a digital camera or accessory lens. The size of the lens elements also influences the amount of light that can reach the sensor.
Light enters the lens through an adjustable aperture, which is controlled by an iris diaphragm that works like the pupil in your eye. The wider the aperture, the more light it admits.
Lens apertures are specified in f-stops, which represent the ratio of the focal length of the lens to the diameter of the aperture opening. Modern lenses use a standard f-stop scale in which each setting admits half as much light as the next wider setting; each halving (or doubling) of the light is a difference of one Exposure Value, or EV.
For a typical DSLR camera and lens, the sequence is as follows: f/2.8, f/4, f/5.6, f/8, f/11, f/16 and f/22. Most lenses offer intermediate f-stops at 1/3 EV intervals, with a typical sequence being: f/2.8, f/3.2, f/3.5, f/4, f/4.5, f/5, f/5.6, f/6.3, f/7.1, f/8, f/9, f/10, f/11, f/13, f/14, f/16, f/18, f/20 and f/22.
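The numbers in this sequence aren't arbitrary: each full stop multiplies the f-number by the square root of two, which halves the light reaching the sensor. The short sketch below (in Python, with the conventionally rounded markings such as f/5.6 and f/11 standing in for the exact values) shows how both the full-stop sequence and the doubling of light between f/5.6 and f/4 fall out of that rule.

```python
# A minimal sketch of the arithmetic behind the f-stop scale: each full stop
# multiplies the f-number by the square root of 2. Marked values such as f/5.6,
# f/11 and f/22 are conventional roundings of the exact figures printed below.
import math

def f_number(stops_from_f1):
    """f-number that is 'stops_from_f1' full stops above f/1."""
    return math.sqrt(2) ** stops_from_f1

# Full stops from f/2.8 (3 stops above f/1) to f/22 (9 stops above f/1).
print([round(f_number(n), 1) for n in range(3, 10)])
# [2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6]

# The light admitted is proportional to 1 / f-number squared, so f/4 passes
# roughly twice the light of f/5.6 - a one-stop (1 EV) difference.
print(round((5.6 / 4.0) ** 2, 2))   # 1.96, i.e. about 2x
```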
Aperture adjustment is fully supported in DSLRs and most mirrorless cameras. Digicams usually stop at f/8 and most point-and-press models only provide two aperture settings: wide-open and ‘stopped-down’.
This diagram shows the relationship between the sizes of f-stops in the standard sequence. The f/4 setting, for example, lets in twice the amount of light admitted by the f/5.6 setting.
Camera Focusing Systems
Most cameras have an array of autofocusing (AF) points on a dedicated sensor, with professional cameras having more sensor points than entry-level models.
Since these points are used to determine where the lens is focused, more points provide faster, more accurate autofocusing. The exposure is also metered at these points.
A typical AF sensor pattern from an entry-level DSLR camera. The brackets indicate the sensors that are mainly used for focus tracking.
A typical AF sensor pattern from a professional DSLR camera. Note the larger number of sensors in the array.
All modern digital cameras use one or both of two systems:
1. Phase detection, which is used by DSLR cameras when shooting with the viewfinder, separates the incoming light into pairs of images. These pairs are compared via a dedicated sensor and the focus of the lens is adjusted until the images coincide (a simplified sketch of this comparison appears after this list).
2. Contrast detection systems are used in cameras without viewfinders and in DSLRs in Live View mode; they are often slower than phase detection systems. They measure the intensity differences between adjacent pixels in the image produced by the lens and adjust the lens until the greatest differences are obtained.
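To make the phase-detection idea concrete, here is a toy sketch (not any manufacturer's algorithm) that compares two one-dimensional intensity profiles of the same subject detail, as the paired images from the split light path might appear, and estimates how far apart they are. The sign and size of the offset tell the camera which way, and roughly how far, to drive the lens; a result of zero means the images coincide and the subject is in focus.

```python
# Toy illustration of phase detection: two 1-D intensity profiles of the same
# subject detail, seen through opposite sides of the lens, are compared and the
# offset between them is measured. Zero offset means the images coincide.
import numpy as np

def estimate_offset(profile_a, profile_b):
    """Return the shift (in sensor pixels) that best aligns the two profiles."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Example: the same dark-to-bright edge, displaced by 4 pixels between the views.
view_1 = np.concatenate([np.zeros(20), np.ones(20)])   # edge at pixel 20
view_2 = np.concatenate([np.zeros(24), np.ones(16)])   # same edge, 4 pixels later
print(estimate_offset(view_2, view_1))   # 4 -> out of focus; drive the lens until 0
```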
Both systems may be unable to find focus when the overall subject contrast is low and/or the sensor can't detect edges in the subject.
This diagram shows how a contrast-detect autofocusing system works, with a black rectangle representing the subject and the AF sensors shown as red rectangles. The sensor labelled A is the only one capable of detecting contrast because it spans a contrast boundary. Sensor B would see all white; sensor C would see all black and sensor D would see all grey.
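A minimal sketch of the same idea in code follows. It stands in for the camera by stepping through candidate focus positions, scoring the image at each step by the summed differences between adjacent pixels, and keeping the position with the highest score; the blur simulation is an illustrative assumption rather than a model of any real lens.

```python
# Minimal sketch of contrast-detection autofocus: step the lens through a range
# of focus positions, score each image by the strength of adjacent-pixel
# differences, and keep the sharpest. The blur model below is purely illustrative.
import numpy as np

def contrast_score(strip):
    """Sum of squared differences between adjacent pixels: highest when edges are crisp."""
    return float(np.sum(np.diff(strip) ** 2))

def simulated_view(focus_error):
    """Stand-in for the lens: a hard black/white edge, blurred more as focus error grows."""
    edge = np.concatenate([np.zeros(50), np.ones(50)])
    width = 1 + abs(focus_error)               # larger error -> softer, wider edge
    return np.convolve(edge, np.ones(width) / width, mode="same")

best = max(range(-5, 6), key=lambda p: contrast_score(simulated_view(p)))
print(best)                                     # 0: the sharpest position wins

# A featureless surface (like the all-grey sensor D in the diagram) scores zero
# at every position, which is why such subjects defeat contrast-detection AF.
print(contrast_score(np.full(100, 0.5)))        # 0.0
```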
Examples include misty scenes and large, single-coloured surfaces such as walls or blue sky. When this happens, the camera may 'hunt' for focus, driving the lens back and forth through its focusing range.
Misty scenes with low contrast and diffuse edges make it difficult for contrast-based AF systems to find focus.
Even the simplest autofocusing systems can easily focus on subjects containing 'hard' edges and wide brightness differences.
Manual Focusing
Simple digicams and camera-phones seldom support manual focusing; most use zone focusing, which sets the lens to a particular distance. Normally two 'zones' are provided: macro (for close-ups) and infinity (for landscapes).
Some digicams display a linear scale to assist with manual focusing, but this usually requires estimating the camera-to-subject distance and using the arrow pad buttons to set the lens to that distance. In some cameras, the centre of the screen can be magnified for focus checking.
Cameras that provide manual focus modes require photographers to switch off autofocusing, either via the camera’s menu or with a slider on the lens. In interchangeable-lens cameras (and a few advanced digicams) the lens ring is turned to focus the subject. Some cameras require electronic links between the lens and the camera to allow the focus ring to be turned and trigger the focus confirmation light (which most cameras provide).
Face Detection
Face detection technology is common in modern cameras and a few recently released models include systems that can be set to detect cats' and dogs' faces. When these systems are switched on, the autofocusing system – and usually also the auto exposure system – links with a microprocessor that analyses the scene, looking for areas shaped like human or pet faces, with identifiable eyes and a mouth in the correct places.
In most cases, a rectangle is superimposed on each face in the scene. Many cameras will use a different coloured rectangle for the main subject to show which face will be used as the focusing target.
Face detection systems can pick up most faces where the entire face is visible – even when they are near the edges of the frame. Detected faces are outlined with a rectangle. However, they may not be able to identify faces that are almost totally obscured behind door or window frames or by sunglasses, goggles and hats.
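As an illustration of the principle (real cameras run it in dedicated firmware, not on a computer), the sketch below uses OpenCV's bundled Haar-cascade face detector to scan a photo for face-shaped patterns and outline each hit with a rectangle, much as a camera's live-view display does. The file name group.jpg is just a placeholder, and the frontal-face cascade also hints at why simpler systems miss faces in profile.

```python
# Illustration only: cameras do this in firmware, but OpenCV's bundled Haar-cascade
# detector works on the same principle - scan the frame for face-shaped patterns
# and mark each one with a rectangle. 'group.jpg' is a placeholder file name.
import cv2

image = cv2.imread("group.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# Outline every detected face, as a camera's live-view display would.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", image)
print(f"{len(faces)} face(s) detected")
```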
Many systems link with the flash to ensure a natural-looking balance of flash and ambient lighting. Exposure adjustment to correct backlighting is also usually provided and some cameras include face tracking to keep focus on moving subjects.
However, while these systems improve your chances of taking sharp pictures of people (particularly active children and pets), there are times when they can fail. Many systems can't pick up faces that are too close to the camera – or too far away. Simpler systems can't identify subjects in profile and even advanced systems can have difficulties when only a small part of the face is visible. Sunglasses, swimming goggles and other things that cover the eyes can also cause face detection systems to fail.
Scene Recognition
Scene recognition works by analysing colour, brightness and distance information from the camera's sensors and comparing the resulting patterns with patterns stored within the camera that characterise certain subject types. Most cameras include landscape, portrait, backlit portrait, twilight, night portrait and close-up recognition. Once the subject type is identified, the camera sets the aperture and shutter speed; the focusing, exposure and sensitivity ranges may be constrained and colour adjustments may be made to suit the selected subject type.
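Manufacturers don't publish how their matching works, but the idea of comparing measured patterns against stored ones can be sketched as a simple nearest-match lookup. The feature values and 'signatures' below (average brightness, colour saturation and focus distance) are invented purely for illustration.

```python
# Toy sketch of the pattern-matching idea behind scene recognition. The feature
# values (average brightness 0-1, colour saturation 0-1, focus distance in metres)
# and the stored signatures are invented for illustration; real cameras use far
# richer data and proprietary rules.
import math

SCENE_SIGNATURES = {
    "landscape":      (0.7, 0.6, 50.0),
    "portrait":       (0.6, 0.4, 2.0),
    "night portrait": (0.2, 0.3, 2.0),
    "close-up":       (0.5, 0.5, 0.3),
}

def recognise(brightness, saturation, distance_m):
    """Return the stored scene type whose signature is closest to the measurements."""
    measured = (brightness, saturation, distance_m)
    return min(SCENE_SIGNATURES, key=lambda s: math.dist(measured, SCENE_SIGNATURES[s]))

# A bright, colourful subject focused a long way off is matched to 'landscape',
# so the camera would pick landscape-style aperture, shutter and colour settings.
print(recognise(0.65, 0.55, 40.0))   # landscape
```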
Smile Shutter and Blink Detection technologies are included in many digicams. The former triggers the shutter when a smile is detected and most systems let you set detection levels between a slight smile and a broad grin.
Blink detection causes the camera to take several shots of the subject in a burst. Each shot is analysed and the camera saves the one in which most people’s eyes are open.
This pair of images illustrates the processing applied by in-camera red-eye correction. The top image shows the shot as it was taken with red patches caused by the on-camera flash. The bottom image shows the same image after in-camera red-eye correction.
In-camera red-eye detection analyses images after flash shots are taken, looking for red eyes in subjects. The red patches are replaced by dark blue-black, making subjects' eyes look more natural. Some cameras can carry out the process on-the-fly, while others require you to select red-eye correction in playback mode. Both approaches are equally effective, but neither is totally foolproof.
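The replacement step itself is straightforward to sketch. Assuming an eye region has already been located, the code below marks pixels where red overwhelms the other channels and repaints them with a dark blue-black tone; the threshold values are illustrative guesses, not any manufacturer's figures.

```python
# Minimal sketch of the correction step: inside an already-located eye region,
# find pixels where red clearly dominates and replace them with a dark blue-black
# tone. Region detection is omitted and the thresholds are illustrative only.
import numpy as np

def correct_red_eye(eye_region):
    """eye_region: H x W x 3 uint8 RGB patch cropped around one eye."""
    r = eye_region[..., 0].astype(float)
    g = eye_region[..., 1].astype(float)
    b = eye_region[..., 2].astype(float)
    red_mask = (r > 90) & (r > 1.8 * g) & (r > 1.8 * b)   # red overwhelms green and blue
    corrected = eye_region.copy()
    corrected[red_mask] = (20, 20, 35)                     # dark blue-black pupil
    return corrected

# Example: a small patch that is entirely flash-red becomes a dark pupil.
patch = np.full((4, 4, 3), (200, 40, 40), dtype=np.uint8)
print(correct_red_eye(patch)[0, 0])                        # [20 20 35]
```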
Depth of Field Control
The most impressive photographs are usually those in which the zone of sharp focus (or depth of field) is set by the photographer. Depth of field is controlled by three factors: the physical size of the sensor, the distance between the camera and the subject, and the combination of lens focal length and aperture setting.
The smaller the sensor the greater its inherent depth of field. This is why it’s easier to take sharp pictures with small-sensor digicams but you have more control over depth of field with DSLRs.
The Aperture-priority (A) shooting mode lets you adjust the lens aperture for depth-of-field control. Wide apertures (typically from f/2 to f/4) will blur background details, while small apertures (from about f/5.6 onwards with digicams and from about f/11 with interchangeable-lens cameras) will enable most of the subject to appear sharp in the picture.
The skill lies in choosing an aperture wide enough to blur the background without throwing the key elements of the subject out of focus. Many entry-level cameras provide in-camera assistance that lets you select from a range of pre-determined settings on the basis of the results you wish to achieve.
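To see how these factors interact, the sketch below applies the standard thin-lens depth-of-field approximations. The circle-of-confusion figure stands in for sensor size (about 0.030 mm is the commonly quoted value for a full-frame sensor, roughly 0.019 mm for APS-C), and the 50 mm lens focused at 3 m is simply an example.

```python
# Worked example using the standard thin-lens approximations (not any maker's
# firmware): the near and far limits of sharp focus from focal length, f-number,
# focus distance and the circle of confusion, which stands in for sensor size.
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.030):
    """Return approximate (near, far) limits of sharp focus in metres."""
    f = focal_mm / 1000.0
    c = coc_mm / 1000.0
    hyperfocal = f * f / (f_number * c) + f
    near = subject_m * (hyperfocal - f) / (hyperfocal + subject_m - 2 * f)
    far = (subject_m * (hyperfocal - f) / (hyperfocal - subject_m)
           if subject_m < hyperfocal else float("inf"))
    return near, far

# A 50 mm lens focused at 3 m on a full-frame camera:
for aperture in (2.8, 8, 16):
    near, far = depth_of_field(50, aperture, 3.0)
    print(f"f/{aperture}: sharp from about {near:.2f} m to {far:.2f} m")
# f/2.8 gives a shallow zone that blurs the background; stopping down to f/16
# brings far more of the scene into acceptably sharp focus.
```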
Useful URLs
The following websites provide additional information on the topics covered in this article.
Lens Advice page: everything you need to buy the right lens. More than 400 lenses listed in free downloadable PDFs, categorised and with links to more than 150 trusted reviews, Editor’s Choice, and tips articles that will guide you to buying the right lens:
/tips/lens-advice
Information on how to read lens sharpness graphs:
www.photoreview.com.au/tips/understanding-and-using-mtf-graphs-.aspx
Tips on making optimal use of a camera’s focusing system:
www.bythom.com/autofocus2.htm
This is an excerpt from Mastering Digital Photos 3rd Edition.