There have been significant advances in autofocus technologies in the past few years. Here we look at some of the systems camera manufacturers have developed to improve autofocusing performance.

-

The diagrams above show how phase-difference detection is used in the new Dual Pixel CMOS AF system. (Source: Canon.)

Traditionally, two autofocusing (AF) systems have co-existed in digital cameras: phase-detection systems in DSLRs and contrast-detection systems in non-reflex cameras. For phase-detection, light entering the lens is split into pairs of images and the intensity patterns (peaks and valleys) that indicate the focus point are compared by the AF processing system.

The separation error between the paired images is calculated to determine whether the point of focus falls in front of the subject (front focus) or behind it (back focus), enabling the camera to work out which way, and by how much, the lens must be adjusted. Autofocusing is usually very fast with these systems.
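To illustrate the principle only (this is not any manufacturer's actual algorithm), the separation can be estimated by sliding one intensity profile past the other and keeping the shift that aligns them best: the sign of that shift indicates front or back focus, and its magnitude indicates roughly how far to drive the lens. The function name, the sum-of-absolute-differences metric and the sample data below are assumptions made for this sketch.

```python
import numpy as np

def phase_detect_shift(profile_a, profile_b, max_shift=20):
    """Toy phase-detection sketch: find the pixel shift that best aligns the
    two line-sensor intensity profiles (lowest mean absolute difference).
    The sign of the result indicates front/back focus; the magnitude says
    roughly how far the lens must be driven."""
    n = len(profile_a)
    best_shift, best_score = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        lo, hi = max(0, shift), min(n, n + shift)
        a = profile_a[lo:hi]
        b = profile_b[lo - shift:hi - shift]
        score = np.abs(a - b).mean()
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift

# Hypothetical usage: the same soft edge seen by the two halves of the lens,
# offset by 7 pixels because the image is out of focus.
x = np.linspace(0, 1, 200)
edge = 1 / (1 + np.exp(-40 * (x - 0.5)))
print(phase_detect_shift(edge, np.roll(edge, 7)))  # prints -7: the second view lags by 7 pixels
```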

Contrast-detection systems take advantage of the fact that contrast is highest when an image is in focus. The difference in contrast between adjacent pixels on the sensor is measured and the lens is adjusted until the maximum contrast is achieved.

-

These diagrams show how phase-difference detection works. Light entering the lens is split into two images with two secondary microlens arrays in the AF sensor unit. Paired line sensors measure each focus point. The lens moves forward or backward depending on whether the focal point is in front of the subject (closer to the camera), or behind the subject (farther away from the camera). Because the camera can immediately calculate the direction and how much to move the lens, phase-difference AF can be very fast.

Focusing can be very accurate with contrast detection because focus is confirmed on the image sensor itself, at the point where contrast peaks. But it isn’t necessarily very fast. In addition, since no actual distance measurement is involved, the system can’t tell whether the lens is front- or back-focused, and focus tracking isn’t possible.

Sometimes the lens will rack back and forth until maximum contrast is found, a phenomenon known as ‘hunting’. Hunting is noticeable when you’re recording movie clips, and it also delays the triggering of the shutter for still shots, a delay known as ‘AF lag’.
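In rough pseudocode terms, contrast-detection AF is a hill-climbing search: nudge the focus motor, re-measure a sharpness score over the focus area, and keep going while the score improves; the back-and-forth around the peak is exactly the hunting described above. The `capture_focus_area` and `move_lens` callbacks and the step sizes below are hypothetical stand-ins for camera internals, not any real camera’s firmware.

```python
import numpy as np

def sharpness(patch):
    """Simple contrast metric: mean absolute difference between horizontally
    adjacent pixels. It peaks when the image is in focus."""
    return np.abs(np.diff(patch.astype(float), axis=1)).mean()

def contrast_detect_af(capture_focus_area, move_lens, step=8, max_iters=200):
    """Toy hill-climb sketch of contrast-detection AF (not a real camera
    algorithm). Moves the lens while sharpness rises; on overshoot it backs
    up, reverses direction and halves the step, converging on the peak."""
    direction = +1
    best = sharpness(capture_focus_area())
    for _ in range(max_iters):
        if step < 1:
            break                          # finest step reached: converged
        move_lens(direction * step)
        score = sharpness(capture_focus_area())
        if score < best:                   # overshot the peak: this is 'hunting'
            move_lens(-direction * step)   # back up to the better position
            direction = -direction         # try the other way...
            step //= 2                     # ...with a finer step
        else:
            best = score
    return best
```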

DSLR cameras can only record an image when the mirror is flipped up, which creates a lag between when the shutter button is pressed and the picture is taken. Because the dedicated phase-detection sensor only receives light while the mirror is down, phase-detection AF can’t be used in movie mode, and DSLR cameras default to contrast AF when they are switched to live view mode.

Sony addressed this problem with its ‘translucent-mirror technology’ in its SLT (Single-Lens Translucent) cameras. The semi-transmissive mirrors in these cameras don’t need to flip up, allowing phase-detection AF to operate all the time. Sony’s system is better at tracking moving subjects and supports fast continuous shooting speeds as well as being usable in movie mode. But, because it relies on light rays hitting the sensor from different angles, it can only operate at apertures of f/5.6 or larger.

Some manufacturers have developed ‘hybrid’ systems that combine both phase- and contrast-detection technologies. Fujifilm was the first company to embed phase-detection pixels in an image sensor, with the Super CCD EXR sensor in its F300EXR camera. Similar systems are provided in more recent FinePix cameras as well as recent releases from Canon, Nikon, Olympus, Ricoh, Samsung and Sony.

To make the phase detection system work, different sensing detectors must be able to capture light from different parts of the lens and form two separate images. To create these detectors, some of the photosites in the array are half covered by a black mask on one side, while others are masked on the opposite side. These left- and right-facing photosites register light coming from opposite sides of the lens, enabling the image processor to measure phase differences and determine how much adjustment the lens requires for sharp focus.

-

This diagram shows how much of the sensor is covered by the phase-detection sensors in a camera with a typical hybrid AF system. Designers must balance the area required to provide adequate tracking AF performance against the reduction in light reaching the sensor.

Typically, the phase-detection sensors cover a relatively small area of the image sensor. This is because the half-covered detectors reduce the amount of light reaching the sensor. In low light levels, the signal may not be strong enough to support autofocusing, and overall image quality could suffer through increased image noise.

And Now for Something New…

At the beginning of July 2013, Canon announced its Dual Pixel CMOS AF system, which provides phase-detection AF without substantially reducing the light reaching the sensor. It’s achieved by using two photodiodes in each photosite on the sensor: one ‘looking’ to the left and the other to the right.

This means readouts are captured simultaneously from each side of the photosite and the difference between them can be measured. The signal differences are used to calculate the distance to the subject, allowing the lens to focus upon it, as in a normal phase-detection AF system.

Canon’s illustrations show the photodiodes in each photosite lying side-by-side, which suggests the array resembles a linear array of AF points with no cross-type points. If this is the case, the system would only be sensitive horizontally (or vertically, depending on the direction of the linear array). Other advantages aside, this could put it at a disadvantage when compared with AF sensor arrays with multiple cross-type points, which can detect in both directions.

One further benefit of the new Canon technology is that these signals can be combined and output as a single image pixel, which Canon claims suffers from ‘virtually no light loss’. In other words, rather than having embedded ‘pixels’ dedicated to AF, which are separate from the imaging photosites, the dual pixels have both AF and imaging functions.
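Conceptually, each dual photosite therefore does double duty: read separately, the left and right photodiode signals provide the two offset views needed for phase detection; summed, they form a normal image pixel, which is why Canon can claim virtually no light loss. The sketch below illustrates that idea only, with made-up array names, and is not Canon’s implementation.

```python
import numpy as np

# Hypothetical readouts for one row of dual-pixel photosites:
# left[i] and right[i] are the two photodiode signals at photosite i.
rng = np.random.default_rng(0)
left = rng.random(1000)
right = rng.random(1000)

# Imaging path: summing the pair gives the ordinary image pixel value,
# so (per Canon's claim) virtually no light is given up to AF.
image_row = left + right

# AF path: read separately, left and right are two slightly offset views of
# the scene, so they can be compared in the same way as the paired line-sensor
# profiles in the phase_detect_shift sketch earlier in this article.
```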

Canon’s literature says approximately 80% of the shooting area of the CMOS sensor horizontally and vertically is covered by the dual pixels in the EOS 70D, the first camera with the new technology. The remaining photosites bordering this area have the same dual pixel structure but aren’t used for autofocusing.

Because the new system collects almost twice as much signal from the sensor as traditional AF systems, it provides much more information than any Hybrid CMOS AF system delivers. Canon has had to develop a dedicated IC (integrated circuit) so the 70D can process the focusing data separately from the image processing, in order to optimise focusing speeds.

-

The structure of the sensor in Canon’s Dual Pixel CMOS AF system uses two photodiodes in each photosite. The Bayer colour filter (which enables colour signals to be recorded) is shown overlaid upon the sensor. (Source: Canon.)

-

The diagram above shows how much of the sensor area in Canon’s EOS 70D is covered by the ‘Dual Pixels’ and how the system is used during focusing and shooting. (Source: Canon.)

 

Canon says its Dual Pixel CMOS AF technology is particularly effective for capturing moving subjects, claiming it can achieve AF speeds ‘that approach’ those of optical viewfinder shooting, even in live view.

Interestingly, the 70D’s system works slightly differently when recording still pictures and movie clips. In the still capture mode, it determines the degree of focus adjustment required before moving the lens. In movie mode, focusing and lens movement occur simultaneously to minimise any possible hunting.

Canon claims the Dual Pixel CMOS AF system is 30% faster than the hybrid systems introduced in the EOS 650D and EOS M. Our tests showed it could match the speed and accuracy of the AF systems in most video camcorders.

Who’s it For?

Dual Pixel CMOS AF technology makes the following promises:

1. It’s fast and accurate and capable of eliminating hunting when used in live view mode. This is a significant benefit to anyone who wants to record video with a DSLR.

2. Front- and back-focusing errors are effectively eliminated.

3. Focus tracking is fast and accurate enough to keep moving subjects in focus while recording movie clips or burst sequences.

4. The system will work with lenses at aperture settings as small as f/11, unlike conventional SLR AF systems, which become inoperable at effective apertures smaller than f/5.6. This allows video shooters to stop down to f/11 while recording.

5. It can be used in light levels as low as 0 EV (very dim lighting).

6. It can work with face detection technologies to keep moving subjects in focus.

7. It also works with teleconverters, which reduce a lens’s effective maximum aperture (a 2x converter on an f/5.6 lens gives an effective aperture of f/11, which is where the f/11 limit above matters).

The system is fully supported by 103 of Canon’s lenses and partially supported by the rest. Partially supported lenses use a similar system to hybrid AF in One-Shot AF mode, employing phase detection to determine the initial focus and then fine-tuning it with contrast detection.

What it Won’t Work For

According to Canon’s literature, as implemented in the EOS 70D, the Dual Pixel AF system is not ideal for shooting rapid action. In such situations users are advised to use the viewfinder for shot composition and shoot stills, rather than movies.

With the viewfinder, the 70D provides the same phase-detection AF system as found in the EOS 7D, which has an array of 19 cross-type points that can detect vertical and horizontal edges. In the centre of the array is an X-type sensor that can detect diagonals.

This system has some limitations: the central detector requires lenses of f/2.8 or faster, while the other points work with lenses as slow as f/5.6.

 

This is an excerpt from Photo Review Issue 58.
