Charge-coupled devices are the basis of modern television technology. Main characteristics of CCTV cameras

Vendors now offer a huge selection of cameras for video surveillance. Models differ not only in parameters common to all cameras - focal length, viewing angle, light sensitivity, etc. - but also in various branded features that each manufacturer seeks to equip its devices with.

As a result, even a short description of a surveillance camera's characteristics can be a frightening list of obscure terms, for example: 1/2.8" 2.4MP CMOS, 25/30fps, OSD Menu, DWDR, ICR, AWB, AGC, BLC, 3DNR, Smart IR, IP67, 0.05 Lux, and that is far from all.

In the previous article, we focused on video standards and camera classifications depending on them. Today we will analyze the main characteristics of video surveillance cameras and decipher the designations of special technologies used to improve the quality of the video signal:

  1. Focal length and viewing angle
  2. Aperture (F-number) or lens speed
  3. Iris adjustment (auto iris)
  4. Electronic shutter (AES, shutter, shutter speed)
  5. Sensitivity (light sensitivity, minimum illumination)
  6. Protection classes IK (Vandal-proof, anti-vandal) and IP (from moisture and dust)

Sensor type (CCD or CMOS)

There are two types of CCTV camera matrices: CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor). They differ both in design and in principle of operation.

Comparison of CCD and CMOS:

  • Readout: a CCD reads all matrix cells sequentially; a CMOS sensor reads cells arbitrarily, which reduces the risk of smear - vertical streaking of point light sources (lamps, lanterns).
  • Noise: a CCD has a low noise level; CMOS is noisier because of so-called dark currents.
  • Motion: a CCD has high dynamic sensitivity (better suited to shooting moving objects); CMOS is prone to the "rolling shutter" effect - horizontal stripes and image distortion can appear when shooting fast-moving objects.
  • Integration: a CCD die holds only the photosensitive elements, and the rest of the circuitry must be placed separately, which increases camera size and cost; in CMOS, all circuitry can be placed on a single chip, making cameras with CMOS sensors simple and inexpensive to produce.
  • Efficiency and power: because a CCD uses the sensor area only for photosensitive elements, its fill efficiency approaches 100%; CMOS consumes almost 100 times less power than CCD.
  • Cost and speed: CCD production is expensive and complex; CMOS offers higher readout speed.

For a long time it was believed that a CCD matrix gives much better image quality than CMOS. However, modern CMOS matrices are often practically on par with CCDs, especially when the requirements for the video surveillance system are not too demanding.

Matrix size

Indicates the size of the matrix diagonally in inches and is written as a fraction: 1/3", 1/2", 1/4", etc.

It is generally believed that the larger the matrix, the better: less noise, a clearer picture, a larger viewing angle. In fact, however, the best image quality is determined not by the size of the matrix but by the size of its individual cell, or pixel: the bigger, the better. Therefore, when choosing a camera for video surveillance, you need to consider the matrix size together with the number of pixels.

If 1/3" and 1/4" matrices have the same number of pixels, the 1/3" matrix will naturally give the better image. But if their pixel counts differ, you need to pick up a calculator and work out the approximate pixel size.

For example, the cell-size figures below show that in many cases the pixel on a 1/4" matrix is larger than on a 1/3" one, which means the image from the 1/4" sensor, though smaller, will be of better quality.

Matrix size   Number of pixels (million)   Cell size (µm)
1/6      0.8    2.30
1/3      3.1    2.35
1/3.4    2.2    2.30
1/3.6    2.1    2.40
1/3.4    2.23   2.45
1/4      1.55   2.50
1/4.7    1.07   2.50
1/4      1.33   2.70
1/4      1.2    2.80
1/6      0.54   2.84
1/3.6    1.33   3.00
1/3.8    1.02   3.30
1/4      0.8    3.50
1/4      0.45   4.60
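The "pick up a calculator" step above can be scripted. Below is a minimal sketch; the active-area diagonals in the lookup table are assumed typical values, since the "inch" format designation is a vidicon-tube convention rather than a literal measurement, and a 4:3 aspect ratio is assumed.

```python
import math

# Approximate active-area diagonals for common optical formats (mm).
# These are typical published figures, not exact per-sensor values.
FORMAT_DIAG_MM = {'1/4"': 4.0, '1/3"': 6.0, '1/2"': 8.0}

def approx_cell_size_um(fmt: str, megapixels: float, aspect=(4, 3)) -> float:
    """Estimate pixel pitch (µm), assuming a 4:3 active area."""
    diag = FORMAT_DIAG_MM[fmt]
    ax, ay = aspect
    k = math.hypot(ax, ay)                        # diagonal of the aspect triangle
    width, height = diag * ax / k, diag * ay / k  # active area, mm
    area_um2 = width * height * 1e6               # mm^2 -> µm^2
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(round(approx_cell_size_um('1/3"', 3.1), 2))  # ≈ 2.36 µm
print(round(approx_cell_size_um('1/4"', 1.2), 2))  # ≈ 2.53 µm
```

The results agree reasonably with some table rows; real sensors deviate because part of the die is non-imaging.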

Focal length and viewing angle

These parameters matter greatly when choosing a camera for video surveillance, and they are closely related. The focal length of a lens (often denoted f) is, roughly, the distance between the optical center of the lens and the sensor.

In practice, the focal length determines the angle and range of the camera:

  • the smaller the focal length, the wider the viewing angle and the less detail can be seen on objects located far away;
  • the longer the focal length, the narrower the camera's angle of view and the more detailed the image of distant objects.
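The relationship above can be made concrete with the standard pinhole formula: angle = 2·arctan(w / 2f), where w is the sensor width. A sketch assuming a 1/3" sensor roughly 4.8 mm wide (the function name is ours):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal viewing angle from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 1/3" sensor is about 4.8 mm wide; with the popular 3.6 mm lens:
print(round(horizontal_fov_deg(4.8, 3.6), 1))   # ≈ 67.4°
# A longer 12 mm lens on the same sensor gives a much narrower angle:
print(round(horizontal_fov_deg(4.8, 12.0), 1))  # ≈ 22.6°
```

Note how 3.6 mm lands in the "conventional" 30°-70° band mentioned below, consistent with the claim that it roughly matches the human eye.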


If you need a general overview of some area, and you want to use as few cameras as possible for this, buy a camera with a short focal length and, accordingly, a wide viewing angle.

But where detailed observation of a relatively small area is required, it is better to install a camera with a longer focal length, aimed at the object of observation. This is common at supermarket and bank cash desks, where you need to see banknote denominations and other transaction details, and at parking entrances and other areas where a license plate must be readable from a distance.


The most common focal length is 3.6mm. It roughly corresponds to the viewing angle of the human eye. Cameras with this focal length are used for video surveillance in small rooms.

The table below shows the relationship between focal length, viewing angle, recognition distance, etc. for the most common focal lengths. The figures are approximate, since they depend not only on the focal length but also on other parameters of the camera's optics.

Depending on the width of the viewing angle, cameras for video surveillance are usually divided into:

  • conventional (viewing angle 30°-70°);
  • wide-angle (viewing angle above roughly 70°);
  • telephoto (viewing angle less than 30°).

The letter F (usually capitalized) also denotes the lens aperture, so when reading specifications pay attention to the context in which the parameter is used.

Lens type

A fixed (monofocal) lens is the simplest and least expensive: its focal length is fixed and cannot be changed.

In a varifocal lens the focal length can be changed. It is adjusted manually, usually once when the camera is installed at the shooting location, and later as needed.

Transfocator, or zoom, lenses also allow the focal length to be changed, but remotely and at any time. The focal length is changed by an electric drive, which is why they are also called motorized lenses.

"Fish eye" (fisheye, fisheye) or panoramic lens allows you to install just one camera and achieve a 360° view.


Of course, the resulting image has a "bubble" effect: straight lines appear curved. However, in most cases cameras with such lenses can split the single panoramic image into several separate views, corrected for the perception familiar to the human eye.

Pinhole lenses allow covert video surveillance thanks to their miniature size. Strictly speaking, a pinhole camera has no lens at all, only a miniature aperture instead. In Ukraine, the use of covert video surveillance is seriously restricted, as is the sale of devices for it.

These are the most common lens types, although lenses are also classified by a number of other parameters.

Aperture (F-number) or lens speed

Determines the camera's ability to capture high-quality images in low light. The higher the F-number, the smaller the aperture opening and the more light the camera needs. The lower the F-number, the wider the aperture opens, and the camera can produce clear images even in poor lighting.

The letter f (usually lowercase) also denotes the focal length, so when reading the characteristics, pay attention to the context in which the parameter is used. For example, in the picture above, the aperture is indicated by a small f.
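The rule of thumb above follows from the inverse-square relationship between F-number and gathered light. A small illustration (the function name and reference value are ours):

```python
def relative_light(f_number: float, reference_f: float = 1.0) -> float:
    """Light gathered relative to a lens at `reference_f` (inverse-square law)."""
    return (reference_f / f_number) ** 2

# Each full stop (multiplying the F-number by √2) halves the light
# reaching the sensor, so F1.4 gathers 4x more light than F2.8:
print(relative_light(2.8) / relative_light(1.4))  # ≈ 0.25
```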

Lens mount

There are 3 types of mounts for attaching a lens to a video camera: C, CS, M12.

  • Mount C is now rarely used. C lenses can be attached to a CS mount camera using a special ring.
  • The CS mount is the most common type. CS lenses are not compatible with C cameras.
  • The M12 mount is used for small lenses.

Iris adjustment (auto iris, ARD)

The iris controls the flow of light to the matrix: when the light flow increases, it narrows, preventing overexposure of the image, and in low light it opens wider so that more light reaches the matrix.

There are two large groups of cameras: with a fixed iris (this also includes cameras without one at all) and with an adjustable iris.

Aperture adjustment in various models of cameras for video surveillance can be carried out:

  • Manually.
  • Automatically by the camera itself, using a direct-current signal based on the amount of light hitting the sensor. This type of automatic iris control is designated DD (Direct Drive) or DD/DC.
  • Automatically by a special module built into the lens that tracks the light flux passing through the aperture. In camera specifications this method is designated VD (Video Drive). It remains effective even when direct sunlight enters the lens, but surveillance cameras with it are more expensive.

Electronic shutter (AES, shutter, shutter speed)

Different manufacturers may call this parameter automatic electronic shutter, shutter, or shutter speed, but in essence it means the same thing: the time during which light exposes the matrix. It is usually expressed as a range such as 1/50-1/100,000 s.

The action of the electronic shutter is somewhat similar to automatic iris adjustment: it adapts the light sensitivity of the matrix to the illumination level of the scene. In the figure below you can see image quality in low light at different shutter speeds (the figure shows manual adjustment; AES does this automatically).

Unlike auto-iris adjustment, the regulation is performed not by changing the light flux reaching the matrix, but by changing the shutter speed, i.e., the duration of electric-charge accumulation on the matrix.

However, the capabilities of the electronic shutter are much weaker than automatic iris adjustment, so in open spaces, where illumination varies from dusk to bright sunlight, it is better to use cameras with auto iris. Cameras with only an electronic shutter are best suited to rooms where the illumination level changes little over time.

The characteristics of the electronic shutter do not differ much between models. A useful feature is manual shutter-speed adjustment: in low light, slow shutter speeds are set automatically, which leads to blurred images of moving objects.

Sens-UP (or DSS)

This is a function of the accumulation of the charge of the matrix depending on the level of illumination, i.e., increasing its sensitivity to the detriment of speed. Necessary for capturing a high-quality image in poor lighting conditions, when tracking high-speed events is not critical (there are no fast moving objects on the object of observation).

It is closely related to the shutter speed (shutter speed) described above. But if the shutter speed is expressed in time units, then Sens-UP is in the shutter speed increase factor (xN): the charge accumulation time (shutter speed) increases N times.
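The xN notation can be shown with a worked example (illustrative values only):

```python
from fractions import Fraction

def effective_shutter(base: Fraction, sens_up_factor: int) -> Fraction:
    """Sens-UP (DSS) multiplies the charge-accumulation time by the xN factor."""
    return base * sens_up_factor

# A 1/50 s shutter with Sens-UP x8 accumulates light for 8/50 = 4/25 s,
# brightening the image at the cost of motion blur:
print(effective_shutter(Fraction(1, 50), 8))  # 4/25
```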

Resolution

We touched briefly on the topic of CCTV camera resolution in the last article. Camera resolution is, in essence, the size of the resulting image. It is measured either in TVL (television lines) or in pixels. The higher the resolution, the more detail you can see in the video.

Video camera resolution in TVL is the number of vertical lines (brightness transitions) that fit across the image horizontally. It is considered the more accurate measure, since it describes the actual output image. The megapixel figure in the manufacturer's documentation, by contrast, can mislead the buyer: it often refers not to the size of the final image but to the number of pixels on the matrix. In that case, pay attention to the parameter "Effective number of pixels".

Resolution in pixels is the size of the picture horizontally and vertically (when specified as 1280 × 960) or the total number of pixels in the picture (when specified as 1 MP (megapixel), 2 MP, etc.). Converting to megapixels is simple: multiply the number of horizontal pixels by the number of vertical ones and divide by 1,000,000. For example, 1280 × 960 = 1,228,800 pixels ≈ 1.23 MP.
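The same arithmetic in code form:

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count of a frame, in megapixels."""
    return width * height / 1_000_000

print(round(megapixels(1280, 960), 2))   # 1.23
print(round(megapixels(1920, 1080), 2))  # 2.07 (Full HD)
```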

How to convert TVL to pixels and vice versa? There is no exact conversion formula. To determine the video resolution in TVL, you need to use special test tables for video cameras. For an approximate representation of the ratio, you can use the table:


Effective pixels

As we said above, the megapixel figure given in camera specifications often does not accurately reflect the resolution of the resulting image: the manufacturer states the number of pixels on the camera's matrix (sensor), but not all of them are involved in creating the picture.

Therefore, the parameter "Number (number) of effective pixels" was introduced, which just shows how many pixels form the final image. Most often, it corresponds to the actual resolution of the resulting image, although there are exceptions.

IR (infrared) illumination, IR

Allows shooting at night. The capabilities of a surveillance camera's matrix (sensor) far exceed those of the human eye; for example, the camera can "see" infrared radiation. This property came to be used for shooting at night and in unlit or dimly lit rooms. When illumination falls to a certain minimum, the camera switches to infrared recording mode and turns on the IR illuminator.

IR LEDs are built into the camera in such a way that the light from them does not fall into the camera lens, but illuminates the viewing angle.

An image captured in low light conditions using infrared illumination is always black and white. Color cameras that support night shooting also switch to black and white mode.

IR illumination values ​​in video cameras are usually given in meters - that is, how many meters from the camera the illumination allows you to get a clear image. An IR light with a long range is called an IR illuminator.

What is Smart IR?

Smart IR allows the camera to increase or decrease the power of the infrared illumination depending on the distance to the object, so that objects close to the camera are not overexposed in the video.

IR filter (ICR), day/night mode

The use of infrared illumination for filming at night has one peculiarity: the matrix of such cameras is produced with increased sensitivity to the infrared range. This creates a problem for shooting in the daytime, since the matrix registers the infrared spectrum during the day, which violates the normal color of the resulting image.

Therefore, such cameras operate in two modes - day and night. During the day, the sensor is covered by a mechanical infrared filter (ICR), which cuts off infrared radiation. At night, the filter is shifted, allowing the rays of the IR spectrum to freely hit the matrix.

Sometimes day/night mode switching is implemented in software, but this solution produces lower quality images.

The ICR filter can also be installed in cameras without infrared illumination - to cut off the infrared spectrum in the daytime and improve the color rendering of the video.

If the camera has no ICR filter because it was not originally designed for night shooting, you cannot add a night-shooting function to it simply by purchasing a separate IR module. In that case, the colors of the daytime video would be significantly distorted.

Sensitivity (light sensitivity, minimum illumination)

Unlike photo and video cameras, where sensitivity is expressed in ISO units, the sensitivity of CCTV cameras is most often expressed in lux and means the minimum illumination at which the camera can still produce a good-quality video image: clear and free of noise. The lower the value of this parameter, the higher the sensitivity.

Surveillance cameras are selected in accordance with the conditions in which they are planned to be used: for example, if the minimum sensitivity of the camera is 1 lux, then it will not be possible to obtain a clear image at night without additional infrared illumination.

Conditions: Light level
Natural light outdoors on a cloudless sunny day: over 100,000 lux
Natural light outdoors on a sunny day with light clouds: 70,000 lux
Natural light outdoors on a cloudy day: 20,000 lux
Shops, supermarkets: 750-1,500 lux
Office or shop: 50-500 lux
Hotel halls: 100-200 lux
Parking lots, warehouses: 30-75 lux
Well-lit motorway at night: 10 lux
Twilight: 4 lux
Theatre audience seating: 3-5 lux
Hospital at night, deep twilight: 1 lux
Full moon: 0.1-0.3 lux
Moonlit night (quarter moon): 0.05 lux
Clear moonless night: 0.001 lux
Cloudy moonless night: 0.0001 lux

Signal-to-noise ratio (S/N)

The signal-to-noise ratio (S/N) determines the quality of the video signal. Noise appears in video as a result of poor lighting and looks like colored or black-and-white snow or grain.

The parameter is measured in decibels. In the picture below, quite acceptable image quality is already visible at 30 dB, but in modern cameras, to obtain high-quality video, S/N should be at least 40 dB.
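For voltage-type signals, decibels are computed as 20·log10(S/N). A quick sketch (function name and sample values are ours) showing why 40 dB corresponds to a signal 100 times stronger than the noise:

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels (20*log10 for voltage ratios)."""
    return 20 * math.log10(signal_rms / noise_rms)

# A signal 100x stronger than the noise gives the recommended 40 dB:
print(snr_db(100.0, 1.0))        # 40.0
# 30 dB corresponds to roughly a 32:1 voltage ratio:
print(round(snr_db(31.6, 1.0)))  # ≈ 30
```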

DNR noise reduction (3D-DNR, 2D-DNR)

Naturally, the problem of noise in video did not go unnoticed by manufacturers. At the moment there are two noise-reduction technologies for improving the image:

  • 2D-DNR. An older and less advanced technology. It mainly removes only foreground noise, and the cleanup sometimes slightly blurs the image.
  • 3D-DNR. A newer technology that works by a more complex algorithm and removes not only nearby noise but also snow and grain in the far background.

Frame rate, fps (stream rate)

The frame rate affects the smoothness of the video image - the higher it is, the better. To achieve a smooth picture, a frequency of at least 16-17 frames per second is required. The PAL and SECAM standards support frame rates at 25 fps, while the NTSC standard supports 30 fps. For professional cameras, the frame rate can reach up to 120 fps and higher.

However, keep in mind that the higher the frame rate, the more space will be required to store the video and the more the transmission channel will be loaded.
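As a rough illustration of that trade-off, here is a back-of-the-envelope storage estimate. The bitrates and the assumption that bitrate scales roughly linearly with frame rate are ours; real codecs are not perfectly linear.

```python
def storage_gb_per_day(bitrate_mbps: float) -> float:
    """Approximate disk space for 24 h of continuous recording."""
    seconds = 24 * 3600
    return bitrate_mbps * seconds / 8 / 1000   # Mbit -> MB -> GB (decimal units)

# Halving the frame rate roughly halves the bitrate, and therefore the storage:
print(round(storage_gb_per_day(4.0), 1))  # 43.2 GB/day at 4 Mbit/s
print(round(storage_gb_per_day(2.0), 1))  # 21.6 GB/day at 2 Mbit/s
```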

Backlight compensation (HLC, BLC, WDR, DWDR)

Common video surveillance problems are:

  • individual bright objects in the frame (headlights, lamps, lanterns) that flare part of the image, making it impossible to see important details;
  • overly bright background lighting (a sunny street outside the doors of the room or outside the window, etc.), against which nearby objects appear too dark.

To solve them, there are several functions (technologies) used in surveillance cameras.

HLC - highlight compensation: the camera detects overly bright areas (headlights, spotlights) and masks or attenuates them so that the details next to them remain visible.

BLC - backlight compensation. It works by increasing the exposure of the entire image, so objects in the foreground become brighter, but the background turns out too light and its details cannot be seen.

WDR (sometimes also called HDR) is a wide dynamic range. Also used for backlight compensation, but more effective than BLC. When using WDR, all objects in the video have approximately the same brightness and clarity, which allows you to see in detail not only the foreground, but also the background. This is achieved due to the fact that the camera takes pictures with different exposures, and then combines them to get a frame with the optimal brightness of all objects.

D-WDR - software implementation of wide dynamic range, which is somewhat worse than a full-fledged WDR.

Protection classes IK (Vandal-proof, anti-vandal) and IP (from moisture and dust)

This parameter is important if you choose a camera for outdoor video surveillance or in a room with high humidity, dust, etc.

IP classes indicate protection against the ingress of foreign objects of various diameters, including dust particles, as well as against moisture. IK classes indicate anti-vandal protection, i.e., resistance to mechanical impact.

The most common protection classes among outdoor surveillance cameras are IP66, IP67 and IK10.

  • Protection class IP66: the camera is completely dustproof and protected from strong water jets (or sea waves); water may enter only in small quantities that do not interfere with the camera's operation.
  • Protection class IP67: The camera is completely dustproof and can withstand short-term full immersion under water or long periods under snow.
  • Anti-vandal protection class IK10: the camera body withstands the impact of a 5 kg mass dropped from a height of 40 cm (impact energy 20 J).
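The 20 J figure is easy to verify from the drop-test parameters using E = m·g·h:

```python
G = 9.81  # m/s^2, standard gravity

def impact_energy_joules(mass_kg: float, drop_height_m: float) -> float:
    """Potential energy of a dropped mass: E = m*g*h."""
    return mass_kg * G * drop_height_m

# IK10 test: a 5 kg mass dropped from 40 cm gives roughly 20 J:
print(round(impact_energy_joules(5.0, 0.4), 1))  # 19.6
```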

Hidden areas (Privacy Mask)

Sometimes it becomes necessary to hide from observation and recording some areas that fall into the field of view of the camera. Most often this is due to the protection of privacy. Some camera models allow you to adjust the parameters of several such zones, covering a certain part or parts of the image.

For example, in the figure below, the windows of the neighboring house are hidden in the camera image.

Other functions of CCTV cameras (DIS, AGC, AWB, etc.)

OSD menu - the ability to manually adjust many camera parameters: exposure, brightness, focal length (if the lens allows it), etc.

- shooting in low light conditions without infrared illumination.

DIS - digital image stabilization, for shooting in conditions of vibration or movement.

EXIR technology - an infrared illumination technology developed by Hikvision. It achieves greater illuminator efficiency: longer range with lower power consumption, less dispersion, etc.

AWB - automatic white-balance adjustment, so that color reproduction is as close as possible to what the human eye sees naturally. Particularly relevant for rooms with artificial lighting and mixed light sources.

AGC - automatic gain control. It is used to keep the output video signal stable regardless of the strength of the input signal. Most often, amplification is required in low light; conversely, when the light is too strong, the signal must be attenuated.

Motion detector - with this function the camera can turn on and record only when there is movement in the monitored area, and can also transmit an alarm signal when the detector is triggered. This helps save video storage space on the DVR, offload the transmission channel, and organize staff notification of a violation.

Camera alarm input - the ability to turn the camera on and start recording video when an event occurs, such as the triggering of a connected motion sensor or other sensor.

Alarm output - allows a reaction to be triggered by an alarm event registered by the camera, for example turning on a siren or sending an alert by e-mail or SMS.

Didn't find the feature you were looking for?

We have tried to collect all the frequently encountered characteristics of cameras for video surveillance. If you did not find here an explanation of some parameter that you do not understand - write in the comments, we will try to add this information to the article.



In the article about choosing a camcorder for the family, we wrote about sensors. There we only touched on the subject lightly, but today we will try to describe both technologies in more detail.

What is the matrix in a video camera? It is a chip that converts a light signal into an electrical one. Currently there are two technologies, i.e., two types of matrices: CCD and CMOS. They differ from each other, and each has its pros and cons; it is impossible to say definitively which is better, as they develop in parallel. We will not go deep into technical details, which would simply be incomprehensible, but will outline their main pros and cons in general terms.

CMOS technology (CMOS)

CMOS sensors are notable first of all for their low power consumption, which is a plus: a camcorder with this technology will run somewhat longer on a battery charge (depending on its capacity). But these are trifles.

The main difference and advantage is the arbitrary reading of cells (in CCD, reading is carried out simultaneously), which eliminates the smearing of the picture. Have you ever seen "vertical pillars of light" from bright point objects? So CMOS-matrices exclude the possibility of their appearance. And cameras based on them are cheaper.

There are also disadvantages. The first is the small size of the photosensitive element relative to the pixel: most of the pixel area is occupied by electronics, so the photosensitive area, and with it the sensitivity of the matrix, is reduced.

Because electronic processing takes place at each pixel, the amount of noise in the picture increases. Another disadvantage is the slow readout: because of it, the "rolling shutter" effect occurs, and when the operator moves, objects in the frame may be distorted.

CCD technology (CCD)

Camcorders with CCD matrices provide high-quality images. It is easy to see that video shot with a CCD-based camera visibly contains less noise than video from a CMOS camera. This is the first and most important advantage. One more thing: the efficiency of CCD matrices is simply amazing: the fill factor approaches 100%, and the proportion of registered photons is about 95%. For comparison, in the ordinary human eye this ratio is approximately 1%.


High price and high power consumption are the disadvantages of these matrices. The recording process here is extremely complex: image capture relies on many additional mechanisms that CMOS matrices do not have, so CCD technology is considerably more expensive.

CCD matrices are used in devices that require a color and high-quality image, and which, possibly, will shoot dynamic scenes. These are professional camcorders for the most part, although household ones too. These are also surveillance systems, digital cameras, etc.

CMOS matrices are used where there are no particularly high requirements for image quality: motion sensors, inexpensive smartphones, etc. At least, that used to be the case: modern CMOS matrices come in modifications that make them high quality and quite capable of competing with CCD matrices.

Now it is difficult to judge which technology is better, because both show excellent results. Making the matrix type the sole selection criterion is therefore unwise; it is important to take many characteristics into account.



The purpose of the matrix is to convert the light flux into an electronic signal, which is then converted into a digital code recorded on the camera's memory card.
The matrix consists of pixels, the purpose of each is to output an electronic signal corresponding to the amount of light falling on it.
The difference between CCD and CMOS sensors lies in how the signal received from the pixels is converted: in a CCD, sequentially and with a minimum of noise; in a CMOS, quickly and with lower power consumption (and thanks to additional circuits, the amount of noise is significantly reduced).
However, first things first...

The difference between CCD and CMOS matrices

CCD matrix

A charge-coupled device (CCD) is so named because of the way charge is transferred between its light-sensitive elements: from pixel to pixel, until the charge is finally moved off the sensor.

The charges are shifted along the matrix in rows from top to bottom; the charge thus moves down several registers (columns) at once.
Before leaving the CCD sensor, the charge of each pixel is amplified, and the output is an analog signal whose voltage depends on the amount of light that hit the pixel. Before processing, this signal is sent to a separate (off-chip) analog-to-digital converter, and the resulting digital data is assembled into bytes representing one line of the image captured by the sensor.

Since the CCD transfers electric charge, which has low resistance and is less susceptible to interference from other electronic components, the resulting signal usually contains less noise than the signal of a CMOS sensor.

CMOS matrix

In a CMOS sensor (complementary metal-oxide-semiconductor), a processing circuit is located next to each pixel (sometimes mounted on the matrix itself), which increases system performance. And because no separate external processing devices are needed, CMOS matrices also have low power consumption.

Some idea of ​​the process of reading information from matrices can be obtained from the following video


Technologies are constantly being improved, and today the presence of a CMOS matrix in a camera or camcorder indicates a higher class of model; manufacturers often focus on models with CMOS sensors.
Recently, back-illuminated CMOS sensors have become popular: they show better results when shooting in low light and also have a lower noise level.

A single sensor element is sensitive across the entire visible spectrum, so a light filter is placed over the photodiodes of color CCD matrices, passing only one of three colors: red (Red), green (Green), blue (Blue), or else yellow (Yellow), magenta (Magenta), cyan (Cyan). A black-and-white CCD matrix, in turn, has no such filters.


DEVICE AND PRINCIPLE OF PIXEL OPERATION

The pixel consists of a p-substrate coated with a transparent dielectric, on which a light-transmitting electrode is deposited, which forms a potential well.

Above the pixel, there may be a light filter (used in color matrices) and a converging lens (used in matrices where the sensing elements do not completely occupy the surface).

A positive potential is applied to a light-transmitting electrode located on the crystal surface. Light falling on a pixel penetrates deep into the semiconductor structure, forming an electron-hole pair. The resulting electron and hole are pulled apart by the electric field: the electron moves to the carrier storage zone (potential well), and the holes flow into the substrate.

A pixel has the following characteristics:

  • The capacity of a potential well is the number of electrons that a potential well can hold.
  • The spectral sensitivity of a pixel is the dependence of the sensitivity (the ratio of the photocurrent value to the luminous flux value) on the radiation wavelength.
  • Quantum efficiency (measured as a percentage) is a physical quantity equal to the ratio of the number of photons whose absorption caused the formation of quasiparticles to the total number of absorbed photons. In modern CCD matrices, this figure reaches 95%. For comparison, the human eye has a quantum efficiency of about 1%.
  • Dynamic range is the ratio of saturation voltage or current to the RMS voltage or current of dark noise. Measured in dB.
CCD MATRIX AND CHARGE TRANSFER DEVICE


The CCD matrix is divided into rows, and each row in turn into pixels. The rows are separated from each other by stop layers (p+), which prevent charge from flowing between them. To move the charge packets, parallel (also called vertical, VCCD) and serial (also called horizontal, HCCD) shift registers are used.

The simplest cycle of a three-phase shift register begins with a positive potential applied to the first gate, forming a well that fills with the generated electrons. A higher potential is then applied to the second gate, creating a deeper potential well under it, into which the electrons from under the first gate flow. To continue moving the charge, the potential on the second gate is reduced while a larger potential is applied to the third, and the electrons flow under the third gate. This cycle repeats from the point of accumulation all the way to the reading horizontal register. The electrodes of the horizontal and vertical shift registers form the phases (phase 1, phase 2, and phase 3).
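The transfer cycle described above can be modeled with a toy simulation. In the sketch below (function names are my own, and the model is loss-free, ignoring transfer inefficiency), the register is a list of charge packets that all move one cell toward the output per complete three-phase cycle:

```python
def transfer_cycle(wells):
    # One complete phase-1 -> phase-2 -> phase-3 cycle moves every
    # charge packet one cell toward the output end of the register.
    return [0] + wells[:-1]

def read_register(wells):
    # The cell at the output end hands its charge to the amplifier,
    # then the whole register shifts; repeat until every packet is out.
    pulses = []
    w = list(wells)
    for _ in range(len(w)):
        pulses.append(w[-1])
        w = transfer_cycle(w)
    return pulses

print(read_register([5, 9, 2]))  # -> [2, 9, 5]: the packet nearest the output arrives first
```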

Classification of CCD matrices by color:

  • Black-and-white
  • Color

Classification of CCD matrices by architecture:

Photosensitive cells are marked in green, opaque areas are marked in gray.

The following characteristics are inherent in the CCD matrix:

  • The charge transfer efficiency is the ratio of the number of electrons in a charge packet at the end of the shift register path to the number at the beginning.
  • The fill factor is the ratio of the area occupied by photosensitive elements to the total area of the photosensitive surface of the CCD matrix.
  • Dark current is the electric current that flows through a photosensitive element in the absence of incident photons.
  • Read noise is the noise arising in the conversion and amplification circuits of the output signal.
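The effect of charge transfer efficiency compounds over the whole path from pixel to output, so the fraction of a packet surviving n transfers is the per-transfer efficiency raised to the n-th power. A small sketch (the function name and numbers are hypothetical, chosen only to show the compounding):

```python
def surviving_fraction(cte_per_transfer, n_transfers):
    """Fraction of the original charge packet left after n transfers,
    assuming the same transfer efficiency at every step."""
    return cte_per_transfer ** n_transfers

# Even an excellent per-transfer efficiency compounds over a long path:
print(round(surviving_fraction(0.99999, 2000), 4))  # -> 0.9802
```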

Frame-transfer matrices (English: frame transfer).

Advantages:

  • The ability to occupy 100% of the surface with photosensitive elements;
  • Readout time is lower than that of a full-frame sensor;
  • Less blurring than a full-frame CCD;
  • A duty-cycle advantage over the full-frame architecture: the frame-transfer CCD collects photons all the time.

Flaws:

  • During readout, the light flux must still be blocked by a shutter to avoid the smearing effect;
  • The charge travel path is lengthened, which degrades the charge transfer efficiency;
  • These sensors are more expensive to develop and manufacture than full-frame devices.

Interline-transfer matrices, or matrices with column buffering (English: interline transfer).

Advantages:

  • No mechanical shutter is required;
  • No smearing.

Flaws:

  • Photosensitive elements can occupy no more than 50% of the surface;
  • The readout speed is limited by the speed of the shift register;
  • The resolution is lower than that of frame-transfer and full-frame CCDs.

Matrices with line-frame transfer (English: frame-interline transfer).

Advantages:

  • The processes of charge accumulation and transfer are spatially separated;
  • The charge from the accumulation elements is transferred into transfer registers shielded from light;
  • The charge of the entire image is transferred in a single cycle;
  • No smearing;
  • The interval between exposures is minimal and suitable for video recording.

Flaws:

  • Photosensitive elements can occupy no more than 50% of the surface;
  • The resolution is lower than that of frame-transfer and full-frame CCDs;
  • The charge travel path is lengthened, which degrades the charge transfer efficiency.

CCD APPLICATIONS

SCIENTIFIC APPLICATIONS

  • for spectroscopy;
  • for microscopy;
  • for crystallography;
  • for fluoroscopy;
  • for the natural sciences;
  • for biological sciences.

SPACE APPLICATION

  • in telescopes;
  • in star trackers;
  • in tracking satellites;
  • when probing planets;
  • in onboard equipment and crew handheld devices.

INDUSTRIAL APPLICATION

  • to check the quality of welds;
  • to control the uniformity of painted surfaces;
  • to study the wear resistance of mechanical products;
  • for reading barcodes;
  • to control the quality of product packaging.

SECURITY APPLICATIONS

  • in residential apartments;
  • at airports;
  • at construction sites;
  • in the workplace;
  • in "smart" cameras that recognize a person's face.

APPLICATION IN PHOTOGRAPHY

  • in professional cameras;
  • in amateur cameras;
  • in mobile phones.

MEDICAL APPLICATION

  • in fluoroscopy;
  • in cardiology;
  • in mammography;
  • in dentistry;
  • in microsurgery;
  • in oncology.

AUTO-ROAD APPLICATION

  • for automatic license plate recognition;
  • for speed control;
  • for traffic flow management;
  • for a parking pass;
  • in police surveillance systems.

How distortion occurs when shooting moving objects on a sensor with a rolling shutter:


In recent years, the computer press (and not only it) has regularly carried enthusiastic reviews of the next "technological miracle destined to revolutionize the future of digital photography" - a generalized version of a phrase that, in one form or another, appears in each of these articles. Characteristically, though, within a year the initial hype gradually fades, and most manufacturers of digital photographic equipment prefer proven solutions over the "advanced development".

I would venture to suggest that the reason is quite simple - one only has to notice the "brilliant simplicity" of this or that solution. The matrix resolution is insufficient? Then let's arrange the pixels not in columns and rows but in diagonal lines, and then "rotate" the picture programmatically by 45 degrees - and the resolution instantly doubles! Never mind that this only increases the sharpness of strictly vertical and horizontal lines, while oblique lines and curves (of which real images consist) remain unchanged. The main thing is that an effect is observed, which means it can be loudly proclaimed.

Unfortunately, the modern user is "spoiled by megapixels". He is unaware that every time the resolution is increased, the developers of "classic" CCD matrices have to solve the most difficult task of ensuring an acceptable dynamic range and sensor sensitivity. But “solutions” like switching from rectangular to octagonal pixels seem quite understandable and justified to an ordinary amateur photographer - after all, it is so clearly written in advertising booklets ...

The purpose of this article is to explain, at a simple level, what determines the quality of the image obtained from a CCD. The quality of the optics can be ignored entirely: the appearance of a second "DSLR" costing less than $1000 (the Nikon D70) gives hope that further growth in sensor resolution for cameras in this price category will not be held back by "soapy" lenses.

Internal photoelectric effect

So, the image formed by the lens falls on the CCD matrix, that is, the light rays fall on the light-sensitive surface of the CCD elements, the task of which is to convert the photon energy into an electric charge. It happens approximately as follows.

For a photon that hits a CCD element, there are three possible outcomes: it either "ricochets" off the surface, is absorbed in the bulk of the semiconductor (the matrix material), or "shoots through" the working area. The developers' task is obviously to create a sensor in which losses from "ricochet" and "shoot-through" are minimized. A photon absorbed by the matrix forms an electron-hole pair if it interacts with an atom of the semiconductor crystal lattice, or only an electron (or only a hole) if it interacts with an atom of a donor or acceptor impurity; both phenomena are called the internal photoelectric effect. Of course, the sensor's operation is not limited to the internal photoelectric effect - the charge carriers "taken away" from the semiconductor must be held in a special storage and then read out.

CCD element

In general terms, the design of a CCD element looks like this: a p-type silicon substrate is equipped with channels of n-type semiconductor. Above the channels are electrodes of polycrystalline silicon with an insulating layer of silicon oxide. When an electric potential is applied to such an electrode, a potential well forms in the depleted zone under the n-type channel; its purpose is to store electrons. A photon penetrating the silicon generates an electron, which is attracted to the potential well and remains in it. More photons (brighter light) deliver more charge to the well. The value of this charge, also called the photocurrent, must then be read out and amplified.

The reading of the photocurrents of the CCD elements is carried out by the so-called sequential shift registers, which convert a string of charges at the input into a train of pulses at the output. This series is an analog signal, which is then fed to the amplifier.

Thus, using the register, the charges of a row of CCD elements can be converted into an analog signal. In fact, the serial shift register in a CCD is implemented with the same CCD elements combined in a row. Its operation is based on the ability of charge-coupled devices (which is what the abbreviation CCD stands for) to exchange the charges of their potential wells. The exchange is enabled by special transfer electrodes (transfer gates) located between adjacent CCD elements. When an increased potential is applied to the nearest electrode, the charge "flows" under it from the potential well. Between CCD elements there can be from two to four transfer electrodes; their number determines the "phase count" of the shift register, which is accordingly called two-phase, three-phase, or four-phase.

The potentials fed to the transfer electrodes are synchronized so that the charges in the potential wells of all the register's CCD elements move simultaneously. In one transfer cycle the CCD elements in effect "pass the charges along the chain" from left to right (or right to left), and the element at the end of the chain gives its charge to the device at the register's output, that is, the amplifier.

In general, a serial shift register is a device with parallel input and serial output. Therefore, after all the charges have been read from the register, the next row can be fed to its input, then the one after it, and so a continuous analog signal is formed from the two-dimensional array of photocurrents. In turn, the parallel input stream for the serial shift register (that is, the rows of the two-dimensional array of photocurrents) is provided by a set of vertically oriented serial shift registers collectively called the parallel shift register; the whole structure taken together is precisely the device called a CCD matrix.

The "vertical" serial shift registers that make up the parallel shift register are called CCD columns, and their operation is fully synchronized. The two-dimensional array of photocurrents of the CCD matrix is shifted down by one row at a time, and only after the charges of the previous row have left the serial shift register located "at the very bottom" and gone to the amplifier. Until the serial register is emptied, the parallel register is forced to idle. For normal operation, the CCD matrix itself must be connected to a microcircuit (or a set of them) that supplies potentials to the electrodes of both the serial and parallel shift registers and synchronizes their operation. In addition, a clock generator is needed.
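The interplay of the parallel and serial registers described above can be sketched as a toy readout loop. In this simplified model (function name and ordering conventions are my own), the bottom row of the two-dimensional array drops into the serial register, which is then clocked out pixel by pixel before the next row shifts down:

```python
def read_frame(frame):
    """Simulate full readout: the parallel register shifts the 2-D array
    down one row at a time; each row is then clocked out of the serial
    register pixel by pixel, forming one continuous output stream."""
    stream = []
    rows = [list(r) for r in frame]
    while rows:
        serial_register = rows.pop()              # bottom row drops into the serial register
        while serial_register:
            stream.append(serial_register.pop())  # one pixel per serial clock
    return stream

frame = [[1, 2],
         [3, 4]]
print(read_frame(frame))  # -> [4, 3, 2, 1]: bottom row first, output-end pixel first
```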



Full frame sensor

This type of sensor is constructively the simplest and is called a full-frame CCD matrix. In addition to the "strapping" microcircuits, this type of matrix also requires a mechanical shutter that blocks the light flux after the exposure ends. Readout of the charges cannot begin before the shutter is fully closed - otherwise, during the working cycle of the parallel shift register, extra electrons caused by photons hitting the open surface of the CCD matrix would be added to the photocurrent of each of its pixels. This phenomenon is called charge smearing in a full-frame matrix (full-frame matrix smear).

Thus, the frame readout rate in such a scheme is limited by the speed of both the parallel and the serial shift registers. It is also obvious that the light coming from the lens must be blocked until the readout process is complete, so the interval between exposures also depends on the readout speed.

There is an improved version of the full-frame matrix in which the charges of the parallel register do not go row by row to the input of the serial register but are "stored" in a buffer parallel register. This register is located under the main parallel shift register; the photocurrents are moved row by row into the buffer register and from there are fed to the input of the serial shift register. The surface of the buffer register is covered with an opaque (usually metal) panel, and the whole system is called a frame-buffered matrix (frame-transfer CCD).


Frame Buffered Matrix

In this scheme, the potential wells of the main parallel shift register are "emptied" noticeably faster, since when rows are transferred into the buffer there is no need to wait for a full cycle of the serial register for each row. The interval between exposures is therefore reduced, although the readout speed drops - each row now has to "travel" twice as far. Thus the interval between exposures is shortened for only a couple of frames, while the cost of the device rises markedly because of the buffer register. The most noticeable drawback of frame-buffered matrices, however, is the lengthened "route" of the photocurrents, which negatively affects how well their values are preserved. And in any case a mechanical shutter must operate between frames, so a continuous video signal is out of the question.

Matrices with column buffering

Especially for video equipment, a new type of matrix was developed in which the interval between exposures was minimized not for a couple of frames but for a continuous stream. To ensure this continuity, of course, the mechanical shutter had to be abandoned.

This scheme, called a column-buffered matrix (interline CCD), is somewhat similar to frame-buffered systems: it also uses a buffer parallel shift register whose CCD elements are hidden under an opaque coating. However, this buffer is not located in a single block under the main parallel register - its columns are "shuffled" between the columns of the main register. As a result, next to each column of the main register sits a buffer column, and immediately after exposure the photocurrents move not "from top to bottom" but "from left to right" (or "from right to left") and in just one working cycle enter the buffer register, entirely freeing the potential wells for the next exposure.

The charges that have fallen into the buffer register are read in the usual order through a serial shift register, that is, “from top to bottom”. Since the reset of photocurrents to the buffer register occurs in just one cycle, even in the absence of a mechanical shutter, there is nothing similar to the “smearing” of charge in a full-frame matrix. But the exposure time for each frame in most cases corresponds in duration to the interval spent on the full reading of the buffer parallel register. Thanks to all this, it becomes possible to create a video signal with a high frame rate - at least 30 frames per second.



Matrix with column buffering

In the domestic literature, matrices with column buffering are often mistakenly called "interlaced". This probably stems from the similar-sounding English names "interline" (column buffering) and "interlaced" (interlaced scanning). In fact, when all rows are read in one pass, we speak of a progressive-scan matrix, and when the odd rows are read in the first pass and the even rows in the second (or vice versa), we speak of an interlaced-scan matrix.

Although the photocurrents of the main parallel shift register immediately fall into the buffer register, which is not subjected to "photon bombardment", charge smearing (smear) also occurs in column-buffered matrices. It is caused by a partial overflow of electrons from the potential well of the "light-sensitive" CCD element into the potential well of the "buffer" element; this happens especially often at charge levels close to the maximum, when the pixel illumination is very high. As a result, a light streak stretches up and down from the bright point in the picture, spoiling the frame. To combat this unpleasant effect, the "light-sensitive" and buffer columns are placed at a greater distance from each other when the sensor is designed. This, of course, complicates the charge exchange and lengthens the time the operation takes, but the damage that "smearing" does to the image leaves the developers no choice.

As mentioned earlier, to provide a video signal the sensor must not require the light flux to be blocked between exposures, since a mechanical shutter under such operating conditions (about 30 actuations per second) would quickly fail. Fortunately, the buffer columns make it possible to implement an electronic shutter, which, first, allows doing without a mechanical shutter when necessary and, second, provides ultra-short shutter speeds (down to 1/10000 of a second), which are especially critical for shooting fast-moving subjects (sports, nature, and so on). However, the electronic shutter also requires the matrix to have a system for draining excess charge from the potential well - but everything in due order.

Everything has its price, including the ability to form a video signal. The buffer shift registers "eat up" a significant part of the matrix area; as a result, the light-sensitive zone makes up only about 30% of each pixel's surface, versus 70% for a full-frame sensor pixel. That is why in most modern CCD matrices a microlens sits on top of each pixel. This simple optical device covers most of the area of the CCD element and gathers the photons falling on that area into a concentrated light flux, which is then directed onto the rather compact light-sensitive zone of the pixel.



Microlenses

Since with the help of microlenses it is possible to register the light flux falling on the sensor much more efficiently, over time, these devices began to supply not only systems with column buffering, but also full-frame matrices. However, microlenses also cannot be called a “solution without drawbacks”.

Being optical devices, microlenses distort the recorded image to some extent, most often as a loss of sharpness in the finest details of the frame - their edges become slightly blurred. On the other hand, such blurring is not always undesirable: in some cases the image formed by the lens contains lines whose size and spacing are close to the dimensions of the CCD element and the inter-pixel distance of the matrix. In such frames one often observes aliasing - a pixel is assigned a particular color regardless of whether it is covered by an image detail completely or only partially. As a result, the lines of the object appear torn, with jagged edges. To solve this problem, cameras with sensors lacking microlenses use an expensive anti-aliasing filter, while a sensor with microlenses needs no such filter. In either case, however, the price is some reduction in the sensor's resolution.

If the subject is poorly lit, it is advisable to open the aperture as wide as possible. However, this sharply increases the share of rays striking the surface of the matrix at a steep angle. Microlenses, in turn, cut off a significant proportion of such rays, so the efficiency of light absorption by the matrix (the very thing the aperture was opened for) drops considerably. That said, rays incident at a steep angle are themselves a source of problems: entering the silicon of one pixel, a long-wavelength photon, which has high penetrating power, can be absorbed by the material of another matrix element, ultimately distorting the image. To address this, the surface of the matrix is covered with an opaque (for example, metal) "grid", in whose cutouts only the light-sensitive zones of the pixels remain exposed.

Historically, full-frame sensors have been used primarily in studio technology, and column-buffered sensors in amateur technology. Both types of sensors are found in professional cameras.

In the classical CCD element design, which uses polycrystalline silicon electrodes, sensitivity is limited by partial scattering of light at the electrode surface. Therefore, for shooting under special conditions that demand increased sensitivity in the blue and ultraviolet regions of the spectrum, back-illuminated matrices are used. In sensors of this type the recorded light falls on the substrate, and to obtain the required internal photoelectric effect the substrate is polished down to a thickness of 10-15 micrometers. This processing stage greatly increases the cost of the matrix; moreover, the devices are very fragile and demand extra care during assembly and operation.



Back-illuminated matrix

Obviously, when using light filters that attenuate the light flux, all expensive operations to increase the sensitivity lose their meaning, so back-illuminated matrices are used mostly in astronomical photography.

Sensitivity

One of the most important characteristics of any light-recording device, be it photographic film or a CCD matrix, is sensitivity - the ability to respond in a definite way to optical radiation. The higher the sensitivity, the less light the recording device needs to respond. Various scales (DIN, ASA) have been used to express sensitivity, but in the end the practice of stating this parameter in ISO units (after the International Organization for Standardization) took root.

For a single CCD element, the response to light should be understood as charge generation. Obviously, the sensitivity of a CCD matrix is ​​the sum of the sensitivity of all its pixels and generally depends on two parameters.

The first parameter is integrated sensitivity, which is the ratio of the photocurrent (in milliamps) to the luminous flux (in lumens) from a radiation source, the spectral composition of which corresponds to a tungsten incandescent lamp. This parameter allows you to evaluate the sensitivity of the sensor as a whole.

The second parameter is monochromatic sensitivity, that is, the ratio of the magnitude of the photocurrent (in milliamperes) to the magnitude of the light energy of the radiation (in millielectronvolts) corresponding to a certain wavelength. The set of all monochromatic sensitivity values ​​for the part of the spectrum of interest is spectral sensitivity- dependence of sensitivity on the wavelength of light. Thus, the spectral sensitivity shows the ability of the sensor to register shades of a certain color.

It is clear that the units of measurement for both integral and monochromatic sensitivity differ from the designations popular in photographic technique. That is why manufacturers of digital photographic equipment state the equivalent sensitivity of the CCD in ISO units in their product specifications. To determine the equivalent sensitivity, the manufacturer only needs to know the illuminance of the subject, the aperture, and the shutter speed, and to use a couple of formulas. The first gives the exposure value as EV = log2(L*S/C), where L is the illuminance, S the sensitivity, and C the exposure constant. The second defines the same exposure value as EV = 2*log2(K) - log2(t), where K is the aperture (f-number) and t is the shutter speed. Equating the two, it is easy to derive S = C*K^2/(L*t).
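As a sanity check, the two exposure formulas given in the text can be solved for S in a few lines. In the sketch below the exposure constant C = 250 is an assumption (a value often quoted for incident-light metering), as are the example numbers; the function name is my own:

```python
def equivalent_iso(illuminance, f_number, shutter_time, c=250):
    """Solve log2(L*S/C) = 2*log2(K) - log2(t) for S,
    i.e. S = C * K^2 / (L * t).
    c is the exposure constant; 250 is an assumed calibration value."""
    return c * f_number ** 2 / (illuminance * shutter_time)

# Example: 20000 lx, f/8, 1/125 s shutter speed.
print(equivalent_iso(20000, 8, 1 / 125))  # -> 100.0
```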

The sensitivity of the matrix is an integral quantity that depends on the sensitivity of each CCD element. The sensitivity of a matrix pixel depends, first, on the light-sensitive area exposed to the "rain of photons" (the fill factor) and, second, on the quantum efficiency, that is, the ratio of the number of registered electrons to the number of photons incident on the sensor surface.

In turn, quantum efficiency is affected by a number of other parameters. The first is the reflection coefficient - the quantity representing the proportion of photons that "ricochet" off the sensor surface. As the reflection coefficient grows, the fraction of photons participating in the internal photoelectric effect falls.

Photons not reflected from the sensor surface are absorbed, forming charge carriers; however, some of them "get stuck" near the surface and some penetrate too deep into the material of the CCD element. Obviously, in both cases they take no part in forming the photocurrent. The "penetrating power" of photons in a semiconductor, characterized by the absorption coefficient, depends both on the semiconductor material and on the wavelength of the incident light - "long-wavelength" particles penetrate much deeper than "short-wavelength" ones. When designing a CCD element, the absorption coefficient for photons with wavelengths in the visible range must be such that the internal photoelectric effect occurs near the potential well, increasing an electron's chance of falling into it.
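The depth dependence mentioned here is commonly modeled with the Beer-Lambert law: the fraction of photons absorbed within depth d is 1 - exp(-a*d), where a is the absorption coefficient. A sketch with illustrative (not measured) coefficients, showing why blue light is absorbed near the surface while red penetrates much deeper:

```python
import math

def fraction_absorbed(alpha_per_um, depth_um):
    """Beer-Lambert law: fraction of photons absorbed within a given depth."""
    return 1 - math.exp(-alpha_per_um * depth_um)

# Hypothetical per-micrometer absorption coefficients for silicon:
for label, alpha in [("blue", 2.0), ("red", 0.3)]:
    print(label, round(fraction_absorbed(alpha, 3.0), 2))  # within the top 3 um
```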

Often the term quantum yield is used instead of quantum efficiency, but in reality this parameter gives the number of charge carriers released when a single photon is absorbed. Of course, in the internal photoelectric effect the bulk of the charge carriers still end up in the potential well of the CCD element, yet a certain share of the electrons (or holes) escapes the "trap". The numerator of the formula describing quantum efficiency is precisely the number of charge carriers that fell into the potential well.

An important characteristic of a CCD matrix is its sensitivity threshold - the parameter of a light-recording device that characterizes the minimum light signal it can register: the smaller this signal, the lower the sensitivity threshold and the more sensitive the device. The main factor limiting the sensitivity threshold is dark current. It is a consequence of thermionic emission and arises in a CCD element when a potential is applied to the electrode under which the potential well forms. The current is called "dark" because it consists of electrons that fall into the well in the complete absence of a light flux. If the light flux is weak, the photocurrent is close to, and sometimes even less than, the dark current.

Dark current depends on the sensor temperature: heating the matrix by 9 degrees Celsius doubles its dark current. Various heat-removal (cooling) systems are used to cool the matrix. In field cameras, whose weight and size strongly limit the use of cooling systems, the metal camera body itself sometimes serves as a heat exchanger. Studio equipment has practically no weight and size restrictions, and a fairly high power consumption of the cooling system is permissible; such systems are divided into passive and active.
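The doubling rule quoted above is easy to turn into a formula: I(T) = I_ref * 2^((T - T_ref)/9). A minimal sketch (function name and reference values are hypothetical, chosen only to illustrate the rule):

```python
def dark_current(i_ref, t_ref_c, t_c, doubling_deg=9.0):
    """Dark current doubles for every `doubling_deg` degrees Celsius of
    heating, per the rule of thumb quoted in the text."""
    return i_ref * 2 ** ((t_c - t_ref_c) / doubling_deg)

# Cooling a sensor by 27 C (three doubling steps) cuts dark current 8-fold:
print(dark_current(8.0, 34, 7))  # -> 1.0
```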

Passive cooling systems provide only "discharge" of excess heat of the cooled device into the atmosphere. At the same time, the cooling system plays the role of a maximum conductor of heat, providing more efficient heat dissipation. Obviously, the temperature of the cooled device cannot become lower than the ambient air temperature, which is the main disadvantage of passive systems.

The simplest example of a passive heat-exchange system is a radiator (heatsink) made of a material with good thermal conductivity, most often metal. The surface in contact with the atmosphere is shaped to provide as large a dissipating area as possible. Needle radiators, shaped like a "hedgehog" bristling with heat-dissipating "needles", are generally recognized to have the largest dissipating area. Often, to force the heat exchange, the radiator surface is blown by a microfan; similar devices, called coolers (from the word cool), cool the processor in personal computers. Because the microfan consumes electricity, systems using it are sometimes called "active", which is quite wrong, since coolers cannot cool a device below the ambient temperature. At high ambient temperatures (40 degrees and above), the effectiveness of passive cooling systems begins to decline.

Active cooling systems, by means of electrical or chemical processes, bring the device to a temperature below that of the surrounding air. Active systems really do "produce cold", although both the heat of the cooled device and the heat generated by the cooling system itself are released into the atmosphere. A classic example of an active cooler is the ordinary refrigerator; however, despite its rather high efficiency, its weight and size are unacceptable even for studio photographic equipment. Active cooling is therefore provided by Peltier systems, whose operation is based on the effect of the same name: when a potential difference is applied across the ends of two conductors made of different materials, thermal energy is released or absorbed at their junction (depending on the polarity of the voltage). The cause is the acceleration or deceleration of electrons due to the internal contact potential difference at the junction of the conductors.

When a combination of n-type and p-type semiconductors is used, in which heat absorption occurs through the interaction of electrons and "holes", the heat-pumping effect is greatest. To enhance it, Peltier elements can be combined in cascade; since both absorption and release of heat occur, the elements must be arranged so that one side of the cooler is "hot" and the other "cold". As a result of the cascade arrangement, the temperature of the "hot" side of the Peltier element farthest from the matrix is much higher than that of the surrounding air, and its heat is dissipated into the atmosphere by passive means, that is, radiators and coolers.

Using the Peltier effect, active cooling systems can lower the sensor temperature down to zero degrees, dramatically reducing the dark current. However, excessive cooling of the CCD matrix threatens condensation of moisture from the surrounding air and a short circuit in the electronics. And in some cases an extreme temperature difference between the cooled and the photosensitive planes of the matrix can lead to its unacceptable deformation.

However, neither radiators, nor coolers, nor Peltier elements are applicable to field cameras, which are limited in weight and size. Instead, such equipment uses a method based on so-called black pixels (dark reference pixels): rows and columns along the edges of the matrix covered with an opaque material. The average of the photocurrents of all the black pixels is taken as the dark current level. Obviously, under different operating conditions (the temperature of the environment and of the camera itself, the battery current, and so on) the dark current level will differ. Using it as a "reference point", that is, subtracting it from the photocurrent of each pixel, one can determine exactly what charge was created by the photons that fell on the CCD element.
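The black-pixel correction can be sketched in a few lines. In this toy model (layout and function name are my own), the first two columns of the frame are assumed to be the opaque reference pixels, and their mean is subtracted from every pixel:

```python
def subtract_dark_level(frame, dark_columns=2):
    """Estimate the dark-current level from opaque 'black' reference
    columns at the edge of the frame and subtract it from every pixel."""
    dark_values = [row[c] for row in frame for c in range(dark_columns)]
    dark_level = sum(dark_values) / len(dark_values)
    return [[px - dark_level for px in row] for row in frame]

frame = [[10, 10, 110, 210],
         [10, 10, 130, 190]]
print(subtract_dark_level(frame))
# -> [[0.0, 0.0, 100.0, 200.0], [0.0, 0.0, 120.0, 180.0]]
```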

Whichever way the dark current is suppressed, there remains another factor that limits the sensitivity threshold: thermal noise, created even in the absence of potential on the electrodes purely by the chaotic movement of electrons within the CCD element. Long exposures lead to a gradual accumulation of stray electrons in the potential well, which distorts the true value of the photocurrent: the "longer" the shutter speed, the more electrons "get lost" in the well.

As is well known, the light sensitivity of film within a single cassette remains constant; in other words, it cannot change from frame to frame. A digital camera, however, allows the optimal equivalent sensitivity to be set for each shot. This is achieved by amplifying the video signal coming from the matrix; in a sense this procedure, called "raising the equivalent sensitivity", is similar to turning up the volume control on a music player.

Thus, in low light the user faces a dilemma: either raise the equivalent sensitivity or lengthen the shutter speed. In both cases the frame cannot avoid damage from fixed-pattern noise. True, experience shows that with a "long" shutter speed the picture deteriorates less than when the matrix signal is amplified. However, a long exposure threatens another problem: the user may blur the frame with camera shake. Therefore, a user who plans to shoot indoors frequently should choose a camera with a fast lens, as well as a powerful and "intelligent" flash.
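Why amplification helps less than real exposure can be shown with a toy signal-to-noise calculation. This sketch considers photon shot noise only (noise equal to the square root of the collected signal) and ignores read and fixed-pattern noise; the numbers are illustrative.

```python
import math

# Amplifying the readout ("raising the equivalent sensitivity")
# multiplies signal and noise alike, so the SNR does not improve.
# A longer exposure collects real light, and shot-noise SNR grows
# as the square root of the signal.

def shot_noise_snr(signal_electrons):
    return signal_electrons / math.sqrt(signal_electrons)

base = 100.0   # electrons collected at the short exposure
gain = 4.0

snr_short = shot_noise_snr(base)                         # 10.0
snr_gained = (gain * base) / (gain * math.sqrt(base))    # gain cancels: still 10.0
snr_long = shot_noise_snr(gain * base)                   # 20.0: genuine improvement

print(snr_short, snr_gained, snr_long)
```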

Dynamic Range

The matrix must be able to register light both in bright sunlight and in dim room lighting. Its potential wells must therefore be very capacious: able both to retain a minimal number of electrons in low light and to hold the large charge produced when a powerful light flux hits the sensor. Moreover, the image formed by the lens often contains both brightly lit areas and deep shadows, and the sensor must register all of their shades.

The sensor's ability to form a good image under different lighting conditions and at high contrast is described by its dynamic range, which characterises the matrix's ability to distinguish the darkest tones from the lightest in the image projected onto its recording surface. The wider the dynamic range, the more shades appear in the image, and the more closely the transitions between them correspond to the image formed by the lens.



Effect of dynamic range on frame quality (A - wide dynamic range, B - narrow dynamic range)

The characteristic describing the ability of a CCD element to accumulate a certain amount of charge is called the depth of the potential well (well depth), and the dynamic range of the matrix depends on it. Of course, when shooting in low light the dynamic range is also affected by the sensitivity threshold, which in turn is determined by the magnitude of the dark current.
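The relationship between well depth, noise floor and dynamic range is commonly estimated as the ratio of full well capacity to the noise floor, expressed in decibels or in photographic stops. The figures below are illustrative, not taken from a specific sensor.

```python
import math

# Dynamic range estimated as full-well capacity over the noise floor,
# in decibels (20*log10) and in stops (log2).

def dynamic_range(full_well_e, noise_floor_e):
    ratio = full_well_e / noise_floor_e
    db = 20 * math.log10(ratio)
    stops = math.log2(ratio)
    return db, stops

db, stops = dynamic_range(full_well_e=40_000, noise_floor_e=10)
print(f"{db:.1f} dB, {stops:.1f} stops")   # roughly 72 dB, 12 stops
```

This also shows why dark current matters: raising the noise floor shrinks the ratio even if the well depth is unchanged.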

Electrons making up the photocurrent are obviously lost not only while the charge accumulates in the potential well, but also during its transport to the output of the matrix. These losses are caused by electrons "torn off" from the main charge as it flows under each successive transfer electrode. The fewer electrons detached, the higher the charge transfer efficiency. This parameter is expressed as a percentage and shows what fraction of the charge survives each "crossing" between CCD elements.

The effect of transfer efficiency can be demonstrated by an example. If for a 1024 × 1024 matrix this parameter is 98%, then to determine the photocurrent of the central pixel at the output of the matrix, one must raise 0.98 (the fraction of charge transferred) to the power of 1024 (the number of "crossings" between pixels) and multiply by 100 (to get a percentage). The result is utterly unsatisfactory: only about 0.0000001% of the initial charge remains. Obviously, as resolution increases the requirements for transfer efficiency become even stricter, since the number of "crossings" grows. In addition, the frame readout rate drops, because increasing the transfer rate (to compensate for the increased resolution) leads to an unacceptable increase in the number of "stripped-off" electrons.

To achieve acceptable frame readout rates together with high charge transfer efficiency, CCD designers place the potential wells in a "buried" position. Thanks to this, the electrons do not "stick" to the transfer electrodes as actively, and it is precisely for this "deep location" of the potential well that an n-channel is introduced into the design of the CCD element.

Returning to the example above: if in the same 1024 × 1024 matrix the charge transfer efficiency is 99.999%, then 98.98% of the original photocurrent of the central pixel remains at the sensor output. For a higher-resolution matrix, a charge transfer efficiency of 99.99999% is required.
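The arithmetic from both examples can be reproduced directly: after N pixel-to-pixel "crossings" the surviving fraction of the charge is simply the transfer efficiency raised to the power N.

```python
# Charge surviving N transfers at a given charge transfer efficiency (CTE),
# as a percentage of the original photocurrent.

def surviving_charge_percent(cte, transfers):
    return cte ** transfers * 100.0

# CTE of 98% over 1024 transfers: virtually nothing survives.
print(surviving_charge_percent(0.98, 1024))     # on the order of 1e-7 %
# CTE of 99.999%: about 98.98% of the charge reaches the output.
print(surviving_charge_percent(0.99999, 1024))
```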

Blooming

When the internal photoelectric effect produces an excess of electrons exceeding the depth of the potential well, the charge of the CCD element begins to "spread" into neighbouring pixels. In photographs this phenomenon, called blooming, appears as white spots of regular shape; the more excess electrons, the larger the spots.

Blooming is suppressed by an electronic drain (overflow drain) system, whose main task is to remove excess electrons from the potential well. The best-known variants are the vertical drain (Vertical Overflow Drain, VOD) and the lateral drain (Lateral Overflow Drain, LOD).

In a vertical drain system, a potential is applied to the matrix substrate, chosen so that when the potential well overflows, the excess electrons flow into the substrate and dissipate there. The drawback of this approach is a reduction in the depth of the potential well and, accordingly, a narrowing of the dynamic range of the CCD element. It is also obvious that this system is not applicable to back-illuminated matrices.



Vertical electronic drain

The lateral drain system uses electrodes that keep the potential well's electrons from penetrating into the "drain grooves" through which excess charge is dissipated. The potential on these electrodes is chosen to match the overflow barrier of the potential well, so its depth does not change. However, the drain electrodes reduce the light-sensitive area of the CCD element, so microlenses have to be used.



Lateral electronic drain

Of course, the need to add drain structures complicates the sensor's design, but the frame distortions introduced by blooming cannot be ignored. Moreover, an electronic shutter cannot be implemented without a drain: it plays the role of a "curtain" at ultra-short shutter speeds whose duration is less than the time needed to transfer the charge from the main parallel shift register to the buffer parallel register. The "curtain", that is, the drain, prevents electrons formed in the "light-sensitive" pixels after the specified (and very short) exposure time from penetrating into the wells of the buffer CCD elements.

"Stuck" pixels

Due to manufacturing defects, in some CCD elements even the shortest shutter speed leads to an avalanche-like accumulation of electrons in the potential well. In the picture such pixels, called stuck pixels, stand out sharply from the surrounding dots in both colour and brightness, and, unlike fixed-pattern noise, they appear at any shutter speed and regardless of the matrix temperature.

Stuck pixels are removed by the camera's built-in software, which searches for defective CCD elements and stores their "coordinates" in non-volatile memory. When the image is formed, the values of the defective pixels are ignored and replaced by values interpolated from neighbouring pixels. During the search, a pixel's defectiveness is determined by comparing its charge with a reference value, also stored in the camera's non-volatile memory.
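The detect-and-interpolate scheme described above can be sketched as follows. The detection threshold and the 3×3 neighbourhood averaging are illustrative assumptions, not a specific camera's algorithm.

```python
import numpy as np

# Stuck-pixel correction sketch: defective coordinates found on a dark
# frame are stored, and on each frame those samples are replaced by the
# mean of their immediate neighbours.

def find_stuck_pixels(dark_frame, threshold=50):
    """A dark frame should read near zero; pixels far above are 'stuck'."""
    return np.argwhere(dark_frame > threshold)

def repair(frame, stuck_coords):
    fixed = frame.astype(float).copy()
    h, w = frame.shape
    for y, x in stuck_coords:
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        patch = frame[ys, xs].astype(float)
        # average the neighbourhood, excluding the stuck pixel itself
        fixed[y, x] = (patch.sum() - frame[y, x]) / (patch.size - 1)
    return fixed

dark = np.zeros((5, 5)); dark[2, 2] = 255           # one defective element
coords = find_stuck_pixels(dark)
frame = np.full((5, 5), 10.0); frame[2, 2] = 255
print(repair(frame, coords)[2, 2])                  # interpolated back to 10.0
```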

Matrix Diagonal Size

Sometimes the diagonal size of the CCD (usually in fractions of an inch) is listed among a digital camera's parameters. This value is related first of all to the characteristics of the lens: the larger the sensor, the larger the image the optics must form. For this image to completely cover the recording surface of the matrix, the optical elements must be enlarged. If this is not done and the "picture" created by the lens turns out smaller than the sensor, the peripheral areas of the matrix go unused. Yet in a number of cases camera manufacturers did not mention that in their models a certain share of the megapixels were "out of work".

In digital SLRs built on 35mm technology, the opposite situation almost always occurs: the image formed by the lens overlaps the light-sensitive area of the matrix. This is because sensors the size of a 35mm film frame are too expensive, and it means that part of the image formed by the lens is literally left "behind the scenes". As a result, the effective characteristics of the lens shift towards the "long focus" region. Therefore, when choosing interchangeable lenses for a digital SLR, one should take the crop factor into account; as a rule, it is about 1.5. For example, a 28-70mm zoom lens mounted on such a camera has an effective range of 42-105mm.
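The conversion is simple multiplication, reproducing the example from the text:

```python
# Effective (35mm-equivalent) focal length under a given crop factor.
# The default of 1.5 matches the example in the text (28-70mm -> 42-105mm).

def equivalent_focal_range(short_mm, long_mm, crop_factor=1.5):
    return short_mm * crop_factor, long_mm * crop_factor

print(equivalent_focal_range(28, 70))    # (42.0, 105.0)
print(equivalent_focal_range(18, 18))    # an 18mm wide-angle becomes 27mm
```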

This factor has both positive and negative consequences. In particular, wide-angle shooting, which requires short focal length lenses, becomes more difficult: optics with a focal length of 18mm or less are very expensive, and on a digital SLR an 18mm lens turns into an unremarkable 27mm one. On the other hand, telephoto lenses are also very expensive, and at long focal lengths the relative aperture usually decreases. An inexpensive 200mm lens with a factor of 1.5 turns into a 300mm lens, and while "real" 300mm optics have an aperture on the order of f/5.6, the 200mm lens is faster than f/4.5.

In addition, any lens suffers from aberrations such as field curvature and distortion, which blur and bend the image in the edge areas of the frame. If the matrix is smaller than the image formed by the lens, these "problem areas" are simply not registered by the sensor.

It should also be noted that the sensitivity of the matrix is related to the size of its recording area. The larger the photosensitive area of each element, the more light falls on it and the more often the internal photoelectric effect occurs, raising the sensitivity of the entire sensor. In addition, a large pixel allows a potential well of "increased capacity", which benefits the width of the dynamic range. An illustrative example is the matrices of digital SLRs, comparable in size to a 35mm film frame. These sensors traditionally offer sensitivity on the order of ISO 6400, and their dynamic range requires an ADC with a bit depth of 10-12 bits.

At the same time, the matrices of amateur cameras have a dynamic range for which an 8-10-bit ADC suffices, and their sensitivity rarely exceeds ISO 800. The reason lies in the design features of this equipment. Sony has very few competitors in the production of small (1/3, 1/2 and 2/3 inch diagonal) sensors for amateur equipment, thanks to a competent approach to developing its model range of matrices. When developing each next generation of matrices with "one megapixel more" of resolution, almost complete compatibility with previous sensor models was ensured, both in dimensions and in interface. Accordingly, camera designers did not have to develop the lens and the camera's "electronic stuffing" from scratch.

However, as resolution grows, the buffer parallel shift register occupies an ever larger fraction of the sensor area, reducing both the light-sensitive region and the "capacity" of the potential well.



Reduction of the photosensitive area of the CCD with increasing resolution.

Therefore, behind every "N+1 megapixels" lies the painstaking work of developers, which, unfortunately, is not always successful.

Analog to digital converter

The video signal that has passed through the amplifier must be converted into a digital format understandable to the camera's microprocessor. This is done by an analog-to-digital converter (ADC), a device that converts an analog signal into a sequence of numbers. Its main characteristic is bit depth, that is, the number of discrete signal levels it can recognise and encode. To calculate the number of levels, raise two to the power of the bit depth. For example, "8 bits" means that the converter can distinguish 2 to the eighth power signal levels and represent them as 256 different values.

A higher ADC bit depth makes it (theoretically) possible to achieve a greater color depth, that is, the bit depth of colour processing, which describes the maximum number of colour shades that can be reproduced. Colour depth is usually expressed in bits, and the number of shades is calculated in the same way as the number of ADC signal levels. For example, a 24-bit colour depth yields 16,777,216 shades.
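Both calculations, ADC levels and colour shades, follow from the same power-of-two rule:

```python
# Number of discrete levels an ADC distinguishes, and the number of
# colour shades for a given per-channel depth, as described in the text.

def adc_levels(bits):
    return 2 ** bits

def color_shades(bits_per_channel, channels=3):
    return adc_levels(bits_per_channel) ** channels

print(adc_levels(8))      # 256 levels per channel
print(color_shades(8))    # 16777216 shades at 24-bit colour
print(color_shades(10))   # 30-bit colour: over a billion shades
```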

In reality, the colour depth of JPEG or TIFF files, which a computer uses to process and store images, is limited to 24 bits (8 bits for each colour channel: blue, red and green). Therefore the ADCs sometimes used, with bit depths of 10, 12 and even 16 bits (that is, colour depths of 30, 36 and 48 bits), may mistakenly be considered "redundant". However, the dynamic range of the matrix in some digital cameras is quite wide, and if the camera can save a frame in a non-standard format (30-48 bits), the "extra" bits can be used in further computer processing. As is well known, exposure errors are second in frequency only to focusing errors. The ability to compensate for them using the "lower" bits (in case of underexposure) or the "upper" bits (in case of overexposure) therefore proves very useful. And if the exposure was calculated without errors, "compressing" 30-48 bits into the standard 24 without distortion is not a particularly difficult task.
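Why the "lower" bits rescue underexposure can be sketched as follows. This is a simplified linear conversion, assuming a 12-bit raw channel quantised down to 8 bits; real raw converters apply tone curves and white balance as well.

```python
import numpy as np

# An underexposed 12-bit raw channel can be brightened by a digital gain
# *before* quantising down to 8 bits, recovering shadow detail that a
# straight conversion would crush into a few output levels.

def raw12_to_8bit(raw12, gain=1.0):
    scaled = raw12.astype(float) * gain / 16.0  # 4096 levels -> 256 levels
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

underexposed = np.array([8, 12, 40, 200], dtype=np.uint16)  # deep shadows
print(raw12_to_8bit(underexposed))             # straight conversion: detail crushed
print(raw12_to_8bit(underexposed, gain=8.0))   # +3 EV recovered from the lower bits
```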

Obviously, any increase in ADC bit depth should be grounded in the dynamic range of the CCD: with a narrow dynamic range, a 10-12-bit-per-channel ADC will simply have nothing to distinguish. And the mention of "36-bit" or even "48-bit" colour in a modest point-and-shoot with a half-inch matrix can hardly be called anything but a publicity stunt, since even 30-bit colour requires a sensor with a diagonal of at least 2/3 inch.


