How a digital camera works: a primer on camera design


How digital cameras work

Most digital cameras have an LCD screen on which you can immediately view the resulting image. This is one of the main advantages of digital cameras. These photos can be viewed on a computer or sent by e-mail.

In addition to built-in memory, digital cameras also support flash cards for storing the pictures you take. Photos can be transferred from the camera to a computer or another device via flash cards (SmartMedia, CompactFlash, Memory Stick), via SCSI, USB, or FireWire, or via floppy disks, hard drives, and CD or DVD discs.

CompactFlash memory card

Digital photos tend to take up a lot of space. The most common file formats are uncompressed TIFF, compressed JPEG, and RAW, in which the data are stored exactly as they were read from the photosensitive matrix. The quality of RAW images is therefore significantly higher than that of JPEG images, but they take up much more space. Nevertheless, most digital cameras store pictures as high- or medium-quality JPEG.

Almost all digital cameras have built-in data compression routines that reduce the size of photos and free up space for more pictures. There are two types of compression: compression based on repeating elements and compression based on "excess detail". For example, if 30 percent of a photo is blue sky, the photo will contain many repeating shades of blue. The compression routine "collapses" these repeated colors, so the photo loses none of its brightness while more free space remains on the camera. This method can reduce the size of an image by almost 50 percent.
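The "repeating elements" idea above can be sketched with run-length encoding, the simplest scheme of this kind. This is only an illustrative sketch (real cameras use far more elaborate algorithms such as JPEG); the function names and the sample scanline are invented for the example.

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A scanline that is mostly "sky blue" compresses very well:
line = ["blue"] * 7 + ["white"] * 2 + ["blue"]
encoded = rle_encode(line)
print(encoded)  # [('blue', 7), ('white', 2), ('blue', 1)]
assert rle_decode(encoded) == line
```

Ten pixels became three pairs; the more uniform the area, the bigger the saving, which is exactly why large patches of sky compress so readily.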

Compression based on "excess detail" is a more complex process. As a rule, a digital camera captures more color information than the human eye can perceive, so this kind of compression removes some of the "unnecessary" detail from the picture, reducing the file size. To summarize:

To take a picture, the CCD camera performs the following operations:

First you need to point the camera at a certain object and set the optical zoom, i.e. zoom in or out on an object.
Then lightly press the button.
The camera automatically focuses on the subject.
The camera sets the aperture and shutter speed for optimal exposure.
Then you need to press the button again until it stops.
The camera exposes the CCD; as light reaches the CCD, a charge builds up in each individual element (pixel), so the illumination of every pixel is captured as an analog charge.
An analog-to-digital converter (ADC) measures each charge and produces a digital value representing the charge in each individual pixel.
The processor assembles the data from the pixels and reconstructs the image's colors. On many digital cameras, the resulting image can be viewed immediately on the screen.
Some cameras compress the image automatically.
Information is stored on one of the types of storage devices, for example, on a flash card.
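The ADC step in the sequence above can be sketched as follows. This is a hypothetical illustration, not camera firmware: the voltage values, the 1.0 V reference, and the 8-bit resolution are assumptions made for the example.

```python
def adc_convert(charge_volts, vref=1.0, bits=8):
    """Quantize an analog charge reading into an n-bit digital value."""
    levels = 2 ** bits
    # Clamp to the ADC's input range, then map onto the digital scale.
    clamped = min(max(charge_volts, 0.0), vref)
    return int(clamped / vref * (levels - 1))

# Charges accumulated by four pixels during exposure (illustrative values):
pixel_charges = [0.0, 0.25, 0.5, 1.0]
digital = [adc_convert(v) for v in pixel_charges]
print(digital)  # [0, 63, 127, 255]
```

Each analog charge becomes one number per pixel; the processor then assembles these numbers into the image.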

Lesson topic: "Digital information processing devices: digital video camera"

The purpose of the lesson:

create conditions for the formation of students' ideas about the types and purpose of digital devices for information processing;

continue to develop information processing skills using various devices;

continue to foster careful treatment of computer equipment and observance of the safety rules in the computer room

COURSE OF THE LESSON:

1. Organizational moment.

2. Repetition of the material from the previous lesson:
1) What device did we talk about in the last lesson?

2) What main elements of a camera can you name?

3) What are the advantages of digital cameras?

4) Where are the images stored in the camera?

5) How are images transferred from the camera?

3. Learning new material.

For today's lesson, you have prepared reports about digital video cameras, a device that greatly expands the possibilities of modern computers. We will get acquainted with this device according to the same plan we used for the digital camera, i.e.:

1 - the main elements of the video camera

2 - advantages of digital video cameras

3 - devices for recording information in a video camera

4 - transferring information from a video camera to a computer

5 - webcams

Let's give the floor to the representatives of the groups.

(students make messages, if necessary, accompany the story with illustrations)

The material that can be offered to students is in Appendix 1.

4. Workshop on transferring video to a computer

Just as in the previous lesson, you can film fragments of students' presentations and their activities during class. Show in practice how to transfer video to the computer (at the very least, directly from the camera). The form of work is individual.

5. Editing a video about the study of Digital Information Processing Devices

Working with the Movie Maker video editor (whole-class work):

1. Launch Movie Maker.

2. Load video clips: Capture Video - Import video.

3. Load photos: Capture Video - Import pictures.

4. Arrange the video clips and photos on the storyboard (drag and drop).

5. Add transitions: Edit Movie - View video transitions - select a transition and drag it onto the storyboard between two frames.

6. Add effects: Edit Movie - View video effects - select an effect and drag it onto a frame in the storyboard. To strengthen an effect, apply it several times.

7. Add titles and captions: Edit Movie - Make titles or credits - choose a title or caption style, enter the text, set the formatting, and click "Done".

8. Add music: Capture Video - Import audio or music - drag the clip onto the storyboard.

9. Save the movie in WMV format: Finish Movie - Save to my computer - follow the prompts of the Save Movie Wizard.

Give students this algorithm as a handout. The work is done together: the teacher demonstrates each step on the screen.

6. Homework: In the next lesson, students will complete a movie making project. To do this, they will have to think over the theme of the project, what fragments and photographs they will use. At the lesson, they will shoot the material and edit a short film. (The topics are varied: My school, My class, Our computer science office, Our teachers, etc.) The work is supposed to be in groups of 2-3 people.

Appendix 1. Camcorders

Camcorders are divided first of all into digital and analog. I will not consider analog cameras here (VHS, S-VHS, VHS-C, Video-8, Hi-8), for obvious reasons: their place is in a secondhand shop or on the top shelf of a pantry (who knows, one day they may become rarities). Analog video processing, however, will certainly be covered, since I suspect everyone has a pile of old cassettes. Modern consumer video cameras differ in the type of recording medium, in the method of recording (encoding) video, in the size and number of sensors, and, of course, in optics.

1.1.1. According to the type of storage media, cameras are divided into:

HDV cameras: the newest format and, apparently, the main one of the future. Frame sizes up to 1920*1080. Imagine that each frame is a 2-megapixel photo, and you will understand what the video quality is like. Strictly speaking, HDV is a recording format, since there are also HDD cameras that record in HDV; I list it in this row because most existing HDV cameras record to cassette. If money is no object, these cameras are for you.

DV cameras: the main format of consumer digital video cameras. Frame size 720*576 (PAL) or 720*480 (NTSC). Recording quality depends largely on the optics and on the quality (and number) of the sensors. DV cameras are divided into DV proper (mini-DV) cameras and Digital-8 cameras. Which to buy is up to you: on the one hand, mini-DV cameras are more common; on the other hand, if you previously owned a Video-8 camera, it makes sense to look at Digital-8 cameras, since they freely record on any 8-format cassette (Video-8, Hi-8, Digital-8); the camera may complain that Video-8 tape is rather weak for it, but it records without trouble. In addition, recording on better-quality cassettes (Hi-8, Digital-8) gives you a longer recording time than mini-DV.

DVD cameras. I am not a fan of this type of camera. Their recording quality is lower than that of DV cameras, and even at the best quality a disc holds only about 20 minutes. But! If you are not picky about quality (especially since the difference is not so noticeable on an ordinary TV screen) and you don't want to bother editing a movie and then encoding it to DVD format, a DVD camera may suit you. Moreover, using specialized programs (for example, CloneDVD or DVD-lab) you can fairly quickly assemble a full-fledged DVD from the resulting files on the 1.4 GB discs used in DVD cameras.

Flash cameras. Recording is made to a flash card in MPEG 4 or MPEG 2 format. Recording time depends on the card size, the chosen frame size, and the encoding quality. MPEG 2 is preferable, since the quality is higher, but it takes up more space. Neither format, however, given the processing needed to fit video onto a card, can deliver quality even remotely close to DV. Such cameras can therefore be recommended as a gift for children or for shooting in extreme conditions: their indisputable advantages are compactness and the absence of mechanical parts (the zoom lens being the exception).

HDD cameras. Recording is done to a built-in hard disk, in any format from HDV to MPEG 4 (depending on the model). Like flash cameras, these are perhaps the future of consumer camcorders, but unlike flash cameras, HDD cameras can already deliver excellent HDV quality, or up to 20 hours of good-quality MPEG 2 on a 30 GB disk. Look at this splendor from the other side, though: one hour of DV takes 13-14 GB on the hard disk, so after some simple arithmetic you may decide it is easier to swap a cassette, or to offload video to the computer every 2.3-3 hours of recording (you get used to good quality quickly).
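The "simple arithmetic" above can be checked. A minimal sketch, assuming the commonly cited capture rate of roughly 3.6 MB/s for a DV stream written to disk (25 Mbit/s of video plus audio and overhead); the rate is an assumption, not a figure from the article.

```python
capture_rate_mb_per_s = 3.6   # assumed data rate of DV captured to disk
seconds_per_hour = 3600

gb_per_hour = capture_rate_mb_per_s * seconds_per_hour / 1024
print(f"{gb_per_hour:.1f} GB per hour of DV")  # ~12.7 GB, matching "13-14 GB"

# A 30 GB HDD camcorder disk therefore holds only about:
hours_on_30gb = 30 / gb_per_hour
print(f"{hours_on_30gb:.1f} hours of DV")      # ~2.4 hours
```

This is consistent with the advice to offload to the computer every 2-3 hours when shooting DV to a hard disk.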

HDV cameras
- High price

DV (miniDV) cameras
+ De facto mainstream home video standard
- The problem of choice: cheap "soap dishes" and semi-professional models coexist in this standard

DV (Digital-8) cameras
+ Recording and playback on any 8-format cassette
+ Longer recording time per cassette compared to miniDV
- Limited spread of the format

DVD cameras
+ Record, take the disc out of the camera, put it in the player
- Poor recording quality
- Short recording time per disc

Flash cameras
+ No mechanical parts (except the zoom), hence higher reliability
- Poor recording quality

HDD cameras
+ Much longer recording time than cassette models
+ High speed of copying data to a computer hard drive
- Video must be offloaded to the computer frequently
- In the "field" you need a laptop with a large enough hard drive
- High price

1.1.2. Any digital video camera compresses the digitized video, because at the moment there is simply no medium that can hold uncompressed video (one minute of uncompressed PAL 720*576 video without sound takes about 1.5 GB on a hard drive; a simple calculation shows that a single hour already requires 90 GB). And this huge amount of information still has to be processed: even simply copying 90 GB takes about five hours. Camcorder manufacturers therefore have no choice but to compress the digitized video. Modern camcorders use the following types of compression: DV, MPEG 2, and MPEG 4 (DivX, XviD).
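The estimate above can be reproduced. The article does not say which pixel format the figure assumes, so both common cases are shown here; the per-pixel byte counts are assumptions for the sketch.

```python
width, height, fps = 720, 576, 25  # uncompressed PAL video

def gb_per_minute(bytes_per_pixel):
    """Storage for one minute of uncompressed video, in GB."""
    return width * height * bytes_per_pixel * fps * 60 / 1024 ** 3

yuv = gb_per_minute(2)   # YUV 4:2:2, 2 bytes per pixel
rgb = gb_per_minute(3)   # RGB24, 3 bytes per pixel
print(f"YUV 4:2:2: {yuv:.2f} GB/min ({yuv * 60:.0f} GB/hour)")
print(f"RGB24:     {rgb:.2f} GB/min ({rgb * 60:.0f} GB/hour)")
```

YUV 4:2:2 gives about 1.16 GB/min (~70 GB/hour) and RGB24 about 1.74 GB/min (~104 GB/hour), bracketing the "about 1.5 GB per minute, ~90 GB per hour" figures quoted above.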

DV is the main type of video compression in modern digital video cameras; it is used by miniDV, Digital-8, and some HDD cameras (HDV cameras use DV cassettes but compress the video as MPEG 2). In quality, I think this type of compression will remain the leader among these formats for a long time to come.

MPEG 2 is the format used for recording DVDs. Although its recording quality is slightly worse than DV, depending on the bitrate (roughly speaking, the number of bits allocated to each second of video) this type of compression can produce video of quite high quality (think of licensed DVDs).

MPEG 4 - to be honest, manufacturers of digital photo and video equipment have seriously tarnished this format's reputation. To squeeze everything possible out of it, you need a fairly powerful computer and a decent amount of time. As a result, the final MPEG 4 video produced by camcorders and cameras is of low resolution and low (to put it mildly) quality. Whether DivX or XviD is used matters little; the (small) difference shows only when processing video on a computer.

1.1.3. An important factor, indeed the main one, in the final result is the quality of the matrix used to digitize the optical signal passing through the camera lens. The bigger it is, the better. When choosing a video camera, take the trouble to look at the specification and check the number of effectively used pixels (the "dots" on the matrix). For example, the specification for a Sony XXXXXX video camera says that with a frame size of 720*576 (0.4 megapixels), a 2-megapixel matrix is used for video. Naturally, this has a very positive effect on the result, since with any encoding (compression) one law holds strictly: the better the source material, the better the result. Likewise, the more light hits the matrix, the less digital noise there is and the darker the conditions in which the camera can still be used. All of the above applies threefold to three-sensor cameras; among other things, a three-sensor design significantly reduces color noise, because the separation of light into its RGB components (a prerequisite for producing a video signal) is performed not by electronics but by an optical prism, and each matrix then processes its own color.

Indirectly, the size and quality of the matrix can be judged by the digital camera built into the camcorder, the higher its resolution, the better.

1.1.4. With camcorder optics everything is simple: the bigger, the better. The larger the lens diameter, the more light reaches the sensor; the greater the lens's optical magnification... But this point deserves a closer look. The first thing to say: NEVER go by the proud inscriptions on the side of the video camera (X120, X200, X400, etc.). Look only at the optical zoom of the lens (marked either on the camera as "optical zoom" or on the lens itself). Digital zoom can of course be used, but remember that digital zoom limits the number of effectively used pixels of the matrix (see figure). A mere 2x digital zoom (for example, a 10x lens giving a 20x total magnification) reduces the effectively used pixels on the matrix by a factor of 4!
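The "factor of 4" claim above follows directly from how digital zoom works: it crops the centre of the sensor and scales it up. A small sketch with a hypothetical 2-megapixel sensor (the 1600x1200 size is an assumption for the example):

```python
def effective_pixels(sensor_w, sensor_h, digital_zoom):
    """Pixels actually used after cropping for a given digital zoom."""
    used_w = sensor_w / digital_zoom   # only the central 1/zoom of the width...
    used_h = sensor_h / digital_zoom   # ...and 1/zoom of the height is kept
    return int(used_w * used_h)

w, h = 1600, 1200                      # illustrative 1.92-megapixel sensor
total = w * h
for zoom in (1, 2, 4):
    used = effective_pixels(w, h, zoom)
    print(f"{zoom}x digital zoom: {used} px ({used * 100 // total}% of sensor)")
# 2x digital zoom leaves 480000 px - exactly one quarter of the sensor.
```

Halving both width and height quarters the pixel count, which is why digital zoom degrades detail so quickly.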

Well, it would be nice to have an optical stabilizer, since cameras with a digital stabilizer do not use the entire area of the matrix.

Webcams

Webcams are inexpensive fixed network devices that transmit information, usually video, over wireless or switched Internet and Ethernet channels. The main purpose of "room" webcams is video mail and teleconferencing. Such cameras are also widely used for "baby sitting": they cope perfectly with the role of baby monitors, transmitting an image of a child left on his own. "Outdoor" vandal-proof webcams act as security video monitors.

The ability to capture an image in camcorder or still-camera mode is an additional feature of webcams; do not expect high quality from the recorded videos or digital photos. There is no point in equipping webcams with high-quality optics and expensive electronics, because real-time video transmission requires extremely heavy compression, which inevitably costs image quality. Although a gorgeous picture is fundamentally impossible with a webcam, the quality of the resulting image is still the main characteristic by which cameras of this type can be subjectively compared and chosen. Preference may also be swayed by an interesting design, the bundled software, and various options such as skin support and additional communication interfaces.

All webcams are equipped with a motion-detector function and an audio input for transmitting sound; they often also have connectors for external sensors and devices such as lights and alarms. World practice shows that the main webcam manufacturers are makers of computer peripherals (Genius, Logitech, SavitMicro) or network equipment (D-Link, SavitMicro), not of video or photographic equipment, which once again underlines the difference in the technologies involved.

Video image compression formats

As an initial image-processing step, the MPEG 1 and MPEG 2 compression formats split the reference frames into several equal blocks, which are then subjected to a discrete cosine transform (DCT). Compared to MPEG 1, the MPEG 2 format provides better image resolution at higher video data rates, thanks to new algorithms for compressing and removing redundant information and for encoding the output data stream. MPEG 2 also lets you choose the compression level by varying the quantization accuracy. For video with a resolution of 352x288 pixels, MPEG 1 provides a transmission rate of 1.2-3 Mbps, and MPEG 2 up to 4 Mbps.
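The block DCT mentioned above can be illustrated with a naive (unoptimized) DCT-II, the transform applied to 8x8 blocks in these codecs. This is a sketch for intuition only, not codec code; real encoders use fast factored implementations.

```python
import math

def dct_2d(block):
    """Naive orthonormal 2D DCT-II of an NxN block (8x8 in MPEG)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# A flat, single-tone 8x8 block: all the energy lands in one DC
# coefficient and every other coefficient is ~0, which is why smooth
# image areas compress so well after quantization.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))             # 1024 (the DC term)
print(round(abs(coeffs[0][1]), 6))     # 0.0
```

Quantization then divides these coefficients by step sizes; the many near-zero terms round away to nothing, and that is where the compression comes from.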

Compared to MPEG 1, the MPEG 2 compression format has the following advantages:

Like JPEG2000, the MPEG 2 compression format provides scalability for different levels of image quality in a single video stream.

In the MPEG 2 compression format, the accuracy of motion vectors is increased to 1/2 pixel.

The user can select an arbitrary precision of the discrete cosine transform.

The MPEG 2 compression format includes additional prediction modes.

The MPEG 2 compression format was used in the now-discontinued AXIS Communications AXIS 250S video server, JVC Professional's 16-channel VR-716 video storage device, FAST Video Security DVRs, and many other video surveillance products.

MPEG 4 compression format

MPEG4 uses a technology called fractal image compression. Fractal (contour-based) compression involves extracting the contours and textures of objects from an image. The contours are represented as so-called splines (polynomial functions) and encoded by reference points. Textures can be represented as coefficients of a spatial frequency transform (e.g., a discrete cosine or wavelet transform).

The range of data rates supported by MPEG 4 is much wider than in MPEG 1 and MPEG 2, and further development aims to completely replace the processing methods of MPEG 2. MPEG 4 supports a wide range of standards and data rates: it includes both progressive and interlaced scanning and supports arbitrary spatial resolutions and bit rates from 5 kbps to 10 Mbps. MPEG 4 also improves the compression algorithm, whose quality and efficiency are better at all supported bit rates. The VN-V25U webcam developed by JVC Professional, part of its line of network devices, uses MPEG 4 for video image processing.

Video formats

The video format determines the structure of a video file: how the file is stored on the storage medium (CD, DVD, hard disk, or a communication channel). Different formats usually have different file extensions (*.avi, *.mpg, *.mov, etc.).

MPG - A video file that contains MPEG1 or MPEG2 encoded video.

As you may have noticed, MPEG-4 movies usually have the AVI extension. The AVI (Audio Video Interleave) format was developed by Microsoft for storing and playing video. It is a container that can hold anything from MPEG1 to MPEG4, with four possible stream types: video, audio, MIDI, and text. There can be only one video stream, while there may be several audio streams; an AVI may even contain a single stream only, either video or audio. The AVI format itself imposes no restrictions on the codec used, for either video or audio: they can be anything. Thus any video and audio codecs can be freely combined in AVI files.
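What "AVI is a container" means can be made concrete by looking at the very start of the file: an AVI begins with a RIFF header (the ASCII tag 'RIFF', a 32-bit little-endian size, and the form type 'AVI '), after which the streams follow in nested chunks. The sketch below builds and re-reads only this 12-byte header, not a playable AVI; the helper function names are invented for the example.

```python
import struct

def make_riff_header(payload_size):
    """First 12 bytes of an AVI: 'RIFF', total size after this field, 'AVI '."""
    return b"RIFF" + struct.pack("<I", payload_size + 4) + b"AVI "

def parse_riff_header(data):
    """Split the 12-byte RIFF header back into its three fields."""
    tag = data[:4]
    size = struct.unpack("<I", data[4:8])[0]
    form = data[8:12]
    return tag, size, form

header = make_riff_header(1000)
tag, size, form = parse_riff_header(header)
print(tag, size, form)  # b'RIFF' 1004 b'AVI '
```

Everything after this header - video frames, audio samples, index - lives in further chunks, which is why the container is indifferent to which codecs produced the data inside.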

RealVideo is a format created by RealNetworks. RealVideo is used for live TV broadcasting on the Internet; CNN, for example, was one of the first to broadcast on the Web. It has small file sizes and the lowest quality, but without particularly loading your communication channel you can watch the latest TV news on the website of your chosen TV company. Extensions: RM, RA, RAM.

ASF - Advanced Streaming Format from Microsoft.

WMV - A video file recorded in the Windows Media format.

DAT - A file copied from a VCD (Video CD)/SVCD disc. Contains an MPEG1/2 video stream.

MOV - Apple QuickTime format.

Connecting to a PC or TV

The simplest connector, the RCA AV output (the familiar "tulips"), is present on every video camera; it connects to any television or video equipment and provides analog video transmission with the greatest loss of quality. Much more valuable is that digital video cameras also have analog inputs of this kind: this lets you digitize your archives of analog recordings if you previously had an analog video camera. In digital form their storage life is extended, and they can also be edited on a computer.

Hi8, Super VHS (-C), mini-DV (DV) and Digital8 camcorders are equipped with an S-video connector, which, unlike RCA, carries separate color and brightness signals, significantly reducing losses and improving image quality. An S-video input on digital models gives the same benefit to owners of Hi8 or Super VHS archives.

The LaserLink infrared transmitter built into Sony camcorders, together with the IFT-R20 receiver, lets you watch footage on a TV without connecting any wires: just place the camcorder within 3 m of the TV and press "PLAY". The more advanced Super LaserLink transmitter, fitted to all the latest models, works at a greater distance (up to 7 m).

Editing connectors on a camcorder allow linear editing by synchronizing the camcorder with VCRs and an editing deck. All interconnected devices then have their tape counters and main modes (playback, recording, stop, pause, and rewind) controlled synchronously. Panasonic camcorders use the Control-M connector for this purpose, Sony camcorders Control-L (LANC). Their specifications are incompatible, so check that the interfaces of your VCR and camcorder match.

RS-232-C connector ("digital photo output")

A connector for attaching the camcorder to a computer's serial port to transfer still frames in digital form and control the camcorder from the PC. "Fancier" models have an even faster "photo output" instead of RS-232-C: a USB interface.

All mini-DV and Digital8 camcorders are equipped with a DV output (i.LINK, IEEE 1394, or FireWire) for fast, lossless digital audio/video transfer. To use it, you need another device that supports the DV format: a DV VCR or a computer with a DV card. More valuable, of course, are camcorders that have a DV input in addition to the output. Some firms produce the same model in two versions, the so-called "European" (without inputs) and "Asian" (with inputs); this is due to high European customs duties on the import of digital video recorders, a category that rightly includes any camcorder with a DV input.

IEEE-1394, FireWire, and i.LINK are three names for the same high-speed digital serial interface, used to transfer any kind of digital information. IEEE-1394 is the designation of the interface standard developed by Apple (branded FireWire), as adopted by the American Institute of Electrical and Electronics Engineers (IEEE). Most mini-DV and Digital8 camcorders have an IEEE-1394 interface that sends digital video straight to a computer; the hardware amounts to an inexpensive adapter and a 4- or 6-wire cable. It allows data transfer at up to 400 Mbps.

i.LINK

An IEEE 1394 digital input/output that lets you transfer footage to a computer. Camcorder models with i.LINK offer more flexible workflows through interactive editing, electronic storage, and publishing of images.

FireWire

A registered trademark of Apple, which took an active part in developing the standard. The name FireWire ("fire wire") belongs to Apple and may be used only to describe Apple's products; for such devices on a PC it is customary to use IEEE-1394, the name of the standard itself.

Memory card

Such a card stores photos, video, and music in electronic form, and can be used to transfer images to a computer.

Memory Stick

The Memory Stick is a proprietary Sony design that can simultaneously store images, speech, music, graphics, and text files. Weighing only 4 grams and the size of a stick of gum, the card is reliable, has protection against accidental erasure, a 10-pin connection for greater reliability, a 20 MHz transfer clock, a write speed of 1.5 MB/s, and a read speed of 2.45 MB/s. Capacity in digital still pictures on a 4 MB card (MSA-4A): in 640x480 JPEG, 20 frames in SuperFine mode, 40 in Fine, 60 in Standard; in 1152x864 JPEG, 6 frames in SuperFine, 12 in Fine, 18 in Standard. Capacity in MPEG movies on a 4 MB card (MSA-4A): in Presentation mode (320x2.6), 15 seconds; in Video Mail mode (160x1.6), 60 seconds.

SD Memory Card

The SD card is a new-standard memory card the size of a postage stamp that can store any kind of data, including a variety of photo, video, and audio formats. SD cards are currently available in capacities of 8, 16, 32, and 64 MB; by the end of 2001, SD cards of up to 256 MB are to go on sale. One 64 MB SD card holds about as much music as one CD, and since the card's transfer rate is 2 MB/s, dubbing from a CD takes only about 30 seconds. Because the SD Memory Card is a solid-state medium, vibration does not affect it at all, i.e., there is none of the sound skipping that occurs with rotating media such as CD or MD. Maximum audio recording time on a 64 MB SD card: 64 minutes at high quality (128 kbps), 86 minutes at standard quality (96 kbps), or 129 minutes in LP mode (64 kbps).
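The figures quoted above are easy to sanity-check with back-of-the-envelope arithmetic (using 1 kbit = 1000 bits; the small shortfall against the quoted minutes plausibly goes to file-system and container overhead):

```python
card_mb = 64

# Copying a full card at the stated 2 MB/s transfer rate:
copy_seconds = card_mb / 2
print(f"copy time: {copy_seconds:.0f} s")   # 32 s - "about 30 seconds"

# Audio recording time at each quoted bitrate (kbit/s):
card_kbit = card_mb * 8 * 1000
minutes_128 = card_kbit / 128 / 60
minutes_96 = card_kbit / 96 / 60
minutes_64 = card_kbit / 64 / 60
print(f"128 kbps: ~{minutes_128:.0f} min")  # ~67 min (quoted: 64)
print(f" 96 kbps: ~{minutes_96:.0f} min")   # ~89 min (quoted: 86)
print(f" 64 kbps: ~{minutes_64:.0f} min")   # ~133 min (quoted: 129)
```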

The main difference between a film camera and a digital camera is what the light meets after passing through the lens. Where a traditional camera holds film, a digital camera has an electronic matrix of photosensitive elements. The image forms on the surface of this electron-optical converter (the matrix) and is then converted into electrical signals processed by the camera's processor. The matrix directly determines not only the quality of the resulting photos but also the cost of the camera itself. So what is a photosensitive matrix, and how does a digital camera create a color image?

Matrix: types and principle of operation

The photosensitive matrix is the key element of any modern digital camera; it can be called the camera's "heart". If we compare the camera with the human eye, the matrix is the retina of the digital apparatus, where the optical signal is converted into a digital image. The matrix, or sensor, is a precisely structured plate of semiconductor material carrying an ordered array of photosensitive elements. Millions of these photosensitive elements, or pixels, are isolated from one another, and each forms just one point of the image. Note that despite the high precision with which camera matrices are manufactured, every sensor is unique in its own way, so no two completely identical cameras exist in nature.

The main task of the camera matrix is to convert an optical image into an electrical one. When the shutter is released, millions of tiny cells are exposed to light, and a charge accumulates on each, varying, of course, with the amount of light that fell on that particular cell. These charges are transferred to an electrical circuit that amplifies them and converts them into digital form. The amplification is set by the ISO sensitivity, chosen automatically by the camera or set by the user: the more the selected ISO differs from the sensor's native sensitivity, the stronger the amplification. But amplification can harm the final image, producing the random speckle known as "noise".
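Why raising the ISO makes noise more visible can be shown with a toy model: amplification multiplies the sensor signal and the read-out noise together. This is an illustrative simulation only; the signal level, noise model, and the gain-to-ISO correspondence are assumptions for the sketch.

```python
import random

random.seed(42)
true_signal = 10.0                               # charge from the scene (arbitrary units)
read_noise = [random.gauss(0, 1) for _ in range(10000)]

def readout_std(gain):
    """Standard deviation of the amplified pixel read-outs at a given gain."""
    samples = [(true_signal + n) * gain for n in read_noise]
    mean = sum(samples) / len(samples)
    return (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

std1, std16 = readout_std(1), readout_std(16)    # gain ~ ISO 100 vs ISO 1600
print(f"std at gain 1:  {std1:.2f}")
print(f"std at gain 16: {std16:.2f}")
# The spread is exactly 16 times larger at 16x gain: the signal-to-noise
# ratio of the read-out has not improved, only the brightness has.
```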

Today, two technologies dominate the production of photosensitive matrices for digital cameras: CMOS (Complementary Metal Oxide Semiconductor) and CCD (Charge-Coupled Device).

CMOS sensors are made from complementary metal oxide semiconductor materials. Their key feature is that they can read and amplify the light signal from any point on their surface. The CMOS sensor can convert charge into voltage right in the pixel. This feature allows you to significantly increase the speed of the camera when processing information from the matrix.

In addition, this technology makes it possible to integrate matrices directly with an analog-to-digital converter (ADC), which reduces the cost of a digital camera due to some simplification of its design. Plus, CMOS matrices are characterized by lower power consumption. However, they have a significant drawback - in order to increase the light sensitivity of the matrix and thereby improve the image quality, manufacturers have to significantly increase the physical dimensions of the sensor.

CCDs have become widespread in modern digital cameras of amateur and professional level even though they are slightly more labor intensive in production. The principle of operation of such a matrix is ​​based on the line-by-line movement of the accumulated electric charges. In the process of reading the charge, charges are transferred to the edge of the matrix and towards the amplifier, which then transmits the amplified signal to the analog-to-digital converter (ADC). Since the information from the cells is read sequentially, it is possible to take the next picture only after the previous image has been completely formed. At the same time, the advantage of CCD matrices is their relatively small size.

The CCD sensors used in modern digital cameras are divided by their design into full-frame, frame buffered, column buffered, progressive scan, interlaced scan, and back-illuminated. For example, in interlaced CCDs, each pixel has both a light receiver and a charge storage area. In turn, in full-frame matrices, the entire pixel performs the function of receiving the light flux, and the charge transfer channels are hidden under the pixel.

For a long time, CCDs were considered to have greater light sensitivity, wider dynamic range, and better noise immunity than CMOS sensors. Digital cameras with CCD matrices were therefore used wherever high image quality was required, while CMOS cameras were relegated to the role of inexpensive amateur devices. In recent years, however, thanks to better silicon wafers and amplifier circuitry, manufacturers have significantly improved CMOS performance, and cameras based on CMOS matrices are now practically in no way inferior in image quality to those using CCD sensors.

The latest CMOS sensors can deliver professional image quality. From the standpoint of image quality, then, the sensor type by itself says little; far more important are the specific characteristics of the sensor: its physical dimensions, resolution, light sensitivity, and signal-to-noise ratio.

As we have already found out, the matrix of a digital camera consists of a huge number of light-sensitive rectangular semiconductor elements called pixels. Each such pixel collects electrons that arise in it under the action of photons that come from a light source. But how does the process of forming an image with a camera matrix take place?

In a simplified form, this can be described using the example of a CCD matrix. During the exposure of the frame, controlled by the camera shutter, each pixel is gradually filled with electrons in proportion to the amount of light that hit it. Next, the camera shutter closes, and the columns with electrons accumulated in pixels begin to move to the edge of the sensor, where a similar measuring column is located.

In this column the charges move in the perpendicular direction and ultimately reach the measuring element, where microcurrents proportional to the incoming charges are created. Thanks to this scheme, it is possible to determine not only the value of the accumulated charge but also which pixel on the matrix - that is, which row and column - it corresponds to. From this, a picture is built that corresponds to the image focused on the surface of the photosensitive matrix. In matrices built using CMOS technology, the charge is converted into voltage directly in the pixel, after which it can be read out by the camera's electric circuitry.

Color Image Formation

Digital camera sensors can only respond to the intensity of the light that hits them. That is, they can only determine gradations of light intensity - from completely white to completely black. The more photons hit a pixel, the higher the brightness of the light. But how, then, does a digital camera recognize color tones? Traditional film cameras use negative film consisting of three layers, which allows the film to retain different color shades of light. In digital cameras, other technical solutions are used to form a color image.

In order for the sensor of a digital camera to be able to distinguish color shades, a block of microscopic light filters is installed above its surface. If microlenses are used in the matrix, which serve to additionally focus light on pixels in order to increase their sensitivity, then filters are placed between each microlens and cell.

As is well known, any color in the spectrum can be obtained by mixing just a few primary colors (red, green and blue). The distribution of light filters over the surface of the sensor for forming a color image can be different, depending on the selected algorithm. Most digital cameras today use the Bayer pattern.

Within the framework of this system, color filters above the surface of the matrix are interspersed with each other, in a checkerboard pattern. Moreover, the number of green filters is twice as large as red or blue, since the human eye is more sensitive to the green part of the light spectrum. As a result, it turns out that the red and blue filters are located between the green ones. Checkerboard arrangement of filters is necessary to ensure that images of the same color are obtained regardless of whether the user holds the camera vertically or horizontally.
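The checkerboard layout described above can be sketched in a few lines of code. This is an illustrative assumption: the tile shown here is the common "RGGB" variant of the Bayer pattern, while real sensors may start the pattern on a different corner (GRBG, GBRG, BGGR).

```python
# Sketch: build the Bayer filter layout for a tiny sensor.
# Assumes the "RGGB" variant of the pattern (an illustrative choice).

def bayer_pattern(rows, cols):
    """Return a rows x cols grid of 'R', 'G', 'B' labels in the RGGB layout."""
    grid = []
    for y in range(rows):
        row = []
        for x in range(cols):
            if y % 2 == 0:
                row.append('R' if x % 2 == 0 else 'G')  # even rows: R G R G ...
            else:
                row.append('G' if x % 2 == 0 else 'B')  # odd rows:  G B G B ...
        grid.append(row)
    return grid

pattern = bayer_pattern(4, 4)
greens = sum(row.count('G') for row in pattern)
# Green cells are exactly half the total: twice as many as red or blue,
# matching the eye's higher sensitivity to green.
```

Counting the cells of any even-sized tile confirms the 2:1:1 ratio of green to red to blue filters mentioned in the text.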


Bayer color model (source www.figurative.ru)

Thus, the color of each pixel is determined by the light filter covering it. All exposed elements of the cell participate in obtaining color information. The color image itself is built by the camera electronics after the electrical signal taken from the camera's sensor cells is converted into a digital code by an analog-to-digital converter (ADC). However, CMOS sensors can independently process the color component of the signal.

Analog to Digital Converter (ADC)

As we have already seen, the operation of the photosensitive matrix is closely tied to the camera's analog-to-digital converter (ADC). After each of the millions of photosensitive elements of the matrix converts the energy of the light incident on it into an electrical charge, this accumulated charge is amplified to the level required for subsequent processing by the analog-to-digital converter.

An analog-to-digital converter is a device responsible for converting an input analog signal into a digital one. The ADC converts the analog values of the electric charge received by each photosensitive element into digital values, which the camera's automation - in particular, the built-in microprocessor - then receives in binary code.

The main characteristic of an ADC is its bit depth, that is, the number of discrete signal levels the converter can encode. For example, a 1-bit analog-to-digital converter can only classify a light sensor's signal as either black (0) or white (1), while an eight-bit ADC can already distinguish 256 different brightness values for each sensor element. Modern digital cameras with large sensors use 12-, 14- or 16-bit analog-to-digital converters. A high ADC bit depth suggests that the camera is capable of creating images with a wide tonal and dynamic range.
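The relationship between bit depth and brightness levels, and what quantization does to an analog voltage, can be sketched as follows. The reference voltage and function names here are our own illustrative choices, not anything specified by a camera standard.

```python
# Sketch: ADC bit depth vs. number of distinguishable brightness levels,
# plus a toy quantizer mapping an analog voltage to an integer code.

def adc_levels(bits):
    """Number of discrete levels an ADC of the given bit depth encodes."""
    return 2 ** bits

def quantize(voltage, v_ref, bits):
    """Map an analog voltage in [0, v_ref] to an integer code."""
    levels = adc_levels(bits)
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range

assert adc_levels(1) == 2       # 1-bit ADC: black or white only
assert adc_levels(8) == 256     # 8-bit ADC: 256 brightness values
assert adc_levels(14) == 16384  # 14-bit ADC in many large-sensor cameras
```

Doubling the bit depth does not double the level count - each extra bit doubles it, which is why the step from 12 to 14 bits quadruples the tonal resolution.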

After the ADC converts the analog voltages received from the sensors into binary code consisting of zeros and ones, it passes this digitized data to the camera's digital signal processor. There the data is converted into a color image according to the algorithms introduced by the manufacturer, which, in particular, determine the coordinates of image points and assign them a particular color tint. When building the color image, the camera's built-in electronics adjusts the brightness, contrast and saturation of the picture, and also removes various interference and "noise" from it.

Of course, the sensor and its associated analog-to-digital converter are not the only components of a digital camera that determine its quality. Optics, electronics and other elements are also very important to ensure the high quality of the produced photographic images. Nevertheless, it is customary to determine the level of a modern digital camera based on the technical perfection of the photosensitive matrix installed in it. Moreover, the development of photographic technology in general today is largely determined by the speed of development of more and more advanced sensors.

Let's start simple. Consider the simplest camera (Camera Obscura)

Rays of light are reflected from every point of the object. The hole in the barrier lets only one ray from each point pass through. Without the barrier, the film would receive a meaningless jumble of rays.

The opening in the barrier is called the aperture, or diaphragm. In reality it passes more than one ray, so each point of the object is rendered as a spot on the film.

If the aperture is too large, the image will be blurry. However, if the aperture is too small, less light enters the film and diffraction effects begin. Diffraction of light is the phenomenon of deviation of light from the rectilinear direction of propagation when passing near obstacles.

The lens allows you to use a large aperture and increase the flow of light from each point.

NN is the principal optical axis, passing through the centers of the spherical surfaces

A bundle of rays parallel to the axis converges at the principal focus F

f is the main focal length,

u, v are the conjugate distances (subject and image distances)

A beam passing through the center of a lens is not refracted!

The system is exactly like a camera obscura, but collects more light!

The focal length is the distance from the rear (or second) principal point of the lens to its focus, measured for a beam entering the lens parallel to its optical axis.

Only some of the objects are in focus. The camera focuses either by shifting the matrix relative to the lens (changing the conjugate distance v) or by changing the degree of refraction in the lens (changing the main focal length f).
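The focusing behaviour just described follows the standard thin-lens relation 1/f = 1/u + 1/v, which ties together the quantities f, u and v introduced above. The function and variable names below are ours, a minimal sketch rather than anything from the article.

```python
# Sketch of the thin-lens equation: 1/f = 1/u + 1/v, where u is the subject
# distance, v the image (sensor) distance, and f the main focal length.
# All distances are in millimetres.

def image_distance(f, u):
    """Distance v at which the lens focuses a subject at distance u."""
    if u <= f:
        raise ValueError("a subject at or inside the focal length never focuses")
    return 1.0 / (1.0 / f - 1.0 / u)

# A 50 mm lens focused on a subject 5 m away: the sensor must sit
# about 50.5 mm behind the lens; for a very distant subject, v -> f.
v = image_distance(50.0, 5000.0)
```

This is why focusing on closer subjects requires moving the lens farther from the matrix, while "infinity" focus puts the sensor exactly one focal length behind the lens.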

Only those points of the image whose rays form a small "scatter spot" will be rendered sharply

By changing the aperture, you can change the size of the "scatter spots" and at the same time increase the depth of field (the interval over which the object is approximately in focus). At the same time, a small aperture reduces the amount of light - you have to increase the shutter speed (exposure time).

The size of the sensor and its distance to the lens determine the field-of-view of the camera
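The dependence of the field of view on sensor size and focal length can be sketched with the usual geometric formula, fov = 2·atan(d / 2f), where d is the sensor dimension. The numbers below are illustrative.

```python
import math

# Sketch: angle of view from sensor dimension and focal length,
# fov = 2 * atan(d / (2 * f)). Dimensions in millimetres.

def field_of_view_deg(sensor_dim_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# A full-frame sensor (36 mm wide) behind a 50 mm "normal" lens covers
# roughly a 40-degree horizontal angle; an 18 mm lens on the same sensor
# covers a much wider angle.
h_fov_normal = field_of_view_deg(36, 50)
h_fov_wide = field_of_view_deg(36, 18)
```

Shrinking the sensor while keeping the lens fixed narrows the angle - the same effect the crop-factor discussion below describes for digital SLRs.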

The matrix consists of many light-sensitive cells - pixels. Each cell, when light hits it, generates an electrical signal proportional to the intensity of the light flux. If information is used only about the brightness of the light, the picture is black and white, and to make it color, the cells are covered with color filters.

The pixel size in the camera should not be smaller than the minimum spot size the lens can resolve. To get the best out of a digital camera whose sensor contains small pixels, you should not pair it with cheap optics.
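One standard way to estimate the "minimum spot size" of a lens is the diffraction-limited Airy disk, whose diameter is approximately 2.44·λ·N for wavelength λ and f-number N. The comparison rule below is an illustrative simplification of the point made in the text.

```python
# Sketch: diffraction-limited spot (Airy disk) diameter vs. pixel pitch.
# d ~ 2.44 * wavelength * f-number is the standard estimate; the
# comparison function is our illustrative rule of thumb.

def airy_disk_um(wavelength_nm, f_number):
    """Approximate Airy disk diameter in micrometres."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

def pixels_finer_than_lens(pixel_pitch_um, wavelength_nm, f_number):
    """True when pixels are smaller than the lens's diffraction spot,
    i.e. the sensor out-resolves the optics."""
    return pixel_pitch_um < airy_disk_um(wavelength_nm, f_number)

# Green light (550 nm) at f/8 gives a spot of roughly 10.7 um: larger
# than a typical 4 um pixel, so the lens limits resolution here.
```

This is the quantitative side of the advice above: stopping a cheap lens far down makes its spot larger than any small pixel can exploit.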

The matrix (sensor, photosensor) is the camera component where the image is obtained - in effect, an analogue of photographic film, or of a film frame. As on film, the rays of light collected by the lens "paint" the picture. The difference is that on film the picture is stored directly, whereas on the sensor light produces electrical signals, which are processed by the camera's processor, after which the image is saved as a file to the memory card. The matrix itself is a special microchip with photosensor pixels (photodiodes). It is they that, when light hits them, generate a signal: the more light hits a pixel, the larger its signal.

In most matrices, each pixel is covered with a red, blue or green filter (only one!) in accordance with the well-known RGB (red-green-blue) color scheme. Why these particular colors? One of the hypotheses explaining human color vision is the three-component theory, which states that there are three types of light-sensitive elements in the human visual system. One type of element responds to green, another type to red, and a third type to blue.

On the matrix, the filters are arranged in groups of four, so that for every two green filters there are one blue and one red. This is done because the human eye is most sensitive to green. Light rays of different parts of the spectrum have different wavelengths, so each filter passes only rays of its own color into the cell.

So the resulting picture consists only of red, blue and green pixels - this is how RAW (uncompressed) files are recorded. To record JPEG and TIFF files, the camera's processor analyzes the color values of neighboring cells and calculates the color of each pixel. This process is called color interpolation, and it is extremely important for obtaining high-quality photographs.
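A much-simplified sketch of color interpolation looks like this: each pixel's two missing channels are estimated as the average of the nearest neighbours that carry them. Real cameras use far more sophisticated, edge-aware algorithms; this naive averaging is only an assumption for illustration.

```python
# Naive demosaicing sketch: estimate full (R, G, B) for one pixel by
# averaging the 3x3 neighbourhood, grouped by the Bayer filter color.
# Real in-camera interpolation is considerably more elaborate.

def demosaic_pixel(raw, bayer, y, x):
    """raw: 2D list of measured intensities; bayer: 2D list of 'R'/'G'/'B'
    labels of the same shape. Returns the interpolated (R, G, B) tuple."""
    h, w = len(raw), len(raw[0])
    sums = {'R': [0, 0], 'G': [0, 0], 'B': [0, 0]}  # color -> [total, count]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                c = bayer[ny][nx]
                sums[c][0] += raw[ny][nx]
                sums[c][1] += 1
    return tuple(sums[c][0] / sums[c][1] for c in 'RGB')

bayer = [['R', 'G'], ['G', 'B']]   # one 2x2 RGGB tile
raw = [[100, 50], [60, 20]]        # measured intensities
rgb = demosaic_pixel(raw, bayer, 0, 0)   # -> (100.0, 55.0, 20.0)
```

Even this toy version shows why interpolation quality matters: every output color except the one actually measured at a pixel is a guess based on its neighbours.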

The camera processor is responsible for all the processes that result in a picture. The processor determines the exposure parameters and decides which of them to apply in a given situation. The quality of the photos and the speed of the camera depend on the processor and its software.

The term "Exposure" refers to the amount of light that hits a photosensitive photographic material in a given period of time. The three main parameters that affect exposure are sensitivity, shutter speed, and aperture.
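The trade-off between these parameters is usually summarized by the standard exposure-value relation, EV = log2(N²/t), where N is the f-number and t the shutter speed in seconds. The example below is a sketch of that relation, not anything stated in the article.

```python
import math

# Sketch of the exposure-value relation EV = log2(N^2 / t):
# N is the f-number (aperture), t the shutter speed in seconds.
# Different aperture/shutter pairs can yield (nearly) the same EV.

def exposure_value(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

# f/8 at 1/125 s and f/5.6 at 1/250 s expose the sensor almost
# identically: opening one stop compensates for halving the time.
ev_a = exposure_value(8.0, 1 / 125)
ev_b = exposure_value(5.6, 1 / 250)
```

Sensitivity (ISO) then shifts how much exposure is actually needed: doubling ISO lets the camera get away with one EV less light.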

It should be noted that various distortions occur during image formation. Distortions introduced by the optical system during photography are called aberrations. By their nature, aberrations are divided into chromatic (color) and geometric (the latter called distortion).

Chromatic (color) aberrations are optical distortions caused by the different angles of refraction of light waves of different lengths. Violet light is refracted the most, red the least.

The degree of distortion depends on the quality of the lens and is reduced by using special lenses. For example, chromatic aberrations can be reduced by an achromatic lens consisting of two types of glass (crown and flint).

Distortion is a geometric distortion of straight lines. Distortions result from a change in the linear magnification provided by the optics across the image field. There are two types of distortion - barrel (negative) and pincushion (positive).

Aspheric optics are used to reduce distortion. The design of the lens includes lenses with an elliptical or parabolic surface, due to which the geometric similarity between the photographic object and its image is restored.

The lion's share of these distortions can be compensated using digital image processing methods - calibration. The essence of the calibration method is to compare the reference and real parameters, and in the analytical accounting for distortions.
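The calibration idea above usually rests on a radial distortion model: a point at radius r from the image centre is displaced to r·(1 + k1·r²), where the constant k1 is found by comparing a reference grid with its photograph (k1 < 0 for barrel, k1 > 0 for pincushion distortion). The code below is a first-order sketch of undoing that displacement; the coefficient value is purely illustrative.

```python
# Sketch: correcting first-order radial distortion. A distorted point
# (x, y) in normalized image coordinates is mapped back by dividing out
# the factor (1 + k1 * r^2). k1 comes from calibration; -0.1 here is
# an illustrative value, not a real lens measurement.

def undistort_point(x, y, k1):
    r2 = x * x + y * y
    factor = 1 + k1 * r2
    return x / factor, y / factor

# With barrel distortion (k1 = -0.1), a point photographed at (0.5, 0.0)
# is pushed back outward in the corrected image; the centre is unmoved.
corrected = undistort_point(0.5, 0.0, -0.1)
```

Higher-quality correction pipelines use several radial and tangential coefficients, but the principle - compare reference and real positions, then invert the model analytically - is the same as in this sketch.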

After the shooting is done, an equally important task remains - to save the resulting photo on a memory card. It is desirable to do this with maximum quality, without losing any information obtained during the shooting. Today, most cameras allow you to save pictures in two fundamentally different formats - RAW and JPEG. RAW is raw, unprocessed information from the matrix, written to a file. It is assumed that the photographer will work with the RAW file on his own, converting it on a computer to get the finished photo. JPEG is actually a finished photograph.

Some, usually more expensive, cameras offer to save photos in a "raw" (RAW) format. There is no common standard for raw formats; they differ from manufacturer to manufacturer. A raw file contains all the data received directly from the photosensitive element, before the camera software changes the white balance or anything else. Saving a photo in raw format allows settings such as white balance to be fine-tuned after the photo has been transferred to a PC. Most professional photographers use the raw format because it gives them maximum flexibility in prepress. The flip side of this flexibility is that "raw" photos take up an extremely large amount of space on the memory card.

Image compression is the application of data compression techniques to a digital image. By reducing the redundancy of image data, the efficiency of image storage and transmission can be improved.
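The simplest form of redundancy-based compression is run-length encoding, which exploits long runs of identical values such as a patch of uniform sky. Real image codecs (JPEG and the rest) are far more elaborate; this is only a sketch of the principle.

```python
# Sketch: run-length encoding (RLE), the simplest compression scheme
# based on repeating elements. Each run of identical values is stored
# as a [value, count] pair.

def rle_encode(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([v, 1])    # start a new run
    return runs

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

row = [200] * 12 + [90, 91, 90]    # a mostly uniform "sky" row
packed = rle_encode(row)           # 4 runs instead of 15 samples
assert rle_decode(packed) == row   # lossless round trip
```

Note that RLE is lossless, whereas the "excess details" approach mentioned earlier is lossy: it discards information the eye is unlikely to notice and cannot be undone.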

Modern cameras do everything themselves, to get a picture, the user just needs to press one button. But it's still interesting: by what magic does the picture get into the camera? We will try to explain the basic principles of digital cameras.

Main parts

Basically, the device of a digital camera repeats the design of an analog one. Their main difference is in the photosensitive element on which the image is formed: in analog cameras it is a film, in digital cameras it is a matrix. Light through the lens enters the matrix, where an image is formed, which is then stored in memory. Now we will analyze these processes in more detail.

The camera consists of two main parts - the body and the lens. The case contains a matrix, a shutter (mechanical or electronic, and sometimes both at once), a processor and controls. A lens, whether detachable or hardwired, consists of a group of lenses housed in a plastic or metal housing.

Where is the picture

The matrix consists of many light-sensitive cells - pixels. Each cell, when light hits it, generates an electrical signal proportional to the intensity of the light flux. Since only information about the brightness of the light is used, the picture is black and white; to make it color, you have to resort to various tricks. The cells are covered with color filters: in most matrices, each pixel is covered with a red, blue or green filter (only one!), in accordance with the well-known RGB (red-green-blue) color scheme. Why these particular colors? Because these colors are primary, and all the rest are obtained by mixing them and reducing or increasing their saturation.

On the matrix, the filters are arranged in groups of four, so that for every two green filters there are one blue and one red. This is done because the human eye is most sensitive to green. Light rays of different parts of the spectrum have different wavelengths, so each filter passes only rays of its own color into the cell. The resulting picture consists only of red, blue and green pixels - this is how RAW (raw format) files are recorded. To record JPEG and TIFF files, the camera's processor analyzes the color values of neighboring cells and calculates the color of each pixel. This process is called color interpolation, and it is extremely important for obtaining high-quality photographs.

This arrangement of filters on the matrix cells is called the Bayer pattern
There are two main types of matrices, and they differ in the way information is read from the sensor. In CCD-type matrices, information is read from the cells sequentially, so processing a file can take quite a long time. Although such sensors are somewhat "thoughtful" (slow to read out), they are relatively cheap and, moreover, the noise level in the images obtained with them is lower.

CCD type

In matrices of the CMOS type (CMOS), information is read individually from each cell. Each pixel is marked with coordinates, which allows you to use the matrix for metering and autofocus.

CMOS sensor

The described types of matrices are single-layer, but there are also three-layer ones, where each cell simultaneously perceives three colors, distinguishing differently colored color streams by wavelength.

Three-layer matrix

The camera processor has already been mentioned above - it is responsible for all the processes that result in a picture. The processor determines the exposure parameters, decides which parameters to apply in a given situation. The quality of photos and the speed of the camera depend on the processor and software of the camera.


At the click of the shutter

The shutter measures the amount of time that light hits the sensor (the shutter speed). In the vast majority of cases this time is measured in fractions of a second - as they say, you won't even have time to blink. In digital SLR cameras, as in film cameras, the shutter consists of two opaque curtains that cover the sensor. Because of these curtains, it is impossible to sight on the display in digital SLRs - after all, the matrix is covered and cannot transmit an image to the display.

In compact cameras, the matrix is not covered by the shutter, so the frame can be composed on the display

When the shutter button is pressed, the shutters are driven by springs or electromagnets, allowing light to enter, and an image is formed on the sensor - this is how a mechanical shutter works. But there are also electronic shutters in digital cameras - they are used in compact cameras. An electronic shutter, unlike a mechanical one, cannot be felt by hand, it is, in general, virtual. The matrix of compact cameras is always open (which is why you can compose the picture while looking at the display, and not at the viewfinder), but when the shutter button is pressed, the frame is exposed for the specified exposure time, and then written to memory. Due to the fact that electronic shutters do not have shutters, their shutter speeds can be ultra-short.

Focus

As mentioned above, the matrix itself is often used for autofocusing. In general, there are two types of autofocus - active and passive.

For active autofocus, the camera needs a transmitter and receiver that work in the infrared region or with ultrasound. The ultrasonic system measures the distance to an object using echolocation of the reflected signal. Passive focusing is carried out according to the contrast assessment method. In some professional cameras both types of focus are combined.
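The contrast assessment method mentioned above can be sketched simply: the camera sweeps the focus and picks the position where an image sharpness metric peaks. The metric below - the summed squared difference between neighbouring pixels - is one common, illustrative choice; real cameras use more refined measures.

```python
# Sketch of passive, contrast-based autofocus. A sharp image has strong
# local brightness differences; a defocused one is smoothed out. The
# camera maximizes a score like this while sweeping the focus motor.

def contrast_score(image):
    """image: 2D list of brightness values; higher score = sharper."""
    score = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            score += (a - b) ** 2  # penalize-free reward for hard edges
    return score

sharp = [[0, 255, 0, 255]]     # hard edges: high neighbour differences
blurry = [[96, 128, 128, 96]]  # the same scene, smoothed by defocus
assert contrast_score(sharp) > contrast_score(blurry)
```

This also explains a known weakness of passive autofocus: on a uniform, low-contrast subject the score is nearly flat, and the camera "hunts" without finding a peak.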

In principle, the entire area of the matrix can be used for focusing, which allows manufacturers to place dozens of focusing zones on it, as well as to offer a "floating" focus point that the user can place anywhere he wants.

The fight against distortion

It is the lens that forms the image on the matrix. A lens consists of several lens elements - three or more. A single lens cannot create a perfect image: it will be distorted at the edges (these distortions are called aberrations). Roughly speaking, the beam of light should travel straight to the sensor without being scattered along the way. To some extent the diaphragm helps with this - a round plate with a hole in the middle, made up of several blades. But the aperture cannot be closed too far, because that reduces the amount of light falling on the sensor (which is itself used when setting the desired exposure).

If, however, several lens elements with different characteristics are assembled in series, the distortions they produce together will be much smaller than the aberrations of each of them separately. The more elements, the less aberration - but also the less light reaches the sensor. After all, glass, no matter how transparent it seems, does not transmit all the light: some part is scattered, some is reflected. To let through as much light as possible, the elements are coated with a special anti-reflective coating. If you look at a camera lens, you will see the surface shimmer like a rainbow - that is the anti-reflection coating.

The lenses are positioned inside the lens like this

One of the characteristics of a lens is its aperture ratio - the value of the maximum open aperture. It is indicated on the lens, for example, like this: 28/2, where 28 is the focal length and 2 is the maximum aperture. For a zoom lens the marking looks like this: 14-45/3.5-5.8. Two aperture values are given for zooms because the maximum aperture differs at the wide-angle and telephoto ends. That is, at different focal lengths the aperture ratio will be different.

The focal length indicated on all lenses is the distance from the rear principal point of the lens to the light receiver - in this case, the matrix - when the lens is focused at infinity. The focal length determines the lens's angle of view and, so to speak, its range, that is, how far it "sees". Wide-angle lenses make subjects look farther away than they do to normal vision, while telephoto lenses bring them closer and have a small angle of view.

The viewing angle of the lens depends not only on its focal length, but also on the diagonal of the light receiver. For 35 mm film cameras, a lens with a focal length of 50 mm is considered normal (that is, approximately corresponding to the viewing angle of the human eye). Lenses with a shorter focal length are wide-angle, lenses with a longer focal length are telephotos.

The left side of the lower inscription on the lens is the zoom focal length, the right side is the aperture

This is where the problem lies, and why the 35 mm equivalent is often indicated next to the focal length of a digital camera lens. The diagonal of the matrix is smaller than the diagonal of a 35 mm frame, so the numbers have to be "translated" into the more familiar equivalent. Because of this same increase in effective focal length, wide-angle shooting becomes almost impossible on SLR cameras with "film" lenses. An 18 mm lens on a film camera is a super wide-angle lens, but on a digital camera its equivalent focal length will be around 30 mm or more. As for telephoto lenses, the increase in their "range" only plays into photographers' hands, since a conventional lens with a focal length of, say, 400 mm is quite expensive.
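The "translation" works through the crop factor: the ratio of the 35 mm frame diagonal (about 43.3 mm) to the sensor diagonal. The APS-C dimensions below are typical illustrative values; actual sensors vary slightly by manufacturer.

```python
import math

# Sketch: 35 mm-equivalent focal length via the crop factor, i.e. the
# ratio of the full-frame diagonal (36 x 24 mm) to the sensor diagonal.

def crop_factor(sensor_w_mm, sensor_h_mm):
    full_frame_diag = math.hypot(36.0, 24.0)  # ~43.27 mm
    return full_frame_diag / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    return focal_mm * crop_factor(sensor_w_mm, sensor_h_mm)

# An 18 mm lens on a typical APS-C sensor (~23.6 x 15.7 mm, crop ~1.5x)
# frames the scene like a ~27-28 mm lens on 35 mm film.
eq = equivalent_focal_length(18, 23.6, 15.7)
```

The same multiplication explains the telephoto "bonus" mentioned above: a 400 mm lens on this sensor frames like roughly 600 mm on film.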

Viewfinder

In film cameras, you can only compose a shot using the viewfinder. Digital ones allow you to completely forget about it, since in most models it is more convenient to use the display for this. Some very compact cameras don't have a viewfinder at all, simply because there isn't room for it.

The most important thing about a viewfinder is what you can see through it. SLR cameras, for example, are so called precisely because of the design of their viewfinder. The image from the lens is transmitted to the viewfinder through a system of mirrors, so the photographer sees the real area of the frame. During shooting, when the shutter opens, the mirror blocking it flips up and passes the light to the sensor. Such designs do their job excellently, but they take up quite a lot of space and are therefore inapplicable in compact cameras.

This is how the image through the system of mirrors enters the viewfinder of the SLR camera

Direct-vision optical viewfinders are used in compact cameras. This is, roughly speaking, a sighting hole through the camera body. Such a viewfinder takes up little space, but its view does not exactly match what the lens "sees".

There are also pseudo-reflex cameras with electronic viewfinders. In such viewfinders, a small display is installed, the image on which is transmitted directly from the matrix - just like on an external display.

Flash

Flash, a pulsed light source, is known to be used to illuminate where the main light is not enough. Built-in flashes are usually not very powerful, but their momentum is enough to illuminate the foreground. On semi-professional and professional cameras, there is also a contact for connecting a much more powerful external flash. This contact is called a "hot shoe".