Digitization of an analog signal. Balanced and unbalanced connections. Analog studio switching. Text data encoding

Let us first deal with the general principles of analog-to-digital conversion. The basic principle of digitizing any signal is very simple and is shown in Fig. 17.1a. At certain points in time t1, t2, t3, ... we take the instantaneous value of the analog signal and, as it were, apply a measure to it: a ruler graduated on a binary scale. An ordinary ruler contains large divisions (meters), each divided into ten parts (decimeters), each of which is again divided into ten parts (centimeters), and so on. A binary ruler would contain divisions divided in half, then in half again, and so on, down to whatever resolution is sufficient. If the entire length of such a ruler is, say, 2.56 m and the smallest division is 1 cm (that is, we can measure with an accuracy of at least 1 cm, or, more precisely, half of it), then there will be exactly 256 such divisions, and each reading can be represented as a binary number of 1 byte, or 8 bits.

Nothing changes if we measure not length but voltage or resistance; only the meaning of the "ruler" is slightly different. In this way we obtain successive samples of the signal value x1, x2, x3, ... Note, moreover, that with the chosen resolution and number of bits we can measure a value no greater than the one corresponding to the maximum number, in this case 255. Otherwise we must either increase the number of bits (lengthen the ruler) or coarsen the resolution (stretch it). All of the above is the essence of the work of the analog-to-digital converter, the ADC.
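To make the "binary ruler" concrete, here is a minimal sketch in Python; the 2.56 V full scale and the function name are illustrative assumptions, not a description of a specific device:

```python
# A toy 8-bit ADC with an assumed 2.56 V full scale: one step = 10 mV.
FULL_SCALE = 2.56               # volts (illustrative)
BITS = 8
STEP = FULL_SCALE / 2 ** BITS   # 0.01 V per division of the "ruler"

def adc_sample(voltage: float) -> int:
    """Return the 8-bit code for a voltage; out-of-range readings clip."""
    code = int(round(voltage / STEP))
    return max(0, min(code, 2 ** BITS - 1))

print(adc_sample(1.27))   # 127
print(adc_sample(3.00))   # 255 -- the "ruler" is too short, the reading clips
```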

Fig. 17.1a demonstrates this process for the case where we measure some quantity that varies in time. If measurements are made regularly with a known frequency (called the sampling frequency, or quantization frequency), then it is enough to record only the signal values. If the task is then to restore the original signal from the recorded values, then, knowing the sampling rate and the accepted scale (that is, what value of the physical quantity corresponds to the maximum number in the adopted range of binary numbers), we can always restore the original signal by simply plotting the points on a graph and connecting them with a smooth line.

But what do we lose in the process? Look at Fig. 17.1b, which illustrates the famous Kotelnikov theorem (as usual, it bears a different name abroad, the Nyquist theorem; in fact the two arrived at it independently of each other). The figure shows a sine wave of the limiting frequency that we can still recover from an array of points obtained with a sampling frequency f_d. Since the formula for a sinusoidal oscillation, A·sin(2πft), contains two independent coefficients (A is the amplitude and f is the frequency), at least two points per period are needed to restore the waveform uniquely; that is, the sampling frequency must be at least twice the highest frequency in the spectrum of the original analog signal. This is one of the common formulations of the Kotelnikov-Nyquist theorem.

Try to draw another sine wave without phase shift passing through the points indicated on the graph, and you will see that it is impossible. At the same time, you can draw any number of different sinusoids through these points if their frequency differs from the original by an integer multiple of the sampling frequency f_d. Together, such sinusoids, or harmonics (that is, terms of the expansion of the signal in a Fourier series, see Chapter 5), can form a signal of arbitrarily complex shape, but they cannot be restored: if such harmonics are present in the original signal, they disappear forever. Only harmonic components with frequencies below the limit are uniquely restored. That is, the digitization process is equivalent to the action of a low-pass filter with a rectangular cutoff of the characteristic at a frequency equal to exactly half the sampling frequency.
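A quick numerical illustration of this loss (the frequencies are arbitrary, chosen only so the arithmetic is visible): sampled at f_d, a sinusoid at f and one at f + f_d yield exactly the same readings.

```python
# Sampling at f_d cannot distinguish sin(2*pi*f*t) from sin(2*pi*(f+f_d)*t):
# both pass through the very same sample points.
import math

f_d = 8.0                         # sampling frequency, Hz (illustrative)
f_low, f_high = 1.0, 1.0 + f_d    # 1 Hz and 9 Hz

for n in range(4):
    t = n / f_d
    s1 = math.sin(2 * math.pi * f_low * t)
    s2 = math.sin(2 * math.pi * f_high * t)
    print(f"t={t:.3f} s  low={s1:+.4f}  high={s2:+.4f}")
# Both columns coincide: after sampling, the 9 Hz component is gone for good.
```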

Now about the inverse transformation. In fact, no conversion as such takes place in the digital-to-analog converter (DAC) that we consider here: we simply express a binary number as a proportional voltage value; that is, from the point of view of theory we are engaged only in scale conversion. The entire analog scale is divided into quanta, gradations corresponding to the resolution of our binary "ruler". If the maximum value of the signal is, for example, 2.56 V, then with an eight-bit code we get a quantum of 10 mV, and we do not know and cannot find out what happens to the signal between these values, nor in the time intervals between samples. If we take a series of consecutive samples of a signal, for example those shown in Fig. 17.1a, the result is the stepwise picture shown in Fig. 17.2.

Fig. 17.2. Recovery of the digitized signal from Fig. 17.1a
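The stepwise picture of Fig. 17.2 is what engineers call a zero-order hold; a minimal sketch with made-up sample codes shows the idea:

```python
# Zero-order hold: each sample value is simply held until the next one arrives.
# The codes and the hold factor below are hypothetical.
QUANTUM = 2.56 / 256                  # 10 mV per code, as in the example above
codes = [10, 64, 127, 200, 180, 90]   # invented 8-bit samples

def staircase(codes, hold=4):
    """Repeat each sample `hold` times, producing the 'steps' of Fig. 17.2."""
    out = []
    for c in codes:
        out.extend([c * QUANTUM] * hold)
    return out

print(staircase(codes)[:8])           # the first two steps, each held four times
```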

If you compare the graphs in Fig. 17.1a and Fig. 17.2, you will see that the second graph represents the first, to put it mildly, very approximately. To increase the fidelity of the reconstructed curve it is necessary, first, to take readings more often and, second, to increase the bit depth. The steps will then become smaller and smaller, and there is hope that at some sufficiently high resolution, both in time and in quantization, the curve will finally become indistinguishable from a continuous analog line.

Marginal notes

Obviously, in the case of sound signals, additional smoothing, for example with a low-pass filter, is simply not required here: it would only worsen the picture by cutting off high frequencies even further. Besides, analog amplifiers of all kinds smooth the signal themselves, and the human senses also act as a filter. So the presence of steps is not in itself significant if they are small enough, but a sharp drop in the frequency response above a certain frequency affects the sound quality in a fatal way. Many people with a good ear for music claim that they reliably distinguish digital sound of CD quality (sampled at 44.1 kHz, that is, with a cutoff at a frequency well above the limit of human hearing, and with at least 65 thousand gradations over the entire range) from genuine analog sound, for example from a vinyl record or from tape. For this reason, high-quality digital sound is recorded at sampling frequencies much higher than formally necessary, for example 192 or even 256 kHz, and then it becomes truly indistinguishable from the original. True, directly digitized sound is recorded only on discs in the Audio CD format (with the characteristics just given); almost all other formats use compression according to special algorithms. Without compression, neither the capacity of modern media nor the speed of computer networks would suffice for recording: just one minute of stereo sound with CD-quality parameters takes about 10 MB, as you can check yourself.
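The "about 10 MB per minute" figure is easy to verify:

```python
# One minute of uncompressed CD-quality stereo audio.
fs, bits, channels, seconds = 44_100, 16, 2, 60
size_bytes = fs * bits // 8 * channels * seconds
print(size_bytes, size_bytes / 2**20)   # 10584000 bytes, ~10.09 MiB
```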

We will not delve into the specifics of sampling analog periodic signals, since this is a vast area of modern engineering, primarily related to the digitization, storage, replication and reproduction of sound and video, and it deserves at least a separate book. For our purposes the above information is sufficient, and we now turn directly to the task of digitizing and reconstructing a single signal value.

The conversion of an analog signal into digital form is a combination of three operations: discretization (sampling), quantization and coding.

Discretization is the replacement of a continuous analog TV signal S(t) by a sequence of discrete samples of this signal (Fig. 2). These samples are taken at moments of time separated from each other by the interval T, called the sampling interval. The reciprocal of the sampling interval is called the sampling frequency. The most common is uniform discretization with a constant period, based on the Kotelnikov theorem. According to this theorem, any continuous signal S(t) having a limited frequency spectrum (0...f_up) can be represented without loss of information by the values S_d of this signal taken at discrete moments of time t_n = nT (n = 1, 2, 3, ... are integers), provided that T ≤ 1/(2f_up), where T is the period, or sampling interval. The minimum allowable sampling frequency according to Kotelnikov is f_d.min = 2f_up.

It is clear that the shorter the sampling interval (the higher the sampling frequency), the smaller the difference between the original signal and its discretized copy. The stepped structure of the sampled signal can be smoothed with a low-pass filter; this is how the analog signal is restored from the sampled one.

Quantization follows discretization in the conversion of an analog signal to digital form and consists in replacing the instantaneous values of the samples S_d obtained after discretization with the nearest values from a set of fixed levels (Fig. 3). Quantization is also a discretization of the signal S_q, but in level rather than in time. The fixed levels to which the samples are "tied" are called quantization levels. The dynamic range of the signal S(t), divided by the quantization levels into separate ranges of values (quantization steps), forms the quantization scale.

The latter can be either linear or nonlinear, depending on the conditions of the transformation. Whether a sample is rounded to the nearest upper or lower level is determined by the position of the quantization threshold within the quantization step.

The discretized and quantized signal S_dq is already digital. Indeed, while the amplitude of the pulses of the sampled signal S_d can take any arbitrary value within the original dynamic range of the signal S(t), the quantization operation has replaced the possible values of the signal amplitude with a limited set equal in size to the number of quantization levels. Thus, a quantized sample of a signal is expressed by a certain number determined by the number of quantization levels.

To transmit such a signal through communication channels, it is best to convert it to binary form, i.e. to write each signal level value in binary notation. In this case the number (the level value) is converted into a code combination of the symbols "0" and "1" (Fig. 4). This is the third and final operation in the transformation of the analog signal S(t) into the digital S_dq, called coding.
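A minimal sketch of this coding step (the level values are invented; 4-bit words match the k = 4 of Fig. 4):

```python
# Coding: each quantized level becomes a binary code word.
levels = [5, 12, 3, 15]                     # hypothetical 4-bit samples
words = [format(v, '04b') for v in levels]
print(words)                                # ['0101', '1100', '0011', '1111']
```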

All these three operations are performed by one technical device - an analog-to-digital converter (ADC). The inverse conversion of a digital signal into an analog one is performed in a device called a digital-to-analog converter (DAC). Analog-to-digital and digital-to-analog converters are indispensable blocks of any digital systems for transmitting, storing and processing information.

When directly encoding a television signal, code combinations are created at a rate equal to the sampling frequency f_d. Each code combination corresponds to a specific sample and contains m binary symbols (bits). Code words can be transmitted in parallel or in serial form. For transmission in parallel form, k communication lines are needed (in Fig. 4, k = 4).


The symbols of the code word are transmitted simultaneously along the lines within the sampling interval. For transmission in serial form, the sampling interval must be divided into sub-intervals (clock cycles); the symbols of the word are then transmitted sequentially along a single line, one clock cycle per symbol.

When digital information is transmitted over communication channels, the transmission rate is the number of binary symbols transmitted per unit time; the unit of speed is 1 bit/s. The digital signal transmission rate equals the product of the sampling frequency f_d and the number of binary symbols in one sample, m: V = f_d · m.

If the upper cutoff frequency of the TV signal is 6 MHz, then the minimum sampling frequency, according to Kotelnikov’s theorem, is 12 MHz. As a rule, in digital television systems, the frequency f d is chosen slightly above the minimum acceptable. This is due to the need to unify the digital TV signal for various television standards. In particular, for studio digital equipment, a sampling frequency of 13.5 MHz is recommended.

The number of quantization levels should be chosen no less than the maximum number of brightness gradations distinguishable by the eye, which, depending on viewing conditions, ranges from 100 to 200. Hence the required bit depth is m = log2(100...200) = 6.6...7.6.

Obviously, the number of symbols in a code combination can only be an integer, which means the bit depth of the code combination is m = 7 (or 8). In the first case the code combination can convey 128 possible signal levels (brightness gradations), in the second, 256. If we take m = 8, the transmission rate of the digital information is:

V = 13.5 × 8 = 108 Mbit/s.

If we consider that, in addition to the luminance signal, color information must also be transmitted, the total digital stream doubles to 216 Mbit/s. Both the TV signal conversion devices and the communication channels must support such speeds.
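The arithmetic behind these figures, restated as a two-line check:

```python
# V = f_d * m for the luminance signal, then doubled for color information.
f_d = 13.5e6    # Hz
m = 8           # bits per sample
v_luma = f_d * m
print(v_luma / 1e6, 2 * v_luma / 1e6)   # 108.0 and 216.0 Mbit/s
```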

It is not economically feasible to transmit such a large digital stream over communication channels, so the next task is to "compress" the digital TV signal. Reserves for reducing the digital stream without compromising the quality of the reproduced image do exist. These reserves are specific to the TV signal, which carries significant informational redundancy. This redundancy is conventionally divided into statistical and physiological.

Statistical redundancy is determined by the properties of the image, which in general is not a chaotic distribution of brightness but is described by laws establishing certain relationships (correlation) between the brightnesses of individual elements. The correlation is especially strong between neighboring (in space and in time) image elements. Knowledge of this correlation makes it possible not to transmit the same information repeatedly and thus to reduce the digital stream.

The second type, physiological redundancy, is due to the limitations of the human visual apparatus. Taking physiological redundancy into account makes it possible not to transmit information that our eyesight would not perceive anyway.

Similarly, the imperfection of the human hearing system allows you to "get rid" of excessive audio information in the signal.

Despite the fact that we absorb most external information through vision, sound images are no less important to us, and often even more so. Try watching a movie with the sound turned off: within 2-3 minutes you will lose the thread of the plot and interest in what is happening, no matter how big the screen or how high-quality the image! That is why, in the era of silent movies, a pianist played behind the screen. If, on the contrary, you remove the image and leave the sound, it is quite possible to "listen to" a movie as an exciting radio play.

Hearing brings us information about what we do not see: the sector of visual perception is limited, while the ear picks up sounds coming from all directions, complementing visual images. At the same time, our hearing can localize an invisible sound source with great accuracy in direction, distance and speed of movement.

Sound was converted into electrical vibrations long before images were. This was preceded by mechanical recording of sound vibrations, whose history began back in the 19th century.

Accelerated progress, including the ability to transmit sound over distance, was made possible by electricity, with the advent of amplification technology and of acoustoelectric and electroacoustic transducers: microphones, pickups, dynamic heads and other emitters. Today audio signals are transmitted not only over wires and over the air, but also over fiber-optic communication lines, mainly in digital form.

Acoustic vibrations are converted into an electrical signal, usually by microphones. Any microphone contains a moving element whose vibrations generate a current or voltage of corresponding shape. The most common type is the dynamic microphone, essentially a loudspeaker working in reverse: air vibrations set in motion a membrane rigidly connected to a voice coil located in a magnetic field. A condenser microphone is in effect a capacitor, one plate of which oscillates in time with the sound, changing the capacitance between the plates; the electret microphone is its close relative, whose permanently charged plates themselves generate a voltage proportional to the amplitude of the oscillations. A ribbon microphone uses a thin conductive ribbon freely suspended in a magnetic field. Many microphone models have a built-in amplifier, since the signal level coming directly from the acousto-electric transducer is very small. Unlike a microphone, the pickup of an electric musical instrument registers the vibrations not of air but of a solid body: the strings or the soundboard of the instrument. The pickup head of a record player reads the groove of the record with a needle mechanically connected to moving coils located in a magnetic field (or to magnets, if the coils are stationary); alternatively, the needle's vibrations are transmitted to a piezoelectric element, which generates an electric charge when subjected to mechanical stress. In magnetic recording, the audio signal is recorded on magnetic tape and then read by a special head. Finally, optical recording was traditionally used in the cinema: an opaque soundtrack whose width varied in time with the signal was applied along the edge of the film, and as the film was pulled through the projector the electrical signal was read off by a photosensor.

In synthesizers, sound is born directly in the form of electrical vibrations; there is no primary conversion of acoustic waves into an electrical signal.

Modern sound sources are diverse; digital media such as CDs and DVDs are ever more widespread, although vinyl records survive as well. We continue to listen to radio, both broadcast and wired. Sound accompanies telecasts and movies, not to mention so familiar a phenomenon as telephony. A growing share of the audio world belongs to the computer, which makes it possible to conveniently archive, combine and process sound programs in the form of files. In the age of digital technology, digitized speech and music travel over digital channels, including the Internet, without significant losses in transit. This is ensured by digital coding; losses arise solely from the compression that is most often applied in such cases. On digital media, however, compression is either absent altogether (CD, SACD) or lossless compression algorithms are used (DVD Audio, DVD Video). In other cases, the compression ratio is determined by the required level of sound quality (MP3 files, digital telephony, digital television, some types of media).

Fig. 1. Converting acoustic sound vibrations into an electrical signal

The reverse conversion from electrical to acoustic vibrations is carried out by loudspeakers built into radios and televisions, as well as by separate speaker systems and headphones.

Sound refers to acoustic vibrations in the frequency range from 16 Hz to 20,000 Hz. Below this range (infrasound) and above it (ultrasound) the human ear hears nothing, and even within the audible range the sensitivity of hearing is very uneven, reaching its maximum around 4 kHz. For sounds of all frequencies to seem equally loud, they have to be reproduced at different levels. This technique, called loudness compensation, is often implemented in household equipment, although the result cannot be called unambiguously positive.


Fig. 2. Equal volume curves

The physical properties of sound are usually expressed not in linear but in relative logarithmic units, decibels (dB), since these are much clearer in numbers and more compact on graphs (otherwise one would have to operate with quantities carrying many zeros before or after the decimal point, and the latter would easily be lost against the background of the former). The ratio of two levels A and B (say, voltages or currents) in dB is defined as:

C_u [dB] = 20 lg(A/B). If we are talking about power, then C_p [dB] = 10 lg(A/B).
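The two definitions as small helper functions (the function names are mine; the formulas are the ones above):

```python
# Decibel ratios: 20*lg for voltages/currents, 10*lg for powers.
import math

def db_voltage(a: float, b: float) -> float:
    return 20 * math.log10(a / b)

def db_power(a: float, b: float) -> float:
    return 10 * math.log10(a / b)

print(db_voltage(2, 1))   # ~6.02 dB for a doubled voltage
print(db_power(2, 1))     # ~3.01 dB for a doubled power
```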

In addition to the frequency range, which determines the sensitivity of human hearing to pitch, there is also the concept of a loudness range, showing the sensitivity of the ear to volume level and covering the interval from the quietest sound distinguishable by hearing (the threshold of sensitivity) to the loudest, beyond which lies the pain threshold. The threshold of sensitivity is taken as a sound pressure of 2 × 10^-5 Pa (pascal), and the pain threshold is a pressure 10 million times greater. In other words, the audibility range, i.e. the ratio of the pressure of the loudest sound to the quietest, is 140 dB, which significantly exceeds the capabilities of any audio equipment because of its own noise. Only high-resolution digital formats (SACD, DVD Audio) approach the theoretical limit of dynamic range (the ratio of the loudest sound reproduced by the equipment to the noise level) of 120 dB; CD provides about 90 dB, and a vinyl record about 60 dB.
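The 140 dB figure follows directly from the pressure ratio just quoted:

```python
import math
print(20 * math.log10(1e7))   # 140.0 dB between pain and sensitivity thresholds
```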


Fig. 3. Range of hearing sensitivity

Noise is always present in the audio path: both the intrinsic noise of amplifying elements and external interference. Signal distortions are divided into linear (amplitude, phase) and nonlinear, or harmonic. With linear distortions the signal spectrum is not enriched with new components (harmonics); only the level or phase of existing ones changes. Amplitude distortions, which violate the original level ratios at different frequencies, lead to audible timbre distortions. For a long time it was believed that phase distortion is not critical for hearing, but today the opposite has been shown: both the timbre and the localization of sound depend largely on the phase relationships of the frequency components of the signal.

Any amplifier path is nonlinear, so harmonic distortions always arise: new frequency components at 3, 5, 7, etc. times the frequency of the tone that generates them (odd harmonics), or at 2, 4, 6, etc. times (even harmonics). The threshold at which harmonic distortion is detected varies greatly, from a few tenths or even hundredths of a percent up to 3-7%, depending on the composition of the harmonics. Even harmonics are less noticeable because they are consonant with the fundamental tone (a frequency doubling corresponds to an octave).

In addition to harmonic distortions there are intermodulation distortions: difference products of the frequencies of the signal spectrum and their harmonics. For example, at the output of an amplifier with a sufficiently nonlinear characteristic whose input carries two frequencies, 8 and 9 kHz, a third (1 kHz) will appear, as well as a number of others, e.g. 2 kHz (as the difference of the second harmonics of the fundamental frequencies), and so on. Intermodulation distortions are especially unpleasant to the ear because they generate many new sounds, including ones dissonant with the fundamental tones.

Noise and distortion are largely masked by the signal, but they in turn mask low-level signals, which disappear or lose their clarity. Therefore, the higher the signal-to-noise ratio, the better. Actual sensitivity to noise and distortion depends on the individual characteristics of one's hearing and its training. A level of noise and distortion that does not affect the transmission of speech may be completely unacceptable for music. What an audiophile can hear, and not only hear but also explain to a sound engineer, may be completely imperceptible to an ordinary listener.

ANALOG AUDIO TRANSMISSION

Traditionally, audio signals were transmitted over wires, as well as over the air (radio).

A distinction is made between unbalanced transmission lines (the classic wired connection) and balanced ones. An unbalanced line consists of two wires: signal (direct) and return (ground). Such a line is very sensitive to external interference and is therefore unsuitable for transmitting signals over long distances. It is often implemented as a shielded wire, with the shield connected to ground.


Fig. 4. Unbalanced shielded line

A balanced line involves three wires: two signal wires carrying the same signal but in antiphase, plus ground. On the receiving side, common-mode interference (induced equally on both signal wires) cancels out and disappears, while the useful signal level is doubled (see the sketch after Fig. 5).


Fig. 5. Balanced shielded line
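A toy numerical model of this cancellation (all values invented) makes the mechanism obvious:

```python
# Balanced line: identical noise lands on both wires; the signal is antiphase.
signal = [0.1, -0.3, 0.5, -0.2]        # useful signal, volts (invented)
noise  = [0.05, 0.05, -0.04, 0.02]     # common-mode interference (invented)

hot  = [ s + n for s, n in zip(signal, noise)]   # in-phase wire
cold = [-s + n for s, n in zip(signal, noise)]   # antiphase wire

received = [h - c for h, c in zip(hot, cold)]    # differential receiver
print(received)   # [0.2, -0.6, 1.0, -0.4]: noise cancelled, signal doubled
```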

Unbalanced lines are usually used inside devices and over short distances, mainly in consumer paths. In the professional sphere, balanced lines dominate.

In the figures, the connection points of the shield are shown conventionally: in practice they have to be chosen "in place" each time to achieve the best result. Most often the shield is connected only at the signal receiver's side.

Audio signals are normalized by the level of the effective (RMS) voltage, which is 0.707 of the amplitude value (see the numeric check after this list):

  • microphone level: 1-10 mV (for microphones without a built-in amplifier),
  • line level: 0.25-1 V, usually 0.7 V.
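The 0.707 factor is amplitude divided by the square root of two; a quick numeric check on a unit-amplitude sine:

```python
# RMS of a 1 V-amplitude sine wave over one full period.
import math
samples = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
rms = math.sqrt(sum(x * x for x in samples) / len(samples))
print(rms)   # ~0.7071, i.e. 0.707 of the amplitude
```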

At the output of a power amplifier, from which the signal goes to the loudspeakers, the level is much higher and can reach (depending on volume) 20-50 V at currents up to 10-20 A, and sometimes up to hundreds of volts for broadcast lines and for sounding open spaces.

Cables and connectors used:

  • for balanced lines and microphones: a shielded (often twisted) pair; 3-pin XLR connectors, or screw or clamp terminals;


Fig. 6. Connectors for balanced lines: terminals and XLR

  • for unbalanced lines: shielded cable; RCA connectors ("tulip"), less often DIN (and GOST equivalents), as well as various phone plugs;


Fig. 7. Connectors for unbalanced lines: RCA, 3.5 mm and 6.25 mm plugs

  • for high-power loudspeaker signals: unshielded (with rare exceptions) speaker cables of large cross-section; terminals or clamps, banana or pin connectors.


Fig. 8. Speaker cable connectors

The quality of connectors and cables plays a tangible role, especially in high-quality audio systems. The materials of the conductor and dielectric, the cross-section and the geometry of the cable all matter. The most expensive models of interconnect and speaker cables use ultra-pure copper or even solid silver, as well as Teflon insulation, which has a minimal level of dielectric absorption (absorption increases signal loss and is uneven across the frequency band). The cable market is very diverse; often models of identical quality differ from each other only in price, and many times over.

Any cable attenuates an analog signal, and the loss grows with increasing frequency and transmission distance. Losses are determined by the ohmic resistance of the conductor and of the contacts in the connectors, as well as by the distributed reactive components: inductance and capacitance. In effect, a cable is a low-pass filter (it cuts the highs).

Besides being transmitted over various distances, signals often have to be branched and switched. Switches (input selectors) are an integral part of many components of the audio path, both professional and consumer. There are specialized distribution amplifiers, which branch the signal and match it to the transmission line and other components in level and impedance (often also compensating for roll-off at high frequencies), and switchers, both ordinary (several inputs, one output) and matrix (several inputs and outputs).

ANALOG AUDIO PROCESSING

Any processing of an analog audio signal entails certain losses in quality (frequency, phase and nonlinear distortions arise), but it is necessary. The main types of processing are as follows:

  • amplification of the signal to the level required for transmission, recording or playback through a loudspeaker: we will hear nothing if a microphone drives a speaker directly: the signal must first be amplified in level and in power, with a means of adjusting the volume provided;


Fig. 9

  • filtering by frequency: infrasound (harmful to health at certain frequencies) and ultrasound are cut off from the useful audio range (20 Hz - 20 kHz). In many cases the range is deliberately narrowed (a voice telephone channel has a band from 300 Hz to 3400 Hz; the frequency band of meter-band radio stations is significantly limited). For speaker systems, which usually have 2-3 bands, band separation is also necessary; it is usually done by crossover filters at the level of the amplified (high-power) signal;


Fig. 10. Crossover pattern for a three-way speaker system


Fig. 11. Example of an equalizer

  • noise reduction: there are special dynamic noise-reduction circuits that analyze the signal and narrow the band in proportion to the level and frequency of the HF components ("denoisers", "dehissers"). Noise lying above the signal band is cut off, and the rest is more or less masked by the signal itself. Such circuits always lead to quite noticeable signal degradation, but in some cases their use is appropriate (for example, when working with recorded speech or in voice radio communication). In analog recording technology, noise suppressors based on compressors/expanders ("compander" systems, e.g. Dolby B, dbx) are also used; their operation is less noticeable to the ear;
  • dynamic range manipulation: so that playback of music programs on ordinary household systems, including car radios, remains sufficiently juicy and expressive, the dynamic range is compressed, making quiet sounds louder. Otherwise, apart from individual bursts of fortissimo (in classical music), you would be listening to silence from the speakers, especially in a noisy environment. Devices called compressors are used for this purpose. In some cases, on the contrary, the dynamic range must be expanded; then expanders are used. And to avoid exceeding the maximum level, which would lead to clipping (limiting of the signal from above, accompanied by very strong nonlinear distortions perceived as wheezing), limiters are used in studios. They usually provide a "soft" limit rather than simply cutting off the signal peaks;

Fig. 12. An example of a studio processor for dynamic sound processing

  • special effects for studios, electronic musical instruments, etc.: sound engineers and musicians have at their disposal a great deal of special equipment for giving the sound a desired color or obtaining a particular effect: various distortion units (the sound of an electric guitar becomes hoarse and grainy), wah-wah pedals (amplitude modulation producing a characteristic "quacking" effect), enhancers and exciters (devices that affect the coloration of the sound and can, in particular, lend it a "tube" shade), flangers, choruses, etc.;


Fig. 13. Examples of processors and consoles for electric guitars

  • mixing, echo/reverb: studio recordings are usually made in multi-channel form; mixers are then used to mix the recording down to the desired number of channels (most often 2 or 6). In the process the sound engineer can "push forward" a particular solo instrument recorded on a separate track and change the volume ratios of different tracks. Sometimes several copies of the signal at a lower level and with a certain time shift are superimposed on it, simulating natural reverberation (echo). Nowadays these and other effects are achieved mainly with signal processors operating on a digital signal.


Fig. 14. The modern mixing console

ANALOG AUDIO RECORDING

It is believed that mechanical sound recording was first implemented by Edison in 1877, when he invented the phonograph: a cylinder covered with a layer of soft tin foil, on which a needle transmitting air vibrations traced a groove (later wax was used instead of foil, and the method came to be called depth recording, since the groove was modulated in depth). In the same year, however, the Frenchman Charles Cros submitted to the Academy of Sciences an application describing his invention: sound was to be recorded on a flat glass disc covered with soot by a needle connected to a membrane, producing a transverse groove; the disc was then to be fixed and photocopied for duplication (the copying method itself was yet to be developed). In the end the transverse recording, which turned out to be far superior to the depth recording, gave rise to the gramophone record. Three companies appeared that mass-produced records (CBS in America, JVC in Japan, and Odeon in Germany, the company that gave the world the double-sided record) and devices for playing them. The name "gramophone" came from Deutsche Grammophon (Germany), and the portable "patephone" from Pathé (France). Then came portable wind-up players, players with an electric motor instead of a hand crank, and later electromagnetic pickups. Records became ever more refined: they held more material in playing time, and the frequency range, initially limited to 4 kHz, expanded. Fragile shellac was replaced by vinyl, and short-lived steel needles gave way to sapphire, then diamond. The stereo era began: two tracks cut into one groove at an angle of 45°. By the early 1980s, when the global transition to digital sound formats was taking shape, the vinyl record had reached the peak of its development.


Fig. 15. Gramophone, patephone, electric record player

Magnetic recording is more refined and has long been used in studios. The first apparatus for magnetic recording, the telegraphone, was created by Valdemar Poulsen (Denmark) in 1898; recording was made on steel wire (piano string). In the 1920s, tape recorders using magnetic tape appeared; their mass production began in the 1940s. Magnetic tapes first had a cellulose base, later a mylar one. Audio signals are recorded on longitudinal tracks by a recording (or universal) head with a magnetic gap. The tape is drawn close to the head gap, and a track of remanent magnetization forms on it. The nonlinear part of the magnetization characteristic is "washed out" by a high-frequency bias current (usually on the order of 100 kHz), on which the useful signal is superimposed. Studio analog tape recorders are still used alongside digital ones for the primary recording of phonograms. Household machines have two or three heads (separate recording, playback and erase heads, or an erase head plus a universal one). Sometimes there are two playback heads, if reverse playback is provided.

Magnetic tape has noise, which decreases (moving partly out of the audible range) as the tape speed increases. Therefore studio tape recorders run at 38 cm/s, while household reel-to-reel recorders run at 19 and 9.5 cm/s. For household cassette recorders a speed of 4.76 cm/s was adopted. Tape noise is effectively suppressed by the Dolby B compander system: during recording the level of the high-frequency part of weak signals is raised by 10 dB, and during playback it is lowered by the same amount.

Professional analog magnetic recording at high speed provides very high quality. For a long time music recordings were archived on magnetic master tapes, from which the phonogram was transferred to vinyl records with some loss of quality. However, even with very careful handling, magnetic tape begins to deteriorate over time: it is subject to gradual demagnetization, deformation, print-through (adjacent layers in the reel magnetize each other) and the influence of external magnetic fields. Quick search for a desired fragment is also difficult (though this inconvenience matters more in the domestic sphere). Therefore, with the advent of digital formats, Sony, the owner of the huge CBS/Columbia record archive, concerned with preserving the priceless original recordings of the second half of the 20th century, developed a recording method based on discrete pulse-width modulation (the DSD stream, Direct Stream Digital, which later gave rise to the consumer Super Audio CD format). While analog magnetic recording preserves a phonogram for several decades with gradually increasing losses, digital archives are eternal and withstand an unlimited number of copies without any degradation. For this, as well as for many other reasons (service advantages, versatility, huge processing capabilities), digital audio formats are becoming ever more widespread.

GETTING A DIGITAL AUDIO SIGNAL

A digital signal is obtained from an analog one or is synthesized directly in digital form (in electronic musical instruments). Analog-to-digital conversion involves two main operations: discretization (sampling) and quantization. Discretization is the replacement of a continuous signal by a series of samples of its instantaneous values taken at regular intervals. By the Kotelnikov-Shannon theorem, a discrete signal can later be completely restored provided that the sampling frequency is at least twice the upper frequency of the signal spectrum. The samples are then quantized by level: each is assigned the discrete value closest to its real one. Quantization accuracy is determined by the bit depth of the binary representation: the higher the bit depth, the more quantization levels there are (2^N, where N is the number of bits) and the lower the quantization noise, the error caused by rounding to the nearest discrete level.


Fig. 16. Digitization of an analog signal and obtaining digital samples
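How levels and quantization noise scale with bit depth, using the widely quoted rule of thumb SNR ≈ 6.02·N + 1.76 dB (an approximation for a full-scale sine, not a figure from this text):

```python
# Quantization levels (2**N) and approximate best-case SNR per bit depth.
for n in (8, 16, 20, 24):
    print(f"{n} bits: {2 ** n} levels, ~{6.02 * n + 1.76:.1f} dB")
```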

The CD format assumes a sampling frequency of 44.1 kHz and a resolution of 16 bits. That is, 44,100 samples are taken per second, each of which can take one of 2^16 = 65,536 levels (for each of the stereo channels).

In addition to the 44.1 kHz / 16-bit format, others are used in digital recording. Studio recording is usually performed with a bit depth of 20-24 bits; the data is then converted to the standard CD format. The extra bits are discarded or (better) rounded off, and sometimes pseudo-random noise (dither) is mixed in to reduce the audibility of quantization noise.
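A minimal sketch of such a bit-depth reduction with dither (the 24-to-16-bit conversion and the rectangular dither of about half an output LSB are my illustrative assumptions):

```python
# Re-quantizing 24-bit samples to 16 bits: add a little pseudo-random noise
# before dropping bits, so the rounding error is decorrelated from the signal.
import random

def requantize_24_to_16(sample24: int) -> int:
    dither = random.randint(-128, 127)   # ~0.5 LSB of the 16-bit result
    return (sample24 + dither) >> 8      # discard the 8 extra bits

print(requantize_24_to_16(1_000_000))    # ~3906, varying slightly run to run
```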

The most advanced consumer audio formats are DVD Audio and Super Audio CD (SACD). DVD Audio uses the lossless MLP data-compression algorithm developed by Meridian. SACD, unlike other formats, does not use pulse-code modulation (PCM) but single-bit coding of a DSD stream (discrete pulse-width modulation). SACD discs come single-layer and double-layer (hybrid), the latter with a conventional CD layer.

Today the CD remains the most popular audio medium, despite certain limitations in sound quality noted by audiophiles. The reason is its relatively low sampling frequency: accurate restoration of signals close to the upper boundary of the audio range requires a filter that is physically impossible (its impulse response extends into negative time). This is compensated to some extent by digital filtering with oversampling at increased sampling frequency and bit depth. To ensure uninterrupted real-time playback, data on the disc is written with redundant coding (Reed-Solomon code).

Digital media, sampling frequencies and bit depths:

Carrier    | Developer     | Dimensions              | Playing time, min | Channels | Fs, kHz           | Bits
CD-DA      | Sony, Philips | disc, 120 or 90 mm      | up to 90          | 2        | 44.1              | 16
S-DAT      |               | cassette, 3.81 mm tape  |                   | 2        | 32; 44.1; 48      | 16
R-DAT      |               | cassette, 3.81 mm tape  |                   | 2, 4     | 44.1              | 12, 16
DASH       |               | tape, 6.3 or 12.7 mm    |                   | 2...48   | 44.056; 44.1; 48  | 12, 16
DAT        | Alesis        | S-VHS cassette          | 60                | 8        | 44.1; 48          | 16, 20
DSS        | Philips       | cassette                |                   | 2, 4     | 32; 44.1; 48      | 16, 18
MiniDisc   | Sony          | disc, 64 mm             | 74                | 2, 4     | 44.1              | 16
DVD Audio  |               | disc, 120 mm            |                   | 5.1      | 192               | 24
SACD       | Sony, Philips | disc, 120 mm            |                   | 2, 5     | 2800              | 1

To transmit digital audio, you need a broadband line, especially for an uncompressed multi-channel high-resolution stream.

DIGITAL AUDIO TRANSFER

Cables, optical lines and radio channels can serve as communication lines for transmitting digital audio.

The AES/EBU (balanced) and S/PDIF (unbalanced, coaxial) interfaces were developed for transmitting PCM signals over wire lines; they carry several signals (clock, digital word synchronization, channel data) over a single cable. Inside devices these signals are handled separately: they are encoded together at the output of the transport mechanism and separated again by the digital receiver at the input of the digital-to-analog converter (in two-unit systems).

Typically, high-quality coaxial cable is used to transmit digital audio. There are also S/PDIF variants for fiber-optic lines: AT&T ST and Toslink (the latter is standard in household equipment), as well as adaptations for twisted pairs within Ethernet cable networks. The Internet also serves as a distribution medium for compressed audio in the form of archived files.


Fig. 17. Optical cable with Toslink connector

Like any digital signal, digitized audio is distributed and switched using special devices - distribution amplifiers, conventional and matrix switches.

There is a factor that negatively affects digital signals and often negates almost all the advantages of digital audio over analog (including the ability to repeatedly copy, transfer and archive programs without loss of quality): jitter. Jitter is phase trembling, an uncertainty in the moment of the transition from 0 to 1 and back. It arises from the gradual deformation of rectangular pulses: edges that start out almost perfect become more and more gentle because of the reactive components of the cables, which makes the transition moment uncertain, even though the steepness of the edges is fully restored in each subsequent digital device. Modern digital devices successfully combat jitter with the help of reclocking blocks. For more information, see the brochure "Switching and Signal Management".


Fig. 18. Distribution and switching

Compressed audio formats, Dolby Digital (AC-3) and DTS, are used for transmission and for recording onto various digital media. This makes it possible to fit a full-length film with multi-channel sound, plus various additional materials, on a 4.7 GB DVD Video disc. The Dolby Digital format offers five independent channels (left, center and right front plus two rear) and a subwoofer channel for special effects. Compression is performed by an adaptive algorithm that exploits the psychoacoustic features of sound perception and keeps the audibility of compression to a minimum. All this allows a full three-dimensional sound panorama to be recreated. However, Dolby Digital, having lower resolution, is much less suitable for high-quality music playback than CD. The stream rate in Dolby Digital mode (the samples of the channels are transmitted one after another) is 384-640 kbit/s, whereas the usual two-channel CD format runs at 1411.2 kbit/s. The Dolby Digital 5.1 format has been improved repeatedly, mainly by increasing the number of channels. A DD 7.1 variant is now available, with two side and two rear channels in addition to the front ones, not counting the special-effects channel (a DD 6.1 version with a single rear channel is also known).
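The CD figure used here for comparison is simply channels × bit depth × sampling frequency:

```python
print(2 * 16 * 44_100 / 1000)   # 1411.2 kbit/s for two-channel CD audio
```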

The DTS format has a lower compression ratio and a higher data rate: 1536 kbit/s. It is therefore used not only for encoding multi-channel soundtracks on DVD Video but also for multi-channel audio discs. Besides the traditional DTS 5.1, the format is known in the DTS ES Discrete 6.1 modification, as well as in several matrix versions in which, as in Dolby Pro Logic II, additional channels are matrixed, i.e. synthesized on the basis of information contained in the main ones.

In the computer and multimedia field (at the user level), data compactness is required, so compressed audio formats are widely used here, for example MP3, Windows Media Audio and Ogg Vorbis. Compression makes it possible to download music files quickly from the Internet and to organize streaming audio services (WMA, Real Audio, Winamp).

DIGITAL AUDIO PROCESSING

Processing is performed by powerful DSP (signal) processors, such as SHARC from Analog Devices. Thanks to their high speed, many operations can be implemented in real time: changing the bit depth and sampling frequency with interpolation, adjusting the tonal balance (equalization), noise reduction, compression, expansion or limiting of the dynamic range, special effects (echo, various acoustic signatures such as "stadium" or "concert hall", etc.), mixing of several tracks. Signal processors typically operate at a high internal bit depth (for example, 32-bit floating point), which reduces the accumulation of errors in the complex mathematical computations based on the fast Fourier transform, the calculation of sets of coefficients and subsequent multiplication.

Signal processors become cheaper as they spread; today they can be found in any receiver or surround processor, where they perform a wide variety of functions, including decoding surround formats, equalization and bass management, channel calibration in amplitude and phase, etc.

But, as usual, software signal-processing technologies develop even faster than hardware. Everything a DSP processor can do is also available through special computer applications, and in this case the user gains a wider field of activity and the flexibility of a program that is periodically updated and extended (the software of specialized devices can nowadays usually be updated too, say via a USB port from a computer or even directly from the Internet from the equipment manufacturer's website, but such an update is of course possible only within one hardware generation, after which the module or the entire device has to be replaced). Computer programs for deep processing of digital sound serve both user and professional purposes (for example, Adobe Audition). The bulk of studio processing is done on a computer. This is very convenient and effective and, most importantly, it removes the tie to real time, making operations of any complexity available without special demands on speed. For example, you can manually clean a phonogram (say, one taken from a treasured vinyl record) of clicks, or subject it to "intelligent" processing to remove noise whose spectral composition is determined in advance from pauses and quiet passages.

Finally, compression to reduce the data rate, or conversion to another sampling frequency with a possible change of bit depth, is also performed both in hardware and in software on a computer.

There are several standard computer audio formats, both without compression and with it.

The most common uncompressed format is Microsoft RIFF/WAVE (extension ".wav"). Data is encoded with 8 or 16 bits. In the second case (acceptable for high-quality audio) and at a sampling frequency of 44.1 kHz, one minute of music occupies 5.3 MB of disk space. In addition to the data itself, a .wav file contains a header describing the general parameters of the file, and one or more chunks with additional information about playback modes and order, markers, and the names and coordinates of various signal sections.
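A minimal sketch of producing such a file with Python's standard wave module (the 440 Hz test tone and the file name are arbitrary choices):

```python
# Write one second of 44.1 kHz / 16-bit mono RIFF/WAVE; the module itself
# emits the header the text describes.
import math, struct, wave

with wave.open('tone.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16 bits = 2 bytes per sample
    w.setframerate(44100)
    w.writeframes(b''.join(
        struct.pack('<h', int(32000 * math.sin(2 * math.pi * 440 * n / 44100)))
        for n in range(44100)))
```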

Unlike RIFF/WAVE, RAW files contain the data as-is, without supporting information. Such information is present in Apple's AIFF files, the standard for the Macintosh platform and similar to WAV.

Digital audio compression is based on the psychoacoustic features of hearing and uses the effect of louder sounds masking quieter ones: the quiet sounds are simply discarded, and the "threshold of relevance" for masked sounds is determined by their distance in frequency from the masking ones, as well as by other parameters.

Of the lossy-compression formats, the most popular is MP3 (MPEG 1/2/2.5 Layer 3). It admits many different compression methods; the standard fixes only the way the compressed data is encoded. Either a constant bitrate can be used, chosen from the required file size or quality level, or a variable one, where the bitrate changes across different parts of the music while keeping the quality level constant. In general, MP3 sounds quite satisfactory at medium and high bitrates, but at low bitrates it is inferior to other formats. An exception is the newer MP3 Pro, which is oriented specifically at low bitrates and is therefore in great demand on the Internet.

WMA, or Windows Media Audio, competes successfully with MP3 at low bitrates (for example, music at 64 kbit/s in WMA subjectively sounds no worse than MP3 at 128 kbit/s). In addition, this format provides protective coding against unauthorized copying.

Ogg Vorbis is generally similar to WMA and MP3, but differs in its mathematical processing apparatus and is oriented toward a sampling frequency of 48 kHz. In addition, it can support not just 2 but up to 255 audio channels. At bitrates up to 512 kbit/s its compression is 20-50% more efficient than MP3, and the music subjectively sounds better. It is a serious competitor to MP3 and WMA, albeit in an unequal struggle with giant companies.

AAC (Advanced Audio Coding) was developed on the basis of MP3 (by the same Fraunhofer Institute), but has advanced features: it supports sampling frequencies up to 96 kHz and up to 48 channels. The higher sound quality is "paid for" by a comparatively slower encoding procedure and higher hardware speed requirements during playback. One of the later AAC-based versions, Liquid Audio, which allows embedding in the data stream not only "watermarks" (as AAC does) but also other information (about artists, copyright, etc.), was at one point a serious contender to succeed MP3.

The Japanese VQF (SoundVQ) format is very similar to AAC; it is likely to disappear from view soon, even though it is supported by Yamaha.

Digital sound can be recorded on various media, mostly optical discs, although, by the logic of things, sooner or later only flash memory, which requires no drives with motors, may remain on the scene.

CDs, like other similar media (DVD, SACD), are replicated by stamping polycarbonate blanks from metal matrices that carry the pits (depressions). In addition, with an ordinary computer and a writable CD (DVD) drive, music files of various formats can be recorded onto CD-R, CD-RW, etc. Files are also stored on the hard drive of a computer or of a dedicated audio server, on which an extensive music library can be built, the degree of file compression (from zero upward) being chosen by the user.

Magnetic digital recording today remains mainly in the professional sphere and is ever more confidently leaving the household one. An optical disc is more attractive to the consumer than a cassette, even a compact one. Moreover, mass demand for digital cassettes was not helped by the complicated relations with holders of rights to musical content (as, incidentally, with DVD Audio and SACD). DAT tape recorders record high-quality digital audio without compression. There are several types of digital tape recorders: with stationary heads (S-DAT) and with rotating heads and helical-scan recording (R-DAT); reel-to-reel DASH machines; and DAT recorders using S-VHS tapes. The DCC format (recording with PASC compression) is now recognized as having no future. MiniDisc magneto-optical discs use ATRAC compression.

AUDIO PLAYBACK

At the end of any audio path there are analog electroacoustic transducers: loudspeakers or headphones. Digital emitters are still at the stage of early ideas. Power amplifiers are also mostly analog, although digital ones (more precisely, switching ones working on the principle of pulse-width modulation) are gradually making their way. This class of amplifiers, class D, provides unprecedented efficiency compared with analog designs (about 90%), small size and weight, and an absence of heat. For class D amplifiers to take a firm lead, however, many important problems must be solved, first of all the filtering of the high-frequency components of the modulated signal, whose level at the output is very high. In addition, class D amplifiers with a true digital input are practically nonexistent: an analog signal is fed to a built-in ADC. This is perhaps the main factor holding the approach back, since the main value of the idea lies not in high efficiency but in the possibility of a fully digital audio path without unnecessary conversions and analog transmission lines; after all, a digital output on DVD players is no rarity. Recently, new developments have begun to appear in this area. Tripath has released a special processor that controls the parameters of pulse amplification based on analysis of the input signal, which (in digital form) is briefly delayed in a buffer. In particular, depending on the current spectrum of the signal, the clock frequency optimal for subsequent filtering is selected. Such amplifiers (called "intelligent") gave rise to a new category: class T amplifiers. For more details, see the brochure "Signal Amplification".

Traditional stereo and mono amplifiers are increasingly being replaced by multichannel ones, most often built into AV receivers, which also contain everything necessary for deep processing of multichannel signals, decoding, and conversion from one format to another. Multichannel sound is becoming ever more popular, and not only as an accompaniment to movies but also in its own right.

Probably everyone who has listened at least once to an SDR receiver or transceiver could not remain indifferent to its reception, and especially to its convenience, which shows in the fact that stations on the band can not only be heard but also seen. The band overview on the panorama of an SDR transceiver lets you find stations in the reception band quickly and visually, which greatly speeds up the search for correspondents during contests and in everyday work on the air. The "waterfall" visually tracks the history of signals on the band and makes it easy to switch to an interesting correspondent. In addition, the panorama itself shows the frequency response of the received stations, their band and the width of their emission, which makes it possible to quickly find a free section of the band for calling other ham radio operators.
  This covers only the visual side of SDR; signal processing, both on receive and on transmit, should not be forgotten either: full control over filter bandwidth and everything else within the reception band. With the right choice of parameters in the settings menu, the transmitted signal also sounds excellent.
  But there is one catch: to make an SDR work, additional equipment is needed: a computer with a high-quality sound card (on which the main signal processing takes place) and a good monitor with high screen resolution. Naturally, you also need the appropriate software for the computer and for the SDR transceiver, which is not cheap. All this places quite specific demands on the radio amateur's computer skills, which, unfortunately, are not always there.
  There is another drawback. While it is not noticeable on receive, on transmit the specific processing of the audio signal in the computer causes a significant delay of more than 150 ms, which completely rules out normal self-monitoring in all modes of emission. The only rescue is an additional monitoring receiver, or a friend who also has an SDR transceiver and can record your signal off the air.
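
It is easy to estimate where those 150 ms come from: a sound-card path buffers the signal several times over (capture, processing, playback), and each buffer costs tens of milliseconds. The numbers below are a rough illustrative budget, not measurements of any particular program:

```python
# Rough latency budget for sound-card SDR processing (illustrative figures)
fs = 48_000    # sound-card sample rate, Hz
buf = 2048     # samples per buffer, a common driver setting
stages = 4     # capture, filtering/FFT, modulation, playback queues

one_buffer_ms = 1000 * buf / fs      # about 42.7 ms per buffer
total_ms = stages * one_buffer_ms    # about 171 ms through the whole chain
print(f"{one_buffer_ms:.1f} ms per buffer, ~{total_ms:.0f} ms total")
```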
  Currently, with the advent of a generation of affordable microprocessors from STM, it has become possible to develop devices that can take over some of the basic functions of a large computer: DSP processing of the audio, transceiver control, and graphical display of information on the transceiver's own screen.
  As a result, the main nodes of such a transceiver allow you to dispense with the external computer altogether, while preserving the convenient control services an external computer provides: various modes of recording signals, both received and transmitted, with subsequent playback through headphones or on the air during transmission; saving of the necessary information to an external SD card; display on the transceiver's own large screen with a wide viewing angle; and DSP processing and signal generation for all the main modes of emission. Such transceivers provide high-quality signal reception, high-slope filters with smoothly adjustable boundaries, and an automatic notch filter. On transmit they use multi-band graphic equalizers, compressors and reverbs, and, most importantly, achieve a minimal delay time. With an external synthesizer, such transceiver controllers work easily with analog SDRs. In these modern transceivers, the HiQSDR and HiQSDR-mini 2.0 radio paths are widely used, controlled either via a separate SPI bus or through the DSP board via the main SPI bus with a minimum of connecting wires.
  A few years ago, production began of SDR transceivers working on the principle of direct conversion of the radio-frequency signal to an audio IF, in which a simplified (compared with the classical scheme) radio-channel board and a specialized computer are housed in one case. The main emphasis here is on software, and the main cost of the finished product is likewise determined by the cost of the software. Flex and SunSDR equipment is built on this principle.
  The principle of signal processing based on DSP methods has now moved to the next stage of its evolution. A new method has appeared: direct digitization of the signal from the antenna, followed by direct generation of the transmitted signal from digital data, which eliminates almost all the signal-processing problems inherent both in classical designs and in the earlier SDR technologies.
  Receivers and transceivers with direct digitization of the signal carry the abbreviation DDC (Digital Down-Converter). The inverse conversion, from digital back to analog, is abbreviated DUC (Digital Up-Converter). Both are digital signal conversions performed in software. Note right away that the abbreviation SDR (Software Defined Radio) is only a general name for this class of signal-processing technologies, of which the DDC architecture is one particular method.

The DDC architecture instantly digitizes the entire spectrum of signals from 0 Hz up to the frequencies that the ADC chip can handle. The most advanced ADC chips today can work in a band up to 1 GHz, but their cost is still very high. At the same time, the most popular and relatively cheap ADC chips digitize the spectrum from 0 Hz to 60...100 MHz, which is quite sufficient for amateur radio tasks. After digitizing the spectrum of signals in the 0 Hz to 30...60 MHz band, the output of the ADC chip produces a very large digital data stream, which is then processed by high-speed FPGA chips. These implement the DDC/DUC algorithm in firmware, i.e., a digital down- or up-converter.
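
A back-of-the-envelope estimate shows why an FPGA has to sit between the ADC and the computer (the sample rate and bit depth here are assumed example figures, not a specific chip's datasheet):

```python
# Order-of-magnitude size of the raw ADC stream
sample_rate = 122.88e6   # samples/s, enough for a ~61 MHz digitized band
bits_per_sample = 16
gbit_per_s = sample_rate * bits_per_sample / 1e9
print(f"{gbit_per_s:.2f} Gbit/s")   # ~1.97 Gbit/s of raw data: far too much
                                    # to ship to a PC, hence the FPGA in front
```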
  A digital down-converter selects the spectrum of the required band and transfers it to a computer for processing; that is, it creates a digital stream of much smaller bandwidth and lower rate. Software processing of this stream by DSP methods and the final demodulation of the signal take place in the computer.
  In practice, it is very rarely necessary to work with the entire spectrum of signals in the 0 Hz to 30...60 MHz band. The maximum bands we need for processing are 10...50 kHz for demodulating AM and FM signals and 3...5 kHz for SSB signals.
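
The down-conversion itself is conceptually simple: multiply by a numerically controlled oscillator to shift the wanted station to 0 Hz, low-pass filter, and throw away the now-redundant samples. A minimal Python sketch follows; the function name, the 61.44 MS/s rate, the 1280x decimation and the single short filter are my own illustrative assumptions (a real FPGA DDC decimates in several cascaded stages):

```python
import numpy as np
from scipy.signal import firwin, lfilter

def ddc(samples, fs, f_center, decim, taps=129):
    """Minimal digital down-converter: NCO mix, low-pass, decimate.

    samples: real ADC samples at rate fs; f_center: station frequency, Hz;
    decim: decimation factor, so the output rate is fs / decim.
    """
    n = np.arange(len(samples))
    # NCO: a complex exponential shifts f_center down to 0 Hz
    baseband = samples * np.exp(-2j * np.pi * f_center / fs * n)
    # Anti-alias low-pass sized for the narrow output band
    lp = firwin(taps, 0.8 * (fs / decim) / 2, fs=fs)
    return lfilter(lp, 1.0, baseband)[::decim]
```

With fs=61.44e6, f_center=7.1e6 and decim=1280, this would deliver a 48 kS/s complex stream: ample for the 3...5 kHz an SSB demodulator needs.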
  This most advanced method of signal processing has been implemented in the amateur radio transceivers TULIP-DSP and its domestic analogue, Tulip-DDS/DUC.

A similar principle of signal formation is applied in the transceivers of one well-known company, which began releasing new models back in 2015. A fragment of the block diagram of such a transceiver is presented below.

Whereas a few years ago even such advanced transceivers as the ICOM IC-756Pro3 and IC-7600 used sequential scanning of the spectrum, and the refreshing of the picture was noticeable even at a fast scan rate, now the signal is observed and processed as a whole, in parallel, since frequency tuning is performed instantly in software. Because a large frequency section of 30...60 MHz is digitized at once, it becomes possible to see what is happening in a neighboring part of the spectrum without losing the tuning to the current station. Moreover, by calling up a second virtual receiver you can simultaneously hear what is being said on one band and on another. And two receivers are not the limit: you can call up three, five, ten... any number of receivers. By mixing their audio in a suitable way, you stay aware of current events on the bands, and the "cloud" graphics let you quickly pick out the desired station.
  The same applies to spectrum display. In practice you seldom need the whole 30...60 MHz section at once. If necessary, it is relatively easy to extract from the overall digital stream a second, a third, a fourth, in general as many small streams as needed, and transfer them to the computer, thereby creating several reception channels at once. This method implements two, three, or however many "virtual receivers" are needed anywhere in the entire digitization band. For example, we create a separate panorama for the 40-meter band, another for the 20-meter band, and so on for the remaining bands, place them on a separate monitor, and now we can observe propagation conditions in our selected areas in real time.
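
Since the down-converter sketched above is pure arithmetic on one and the same sample stream, spawning "virtual receivers" amounts to calling it once per band. The band frequencies below are arbitrary examples, and `samples` is assumed to hold the raw ADC stream:

```python
# Several "virtual receivers" from one digitized stream, reusing ddc() above
bands = {"40 m": 7.1e6, "20 m": 14.2e6, "10 m": 28.5e6}
receivers = {name: ddc(samples, fs=61.44e6, f_center=f, decim=1280)
             for name, f in bands.items()}
# Each entry is an independent 48 kS/s channel, ready for its own demodulator
# and its own panorama on the monitor
```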

On the one hand, the presence of image (mirror) bands is a drawback. Since digitization takes in the entire spectrum at once, the ADC input can be significantly offloaded by paying attention to the receiver's input circuits, which are best made high-quality and tunable. As an alternative, use low-pass filters in the input circuits with a cutoff at half the clock frequency, or band-pass filters; these further attenuate strong out-of-band signals lying far enough from the working band. The price is losing the ability to view the entire digitization range at once. Such methods of preliminary selection are justified if the DDC receiver is to be used with large antennas or in an area with a difficult interference environment.
On the other hand, this same drawback provides a technological opportunity to implement, by simple means, reception not only on HF but also on VHF and even higher bands. It is only necessary to provide interchangeable band-pass filters with an LNA, with bandwidths equal to half the clock frequency.
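
The effect is ordinary Nyquist-zone folding, easy to predict with one line of arithmetic (the ADC clock below is an assumed example):

```python
# Where an out-of-band signal lands after sampling (Nyquist-zone folding)
def alias(f_in, fs):
    f = f_in % fs
    return f if f <= fs / 2 else fs - f

fs = 61.44e6                        # assumed ADC clock
print(alias(144.2e6, fs) / 1e6)     # a 2 m band signal folds to ~21.32 MHz;
                                    # with a 2 m band-pass filter and LNA in
                                    # front, the "drawback" is a VHF receiver
```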
  For example, some DDC receivers include a switchable filter that cuts off the MW/LW broadcast range, while one of the WiNRADiO DDC receivers and the Perseus DDC receivers have flexibly configurable narrow-band filters.
  Some twenty years ago we could not even dream of anything like this, when a panoramic adapter for a transceiver was twice the size of the transceiver itself, cost five to ten times more, and offered convenience and quality not even worth mentioning. The SDR technology that appeared in the early 2000s made it possible to see and hear the air in a completely different way. We saw a genuinely live stream: not a static "frozen" picture after a slow scan, but a live broadcast in real time.
  Whereas in the first SDR transceivers with hardware signal conversion a separate receive path was required for each band just to see a truncated panorama of the other bands, a receive path built with modern DDC technology makes available any part of the range, or the entire range at once, in parallel with its individual sections. All these possibilities can be realized only thanks to DSP methods and direct signal digitization.
  As far as amateur radio is concerned, among the most sought-after functions now and in the near future are spatial signal selection and noise suppression by phase methods. A phase method of signal selection and noise reduction already exists in hardware form. Beyond that, mathematical algorithms make it easy to implement any function for subtracting interfering signals and adding useful ones, formed by a pair, a quartet, or a larger number of ADCs.
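
In software, such a phaser reduces to one complex multiply-and-subtract per sample pair. A minimal sketch under assumed names, with the phase found either from the known antenna geometry or by adaptive search:

```python
import numpy as np

def steer_null(ch_a, ch_b, phase, gain=1.0):
    """Combine two ADC channels (two antennas) so that a signal arriving with
    the given inter-channel phase difference cancels: a software version of
    the hardware phase method. phase is in radians; all names are assumptions."""
    return ch_a - gain * ch_b * np.exp(1j * phase)

# Sweeping `phase` (and trimming `gain`) until the interference power reaches
# a minimum nulls the unwanted station, while the wanted one, arriving from a
# different direction, survives.
```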
Using these modern developments, it has become possible to control a transceiver remotely and work on the air from a distance. Modern methods of information transfer can carry fairly large data streams with virtually no loss, and the total flow of information to and from the transceiver is quite small. Using the IP stack, the transceiver can serve as a network node even without a computer. By installing the transceiver outside a big city in a reasonably quiet location, you can have access to the radio without leaving your apartment; by organizing guest access, you give your friends the opportunity to work on the air as well. Another useful feature, exploited by the special services, is the ability to record the entire radio spectrum, or specified pieces of it, on a computer's hard drive for deferred processing. This function makes it possible to quickly perform statistical processing of signals, to search for and monitor target signals, and to carry out many operations that an ordinary user is not supposed to know about.
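
A quick estimate confirms how small the remote-operation stream really is (all figures are assumed, generous round numbers):

```python
# Why remote operation over an ordinary internet link is easy
audio = 48_000 * 16        # 48 kS/s 16-bit mono audio = 768 kbit/s raw
spectrum = 25 * 4096 * 8   # 25 panorama frames/s of 4096 one-byte bins
control = 10_000           # CAT commands and telemetry, a generous allowance
total_mbit = (audio + spectrum + control) / 1e6
print(f"~{total_mbit:.1f} Mbit/s")   # ~1.6 Mbit/s, trivial for a home link
```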
