Lecture notes on TAU. Historical background. Theory of Automatic Control: a course of lectures

Automatic control theory (TAU) is a scientific discipline that studies the processes of automatic control of objects of various physical natures. Using mathematical tools, it identifies the properties of automatic control systems and develops recommendations for their design.

History

For the first time, information about automata appeared at the beginning of our era in the works of Heron of Alexandria “Pneumatics” and “Mechanics”, which described automata created by Heron himself and his teacher Ctesibius: a pneumatic automatic machine for opening temple doors, a water organ, an automatic machine for selling holy water, etc. Heron's ideas were significantly ahead of their time and were not used in his era.

Stability of linear systems

Stability – the property of an automatic control system to return to a given steady state (or one close to it) after any disturbance.

A stable ACS is a system in which transient processes are damped.

Operator form of writing a linearized equation.

y(t) = y_st(t) + y_tr(t) = y_forced(t) + y_free(t),

where y_st (y_forced) is a particular solution of the linearized equation,

and y_tr (y_free) is the general solution of the linearized equation as a homogeneous differential equation, that is, of the equation with zero right-hand side.

The ACS is stable if the transient component y_tr(t) caused by any disturbance decays over time, that is, y_tr(t) → 0 as t → ∞.

Solving the characteristic equation in the general case, we obtain pairs of complex roots p_i, p_{i+1} = α_i ± jβ_i (the real part α_i may be of either sign).

Each pair of complex conjugate roots corresponds to a component of the transient process of the form y_i(t) = C_i·e^(α_i·t)·sin(β_i·t + φ_i).

From this it is clear that the transient components decay only when the real parts α_i of all roots are negative; otherwise the transient process grows without bound.
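As a quick numerical illustration (not part of the original notes), the roots of a characteristic polynomial and the signs of their real parts can be checked with a few lines of Python; the polynomial below is chosen arbitrarily for the sketch.

```python
import numpy as np

# Illustrative characteristic polynomial p^3 + 3p^2 + 4p + 2 (coefficients chosen arbitrarily)
coeffs = [1.0, 3.0, 4.0, 2.0]

roots = np.roots(coeffs)                      # p_i = alpha_i + j*beta_i
print(roots)                                  # -1, -1 + j, -1 - j

# The transient process decays only if every real part alpha_i is negative.
print("stable:", all(r.real < 0 for r in roots))
```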

Stability criteria

Routh criterion

To determine the stability of the system, a table of the following form is constructed from the coefficients of the characteristic equation:

Row | Column 1 | Column 2 | Column 3
1   |          |          |
2   |          |          |
3   |          |          |
4   |          |          |

For the system to be stable, all elements of the first column must be positive; if the first column contains negative elements, the system is unstable; if at least one element is zero and the others are positive, the system is on the stability boundary.
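A minimal sketch of the Routh table construction in Python is given below; the helper name routh_table and the example polynomial are illustrative, and the sketch assumes that no zero appears in the first column.

```python
import numpy as np

def routh_table(coeffs):
    """Build the Routh table for a polynomial given by its coefficients,
    listed from the highest degree down (a_n, a_{n-1}, ..., a_0).
    The sketch assumes that no zero appears in the first column."""
    n = len(coeffs) - 1
    cols = n // 2 + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # row of a_n, a_{n-2}, ...
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # row of a_{n-1}, a_{n-3}, ...
    for i in range(2, n + 1):
        for j in range(cols - 1):
            # cross-product of the two previous rows, divided by the
            # first element of the preceding row
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table

# Illustrative characteristic polynomial: s^3 + 2s^2 + 2.25s + 1.25
t = routh_table([1.0, 2.0, 2.25, 1.25])
print(t)
print("stable:", all(t[:, 0] > 0))   # all first-column elements positive
```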

Hurwitz criterion

Hurwitz determinant

Theorem: For the stability of a closed-loop automatic control system, it is necessary and sufficient that the Hurwitz determinant and all its main diagonal minors be positive (the leading coefficient of the characteristic polynomial being positive).

Mikhailov criterion

Let us make the substitution s = jω, where ω is the angular frequency of oscillations corresponding to a purely imaginary root of the characteristic polynomial.

Criterion: for a linear system of nth order to be stable, it is necessary and sufficient that the Mikhailov curve, constructed in the coordinates (Re D(jω), Im D(jω)), passes sequentially through n quadrants counterclockwise as ω varies from 0 to ∞.

Let us consider the relationship between the Mikhailov curve and the signs of the roots of the characteristic equation (taking α > 0 and β > 0 in the cases below).

1) The root of the characteristic equation is a negative real number, p_i = −α.

The factor corresponding to this root is (p + α).

2) The root of the characteristic equation is a positive real number, p_i = +α.

The factor corresponding to this root is (p − α).

3) The roots of the characteristic equation are a complex-conjugate pair with a negative real part, p_i, p_{i+1} = −α ± jβ.

The factor corresponding to this pair is (p + α − jβ)(p + α + jβ).

4) The roots of the characteristic equation are a complex-conjugate pair with a positive real part, p_i, p_{i+1} = +α ± jβ.

The factor corresponding to this pair is (p − α − jβ)(p − α + jβ).

Nyquist criterion

The Nyquist criterion is a graphical-analytical criterion. Its characteristic feature is that the conclusion about the stability or instability of a closed-loop system is made depending on the type of amplitude-phase or logarithmic frequency characteristics of the open-loop system.

Let the transfer function of the open-loop system be given as a ratio of polynomials;

then we make the substitution s = jω and obtain the frequency response W(jω):

For a more convenient construction of the hodograph for n>2, we reduce equation (*) to the “standard” form:

With this representation, the modulus A(ω) = |W(jω)| is equal to the ratio of the moduli of the numerator and denominator, and the argument (phase) ψ(ω) is the difference of their arguments. In turn, the modulus of a product of complex numbers equals the product of the moduli, and the argument equals the sum of the arguments.

Modules and arguments corresponding to the factors of the transfer function:

Factor | Modulus | Argument
k      | k       | 0
p      | ω       | π/2
(the remaining rows of the table are not reproduced here)

After that, we construct the hodograph of the auxiliary function W₁(jω) = 1 + W(jω), replacing s by jω.

The limiting positions of the hodograph are considered at ω = 0 and at ω → ∞ (taking into account that the degree n of the denominator polynomial is not lower than the degree of the numerator).

To determine the resulting angle of rotation, we find the difference between the arguments of the numerator and denominator

The polynomial in the numerator of the auxiliary function has the same degree as the polynomial in its denominator; hence the resulting angle of rotation of the auxiliary function is 0. This means that for the stability of the closed-loop system the hodograph of the auxiliary function vector must not enclose the origin, and the hodograph of the function W(jω), accordingly, must not enclose the point with coordinates (−1, j0).
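Below is a small Python sketch of how such a hodograph can be plotted numerically; the open-loop transfer function W(s) = 5/(s + 1)³ is an arbitrary example, not one taken from the text.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative open-loop transfer function W(s) = B(s)/A(s) = 5 / (s + 1)^3
B = np.poly1d([5.0])
A = np.poly1d([1.0, 3.0, 3.0, 1.0])

w = np.logspace(-2, 2, 2000)          # frequency grid, rad/s
W = B(1j * w) / A(1j * w)             # substitute s = j*omega

plt.plot(W.real, W.imag, label="W(j*omega), omega > 0")
plt.plot(W.real, -W.imag, "--", label="mirror image for omega < 0")
plt.plot(-1, 0, "rx", label="critical point (-1, j0)")
plt.xlabel("Re W(j*omega)")
plt.ylabel("Im W(j*omega)")
plt.legend()
plt.grid(True)
plt.title("Nyquist hodograph of the open-loop system (sketch)")
plt.show()
# Here the hodograph does not enclose the point (-1, j0), so the closed-loop
# system built around this W(s) would be stable.
```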

Part 1. Theory of Automatic Control (TAC)

Lecture 1. Basic terms and definitions of TAU. (2 hours)

Basic concepts.

Control systems for modern chemical technological processes are characterized by a large number of technological parameters, the number of which can reach several thousand. To maintain the required operating mode, and ultimately the quality of the products, all these quantities must be maintained constant or changed according to a certain law.

Physical quantities that determine the progress of a technological process are called process parameters . For example, process parameters can be: temperature, pressure, flow, voltage, etc.

A technological process parameter that must be maintained constant or changed according to a certain law is called controlled variable or adjustable parameter .

The value of the controlled quantity at the considered moment in time is called instantaneous value .

The value of the controlled quantity obtained at the considered moment in time based on the data of some measuring device is called its measured value .

Example 1. Scheme of manual temperature control of the drying cabinet.


It is necessary to manually maintain the temperature in the drying cabinet at the T set level.

The human operator, depending on the readings of the mercury thermometer RT, turns the heating element H on or off using the switch P.

Based on this example, you can enter definitions:

Control object (object of regulation, OU) – a device whose required operating mode must be supported externally by specially organized control actions.



Control – the formation of control actions that ensure the required operating mode of the control object (OU).

Regulation – a particular type of control in which the task is to ensure the constancy of some output variable of the control object (OU).

Automatic control – control carried out without direct human participation.

Input action (X) – an action applied to the input of a system or device.

Output action (Y) – an action produced at the output of a system or device.

External influence – the impact of the external environment on the system.

The block diagram of the control system for example 1 is shown in Fig. 1.2.


Fig. 1.3

Example 3. Temperature ASR circuit with measuring bridge.

When the temperature of the object is equal to the set one, the measuring bridge M (see Fig. 1.4) is balanced, no signal arrives at the input of the electronic amplifier (EA), and the system is in equilibrium. When the temperature deviates, the resistance of the thermistor R_T changes and the balance of the bridge is disturbed. A voltage appears at the input of the EA, whose phase depends on the sign of the temperature deviation from the set value. The voltage amplified in the EA is supplied to motor D, which moves the slider of the autotransformer AT in the appropriate direction. When the temperature reaches the set value, the bridge is balanced again and the motor switches off.


Definitions:

Setting influence (the same as the input action X) – the influence on the system that determines the required law of change of the controlled variable.

Control action (u) - the impact of the control device on the controlled object.

Control device (CD) - a device that influences the control object in order to ensure the required operating mode.

Disturbing influence (f) - an impact that tends to disrupt the required functional relationship between the reference impact and the controlled variable.

Control error (e = x − y) – the difference between the prescribed (x) and actual (y) values of the controlled variable.

Regulator (P) - a set of devices connected to a regulated object and providing automatic maintenance of the set value of its controlled variable or its automatic change according to a certain law.

Automatic control system (ASR) - an automatic system with a closed circuit of influence, in which control (u) is generated as a result of comparing the true value of y with a given value of x.

An additional connection in the block diagram of an automatic control system, directed from the output to the input of the considered section of the chain of influences, is called feedback (FB). Feedback can be negative or positive.

Classification of ASR.

1. By purpose (by the nature of the change in the task):

· stabilizing ASR - a system whose operating algorithm contains an instruction to maintain the controlled variable at a constant value (x = const);

· software ASR - a system whose operating algorithm contains an instruction to change the controlled variable in accordance with a predetermined function (x is changed by software);

· tracking ASR - a system whose operating algorithm contains an instruction to change the controlled variable depending on a previously unknown quantity at the ASR input (x = var).

2. By the number of circuits:

· single-circuit - containing one circuit,

· multi-circuit - containing several circuits.

3. According to the number of controlled quantities:

· one-dimensional - systems with 1 controlled variable,

· multidimensional - systems with several adjustable quantities.

Multidimensional ASRs, in turn, are divided into systems:

a) unrelated regulation, in which regulators are not directly related and can only interact through a common control object;

b) linked regulation, in which regulators of various parameters of the same technological process are interconnected outside the object of regulation.

4. By functional purpose:

ASR of temperature, pressure, flow, level, voltage, etc.

5. By the nature of the signals used for control:

· continuous,

· discrete (relay, pulse, digital).

6. By the nature of mathematical relationships:

· linear, for which the principle of superposition is valid;

· nonlinear.

Superposition principle (overlay): If several input influences are applied to the input of an object, then the object’s reaction to the sum of the input influences is equal to the sum of the object’s reactions to each influence separately:


L(x₁ + x₂) = L(x₁) + L(x₂),

where L is a linear function (integration, differentiation, etc.). A numerical check of this property is sketched after the classification below.

7. By type of energy used for regulation:

· pneumatic,

· hydraulic,

· electrical,

· mechanical, etc.

8. According to the principle of regulation:

· by deviation :

The vast majority of systems are built on the principle of feedback - regulation by deviation (see Fig. 1.7).

The element is called an adder. Its output signal is equal to the sum of the input signals. The blackened sector indicates that this input signal should be taken with the opposite sign.

· by disturbance.

These systems can be used if it is possible to measure the disturbing influence (see Fig. 1.8). The diagram shows K - amplifier with gain K.

· combined - combine the features of previous ASRs.

This method (see Fig. 1.9) achieves high quality control, but its application is limited by the fact that the disturbing influence f cannot always be measured.
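As mentioned in the description of linear systems above, the superposition principle can be checked numerically; the Python sketch below does this for an arbitrary linear link W(s) = 1/(2s + 1), with the choice of link and input signals being purely illustrative.

```python
import numpy as np
from scipy import signal

# Illustrative linear link W(s) = 1/(2s + 1)
sys = signal.TransferFunction([1.0], [2.0, 1.0])

t = np.linspace(0, 10, 501)
x1 = np.sin(t)                  # first input
x2 = np.ones_like(t)            # second input (a step)

_, y1, _ = signal.lsim(sys, x1, t)
_, y2, _ = signal.lsim(sys, x2, t)
_, y12, _ = signal.lsim(sys, x1 + x2, t)

# For a linear link the response to the sum equals the sum of the responses.
print(np.allclose(y12, y1 + y2, atol=1e-6))   # True
```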


Basic models.

The operation of the regulatory system can be described verbally. Thus, paragraph 1.1 describes the temperature control system for the drying cabinet. A verbal description helps to understand the principle of operation of the system, its purpose, operating features, etc. However, most importantly, it does not provide quantitative estimates of the quality of regulation, therefore it is not suitable for studying the characteristics of systems and building automated control systems. Instead, TAU uses more accurate mathematical methods for describing the properties of systems:

· static characteristics,

· dynamic characteristics,

· differential equations,

· transfer functions,

· frequency characteristics.

In any of these models, the system can be represented as a link having input influences X, disturbances F and output influences Y

Under the influence of these influences, the output value may change. In this case, when a new task arrives at the input of the system, it must provide, with a given degree of accuracy, the new value of the controlled variable in steady state.

Steady state - this is a mode in which the discrepancy between the true value of the controlled variable and its set value will be constant over time.

Static characteristics.

The static characteristic of an element is the dependence of the steady-state value of the output quantity on the value of the quantity at the input of the system, i.e.

y_st = φ(x).

The static characteristic (see Fig. 1.11) is often depicted graphically as a curve y(x).

Static is an element in which, with a constant input influence, a constant output value is established over time. For example, when different voltage values ​​are applied to the heater input, it will heat up to the temperature values ​​corresponding to these voltages.

Astatic is an element in which, under constant input action, the output signal continuously grows at a constant speed, acceleration, etc.

A linear static element is an inertia-free element that has a linear static characteristic:

y_st = K·x + a₀.

As you can see, the static characteristic of the element in this case has the form of a straight line with a slope coefficient K.

Linear static characteristics, unlike nonlinear ones, are more convenient to study due to their simplicity. If the object model is nonlinear, then it is usually converted to a linear form by linearization.
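A minimal numerical sketch of such a linearization (a tangent-line approximation around an operating point) is shown below; the nonlinear characteristic φ(x) = √x and the operating point are chosen only for illustration.

```python
import numpy as np

# Linearization of a static characteristic y = phi(x) around an operating point x0:
# y ~ phi(x0) + phi'(x0) * (x - x0).  The characteristic below is illustrative.
def phi(x):
    return np.sqrt(x)

x0 = 4.0
h = 1e-6
K = (phi(x0 + h) - phi(x0 - h)) / (2 * h)     # numerical slope phi'(x0)

def phi_lin(x):
    return phi(x0) + K * (x - x0)

for x in (3.5, 4.0, 4.5):
    print(x, phi(x), phi_lin(x))              # close to each other near x0
```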

An ACS is called static if, with a constant input action, the control error e tends to a constant value that depends on the magnitude of the action.

An ACS is called astatic if, with a constant input action, the control error tends to zero regardless of the magnitude of the action.

Laplace transforms.

The study of ASR is significantly simplified when using applied mathematical methods of operational calculus. For example, the functioning of a certain system is described by a differential equation of the form

a₂·d²y(t)/dt² + a₁·dy(t)/dt + a₀·y(t) = b₁·dx(t)/dt + b₀·x(t),   (2.1)

where x and y are input and output quantities. If in this equation instead of x(t) and y(t) we substitute functions X(s) and Y(s) of a complex variable s such that

X(s) = ∫₀^∞ x(t)·e^(−st) dt   and   Y(s) = ∫₀^∞ y(t)·e^(−st) dt,   (2.2)

then the original DE under zero initial conditions is equivalent to the linear algebraic equation

a₂s²·Y(s) + a₁s·Y(s) + a₀·Y(s) = b₁s·X(s) + b₀·X(s).

Such a transition from a differential equation to an algebraic equation is called Laplace transform , formulas (2.2) respectively Laplace transform formulas , and the resulting equation is operator equation .

The new functions X(s) and Y(s) are called the Laplace images of x(t) and y(t), while x(t) and y(t) are the originals with respect to X(s) and Y(s).

The transition from one model to another is quite simple and consists in replacing the differentiation signs with the operators sⁿ, the integration signs with the factors 1/s, and x(t) and y(t) themselves with the images X(s) and Y(s).

For the reverse transition from the operator equation to functions of time, the method is used inverse Laplace transform . General formula for the inverse Laplace transform:

f(t) = (1/2π)·∫₋∞^+∞ F(jω)·e^(jωt) dω,   (2.3)

where f(t) is the original, F(jω) is the image at s = jω, j is the imaginary unit, and ω is the frequency.

This formula is quite complex, so special tables have been developed (see Tables 1.1 and 1.2), which summarize the most frequently occurring functions F(s) and their originals f(t). They allow one to abandon the direct use of formula (2.3).

Table 1.1 – Laplace transforms

Original x(t)     Image X(s)
δ-function        1
t                 1/s²
t²                2/s³
tⁿ                n!/s^(n+1)
e^(−at)           1/(s + a)
a·x(t)            a·X(s)
x(t − a)          X(s)·e^(−as)
dⁿx(t)/dtⁿ        sⁿ·X(s)

Table 1.2 – Formulas for the inverse Laplace transform (supplement)

The law of change of the output signal is usually a function that needs to be found, and the input signal is usually known. Some typical input signals were discussed in section 2.3. Here are their images:

a unit step action has the image X(s) = 1/s,

the delta function has the image X(s) = 1,

a linear (ramp) action has the image X(s) = 1/s².
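These images can be checked symbolically, for example with sympy; the short sketch below is illustrative and not part of the original notes.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a = sp.Symbol("a", positive=True)

# Images of typical input signals (compare with the table above)
for name, f in [("unit step 1(t)", sp.Heaviside(t)),
                ("ramp t", t),
                ("exp(-a*t)", sp.exp(-a * t))]:
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(name, "->", F)
```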

Example. Solving DE using Laplace transforms.

Suppose the input signal has the form of a unit step action, i.e. x(t) = 1. Then the image of the input signal is X(s) = 1/s.

We transform the original differential equation according to Laplace and substitute X(s):

s²Y + 5sY + 6Y = 2sX + 12X,

s²Y + 5sY + 6Y = 2 + 12/s,

Y·(s³ + 5s² + 6s) = 2s + 12.

From this, the expression for Y is obtained:

Y(s) = (2s + 12) / (s³ + 5s² + 6s).

The original of the received function is not in the table of originals and images. To solve the problem of finding it, the fraction is divided into a sum of simple fractions, taking into account that the denominator can be represented as s(s + 2)(s + 3):

(2s + 12) / (s·(s + 2)·(s + 3)) = M₁/s + M₂/(s + 2) + M₃/(s + 3).

By comparing the resulting fraction with the original one, you can create a system of three equations with three unknowns:

M₁ + M₂ + M₃ = 0              M₁ = 2

5·M₁ + 3·M₂ + 2·M₃ = 2   →    M₂ = −4

6·M₁ = 12                     M₃ = 2

Therefore, a fraction can be represented as the sum of three fractions:

Y(s) = 2/s − 4/(s + 2) + 2/(s + 3).

Now, using table functions, the original output function is determined:

y(t) = 2 − 4·e^(−2t) + 2·e^(−3t).
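The worked example can be verified symbolically; the sketch below (using sympy) reproduces the partial-fraction expansion and the inverse transform.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Image of the output from the worked example: Y(s) = (2s + 12) / (s^3 + 5s^2 + 6s)
Y = (2 * s + 12) / (s**3 + 5 * s**2 + 6 * s)

print(sp.apart(Y, s))                      # 2/s - 4/(s + 2) + 2/(s + 3)
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))                      # 2 - 4*exp(-2*t) + 2*exp(-3*t)
```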

Transfer functions.

Examples of typical links.

A link of a system is an element of a system that has certain dynamic properties. The links of control systems may have a different physical basis (electrical, pneumatic, mechanical, etc. links), but belong to the same group. The relationship between input and output signals in the links of one group is described by the same transfer functions.

The simplest typical links:

· amplifying,

· integrating,

· differentiating,

· aperiodic,

· oscillatory,

· delayed.

1) Amplifying link.

The link amplifies the input signal K times. The link equation is y = K·x, the transfer function is W(s) = K. The parameter K is called the gain.

The output signal of such a link exactly repeats the input signal, amplified K times (see Fig. 1.15).

Examples of such links are: mechanical transmissions, sensors, inertia-free amplifiers, etc.

2) Integrating.

2.1) Ideal integrating.

The output value of an ideal integrating link is proportional to the integral of the input value:

y = K·∫x dt;   W(s) = K/s.

When a step action is applied to the input, the output signal grows constantly (see Fig. 1.16).

This link is astatic, i.e. does not have a steady state.

2.2) Real integrating.

The transfer function of this link has the form W(s) = K / (s·(Ts + 1)).

The transition response, unlike an ideal link, is a curve (see Fig. 1.17).

An example of an integrating link is a DC motor with independent excitation, if the stator supply voltage is taken as the input effect, and the rotor rotation angle is taken as the output effect.

3) Differentiating.

3.1) Ideal differentiator.

The output quantity is proportional to the time derivative of the input: y = K·dx/dt;   W(s) = K·s.

With a step input signal, the output signal is a pulse (d-function).

3.2) Real differentiating.

Ideal differentiating links are not physically realizable. Most objects that act as differentiating links are real differentiating links. The transient response (see figure) and the transfer function of this link have the form:

W(s) = K·s / (Ts + 1).

4) Aperiodic (inertial).

This link corresponds to a differential equation and a transfer function of the form:

T·dy/dt + y = K·x;   W(s) = K / (Ts + 1).

Let us determine the nature of the change in the output value of this link when a step action of magnitude x₀ is applied to the input.

The image of the step action is X(s) = x₀/s. Then the image of the output quantity is:

Y(s) = W(s)·X(s) = K·x₀ / (s·(Ts + 1)).

Let us decompose the fraction into partial fractions:

K·x₀ / (s·(Ts + 1)) = K·x₀·(1/s − T/(Ts + 1)) = K·x₀·(1/s − 1/(s + 1/T)).

According to the table, the original of the first fraction is L⁻¹(1/s) = 1, and of the second L⁻¹(1/(s + 1/T)) = e^(−t/T).

Then we finally obtain:

y(t) = K·x₀·(1 − e^(−t/T)).

The constant T is called time constant.

Most thermal objects are aperiodic links. For example, when voltage is applied to the input of an electric furnace, its temperature will change according to a similar law (see Fig. 1.19).

5) The oscillatory link has a DE and a TF of the form

T₂²·d²y/dt² + T₁·dy/dt + y = K·x,

W(s) = K / (T₂²s² + T₁s + 1).

When a step action with amplitude x₀ is applied to the input, the transient curve has one of two forms: aperiodic (when T₁ ≥ 2T₂) or oscillatory (when T₁ < 2T₂).

6) Delayed.

y(t) = x(t − τ),   W(s) = e^(−τs).

The output value y exactly repeats the input value x with some delay τ. Examples: movement of cargo along a conveyor, movement of liquid through a pipeline.
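For illustration, the step responses of several of the typical links listed above can be computed numerically; the parameter values in the sketch below are arbitrary.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)

# Illustrative parameter values for some of the typical links described above
links = {
    "integrating, W = 1/s": signal.TransferFunction([1.0], [1.0, 0.0]),
    "aperiodic, W = 2/(1.5s + 1)": signal.TransferFunction([2.0], [1.5, 1.0]),
    "oscillatory, W = 1/(s^2 + 0.4s + 1)": signal.TransferFunction([1.0], [1.0, 0.4, 1.0]),
    "real differentiating, W = s/(0.5s + 1)": signal.TransferFunction([1.0, 0.0], [0.5, 1.0]),
}

for name, sys in links.items():
    tout, y = signal.step(sys, T=t)
    plt.plot(tout, y, label=name)

plt.xlabel("t")
plt.ylabel("y(t)")
plt.legend()
plt.grid(True)
plt.title("Step responses of typical links (sketch)")
plt.show()
```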

Link connections.

Since the object under study, in order to simplify the analysis of its functioning, is divided into links, then after determining the transfer functions for each link, the task arises of combining them into one transfer function of the object. The type of transfer function of the object depends on the sequence of connections of the links:

1) Serial connection.

W_eq = W₁ · W₂ · W₃ · …

When links are connected in series, their transfer functions are multiplied.

2) Parallel connection.

W_eq = W₁ + W₂ + W₃ + …

When links are connected in parallel, their transfer functions add up.

3) Feedback connection.

The transfer function with respect to the reference input (x) is:

W_eq = W₁ / (1 ± W₁·W₂),

where “+” corresponds to negative feedback and “−” to positive feedback.

To determine the transfer functions of objects with more complex connections of links, either successive simplification (enlargement) of the diagram is used, or the diagram is converted using Mason's formula.
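The three connection rules can be applied directly to numerator/denominator polynomials; the sketch below is a minimal illustration (the helper names series, parallel and feedback are made up for this example), with the feedback rule written as W₁/(1 ± W₁·W₂).

```python
import numpy as np

# Transfer functions as (numerator, denominator) pairs of np.poly1d
W1 = (np.poly1d([2.0]), np.poly1d([1.0, 1.0]))     # W1 = 2/(s + 1)
W2 = (np.poly1d([1.0]), np.poly1d([0.5, 1.0]))     # W2 = 1/(0.5s + 1)

def series(a, b):
    # W = W1 * W2
    return a[0] * b[0], a[1] * b[1]

def parallel(a, b):
    # W = W1 + W2
    return a[0] * b[1] + b[0] * a[1], a[1] * b[1]

def feedback(a, b, sign=1):
    # W = W1 / (1 +/- W1*W2); sign = +1 corresponds to negative feedback
    return a[0] * b[1], a[1] * b[1] + sign * a[0] * b[0]

for name, (num, den) in {"series": series(W1, W2),
                         "parallel": parallel(W1, W2),
                         "negative feedback": feedback(W1, W2)}.items():
    print(name, "num:", num.coeffs, "den:", den.coeffs)
```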

Transfer functions of ASR.

For research and calculation, the structural diagram of the ASR through equivalent transformations is reduced to the simplest standard form “object - controller”.

This is necessary, firstly, in order to determine the mathematical dependencies in the system, and, secondly, as a rule, all engineering methods for calculating and determining the settings of regulators are applied to such a standard structure.

In the general case, any one-dimensional ASR with main feedback can be brought to this form by gradually enlarging the links.

If the output of the system y is not fed to its input, then we get an open-loop control system, the transfer function of which is defined as the product:

W_∞ = W_p · W_y

(W_p is the transfer function of the regulator, W_y is the transfer function of the control object).

That is, the sequence of links W_p and W_y can be replaced by one link with the transfer function W_∞. The transfer function of the closed-loop system is usually denoted Ф(s); it can be expressed in terms of W_∞:

Ф_з(s) = W_∞ / (1 + W_∞).

This transfer function Фз(s) determines the dependence of y on x and is called the transfer function of a closed-loop system along the channel of the reference action (by reference).

For ASR there are also transfer functions through other channels:

Ф_e(s) = E(s)/X(s) = 1 / (1 + W_∞) – with respect to the error,

Ф_в(s) = Y(s)/F(s) = W_y / (1 + W_∞) – with respect to the disturbance (for a disturbance applied at the input of the control object).

Since the transfer function of an open-loop system is, in the general case, a fractional-rational function of the form W_∞ = B(s)/A(s), the transfer functions of the closed-loop system can be transformed:

Ф_з(s) = W_∞/(1 + W_∞) = B(s)/(A(s) + B(s)),   Ф_e(s) = 1/(1 + W_∞) = A(s)/(A(s) + B(s)).

As you can see, these transfer functions differ only in their numerators. The expression in the denominator is called the characteristic expression of the closed-loop system and is denoted D_з(s) = A(s) + B(s), while the denominator A(s) of the open-loop transfer function W_∞ (which appears in the numerator of Ф_e(s)) is called the characteristic expression of the open-loop system.

Frequency characteristics.

Examples of logarithmic frequency characteristics.

1. Low pass filter (LPF)

[Figure: log amplitude (LAFC) and log phase (LPFC) characteristics; example circuit]

The low-pass filter is designed to suppress high-frequency influences.

2. High pass filter (HPF)

[Figure: log amplitude (LAFC) and log phase (LPFC) characteristics; example circuit]

The high-pass filter is designed to suppress low-frequency influences.

3. Band-stop (notch) filter.

A band-stop filter suppresses only a certain range of frequencies.

[Figure: log amplitude (LAFC) and log phase (LPFC) characteristics; example circuit]



Stability criteria.

Stability.

An important property of an ASR is stability, since its main purpose is to maintain a given constant value of the controlled parameter or to change it according to a certain law. If the controlled parameter deviates from the specified value (for example, under the influence of a disturbance or a change in the setpoint), the regulator acts on the system so as to eliminate this deviation. If, as a result of this action, the system returns to its original state or passes into another equilibrium state, such a system is called stable. If oscillations with ever-increasing amplitude arise, or the error e grows monotonically, the system is called unstable.

In order to determine whether a system is stable or not, stability criteria are used:

1) root criterion,

2) Stodola criterion,

3) Hurwitz criterion,

4) Nyquist criterion,

5) the Mikhailov criterion, and others.

The first two criteria are necessary criteria for the stability of individual links and open-loop systems. The Hurwitz criterion is algebraic and was developed to determine the stability of closed-loop systems without delay. The last two criteria belong to the group of frequency criteria, since they determine the stability of closed systems based on their frequency characteristics. Their feature is the possibility of application to closed systems with delay, which are the vast majority of control systems.

Root criterion.

The root criterion determines the stability of the system by the type of transfer function. The dynamic characteristic of the system, which describes the basic behavioral properties, is the characteristic polynomial located in the denominator of the transfer function. By setting the denominator to zero, one can obtain a characteristic equation, the roots of which can be used to determine stability.

The roots of the characteristic equation can be either real or complex and, to determine stability, are plotted on the complex plane (see Fig. 1.34).

(The symbol indicates the roots of the equation.)

Types of roots of the characteristic equation:

Real:

positive (root number 1);

negative (2);

zero (3);

Complex

complex conjugates (4);

purely imaginary (5);

In order of multiplicity, the roots are:

single (1, 2, 3);

conjugate (4, 5): s_i = α ± jω;

multiple (6): s_i = s_{i+1} = …

The root criterion is formulated as follows:

Linear ASR is stable if all roots of the characteristic equation lie in the left half-plane. If at least one root is on the imaginary axis, which is the stability boundary, then the system is said to be on the stability boundary. If at least one root is in the right half-plane (regardless of the number of roots in the left), then the system is unstable.

In other words, all real roots and real parts of complex roots must be negative. Otherwise the system is unstable.

Example 3.1. The denominator of the system transfer function (its characteristic polynomial) is s³ + 2s² + 2.25s + 1.25; the numerator is not reproduced here.

Characteristic equation: s³ + 2s² + 2.25s + 1.25 = 0.

Roots: s₁ = −1; s₂ = −0.5 + j; s₃ = −0.5 − j.

Therefore, the system is stable.
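The roots in Example 3.1 are easy to check numerically, for instance:

```python
import numpy as np

# Characteristic equation from Example 3.1: s^3 + 2s^2 + 2.25s + 1.25 = 0
roots = np.roots([1.0, 2.0, 2.25, 1.25])
print(roots)                                   # -1, -0.5 + j, -0.5 - j
print("stable:", all(r.real < 0 for r in roots))
```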

Stodola criterion.

This criterion is a consequence of the previous one and is formulated as follows: for a linear system to be stable, it is necessary that all coefficients of the characteristic polynomial be positive.

Thus, the characteristic polynomial from Example 3.1 satisfies the Stodola criterion: all its coefficients are positive, which is consistent with a stable system.

Hurwitz criterion.

The Hurwitz criterion works with the characteristic polynomial of the closed-loop system. As is known, the block diagram of the ASR with respect to the error channel has the form shown in the figure, where

W p - transfer function of the controller,

W y is the transfer function of the control object.

Let us determine the forward-path transfer function (the transfer function of the open-loop system, see paragraph 2.6.4): W_∞ = W_p·W_y.

As a rule, the transfer function of an open-loop system has a fractional-rational form:

W_∞(s) = B(s) / A(s).

Then, after substitution and transformation, we obtain:

Ф_з(s) = W_∞ / (1 + W_∞) = B(s) / (A(s) + B(s)).

It follows that the characteristic polynomial of the closed-loop system (CPCS) can be defined as the sum of the numerator and denominator of W_∞:

D_з(s) = A(s) + B(s).

To determine stability by Hurwitz, a matrix is constructed so that the CPCS coefficients from a_{n−1} to a₀ are located along the main diagonal. To the right and left of them are written coefficients with indices differing by 2 (a₀, a₂, a₄, … or a₁, a₃, a₅, …). Then, for the system to be stable, it is necessary and sufficient that the determinant of this matrix and all its leading principal minors be greater than zero.

If at least one determinant is equal to zero, then the system will be on the stability boundary.

If at least one determinant is negative, then the system is unstable regardless of the number of positive or zero determinants.

Example. The transfer function of the open-loop system is given as

W_∞(s) = (2s³ + 9s² + 6s + 1) / (2s⁴ + 3s³ + s²).

It is required to determine the stability of a closed-loop system using the Hurwitz criterion.

For this purpose, the CPCS is determined:

D(s) = A(s) + B(s) = 2s⁴ + 3s³ + s² + 2s³ + 9s² + 6s + 1 = 2s⁴ + 5s³ + 10s² + 6s + 1.

Since the degree of the CPCS is n = 4, the matrix has size 4×4. The CPCS coefficients are a₄ = 2, a₃ = 5, a₂ = 10, a₁ = 6, a₀ = 1.

The matrix looks like:

| 5   6   0   0 |
| 2  10   1   0 |
| 0   5   6   0 |
| 0   2  10   1 |

(note the similarity of the rows: 1 with 3 and 2 with 4). The determinants are:

Δ₁ = 5 > 0,

Δ₂ = 5·10 − 6·2 = 38 > 0,

Δ₃ = 203 > 0,

Δ₄ = a₀·Δ₃ = 1·203 = 203 > 0.

Since all the determinants are positive, the ASR is stable. ♦
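The Hurwitz matrix and its minors from this example can be checked numerically; the sketch below builds the matrix from the CPCS coefficients (the helper hurwitz_matrix is illustrative) and prints the leading principal minors.

```python
import numpy as np

def hurwitz_matrix(c):
    """Hurwitz matrix for a polynomial c[0]*s^n + ... + c[n] (sketch)."""
    n = len(c) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * j - i + 1          # index of the coefficient placed at (i, j)
            if 0 <= k <= n:
                H[i, j] = c[k]
    return H

# CPCS from the example: 2s^4 + 5s^3 + 10s^2 + 6s + 1
c = [2.0, 5.0, 10.0, 6.0, 1.0]
H = hurwitz_matrix(c)
print(H)
minors = [np.linalg.det(H[:k, :k]) for k in range(1, len(c))]
print(minors)       # approximately 5, 38, 203, 203 -- all positive, so the system is stable
```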


Mikhailov criterion.

The stability criteria described above do not work if the transfer function of the system contains a delay, that is, if it can be written in the form

W_∞(s) = B(s)·e^(−τs) / A(s),

where τ is the delay.

In this case, the characteristic expression of the closed system is not a polynomial and its roots cannot be determined. To determine stability in this case, the Mikhailov and Nyquist frequency criteria are used.

The procedure for applying the Mikhailov criterion:

1) The characteristic expression of the closed system is written:

D_з(s) = A(s) + B(s)·e^(−τs).
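Step 1 of the procedure can be visualized numerically; the sketch below plots D_з(jω) = A(jω) + B(jω)·e^(−jωτ) for an arbitrary illustrative system with delay.

```python
import numpy as np
import matplotlib.pyplot as plt

# D_z(j*omega) = A(j*omega) + B(j*omega)*exp(-j*omega*tau) for an illustrative system
A = np.poly1d([1.0, 3.0, 3.0, 1.0])      # denominator of W_inf
B = np.poly1d([2.0])                     # numerator of W_inf
tau = 0.5                                # delay, s

w = np.linspace(0.0, 20.0, 4000)
D = A(1j * w) + B(1j * w) * np.exp(-1j * w * tau)

plt.plot(D.real, D.imag)
plt.axhline(0, color="k", lw=0.5)
plt.axvline(0, color="k", lw=0.5)
plt.xlabel("Re D(j*omega)")
plt.ylabel("Im D(j*omega)")
plt.title("Mikhailov curve of a closed-loop system with delay (sketch)")
plt.grid(True)
plt.show()
```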

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION

Federal State Autonomous Educational Institution of Higher Professional Education

"St. Petersburg State University of Aerospace Instrumentation"

_________________________________________________________________

M. V. Burakov

Theory of automatic control.

Tutorial

Saint Petersburg

Reviewers:

Candidate of Technical Sciences D. O. Yakimovsky (Federal State Enterprise “Research Institute of Command Devices”). Candidate of Technical Sciences Associate Professor A. A. Martynov

(St. Petersburg State University of Aerospace Instrumentation)

Approved by the University Editorial and Publishing Council

as a teaching aid

Burakov M.V.

D79 Theory of automatic control: a textbook. Part 1 / M. V. Burakov. – St. Petersburg: GUAP, 2013. – 258 p.: ill.

The textbook discusses the basics of the theory of automatic control - a basic course for training engineers in the field of automation and control.

The basic concepts and principles of control are presented, mathematical models and methods of analysis and synthesis of linear and discrete control systems based on the apparatus of transfer functions are considered.

The textbook is intended for the preparation of bachelors and masters in the direction 220400 “Control in Technical Systems”, as well as students of other specialties studying the disciplines “Automatic Control Theory” and “Fundamentals of Control Theory”.

1. BASIC CONCEPTS AND DEFINITIONS

1.1. Brief history of TAU development

1.2. Basic concepts of TAU

1.3. Methods for describing control objects

1.4. Linearization

1.4. Management quality criteria

1.5. Deflection regulators

Self-test questions

2. TRANSFER FUNCTIONS

2.1. Laplace transform

2.2. Concept of transfer function

2.3. Typical dynamic links

2.4. Timing characteristics

2.5. Transfer function of a system with feedback

2.6. Private transfer functions

2.7. Steady State Accuracy

2.8. Converting block diagrams

2.9. Signal graphs and Mason's formula

2.10. Invariant systems

Self-test questions

3. ROOT ESTIMATES OF STABILITY AND QUALITY

3.1. Necessary and sufficient condition for stability

3.2. Algebraic stability criterion

3.3. Structurally unstable systems

3.4. Root indicators of transient process quality

3.5. Selecting controller parameters

3.6. Root hodograph

Self-test questions

4. FREQUENCY METHODS OF ANALYSIS AND SYNTHESIS

4.1. Fourier transform

4.2. Logarithmic frequency response

4.3. Frequency characteristics of an open-loop system

4.4. Frequency stability criteria

4.4.1. Mikhailov stability criterion

4.4.2. Nyquist stability criterion

4.4.3. Nyquist criterion for systems with delay

4.5. Frequency quality criteria

4.5.1. Stability margins

4.5.2. Harmonic Accuracy

4.6. Synthesis of corrective devices

4.6.1. Assessing the quality of a tracking system by the shape of the open-loop LFC

4.6.2. Correction using a differentiating device

4.6.3. Correction using an integro-differentiating circuit

4.6.4. Synthesis of a general type corrective link

4.7. Analog correction links

4.7.1. Passive corrective links

4.7.2. Active corrective links

Self-test questions

5. DIGITAL CONTROL SYSTEMS

5.1. Analog-to-digital and digital-to-analog conversion

5.2. Implementation of DAC and ADC

5.3. Z - transformation

5.4. Shift theorem

5.5. Synthesis of digital systems from continuous ones

5.6. Stability of discrete control systems

5.7. Dynamic Object Identification

5.7.1. Identification problem

5.7.2. Deterministic identifier

5.7.3. Construction of the least squares model using the acceleration curve

Self-test questions

6. ADAPTIVE CONTROL SYSTEMS

6.1. Classification of adaptive systems

6.2. Extreme control systems

6.3. Adaptive control with reference model

Self-test questions

CONCLUSION

Bibliography

1. BASIC CONCEPTS AND DEFINITIONS

1.1. A brief history of the development of automatic control theory

The theory of automatic control can be defined as the science of methods for determining the laws of control of any objects that can be implemented using technical means.

The first automatic devices were developed by man in ancient times, as evidenced by the written evidence that has reached us. The works of ancient Greek and Roman scientists provide descriptions of various automatic devices: the hodometer – an automatic device for measuring distance by counting the number of revolutions of a cart wheel; machines for opening doors and selling water in temples; automatic theaters with cam mechanisms; a device for throwing arrows with automatic feeding. At the turn of our era, the Arabs equipped water clocks with a float level regulator (Fig. 1.1).

In the Middle Ages, “android” automation developed, when mechanical designers created devices that imitated individual human actions. The name “android” emphasizes the humanoid nature of the machine. Androids operated based on clock mechanisms.

Several factors can be identified that necessitated the development of control systems in the 17th – 18th centuries:

1. the development of watchmaking, driven by the needs of rapidly developing shipping;

2. the development of the flour milling industry and the need to regulate the operation of water mills;

3. invention of the steam engine.

Fig. 1.1. Water clock design

Although it is known that centrifugal speed equalizers were used in water flour mills back in the Middle Ages, the first feedback control system is considered to be the temperature regulator of the Dutchman Cornelius Drebbel (1600). In 1675, C. Huygens built a pendulum regulator into a clock. Denis Papin invented the first pressure regulator for steam boilers in 1681.

The steam engine became the first object of industrial regulators, since it was not capable of stable operation by itself, i.e. it did not have "self-leveling" (Fig. 1.2).

Fig.1.2. Steam engine with regulator

The first industrial regulators were an automatic float regulator of the feed water level in the boiler of a steam engine, built in 1765 by I. I. Polzunov, and a centrifugal speed regulator for a steam engine, for which J. Watt received a patent in 1784 (Fig. 1.3).

These first regulators were direct control systems, i.e., no additional energy sources were required to actuate them – the sensitive element directly moved the regulating element (modern control systems are indirect control systems, since the error signal is almost always insufficient in power to drive the regulating element).

Fig. 1.3. Watt's centrifugal regulator.

It was no coincidence that the steam engine became the first object for the application of technology and control theory, since it did not have the ability to work stably on its own and did not have self-leveling.

Also noteworthy is the creation of the first program control device for a weaving loom using a punched card (for reproducing patterns on carpets), built in 1808 by J. Jacquard.

Polzunov’s invention was not accidental, since at the end of the 18th century the Russian metallurgical industry occupied a leading position in the world. Subsequently, Russian scientists and engineers continued to make great contributions to the development of the theory of automatic control.

The first work on the theory of regulation appeared in 1823, and it was written by Chizhov, a professor at St. Petersburg University.

In 1854, K. I. Konstantinov proposed using the "electromagnetic speed regulator" he developed instead of a conical pendulum in steam engines. Instead of a centrifugal mechanism, it uses an electromagnet to control the flow of steam into the machine. The regulator proposed by Konstantinov had greater sensitivity than a conical pendulum.

In 1866, A. I. Shpakovsky developed a regulator for a steam boiler heated by nozzles. The fuel supply through the nozzles was proportional to the change in steam pressure in the boiler: if the pressure dropped, the fuel flow through the nozzles increased, which led to an increase in temperature and, as a consequence, an increase in pressure.

In 1856 in Moscow, during the coronation of Alexander II, six powerful electric arc lamps with an automatic Shpakovsky regulator were installed. This was the first practical experience of building and operating a series of electromechanical regulators over a long period.

In 1869–1883, V. N. Chikolev developed a number of electromechanical regulators, including a differential regulator for arc lamps, which played an important role in the history of regulation technology.

The date of birth of the theory of automatic control (TAU) is usually taken to be 1868, when J. Maxwell's paper "On Governors" was published, in which a differential equation was used as a model of the regulator.

A great contribution to the development of TAU was made by the Russian mathematician and engineer I. A. Vyshnegradsky. In his work “On the General Theory of Regulators,” published in 1876, he examined the steam engine and the centrifugal regulator as a single dynamic system. Vyshnegradsky made the most practically important conclusions on the stable movement of systems. He first introduced the concept of linearization of differential equations, thus significantly simplifying the mathematical apparatus of research.

THE THEORY OF AUTOMATIC CONTROL FOR “DUMMIES”

K.Yu. Polyakov

Saint Petersburg

© K.Yu. Polyakov, 2008

“At a university, you need to present the material at a high professional level. But since this level goes well above the head of the average student, I will explain on my fingers. It’s not very professional, but it’s understandable.”

Unknown teacher

Preface

This manual is intended for a first acquaintance with the subject. Its task is to explain the basic concepts of automatic control theory "on one's fingers" and to ensure that after reading it you will be able to read professional literature on the topic. This manual should be considered only as a foundation, a launching pad for serious study of a serious subject, which can become very interesting and exciting.

There are hundreds of textbooks on automatic control. But the whole problem is that when the brain perceives new information, it looks for something familiar that it can “catch onto”, and on this basis “link” the new to already known concepts. Practice shows that reading serious textbooks is difficult for a modern student. There's nothing to grab onto. And behind strict scientific evidence, the essence of the matter, which is usually quite simple, often eludes. The author tried to “go down” to a lower level and build a chain from “everyday” concepts to the concepts of management theory.

The presentation at every step suffers from a lack of rigor: proofs are not given, and formulas are used only where it is impossible to do without them. The mathematician will find many inconsistencies and omissions here, since (in accordance with the goals of the manual) the choice between rigor and understandability is always made in favor of understandability.

Little prior knowledge is required of the reader. You need to have an idea of some sections of the higher mathematics course:

1) derivatives and integrals;

2) differential equations;

3) linear algebra, matrices;

4) complex numbers.

Acknowledgments

The author expresses deep gratitude to Dr. A.N. Churilov, Ph.D. V.N. Kalinichenko and Ph.D. V. Rybinsky, who carefully read the preliminary version of the manual and made many valuable comments that made it possible to improve the presentation and make it more understandable.


1. BASIC CONCEPTS

1.1. Introduction

1.2. Control systems

1.3. What types of control systems are there?

2. MATHEMATICAL MODELS

2.1. What do you need to know in order to control?

2.2. The relation between input and output

How are models built?

Linearity and nonlinearity

Linearization of equations

Control

3. MODELS OF LINEAR OBJECTS

Differential equations

3.2. State-space models

Transition (step) function

Impulse response (weighting function)

Transfer function

Laplace transform

3.7. Transfer function and state space

Frequency characteristics

Logarithmic frequency characteristics

4. TYPICAL DYNAMIC LINKS

Amplifier

Aperiodic link

Oscillatory link

Integrating link

Differentiating links

Delay

"Inverse" links

LAFC of complex links

5. STRUCTURAL DIAGRAMS

Symbols

Transformation rules

A typical single-loop system

6. ANALYSIS OF CONTROL SYSTEMS

Control requirements

The output process

Accuracy

Stability

Stability criteria

Transient process

Frequency-domain quality estimates

Root quality estimates

Robustness

7. SYNTHESIS OF REGULATORS

The classical scheme

PID controllers

The pole placement method

LAFC correction

Combined control

Invariance

The set of stabilizing regulators

CONCLUSION

LITERATURE FOR FURTHER READING


1. Basic concepts

1.1. Introduction

Since ancient times, man has wanted to use objects and forces of nature for his own purposes, that is, to control them. You can control inanimate objects (for example, rolling a stone to another place), animals (training), people (boss - subordinate). Many management tasks in the modern world are associated with technical systems - cars, ships, airplanes, machine tools. For example, you need to maintain a given course of a ship, the altitude of an airplane, engine speed, or the temperature in a refrigerator or oven. If these tasks are solved without human participation, they speak of automatic control.

Control theory tries to answer the question "how should one control?". Until the 19th century the science of control did not exist, although the first automatic control systems already existed (for example, windmills were "taught" to turn towards the wind). The development of control theory began during the industrial revolution. At first, this direction in science was developed by mechanics to solve problems of regulation, that is, maintaining a given value of rotation speed, temperature, or pressure in technical devices (for example, in steam engines). This is where the name "automatic regulation theory" comes from.

Later it turned out that management principles can be successfully applied not only in technology, but also in biology, economics, and social sciences. The science of cybernetics studies the processes of control and information processing in systems of any nature. One of its sections, related mainly to technical systems, is called automatic control theory. In addition to classical control problems, it also deals with the optimization of control laws and issues of adaptability (adaptation).

Sometimes the names "automatic control theory" and "automatic regulation theory" are used interchangeably. For example, in modern foreign literature you will find only one term – control theory.

1.2. Control systems

1.2.1. What does the control system consist of?

In control problems there are always two objects – the controlled one and the controlling one. The controlled object is usually called the control object, or simply the object, and the controlling one – the regulator (controller). For example, when controlling rotation speed, the control object is an engine (electric motor, turbine); in the problem of stabilizing the course of a ship it is the ship immersed in water; in the problem of maintaining the sound volume level it is a loudspeaker.

Regulators can be built on different principles.

The most famous of the first mechanical regulators is Watt's centrifugal regulator for stabilizing the rotation frequency of a steam turbine (shown in the figure on the right). When the rotation frequency increases, the balls move apart due to the increased centrifugal force, and through a system of levers the damper closes slightly, reducing the flow of steam to the turbine.

The temperature regulator in a refrigerator or thermostat is an electronic circuit that turns on the cooling (or heating) mode if the temperature becomes higher (or lower) than the set value.

In many modern systems, regulators are microprocessor devices – computers. They successfully control airplanes and spacecraft without human participation.


A modern car is literally "stuffed" with control electronics, right up to on-board computers.

Typically, the regulator acts on the controlled object not directly, but through actuators (drives), which can amplify and convert the control signal, for example, an electrical signal can “transform” into the movement of a valve that regulates fuel consumption, or into turning the steering wheel at a certain angle.

In order for the regulator to “see” what is actually happening to the object, sensors are needed. Sensors are most often used to measure those characteristics of an object that need to be controlled. In addition, the quality of management can be improved if additional information is obtained - by measuring the internal properties of the object.

1.2.2. System structure

So, a typical control system includes a plant, a controller, an actuator, and sensors. However, a set of these elements is not yet a system. To transform into a system, communication channels are needed, through which information is exchanged between elements. Electric current, air (pneumatic systems), liquid (hydraulic systems), and computer networks can be used to transmit information.

Interconnected elements are already a system that has (due to connections) special properties that individual elements and any combination of them do not have.

The main intrigue of management is related to the fact that the environment affects the object - external disturbances, which “prevent” the regulator from performing its assigned task. Most disturbances are unpredictable in advance, that is, they are random in nature.

In addition, sensors do not measure parameters accurately, but with some error, albeit small. In this case, they talk about “measurement noise” by analogy with noise in radio engineering that distorts signals.

To summarize, we can draw a block diagram of the control system like this:

[Block diagram of the control system: the regulator produces the control; a disturbance acts on the object; measurements are fed back to the regulator]

For example, in a ship's course control system

control object – the ship itself, located in the water; to control its course a rudder is used, which changes the direction of the water flow;

regulator – digital computer;

drive - a steering device that amplifies the control electrical signal and converts it into steering rotation;

sensors - a measuring system that determines the actual course;

external disturbances – sea waves and wind, which deviate the ship from the given course;

measurement noise is sensor errors.

Information in the control system seems to "go around in a circle": the regulator issues a control signal to the drive, which acts directly on the object; then information about the object returns through the sensors back to the regulator, and everything starts all over again. They say that the system has feedback, that is, the regulator uses information about the state of the object to develop the control. Systems with feedback are called closed, because information is transmitted around a closed loop.


1.2.3. How does the regulator work?

The regulator compares the setting signal (the "setpoint", "reference", or "desired value") with the feedback signals from the sensors and determines the mismatch (control error) – the difference between the desired and the actual state. If it is zero, no control is required. If there is a difference, the regulator issues a control signal that seeks to reduce the mismatch to zero. Therefore, in many cases the regulator circuit can be drawn like this:

[Regulator diagram: the mismatch is fed to the control algorithm, which produces the control; feedback closes the loop]

This diagram shows control by error (or by deviation). This means that, for the regulator to begin to act, the controlled value must deviate from the set value. The block marked with ≠ finds the mismatch. In the simplest case, it subtracts the feedback signal (the measured value) from the set value.

Is it possible to control an object without causing an error? In real systems, no. First of all, due to external influences and noises that are unknown in advance. In addition, control objects have inertia, that is, they cannot instantly move from one state to another. The capabilities of the controller and drives (that is, the power of the control signal) are always limited, therefore the speed of the control system (the speed of transition to a new mode) is also limited. For example, when steering a ship, the rudder angle usually does not exceed 30 - 35°, this limits the rate of course change.

We considered the option when feedback is used to reduce the difference between the specified and actual state of the control object. Such feedback is called negative feedback because the feedback signal is subtracted from the command signal. Could it be the other way around? It turns out yes. In this case, the feedback is called positive, it increases the mismatch, that is, it tends to “rock” the system. In practice, positive feedback is used, for example, in generators to maintain undamped electrical oscillations.
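To make the idea of error (deviation) control concrete, here is a minimal simulation sketch in Python (it is not from the original text; the first-order plant, the proportional control law and all numerical values are assumptions chosen only for illustration). The regulator forms the mismatch e = setpoint − measurement and applies a control proportional to it through negative feedback.

# Minimal closed-loop sketch: proportional regulator with negative feedback
# acting on a first-order plant dy/dt = (-y + u + d) / T.
import numpy as np

T_plant = 5.0        # plant time constant, s (assumed)
Kp = 4.0             # proportional gain of the regulator (assumed)
u_max = 1.5          # actuator limit, analogous to the limited rudder angle
setpoint = 1.0       # desired value of the controlled variable

dt, t_end = 0.01, 40.0
y, t = 0.0, 0.0
while t < t_end:
    d = 0.2 * np.sin(0.3 * t)             # external disturbance acting on the plant
    e = setpoint - y                      # mismatch (control error) via negative feedback
    u = np.clip(Kp * e, -u_max, u_max)    # control signal, limited by the actuator
    y += dt * (-y + u + d) / T_plant      # plant response (Euler step)
    t += dt
print("final mismatch:", setpoint - y)    # reduced, but in general not exactly zero

Because of the disturbance and the limited control signal, the error is reduced but not eliminated, which matches the discussion above.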

1.2.4. Open-loop systems

Is it possible to control without using feedback? In principle, it is possible. In this case, the controller does not receive any information about the real state of the object, so it must be known exactly how this object behaves. Only then can you calculate in advance how it needs to be controlled (build the necessary control program). However, there is no guarantee that the task will be completed. Such systems are called program control systems or open-loop systems, since information is not transmitted in a closed loop, but only in one direction.

[Open-loop system diagram: the program defines the control issued by the regulator to the object; a disturbance acts on the object; there is no feedback.]

A blind or deaf driver can also drive a car. For a while. As long as he remembers the road and can correctly estimate his position. Until he meets pedestrians or other cars on the way, which he cannot know about in advance. From this simple example it is clear that without


feedback (information from sensors) it is impossible to take into account the influence of unknown factors and the incompleteness of our knowledge.

Despite these disadvantages, open-loop systems are used in practice. For example, an information board at a train station. Or a simple engine control system in which it is not necessary to maintain the rotation speed very precisely. However, from the point of view of control theory, open-loop systems are of little interest, and we will not talk about them anymore.

1.3. What types of control systems are there?

An automatic system is a system that operates without human intervention. There are also automated systems, in which routine operations (collection and analysis of information) are performed by a computer, but the system as a whole is controlled by a human operator who makes the decisions. We will study only automatic systems from now on.

1.3.1. Objectives of control systems

Automatic control systems are used to solve three types of problems:

stabilization, that is, maintaining a given operating mode that does not change for a long time (the setting signal is constant, often zero);

program control – control according to a program known in advance (the setting signal changes, but is known in advance);

tracking – following a setting signal that is not known in advance.

Stabilization systems include, for example, autopilots on ships (maintaining a given course) and turbine speed control systems. Program control systems are widely used in household appliances, such as washing machines. Tracking (servo) systems serve to amplify and convert signals; they are used in drives and when transmitting commands over communication lines, for example, via the Internet.

1.3.2. One-dimensional and multidimensional systems

According to the number of inputs and outputs there are

one-dimensional systems that have one input and one output (they are considered in the so-called classical control theory);

multidimensional systems with several inputs and/or outputs (the main subject of study of modern control theory).

We will study only one-dimensional systems, where both the object and the controller have one input and one output signal. For example, when steering a ship along a course, we can assume that there is one control action (turning the rudder) and one controlled variable (course).

However, in reality this is not entirely true. The fact is that when the course changes, the roll and trim of the ship also changes. In a one-dimensional model we neglect these changes, although they can be very significant. For example, during a sharp turn, the roll may reach an unacceptable value. On the other hand, for control you can use not only the steering wheel, but also various thrusters, pitch stabilizers, etc., that is, the object has several inputs. Thus, the real course control system is multidimensional.

The study of multidimensional systems is a rather complex task and is beyond the scope of this manual. Therefore, in engineering calculations a multidimensional system is sometimes represented as several one-dimensional ones, and quite often this approach leads to success.

1.3.3. Continuous and discrete systems

According to the nature of the system signals, they can be

continuous, in which all signals are functions of continuous time, defined over a certain interval;

discrete, in which discrete signals (sequences of numbers) are used, defined only at certain points in time;


continuous-discrete, which contain both continuous and discrete signals.

Continuous (or analog) systems are usually described by differential equations. These are all motion control systems that do not contain computers or other discrete-action devices (microprocessors, logic integrated circuits).

Microprocessors and computers are discrete systems, because in them all information is stored and processed in discrete form. A computer cannot process continuous signals, because it works only with sequences of numbers. Examples of discrete systems can be found in economics (reporting period – a quarter or a year) and in biology (the predator–prey model). Difference equations are used to describe them.

There are also hybrid continuous-discrete systems, for example, computer systems for controlling moving objects (ships, airplanes, cars, etc.). In them, some of the elements are described by differential equations, and some by difference equations. From a mathematical point of view, this creates great difficulties for their study, therefore, in many cases, continuous-discrete systems are reduced to simplified purely continuous or purely discrete models.

1.3.4. Stationary and non-stationary systems

For management, the question of whether the characteristics of an object change over time is very important. Systems in which all parameters remain constant are called stationary, which means “not changing over time.” This tutorial covers only stationary systems.

In practical problems things are often not so rosy. For example, a flying rocket consumes fuel and due to this its mass changes. Thus, a rocket is a non-stationary object. Systems in which the parameters of an object or controller change over time are called non-stationary. Although the theory of non-stationary systems exists (the formulas have been written), applying it in practice is not so easy.

1.3.5. Certainty and randomness

The simplest option is to assume that all parameters of the object are determined (set) exactly, just like external influences. In this case we are talking about deterministic systems that were considered in classical control theory.

However, in real problems we do not have accurate data. First of all, this applies to external influences. For example, to study the rocking of a ship at the first stage, we can assume that the wave has the shape of a sine of known amplitude and frequency. This is a deterministic model. Is this true in practice? Naturally not. Using this approach, only approximate, rough results can be obtained.

According to modern concepts, the waveform is approximately described as a sum of sinusoids that have random, that is, unknown in advance, frequencies, amplitudes and phases. Interference and measurement noise are also random signals.

Systems in which random disturbances operate or the parameters of an object can change randomly are called stochastic(probabilistic). The theory of stochastic systems allows one to obtain only probabilistic results. For example, you cannot guarantee that the ship's deviation from course will always be no more than 2°, but you can try to ensure such a deviation with some probability (99% probability means that the requirement will be met in 99 cases out of 100).

1.3.6. Optimal systems

Often system requirements can be formulated as optimization problems. In optimal systems the regulator is designed to provide a minimum or a maximum of some quality criterion. It must be remembered that the expression “optimal system” does not mean that the system is truly ideal. Everything is determined by the accepted criterion: if it is chosen well, the system will turn out well; if not, then vice versa.


1.3.7. Special classes of systems

If the parameters of the object or disturbances are not known accurately or can change over time (in non-stationary systems), adaptive or self-adjusting controllers are used, in which the control law changes when conditions change. In the simplest case (when there are several previously known operating modes), a simple switching occurs between several control laws. Often in adaptive systems, the controller evaluates the parameters of the object in real time and accordingly changes the control law according to a given rule.

A self-tuning system that tries to adjust the regulator so as to “find” the maximum or minimum of some quality criterion is called extremal (from the word “extremum”, meaning a maximum or a minimum).

Many modern household devices (for example, washing machines) use fuzzy controllers, built on the principles of fuzzy logic. This approach allows us to formalize the human way of making decisions: “if the ship has gone too far to the right, the rudder needs to be moved very far to the left.”

One of the popular directions in modern theory is the use of artificial intelligence achievements to control technical systems. The regulator is built (or just configured) based on a neural network, which is pre-trained by a human expert.


2. Mathematical models

2.1. What do you need to know to manage?

The goal of any control is to change the state of an object in the desired way (in accordance with the task). The theory of automatic control must answer the question: “how to build a regulator that can control a given object in such a way as to achieve the goal?” To do this, the developer needs to know how the control system will react to different influences, that is, a model of the system is needed: object, drive, sensors, communication channels, disturbances, noise.

A model is an object that we use to study another object (original). The model and the original must be similar in some way so that the conclusions drawn from studying the model can (with some probability) be transferred to the original. We will be primarily interested in mathematical models, expressed as formulas. In addition, descriptive (verbal), graphical, tabular and other models are also used in science.

2.2. Input and output connection

Any object interacts with the external environment through inputs and outputs. Inputs are possible actions on the object; outputs are the signals that can be measured. For example, for an electric motor the inputs can be the supply voltage and the load, and the outputs the shaft rotation speed and the temperature.

The inputs are independent; they “come” from the external environment. When the input information changes, the internal state of the object (this is the name for its changing properties) changes and, as a consequence, so do the outputs:

[input x → object → output y]

This means that there is some rule by which the element transforms the input x into the output y. This rule is called an operator. The notation y = U[x] means that the output y is obtained as the result of applying the operator U to the input x.

To build a model means to find an operator connecting inputs and outputs. With its help, you can predict the reaction of an object to any input signal.

Consider a DC electric motor. The input of this object is the supply voltage (in volts), the output is the rotation speed (in revolutions per second). We will assume that at a voltage of 1 V the rotation speed is 1 revolution per second, and at 2 V it is 2 revolutions per second, that is, the rotation speed is numerically equal to the voltage1. It is easy to see that the action of such an operator can be written in the form

U[x] = x.

Now suppose that the same motor rotates a wheel, and we have chosen as the output of the object the number of revolutions of the wheel relative to the initial position (at the moment t = 0). In this case, with uniform rotation, the product x·Δt gives the number of revolutions over the time Δt, that is, y(t) = x·Δt (here the notation y(t) explicitly denotes the dependence of the output on time t). Can we consider that this formula defines the operator U? Obviously not, because the resulting dependence is valid only for a constant input signal. If the voltage at the input x(t) changes (no matter how!), the angle of rotation is written as an integral

y(t) = ∫₀ᵗ x(τ) dτ.

1 Of course, this will only be true over a certain voltage range.
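The contrast between the two descriptions of the motor can be sketched numerically in Python (this sketch is not from the original text; the input signal and all numbers are invented for illustration): the static operator maps the input pointwise, while the second model accumulates the input over time.

# A minimal sketch contrasting the two operators discussed above:
# the static operator U[x] = x and the integrating operator y(t) = ∫ x(τ) dτ.
import numpy as np

def static_operator(x):
    """Rotation speed numerically equal to the applied voltage."""
    return x

def integrating_operator(x, t):
    """Number of wheel revolutions: cumulative integral of the speed over time (trapezoid rule)."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * np.diff(t))))

t = np.linspace(0.0, 10.0, 1001)          # time grid, s (illustrative values)
x = 1.0 + 0.5 * np.sin(t)                 # a time-varying input voltage, V
speed = static_operator(x)                # output of the first model
revolutions = integrating_operator(x, t)  # output of the second model
print(speed[-1], revolutions[-1])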

THEORY OF AUTOMATIC CONTROL

Lecture notes

INTRODUCTION

You will learn:

· What is the theory of automatic control (TAU).

· What is the object, subject and purpose of studying TAU.

· What is the main research method in TAU.

· What is the place of TAU among other sciences.

· What is the history of TAU.

· Why is the study of TAU important?

· What are the current trends in production automation.

What is the theory of automatic control?

The concept of TAU accumulates the terms included in its name:

· theory – a body of knowledge that allows, under certain conditions, to obtain reliable results

· control – the impact exerted on an object to achieve a certain goal;

· automatic control – control without human intervention using technical means.

That's why

TAU– a body of knowledge that allows you to create and implement automatic process control systems with specified characteristics.

What is the object, subject and purpose of studying TAU?

Object of study of TAU – the automatic control system (ACS).

Subject of study of TAU – the processes occurring in the ACS.

Purpose of studying TAU – to apply the acquired knowledge in practical work during the design, manufacture, installation, commissioning and operation of ACS.

The main research method in TAU.

When studying control processes in TAU, one abstracts from the physical and design features of particular ACS and, instead of real systems, considers their adequate mathematical models. That is why the main research method in TAU is mathematical modeling.

Place of TAU among other sciences.

TAU, together with the theory of functioning of control system elements (sensors, regulators, actuators), forms a broader branch of science – automation. Automation, in turn, is one of the branches of technical cybernetics. Technical cybernetics studies complex automated control systems for technological processes and enterprises, built using control computers.

History of TAU.

The first theoretical work in the field of automatic control appeared at the end of the 19th century, when steam engine regulators became widespread in industry, and practical engineers began to encounter difficulties in designing and setting up these regulators. It was during this period that a number of studies were carried out in which for the first time the steam engine and its regulator were analyzed using mathematical methods as a single dynamic system.

Until approximately the middle of the 20th century, the theory of regulators of steam engines and boilers developed as a branch of applied mechanics. At the same time, methods for analyzing and calculating automatic devices in electrical engineering were developed. The formation of TAU into an independent scientific and educational discipline occurred in the period from 1940 to 1950. At this time, the first monographs and textbooks were published, in which automatic devices of various physical natures were considered using uniform methods.

Currently, TAU, along with the latest sections of the so-called general management theory (operations research, systems engineering, game theory, queuing theory) plays an important role in the improvement and automation of production management.

Why is the study of TAU important?

Automation is one of the main directions of scientific and technological progress and an important means of increasing production efficiency. Modern industrial production is characterized by an increase in scale and complexity of technological processes, an increase in the unit capacity of individual units and installations, the use of intensive, high-speed modes close to critical, increasing requirements for product quality, personnel safety, equipment and the environment.

Economical, reliable and safe operation of complex technical objects can be ensured using only the most advanced technical means, the development, manufacture, installation, commissioning and operation of which is unthinkable without knowledge of TAU.

Modern trends in production automation.

Modern trends in production automation are:

- widespread use of computers for control;

- creation of machines and equipment with built-in microprocessor means of measurement, control and regulation;

- transition to decentralized (distributed) control structures with microcomputers;

- implementation of human-machine systems;

- use of highly reliable technical means;

- automated design of control systems.

1. GENERAL PRINCIPLES OF CONSTRUCTION OF ACS

You will become familiar with:

· basic concepts and definitions;

· the structure of an ACS;

· the classification of ACS.

1.1. Basic concepts and definitions

Algorithm of functioning of a device (system) – a set of instructions leading to the correct execution of the technical process in a device or a set of devices (system).

For example, an electrical system is a set of devices that ensure the unity of the processes of generation, conversion, transmission, distribution and consumption of electrical energy while meeting a number of requirements on the operating parameters (frequency, voltage, power, etc.). The electrical system is designed so that under normal operating conditions these requirements are met, i.e. the correct technical process is carried out. In this case, the functioning algorithm of the electrical system is embodied in the design of its constituent devices (generators, transformers, power lines, etc.) and in the specific circuit of their connection.

However, external circumstances (impacts) may interfere with the proper functioning of the device (system). For example, for an electrical system, such impacts can be: changes in the load of electrical energy consumers, changes in the configuration of the electrical network as a result of switching, short circuits, wire breaks, etc. Therefore, special influences have to be exerted on the device (system), aimed at compensating for the undesirable consequences of external influences and executing the operating algorithm. In this regard, the following concepts are introduced:

Control object (OU)– a device (system) that carries out a technical process and requires specially organized external influences to implement its functioning algorithm.

Control objects are, for example, both individual devices of the electrical system (turbogenerators, power converters of electrical energy, loads) and the electrical system as a whole.

Control algorithm– a set of instructions that determines the nature of external influences on the control object, ensuring its functioning algorithm.

Examples of control algorithms are the algorithms for changing the excitation of a synchronous generator and the steam flow in its turbine in order to compensate for the undesirable effect of changes in consumer load on the voltage levels at the nodes of the electrical system and on the frequency of this voltage.

Control device (CU)– a device that, in accordance with the control algorithm, influences the controlled object.

Examples of control devices are the automatic excitation regulator and the automatic speed regulator of a synchronous generator.

Automatic control system (ACS)– a set of interacting control objects and control devices.

Such, for example, is the automatic excitation system of a synchronous generator, containing the interacting excitation regulator and the synchronous generator itself.


Fig. 1.1 presents a generalized block diagram of an ACS.

Fig. 1.1. Generalized block diagram of an ACS

x(t) – the controlled quantity, a physical quantity characterizing the state of the object.

Often the control object has several controlled quantities x₁(t), x₂(t), …, xₙ(t); in this case one speaks of an n-dimensional object state vector x(t) with the components listed above. Such a control object is called multidimensional.

Examples of controlled quantities in an electrical system are: current, voltage, power, speed, etc.

z_o(t), z_d(t) – respectively the main (acting on the control object) and the additional (acting on the control device) disturbing influences.

Examples of the main disturbing influence z_o(t) are changes in the load of the synchronous generator, in the temperature of its cooling medium, etc.; examples of the additional disturbing influence z_d(t) are changes in the cooling conditions of the CU, instability of the supply voltages of the CU, and so on.

Fig. 1.2. Structure of an automatic control system

Fig. 1.3. Functional diagram of an ACS

Algorithmic structure (diagram) – a structure (diagram) that is a set of interconnected algorithmic links and characterizes the algorithms of information conversion in an ACS.

Here,

an algorithmic link is a part of the algorithmic structure of an ACS corresponding to a specific mathematical or logical signal-conversion algorithm.

If an algorithmic link performs one simple mathematical or logical operation, then it is called elementary algorithmic link. In the diagrams, algorithmic links are represented by rectangles, inside which the corresponding signal conversion operators are written. Sometimes, instead of operators in formula form, graphs of the dependence of the output value on the input or graphs of transition functions are given.

The following types of algorithmic links are distinguished:

· static;

· dynamic;

· arithmetic;

· logical.

Static link –a link that converts the input signal into an output signal instantly (without inertia).

The connection between the input and output signals of a static link is usually described by an algebraic function. Static links include various inertia-free converters, for example, a resistive voltage divider. Figure 1.4a shows a conventional image of a static link in an algorithmic diagram.

Dynamic link– a link that converts the input signal into an output signal in accordance with the operations of integration and differentiation in time.

The connection between the input and output signals of the dynamic link is described by ordinary differential equations.

The class of dynamic links includes elements of an automated control system that have the ability to accumulate any type of energy or substance, for example, an integrator based on an electric capacitor.

Arithmetic link– a link that performs one of the arithmetic operations: summation, subtraction, multiplication, division.

The most common arithmetic link in automation is the one that performs algebraic summation of signals; it is called an adder.

Logical link– a link that performs any logical operation: logical multiplication (“AND”), logical addition (“OR”), logical negation (“NOT”), etc.

The input and output signals of a logic link are usually discrete and are considered as logical variables.

Figure 1.4 shows conventional images of elementary algorithmic links.



Figure 1.4. Conventional images of elementary algorithmic links:

a – static; b – dynamic; c – arithmetic; d – logical

Constructive structure (diagram) – a structure (diagram) reflecting the specific circuitry, hardware and other design features of an ACS.

Constructive diagrams include kinematic diagrams of devices, circuit diagrams and wiring diagrams of electrical connections, etc. Since TAU deals with mathematical models of ACS, constructive diagrams are of much less interest here than functional and algorithmic ones.

1.3. ACS classification

Classification of automated control systems can be carried out according to various principles and characteristics that characterize the purpose and design of systems, the type of energy used, the control and operation algorithms used, etc.

Let us first consider the classification of automated control systems according to the most important features for control theory that characterize the functioning algorithm and control algorithm of the automatic control system.

Depending on the nature of the change of the reference influence over time, ACS are divided into three classes:

· stabilizing;

· program;

· tracking.

Stabilizing ACS – a system whose functioning algorithm contains an instruction to keep the value of the controlled quantity constant:

x(t) ≈ x_z = const. (1.3)

The sign ≈ means that the controlled quantity is maintained at the given level with some error.

Stabilizing automated control systems are the most common in industrial automation. They are used to stabilize various physical quantities that characterize the state of technological objects. An example of a stabilizing automated control system is the excitation control system for a synchronous generator (see Fig. 1.2).

Program ACS – a system whose functioning algorithm contains an instruction to change the controlled quantity in accordance with a time function known in advance:

x(t) ≈ x_z(t) = f_p(t). (1.4)


An example of a program ACS is the system controlling the active power of a synchronous generator at a power station during the day. The controlled quantity in the system is the active power P of the generator load; the law of change of the active power setpoint P_z (the setting influence) is specified in advance as a function of time t over the day (see Fig. 1.5).

Fig. 1.5. Law of change of the active power setpoint

Tracking ACS – a system whose functioning algorithm contains an instruction to change the controlled quantity in accordance with a function of time that is not known in advance:

x(t) ≈ x_z(t) = f_s(t). (1.5)

An example of a tracking ACS is the system controlling the active power of a synchronous generator at a power plant. The controlled quantity is the active power P of the generator load; the law of change of the active power setpoint P_z (the setting influence) is determined, for example, by the power system dispatcher and is of an uncertain nature during the day.

In stabilizing, program and tracking ACS, the control goal is to ensure equality or proximity of the controlled quantity x(t) to its set value x_z(t). Such control, carried out with the aim of maintaining

x(t) ≈ x_z(t), (1.6)

is called regulation.

The control device that performs regulation is called a regulator, and the system itself a regulation system.

Depending on the configuration of the chain of influences, there are three types of ACS:

· with an open circuit of influences (open system);

· with a closed chain of influences (closed system);

· with a combined chain of influences (combined system).

Open-loop ACS – a system in which the controlled quantity is not monitored, i.e. the input influences of its control device are only the external (setting and disturbing) influences.

Open-loop automated control systems can in turn be divided into two types:

· exercising control in accordance with changes in only the setting influence (Fig. 1.6, a);

· exercising control in accordance with changes in both the setting and disturbing influences (Fig. 1.6, b).

Fig. 2.1. Types of signals

When studying ACS and their elements, a number of standard signals called typical input actions are used. These actions are described by simple mathematical functions and are easily reproduced when testing ACS. The use of typical actions makes it possible to unify the analysis of different systems and facilitates the comparison of their transfer properties.

The following typical effects are most widely used in TAU:

· stepped;

· pulsed;

· harmonic;

· linear.

Step action – an action that instantly increases from zero to a certain value and then remains constant (Fig. 2.2, a).

Fig. 2.2. Types of typical input actions

According to the nature of the change of the output value over time, the following operating modes of an ACS element are distinguished:

· static;

· dynamic.

Static mode– state of the ACS element, in which the output value does not change over time, i.e. y(t) = const.

It is obvious that the static mode (or equilibrium state) can only occur when the input influences are constant in time. The relationship between input and output quantities in static mode is described by algebraic equations.

Dynamic mode – a state of an ACS element in which the output value changes continuously over time, i.e. y(t) = var.

The dynamic mode occurs when in the element, after the application of an input influence, processes of establishing a given state or a given change in the output value occur. These processes are generally described by differential equations.

Dynamic modes, in turn, are divided into:

· unsteady (transient);

· steady (quasi-steady).

Unsteady (transient) mode– a mode that exists from the moment the input influence begins to change until the moment when the output value begins to change according to the law of this influence.

Steady state– a mode that occurs after the output value begins to change according to the same law as the input effect, i.e., it occurs after the end of the transient process.

In steady state, the element undergoes forced movement. It is obvious that the static mode is a special case of the steady (forced) mode at x(t) = const.


The concepts of “transient mode” and “steady-state mode” are illustrated by graphs of the change of the output value y(t) under two typical input actions x(t) (Fig. 2.3). The boundary between the transient and the steady-state mode is shown by a vertical dotted line.

Fig. 2.3. Transient and steady-state modes under typical input actions

2.3. Static characteristics of elements

The transfer properties of elements and automatic control systems in static mode are described using static characteristics.

Static characteristic of an element – the dependence of the output quantity y of the element on the input quantity x,

y = f(x) = y(x), (2.10)

in the steady static mode.

The static characteristic of a specific element can be specified in analytical form (for example, y = kx²) or as a graph (Fig. 2.4).

Fig. 2.4. Static characteristic of an element

As a rule, the relationship between the input and output quantities is single-valued. An element with such a relationship is called static (positional) (Fig. 2.5, a). An element with an ambiguous relationship is called astatic (Fig. 2.5, b).

Fig. 2.5. Types of static characteristics

Based on the type of static characteristics, elements are divided into:

· linear;

· nonlinear.

Linear element – an element whose static characteristic is a linear function (Fig. 2.6):

y = b + ax. (2.11)



Fig. 2.6. Types of linear functions

Nonlinear element– an element having a nonlinear static characteristic.

The nonlinear static characteristic is usually expressed analytically in the form of power functions, power polynomials, fractional rational functions and more complex functions (Fig. 2.7).


Fig. 2.7. Types of nonlinear functions

Nonlinear elements, in turn, are divided into:

· elements with an essentially nonlinear static characteristic;

· elements with a non-essentially nonlinear static characteristic.

Non-essentially nonlinear static characteristic – a characteristic described by a continuous, differentiable function.

In practice, this mathematical condition means that the graph of the function y = f(x) has a smooth shape (Fig. 2.5, a). In a limited range of variation of the input quantity x, such a characteristic can be approximately replaced (approximated) by a linear function. The approximate replacement of a nonlinear function by a linear one is called linearization. Linearization of a nonlinear characteristic is legitimate if, during operation of the element, its input quantity varies in a small range around a certain value x = x₀.

Essentially nonlinear static characteristic – a characteristic described by a function that has kinks or discontinuities.

An example of an essentially nonlinear static characteristic is the characteristic of a relay (Fig. 2.5, c): when the input signal x (the current in the relay winding) reaches a certain value x₁, the output signal y (the voltage in the switched circuit) changes from the level y₁ to the level y₂. Replacing such a characteristic with a straight line of constant slope would lead to a significant discrepancy between the mathematical description of the element and the real physical process in it. Therefore, an essentially nonlinear static characteristic cannot be linearized.

Linearization of smooth (non-essentially nonlinear) static characteristics can be carried out either by the tangent method or by the secant method.

For example, linearization by the tangent method consists in expanding the function y(x) in a Taylor series in the neighbourhood of a certain point x₀ and keeping the first two terms of this series:

y(x) ≈ y(x₀) + y′(x₀)(x − x₀), (2.12)

where y′(x₀) is the value of the derivative of the function y(x) at the point A with coordinates x₀ and y₀.



The geometric meaning of such linearization is the replacement of the curve y(x) by the tangent BC drawn to the curve at the point A (Fig. 2.8).

Fig. 2.8. Linearization of a static characteristic by the tangent method
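As an illustration of formula (2.12), the short Python sketch below (not part of the original notes; the quadratic characteristic y = kx² and the numerical values are assumed only for the example) linearizes a smooth static characteristic around an operating point and compares the exact and approximated outputs.

# Tangent-method linearization of a smooth static characteristic y = k*x**2
# around the operating point x0, following y(x) ≈ y(x0) + y'(x0)*(x - x0).
k = 2.0          # assumed characteristic parameter
x0 = 3.0         # operating (linearization) point

def y(x):
    return k * x**2

def dy_dx(x):
    return 2.0 * k * x                      # analytical derivative of k*x**2

def y_lin(x):
    return y(x0) + dy_dx(x0) * (x - x0)     # tangent approximation (2.12)

for dx in (0.05, 0.2, 1.0):                 # small, moderate and large deviations from x0
    x = x0 + dx
    print(f"dx={dx:5.2f}  exact={y(x):8.3f}  linearized={y_lin(x):8.3f}  error={y(x)-y_lin(x):7.3f}")

As expected, the approximation error grows with the deviation from x₀, which is why linearization is only legitimate for small deviations.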

When analysing ACS it is convenient to consider linear static characteristics in deviations of the variables x and y from the values x₀ and y₀:

Δy = y − y₀; (2.13)

Δx = x − x₀. (2.14)

Fig. 2.9. Four-terminal network with linear elements

Nonlinear differential equation – an equation in which the function F contains products, quotients, powers, etc. of the variables y(t), x(t) and their derivatives.

For example, the transfer properties of a four-terminal network with a nonlinear resistor (Fig. 2.10) are described by a nonlinear differential equation of the form (2.18).



Fig. 2.10. Four-terminal network with a nonlinear resistor

The function F (the differential equation) also includes quantities called parameters. They link the arguments (y(t), y′(t), …, y⁽ⁿ⁾(t); x(t), …, x⁽ᵐ⁾(t), t) together and characterize the properties of the element quantitatively. Examples of parameters are the mass of a body, the active resistance, the inductance and the capacitance of a conductor, etc.

Most real elements are described by nonlinear differential equations, which significantly complicates the subsequent analysis of the ACS. Therefore one strives to pass from nonlinear to linear equations of the form

a₀y⁽ⁿ⁾(t) + a₁y⁽ⁿ⁻¹⁾(t) + … + aₙy(t) = b₀x⁽ᵐ⁾(t) + b₁x⁽ᵐ⁻¹⁾(t) + … + bₘx(t). (2.19)

For all real elements the condition m ≤ n is satisfied.

The coefficients a₀, a₁, …, aₙ and b₀, b₁, …, bₘ in equation (2.19) are called parameters. Sometimes the parameters change over time; then the element is called non-stationary, or an element with variable parameters. Such, for example, is the four-terminal network whose diagram is shown in Fig. 2.10.

However, in further discussions we will consider only elements with permanent parameters.

If, when deriving the linear differential equation, the static characteristic of the element was linearized, then the equation is valid only in the vicinity of the linearization point and must be written in deviations of the variables (2.13…2.16). However, to simplify the notation, the deviations of the variables in the linearized equation will be denoted by the same symbols as in the original nonlinear equation, but without the symbol Δ.

The most important practical advantage of the linear equation (2.19) is the possibility of using the principle of superposition, according to which the change of the output value y(t) caused by several input signals xᵢ(t) acting together is equal to the sum of the changes of the output values yᵢ(t) caused by each signal xᵢ(t) separately (Fig. 2.11).


Fig. 2.11. Illustration of the principle of superposition
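A quick numerical check of the superposition principle can be sketched in Python (this example is not from the original notes; the first-order element and the input signals below are chosen arbitrarily as a special case of equation (2.19)).

# Numerical check of the superposition principle for the linear first-order
# element a0*y'(t) + a1*y(t) = b0*x(t), with zero initial conditions.
import numpy as np

a0, a1, b0 = 1.0, 2.0, 3.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

def response(x):
    """Euler integration of a0*y' + a1*y = b0*x with zero initial conditions."""
    y = np.zeros_like(t)
    for k in range(len(t) - 1):
        y[k + 1] = y[k] + dt * (b0 * x[k] - a1 * y[k]) / a0
    return y

x1 = np.sin(2.0 * t)                 # first input
x2 = np.where(t > 1.0, 1.0, 0.0)     # second input (delayed step)
y_sum = response(x1) + response(x2)  # sum of the individual responses
y_joint = response(x1 + x2)          # response to the summed input
print("max deviation:", np.max(np.abs(y_sum - y_joint)))   # ~0 up to numerical error

For a nonlinear element the two results would generally differ, which is what makes linearization so valuable.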

2.4.2. Timing characteristics

The differential equation does not give a visual representation of the dynamic properties of the element, but such a representation is given by the function y(t), i.e., the solution to this equation.

However, the same differential equation can have many solutions, depending on the initial conditions and on the nature of the input action x(t), which is inconvenient when comparing the dynamic properties of different elements. Therefore it was agreed to characterize these properties of an element by just one solution of the differential equation, obtained under zero initial conditions and one of the typical input actions: unit step, delta function, harmonic or linear. The most visual representation of the dynamic properties of an element is given by its transition function h(t).

Transition function h(t) of an element – the change in time of the output value y(t) of the element under a unit step input and zero initial conditions.

The transition function can be specified:

· in the form of a graph;

· in an analytical form.

The transition function, like any solution of the inhomogeneous (with a right-hand side) differential equation (2.19), has two components:

· the forced component h_f(t) (equal to the steady-state value of the output quantity);

· the free component h_c(t) (the solution of the homogeneous equation).

The forced component is obtained by solving equation (2.19) with zero derivatives and x(t) = 1:

h_f(t) = bₘ/aₙ. (2.20)

The free component is obtained by solving equation (2.19) with a zero right-hand side:

h_c(t) = Σₖ cₖ·e^(pₖt), k = 1…n, (2.21)

where pₖ is the k-th root of the characteristic equation (in general a complex number) and cₖ is the k-th integration constant (depending on the initial conditions).

Characteristic equation – an algebraic equation whose degree and coefficients coincide with the order and the coefficients of the left-hand side of a linear differential equation of the form (2.19):

a₀pⁿ + a₁pⁿ⁻¹ + … + aₙ = 0. (2.22)
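To make the connection between the coefficients, the characteristic roots and the transition function concrete, here is a small Python sketch (not from the original notes; it assumes numpy and scipy are available, and the coefficient values are invented). It finds the roots of the characteristic equation (2.22) with numpy.roots and computes the transition function h(t) of the same element with scipy.signal.step.

# Element of the form (2.19): a0*y'' + a1*y' + a2*y = b0*x, with invented coefficients.
import numpy as np
from scipy import signal

a = [1.0, 1.2, 4.0]      # a0, a1, a2 – coefficients of the left-hand (own) side
b = [4.0]                # b0 – coefficient of the right-hand (input) side

poles = np.roots(a)      # roots p_k of the characteristic equation (2.22)
print("characteristic roots:", poles)

t, h = signal.step((b, a))                         # transition function h(t)
print("forced component b_m/a_n =", b[-1] / a[-1]) # cf. (2.20)
print("h(t) at the last computed instant:", h[-1]) # close to the forced value

Since both roots have negative real parts, the free component decays and h(t) settles at the forced value bₘ/aₙ.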

2.4.3. Transfer function

The most common method for describing and analysing automatic control systems is the operational method (the method of operational calculus), which is based on the direct Laplace transform for continuous functions:

F(p) = L{f(t)} = ∫₀^∞ f(t)·e^(−pt) dt. (2.23)

This transform establishes a correspondence between a function of the real variable t and a function of the complex variable p = a + jb. The function f(t) entering the Laplace integral (2.23) is called the original, and the result of the integration, the function F(p), is called the Laplace image of the function f(t).

The transform is possible only for functions that are equal to zero at t < 0. Formally this condition is ensured in TAU by multiplying the function f(t) by the unit step function 1(t) or by choosing the time origin so that f(t) = 0 before it.

The most important properties of the Laplace transform under zero initial conditions are:

L{f′(t)} = pF(p); (2.24)

L{∫f(t)dt} = F(p)/p. (2.25)
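As a sanity check of property (2.24), the following sketch (not from the original notes; it assumes the sympy library is available) transforms a test original f(t) = t·e^(−at), which satisfies f(0) = 0, and its derivative.

# Symbolic check of the differentiation property L{f'(t)} = p*F(p)
# for f(t) = t*exp(-a*t), which is zero at t = 0.
import sympy as sp

t, p = sp.symbols('t p', positive=True)
a = sp.symbols('a', positive=True)

f = t * sp.exp(-a * t)
F = sp.laplace_transform(f, t, p, noconds=True)                     # image of the original
F_deriv = sp.laplace_transform(sp.diff(f, t), t, p, noconds=True)   # image of f'(t)

print(F)                              # prints the image F(p) of the original
print(sp.simplify(F_deriv - p * F))   # prints 0, confirming property (2.24)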

The operational method in TAU has become widespread, since it is used to determine the so-called transfer function, which is the most compact form of describing the dynamic properties of elements and systems.

Applying the direct Laplace transform to the differential equation (2.19) and using property (2.24), we obtain the algebraic equation

D(p)Y(p) = K(p)X(p), (2.26)

where D(p) = a₀pⁿ + a₁pⁿ⁻¹ + … + aₙ is the own operator, (2.27)

and K(p) = b₀pᵐ + b₁pᵐ⁻¹ + … + bₘ is the input operator. (2.28)

Let us introduce the concept of a transfer function.

Transfer function – the ratio of the image of the output quantity to the image of the input quantity at zero initial conditions:

W(p) = Y(p)/X(p). (2.29)

Then, taking into account equation (2.26) and notation (2.27), (2.28), the expression for the transfer function takes the form

W(p) = K(p)/D(p) = (b₀pᵐ + b₁pᵐ⁻¹ + … + bₘ)/(a₀pⁿ + a₁pⁿ⁻¹ + … + aₙ). (2.30)

A value of the variable p at which the transfer function W(p) goes to infinity is called a pole of the transfer function. Obviously, the poles are the roots of the own operator D(p).

A value of the variable p at which the transfer function W(p) goes to zero is called a zero of the transfer function. Obviously, the zeros are the roots of the input operator K(p).

If the coefficient aₙ ≠ 0, the transfer function has no zero pole (p = 0); the element it characterizes is called static, and the transfer function of this element at p = 0 (t = ∞) is equal to the transfer coefficient

k = W(0) = bₘ/aₙ. (2.31)
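A short sketch of how the poles, zeros and transfer coefficient follow from the coefficients of (2.30) (not from the original notes; the numerical coefficients are invented):

# Poles, zeros and transfer coefficient of W(p) = K(p)/D(p) from its coefficients.
import numpy as np

b = [2.0, 6.0]             # input operator K(p) = b0*p + b1
a = [1.0, 3.0, 2.0]        # own operator  D(p) = a0*p^2 + a1*p + a2

zeros = np.roots(b)        # roots of K(p) – zeros of the transfer function
poles = np.roots(a)        # roots of D(p) – poles of the transfer function
k = b[-1] / a[-1]          # transfer coefficient W(0) = b_m / a_n, cf. (2.31)

print("zeros:", zeros)                 # [-3.]
print("poles:", poles)                 # [-2. -1.]
print("transfer coefficient k =", k)   # 3.0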

2.4.4. Frequency characteristics

Frequency characteristics describe the transfer properties of elements and automatic control systems in the mode of steady-state harmonic oscillations caused by external harmonic influence. They find application in TAU, since real disturbances, and therefore the reactions of an element or an automatic control system to them, can be represented as a sum of harmonic signals.

Let us consider the essence and the varieties of frequency characteristics. Let a harmonic action of frequency ω be applied to the input of a linear element (Fig. 2.12, a) at the moment t = 0:

x(t) = x_m·sin ωt. (2.32)

Fig. 2.12. Diagram and curves explaining the essence of frequency characteristics

After the transient process is over, a steady mode of forced oscillations sets in, and the output value y(t) varies according to the same law as the input x(t), but in general with a different amplitude y_m and with a phase shift φ along the time axis relative to the input signal (Fig. 2.12, b):

y(t) = y_m·sin(ωt + φ). (2.33)

Repeating the experiment at a different frequency ω shows that the amplitude y_m and the phase shift φ change, i.e. they depend on frequency. One can also verify that for another element the dependences of y_m and φ on the frequency ω are different. Therefore such dependences can serve as characteristics of the dynamic properties of elements.

The following frequency characteristics are most often used in TAU:

· amplitude frequency response (AFC);

· phase frequency response (PFC);

· amplitude-phase frequency response (APFC).

Amplitude frequency response (AFC) – the dependence on frequency of the ratio of the amplitudes of the output and input signals:

A(ω) = y_m/x_m. (2.34)

The AFC shows how an element transmits signals of different frequencies. An example of an AFC is shown in Fig. 2.13, a.

Fig. 2.13. Frequency characteristics:

a – amplitude; b – phase; c – amplitude-phase; d – logarithmic

Phase frequency response (PFC) – the dependence of the phase shift between the output and input signals on frequency.

The PFC shows what phase lag or lead the element introduces at different frequencies. An example of a PFC is shown in Fig. 2.13, b.

The amplitude and phase characteristics can be combined into one common characteristic – the amplitude-phase frequency response (APFC). The APFC is a function of the complex variable jω:

W(jω) = A(ω)·e^(jφ(ω)) (exponential form), (2.35)

where A(ω) is the modulus of the function and φ(ω) is its argument.

Each fixed frequency value ωᵢ corresponds to a complex number W(jωᵢ), which on the complex plane can be represented by a vector of length A(ωᵢ) at the rotation angle φ(ωᵢ) (Fig. 2.13, c). Negative values of φ(ω), corresponding to a lag of the output signal behind the input, are usually measured clockwise from the positive direction of the real axis.
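The three frequency characteristics are easy to compute from the transfer function by the substitution p = jω; a minimal Python sketch is given below (not from the original notes; the transfer-function coefficients are the same invented ones as in the previous sketch).

# AFC A(w), PFC phi(w) and APFC W(jw) of W(p) = (b0*p + b1)/(a0*p^2 + a1*p + a2).
import numpy as np

b = [2.0, 6.0]                 # numerator coefficients (input operator)
a = [1.0, 3.0, 2.0]            # denominator coefficients (own operator)

w = np.logspace(-2, 2, 400)    # frequency grid, rad/s
W = np.polyval(b, 1j * w) / np.polyval(a, 1j * w)   # APFC: W(jw)

A = np.abs(W)                  # AFC: amplitude ratio A(w)
phi = np.angle(W)              # PFC: phase shift phi(w), rad

print("A(w) at the lowest frequency:", A[0])        # tends to the transfer coefficient k = 3
print("phi(w) at the highest frequency:", phi[-1])  # tends to about -pi/2 for this W(p)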

When changing frequency from zero to infinity
