From 29242d68501db06a133638f36e5fadc9b179cf1d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Zden=C4=9Bk=20Hur=C3=A1k?= Date: Mon, 2 Dec 2024 11:29:53 +0100 Subject: [PATCH] Built site for gh-pages --- .nojekyll | 2 +- classes_PWA 11.html | 1190 ++ classes_references 13.html | 1164 ++ classes_reset 9.html | 1499 ++ classes_reset.html | 136 +- classes_software 16.html | 1107 ++ classes_switched 12.html | 1544 ++ classes_switched.html | 58 +- complementarity_constraints 13.html | 1267 ++ complementarity_references 17.html | 1105 ++ complementarity_simulations 10.html | 1954 +++ complementarity_simulations.html | 565 +- complementarity_software 10.html | 1140 ++ complementarity_systems 9.html | 1352 ++ des 9.html | 1250 ++ des_automata 16.html | 3336 ++++ des_automata.html | 16 +- des_references 9.html | 1108 ++ des_software 7.html | 1173 ++ hybrid_automata 19.html | 1624 ++ hybrid_automata_references 9.html | 1058 ++ hybrid_automata_software 17.html | 1087 ++ hybrid_equations 12.html | 1808 +++ hybrid_equations.html | 154 +- hybrid_equations_references 17.html | 1093 ++ hybrid_equations_software 10.html | 1064 ++ index 11.html | 1048 ++ intro 10.html | 1175 ++ intro_outline 18.html | 1097 ++ intro_references 10.html | 1141 ++ max_plus_algebra 10.html | 8168 ++++++++++ max_plus_algebra.html | 13008 ++++++++-------- max_plus_references 9.html | 1099 ++ max_plus_software 9.html | 1062 ++ max_plus_systems 12.html | 1397 ++ mld_DHA 17.html | 1187 ++ mld_intro 17.html | 1448 ++ mld_references 9.html | 1114 ++ mld_why 15.html | 1115 ++ mpc_mld_explicit 16.html | 1199 ++ mpc_mld_online 10.html | 1131 ++ mpc_mld_references 17.html | 1085 ++ mpc_mld_software 17.html | 1059 ++ petri_nets 8.html | 1679 ++ petri_nets_references 17.html | 1131 ++ petri_nets_software 12.html | 1093 ++ petri_nets_timed 17.html | 1316 ++ search.json | 4 +- sitemap.xml | 2 +- solution_concepts 11.html | 1213 ++ solution_references 10.html | 1097 ++ solution_types 9.html | 1235 ++ stability_concepts 12.html 
| 1232 ++ stability_recap 17.html | 1263 ++ stability_references 5.html | 1150 ++ stability_software 10.html | 1096 ++ ...ility_via_common_lyapunov_function 10.html | 1461 ++ ...ity_via_multiple_lyapunov_function 18.html | 1550 ++ stability_via_multiple_lyapunov_function.html | 2 +- verification_barrier 10.html | 2694 ++++ verification_barrier.html | 2772 ++-- verification_intro 15.html | 1059 ++ verification_reachability 10.html | 1059 ++ verification_references 17.html | 1126 ++ verification_temporal_logics 9.html | 1191 ++ 65 files changed, 85450 insertions(+), 8362 deletions(-) create mode 100644 classes_PWA 11.html create mode 100644 classes_references 13.html create mode 100644 classes_reset 9.html create mode 100644 classes_software 16.html create mode 100644 classes_switched 12.html create mode 100644 complementarity_constraints 13.html create mode 100644 complementarity_references 17.html create mode 100644 complementarity_simulations 10.html create mode 100644 complementarity_software 10.html create mode 100644 complementarity_systems 9.html create mode 100644 des 9.html create mode 100644 des_automata 16.html create mode 100644 des_references 9.html create mode 100644 des_software 7.html create mode 100644 hybrid_automata 19.html create mode 100644 hybrid_automata_references 9.html create mode 100644 hybrid_automata_software 17.html create mode 100644 hybrid_equations 12.html create mode 100644 hybrid_equations_references 17.html create mode 100644 hybrid_equations_software 10.html create mode 100644 index 11.html create mode 100644 intro 10.html create mode 100644 intro_outline 18.html create mode 100644 intro_references 10.html create mode 100644 max_plus_algebra 10.html create mode 100644 max_plus_references 9.html create mode 100644 max_plus_software 9.html create mode 100644 max_plus_systems 12.html create mode 100644 mld_DHA 17.html create mode 100644 mld_intro 17.html create mode 100644 mld_references 9.html create mode 100644 mld_why 15.html 
create mode 100644 mpc_mld_explicit 16.html create mode 100644 mpc_mld_online 10.html create mode 100644 mpc_mld_references 17.html create mode 100644 mpc_mld_software 17.html create mode 100644 petri_nets 8.html create mode 100644 petri_nets_references 17.html create mode 100644 petri_nets_software 12.html create mode 100644 petri_nets_timed 17.html create mode 100644 solution_concepts 11.html create mode 100644 solution_references 10.html create mode 100644 solution_types 9.html create mode 100644 stability_concepts 12.html create mode 100644 stability_recap 17.html create mode 100644 stability_references 5.html create mode 100644 stability_software 10.html create mode 100644 stability_via_common_lyapunov_function 10.html create mode 100644 stability_via_multiple_lyapunov_function 18.html create mode 100644 verification_barrier 10.html create mode 100644 verification_intro 15.html create mode 100644 verification_reachability 10.html create mode 100644 verification_references 17.html create mode 100644 verification_temporal_logics 9.html diff --git a/.nojekyll b/.nojekyll index e04e8f3..02835d6 100644 --- a/.nojekyll +++ b/.nojekyll @@ -1 +1 @@ -4354d9f9 \ No newline at end of file +c7f1012d \ No newline at end of file diff --git a/classes_PWA 11.html b/classes_PWA 11.html new file mode 100644 index 0000000..c796f02 --- /dev/null +++ b/classes_PWA 11.html @@ -0,0 +1,1190 @@ + + + + + + + + + +Piecewise affine (PWA) systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Piecewise affine (PWA) systems

+
+ + + +
+ + + + +
+ + + +
+ + +

This is a subclass of switched systems where the functions on the right-hand side of the differential equations are affine functions of the state. For (mostly historical) reasons these systems are also called piecewise linear (PWL) systems.

+

We are going to reformulate such systems as switched systems with state-driven switching.

+

First, we consider the autonomous case, that is, systems without inputs: +\dot{\bm x} += +\begin{cases} +\bm A_1 \bm x + \bm b_1, & \mathrm{if}\, \bm H_1 \bm x + \bm g_1 \leq 0,\\ +\vdots\\ +\bm A_m \bm x + \bm b_m, & \mathrm{if}\, \bm H_m \bm x + \bm g_m \leq 0. +\end{cases} +
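To make the case analysis concrete, here is a minimal Julia sketch of evaluating the right-hand side of such an autonomous PWA system: it finds the first region whose inequalities hold and applies the corresponding affine dynamics. The matrices \bm A_i, \bm b_i, \bm H_i, \bm g_i below are hypothetical illustrative data, not taken from the text.

```julia
# Hypothetical two-mode PWA system: region i is {x : Hᵢx + gᵢ ≤ 0}.
A = [[0.0 1.0; -1.0 -0.5], [0.0 1.0; -2.0 -0.5]]   # A₁, A₂
b = [[0.0, 1.0], [0.0, -1.0]]                      # b₁, b₂
H = [[1.0 0.0], [-1.0 0.0]]                        # regions x₁ ≤ 0 and x₁ ≥ 0
g = [[0.0], [0.0]]

function pwa_rhs(x)
    for i in eachindex(A)
        if all(H[i]*x .+ g[i] .<= 0)               # first matching region wins
            return A[i]*x + b[i]                   # (the boundary belongs to both)
        end
    end
    error("x is outside all regions")
end

pwa_rhs([-1.0, 0.5])    # mode 1 active: A₁x + b₁
```

Note that on the boundary x₁ = 0 both sets of inequalities hold; this sketch simply takes the first match, which hints at the solution-concept subtleties discussed elsewhere in the course.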

+

The nonautonomous case of systems with inputs is then: +\dot{\bm x} += +\begin{cases} +\bm A_1 \bm x + \bm B_1 u + \bm c_1, & \mathrm{if}\, \bm H_1 \bm x + \bm g_1 \leq 0,\\ +\vdots\\ +\bm A_m \bm x + \bm B_m u + \bm c_m, & \mathrm{if}\, \bm H_m \bm x + \bm g_m \leq 0. +\end{cases} +

+
+

Example 1 (Linear system with saturated linear state feedback) In this example we consider a linear system with a saturated linear state feedback as in Fig 1.

+
+
+
+ +
+
+Figure 1: Linear system with a saturated linear state feedback +
+
+
+

The state equations for the closed-loop system are +\dot{\bm x} = \bm A\bm x + \bm b \,\mathrm{sat}(v), \quad v = \bm k^\top \bm x, + which can be reformulated as a piecewise affine system +\dot{\bm x} = +\begin{cases} +\bm A \bm x - \bm b, & \mathrm{if}\, \bm x \in \mathcal{X}_1,\\ +(\bm A + \bm b \bm k^\top )\bm x, & \mathrm{if}\, \bm x \in \mathcal{X}_2,\\ +\bm A \bm x + \bm b, & \mathrm{if}\, \bm x \in \mathcal{X}_3,\\ +\end{cases} + where the partitioning of the space of control inputs is shown in Fig 2.

+
+
+
+ +
+
+Figure 2: Partitioning the control input space +
+
+
+

Expressed in the state space, the partitioning is +\begin{aligned} +\mathcal{X}_1 &= \{\bm x \mid \bm H_1\bm x + g_1 \leq 0\},\\ +\mathcal{X}_2 &= \{\bm x \mid \bm H_2\bm x + \bm g_2 \leq 0\},\\ +\mathcal{X}_3 &= \{\bm x \mid \bm H_3\bm x + g_3 \leq 0\}, +\end{aligned} + where +\begin{aligned} +\bm H_1 &= \bm k^\top, \quad g_1 = 1,\\ +\bm H_2 &= \begin{bmatrix}-\bm k^\top\\\bm k^\top\end{bmatrix}, \quad \bm g_2 = \begin{bmatrix}-1\\-1\end{bmatrix},\\ +\bm H_3 &= -\bm k^\top, \quad g_3 = 1. +\end{aligned} +
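As a quick sanity check of this partitioning, the following sketch verifies that a state with |\bm k^\top \bm x| < 1 falls into \mathcal X_2 and into no other region. The gain k = [1, 2] is a hypothetical choice of ours, not taken from the text.

```julia
# Check the sets {x : Hᵢx + gᵢ ≤ 0} from the example, for a hypothetical k.
k = [1.0, 2.0]
H₁, g₁ = k',  [1.0]                  # saturated at −1:  kᵀx ≤ −1
H₂, g₂ = [-k'; k'], [-1.0, -1.0]     # linear region:   −1 ≤ kᵀx ≤ 1
H₃, g₃ = -k', [1.0]                  # saturated at +1:  kᵀx ≥ 1

in_region(H, g, x) = all(H*x .+ g .<= 0)

x = [0.3, 0.1]                       # kᵀx = 0.5, i.e., unsaturated
(in_region(H₁, g₁, x), in_region(H₂, g₂, x), in_region(H₃, g₃, x))
```

The tuple evaluates to (false, true, false), confirming that the three sets of inequalities indeed encode the three branches of the saturation.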

+
+
+

Approximation of nonlinear systems

+

While the example with the saturated linear state feedback can be modelled as a PWA system exactly, there are many practical cases in which the system is not exactly PWA, but we still want to approximate it as such.

+
+

Example 2 (Nonlinear system approximated by a PWA system) Consider the following nonlinear system +\begin{bmatrix} +\dot x_1\\\dot x_2 +\end{bmatrix} += +\begin{bmatrix} +x_2\\ +-x_2 |x_2| - x_1 (1+x_1^2) +\end{bmatrix} +

+

Our task is to approximate this system by a PWA system. Equivalently, we need to find a PWA approximation for the right-hand side function.
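One simple (certainly not optimal) way to build such an approximation is to interpolate each scalar nonlinearity linearly between chosen breakpoints. The sketch below does this for the term \varphi(v) = -v|v| appearing in the second state equation; the breakpoint grid is an assumption of ours, and dedicated PWA fitting methods (see the literature section) would do better.

```julia
# Piecewise affine (linear) interpolation of φ(v) = −v|v| on a chosen grid.
φ(v) = -v*abs(v)
breaks = -2.0:1.0:2.0                          # breakpoints (our own choice)

function pwl(v)
    i = clamp(searchsortedlast(breaks, v), 1, length(breaks) - 1)
    v₁, v₂ = breaks[i], breaks[i+1]
    # affine on each segment, exact at the breakpoints
    φ(v₁) + (φ(v₂) - φ(v₁)) / (v₂ - v₁) * (v - v₁)
end

pwl(0.5), φ(0.5)    # (−0.5, −0.25): exact at breakpoints, approximate between
```

Applying the same treatment to the term x_1(1+x_1^2) and combining the segments yields a PWA system whose regions are boxes in the (x_1, x_2) plane.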

+
+ + +
+ + Back to top
+ + +
+ + + + + + \ No newline at end of file diff --git a/classes_references 13.html b/classes_references 13.html new file mode 100644 index 0000000..d162827 --- /dev/null +++ b/classes_references 13.html @@ -0,0 +1,1164 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

There is no single recommended literature for this lecture. Instead, a number of papers and monographs are listed here to get you started should you need to delve deeper.

+
+

Reset control systems

+

The origin of reset control can be traced to the paper [1]. While it may be of historical curiosity to have a look at that paper, which also contains some schematics with opamps, it is perhaps easier to learn the basics of reset control from more recent texts such as the monograph [2]. Alternatively, papers such as [3], [4], [5], or [6] can provide another concise introduction to the topic.

+
+
+

Switched systems

+

A readable introduction to switched systems is given in the slim book [7]. The book is not freely available online, but a useful excerpt can be found in the lecture notes [8].

+

Switched systems can also be viewed as systems described by differential equations with discontinuous right-hand sides. The theory of such systems is described in the classical book [9]. The main concepts and results can also be found in the tutorial [10], perhaps even in a more accessible form. Additionally, an accessible discussion can be found in the beautiful (I really mean it) textbook [11], available online, chapters 3 and 11. What is particularly nice about the latter book is that every concept, even the most theoretical one, is illustrated by a simple Matlab code invoking the epic Chebfun toolbox.

+
+
+

Piecewise affine (PWA) systems

+

In our course we based our treatment of PWA systems on the monograph [12]. It is not freely available online, but it is based on the author’s PhD thesis [13], which is available online. While these resources are a bit outdated (in particular, when it comes to stability analysis, back then the authors were not aware of the possibility to extend the S-procedure to higher-degree polynomials), they still constitute a good starting point. Published at about the same time, the paper [14] reads well (as usual in the case of the second author). A bit more up-to-date book dedicated purely to PWA control is [15], but again, there is no free online version. The book refers to the Matlab toolbox documented in [16]. While the toolbox is rather dated and will hardly run on current versions of Matlab (perhaps an opportunity for a nice student project), the tutorial paper gives some insight into how the whole concept of a PWA approximation can be used in control design.

+
+
+

Piecewise affine (linear) approximation

+

There is quite a lot of relevant know-how available even outside the domain of (control) systems; in particular, search for piecewise affine (or piecewise linear) approximation or fitting (using optimization): [17] (although it is restricted to convex functions), [18], [19], …

+ + + +
+ + Back to top

References

+
+
[1]
J. C. Clegg, “A nonlinear integrator for servomechanisms,” American Institute of Electrical Engineers, Part II: Applications and Industry, Transactions of the, vol. 77, no. 1, pp. 41–42, 1958, doi: 10.1109/TAI.1958.6367399.
+
+
+
[2]
A. Baños and A. Barreiro, Reset Control Systems. in Advances in Industrial Control. London; New York: Springer, 2012. Available: https://doi.org/10.1007/978-1-4471-2250-0
+
+
+
[3]
O. Beker, C. V. Hollot, and Y. Chait, “Plant with integrator: An example of reset control overcoming limitations of linear feedback,” IEEE Transactions on Automatic Control, vol. 46, no. 11, pp. 1797–1799, 2001, doi: 10.1109/9.964694.
+
+
+
[4]
O. Beker, C. V. Hollot, Y. Chait, and H. Han, “Fundamental properties of reset control systems,” Automatica, vol. 40, no. 6, pp. 905–915, Jun. 2004, doi: 10.1016/j.automatica.2004.01.004.
+
+
+
[5]
Y. Guo, Y. Wang, L. Xie, and J. Zheng, “Stability analysis and design of reset systems: Theory and an application,” Automatica, vol. 45, no. 2, pp. 492–497, Feb. 2009, doi: 10.1016/j.automatica.2008.08.016.
+
+
+
[6]
L. Zaccarian, D. Nesic, and A. R. Teel, “First order reset elements and the Clegg integrator revisited,” in Proceedings of the 2005, American Control Conference, 2005., Jun. 2005, pp. 563–568 vol. 1. doi: 10.1109/ACC.2005.1470016.
+
+
+
[7]
D. Liberzon, Switching in Systems and Control. in Systems & Control: Foundations & Applications. Boston, MA: Birkhäuser, 2003. Available: https://doi.org/10.1007/978-1-4612-0017-8
+
+
+
[8]
D. Liberzon, “Switched Systems: Stability Analysis and Control Synthesis,” Lecture {{Notes}}, 2007. Available: http://liberzon.csl.illinois.edu/teaching/Liberzon-LectureNotes.pdf
+
+
+
[9]
A. F. Filippov, Differential Equations with Discontinuous Righthand Sides. in Mathematics and its Applications. Dordrecht: Springer, 1988. Accessed: Jun. 08, 2022. [Online]. Available: https://link.springer.com/book/10.1007/978-94-015-7793-9
+
+
+
[10]
J. Cortes, “Discontinuous dynamical systems: A tutorial on solutions, nonsmooth analysis, and stability,” IEEE Control Systems Magazine, vol. 28, no. 3, pp. 36–73, Jun. 2008, doi: 10.1109/MCS.2008.919306.
+
+
+
[11]
L. N. Trefethen, Á. Birkisson, and T. A. Driscoll, Exploring ODEs. Philadelphia: SIAM-Society for Industrial and Applied Mathematics, 2017. Available: http://people.maths.ox.ac.uk/trefethen/ExplODE/
+
+
+
[12]
M. K.-J. Johansson, Piecewise Linear Control Systems: A Computational Approach. in Lecture Notes in Control and Information Sciences. Berlin, Heidelberg: Springer, 2003. Available: https://doi.org/10.1007/3-540-36801-9
+
+
+
[13]
M. Johansson, “Piecewise Linear Control Systems,” PhD thesis, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1999. Accessed: Aug. 01, 2011. [Online]. Available: http://lup.lub.lu.se/record/19355
+
+
+
[14]
A. Hassibi and S. Boyd, “Quadratic stabilization and control of piecewise-linear systems,” in Proceedings of the 1998 American Control Conference. ACC (IEEE Cat. No.98CH36207), Jun. 1998, pp. 3659–3664 vol.6. doi: 10.1109/ACC.1998.703296.
+
+
+
[15]
L. Rodrigues, B. Samadi, and M. Moarref, Piecewise Affine Control: Continuous-Time, Sampled-Data, and Networked Systems. in Advances in Design and Control. Philadelphia: Society for Industrial and Applied Mathematics, 2019. Available: https://doi.org/10.1137/1.9781611975901
+
+
+
[16]
M. Z. Fekri, B. Samadi, and L. Rodrigues, PWATOOLS: A MATLAB toolbox for piecewise-affine controller synthesis,” in 2012 American Control Conference (ACC), Jun. 2012, pp. 4484–4489. doi: 10.1109/ACC.2012.6315609.
+
+
+
[17]
A. Magnani and S. P. Boyd, “Convex piecewise-linear fitting,” Optimization and Engineering, vol. 10, no. 1, pp. 1–17, Mar. 2008, doi: 10.1007/s11081-008-9045-3.
+
+
+
[18]
J. Huchette and J. P. Vielma, “Nonconvex Piecewise Linear Functions: Advanced Formulations and Simple Modeling Tools,” Operations Research, May 2022, doi: 10.1287/opre.2019.1973.
+
+
+
[19]
A. Toriello and J. P. Vielma, “Fitting piecewise linear continuous functions,” European Journal of Operational Research, vol. 219, no. 1, pp. 86–95, May 2012, doi: 10.1016/j.ejor.2011.12.030.
+
+
+ + +
+ + + + + + \ No newline at end of file diff --git a/classes_reset 9.html b/classes_reset 9.html new file mode 100644 index 0000000..37db266 --- /dev/null +++ b/classes_reset 9.html @@ -0,0 +1,1499 @@ + + + + + + + + + +Reset systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Reset systems

+
+ + + +
+ + + + +
+ + + +
+ + +

We have introduced two major modeling frameworks for hybrid systems – hybrid automata and hybrid equations. Now we are ready to model any hybrid system. It turns out useful, however, to define a few special classes of hybrid systems. Their special features are reflected in the structure of their models (hybrid automata or hybrid equations). The special classes of hybrid systems that we are going to discuss are

+
    +
  • reset systems,
  • +
  • switched systems,
  • +
  • piecewise affine (PWA) systems.
  • +
+
+

Reset systems

+

They are also called impulsive systems (the reason will become clear soon). They are conveniently defined within the hybrid automata framework: in a hybrid automaton modelling a reset system we can identify only a single discrete state (mode), no more. In the digraph representation, we thus observe only a single node.

+
+
+
+ +
+
+Figure 1: Reset system +
+
+
+

Within the hybrid equations framework, in a reset system some variables reset (jump) and flow, while others only flow, but there are no variables that only reset… Admittedly, this characterization is not perfect, because, as we have discussed earlier, even when staying constant between two jumps, a state variable is, technically speaking, also flowing. What we want to express is that there are no discrete variables in such a model, but the hybrid equations framework intentionally does not distinguish between continuous and discrete variables.

+

We can recognize the bouncing ball as a prominent example of a reset system. Another example follows.

+
+

Example 1 (Reset oscillator) We consider a hybrid system state-space modelled by the following hybrid equations: +\begin{aligned} +\begin{bmatrix} +\dot x_1\\ \dot x_2 +\end{bmatrix} +&= +\begin{bmatrix} +0 & 1\\ -1 & 2\delta +\end{bmatrix} +\begin{bmatrix} +x_1\\x_2 +\end{bmatrix} ++ +\begin{bmatrix} +0\\1 +\end{bmatrix}, +\quad \bm x \in \mathcal C,\\ +x_1^+ &= -x_1, \quad \bm x \in \mathcal D, +\end{aligned} + where +\begin{aligned} +\mathcal D &= \{\bm x \in \mathbb R^2 \mid x_1<0, x_2=0\},\\ +\mathcal C &= \mathbb R^2\setminus\mathcal D. +\end{aligned} +

+

Simulation outcomes for some concrete value of the small positive parameter \delta are shown in the following figure.

+
+
+Show the code +
using OrdinaryDiffEq
+
+δ = 0.1
+A = [0.0 1.0;
+    -1.0 2δ]
+b = [0.0; 1.0]
+
+x0 = [0.2, 0.0]
+tspan = (0.0, 100)
+f(x, p, t) = A*x + b
+cond_fcn(x, t, integrator) = x[1]<0 ? x[2] : 1.0
+affect!(integrator) = integrator.u[1] = -integrator.u[1]
+cb = ContinuousCallback(cond_fcn, affect!)
+prob = ODEProblem(f, x0, tspan)
+
+sol = solve(prob, Tsit5(),callback=cb, reltol = 1e-6, abstol = 1e-6, saveat = 0.1)
+
+using Plots
+plot(sol[1,:],sol[2,:],lw=2,legend=false, tickfontsize=12, xtickfontsize=12, ytickfontsize=12)
+xlabel!("x₁")
+ylabel!("x₂")
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Isn’t it fascinating that a linear system augmented with resetting can exhibit such complex behavior?

+
+
+
+

Clegg’s integrator (CI)

+

Clegg’s integrator is a reset element that can be used in control systems.

+

Its function is as follows. As soon as the sign of the input changes, the integrator resets to zero. As a consequence, the integrator keeps the sign of its input and output identical.

+

Unlike the traditional (linear) integrator, the CI exhibits a much smaller phase lag (some 38 vs 90 degrees).
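The roughly 38-degree figure comes from the describing function of the Clegg integrator, which the reset-control literature gives as N(j\omega) = \frac{1}{\omega}\left(\frac{4}{\pi} - j\right); its phase is -\arctan(\pi/4) \approx -38.15^\circ, versus -90^\circ for a linear integrator. We quote this standard result without derivation; a short check:

```julia
# Phase of the Clegg integrator's describing function (1/ω)(4/π − j),
# compared with a linear integrator 1/(jω). Frequency-independent phases.
phase_CI  = rad2deg(angle(4/π - im))   # ≈ −38.15°
phase_lin = rad2deg(angle(-im))        # = −90°
phase_lin - phase_CI                   # the CI lags about 52° less
```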

+
+

Example 2 (Response of Clegg’s integrator to a sinusoidal input) Here is a response of the Clegg’s integrator to a sinusoidal input.

+
+
+Show the code +
using OrdinaryDiffEq
+f(x, u, t) = u(t)                               # We adhere to the control systems notation that x is the state variable and u is the input.
+x0 = 0.0                                        # The initial state.
+tspan = (0.0, 10)                               # The time span.
+u = t -> 1.0*sin(t)                             # The (control) input.
+cond_fcn(x, t, integrator) = integrator.p(t)    # The condition function. If zero, the event is triggered.
+affect!(integrator) = integrator.u = 0.0        # Beware that internally, u is the state variable. Here, the state variable is reset to zero.
+cb = ContinuousCallback(cond_fcn, affect!)
+prob = ODEProblem(f, x0, tspan, u)
+sol = solve(prob, Tsit5(),callback=cb, reltol = 1e-6, abstol = 1e-6, saveat = 0.1)
+
+using Plots
+t = sol.t
+plot(sol.t,u.(t),label="u",lw=2)
+plot!(sol,lw=2,label="x", tickfontsize=12, xtickfontsize=12, ytickfontsize=12)
+xlabel!("t")
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+

It may be of historical curiosity that originally the concept was presented in the form of an analog circuit (opamps, diodes, resistors, capacitors). See the references if you are interested.

+
+
+

First-order reset element (FORE)

+

Another simple reset element that can be used in control systems is known as FORE (first-order reset element) described by +\begin{array}{lr} +\dot u = a u + k e, & \mathrm{when}\; e\neq 0,\\ +u^+ = 0, & \mathrm{when}\; e = 0. +\end{array} +

+
+

Example 3 (FORE) Consider a plant modelled by G(s) = \frac{s+1}{s(s+0.2)} and a first-order controller C=\frac{1}{s+1} in the feedback loop as in Fig 2.

+
+
+
+ +
+
+Figure 2: First-order controller in a feedback loop +
+
+
+

The response of the closed-loop system to a step reference input is shown using the following code.

+
+
+Show the code +
using ModelingToolkit, Plots, OrdinaryDiffEq
+using ModelingToolkit: t_nounits as t
+using ModelingToolkit: D_nounits as D
+
+function plant(; name)
+    @variables x₁(t)=0 x₂(t) = 0 u(t) y(t)
+    eqs = [D(x₁) ~ x₂
+           D(x₂) ~ -0.2x₂ + u
+           y ~ x₁ + x₂]
+    ODESystem(eqs, t; name = name)
+end
+
+function controller(; name) 
+    @variables x(t)=0 u(t) y(t)
+    eqs = [D(x) ~ -x + u
+           y ~ x]
+    ODESystem(eqs, t, name = name)
+end
+
+@named C = controller()
+@named P = plant()
+
+t_of_step = 1.0
+r(t) = t >= t_of_step ? 1.0 : 0.0
+@register_symbolic r(t)
+
+connections = [C.u ~ r(t) - P.y
+               C.y ~ P.u]
+
+@named T = ODESystem(connections, t, systems = [C, P])
+
+T = structural_simplify(T)
+equations(T)
+observed(T)
+
+using DifferentialEquations: solve
+prob = ODEProblem(complete(T), [], (0.0, 30.0), [])
+sol = solve(prob, Tsit5(), saveat = 0.1)
+
+using Plots
+plot(sol.t, sol[P.y], label = "", xlabel = "t", ylabel = "y", lw = 2)
+
+
+

Now we turn the first-order controller into a FORE controller by augmenting it with the resetting functionality described above. The feedback loop is in Fig 3.

+
+
+
+ +
+
+Figure 3: First-order reset element (FORE) in a feedback loop +
+
+
+

The response of the closed-loop system to a step reference input is shown using the following code.

+
+
+Show the code +
using ModelingToolkit, Plots, OrdinaryDiffEq
+using ModelingToolkit: t_nounits as t
+using ModelingToolkit: D_nounits as D
+
+function plant(; name)
+    @variables x₁(t)=0 x₂(t) = 0 u(t) y(t)
+    eqs = [D(x₁) ~ x₂
+           D(x₂) ~ -0.2x₂ + u
+           y ~ x₁ + x₂]
+    ODESystem(eqs, t; name = name)
+end
+
+function controller(; name) 
+    @variables x(t)=0 u(t) y(t)
+    eqs = [D(x) ~ -x + u
+           y ~ x]
+    ODESystem(eqs, t, name = name)
+end
+
+@named C = controller()
+@named P = plant()
+
+t_of_step = 1.0
+r(t) = t >= t_of_step ? 1.0 : 0.0
+@register_symbolic r(t)
+
+connections = [C.u ~ r(t) - P.y
+               C.y ~ P.u]
+
+zero_crossed = [C.u ~ 0]
+reset = [C.x ~ 0]               
+
+@named T = ODESystem(connections, t, systems = [C, P], continuous_events = zero_crossed => reset)
+
+T = structural_simplify(T)
+equations(T)
+observed(T)
+
+using DifferentialEquations: solve
+prob = ODEProblem(complete(T), [], (0.0, 30.0), [])
+sol = solve(prob, Tsit5(), saveat = 0.1)
+
+using Plots
+plot(sol.t, sol[P.y], label = "", xlabel = "t", ylabel = "y", lw = 2)
+
+
+

Obviously, the introduction of the resetting functionality into the first-order controller had a positive effect on the transient response of the closed-loop system.

+
+
+
+

When (not) to use reset control?

+

However conceptually simple, reset control is not a panacea. Analysis and design of reset control systems is not straightforward compared to traditional linear control systems. In particular, guaranteeing closed-loop stability upon introduction of resetting into a linear controller is not easy and may require advanced concepts (some of which we are going to introduce later in the course). Therefore we should use reset control with care, and we should always first do our best to find a linear controller whose performance is comparable to or even better than that of the reset control system.

+

But reset control can be helpful if the plant is subject to fundamental limitations on achievable control performance such as

+
    +
  • integrators and unstable poles,
  • +
  • zeros in the right half-plane (non-minimum phase),
  • +
  • delays,
  • +
  • +
+

In these situations reset control can be a way to beat the so-called waterbed effect.

+ + +
+ + Back to top
+ + +
+ + + + + + \ No newline at end of file diff --git a/classes_reset.html b/classes_reset.html index 6bf3d64..c726366 100644 --- a/classes_reset.html +++ b/classes_reset.html @@ -770,49 +770,49 @@

Reset systems

- + - + - + - + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -850,53 +850,53 @@

Clegg’s integrator - + - + - + - + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/classes_software 16.html b/classes_software 16.html new file mode 100644 index 0000000..a0ae8ac --- /dev/null +++ b/classes_software 16.html @@ -0,0 +1,1107 @@ + + + + + + + + + +Software – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

Reset systems, switched systems, and piecewise affine (PWA) systems can all be viewed as special classes of hybrid systems, and as such can be modelled and simulated using software for hybrid systems (or even general purpose modelling and simulation software).

+

However, there are also some dedicated software tools/packages/libraries for PWA systems:

+ +

Although some more free and open-source software packages for PWA systems can be found on the internet, none of them (as far as we know) is actively developed or even just maintained anymore:

+
    +
  • PWATOOLS toolbox for Matlab, which accompanies the recently published book Rodrigues, Samadi, and Moarref (2019). Unfortunately, this ten-year-old toolbox no longer works with recent releases of Matlab, and the author is no longer maintaining it.
  • +
  • PWLTool toolbox for Matlab: some traces of this toolbox can be found on the internet, but this one seems even older, obviously back then accompanying the book Johansson (2003).
  • +
+

Overall, besides the MPT toolbox, which is still being actively developed (by our colleagues at STU Bratislava), not much is currently available within the open-source software domain… :-(

+ + + + + Back to top

References

+
+Johansson, Mikael K.-J. 2003. Piecewise Linear Control Systems: A Computational Approach. Lecture Notes in Control and Information Sciences. Berlin, Heidelberg: Springer. https://doi.org/10.1007/3-540-36801-9. +
+
+Rodrigues, Luis, Behzad Samadi, and Miad Moarref. 2019. Piecewise Affine Control: Continuous-Time, Sampled-Data, and Networked Systems. Advances in Design and Control. Philadelphia: Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9781611975901. +
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/classes_switched 12.html b/classes_switched 12.html new file mode 100644 index 0000000..2651432 --- /dev/null +++ b/classes_switched 12.html @@ -0,0 +1,1544 @@ + + + + + + + + + +Switched systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Switched systems

+
+ + + +
+ + + + +
+ + + +
+ + +

Switched systems are modelled by first-order differential (state) equations with multiple right-hand sides, that is,

+

+\dot{\bm x} = \bm f_q(\bm x), \qquad q \in \{1,2, \ldots, m\}, +\tag{1} where m right-hand sides are possible and the lower index q determines which right-hand side function is “active” at a given moment.

+

The question is, what dictates the evolution of the integer variable q? In other words, what drives the switching? It turns out that the switching can be time-driven or state-driven.

+

In both cases, the right-hand sides can also depend on the control input \bm u.

+

Major results for switched systems have been achieved without the need to refer to the framework of hybrid systems. But now that we have built such a general framework, it turns out useful to view switched systems as a special class of hybrid systems. The aspects in which they are special will be discussed in the following, but here let us state that, in contrast to full hybrid systems, switched systems are a bit less rich on the discrete side.

+
+

Time-driven

+

The evolution of the state variable complies with the following model \dot{\bm x} = \bm f_{q(t)}(\bm x), where q(t) is some function of time. The values of q(t) can be under our control or beyond our control, deterministic or stochastic.

+

A hybrid automaton for a time-driven switched system is shown in Fig 1.

+
+
+
+ +
+
+Figure 1: An automaton for a switched system with time-driven switching +
+
+
+

The transition from one mode to another is triggered by the integer variable q(t) attaining the appropriate value.

+

Since the switching signal is unrelated to the (continuous) state of the system, the invariants of the two modes usually cover the whole state space \mathcal X.
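A minimal sketch of time-driven switching follows; the two vector fields and the one-second toggling signal q(t) are illustrative choices of ours, not taken from the text.

```julia
# Two hypothetical modes; q(t) deterministically toggles every second.
f = (x -> [x[2], -x[1]],           # mode 1: harmonic oscillator
     x -> [x[2], -x[1] - x[2]])    # mode 2: damped oscillator
q(t) = 1 + (floor(Int, t) % 2)     # q(t) ∈ {1, 2}

rhs(x, t) = f[q(t)](x)             # ẋ = f_{q(t)}(x)

# one explicit Euler step, just to exercise the switching signal
x = [1.0, 0.0]
x + 0.01 * rhs(x, 1.5)             # mode 2 is active at t = 1.5
```

In a real simulation one would of course hand `rhs` to an ODE solver (as in the examples below) and make the solver aware of the switching instants.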

+
+
+

State-dependent switching

+

The model is +\dot{\bm x} += +\begin{cases} +\bm f_1(\bm x), & \mathrm{if}\, \bm x \in \mathcal{X}_1,\\ +\vdots\\ +\bm f_m(\bm x), & \mathrm{if}\, \bm x \in \mathcal{X}_m. +\end{cases} +

+

Let’s consider just two domains \mathcal X_1 and \mathcal X_2. A hybrid automaton for a state-driven switched system is shown in Fig 2.

+
+
+
+ +
+
+Figure 2: An automaton for a switched system with state-driven switching +
+
+
+

The transition to the other mode is triggered by the continuous state of the system crossing the boundary between the two domains. The boundary is defined by the function s(\bm x) (called switching function), which is zero on the boundary, see the Fig 3.

+
+
+
+ +
+
+Figure 3: State-dependent switching +
+
+
+

Through examples we now illustrate the possible behaviors of the system when the flow transverses the boundary, when it pulls away from the boundary, and when it pushes towards the boundary.

+
+

Example 1 (The flow transverses the boundary) We consider the two right-hand sides of the state equation +\bm f_1(\bm x) = \begin{bmatrix}1\\ x_1^2 + 2x_2^2\end{bmatrix} + and +\bm f_2(\bm x) = \begin{bmatrix}1\\ 2x_1^2+3x_2^2-2\end{bmatrix} + and the switching function +s(x_1,x_2) = (x_1+0.05)^2 + (x_2+0.15)^2 - 1. +

+

The state portrait that also shows the switching function is generated using the following code.

+
+
+Show the code +
s(x₁,x₂) = (x₁+0.05)^2 + (x₂+0.15)^2 - 1.0
+
+f₁(x₁,x₂) = x₁^2 + 2x₂^2
+f₂(x₁,x₂) = 2x₁^2+3x₂^2-2.0
+
+f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [1,f₁(x₁,x₂)] : [1,f₂(x₁,x₂)] 
+
+N = 100
+x₁ = range(0, stop = 0.94, length = N)
+
+using CairoMakie
+fig = Figure(size = (600, 600),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,(x₁,x₂)->Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
+lines!(ax,x₁,sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth=5)
+x10 = 0.5
+x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15
+Makie.scatter!(ax,[x10],[x20],color=:blue,markersize=30)
+fig
+
+
+
+
+

+
+
+
+
+

The state portrait also shows a particular initial state \bm x_0 using a blue dot. Note that the projection of both vector fields \mathbf f_1 and \mathbf f_2 evaluated at \bm x_0 onto the normal (the gradient) of the switching function at \bm x_0 is positive, that is +\left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} \geq 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} \geq 0. +

+

This is consistent with the observation that the flow goes through the boundary.
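As a quick numerical sanity check of the transversality condition, we can evaluate the two inner products at the blue dot (the functions below merely restate s, \bm f_1, \bm f_2 from the example; the gradient is written out by hand):

```julia
# Spot-check of the transversality condition at the blue dot of Example 1.
using LinearAlgebra   # for dot

s(x)  = (x[1] + 0.05)^2 + (x[2] + 0.15)^2 - 1
∇s(x) = [2(x[1] + 0.05), 2(x[2] + 0.15)]      # gradient of s
f1(x) = [1.0, x[1]^2 + 2x[2]^2]
f2(x) = [1.0, 2x[1]^2 + 3x[2]^2 - 2]

x10 = 0.5
x0  = [x10, sqrt(1 - (x10 + 0.05)^2) - 0.15]  # the blue dot lies on s(x) = 0

dot(∇s(x0), f1(x0)), dot(∇s(x0), f2(x0))      # both positive
```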

+

We can also plot a particular solution of the ODE using the following code.

+
+
+Show the code +
using DifferentialEquations
+F(u, p, t) = f(u[1],u[2])
+u0 = [0.0,0.4]
+tspan = (0.0, 1.0)
+prob = ODEProblem(F, u0, tspan)
+sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)
+
+using Plots
+Plots.plot(sol,lw=3,xaxis="Time",yaxis="x",label=false)
+
+
+
+

Strictly speaking, this solution does not satisfy the differential equation on the boundary of the two domains (the derivative of x_2 does not exist there). This is visually recognized in the above plot as the sharp corner in the solution. But other than that, the solution is perfectly “reasonable” – for a while the system evolves according to one state equation, then at one particular moment it starts evolving according to the other state equation. That is it. Not much more to see here.

+
+
+

Example 2 (The flow pulls away from the boundary) We now consider another pair of the right-hand sides. +\bm f_1(\bm x) = \begin{bmatrix}-1\\ x_1^2 + 2x_2^2\end{bmatrix} + and +\bm f_2(\bm x) = \begin{bmatrix}1\\ 2x_1^2+3x_2^2-2\end{bmatrix}. +

+

The switching function is the same as in the previous example.

+

The state portrait is below.

+
+
+Show the code +
f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [-1, f₁(x₁,x₂)] : [1, f₂(x₁,x₂)] 
+
+fig = Figure(size = (600, 600),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,(x₁,x₂)->Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
+lines!(ax,x₁,sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth=5)
+x10 = 0.8
+x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15
+Makie.scatter!(ax,[x10],[x20],color=:blue,markersize=30)
+fig
+
+
+
+
+

+
+
+
+
+

We focus on the blue dot again. The projections of the two vector fields onto the normal of the switching function satisfy +\left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} \leq 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} \geq 0. +

+

The only interpretation of this situation is that the solution starting at \bm x_0 is not unique – from \bm x_0 the state can leave the boundary into either of the two domains. Again, not much more to see here.

+
+
+

Example 3 (The flow pushes towards the boundary) And one last pair of the right-hand sides: +\bm f_1(\bm x) = \begin{bmatrix}1\\ x_1^2 + 2x_2^2\end{bmatrix} + and +\bm f_2(\bm x) = \begin{bmatrix}-1\\ 2x_1^2+3x_2^2-2\end{bmatrix}. +

+

The state-portrait is below.

+
+
+Show the code +
f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [1, f₁(x₁,x₂)] : [-1, f₂(x₁,x₂)] 
+
+fig = Figure(size = (600, 600),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,(x₁,x₂)->Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
+lines!(ax,x₁,sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth=5)
+x10 = 0.5
+x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15
+Makie.scatter!(ax,[x10],[x20],color=:blue,markersize=30)
+fig
+
+
+
+
+

+
+
+
+
+

The projections of the two vector fields onto the normal of the switching function satisfy +\left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} \geq 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} \leq 0. +

+

But this is interesting! Once the trajectory hits the switching curve and tries to penetrate it further, it is pushed back to the switching curve. As it tries to penetrate it again, it is pushed back again. And so on. But then, how does the state evolve from \bm x_0?

+

Hint: solve the ODE numerically with some finite step size. The solution will exhibit zig-zagging or chattering along the switching curve, away from the blue point. Now, keep shrinking the step size. The solution will ultimately “slide” smoothly along the switching curve. Perhaps this was your guess. One thing should worry you, however: such “sliding” solution satisfies neither of the two state equations!
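The hinted experiment can be sketched with a naive fixed-step forward Euler scheme (the functions below restate \bm f_1, \bm f_2 and s from Example 3; the step size, horizon, and initial state are ad-hoc choices). After the trajectory hits the switching curve, the value of s(\bm x) keeps oscillating within a band whose width is of order h:

```julia
# Naive fixed-step forward Euler on the state-dependent switched system
# of Example 3; records s(x) along the way to expose the chattering.
s(x)  = (x[1] + 0.05)^2 + (x[2] + 0.15)^2 - 1
f1(x) = [1.0, x[1]^2 + 2x[2]^2]           # mode for s(x) ≤ 0
f2(x) = [-1.0, 2x[1]^2 + 3x[2]^2 - 2]     # mode for s(x) > 0

function euler_chatter(x0, h, n)
    x = copy(x0)
    svals = Float64[]
    for _ in 1:n
        x += h * (s(x) <= 0 ? f1(x) : f2(x))
        push!(svals, s(x))
    end
    return svals
end

svals = euler_chatter([0.0, 0.4], 1e-3, 1200)
k = findfirst(>(0), svals)        # first crossing of the switching curve
maximum(abs, svals[k:end])        # stays within an O(h) band afterwards
```

Shrinking h shrinks the band; in the limit the zig-zag turns into the smooth sliding described below.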

+

We will make this more rigorous in a moment, but right now we just wanted to tease the intuition.

+
+
+
+

Conditions for existence and uniqueness of solutions of ODE

+

In order to analyze the situations such as the previous example, we need to recapitulate some elementary facts about the existence and uniqueness of solutions of ordinary differential equations (ODEs). And then we are going to add some new stuff.

+

Consider the ODE

+

\dot x(t) = f(x(t),t).

+

We ask the following two questions:

+
    +
  • Under which conditions does a solution exist?
  • +
  • Under which conditions is the solution unique?
  • +
+

To answer both, the function f() must be analyzed.

+

But before we answer the two questions, we must ask another one that is even more fundamental:

+
    +
  • What does it mean that a function x(t) is a solution of the ODE?
  • +
+

However trivial this question may seem, an answer can escalate rather quickly – there are actually several concepts of a solution of an ordinary differential equation.

+
+

Classical solution (Peano, also Cauchy-Peano)

+

We assume that f(x(t),t) is continuous with respect to both x and t. Then existence of a solution is guaranteed locally (on some finite interval), but uniqueness is not.

+
+
+
+ +
+
+Not guaranteed does not mean impossible +
+
+
+

Uniqueness is not excluded in all cases; it is just not guaranteed.

+
+
+

A solution is guaranteed to be continuously differentiable ( x\in\mathrm C^1 ). Such a function x(t) satisfies the ODE \dot x(t) = f(x(t),t) \; \forall t, which is why such a solution is called classical.

+
+

Example 4 An example of a solution that exists only on a finite interval is + \dot x(t) = x^2(t),\; x(0) = 1, +
+for which the solution is x(t) = \frac{1}{1-t} . The solution blows up at t=1 .
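The finite escape time can be observed numerically with a crude fixed-step forward Euler sketch (step size chosen ad hoc); it tracks the exact solution 1/(1-t) well as long as we stop short of the escape time t=1:

```julia
# Forward Euler on ẋ = x², x(0) = 1; the exact solution x(t) = 1/(1-t)
# blows up at t = 1, so we can only integrate on [0, T] with T < 1.
function euler(f, x0, h, T)
    x = x0
    for _ in 1:round(Int, T / h)
        x += h * f(x)
    end
    return x
end

euler(x -> x^2, 1.0, 1e-5, 0.9)   # close to the exact x(0.9) = 10
```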

+
+
+

Example 5 An example of nonuniqueness is provided by \dot x(t) = \sqrt{x(t)}, \; x(0) = 0.

+

One possible solution is x(t) = \frac{1}{4}t^2. Another is x(t) = 0. Yet another is obtained by staying at zero for a while and then taking off: x(t) = 0 for t\leq t_0 and x(t) = \frac{1}{4}(t-t_0)^2 for t>t_0, for an arbitrary t_0>0. It is related to the Leaky bucket example.
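Both of the closed-form candidates are easy to verify on a grid of time instants (a trivial check; the grid is an arbitrary choice):

```julia
# Verify that both x(t) = 0 and x(t) = t²/4 satisfy ẋ = √x with x(0) = 0
rhs(x) = sqrt(x)
xa(t) = 0.0;      dxa(t) = 0.0        # candidate 1 and its derivative
xb(t) = t^2 / 4;  dxb(t) = t / 2      # candidate 2 and its derivative

ts = range(0.0, 2.0, length = 21)
all(abs(dxa(t) - rhs(xa(t))) < 1e-12 for t in ts) &&
all(abs(dxb(t) - rhs(xb(t))) < 1e-12 for t in ts)   # true
```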

+
+
+
+

Strengthening the requirement of continuity (Picard-Lindelöf)

+

Since continuity of f(x(t),t) was not enough to guarantee uniqueness, we need to impose a stricter condition on f(). Namely, we impose a stricter condition on f() with respect to x – Lipschitz continuity, while we still require that the function be continuous with respect to t.

+

Now it is not only existence but also uniqueness of a solution that is guaranteed.

+
+
+
+ +
+
+Uniqueness not guaranteed does not mean it is impossible +
+
+
+

As with the Peano conditions, here too the condition is only sufficient, not necessary – even if the function f is not Lipschitz continuous, a unique solution may exist.

+
+
+

Since the condition is stricter than mere continuity, everything that held under mere continuity holds here too. In particular, the solution is guaranteed to be continuously differentiable.

+

If the function is only locally Lipschitz, the solution is guaranteed on some finite interval. If the function is (globally) Lipschitz, the solution is guaranteed on an unbounded interval.

+
+
+

Extending the set of solutions (Carathéodory)

+

In contrast with the classical solution, we can allow the solution x(t) to fail to satisfy the ODE at some isolated points in time. This is called Carathéodory (or extended) solution.

+

A Carathéodory solution x(t) is more than just continuous (even more than uniformly continuous) but less than continuously differentiable (aka \mathcal C^1) – it is absolutely continuous. An absolutely continuous function is a solution of the integral equation (indeed, an equation) x(t) = x(t_0) + \int_{t_0}^t f(x(\tau),\tau)\mathrm{d}\tau,
+where we use Lebesgue integral (instead of Riemann).

+

Having referred to absolute continuity and Lebesgue integral, the discussion could quickly become rather technical. But all we want to say is that f can be “some kind of discontinuous” with respect to t. In particular, it must be measurable wrt t, which again seems to start escalating… But it suffices to say that it includes the case when f(x,t) is piecewise continuous with respect to t (sampled data control with ZOH).
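A minimal sketch of this ZOH situation (a hypothetical first-order plant with a made-up piecewise-constant input; forward Euler is used just for illustration): the input jumps at t = 1, yet the state remains continuous — only its derivative has a corner there.

```julia
# ẋ = -x + u(t), x(0) = 1, with u(t) piecewise constant (a ZOH-like
# input jumping at t = 1); the solution is absolutely continuous, with
# a corner (jump in ẋ) at the discontinuity of u.
u(t) = t < 1.0 ? 0.0 : 1.0

function simulate(x0, h, tfinal)
    x = x0
    for k in 1:round(Int, tfinal / h)
        x += h * (-x + u((k - 1) * h))
    end
    return x
end

simulate(1.0, 1e-4, 2.0)   # ≈ 1 + (e⁻¹ - 1)e⁻¹ ≈ 0.7675
```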

+

Needless to say, for a continuous f, solutions x are just classical (smooth).

+

If the function f is discontinuous with respect to x, some more concepts of a solution need to be invoked so that existence and uniqueness can be analyzed.

+
+

Example 6 (Some more examples of nonexistence and nonuniqueness of solutions) The system with a discontinuous RHS +\begin{aligned} +\dot x_1 &= -2x_1 - 2x_2\operatorname{sgn}(x_1),\\ +\dot x_2 &= x_2 + 4x_1\operatorname{sgn}(x_1) +\end{aligned} + can be reformulated as a switched system +\begin{aligned} +\dot{\bm x} &= \begin{bmatrix}-2 & 2\\-4 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}, \quad s(\bm x)\leq 0\\ +\dot{\bm x} &= \begin{bmatrix}-2 & -2\\4 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}, \quad s(\bm x)> 0, +\end{aligned} + where the switching function is s(\bm x) = x_1.

+
+
+Show the code +
s(x) = x[1]
+
+f₁(x) = [-2x[1] + 2x[2], x[2] - 4x[1]]
+f₂(x) = [-2x[1] - 2x[2], x[2] + 4x[1]]
+
+f(x) = s(x) <= 0.0 ? f₁(x) : f₂(x) 
+
+using CairoMakie
+fig = Figure(size = (600, 600),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,x->Point2f(f(x)), -1.5..1.5, -1.5..1.5, colormap = :magma)
+vlines!(ax,0; ymin = -1.1, ymax = 1.1, color = :red)
+fig
+
+
+
+
+

+
+
+
+
+
+
+
+

Sliding mode dynamics (on simple boundaries)

+

The previous example provided yet another illustration of a phenomenon of sliding, or a sliding mode. We say that there is an attractive sliding mode at \bm x_\mathrm{s}, if there is a trajectory that ends at \bm x_\mathrm{s}, but no trajectory that starts at \bm x_\mathrm{s}.
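A quick forward-Euler experiment with the system from the previous example illustrates the attractiveness (the step size and initial state are arbitrary choices): the solution is drawn to the surface \{x_1 = 0,\; x_2 > 0\}, chatters within an O(h) band around it, and meanwhile x_2 grows like e^t — which turns out to be exactly the sliding dynamics \dot x_2 = x_2 derived in Example 7 below.

```julia
# Forward Euler on the switched system of the previous example; an
# attractive sliding mode lives on {x₁ = 0, x₂ > 0}.
f(x) = x[1] <= 0 ? [-2x[1] + 2x[2], x[2] - 4x[1]] :
                   [-2x[1] - 2x[2], x[2] + 4x[1]]

function euler_sim(x0, h, n)
    x = copy(x0)
    for _ in 1:n
        x += h * f(x)
    end
    return x
end

x = euler_sim([0.001, 1.0], 1e-4, 10_000)   # integrate up to t = 1
# x[1] chatters near zero, while x[2] ≈ e, matching ẋ₂ = x₂ on the surface
```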

+
+
+

Generalized solutions (Filippov)

+

It is now high time to introduce yet another concept of a solution. A concept that will make it possible to model the sliding mode dynamics in a more rigorous way. Remember that when the state \bm x(t) slides along the boundary, it qualifies as a solution to neither of the two state equations in any sense we have discussed so far. But now comes the concept of Filippov solution.

+

\bm x() is a Filippov solution on [t_0,t_1] if for almost all t +\dot{\bm{x}}(t) \in \overline{\operatorname*{co}}\{\mathbf f(\bm x(t),t)\}, + where \overline{\operatorname*{co}} denotes the (closed) convex hull.

+
+

Example 7 Consider the model in the previous example. The switching surface, along which the solution slides, is given by \mathcal{S}^+ = \{\bm x \mid x_1=0 \land x_2\geq 0\}.

+

Now, a Filippov solution must satisfy the following differential inclusion +\begin{aligned} +\dot{\bm x}(t) &\in \overline{\operatorname*{co}}\{\bm A_1\bm x(t), \bm A_2\bm x(t)\}\\ +&= \alpha_1(t) \bm A_1\bm x(t) + \alpha_2(t) \bm A_2\bm x(t), +\end{aligned} + where \alpha_1(t), \alpha_2(t) \geq 0, \alpha_1(t) + \alpha_2(t) = 1.

+

Note, however, that not all the weights keep the solution on \mathcal S^+. We must impose some restriction, namely that \dot x_1 = 0 for \bm x(t) \in \mathcal S^+. This leads to +\alpha_1(t) [-2x_1 + 2x_2] + \alpha_2(t) [-2x_1 - 2x_2] = 0 +

+

Combining this with \alpha_1(t) + \alpha_2(t) = 1 gives +\alpha_1(t) = \alpha_2(t) = 1/2, + which in this simple case perhaps agrees with our intuition (the average of the two vector fields).

+

The dynamics on the sliding mode is modelled by +\dot x_1 = 0, \quad \dot x_2 = x_2, \quad \bm x \in \mathcal{S}^+. +
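The weights and the resulting sliding vector field can also be computed numerically (a sketch; the test point on \mathcal S^+ is an arbitrary choice):

```julia
# Filippov sliding weights for the example: find α₁, α₂ ≥ 0 with
# α₁ + α₂ = 1 such that the convex combination of the two vector fields
# has zero component normal to the surface x₁ = 0.
A1 = [-2.0 2.0; -4.0 1.0]     # mode for s(x) ≤ 0
A2 = [-2.0 -2.0; 4.0 1.0]     # mode for s(x) > 0
x  = [0.0, 1.0]               # a point on the sliding surface 𝒮⁺
f1, f2 = A1 * x, A2 * x

α1 = f2[1] / (f2[1] - f1[1])  # solves α₁ f1[1] + (1 - α₁) f2[1] = 0
α2 = 1 - α1
f_slide = α1 * f1 + α2 * f2   # = [0, x₂], i.e. ẋ₁ = 0, ẋ₂ = x₂
```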

+
+
+
+
+ +
+
+Possible nonuniqueness on intersection of boundaries +
+
+
+

+
+
+ + +
+
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/classes_switched.html b/classes_switched.html index 245f21d..00f7b0d 100644 --- a/classes_switched.html +++ b/classes_switched.html @@ -819,46 +819,46 @@

State-dependent - + - + - + - + - + - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/complementarity_constraints 13.html b/complementarity_constraints 13.html new file mode 100644 index 0000000..de943a1 --- /dev/null +++ b/complementarity_constraints 13.html @@ -0,0 +1,1267 @@ + + + + + + + + + +Complementarity constraints – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Complementarity constraints

+
+ + + +
+ + + + +
+ + + +
+ + +
+

Why complementarity constraints?

+

In this chapter we are going to present yet another framework for modelling hybrid systems, which comes with a rich theory and efficient algorithms. It is based on complementarity constraints. Before we introduce the modelling framework in the next section, we first explain the very concept of complementarity constraints and the related optimization problems.

+
+
+

Definition of complementarity constraints

+

Two variables x\in\mathbb R and y\in\mathbb R satisfy the complementarity constraint if x or y is equal to zero and both are nonnegative

+

xy=0, \; x\geq 0,\; y\geq 0,

+

or, using a dedicated compact notation
+\boxed{0\leq x \perp y \geq 0.}

+
+
+
+ +
+
+Both variables can be zero +
+
+
+

The or in the above definition is not exclusive, therefore it is possible that both x and y are zero.

+
+
+

The concept and notation extend to vectors \bm x\in\mathbb R^n and \bm y\in\mathbb R^n, in which case the constraint is interpreted componentwise \boxed{\bm 0\leq \bm x \perp \bm y \geq \bm 0.}
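The componentwise reading can be made concrete with a tiny helper (the name and tolerance are ad hoc):

```julia
# Componentwise check of 0 ≤ x ⟂ y ≥ 0 for vectors x, y.
iscomplementary(x, y; tol = 1e-9) =
    all(x .>= -tol) && all(y .>= -tol) && all(abs.(x .* y) .<= tol)

iscomplementary([0.0, 2.0], [3.0, 0.0])   # true: in each pair one is zero
iscomplementary([1.0, 0.0], [1.0, 0.0])   # false: x₁y₁ = 1 ≠ 0
```

Note that the pair (0, 0) passes the check, in line with the callout above.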

+
+
+

Geometric interpretation of complementarity constraints

+

The set of admissible pairs (x,y) in the \mathbb R^2 plane is constrained to the L-shaped subset given by the nonnegative x and y semi-axes (including the origin) as in Fig. 1.

+
+
+
+ +
+
+Figure 1: The set of solutions satisfying a complementarity constraint +
+
+
+

Optimization over these constraints is difficult, and not only because the feasible set is nonconvex, but also because constraint qualification conditions are not satisfied. Still, some results and tools are available for some classes of optimization problems with these constraints.

+
+
+

Linear complementarity problem (LCP)

+

For a given square matrix \mathbf M and a vector \mathbf q , the linear complementarity problem (LCP) asks for finding two vectors \bm w and \bm z satisfying + \begin{aligned} + \bm w-\mathbf M\bm z &= \mathbf q \\ + \bm 0 \leq \bm w &\perp \bm z \geq \bm 0. + \end{aligned} +

+

Just by moving all the provided data to the right hand side we get + \begin{aligned} + \bm w &= \underbrace{\mathbf M\bm z + \mathbf q}_{\mathbf f(\bm z)} \\ + \mathbf 0 \leq \mathbf f(\bm z) &\perp \bm z \geq \mathbf 0, + \end{aligned} +
+from which we can immediately guess how the linear problem needs to be modified so that we get a nonlinear complementarity problem (NLCP).

+
+
+

Existence of a unique solution

+

A unique solution exists for every vector \mathbf q if and only if the matrix \mathbf M is a P-matrix, that is, a matrix all of whose principal minors are positive (every symmetric positive definite matrix is a P-matrix, but a P-matrix need be neither symmetric nor positive definite).
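For small problems the LCP can be solved by brute force over the 2^n possible choices of which components of \bm z are allowed to be positive — a hypothetical toy solver just to make the definition concrete (real LCP solvers, e.g. Lemke’s method, scale far better):

```julia
# Toy LCP solver: for each guess of the "active set" (zᵢ allowed > 0),
# force the corresponding wᵢ = 0, solve the linear system, and test
# whether the resulting (z, w) is feasible. Exponential in n!
function solve_lcp(M, q; tol = 1e-9)
    n = length(q)
    for mask in 0:(2^n - 1)
        idx = [i for i in 1:n if (mask >> (i - 1)) & 1 == 1]
        z = zeros(n)
        isempty(idx) || (z[idx] = M[idx, idx] \ (-q[idx]))  # w[idx] = 0
        w = M * z + q
        if all(z .>= -tol) && all(w .>= -tol)
            return z, w
        end
    end
    return nothing    # no solution found (M is then not a P-matrix)
end

M = [2.0 1.0; 1.0 2.0]    # a P-matrix, so a unique solution exists
q = [-1.0, -1.0]
z, w = solve_lcp(M, q)    # z = [1/3, 1/3], w = [0, 0]
```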

+
+ +
+

Nonlinear complementarity problem

+

Given a vector function \mathbf f: \mathbb R^n\rightarrow \mathbb R^n, find a vector \bm x\in\mathbb R^n satisfying +\bm 0\leq \bm x \perp \mathbf f(\bm x) \geq \bm 0. +

+
+
+

Mixed complementarity problem (MCP)

+

An extension of the complementarity constraint to the situation in which the variable x is lower- and upper-bounded. In particular, it can be stated as + l \leq x \leq u \perp f(x). +

+

The convention for interpretation is

+
    +
  • If x is strictly within the interval, that is, l < x < u , then f(x)=0,
  • +
  • If x=l , then f(x)\geq 0 ,
  • +
  • if x=u , then f(x)\leq 0 .
  • +
+
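This convention can be encoded in a few lines (an ad-hoc checker for the scalar case only; the names and tolerance are made up):

```julia
# Verify a candidate x for the scalar MCP  l ≤ x ≤ u ⟂ f(x),
# following the three-case convention above.
function check_mcp(f, x, l, u; tol = 1e-9)
    (l - tol <= x <= u + tol) || return false
    l + tol < x < u - tol && return abs(f(x)) <= tol   # interior: f(x) = 0
    x <= l + tol && return f(x) >= -tol                # at the lower bound
    return f(x) <= tol                                 # at the upper bound
end

f(x) = x - 0.5
check_mcp(f, 0.5, 0.0, 1.0)   # true:  interior point with f(0.5) = 0
check_mcp(f, 0.0, 0.0, 1.0)   # false: f(0) = -0.5 < 0 at the lower bound
```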
+
+

Extended linear complementarity problem (ELCP)

+

Given some matrices \mathbf A and \mathbf B , vectors \mathbf c and \mathbf d , and m subsets \phi_j \subseteq \{1,2,\ldots,p\} , find a vector \bm x such that + \begin{aligned} + \sum_{j=1}^m\prod_{i\in\phi_j}(\mathbf A\bm x - \mathbf c)_i &= 0,\\ + \mathbf A\bm x &\geq \mathbf c,\\ + \mathbf B\bm x &= \mathbf d, + \end{aligned} +
+or show that no such \bm x exists.

+

The first equation is equivalent to +\forall j \in \{1, \ldots, m\} \; \exists i \in \phi_j \;\text{such that} \; (\mathbf A\bm x − \mathbf c)_i = 0. +

+

Geometric interpretation: union of some faces of a polyhedron.

+
+
+

Mathematical program with complementarity constraints (MPCC)

+

The mathematical program with complementarity constraints (MPCC) is + \begin{aligned} + \operatorname*{minimize}_{\bm x\in\mathbb R^n} & \;f(\bm x)\\ + \text{subject to} & \;0\leq h(\bm x) \perp g(\bm x) \geq 0. + \end{aligned} +

+

Special case of Mathematical program with equilibrium constraints (MPEC).

+
+
+

Mathematical program with equilibrium constraints (MPEC)

+

Optimization problem in which some variable should satisfy equilibrium constraints: + \begin{aligned} + \min_{x_1,x_2} &\; f(x_1,x_2)\\ + \text{subject to}&\; \nabla_{x_2} \phi(x_1,x_2) = 0 + \end{aligned} +

+

For convex \phi() it can be reformulated into a Bilevel optimization problem.

+
+
+

Bilevel optimization

+

Optimization problem in which some variables are constrained to be results of some inner optimization. In the simplest form +\begin{aligned} + \min_{x_1,x_2} &\; f(x_1,x_2)\\ + \text{s. t.}\ &\; x_2 = \text{arg}\,\min_{x_2} \;\phi(x_1,x_2) +\end{aligned} +

+
+
+

Disjunctive constraints

+

A number of affine constraints combined with \lor and \land logical operators.

+

+T_1 \lor T_2 \lor \ldots \lor T_m, + where +T_i = T_{i1} \land T_{i2} \land \ldots \land T_{in_{i}}, + where +T_{ij}:\; c_{ij}x + d_{ij} \in \mathcal D_{ij}. +

+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/complementarity_references 17.html b/complementarity_references 17.html new file mode 100644 index 0000000..9251a29 --- /dev/null +++ b/complementarity_references 17.html @@ -0,0 +1,1105 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

Concise (short and yet sufficient for our purposes) introduction to (linear) complementarity problems and systems is in [1]. Besides describing the idea of complementarity in dynamical systems, it also shows how it is related to other modeling frameworks for hybrid dynamical systems. More detailed and yet very accessible introduction is in the thesis [2]. Condensed treatment is in the papers [3] and [4].

+

A readable introduction to the Extended Linear Complementarity Problem is in [5] (it is also freely available as a technical report).

+

The topic of complementarity constraints in dynamical systems and optimization is still being actively researched. A recent publication on QP optimization with complementarity constraints (LCQP) is [6].

+

Numerical methods for nonsmooth dynamical systems that are based on complementarity constraints (and implemented in SICONOS software) are comprehensively presented in [7].

+ + + + + Back to top

References

+
+
[1]
W. P. M. H. Heemels, B. De Schutter, and A. Bemporad, “Equivalence of hybrid dynamical models,” Automatica, vol. 37, no. 7, pp. 1085–1091, Jul. 2001, doi: 10.1016/S0005-1098(01)00059-0.
+
+
+
[2]
M. Heemels, Linear complementarity systems: a study in hybrid dynamics,” PhD thesis, Technische Universiteit Eindhoven, Eindhoven, NL, 1999. Available: https://heemels.tue.nl/content/papers/Hee_TUE99a.pdf
+
+
+
[3]
W. P. M. H. Heemels, J. M. Schumacher, and S. Weiland, “Linear Complementarity Systems,” SIAM Journal on Applied Mathematics, vol. 60, no. 4, pp. 1234–1269, Jan. 2000, doi: 10.1137/S0036139997325199.
+
+
+
[4]
A. J. van der Schaft and J. M. Schumacher, “Complementarity modeling of hybrid systems,” IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 483–490, Apr. 1998, doi: 10.1109/9.664151.
+
+
+
[5]
B. De Schutter and B. De Moor, “The Extended Linear Complementarity Problem and the Modeling and Analysis of Hybrid Systems,” in Hybrid Systems V, P. Antsaklis, M. Lemmon, W. Kohn, A. Nerode, and S. Sastry, Eds., in Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 1999, pp. 70–85. doi: 10.1007/3-540-49163-5_4.
+
+
+
[6]
J. Hall, A. Nurkanovic, F. Messerer, and M. Diehl, LCQPowA Solver for Linear Complementarity Quadratic Programs.” arXiv, Nov. 2022. Accessed: Dec. 03, 2022. [Online]. Available: http://arxiv.org/abs/2211.16341
+
+
+
[7]
V. Acary and B. Brogliato, Numerical Methods for Nonsmooth Dynamical Systems: Applications in Mechanics and Electronics. in Lecture Notes in Applied and Computational Mechanics, no. 35. Berlin Heidelberg: Springer, 2008. Available: https://doi.org/10.1007/978-3-540-75392-6
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/complementarity_simulations 10.html b/complementarity_simulations 10.html new file mode 100644 index 0000000..788bf94 --- /dev/null +++ b/complementarity_simulations 10.html @@ -0,0 +1,1954 @@ + + + + + + + + + +Simulations of complementarity systems using time-stepping – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Simulations of complementarity systems using time-stepping

+
+ + + +
+ + + + +
+ + + +
+ + +

One of the useful outcomes of the theory of complementarity systems is a new family of methods for numerical simulation of discontinuous systems. Here we will demonstrate the essence by introducing the method of time-stepping. And we do it by means of an example.

+
+

Example 1 (Simulation using time-stepping) Consider the following discontinuous dynamical system in \mathbb R^2: +\begin{aligned} +\dot x_1 &= -\operatorname{sign} x_1 + 2 \operatorname{sign} x_2\\ +\dot x_2 &= -2\operatorname{sign} x_1 -\operatorname{sign} x_2. +\end{aligned} +

+

The state portrait is in Fig. 1.

+
+
+Show the code +
f₁(x) = -sign(x[1]) + 2*sign(x[2])
+f₂(x) = -2*sign(x[1]) - sign(x[2])
+f(x) = [f₁(x), f₂(x)]
+
+using CairoMakie
+fig = Figure(size = (800, 800),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,(x₁,x₂)->Point2f(f([x₁,x₂])), -2.0..2.0, -2.0..2.0, colormap = :magma)
+fig
+
+
+
+
+
+ +
+
+Figure 1: State portrait of the discontinuous system +
+
+
+
+
+

One particular (vector) state trajectory is in Fig. 2.

+
+
+Show the code +
using DifferentialEquations
+
+function f!(dx,x,p,t)
+    dx[1] = -sign(x[1]) + 2*sign(x[2])
+    dx[2] = -2*sign(x[1]) - sign(x[2])
+end
+
+x0 = [-1.0, 1.0]
+tfinal = 2.0
+tspan = (0.0,tfinal)
+prob = ODEProblem(f!,x0,tspan)
+sol = solve(prob)
+
+using Plots
+Plots.plot(sol,xlabel="t",ylabel="x",label=false,lw=3)
+
+
+
+
+
+Figure 2: Trajectory of the discontinuous system +
+
+
+
+

We can also plot the trajectory in the state space, as in Fig. 3.

+
+
+Show the code +
Plots.plot(sol[1,:],sol[2,:],xlabel="x₁",ylabel="x₂",label=false,aspect_ratio=:equal,lw=3,xlims=(-1.2,0.5))
+
+
+
+
+
+Figure 3: Trajectory of the discontinuous system in the state space +
+
+
+
+

Now, how fast does the solution approach the origin?

+

Let’s use the 1-norm \|\bm x\|_1 = |x_1| + |x_2| to measure how far the trajectory is from the origin. We then ask: +\frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = ? +

+

We avoid the troubles with nonsmoothness of the absolute value by considering each quadrant separately. Let’s start in the first (upper right) quadrant, that is, x_1>0 and x_2>0, in which |x_1| = x_1, \;|x_2| = x_2, and therefore +\frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = \dot x_1 + \dot x_2 = 1 - 3 = -2. +

+

The situation is identical in the other quadrants. And, of course, undefined on the axes.
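This can be spot-checked numerically (restating the right-hand side; the two test points are arbitrary picks inside two of the quadrants):

```julia
# Numerical spot-check of d‖x‖₁/dt = -2 in the open quadrants.
f(x) = [-sign(x[1]) + 2sign(x[2]), -2sign(x[1]) - sign(x[2])]

x = [0.7, 0.3]                  # first quadrant: ‖x‖₁ = x₁ + x₂
f(x)[1] + f(x)[2]               # -2.0

x = [-0.5, 0.8]                 # second quadrant: ‖x‖₁ = -x₁ + x₂
-f(x)[1] + f(x)[2]              # -2.0
```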

+

The conclusion is that the trajectory will hit the origin in finite time: for, say, x_1(0) = 1 and x_2(0) = 1 , the trajectory hits the origin at t=(|x_1(0)|+|x_2(0)|)/2 = 1. But with an infinite number of revolutions around the origin…

+

How will a standard algorithm for numerical simulation handle this? Let’s have a look.

+
+

Forward Euler with fixed step size

+

+\begin{aligned} +{\color{blue}x_{1,k+1}} &= x_{1,k} + h (-\operatorname{sign} x_{1,k} + 2 \operatorname{sign} x_{2,k})\\ +{\color{blue}x_{2,k+1}} &= x_{2,k} + h (-2\operatorname{sign} x_{1,k} - \operatorname{sign} x_{2,k}). +\end{aligned} +

+
+
+Show the code +
f(x) = [-sign(x[1]) + 2*sign(x[2]), -2*sign(x[1]) - sign(x[2])]
+
+using LinearAlgebra
+N = 1000
+x₀ = [-1.0, 1.0]    
+x = [x₀]
+tfinal = norm(x₀,1)/2
+tfinal = 5.0
+h = tfinal/N 
+t = range(0.0, step=h, stop=tfinal)
+
+for i=1:N
+    xnext = x[i] + h*f(x[i]) 
+    push!(x,xnext)
+end
+
+X = [getindex.(x, i) for i in 1:length(x[1])]
+
+Plots.plot(t,X,lw=3,label=false,xlabel="t",ylabel="x")
+
+
+
+
+
+

Backward Euler

+

+\begin{aligned} +{\color{blue} x_{1,k+1}} &= x_{1,k} + h (-\operatorname{sign} {\color{blue}x_{1,k+1}} + 2 \operatorname{sign} {\color{blue}x_{2,k+1}})\\ +{\color{blue} x_{2,k+1}} &= x_{2,k} + h (-2\operatorname{sign} {\color{blue}x_{1,k+1}} - \operatorname{sign} {\color{blue}x_{2,k+1}}). +\end{aligned} +

+
+
+

Formulation using LCP

+

Instead of solving the above nonlinear equations with discontinuities, we introduce new variables u_1 and u_2 as the outputs of the \operatorname{sign} functions: +\begin{aligned} +{\color{blue} x_{1,k+1}} &= x_{1,k} + h (-{\color{blue}u_{1}} + 2 {\color{blue}u_{2}})\\ +{\color{blue} x_{2,k+1}} &= x_{2,k} + h (-2{\color{blue}u_{1}} - {\color{blue}u_{2}}). +\end{aligned} +

+

But now we have to enforce the relation between \bm u and \bm x_{k+1}. Recall the standard definition of the \operatorname{sign} function is +\operatorname{sign}(x) = \begin{cases} +1 & x>0\\ +0 & x=0\\ +-1 & x<0, +\end{cases} + but we change the definition to a set-valued function +\begin{cases} +\operatorname{sign}(x) = 1 & x>0\\ +\operatorname{sign}(x) \in [-1,1] & x=0\\ +\operatorname{sign}(x) = -1 & x<0. +\end{cases} +

+

Accordingly, we set the relationship between \bm u and \bm x to +\begin{cases} +u_1 = 1 & x_1>0\\ +u_1 \in [-1,1] & x_1=0\\ +u_1 = -1 & x_1<0, +\end{cases} + and +\begin{cases} +u_2 = 1 & x_2>0\\ +u_2 \in [-1,1] & x_2=0\\ +u_2 = -1 & x_2<0. +\end{cases} +

+

But these are mixed complementarity constraints we have defined previously! +\boxed{ +\begin{aligned} +\begin{bmatrix} +{\color{blue} x_{1,k+1}}\\ +{\color{blue} x_{2,k+1}} +\end{bmatrix} +&= +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} + h +\begin{bmatrix} +-1 & 2 \\ +-2 & -1 +\end{bmatrix} +\begin{bmatrix} +{\color{blue}u_{1}}\\ +{\color{blue}u_{2}} +\end{bmatrix}\\ +-1 \leq {\color{blue} u_1} \leq 1 \quad &\bot \quad -{\color{blue}x_{1,k+1}}\\ +-1 \leq {\color{blue} u_2} \leq 1 \quad &\bot \quad -{\color{blue}x_{2,k+1}} +\end{aligned} +} +

+
+
+

9 possible combinations

+

Let’s explore some: x_{1,k+1} = x_{2,k+1} = 0, while u_1 \in [-1,1] and u_2 \in [-1,1]:

+

+\begin{aligned} +\begin{bmatrix} +0\\ +0 +\end{bmatrix} +&= +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} + h +\begin{bmatrix} +-1 & 2 \\ +-2 & -1 +\end{bmatrix} +\begin{bmatrix} +{\color{blue}u_{1}}\\ +{\color{blue}u_{2}} +\end{bmatrix}\\ +& -1 \leq {\color{blue} u_1} \leq 1, \quad -1 \leq {\color{blue} u_2} \leq 1 +\end{aligned} +

+
    +
  • What does the set of states from which the next state is zero look like? +\begin{aligned} +-\begin{bmatrix} +-1 & 2 \\ +-2 & -1 +\end{bmatrix}^{-1} +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} +&= h +\begin{bmatrix} +{\color{blue}u_{1}}\\ +{\color{blue}u_{2}} +\end{bmatrix}\\ +-1 \leq {\color{blue} u_1} \leq 1, \quad -1 &\leq {\color{blue} u_2} \leq 1 +\end{aligned} +
  • +
+

+\begin{bmatrix} +-h\\-h +\end{bmatrix} +\leq +\begin{bmatrix} +0.2 & 0.4 \\ +-0.4 & 0.2 +\end{bmatrix} +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} +\leq +\begin{bmatrix} +h\\ h +\end{bmatrix} +

+

For h=0.2

+
+
+Show the code +
using LazySets
+h = 0.2
+H1u = HalfSpace([0.2, 0.4], h)
+H2u = HalfSpace([-0.4, 0.2], h)
+H1l = HalfSpace(-[0.2, 0.4], h)
+H2l = HalfSpace(-[-0.4, 0.2], h)
+
+Ha = H1u  H2u  H1l  H2l
+
+using Plots
+Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
+
+
+
+

Indeed, if the current state is in this rotated square, then the next state will be zero.

+
+
+

Another

+

u_1 = 1, u_2 = 1:

+

+\begin{aligned} +\begin{bmatrix} +{\color{blue} x_{1,k+1}}\\ +{\color{blue} x_{2,k+1}} +\end{bmatrix} +&= +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} + h +\begin{bmatrix} +-1 & 2 \\ +-2 & -1 +\end{bmatrix} +\begin{bmatrix} +{1}\\ +{1} +\end{bmatrix}\\ +\color{blue}x_{1,k+1} &\geq 0\\ +\color{blue}x_{2,k+1} &\geq 0 +\end{aligned} + which can be reformatted to +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} + h +\begin{bmatrix} +-1 & 2 \\ +-2 & -1 +\end{bmatrix} +\begin{bmatrix} +1\\ +1 +\end{bmatrix}\geq \bm 0 +

+
    +
  • and further to +\begin{bmatrix} +x_{1,k}\\ +x_{2,k} +\end{bmatrix} +\geq h +\begin{bmatrix} +-1\\ +3 +\end{bmatrix} +
  • +
+
+
+Show the code +
using LazySets
+h = 0.2
+A = [-1.0 2.0; -2.0 -1.0]
+u = [1.0, 1.0]
+b = h*A*u
+
+H1 = HalfSpace([-1.0, 0.0], b[1])
+H2 = HalfSpace([0.0, -1.0], b[2])
+Hb = H1  H2
+
+using Plots
+Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
+Plots.plot!(Hb)
+
+
+
+
+
+

All nine regions

+
+
+Show the code +
using LazySets
+h = 0.2
+A = [-1.0 2.0; -2.0 -1.0]
+
+u = [1, -1]
+b = h*A*u
+
+H1 = HalfSpace(-[1.0, 0.0], b[1])
+H2 = HalfSpace([0.0, 1.0], -b[2])
+Hc = H1  H2
+
+u = [-1, 1]
+b = h*A*u
+
+H1 = HalfSpace([1.0, 0.0], -b[1])
+H2 = HalfSpace(-[0.0, 1.0], b[2])
+Hd = H1  H2
+
+u = [-1, -1]
+b = h*A*u
+
+H1 = HalfSpace([1.0, 0.0], -b[1])
+H2 = HalfSpace([0.0, 1.0], -b[2])
+He = H1  H2
+
+using Plots
+Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
+Plots.plot!(Hb)
+Plots.plot!(Hc)
+Plots.plot!(Hd)
+Plots.plot!(He)
+
+
+
+
+
+

Solutions using an MCP solver

+
+
+Show the code +
M = [-1 2; -2 -1]
+h = 2e-1
+tfinal = 2.0
+N = tfinal/h
+
+x0 = [-1.0, 1.0]
+x = [x0]
+
+using JuMP
+using PATHSolver
+
+for i = 1:N
+    model = Model(PATHSolver.Optimizer)
+    set_optimizer_attribute(model, "output", "no")
+    set_silent(model)
+    @variable(model, -1 <= u[1:2] <= 1)
+    @constraint(model, -h*M * u - x[end]  u)
+    optimize!(model)
+    push!(x, x[end]+h*M*value.(u))
+end
+
+t = range(0.0, step=h, stop=tfinal)
+X = [getindex.(x, i) for i in 1:length(x[1])]
+
+using Plots
+Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
+Plots.plot!(Hb)
+Plots.plot!(Hc)
+Plots.plot!(Hd)
+Plots.plot!(He)
+Plots.plot!(X[1],X[2],xlabel="x₁",ylabel="x₂",label="Time-stepping",aspect_ratio=:equal,lw=3,markershape=:circle)
+Plots.plot!(sol[1,:],sol[2,:],label=false,lw=3)
+
+
+
+
+
+ + + + Back to top
+ + +
+
+ +
\ No newline at end of file
diff --git a/complementarity_simulations.html b/complementarity_simulations.html
index b6925dc..1877a2c 100644
--- a/complementarity_simulations.html
+++ b/complementarity_simulations.html
@@ -745,46 +745,46 @@

Simulations of complementarity systems using time-stepping

@@ -805,49 +805,49 @@

Simulations of complementarity systems using time-stepping

@@ -901,48 +901,48 @@

Forward
@@ -1041,8 +1041,7 @@

9 possible combinations

How does the set of states from which the next state is zero look like?
\begin{aligned}
\begin{bmatrix}
x_{1,k}\\ x_{2,k}
\end{bmatrix}
&= -h
\begin{bmatrix}
-1 & 2 \\
-2 & -1
\end{bmatrix}
\begin{bmatrix}
{\color{blue} u_1}\\ {\color{blue} u_2}
\end{bmatrix},\\
-1 \leq {\color{blue} u_1} \leq 1, &\quad -1 \leq {\color{blue} u_2} \leq 1
\end{aligned}

Another

\begin{bmatrix}
x_{1,k}\\ x_{2,k}
\end{bmatrix}
+ h
\begin{bmatrix}
-1 & 2 \\
-2 & -1
\end{bmatrix}
\begin{bmatrix}
1\\ 1
\end{bmatrix}\geq \bm 0

and further to
\begin{bmatrix}
x_{1,k}\\ x_{2,k}
\end{bmatrix}
\geq h
\begin{bmatrix}
-1\\ 3
\end{bmatrix}

Show the code @@ -1225,56 +1220,56 @@

Another

@@ -1320,62 +1315,62 @@

All nine regions


Solutions using a MCP solver
diff --git a/complementarity_software 10.html b/complementarity_software 10.html
new file mode 100644
index 0000000..ce5816b
--- /dev/null
+++ b/complementarity_software 10.html
@@ -0,0 +1,1140 @@
+Software – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +
+

Solving optimization problems with complementarity constraints

+

Surprisingly, there are not many software packages that can handle complementarity constraints directly.

+
    +
  • Within the realm of free software, I am only aware of the PATH solver. Strictly speaking, it is not open source and it is not issued under any classical free and open-source license, but it can be used free of charge. It can be interfaced from Matlab and Julia (and AMPL and GAMS, which are not relevant for our course). For Matlab, compiled mex files can be downloaded. For Julia, the solver can be interfaced directly from the popular JuMP package (choosing the PATHSolver.jl solver); see the sections on Complementarity constraints and Mixed complementarity problems in the JuMP manual.
  • +
+

When restricted to Matlab, there are several options, all of them commercial:

+ +

Gurobi does not seem to support complementarity constraints.

+

Mosek supports disjunctive constraints, within which complementarity constraints can be formulated. But they are then approached using a mixed-integer solver.

+
+
+

Modeling and simulation of dynamical systems with complementarity constraints

+

Within the modeling and simulation domains, there are two free and open source libraries that can handle complementarity constraints, mainly motivated by nonsmooth dynamical systems:

+
    +
  • SICONOS +
      +
    • C++, Python
    • +
    • physical domain independent
    • +
  • +
  • PINOCCHIO +
      +
    • C++, Python
    • +
    • specialized for robotics
    • +
  • +
+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/complementarity_systems 9.html b/complementarity_systems 9.html new file mode 100644 index 0000000..8d57c31 --- /dev/null +++ b/complementarity_systems 9.html @@ -0,0 +1,1352 @@ + + + + + + + + + +Complementarity systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Complementarity systems

+
+ + + +
+ + + + +
+ + + +
+ + +
+

Linear complementarity system (LCS)

+

Having introduced complementarity constraints and optimization problems with such constraints, we can now show how these constraints can be used to model a certain class of dynamical systems – complementarity dynamical systems. We start with linear ones, namely linear complementarity systems (LCS), which are also called linear dynamical complementarity problems (LDCP) in the literature.

+

Linear complementarity system is modelled by \boxed{ +\begin{aligned} +\dot x(t) &= A x(t) + Bu(t)\\ +y(t) &= C x(t) + Du(t)\\ +0&\leq u(t) \perp y(t) \geq 0. +\end{aligned}} +\tag{1}

+
+

Example 1 (Electrical circuit with a diode as an LCS)  

+
+
+

+
Electrical circuit to be modelled as an LCS
+
+
+

Note the upside-down orientation of the voltage and the current for the capacitor – we wanted the diode current identical to the capacitor current.

+

Following the charge formalism within Lagrangian modelling, we can choose the generalized coordinates as + q = \begin{bmatrix} + q_L \\ q_C + \end{bmatrix}. +

+

That this is indeed a sufficient number is obvious, but we can also check the classical formula B-N+1 = 4-3+1 = 2. Alternatively, we can choose the state variables as
 x = \begin{bmatrix}
 i_L\\ q_C
 \end{bmatrix}.

+

The resulting state equations are +\begin{aligned} +i_L' &= -\frac{1}{LC}q_C - \frac{1}{L}u_D\\ +q_C' &= i_L - \frac{1}{RC} q_C - \frac{1}{R} u_D. +\end{aligned} +

+

The idealized volt-ampere characteristics of the diode is

+
+
+

+
Ideal volt-ampere characteristic of a diode
+
+
+

Flipping the axes to get the current as the horizontal axis, we get

+
+
+

+
Flipped volt-ampere characteristic of a diode
+
+
+

Finally, after introducing an auxiliary variable (the reverse voltage of the diode) \bar u_D = -u_D , we get the desired dependence

+
+
+

+
Yet another reformatted VA characteristic of a diode
+
+
+

which can be modelled as a complementarity constraint
+ +0\leq i_D \perp \bar u_D \geq 0. +

+

Now, upon replacing the diode voltage with its reverse \bar u_D while using i_D=i_C, we get +\begin{aligned} +i_L' &= -\frac{1}{LC}q_C + \frac{1}{L} \bar u_D\\ +q_C' &= i_L - \frac{1}{RC} q_C + \frac{1}{R} \bar u_D\\ +0&\leq q_C' \perp \bar u_D \geq 0. +\end{aligned} +

+

We are not there yet – there is a derivative in the complementarity constraint. But just substitute for it: +\begin{aligned} +i_L' &= -\frac{1}{LC}q_C + \frac{1}{L} \bar u_D\\ +q_C' &= i_L - \frac{1}{RC} q_C + \frac{1}{R} \bar u_D\\ +0&\leq i_L - \frac{1}{RC} q_C + \frac{1}{R} \bar u_D \perp \bar u_D \geq 0, +\end{aligned} +
+and voila, we finally got the LCS description. We can also reformat it into the vector format +\begin{aligned} +\begin{bmatrix} +i_L' \\ q_C' +\end{bmatrix} &= +\begin{bmatrix} +0 &-\frac{1}{LC}\\ +1 & - \frac{1}{RC} +\end{bmatrix} +\begin{bmatrix} +i_L \\ q_C +\end{bmatrix} + +\begin{bmatrix} +\frac{1}{L}\\ +\frac{1}{R} +\end{bmatrix} +\bar u_D\\ +0 &\leq \left(\begin{bmatrix} +1 & - \frac{1}{RC} +\end{bmatrix} +\begin{bmatrix} +i_L \\ q_C +\end{bmatrix} + +\begin{bmatrix} +\frac{1}{L}\\ +\frac{1}{R} +\end{bmatrix} +\bar u_D\right ) \bot \bar u_D \geq 0. +\end{aligned} +
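The LCS just derived can be checked numerically with the same time-stepping idea used in the simulations above. Because the complementarity condition is scalar, each step reduces to a scalar LCP 0 \leq a + \bar u_D/R \perp \bar u_D \geq 0 with positive coefficient 1/R, whose closed-form solution is \bar u_D = \max(0, -Ra), so no MCP solver is needed. A minimal sketch; the component values and the initial state are arbitrary assumptions, not taken from the text:

```julia
# Explicit-Euler time-stepping of the RLC-diode LCS. The values of R, L, C
# and the initial state are arbitrary assumptions for illustration only.
function simulate_diode_circuit(R, L, C, h, tfinal, x0)
    x = copy(x0)                       # x = [i_L, q_C]
    X = [copy(x)]
    for _ in 1:round(Int, tfinal/h)
        # scalar LCP from 0 ≤ (i_L - q_C/(RC) + ū/R) ⟂ ū ≥ 0
        a = x[1] - x[2]/(R*C)
        ū = max(0.0, -R*a)             # closed-form solution since 1/R > 0
        # explicit Euler step of the state equations
        x += h*[-x[2]/(L*C) + ū/L,
                x[1] - x[2]/(R*C) + ū/R]
        push!(X, copy(x))
    end
    return X
end

X = simulate_diode_circuit(1.0, 1.0, 1.0, 1e-3, 5.0, [1.0, 0.0])
```

Since the circuit is passive, the stored energy (here (i_L^2 + q_C^2)/2 with the unit component values) should decrease along the trajectory, which makes a cheap consistency check on the complementarity handling.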

+
+
+

Example 2 (Mass-spring system with a hard stop as a linear complementarity system) Two carts moving horizontally (left or right) are interconnected through a spring. The left cart is also interconnected with the wall through another spring. Furthermore, the motion of the left cart is constrained in that there is a hard stop that prevents the cart from moving further to the left. The setup is shown in Fig. 1.

+
+
+
+ +
+
+Figure 1: Mass-spring system with a hard stop to be modelled as a LCS +
+
+
+

The variables x_1 and x_2 give deviations of the two carts from their equilibrium positions.

+

The hard stop is located at the equilibrium position of the left cart.

+

Besides the two positions, their derivatives are also introduced as state variables. The input u corresponds to the reaction force of the hard stop.

+

As the output, only the position of the left cart is (arbitrarily) chosen.

+

The state equations and the output equation are + \begin{aligned} + \dot x_1(t) &= x_3\\ + \dot x_2(t) &= x_4\\ + \dot x_3(t) &= -\frac{k_1+k_2}{m_1}x_1(t) + \frac{k_2}{m_1}x_2(t) + \frac{1}{m_1}u(t)\\ + \dot x_4(t) &= \frac{k_2}{m_2}x_1(t) - \frac{k_2}{m_2} x_2(t)\\ + y(t) &= x_1(t). + \end{aligned} +

+

The presence of the hard stop can be modelled as an inequality constraint on the state (or the output in this case) x_1(t) = y(t) \geq 0.

+

Strictly speaking, a similar constraint should also be imposed on the right cart, which cannot pass through the hard stop either (and the left cart would stand in its way too). But we ignore this here for the sake of simplicity of our explanation.

+

The reaction force u can only be nonnegative + u(t) \geq 0. +

+

Furthermore, the reaction force is acting if and only if the left cart hits the hard stop, that is,
+ + y(t) u(t) = 0. +

+

All the above three constraints can be written compactly as a complementarity constraint
 0\leq y(t) \perp u(t) \geq 0.
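This example can also be simulated by time-stepping. The sketch below (my own, not from the text) uses implicit Euler, x_{k+1} = (I-hA)^{-1}(x_k + hBu_k), so that the per-step complementarity 0 \leq y_{k+1} \perp u_k \geq 0 becomes a scalar LCP y_{k+1} = a + b u_k with b = hC(I-hA)^{-1}B > 0, solvable in closed form. The masses, spring constants, and the initial state are arbitrary assumed values:

```julia
# Implicit-Euler time-stepping for the mass-spring system with a hard stop.
# Parameter values and the initial state are arbitrary assumptions.
using LinearAlgebra

function simulate_carts(; m1=1.0, m2=1.0, k1=1.0, k2=1.0, h=1e-3, steps=5000,
                          x0=[0.5, 0.0, -2.0, 0.0])
    A = [0 0 1 0;
         0 0 0 1;
         -(k1+k2)/m1 k2/m1 0 0;
         k2/m2 -k2/m2 0 0]
    B = [0.0, 0.0, 1/m1, 0.0]
    C = [1.0, 0.0, 0.0, 0.0]
    G = inv(I - h*A)              # implicit-Euler propagator
    x = copy(x0)
    xs = [copy(x)]
    for _ in 1:steps
        a = dot(C, G*x)           # y at the next step if the stop exerted no force
        b = h*dot(C, G*B)         # influence of u on the next y (positive)
        u = max(0.0, -a/b)        # scalar LCP: u = 0 if a ≥ 0, else next y = 0
        x = G*(x + h*B*u)
        push!(xs, copy(x))
    end
    return xs
end

xs = simulate_carts()
```

The per-step LCP enforces y_{k+1} = \max(a, 0) \geq 0 exactly, so the left cart never penetrates the hard stop in the simulated trajectory.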

+
+
+
+

Complementarity system as a feedback interconnection

+

A complementarity system Eq. 1 can be seen as a feedback interconnection of a linear system and a complementarity constraint.

+
+
+

+
Complementarity system as a feedback interconnection
+
+
+
+
+

Complementarity systems vs PWA and max-plus linear systems

+

Consider the feedback interconnection of a dynamical system and the max(y,u) function in the feedback loop as in Fig. 2.

+
+
+
+ +
+
+Figure 2: Feedback interconnection of a dynamical system and a nonlinearity +
+
+
+

We now express the original y as a difference of two nonnegative variables satisfying the complementarity constraint
y = y^+ - y^-,\quad 0 \leq y^+ \bot y^- \geq 0.

+

The motivation for this was that with the new variables y^+ and y^-, the max function can be expressed as +\max(y,0) = \max(y^+ - y^-, 0) = y^+. +
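The splitting is easy to check numerically; a few lines of Julia decompose sample values of y and verify both the complementarity of the two parts and the identity \max(y,0) = y^+:

```julia
# Split y into complementary nonnegative parts: y = y⁺ - y⁻, 0 ≤ y⁺ ⟂ y⁻ ≥ 0.
decompose(y) = (max(y, 0.0), max(-y, 0.0))

for y in (-2.5, -0.1, 0.0, 0.3, 4.0)
    y⁺, y⁻ = decompose(y)
    @assert y⁺ ≥ 0 && y⁻ ≥ 0     # nonnegativity
    @assert y⁺ * y⁻ == 0         # complementarity
    @assert y⁺ - y⁻ == y         # reconstruction of y
    @assert max(y, 0.0) == y⁺    # the motivating identity
end
```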

+

Now, set y^+ = u and then +y = u - y^-, + from which +y^- = u - y + and therefore the original feedback interconnection can be rewritten as

+
+
+

+
Feedback interconnection equivalent to the one with max(y,0)
+
+
+
+
+

More complicated PWA functions in feedback

+

The function \max(y,0) that we have just considered is a very simple piecewise affine (PWA) function. But we can consider more complicated PWA functions. A slightly more complicated PWA function is shown in Fig. 3.

+
+
+
+ +
+
+Figure 3: A simple piecewise affine function +
+
+
+

The function is defined by shifting and scaling the original \max(y,0) function: +u(y) = k_1 \max(y-y_1,0) = \max(k_1(y-y_1),0). +

+

We can now enforce complementarity based on this function in the feedback loop, see Fig. 4.

+
+
+
+ +
+
+Figure 4: Feedback system with a shifted PWA function modelled as complementarity constraint +
+
+
+

This procedure can be extended towards PWA functions composed of several segments, see Fig. 5.

+
+
+
+ +
+
+Figure 5: PWA function with multiple segments +
+
+
+

The function is defined as
\begin{aligned}
u(y) &= k_0 y + u_0 + (k_1-k_0) \max(y-y_1,0) \\
&\qquad + (k_2-k_1) \max(y-y_2,0)\\
&= k_0 y + u_0 + \underbrace{\max((k_1-k_0)(y-y_1),0)}_{u_1}\\
&\qquad + \underbrace{\max((k_2-k_1)(y-y_2),0)}_{u_2}
\end{aligned}
and the feedback interconnection now contains several parallel paths with complementarity constraints, as in Fig. 6.

+
+
+
+ +
+
+Figure 6: Feedback system with multiple-segment PWA modelled as complementarity constraints +
+
+
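The two-max construction above can be cross-checked against a direct piecewise evaluation. In this sketch the slopes and breakpoints (k_0, k_1, k_2, y_1, y_2, u_0) are arbitrary assumed values, not those of the figure:

```julia
# Multi-segment PWA function assembled from shifted max(·,0) terms.
# The numeric slopes and breakpoints are arbitrary illustrative values.
k0, k1, k2 = 0.5, 2.0, -1.0
y1, y2, u0 = 1.0, 3.0, 0.25

u(y) = k0*y + u0 + (k1 - k0)*max(y - y1, 0.0) + (k2 - k1)*max(y - y2, 0.0)

# direct piecewise evaluation of the same function, for comparison
function u_piecewise(y)
    y ≤ y1 && return k0*y + u0
    y ≤ y2 && return k0*y1 + u0 + k1*(y - y1)
    return k0*y1 + u0 + k1*(y2 - y1) + k2*(y - y2)
end

for y in -2.0:0.25:5.0
    @assert isapprox(u(y), u_piecewise(y); atol=1e-12)
end
```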
+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/des 9.html b/des 9.html new file mode 100644 index 0000000..c49c0ff --- /dev/null +++ b/des 9.html @@ -0,0 +1,1250 @@ + + + + + + + + + +Discrete-event systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Discrete-event systems

+
+ + + +
+ + + + +
+ + + +
+ + +

We have already mentioned that hybrid systems are composed of time-driven subsystems and event-driven subsystems. Assuming that the primary audience of this course is already familiar with the former (having been exposed to state equations and transfer functions), here we are going to focus on the latter, also called discrete-event systems, DES (or DEVS).

+
+

(Discrete) event

+

We need to start with the definition of an event. Sometimes an adjective discrete is added (to get discrete event), although it appears rather redundant.

+

The primary characteristic of an event is its instantaneous occurrence, that is, an event takes no time.

+

Within the context of systems, an event is associated with a change of state – a transition of the system from one state to another. Between two events, the system remains in the same state; it doesn’t evolve.

+
+
+
+ +
+
+The concept of a state +
+
+
+

True, here we are making a reference to the concept of a state, which we haven’t defined yet. But we can surely rely on understanding this concept developed by studying the time-driven systems (modelled by state equations).

+
+
+

Although it is not instrumental in defining the event, the state space is frequently assumed discrete (even if infinite).

+
+

Example 1 (DES state trajectory) In the figure below we can see an example state trajectory of a discrete-event system corresponding to a particular sequence of events.

+
+
+
+ +
+
+Figure 1: Example of a state trajectory in response to a sequence of events +
+
+
+

It is perhaps worth emphasizing that the state space is not necessarily equidistantly discretized.

+

Also note that for some events no transitions occur (e_3 at t_3).

+
+
+
+
+ +
+
+Frequent notational confusion: does the lower index represent discrete time or an element of a set? +
+
+
+

The previous example also displays one particularly annoying (and frequently occurring in the literature) notational conflict. How shall we interpret the lower index? Sometimes it is used to refer to (discrete) time, and on some other occasions it can just refer to a particular element of a set. In other words, this is a notational clash between the name of a variable and the value of the variable. In the example above we obviously adopted the latter interpretation. But in other cases, we ourselves are prone to inconsistency. Just make sure you understand what the author means.

+
+
+
+

Example 2 (State trajectory of a continuous-time dynamical systems) Compare now the above example of a state trajectory in a DES with the example of a continuous-time state space system below, whose model could be \dot x(t) = f(x). In the latter, any change, however small, takes time. In other words, the system evolves continuously in time.

+
+
+
+ +
+
+Figure 2: Example of a state trajectory of a continuous-time continuous-valued dynamical system +
+
+
+

The set of states (aka state space) is \mathbb{R} (or a subset) in this case (in general \mathbb{R}^n or a subset).

+
+
+

Example 3 (State trajectory of a time-discretized (aka sampled-data) system) As yet another example of a state trajectory, consider the response of a discrete-time (actually time-discretized, or sampled-data) system modelled by x_{k+1} = f(x_k) in the figure below. Although we could view the sampling instants as events, these are given by time, hence the moments of transitions are predictable. Hence the system can still be viewed and analyzed as a time-driven and not an event-driven one.

+
+
+
+ +
+
+Figure 3: Example of a state trajectory of time-discretized (aka sampled data) system +
+
+
+
+
+
+

When do events occur?

+

There are three major possibilities:

+
    +
  • when action is taken (button press, clutch released, …),
  • +
  • spontaneously: well, this is just an “excuse” when the reason is difficult to trace down (computer failure, …),
  • +
  • when some condition is met (water level is reached, …). This needs an introduction of a concept of a hybrid systems, wait for it.
  • +
+
+
+

Sequence of “time-stamped” events (aka timed trace)

+

The sequence of pairs (event, time) (e_1,t_1), (e_2,t_2), (e_3,t_3), \ldots

+

is sufficient to characterize an execution of a deterministic system, that is, a system with a unique initial state and a unique transition for a given state and event.

+
+
+

DES can be stochastic, but what exactly is stochastic then?

+

Stochasticity can be introduced in

+
    +
  • the event times (e.g. Poisson process),
  • +
  • but also in the transitions (e.g. probabilistic automata, more on this later).
  • +
+
+
+

Sometimes time stamps not needed – the ordering of events is enough

+

The sequence of events (aka trace) e_1,e_2,e_3, \ldots can be enough for some analysis, in which only the order of the events is important.

+
+

Example 4 credit_card_swiped, pin_entered, amount_entered, money_withdrawn

+
+
+
+

Discrete-event systems are studied through their languages

+

When studying discrete-event systems, soon we are exposed to terminology from the formal language theory such as alphabet, word, and language. This must be rather confusing for most students (at least those with no education in computer science). In our course we are not going to use these terms actively (after all our only motivation for introducing the discipline of discrete-event systems is to take away just a few concepts that are useful in hybrid systems), but we want to sketch the motivation for their introduction to the discipline, which may make it easier for a student to skim through some materials on discrete-event systems.

+

We define at least those three terms that we have just mentioned. The definitions correspond to the everyday usage of these terms.

+
+
Alphabet
+
+a set of symbols. +
+
Word (or string)
+
+a sequence of symbols from a finite alphabet. +
+
Language
+
+a set of words from the given alphabet. +
+
+

Now, a symbol is used to label an event. Alphabet is then the set of possible events. A particular sequence of events (we also call it a trace) is then represented by a word. Since we agreed that events are associated with state transitions of a corresponding system, a word represents a possible execution or run of a system. The set of all possible executions of a given system can then be formally viewed as a language.

+

Indeed, all this is just a formalism, the agreement how to talk about things. We will see an example of this “jargon” in the next section when we introduce the concept of an automaton and some of its properties.

+
+
+

Modelling frameworks for DES (as used in our course)

+

These are the three frameworks that we are going to cover in our course. There may be some more, but these three are the major ones, and from each there is some lesson to be learnt that we will find useful later when finally studying hybrid systems:

+
    +
  • State automaton (pl. automata)
  • +
  • Petri net
  • +
  • (max,plus) algebra, MPL systems
  • +
+

While the first two frameworks are essentially equally powerful when it comes to modelling DES, the third one can be regarded as an algebraic framework for a subset of systems modelled by Petri nets.

+

We are going to cover all the three frameworks in this course as a prequel to hybrid systems.

+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/des_automata 16.html b/des_automata 16.html new file mode 100644 index 0000000..24b306d --- /dev/null +++ b/des_automata 16.html @@ -0,0 +1,3336 @@ + + + + + + + + + +State automata – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

State automata

+
+ + + +
+ + + + +
+ + + +
+ + +

Having just discussed the concept of a discrete-event system, we now introduce the most popular modeling framework for such systems: a state automaton, or just an automaton (plural automata). It is also known as a state machine or a (discrete) transition system.

+
+

Definition 1 (Automaton) Automaton is a tuple \boxed{ +G = \{\mathcal X,\mathcal X_0,\mathcal E,\mathcal F\},} +
+where

+
    +
  • \mathcal X is the set of states (also called modes or locations).
  • +
  • \mathcal X_0 \subseteq \mathcal X is the set of initial states.
  • +
  • \mathcal E is the set of events (also actions, transition labels, symbols). It is also called alphabet.
  • +
  • \mathcal F\subseteq \mathcal X \times \mathcal E \times \mathcal X is the set of transitions. In the deterministic case it can also be narrowed down to a transition function f:\mathcal X \times \mathcal E \rightarrow \mathcal X. Note that f is then a partial function; it is not necessarily defined for all combinations of states and events. Sometimes f is used even for multivalued functions: f:\mathcal X \times \mathcal E \rightarrow 2^\mathcal{X}, where 2^\mathcal{X} is the power set of \mathcal X (the set of all subsets of \mathcal X).
  • +
+
+
+
+
+ +
+
+Some comments on the notation +
+
+
+
    +
  • The set of states is often denoted by \mathcal Q to spare the letter \mathcal X for the continuous valued state space of hybrid systems.
  • +
  • The set of events is often denoted by \mathcal A to spare the letter \mathcal E for the set of transitions (edges in the corresponding graph), because F and f may also need to be spared for the continuous-valued transitions. But then the letter \mathcal A actually fits this purpose nicely because the event set is also called the alphabet.
  • +
+
+
+
+

Marked states

+

In some literature, the definition of the automaton also includes a set \mathcal X_\mathrm{m} \subseteq \mathcal X of marked or accepting states, in which case the definition of an automaton now includes three (sub)sets of states: \mathcal X, \mathcal X_0 and \mathcal X_\mathrm{m}. \boxed{ +G = \{\mathcal X,\mathcal X_0,\mathcal E,\mathcal F, \mathcal X_\mathrm{m}\}.} +

+

The marked states are just some states with special roles in the system. Namely, these are the states into which the system should be controlled. I do not particularly like this idea of mixing the model of the system with the requirements, but some part of the community likes it this way.

+
+
+

Automaton as a (di)graph (also a state transition diagram)

+

So far the definition of an automaton was not particularly visual. This can be changed by viewing the automaton as a directed graph (digraph). These are the basic rules:

+
    +
  • State is represented as a node of the graph.
  • +
  • Transition from a given state to another state is represented as an edge connecting the two nodes.
  • +
  • Events (actions) are the labels attached to the edges. It is not necessary that each edge has its unique label.
  • +
+
+

Example 1 (Automaton as a digraph) Consider an automaton defined by these sets: \mathcal X = \{x_1,x_2,x_3\}, \mathcal X_0 = \{x_1\}, \mathcal E = \{e_1,e_2,e_3\}, \mathcal F = \{(x_1,e_1,x_2),(x_2,e_2,x_1),(x_1,e_3,x_3),(x_2,e_2,x_3)\}.

+

The corresponding digraph is in Fig 1.

+
+
+
+
+
+
+ + +G + + +init +init + + + +x₁ + +x₁ + + + +init->x₁ + + + + + +x₂ + +x₂ + + + +x₁->x₂ + + +e₁ + + + +x₃ + +x₃ + + + +x₁->x₃ + + +e₂ + + + +x₂->x₁ + + +e₂ + + + +x₂->x₃ + + +e₃ + + + +
+
+
+Figure 1: An example automaton as a digraph +
+
+
+
+
+
+

We may also encounter the following term.

+
+

Definition 2 (Active event function and set) Active event function (actually a multivalued function) \Gamma: \mathcal X \rightarrow 2^\mathcal{E} assigns to each state a set of active events. Active event set \Gamma(x) is the set of active events in a particular state x.

+
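Definition 1 and the active event function of Definition 2 map directly onto a small data structure. The sketch below (the data-structure choices are mine, not from the text) encodes the transition relation of Example 1 above as a set of triples:

```julia
# The automaton of Example 1 as plain Julia sets; names are illustrative.
X  = Set([:x1, :x2, :x3])                       # states
X0 = Set([:x1])                                 # initial states
E  = Set([:e1, :e2, :e3])                       # events (alphabet)
F  = Set([(:x1, :e1, :x2), (:x2, :e2, :x1),
          (:x1, :e3, :x3), (:x2, :e2, :x3)])    # transition relation

# one-step successors of state x under event e (a set, since F may be nondeterministic)
succ(x, e) = Set(x′ for (x₀, e₀, x′) in F if x₀ == x && e₀ == e)

# active event set Γ(x): events with at least one transition out of x
Γ(x) = Set(e for (x₀, e, _) in F if x₀ == x)
```

With this transition set, succ(:x2, :e2) yields both :x1 and :x3, which illustrates why \mathcal F is in general a relation rather than a function.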
+
+
+

Finite state automaton (FSA)

+

This may be regarded as a rather superfluous definition – a finite state automaton (FSA) is a state automaton with a finite set \mathcal X of states. It is also known as a finite state machine (FSM).

+
+
+

Execution of an automaton

+
    +
  • x_1\xrightarrow{e_1} x_2\xrightarrow{e_2} x_1 \xrightarrow{e_1} x_2 \xrightarrow{e_4} x_3\ldots
  • +
  • Sometimes also written as x_1,e_1,x_2,e_2,\ldots
  • +
+
+
+
+
+ +
+
+Notational confusion +
+
+
+

Here x_k for some k is the name of a particular state. It is not the name of a (yet to be introduced) state variable; In fact, it can be viewed as its value (also valuation).

+
+
+
    +
  • Some authors strictly distinguish between the state variable and the state (variable valuation), +
      +
    • similarly as in probability theory random variable X vs its value x, as in F(x) = P(X\leq x);
    • +
  • +
  • some do not, but then it may lead to confusion;
  • +
  • yet some others avoid the problem by not introducing state variables and only working with enumerated states.
  • +
+
+
+
+
+ +
+
+Notational confusion 2 +
+
+
+

Even worse, it is also tempting to interpret the lower index k as (discrete) time, but no: in the execution above, k is not the time index.

+

Again, some authors do not distinguish…

+
+
+
+
+

Path of an automaton

+

Corresponding to the execution

+

x_1\xrightarrow{e_1} x_2\xrightarrow{e_2} x_1 \xrightarrow{e_1} x_2 \xrightarrow{e_4} x_3\ldots

+

the path is just the sequence of visited states:

+

x_1,x_2,x_1,x_2,x_3,\ldots

+
+

In continuous-valued dynamical systems, we have a state trajectory, but then time stamps are attached to each visited state.

+
+
+

Example 2 (Beverage vending machine)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +waiting + +waiting + + + +init->waiting + + + + + +swiped + +swiped + + + +waiting->swiped + + +swipe card + + + +swiped->waiting + + +reject payment + + + +paid + +paid + + + +swiped->paid + + +accept payment + + + +coke_dispensed + +coke_dispensed + + + +paid->coke_dispensed + + +choose coke + + + +fanta_dispensed + +fanta_dispensed + + + +paid->fanta_dispensed + + +choose fanta + + + +coke_dispensed->waiting + + +take coke + + + +fanta_dispensed->waiting + + +take fanta + + + +
+
+
+Figure 2: Example of a digraph representation of the automaton for a beverage vending machine +
+
+
+
+
+
    +
  • State sequence (path): waiting, swiped, paid, coke_dispensed, waiting

  • +
  • Events sequence: swipe card, accept payment, choose coke, take coke

  • +
  • Indeed, the two states coke_dispensed and fanta_dispensed can be merged into just beverage_dispensed.

  • +
  • How about other paths? Longer? Shorter?

  • +
+
+
+
+
+
+
+ + +G + + +init +init + + + +waiting + + +waiting + + + +init->waiting + + + + + +swiped + +swiped + + + +waiting->swiped + + +swipe card + + + +swiped->waiting + + +reject payment + + + +paid + +paid + + + +swiped->paid + + +accept payment + + + +coke_dispensed + +coke_dispensed + + + +paid->coke_dispensed + + +choose coke + + + +fanta_dispensed + +fanta_dispensed + + + +paid->fanta_dispensed + + +choose fanta + + + +coke_dispensed->waiting + + +take coke + + + +fanta_dispensed->waiting + + +take fanta + + + +
+
+
+Figure 3: Example of a digraph representation of the automaton for a beverage vending machine with a marked state +
+
+
+
+
+

The waiting state can be marked (is accepting).

+
+
+

Example 3 (Longitudinal control of a ground vehicle)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +still + +still + + + +init->still + + + + + +accelerating + +accelerating + + + +still->accelerating + + +push acc + + + +cruising + +cruising + + + +accelerating->cruising + + +cruise ON + + + +coasting + +coasting + + + +accelerating->coasting + + +rel acc + + + +cruising->accelerating + + +push acc + + + +cruising->coasting + + +rel acc + + + +braking + +braking + + + +cruising->braking + + +push brake + + + +coasting->braking + + +push brake + + + +braking->still + + +zero vel + + + +braking->cruising + + +cruise ON + + + +braking->coasting + + +rel brake + + + +
+
+
+Figure 4: Example of a digraph representation of the automaton for a longitudinal control of a ground vehicle +
+
+
+
+
+
+
+
    +
  • By cruise on I mean switching on some kind of a cruise control system, which keeps the velocity constant.
  • +
  • It turns out to be the optimal control strategy for trains (under some circumstances).
  • +
  • Note that some of the events are indeed actions started by the driver, but some are just coming from the physics of the vehicle (transition from braking to zero velocity).
  • +
+
+
+

Example 4 (Corridor switch)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +OFF + +OFF + + + +init->OFF + + + + + +ON + +ON + + + +OFF->ON + + +switch₁,switch₂ + + + +ON->OFF + + +switch₁,switch₂ + + + +
+
+
+Figure 5: Example of a digraph representation of the automaton for a corridor switch +
+
+
+
+
+

Two events associated with one transition can be seen as two transitions, each with a single event, both sharing the starting and ending states.

+
+
+

Example 5 (JK flip-flop) We now consider the classical JK flip-flop logical circuit. Its symbol is in Fig. 6 and the truth table follows. Our goal is to represent its functionality using a state automaton.

+
+
+
+ +
+
+Figure 6: Symbol for a JK flip-flop logical circuit +
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
J | K | Q_k | Q_{k+1} | Description
0 | 0 | 0 | 0 | No change
0 | 0 | 1 | 1 | No change
0 | 1 | 0 | 0 | Reset
0 | 1 | 1 | 0 | Reset
1 | 0 | 0 | 1 | Set
1 | 0 | 1 | 1 | Set
1 | 1 | 0 | 1 | Toggle
1 | 1 | 1 | 0 | Toggle
+
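The table is precisely the truth table of the JK characteristic equation Q_{k+1} = J\bar{Q}_k + \bar{K} Q_k, which a short script can verify row by row:

```julia
# JK flip-flop next state: Q⁺ = (J AND NOT Q) OR (NOT K AND Q)
jk_next(J, K, Q) = (J && !Q) || (!K && Q)

# reproduce the truth table: (J, K, Q_k) => expected Q_{k+1}
table = [(0,0,0) => 0, (0,0,1) => 1,   # no change
         (0,1,0) => 0, (0,1,1) => 0,   # reset
         (1,0,0) => 1, (1,0,1) => 1,   # set
         (1,1,0) => 1, (1,1,1) => 0]   # toggle
for ((J, K, Q), Qnext) in table
    @assert jk_next(Bool(J), Bool(K), Bool(Q)) == Bool(Qnext)
end
```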
+
+
+
+
+
+ + +G + + +init +init + + + +Low + +Low + + + +init->Low + + + + + +Low->Low + + +¬J ∧ ¬K ∧ clk + + + +High + +High + + + +Low->High + + +J ∧ clk + + + +High->Low + + +K ∧ clk + + + +High->High + + +¬J ∧ ¬K ∧ clk + + + +
+
+
+Figure 7: JK flip-flop as an automaton +
+
+
+
+
+
+
+

Example 6 (Double intensity switching)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +OFF + +OFF + + + +init->OFF + + + + + +ON + +ON + + + +OFF->ON + + +push + + + +ON->OFF + + +push + + + +ON2 + +ON2 + + + +ON->ON2 + + +push + + + +ON2->OFF + + +push + + + +
+
+
+Figure 8: Example of a digraph representation of the automaton for double intensity switching +
+
+
+
+
+

Obviously we need to introduce time into the automaton…

+
+
+
+

State as the value of a state variable

+

Definition of the state space by enumeration (such as \mathcal X = \{0,1,2,3,4,5\}) doesn’t scale well. As an alternative, a state can be characterized by the value (sometimes also valuation) of a state variable. A state variable is then given by

+
    +
  • the name (for example, x),
  • +
  • the “type” (boolean, integer, vector, …).
  • +
+
+

Example 7 (Examples of state variables)  

+
    +
  • Corridor switch: x \in \{\mathrm{false},\mathrm{true}\} (possibly also \{0,1\}).
  • +
  • Double intensity switching: +
      +
    • x \in \{0,1,2\} \subset \mathbb Z,
    • +
    • or \bm x = \begin{bmatrix}x_1\\ x_2 \end{bmatrix}, where x_1,x_2 \in \{0,1\}.
    • +
  • +
+
+
+
+

State (transition) equation

+

Denoting a new state after a transition as x^+, the state equation reads \boxed{x^+ = f(x,e)}

+

Upon introduction of discrete-time (index) k, it can also be rewritten as x_{k+1} = f(x_k,e_k) or also x[k+1] = f(x[k],e[k]).
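As a concrete instance, the corridor switch of Example 4 has a boolean state and two events that both toggle it, so its state transition function is a one-liner (the symbol names below are my own):

```julia
# Corridor switch (Example 4): x ∈ {false, true}; either switch event toggles it.
f(x, e) = e in (:switch₁, :switch₂) ? !x : x    # x⁺ = f(x, e)

trace = (:switch₁, :switch₂, :switch₂)          # a possible event sequence
x = foldl(f, trace; init=false)                 # light initially OFF
# an odd number of switch events leaves the light ON
```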

+
+
+
+ +
+
+Note +
+
+
+
    +
  • The function f can be defined by computer code rather than a clean mathematical formula.
  • +
  • The discrete-time index of the event is sometimes considered shifted, that is x_{k+1} = f(x_k,e_{k+1}). You should be aware of this.
  • +
+
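Indeed, the transition function f can be given directly by code. A minimal Python sketch for the double intensity switching example, with the states encoded as x ∈ {0, 1, 2} (the encoding is our assumption):

```python
# Transition function x⁺ = f(x, e) for double intensity switching:
# states OFF = 0, ON = 1, ON2 = 2, single event "push".
def f(x, e):
    if e == "push":
        return (x + 1) % 3      # OFF -> ON -> ON2 -> OFF -> ...
    return x                    # other events leave the state unchanged

x = 0                           # initial state OFF
for e in ["push", "push", "push", "push"]:
    x = f(x, e)
print(x)                        # back to ON (= 1) after four pushes
```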
+
+
+
+

Extensions

+

The concept of an automaton can be extended in several ways. In particular, the following two extensions introduce the concept of an output to an automaton.

+
+

Moore machine

+

One extension of an automaton with outputs is the Moore machine. The outputs are assigned to the states by the output function y = g(x).

+

The output is produced (emitted) when the (new) state is entered.

+

Note, in particular, that the output does not depend on the input. This has a major advantage when a feedback loop is closed around this system, since no algebraic loop is created.

+

Graphically, we adopt the convention that outputs are the labels of the states.

+
+

Example 8 (Moore machine) The following automaton has three states, but only two outputs (FLOW and NO FLOW).

+
+
+
+
+
+
+ + +G + + +init +init + + + +closed + +NO FLOW +Valve +closed + + + +init->closed + + + + + +partial + +FLOW +Valve +partially +open + + + +closed->partial + + +open valve one turn + + + +partial->closed + + +close valve one turn + + + +full + +FLOW +Valve +fully open + + + +partial->full + + +open valve one turn + + + +full->closed + + +emergency shut off + + + +full->partial + + +close valve one turn + + + +
+
+
+Figure 9: Example of a digraph representation of the Moore machine for a valve control +
+
+
+
+
+
+
+
+
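The valve automaton of Example 8 can be sketched in Python as a Moore machine, with the output function g defined on the states only (the state and event names below follow Figure 9):

```python
# Moore machine sketch of the valve example: output y = g(x) depends on the state only.
transitions = {
    ("closed", "open valve one turn"): "partial",
    ("partial", "close valve one turn"): "closed",
    ("partial", "open valve one turn"): "full",
    ("full", "close valve one turn"): "partial",
    ("full", "emergency shut off"): "closed",
}
g = {"closed": "NO FLOW", "partial": "FLOW", "full": "FLOW"}  # output function

x = "closed"
for e in ["open valve one turn", "open valve one turn", "emergency shut off"]:
    x = transitions[(x, e)]
    print(x, "->", g[x])  # the output is emitted when the new state is entered
```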

Mealy machine

+

The Mealy machine is another extension of an automaton. Here the outputs are associated with the transitions rather than the states.

+

Since the events already associated with the transitions can be viewed as the inputs, we now have input/output transition labels. The transition label e_\mathrm{i}/e_\mathrm{o} on the transition from x_1 to x_2 reads as “the input event e_\mathrm{i} at state x_1 activates the transition to x_2, which outputs the event e_\mathrm{o}” and can be written as x_1\xrightarrow{e_\mathrm{i}/e_\mathrm{o}} x_2.

+

It can be viewed as if the output function also considers the input and not only the state y = e_\mathrm{o} = g(x,e_\mathrm{i}).

+

In contrast with the Moore machine, here the output is produced (emitted) during the transition (before the new state is entered).

+
+

Example 9 (Mealy machine) Coffee machine: coffee for 30 CZK, machine accepting 10 and 20 CZK coins, no change.

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +No coin + + + +init->0 + + + + + +10 + +10 CZK + + + +0->10 + + +insert 10 CZK / no coffee + + + +20 + +20 CZK + + + +0->20 + + +insert 20 CZK / no coffee + + + +10->0 + + +insert 20 CZK / coffee + + + +10->20 + + +insert 10 CZK / no coffee + + + +20->0 + + +insert 10 CZK / coffee + + + +20->10 + + +insert 20 CZK / coffee + + + +
+
+
+Figure 10: Example of a digraph representation of the Mealy machine for a coffee machine +
+
+
+
+
+
+
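The coffee machine of Example 9 can be sketched in Python as a Mealy machine: the transition function returns both the next state and the output event attached to the transition (the encoding of states by the accumulated amount is our assumption):

```python
# Mealy machine sketch of the coffee machine (coffee for 30 CZK, accepting
# 10 and 20 CZK coins, no change): the output is attached to the transition,
# y = g(x, e).
delta = {  # (state, input event) -> (next state, output event)
    (0, 10): (10, "no coffee"),
    (0, 20): (20, "no coffee"),
    (10, 10): (20, "no coffee"),
    (10, 20): (0, "coffee"),
    (20, 10): (0, "coffee"),
    (20, 20): (10, "coffee"),  # 20 + 20 = 40: coffee dispensed, no change given
}

x = 0
for coin in [10, 20, 20, 20]:
    x, y = delta[(x, coin)]
    print(y)  # emitted during the transition, before the new state is entered
```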
+

Example 10 (Reformulate the previous example as a Moore machine) Two more states are needed compared to the Mealy machine.

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +NO COFFEE +No +coin + + + +init->0 + + + + + +10 + +NO COFFEE +10 +CZK + + + +0->10 + + +insert 10 CZK + + + +20 + +NO COFFEE +20 +CZK + + + +0->20 + + +insert 20 CZK + + + +10->20 + + +insert 10 CZK + + + +30 + +COFFEE +10+20 +CZK + + + +10->30 + + +insert 20 CZK + + + +20->30 + + +insert 10 CZK + + + +40 + +COFFEE +20+20 +CZK + + + +20->40 + + +insert 20 CZK + + + +30->0 + + + + + +30->10 + + +insert 10 CZK + + + +30->20 + + +insert 20 CZK + + + +40->10 + + + + + +40->20 + + +insert 10 CZK + + + +40->30 + + +insert 20 CZK + + + +
+
+
+Figure 11: Example of a digraph representation of the Moore machine for a coffee machine +
+
+
+
+
+
+
+
+
+ +
+
+Note +
+
+
+

There are transitions from 30 and 40 back to 0 that are not labelled by any event. This does not seem to follow the general rule that transitions are always triggered by events. What now? This can be resolved by introducing time and treating them as timeout transitions.

+
+
+
+

Example 11 (Dijkstra’s token passing) The motivation for this example is to show that it is perhaps not always productive to insist on visual description of the automaton using a graph. The four components of our formal definition of an automaton are just enough, and they translate directly to a code.

+

The example comes from the field of distributed computing systems. It considers several computers that are connected in a ring topology, with one-directional communication, as Fig 12 shows. The task is to use the communication to determine – in a distributed way – which of the computers carries a (single) token at a given time, and to realize passing of the token to a neighbour. We assume a synchronous case, in which all the computers send simultaneously, say, with some fixed sending period.

+
+
+
+
+
+
+ + +G + + +0 + +0 + + + +1 + +1 + + + +0->1 + + + + + +2 + +2 + + + +1->2 + + + + + +3 + +3 + + + +2->3 + + + + + +3->0 + + + + + +
+
+
+Figure 12: Example of a ring topology for Dijkstra’s token passing in a distributed system +
+
+
+
+
+

One popular method for this is called Dijkstra’s token passing. Each computer keeps a single integer value as its state variable. And it forwards this integer value to the neighbour (in the clockwise direction in our setting). Upon receiving the value from the other neighbour (in the counter-clockwise direction), it updates its own value according to the rule displayed in the code below. At every clock tick, the state vector (composed of the individual state variables) is updated according to the function update!() in the code. Based on the value of the state vector, an output is computed, which decodes the information about the location of the token from the state vector. Again, the details are in the output() function.

+
+
+Show the code +
struct DijkstraTokenRing
+    number_of_nodes::Int64
+    max_value_of_state_variable::Int64
+    state_vector::Vector{Int64}
+end
+
+function update!(dtr::DijkstraTokenRing)                        
+    n = dtr.number_of_nodes
+    k = dtr.max_value_of_state_variable
+    x = dtr.state_vector
+    xnext = copy(x)
+    for i in eachindex(x)   # Mind the +1 shift. x[2] corresponds to x₁ in the literature.
+        if i == 1                                              
+            xnext[i] = (x[i] == x[n]) ? mod(x[i] + 1,k) : x[i]  # Increment if the left neighbour is identical.
+        else                                                    
+            xnext[i] = (x[i] != x[i-1]) ? x[i-1] : x[i]         # Update by the differing left neighbour.
+        end
+    end
+    dtr.state_vector .= xnext                                              
+end
+
+function output(dtr::DijkstraTokenRing)     # Token = 1, no token = 0 at the given position. 
+    x = dtr.state_vector
+    y = similar(x)
+    y[1] = iszero(x[1]-x[end])
+    y[2:end] .= .!iszero.(diff(x))
+    return y
+end
+
+
+
output (generic function with 1 method)
+
+
+

We now run the code for a given number of computers and some initial state vector that does not necessarily comply with the requirement that there is only one token in the ring.

+
+
+Show the code +
n = 4                           # Concrete number of nodes.
+k = n                           # Concrete max value of a state variable (>= n).
+@show x_initial = rand(0:k,n)   # Initial state vector, not necessarily acceptable (>1 token in the ring).
+dtr = DijkstraTokenRing(n,k,x_initial)
+@show output(dtr)               # Show where the token is (are).
+
+@show update!(dtr), output(dtr) # Perform the update, show the state vector and show where the token is.
+@show update!(dtr), output(dtr) # Repeat a few times to see the stabilization.    
+@show update!(dtr), output(dtr)
+@show update!(dtr), output(dtr)
+@show update!(dtr), output(dtr)
+
+
+
x_initial = rand(0:k, n) = [4, 2, 0, 3]
+output(dtr) = [0, 1, 1, 1]
+(update!(dtr), output(dtr)) = ([4, 4, 2, 0], [0, 0, 1, 1])
+(update!(dtr), output(dtr)) = ([4, 4, 4, 2], [0, 0, 0, 1])
+(update!(dtr), output(dtr)) = ([4, 4, 4, 4], [1, 0, 0, 0])
+(update!(dtr), output(dtr)) = ([1, 4, 4, 4], [0, 1, 0, 0])
+(update!(dtr), output(dtr)) = ([1, 1, 4, 4], [0, 0, 1, 0])
+
+
+
([1, 1, 4, 4], [0, 0, 1, 0])
+
+
+

We can see that although initially there can be several tokens, after a few iterations the algorithm achieves the goal of having just one token in the ring.

+
+
+
+

Extended-state automaton

+

Yet another extension of an automaton is the extended-state automaton. And indeed, the hyphen is there on purpose as we extend the state space.

+

In particular, we augment the state variable(s) that define the states/modes/locations (the nodes in the graph) by additional (typed) state variables: Int, Enum, Bool, …

+

Transitions from one mode to another are then guarded by conditions on these new extra state variables.

+

Besides being guarded by a guard condition, a given transition can also be labelled by a reset function that resets the extended-state variables.

+
+

Example 12 (Counting up to 10) In this example, there are two modes (on and off), which can be captured by a single binary state variable, say x. But then there is an additional integer variable k, and the two variables together characterize the extended state.

+
+
+
+
+
+
+ + +G + + +init +init + + + +OFF + +OFF + + + +init->OFF + + +int k=0 + + + +ON + +ON + + + +OFF->ON + + +press + + + +ON->OFF + + +(press ⋁ k ≥ 10); k=0 + + + +ON->ON + + +(press ∧ k < 10); k=k+1 + + + +
+
+
+Figure 13: Example of a digraph representation of the extended-state automaton for counting up to ten +
+
+
+
+
+
+
+
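A Python sketch of the extended-state automaton of Example 12 follows. Note that when the button is pressed while k < 10, both outgoing guards of ON are formally satisfied; the sketch resolves this by giving the self-loop priority, which is our assumption about the intended semantics:

```python
# Extended state: mode x ∈ {"OFF", "ON"} plus an integer state variable k.
def step(x, k, pressed):
    if x == "OFF" and pressed:
        return "ON", k                  # OFF -> ON on press (k unchanged)
    if x == "ON" and pressed and k < 10:
        return "ON", k + 1              # guard: press ∧ k < 10; reset: k := k + 1
    if x == "ON" and (pressed or k >= 10):
        return "OFF", 0                 # guard: press ∨ k ≥ 10; reset: k := 0
    return x, k                         # no transition enabled: stay put

x, k = "OFF", 0                         # init: int k = 0
for _ in range(12):                     # press the button twelve times
    x, k = step(x, k, True)
print(x, k)                             # after counting up to 10, back to OFF
```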
+
+

Composing automata

+

Any practically useful modelling framework should support decomposition of a large system into smaller subsystems. These should then be able to communicate/synchronize with each other. In automata such synchronization can be realized by sending (or generating) and receiving (or accepting) events. A common choice of symbols for the two is !,?, as illustrated in the following example. But these symbols are just one possible convention, and any other symbols can be used.

+
+

Example 13 (Composing automata)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +1 + +1 + + + +init->1 + + + + + +2 + +2 + + + +1->2 + + +press? + + + +3 + +3 + + + +3->3 + + +press! + + + +
+
+
+Figure 14: Example illustrating how two automata can be synchronized by sending and receiving events +
+
+
+
+
+
+
+
+

Languages and automata

+

When studying automata, we often encounter the concept of a language. Indeed, the concept of an automaton is heavily used in formal language theory. Although in our course we are not going to refer to these results, some resources we recommend for our courses do, and so it is useful to understand how automata and languages are related.

+

First, we extend the definition of a transition function in that it accepts the current state and not just a single event but a sequence of events, that is

+

+f: \mathcal X \times \mathcal E^\ast \rightarrow \mathcal X, + where \mathcal E^\ast stands for the set of all possible sequences of events.

+

Language generated by the automaton is +\mathcal L(\mathcal G) = \{s\in\mathcal E^\ast \mid f(x_0,s) \;\text{is defined}\} +

+

Language marked by the automaton (the automaton is accepting or recognizing that language) +\mathcal L_\mathrm{m}(\mathcal G) = \{s\in\mathcal L(\mathcal G) \mid f(x_0,s) \in \mathcal{X}_\mathrm{m}\} +

+
+

Example 14 (Language accepted by automaton) +\mathcal{E} = \{a,b\}, \mathcal{L} = \{a,aa,ba,aaa,aba,baa,bba,\ldots\} +

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +0 + + + +init->0 + + + + + +1 + + +1 + + + +1->1 + + +a + + + +1->0 + + +b + + + +0->1 + + +a + + + +0->0 + + +b + + + +
+
+
+Figure 15: Example of an automaton generating the language \mathcal{L} = \{a,aa,ba,aaa,aba,baa,bba,\ldots\} +
+
+
+
+
+

What if we remove the self loop at state 0? The automaton then generates only strings that start with a and in which every b is either the last event or immediately followed by a.
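The marked language of the automaton in Fig 15 can also be enumerated programmatically. A Python sketch, assuming state 1 is the marked state and 0 the initial one:

```python
# Enumerate the language marked by the automaton of Fig 15:
# f is a total transition function over the event set {a, b}.
from itertools import product

f = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0}

def run(s, x0=0):
    """Extended transition function f(x0, s) for a string of events s."""
    x = x0
    for e in s:
        x = f[(x, e)]
    return x

# All strings of length 1 to 3 that end in the marked state 1.
marked = ["".join(s) for n in (1, 2, 3)
          for s in product("ab", repeat=n) if run(s) == 1]
print(marked)  # ['a', 'aa', 'ba', 'aaa', 'aba', 'baa', 'bba'] — strings ending in a
```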

+
+
+

What is the language view of automata good for?

+
    +
  • Definitions, analysis, synthesis.
  • +
  • We then need language concepts such as +
      +
    • concatenation of strings: \quad c = ab
    • +
    • empty string \varepsilon: \quad\varepsilon a = a \varepsilon = a
    • +
    • prefix, suffix
    • +
    • prefix closure \bar{\mathcal{L}} (of the language \mathcal L)
    • +
    • +
  • +
+
+
+
+

Blocking

+

An important concept in automata is blocking. A state is blocking if no marked state can be reached from it. In particular, a deadlock state has no outgoing transitions at all, while livelock states form a set within which the automaton can keep transitioning forever without ever reaching a marked state. An example follows.

+
+

Example 15 (Blocking states) In the automaton in Fig 16, state 5 is blocking. It is a deadlock state. States 3 and 4 are livelock states.

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +0 + + + +init->0 + + + + + +2 + + +2 + + + +2->0 + + +g + + + +1 + +1 + + + +0->1 + + +a + + + +1->2 + + +b + + + +5 + +5 + + + +1->5 + + +g + + + +3 + +3 + + + +1->3 + + +a + + + +4 + +4 + + + +3->4 + + +b + + + +4->3 + + +a + + + +4->4 + + +g + + + +
+
+
+Figure 16: Example of an automaton with blocking states +
+
+
+
+
+

Language characterization: \bar{\mathcal{L}}_\mathrm{m}(\mathcal G) \subset \mathcal L(\mathcal G).

+
+
+
+

Queueing systems

+

Queueing systems are a particular and very useful class of discrete-event systems. They consist of these three components:

+
    +
  • entities (also customers, jobs, tasks, requests, etc.)
  • +
  • resources (also servers, processors, etc.): customers are waiting for them
  • +
  • queues (also buffers): where waiting is done
  • +
+

A common graphical representation that contains all these three components is in Fig 17.

+
+
+
+ +
+
+Figure 17: Queueing system +
+
+
+
+

Examples of queueing systems

+
    +
  • entities: people waiting for service in a bank or at a bus stop
  • +
  • resources: people (again) in a bank at the counter
  • +
  • queues: bank lobbies, bus stops, warehouses, …
  • +
+
+
+
+ +
+
+Note +
+
+
+

What are other examples?

+
    +
  • entities: packets, …
  • +
  • resources: processor, computer periphery, router, …
  • +
  • queues: …
  • +
+
+
+
+
+

Why shall we study queueing systems?

+
    +
  • Resources are not unlimited
  • +
  • Tradeoff needed between customer satisfaction and fair resource allocation
  • +
+
+
+

Networks of queueing systems

+

Queueing systems can be interconnected into networks.

+
+
+
+ +
+
+Figure 18: Example of a network of queueing systems +
+
+
+
+
+

Queueing systems as automata

+

The reason why we mentioned queueing systems in this part of our course is that they can be modelled as automata. And we already know that in order to define an automaton, we must characterize its key components – three in this case:

+
    +
  • events: \mathcal E = \{\text{arrival},\text{departure}\};

  • +
  • states: number of customers in the queue +\mathcal X = \{0,1,2,3,\ldots\}, \quad \mathcal X_0 = \{0\}, +

  • +
+
+
+
+ +
+
+Note +
+
+
+

Obviously this is not a finite state automaton – unless the queue is bounded – and whether the queue’s length is bounded is a modelling assumption.

+
+
+
    +
  • state transition: +f(x,e) = +\begin{cases} +x+1, & \text{if}\; e = \mathrm{arrival}\\ +x-1, & \text{if}\; x > 0 \land e = \mathrm{departure}. +\end{cases} +
  • +
+
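The state transition function above translates directly into code. A Python sketch, in which an (infeasible) departure from an empty queue is assumed to leave the state unchanged (in the formal definition, f is simply not defined for that pair):

```python
# Queueing system as an automaton: the state x is the number of customers
# in the queue, the events are arrivals and departures.
def f(x, e):
    if e == "arrival":
        return x + 1
    if e == "departure" and x > 0:
        return x - 1
    return x  # departure from an empty queue: transition not defined, stay put

x = 0  # initial state: empty queue
for e in ["arrival", "arrival", "departure", "departure", "departure"]:
    x = f(x, e)
print(x)  # empty queue again
```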
+
+

Queueing system as an automaton

+
+
+
+ +
+
+Figure 19: Queueing system as an automaton +
+
+
+
+
+
+ +
+
+Note +
+
+
+

Note how the states correspond to the value of the state variable.

+
+
+
+

Example 16 (Example of a queueing system: jobs processing by a CPU)

+
+
+
+

Stochastic queueing systems

+

An important extension of the basic concept of a queueing system is the introduction of randomness. In particular, the arrivals can be modelled using random processes. Similarly, the departures given by the delays (the processing time) of the server can be modelled as random.

+

Obviously, the time needs to be included in the automaton, and so far we do not have it there. It is then high time to introduce it.

+
+
+
+

Timed automaton

+

So far, even if the automaton corresponded to a physical system (and did not just represent a generator of a language), the time was not included. The transitions were triggered by the events, but we did not specify the time at which the event occurred.

+

There are, however, many situations when it is useful or even crucial to incorporate time. We can then answer questions such as

+
    +
  • How many events of a certain type occur in a given interval?
  • +
  • Is the time interval between two events above a given threshold?
  • +
  • How long does the system spend in a given state?
  • +
  • +
+

There are several ways to incorporate time into the automaton. We will follow the concept of a timed automaton with guards (introduced by Alur and Dill). Within their framework we have

+
    +
  • one or several resettable clocks: c_i,\, i=1,\ldots, k, driven by the ODE + \frac{\mathrm{d} c_i(t)}{\mathrm d t} = 1, \quad c_i(0) = 0; +
  • +
  • each transition labelled by the triple {guard; event; reset}.
  • +
+
+
+
+ +
+
+Note +
+
+
+

Both satisfaction of the guard and arrival of the event constitute enabling conditions for the transition. They could be wrapped into a single compound condition.

+
+
+
+

Example 17 (Timed automaton with guards)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +0 + + + +init->0 + + + + + +1 + +1 + + + +0->1 + + +-; msg; c₁ + + + +1->1 + + +c₁≥1; msg; c₁ + + + +2 + +2 + + + +1->2 + + +0<c₁<1; msg; c₁ + + + +3 + +3 + + + +2->3 + + +c₁<1; alarm; - + + + +
+
+
+Figure 20: Example of a timed automaton with guards +
+
+
+
+
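The timed automaton of Fig 20 can be simulated on a sequence of time-stamped events. A Python sketch with the single clock c₁ implemented as the time elapsed since its last reset:

```python
# Timed automaton of Fig 20: locations 0..3, clock c1, events "msg" and "alarm".
# Each branch encodes one transition with its {guard; event; reset} triple.
def run(timed_events):
    loc, c1_started_at = 0, None
    for t, e in timed_events:                    # events sorted by time t
        c1 = None if c1_started_at is None else t - c1_started_at
        if loc == 0 and e == "msg":
            loc, c1_started_at = 1, t            # -; msg; reset c1
        elif loc == 1 and e == "msg" and c1 >= 1:
            loc, c1_started_at = 1, t            # c1 >= 1; msg; reset c1
        elif loc == 1 and e == "msg" and 0 < c1 < 1:
            loc, c1_started_at = 2, t            # 0 < c1 < 1; msg; reset c1
        elif loc == 2 and e == "alarm" and c1 < 1:
            loc = 3                              # c1 < 1; alarm; no reset
    return loc

# Two messages less than 1 time unit apart, quickly followed by an alarm:
print(run([(0.0, "msg"), (0.5, "msg"), (0.9, "alarm")]))  # ends in location 3
```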
+
+
+

Example 18 (Timed automaton with guards and invariant)  

+
+
+
+
+
+
+ + +G + + +init +init + + + +0 + +0 + + + +init->0 + + + + + +2 + +2 +c₁<1 + + + +3 + +3 + + + +2->3 + + +-; alarm; - + + + +1 + +1 + + + +0->1 + + +-; msg; c₁ + + + +1->2 + + +0<c₁<1; msg; c₁ + + + +1->1 + + +c₁≥1; msg; c₁ + + + +
+
+
+Figure 21: Example of a timed automaton with guards and invariant +
+
+
+
+
+
+
+

Invariant vs guard

+
    +
  • Invariant (of a location) gives an upper bound on the time the system can stay at the given location. It can leave earlier but not later.
  • +
  • Guard (of a given transition) gives an enabling condition on leaving the location through the given transition.
  • +
+
+

Example 19 (Several trains approaching a bridge) The example is taken from [1] and is included in the demos coming with the Uppaal tool.

+
+ + + +
+
+ + Back to top

References

+
+
[1]
G. Behrmann, A. David, and K. G. Larsen, “A Tutorial on Uppaal,” in Formal Methods for the Design of Real-Time Systems, M. Bernardo and F. Corradini, Eds., in Lecture Notes in Computer Science, no. 3185., Berlin, Heidelberg: Springer, 2004, pp. 200–236. doi: 10.1007/978-3-540-30080-9_7.
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/des_automata.html b/des_automata.html index d0305b1..98891f4 100644 --- a/des_automata.html +++ b/des_automata.html @@ -2131,16 +2131,16 @@

Mealy machine

@show update!(dtr), output(dtr)
-
x_initial = rand(0:k, n) = [1, 2, 1, 4]
-output(dtr) = [0, 1, 1, 1]
-(update!(dtr), output(dtr)) = ([1, 1, 2, 1], [1, 0, 1, 1])
-(update!(dtr), output(dtr)) = ([2, 1, 1, 2], [1, 1, 0, 1])
-(update!(dtr), output(dtr)) = ([3, 2, 1, 1], [0, 1, 1, 0])
-(update!(dtr), output(dtr)) = ([3, 3, 2, 1], [0, 0, 1, 1])
-(update!(dtr), output(dtr)) = ([3, 3, 3, 2], [0, 0, 0, 1])
+
x_initial = rand(0:k, n) = [1, 1, 0, 3]
+output(dtr) = [0, 0, 1, 1]
+(update!(dtr), output(dtr)) = ([1, 1, 1, 0], [0, 0, 0, 1])
+(update!(dtr), output(dtr)) = ([1, 1, 1, 1], [1, 0, 0, 0])
+(update!(dtr), output(dtr)) = ([2, 1, 1, 1], [0, 1, 0, 0])
+(update!(dtr), output(dtr)) = ([2, 2, 1, 1], [0, 0, 1, 0])
+(update!(dtr), output(dtr)) = ([2, 2, 2, 1], [0, 0, 0, 1])
-
([3, 3, 3, 2], [0, 0, 0, 1])
+
([2, 2, 2, 1], [0, 0, 0, 1])

We can see that although initially there can be several tokens, after a few iterations the algorithm achieves the goal of having just one token in the ring.

diff --git a/des_references 9.html b/des_references 9.html new file mode 100644 index 0000000..963a77e --- /dev/null +++ b/des_references 9.html @@ -0,0 +1,1108 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

Literature for discrete-event systems is vast, but within the control systems community the classical (and award-winning) reference is [1]. Note that an electronic version of the previous edition (perfectly acceptable for us) is accessible through the NTK library (possibly upon CTU login). This book is rather thick too, and covering its content could easily take a full semester. However, in our course we will only need the very basics of the theory of (finite state) automata, and such basics are presented in Chapters 1 and 2. The extension to timed automata is then presented in Chapter 5.2, but the particular formalism for timed automata that we use follows [2], or perhaps even better [3].

+

+

The basics are also presented in the tutorial paper by the same author(s) [4]. A very short (but sufficient for us) intro to discrete-event systems that adheres to Cassandras’s style is given in the first chapter of the recent hybrid systems textbook [5].

+

Alternatively, there are some other recent textbooks that contain decent introductions to the theory of (finite state) automata. These are often surfing on the wave of popularity of the recently fashionable buzzword of cyberphysical or embedded systems, but in essence these deal with the same hybrid systems as we do in our course. The fact is, however, that the modeling formalism can be a bit different from the one in Cassandras (certainly when it comes to notation but also some concepts). One such textbook is [6], for which an electronic version accessible through the NTK library (upon CTU login). Another one is [7]. In particular, Chapter 2 serves as an intro to the automata theory. Last but not least, we mention [8], for which an electronic version is freely downloadable.

+ + + + + Back to top

References

+
+
[1]
C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 3rd ed. Cham: Springer, 2021. Available: https://doi.org/10.1007/978-3-030-72274-6
+
+
+
[2]
R. Alur and D. L. Dill, “A theory of timed automata,” Theoretical Computer Science, vol. 126, no. 2, pp. 183–235, Apr. 1994, doi: 10.1016/0304-3975(94)90010-8.
+
+
+
[3]
R. Alur, “Timed Automata,” in Computer Aided Verification, N. Halbwachs and D. Peled, Eds., in Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 1999, pp. 8–22. doi: 10.1007/3-540-48683-6_3.
+
+
+
[4]
S. Lafortune, “Discrete Event Systems: Modeling, Observation, and Control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, no. 1, pp. 141–159, 2019, doi: 10.1146/annurev-control-053018-023659.
+
+
+
[5]
H. Lin and P. J. Antsaklis, Hybrid Dynamical Systems: Fundamentals and Methods. in Advanced Textbooks in Control and Signal Processing. Cham: Springer, 2022. Accessed: Jul. 09, 2022. [Online]. Available: https://doi.org/10.1007/978-3-030-78731-8
+
+
+
[6]
R. Alur, Principles of Cyber-Physical Systems. Cambridge, MA, USA: MIT Press, 2015. Available: https://mitpress.mit.edu/9780262029117/principles-of-cyber-physical-systems/
+
+
+
[7]
S. Mitra, “A verification framework for hybrid systems,” PhD thesis, Massachusetts Institute of Technology, 2007. Accessed: Sep. 04, 2022. [Online]. Available: https://dspace.mit.edu/handle/1721.1/42238
+
+
+
[8]
E. A. Lee and S. A. Seshia, Introduction to Embedded Systems: A Cyber-Physical Systems Approach, 2nd ed. Cambridge, MA, USA: MIT Press, 2017. Available: https://ptolemy.berkeley.edu/books/leeseshia//
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/des_software 7.html b/des_software 7.html new file mode 100644 index 0000000..de2e3ea --- /dev/null +++ b/des_software 7.html @@ -0,0 +1,1173 @@ + + + + + + + + + +Software – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

The number of software tools for defining and analysing state automata is huge, as is the number of domains of application of this modelling concept. As we are leaning towards the control systems domain, we first encounter the tools produced by The MathWorks company (Matlab and Simulink).

+ +
+

(Open)Modelica

+

A popular modelling language for physical systems is Modelica. Starting with version 3.3 (several years ago already), it has support for state machines, see Chapter 17 in the language specification and the screenshot in Fig 2 below. A readable introduction to state machines in Modelica is in [1].

+
+
+
+ +
+
+Figure 2: Screenshot of a state machine diagram in Modelica +
+
+
+

Several implementations of the Modelica language and compiler exist. On the FOSS side, OpenModelica is a popular choice. Slides from an introductory presentation [2] about state machines in OpenModelica are available for free download.

+
+
+

UPPAAL

+

UPPAAL is dedicated software for timed automata, supporting not only modelling and simulation but also formal verification. Available at https://uppaal.org/. In our course we will only use it in this block/week. A tutorial is [3].

+
+
+

Python

+

SimPy – discrete-event simulation in Python. We are not going to use it in our course, but if you are a Python enthusiast, you may want to have a look at it.

+
+
+

Julia

+

Two major packages for discrete-event simulation in Julia are:

+ +
+
+

UML/SysML

+

If you have been exposed to software engineering, you have probably seen UML diagrams. Their extension (and restriction at the same time) toward systems that also contain hardware is called SysML. And SysML does have support for defining state machines (by drawing the state machine diagrams), see the screenshot in Fig 3 below.

+
+
+
+ +
+
+Figure 3: Screenshot of a state machine diagram in SysML +
+
+
+

SysML standard also augments the original concept of a state automaton with hierarchies, and some more. But we are not going to discuss it here. Should you need to follow this standard in your project, you may consider exploring some free&open-source (FOSS) tool for creating SysML diagrams such as Modelio or Eclipse Papyrus. But we are not going to use them in our course.

+
+
+

Drawing tools

+

Last but not least, you may want only to draw state automata (state machines). While there is no shortage of general WYSIWYG drawing and diagramming tools, you may want to consider Graphviz software that processes text description of automata in DOT language. This is what I used in this lecture. As an alternative, but still text-based, you may want to give a try to Mermaid, which can also draw what they call state diagrams.

+

If you still prefer WYSIWYG tools, have a look at IPE, which I also used for some other figures in this lecture and in the rest of the course. Unlike most other tools, it also allows entering LaTeX math.

+ + + +
+ + Back to top

References

+
+
[1]
H. Elmqvist, F. Gaucher, S. E. Mattson, and F. Dupont, “State Machines in Modelica,” in Proceedings of 9th International MODELICA Conference, Munich, Germany, Sep. 2012, pp. 37–46. doi: 10.3384/ecp1207637.
+
+
+
[2]
B. Thiele, “State Machines in OpenModelica - Current Status and Further Development.” Feb. 2015. Available: https://openmodelica.org/images/docs/openmodelica2015/OpenModelica2015-talk14-OMStateMachines_Bernhard%20Thiele.pdf
+
+
+
[3]
G. Behrmann, A. David, and K. G. Larsen, “A Tutorial on Uppaal,” in Formal Methods for the Design of Real-Time Systems, M. Bernardo and F. Corradini, Eds., in Lecture Notes in Computer Science, no. 3185., Berlin, Heidelberg: Springer, 2004, pp. 200–236. doi: 10.1007/978-3-540-30080-9_7.
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/hybrid_automata 19.html b/hybrid_automata 19.html new file mode 100644 index 0000000..7298c3e --- /dev/null +++ b/hybrid_automata 19.html @@ -0,0 +1,1624 @@ + + + + + + + + + +Hybrid automata – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Hybrid automata

+
+ + + +
+ + + + +
+ + + +
+ + +

Well, here we are at last. After these three introductory topics on discrete-event systems, we’ll finally get into hybrid systems.

+

There are two frameworks for modelling hybrid systems:

+
    +
  • hybrid automaton, and
  • +
  • hybrid (state) equations.
  • +
+

Here we start with the former and save the latter for the next chapter/week.

+

First we consider an autonomous (=no external/control inputs) hybrid automaton – it is a tuple of sets and (set) mappings +\boxed{ +\mathcal{H} = \{\mathcal Q, \mathcal Q_0, \mathcal X, \mathcal X_0, f, \mathcal I, \mathcal E, \mathcal G, \mathcal R\},} + where

+
    +
  • \mathcal Q is a set of discrete states (also called modes or operating modes or locations).

    +
      +
    • Examples: +
        +
      • \mathcal Q = \{\text{on}, \text{off}\},
      • +
      • \mathcal Q = \{\text{working}, \text{broken},\text{in repair}\},
      • +
      • \mathcal Q = \{\text{gear}\,1, \ldots, \text{gear}\,5\} .
      • +
    • +
    • It can be characterized either by +
        +
      • by enumeration like above, or
      • +
      • using a state variable q(t) attaining discrete values. The variable can also be a vector one, possibly a binary vector state variable encoding an integer scalar state variable.
      • +
    • +
  • +
  • \mathcal Q_0\subseteq \mathcal Q is a set of initial discrete states.

    +
      +
    • It can contain only a single element. +
        +
      • Example: \mathcal Q_0 = \{\text{off}\}.
      • +
    • +
    • But if it contains more than one element, it can be used to represent uncertainty in the initial state.
    • +
  • +
  • \mathcal X\subseteq \mathbb R^n is a set of continuous states.

    +
      +
    • Rather than by enumeration as in the case of discrete state, it is characterized by real-valued state variables x, oftentimes vector ones \bm x. Often they are denoted \bm x(t) to emphasize the evolution in time.
    • +
  • +
  • \mathcal X_0\subseteq \mathcal X is a set of initial continuous states.

    +
      +
    • A set of values of the (vector) state variable at the initial time.
    • +
    • Often just a single initial state \mathcal X_0=\{x_0\}, but it can be useful to set ranges of the values of individual state variables to account for uncertainty.
    • +
  • +
  • f:\mathcal{Q}\times \mathcal X \rightarrow \mathbb R^n is a vector field parameterized by the location

    +
      +
    • Often the dependence on the location q is expressed as f_q(x) rather than the more symmetric f(q,x).

    • +
    • This defines a state equation parameterized by the location: \dot{x}(t) = f_q(x(t)).

    • +
    • It is also possible to consider a set-valued map, replacing f with \mathcal F: \mathcal{Q}\times \mathcal X \rightarrow 2^{\mathbb R^n}, which leads to the differential inclusion \dot x \in \mathcal F_q(x).

    • +
  • +
  • \mathcal I: \mathcal Q \rightarrow 2^\mathcal{X} gives a (location) invariant. It is also called a domain (of the location). The latter term is perhaps more appropriate because the term invariant is already heavily overloaded in the context of dynamical systems.

    +
      +
    • It is parameterized by the location.
    • +
    • It is a subset of the continuous-valued state space \mathcal I(q) \subseteq \mathbb R^n.
    • +
    • It is a set of values that the state variables are allowed to attain while staying in the given location; if the state of the systems evolves towards the boundary of this set with a tendency to leave it, the system must be ready to leave that location by transitioning to another location.
    • +
  • +
+
+
+
+ +
+
+Caution +
+
+
+

Strictly speaking, \mathcal{I} is a mapping and not a set. Only the mapping evaluated at a given location q, that is, \mathcal{I}(q), is a set.

+
+
+
    +
  • \mathcal E\subseteq \mathcal Q \times \mathcal Q is a set of transitions.

    +
      +
    • It is a set of the edges of the graph.
    • +
    • Example: \mathcal E = \{(\text{off},\text{on}),(\text{on},\text{off})\}, that is, a set with two elements.
    • +
  • +
  • \mathcal G: \mathcal E \rightarrow 2^\mathcal{X} gives a guard set.

    +
      +
    • It is associated with a given transition. In particular, \mathcal G(q_i,q_j) is the guard set for the transition (q_i,q_j)\in\mathcal E.
      +
    • +
    • The guard condition for the given transition is satisfied if x\in \mathcal G(q_i,q_j).
    • +
    • If the guard condition is satisfied, the transition is enabled – it may be executed, but it does not have to be.
    • +
    • The enabled transition must be executed when the state x leaves the invariant set of the original location.
    • +
  • +
  • \mathcal R: \mathcal E \times \mathcal X\rightarrow 2^{\mathcal X} is a reset map.

    +
      +
    • For a given transition from one location to another, it resets the continuous-valued state x to a new value within some subset.
    • +
    • Often the map is single-valued, r: \mathcal E \times \mathcal X\rightarrow \mathcal X (multivalued-ness can be used to model uncertainty).
    • +
    • We also say that the state experiences a jump.
      +
    • +
    • The state after the jump (associated with the given transition) is reset according to x^+ = r(q_i,q_j, x), or x^+ \in \mathcal R(q_i,q_j, x) in the multivalued case.
      +
    • +
    • If no resetting of the continuous-valued state takes place, the reset map is defined just as the identity operator with respect to x, that is, r(q_i,q_j, x) = x.
    • +
  • +
+
+

Example 1 (Thermostat – the hello world example of a hybrid automaton) The thermostat is a device that turns some heater on or off (or sets some valve open or closed) based on the sensed temperature. The goal is to keep the temperature around, say, 18^\circ C.

+

Naturally, the discrete states (modes, locations) are on and off. Initially, the heater is off. We can identify the first two components of the hybrid automaton: \mathcal Q = \{\text{on}, \text{off}\}, \quad \mathcal Q_0 = \{\text{off}\}

+

The only continuous state variable is the temperature. The initial temperature is not quite certain, say it is known to be in the interval [5,10]. Two more components of the hybrid automaton follow: \mathcal X = \mathbb R, \quad \mathcal X_0 = \{x:x\in \mathcal X, 5\leq x\leq 10\}

+

In the two modes on and off, the evolution of the temperature can be modelled by two different ODEs. Either from first-principles modelling or from system identification (or preferably from the combination of the two) we get the two differential equations, say: +f_\text{off}(x) = -0.1x,\quad f_\text{on}(x) = -0.1x + 5, + which gives another component for the hybrid automaton.

+

The control logic of the thermostat is captured by the \mathcal I and \mathcal G components of the hybrid automaton. Let’s determine them now. Obviously, if we just set 18 as the threshold, the heater would be switching on and off all the time. We need to introduce some hysteresis. Say, keeping the temperature within the interval (18 \pm 2)^\circ is acceptable. +\mathcal I(\text{off}) = \{x\mid x> 16\},\quad \mathcal I(\text{on}) = \{x\mid x< 20\}, +

+

+\mathcal G(\text{off},\text{on}) = \{x\mid x\leq 17\},\; \mathcal G(\text{on},\text{off}) = \{x\mid x\geq 19\}. +

+

Finally, \mathcal R (or r) is not specified explicitly, as the x variable (the temperature) doesn’t jump. Implicitly, it is the identity mapping r(x)=x.

+

The graphical representation of the thermostat hybrid automaton is shown in Fig 1.

+
+
+
+ +
+
+Figure 1: Hybrid automaton for a thermostat +
+
+
+

Is this model deterministic? There are actually two reasons why it is not:

+
    +
  1. If we regard the characterization of the initial state (the temperature in this case) as a part of the model, which is the convention that we adhere to in our course, the model is nondeterministic.
  2. +
  3. Since the invariant for a given mode and the guard set for the only transition to the other mode overlap, the response of the system is not uniquely determined. Consider the case when the system is in the off mode and the temperature is 16.5. The system can either stay in the off mode or switch to the on mode.
  4. +
+
+
+
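To make the execution concrete, here is a minimal Python sketch of the thermostat automaton of Example 1 (an illustration, not course code): it integrates f_off and f_on with the Euler method and resolves the nondeterminism discussed above by taking each transition as soon as its guard becomes enabled.

```python
def simulate_thermostat(x0=8.0, t_end=60.0, dt=1e-3):
    """Euler simulation of the thermostat automaton of Example 1.

    One deterministic execution is chosen: each transition is taken as
    soon as its guard is enabled (x <= 17 for off->on, x >= 19 for
    on->off), which the invariants I(off) = {x > 16}, I(on) = {x < 20}
    allow.  Note that the initial temperature lies outside I(off), so
    the transition to 'on' fires immediately.
    """
    q, x, t = "off", x0, 0.0
    traj = [(t, q, x)]
    while t < t_end:
        dx = -0.1 * x if q == "off" else -0.1 * x + 5.0   # flow map f_q
        x += dt * dx
        t += dt
        if q == "off" and x <= 17.0:                      # guard off -> on
            q = "on"
        elif q == "on" and x >= 19.0:                     # guard on -> off
            q = "off"
        traj.append((t, q, x))
    return traj

traj = simulate_thermostat()
# after the initial transient the temperature cycles between ~17 and ~19
final_temps = [x for (t, q, x) in traj if t > 30.0]
```

With this (arbitrary) tie-breaking policy the temperature settles into the hysteresis band, but any execution that switches anywhere inside the overlap of guard and invariant is equally valid.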

Hybrid automaton with external events and control inputs

+

We now extend the hybrid automaton with two new components:

+
    +
  • a set \mathcal{A} of (external) events (also actions or symbols),
  • +
  • a set \mathcal{U} of external continuous-valued inputs (control inputs or disturbances).
  • +
+

\boxed{ + \mathcal{H} = \{\mathcal Q, \mathcal Q_0, \mathcal X, \mathcal X_0, \mathcal I, \mathcal A, \mathcal U, f, \mathcal E, \mathcal G, \mathcal R\} ,} + where

+
    +
  • \mathcal A = \{a_1,a_2,\ldots, a_s\} is a set of events

    +
      +
    • The role is identical to that in a (finite) state automaton: an external event triggers an (enabled) transition from the current discrete state (mode, location) to another.
    • +
    • Unlike in pure discrete-event systems, here they are considered within a model that does recognize the passing of time – each action must be “time-stamped”.
    • +
    • In simulations such a timed event can be represented by an edge in a signal. In this regard, it might be tempting not to introduce it as a separate entity, but it is useful to do so.
    • +
  • +
  • \mathcal U\subseteq\mathbb R^m is a set of continuous-valued inputs

    +
      +
    • Real-valued functions of time.
    • +
    • Control inputs, disturbances, references, noises. In applications it will certainly be useful to distinguish these roles, but here we keep just a single type of such an external variable and do not have to distinguish them.
    • +
  • +
+
+

Some modifications needed

+

Upon introduction of these two types of external inputs we must modify the components of the definition we provided earlier:

+
    +
  • f: \mathcal Q \times \mathcal X \times \mathcal U \rightarrow \mathbb R^n is a vector field that now depends not only on the location but also on the external (control) input, that is, at a given location we consider the state equation \dot x = f_q(x,u).

  • +
  • \mathcal E\subseteq \mathcal Q \times (\mathcal A) \times \mathcal Q is a set of transitions now possibly parameterized by the actions (as in classical automata).

  • +
  • \mathcal I : \mathcal Q \rightarrow 2^{\mathcal{X}\times \mathcal U} is a location invariant now augmented with a subset of the control input set. The necessary condition for staying in the given mode can be thus imposed not only on x but also on u.

  • +
  • \mathcal G: \mathcal E \rightarrow 2^{\mathcal{X}\times \mathcal U} is a guard set now augmented with a subset of the control input set. The necessary condition for a given transition can be thus imposed not only on x but also on u.

  • +
  • \mathcal R: \mathcal E \times \mathcal X\times \mathcal U\rightarrow 2^{\mathcal X} is a (state) reset map that is now additionally parameterized by the control input.

  • +
+

If enabled, the transition can happen if one of two things occurs:

+
    +
  • the continuous state leaves the invariant set of the given location,
    +
  • +
  • an external event occurs.
  • +
+
+

Example 2 (Button-controlled LED)  

+
+
+
+ +
+
+Figure 2: Automaton for a button controlled LED +
+
+
+

+\mathcal{Q} = \{\mathrm{off}, \mathrm{dim}, \mathrm{bright}\},\quad \mathcal{Q}_0 = \{\mathrm{off}\} +

+

+\mathcal{X} = \mathbb{R}, \quad \mathcal{X}_0 = \{0\} +

+

+\mathcal{I(\mathrm{off})} = \mathcal{I(\mathrm{bright})} = \mathcal{I(\mathrm{dim})} = \{x\in\mathbb R \mid x \geq 0\} +

+

+f(x) = 1 +

+

+\mathcal{A} = \{\mathrm{press}\} +

+

+\begin{aligned} +\mathcal{E} &= \{(\mathrm{off},\mathrm{press},\mathrm{dim}),(\mathrm{dim},\mathrm{press},\mathrm{off}),\\ +&\qquad (\mathrm{dim},\mathrm{press},\mathrm{bright}),(\mathrm{bright},\mathrm{press},\mathrm{off})\} +\end{aligned} +

+

+\begin{aligned} +\mathcal{G}((\mathrm{off},\mathrm{press},\mathrm{dim})) &= \mathcal X \\ +\mathcal{G}((\mathrm{dim},\mathrm{press},\mathrm{off})) &= \{x \in \mathcal X \mid x>2\}\\ +\mathcal{G}((\mathrm{dim},\mathrm{press},\mathrm{bright})) &= \{x \in \mathcal X \mid x\leq 2\}\\ +\mathcal{G}((\mathrm{bright},\mathrm{press},\mathrm{off})) &= \mathcal X. +\end{aligned} +

+

+r((\mathrm{off},\mathrm{press},\mathrm{dim}),x) = 0, +

+
    +
  • that is, x^+ = r((\mathrm{off},\mathrm{press},\mathrm{dim}),x) = 0.
  • +
  • For all other transitions r((\cdot, \cdot, \cdot),x)=x, +
      +
    • that is, x^+ = x.
    • +
  • +
+
+
+
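The discrete dynamics of Example 2 can be traced with a short simulation. The sketch below (illustrative only) flows the clock with f(x) = 1 between press events, takes the press-triggered transitions with their guards, and applies the single non-identity reset on the off-to-dim edge.

```python
def led_run(press_times):
    """Trace of the button-controlled LED automaton of Example 2.

    The clock x flows with f(x) = 1; each 'press' event (time-stamped,
    in increasing order) takes the enabled transition, the guards on x
    deciding where a press from 'dim' leads.  Returns the sequence of
    visited locations.
    """
    q, x, t = "off", 0.0, 0.0
    visited = [q]
    for tp in press_times:
        x += tp - t                  # flow between events: dx/dt = 1
        t = tp
        if q == "off":
            q, x = "dim", 0.0        # the only non-identity reset, r = 0
        elif q == "dim":
            q = "bright" if x <= 2.0 else "off"
        else:                        # q == "bright"
            q = "off"
        visited.append(q)
    return visited

# a second press within 2 time units brightens, a later one switches off
quick = led_run([1.0, 2.0, 5.0])
slow = led_run([1.0, 4.0])
```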

Example 3 (Water tank) We consider a water tank with one inflow and two outflows – one at the bottom, the other at some nonzero height h_\mathrm{m}. The water level h is the continuous state variable.

+
+
+
+ +
+
+Figure 3: Water tank example +
+
+
+

The model essentially expresses that the change in the volume is given by the difference between the inflow and the outflows. The outflows are proportional to the square root of the water level (Torricelli’s law) +\dot V = +\begin{cases} +Q_\mathrm{in} - Q_\mathrm{out,middle} - Q_\mathrm{out,bottom}, & h>h_\mathrm{m}\\ +Q_\mathrm{in} - Q_\mathrm{out,bottom}, & h\leq h_\mathrm{m} +\end{cases} +

+

Apparently things change when the water level crosses (in any direction) the height h_\mathrm{m}. This can be modelled using a hybrid automaton.

+
+
+
+ +
+
+Figure 4: Automaton for a water tank example +
+
+
+

One lesson to learn from this example is that the transition from one mode to another is not necessarily due to some computer-controlled switch. Instead, it is our modelling choice. It is an approximation that assumes a negligible diameter of the middle pipe. But taking into consideration the volume of the tank, it is probably a justifiable approximation.

+
+
+

Example 4 (Bouncing ball) We assume that a ball is falling from some initial nonzero height above the table. After hitting the table, it bounces back, losing a portion of the energy (the deformation is not perfectly elastic).

+
+
+
+ +
+
+Figure 5: Bouncing ball example +
+
+
+

The state equation during the free fall is +\dot{\bm x} = \begin{bmatrix} x_2\\ -g\end{bmatrix}, \quad \bm x(0) = \begin{bmatrix}10\\0\end{bmatrix}. +

+

But how can we model what happens during and after the collision? A high-fidelity model would be complicated, involving partial differential equations to model the deformation of the ball and the table. These complexities can be avoided with a simpler model assuming that immediately after the collision the sign of the velocity abruptly (discontinuously) changes, and at the same time the ball also loses a portion of the energy.

+

When modelling this using a hybrid automaton, it turns out that we only need a single discrete state. The crucial feature of the model is then the nontrivial (non-identity) reset map. This is depicted in Fig 6.

+
+
+
+ +
+
+Figure 6: Hybrid automaton for a bouncing ball example +
+
+
+

For completeness, here are the individual components of the hybrid automaton: +\mathcal{Q} = \{q\}, \; \mathcal{Q}_0 = \{q\} +

+

+\mathcal{X} = \mathbb R^2, \; \mathcal{X}_0 = \left\{\begin{bmatrix}10\\0\end{bmatrix}\right\} +

+

+\mathcal{I}(q) = \{\bm x\in\mathbb R^2 \mid x_1 > 0 \lor (x_1 = 0 \land x_2 \geq 0)\} +

+

+f(\bm x) = \begin{bmatrix} x_2\\ -g\end{bmatrix} +

+

+\mathcal{E} = \{(q,q)\} +

+

+\mathcal{G}((q,q)) = \{\bm x\in\mathbb R^2 \mid x_1=0 \land x_2 < 0\} +

+

+r((q,q),\bm x) = \begin{bmatrix}x_1\\ -\gamma x_2 \end{bmatrix}, + where \gamma is the coefficient of restitution (e.g., \gamma = 0.9).

+
+
+
+ +
+
+Comment on the invariant set for the bouncing ball +
+
+
+

Some authors characterize the invariant set as x_1\geq 0. But this means that as the ball touches the ground, nothing forces it to leave the location and take the transition. Instead, the ball must penetrate the ground, by however tiny a distance, in order to trigger the transition. The current definition avoids this.

+
+
+
+
+
+ +
+
+Another comment on the invariant set for the bouncing ball +
+
+
+

While the previous remark certainly holds, when solving the model numerically, the use of inequalities to define sets is inevitable. And some numerical solvers, in particular optimization solvers, cannot handle strict inequalities. That is perhaps why some authors are quite relaxed about this issue. We will encounter it later on.

+
+
+
+
+

Example 5 (Stick-slip friction model (Karnopp)) Consider a block of mass m placed freely on a surface. An external horizontal force F_\mathrm{a} is applied to the block, setting it to a horizontally sliding motion, against which the friction force F_\mathrm{f} is acting: +m\dot v = F_\mathrm{a} - F_\mathrm{f}(v). +

+

A common choice for a model of friction between two surfaces is Coulomb friction +F_\mathrm{f}(v) = F_\mathrm{c}\operatorname*{sgn}(v). +

+

The model is perfectly intuitive, isn’t it? Well, what if v=0 and F_\mathrm{a}<F_\mathrm{c}? Can you see the trouble?

+

One of the remedies is the Karnopp model of friction +m\dot v = 0, \qquad v=0, \; |F_\mathrm{a}| < F_\mathrm{c} + +F_\mathrm{f} = \begin{cases}\operatorname*{sat}(F_\mathrm{a},F_\mathrm{c}), & v=0\\F_\mathrm{c}\operatorname*{sgn}(v), & \mathrm{else}\end{cases} +

+

The model can be formulated as a hybrid automaton with two discrete states (modes, locations) as in Fig 7.

+
+
+
+ +
+
+Figure 7: Hybrid automaton for the Karnopp model of friction +
+
+
+
+
+
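A quick way to see the two modes of the Karnopp model in action is a simple Euler simulation. The sketch below is an illustrative implementation (not from the text) in which, as usual in numerical work, the condition v = 0 is relaxed to |v| < v_eps.

```python
import math

def simulate_karnopp(F_applied, m=1.0, F_c=1.0, t_end=3.0, dt=1e-3, v_eps=1e-6):
    """Euler sketch of the Karnopp stick-slip model of Example 5.

    In the stick mode (|v| < v_eps and |F_a| < F_c) the velocity is
    held at exactly zero; otherwise the block slips with
    m*dv/dt = F_a - F_c*sgn(v).  When v is (numerically) zero but
    |F_a| >= F_c, friction opposes the impending motion, so the sign
    is taken from F_a.  F_applied is a function of time.
    """
    v, t = 0.0, 0.0
    history = []
    while t < t_end:
        Fa = F_applied(t)
        if abs(v) < v_eps and abs(Fa) < F_c:
            v = 0.0                                          # stick
        else:
            s = math.copysign(1.0, v if abs(v) >= v_eps else Fa)
            v += dt * (Fa - F_c * s) / m                     # slip
        t += dt
        history.append((t, v))
    return history

stuck = simulate_karnopp(lambda t: 0.5)      # |F_a| < F_c: block sticks
slipping = simulate_karnopp(lambda t: 2.0)   # |F_a| > F_c: block slides
```

With the small constant force the block never moves; with the larger one it accelerates at (F_\mathrm{a} - F_\mathrm{c})/m.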

Example 6 (Rimless wheel) A simple mechanical model that is occasionally used in the walking robot community is the rimless wheel rolling down a declined plane as depicted in Fig 8.

+
+
+
+ +
+
+Figure 8: Rimless wheel +
+
+
+

A hybrid automaton for the rimless wheel is below.

+
+
+
+ +
+
+Figure 9: Hybrid automaton for a rimless wheel +
+
+
+

Alternatively, we do not represent the discrete state graphically as a node in the graph but rather as another – extending – state variable s \in \{0, 1, \ldots, 5\} within a single location.

+
+
+
+ +
+
+Figure 10: Alternative hybrid automaton for a rimless wheel +
+
+
+
+
+

Example 7 (DC-DC boost converter) The enabling mechanism for a DC-DC converter is switching. Although the switching is realized with a semiconductor switch, for simplicity of the exposition we consider a manual switch in Fig 11 below.

+
+
+
+ +
+
+Figure 11: DC-DC boost converter +
+
+
+

The switch introduces two modes of operation. But the (ideal) diode introduces a mode transition too.

+
+

The switch closed

+
+
+
+ +
+
+Figure 12: DC-DC boost converter: the switch closed +
+
+
+

+\begin{bmatrix} +\frac{\mathrm{d}i_\mathrm{L}}{\mathrm{d}t}\\ +\frac{\mathrm{d}v_\mathrm{C}}{\mathrm{d}t} +\end{bmatrix} += +\begin{bmatrix} +-\frac{R_\mathrm{L}}{L}i_\mathrm{L} & 0\\ +0 & -\frac{1}{C(R+R_\mathrm{C})} +\end{bmatrix} +\begin{bmatrix} +i_\mathrm{L}\\ +v_\mathrm{C} +\end{bmatrix} ++ +\begin{bmatrix} +\frac{1}{L}\\ +0 +\end{bmatrix} +v_0 +

+
+
+
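For illustration, the switch-closed mode can be simulated directly. The component and initial values below are made up for the example (they are not from the text). In this mode the two state equations decouple: i_L charges toward v_0/R_\mathrm{L} while v_\mathrm{C} discharges through R + R_\mathrm{C}.

```python
def simulate_switch_closed(v0=5.0, R_L=0.1, L=1e-3, C=1e-4, R=10.0, R_C=0.05,
                           iL0=0.0, vC0=8.0, t_end=0.1, dt=1e-6):
    """Euler simulation of the switch-closed mode of the boost
    converter.  All component and initial values are illustrative only."""
    iL, vC, t = iL0, vC0, 0.0
    while t < t_end:
        diL = (-R_L * iL + v0) / L            # first state equation
        dvC = -vC / (C * (R + R_C))           # second state equation
        iL, vC = iL + dt * diL, vC + dt * dvC
        t += dt
    return iL, vC

iL_end, vC_end = simulate_switch_closed()     # i_L -> v0/R_L, v_C -> 0
```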

Continuous conduction mode (CCM)

+
+
+
+ +
+
+Figure 13: DC-DC boost converter: continuous conduction mode (CCM) +
+
+
+

+\begin{bmatrix} +\frac{\mathrm{d}i_\mathrm{L}}{\mathrm{d}t}\\ +\frac{\mathrm{d}v_\mathrm{C}}{\mathrm{d}t} +\end{bmatrix} += +\begin{bmatrix} +-\frac{R_\mathrm{L}+ \frac{RR_\mathrm{C}}{R+R_\mathrm{C}}}{L} & -\frac{R}{L(R+R_\mathrm{C})}\\ +\frac{R}{C(R+R_\mathrm{C})} & -\frac{1}{C(R+R_\mathrm{C})} +\end{bmatrix} +\begin{bmatrix} +i_\mathrm{L}\\ +v_\mathrm{C} +\end{bmatrix} ++ +\begin{bmatrix} +\frac{1}{L}\\ +0 +\end{bmatrix} +v_0 +

+
+
+

Discontinuous conduction mode (DCM)

+
+
+
+ +
+
+Figure 14: DC-DC boost converter: discontinuous conduction mode (DCM) +
+
+
+

+\begin{bmatrix} +\frac{\mathrm{d}i_\mathrm{L}}{\mathrm{d}t}\\ +\frac{\mathrm{d}v_\mathrm{C}}{\mathrm{d}t} +\end{bmatrix} += +\begin{bmatrix} +0 & 0\\ +0 & -\frac{1}{C(R+R_\mathrm{C})} +\end{bmatrix} +\begin{bmatrix} +i_\mathrm{L}\\ +v_\mathrm{C} +\end{bmatrix} ++ +\begin{bmatrix} +0\\ +0 +\end{bmatrix} +v_0 +

+
+

Possibly the events of opening and closing the switch can be driven by time: opening the switch is derived from the value of an input signal, closing the switch is periodic.

+
+
+
+
+ +
+
+Figure 15: Hybrid automaton for a DC-DC boost converter +
+
+
+
+
Hybrid equations

+
+ + + +
+ + + + +
+ + + +
+ + +

Here we introduce a major alternative framework to hybrid automata for modelling hybrid systems. It is called hybrid equations, sometimes also hybrid state equations to emphasize that what we are after is some kind of analogy with state equations \dot{x}(t) = f(x(t), u(t)) and x_{k+1} = g(x_k, u_k) that we are familiar with from (continuous-valued) dynamical systems. Sometimes it is also called event-flow equations or jump-flow equations.

+

These are the key ideas:

+
    +
  • The (state) variables can change values discontinuously upon occurrence of events – they jump.
  • +
  • Between the jumps they evolve continuously – they flow.
  • +
  • Some variables may only flow, they never jump.
  • +
  • The variables staying constant between the jumps can be viewed as flowing too.
  • +
+

The major advantage of this modeling framework is that we do not have to distinguish between the two types of state variables. This is in contrast with hybrid automata, where we have to start by classifying the state variables as either continuous or discrete before moving on. In the current framework we treat all the variables identically – they mostly flow and occasionally (perhaps never, which is OK) jump.

+
+

Hybrid equations

+

It is high time to introduce hybrid (state) equations – here they come +\begin{aligned} +\dot{x} &= f(x), \quad x \in \mathcal{C},\\ +x^+ &= g(x), \quad x \in \mathcal{D}, +\end{aligned} + where

+
    +
  • f: \mathcal{C} \rightarrow \mathbb R^n is the flow map,
  • +
  • \mathcal{C}\subset \mathbb R^n is the flow set,
  • +
  • g: \mathcal{D} \rightarrow \mathbb R^n is the jump map,
  • +
  • \mathcal{D}\subset \mathbb R^n is the jump set.
  • +
+

This model of a hybrid system is thus parameterized by the quadruple \{f, \mathcal C, g, \mathcal D\}.

+
+
+

Hybrid inclusions

+

We now extend the presented framework of hybrid equations a bit. Namely, the functions on the right-hand sides in both the differential and the difference equations are no longer assigning just a single value (as well-behaved functions do), but they assign sets! +\begin{aligned} +\dot{x} &\in \mathcal F(x), \quad x \in \mathcal{C},\\ +x^+ &\in \mathcal G(x), \quad x \in \mathcal{D}. +\end{aligned} + where

+
    +
  • \mathcal{F} is the set-valued flow map,
  • +
  • \mathcal{C} is the flow set,
  • +
  • \mathcal{G} is the set-valued jump map,
  • +
  • \mathcal{D} is the jump set.
  • +
+
+
+

Output equations

+

Typically a full model is only formed upon defining some output variables (oftentimes just a subset of possibly scaled state variables or their linear combinations). These output variables then obey some output equation +y(t) = h(x(t)), +

+

or +y(t) = h(x(t),u(t)). +

+
+

Example 1 (Bouncing ball) This is the “hello world example” for hybrid systems with state jumps (pun intended). The state variables are the height and the vertical speed of the ball. +\bm x \in \mathbb{R}^2, \qquad \bm x = \begin{bmatrix}x_1 \\ x_2\end{bmatrix}. +

+

The quadruple defining the hybrid equations is +\mathcal{C} = \{\bm x \in \mathbb{R}^2 \mid x_1>0 \lor (x_1 = 0, x_2\geq 0)\}, + +f(\bm x) = \begin{bmatrix}x_2 \\ -g\end{bmatrix}, \qquad g = 9.81, + +\mathcal{D} = \{\bm x \in \mathbb{R}^2 \mid x_1 = 0, x_2 < 0\}, + +g(\bm x) = \begin{bmatrix}x_1 \\ -\alpha x_2\end{bmatrix}, \qquad \alpha = 0.8. +

+

The two sets and two maps are illustrated below.

+
+
+
+ +
+
+Figure 1: Maps and sets for the bouncing ball example +
+
+
+
+
+
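A naive simulator for the quadruple \{f, \mathcal C, g, \mathcal D\} needs only a few lines. The sketch below Euler-integrates the flow and applies the jump map whenever the state enters the jump set, with two numerical relaxations (my choices, not from the text): x_1 = 0 is relaxed to x_1 \leq 0, and \mathcal C is taken as the complement of \mathcal D. It ignores Zeno behaviour and accurate event detection.

```python
def simulate_hybrid(f, g, D, x0, t_end, dt=1e-4):
    """Naive simulator for hybrid equations {f, C, g, D}: apply the
    jump map g whenever the state is in the jump set D, otherwise
    Euler-flow along f (the flow set C is taken as the complement of D
    here).  A sketch only -- no Zeno handling, no event localization."""
    x, t, jumps = list(x0), 0.0, 0
    while t < t_end:
        if D(x):
            x, jumps = g(x), jumps + 1              # jump (instantaneous)
        else:
            dx = f(x)                               # flow
            x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
            t += dt
    return x, jumps

# data of the bouncing-ball example (Example 1)
grav, alpha = 9.81, 0.8
x_final, jumps = simulate_hybrid(
    f=lambda x: [x[1], -grav],
    g=lambda x: [0.0, -alpha * x[1]],           # x1 snapped back to 0
    D=lambda x: x[0] <= 0.0 and x[1] < 0.0,     # x1 = 0 relaxed to x1 <= 0
    x0=[1.0, 0.0],
    t_end=5.0,
)
```

By t = 5 the ball has passed its Zeno time and the state chatters near the origin.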

Example 2 (Bouncing ball on a controlled piston) We now extend the simple bouncing ball example by adding a vertically moving piston. The piston is controlled by a force.

+
+
+

+
Example of a ball bouncing on a vertically moving piston
+
+
+

In our analysis we neglect the sizes of the ball and the piston (for simplicity), treating them as point masses.

+

The collision happens when x_\mathrm{b} = x_\mathrm{p}, and v_\mathrm{b} < v_\mathrm{p}.

+

The conservation of momentum after a collision reads +m_\mathrm{b}v_\mathrm{b}^+ + m_\mathrm{p}v_\mathrm{p}^+ = m_\mathrm{b}v_\mathrm{b} + m_\mathrm{p}v_\mathrm{p}. +\tag{1}

+

The collision is modelled using a restitution coefficient +v_\mathrm{p}^+ - v_\mathrm{b}^+ = -\gamma (v_\mathrm{p} - v_\mathrm{b}). +\tag{2}

+

From the momentum conservation Equation 1 +v_\mathrm{p}^+ = \frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b} + v_\mathrm{p} - \frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b}^+ +

+

we substitute into Equation 2 to get +\frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b} + v_\mathrm{p} - \frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b}^+ - v_\mathrm{b}^+ = -\gamma (v_\mathrm{p} - v_\mathrm{b}), + from which we express v_\mathrm{b}^+ +\begin{aligned} +v_\mathrm{b}^+ &= \frac{1}{1+\frac{m_\mathrm{b}}{m_\mathrm{p}}}\left(\frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b} + v_\mathrm{p} + \gamma (v_\mathrm{p} - v_\mathrm{b})\right)\\ +&= \frac{m_\mathrm{p}}{m_\mathrm{p}+m_\mathrm{b}}\left(\frac{m_\mathrm{b}-\gamma m_\mathrm{p}}{m_\mathrm{p}}v_\mathrm{b} + (1+\gamma)v_\mathrm{p}\right)\\ +&= \frac{m_\mathrm{b}-\gamma m_\mathrm{p}}{m_\mathrm{b}+m_\mathrm{p}}v_\mathrm{b} + \frac{(1+\gamma)m_\mathrm{p}}{m_\mathrm{p}+m_\mathrm{b}}v_\mathrm{p} +\end{aligned}. +

+

Substituting into the expression for v_\mathrm{p}^+ we get +\begin{aligned} +v_\mathrm{p}^+ &= \frac{m_\mathrm{b}}{m_\mathrm{p}}v_\mathrm{b} + v_\mathrm{p} - \frac{m_\mathrm{b}}{m_\mathrm{p}}\left(\frac{m_\mathrm{b}-\gamma m_\mathrm{p}}{m_\mathrm{b}+m_\mathrm{p}}v_\mathrm{b} + \frac{(1+\gamma)m_\mathrm{p}}{m_\mathrm{p}+m_\mathrm{b}}v_\mathrm{p}\right)\\ +&= \frac{m_\mathrm{b}}{m_\mathrm{p}}\left(1-\frac{m_\mathrm{b}-\gamma m_\mathrm{p}}{m_\mathrm{b}+m_\mathrm{p}}\right) v_\mathrm{b} \\ +&\qquad\qquad + \left(1-\frac{m_\mathrm{b}}{m_\mathrm{p}}\frac{(1+\gamma)m_\mathrm{p}}{m_\mathrm{p}+m_\mathrm{b}}\right) v_\mathrm{p}\\ +&= \frac{m_\mathrm{b}}{m_\mathrm{b}+m_\mathrm{p}}(1+\gamma) v_\mathrm{b} + \frac{m_\mathrm{p}-\gamma m_\mathrm{b}}{m_\mathrm{p}+m_\mathrm{b}} v_\mathrm{p}. +\end{aligned} +

+

Finally we can simplify the expressions a bit by introducing m=\frac{m_\mathrm{b}}{m_\mathrm{b}+m_\mathrm{p}}. The jump equation is then +\begin{bmatrix} +v_\mathrm{b}^+\\ +v_\mathrm{p}^+ +\end{bmatrix} += +\begin{bmatrix} +m - \gamma (1-m) & (1+\gamma)(1-m)\\ +m(1+\gamma) & 1-m-\gamma m +\end{bmatrix} +\begin{bmatrix} +v_\mathrm{b}\\ +v_\mathrm{p} +\end{bmatrix}. +

+
+
+
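The derived jump matrix can be sanity-checked numerically: for any masses, restitution coefficient, and pre-collision velocities it must conserve momentum (Equation 1) and satisfy the restitution relation (Equation 2). The numbers below are arbitrary test values, not from the text.

```python
def jump_matrix(m_b, m_p, gamma):
    """Jump map for the ball-piston collision derived above, with
    m = m_b / (m_b + m_p)."""
    m = m_b / (m_b + m_p)
    return [[m - gamma * (1 - m), (1 + gamma) * (1 - m)],
            [m * (1 + gamma), 1 - m - gamma * m]]

def apply_map(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# arbitrary test values; a collision requires v_b < v_p
m_b, m_p, gamma = 0.2, 1.0, 0.8
vb, vp = -3.0, 1.0
vb_p, vp_p = apply_map(jump_matrix(m_b, m_p, gamma), [vb, vp])
momentum_before = m_b * vb + m_p * vp
momentum_after = m_b * vb_p + m_p * vp_p
```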

Example 3 (Synchronization of fireflies) This is a famous example in synchronization. We consider n fireflies, where x_i is the i-th firefly’s clock, normalized to [0,1]. The clock resets (to zero) when it reaches 1. Each firefly can see the flashing of all other fireflies. As soon as it observes a flash, it advances its clock multiplicatively, from x_i to (1+\varepsilon)x_i (an increase by the fraction \varepsilon).

+

Here is how we model the problem using the four-tuple \{f, \mathcal C, g, \mathcal D\}: +\mathcal{C} = [0,1)^n = \{\bm x \in \mathbb R^n\mid x_i \in [0,1),\; i=1,\ldots,n \}, + +\bm f = [f_1, f_2, \ldots, f_n]^\top,\quad f_i = 1, \quad i=1,\ldots,n, + +\mathcal{D} = \{\bm x \in [0,1]^n \mid \max_i x_i = 1 \}, +

+

+\begin{aligned} +\bm g &= [g_1, \ldots, g_n]^\top,\\ +& \qquad g_i(x_i) = +\begin{cases} +(1 + \varepsilon)x_i, & \text{if } (1+\varepsilon)x_i < 1, \\ +0, & \text{otherwise}. +\end{cases} +\end{aligned} +

+
+
+
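An event-driven simulation of this model is exact, because the flow is just a unit-rate clock: flow until the first clock hits 1, then apply the jump map to all clocks. The sketch below (illustrative, with arbitrary initial clocks) exhibits the well-known synchronization.

```python
def simulate_fireflies(x0, eps=0.1, n_jumps=200):
    """Exact event-driven simulation of the firefly model: all clocks
    flow at rate 1 until the first one reaches 1, then the jump map
    g_i is applied to every clock (the flasher itself resets to 0)."""
    x = list(x0)
    for _ in range(n_jumps):
        dt = 1.0 - max(x)                    # time to the next flash
        x = [xi + dt for xi in x]            # flow
        x = [(1.0 + eps) * xi if (1.0 + eps) * xi < 1.0 else 0.0
             for xi in x]                    # jump
    return x

# two fireflies starting half a cycle apart end up flashing in unison
clocks = simulate_fireflies([0.0, 0.5])
```

Once two clocks reset together they remain identical forever, which is what the assertion below checks.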

Example 4 (Thyristor control) Consider the circuit below.

+
+
+
+ +
+
+Figure 2: Example of a thyristor control +
+
+
+

We consider a harmonic input voltage, that is, +\begin{aligned} +\dot v_0 &= \omega v_1\\ +\dot v_1 &= -\omega v_0. +\end{aligned} +

+

The thyristor can be on (discrete state q=1) or off (q=0). The firing time \tau is given by the firing angle \alpha \in (0,\pi).

+

The state vector is +\bm x = +\begin{bmatrix} +v_0\\ v_1 \\ i_\mathrm{L} \\ v_\mathrm{C} \\ q \\ \tau +\end{bmatrix}. +

+

The flow map is +\bm f(\bm x) += +\begin{bmatrix} +\omega v_1\\ +-\omega v_0\\ +q \frac{v_\mathrm{C}-Ri_\mathrm{L}}{L}\\ +-\frac{1}{CR}v_\mathrm{C} + \frac{1}{CR}v_\mathrm{0} - \frac{1}{C}i_\mathrm{L}\\ +0\\ +1 +\end{bmatrix}. +

+

The flow set is +\begin{aligned} +\mathcal{C} &= \{\bm x \mid q=0,\, \tau<\frac{\alpha}{\omega},\, i_\mathrm{L}=0\}\\ &\qquad \cup \{\bm x \mid q=1,\, i_\mathrm{L}>0\} +\end{aligned}. +

+

The jump set is +\begin{aligned} +\mathcal{D} &= \{\bm x \mid q=0,\, \tau\geq \frac{\alpha}{\omega},\, i_\mathrm{L}=0,\, v_\mathrm{C}>0\}\\ &\qquad \cup \{\bm x \mid q=1,\, i_\mathrm{L}=0,\, v_\mathrm{C}<0\} +\end{aligned}. +

+

The jump map is +\bm g(\bm x) = +\begin{bmatrix} +u_0\\ u_1 \\ i_\mathrm{L} \\ v_\mathrm{C} \\ {\color{red} 1-q} \\ {\color{red} 0} +\end{bmatrix}. +

+

The last condition in the jump set comes from the requirement that not only must the current through the inductor be zero, but also it must be decreasing. And from the state equation it follows that the voltage on the capacitor must be negative.

+
+
+

Example 5 (Sampled-data feedback control) Another example of a dynamical system that fits nicely into the hybrid equations framework is sampled-data feedback control system. Within the feedback loop in Figure 3, we recognize a continuous-time plant and a discrete-time controller.

+
+
+
+ +
+
+Figure 3: Sampled data feedback control +
+
+
+

The plant is modelled by \dot x_\mathrm{p} = f_\mathrm{p}(x_\mathrm{p},u), \; y = h(x_\mathrm{p}). The controller samples the output T-periodically and computes its own output as a nonlinear function u = \kappa(r-y).

+

The closed-loop model is then +\dot x_\mathrm{p} = f_\mathrm{p}(x_\mathrm{p},\kappa(r-h(x_\mathrm{p}))), \; y = h(x_\mathrm{p}). +

+

The closed-loop state vector is +\bm x = +\begin{bmatrix} +x_\mathrm{p}\\ u \\ \tau +\end{bmatrix} +\in +\mathbb R^n \times \mathbb R^m \times \mathbb R. +

+

The flow set is +\begin{aligned} +\mathcal{C} &= \{\bm x \mid \tau \in [0,T)\} +\end{aligned} +

+

The flow map is +\bm f(\bm x) += +\begin{bmatrix} +f_\mathrm{p}(x_\mathrm{p},u)\\ +0\\ +1 +\end{bmatrix} +

+

The jump set is +\begin{aligned} +\mathcal{D} &= \{\bm x \mid \tau = T\} +\end{aligned} + or rather +\begin{aligned} +\mathcal{D} &= \{\bm x \mid \tau \geq T\} +\end{aligned} +

+

The jump map is +\bm g(\bm x) = +\begin{bmatrix} +x_\mathrm{p}\\ +\kappa(r-y)\\ +0 +\end{bmatrix} +

+

You may wonder why we bother with modelling this system as a hybrid system at all. When it comes to analysis of the closed-loop system, implementation of the model in Simulink allows for seamless mixing of continuous-time and discrete-time blocks. And when it comes to control design, we can either discretize the plant and design a discrete-time controller, or design a continuous-time controller and then discretize it. No need for new theories. True, but still, it is nice to have a rigorous framework for analysis of such systems. All the more so since the sampling period T may not be constant – it can vary randomly, or the sampling can be event-triggered. All these scenarios are easily handled within the hybrid equations framework.

+
+
+
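As a sketch of how this hybrid model runs, below is a simulation of the loop with a hypothetical scalar plant \dot x_\mathrm{p} = u, y = x_\mathrm{p}, and a proportional law \kappa(e) = ke. Neither the plant nor the controller comes from the text; they only make the example executable.

```python
def simulate_sampled_loop(x0=0.0, r=1.0, T=0.5, k=1.0, t_end=10.0, dt=1e-3):
    """Hybrid simulation of the sampled-data loop with a hypothetical
    scalar plant dx_p/dt = u, y = x_p, and a proportional law
    u = kappa(e) = k*e (illustrative choices, not from the text).
    State (x_p, u, tau); flow map (u, 0, 1); jump set {tau >= T};
    jumps are instantaneous, so they do not advance t."""
    xp, u, tau = x0, k * (r - x0), 0.0
    t = 0.0
    while t < t_end:
        if tau >= T:
            u, tau = k * (r - xp), 0.0   # jump: sample y and update u
        else:
            xp += dt * u                 # flow: f_p(x_p, u) = u
            tau += dt
            t += dt
    return xp

x_final = simulate_sampled_loop()
```

With these numbers the sampled closed loop contracts the tracking error by half every period, so x_\mathrm{p} converges to the reference r.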
+

Hybridness after closing the loop

+

We have defined hybrid systems, but what exactly is hybrid when we close a feedback loop? There are three possibilities:

+
    +
  • Hybrid plant + continuous controller.
  • +
  • Hybrid plant + hybrid controller.
  • +
  • Continuous plant + hybrid controller.
  • +
+

The first case is encountered when we use a standard controller such as a PID controller to control a system whose dynamics can be characterized/modelled as hybrid. The second scenario considers a controller that mimics the behavior of a hybrid system. The third case is perhaps the least intuitive: although the plant to be controlled is continuous(-valued), it may still make sense to design and implement a hybrid controller, see the next paragraph.

+
+
+

Impossibility to stabilize without a hybrid controller

+
+

Example 6 (Unicycle stabilization) We consider a unicycle model of a vehicle in a plane, characterized by the position and orientation, with the controlled forward speed v and the yaw (turning) angular rate \omega.

+
+
+
+ +
+
+Figure 4: Unicycle vehicle +
+
+
+

The vehicle is modelled by +\begin{aligned} +\dot x &= v \cos \theta,\\ +\dot y &= v \sin \theta,\\ +\dot \theta &= \omega, +\end{aligned} +

+

+\bm x = \begin{bmatrix} +x\\ y\\ \theta +\end{bmatrix}, +\quad +\bm u = \begin{bmatrix} +v\\ \omega +\end{bmatrix}. +

+

It is known that this system cannot be stabilized by a continuous feedback controller. The general result that applies here was published in [1]. The condition of stabilizability by a time-invariant continuous state feedback is that the image of every neighborhood of the origin under (\bm x,\bm u) \mapsto \bm f(\bm x, \bm u) contains some neighborhood of the origin. This is not the case here. The map from the state-control space to the velocity space is

+

+\begin{bmatrix} +x\\ y\\ \theta\\ v\\ \omega +\end{bmatrix} +\mapsto +\begin{bmatrix} +v \cos \theta\\ +v \sin \theta \\ +\omega +\end{bmatrix}. +

+

Now consider a neighborhood of the origin such that |\theta|<\frac{\pi}{2}. It is impossible to get \bm f(\bm x, \bm u) = \begin{bmatrix} +0\\ f_2 \\ 0\end{bmatrix}, \; f_2\neq 0. Hence, stabilization by a continuous feedback \bm u = \kappa (\bm x) is impossible.

+

But it is possible to stabilize the vehicle using a discontinuous feedback. And a discontinuous feedback controller can be viewed as switching control, which in turn can be seen as an instance of a hybrid controller.

+
+
+

Example 7 (Global asymptotic stabilization on a circle) We now give a demonstration of a general phenomenon of stabilization on a manifold. We will see that even if asymptotic stabilization by a continuous feedback is possible, it may not be possible to guarantee it globally.

+
+
+
+ +
+
+Why control on manifolds? +
+
+
+

First, recall that a manifold is a solution set for a system of nonlinear equations. A prominent example is the unit circle \mathbb S_1 = \{\bm x \in \mathbb R^2 \mid x_1^2 + x_2^2 - 1 = 0\}. An extension to two variables is then \mathbb S_2 = \{\bm x \in \mathbb R^4 \mid x_1^2 + x_2^2 - 1 = 0, \, x_3^2 + x_4^2 - 1 = 0\}. Now, why should we bother to study control within this type of state space? It turns out that such models of the state space are most appropriate for mechatronic/robotic systems in which angular variables range over more than 360^\circ. We worked on this kind of system some time ago when designing a control system for inertially stabilized gimballed camera platforms.

+
+
+
+

In this example we restrict the motion of a particle to sliding around the unit circle \mathbb S^1. The motion is modelled by
\dot{\bm x} = u\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}\bm x,
where \bm x \in \mathbb S^1,\quad u\in \mathbb R.

+

The point to be stabilized is \bm x^* = \begin{bmatrix}1\\ 0\end{bmatrix}.

+
+
+
+ +
+
+Figure 5: Asymptotic stabilization on a circle +
+
+
+

What is required from a globally asymptotically stabilizing controller?

+
  • Solutions stay in \mathbb S^1,
  • Solutions converge to \bm x^*,
  • If a solution starts near \bm x^*, it stays near.

One candidate is \kappa(\bm x) = -x_2.

+

Define the (Lyapunov) function V(\bm x) = 1-x_1.

+

Indeed, it does qualify as a Lyapunov function because it is zero at \bm x^* and positive elsewhere. Furthermore, its time derivative along the solution trajectory is
\begin{aligned}
\dot V &= \left(\nabla_{\bm{x}}V\right)^\top \dot{\bm x}\\
&= \begin{bmatrix}-1 & 0\end{bmatrix}\left(-x_2\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)\\
&= -x_2^2\\
&= -(1-x_1^2),
\end{aligned}
from which it follows that
\dot V < 0 \quad \forall \bm x \in \mathbb S^1 \setminus \{(-1,0),\,(1,0)\}.
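The derivative computation above can be sanity-checked numerically. A small standalone Julia snippet (our own, purely for verification, not part of the controller design):

```julia
# Check that dV/dt = -x₂² = -(1 - x₁²) on the unit circle, with u = κ(x) = -x₂
S = [0.0 -1.0; 1.0 0.0]
gradV = [-1.0, 0.0]                 # gradient of V(x) = 1 - x₁
for θ in (0.1, 1.0, 2.5, 4.0)
    x = [cos(θ), sin(θ)]            # a point on the circle
    xdot = -x[2] * S * x            # closed-loop vector field
    Vdot = gradV' * xdot
    @assert isapprox(Vdot, -(1 - x[1]^2); atol = 1e-12)
end
```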

+

With u=-x_2 the point \bm x^* is stable but not globally attractive, hence it is not globally asymptotically stable.

+

Can we do better?

+

Yes, we can. But we need to incorporate some switching into the controller. Loosely speaking, anywhere except for the state (-1,0) we can apply the previously designed controller, and at the troublesome state (-1,0), or actually in some region around it, we need to switch to another controller that drives the system away from the problematic region.

+

But we will take this example as an opportunity to go one step further and instead of just a switching controller we design a hybrid controller. The difference is that within a hybrid controller we can incorporate some hysteresis, which is a robustifying feature. In order to do that, we need to introduce a new state variable q\in\{0,1\}. Determination of the flow and jump sets is sketched in Figure 6.

+
+
+
+ +
+
+Figure 6: Definition of the sets defining a hybrid controller +
+
+
+

Note that there is no hysteresis if c_0=c_1, in which case the hybrid controller reduces to a switching controller (but more on switching controllers in the next chapter).

+

The two feedback controllers are given by
\begin{aligned}
\kappa(\bm x,0) &= \kappa_0(\bm x) = -x_2,\\
\kappa(\bm x,1) &= \kappa_1(\bm x) = -x_1.
\end{aligned}

+

The flow map is (DIY)
f(\bm x, q) = \ldots

+

The flow set is
\mathcal{C} = (\mathcal C_0 \times \{0\}) \cup (\mathcal C_1 \times \{1\}).

+

The jump set is
\mathcal{D} = (\mathcal D_0 \times \{0\}) \cup (\mathcal D_1 \times \{1\}).

+

The jump map is
g(\bm x, q) = 1-q \quad \forall [\bm x, q]^\top \in \mathcal D.

+

Simulation using Julia is provided below.

+
+
+Show the code +
using OrdinaryDiffEq

# Defining the sets and functions for the hybrid equations

c₀, c₁ = -2/3, -1/3

C(x,q) = (x[1] >= c₀ && q == 0) || (x[1] <= c₁ && q == 1) # Actually not really needed, just a complement of D.
D(x,q) = (x[1] < c₀ && q == 0) || (x[1] > c₁ && q == 1)

g(x,q) = 1-q

κ(x,q) = q==0 ? -x[2] : -x[1]

function f!(dx,x,q,t)               # Already in the format for the ODE solver.
    A = [0.0 -1.0; 1.0 0.0]
    dx .= A*x*κ(x,q)
end

# Defining the initial conditions for the simulation

cᵢ = (c₀+c₁)/2
x₀ = [cᵢ,sqrt(1-cᵢ^2)]
q₀ = 1

# Setting up the simulation problem

tspan = (0.0,10.0)
prob = ODEProblem(f!,x₀,tspan,q₀)

function condition(x,t,integrator)
    q = integrator.p
    return D(x,q)
end

function affect!(integrator)
    q = integrator.p
    x = integrator.u
    integrator.p = g(x,q)
end

cb = DiscreteCallback(condition,affect!)

# Solving the simulation problem

sol = solve(prob,Tsit5(),callback=cb,dtmax=0.1) # ContinuousCallback would be more suitable here

# Plotting the results of the simulation

using Plots
gr(tickfontsize=12,legend_font_pointsize=12,guidefontsize=12)

plot(sol,label=["x₁" "x₂"],xaxis="t",yaxis="x",lw=2)
hline!([c₀], label="c₀")
hline!([c₁], label="c₁")
+
+
+
+
+
+Figure 7: Simulation of stabilization on a circle using a hybrid controller +
+
+
+
+

The solution can also be visualized in the state space.

+
+
+Show the code +
plot(sol,idxs=(1,2),label="",xaxis="x₁",yaxis="x₂",lw=2,aspect_ratio=1)
vline!([c₀], label="c₀")
vline!([c₁], label="c₁")
scatter!([x₀[1]],[x₀[2]],label="x init")
scatter!([1],[0],label="x ref")
+
+
+
+
+
+Figure 8: Simulation of stabilization on a circle using a hybrid controller +
+
+
+
+
+
+
+

Supervisory control

+

Yet another problem that can benefit from being formulated as a hybrid system is supervisory control.

+
+
+
+ +
+
+Figure 9: Supervisory control +
+
+
+
+
+

Combining local and global controllers \subset supervisory control

+

As a subset of supervisory control we can view a controller that switches between a global and a local controller.

+
+
+
+ +
+
+Figure 10: Combining global and local controllers +
+
+
+

Local controllers have good transient response but only work well in a small region around the equilibrium state. Global controllers have poor transient response but work well in a larger region around the equilibrium state.

+

A useful example is that of swinging up and stabilizing a pendulum: the local controller can be designed for a linear model obtained by linearization about the upright orientation of the pendulum. But such a controller can only be expected to perform well in some small region around the upright orientation. The global controller is designed to bring the pendulum into that small region.

+

The flow and jump sets for the local and global controllers are in Figure 11. Can you tell which is which? Remember that by introducing the discrete variable q, some hysteresis is in the game here.

+
+
+
+ +
+
+Figure 11: Flow and jump sets for a local and a global controller +
+
+
+ + + +
+ + Back to top

References

+
+
[1]
R. Brockett, “Asymptotic stability and feedback stabilization,” in Differential Geometric Control Theory, R. Brockett, R. Millman, and H. Sussmann, Eds., Boston: Birkhäuser, 1983.
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/hybrid_equations.html b/hybrid_equations.html index 8c5d753..c7d6b7a 100644 --- a/hybrid_equations.html +++ b/hybrid_equations.html @@ -1177,57 +1177,57 @@

+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The theoretical and computational framework of hybrid equations has been mostly developed by a relatively small circle of researchers (Sanfelice, Goebel, Teel, …). The primary monograph is [1]. It is also supported by a freely available Matlab toolbox, see the section on software.

+

+

The book [2] can be regarded as a predecessor and/or complement of the just mentioned [1]. Although the book is not available online, a short version appears as an article [3] in the popular IEEE Control Systems magazine (the one with color figures :-).

+

+ + + + + Back to top

References

+
+
[1]
R. G. Sanfelice, Hybrid Feedback Control. Princeton University Press, 2021. Accessed: Sep. 23, 2020. [Online]. Available: https://press.princeton.edu/books/hardcover/9780691180229/hybrid-feedback-control
+
+
+
[2]
R. Goebel, R. G. Sanfelice, and A. R. Teel, Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press, 2012. Available: https://press.princeton.edu/books/hardcover/9780691153896/hybrid-dynamical-systems
+
+
+
[3]
R. Goebel, R. G. Sanfelice, and A. R. Teel, “Hybrid dynamical systems,” IEEE Control Systems Magazine, vol. 29, no. 2, pp. 28–93, Apr. 2009, doi: 10.1109/MCS.2008.931718.
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/hybrid_equations_software 10.html b/hybrid_equations_software 10.html new file mode 100644 index 0000000..15cc194 --- /dev/null +++ b/hybrid_equations_software 10.html @@ -0,0 +1,1064 @@ + + + + + + + + + +Software – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

There is a well-developed and actively maintained Matlab toolbox

+ +

The toolbox can also be installed directly from Matlab through its Add-Ons Explorer. Since the stable version is already two years old, while the beta version 3.1.0.04 was last updated on April 28, 2024, the beta version seems to be the way to go for our purposes. The authors will certainly appreciate any feedback.

+

Other programming languages seem to be missing a similar toolbox/package. How about developing one in Julia?

+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/index 11.html b/index 11.html new file mode 100644 index 0000000..812e607 --- /dev/null +++ b/index 11.html @@ -0,0 +1,1048 @@ + + + + + + + + + +B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

B(E)3M35HYS – Hybrid systems

+
+ + + +
+ + + + +
+ + + +
+ + +

This website constitutes the online lecture notes for the graduate course Hybrid Systems (B3M35HYS, BE3M35HYS) taught within the Cybernetics and Robotics graduate program at the Faculty of Electrical Engineering, Czech Technical University in Prague.

+

Organizational instructions, description of grading policy, assignments of homework problems and other course related material relevant for officially enrolled students are located elsewhere (the course page within the FEL Moodle).

+ + + + Back to top
+ +
+
+ +
+ + + + + \ No newline at end of file diff --git a/intro 10.html b/intro 10.html new file mode 100644 index 0000000..adc188c --- /dev/null +++ b/intro 10.html @@ -0,0 +1,1175 @@ + + + + + + + + + +What is a hybrid system? – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

What is a hybrid system?

+
+ + + +
+ + + + +
+ + + +
+ + +
+

Definition of a hybrid system

+

The adjective “hybrid” is used in a common language to express that the subject under consideration has a bit of this and a bit of that… When talking about hybridness of systems, we modify this vague “definition” into a more descriptive one: a hybrid system has a bit of this and an atom of that… By this bon mot we mean that hybrid systems contain some physical subsystems and components combined with if-then-else and/or timing rules that are mostly (but not always) implemented in software. This definition is certainly not the most precise one, but it is a good starting point.

+

An even better definition is that hybrid systems are composed of subsystems whose evolution is driven by time (discrete or continuous) and of other subsystems that evolve as dictated by (discrete) events. The former are modelled by ordinary differential equations (ODE) or differential-algebraic equations (DAE) in the continuous-time case and by difference equations in the discrete-time case. The latter are modelled by state automata or Petri nets, and they implement some propositional (aka sentential or statement), predicate and/or temporal logic. Let's stick to this definition of hybrid systems. As we progress with modelling frameworks, the definition will become a bit more operational.

+
+
+
+ +
+
+Hybrid systems vs sampled-data systems +
+
+
+

It may be a bit confusing that we are introducing a new framework for a situation that we can already handle – a physical plant evolving in continuous time (and modelled by an ODE) controlled in discrete time by a digital controller/computer. Indeed, this situation does qualify as a hybrid system. In introductory courses we have learnt to design such controllers (by discretizing the system and then designing a controller for the discrete-time model), and there was no need to introduce any new framework. However, this standard scenario assumes that the sampling period is constant. Only then can the standard techniques based on the z-transform be applied. As soon as the sampling period is not constant, we need some more general framework – the framework of hybrid systems.

+
+
+
+
+
+ +
+
+Hybrid systems vs cyberphysical systems +
+
+
+

Recently, systems containing both computer/software/algorithmic parts and physical parts have also been studied under the fancy name cyberphysical systems. The two concepts can hardly be distinguished, to be honest. I also confess I am unhappy with the narrowing of the concept of cybernetics to just computers. Cybernetics, as introduced by Norbert Wiener, already encompasses physical and biological systems among others. Anyway, that is how it is, and the take-away lesson is that these days a great deal of material relevant for our course on hybrid systems can also be found in resources adopting the name cyberphysical systems.

+
+
+
+
+

Example of a hybrid system

+
+
+

+
Example of a hybrid system (from [1])
+
+
+
+
+

Hybrid system is an open and unbounded concept

+

Partly because hybrid systems are investigated by many communities:

+
  • Computer science
  • Modeling & simulation
  • Control systems
+

Hybrid systems in computer science

+
  • They start with discrete-event systems, typically modelled by finite state automata and/or timed automata, and add some (typically simple) continuous-time dynamics.
  • Mainly motivated by analysis (verification, model checking, …): safety, liveness, fairness, …
+
+

Hybrid systems in modeling and simulation

+
  • Even when modeling purely physical systems, it can be beneficial to approximate some fast dynamics with discontinuous transitions – jumps (diodes and other semiconductor switches, computer networks, mechanical impacts, …).
+
+
+ +
+
+Systems vs models +
+
+
+

Strictly speaking, we should speak about hybrid models, because modeling a given system as hybrid is already a modeller’s decision. But the terminology is already settled. After all, we also speak about “second-order systems” when we actually mean “second-order models”, or “LTI systems” when we actually mean “LTI models”.

+
+
+
+
+

Hybrid systems in control systems

+
  • Typically focused on continuous-time dynamical systems to be controlled but introducing some logic through a controller (switching control, relay control, PLC, …).
  • Besides synthesis (aka control design), properties such as stability, controllability, and robustness are studied.
  • There is yet another motivation for explicitly dealing with hybridness in control systems: some systems can only be stabilized by switching, and switching can be formulated within the hybrid system framework.
+
+ + Back to top

References

+
+
[1]
C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 3rd ed. Cham: Springer, 2021. Available: https://doi.org/10.1007/978-3-030-72274-6
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/intro_outline 18.html b/intro_outline 18.html new file mode 100644 index 0000000..27633d8 --- /dev/null +++ b/intro_outline 18.html @@ -0,0 +1,1097 @@ + + + + + + + + + +Course outline – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Course outline

+
+ + + +
+ + + + +
+ + + +
+ + +

The course is structured into 14 topics, each of them corresponding to one lecture. The topics are as follows:

+
  • Discrete-event systems
    1. (State) automata (state machines) (incl. timed variants)
    2. Petri nets (and timed Petri nets)
    3. Max-Plus algebra and Max-Plus Linear (MPL) systems
  • Hybrid systems
    1. Hybrid automata
    2. Hybrid equations
  • Special classes of hybrid systems
    1. Reset (control) systems, Switched/switching systems, Piecewise affine systems (PWA)
    2. Complementarity dynamical systems (and complementarity optimization constraints)

  1. Solutions of hybrid systems

  • Stability of hybrid systems
    1. Common Lyapunov function
      • Quadratic Lyapunov function via linear matrix inequality (LMI) and semidefinite programming (SDP)
      • Polynomial Lyapunov function via sum-of-squares (SOS) programming
    2. Piecewise quadratic/polynomial Lyapunov function via S-procedure

  1. Mixed-logical dynamical (MLD) description of hybrid systems
  2. Model predictive control (MPC) for MLD systems
  3. (Formal) verification of hybrid systems
+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/intro_references 10.html b/intro_references 10.html new file mode 100644 index 0000000..d5ff365 --- /dev/null +++ b/intro_references 10.html @@ -0,0 +1,1141 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The discipline of hybrid systems is huge and spans several areas of science and engineering. As a result, finding a single comprehensive reference is difficult, if not impossible. The more so that our selection of topics is inevitably biased. Admittedly, our selections of both topics and references will mostly be biased towards control engineering, and yet even within that discipline we have our own preferences. Therefore, we will always provide a list of references when studying particular topics.

+
+

Introductory/overview texts freely available online

+

Among the texts that provide motivation for studying hybrid systems as well as some introduction into theoretical and computational frameworks, we recommend Heemels et al. (2009), which is also available on the author’s webpage. Yet another overview, which is also available online, is Johansson (2004). And yet another is De Schutter et al. (2009), which is available on the author’s web page. The quartet of recommended online resources is concluded by Lygeros (2004).

+
+
+

Books not freely available online (at least not that we know of)

+

Among the high-quality printed books, for which we are not aware of legally available online versions, the slim book van der Schaft and Schumacher (2000) can be regarded as the classic.

+

+

The handbook Lunze and Lamnabhi-Lagarrigue (2009) contains a wealth of contributions from several authors (in fact two of the online resources linked above are chapters from this book).

+

+

The latest textbook on the topic of hybrid systems is Lin and Antsaklis (2022). The book was probably the prime candidate for this course's textbook; however, we wanted a slightly different emphasis on each topic.

+

+

Another relatively recent book is Sanfelice (2021). Although it is very well written and is certainly recommendable, it follows a particular framework that is not the most common one in the literature on hybrid systems – the framework of hybrid equations. But we are certainly going to introduce their approach in our course. The more so that it is supported by a freely available Matlab toolbox.

+

+

The book Goebel, Sanfelice, and Teel (2012) can be regarded as a predecessor and/or complement of the just mentioned Sanfelice (2021). Although the book is not available online, a short version appears as an article Goebel, Sanfelice, and Teel (2009) in the popular IEEE Control Systems magazine (the one with color figures :-).

+

+

Last but not least, the MPC methodology is specialized to hybrid systems in Borrelli, Bemporad, and Morari (2017). Unlike the other books in this list, this one is freely available on the authors' webpage.

+

+

This list of study resources on hybrid systems is by no means exhaustive. We will provide more references in the respective chapters.

+ + + +
+ + Back to top

References

+
+Borrelli, Francesco, Alberto Bemporad, and Manfred Morari. 2017. Predictive Control for Linear and Hybrid Systems. Cambridge, New York: Cambridge University Press. http://cse.lab.imtlucca.it/~bemporad/publications/papers/BBMbook.pdf. +
+
+De Schutter, B., W. P. M. H. Heemels, J. Lunze, and C. Prieur. 2009. “Survey of Modeling, Analysis, and Control of Hybrid Systems.” In Handbook of Hybrid Systems Control: Theory, Tools, Applications, edited by Françoise Lamnabhi-Lagarrigue and Jan Lunze, 31–56. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511807930.003. +
+
+Goebel, Rafal, Ricardo G. Sanfelice, and Andrew R. Teel. 2009. “Hybrid Dynamical Systems.” IEEE Control Systems Magazine 29 (2): 28–93. https://doi.org/10.1109/MCS.2008.931718. +
+
+———. 2012. Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691153896/hybrid-dynamical-systems. +
+
+Heemels, W. P. M. H., D. Lehmann, J. Lunze, and B. De Schutter. 2009. “Introduction to Hybrid Systems.” In Handbook of Hybrid Systems Control: Theory, Tools, Applications, edited by Jan Lunze and Françoise Lamnabhi-Lagarrigue, 3–30. Cambridge University Press. https://doi.org/10.1017/CBO9780511807930.002. +
+
+Johansson, Karl Henrik. 2004. “Hybrid Control Systems.” In UNESCO Encyclopedia of Life Support Systems (EOLSS). UNESCO. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-90411. +
+
+Lin, Hai, and Panos J. Antsaklis. 2022. Hybrid Dynamical Systems: Fundamentals and Methods. Advanced Textbooks in Control and Signal Processing. Cham: Springer. https://doi.org/10.1007/978-3-030-78731-8. +
+
+Lunze, Jan, and Françoise Lamnabhi-Lagarrigue, eds. 2009. Handbook of Hybrid Systems Control: Theory, Tools, Applications. 1 edition. Cambridge, UK ; New York: Cambridge University Press. +
+
+Lygeros, John. 2004. Lecture Notes on Hybrid Systems. https://people.eecs.berkeley.edu/~sastry/ee291e/lygeros.pdf. +
+
+Sanfelice, Ricardo G. 2021. Hybrid Feedback Control. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691180229/hybrid-feedback-control. +
+
+van der Schaft, Arjan J., and Hans Schumacher. 2000. An Introduction to Hybrid Dynamical Systems. Lecture Notes in Control and Information Sciences 251. London: Springer-Verlag. https://doi.org/10.1007/BFb0109998. +
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/max_plus_algebra 10.html b/max_plus_algebra 10.html new file mode 100644 index 0000000..26f85d3 --- /dev/null +++ b/max_plus_algebra 10.html @@ -0,0 +1,8168 @@ + + + + + + + + + +Max-plus algebra – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Max-plus algebra

+
+ + + +
+ + + + +
+ + + +
+ + +

Max-plus algebra, also written as (max,+) algebra (and also known as tropical algebra/geometry and dioid algebra), is an algebraic framework in which we can model and analyze a class of discrete-event systems, namely event graphs, which we have previously introduced as a subset of Petri nets. The framework is appealing in that the models then look like the state equations \bm x_{k+1} = \mathbf A \bm x_k + \mathbf B \bm u_k for classical linear dynamical systems. We call these max-plus linear systems, or just MPL systems. Concepts such as poles, stability and observability can be defined, following closely the standard definitions. In fact, we can even formulate control problems for these models in a way that mimics the conventional control theory for LTI systems, including MPC control.
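To preview what "linear" means in this setting, here is a minimal Julia sketch of a (max,+) matrix-vector product, in which the conventional addition is replaced by max and the conventional multiplication by +; the helper name `mp_mul` is our own, not from any package:

```julia
# (max,+) matrix-vector product: the i-th entry is max over j of (A[i,j] + x[j])
function mp_mul(A, x)
    [maximum(A[i, j] + x[j] for j in eachindex(x)) for i in axes(A, 1)]
end

A = [1.0 2.0; 3.0 0.0]
x = [0.0, 1.0]
mp_mul(A, x)   # row 1: max(1+0, 2+1) = 3; row 2: max(3+0, 0+1) = 3
```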

+

But before we get to these applications, we first need to introduce the (max,+) algebra itself. And before we do that, we recapitulate the definition of a standard algebra.

+
+
+
+ +
+
+Algebra is not only a branch of mathematics +
+
+
+

Confusingly enough, the word algebra names both a branch of mathematics and a special mathematical structure. In what follows, we use the term algebra to refer to the latter.

+
+
+
+

Algebra

+

Algebra is a set of elements equipped with

+
  • two operations:
    • addition (plus, +),
    • multiplication (times, ×),
  • neutral (identity) element with respect to addition: zero, 0, satisfying a+0=a,
  • neutral (identity) element with respect to multiplication: one, 1, satisfying a\times 1 = a.

Inverse elements can also be defined, namely

+
  • Inverse element wrt addition: -a, satisfying a+(-a) = 0.
  • Inverse element wrt multiplication (except for 0): a^{-1}, satisfying a \times a^{-1} = 1.

If the inverse wrt multiplication exists for every (nonzero) element, the algebra is called a field, otherwise it is called a ring.

+

Prominent examples of a ring are the integers and polynomials. For integers, only the numbers 1 and -1 have integer inverses. For polynomials, only zero-th degree (nonzero constant) polynomials have inverses qualifying as polynomials too. An example from control theory is the ring of proper stable transfer functions, in which only the minimum-phase transfer functions with zero relative degree have inverses that are again proper and stable, and thus qualify as units.

+

A prominent example of a field is the set of real numbers.

+
+
+

(max,+) algebra: redefining the addition and multiplication

+

Elements of the (max,+) algebra are real numbers, but the structure is not a field (in fact, as we will see, it is only a semiring), since the two operations are defined differently.

+

The new operation of addition, which we denote by \oplus to distinguish it from the standard addition, is defined as \boxed{x\oplus y \equiv \max(x,y).}

+

The new operation of multiplication, which we denote by \otimes to distinguish it from the standard multiplication, is defined as \boxed{x\otimes y \equiv x+y.}

+
+
+
+ +
+
+Important +
+
+
+

Indeed, there is no typo here, the standard addition is replaced by \otimes and not \oplus.

+
+
+
+
+
+ +
+
+(min,+) also possible +
+
+
+

Indeed, we can also define the (min,+) algebra. But for our later purposes in modelling we prefer the (max,+) algebra.

+
+
+
+
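These two operations are easy to experiment with. A minimal Julia sketch (the infix operator definitions are our own, not from any package):

```julia
# (max,+) addition and multiplication as Julia infix operators
⊕(x, y) = max(x, y)
⊗(x, y) = x + y

3 ⊕ 5    # evaluates to max(3,5) = 5
3 ⊗ 5    # evaluates to 3+5 = 8
```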

Reals must be extended with the negative infinity

+

Strictly speaking, the (max,+) algebra is defined over a broader set than just \mathbb R. We need to extend the reals with minus infinity. We denote the extended set by \mathbb R_\varepsilon: \boxed{\mathbb R_\varepsilon \coloneqq \mathbb R \cup \{-\infty\}.}

+

The reason for the notation is that a dedicated symbol \varepsilon is assigned to this minus infinity, that is, \boxed{\varepsilon \coloneqq -\infty.}

+

It may make some later expressions less cluttered. Of course, at the cost of introducing one more symbol.

+

We are now going to see the reason for this extension.

+
+
+

Neutral elements

+
+

Neutral element with respect to \oplus

+

The neutral element with respect to \oplus, the zero, is -\infty. Indeed, for x \in \mathbb R_\varepsilon,
x \oplus \varepsilon = x,
because \max(x,-\infty) = x.

+
+
+

Neutral element with respect to \otimes

+

The neutral element with respect to \otimes, the one, is 0. Indeed, for x \in \mathbb R_\varepsilon,
x \otimes 0 = x,
because x+0=x.

+
+
+
+ +
+
+Nonsymmetric notation, but who cares? +
+
+
+

The notation is rather nonsymmetric here. We now have a dedicated symbol \varepsilon for the zero element of the new algebra, but no dedicated symbol for the one element. It may be a bit confusing, as "the old 0 is the new 1". Perhaps, similarly to introducing dedicated symbols for the new operations of addition and multiplication, we should have introduced dedicated symbols such as ⓪ and ①, which would lead to expressions such as x ⊕ ⓪ = x and x ⊗ ① = x. In fact, some software packages do define something like mp-zero and mp-one to represent the two special elements. But this is not what we will mostly encounter in the literature. Perhaps the best attitude is to come to terms with this notational asymmetry… After all, I myself was apparently not even able to figure out how to encircle numbers in LaTeX…

+
+
+
+
+
+

Inverse elements

+
+

Inverse with respect to \oplus

+

The inverse element with respect to \oplus in general does not exist! Think about it for a few moments; this is not necessarily intuitive. For which element(s) does it exist? Only for \varepsilon.

+

This has major consequences, for example, x\oplus x=x.

+

Can you verify this statement? How is it related to the fact that the inverse element with respect to \oplus does not exist in general?

+

This is the key difference with respect to a conventional algebra, wherein the inverse element of a wrt conventional addition is -a, while here we do not even define \ominus.

+

Formally speaking, the (max,+) algebra is only a semi-ring.
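Idempotency, and the resulting lack of additive inverses, can also be illustrated numerically. A small sketch (the infix operator definition is our own):

```julia
⊕(x, y) = max(x, y)

# idempotency: x ⊕ x = x for every x
@assert all(x ⊕ x == x for x in -5.0:0.5:5.0)

# no additive inverse: for x ≠ -Inf there is no y with x ⊕ y = -Inf,
# since max(x, y) ≥ x > -Inf whatever y we pick
@assert all((2.0 ⊕ y) > -Inf for y in (-Inf, -10.0, 0.0, 10.0))
```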

+
+
+

Inverse with respect to \otimes

+

The inverse element with respect to \otimes does not exist for all elements: the \varepsilon element has no inverse with respect to \otimes. But in this aspect the (max,+) algebra just follows the conventional algebra, because 0 has no inverse there either.

+
+
+
+
+

Powers and the inverse with respect to \otimes

+

Having defined the fundamental operations and the fundamental elements, we can proceed with other operations. Namely, we consider powers. For an integer r\in\mathbb Z, the rth power of x, denoted by x^{\otimes^r}, is defined, unsurprisingly, as x^{\otimes^r} \coloneqq x\otimes x \otimes \ldots \otimes x.

+

Observe that it corresponds to rx in the conventional algebra x^{\otimes^r} = rx.

+

But then the inverse element with respect to \otimes can also be determined using the (-1)th power as x^{\otimes^{-1}} = -x.

+

This is not actually surprising, is it?

+

There are a few more implications. For example,
x^{\otimes^0} = 0.

+

There is also no inverse element with respect to \otimes for \varepsilon, but this is expected, as \varepsilon is the zero wrt \oplus. Furthermore, if r>0, then \varepsilon^{\otimes^r} = \varepsilon, while if r<0, then \varepsilon^{\otimes^r} is undefined, both as expected. Finally, \varepsilon^{\otimes^0} = 0 by convention.
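In code, (max,+) powers reduce to conventional multiplication by the exponent. A small Julia sketch (the helper name `mp_pow` is our own):

```julia
# r-th (max,+) power of x is conventional r*x
mp_pow(x, r) = r * x

mp_pow(4, 3)         # 4 ⊗ 4 ⊗ 4 = 4 + 4 + 4 = 12
mp_pow(4, -1)        # inverse wrt ⊗: -4
mp_pow(4, -1) + 4    # x^{⊗⁻¹} ⊗ x = 0, the (max,+) one
mp_pow(-Inf, 2)      # ε^{⊗r} = ε for r > 0
```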

+
+
+

Order of evaluation of (max,+) formulas

+

It is the same as that for the conventional algebra:

+
  1. power,
  2. multiplication,
  3. addition.
+
+

(max,+) polynomials (aka tropical polynomials)

+

Having covered addition, multiplication and powers, we can now define (max,+) polynomials. To get started, consider the univariate polynomial p(x) = a_{n}\otimes x^{\otimes^{n}} \oplus a_{n-1}\otimes x^{\otimes^{n-1}} \oplus \ldots \oplus a_{1}\otimes x \oplus a_{0}, where a_i\in \mathbb R_\varepsilon and n\in \mathbb N.

+

By interpreting the operations, this translates to the following function \boxed{p(x) = \max\{nx + a_n, (n-1)x + a_{n-1}, \ldots, x+a_1, a_0\}.}

+
+

Example 1 (1D polynomial) Consider the following (max,+) polynomial
p(x) = 2\otimes x^{\otimes^{2}} \oplus 3\otimes x \oplus 1.


We can interpret it in the conventional algebra as p(x) = \max\{2x+2, x+3, 1\}, which is a piecewise linear (actually affine) function.

Show the code

using Plots
x = -5:3
f(x) = max(2*x+2, x+3, 1)
plot(x, f.(x), label="", thickness_scaling = 2)
xc = [-2, 1]
yc = f.(xc)
scatter!(xc, yc, markercolor=[:red, :red], label="", thickness_scaling = 2)

(Figure: the piecewise affine graph of p(x), with the two breakpoints at x = -2 and x = 1 highlighted.)

Example 2 (Example of a 2D polynomial) Nothing prevents us from defining a polynomial in two (and more) variables. For example, consider the following (max,+) polynomial p(x,y) = 0 \oplus x \oplus y.

Show the code

using Plots
x = -2:0.1:2;
y = -2:0.1:2;
f(x,y) = max(0, x, y)
z = f.(x', y);
wireframe(x, y, z, legend=false, camera=(5,30))
xlabel!("x")
ylabel!("y")
zlabel!("f(x,y)")

(Figure: wireframe surface of f(x,y) = max{0, x, y}.)

Example 3 (Another 2D polynomial) Consider another 2D (max,+) polynomial p(x,y) = 0 \oplus x \oplus y \oplus (-1)\otimes x^{\otimes^2} \oplus 1\otimes x\otimes y \oplus (-1)\otimes y^{\otimes^2}.

Show the code

using Plots
x = -2:0.1:2;
y = -2:0.1:2;
f(x,y) = max(0, x, y, 2*x-1, x+y+1, 2*y-1)
z = f.(x', y);
wireframe(x, y, z, legend=false, camera=(15,30))
xlabel!("x")
ylabel!("y")
zlabel!("p(x,y)")

(Figure: wireframe surface of the 2D (max,+) polynomial p(x,y).)

Piecewise affine (PWA) functions

Piecewise affine (PWA) functions will turn out to be frequent companions in our course.


Solution set (zero set)


Matrix computations


Addition and multiplication

+

What is attractive about the whole (max,+) framework is that it also extends nicely to matrices. For matrices whose elements are in \mathbb R_\varepsilon, we define the operations of addition and multiplication identically as in the conventional case; we just use the (max,+) definitions of the two basic scalar operations:
(A\oplus B)_{ij} = a_{ij}\oplus b_{ij} = \max(a_{ij},b_{ij}),
\begin{aligned}
(A\otimes B)_{ij} &= \bigoplus_{k=1}^n a_{ik}\otimes b_{kj}\\
&= \max_{k=1,\ldots, n}(a_{ik}+b_{kj}).
\end{aligned}
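These two definitions can be sketched in a few lines of Julia (with \varepsilon encoded as -Inf; the function names are our own, not a library's):

```julia
ε = -Inf   # our encoding of the (max,+) zero element

# (A ⊕ B)_ij = max(a_ij, b_ij): elementwise maximum
maxplus_add(A, B) = max.(A, B)

# (A ⊗ B)_ij = max_k (a_ik + b_kj): the conventional matrix product with (+, ·) replaced by (max, +)
maxplus_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

A = [2.0 3.0; 1.0 ε]
B = [0.0 ε; 2.0 3.0]
maxplus_add(A, B)   # [2 3; 2 3]
maxplus_mul(A, B)   # [5 6; 1 ε]
```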


Zero and identity matrices

+

The (max,+) zero matrix \mathcal{E}_{m\times n} has all its elements equal to \varepsilon, that is,
\mathcal{E}_{m\times n} =
\begin{bmatrix}
\varepsilon & \varepsilon & \ldots & \varepsilon\\
\varepsilon & \varepsilon & \ldots & \varepsilon\\
\vdots & \vdots & \ddots & \vdots\\
\varepsilon & \varepsilon & \ldots & \varepsilon
\end{bmatrix}.


The (max,+) identity matrix I_n has 0 on the diagonal and \varepsilon elsewhere, that is,
I_{n} =
\begin{bmatrix}
0 & \varepsilon & \ldots & \varepsilon\\
\varepsilon & 0 & \ldots & \varepsilon\\
\vdots & \vdots & \ddots & \vdots\\
\varepsilon & \varepsilon & \ldots & 0
\end{bmatrix}.


Matrix powers

+

The zeroth power of a matrix is – unsurprisingly – the identity matrix, that is, A^{\otimes^0} = I_n.


The kth power of a matrix, for k\in \mathbb N\setminus\{0\}, is then defined using A^{\otimes^k} = A\otimes A^{\otimes^{k-1}}.


Connection with graph theory – precedence graph


Consider A\in \mathbb R_\varepsilon^{n\times n}. For this matrix, we can define the precedence graph \mathcal{G}(A) as a weighted directed graph with the vertices 1, 2, …, n, and with the arcs (j,i) with the associated weights a_{ij} for all a_{ij}\neq \varepsilon. The kth power of the matrix is then


(A^{\otimes^k})_{ij} = \max_{i_1,\ldots,i_{k-1}\in \{1,2,\ldots,n\}} \{a_{ii_1} + a_{i_1i_2} + \ldots + a_{i_{k-1}j}\} for all i,j and k\in \mathbb N\setminus\{0\}.


Example 4 (Example)
A =
\begin{bmatrix}
2 & 3 & \varepsilon\\
1 & \varepsilon & 0\\
2 & -1 & 3
\end{bmatrix}
\qquad
A^{\otimes^2} =
\begin{bmatrix}
4 & 5 & 3\\
3 & 4 & 3\\
5 & 5 & 6
\end{bmatrix}
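This power is easy to verify numerically. A sketch, with \varepsilon encoded as -Inf and a maxplus_mul helper of our own (not a library function):

```julia
ε = -Inf
# (A ⊗ B)_ij = max_k (a_ik + b_kj)
maxplus_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

A = [2.0  3.0   ε;
     1.0   ε   0.0;
     2.0 -1.0  3.0]

maxplus_mul(A, A)   # reproduces A^{⊗2} = [4 5 3; 3 4 3; 5 5 6]
```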

(Figure 1: An example of a precedence graph — vertices 1, 2 and 3, with weighted arcs 1→1 (2), 1→2 (1), 1→3 (2), 2→1 (3), 2→3 (−1), 3→2 (0) and 3→3 (3).)

Irreducibility of a matrix

  • A matrix in \mathbb R_\varepsilon^{n\times n} is irreducible if its precedence graph is strongly connected.
  • A matrix is irreducible iff
(A \oplus A^{\otimes^2} \oplus \ldots \oplus A^{\otimes^{n-1}})_{ij} \neq \varepsilon \quad \forall i,j,\, i\neq j. \tag{1}
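The irreducibility test (1) can be sketched directly: accumulate A \oplus A^{\otimes^2} \oplus \ldots \oplus A^{\otimes^{n-1}} and check the off-diagonal entries. As before, \varepsilon is encoded as -Inf and the helper names are our own:

```julia
ε = -Inf
maxplus_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

function isirreducible(A)
    n = size(A, 1)
    S = copy(A)            # running sum A ⊕ A^{⊗2} ⊕ …
    P = copy(A)            # running power A^{⊗k}
    for _ in 2:n-1
        P = maxplus_mul(P, A)
        S = max.(S, P)     # ⊕ of matrices is the elementwise max
    end
    all(S[i, j] != ε for i in 1:n for j in 1:n if i != j)
end

A = [2.0 3.0 ε; 1.0 ε 0.0; 2.0 -1.0 3.0]
isirreducible(A)    # true: the precedence graph of this matrix is strongly connected
```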

Eigenvalues and eigenvectors


Eigenvalues and eigenvectors constitute another instance of a straightforward import of concepts from the conventional algebra into the (max,+) algebra – just take the standard definition of an eigenvalue-eigenvector pair and replace the conventional operations with the (max,+) alternatives +A\otimes v = \lambda \otimes v. +


A few comments:

  • In general, the total number of (max,+) eigenvalues is less than n.
  • An irreducible matrix has only one (max,+) eigenvalue.
  • Graph-theoretic interpretation: the eigenvalue is the maximum average weight over all elementary circuits of the precedence graph.
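For a small matrix, this maximum average circuit weight can be brute-forced from the diagonals of the powers A^{\otimes^k}: since every elementary circuit has length at most n, the eigenvalue of an irreducible matrix is \lambda = \max_{k=1,\ldots,n} \max_i (A^{\otimes^k})_{ii}/k. A sketch (again with \varepsilon as -Inf and helper names of our own):

```julia
ε = -Inf
maxplus_mul(A, B) = [maximum(A[i, :] .+ B[:, j]) for i in 1:size(A, 1), j in 1:size(B, 2)]

# λ = max over k = 1..n of max_i (A^{⊗k})_ii / k  (maximum cycle mean)
function maxplus_eigenvalue(A)
    n = size(A, 1)
    P = copy(A)
    λ = maximum(P[i, i] for i in 1:n)                    # circuits of length 1
    for k in 2:n
        P = maxplus_mul(P, A)
        λ = max(λ, maximum(P[i, i] for i in 1:n) / k)    # closed walks of length k
    end
    return λ
end

A = [2.0 3.0 ε; 1.0 ε 0.0; 2.0 -1.0 3.0]
maxplus_eigenvalue(A)   # 3: the self-loop at vertex 3 has the largest average weight
```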

Solving (max,+) linear equations


We can also define and solve linear equations within the (max,+) algebra. Considering A\in \mathbb R_\varepsilon^{n\times n},\, b\in \mathbb R_\varepsilon^n, we can formulate and solve the equation A\otimes x = b.


In general, no solution exists even if A is square. However, we can often find some use for a subsolution, defined as a vector x satisfying A\otimes x \leq b.


Typically we search for the greatest subsolution, or for subsolutions optimal in some other sense.


Example 5 (Greatest subsolution)
A =
\begin{bmatrix}
2 & 3 & \varepsilon\\
1 & \varepsilon & 0\\
2 & -1 & 3
\end{bmatrix},
\qquad
b =
\begin{bmatrix}
1 \\ 2 \\ 3
\end{bmatrix}


The greatest subsolution is
x =
\begin{bmatrix}
-1\\ -2 \\ 0
\end{bmatrix},


for which
A \otimes x =
\begin{bmatrix}
1\\ 0 \\ 3
\end{bmatrix}
\leq b.
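The greatest subsolution can be computed componentwise by residuation, x_j = \min_i (b_i - a_{ij}), where the \varepsilon entries contribute +\infty and are thus ignored by the minimum. A sketch for this example, with \varepsilon encoded as -Inf and no library functions assumed:

```julia
ε = -Inf
A = [2.0 3.0 ε; 1.0 ε 0.0; 2.0 -1.0 3.0]
b = [1.0, 2.0, 3.0]

# greatest subsolution of A ⊗ x ≤ b: x_j = min_i (b_i - a_ij)
x = [minimum(b .- A[:, j]) for j in 1:size(A, 2)]   # [-1, -2, 0]

# check: A ⊗ x stays below b componentwise
Ax = [maximum(A[i, :] .+ x) for i in 1:size(A, 1)]  # [1, 0, 3]
all(Ax .<= b)   # true
```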


With this introduction to the (max,+) algebra, we are now ready to move on to the modeling of discrete-event systems using max-plus linear (MPL) systems.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/max_plus_references 9.html b/max_plus_references 9.html new file mode 100644 index 0000000..3ef3213 --- /dev/null +++ b/max_plus_references 9.html @@ -0,0 +1,1099 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Literature


One last time in this course we refer to Cassandras and Lafortune (2021), a comprehensive and popular introduction to discrete-event systems. A short introduction to the (max,+)-algebra framework can be found (under the somewhat less known name “Dioid algebras”) in Section 5.4.

+

But as a recommendable alternative, (any one of) a series of papers by Bart de Schutter (TU Delft) and his colleagues can be read instead. For example, De Schutter et al. (2020) and De Schutter and van den Boom (2000).

+

For anyone interested in learning yet more, a beautiful (and freely available online) book is Baccelli et al. (2001), which we have also mentioned in the context of Petri nets.

+

Max-plus algebra is relevant outside the domain of discrete-event systems – it is also investigated in optimization for its connection with piecewise linear/affine functions. Note that that community prefers the name tropical geometry (to emphasise that they view it as a branch of algebraic geometry). A lovely tutorial is Rau (2017).


References

+
+Baccelli, François, Guy Cohen, Geert Jan Olsder, and Jean-Pierre Quadrat. 2001. Synchronization and Linearity: An Algebra for Discrete Event Systems. Web edition. Chichester: Wiley. https://www.rocq.inria.fr/metalau/cohen/documents/BCOQ-book.pdf. +
+
+Cassandras, Christos G., and Stéphane Lafortune. 2021. Introduction to Discrete Event Systems. 3rd ed. Cham: Springer. https://doi.org/10.1007/978-3-030-72274-6. +
+
+De Schutter, Bart, and Ton van den Boom. 2000. “Model Predictive Control for Max-Plus-Linear Discrete-Event Systems: Extended Report & Addendum.” Technical Report bds:99-10a. Delft, The Netherlands: Delft University of Technology. https://pub.deschutter.info/abs/99_10a.html. +
+
+De Schutter, Bart, Ton van den Boom, Jia Xu, and Samira S. Farahani. 2020. “Analysis and Control of Max-Plus Linear Discrete-Event Systems: An Introduction.” Discrete Event Dynamic Systems 30 (1): 25–54. https://doi.org/10.1007/s10626-019-00294-w. +
+
+Rau, Johannes. 2017. “A First Expedition to Tropical Geometry.” https://www.math.uni-tuebingen.de/user/jora/downloads/FirstExpedition.pdf. +
+
\ No newline at end of file
diff --git a/max_plus_software 9.html b/max_plus_software 9.html
new file mode 100644
index 0000000..e8886a3
--- /dev/null
+++ b/max_plus_software 9.html
@@ -0,0 +1,1062 @@
Software – B(E)3M35HYS – Hybrid systems

Software

\ No newline at end of file
diff --git a/max_plus_systems 12.html b/max_plus_systems 12.html
new file mode 100644
index 0000000..a2ec458
--- /dev/null
+++ b/max_plus_systems 12.html
@@ -0,0 +1,1397 @@
Max-plus linear (MPL) systems – B(E)3M35HYS – Hybrid systems

Max-plus linear (MPL) systems


We start with an example of a discrete-event system modelled using the (max,+) algebra.

+
+

Example 1 (Production system)  

+
  • There are 3 production units: \(P_1, P_2, P_3\).
  • The unit \(P_3\) waits for the outputs from the units \(P_1\) and \(P_2\).
  • Each unit introduces a processing delay: \(v_1 = 12, v_2 = 11, v_3 = 7\), respectively.
  • There are also transportation delays: \(d_2 = 2\) from the entry to \(P_2\), and \(d_4 = 1\) from \(P_2\) to \(P_3\). All the other transportation delays are negligible.

The timed Petri net (event graph) for the example is shown in Figure 1 below.

+
+
+
+ +
+
+Figure 1: Example of a production system modelled by a timed Petri net (only the transitions are timed) +
+
+
+

The Petri net can be made more compact by associating the delays also with the places, as in Figure 2 below.

+
+
+
+ +
+
+Figure 2: Example of a production system modelled by a timed Petri net (both places and transitions are timed) +
+
+
+

With the outputs from the three transitions (the rectangles) after the kth event labelled by \(x_{1,k}\), \(x_{2,k}\), and \(x_{3,k}\), the state equations are \[ +\begin{aligned} +x_{1,k} &= \max\{x_{1,k-1} + 12, u_k + 0\}\\ +x_{2,k} &= \max\{x_{2,k-1} + 11, u_k + 2\}\\ +x_{3,k} &= \max\{x_{3,k-1} + 7, x_{1,k} +12 + 0, x_{2,k} + 11 + 1\}\\ +&= \max\{x_{3,k-1} + 7, \max\{x_{1,k-1} + 12, u_k\} +12, \\ +&\qquad \max\{x_{2,k-1} + 11, u_k+2\} + 12\}\\ +&= \max\{x_{3,k-1} + 7, x_{1,k-1} + 24, x_{2,k-1} + 23, \\ +&\qquad\qquad u_k+14\}\\ +y_k &= x_{3,k} + 7 +\end{aligned} +\]

+

The state equations can be rewritten in the (max,+) algebra \[ +\begin{aligned} +\begin{bmatrix} +x_{1,k} \\ x_{2,k} \\ x_{3,k} +\end{bmatrix} +&= +\begin{bmatrix} +12 & \varepsilon & \varepsilon\\ +\varepsilon & 11 & \varepsilon\\ +24 & 23 & 7 +\end{bmatrix} +\otimes +\begin{bmatrix} +x_{1,k-1} \\ x_{2,k-1} \\ x_{3,k-1} +\end{bmatrix} +\oplus +\begin{bmatrix} +0 \\ 2 \\ 14 +\end{bmatrix} +\otimes +u_k\\ +y_k &= +\begin{bmatrix} +\varepsilon & \varepsilon & 7 +\end{bmatrix} +\otimes +x_k +\end{aligned} +\]
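These (max,+) state equations are easy to check numerically. Below is a minimal sketch (the helper names `mp_mul` and `mp_add` and the encoding of \(\varepsilon\) as `-inf` are our own ad hoc choices):

```python
import math

EPS = -math.inf  # the (max,+) "zero" element, written epsilon in the text

def mp_mul(A, B):
    # (max,+) matrix product: (A ⊗ B)[i][j] = max_k (A[i][k] + B[k][j])
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_add(A, B):
    # (max,+) matrix sum: entrywise maximum
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Matrices of the production-system example
A = [[12, EPS, EPS],
     [EPS, 11, EPS],
     [24, 23, 7]]
B = [[0], [2], [14]]
C = [[EPS, EPS, 7]]

# One step of x(k) = A ⊗ x(k-1) ⊕ B ⊗ u(k), from x(0) = ε with u(1) = 0
x0 = [[EPS], [EPS], [EPS]]
u1 = 0
x1 = mp_add(mp_mul(A, x0), [[b[0] + u1] for b in B])
y1 = mp_mul(C, x1)[0][0]
# x1 = [[0], [2], [14]] and y1 = 21, matching the max-expressions above
```

The first job thus leaves the system at time 21 when fed at time 0.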

+
+
+

Model of an event graph as a Max-plus linear (MPL) state-space system

+

Generalizing what we have seen in the previous example, we can write the MPL state-space system (actually a model) as \[\boxed{ + \begin{aligned} + x(k) &= A\otimes x(k-1) \oplus B\otimes u(k),\\ + y(k) &= C\otimes x(k), + \end{aligned}} +\tag{1}\]

+

where \(A\), \(B\), and \(C\) are matrices of appropriate dimensions, or, equivalently (after relabelling), \[ \begin{aligned} x(k+1) &= A\otimes x(k) \oplus B\otimes u(k),\\ y(k) &= C\otimes x(k), \end{aligned} \]
which mimics the conventional state-space system \[ \begin{aligned} x(k+1) &= A x(k) + Bu(k),\\ y(k) &= Cx(k). \end{aligned} \]

+

We already know this from the example, but we need to emphasize it here again: the role of the variables \(u(k), x(k), y(k)\) is that they are event times. Namely the times of

+
  • arrivals of inputs,
  • beginning of processing,
  • finishing of processing,

respectively.

+

The independent variable \(k\) is now a counter of the events.

+
+
+

State response of an MPL system

+

In order to simulate an MPL system, we can now make use of the definitions of the basic operations in the (max,+) algebra that we studied previously. Note that \[ \begin{aligned} x_1 &= A\otimes x_0 \oplus B\otimes u_1\\ x_2 &= A\otimes x_1 \oplus B\otimes u_2\\ &= A\otimes (A\otimes x_0 \oplus B\otimes u_1) \oplus B\otimes u_2\\ &= A^{\otimes^2}\otimes x_0 \oplus A\otimes B\otimes u_1 \oplus B\otimes u_2\\ &\vdots \end{aligned} \] which can be generalized to \[\boxed{ x_k = A^{\otimes^k}\otimes x_0 \oplus \bigoplus_{i=1}^k A^{\otimes^{k-i}} \otimes B\otimes u_i.} \tag{2}\]
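The closed-form response can be cross-checked against the step-by-step recursion; the following self-contained sketch (the particular input sequence and the helper names are ours) does this for the production-system example over four events:

```python
import math

EPS = -math.inf

def mp_mul(A, B):
    # (max,+) matrix product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_add(A, B):
    # entrywise maximum
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mp_pow(A, k):
    # k-fold (max,+) power, with A^{⊗0} the (max,+) identity matrix
    n = len(A)
    P = [[0 if i == j else EPS for j in range(n)] for i in range(n)]
    for _ in range(k):
        P = mp_mul(P, A)
    return P

A = [[12, EPS, EPS], [EPS, 11, EPS], [24, 23, 7]]
B = [[0], [2], [14]]
x0 = [[0], [0], [0]]
u = [None, 0, 3, 7, 12]      # u[1], ..., u[4]; u[0] is unused

# step-by-step recursion x_k = A ⊗ x_{k-1} ⊕ B ⊗ u_k
x = x0
for k in range(1, 5):
    x = mp_add(mp_mul(A, x), [[b[0] + u[k]] for b in B])

# closed form x_4 = A^{⊗4} ⊗ x_0 ⊕ ⊕_{i=1}^{4} A^{⊗(4-i)} ⊗ B ⊗ u_i
xc = mp_mul(mp_pow(A, 4), x0)
for i in range(1, 5):
    xc = mp_add(xc, mp_mul(mp_pow(A, 4 - i), [[b[0] + u[i]] for b in B]))
# both routes give x_4 = [[48], [44], [60]]
```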

+
+
+
+ +
+
+Response of an LTI state-space system +
+
+
+

The response of a linear time-invariant (LTI) system described by a (vector) state equation \(x(k+1) = Ax(k) + Bu(k)\) is \[ +x_{k} = A^k x_0 + \sum_{i=0}^{k-1} A^{k-1-i}Bu_i. +\]

+
+
+
+
+
+ +
+
+Lower and upper bounds for the summation shifted by 1 +
+
+
+

Note how the lower and upper bounds for the summation are shifted by 1 compared to the traditional convolution.

+
+
+
+
+

(max,+) linearity

+

We should emphasize that the linearity exhibited by the state equation Equation 1 and the convolution Equation 2 must only be understood in the (max,+) sense.

+

Indeed, if we consider two input sequences \(u_1= \{u_{1,1},u_{1,2},\ldots\}\) and \(u_2= \{u_{2,1},u_{2,2},\ldots\}\), a (max,+)-linear combination \(\alpha \otimes u_1 \oplus \beta \otimes u_2\) of the two inputs yields the same (max,+)-linear combination \(\alpha \otimes y_1 \oplus \beta \otimes y_2\) of the corresponding outputs \(y_1\) and \(y_2\).

+
+
+

Input-output response of an MPL system

+

We can also eliminate the state variables from the model and aim at finding the relation between the input and output sequences \[ +U = \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_p\end{bmatrix}, \qquad Y = \begin{bmatrix}y_1 \\ y_2 \\ \vdots \\ y_p\end{bmatrix} +\] in the form of \[\boxed +{Y = G\otimes x_0 \oplus H\otimes U,} +\] where \[ +H = +\begin{bmatrix} +C\otimes B & \varepsilon & \varepsilon & \ldots & \varepsilon\\ +C\otimes A\otimes B & C\otimes B & \varepsilon & \ldots & \varepsilon\\ +C\otimes A^{\otimes^2}\otimes B & C\otimes A\otimes B & C\otimes B & \ldots & \varepsilon\\ +\vdots & \vdots & \vdots & \ddots & \vdots\\ +C\otimes A^{\otimes^{p-1}}\otimes B & C\otimes A^{\otimes^{p-2}}\otimes B & C\otimes A^{\otimes^{p-3}}\otimes B & \ldots & C\otimes B +\end{bmatrix} +\] and \[ +G = +\begin{bmatrix} +C \\ C\otimes A \\ C\otimes A^{\otimes^2} \\ \vdots \\ C\otimes A^{\otimes^{p-1}} +\end{bmatrix}. +\]

+
+

Example 2 (Production system) We consider again the production system in Example 1. On the time horizon of 4, and assuming zero initial state, the input-output model is parameterized by \[ Y = \begin{bmatrix}y_1 & y_2 & y_3 & y_4\end{bmatrix}^\top, \quad U = \begin{bmatrix}u_1 & u_2 & u_3 & u_4\end{bmatrix}^\top, \]

+

\[ +x_0 = \begin{bmatrix}\varepsilon & \varepsilon & \varepsilon\end{bmatrix}^\top, +\]

+

\[ +H = +\begin{bmatrix} +21 & \varepsilon & \varepsilon & \varepsilon\\ +32 & 21 & \varepsilon & \varepsilon\\ +43 & 32 & 21 & \varepsilon\\ +55 & 43 & 32 & 21 +\end{bmatrix}. +\]
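The entries of \(H\) are the (max,+) “impulse-response” parameters \(h_j = C\otimes A^{\otimes^j}\otimes B\); a short sketch reproducing the matrix above (helper names are ad hoc):

```python
import math

EPS = -math.inf

def mp_mul(A, B):
    # (max,+) matrix product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[12, EPS, EPS], [EPS, 11, EPS], [24, 23, 7]]
B = [[0], [2], [14]]
C = [[EPS, EPS, 7]]

p = 4
h = []          # h[j] = C ⊗ A^{⊗j} ⊗ B
M = B
for _ in range(p):
    h.append(mp_mul(C, M)[0][0])
    M = mp_mul(A, M)

# lower-triangular H, constant along diagonals
H = [[h[i - j] if i >= j else EPS for j in range(p)] for i in range(p)]
# H = [[21, ε, ε, ε], [32, 21, ε, ε], [43, 32, 21, ε], [55, 43, 32, 21]]
```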

+
+
+
+

Analysis of an irreducible MPL system

+

We now consider an autonomous MPL system \(x_{k+1} = A\otimes x_k\), whose response is \[ x_{k+c} = A^{\otimes^{k+c}}\otimes x_0, \] and for which we assume irreducibility of the matrix \(A\).

+

We have learnt previously that for large enough \(k\) and some integer \(c\), \[ x_{k+c} = \lambda^{\otimes^c}\otimes A^{\otimes^{k}}\otimes x_0 = \lambda^{\otimes^c}\otimes x_k. \]

+

This can be interpreted in the standard algebra as \[ x_{k+c} = c\lambda + x_k, \] from which it follows that \[ x_{k+c}-x_k = c\lambda. \]

+

This is an insightful result. When the system under consideration is a production system, then once it reaches a cyclic behaviour, the average cycle time is \(\lambda\). The average production rate is then \(1/\lambda\).
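The value \(\lambda\) is the maximum cycle mean of the precedence graph of \(A\) and can be computed, for instance, with Karp's algorithm. A sketch of that algorithm (our own implementation), applied to the \(A\) matrix of the production-system example, whose heaviest cycle is the self-loop of weight 12:

```python
import math

NEG = -math.inf

def max_cycle_mean(A):
    # Karp's algorithm; A[i][j] is the weight of the edge j -> i (NEG = no edge).
    # D[k][v] = max weight of a walk with exactly k edges ending in v, starting
    # from a virtual super-source connected to every node with weight 0.
    n = len(A)
    D = [[NEG] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for v in range(n):
            for u in range(n):
                if A[v][u] != NEG and D[k - 1][u] != NEG:
                    D[k][v] = max(D[k][v], D[k - 1][u] + A[v][u])
    # lambda = max over v of min over k of (D_n(v) - D_k(v)) / (n - k)
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] != NEG))
    return best

A = [[12, NEG, NEG],
     [NEG, 11, NEG],
     [24, 23, 7]]
lam = max_cycle_mean(A)   # -> 12.0, the weight of the heaviest self-loop
```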

+
+
+

Model Predictive Control (MPC) for MPL systems

+

Now we are finally ready to consider control problems for MPL systems. We will consider the MPC approach.

+
+

Cost function for MPC

+

We consider the cost function composed of two parts \[ J = J_\mathrm{output} + \lambda J_\mathrm{input}. \]

+

At “time” \(k\), with the prediction horizon \(N_\mathrm{p}\), and with the number of outputs \(n_\mathrm{y}\): \[ +J_\mathrm{output} = \sum_{j=0}^{N_\mathrm{p}-1}\sum_{i=1}^{n_\mathrm{y}} \max \{y_{i,k+j} - r_{i,k+j},0\} +\]

+

This cost function penalizes tardiness (late delivery).

+
+
+
+ +
+
+Caution +
+
+
+

Is the lower value for j correct?

+
+
+

An alternative choice of the cost function is \[ J_\mathrm{output} = \sum_{j=0}^{N_\mathrm{p}-1}\sum_{i=1}^{n_\mathrm{y}} \left|y_{i,k+j} - r_{i,k+j} \right |, \] which penalizes the difference between the due dates and the actual dates, or \[ J_\mathrm{output} = \sum_{j=1}^{N_\mathrm{p}-1}\sum_{i=1}^{n_\mathrm{y}} \left |\Delta^2 y_{i,k+j}\right |, \] which balances the output rates.

+

The input cost can be set to \[ +J_\mathrm{input} = -\sum_{j=0}^{N_\mathrm{p}-1}\sum_{l=1}^{n_\mathrm{u}} u_{l,k+j}, +\] which penalizes early feeding (favours just-in-time feeding). Note the minus sign.
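A tiny numerical illustration of the two cost terms (all numbers are made up):

```python
# Hypothetical single-input, single-output data over a horizon of 4
y = [21, 32, 43, 55]     # predicted output event times
r = [25, 30, 45, 50]     # due dates
u = [0, 11, 22, 33]      # input (feeding) event times
lam = 0.1                # trade-off weight (the lambda in J above)

J_output = sum(max(yi - ri, 0) for yi, ri in zip(y, r))  # penalize lateness only
J_input = -sum(u)                                        # reward late (just-in-time) feeding
J = J_output + lam * J_input
# J_output = 2 + 5 = 7: only the 2nd and 4th outputs are late
```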

+
+
+

Control horizon vs. prediction horizon

+

Assume a constant feeding rate after the control horizon \(N_\mathrm{c}\): \[ \Delta u_{k+j} = \Delta u_{k+N_\mathrm{c}-1},\qquad j=N_\mathrm{c},\ldots, N_\mathrm{p}-1, \] where \(\Delta u_k = u_k - u_{k-1}\).

+

Alternatively, \[ +\Delta^2 u_{k+j} = 0,\qquad j=N_\mathrm{c},\ldots, N_\mathrm{p}-1 +\] where \(\Delta^2 u_k = \Delta u_k - \Delta u_{k-1} = u_k - 2u_{k-1} + u_{k-2}\).

+
+
+

Inequality constraints for MPC

+

There are several possibilities for the constraints in the MPC for MPL systems. For example, we can constrain the minimum and maximum separation of input and output events \[ +a_{k+j} \leq \Delta u_{k+j} \leq b_{k+j},\qquad j=0,1,\ldots,N_\mathrm{c}-1, +\] \[ +c_{k+j} \leq \Delta y_{k+j} \leq d_{k+j},\qquad j=0,1,\ldots,N_\mathrm{p}-1. +\]

+

We can also impose a constraint on the latest due dates for the output events \[ y_{k+j} \leq r_{k+j},\qquad j=0,1,\ldots,N_\mathrm{p}-1. \]

+

We can also enforce the condition that successive input events are ordered in time \[ \Delta u_{k+j} \geq 0, \qquad j=0,1,\ldots,N_\mathrm{c}-1. \]

+
+
+

MPC for MPL system leads to a nonlinear optimization problem

+

Our motivation for formulating the problems within the (max,+) algebra was to fake the reality a bit and pretend that the problem is linear. This allowed us to invoke many concepts that we are familiar with from linear systems theory. However, at the end of the day, when it comes to actually solving the problem, we must reveal the nonlinear nature of the problem.

+

When we consider the MPC for MPL systems, we are faced with a nonlinear optimization problem. We can use some general nonlinear solvers (fmincon, ipopt, …).

+

Alternatively, there is a dedicated framework for solving these problems. It is called the Extended Linear Complementarity Problem (ELCP) and was developed by [1]. We will introduce the complementarity problem(s) later in a chapter dedicated to complementarity.

+

Yet another approach is through Mixed Integer (Linear) Programming (MILP).
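To see why mixed-integer linear programming applies, note that each max term can be rewritten with one binary variable and big-M inequalities. The following brute-force sketch illustrates this standard encoding (it is not the lecture's formulation, and \(M\) must be chosen large enough to bound the relevant differences):

```python
# y = max(a, b)  <=>  y >= a,  y >= b,  y <= a + M*d,  y <= b + M*(1 - d),  d in {0, 1}
M = 100.0

def feasible(y, a, b):
    # is (y, d) feasible for some binary d?
    return any(y >= a and y >= b and y <= a + M * d and y <= b + M * (1 - d)
               for d in (0, 1))

ok = True
for a in range(-3, 4):
    for b in range(-3, 4):
        # the smallest feasible y on a fine grid is exactly max(a, b),
        # so minimizing y in the MILP recovers the max
        ys = [k / 10 for k in range(-50, 51) if feasible(k / 10, a, b)]
        ok = ok and min(ys) == max(a, b)
```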


References

+
+
[1]
B. De Schutter and B. De Moor, “The extended linear complementarity problem,” Mathematical Programming, vol. 71, no. 3, pp. 289–325, Dec. 1995, doi: 10.1007/BF01590958.
\ No newline at end of file
diff --git a/mld_DHA 17.html b/mld_DHA 17.html
new file mode 100644
index 0000000..845f047
--- /dev/null
+++ b/mld_DHA 17.html
@@ -0,0 +1,1187 @@
Discrete hybrid automata – B(E)3M35HYS – Hybrid systems

Discrete hybrid automata


Since the new modelling framework is expected to be useful for prediction of a system response within model predictive control, it must model a hybrid system in discrete time. This is a major difference from what we have done in this course so far.

+

In particular, we are going to model a hybrid system as a discrete(-time) hybrid automaton (DHA), which means that

+
  • the continuous-value dynamics (often corresponding to the physical part of the system) evolves in discrete time,
  • the events and their processing by the logical part of the system are synchronized to the same periodic clock.
+

Four components of a discrete(-time) hybrid automaton

+

We are already well familiar with the concept of a hybrid automaton, and the restriction to discrete time does not seem to warrant reopening the definition (modes/locations, guards, invariants/domains, reset maps, …). However, it turns out that reformulating/restructuring the hybrid automaton will be useful for our ultimate goal of developing an MPC-friendly modelling framework. In particular, we consider four components of a DHA:

+
  • switched affine system (SAS),
  • mode selector (MS),
  • event generator (EG),
  • finite state machine (FSM).

Their interconnection is shown in the following figure.

+
+

Draw the block diagram from Bemporad’s materials (book, toolbox documentation).

+
+

Let’s discuss the individual components (and while doing that, you can think about the equivalent concept in the classical definition of a hybrid automaton such as mode, invariant, guard, …).

+
+

Switched affine systems (SAS)

+

This is a model of the continuous-value dynamics, parameterized by the index \(i\), that evolves in (discrete) time \[ \begin{aligned} x_c(k+1) &= A_{i(k)} x_c(k) + B_{i(k)} u_c(k) + f_{i(k)},\\ y_c(k) &= C_{i(k)} x_c(k) + D_{i(k)} u_c(k) + g_{i(k)}. \end{aligned} \]

+

In principle, there is no need to restrict the right-hand sides to affine functions as we did, but the methods and tools are currently only available for this restricted class of systems.

+
+
+

Event generator (EG)

+

We consider a partitioning of the state space, or possibly of the state-input space, into polyhedral regions. The system is then in the \(i\)th region of the state-input space if the continuous-value state \(x_c(k)\) and the continuous-value control input \(u_c(k)\) satisfy \[ H_i x_c(k) + J_i u_c(k) + K_i \leq 0. \]

+

The event is indicated by the (vector) binary variable \[ \delta_e(k) = h(x_c(k), u_c(k)) \in \{0,1\}^m, \]

+

where \[ h_i(x_c(k), u_c(k)) = \begin{cases}1 & \text{if}\; H_i x_c(k) + J_i u_c(k) + K_i \leq 0,\\ 0 & \text{otherwise}. \end{cases} \]
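A direct translation of the event generator into code could look as follows (the function and variable names are ad hoc; each region is described by a triple \((H_i, J_i, K_i)\)):

```python
def event_generator(regions, xc, uc):
    # delta_e[i] = 1 iff H_i xc + J_i uc + K_i <= 0 holds componentwise
    delta = []
    for H, J, K in regions:
        ok = all(
            sum(Hr[j] * xc[j] for j in range(len(xc)))
            + sum(Jr[j] * uc[j] for j in range(len(uc)))
            + Kr <= 0
            for Hr, Jr, Kr in zip(H, J, K)
        )
        delta.append(1 if ok else 0)
    return delta

# One scalar condition "x_1 >= 0", written in the required form as -x_1 <= 0
regions = [([[-1.0]], [[0.0]], [0.0])]
d_pos = event_generator(regions, [2.5], [0.0])   # -> [1]
d_neg = event_generator(regions, [-1.0], [0.0])  # -> [0]
```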

+
+
+

Finite state machine (FSM)

+

The discrete-value (logical) state is updated according to \[ x_d(k+1) = f_d(x_d(k),u_d(k),\delta_e(k)). \]

+
+
+

Mode selector (MS)

+

The mode selector determines the active mode \(i(k) \in \{1, 2, \ldots, s\}\) as a function of the discrete state, the discrete input, and the events:

+

\[ i(k) = \mu(x_d(k), u_d(k), \delta_e(k)). \]

+
+
+
+

Trajectory of a DHA

+

One step of the DHA evolution is thus described by \[ \begin{aligned} \delta_e(k) &= h(x_c(k), u_c(k)),\\ i(k) &= \mu(x_d(k), u_d(k), \delta_e(k)),\\ y_c(k) &= C_{i(k)} x_c(k) + D_{i(k)} u_c(k) + g_{i(k)},\\ y_d(k) &= g_d(x_d(k), u_d(k), \delta_e(k)),\\ x_c(k+1) &= A_{i(k)} x_c(k) + B_{i(k)} u_c(k) + f_{i(k)},\\ x_d(k+1) &= f_d(x_d(k),u_d(k),\delta_e(k)). \end{aligned} \]
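To make the interconnection of the four components concrete, here is a toy DHA: a discretized thermostat with two SAS modes, a two-component event generator, and a one-bit logical state. All the numbers and the particular mode logic are our own illustration, not taken from the lecture:

```python
# SAS:  mode 1 (heater off): x+ = 0.9 x     mode 2 (heater on): x+ = 0.9 x + 3
# EG :  d_cold = [x <= 19], d_hot = [x >= 23]
# MS :  heat iff (heater was on and it is not hot) or (it was off and it is cold)
# FSM:  the logical state remembers whether the heater is on (2) or off (1)

def dha_step(xc, xd):
    d_cold, d_hot = xc <= 19.0, xc >= 23.0                            # event generator
    i = 2 if (xd == 2 and not d_hot) or (xd == 1 and d_cold) else 1   # mode selector
    xc_next = 0.9 * xc + (3.0 if i == 2 else 0.0)                     # switched affine system
    xd_next = i                                                       # finite state machine
    return xc_next, xd_next

xc, xd = 15.0, 1
traj = [xc]
for _ in range(40):
    xc, xd = dha_step(xc, xd)
    traj.append(xc)
# the temperature settles into a hysteresis limit cycle, roughly between 17 and 24
```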

+
+
+

How to get rid of the IF-THEN conditions in the model?

\ No newline at end of file
diff --git a/mld_intro 17.html b/mld_intro 17.html
new file mode 100644
index 0000000..b598dd6
--- /dev/null
+++ b/mld_intro 17.html
@@ -0,0 +1,1448 @@
Logic vs inequalities – B(E)3M35HYS – Hybrid systems

Logic vs inequalities


Our goal now is to turn the IF-THEN conditions in the model into linear inequalities. This will allow us to formulate the model as a mathematical program, actually a mixed-integer program (MIP).

+
+

Propositional logic and connectives

+

Propositions that are either true or false are composed of elementary propositions (Boolean variables) and connectives.

+
+

Boolean variable (or elementary proposition)

+

\(X\) evaluates to true or false. Oftentimes the values 0 and 1 are used instead, but it should be clear that these are logical values, not numbers.

+
+
+

Connectives

+
  • Conjunction (logical and): \(X_1 \land X_2\)
  • Disjunction (logical or): \(X_1 \lor X_2\)
  • Negation: \(\neg X_2\) (or \(\overline{X_2}\) or \(\sim X_2\))
  • Implication: \(X_1 \implies X_2\)
  • Equivalence: \(X_1 \iff X_2\)
  • Logical XOR: \(X_1 \oplus X_2\)
+
+
+

Equivalences of logic propositions

+

We will heavily use the following equivalences: \[ +\begin{aligned} +X_1 \implies X_2 \qquad &\equiv \qquad \neg X_2 \implies \neg X_1,\\ +X_1 \iff X_2 \qquad &\equiv \qquad (X_1 \implies X_2) \land (X_2 \implies X_1),\\ +X_1 \land X_2 \qquad &\equiv \qquad \neg (\neg X_1 \lor \neg X_2),\\ +X_1 \implies X_2 \qquad &\equiv \qquad \neg X_1 \lor X_2. +\end{aligned} +\]

+

The last one can be seen as follows: it cannot happen that \(X_1 \land \neg X_2\), that is, \(\neg(X_1 \land \neg X_2)\) holds. De Morgan's law then gives \(\neg X_1 \lor X_2\).
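All four equivalences can be verified exhaustively over the four truth assignments, for instance with a few lines of Julia (a quick sanity check, not part of the original text):

```julia
# Exhaustive check of the four propositional equivalences listed above.
implies(a, b) = !a || b

for X1 in (false, true), X2 in (false, true)
    @assert implies(X1, X2) == implies(!X2, !X1)               # contrapositive
    @assert (X1 == X2) == (implies(X1, X2) && implies(X2, X1)) # iff as two implications
    @assert (X1 && X2) == !(!X1 || !X2)                        # De Morgan
    @assert implies(X1, X2) == (!X1 || X2)                     # implication as disjunction
end
```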

+
+ + +
+

General transformation of Boolean expressions to integer inequalities

+

From Conjunctive Normal Form (CNF) \[ +\bigwedge_{j=1}^m \left[\left(\lor_{i\in \mathcal{P}_j} X_i\right) \lor \left(\lor_{i\in \mathcal{N}_j} \neg X_i\right)\right] +\] to 0-1 integer inequalities defining a polyhedron \[ +\begin{aligned} +\sum_{i\in \mathcal{P}_1} \delta_i + \sum_{i\in \mathcal{N}_1} (1-\delta_i) &\geq 1,\\ +&\vdots\\ +\sum_{i\in \mathcal{P}_m} \delta_i + \sum_{i\in \mathcal{N}_m} (1-\delta_i) &\geq 1. +\end{aligned} +\]
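The clause-by-clause translation can be checked by enumeration. A small Julia sketch for one illustrative clause \(X_1 \lor \neg X_2 \lor X_3\), i.e. \(\mathcal{P} = \{1,3\}\), \(\mathcal{N} = \{2\}\) (the clause itself is our example, not from the text):

```julia
# A CNF clause and its 0-1 inequality have exactly the same satisfying set.
clause(X1, X2, X3) = X1 || !X2 || X3
ineq(d1, d2, d3) = d1 + (1 - d2) + d3 >= 1   # sum over 𝒫 plus (1-δᵢ) over 𝒩

for d1 in 0:1, d2 in 0:1, d3 in 0:1
    @assert clause(d1 == 1, d2 == 1, d3 == 1) == ineq(d1, d2, d3)
end
```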

+
+
+

Finite state machine (FSM) using binary variables

+

Encode the discrete state variables in binary \[ +x_b \in \{0,1\}^{n_b} +\]

+

Similarly the discrete inputs \[ +u_b \in \{0,1\}^{m_b} +\]

+

The logical state equation then \[ +x_b(k+1) = f_b(x_b(k),u_b(k),\delta_e(k)) +\]

+
+

Example 1 (Example)  

+
+
+
+ +
+
+Figure 1: Example of a FSM +
+
+
+

The state update/transition equation is \[ +\begin{aligned} +x_d(k+1) = +\begin{cases} +\text{Red} & \text{if}\; ([x_d = \text{green}] \land \neg [\delta_3=1]) \lor ([x_d = \text{red}] \land \neg [\delta_3=1])\\ +\text{Green} & \text{if} \; \ldots\\ +\text{Blue} & \text{if} \; \ldots +\end{cases} +\end{aligned} +\]

+

Binary encoding of the discrete states \[ +\text{Red}: x_b = \begin{bmatrix}0\\0 \end{bmatrix}, \; \text{Green}: x_b = \begin{bmatrix}0\\1 \end{bmatrix}, \; \text{Blue}: x_b = \begin{bmatrix}1\\0 \end{bmatrix} +\]

+

Reformulating the state update equations for binary variables \[ +\begin{aligned} +x_{b1} &= (\neg [x_{b1} = 1] \land \neg [x_{b2} = 1] \land \neg [\delta_1=1]) \\ +&\quad \lor (\neg [x_{b1} = 1] \land \neg [x_{b2} = 1] \land [\delta_1=1] \land [u_{b2}=1])\\ +&\quad \lor (\neg [x_{b1} = 1] \land [x_{b2} = 1] \land \neg [u_{b1}=1] \land [\delta_3=1])\\ +&\quad \lor ([x_{b1} = 1]\land \neg [\delta_2=1])\\ +x_{b2} &= \ldots +\end{aligned} +\]

+

Finally, simplify and convert to CNF.

+
+
+
+

Mixing logical and continuous

+
    +
  • see Indicator variables.
  • +
+
+

Logical implies continuous

+

\[X \implies [f(x)\leq 0]\]

+

\[[\delta = 1] \implies [f(x)\leq 0]\]

+
    +
  • introduce \(M\) \[ +f(x) \leq (1-\delta) M +\]

  • +
  • that is large enough so that when \(\delta=0\), there is no practical restriction on \(f\).

    +
      +
    • Big-M technique.
    • +
  • +
+
+
+

Continuous implies logical

+

\[[f(x)\leq 0] \implies X\]

+

\[[f(x)\leq 0] \implies [\delta = 1]\]

+
    +
  • Equivalently \[\neg [\delta = 1] \implies \neg [f(x)\leq 0],\]

  • +
  • that is, \[[\delta = 0] \implies [f(x) > 0]\]

  • +
  • Introduce \(m\) such that \(f(x)>0\) is enforced when \(\delta=0\) \[ +f(x) > m\delta +\]

  • +
  • but small enough that there is no restriction on \(f\) when \(\delta=1\).

  • +
  • For numerical reasons, modify to nonstrict inequality \[ +f(x) \geq \epsilon + (m-\epsilon)\delta, +\] where \(\epsilon\approx 0\) (for example, machine epsilon).

  • +
+
+
+

Equivalence between logical and continuous

+
    +
  • Combining the previous two implications.
  • +
+

\[ +\begin{aligned} +f(x) &\leq (1-\delta) M,\\ +f(x) &\geq \epsilon + (m-\epsilon)\delta. +\end{aligned} +\]
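The pair of big-M inequalities above can be checked numerically: for each value of \(f(x)\) on a grid, exactly one value of \(\delta\) should be feasible, matching the indicator \([f(x)\leq 0]\). A quick Julia sketch with \(f(x) = x\) and assumed bounds \(m = -10\), \(M = 10\):

```julia
# For f(x) = x bounded by m ≤ f(x) ≤ M, the two big-M inequalities should admit
# δ = 1 exactly when f(x) ≤ 0 (up to the ε-tolerance near zero).
M, m, eps = 10.0, -10.0, 1e-6
feasible(x, d) = (x <= (1 - d)*M) && (x >= eps + (m - eps)*d)

for x in -10.0:0.5:10.0
    ds = [d for d in (0, 1) if feasible(x, d)]
    @assert ds == (x <= 0 ? [1] : [0])
end
```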

+
+
+
+

IF-THEN-ELSE rule as an inequality

+
    +
  • If \(X\) +
      +
    • then \(z = a^\top x + b^\top u + f\),
    • +
    • else \(z = 0\).
    • +
  • +
  • It can be expressed as a product \[ +z = \delta\,(a^\top x + b^\top u + f) +\]
  • +
+

\[ +\begin{aligned} +z &\leq M\delta,\\ +- z &\leq -m\delta,\\ +z &\leq a^\top x + b^\top u + f - m(1-\delta),\\ +-z &\leq -(a^\top x + b^\top u + f) + M(1-\delta). +\end{aligned} +\]

+
+

The reasoning is that if \(\delta=0\), the first two inequalities force \(z=0\) while \(a^\top x + b^\top u + f\) is left unrestricted, and if \(\delta=1\), the last two inequalities force \(z = a^\top x + b^\top u + f\).
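This pinning behavior of the four inequalities can be verified by brute force. A Julia sketch treating \(v = a^\top x + b^\top u + f\) as a scalar with assumed bounds \(m = -5\), \(M = 5\):

```julia
# The four inequalities should be satisfied only by z = δ·v (checked on a grid).
M, m = 5.0, -5.0
ok(z, d, v) = z <= M*d && -z <= -m*d && z <= v - m*(1 - d) && -z <= -v + M*(1 - d)

for d in (0, 1), v in -5.0:0.5:5.0
    @assert ok(d*v, d, v)            # the intended value is feasible...
    for z in -5.0:0.5:5.0
        z == d*v || @assert !ok(z, d, v)   # ...and nothing else on the grid is
    end
end
```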

+
+
+
+

Another IF-THEN-ELSE rule

+
    +
  • If \(X\) +
      +
    • then \(z = a_1^\top x + b_1^\top u + f_1\),
    • +
    • else \(z = a_2^\top x + b_2^\top u + f_2\).
    • +
  • +
  • It can be expressed as \[ +\begin{aligned} +z &= \delta\,(a_1^\top x + b_1^\top u + f_1) \\ +&\quad + (1-\delta)(a_2^\top x + b_2^\top u + f_2) +\end{aligned} +\]
  • +
+

\[ +\begin{aligned} +(m_2-M_1)\delta + z &\leq a_2^\top x + b_2^\top u + f_2,\\ +(m_1-M_2)\delta - z &\leq -a_2^\top x - b_2^\top u - f_2,\\ +(m_1-M_2)(1-\delta) + z &\leq a_1^\top x + b_1^\top u + f_1,\\ +(m_2-M_1)(1-\delta) - z &\leq -a_1^\top x - b_1^\top u - f_1. +\end{aligned} +\]
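These four inequalities should pin \(z\) to \(a_1^\top x + b_1^\top u + f_1\) when \(\delta=1\) and to \(a_2^\top x + b_2^\top u + f_2\) when \(\delta=0\). Writing \(v_i = a_i^\top x + b_i^\top u + f_i\) with assumed bounds \(m_i = -4\), \(M_i = 4\), a brute-force Julia check:

```julia
# Verify that the four inequalities force z = δ·v₁ + (1-δ)·v₂ on a grid.
m1, M1, m2, M2 = -4.0, 4.0, -4.0, 4.0
function ok(z, d, v1, v2)
    (m2 - M1)*d + z <= v2 &&
    (m1 - M2)*d - z <= -v2 &&
    (m1 - M2)*(1 - d) + z <= v1 &&
    (m2 - M1)*(1 - d) - z <= -v1
end

for d in (0, 1), v1 in -4.0:1.0:4.0, v2 in -4.0:1.0:4.0
    zstar = d*v1 + (1 - d)*v2
    @assert ok(zstar, d, v1, v2)
    for z in -4.0:1.0:4.0
        z == zstar || @assert !ok(z, d, v1, v2)
    end
end
```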

+
+
+

Generation of events by mixing logical and continuous variables in inequalities

+

\[ +\begin{aligned} +h_i(x_c(k), u_c(k)) &\leq M_i (1-\delta_{e,i})\\ +h_i(x_c(k), u_c(k)) &\geq \epsilon + (m_i-\epsilon) \delta_{e,i} +\end{aligned} +\]

+
+
+

Switched affine system

+
    +
  • We want to get rid of the IF-THEN conditions and formulate the switching mechanism in the form of inequalities too.
  • +
+

\[ +x_c(k+1) = \sum_{i=1}^s z_i(k), +\]

+
    +
  • where \[ +z_1(k) = +\begin{cases} +A_1 x_c(k) + B_1 u_c(k) + f_1 & \text{if}\;i(k)=1\\ +0 & \text{otherwise} +\end{cases} +\]
  • +
+

\[\quad \vdots\]

+

\[ +z_s(k) = +\begin{cases} +A_s x_c(k) + B_s u_c(k) + f_s & \text{if}\;i(k)=s\\ +0 & \text{otherwise} +\end{cases} +\]

+
    +
  • For each \(i\in \{1, 2, \ldots, s\}\)
  • +
+

\[ +\begin{aligned} +z_i &\leq M_i\delta_i,\\ +- z_i &\leq -m_i\delta_i,\\ +z_i &\leq A_i x_c + B_i u_c + f_i - m_i(1-\delta_i),\\ +-z_i &\leq -(A_i x_c + B_i u_c + f_i) + M_i(1-\delta_i). +\end{aligned} +\]

+
+
+

Mixed logical dynamical (MLD) system

+

\[ +\begin{aligned} +x(k+1) &= Ax(k) + B_u u(k) + B_\delta\delta(k) + B_zz(k) + B_0\\ +y(k) &= Cx(k) + D_u u(k) + D_\delta \delta(k) + D_z z(k) + D_0\\ +E_\delta \delta(k) &+ E_z z(k) \leq E_u u(k) + E_x x(k) + E_0 +\end{aligned} +\]

+
+
+

Simple example

+
+
+

HYSDEL language

+
+
+

Piecewise affine systems

+

\[ +\begin{aligned} +x(k+1) &= A_{i(k)}x(k) + B_{i(k)} u(k) + f_{i(k)}\\ +y(k) &= C_{i(k)}x(k) + D_{i(k)} u(k) + g_{i(k)}\\ +& \; H_{i(k)} x(k) + J_{i(k)} u(k) \leq K_{i(k)} +\end{aligned} +\]

+
    +
  • DHA, MLD, PWA are equivalent.
  • +
+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mld_references 9.html b/mld_references 9.html new file mode 100644 index 0000000..c9a7cf0 --- /dev/null +++ b/mld_references 9.html @@ -0,0 +1,1114 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The MLD description of discrete-time hybrid systems was originally introduced in [1], but a perhaps even more accessible introduction is in Chapter 16 of the freely downloadable book [2]. In our text here we follow their expositions.

+

Just in case some issues are still unclear, in particular those related to the connection between the constraints (inequalities) imposed on continuous (aka real) variables and the logical conditions imposed on binary variables, you may like the slightly more formal discussion in Section 2.2 of the thesis [3]. Strictly speaking, this use of binary (0-1 integer) variables to encode constraints on real variables is standard in optimization and is described elsewhere – search for indicator variables or indicator constraints. A recommendable general resource is the book (unfortunately not available online) [4], in particular its Section 9.1.3 on indicator variables.

+

All the theoretical concepts and procedures introduced in this lecture (and in those corresponding papers and books) are straightforward but rather tedious to actually implement. There is a HYSDEL language for modelling hybrid systems (discrete hybrid automata as considered in this lecture) that automates these procedures. The HYSDEL language is described not only in the documentation but also in the dedicated paper [5], which can also serve as a learning resource for the topic.

+
+

Case studies

+

Batch evaporator [6] and [7].

+ + + +
+ + Back to top

References

+
+
[1]
A. Bemporad and M. Morari, “Control of systems integrating logic, dynamics, and constraints,” Automatica, vol. 35, no. 3, pp. 407–427, Mar. 1999, doi: 10.1016/S0005-1098(98)00178-2.
+
+
+
[2]
F. Borrelli, A. Bemporad, and M. Morari, Predictive Control for Linear and Hybrid Systems. Cambridge, New York: Cambridge University Press, 2017. Available: http://cse.lab.imtlucca.it/~bemporad/publications/papers/BBMbook.pdf
+
+
+
[3]
D. Mignone, “Control and estimation of hybrid systems with mathematical optimization,” Doctoral thesis, ETH Zurich, 2002. doi: 10.3929/ethz-a-004279802.
+
+
+
[4]
H. P. Williams, Model Building in Mathematical Programming, 5th ed. Hoboken, N.J: Wiley, 2013.
+
+
+
[5]
F. D. Torrisi and A. Bemporad, “HYSDEL—a tool for generating computational hybrid models for analysis and synthesis problems,” IEEE Transactions on Control Systems Technology, vol. 12, no. 2, pp. 235–249, Mar. 2004, doi: 10.1109/TCST.2004.824309.
+
+
+
[6]
A. Bemporad, F. D. Torrisi, and M. Morari, “Discrete-time Hybrid Modeling and Verification of the Batch Evaporator Process Benchmark,” European Journal of Control, vol. 7, no. 4, pp. 382–399, Jan. 2001, doi: 10.3166/ejc.7.382-399.
+
+
+
[7]
S. Kowalewski and O. Stursberg, “The Batch Evaporator: A Benchmark Example for Safety Analysis of Processing Systems under Logic Control,” in Proceedings 4th Int. Workshop on Discrete Event Systems (WODES’98), Cagliari, Italy: IEE, London, Aug. 1998, pp. 302–307.
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mld_why 15.html b/mld_why 15.html new file mode 100644 index 0000000..2961fb2 --- /dev/null +++ b/mld_why 15.html @@ -0,0 +1,1115 @@ + + + + + + + + + +Why another framework? – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Why another framework?

+
+ + + +
+ + + + +
+ + + +
+ + +

We are going to introduce yet another framework for modeling hybrid systems – mixed logical dynamical (MLD) description. A question must inevitably pop up: “why yet another framework?”

+

The answer is that we would like to have a model of a hybrid system that is suitable for model predictive control (MPC). Recall that the role of the model in MPC is to define some constraints (equations and inequalities) in the numerical optimization problem. The frameworks that we have considered so far do not offer this.

+

In particular, with the state variable and control input vectors composed of continuous and discrete variables \[ +\bm x = \begin{bmatrix}\bm x_c\\\bm x_d\end{bmatrix}, \quad \bm u = \begin{bmatrix}\bm u_c\\\bm u_d\end{bmatrix}, +\] where \(\bm x_c\in\mathbb R^{n_c}\), \(\bm x_d\in\mathbb N^{n_d}\), \(\bm u_c\in\mathbb R^{m_c}\) and \(\bm u_d\in\mathbb N^{m_d}\), we would like to formulate the model in the form of state equations, say \[ +\begin{aligned} +\begin{bmatrix}\bm x_c(k+1) \\ \bm x_d(k+1)\end{bmatrix} +&= +\begin{bmatrix} \mathbf f_c(\bm x(k), \bm u(k)) \\ \mathbf f_d(\bm x(k), \bm u(k)) \end{bmatrix} +\end{aligned} +\]

+

Is it possible?

+

Unfortunately no. At least not exactly in this form. But something close to it is achievable instead.

+

But first we need to set the terminology and notation used to define a discrete(-time) hybrid automaton.

+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mpc_mld_explicit 16.html b/mpc_mld_explicit 16.html new file mode 100644 index 0000000..19efbf6 --- /dev/null +++ b/mpc_mld_explicit 16.html @@ -0,0 +1,1199 @@ + + + + + + + + + +Explicit MPC for hybrid systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Explicit MPC for hybrid systems

+
+ + + +
+ + + + +
+ + + +
+ + +

Model predictive control (MPC) is not computationally cheap (compared to, say, PID or LQG control), as it requires solving an optimization problem – typically a quadratic program (QP) – online. The optimization solver needs to be a part of the controller.

+

There is an alternative, though, at least in some cases. It is called explicit MPC. The computationally heavy optimization is performed only during the design process, and the MPC controller is then implemented just as an affine state feedback

+

\[ +\bm u_k(\bm x(k)) = \mathbf F_k^i \bm x(k) + \mathbf g_k^i,\; \text{if}\; \bm x(k) \in \mathcal R_k^i, +\]

+

with the coefficients picked from some kind of a lookup table in real time. Although retrieving the coefficients of the feedback controller is not computationally trivial, it is still cheaper than a full optimization.

+
+

Multiparametric programming

+

The key technique for explicit MPC is multi-parametric programming. In order to explain it, consider the following problem

+

\[ +J^\ast(x) = \inf_z J(z;x). +\]

+

The \(z\) variable is an optimization variable, while \(x\) is a parameter. For a given parameter \(x\), the cost function \(J\) is minimized. We study how the optimal cost \(J^\ast\) depends on the parameter, hence the name parametric programming. If \(x\) is a vector, the name of the problem changes to multiparametric programming.

+
+

Example: scalar variable, single parameter

+

Consider the following cost function \(J(z;x)\) in \(z\) parameterized by \(x\). The optimization variable \(z\) is constrained, and this constraint is also parameterized by \(x\): \[ +\begin{aligned} +J(z;x) &= \frac{1}{2} z^2 + 2zx + 2x^2 \\ +\text{subject to} &\quad z \leq 1 + x. +\end{aligned} +\]

+

In this simple case we can aim at an analytical solution. We proceed in the standard way – we introduce a Lagrange multiplier \(\lambda\) and form the augmented cost function \[ +L(z,\lambda; x) = \frac{1}{2} z^2 + 2zx + 2x^2 + \lambda (z-1-x). +\]

+

The necessary conditions of optimality for the inequality-constrained problem come in the form of the KKT conditions \[ +\begin{aligned} +z + 2x + \lambda &= 0,\\ +z - 1 - x &\leq 0,\\ +\lambda & \geq 0,\\ +\lambda (z - 1 - x) &= 0. +\end{aligned} +\]

+

The last condition – the complementarity condition – gives rise to two scenarios: one corresponding to \(\lambda = 0\), and the other corresponding to \(z - 1 - x = 0\). We consider them separately below.

+

After substituting \(\lambda = 0\) into the KKT conditions, we get \[ +\begin{aligned} +z + 2x &= 0,\\ +z - 1 - x & \leq 0. +\end{aligned} +\]

+

From the first equation we get how \(z\) depends on \(x\), and from the second we obtain a bound on \(x\). Finally, we can also substitute the expression for \(z\) into the cost function \(J\) to get the optimal cost \(J^\ast\) as a function of \(x\). All these are summarized here \[ +\begin{aligned} +z &= -2x,\\ +x & \geq -\frac{1}{3},\\ +J^\ast(x) &= 0. +\end{aligned} +\]

+

Now, the other scenario. Upon substituting \(z - 1 - x = 0\) into the KKT conditions we get

+

\[ +\begin{aligned} +z + 2x + \lambda &= 0,\\ +z - 1 - x &= 0,\\ +\lambda & \geq 0. +\end{aligned} +\]

+

From the second equation we get the expression for \(z\) in terms of \(x\). Substituting it into the first equation and invoking the nonnegativity of \(\lambda\), we get the bound on \(x\) (not surprisingly, it complements the one obtained in the previous scenario). Finally, substituting for \(z\) in the cost function \(J\), we get a formula for the cost \(J^\ast\) as a function of \(x\).

+

\[ +\begin{aligned} +z &= 1 + x,\\ +\lambda &= -z - 2x \geq 0 \quad \implies \quad x \leq -\frac{1}{3},\\ +J^\ast(x) &= \frac{9}{2}x^2 + 3x + \frac{1}{2}. +\end{aligned} +\]

+

The two scenarios can now be combined into a single piecewise affine function \(z(x)\) \[ +z(x) = \begin{cases} +1+x & \text{if } x \leq -\frac{1}{3},\\ +-2x & \text{if } x > -\frac{1}{3}. +\end{cases} +\]

+
x = range(-1, 1, length=100)
+z(x) = x <= -1/3 ? 1 + x : -2x
+Jstar(x) = x <= -1/3 ? 9/2*x^2 + 3x + 1/2 : 0
+
+using Plots
+plot(x, z.(x), label="z(x)")
+vline!([-1/3],line=:dash)
+xlabel!("x")
+ylabel!("z(x)")
+

and a piecewise quadratic cost function \(J^\ast(x)\) \[ +J^\ast(x) = \begin{cases} +\frac{9}{2}x^2 + 3x + \frac{1}{2} & \text{if } x \leq -\frac{1}{3},\\ +0 & \text{if } x > -\frac{1}{3}. +\end{cases} +\]

+
plot(x, Jstar.(x), label="J*(x)")
+vline!([-1/3],line=:dash)
+xlabel!("x")
+ylabel!("J*(x)")
+ + +
+
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mpc_mld_online 10.html b/mpc_mld_online 10.html new file mode 100644 index 0000000..523d43e --- /dev/null +++ b/mpc_mld_online 10.html @@ -0,0 +1,1131 @@ + + + + + + + + + +Online MPC for hybrid systems – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Online MPC for hybrid systems

+
+ + + +
+ + + + +
+ + + +
+ + +
+

Optimal control on a finite horizon

+
+

Cost function

+

First, we need to set the cost function for the optimal control problem. As usual in optimal control, we want to impose different weights on individual state and control variables. The most popular is the quadratic cost function well known from LQ-optimal control

+

\[ +J_0(x(0),U_0) = x_N^T S_N x_N + \sum_{k=0}^{N-1} \left( x_k^T Q x_k + u_k^T R u_k \right) +\]

+

But other (weighted) norms can also be used, in particular 1-norm and infinity-norm

+

\[ +J_0(x(0),U_0) = \|S_N x_N\|_1 + \sum_{k=0}^{N-1} \left( \|Q x_k\|_1 + \|R u_k\|_1 \right), +\]

+

\[ +J_0(x(0),U_0) = \|S_N x_N\|_{\infty} + \sum_{k=0}^{N-1} \left( \|Q x_k\|_{\infty} + \|R u_k\|_{\infty} \right). +\]

+
+
+
+

Optimization problem

+

Combining the cost function with the MLD model, and perhaps with some extra constraints imposed on the control inputs as well as the state variables, we get \[ +\operatorname*{minimize}_{u_0, u_1, \ldots, u_{N-1}} J_0(x(0),(u_0, u_1, \ldots, u_{N-1})) +\]

+

subject to \[ +\begin{aligned} +x_{k+1} &= Ax_k + B_u u_k + B_\delta\delta_k + B_z z_k + B_0\\ +y_k &= Cx_k + D_u u_k + D_\delta \delta_k + D_z z_k + D_0\\ +E_\delta \delta_k &+ E_z z_k \leq E_u u_k + E_x x_k + E_0 \\ +u_{\min} &\leq u_k \leq u_{\max} \\ +x_{\min} &\leq x_k \leq x_{\max} \\ +P x_N &\leq r \\ +x_0 &= x(0) +\end{aligned} +\]
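Because of the binary variables \(\delta_k\), this is a mixed-integer program (a MIQP for the quadratic cost), which in practice is handed to a dedicated solver. Purely for intuition, the structure of the problem can be illustrated by brute-force enumeration on a tiny example; the scalar PWA dynamics, the input grid, and the unit weights below are all assumptions made for this sketch:

```julia
# Brute-force illustration only – a real implementation uses a MIQP/MILP solver.
f(x, u) = (x >= 0 ? 0.8x : -0.8x) + u      # scalar PWA dynamics (illustrative)

function cost(x0, U)                       # quadratic cost with Q = R = S_N = 1
    x, J = x0, 0.0
    for u in U
        J += x^2 + u^2
        x = f(x, u)
    end
    return J + x^2                         # terminal penalty
end

# Enumerate all input sequences on a grid and keep the cheapest one.
function best_sequence(x0, N; ugrid = -1.0:0.25:1.0)
    best, bestJ = nothing, Inf
    for U in Iterators.product(ntuple(_ -> ugrid, N)...)
        J = cost(x0, U)
        if J < bestJ
            best, bestJ = U, J
        end
    end
    return best, bestJ
end

U, J = best_sequence(2.0, 3)   # a receding-horizon controller would apply U[1] only
```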

+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mpc_mld_references 17.html b/mpc_mld_references 17.html new file mode 100644 index 0000000..79e708f --- /dev/null +++ b/mpc_mld_references 17.html @@ -0,0 +1,1085 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The main resource for us is Chapter 17 of the freely available book [1] that we already referred to in the previous chapter.

+

Those who have not been exposed to the fundamentals of MPC can check Chapter 12 of the same book. Alternatively, our own introduction to the topic in the third chapter/week of the Optimal and robust control course may be found useful.

+ + + + + Back to top

References

+
+
[1]
F. Borrelli, A. Bemporad, and M. Morari, Predictive Control for Linear and Hybrid Systems. Cambridge, New York: Cambridge University Press, 2017. Available: http://cse.lab.imtlucca.it/~bemporad/publications/papers/BBMbook.pdf
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/mpc_mld_software 17.html b/mpc_mld_software 17.html new file mode 100644 index 0000000..1c3af92 --- /dev/null +++ b/mpc_mld_software 17.html @@ -0,0 +1,1059 @@ + + + + + + + + + +Software – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

Essentially the same as in the previous chapter.

+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/petri_nets 8.html b/petri_nets 8.html new file mode 100644 index 0000000..486ad20 --- /dev/null +++ b/petri_nets 8.html @@ -0,0 +1,1679 @@ + + + + + + + + + +Petri nets – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Petri nets

+
+ + + +
+ + + + +
+ + + +
+ + +

In this chapter we introduce another formalism for modelling discrete event systems (DES) – Petri nets. Petri nets offer an alternative perspective on discrete event systems compared to automata. And it is good to have alternatives, isn’t it? For some purposes, one framework can be more appropriate than the other.

+

Furthermore, the ideas behind Petri nets even made it into international standards. Either directly or through the derived GRAFCET language, which in turn served as the basis for the Sequential Function Chart (SFC) language for PLC programming. See the references.

+

Last but not least, an elegant algebraic framework based on the so-called (max,+) algebra has been developed for a subset of Petri nets (so-called event graphs) and it would be a shame not to mention it in our course (in the next chapter).

+
+

Definition of a Petri net

+

Similarly as in the case of automata, a Petri net (PN) can be defined as a tuple of sets and functions: \boxed{PN = \{\mathcal{P}, \mathcal{T}, \mathcal{A}, w\},} where

+
    +
  • \mathcal{P} = \{p_1, \dots, p_n\} is a finite set of places,
  • +
  • \mathcal{T} = \{t_1, \dots, t_m\} is a finite set of transitions,
  • +
  • \mathcal{A} \subseteq (\mathcal{P} \times \mathcal{T}) \cup (\mathcal{T} \times \mathcal{P}) is a finite set of arcs, and these arcs are directed and since there are two types of nodes, there are also two types of arcs: +
      +
    • (p_i, t_j) \in \mathcal{A} is from place p_i to transition t_j,
    • +
    • (t_j, p_i) \in \mathcal{A} is from transition t_j to place p_i,
    • +
  • +
  • w : \mathcal{A} \to \mathbb{N} is a weight function.
  • +
+
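The tuple definition translates almost verbatim into code. A minimal Julia sketch (the type and field names are our own choices):

```julia
# Direct transcription of the tuple PN = {𝒫, 𝒯, 𝒜, w}; names are illustrative.
struct PetriNet
    places::Vector{Symbol}
    transitions::Vector{Symbol}
    arcs::Vector{Tuple{Symbol,Symbol}}   # (place, transition) or (transition, place)
    w::Dict{Tuple{Symbol,Symbol},Int}    # weight function on the arcs
end

# The simple net of Example 1: two places, one transition, w(p1,t) = 2, w(t,p2) = 1.
pn = PetriNet([:p1, :p2], [:t],
              [(:p1, :t), (:t, :p2)],
              Dict((:p1, :t) => 2, (:t, :p2) => 1))
```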

Similarly as in the case of automata, Petri nets can be visualized using graphs. But this time, we need to invoke the concept of a weighted bipartite graph. That is, a graph with two types of nodes:

+
    +
  • places = circles,
  • +
  • transitions = bars.
  • +
+

The nodes of different kinds are connected by arcs (arrowed curves). Integer weights are associated with the arcs. Alternatively, for a small weight (2, 3, 4), the weight can be encoded graphically by drawing multiple arcs.

+
+

Example 1 (Simple Petri net) We consider just two places, that is, \mathcal{P} = \{p_1, p_2\}, and one transition, that is, \mathcal{T} = \{t\}. The set of arcs is \mathcal{A} = \{\underbrace{(p_1, t)}_{a_1}, \underbrace{(t, p_2)}_{a_2}\}, and the associated weights are w(a_1) = w((p_1, t)) = 2 and w(a_2) = w((t, p_2)) = 1. The Petri net is depicted in Fig 1.

+
+
+
+ +
+
+Figure 1: Example of a simple Petri net +
+
+
+
+
+

Additional definitions

+
    +
  • \mathcal{I}(t_j) … a set of input places of the transition t_j,
  • +
  • \mathcal{O}(t_j) … a set of output places of the transition t_j.
  • +
+
+

Example 2 (More complex Petri net)  

+
    +
  • \mathcal{P} = \{p_1, p_2, p_3, p_4\},
  • +
  • \mathcal{T} = \{t_1, t_2, t_3, t_4, t_5\},
  • +
  • \mathcal{A} = \{(p_1, t_1), (t_1, p_1), (p_1, t_2),\ldots\},
  • +
  • w((p_1, t_1)) = 2, \; w((t_1, p_1)) = 1, \; \ldots
  • +
+
+
+
+ +
+
+Figure 2: Example of a more complex Petri net +
+
+
+
+
+
+

Marking and marked Petri nets

+

An important concept that we must introduce now is that of marking. It is a function x: \mathcal{P} \rightarrow \mathbb{N} that assigns an integer to each place.

+

The vector composed of the values of the marking function for all places \bm x = \begin{bmatrix}x(p_1)\\ x(p_2)\\ \vdots \\ x(p_n) \end{bmatrix} can be viewed as the state vector (although the Petri nets community perhaps would not use this terminology and stick to just marking).

+

A marked Petri net is then a Petri net augmented with the marking

+

MPN = \{\mathcal{P}, \mathcal{T}, \mathcal{A}, w,x\}.

+
+

Visualization of marked Petri net using tokens

+

A marked Petri net can also be visualized by placing tokens (dots) into the places. The number of tokens in a place corresponds to the value of the marking function for that place.

+
+

Example 3 (Marked Petri net) Consider the Petri net from Example 1. The marking function is x(p_1) = 1 and x(p_2) = 0, which assembled into a vector gives \bm x = \begin{bmatrix}1\\ 0 \end{bmatrix}. The marked Petri net is depicted in Fig 3.

+
+
+
+ +
+
+Figure 3: Example of a marked Petri net +
+
+
+

For another marking, namely \bm x = \begin{bmatrix}2\\ 1 \end{bmatrix}, the marked Petri net is depicted in Fig 4.

+
+
+
+ +
+
+Figure 4: Example of a marked Petri net with different marking +
+
+
+
+
+
+
+

Enabling and firing of a transition

+

Finally, here comes the enabling (pun intended) component of the definition of a Petri net – enabled transition. A transition t_j does not just happen – we say fire – whenever it wants; it can only happen (fire) if it is enabled, and the marking is used to determine if it is enabled. Namely, the transition is enabled if the value of the marking function for each input place is greater than or equal to the weight of the arc from that place to the transition. That is, the transition t_j is enabled if +x(p_i) \geq w(p_i,t_j)\quad \forall p_i \in \mathcal{I}(t_j). +

+
+
+
+ +
+
+Can but does not have to +
+
+
+

The enabled transition can fire, but it doesn’t have to. We will exploit this in timed PN.

+
+
+
+

Example 4 (Enabled transition) See the PN in Example 3: in the first marked PN the transition cannot fire, in the second it can.

+
+
+
+
+

State transition function

+

We now have a Petri net as a conceptual model with a graphical representation. But in order to use it for some quantitative analysis, it is useful to turn it into some computational form. Preferably a familiar one. This is done by defining a state transition function. For a Petri net with n places, the state transition function is +f: \mathbb N^n \times \mathcal{T} \rightarrow \mathbb N^n, + which reads that the state transition function assigns a new marking (state) to the Petri net after a transition is fired at some given marking (state).

+

The function is defined for a transition t_j only if the transition is enabled.

+

If the transition t_j is enabled and fired, the state evolves as +\bm x^+ = f(\bm x, t_j), + where the individual components of \bm x evolve according to \boxed{ + x^+(p_i) = x(p_i) - w(p_i,t_j) + w(t_j,p_i), \; i = 1,\ldots,n.} +

+

This has a visual interpretation – a fired transition moves tokens from the input to the output places.
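The enabling test and the firing rule are easy to sketch in Julia. Storing the arc weights in input/output weight matrices is our own representational choice (Win[i,j] = w(p_i,t_j), Wout[i,j] = w(t_j,p_i), zero where no arc exists); the numbers below reproduce the simple net of Examples 1, 3 and 4:

```julia
# Enabling test: every input place must hold at least w(pᵢ,tⱼ) tokens.
enabled(x, Win, j) = all(x .>= Win[:, j])

# Firing rule: remove w(pᵢ,tⱼ) tokens, add w(tⱼ,pᵢ) tokens, for every place.
fire(x, Win, Wout, j) = enabled(x, Win, j) ? x - Win[:, j] + Wout[:, j] :
                        error("transition t$j is not enabled")

# The net of Example 1: w(p1,t) = 2, w(t,p2) = 1.
Win  = reshape([2, 0], 2, 1)
Wout = reshape([0, 1], 2, 1)
@assert !enabled([1, 0], Win, 1)             # first marking of Example 3: cannot fire
@assert fire([2, 1], Win, Wout, 1) == [0, 2] # second marking: fires, one token "lost"
```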

+
+

Example 5 (Moving tokens around) Consider the PN with the initial marking (state) \bm x_0 = \begin{bmatrix}2\\ 0\\ 0\\ 1 \end{bmatrix} (at discrete time 0), and the transition t_1 enabled

+
+
+

+
Example PN in the initial state at time 0, the transition t_1 enabled, but not yet fired
+
+
+
+
+
+ +
+
+Conflict in notation. Again. Sorry. +
+
+
+

We admit the notation here is confusing: we use the lower index 0 in \bm x_0 to denote the discrete time, while the lower indices in t_1, p_1 and p_2 just number the transitions and places, respectively. We could have chosen something like \bm x(0) or \bm x[0], but we dare to hope that the context will make it clear.

+
+
+

Now we assume that t_1 is fired

+
+
+
+ +
+
+Figure 5: Transition t_1 in the example PN fired at time 1 +
+
+
+

The state vector changes to \bm x_1 = [1, 1, 1, 1]^\top, the discrete time is 1 now.

+

As a result of this transition, note that t_1, t_2, t_3 are now enabled.

+
+
+
+ +
+
+Number of tokens need not be preserved +
+
+
+

In this example we see for the first time that the number of tokens need not be preserved.

+
+
+

Now fire the t_2 transition

+
+
+

+
Transition t_2 in the example PN fired at time 2
+
+
+

The state vector changes to \bm x_2 = [1, 1, 0, 2]^\top, the discrete time is 2 now.

+

Good, we can see the idea. But now we go back to time 1 (as in Fig 5) to explore the alternative evolution. With the state vector \bm x_1 = [1, 1, 1, 1]^\top and the transitions t_1, t_2, t_3 enabled, we fire t_3 this time.

+
+
+

+
Transition t_3 in the example PN fired at time 1
+
+
+

The state changes to \bm x_2 = [0, 1, 0, 0]^\top, the discrete time is 2. Apparently the PN evolved into a different state. The lesson learnt with this example is that the order of firing of enabled transitions matters.

+
+
+
+
+ +
+
+The order in which the enabled transitions are fired does matter +
+
+
+

The dependence of the state evolution upon the order of firing the transitions is not surprising. We have already encountered it in automata when the active event set for a given state contains more than a single element.

+
+
+
+
+

Reachability

+

We have started talking about states and state transitions in Petri nets, which are all concepts that we are familiar with from dynamical systems. Another such concept is reachability. We explain it through an example.

+
+

Example 6 (Not all states are reachable)  

+
+
+

+
Example of an unreachability in a Petri net
+
+
+

The Petri net is initially in the state [2,1]^\top. The only reachable state is [0,2]^\top.

+

By the way, note that the weight of the arc from the place p_1 to the transition t is 2, so both tokens are removed from the place p_1 when the transition t fires. But then the arc to the place p_2 has weight 1, so only one token is added to the place p_2. The other token is “lost”.

+
+
+

Reachability tree and graph

+

Here we introduce two tools for analysis of reachability of a Petri net.

+
+

Example 7 Consider the following example of a Petri net.

+
+
+

+
Example of a Petri net for reachability analysis
+
+
+

In Fig 6 we draw a reachability tree for this Petri net.

+
+
+
+ +
+
+Figure 6: Reachability tree for an example Petri net +
+
+
+

In Fig 7 we draw a reachability graph for this Petri net.

+
+
+
+ +
+
+Figure 7: Reachability graph for an example Petri net +
+
+
+
+
+
+
+

Number of tokens need not be preserved

+

We have already commented on this before, but we emphasize it here. Indeed, it can be that +\sum_{p_i\in\mathcal{O}(t_j)}w(t_j,p_i) < \sum_{p_i\in\mathcal{I}(t_j)} w(p_i,t_j) +

+

or

+

+\sum_{p_i\in\mathcal{O}(t_j)}w(t_j,p_i) > \sum_{p_i\in\mathcal{I}(t_j)} w(p_i,t_j) +

+

With this reminder, we can now highlight several patterns that can be observed in Petri nets.

+
+
+

AND-convergence, AND-divergence

+

+
+
+

OR-convergence and OR-divergence

+

+
+
+

Nondeterminism in a PN

+

In the four patterns just enumerated, we have seen that the last one – the OR-divergence – is not deterministic. Indeed, consider the following example.

+
+

Example 8  

+
+
+

+
Nondeterminism in a Petri net
+
+
+
+

In other words, we can incorporate nondeterminism in a model.

+
+
+
+ +
+
+Nondeterminism in automata +
+
+
+

Recall that something similar can be encountered in automata if the active event set for a given state contains more than one element (event, transition).

+
+
+
+
+

Subclasses of Petri nets

+

We can identify two subclasses of Petri nets:

+
    +
  • event graphs,
  • +
  • state machines.
  • +
+
+

Event graph

+
    +
  • Each place has just one input and one output transition (all ws equal to 1).
  • +
  • No OR-convergence, no OR-divergence.
  • +
  • Also known as Decision-free PN.
  • +
  • It can model synchronization.
  • +
+
+

Example 9 (Event graph)  

+
+
+

+
Example of an event graph
+
+
+
+
+
+

State machine

+
    +
  • Each transition has just one input and one output place.
  • +
  • No AND-convergence, no AND-divergence.
  • +
  • Does not model synchronization.
  • +
  • It can model race conditions.
  • +
  • With no source (input) and sink (output) transitions, the number of tokens is preserved.
  • +
+
+

Example 10 (State machine)  

+
+
+

+
Example of a state machine
+
+
+
+
+
+

Incidence matrix

+

We consider a Petri net with n places and m transitions. The incidence matrix is defined as +\bm A \in \mathbb{Z}^{n\times m}, + where +a_{ij} = w(t_j,p_i) - w(p_i,t_j). +

+
+
+
+ +
+
+Transpose +
+
+
+

Some authors define the incidence matrix as the transpose of our definition.

+
+
+
+
+

State equation for a Petri net

+

With the incidence matrix defined above, the state equation for a Petri net can be written as +\bm x^+ = \bm x + \bm A \bm u, + where \bm u is a firing vector for the enabled j-th transition +\bm u = \bm e_j = \begin{bmatrix}0 \\ \vdots \\ 0 \\ 1\\ 0\\ \vdots\\ 0\end{bmatrix} + with the 1 at the j-th position.

+
+
+
+ +
+
+State vector as a column rather than a row +
+
+
+

Note that in [1] they define everything in terms of the transposed quantities, but we prefer sticking to the notion of a state vector as a column.

+
+
+
+

Example 11 (State equation for a Petri net) Consider the Petri net from Example 5, which we show again below in Fig 8.

+
+
+
+ +
+
+Figure 8: Example of a Petri net +
+
+
+

The initial state is given by the vector +\bm x_0 += +\begin{bmatrix} +2\\ 0\\ 0\\ 1 +\end{bmatrix} +

+

The incidence matrix is +\bm A = \begin{bmatrix} +-1 & 0 & -1\\ +1 & 0 & 0\\ +1 & -1 & -1\\ +0 & 1 & -1 +\end{bmatrix} +

+

And the state vector evolves according to +\begin{aligned} +\bm x_1 &= \bm x_0 + \bm A \bm u_1\\ +\bm x_2 &= \bm x_1 + \bm A \bm u_2\\ +\vdots & +\end{aligned} +
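The state equation lends itself directly to simulation. Below is a small sketch that replays a firing for this example. Note that the incidence matrix alone cannot decide enabledness when a place is both an input and an output of the same transition; we assume here that this is not the case, so the input weights can be recovered from the negative entries of A.

```python
# Incidence matrix A (4 places x 3 transitions) and initial marking from the example.
A = [[-1,  0, -1],
     [ 1,  0,  0],
     [ 1, -1, -1],
     [ 0,  1, -1]]
x0 = [2, 0, 0, 1]

# Assuming no place is both input and output of the same transition,
# the input weights are the negative entries of A.
w_in = [[max(-a, 0) for a in row] for row in A]

def fire(x, j):
    """Fire the j-th transition (0-based) if enabled and return the next marking."""
    if any(xi < w_in[i][j] for i, xi in enumerate(x)):
        raise ValueError(f"transition {j} is not enabled at marking {x}")
    return [xi + A[i][j] for i, xi in enumerate(x)]  # x⁺ = x + A e_j

print(fire(x0, 0))  # → [1, 1, 1, 1]
```

Firing an enabled transition thus amounts to adding the corresponding column of A to the current marking.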

+
+
+
+
+ +
+
+Caution +
+
+
+

We repeat once again just to make sure: the lower index corresponds to the discrete time.

+
+
+
+
+
+

Queueing systems modelled by PN

+

Although Petri nets can be used to model a vast variety of systems, below we single out one particular class of systems that can be modelled by Petri nets – queueing systems. The general symbol is shown in Fig 9.

+
+
+
+ +
+
+Figure 9: Queueing system, its components and transitions/events +
+
+
+

We can associate the transitions with the events in the queueing system:

+
    +
  • a is a spontaneous transition (no input places).
  • +
  • s needs a customer in the queue and the server being idle.
  • +
  • c needs the server being busy.
  • +
+

We can now start drawing the Petri net by drawing the bars corresponding to the transitions. Then, in between every two bars, we draw a circle for a place. The places can be associated with the three bold-face letters above, namely:

+

\quad \mathcal{P} = \{Q, I, B\}, that is, queue, idle, busy.

+
+
+

+
Petri net corresponding to the queueing system
+
+
+
+
+
+ +
+
+The input transition adds tokens to the system +
+
+
+

The transition a is an input transition – the tokens are added to the system through this transition.

+
+
+

Note how we consider the token in the I place. This is not only to express that the server is initially idle, ready to serve as soon as a customer arrives in the queue; it also ensures that serving of a new customer cannot start before the serving of the current customer is completed.

+

The initial state: [0,1,0]^\top. Consider now a particular trace (of transitions/events) \{a,s,a,a,c,s,a\}. Verify that this leads to the final state [2,0,1]^\top.
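The suggested verification can be scripted: replay the trace and check enabledness before each firing. The weight vectors below are read off from the description of the transitions a, s, c given above.

```python
# Places ordered (Q, I, B); transitions a, s, c as in the text:
# a adds a customer to Q, s consumes Q and I and produces B, c consumes B and produces I.
A = {"a": [ 1,  0,  0],
     "s": [-1, -1,  1],
     "c": [ 0,  1, -1]}
w_in = {"a": [0, 0, 0], "s": [1, 1, 0], "c": [0, 0, 1]}

x = [0, 1, 0]                      # initial state [Q, I, B]
for t in ["a", "s", "a", "a", "c", "s", "a"]:
    assert all(x[i] >= w_in[t][i] for i in range(3)), f"{t} not enabled at {x}"
    x = [x[i] + A[t][i] for i in range(3)]
print(x)  # → [2, 0, 1]
```

The assertion never fires along this trace, and the final marking agrees with the one stated in the text.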

+
+

Some more extensions

+

We can keep adding features to the model of a queueing system. In particular,

+
    +
  • the arrival transition is always enabled,
  • +
  • the server can break down, and then be repaired,
  • +
  • completing the service \neq customer departure.
  • +
+

These are incorporated into the Petri net in Fig 10.

+
+
+
+ +
+
+Figure 10: Extended model of a queueing system +
+
+
+

In the Petri net, d is an output transition – the tokens are removed from the system.

+
+

Example 12 (Beverage vending machine) Below we show a Petri net for a beverage vending machine. While building it, we find it useful to identify the events/transitions that can happen in the system.

+
+
+

+
Petri net for a beverage vending machine
+
+
+
+
+
+
+

Some extensions of basic Petri nets

+
    +
  • Coloured Petri nets (CPN): tokens can be of several types (colours), and the transitions can be enabled only if the tokens have the right colours.
  • +
  • +
+

We do not cover these extensions in our course. But there is one particular extension that we do want to cover, and this amounts to introducing time into Petri nets, leading to timed Petri nets, which we will discuss in the next chapter.

+ + + +
+ + Back to top

References

+
+
[1]
C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 3rd ed. Cham: Springer, 2021. Available: https://doi.org/10.1007/978-3-030-72274-6
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/petri_nets_references 17.html b/petri_nets_references 17.html new file mode 100644 index 0000000..75ebe5d --- /dev/null +++ b/petri_nets_references 17.html @@ -0,0 +1,1131 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

Literature for Petri nets is vast, but a decent (and perfectly satisfactory) introduction can be found in Chapter 4 and Section 5.3 (for the timed PN) of the classical (and award-winning) reference [1]. Note that an electronic version (in fact, a PDF) is accessible through the NTK library (upon CTU login, for example via usermap first).

+

A nice introduction is also in Chapter 2 of the freely online available book [2].

+

The survey paper that is particularly focused on Petri nets from the control systems perspective is [3] and it gives a wealth of other references.

+

A few more monographs, mostly inclined towards control systems, are [4], [5], [6].

+
+

Petri nets and their derivatives such as Grafcet in international standards

+

We mention at the beginning of this chapter that Petri nets have made it to international standards. Here they are: [7], [8], and [9].

+

Based on Petri nets, another framework has been derived and standardized, namely GRAFCET, see [10] and [11], upon which, in turn, the popular Sequential Function Chart (SFC) language for PLC programming [12] is based.

+ + + +
+ + Back to top

References

+
+
[1]
C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 3rd ed. Cham: Springer, 2021. Available: https://doi.org/10.1007/978-3-030-72274-6
+
+
+
[2]
F. Baccelli, G. Cohen, G. J. Olsder, and J.-P. Quadrat, Synchronization and linearity: An algebra for discrete event systems, Web edition. Chichester: Wiley, 2001. Available: https://www.rocq.inria.fr/metalau/cohen/documents/BCOQ-book.pdf
+
+
+
[3]
A. Giua and M. Silva, “Petri nets and Automatic Control: A historical perspective,” Annual Reviews in Control, vol. 45, pp. 223–239, Jan. 2018, doi: 10.1016/j.arcontrol.2018.04.006.
+
+
+
[4]
J. O. Moody, Supervisory Control of Discrete Event Systems Using Petri Nets. in The International Series on Discrete Event Dynamic Systems. New York, NY: Springer, 1998. Available: https://doi.org/10.1007/978-1-4615-5711-1
+
+
+
[5]
B. Hrúz and M. Zhou, Modeling and Control of Discrete-event Dynamic Systems: With Petri Nets and Other Tools. in Advanced Textbooks in Control and Signal Processing (C&SP). London: Springer, 2007. Available: https://doi.org/10.1007/978-1-84628-877-7
+
+
+
[6]
W. Reisig, Understanding Petri Nets: Modeling Techniques, Analysis Methods, Case Studies. Berlin; Heidelberg: Springer, 2013. Available: https://doi.org/10.1007/978-3-642-33278-4
+
+
+
[7]
ISO/IEC 15909-1:2019 Systems and software engineering — High-level Petri nets — Part 1: Concepts, definitions and graphical notation.” ISO/IEC, Aug. 2019. Accessed: Sep. 27, 2023. [Online]. Available: https://www.iso.org/standard/67235.html
+
+
+
[8]
ISO/IEC 15909-2:2011 Systems and software engineering — High-level Petri nets — Part 2: Transfer format.” ISO/IEC, Feb. 2011. Accessed: Sep. 27, 2023. [Online]. Available: https://www.iso.org/standard/43538.html
+
+
+
[9]
ISO/IEC 15909-3:2021: Systems and software engineering — High-level Petri nets — Part 3: Extensions and structuring mechanisms.” ISO/IEC, 2021. Accessed: Sep. 29, 2023. [Online]. Available: https://www.iso.org/standard/81504.html
+
+
+
[10]
IEC 60848:2013 GRAFCET specification language for sequential function charts.” IEC, Feb. 2013. Available: https://webstore.iec.ch/publication/3684
+
+
+
[11]
C. Johnsson and K.-E. Årzén, “Grafchart and its Relations to Grafcet and Petri Nets,” IFAC Proceedings Volumes, vol. 31, no. 15, pp. 95–100, Jun. 1998, doi: 10.1016/S1474-6670(17)40535-0.
+
+
+
[12]
IEC 61131-3 Programmable controllers - Part 3: Programming languages.” IEC, Feb. 2013. Accessed: Jan. 08, 2023. [Online]. Available: https://webstore.iec.ch/publication/4552
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/petri_nets_software 12.html b/petri_nets_software 12.html new file mode 100644 index 0000000..f7dbb3e --- /dev/null +++ b/petri_nets_software 12.html @@ -0,0 +1,1093 @@ + + + + + + + + + +Software – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

Petri nets constitute a powerful and flexible framework for modelling discrete-event systems, and yet the selection of mature and well-maintained software tools is not particularly wide. How come? Petri nets have already inspired a few other frameworks such as Grafcet and SFC for PLC programming, as we discuss in the overview of the literature. The “vanilla version” of Petri nets then serves mainly for academic research.

+
+

Matlab

+ +
+
+

Python

+

SNAKES (github)

+
+
+

Julia

+ +
+
+

Standalone

+ + + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/petri_nets_timed 17.html b/petri_nets_timed 17.html new file mode 100644 index 0000000..0f9f7e5 --- /dev/null +++ b/petri_nets_timed 17.html @@ -0,0 +1,1316 @@ + + + + + + + + + +Timed Petri nets – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Timed Petri nets

+
+ + + +
+ + + + +
+ + + +
+ + +

Recall that when introducing enabled transitions, we emphasized that these can but do not have to fire immediately after having been enabled \boxed{\mathrm{ENABLING} \neq \text{FIRING}.}

+
+

Delays associated with transitions

+

Well then, the enabled transitions do not have to fire immediately, but they can fire with some delay after being enabled. This is the first time we are introducing the concept of time into Petri nets, isn’t it?

+

For the jth transition, the delay of the kth firing is v_{j,k}, and we collect the sequence of delays into v_j = \{v_{j,1}, v_{j,2}, \ldots \}.

+

But not all transitions have to be timed. Denote the timed transitions \mathcal{T}_\mathrm{D}\subseteq \mathcal{T}. We define the clock structure for a PN as \mathcal{V} = \{v_j\mid t_j\in\mathcal{T}_\mathrm{D}\}.

+

The definition of a timed Petri net (TPN) is then obtained by augmenting the definition of a Petri net with the clock structure

+

\boxed{TPN = \{\mathcal{P}, \mathcal{T}, \mathcal{A}, w, x, \mathcal{V}\}}.

+
+

Example 1 (Timed Petri net) Model of processing multiple tasks: task 1 and task 2 are processed sequentially, and task 3 is processed in parallel with them; task 4 can only be processed after both tasks 2 and 3 have been finished. Finishing the individual tasks corresponds to the individual transitions. The transition 4 is untimed; it only expresses the logical requirement.

+
+
+

+
Example of a timed Petri net
+
+
+
+
+
+
+ +
+
+Rectangles instead of bars +
+
+
+

Sometimes, instead of a bar, untimed transitions are drawn as thin rectangles similar to those of timed transitions, but filled.

+
+
+
+

Places can also be delayed

+

With delays associated with just one type of node in a Petri net, the situation is rather asymmetric. In some literature, delays can also be associated with places; in yet other literature, delays are associated only with places. Such a delay associated with a place is called the holding time of the place. It is the minimum duration a token must rest in the place. But the token can stay longer if the output transition is waiting for other places.

+
+
+
+ +
+
+Delays associated with transitions and places +
+
+
+

There is a major difference between delays associated with places and delays associated with transitions. While the former give the minimum duration a token has to dwell in a place, the latter give the exact delay with which a transition fires after having been enabled.

+
+
+
+
+
+

Timed Petri net dynamics

+

With the introduction of time into Petri nets, we can now study the dynamics of the system. For general Petri nets, although perfectly doable, this may quickly become too complex, and therefore here we only consider event graphs.

+

Some notation:

+
    +
  • \{\tau_{j,1}, \tau_{j,2}, \ldots\} are the firing times of the jth transition,
  • +
  • \{\pi_{i,1},\pi_{i,2},\ldots\} are the times when the ith place receives a token,
  • +
  • x_i = x(p_i) is the number of tokens at the ith place,
  • +
  • x_{i,k} = \left.x(p_i)\right|_k, is the number of tokens at the ith place after the kth firing.
  • +
+

Now, assume first that x_{i,0} = 0. We can then relate the time of the arrival of the token to the place with the firing of the transition from which the token arrives \pi_{i,k} = \tau_{j,k},\quad p_i\in \mathcal{O}(t_j).

+

But generally x_{i,0} \neq 0 and the above relation needs to be modified to \pi_{i,k+x_{i,0}} = \tau_{j,k},\quad p_i\in \mathcal{O}(t_j), or, equivalently \boxed{\pi_{i,k} = \tau_{j,k-x_{i,0}},\quad p_i\in \mathcal{O}(t_j).} \tag{1}

+

This can be read in the following way. If there are initially, say, 3 tokens in the place, the time of the arrival of the 4th token is the time of the first firing of the transition from which the 4th token arrives.

+

Good. Keep this result in mind. Now we go for another.

+

For an untimed transition with a single input place, the firing time is the same as the time of the arrival of the token to the place +\tau_{j,k} = \pi_{i,k}. +

+

Modifying this result for a timed transition with a single input place we get +\tau_{j,k} = \pi_{i,k} + v_{j,k}. +

+

In words, the firing time is given by the time of the arrival of the token to the place, which enables the transition, and the delay associated with the transition.

+

Finally, we extend this result to the case of a timed transition with multiple input places \boxed{ +\tau_{j,k} = \max_{p_i\in\mathcal{I}(t_j)}\{\pi_{i,k}\} + v_{j,k}.} +\tag{2}

+

This is the other promised important result. Keep both boxed formulas Equation 1 and Equation 2 handy, they will be needed in what is coming.

+
+

Example 2 (Timed Petri net dynamics) Consider the Petri net with three places and two transitions, one of which is timed, as in Fig 1.

+
+
+
+ +
+
+Figure 1: Example of a Petri net for which the dynamics is analyzed +
+
+
+

We first use Equation 2 to write down the firing times of the two transitions +\begin{aligned} +\tau_{1,k} &= \max\{\pi_{1,k},\pi_{3,k}\}\\ +\tau_{2,k} &= \pi_{2,k}+v_{2,k}. +\end{aligned} +

+

Now we apply Equation 1 to write down the times of the arrival of the tokens to the places +\begin{aligned} +\pi_{1,k} &= \tau_{1,k-1}, \qquad k=2,\ldots, \qquad \pi_{1,0} = 0\\ +\pi_{2,k} &= \tau_{1,k-1}, \qquad k=2,\ldots, \qquad \pi_{2,0} = 0\\ +\pi_{3,k} &= \tau_{2,k}, \qquad k=1,\ldots +\end{aligned} +

+

Substituting from the latter into the former we get +\begin{aligned} +\tau_{1,k} &= \max\{\tau_{1,k-1},\tau_{1,k-1}+v_{2,k}\}\\ +&= \tau_{1,k-1}+v_{2,k}, \quad \tau_{1,0} = 0\\ +\tau_{2,k} &= \tau_{1,k-1}+v_{2,k}. +\end{aligned} +

+

This is the ultimate model for the dynamics of the Petri net. Should we need it, we can also get similar expressions for the times of the arrival of the tokens to the places.
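Assuming some concrete (hypothetical) delays v_{2,k} of the timed transition, the recursion derived above can be evaluated numerically:

```python
# Hypothetical delays v_{2,k} of the timed transition t2 (not from the text).
v2 = [1.0, 0.5, 2.0]

tau1 = [0.0]                 # tau1[k] holds τ_{1,k}, with τ_{1,0} = 0
tau2 = []                    # tau2[k-1] holds τ_{2,k}
for v in v2:
    tau2.append(tau1[-1] + v)      # τ_{2,k} = τ_{1,k-1} + v_{2,k}
    tau1.append(tau1[-1] + v)      # τ_{1,k} = τ_{1,k-1} + v_{2,k}
print(tau1)  # → [0.0, 1.0, 1.5, 3.5]
```

Both transitions thus fire at the same times here, the firing times simply accumulating the delays of t2.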

+
+
+
+
+ +
+
+Update equations for times and not states +
+
+
+

While with state equations we compute a sequence of values of the state vector (\bm x_0, \bm x_1, \bm x_2, \ldots), in other words, the evolution of the state in time, here we compute the sequences of times at which transitions fire (or tokens arrive at places). This update scheme for times resembles the state equations, but the interpretation is different.

+
+
+
+
+

Queueing system using TPN

+

We can also model a queueing system using a TPN. The Petri net is shown in Fig 2.

+
+
+
+ +
+
+Figure 2: Timed Petri net modelling a queueing system +
+
+
+

Of the three transitions \mathcal{T} = \{a,s,c\}, which we have already identified previously, we assume that only two are timed, namely \mathcal{T}_\mathrm{D} = \{a,c\}. The associated firing delays are \bm v = \begin{bmatrix}v_a \\ v_c\end{bmatrix}.

+

For convenience we relabel the firing times of the transitions. Instead of \tau_{a,k} we will use a_k, and similarly s_k and c_k. Application of Equation 2 and Equation 1 gives +\begin{aligned} +a_k &= a_{k-1} + v_{a,k},\quad k=1,2,\ldots,\quad a_0 = 0\\ +s_k &= \max\{\pi_{Q,k},\pi_{I,k}\}\\ +c_k &= \pi_{B,k} + v_{c,k}\\ +\pi_{Q,k} &= a_{k},\quad k=1,2,\ldots\\ +\pi_{I,k} &= c_{k-1},\quad k= 2, \ldots, \quad \pi_{I,0}=1\\ +\pi_{B,k} &= s_{k},\quad k=1,2,\ldots\\ +\end{aligned} +

+

Combining gives the desired update equations +\begin{aligned} +a_k &= a_{k-1} + v_{a,k},\quad k=1,2,\ldots,\quad a_0 = 0\\ +s_k &= \max\{a_{k},c_{k-1}\}\\ +c_k &= s_{k} + v_{c,k}\\ +&= \max\{a_{k},c_{k-1}\} + v_{c,k},\quad k=1,\ldots, \quad c_0=0 +\end{aligned} +

+

The time of completing the kth task is given by the time at which the previous task was completed and the time needed to complete the kth task itself, unless there is a gap in the queue after finishing the previous task, in which case the server must wait for the next task to arrive.
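A few lines of code evaluate these update equations for hypothetical delay sequences; the third task below illustrates the case just described, where a gap in arrivals makes the server wait.

```python
# Hypothetical inter-arrival delays v_{a,k} and service durations v_{c,k}.
va = [1.0, 1.0, 4.0, 1.0]
vc = [2.0, 2.0, 1.0, 1.0]

a = c = 0.0                  # a_0 = 0, c_0 = 0
completions = []
for k in range(len(va)):
    a = a + va[k]            # a_k = a_{k-1} + v_{a,k}
    s = max(a, c)            # s_k = max(a_k, c_{k-1})
    c = s + vc[k]            # c_k = s_k + v_{c,k}
    completions.append(c)
print(completions)  # → [3.0, 5.0, 7.0, 8.0]
```

The server finishes the second task at time 5.0 but the third customer only arrives at 6.0, so service starts at 6.0 rather than immediately.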

+
+

Example 3 (Timed Petri net for synchronization of train lines) We consider three closed rail tracks and two stations as in Fig 3.

+
+
+
+ +
+
+Figure 3: Example with three train lines +
+
+
+

Departure of a train from a station must be synchronized with the arrival of the other train so that passengers can change trains. The timed Petri net for this system is shown in Fig 4.

+
+
+
+ +
+
+Figure 4: Timed Petri net for the example of synchronization of three train lines +
+
+
+

If time is associated with the places, the Petri net simplifies significantly to Fig 5.

+
+
+
+ +
+
+Figure 5: Simplified timed Petri net for the example of synchronization of three train lines, with delays associated with the places +
+
+
+
+
+

Example 4 (Manufacturing) tbd

+
+
+
+

Extensions

+
+

Stochastic Petri nets (SPN)

+

Numerous extensions are possible, some of which we have already mentioned when discussing untimed Petri nets. But upon introducing time, stochastic Petri nets can be conceived, in which the delays are random variables.

+ + +
+
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/search.json b/search.json index aac224d..7756a6c 100644 --- a/search.json +++ b/search.json @@ -510,7 +510,7 @@ "href": "des_automata.html#extensions", "title": "State automata", "section": "Extensions", - "text": "Extensions\nThe concept of an automaton can be extended in several ways. In particular, the following two extensions introduce the concept of an output to an automaton.\n\nMoore machine\nOne extension of an automaton with outputs is Moore machine. The outputs assigned to the states by the output function y = g(x).\nThe output is produced (emitted) when the (new) state is entered.\nNote, in particular, that the output does not depend on the input. This has a major advantage when a feedback loop is closed around this system, since no algebraic loop is created.\nGraphically, we make a conventions that outputs are the labels of the states.\n\nExample 8 (Moore machine) The following automaton has just three states, but just two outputs (FLOW and NO FLOW).\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\nclosed\n\nNO FLOW\nValve\nclosed\n\n\n\ninit->closed\n\n\n\n\n\npartial\n\nFLOW\nValve\npartially\nopen\n\n\n\nclosed->partial\n\n\nopen valve one turn\n\n\n\npartial->closed\n\n\nclose valve one turn\n\n\n\nfull\n\nFLOW\nValve\nfully open\n\n\n\npartial->full\n\n\nopen valve one turn\n\n\n\nfull->closed\n\n\nemergency shut off\n\n\n\nfull->partial\n\n\nclose valve one turn\n\n\n\n\n\n\nFigure 9: Example of a digraph representation of the Moore machine for a valve control\n\n\n\n\n\n\n\n\nMealy machine\nMealy machine is another extension of an automaton. Here the outputs are associated with the transitions rather than the states.\nSince the events already associated with the states can be viewed as the inputs, we now have input/output transition labels. 
The transition label e_\\mathrm{i}/e_\\mathrm{o} on the transion from x_1 to x_2 reads as “the input event e_\\mathrm{i} at state x_1 activates the transition to x_2, which outputs the event e_\\mathrm{o}” and can be written as x_1\\xrightarrow{e_\\mathrm{i}/e_\\mathrm{o}} x_2.\nIt can be viewed as if the output function also considers the input and not only the state y = e_\\mathrm{o} = g(x,e_\\mathrm{i}).\nIn contrast with the Moore machine, here the output is produced (emitted) during the transition (before the new state is entered).\n\nExample 9 (Mealy machine) Coffee machine: coffee for 30 CZK, machine accepting 10 and 20 CZK coins, no change.\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\n0\n\nNo coin\n\n\n\ninit->0\n\n\n\n\n\n10\n\n10 CZK\n\n\n\n0->10\n\n\ninsert 10 CZK / no coffee\n\n\n\n20\n\n20 CZK\n\n\n\n0->20\n\n\ninsert 20 CZK / no coffee\n\n\n\n10->0\n\n\ninsert 20 CZK / coffee\n\n\n\n10->20\n\n\ninsert 10 CZK / no coffee\n\n\n\n20->0\n\n\ninsert 10 CZK / coffee\n\n\n\n20->10\n\n\ninsert 20 CZK / coffee\n\n\n\n\n\n\nFigure 10: Example of a digraph representation of the Mealy machine for a coffee machine\n\n\n\n\n\n\n\nExample 10 (Reformulate the previous example as a Moore machine) Two more states wrt Mealy\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\n0\n\nNO COFFEE\nNo\ncoin\n\n\n\ninit->0\n\n\n\n\n\n10\n\nNO COFFEE\n10\nCZK\n\n\n\n0->10\n\n\ninsert 10 CZK\n\n\n\n20\n\nNO COFFEE\n20\nCZK\n\n\n\n0->20\n\n\ninsert 20 CZK\n\n\n\n10->20\n\n\ninsert 10 CZK\n\n\n\n30\n\nCOFFEE\n10+20\nCZK\n\n\n\n10->30\n\n\ninsert 20 CZK\n\n\n\n20->30\n\n\ninsert 10 CZK\n\n\n\n40\n\nCOFFEE\n20+20\nCZK\n\n\n\n20->40\n\n\ninsert 20 CZK\n\n\n\n30->0\n\n\n\n\n\n30->10\n\n\ninsert 10 CZK\n\n\n\n30->20\n\n\ninsert 20 CZK\n\n\n\n40->10\n\n\n\n\n\n40->20\n\n\ninsert 10 CZK\n\n\n\n40->30\n\n\ninsert 20 CZK\n\n\n\n\n\n\nFigure 11: Example of a digraph representation of the Moore machine for a coffee machine\n\n\n\n\n\n\n\n\n\n\n\n\nNote\n\n\n\nThere are transitions from 30 and 40 back to 0 
that are not labelled by any event. This does not seem to follow the general rule that transitions are always triggered by events. Not what? It can be resolved upon introducing time as the timeout transitions.\n\n\n\nExample 11 (Dijkstra’s token passing) The motivation for this example is to show that it is perhaps not always productive to insist on visual description of the automaton using a graph. The four components of our formal definition of an automaton are just enough, and they translate directly to a code.\nThe example comes from the field of distributed computing systems. It considers several computers that are connected in ring topology, and the communication is just one-directional as Fig 12 shows. The task is to use the communication to determine in – a distributed way – which of the computers carries a (single) token at a given time. And to realize passing of the token to a neighbour. We assume a synchronous case, in which all the computers are sending simultaneously, say, with some fixed sending period.\n\n\n\n\n\n\n\n\nG\n\n\n0\n\n0\n\n\n\n1\n\n1\n\n\n\n0->1\n\n\n\n\n\n2\n\n2\n\n\n\n1->2\n\n\n\n\n\n3\n\n3\n\n\n\n2->3\n\n\n\n\n\n3->0\n\n\n\n\n\n\n\n\nFigure 12: Example of a ring topology for Dijkstra’s token passing in a distributed system\n\n\n\n\n\nOne popular method for this is called Dijkstra’s token passing. Each computer keeps a single integer value as its state variable. And it forwards this integer value to the neighbour (in the clockwise direction in our setting). Upon receiving the value from the other neighbour (in the counter-clockwise direction), it updates its own value according to the rule displayed in the code below. At every clock tick, the state vector (composed of the individual state variables) is updated according to the function update!() in the code. Based on the value of the state vector, an output is computed, which decodes the informovation about the location of the token from the state vector. 
Again, the details are in the output() function.\n\n\nShow the code\nstruct DijkstraTokenRing\n number_of_nodes::Int64\n max_value_of_state_variable::Int64\n state_vector::Vector{Int64}\nend\n\nfunction update!(dtr::DijkstraTokenRing) \n n = dtr.number_of_nodes\n k = dtr.max_value_of_state_variable\n x = dtr.state_vector\n xnext = copy(x)\n for i in eachindex(x) # Mind the +1 shift. x[2] corresponds to x₁ in the literature.\n if i == 1 \n xnext[i] = (x[i] == x[n]) ? mod(x[i] + 1,k) : x[i] # Increment if the left neighbour is identical.\n else \n xnext[i] = (x[i] != x[i-1]) ? x[i-1] : x[i] # Update by the differing left neighbour.\n end\n end\n dtr.state_vector .= xnext \nend\n\nfunction output(dtr::DijkstraTokenRing) # Token = 1, no token = 0 at the given position. \n x = dtr.state_vector\n y = similar(x)\n y[1] = iszero(x[1]-x[end])\n y[2:end] .= .!iszero.(diff(x))\n return y\nend\n\n\noutput (generic function with 1 method)\n\n\nWe now rund the code for a given number of computers and some initial state vector that does not necessarily comply with the requirement that there is only one token in the ring.\n\n\nShow the code\nn = 4 # Concrete number of nodes.\nk = n # Concrete max value of a state variable (>= n).\n@show x_initial = rand(0:k,n) # Initial state vector, not necessarily acceptable (>1 token in the ring).\ndtr = DijkstraTokenRing(n,k,x_initial)\n@show output(dtr) # Show where the token is (are).\n\n@show update!(dtr), output(dtr) # Perform the update, show the state vector and show where the token is.\n@show update!(dtr), output(dtr) # Repeat a few times to see the stabilization. 
\n@show update!(dtr), output(dtr)\n@show update!(dtr), output(dtr)\n@show update!(dtr), output(dtr)\n\n\nx_initial = rand(0:k, n) = [1, 2, 1, 4]\noutput(dtr) = [0, 1, 1, 1]\n(update!(dtr), output(dtr)) = ([1, 1, 2, 1], [1, 0, 1, 1])\n(update!(dtr), output(dtr)) = ([2, 1, 1, 2], [1, 1, 0, 1])\n(update!(dtr), output(dtr)) = ([3, 2, 1, 1], [0, 1, 1, 0])\n(update!(dtr), output(dtr)) = ([3, 3, 2, 1], [0, 0, 1, 1])\n(update!(dtr), output(dtr)) = ([3, 3, 3, 2], [0, 0, 0, 1])\n\n\n([3, 3, 3, 2], [0, 0, 0, 1])\n\n\nWe can see that although initially the there can be more tokens, after a few iterations the algorithm achieves the goal of having just one token in the ring.\n\n\n\nExtended-state automaton\nYet another extension of an automaton is the extended-state automaton. And indeed, the hyphen is there on purpose as we extend the state space.\nIn particular, we augment the state variable(s) that define the states/modes/locations (the nodes in the graph) by additional (typed) state variables: Int, Enum, Bool, …\nTransitions from one mode to another are then guarded by conditions on theses new extra state variables.\nBesides being guarded by a guard condition, a given transition can also be labelled by a reset function that resets the extended-state variables.\n\nExample 12 (Counting up to 10) In this example, there are two modes (on and off), which can be captured by a single binary state variable, say x. But then there is an additional integer variable k, and the two variables together characterize the extended state.\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\nOFF\n\nOFF\n\n\n\ninit->OFF\n\n\nint k=0\n\n\n\nON\n\nON\n\n\n\nOFF->ON\n\n\npress\n\n\n\nON->OFF\n\n\n(press ⋁ k ≥ 10); k=0\n\n\n\nON->ON\n\n\n(press ∧ k < 10); k=k+1\n\n\n\n\n\n\nFigure 13: Example of a digraph representation of the extended-state automaton for counting up to ten", + "text": "Extensions\nThe concept of an automaton can be extended in several ways. 
In particular, the following two extensions introduce the concept of an output to an automaton.\n\nMoore machine\nOne extension of an automaton with outputs is Moore machine. The outputs assigned to the states by the output function y = g(x).\nThe output is produced (emitted) when the (new) state is entered.\nNote, in particular, that the output does not depend on the input. This has a major advantage when a feedback loop is closed around this system, since no algebraic loop is created.\nGraphically, we make a conventions that outputs are the labels of the states.\n\nExample 8 (Moore machine) The following automaton has just three states, but just two outputs (FLOW and NO FLOW).\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\nclosed\n\nNO FLOW\nValve\nclosed\n\n\n\ninit->closed\n\n\n\n\n\npartial\n\nFLOW\nValve\npartially\nopen\n\n\n\nclosed->partial\n\n\nopen valve one turn\n\n\n\npartial->closed\n\n\nclose valve one turn\n\n\n\nfull\n\nFLOW\nValve\nfully open\n\n\n\npartial->full\n\n\nopen valve one turn\n\n\n\nfull->closed\n\n\nemergency shut off\n\n\n\nfull->partial\n\n\nclose valve one turn\n\n\n\n\n\n\nFigure 9: Example of a digraph representation of the Moore machine for a valve control\n\n\n\n\n\n\n\n\nMealy machine\nMealy machine is another extension of an automaton. Here the outputs are associated with the transitions rather than the states.\nSince the events already associated with the states can be viewed as the inputs, we now have input/output transition labels. 
The transition label e_\\mathrm{i}/e_\\mathrm{o} on the transion from x_1 to x_2 reads as “the input event e_\\mathrm{i} at state x_1 activates the transition to x_2, which outputs the event e_\\mathrm{o}” and can be written as x_1\\xrightarrow{e_\\mathrm{i}/e_\\mathrm{o}} x_2.\nIt can be viewed as if the output function also considers the input and not only the state y = e_\\mathrm{o} = g(x,e_\\mathrm{i}).\nIn contrast with the Moore machine, here the output is produced (emitted) during the transition (before the new state is entered).\n\nExample 9 (Mealy machine) Coffee machine: coffee for 30 CZK, machine accepting 10 and 20 CZK coins, no change.\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\n0\n\nNo coin\n\n\n\ninit->0\n\n\n\n\n\n10\n\n10 CZK\n\n\n\n0->10\n\n\ninsert 10 CZK / no coffee\n\n\n\n20\n\n20 CZK\n\n\n\n0->20\n\n\ninsert 20 CZK / no coffee\n\n\n\n10->0\n\n\ninsert 20 CZK / coffee\n\n\n\n10->20\n\n\ninsert 10 CZK / no coffee\n\n\n\n20->0\n\n\ninsert 10 CZK / coffee\n\n\n\n20->10\n\n\ninsert 20 CZK / coffee\n\n\n\n\n\n\nFigure 10: Example of a digraph representation of the Mealy machine for a coffee machine\n\n\n\n\n\n\n\nExample 10 (Reformulate the previous example as a Moore machine) Two more states wrt Mealy\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\n0\n\nNO COFFEE\nNo\ncoin\n\n\n\ninit->0\n\n\n\n\n\n10\n\nNO COFFEE\n10\nCZK\n\n\n\n0->10\n\n\ninsert 10 CZK\n\n\n\n20\n\nNO COFFEE\n20\nCZK\n\n\n\n0->20\n\n\ninsert 20 CZK\n\n\n\n10->20\n\n\ninsert 10 CZK\n\n\n\n30\n\nCOFFEE\n10+20\nCZK\n\n\n\n10->30\n\n\ninsert 20 CZK\n\n\n\n20->30\n\n\ninsert 10 CZK\n\n\n\n40\n\nCOFFEE\n20+20\nCZK\n\n\n\n20->40\n\n\ninsert 20 CZK\n\n\n\n30->0\n\n\n\n\n\n30->10\n\n\ninsert 10 CZK\n\n\n\n30->20\n\n\ninsert 20 CZK\n\n\n\n40->10\n\n\n\n\n\n40->20\n\n\ninsert 10 CZK\n\n\n\n40->30\n\n\ninsert 20 CZK\n\n\n\n\n\n\nFigure 11: Example of a digraph representation of the Moore machine for a coffee machine\n\n\n\n\n\n\n\n\n\n\n\n\nNote\n\n\n\nThere are transitions from 30 and 40 back to 0 
that are not labelled by any event. This does not seem to follow the general rule that transitions are always triggered by events. Not what? It can be resolved upon introducing time as the timeout transitions.\n\n\n\nExample 11 (Dijkstra’s token passing) The motivation for this example is to show that it is perhaps not always productive to insist on visual description of the automaton using a graph. The four components of our formal definition of an automaton are just enough, and they translate directly to a code.\nThe example comes from the field of distributed computing systems. It considers several computers that are connected in ring topology, and the communication is just one-directional as Fig 12 shows. The task is to use the communication to determine in – a distributed way – which of the computers carries a (single) token at a given time. And to realize passing of the token to a neighbour. We assume a synchronous case, in which all the computers are sending simultaneously, say, with some fixed sending period.\n\n\n\n\n\n\n\n\nG\n\n\n0\n\n0\n\n\n\n1\n\n1\n\n\n\n0->1\n\n\n\n\n\n2\n\n2\n\n\n\n1->2\n\n\n\n\n\n3\n\n3\n\n\n\n2->3\n\n\n\n\n\n3->0\n\n\n\n\n\n\n\n\nFigure 12: Example of a ring topology for Dijkstra’s token passing in a distributed system\n\n\n\n\n\nOne popular method for this is called Dijkstra’s token passing. Each computer keeps a single integer value as its state variable. And it forwards this integer value to the neighbour (in the clockwise direction in our setting). Upon receiving the value from the other neighbour (in the counter-clockwise direction), it updates its own value according to the rule displayed in the code below. At every clock tick, the state vector (composed of the individual state variables) is updated according to the function update!() in the code. Based on the value of the state vector, an output is computed, which decodes the informovation about the location of the token from the state vector. 
Again, the details are in the output() function.\n\n\nShow the code\nstruct DijkstraTokenRing\n number_of_nodes::Int64\n max_value_of_state_variable::Int64\n state_vector::Vector{Int64}\nend\n\nfunction update!(dtr::DijkstraTokenRing) \n n = dtr.number_of_nodes\n k = dtr.max_value_of_state_variable\n x = dtr.state_vector\n xnext = copy(x)\n for i in eachindex(x) # Mind the +1 shift. x[2] corresponds to x₁ in the literature.\n if i == 1 \n xnext[i] = (x[i] == x[n]) ? mod(x[i] + 1,k) : x[i] # Increment if the left neighbour is identical.\n else \n xnext[i] = (x[i] != x[i-1]) ? x[i-1] : x[i] # Update by the differing left neighbour.\n end\n end\n dtr.state_vector .= xnext \nend\n\nfunction output(dtr::DijkstraTokenRing) # Token = 1, no token = 0 at the given position. \n x = dtr.state_vector\n y = similar(x)\n y[1] = iszero(x[1]-x[end])\n y[2:end] .= .!iszero.(diff(x))\n return y\nend\n\n\noutput (generic function with 1 method)\n\n\nWe now rund the code for a given number of computers and some initial state vector that does not necessarily comply with the requirement that there is only one token in the ring.\n\n\nShow the code\nn = 4 # Concrete number of nodes.\nk = n # Concrete max value of a state variable (>= n).\n@show x_initial = rand(0:k,n) # Initial state vector, not necessarily acceptable (>1 token in the ring).\ndtr = DijkstraTokenRing(n,k,x_initial)\n@show output(dtr) # Show where the token is (are).\n\n@show update!(dtr), output(dtr) # Perform the update, show the state vector and show where the token is.\n@show update!(dtr), output(dtr) # Repeat a few times to see the stabilization. 
\n@show update!(dtr), output(dtr)\n@show update!(dtr), output(dtr)\n@show update!(dtr), output(dtr)\n\n\nx_initial = rand(0:k, n) = [1, 1, 0, 3]\noutput(dtr) = [0, 0, 1, 1]\n(update!(dtr), output(dtr)) = ([1, 1, 1, 0], [0, 0, 0, 1])\n(update!(dtr), output(dtr)) = ([1, 1, 1, 1], [1, 0, 0, 0])\n(update!(dtr), output(dtr)) = ([2, 1, 1, 1], [0, 1, 0, 0])\n(update!(dtr), output(dtr)) = ([2, 2, 1, 1], [0, 0, 1, 0])\n(update!(dtr), output(dtr)) = ([2, 2, 2, 1], [0, 0, 0, 1])\n\n\n([2, 2, 2, 1], [0, 0, 0, 1])\n\n\nWe can see that although initially the there can be more tokens, after a few iterations the algorithm achieves the goal of having just one token in the ring.\n\n\n\nExtended-state automaton\nYet another extension of an automaton is the extended-state automaton. And indeed, the hyphen is there on purpose as we extend the state space.\nIn particular, we augment the state variable(s) that define the states/modes/locations (the nodes in the graph) by additional (typed) state variables: Int, Enum, Bool, …\nTransitions from one mode to another are then guarded by conditions on theses new extra state variables.\nBesides being guarded by a guard condition, a given transition can also be labelled by a reset function that resets the extended-state variables.\n\nExample 12 (Counting up to 10) In this example, there are two modes (on and off), which can be captured by a single binary state variable, say x. But then there is an additional integer variable k, and the two variables together characterize the extended state.\n\n\n\n\n\n\n\n\nG\n\n\ninit\ninit\n\n\n\nOFF\n\nOFF\n\n\n\ninit->OFF\n\n\nint k=0\n\n\n\nON\n\nON\n\n\n\nOFF->ON\n\n\npress\n\n\n\nON->OFF\n\n\n(press ⋁ k ≥ 10); k=0\n\n\n\nON->ON\n\n\n(press ∧ k < 10); k=k+1\n\n\n\n\n\n\nFigure 13: Example of a digraph representation of the extended-state automaton for counting up to ten", "crumbs": [ "1. 
Discrete-event systems: Automata", "State automata" @@ -1720,7 +1720,7 @@ "href": "complementarity_simulations.html", "title": "Simulations of complementarity systems using time-stepping", "section": "", - "text": "One of the useful outcomes of the theory of complementarity systems is a new family of methods for numerical simulation of discontinuous systems. Here we will demonstrate the essence by introducing the method of time-stepping. And we do it by means of an example.\n\nExample 1 (Simulation using time-stepping) Consider the following discontinuous dynamical system in \\mathbb R^2: \n\\begin{aligned}\n\\dot x_1 &= -\\operatorname{sign} x_1 + 2 \\operatorname{sign} x_2\\\\\n\\dot x_2 &= -2\\operatorname{sign} x_1 -\\operatorname{sign} x_2.\n\\end{aligned}\n\nThe state portrait is in Fig. 1.\n\n\nShow the code\nf₁(x) = -sign(x[1]) + 2*sign(x[2])\nf₂(x) = -2*sign(x[1]) - sign(x[2])\nf(x) = [f₁(x), f₂(x)]\n\nusing CairoMakie\nfig = Figure(size = (800, 800),fontsize=20)\nax = Axis(fig[1, 1], xlabel = \"x₁\", ylabel = \"x₂\")\nstreamplot!(ax,(x₁,x₂)->Point2f(f([x₁,x₂])), -2.0..2.0, -2.0..2.0, colormap = :magma)\nfig\n\n\n\n\n\n\n\n\nFigure 1: State portrait of the discontinuous system\n\n\n\n\n\nOne particular (vector) state trajectory is in Fig. 2.\n\n\nShow the code\nusing DifferentialEquations\n\nfunction f!(dx,x,p,t)\n dx[1] = -sign(x[1]) + 2*sign(x[2])\n dx[2] = -2*sign(x[1]) - sign(x[2])\nend\n\nx0 = [-1.0, 1.0]\ntfinal = 2.0\ntspan = (0.0,tfinal)\nprob = ODEProblem(f!,x0,tspan)\nsol = solve(prob)\n\nusing Plots\nPlots.plot(sol,xlabel=\"t\",ylabel=\"x\",label=false,lw=3)\n\n\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 2: Trajectory of the discontinuous system\n\n\n\n\nWe can also plot the trajectory in the state space, as in Fig. 
3.\n\n\nShow the code\nPlots.plot(sol[1,:],sol[2,:],xlabel=\"x₁\",ylabel=\"x₂\",label=false,aspect_ratio=:equal,lw=3,xlims=(-1.2,0.5))\n\n\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 3: Trajectory of the discontinuous system in the state space\n\n\n\n\nNow, how fast does the solution approach the origin?\nLet’s use the 1-norm \\|\\bm x\\|_1 = |x_1| + |x_2| to measure how far the trajectory is from the origin. We then ask: \n\\frac{\\mathrm d}{\\mathrm dt}\\|\\bm x\\|_1 = ?\n\nWe avoid the troubles with nonsmoothness of the absolute value by consider each quadrant separately. Let’s start in the first (upper right) quadrant, that is, x_1>0 and x_2>0, and therefore |x_1| = x_1, \\;|x_2| = x_2, and therefore \n\\frac{\\mathrm d}{\\mathrm dt}\\|\\bm x\\|_1 = \\dot x_1 + \\dot x_2 = 1 - 3 = -2.\n\nThe situation is identical in the other quadrants. And, of course, undefined on the axes.\nThe conclusion is that the trajectory will hit the origin in finite time: for, say, x_1(0) = 1 and x_2(0) = 1 , the trajectory hits the origin at t=(|x_1(0)|+|x_2(0)|)/2 = 1. But with an infinite number of revolutions around the origin…\nHow will a standard algoritm for numerical simulation handle this? 
Let’s have a look at that.\n\nForward Euler with fixed step size\n\n\\begin{aligned}\n{\\color{blue}x_{1,k+1}} &= x_{1,k} + h (-\\operatorname{sign} x_{1,k} + 2 \\operatorname{sign} x_{2,k})\\\\\n{\\color{blue}x_{2,k+1}} &= x_{1,k} + h (-2\\operatorname{sign} x_{1,k} - \\operatorname{sign} x_{2,k}).\n\\end{aligned}\n\n\n\nShow the code\nf(x) = [-sign(x[1]) + 2*sign(x[2]), -2*sign(x[1]) - sign(x[2])]\n\nusing LinearAlgebra\nN = 1000\nx₀ = [-1.0, 1.0] \nx = [x₀]\ntfinal = norm(x₀,1)/2\ntfinal = 5.0\nh = tfinal/N \nt = range(0.0, step=h, stop=tfinal)\n\nfor i=1:N\n xnext = x[i] + h*f(x[i]) \n push!(x,xnext)\nend\n\nX = [getindex.(x, i) for i in 1:length(x[1])]\n\nPlots.plot(t,X,lw=3,label=false,xlabel=\"t\",ylabel=\"x\")\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBackward Euler\n\n\\begin{aligned}\n{\\color{blue} x_{1,k+1}} &= x_{1,k} + h (-\\operatorname{sign} {\\color{blue}x_{1,k+1}} + 2 \\operatorname{sign} {\\color{blue}x_{2,k+1}})\\\\\n{\\color{blue} x_{2,k+1}} &= x_{1,k} + h (-2\\operatorname{sign} {\\color{blue}x_{1,k+1}} - \\operatorname{sign} {\\color{blue}x_{2,k+1}}).\n\\end{aligned}\n\n\n\nFormulation using LCP\nInstead solving the above nonlinear equations with discontinuities, we introduce new variables u_1 and u_2 as the outputs of the \\operatorname{sign} functions: \n\\begin{aligned}\n{\\color{blue} x_{1,k+1}} &= x_{1,k} + h (-{\\color{blue}u_{1}} + 2 {\\color{blue}u_{2}})\\\\\n{\\color{blue} x_{2,k+1}} &= x_{1,k} + h (-2{\\color{blue}u_{1}} - {\\color{blue}u_{2}}).\n\\end{aligned}\n\nBut now we have to enforce the relation between \\bm u and \\bm x_{k+1}. 
Recall the standard definition of the \\operatorname{sign} function is \n\\operatorname{sign}(x) = \\begin{cases}\n1 & x>0\\\\\n0 & x=0\\\\\n-1 & x<0,\n\\end{cases}\n but we change the definition to a set-valued function \n\\begin{cases}\n\\operatorname{sign}(x) = 1 & x>0\\\\\n\\operatorname{sign}(x) \\in [-1,1] & x=0\\\\\n\\operatorname{sign}(x) = -1 & x<0.\n\\end{cases}\n\nAccordingly, we set the relationship between \\bm u and \\bm x to \n\\begin{cases}\nu_1 = 1 & x_1>0\\\\\nu_1 \\in [-1,1] & x_1=0\\\\\nu_1 = -1 & x_1<0,\n\\end{cases}\n and \n\\begin{cases}\nu_2 = 1 & x_2>0\\\\\nu_2 \\in [-1,1] & x_2=0\\\\\nu_2 = -1 & x_2<0.\n\\end{cases}\n\nBut these are mixed complementarity contraints we have defined previously! \n\\boxed{\n\\begin{aligned}\n\\begin{bmatrix}\n{\\color{blue} x_{1,k+1}}\\\\\n{\\color{blue} x_{1,k+1}}\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n-1 \\leq {\\color{blue} u_1} \\leq 1 \\quad &\\bot \\quad -{\\color{blue}x_{1,k+1}}\\\\\n-1 \\leq {\\color{blue} u_2} \\leq 1 \\quad &\\bot \\quad -{\\color{blue}x_{2,k+1}}.\n\\end{aligned}\n}\n\n\n\n9 possible combinations\nThere are now 9 possible combinations of the values of u_1 and u_2. Let’s explore some: x_{1,k+1} = x_{2,k+1} = 0, while u_1 \\in [-1,1] and u_2 \\in [-1,1]:\n\n\\begin{aligned}\n\\begin{bmatrix}\n0\\\\\n0\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n& -1 \\leq {\\color{blue} u_1} \\leq 1, \\quad -1 \\leq {\\color{blue} u_2} \\leq 1\n\\end{aligned}\n\n\nHow does the set of states from which the next state is zero look like? 
\n\\begin{aligned}\n-\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}^{-1}\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n&= h\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n-1 \\leq {\\color{blue} u_1} \\leq 1, \\quad -1 &\\leq {\\color{blue} u_2} \\leq 1\n\\end{aligned}\n\n\n\n\\begin{bmatrix}\n-h\\\\-h\n\\end{bmatrix}\n\\leq\n\\begin{bmatrix}\n0.2 & 0.4 \\\\\n-0.4 & 0.2\n\\end{bmatrix}\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n\\leq\n\\begin{bmatrix}\nh\\\\ h\n\\end{bmatrix}\n\nFor h=0.2\n\n\nShow the code\nusing LazySets\nh = 0.2\nH1u = HalfSpace([0.2, 0.4], h)\nH2u = HalfSpace([-0.4, 0.2], h)\nH1l = HalfSpace(-[0.2, 0.4], h)\nH2l = HalfSpace(-[-0.4, 0.2], h)\n\nHa = H1u ∩ H2u ∩ H1l ∩ H2l\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIndeed, if the current state is in this rotated square, then the next state will be zero.\n\n\nAnother\nu_1 = 1, u_2 = 1:\n\n\\begin{aligned}\n\\begin{bmatrix}\n{\\color{blue} x_{1,k+1}}\\\\\n{\\color{blue} x_{1,k+1}}\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{1}\\\\\n{1}\n\\end{bmatrix}\\\\\n\\color{blue}x_{1,k+1} &\\geq 0\\\\\n\\color{blue}x_{2,k+1} &\\geq 0\n\\end{aligned}\n which can be reformatted to \n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n1\\\\\n1\n\\end{bmatrix}\\geq \\bm 0\n\n\nand further to \n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n\\geq h\n\\begin{bmatrix}\n-1\\\\\n3\n\\end{bmatrix}\n\n\n\n\nShow the code\nusing LazySets\nh = 0.2\nA = [-1.0 2.0; -2.0 -1.0]\nu = [1.0, 1.0]\nb = h*A*u\n\nH1 = HalfSpace([-1.0, 0.0], b[1])\nH2 = 
HalfSpace([0.0, -1.0], b[2])\nHb = H1 ∩ H2\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAll nine regions\n\n\nShow the code\nusing LazySets\nh = 0.2\nA = [-1.0 2.0; -2.0 -1.0]\n\nu = [1, -1]\nb = h*A*u\n\nH1 = HalfSpace(-[1.0, 0.0], b[1])\nH2 = HalfSpace([0.0, 1.0], -b[2])\nHc = H1 ∩ H2\n\nu = [-1, 1]\nb = h*A*u\n\nH1 = HalfSpace([1.0, 0.0], -b[1])\nH2 = HalfSpace(-[0.0, 1.0], b[2])\nHd = H1 ∩ H2\n\nu = [-1, -1]\nb = h*A*u\n\nH1 = HalfSpace([1.0, 0.0], -b[1])\nH2 = HalfSpace([0.0, 1.0], -b[2])\nHe = H1 ∩ H2\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\nPlots.plot!(Hc)\nPlots.plot!(Hd)\nPlots.plot!(He)\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSolutions using a MCP solver\n\n\nShow the code\nM = [-1 2; -2 -1]\nh = 2e-1\ntfinal = 2.0\nN = tfinal/h\n\nx0 = [-1.0, 1.0]\nx = [x0]\n\nusing JuMP\nusing PATHSolver\n\nfor i = 1:N\n model = Model(PATHSolver.Optimizer)\n set_optimizer_attribute(model, \"output\", \"no\")\n set_silent(model)\n @variable(model, -1 <= u[1:2] <= 1)\n @constraint(model, -h*M * u - x[end] ⟂ u)\n optimize!(model)\n push!(x, x[end]+h*M*value.(u))\nend\n\nt = range(0.0, step=h, stop=tfinal)\nX = [getindex.(x, i) for i in 1:length(x[1])]\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\nPlots.plot!(Hc)\nPlots.plot!(Hd)\nPlots.plot!(He)\nPlots.plot!(X[1],X[2],xlabel=\"x₁\",ylabel=\"x₂\",label=\"Time-stepping\",aspect_ratio=:equal,lw=3,markershape=:circle)\nPlots.plot!(sol[1,:],sol[2,:],label=false,lw=3)\n\n\n\n\n\n \n \n \n\n\n\n \n \n 
\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Back to top", + "text": "One of the useful outcomes of the theory of complementarity systems is a new family of methods for numerical simulation of discontinuous systems. Here we will demonstrate the essence by introducing the method of time-stepping. And we do it by means of an example.\n\nExample 1 (Simulation using time-stepping) Consider the following discontinuous dynamical system in \\mathbb R^2: \n\\begin{aligned}\n\\dot x_1 &= -\\operatorname{sign} x_1 + 2 \\operatorname{sign} x_2\\\\\n\\dot x_2 &= -2\\operatorname{sign} x_1 -\\operatorname{sign} x_2.\n\\end{aligned}\n\nThe state portrait is in Fig. 1.\n\n\nShow the code\nf₁(x) = -sign(x[1]) + 2*sign(x[2])\nf₂(x) = -2*sign(x[1]) - sign(x[2])\nf(x) = [f₁(x), f₂(x)]\n\nusing CairoMakie\nfig = Figure(size = (800, 800),fontsize=20)\nax = Axis(fig[1, 1], xlabel = \"x₁\", ylabel = \"x₂\")\nstreamplot!(ax,(x₁,x₂)->Point2f(f([x₁,x₂])), -2.0..2.0, -2.0..2.0, colormap = :magma)\nfig\n\n\n\n\n\n\n\n\nFigure 1: State portrait of the discontinuous system\n\n\n\n\n\nOne particular (vector) state trajectory is in Fig. 2.\n\n\nShow the code\nusing DifferentialEquations\n\nfunction f!(dx,x,p,t)\n dx[1] = -sign(x[1]) + 2*sign(x[2])\n dx[2] = -2*sign(x[1]) - sign(x[2])\nend\n\nx0 = [-1.0, 1.0]\ntfinal = 2.0\ntspan = (0.0,tfinal)\nprob = ODEProblem(f!,x0,tspan)\nsol = solve(prob)\n\nusing Plots\nPlots.plot(sol,xlabel=\"t\",ylabel=\"x\",label=false,lw=3)\n\n\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 2: Trajectory of the discontinuous system\n\n\n\n\nWe can also plot the trajectory in the state space, as in Fig. 
3.\n\n\nShow the code\nPlots.plot(sol[1,:],sol[2,:],xlabel=\"x₁\",ylabel=\"x₂\",label=false,aspect_ratio=:equal,lw=3,xlims=(-1.2,0.5))\n\n\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 3: Trajectory of the discontinuous system in the state space\n\n\n\n\nNow, how fast does the solution approach the origin?\nLet’s use the 1-norm \\|\\bm x\\|_1 = |x_1| + |x_2| to measure how far the trajectory is from the origin. We then ask: \n\\frac{\\mathrm d}{\\mathrm dt}\\|\\bm x\\|_1 = ?\n\nWe avoid the troubles with nonsmoothness of the absolute value by consider each quadrant separately. Let’s start in the first (upper right) quadrant, that is, x_1>0 and x_2>0, and therefore |x_1| = x_1, \\;|x_2| = x_2, and therefore \n\\frac{\\mathrm d}{\\mathrm dt}\\|\\bm x\\|_1 = \\dot x_1 + \\dot x_2 = 1 - 3 = -2.\n\nThe situation is identical in the other quadrants. And, of course, undefined on the axes.\nThe conclusion is that the trajectory will hit the origin in finite time: for, say, x_1(0) = 1 and x_2(0) = 1 , the trajectory hits the origin at t=(|x_1(0)|+|x_2(0)|)/2 = 1. But with an infinite number of revolutions around the origin…\nHow will a standard algoritm for numerical simulation handle this? 
Let’s have a look at that.\n\nForward Euler with fixed step size\n\n\\begin{aligned}\n{\\color{blue}x_{1,k+1}} &= x_{1,k} + h (-\\operatorname{sign} x_{1,k} + 2 \\operatorname{sign} x_{2,k})\\\\\n{\\color{blue}x_{2,k+1}} &= x_{1,k} + h (-2\\operatorname{sign} x_{1,k} - \\operatorname{sign} x_{2,k}).\n\\end{aligned}\n\n\n\nShow the code\nf(x) = [-sign(x[1]) + 2*sign(x[2]), -2*sign(x[1]) - sign(x[2])]\n\nusing LinearAlgebra\nN = 1000\nx₀ = [-1.0, 1.0] \nx = [x₀]\ntfinal = norm(x₀,1)/2\ntfinal = 5.0\nh = tfinal/N \nt = range(0.0, step=h, stop=tfinal)\n\nfor i=1:N\n xnext = x[i] + h*f(x[i]) \n push!(x,xnext)\nend\n\nX = [getindex.(x, i) for i in 1:length(x[1])]\n\nPlots.plot(t,X,lw=3,label=false,xlabel=\"t\",ylabel=\"x\")\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBackward Euler\n\n\\begin{aligned}\n{\\color{blue} x_{1,k+1}} &= x_{1,k} + h (-\\operatorname{sign} {\\color{blue}x_{1,k+1}} + 2 \\operatorname{sign} {\\color{blue}x_{2,k+1}})\\\\\n{\\color{blue} x_{2,k+1}} &= x_{1,k} + h (-2\\operatorname{sign} {\\color{blue}x_{1,k+1}} - \\operatorname{sign} {\\color{blue}x_{2,k+1}}).\n\\end{aligned}\n\n\n\nFormulation using LCP\nInstead solving the above nonlinear equations with discontinuities, we introduce new variables u_1 and u_2 as the outputs of the \\operatorname{sign} functions: \n\\begin{aligned}\n{\\color{blue} x_{1,k+1}} &= x_{1,k} + h (-{\\color{blue}u_{1}} + 2 {\\color{blue}u_{2}})\\\\\n{\\color{blue} x_{2,k+1}} &= x_{1,k} + h (-2{\\color{blue}u_{1}} - {\\color{blue}u_{2}}).\n\\end{aligned}\n\nBut now we have to enforce the relation between \\bm u and \\bm x_{k+1}. 
Recall the standard definition of the \\operatorname{sign} function is \n\\operatorname{sign}(x) = \\begin{cases}\n1 & x>0\\\\\n0 & x=0\\\\\n-1 & x<0,\n\\end{cases}\n but we change the definition to a set-valued function \n\\begin{cases}\n\\operatorname{sign}(x) = 1 & x>0\\\\\n\\operatorname{sign}(x) \\in [-1,1] & x=0\\\\\n\\operatorname{sign}(x) = -1 & x<0.\n\\end{cases}\n\nAccordingly, we set the relationship between \\bm u and \\bm x to \n\\begin{cases}\nu_1 = 1 & x_1>0\\\\\nu_1 \\in [-1,1] & x_1=0\\\\\nu_1 = -1 & x_1<0,\n\\end{cases}\n and \n\\begin{cases}\nu_2 = 1 & x_2>0\\\\\nu_2 \\in [-1,1] & x_2=0\\\\\nu_2 = -1 & x_2<0.\n\\end{cases}\n\nBut these are mixed complementarity contraints we have defined previously! \n\\boxed{\n\\begin{aligned}\n\\begin{bmatrix}\n{\\color{blue} x_{1,k+1}}\\\\\n{\\color{blue} x_{1,k+1}}\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n-1 \\leq {\\color{blue} u_1} \\leq 1 \\quad &\\bot \\quad -{\\color{blue}x_{1,k+1}}\\\\\n-1 \\leq {\\color{blue} u_2} \\leq 1 \\quad &\\bot \\quad -{\\color{blue}x_{2,k+1}}.\n\\end{aligned}\n}\n\n\n\n9 possible combinations\nThere are now 9 possible combinations of the values of u_1 and u_2. Let’s explore some: x_{1,k+1} = x_{2,k+1} = 0, while u_1 \\in [-1,1] and u_2 \\in [-1,1]:\n\n\\begin{aligned}\n\\begin{bmatrix}\n0\\\\\n0\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n& -1 \\leq {\\color{blue} u_1} \\leq 1, \\quad -1 \\leq {\\color{blue} u_2} \\leq 1\n\\end{aligned}\n\nHow does the set of states from which the next state is zero look like? 
\n\\begin{aligned}\n-\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}^{-1}\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n&= h\n\\begin{bmatrix}\n{\\color{blue}u_{1}}\\\\\n{\\color{blue}u_{2}}\n\\end{bmatrix}\\\\\n-1 \\leq {\\color{blue} u_1} \\leq 1, \\quad -1 &\\leq {\\color{blue} u_2} \\leq 1\n\\end{aligned}\n\n\n\\begin{bmatrix}\n-h\\\\-h\n\\end{bmatrix}\n\\leq\n\\begin{bmatrix}\n0.2 & 0.4 \\\\\n-0.4 & 0.2\n\\end{bmatrix}\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n\\leq\n\\begin{bmatrix}\nh\\\\ h\n\\end{bmatrix}\n\nFor h=0.2\n\n\nShow the code\nusing LazySets\nh = 0.2\nH1u = HalfSpace([0.2, 0.4], h)\nH2u = HalfSpace([-0.4, 0.2], h)\nH1l = HalfSpace(-[0.2, 0.4], h)\nH2l = HalfSpace(-[-0.4, 0.2], h)\n\nHa = H1u ∩ H2u ∩ H1l ∩ H2l\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIndeed, if the current state is in this rotated square, then the next state will be zero.\n\n\nAnother\nu_1 = 1, u_2 = 1:\n\n\\begin{aligned}\n\\begin{bmatrix}\n{\\color{blue} x_{1,k+1}}\\\\\n{\\color{blue} x_{1,k+1}}\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n{1}\\\\\n{1}\n\\end{bmatrix}\\\\\n\\color{blue}x_{1,k+1} &\\geq 0\\\\\n\\color{blue}x_{2,k+1} &\\geq 0\n\\end{aligned}\n which can be reformatted to \n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix} + h\n\\begin{bmatrix}\n-1 & 2 \\\\\n-2 & -1\n\\end{bmatrix}\n\\begin{bmatrix}\n1\\\\\n1\n\\end{bmatrix}\\geq \\bm 0\n and further to \n\\begin{bmatrix}\nx_{1,k}\\\\\nx_{2,k}\n\\end{bmatrix}\n\\geq h\n\\begin{bmatrix}\n-1\\\\\n3\n\\end{bmatrix}\n\n\n\nShow the code\nusing LazySets\nh = 0.2\nA = [-1.0 2.0; -2.0 -1.0]\nu = [1.0, 1.0]\nb = h*A*u\n\nH1 = HalfSpace([-1.0, 0.0], b[1])\nH2 = 
HalfSpace([0.0, -1.0], b[2])\nHb = H1 ∩ H2\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAll nine regions\n\n\nShow the code\nusing LazySets\nh = 0.2\nA = [-1.0 2.0; -2.0 -1.0]\n\nu = [1, -1]\nb = h*A*u\n\nH1 = HalfSpace(-[1.0, 0.0], b[1])\nH2 = HalfSpace([0.0, 1.0], -b[2])\nHc = H1 ∩ H2\n\nu = [-1, 1]\nb = h*A*u\n\nH1 = HalfSpace([1.0, 0.0], -b[1])\nH2 = HalfSpace(-[0.0, 1.0], b[2])\nHd = H1 ∩ H2\n\nu = [-1, -1]\nb = h*A*u\n\nH1 = HalfSpace([1.0, 0.0], -b[1])\nH2 = HalfSpace([0.0, 1.0], -b[2])\nHe = H1 ∩ H2\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\nPlots.plot!(Hc)\nPlots.plot!(Hd)\nPlots.plot!(He)\n\n\n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSolutions using a MCP solver\n\n\nShow the code\nM = [-1 2; -2 -1]\nh = 2e-1\ntfinal = 2.0\nN = tfinal/h\n\nx0 = [-1.0, 1.0]\nx = [x0]\n\nusing JuMP\nusing PATHSolver\n\nfor i = 1:N\n model = Model(PATHSolver.Optimizer)\n set_optimizer_attribute(model, \"output\", \"no\")\n set_silent(model)\n @variable(model, -1 <= u[1:2] <= 1)\n @constraint(model, -h*M * u - x[end] ⟂ u)\n optimize!(model)\n push!(x, x[end]+h*M*value.(u))\nend\n\nt = range(0.0, step=h, stop=tfinal)\nX = [getindex.(x, i) for i in 1:length(x[1])]\n\nusing Plots\nPlots.plot(Ha, aspect_ratio=:equal,xlabel=\"x₁\",ylabel=\"x₂\",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))\nPlots.plot!(Hb)\nPlots.plot!(Hc)\nPlots.plot!(Hd)\nPlots.plot!(He)\nPlots.plot!(X[1],X[2],xlabel=\"x₁\",ylabel=\"x₂\",label=\"Time-stepping\",aspect_ratio=:equal,lw=3,markershape=:circle)\nPlots.plot!(sol[1,:],sol[2,:],label=false,lw=3)\n\n\n\n\n\n \n \n \n\n\n\n \n \n 
\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Back to top", "crumbs": [ "9. Complementarity systems", "Simulations of complementarity systems using time-stepping" diff --git a/sitemap.xml b/sitemap.xml index 32ae305..4e43f45 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -126,7 +126,7 @@ https://hurak.github.io/hys/complementarity_simulations.html - 2024-12-01T15:03:16.724Z + 2024-12-02T10:25:44.967Z https://hurak.github.io/hys/classes_PWA.html diff --git a/solution_concepts 11.html b/solution_concepts 11.html new file mode 100644 index 0000000..32bf7e2 --- /dev/null +++ b/solution_concepts 11.html @@ -0,0 +1,1213 @@ + + + + + + + + + +Solution concepts – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Solution concepts

+
+ + + +
+ + + + +
+ + + +
+ + +

Now that we know how to model hybrid systems, we need to define what we mean by a solution to a hybrid system. The definitions are not as straightforward as in the continuous-time or discrete-time cases, and mastering them is not only of theoretical value.

+
+

Hybrid time and hybrid time domain

+

Even before we start discussing the concepts of a solution, we need to discuss the concept of time in hybrid systems. Of course, hybrid systems live in the same world as we do, and therefore they evolve in the same physical time, but it turns out that we can come up with an artificial concept of hybrid time that makes modelling and analysis of hybrid systems convenient.

+

Recall that continuous-time systems evolve in the continuous time t\in\mathbb R_{\geq 0}, and discrete-time systems in the discrete “time” k\in\mathbb N. We put the latter in quotation marks since k is not really a time – it should rather be read as indexing the kth transition of the system. Now, the idea is to combine these two concepts of time into one, which we call the hybrid time (t,j), \; t\in \mathbb R_{\geq 0},\, j\in \mathbb N.

+

If you think that this is redundant, note that since hybrid systems can exhibit discrete-event system behaviour, a transition from one discrete state to another can happen instantaneously. In fact, several such transitions can take no time at all. It sounds weird, but that is what the mathematical model allows. That is why specifying t need not be enough and we also need to specify j.

+

The set of all hybrid times for a given hybrid system is called hybrid time domain +E \subset [0,T] \times \{0,1,2,\ldots, J\}, + where T and J can be finite or \infty.

+

In particular, +E = \bigcup_{j=0}^J \left([t_j,t_{j+1}] \times \{j\}\right) +\tag{1}

+

where 0=t_0 < t_1 < \ldots < t_J = T.

+

The meaning of Eq. 1 can be best explained using Fig. 1 below.

+
+
+
+ +
+
+Figure 1: Example of a hybrid time domain +
+
+
+

Note that for any two hybrid times from the same hybrid time domain, we can decide whether (t,j) \leq (t',j'). In other words, a hybrid time domain is totally ordered.
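The representation in Eq. 1 is easy to mirror in code. The following Julia sketch (the names and data layout are our own, purely illustrative) stores a hybrid time domain as a list of interval–counter pairs and implements a membership test together with the total order just discussed, assuming both hybrid times come from the same domain.

```julia
# A hybrid time domain as in Eq. 1: a list of pairs (interval, j).
# Here two jumps happen at t = 1, so three pairs share that boundary point.
E = [((0.0, 1.0), 0), ((1.0, 1.0), 1), ((1.0, 2.5), 2)]

# Membership test: is the hybrid time (t, j) in the domain E?
in_domain(t, j, E) = any(e -> e[2] == j && e[1][1] <= t <= e[1][2], E)

# Total (lexicographic) order of hybrid times from the same domain.
hybrid_leq(a, b) = a[1] < b[1] || (a[1] == b[1] && a[2] <= b[2])
```

The two jumps at t = 1 show up as the hybrid times (1.0, 0), (1.0, 1) and (1.0, 2) all belonging to the domain, ordered by j alone.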

+
+
+

Hybrid arc

+

A hybrid arc is just the term used in the literature for a hybrid state trajectory. It is a function that assigns a state vector x to each hybrid time (t,j): +x: E \rightarrow \mathbb R^n. +

+

For each j the function t \mapsto x(t,j) is absolutely continuous on the interval I^j = \{t \mid (t,j) \in E\}.

+
+
+
+ +
+
+Inconsistent notation +
+
+
+

We admit here that we are not going to be 100% consistent in the usage of the notation x(t,j) in the rest of our course. Oftentimes we will use x(t) even for hybrid systems when we do not need to index the jumps.

+
+
+

It is perhaps clear now that the hybrid time domain can only be determined once the solution (the arc, the trajectory) is known. This is in sharp contrast with continuous-time or discrete-time systems – there we can formulate the problem of finding a solution to \dot x(t) = 3x(t), \, x(0) = 1 on the interval [0,2], where the interval is set even before we know what the solution looks like.

+
+
+

Solutions of autonomous (no-input) systems

+

Finally we can formalize the concept of a solution. A hybrid arc x(\cdot,\cdot) is a solution to the hybrid equations given by the common quadruple \{\mathcal{C},\mathcal{D},f,g\} (or \{\mathcal{C},\mathcal{D},\mathcal{F},\mathcal{G}\} for inclusions), if

+
    +
  • the initial state x(0,0) \in \overline{\mathcal{C}} \cup \mathcal{D}, and
  • +
  • for all j such that I^j = \{t\mid (t,j)\in E\} has a nonempty interior \operatorname{int}I^j +
      +
    • x(t,j) \in \mathcal C \; \forall t\in \operatorname{int}I^j,
    • +
    • \dot x(t,j) = f(x(t,j)) \; \text{for almost all}\; t\in I^j, and
    • +
  • +
  • for all (t,j)\in E such that (t,j+1)\in E +
      +
    • x(t,j) \in \mathcal{D}, and
    • +
    • x(t,j+1) = g(x(t,j)).
    • +
  • +
+

Make the modifications for the \{\mathcal{C},\mathcal{D},\mathcal{F},\mathcal{G}\} version by yourself.
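To make the definition concrete, here is a minimal Julia sketch (our own illustration, not code from the course) that simulates the classic bouncing ball written as the quadruple \{\mathcal{C},\mathcal{D},f,g\}: a crude forward-Euler scheme advances t while the state flows in \mathcal{C}, and increments j when the state jumps from \mathcal{D}, so the logged triples (t, j, x) trace out a hybrid arc on its hybrid time domain. The gravity, restitution and step-size values are arbitrary choices for the sketch.

```julia
# Bouncing ball as the data {C, D, f, g}; all names and values are ad hoc.
const gacc, λ, h = 9.81, 0.8, 1e-3     # gravity, restitution, Euler step

f(x) = [x[2], -gacc]                   # flow map, x = (height, velocity)
gmap(x) = [0.0, -λ*x[2]]               # jump map: reverse and damp velocity
inC(x) = x[1] >= 0.0                   # flow set: ball above the ground
inD(x) = x[1] <= 0.0 && x[2] <= 0.0    # jump set: at the ground, moving down

function simulate(x0; T = 2.0, Jmax = 10)
    x, t, j = x0, 0.0, 0
    sol = [(t, j, x)]                  # hybrid arc sampled as (t, j, x)
    while t < T && j < Jmax
        if inD(x)
            x = gmap(x); j += 1        # jump: t frozen, j incremented
        else
            x = x + h*f(x); t += h     # flow: t advances, j frozen
        end
        push!(sol, (t, j, x))
    end
    return sol
end

sol = simulate([1.0, 0.0])
```

Each bounce leaves t unchanged while j steps up by one, which is exactly the structure of the hybrid time domain in Eq. 1.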

+
+

Example 1 (Solution) An example of a solution is in Fig. 2. Follow the solution with your finger and make sure you understand what is happening and why. In particular, in the overlapping region the solution is not unique: while it can continue flowing, it can also jump.

+
+
+
+ +
+
+Figure 2: Example of a solution +
+
+
+
+
+
+

Hybrid input

+

Similarly to the state, the input can be considered a function of the hybrid time. With its own hybrid time domain E_\mathrm{u}, the input is +u: E_\mathrm{u} \rightarrow \mathbb R^m. +

+

For each j the function t \mapsto u(t,j) must be well-behaved in some sense – for example, piecewise continuous on the interval I^j = \{t \mid (t,j) \in E_\mathrm{u}\}.

+
+
+

Solutions of systems with inputs

+

We assume that the hybrid time domains of the arc and of the input are the same. A solution must satisfy the same conditions as in the autonomous case, but with the input taken into account. For completeness we state the conditions here:

+
    +
  • The initial state-control pair (x(0,0),u(0,0)) \in \overline{\mathcal{C}} \cup \mathcal{D}, and
  • +
  • for all j such that I^j = \{t\mid (t,j)\in E\} has a nonempty interior \operatorname{int}I^j +
      +
    • (x(t,j),u(t,j)) \in \mathcal C \; \forall t\in \operatorname{int}I^j,
    • +
    • \dot x(t,j) = f(x(t,j),u(t,j)) \; \text{for almost all}\; t\in I^j, and
    • +
  • +
  • for all (t,j)\in E such that (t,j+1)\in E +
      +
    • (x(t,j),u(t,j)) \in \mathcal{D}, and
    • +
    • x(t,j+1) = g(x(t,j),u(t,j)).
    • +
  • +
+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/solution_references 10.html b/solution_references 10.html new file mode 100644 index 0000000..ca511fb --- /dev/null +++ b/solution_references 10.html @@ -0,0 +1,1097 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

Any discussion of the concept(s) of solution of a hybrid system must start with the concepts of hybrid time and hybrid time domain. Within the hybrid automata framework, these are discussed in the references we have already given. Particularly popular and recommendable are the lecture notes [1], which are available online. Their updated and extended version [2] is no longer available online – we can only guess that the authors are turning it into a printed textbook. Another hybrid automata textbook that discusses these concepts is [3] (Section 2.2.3), but it is not available online either. The same concepts are also discussed within the hybrid equations framework as introduced, for example, in [4], which can be downloaded (within an institutional subscription). In fact, we find their version of hybrid time and hybrid time domain even more (visually) appealing.

+

A transition from one discrete state to another, even if not accompanied by a jump (or reset) of the continuous state variable, can be modeled as a discontinuity of the function on the right-hand side of the differential equation. Depending on the circumstances, more or less peculiar phenomena can occur due to these discontinuities. These issues are discussed in quite some detail in the (fairly readable) paper [5]. Highly recommended.


References

+
+
[1]
J. Lygeros, Lecture Notes on Hybrid Systems. 2004. Available: https://people.eecs.berkeley.edu/~sastry/ee291e/lygeros.pdf
+
+
+
[2]
J. Lygeros, S. Sastry, and C. Tomlin, “Hybrid Systems: Foundations, advanced topics and applications,” Jan. 2020. Available: https://www-inst.eecs.berkeley.edu/~ee291e/sp21/handouts/hybridSystems_monograph.pdf
+
+
+
[3]
H. Lin and P. J. Antsaklis, Hybrid Dynamical Systems: Fundamentals and Methods. in Advanced Textbooks in Control and Signal Processing. Cham: Springer, 2022. Accessed: Jul. 09, 2022. [Online]. Available: https://doi.org/10.1007/978-3-030-78731-8
+
+
+
[4]
R. Goebel, R. G. Sanfelice, and A. R. Teel, “Hybrid dynamical systems,” IEEE Control Systems Magazine, vol. 29, no. 2, pp. 28–93, Apr. 2009, doi: 10.1109/MCS.2008.931718.
+
+
+
[5]
J. Cortes, “Discontinuous dynamical systems: A tutorial on solutions, nonsmooth analysis, and stability,” IEEE Control Systems Magazine, vol. 28, no. 3, pp. 36–73, Jun. 2008, doi: 10.1109/MCS.2008.919306.
+
+
+ + +
+
+ +
Types of solutions – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Types of solutions

+
+ + + +
+ + + + +
+ + + +
+ + +

Now that we know what a hybrid arc (trajectory) must satisfy to qualify as a solution of a hybrid system, we can classify solutions into several types. We base this classification on their hybrid time domains E:

+
+
Trivial
+
+just one point. +
+
Nontrivial
+
+at least two points; +
+
Complete
+
+if the domain is unbounded; +
+
Bounded, compact
+
+if the domain is bounded and compact (admittedly, it is a bit awkward to call a solution bounded based solely on the boundedness of its time domain, as most people would interpret boundedness of a solution with regard to its values); +
+
Discrete
+
+if nontrivial and E\subset \{0\} \times \mathbb N; +
+
Continuous
+
+if nontrivial and E\subset \mathbb R_{\geq 0} \times \{0\}; +
+
Eventually discrete
+
+if T = \sup_E t < \infty and E \cap (\{T\}\times \mathbb N) contains at least two points; +
+
Eventually continuous
+
+if J = \sup_E j < \infty and E \cap (\mathbb R_{\geq 0} \times \{J\}) contains at least two points; +
+
Zeno
+
+if complete and \sup_E t < \infty; +
+
Maximal
+
+It cannot be extended: a solution x(t,j) defined on a hybrid time domain E is maximal if there is no solution x^\mathrm{ext}(t,j) defined on a strictly larger hybrid time domain E^\mathrm{ext} \supsetneq E that coincides with x on E. Some literature uses the “linguistic” terminology that a maximal solution is not a proper prefix of any other solution. Complete solutions are maximal, but not vice versa. +
+
+
+
+
+ +
+
+Tip +
+
+
+

It is certainly helpful to sketch the time domains for the individual classes of solutions.

+
+
+
+

Examples of types of solutions

+
+

Example 1 (Example of a (non-)maximal solution) +\dot x = 1, \; x(0) = 1 +

+

+(t,j) \in [0,1] \times \{0\} +

+

Now extend the time domain to +(t,j) \in [0,2] \times \{0\}. +

+

Can we extend the solution?

+
+
+

Example 2 (Maximal but not complete continuous solution) Finite escape time

+

\dot x = x^2, \; x(0) = 1,

+

x(t) = 1/(1-t)

+
+
+
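A quick numeric check confirms that x(t) = 1/(1-t) indeed satisfies the differential equation and escapes to infinity as t approaches 1, so the solution is maximal on [0, 1) but not complete:

```python
# Sanity check that x(t) = 1/(1 - t) solves dx/dt = x^2 with x(0) = 1,
# and exhibits finite escape time at t = 1: the maximal solution lives
# only on [0, 1), so it is maximal but not complete.
x = lambda t: 1.0 / (1.0 - t)

t, h = 0.5, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)    # central-difference derivative
residual = abs(dxdt - x(t) ** 2)          # ~0: the ODE holds at t = 0.5
print(residual, x(0.999))                 # x blows up as t -> 1
```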

Example 3 (Discontinuous right hand side) \dot x = \begin{cases}-1 & x>0\\ 1 & x\leq 0\end{cases}, \quad x(0) = -1. The solution increases and reaches x=0 at t=1; beyond that time no classical solution exists, since the vector field points toward x=0 from both sides (unless the concept of a Filippov solution is invoked, in which case the solution slides along x=0).

+
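The breakdown of classical solutions at x = 0 can be seen in a naive simulation: a forward-Euler iteration reaches the switching surface and then chatters around it, a discrete-time shadow of the Filippov sliding solution:

```python
# Forward-Euler simulation of dx/dt = -1 for x > 0, +1 for x <= 0, x(0) = -1.
# The state reaches the switching surface x = 0 at t = 1; from then on no
# classical solution exists and the Euler iterates chatter around x = 0,
# mimicking the Filippov sliding solution x(t) = 0.
h, x, f_prev, flips = 0.01, -1.0, 1.0, 0
for _ in range(200):                      # simulate t in [0, 2]
    f = -1.0 if x > 0 else 1.0
    flips += (f != f_prev)                # count sign changes of the field
    f_prev = f
    x += h * f
print(x, flips)   # x stays within one step of 0; the vector field flips often
```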
+
+

Example 4 (Zeno solution of the bouncing ball) Starting on the ground with some initial upward velocity +h(t) = \underbrace{h(0)}_0 + v(0)t - \frac{1}{2}gt^2, \quad v(0)=1 +

+

What time will it hit the ground again? +0 = t - \frac{1}{2}gt^2 = t(1-\frac{1}{2}gt) +

+

t_1=\frac{2}{g}

+

Simplify (scale) the computations just to get the qualitative picture: set g=2, which gives t_1 = 1.

+

t_1=1:

+

v(t_1^+) = -\gamma v(t_1^-) = \gamma v(0) = \gamma, \quad \text{since}\; v(t_1^-) = -v(0)

+

The next hit will be at t_1 + \tau_1, where \tau_1 satisfies h(t_1 + \tau_1) = \gamma \tau_1 - \tau_1^2 = \tau_1(\gamma - \tau_1) = 0, hence \tau_1 = \gamma.

+

t_2 = t_1+\tau_1 = 1 + \gamma:\quad \ldots

+

t_k = 1 + \gamma + \gamma^2 + \ldots + \gamma^{k-1}:\quad \ldots \boxed{\lim_{k\rightarrow \infty} t_k = \frac{1}{1-\gamma} < \infty} \quad (\text{since}\; 0<\gamma<1)

+

Infinite number of jumps in a finite time!
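The accumulation of impact times is easy to reproduce numerically. A short sketch with an illustrative restitution coefficient γ = 0.5, so the impact times should accumulate at 1/(1-γ) = 2:

```python
# Impact times of the scaled bouncing ball (g = 2, v(0) = 1), with an
# illustrative restitution coefficient gamma = 0.5. The k-th flight lasts
# gamma^(k-1), so t_k = 1 + gamma + ... + gamma^(k-1) -> 1/(1 - gamma) = 2.
gamma = 0.5
t, times = 0.0, []
for k in range(60):
    t += gamma ** k        # flight durations shrink geometrically
    times.append(t)
print(times[:4])   # [1.0, 1.5, 1.75, 1.875]
print(times[-1])   # ~2: infinitely many impacts accumulate before t = 2
```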

+
+
+

Example 5 (Water tank)  

+
+
+

+
Switching between two water tanks
+
+
+

+\max \{Q_\mathrm{out,1}, Q_\mathrm{out,2}\} \leq Q_\mathrm{in} \leq Q_\mathrm{out,1} + Q_\mathrm{out,2} +

+
+
+

+
Hybrid automaton for switching between two water tanks
+
+
+
+
+

Example 6 ((Non)blocking and (non)determinism in hybrid systems)  

+
+
+

+
Example of an automaton exhibiting (non)blocking and (non)determinism
+
+
+
    +
  • x(0) = -3
  • +
  • x(0) = -2
  • +
  • x(0) = -1
  • +
  • x(0) = 0
  • +
+
+ + +
+ + +
+
+ +
Stability of hybrid systems – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Stability of hybrid systems

+
+ + + +
+ + + + +
+ + + +
+ + +

As we have recalled in the recap section on stability of continuous dynamical systems, stability is a property of an equilibrium. But what is an equilibrium of a hybrid system? It turns out that the definition is not as straightforward as in the continuous case. It also depends on the chosen framework for modelling hybrid systems.

+
+

Equilibrium of a hybrid system modelled by a hybrid automaton

+

First, an equilibrium of a hybrid automaton is a point \bm x_\mathrm{eq} in the continuous state space \mathcal X\subset \mathbb R^n.

+
+
+
+ +
+
+Note +
+
+
+

Although we often assume the equilibrium at the origin, that is, \bm x_\mathrm{eq} = \mathbf 0, the assumption does not have to be invoked in order to provide the definition.

+
+
+

We now consider a hybrid automaton for which the dynamics of each individual mode q is given by \dot{\bm x} = \mathbf f_q(\bm x). The invariants (or domains) of each mode are \mathcal X_q, \, q=1, \ldots, m.

+

The definition of the equilibrium \bm x_\mathrm{eq} that is often found in the literature imposes these two conditions:

+
    +
  • \mathbf 0 = \mathbf f_q(\bm x_\mathrm{eq}) for all q\in \mathcal Q,
  • +
  • the reset map r(q,q',\bm x_\mathrm{eq}) = \bm x_\mathrm{eq}.
  • +
+

The first condition states that the point in the continuous state space should qualify as an equilibrium for each mode. This might appear unnecessarily restrictive (what if the particular \bm x_\mathrm{eq} is not an element of \mathcal X_q for some q?), as we discuss later. But note that this definition appears in several resources, for example, as Definition 4.9 in Section 4.2 of [1] or Definition 8.2 in Section 8.2 of the (no longer available online) [2].

+

The second condition states that the system can be regarded as resting at the equilibrium even if it jumps from one discrete state (mode) to another (while staying at the equilibrium continuous state).
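The two conditions are straightforward to check mechanically. A minimal sketch, with a hypothetical two-mode automaton and identity reset map:

```python
# Checking the two equilibrium conditions on a hypothetical two-mode
# automaton: mode dynamics f_1(x) = -x and f_2(x) = -2x, identity reset map.
modes = {1: lambda x: -x, 2: lambda x: -2.0 * x}
reset = lambda q, q_next, x: x        # r(q, q', x) = x
x_eq = 0.0

cond1 = all(f(x_eq) == 0.0 for f in modes.values())  # equilibrium of each mode
cond2 = reset(1, 2, x_eq) == x_eq                    # reset preserves x_eq
print(cond1 and cond2)   # True: x_eq = 0 is an equilibrium of the automaton
```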

+
+
+

Equilibrium of a hybrid system modelled by hybrid equations

+

The state vector within this modelling framework comprises both the discrete and continuous state variables. The two conditions for the equilibrium of a hybrid automaton can be translated into the hybrid equations framework, which means that the equilibrium is not just a single point but rather a set of points.

+
+

Example 1 (Equilibrium of a hybrid system modelled by hybrid equations) Consider a hybrid system modelled by hybrid equations, for which the state space is given by \mathcal X = \{0,1\} \times \mathbb R. The dynamics of the system is given by

+
+

This makes the analysis significantly more challenging. Therefore, in our lecture we will only consider stability of hybrid automata.

+
+
+

Stability of a hybrid automaton

+

The equilibrium \bm x_\mathrm{eq}=\mathbf 0 is stable if for every \varepsilon > 0 there exists a \delta > 0 such that for all hybrid system executions/trajectories starting at (q_0,\bm x_0), +\|\bm x_0\| < \delta \Rightarrow \|\bm x(\tau)\| < \varepsilon, \; \forall \tau \in \mathcal{T}, + where \tau is a hybrid time and \mathcal{T} is the hybrid time domain.

+
+
+

Asymptotic stability

+

The equilibrium is stable and furthermore we can choose some \delta such that +\|\bm x_0\| < \delta \quad \Rightarrow \quad \lim_{\tau\rightarrow \tau_\infty} \|\bm x(\tau)\| = 0, + where \tau_\infty<\infty if the execution is Zeno and \tau_\infty=\infty otherwise.

+
+
+

Is stability of the individual dynamics enough?

+
+
+

+
Hybrid automaton that is unstable due to switching even though the two modes are stable
+
+
+

+A_1 = +\begin{bmatrix} +-1 & -100\\ 10 & -1 +\end{bmatrix}, \quad +A_2 = +\begin{bmatrix} +-1 & 10\\ -100 & -1 +\end{bmatrix} +

+
    +
  • Both are stable.
  • +
  • Switching can be destabilizing.
  • +
+
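This destabilization can be verified numerically. The sketch below uses one illustrative choice of switching signal, periodic switching with dwell time equal to a quarter of the rotation period, and the closed-form matrix exponential (both matrices have the form aI + B with B² = -ω²I). Although both matrices are Hurwitz, the state map over one switching period has spectral radius around 9:

```python
import numpy as np

A1 = np.array([[-1.0, -100.0], [10.0, -1.0]])
A2 = np.array([[-1.0, 10.0], [-100.0, -1.0]])

# Both matrices are Hurwitz: eigenvalues -1 +/- j*sqrt(1000).
assert all(np.linalg.eigvals(A).real.max() < 0 for A in (A1, A2))

def expAt(A, t):
    # closed form for A = a*I + B with tr(B) = 0, so B^2 = -det(B)*I
    a = A[0, 0]
    B = A - a * np.eye(2)
    w = np.sqrt(np.linalg.det(B))      # rotation rate, here sqrt(1000)
    return np.exp(a * t) * (np.cos(w * t) * np.eye(2) + np.sin(w * t) / w * B)

h = np.pi / (2 * np.sqrt(1000.0))      # dwell time: a quarter rotation period
M = expAt(A1, h) @ expAt(A2, h)        # state map over one switching period
rho = np.abs(np.linalg.eigvals(M)).max()
print(rho)   # about 9 > 1: this periodic switching destabilizes
```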
+
+

Can the individual dynamics be unstable?

+
+
+

+
Hybrid automaton that is stable thanks to switching even though the two modes are unstable
+
+
+

+A_1 = +\begin{bmatrix} +1 & -100\\ 10 & 1 +\end{bmatrix}, \quad +A_2 = +\begin{bmatrix} +1 & 10\\ -100 & 1 +\end{bmatrix} +

+
    +
  • Both are unstable.
  • +
  • Switching can be stabilizing.
  • +
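A forward-Euler sketch of one stabilizing state-dependent switching law, which is our illustrative choice: flow counterclockwise with mode 1 from the x₁-axis to the x₂-axis, then clockwise with mode 2 back to the x₁-axis; each quarter turn contracts the state by about √10 while the instability grows it only by about e^{0.05}:

```python
# Both modes are unstable (eigenvalues 1 +/- j*sqrt(1000)), yet switching
# on the axis crossings stabilizes the switched system.
A1 = [[1.0, -100.0], [10.0, 1.0]]    # rotates counterclockwise
A2 = [[1.0, 10.0], [-100.0, 1.0]]    # rotates clockwise

x1, x2, mode, h = 1.0, 0.0, 1, 1e-5
for _ in range(100_000):             # forward Euler over 1 second
    if mode == 1 and x1 <= 0.0:
        mode = 2                     # reached the x2-axis: switch
    elif mode == 2 and x2 <= 0.0:
        mode = 1                     # back at the x1-axis: switch
    A = A1 if mode == 1 else A2
    x1, x2 = (x1 + h * (A[0][0] * x1 + A[0][1] * x2),
              x2 + h * (A[1][0] * x1 + A[1][1] * x2))

norm = (x1 ** 2 + x2 ** 2) ** 0.5
print(norm)   # far below the initial norm 1: the switched system decays
```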
+ + + +

References

+
+
[1]
H. Lin and P. J. Antsaklis, Hybrid Dynamical Systems: Fundamentals and Methods. in Advanced Textbooks in Control and Signal Processing. Cham: Springer, 2022. Accessed: Jul. 09, 2022. [Online]. Available: https://doi.org/10.1007/978-3-030-78731-8
+
+
+
[2]
J. Lygeros, S. Sastry, and C. Tomlin, “Hybrid Systems: Foundations, advanced topics and applications,” Jan. 2020. Available: https://www-inst.eecs.berkeley.edu/~ee291e/sp21/handouts/hybridSystems_monograph.pdf
+
+
+ + +
+
+ +
Recap of stability analysis for continuous dynamical systems – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Recap of stability analysis for continuous dynamical systems

+
+ + + +
+ + + + +
+ + + +
+ + +

Before we start discussing stability of hybrid dynamical systems, it will not hurt to recapitulate the stability analysis for continuous (both in value and in time) dynamical systems modelled by the standard state equation

+

\dot{\bm{x}} = \mathbf f(\bm x).

+
+

Equilibrium

+

Loosely speaking, equilibrium is a state at which the system can rest indefinitely when undisturbed by external disturbances. More technically speaking, equilibrium is a point in the state space, that is, a vector \bm x_\mathrm{eq}\in \mathbb R^n, at which the vector field \mathbf f vanishes, that is,
+\mathbf f(\bm x_\mathrm{eq}) = \mathbf 0.

+

Without loss of generality we often assume that \bm x_\mathrm{eq} = \mathbf 0, because if the equilibrium is anywhere else than at the origin, we can always introduce a new shifted state vector \bm x_\mathrm{new}(t) = \bm x(t) - \bm x_\mathrm{eq}.

+
+
+
+ +
+
+An equilibrium and not a system is what we analyze for stability +
+
+
+

Although every now and then we may hear the term stability attributed to a system, strictly speaking it is an equilibrium that is stable or unstable. For linear systems there is no need to distinguish between the two, but for nonlinear systems it can easily happen that one equilibrium is stable while another is unstable.

+
+
+
+
+

Lyapunov stability

+

One of the most common types of stability is Lyapunov stability. Loosely speaking, it means that if the system starts close to the equilibrium, it stays close to it. More formally, for a given \varepsilon>0, there is a \delta>0 such that …

+
+
+

Attractivity

+

This is another property of an equilibrium. If it is (locally) attractive, it means that if the system starts close to the equilibrium, it will converge to it. The global version of attractivity means that the system asymptotically converges to the equilibrium from anywhere.

+

Perhaps it is not immediately clear that attractivity is distinct from (Lyapunov) stability. The following example shows an attractive but Lyapunov unstable equilibrium.

+
+

Example 1 (Example of an attractive but unstable equilibrium)  

+
+
+Show the code +
f(x) = [(x[1]^2*(x[2]-x[1])+x[2]^5)/((x[1]^2+x[2]^2)*(1+(x[1]^2+x[2]^2)^2));
        (x[2]^2*(x[2]-2x[1]))/((x[1]^2+x[2]^2)*(1+(x[1]^2+x[2]^2)^2))]

# Plot the phase portrait of the vector field
using CairoMakie
fig = Figure(; size = (800, 800), fontsize = 20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax, x -> Point2f(f(x)), -1.5..1.5, -1.5..1.5, colormap = :magma)
fig
+
+
+
+
+

+
+
+
+
+
+
+
+

Asymptotic stability

+

The combination of Lyapunov stability and attractivity is called asymptotic stability.

+

If the attractivity is global, the asymptotic stability is called global too.

+
+
+

Exponential stability

+

Exponential convergence.

+
+
+

Stability of time-varying systems

+

Stability (Lyapunov, asymptotic, …) is called uniform if it is independent of the initial time.

+
+
+

Stability analysis via Lyapunov function

+

Now that we have recapitulated the key stability concepts, it is time to recapitulate the methods for checking whether this or that type of stability is achieved. The classical method is based on searching for a Lyapunov function.

+

A Lyapunov function is a scalar function V(\cdot)\in\mathcal{C}^1 defined on an open set \mathcal{D}\subset \mathbb{R}^n containing the origin (the equilibrium) that satisfies the following conditions: V(0) = 0, \; V(x) > 0\, \text{for all}\, x\in \mathcal{D}\setminus \{0\},

+

\underbrace{\left(\nabla V(x)\right)^\top f(x)}_{\frac{\mathrm d}{\mathrm d t}V(x(t))} \leq 0.

+

In words, a Lyapunov function for a given system and a given equilibrium is a function that is positive everywhere except at the origin, where it is zero (we call such a function positive definite), and whose derivative along the trajectories of the system is nonpositive (that is, negative semidefinite), which is a way to guarantee that the function does not increase along the trajectories. If such a function exists, the equilibrium is Lyapunov stable.

+

If the latter condition is made strict, that is, if \left(\nabla V(x)\right)^\top f(x) < 0, which is a way to guarantee that the function decreases along the trajectories, the equilibrium is asymptotically stable.
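As a toy illustration (a hypothetical scalar system, not from the lecture): for \dot x = -x^3 the candidate V(x) = x^2 is positive definite and \dot V = 2x\cdot(-x^3) = -2x^4 < 0 away from the origin, so the origin is asymptotically stable. A brute-force grid check of both conditions:

```python
# Grid check of Lyapunov conditions for dx/dt = -x^3 with V(x) = x^2:
# V > 0 away from the origin and dV/dt = 2x * (-x^3) = -2x^4 < 0 there.
f = lambda x: -x ** 3
V = lambda x: x ** 2
dV = lambda x: 2 * x * f(x)      # (dV/dx) * f(x)

grid = [i / 100 for i in range(-200, 201) if i != 0]
ok = all(V(x) > 0 and dV(x) < 0 for x in grid)
print(ok)   # True
```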

+

The interpretation is quite intuitive: …

+
+
+

LaSalle’s invariance principle

+

A delicate question is whether an occasionally vanishing derivative of the Lyapunov function automatically means that the equilibrium is not asymptotically stable. The answer is: not necessarily. LaSalle’s invariance principle states that even if the derivative of the Lyapunov function occasionally vanishes, the equilibrium can still be asymptotically stable, provided some additional condition is satisfied. We will not elaborate on it here; look it up in your favourite nonlinear (control) systems textbook.

+
+
+

Formulated using comparison functions

+

The above properties of the Lyapunov function can also be formulated using comparison functions. For Lyapunov stability, the following holds \kappa_1(\|x\|) \leq V(x) {\color{gray}\leq \kappa_2(\|x\|)}, where

+
    +
  • \kappa_1(\cdot), \kappa_2(\cdot) are class \mathcal{K} comparison functions, that is, they are continuous, zero at zero and (strictly) increasing.
  • +
  • If \kappa_1 increases to infinity (\kappa_1(\cdot)\in\mathcal{K}_\infty), the stability is global.
  • +
+

For asymptotic stability

+

\left(\nabla V(x)\right)^\top f(x) \leq -\rho(\|x\|), where \rho(\cdot) is a positive definite continuous function, zero at the origin.

+
+

The upper bound \kappa_2(\cdot) does not have to be imposed explicitly; it is automatically satisfied for time-invariant systems. It does have to be imposed for time-varying systems, though.

+
+
+
+

Exponential stability

+

k_1 \|x\|^p \leq V(x) \leq k_2 \|x\|^p,

+

\left(\nabla V(x)\right)^\top f(x) \leq -k_3 \|x\|^p.

+
+
+

Exponential stability with quadratic Lyapunov function

+

+V(x) = x^\top P x +

+

\lambda_{\min} (P) \|x\|^2 \leq V(x) \leq \lambda_{\max} (P) \|x\|^2

+
+
+

Converse theorems

+
    +
  • for (G)UAS,
  • +
  • for Lyapunov stability only time-varying Lyapunov function guaranteed.
  • +
+ + +
+ + +
+
+ +
Literature – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The lecture was partly built upon Chapter 4 of the textbook [1], which in turn was (to a large extent) built upon Chapter 2 of the research monograph [2]. Neither of the two is available online, but fortunately the latter has a free shorter online version [3]. Chapter 4 (pages 20 through 27) gives the necessary material. Possibly, Chapter 3 can serve as a recap of Lyapunov stability analysis.

+

The lecture was also partly inspired by Sections 8.2 and 8.3 (pages 158–168) of the text [4], which used to be available online but has recently disappeared – most probably it is about to be published as a textbook.

+

Some more online resources, in particular for multiple (also piecewise) Lyapunov functions, are [5], [6], [7], [8]. They are all quite readable.

+
+

Linear matrix inequalities

+

The topic of linear matrix inequalities and the related semidefinite programming, which we used for the analysis of stability, is dealt with in numerous resources, many of them available online. The monograph [9] was one of the first systematic treatments of the topic and still offers relevant material. The authors also provide some shorter teaching material [10], tailored to their Matlab toolbox CVX. Alternatively, the text [11] is even richer by two pages. Other recommendable lecture notes are also available for free: [12]. Finally, the section on semidefinite programming in the documentation for the Yalmip software can also serve as a learning resource.

+
+

S-procedure

+

Some treatment of S-procedure is in [9], pages 23 and 24, and [13], page 655.

+
+
+
+

Sum-of-squares programming

+

The topic of sum-of-squares programming, which we also relied upon in the analysis of stability, is a trending topic in optimization, and a wealth of resources is available. As an introduction, the paper [14] is recommendable. The computational problems described in the paper can be solved in Matlab using the SOSTOOLS toolbox. Its documentation [15] can serve as yet another tutorial. Last but not least, the YALMIP software contains a well-developed section on sum-of-squares programming.

+ + + +

References

+
+
[1]
H. Lin and P. J. Antsaklis, Hybrid Dynamical Systems: Fundamentals and Methods. in Advanced Textbooks in Control and Signal Processing. Cham: Springer, 2022. Accessed: Jul. 09, 2022. [Online]. Available: https://doi.org/10.1007/978-3-030-78731-8
+
+
+
[2]
D. Liberzon, Switching in Systems and Control. in Systems & Control: Foundations & Applications. Boston, MA: Birkhäuser, 2003. Available: https://doi.org/10.1007/978-1-4612-0017-8
+
+
+
[3]
D. Liberzon, “Switched Systems: Stability Analysis and Control Synthesis,” Lecture {{Notes}}, 2007. Available: http://liberzon.csl.illinois.edu/teaching/Liberzon-LectureNotes.pdf
+
+
+
[4]
J. Lygeros, S. Sastry, and C. Tomlin, “Hybrid Systems: Foundations, advanced topics and applications,” Jan. 2020. Available: https://www-inst.eecs.berkeley.edu/~ee291e/sp21/handouts/hybridSystems_monograph.pdf
+
+
+
[5]
R. A. Decarlo, M. S. Branicky, S. Pettersson, and B. Lennartson, “Perspectives and results on the stability and stabilizability of hybrid systems,” Proceedings of the IEEE, vol. 88, no. 7, pp. 1069–1082, Jul. 2000, doi: 10.1109/5.871309.
+
+
+
[6]
M. Johansson and A. Rantzer, “Computation of piecewise quadratic Lyapunov functions for hybrid systems,” IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 555–559, Apr. 1998, doi: 10.1109/9.664157.
+
+
+
[7]
S. Pettersson and B. Lennartson, “Hybrid system stability and robustness verification using linear matrix inequalities,” International Journal of Control, vol. 75, no. 16–17, pp. 1335–1355, Jan. 2002, doi: 10.1080/0020717021000023762.
+
+
+
[8]
A. Hassibi and S. Boyd, “Quadratic stabilization and control of piecewise-linear systems,” in Proceedings of the 1998 American Control Conference. ACC (IEEE Cat. No.98CH36207), Jun. 1998, pp. 3659–3664 vol.6. doi: 10.1109/ACC.1998.703296.
+
+
+
[9]
S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. in Studies in Applied and Numerical Mathematics. Society for Industrial and Applied Mathematics, 1994. Accessed: Apr. 16, 2021. [Online]. Available: https://web.stanford.edu/~boyd/lmibook/
+
+
+
[10]
S. Boyd, “Solving semidefinite programs using cvx,” Stanford University, Stanford, CA, Lecture Notes for {{EE363}}, 2008. Accessed: Aug. 22, 2024. [Online]. Available: https://stanford.edu/class/ee363/notes/lmi-cvx.pdf
+
+
+
[11]
S. Boyd, EE363 Review Session 4: Linear Matrix Inequalities,” Stanford University, Stanford, CA, Lecture Notes for {{EE363}}, 2008. Accessed: Aug. 22, 2024. [Online]. Available: https://stanford.edu/class/ee363/sessions/s4notes.pdf
+
+
+
[12]
C. W. Scherer and S. Weiland, “Linear matrix inequalities in control,” Jan. 2015. Accessed: Apr. 16, 2021. [Online]. Available: https://www.imng.uni-stuttgart.de/mst/files/LectureNotes.pdf
+
+
+
[13]
S. Boyd and L. Vandenberghe, Convex Optimization, Seventh printing with corrections 2009. Cambridge, UK: Cambridge University Press, 2004. Available: https://web.stanford.edu/~boyd/cvxbook/
+
+
+
[14]
A. Papachristodoulou and S. Prajna, “A tutorial on sum of squares techniques for systems analysis,” in Proceedings of the 2005 American Control Conference, Portland, OR, USA: IEEE, Jun. 2005, pp. 2686–2700 vol. 4. doi: 10.1109/ACC.2005.1470374.
+
+
+
[15]
A. Papachristodoulou et al., SOSTOOLS Sums of Squares Optimization Toolbox for Matlab: User’s Guide.” University of Oxford Control Group, Sep. 2021. Available: https://github.com/oxfordcontrol/SOSTOOLS/blob/SOSTOOLS400/docs/sostools.pdf
+
+
+ + +
+
+ +
Software – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Software

+
+ + + +
+ + + + +
+ + + +
+ + +

In our course we formulated the problem of checking stability as that of constructing a Lyapunov function, which in turn was formulated as an optimization problem of semidefinite programming (with linear matrix inequalities, LMI) or of nonnegative polynomial programming (via sum-of-squares (SOS) programming). Hence, we need to be able to formulate and solve those optimization problems.

+
+

Matlab

+ +
+
+

Julia

+ +
+
+

Python

+ + + +
+ + +
+
+ +
Stability via common Lyapunov function – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Stability via common Lyapunov function

+
+ + + +
+ + + + +
+ + + +
+ + +

Having just recalled the stability analysis based on searching for a Lyapunov function, we now extend the analysis to hybrid systems, in particular, hybrid automata.

+
+

Hybrid system stability analysis via Lyapunov function

+

In contrast to continuous systems, where a Lyapunov function should decrease along the state trajectory to certify asymptotic stability (or at least should not increase, to certify Lyapunov stability), in hybrid systems the requirement can be relaxed a bit.

+

A Lyapunov function should still decrease along the continuous state trajectory while the system resides in a given discrete state (mode), but during the transitions between the modes the function can be discontinuous and can even increase. But then, upon return to the same discrete state, the function should have a lower value than it had the last time it entered that state, to certify asymptotic stability (or at least a value not larger, for Lyapunov stability), see Fig. 1.

+
+
+
+ +
+
+Figure 1: Example of an evolution of a Lyapunov function of a hybrid automaton in time +
+
+
+

Formally, the function V(q,\bm x) has two variables q and \bm x, it is smooth in \bm x, and
+ +V(q,\bm 0) = 0, \quad V(q,\bm x) > 0 \; \text{for all nonzero} \; \bm x \; \text{and all} \; q, +

+

+\left(\nabla_x V(q,\bm x)\right)^\top \mathbf f(q,\bm x) < 0 \; \text{for all nonzero} \; \bm x \; \text{and all} \; q, +

+

and the discontinuities at transitions must satisfy the conditions sketched in Fig. 1. If Lyapunov stability is enough, the strict inequality can be relaxed to nonstrict.

+
+

Stricter condition on Lyapunov function

+

Verifying the properties just described is not easy. Perhaps the only way is to simulate the system, evaluate the function along the trajectory, and check whether the conditions are satisfied, which is hardly useful. A stricter condition is in Fig. 2. Here we require that during transitions to another mode the function does not increase. Discontinuous reductions in value are still allowed; enforcing continuity would introduce yet another simplification that can be useful for analysis.

+
+
+
+ +
+
+Figure 2: Example of an evolution of a restricted Lyapunov function of a hybrid automaton in time +
+
+
+
+
+
+

Further restricted set of candidate functions: common Lyapunov function (CLF)

+

In the above, we have considered a Lyapunov function V(q,\bm x) that is mode-dependent. But what if we could find a single Lyapunov function V(\bm x) that is common for all modes q? This would be a great simplification.

+

This, however, implies that at a given state \bm x, an arbitrary transition to another mode q is possible. We add the adjective uniform to the stability (either Lyapunov or asymptotic) to emphasize that the function is common to all modes.

+

In terminology of switched systems, we say that arbitrary switching is allowed. And staying in the domain of switched systems, the analysis can be interpreted within the framework of differential inclusions +\dot{\bm x} \in \{f_1(\bm x), f_2(\bm x),\ldots,f_m(\bm x)\}. +

+
+
+

(Global) uniform asymptotic stability

+

Having agreed that we are now searching for a single function V(\bm x), the function must satisfy \boxed{\kappa_1(\|\bm x\|) \leq V(\bm x) \leq \kappa_2(\|\bm x\|),} where \kappa_1(\cdot), \kappa_2(\cdot) are class \mathcal{K} comparison functions, and \kappa_1(\cdot)\in\mathcal{K}_\infty if global asymptotic stability is needed, and +\boxed{\left(\nabla V(\bm x)\right)^\top \mathbf f_q(\bm x) \leq -\rho(\|\bm x\|),\quad q\in\mathcal{Q},} + where \rho(\cdot) is a positive definite continuous function, zero at the origin. If all these are satisfied, the system is (globally) uniformly asymptotically stable (GUAS).

+

Similarly to the continuous case, we must ask whether stability implies the existence of a common Lyapunov function. An affirmative answer comes in the form of a converse theorem for global uniform asymptotic stability (GUAS).

+

This is great, a CLF exists for a GUAS system, but how do we find it? We must restrict the set of candidate functions and then search within the set. Obviously, if we fail to find a function in the set, we must extend the set and search again…

+
+
+

Common quadratic Lyapunov function (CQLF)

+

An immediate restriction of a set of Lyapunov functions is to quadratic functions +V(\bm x) = \bm x^\top \mathbf P \bm x, + where \mathbf P=\mathbf P^\top \succ 0.

+

This restriction is also quite natural because for linear systems, it is known that we do not have to consider anything more complicated than quadratic functions. This does not hold in general for nonlinear and hybrid systems. But it is a good start. If we succeed in finding a quadratic Lyapunov function, we can be sure that the system is stable.

+

Here we start by considering a hybrid automaton for which at each mode the dynamics is linear. We consider r continuous-time LTI systems parameterized by the system matrices \mathbf A_i for i=1,\ldots, r as +\dot{\bm x} = \mathbf A_i \bm x(t). +

+

The time derivative of V(\bm x) along the trajectories of the i-th system is + \nabla V(\bm x)^\top \left.\frac{\mathrm d \bm x}{\mathrm d t}\right|_{\dot{\bm x} = \mathbf A_i \bm x} = \bm x^\top(\mathbf A_i^\top \mathbf P + \mathbf P\mathbf A_i)\bm x, +

+

which, upon introduction of new matrix variables \mathbf Q_i=\mathbf Q_i^\top given by
+ + \mathbf A_i^\top \mathbf P + \mathbf P\mathbf A_i = \mathbf Q_i,\qquad i=1,\ldots, r + yields
+ + \dot V(\bm x) = \bm x^\top \mathbf Q_i\bm x, + from which it follows that \mathbf Q_i (for all i=1,\ldots,r) must satisfy
+ + \bm x^\top \mathbf Q_i \bm x \leq 0,\qquad i=1,\ldots, r +
+for (Lyapunov) stability and
+ + \bm x^\top \mathbf Q_i \bm x < 0,\qquad i=1,\ldots, r +
+for asymptotic stability.

+

As a matter of fact, we could have proceeded without introducing the new variables \mathbf Q_i and just written the conditions directly in terms of \mathbf A_i and \mathbf P + \bm x^\top (\mathbf A_i^\top \mathbf P + \mathbf P\mathbf A_i) \bm x \leq 0,\qquad i=1,\ldots, r +
+for (Lyapunov) stability and
+ + \bm x^\top (\mathbf A_i^\top \mathbf P + \mathbf P\mathbf A_i) \bm x < 0,\qquad i=1,\ldots, r +
+for asymptotic stability.

+
+
+

Linear matrix inequality (LMI)

+

The conditions of quadratic stability that we have just derived are conditions on functions. However, in this case of a quadratic Lyapunov function and linear systems, the conditions can also be written directly in terms of matrices. For that we use the concept of a linear matrix inequality (LMI).

+

Recall that a linear inequality is an inequality of the form \underbrace{a_0 + a_1x_1 + a_2x_2 + \ldots + a_rx_r}_{a(\bm x)} > 0.

+
+
+
+ +
+
+Linear vs. affine +
+
+
+

We could perhaps argue that since the function a(\bm x) is affine and not linear, the inequality should be called an affine inequality. However, the term linear inequality is well established in the literature. It can perhaps be justified by moving the constant term to the right-hand side, in which case we have a linear function on the left and a constant term on the right, which is the same situation as in the equation \mathbf A\bm x=\mathbf b, which we call linear without hesitation.

+
+
+

A linear matrix inequality is a generalization of this concept where the coefficients are matrices.

+

\underbrace{\mathbf A_0 + \mathbf A_1 x_1 + \mathbf A_2 x_2 + \ldots + \mathbf A_r x_r}_{\mathbf A(\bm x)} \succ 0.

+

Besides having matrix coefficients, another crucial difference is the meaning of the inequality. In this case it should not be interpreted component-wise but rather \mathbf A(\bm x)\succ 0 means that the matrix \mathbf A(\bm x) is positive definite.

+

Alternatively, the individual scalar variables can be assembled into matrices, in which case the LMI can have the form with matrix variables +\mathbf F(\bm X) = \mathbf F_0 + \mathbf F_1\bm X\mathbf G_1 + \mathbf F_2\bm X\mathbf G_2 + \ldots + \mathbf F_k\bm X\mathbf G_k \succ 0, + but the meaning of the inequality remains the same.

+

The use of LMIs is widespread in control theory. Here we formulate the LMI feasibility problem: does \bm X=\bm X^\top exist such that the LMI \mathbf F(\bm X)\succ 0 is satisfied?

+
+
+

CQLF as an LMI

+

Having formulated the problem of asymptotic stability using functions, we now rewrite it using matrices as an LMI: +\begin{aligned} +\mathbf P &\succ 0,\\ +\mathbf A_1^\top \mathbf P + \mathbf P\mathbf A_1 &\prec 0,\\ +\mathbf A_2^\top \mathbf P + \mathbf P\mathbf A_2 &\prec 0,\\ +& \vdots \\ +\mathbf A_r^\top \mathbf P + \mathbf P\mathbf A_r &\prec 0. +\end{aligned} +

+
+
+

Solving in Matlab using YALMIP or CVX

+

Most numerical solvers for semidefinite programs (SDP) can only handle nonstrict inequalities. We can enforce the strict inequalities by introducing some small \epsilon>0:
\begin{aligned}
\mathbf P &\succeq \epsilon \mathbf I,\\
\mathbf A_1^\top \mathbf P + \mathbf P\mathbf A_1 &\preceq -\epsilon \mathbf I,\\
\mathbf A_2^\top \mathbf P + \mathbf P\mathbf A_2 &\preceq -\epsilon \mathbf I,\\
& \vdots \\
\mathbf A_r^\top \mathbf P + \mathbf P\mathbf A_r &\preceq -\epsilon \mathbf I.
\end{aligned}

+

Since the left-hand sides contain no affine term, we can scale the inequalities by 1/\epsilon (absorbing the factor into \mathbf P) to get the identity matrix on the right-hand side:
\begin{aligned}
\mathbf P &\succeq \mathbf I,\\
\mathbf A_1^\top \mathbf P + \mathbf P\mathbf A_1 &\preceq -\mathbf I,\\
\mathbf A_2^\top \mathbf P + \mathbf P\mathbf A_2 &\preceq -\mathbf I,\\
& \vdots \\
\mathbf A_r^\top \mathbf P + \mathbf P\mathbf A_r &\preceq -\mathbf I.
\end{aligned}

+
+
+

Solution set of an LMI is convex

+

An important property of the solution set of an LMI is that it is convex. Indeed, it is a crucial property. An implication is that if a solution \mathbf P exists for the r inequalities, then it is also a solution for an inequality given by any convex combination of the matrices \mathbf A_i. That is, if a solution \mathbf P=\mathbf P^\top \succ 0 exists such that
\begin{aligned}
\mathbf A_1^\top \mathbf P + \mathbf P\mathbf A_1 &\prec 0,\\
\mathbf A_2^\top \mathbf P + \mathbf P\mathbf A_2 &\prec 0,\\
& \vdots \\
\mathbf A_r^\top \mathbf P + \mathbf P\mathbf A_r &\prec 0,
\end{aligned}
then \mathbf P also solves the convex combination
\left(\sum_{i=1}^r\alpha_i \mathbf A_i\right)^\top \mathbf P + \mathbf P\left(\sum_{i=1}^r\alpha_i \mathbf A_i\right) \prec 0,
where \alpha_1, \alpha_2, \ldots, \alpha_r \geq 0 and \sum_i \alpha_i = 1.
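This convexity property can be illustrated numerically (a Python sketch with hand-picked example matrices, not from the text; \mathbf P = \mathbf I for simplicity): if the Lyapunov inequality holds for \mathbf A_1 and \mathbf A_2 individually, it also holds along a grid of their convex combinations.

```python
# Numerical illustration of the convexity property: if P works for A1 and A2,
# it also works for every convex combination alpha*A1 + (1-alpha)*A2.
# Example matrices chosen by hand; P = I for simplicity, so the Lyapunov
# expression A^T P + P A reduces to A + A^T.

def comb(A1, A2, a):
    # convex combination a*A1 + (1-a)*A2 of two 2x2 matrices
    return [[a*A1[r][c] + (1-a)*A2[r][c] for c in range(2)] for r in range(2)]

def lyap_form(A):
    # A^T P + P A with P = I
    return [[A[r][c] + A[c][r] for c in range(2)] for r in range(2)]

def neg_def(Q):
    # Sylvester check of Q < 0 for a 2x2 symmetric matrix
    return -Q[0][0] > 0 and (Q[0][0]*Q[1][1] - Q[0][1]*Q[1][0]) > 0

A1 = [[-1.0, 0.0], [0.0, -2.0]]
A2 = [[-2.0, 1.0], [0.0, -1.0]]

alphas = [k/10 for k in range(11)]
print(all(neg_def(lyap_form(comb(A1, A2, a))) for a in alphas))  # True
```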

+

This leads to an interesting interpretation of the requirement of (asymptotic) stability in the presence of arbitrary switching – every convex combination of the systems is stable. While we do not exploit it in our course, note that it can be used in robust control design – an uncertain system is modelled by a convex combination of some vertex models. When designing a single (robust) controller, it is sufficient to guarantee stability of the vertex models using a single Lyapunov function, and the convexity property ensures that the controller is also robust with respect to the uncertain system. A powerful property! On the other hand, it is rather strong – perhaps too strong – because it allows arbitrarily fast changes of parameters.

+

Note that we can use the convexity property when formulating this problem equivalently as the problem of stability analysis of the linear differential inclusion +\dot{\bm x} \in \mathcal{F}(\bm x), + where \mathcal{F}(\bm x) = \overline{\operatorname{co}}\{\mathbf A_1\bm x, \mathbf A_2\bm x, \ldots, \mathbf A_r\bm x\}.

+
+
+

What if quadratic LF is not enough?

+

So far we have considered quadratic Lyapunov functions – and it may be useful to display their prescription explicitly in the scalar form
\begin{aligned}
V(\bm x) &= \bm x^\top \mathbf P \bm x\\
&= \begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} \begin{bmatrix}x_1\\ x_2\end{bmatrix}\\
&= p_{11}x_1^2 + 2p_{12}x_1x_2 + p_{22}x_2^2
\end{aligned}
to show that, indeed, a quadratic Lyapunov function is a (multivariate) quadratic polynomial.

+

Now, if quadratic polynomials are not enough, it is natural to consider polynomials of a higher degree. The crucial question is, however: how do we enforce positive definiteness?

+
+
+

Positive/nonnegative polynomials

+

The question that we ask is this: is the polynomial p(\bm x), \; \bm x\in \mathbb R^n, positive (or at least nonnegative) on the whole \mathbb R^n? That is, we ask if +p(\bm x) > 0,\quad (\text{or}\quad p(\bm x) \geq 0)\; \forall \bm x\in\mathbb R^n. +

+
+

Example 1 Consider the polynomial p(\bm x)= 2x_1^4 + 2x_1^3x_2 - x_1^2x_2^2 + 5x_2^4. Is it nonnegative for all x_1\in\mathbb R, x_2\in\mathbb R?
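Before developing proper machinery, we can at least probe this question numerically. The following sketch (an illustration written here, not part of the course code) evaluates p on a grid; such sampling can suggest nonnegativity, but it never proves it.

```python
# Naive grid evaluation of p(x1, x2) = 2x1^4 + 2x1^3 x2 - x1^2 x2^2 + 5 x2^4.
# Sampling can only suggest nonnegativity; it is not a proof.

def p(x1, x2):
    return 2*x1**4 + 2*x1**3*x2 - x1**2*x2**2 + 5*x2**4

grid = [k/10 for k in range(-30, 31)]   # grid over [-3, 3]^2
m = min(p(a, b) for a in grid for b in grid)
print(m)  # 0.0, attained at the origin
```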

+
+

Additionally, \bm x can be restricted to some \mathcal X\subset \mathbb R^n and we ask if
+ + p(\bm x) \geq 0 \;\forall\; \bm x\in \mathcal X. +

+

Once we start working with polynomials, semialgebraic sets \mathcal X are often considered, as these are defined by polynomial inequalities such as
g_j(\bm x) \geq 0, \; j=1,\ldots, m.

+

We will only need this in the next chapter, when we consider the Multiple Lyapunov Functions (MLF) approach to stability analysis.

+
+

How can we check positivity/nonnegativity of polynomials?

+

Gridding is certainly not the way to go – we need conditions on the coefficients of the polynomial so that we can do some optimization later.

+
+

Example 2 Consider a univariate polynomial
p(x) = x^4 - 4x^3 + 13x^2 - 18x + 17.
Does it hold that p(x)\geq 0 \; \forall x\in \mathbb R? Without plotting (hence gridding) we can hardly say. But what if we learn that the polynomial can be written as
p(x) = (x-1)^2 + (x^2 - 2x + 4)^2?
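We can verify this decomposition by exact integer polynomial arithmetic. The following sketch represents polynomials as coefficient lists (lowest power first) and checks that the sum of the two squares reproduces p:

```python
# Verify, by exact integer polynomial arithmetic, that
#   (x - 1)^2 + (x^2 - 2x + 4)^2 = x^4 - 4x^3 + 13x^2 - 18x + 17.
# Polynomials are coefficient lists in increasing powers of x.

def pmul(p, q):
    # polynomial product via convolution of coefficient lists
    r = [0]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i+j] += a*b
    return r

def padd(p, q):
    # polynomial sum, padding the shorter list with zeros
    n = max(len(p), len(q))
    p = p + [0]*(n - len(p))
    q = q + [0]*(n - len(q))
    return [a + b for a, b in zip(p, q)]

s1 = pmul([-1, 1], [-1, 1])          # (x - 1)^2
s2 = pmul([4, -2, 1], [4, -2, 1])    # (x^2 - 2x + 4)^2
print(padd(s1, s2))                  # [17, -18, 13, -4, 1]
```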

+

Obviously, whatever the two squared polynomials are, after squaring they become nonnegative. And summing nonnegative numbers yields a nonnegative result. Let’s generalize this.

+
+
+
+
+

Sum of squares (SOS) polynomials

+

If we can express the polynomial as a sum of squares (SOS) of some other polynomials, the original polynomial is nonnegative, that is, +\boxed{p(\bm x) = \sum_{i=1}^k p_i(\bm x)^2\; \Rightarrow \; p(\bm x) \geq 0,\; \forall \bm x\in \mathbb R^n.} +

+

The converse does not hold in general – not every nonnegative polynomial is SOS! There are only three cases for which SOS is a necessary and sufficient condition of nonnegativity:

+
    +
  • n=1: univariate polynomials. The degree (the highest power) d can be arbitrarily high (but even, obviously),
  • +
  • d = 2 and n is arbitrary: multivariate polynomials of degree two (note that for p(\bm x) = x_1^2 + x_1x_2^2 the degree d=3).
  • +
  • n=2 and d = 4: bivariate polynomials of degree 4 (at maximum).
  • +
+

For all other cases all we can say is that p(\bm x)\, \text{is}\, \mathrm{SOS} \Rightarrow p(\bm x)\geq 0\, \forall \bm x\in \mathbb R^n.

+
+
+
+ +
+
+Note +
+
+
+

Hilbert conjectured in 1900, in the 17th of his famous problems, that every nonnegative polynomial can be written as a sum of squares of rational functions. This was later proved correct (by Artin in 1927). It turns out that this fact is not as useful as the SOS decomposition using polynomials, because it is impossible to state a priori bounds on the degrees of the polynomials defining those rational functions.

+
+
+
+
+

How to get an SOS representation of a polynomial (or prove that none exist)?

+
+

Univariate case

+

Back to the univariate case first. We demonstrate the procedure through an example.

+
+

Example 3 We consider the polynomial in the Example 2. One of the two squared polynomials is x^2 - 2x + 4. We can write it as x^2 - 2x + 4 = \underbrace{\begin{bmatrix}4 & -2 & 1\end{bmatrix}}_{\bm v^\top} \underbrace{\begin{bmatrix} 1 \\ x \\ x^2\end{bmatrix}}_{\bm z}. +

+

Then the squared polynomial can be written as +(x^2 - 2x + 4)^2 = \bm z^\top \bm v \bm v^\top \bm z. +

+

Note that the product \bm v \bm v^\top is a positive semidefinite matrix of rank one.

+

We can similarly express the second squared polynomial +x-1 = \underbrace{\begin{bmatrix} -1 & 1 & 0\end{bmatrix}}_{\bm v^\top} \underbrace{\begin{bmatrix} 1 \\ x \\ x^2\end{bmatrix}}_{\bm z} + and then +(x - 1)^2 = \bm z^\top \begin{bmatrix} -1 \\ 1 \\ 0\end{bmatrix} \begin{bmatrix} -1 & 1 & 0\end{bmatrix} \bm z. +

+

Summing the two squares we get the original polynomial. But while doing this, we can sum the two rank-one matrices. +\begin{aligned} +p(x) &= x^4 - 4x^3 + 13x^2 - 18x + 17\\ +&= \begin{bmatrix} 1 & x & x^2\end{bmatrix} \bm P \begin{bmatrix} 1 \\ x \\ x^2\end{bmatrix} +\end{aligned}, + where \bm P\succeq 0 is +\bm P = \underbrace{\begin{bmatrix} 4 \\ -2 \\ 1\end{bmatrix} \begin{bmatrix} 4 & -2 & 1\end{bmatrix}}_{\mathbf P_1} + \underbrace{\begin{bmatrix} -1 \\ 1 \\ 0\end{bmatrix} \begin{bmatrix} -1 & 1 & 0\end{bmatrix}}_{\mathbf P_2} +

+

The matrix that defines the quadratic form is positive semidefinite and of rank 2. Indeed, the rank of the matrix is given by the number of squared terms in the SOS decomposition.
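A quick check of this Gram-matrix construction (a Python sketch; everything below is computed from the two vectors given above): forming \mathbf P = \mathbf P_1 + \mathbf P_2, reading off the coefficients of \bm z^\top \mathbf P \bm z, and confirming that the determinant vanishes (rank 2).

```python
# Gram-matrix representation p(x) = z^T P z with z = (1, x, x^2),
# P = v1 v1^T + v2 v2^T, v1 = (4, -2, 1), v2 = (-1, 1, 0).

v1 = [4, -2, 1]
v2 = [-1, 1, 0]
P = [[v1[r]*v1[c] + v2[r]*v2[c] for c in range(3)] for r in range(3)]
print(P)  # [[17, -9, 4], [-9, 5, -2], [4, -2, 1]]

# Coefficients of p(x) = z^T P z, ordered from the constant term to x^4:
coeffs = [
    P[0][0],                # constant
    2*P[0][1],              # x
    2*P[0][2] + P[1][1],    # x^2
    2*P[1][2],              # x^3
    P[2][2],                # x^4
]
print(coeffs)  # [17, -18, 13, -4, 1]

# Rank 2: the 3x3 determinant of a sum of two rank-one matrices vanishes
det = (P[0][0]*(P[1][1]*P[2][2] - P[1][2]*P[2][1])
     - P[0][1]*(P[1][0]*P[2][2] - P[1][2]*P[2][0])
     + P[0][2]*(P[1][0]*P[2][1] - P[1][1]*P[2][0]))
print(det)  # 0
```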

+
+
+
+

Multivariate case

+

In a general multivariate case we can proceed similarly. Just form the vector \bm z from all monomials up to degree d:
\bm z = \begin{bmatrix}1 \\ x_1 \\ x_2 \\ \vdots \\ x_n\\ x_1^2 \\ x_1 x_2 \\ \vdots \\ x_n^2\\ \vdots \\ x_n^d \end{bmatrix}

+

But how to determine the coefficients of the matrix? We again show it by means of an example.

+
+

We consider the polynomial p(x_1,x_2)=2x_1^4 + 2x_1^3x_2 - x_1^2x_2^2 + 5x_2^4. We define the vector \bm z as
\bm z = \begin{bmatrix} x_1^2 \\ x_1x_2 \\ x_2^2\end{bmatrix}.

+

Note that we could also write it fully as +\bm z = \begin{bmatrix} 1 \\ x_1\\ x_2\\ x_1^2 \\ x_1x_2 \\ x_2^2\end{bmatrix}, + but we can see that the first three terms are not going to be needed. If you still cannot see it, feel free to continue with the full version of \bm z, no problem.

+

Then the polynomial can be written as +p(x_1,x_2)=\begin{bmatrix} x_1^2 \\ x_1x_2 \\ x_2^2\end{bmatrix}^\top \begin{bmatrix} p_{11} & p_{12} & p_{13}\\ p_{12} & p_{22} & p_{23}\\p_{13} & p_{23} & p_{33}\end{bmatrix} \begin{bmatrix} x_1^2 \\ x_1x_2 \\ x_2^2\end{bmatrix}. +

+

After multiplying the products out, we get
\begin{aligned}
p(x_1,x_2)&={\color{blue}p_{11}}x_1^4 + {\color{blue}p_{33}}x_2^4 \\
&\quad + {\color{blue}2p_{12}}x_1^3x_2 + {\color{blue}2p_{23}}x_1x_2^3\\
&\quad + {\color{blue}(2p_{13} + p_{22})}x_1^2x_2^2
\end{aligned}

+

There are now 5 coefficients in the above polynomial (they are highlighted in blue). Equating them to some particular values gives 5 equations.

+

But the matrix \mathbf P is parameterized by 6 coefficients, so after matching the 5 coefficients one degree of freedom remains. The remaining condition is not an equation but the constraint \mathbf P\succeq 0 – finding \mathbf P is thus an LMI feasibility problem.
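For reference, one SOS decomposition that such a feasibility problem can return – this particular one is quoted from the literature (Parrilo's classical example), not computed here – is p = \tfrac{1}{2}\left[(2x_1^2 - 3x_2^2 + x_1x_2)^2 + (x_2^2 + 3x_1x_2)^2\right]. We can verify it by exact evaluation at integer sample points:

```python
# Verify (at exact integer sample points) a known SOS decomposition of
#   p(x1, x2) = 2x1^4 + 2x1^3 x2 - x1^2 x2^2 + 5 x2^4,
# namely 2p = (2x1^2 - 3x2^2 + x1x2)^2 + (x2^2 + 3x1x2)^2.
# The decomposition itself is quoted from the literature (Parrilo).

def p(x1, x2):
    return 2*x1**4 + 2*x1**3*x2 - x1**2*x2**2 + 5*x2**4

def sos(x1, x2):
    return (2*x1**2 - 3*x2**2 + x1*x2)**2 + (x2**2 + 3*x1*x2)**2

samples = [(x1, x2) for x1 in range(-5, 6) for x2 in range(-5, 6)]
print(all(2*p(x1, x2) == sos(x1, x2) for x1, x2 in samples))  # True
```

Since both sides are polynomials of low degree, agreement on this 11-by-11 integer grid already implies agreement everywhere.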

+
+
+
+
+

Searching for a SOS polynomial Lyapunov function

+

We can now formulate the search for a nonnegative polynomial function V(\bm x) as a search within the set of SOS polynomials of a prescribed degree d.

+

However, for a Lyapunov function we need positivity (actually positive definiteness), not just nonnegativity. Therefore, instead of V(\bm x) \; \text{is SOS}, we are going to impose the condition
\boxed{
V(\bm x) - \phi(\bm x) \; \text{is SOS},}
where \phi(\bm x) = \gamma \sum_{i=1}^n\sum_{j=1}^{d} x_i^{2j} for some (typically very small) \gamma > 0. This is pretty much the same trick that we used in the LMI formulation when we needed to enforce strict inequalities.

+

What remains to complete the conditions for Lyapunov stability is to express the requirement that the time derivative of V(\bm x) is nonpositive. This can be done by requiring that minus the time derivative of the polynomial Lyapunov function, which is a polynomial too, is also SOS.

+

+\boxed +{-\nabla V(\bm x)^\top \mathbf f(\bm x)\; \text{is SOS}.} +

+

If asymptotic stability is required instead of just Lyapunov one, the condition is that the time derivative is strictly negative, that is, we can require that +\boxed +{-\nabla V(\bm x)^\top \mathbf f(\bm x) - \phi(\bm x)\; \text{is SOS}.} +

+
+
+

Searching for a SOS polynomial common Lyapunov function

+

Back to hybrid systems. For Lyapunov stability, the problem is to find V(\bm x) such that

+

+\boxed{ +\begin{aligned} +V(\bm x) - \phi(\bm x) \; &\text{is SOS},\\ +-\nabla V(\bm x)^\top \mathbf f_1(\bm x)\; &\text{is SOS},\\ +-\nabla V(\bm x)^\top \mathbf f_2(\bm x)\; &\text{is SOS},\\ +\vdots\\ +-\nabla V(\bm x)^\top \mathbf f_r(\bm x)\; &\text{is SOS}, +\end{aligned} +} + while for asymptotic stability, the conditions are +\boxed{ +\begin{aligned} +V(\bm x) - \phi(\bm x) \; &\text{is SOS},\\ +-\nabla V(\bm x)^\top \mathbf f_1(\bm x) - \phi(\bm x)\; &\text{is SOS},\\ +-\nabla V(\bm x)^\top \mathbf f_2(\bm x) - \phi(\bm x)\; &\text{is SOS},\\ +\vdots\\ +-\nabla V(\bm x)^\top \mathbf f_r(\bm x) - \phi(\bm x)\; &\text{is SOS}. +\end{aligned} +} +

Stability via multiple Lyapunov functions – B(E)3M35HYS – Hybrid systems

Stability via multiple Lyapunov functions

+
+ + + +
+ + + + +
+ + + +
+ + +

We start with an example in which we show it can happen that even if the hybrid (switched) system is stable, there is no common quadratic Lyapunov function.

+
+

Example 1 (No common quadratic Lyapunov function can be found) We consider a switched system with two modes, linear models of which are parameterized by the matrices \mathbf A_1 and \mathbf A_2:

+
+
A₁ = [-0.1 -1; 2 -0.1]
+A₂ = [-0.1 -2; 1 -0.1]
+
+

The switching curve is given by x_1=0, that is, the vertical axis. The state portrait is shown in Fig. 1.

+
+
+Show the code +
f₁(x) = A₁*x
+f₂(x) = A₂*x
+f(x) = x[1] <= 0.0 ? f₁(x) : f₂(x)
+
+using CairoMakie
+fig = Figure(size = (800, 800),fontsize=20)
+ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
+streamplot!(ax,(x₁,x₂)->Point2f(f([x₁,x₂])), -2.0..2.0, -2.0..2.0, colormap = :magma)
+vlines!(ax,0.0,ymin=-2.0,ymax=2.0, color = :red, linewidth=3)
+fig
+
+
+
+
+
+ +
+
+Figure 1: State portrait for the switched system with no common quadratic Lyapunov function +
+
+
+
+
+

The equilibrium (the origin) of this switched system appears stable.

+

The individual systems are stable, which we can immediately see by computing the eigenvalues of the matrices A_1 and A_2:

+
+
using LinearAlgebra
+eigvals(A₁), eigvals(A₂)
+
+
(ComplexF64[-0.1 - 1.4142135623730951im, -0.1 + 1.4142135623730951im], ComplexF64[-0.1 - 1.4142135623730951im, -0.1 + 1.4142135623730951im])
+
+
+

We now try to find a common quadratic Lyapunov function for both subsystems. We will formulate the problem as an LMI feasibility problem.

+
+
using Convex, SCS
+X = Semidefinite(2)
+constraint₁ = A₁'*X + X*A₁ ⪯ -Matrix{Float64}(I, 2, 2)
+constraint₂ = A₂'*X + X*A₂ ⪯ -Matrix{Float64}(I, 2, 2)
+constraints = [constraint₁, constraint₂]
+problem = satisfy(constraints)
+solve!(problem,SCS.Optimizer,silent=true)
+
+
Problem statistics
+  problem is DCP         : true
+  number of variables    : 1 (4 scalar elements)
+  number of constraints  : 3 (12 scalar elements)
+  number of coefficients : 24
+  number of atoms        : 10
+
+Solution summary
+  termination status : INFEASIBLE
+  primal status      : INFEASIBLE_POINT
+  dual status        : INFEASIBILITY_CERTIFICATE
+
+Expression graph
+  satisfy
+   └─ nothing
+  subject to
+   ├─ PSD constraint (convex)
+   │  └─ + (affine; real)
+   │     ├─ 2×2 Matrix{Float64}
+   │     └─ Convex.NegateAtom (affine; real)
+   │        └─ …
+   ├─ PSD constraint (convex)
+   │  └─ + (affine; real)
+   │     ├─ 2×2 Matrix{Float64}
+   │     └─ Convex.NegateAtom (affine; real)
+   │        └─ …
+   ├─ PSD constraint (convex)
+   │  └─ 2×2 real variable (id: 181…944)
+   ⋮
+
+
+

The solver does not find a solution. Well, perhaps trying another solver or two would make our conclusion more robust (MosekTools is another recommendable alternative to SCS). But we are now tempted to conclude that there is no common quadratic Lyapunov function for both subsystems.

+
+
+
+ +
+
+Dual LMI problem confirms the infeasibility of the primal one +
+
+
+

It is possible to formulate a dual LMI problem, whose feasibility certifies infeasibility of the primal one. Namely, the problem is whether there exist matrices \mathbf R_1\succ 0 and \mathbf R_2\succ 0 such that the following LMI holds:
\mathbf R_1\mathbf A_1^\top+\mathbf A_1\mathbf R_1 + \mathbf R_2\mathbf A_2^\top+\mathbf A_2\mathbf R_2 \prec 0.

+
+
+

The conclusion about the impossibility of finding a single quadratic Lyapunov function for both subsystems is also supported by plotting the invariant sets for the two subsystems. First, we need to compute Lyapunov functions for the two subsystems.

+
+
X₁ = Semidefinite(2)
+constraint₁ = A₁'*X₁ + X₁*A₁ ⪯ -Matrix{Float64}(I, 2, 2)
+problem₁ = satisfy(constraint₁)
+solve!(problem₁,SCS.Optimizer,silent=true)
+X₁.value
+
+
2×2 Matrix{Float64}:
+ 423.71     12.5739
+  12.5739  212.123
+
+
+
+
X₂ = Semidefinite(2)
+constraint₂ = A₂'*X₂ + X₂*A₂ ⪯ -Matrix{Float64}(I, 2, 2)
+problem₂ = satisfy(constraint₂)
+solve!(problem₂,SCS.Optimizer,silent=true)
+X₂.value
+
+
2×2 Matrix{Float64}:
+ 212.123   -12.5739
+ -12.5739  423.71
+
+
+

Generally, a Lyapunov function has the property that its sublevel set \{\bm x \mid V(\bm x) \leq \alpha\} is forward (also positive) invariant. We plot invariant sets corresponding to some particular value for both subsystems superposed on the state portrait in Fig. 2.

+
+
+Show the code +
x1s = LinRange(-2, 2, 100)
+x2s = LinRange(-2, 2, 100)
+V₁(x) = x'*X₁.value*x
+V₂(x) = x'*X₂.value*x
+V1s = [V₁([x₁,x₂]) for x₁ in x1s, x₂ in x2s]
+V2s = [V₂([x₁,x₂]) for x₁ in x1s, x₂ in x2s]
+contour!(x1s, x2s, V1s, levels=[300.0], linewidth=3, color=:blue)
+contour!(x1s, x2s, V2s, levels=[300.0], linewidth=3, color=:green)
+fig
+
+
+
+
+
+ +
+
+Figure 2: Invariant ellipses for the two subsystems superposed on the state portrait +
+
+
+
+
+

We can see in Fig. 2 that neither of the two ellipses works as an invariant set for the switched system – a state trajectory entering the set leaves it afterwards. There is no way to come up with a single ellipse (hence a single quadratic Lyapunov function) that would work here.

+
+

Good, we have seen in the example that it is not always possible to find a common quadratic Lyapunov function for a switched system, even if it is stable. We need to expand the set of functions in which we search for a Lyapunov function. We proposed one way to do it in the previous chapter, wherein we considered higher-degree polynomials on which we imposed the nonnegativity constraint in the form of an SOS constraint. Here we are going to consider another approach. We are going to stitch together several Lyapunov-like functions, each of which is a Lyapunov function only on some subset of the state space (that is why we call them just Lyapunov-like and not Lyapunov). This approach is sometimes called the Multiple Lyapunov Function (MLF) approach, or the Piecewise Lyapunov Function approach.

+
+

Multiple Lyapunov Function (MLF) approach to analysis of stability

+

Instead of just a single common Lyapunov function V(\bm x), we are now going to consider several Lyapunov-like functions V_i(\bm x),\; i=1,\ldots,r, each of which qualifies as a Lyapunov function only on some subset \Omega_i of the state space. And we “stitch” them together to form a piecewise Lyapunov function V(\bm x):
V(\bm x) =
\begin{cases}
V_1(\bm x) & \text{if } \bm x\in \Omega_1, \\
\vdots\\
V_r(\bm x) & \text{if } \bm x\in \Omega_r.
\end{cases}

+
+
+

S-procedure

+

In order to restrict the requirement of positive definiteness of the Lyapunov function to some region of the state space, and similarly for the requirement of negative definiteness of its time derivative, we need to introduce the S-procedure. This is a result about solvability of two or more quadratic inequalities, not necessarily convex ones (for convex problems we have the nonlinear Farkas’ lemma). Origins of this result can be found in control theory (analysis of stability of nonlinear systems, hence the letter S), with the first rigorous result provided by Yakubovich in the 1970s.

+

It gives conditions under which (satisfaction of) one quadratic inequality follows from (satisfaction of) another one (or more). Namely, it gives a condition under which the following implication holds:

+

\boxed +{\text{Quadratic inequality \#1 satisfied by some}\; \bm x \Rightarrow \text{Quadratic inequality \#0 satisfied by the same}\; \bm x.} +

+

In other words, it gives a condition under which the solution set of the inequality #1 denoted as \mathcal X_1 is included in the solution set \mathcal X_0 of the inequality #0.

+
+

S-procedure with nonstrict inequalities

+

Consider the general quadratic functions F_i(\bm x) = \bm x^\top \mathbf A_i \bm x + 2\mathbf b_i^\top \bm x + c_i, \; i=0,\ldots, p.

+

The question is: under which conditions it holds that F_0(\bm x) \geq 0 for all \bm x satisfying F_i(\bm x)\geq 0,\; i=1,\ldots,p ?

+

In other words, we are looking for conditions under which the implication +F_i(\bm x) \geq 0,\; i=1,\ldots,p \quad \Rightarrow \quad F_0(\bm x) \geq 0 +
+holds.

+

In the simplest (yet relevant) case p=1 we search for conditions under which F_0(\bm x) \geq 0 for all \bm x satisfying F_1(\bm x)\geq 0, that is, conditions under which the implication +F_1(\bm x) \geq 0 \quad \Rightarrow \quad F_0(\bm x) \geq 0 +
+holds.

+
+
+

Sufficient conditions

+

The existence of \alpha_i\geq 0,\; i=1,\ldots,p such that +F_0(\bm x)-\sum_{i=1}^p \alpha_i F_i(\bm x) \geq 0 + is sufficient for the original implication to hold. Generally, it is not necessary; the condition is conservative.

+

It can be formulated as an LMI +\begin{bmatrix} +\mathbf A_0 & \mathbf b_0 \\ +\mathbf b_0^\top & c_0 +\end{bmatrix} - +\sum_{i=1}^p +\alpha_i +\begin{bmatrix} +\mathbf A_i & \mathbf b_i \\ +\mathbf b_i^\top & c_i +\end{bmatrix} +\succeq 0 +
+where \alpha_i \geq 0,\; i=1,\ldots,p.
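A tiny worked instance of the sufficient condition with p=1 (the functions below are constructed here for illustration): take F_1(x) = 1 - x^2 and F_0(x) = 4 - x^2. The implication F_1(x)\geq 0 \Rightarrow F_0(x)\geq 0 obviously holds, and \alpha = 1 provides the certificate, since F_0(x) - F_1(x) = 3 \geq 0 identically.

```python
# Tiny S-procedure example (constructed for illustration):
#   F1(x) = 1 - x^2 >= 0   (i.e., |x| <= 1)
#   F0(x) = 4 - x^2 >= 0   (i.e., |x| <= 2)
# The certificate is a single alpha >= 0 with F0(x) - alpha*F1(x) >= 0
# for all x; with alpha = 1 the difference is identically 3.

alpha = 1.0
xs = [k/10 for k in range(-50, 51)]  # sample points in [-5, 5]

cert = all(4 - x**2 - alpha*(1 - x**2) >= 0 for x in xs)       # certificate
impl = all(4 - x**2 >= 0 for x in xs if 1 - x**2 >= 0)         # implication
print(cert, impl)  # True True
```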

+
+
+

Sufficient and also necessary

+

It is nontrivial that for p=1 it is also necessary, provided that there is some \bm x_0 such that F_1(\bm x_0)>0. Then we have the following equivalence between the two constraints: +\begin{aligned} +F_0(\bm x) &\geq 0 \; \forall \bm x \;\mathrm{satisfying}\; F_1(\bm x)\geq 0 \\ +&\Longleftrightarrow \\ +F_0(\bm x)-\alpha F_1(\bm x) &\geq 0,\;\text{for some}\; \alpha\in\mathbb{R}, \; \alpha\geq 0, +\end{aligned} +
+which again can be formulated as an LMI, namely
+ +\begin{bmatrix} +\mathbf A_0 & \mathbf b_0 \\ +\mathbf b_0^\top & c_0 +\end{bmatrix} - \alpha +\begin{bmatrix} +\mathbf A_1 & \mathbf b_1 \\ +\mathbf b_1^\top & c_1 +\end{bmatrix} +\succeq 0,\quad \alpha\geq 0. +

+
+
+

More on S-procedure

+

There are several variants

+
    +
  • strict, nonstrict or mixed inequalities,
  • +
  • just two or more,
  • +
  • some of the constraints can be equations.
  • +
+
+
+
+

Piecewise quadratic Lyapunov function

+

We now restrict ourselves to quadratic Lyapunov-like functions, that is, quadratic functions V_i(\bm x) = \bm x^\top \mathbf P_i \bm x that qualify as Lyapunov functions only on the respective subsets \Omega_i\subset \mathbb R^n:

+

+V_i(\bm x) = \bm x^\top \mathbf P_i \bm x > 0\quad \forall \;\bm x\in \Omega_i, +

+

+\dot V_i(\bm x) = \bm x^\top \left( \mathbf A_i^\top \mathbf P_i + \mathbf P_i \mathbf A_i \right) \bm x < 0\quad \forall \;\bm x\in \Omega_i. +

+
+
+

Using comparison functions and nonstrict inequalities

+

We can use our good old comparison functions to formulate the conditions of positive definiteness and negative definiteness. +\alpha_1 \bm x^\top \mathbf I \bm x \leq \bm x^\top \mathbf P_i \bm x \leq \alpha_2 \bm x^\top \mathbf I \bm x \quad \forall \;\bm x\in \Omega_i, +

+

+\bm x^\top \left( \mathbf A_i^\top \mathbf P_i + \mathbf P_i \mathbf A_i \right) \bm x \leq -\alpha_3 \bm x^\top \mathbf I \bm x\quad \forall \;\bm x\in \Omega_i. +

+

The difference now is that these conditions are only required to hold on some state regions, some subsets of the state space. It is now time to discuss how to characterize those regions.

+
+
+

Characterization of subsets of state space using LMI

+

Some subsets \Omega_i\subset \mathbb R^n characterized using linear and quadratic inequalities can be formulated within the LMI framework as
\bm x^\top \mathbf Q_i \bm x \geq 0.

+

In particular, centered ellipsoids and cones.

+

For example,
+ +\begin{aligned} +\Omega_i &= \{\bm x \in \mathbb R^n \mid (\mathbf c^\top \bm x \geq 0 \land \mathbf d^\top \bm x \geq 0) \\ +& \qquad \qquad \qquad \lor (\mathbf c^\top \bm x \leq 0 \land \mathbf d^\top \bm x \leq 0)\}. +\end{aligned} +

+

This constraint can be reformulated as +(\mathbf c^\top \bm x) (\mathbf d^\top \bm x) \geq 0, +
+which can be reformatted to +\bm x^\top \mathbf c \mathbf d^\top \bm x \geq 0, +
+which can further be symmetrized to +\bm x^\top \left(\frac{\mathbf c \mathbf d^\top + \mathbf d \mathbf c^\top}{2}\right) \bm x \geq 0. +
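A sanity check of this symmetrization (a Python sketch; the vectors c and d are arbitrary illustrative choices): the quadratic form with \mathbf Q = (\mathbf c \mathbf d^\top + \mathbf d \mathbf c^\top)/2 agrees with the product (\mathbf c^\top \bm x)(\mathbf d^\top \bm x) at every sample point.

```python
# Check that x^T Q x with Q = (c d^T + d c^T)/2 equals (c^T x)(d^T x).
# The vectors c and d are arbitrary illustrative choices.

c = [1, -2]
d = [3, 1]

def quad_form(x):
    Q = [[(c[r]*d[s] + d[r]*c[s])/2 for s in range(2)] for r in range(2)]
    return sum(x[r]*Q[r][s]*x[s] for r in range(2) for s in range(2))

def product_form(x):
    return (c[0]*x[0] + c[1]*x[1]) * (d[0]*x[0] + d[1]*x[1])

samples = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
print(all(abs(quad_form(x) - product_form(x)) < 1e-9 for x in samples))  # True
```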

+

More general sets (general polyhedra, noncentered ellipsoids) can also be modelled using LMI too… We are going to have a look at them, but first we hurry to show how to combine the subset characterization and Lyapunov-ness using the S-procedure.

+
+
+

Combining the subset characterization and Lyapunov-ness using the S-procedure

+

We want to learn whether the following hold
\alpha_i \bm x^\top \mathbf I \bm x \leq \bm x^\top \mathbf P_i \bm x,

+

+\bm x^\top \left( \mathbf A_i^\top \mathbf P_i + \mathbf P_i \mathbf A_i \right) \bm x \leq -\gamma_i \bm x^\top \mathbf I \bm x, +

+

but not for all \bm x, but only for \bm x\in \Omega_i, that is, all \bm x satisfying \bm x^\top \mathbf Q_i \bm x \geq 0. But this is now a perfect opportunity for application of the S-procedure:

+

+\mathbf P_i - \alpha_i \mathbf I - \mu_i \mathbf Q_i \succeq 0,\quad \mu_i \geq 0,\; \alpha_i > 0, +

+

\mathbf A_i^\top \mathbf P_i + \mathbf P_i \mathbf A_i + \gamma_i \mathbf I + \xi_i \mathbf Q_i \preceq 0,\quad \xi_i \geq 0,\; \gamma_i > 0.

+
+
+

More general sets using LMI

+

How can we model more general sets using LMI?

+

The inequality +\bm x^\top \mathbf Q \bm x + 2\mathbf r^\top \bm x + s \geq 0, +

+

can be reformulated as +\begin{bmatrix} +\bm x^\top & 1 +\end{bmatrix} +\underbrace{ +\begin{bmatrix} +\mathbf Q & \mathbf r \\ \mathbf r^\top & s +\end{bmatrix}}_{\bar{\mathbf{Q}}} +\underbrace{ +\begin{bmatrix} +\bm x \\ 1 +\end{bmatrix}}_{\bar{\bm x}} +\geq 0, +

+

that is, as a quadratic inequality \bar{\bm x}^\top \bar{\mathbf Q}\, \bar{\bm x} \geq 0 in the extended vector \bar{\bm x}. Note that \bar{\mathbf Q} itself need not be positive semidefinite – the inequality is only required to hold for vectors of the form \bar{\bm x} = \begin{bmatrix}\bm x^\top & 1\end{bmatrix}^\top, which is exactly the form needed by the S-procedure.
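The homogenization step can be checked numerically (a Python sketch; the data Q, r, s below are arbitrary illustrative choices): the lifted quadratic form in \bar{\bm x} = (\bm x, 1) reproduces the original inhomogeneous quadratic function.

```python
# Check the homogenization: with xbar = (x, 1),
#   xbar^T [[Q, r], [r^T, s]] xbar = x^T Q x + 2 r^T x + s.
# The data Q, r, s are arbitrary illustrative choices.

Q = [[2, 1], [1, 3]]
r = [1, -1]
s = 4

def lifted(x):
    xb = x + [1]
    Qb = [row + [r[i]] for i, row in enumerate(Q)] + [r + [s]]
    return sum(xb[i]*Qb[i][j]*xb[j] for i in range(3) for j in range(3))

def direct(x):
    quad = sum(x[i]*Q[i][j]*x[j] for i in range(2) for j in range(2))
    lin = 2*(r[0]*x[0] + r[1]*x[1])
    return quad + lin + s

samples = [[a, b] for a in range(-3, 4) for b in range(-3, 4)]
print(all(lifted(x) == direct(x) for x in samples))  # True
```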

+
+

Affine subspace

+

+\mathbf c^\top \bm x + d \geq 0, +

+

+\begin{bmatrix} +\bm x^\top & 1 +\end{bmatrix} +\begin{bmatrix} +\mathbf 0 & \mathbf c \\ \mathbf c^\top & 2d +\end{bmatrix} +\begin{bmatrix} +\bm x \\ 1 +\end{bmatrix} +\geq 0, +

+

with the matrix
\begin{bmatrix}
\mathbf 0 & \mathbf c \\ \mathbf c^\top & 2d
\end{bmatrix}
playing the role of \bar{\mathbf Q} (note that it is indefinite unless \mathbf c = \mathbf 0).

+

But then the Lyapunov-like functions and system matrices must also be extended
V(\bm x) =
\begin{bmatrix}
\bm x^\top & 1
\end{bmatrix}
\underbrace{
\begin{bmatrix}
\mathbf P & \mathbf P_{12} \\ \mathbf P_{12}^\top & P_{22}
\end{bmatrix}}_{\bar{\mathbf P}}
\begin{bmatrix}
\bm x \\ 1
\end{bmatrix},

+

+\bar{\mathbf{A}} = +\begin{bmatrix} +\mathbf A & \mathbf 0 \\ \mathbf 0 & 0 +\end{bmatrix}. +

+
+
+
+

Continuity conditions

+

The boundary between the regions \Omega_i and \Omega_j can be parameterized as
\Omega_{ij} = \{\bm x \in \mathbb R^n \mid \bm x = \mathbf F_{ij} \bm z + \mathbf{l}_{ij},\; \bm z\in \mathbb R^p\},
where \mathbf F_{ij}\in \mathbb R^{n\times p} and \mathbf l_{ij}\in \mathbb R^{n}.

+

Allowing the Lyapunov-like functions to contain affine terms, V_i(\bm x) = \bm x^\top \mathbf P_i \bm x + 2\bm q_i^\top \bm x + r_i, the continuity conditions are
V_i(\bm x) = V_j(\bm x) \quad \forall \bm x \in \Omega_{ij},
which can be reformulated as
\begin{aligned}
&\left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right)^\top \mathbf P_i \left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right) \\
& \qquad + 2\left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right)^\top \bm q_i + r_i \\
&= \left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right)^\top \mathbf P_j \left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right) \\
& \qquad + 2\left(\mathbf F_{ij} \bm z + \mathbf{l}_{ij}\right)^\top \bm q_j + r_j.
\end{aligned}

+

Collecting the terms with equal powers of \bm z yields
\begin{aligned}
\mathbf F_{ij}^\top (\mathbf P_i - \mathbf P_j) \mathbf F_{ij} &= 0, \\
\mathbf F_{ij}^\top (\mathbf P_i - \mathbf P_j) \mathbf l_{ij} + \mathbf F_{ij}^\top(\bm q_i-\bm q_j) &= 0, \\
\mathbf l_{ij}^\top (\mathbf P_i - \mathbf P_j)\mathbf l_{ij} + 2\mathbf l_{ij}^\top (\bm q_i-\bm q_j) + r_i-r_j &= 0.
\end{aligned}

Barrier certificates – B(E)3M35HYS – Hybrid systems

Barrier certificates

+
+ + + +
+ + + + +
+ + + +
+ + +

This is another technique for verification of safety of hybrid systems. Unlike the optimal-control-based and set-propagation-based techniques, it is not based on an explicit computational characterization of the evolution of the states in time. Instead, it is based on searching for a function of the state that satisfies certain properties. The function is called a barrier function and it serves as a certificate of safety.

+

For notational and conceptual convenience we start with an explanation of the method for continuous systems, and only then we extend it to hybrid systems.

+
+

Barrier certificate for continuous systems

+

We consider a continuous-time dynamical system modelled by
\dot{\bm x}(t) = \mathbf f(\bm x, \bm d),
where \bm d represents an uncertainty in the system description – it can be an uncertain parameter or an external disturbance acting on the system.

+

We now define two regions of the state space:

+
    +
  • the set of initial states \mathcal X_0,
  • +
  • and the set of unsafe states \mathcal X_\mathrm{u}.
  • +
+

Our goal is to prove (certify) that the system does not reach the unsafe states for an arbitrary initial state \bm x(0)\in \mathcal X_0 and for an arbitrary \bm d\in \mathcal D.

+

We define a barrier function B(\bm x) with the following three properties

+

B(\bm x) > 0,\quad \forall \bm x \in \mathcal X_\mathrm{u},

+

B(\bm x) \leq 0,\quad \forall \bm x \in \mathcal X_0,

+

\nabla B(\bm x)^\top \mathbf f(\bm x, \bm d) \leq 0,\quad \forall \bm x, \bm d \, \text{such that} \, B(\bm x) = 0.

+

Now, upon finding a function B(\bm x) with such properties, we have proved (certified) safety of the system – the function serves as a certificate of safety. Indeed, every trajectory starts in the region where B(\bm x) \leq 0, and in order to reach the unsafe set, where B(\bm x) > 0, it would have to cross the level set B(\bm x) = 0; the third condition guarantees that B cannot increase there, hence the crossing never happens.

+
+
+
+ +
+
+Note +
+
+
+

It cannot go unnoticed that the properties of a barrier function B(\bm x) and the motivation for finding it resemble those of a Lyapunov function. Indeed, the two concepts are related. But they are not the same.

+
+
+

How do we find such a function? We will reuse the computational technique based on sum-of-squares (SOS) polynomials that we already used for Lyapunov functions. But first we need to handle one unpleasant aspect of the third condition above – it is imposed only on the set \{\bm x \mid B(\bm x) = 0\}, which depends on the unknown function B itself, and this makes the problem nonconvex.

+
+
+

Convex relaxation of the barrier certificate problem

+

We relax the third condition so that it holds not only on the set where B(\bm x) = 0 but everywhere. The three conditions then read B(\bm x) > 0,\quad \forall \bm x \in \mathcal X_\mathrm{u},

+

B(\bm x) \leq 0,\quad \forall \bm x \in \mathcal X_0,

+

\nabla B(\bm x)^\top \mathbf f(\bm x, \bm d) \leq 0,\quad \forall \bm x\in \mathcal X, \bm d \in \mathcal D.
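Before the example, it may help to see how the relaxed conditions translate into a sum-of-squares program, which is essentially what SumOfSquares.jl assembles when given set-constrained inequalities through its domain keyword. As an illustration (the symbols g, h, w and the multipliers \sigma are our ad hoc notation here), suppose the sets are described by polynomial inequalities \mathcal X_\mathrm{u} = \{\bm x \mid g(\bm x) \geq 0\}, \mathcal X_0 = \{\bm x \mid h(\bm x) \geq 0\} and \mathcal D = \{\bm d \mid w(\bm d) \geq 0\}. A standard generalized S-procedure then searches for a polynomial B, SOS multipliers \sigma_\mathrm{u}, \sigma_0, \sigma_\mathrm{d} and a small \varepsilon > 0 such that

```latex
\begin{aligned}
B(\bm x) - \varepsilon - \sigma_\mathrm{u}(\bm x)\,g(\bm x) &\quad \text{is SOS},\\
-B(\bm x) - \sigma_0(\bm x)\,h(\bm x) &\quad \text{is SOS},\\
-\nabla B(\bm x)^\top \mathbf f(\bm x, \bm d) - \sigma_\mathrm{d}(\bm x, \bm d)\,w(\bm d) &\quad \text{is SOS}.
\end{aligned}
```

Nonnegativity of each expression on the corresponding set then follows by inspection (for instance, g(\bm x) \geq 0 implies B(\bm x) \geq \varepsilon + \sigma_\mathrm{u}(\bm x) g(\bm x) > 0), and the search is a semidefinite program.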

+

Let’s now demonstrate this by means of an example.

+
+

Example 1 Consider the system modelled by +\begin{aligned} +\dot x_1 &= x_2\\ +\dot x_2 &= -x_1 + \frac{p}{3}x_1^3 - x_2, +\end{aligned} + where the parameter p\in [0.9,1.1].

+

The initial set is given by +\mathcal X_0 = \{ \bm x \in \mathbb R^2 \mid (x_1-1.5)^2 + x_2^2 \leq 0.25 \} + and the unsafe set is given by +\mathcal X_\mathrm{u} = \{ \bm x \in \mathbb R^2 \mid (x_1+1)^2 + (x_2+1)^2 \leq 0.16 \}. +

+

The vector field \mathbf f and the initial and unsafe sets are shown in the figure below.

+
+
+Show the code +
using SumOfSquares
+using DynamicPolynomials
+# using MosekTools     
+using CSDP
+
+optimizer = optimizer_with_attributes(CSDP.Optimizer, MOI.Silent() => true)
+model = SOSModel(optimizer)
+@polyvar x[1:2] 
+
+p = 1;  # nominal value; the certificate below is for p = 1 only, not the whole range p ∈ [0.9, 1.1]
+
+f = [ x[2],
+     -x[1] + (p/3)*x[1]^3 - x[2]]
+
+g₁ = -(x[1]+1)^2 - (x[2]+1)^2 + 0.16  # 𝒳ᵤ = {x ∈ R²: g₁(x) ≥ 0}
+h₁ = -(x[1]-1.5)^2 - x[2]^2 + 0.25    # 𝒳₀ = {x ∈ R²: h₁(x) ≥ 0}
+
+X = monomials(x, 0:4)
+@variable(model, B, Poly(X))
+
+ε = 0.001
+@constraint(model, B >= ε, domain = @set(g₁ >= 0))
+
+@constraint(model, B <= 0, domain = @set(h₁ >= 0))
+
+using LinearAlgebra # Needed for `dot`
+dBdt = dot(differentiate(B, x), f)
+@constraint(model, -dBdt >= 0)
+
+JuMP.optimize!(model)
+
+JuMP.primal_status(model)
+
+import DifferentialEquations, Plots, ImplicitPlots
+function phase_plot(f, B, g₁, h₁, quiver_scaling, Δt, X0, solver = DifferentialEquations.Tsit5())
+    X₀plot = ImplicitPlots.implicit_plot(h₁; xlims=(-2, 3), ylims=(-2.5, 2.5), resolution = 1000, label="X₀", linecolor=:blue)
+    Xᵤplot = ImplicitPlots.implicit_plot!(g₁; xlims=(-2, 3), ylims=(-2.5, 2.5), resolution = 1000, label="Xᵤ", linecolor=:teal)
+    Bplot  = ImplicitPlots.implicit_plot!(B; xlims=(-2, 3), ylims=(-2.5, 2.5), resolution = 1000, label="B = 0", linecolor=:red)
+    Plots.plot(X₀plot)
+    Plots.plot!(Xᵤplot)
+    Plots.plot!(Bplot)
+    ∇(vx, vy) = [fi(x[1] => vx, x[2] => vy) for fi in f]
+    ∇pt(v, p, t) = ∇(v[1], v[2])
+    function traj(v0)
+        tspan = (0.0, Δt)
+        prob = DifferentialEquations.ODEProblem(∇pt, v0, tspan)
+        return DifferentialEquations.solve(prob, solver, reltol=1e-8, abstol=1e-8)
+    end
+    ticks = -5:0.5:5
+    X = repeat(ticks, 1, length(ticks))
+    Y = X'
+    Plots.quiver!(X, Y, quiver = (x, y) -> ∇(x, y) / quiver_scaling, linewidth=0.5)
+    for x0 in X0
+        Plots.plot!(traj(x0), idxs=(1, 2), label = nothing)
+    end
+    Plots.plot!(xlims = (-2, 3), ylims = (-2.5, 2.5))
+end
+
+phase_plot(f, value(B), g₁, h₁, 10, 30.0, [[x1, x2] for x1 in 1.2:0.2:1.7, x2 in -0.35:0.1:0.35])
+
+
+
┌ Warning: At t=4.423118290940107, dt was forced below floating point epsilon 8.881784197001252e-16, and step error estimate = 1.139033855908175. Aborting. There is either an error in your model specification or the true solution is unstable (or the true solution can not be represented in the precision of Float64).
+└ @ SciMLBase ~/.julia/packages/SciMLBase/VAClc/src/integrator_interface.jl:623
+
+
+[Figure: phase portrait with the vector field, the initial set X₀ (blue), the unsafe set Xᵤ (teal), the zero level set B = 0 (red), and sample trajectories initiated in X₀.]
+
+
+
+
+

Barrier certificate for hybrid systems

+

For a hybrid automaton with l locations \{q_1,q_2,\ldots,q_l\}, not just one but l barrier functions/certificates are needed:

+

B_i(\bm x) > 0,\quad \forall \bm x \in \mathcal X_\mathrm{u}(q_i),

+

B_i(\bm x) \leq 0,\quad \forall \bm x \in \mathcal X_0(q_i),

+

\nabla B_i(\bm x)^\top \mathbf f_i(\bm x, \bm u) \leq 0,\quad \forall \bm x, \bm u \, \text{such that} \, B_i(\bm x) = 0,

+

+\begin{aligned} +B_i(\bm x) \leq 0,\quad &\forall \bm x \in \mathcal R(q_j,q_i,\bm x^-)\,\text{for some}\, q_j\,\\ +&\text{and}\, \bm x^-\in\mathcal G(q_j,q_i)\,\text{with}\, B_j(\bm x^-)\leq 0. +\end{aligned} +

+
+
+

Convex relaxation of barrier certificates for hybrid systems

+

Analogously to the continuous case, the conditions involving sets defined through B_i are relaxed so that they hold on larger, fixed sets: \nabla B_i(\bm x)^\top \mathbf f_i(\bm x, \bm u) \leq 0,\quad \forall \bm x\in \mathcal X(q_i), \bm u\in\mathcal U(q_i),

+

+\begin{aligned} +B_i(\bm x) \leq 0,\quad &\forall (\bm x, \bm x^-)\,\text{such that}\, \bm x \in \mathcal R(q_j,q_i,\bm x^-) \\ +&\text{and}\, \bm x^-\in\mathcal G(q_j,q_i). +\end{aligned} +

+ + +
+ + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/verification_barrier.html b/verification_barrier.html index 850ab22..295f60d 100644 --- a/verification_barrier.html +++ b/verification_barrier.html @@ -816,1403 +816,1403 @@

diff --git a/verification_intro 15.html b/verification_intro 15.html new file mode 100644 index 0000000..5ff709d --- /dev/null +++ b/verification_intro 15.html @@ -0,0 +1,1059 @@ +What is verification? – B(E)3M35HYS – Hybrid systems
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

What is verification?

+
+ + + +
+ + + + +
+ + + +
+ + +

About…

+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/verification_reachability 10.html b/verification_reachability 10.html new file mode 100644 index 0000000..faddb17 --- /dev/null +++ b/verification_reachability 10.html @@ -0,0 +1,1059 @@ + + + + + + + + + +Reachability – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Reachability

+
+ + + +
+ + + + +
+ + + +
+ + +

About this site

+ + + + Back to top
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/verification_references 17.html b/verification_references 17.html new file mode 100644 index 0000000..c946582 --- /dev/null +++ b/verification_references 17.html @@ -0,0 +1,1126 @@ + + + + + + + + + +Literature – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Literature

+
+ + + +
+ + + + +
+ + + +
+ + +

The topic of verification of hybrid systems is vast. While we only reserve a single week/chapter/block for it, it could easily fill a dedicated course, supported by a couple of books. With our smaller time budget, we find encouragement that a modest introduction is feasible in Chapter 3 of [1]. Although we are not following the book closely, we cover some of its topics.

+

Among general references for hybrid system verification, we can recommend [2]. Although the book is not freely available for download, its web page contains a good deal of additional material such as slides and code.

+
+

Reachability analysis

+

[3] [4]

+
+
+

Barrier certificates

+

[5]

+
+
+

Temporal logics

+

[6], [7]

+

[8], [9].

+ + + +
+ + Back to top

References

+
+
[1]
H. Lin and P. J. Antsaklis, Hybrid Dynamical Systems: Fundamentals and Methods. in Advanced Textbooks in Control and Signal Processing. Cham: Springer, 2022. Accessed: Jul. 09, 2022. [Online]. Available: https://doi.org/10.1007/978-3-030-78731-8
+
+
+
[2]
S. Mitra, Verifying Cyber-Physical Systems: A Path to Safe Autonomy. in Cyber Physical Systems Series. Cambridge, MA, USA: MIT Press, 2021. Available: https://sayanmitracode.github.io/cpsbooksite/about.html
+
+
+
[3]
M. Althoff, G. Frehse, and A. Girard, “Set Propagation Techniques for Reachability Analysis,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 4, no. 1, pp. 369–395, 2021, doi: 10.1146/annurev-control-071420-081941.
+
+
+
[4]
M. Althoff, N. Kochdumper, M. Wetzlinger, and T. Ladner, CORA 2024 Manual.” 2023. Accessed: Jan. 10, 2023. [Online]. Available: https://tumcps.github.io/CORA/data/archive/manual/Cora2024Manual.pdf
+
+
+
[5]
S. Prajna and A. Jadbabaie, “Safety Verification of Hybrid Systems Using Barrier Certificates,” in Hybrid Systems: Computation and Control, R. Alur and G. J. Pappas, Eds., in Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 2004, pp. 477–492. doi: 10.1007/978-3-540-24743-2_32.
+
+
+
[6]
C. Baier and J.-P. Katoen, Principles of Model Checking. Cambridge, MA, USA: MIT Press, 2008. Available: https://mitpress.mit.edu/books/principles-model-checking
+
+
+
[7]
E. M. Clarke, Jr, O. Grumberg, D. Kroening, D. Peled, and H. Veith, Model Checking, 2nd ed. in Cyber Physical Systems Series. Cambridge, MA, USA: MIT Press, 2018. Available: https://mitpress.mit.edu/9780262038836/model-checking/
+
+
+
[8]
R. M. Murray, U. Topcu, and N. Wongpiromsarn, “Lecture 3 Linear Temporal Logic (LTL).” Belgrade (Serbia), Mar. 2020. Available: http://www.cds.caltech.edu/~murray/courses/eeci-sp2020/L3_ltl-09Mar2020.pdf
+
+
+
[9]
N. Wongpiromsarn, R. M. Murray, and U. Topcu, “Lecture 4 Model Checking and Logic Synthesis.” Belgrade (Serbia), Mar. 2020. Available: http://www.cds.caltech.edu/~murray/courses/eeci-sp2020//L4_model_checking-09Mar2020.pdf
+
+
+ + +
+
+ +
+ + + + + \ No newline at end of file diff --git a/verification_temporal_logics 9.html b/verification_temporal_logics 9.html new file mode 100644 index 0000000..a8763e6 --- /dev/null +++ b/verification_temporal_logics 9.html @@ -0,0 +1,1191 @@ + + + + + + + + + +Temporal logics – B(E)3M35HYS – Hybrid systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Temporal logics

+
+ + + +
+ + + + +
+ + + +
+ + +

It is natural to invoke the standard (propositional) logic when formulating requirements – we require that “if this and that conditions are satisfied, then yet another condition must not hold”, and so on.

+

It turns out, however, that propositional logic is not expressive enough for specifying requirements on discrete-event and hybrid systems, whose states evolve causally in time. Temporal logics add the missing expressiveness.

+

Indeed, the plural is correct – there are several temporal logics. Before listing the most common ones, we introduce the key temporal operators that are going to be used together with logical operators.

+
+

Temporal operators

+

The name might be misleading here – the adjective temporal has nothing to do with time as measured by the wall clock. Instead, as (discrete) state trajectories form sequences, temporal operators help express when certain properties must (or must not) be satisfied along the state trajectories.

+
+

Example 1 Consider the state automaton for a controller for two traffic lights. The state trajectory for each light is a sequence over the set of color states \{\text{green}, \text{yellow}, \text{red}, \text{red-yellow}\} of the traffic light. We may want to impose a requirement that \text{green} is never on at both lights at the same time. This we can easily express with the standard logical operators alone, namely \neg(\text{green}_1 \land \text{green}_2). But now consider that we require that sooner or later, \text{green} must be on for each light (to guarantee fairness). And that this must be true all the time, that is, \text{green} must come infinitely often. And, furthermore, that \text{red} cannot come immediately after its respective \text{green}.

+
+

Requirements like these cannot be expressed with standard logical operators such as \lnot, \land, \lor, \implies and \iff, and temporal operators must be introduced. Here they are.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Temporal operators
SymbolAlternative symbolMeaning
\mathbf{F}\DiamondEventually (Finally)
\mathbf{G}\BoxGlobally (Always)
\mathbf{X}\bigcircNeXt
\mathbf{U}\sqcupUntil
+

We explain their use as we introduce our first temporal logic.

+
+
+

Linear temporal logic (LTL)

+

“Linear” refers to linearity in time (one state after another, as opposed to branching). Consider a sequence of discrete states (aka a state trajectory or path) of a given discrete-event or hybrid system initiated at some state x.

+

We now consider some property \phi(x) of a sequence of states emanating from the state x; \phi(x) evaluates to true or false. While the argument of \phi is a particular state, the full sequence emanating from it is taken into consideration when evaluating \phi(x).

+

In order to express requirements on future states, \phi cannot be just a propositional formula; it must be an LTL formula. Here is a formal (grammar-style) definition: +\begin{aligned} +\phi &= \text{true} \, | \, p \, | \, \neg \phi_1 \, | \, \phi_1 \land \phi_2 \, | \, \phi_1 \lor \phi_2 \\ +&\quad | \, \mathbf{X} \phi_1 \, | \, \mathbf{F} \phi_1 \, | \, \mathbf{G} \phi_1 \, | \, \phi_1 \mathbf{U} \phi_2 +\end{aligned} +
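Only some of these operators are primitive – the eventually and globally operators can be expressed using the until operator and negation:

```latex
\mathbf{F}\,\phi \;\equiv\; \text{true}\;\mathbf{U}\;\phi, \qquad
\mathbf{G}\,\phi \;\equiv\; \neg\,\mathbf{F}\,\neg\phi.
```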

+

Given an LTL formula \phi, we write x \models \phi and say that the state x satisfies \phi if \phi is true for all possible state trajectories starting at x.
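Returning to the traffic-light requirements of Example 1, they can now be written compactly as LTL formulas (with \text{green}_i and \text{red}_i regarded as atomic propositions for the i-th light):

```latex
\mathbf{G}\,\neg(\text{green}_1 \land \text{green}_2), \qquad
\mathbf{G}\,\mathbf{F}\,\text{green}_1 \,\land\, \mathbf{G}\,\mathbf{F}\,\text{green}_2, \qquad
\mathbf{G}\,\bigl(\text{green}_i \implies \neg\,\mathbf{X}\,\text{red}_i\bigr),\; i \in \{1,2\}.
```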

+
+
+

Examples of LTL formulas

+

+\mathbf{G}\neg \phi + (\phi never holds – a safety property)

+

+\mathbf{G}\mathbf{F} \phi + (\phi holds infinitely often – recurrence)

+

+\mathbf{F}\mathbf{G} \phi + (from some point on, \phi holds forever – persistence)

+

+\mathbf{F}(\phi_1 \land \mathbf{X}\mathbf{F}\phi_2) + (\phi_1 holds at some point and \phi_2 holds strictly later – sequencing)
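The flavour of these operators can be conveyed with a toy evaluator over a finite trace (a sketch of ours, not part of the course software; genuine LTL semantics is defined over infinite sequences, so a finite trace only approximates operators such as \mathbf{G} and \mathbf{F}):

```julia
# Toy finite-trace semantics of the basic temporal operators.
# A property p is a predicate on a single state; a trace is a vector of states.
F(p, trace) = any(p, trace)                       # eventually: p holds at some state
G(p, trace) = all(p, trace)                       # globally: p holds at every state
X(p, trace) = length(trace) >= 2 && p(trace[2])   # next: p holds at the second state
function U(p, q, trace)                           # until: p holds until q first holds
    for s in trace
        q(s) && return true
        p(s) || return false
    end
    return false                                  # q never held on this finite trace
end

trace = [:green, :yellow, :red, :redyellow, :green]
F(s -> s == :red, trace)                          # true: red occurs
G(s -> s != :red, trace)                          # false: red does occur
U(s -> s != :red, s -> s == :red, trace)          # true: no red until the first red
```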

+
+
+

CTL* (CTL mixed with LTL) supports branching

+
    +
  • Path quantifiers are needed +
      +
    • \mathbf{A}: for all paths
    • +
    • \mathbf{E}: there exists a path
    • +
  • +
+ + +
+ + Back to top
+ + +
+ + + + + + \ No newline at end of file