My Cribsheet for Dynamical Systems

The most general (and maybe most important) fact I know about differential equations is this: we generally do not know how to solve them. There are some very simple or very luckily shaped equations that people have systematically solved. For the remaining infinitude of unsolvable equations, one can only analyze them qualitatively, unless one wants to devote a career to solving them.

1 Ordinary Differential Equation (ODE)

The term "ordinary" is used in contrast with partial differential equations (PDE) which may be with respect to more than one independent variable.

There are several properties that we use to categorize ODEs.

1.1 First-Order Linear ODE

First-Order Linear Homogeneous ODE

A general first-order linear homogeneous ODE can be written as

$$\frac{dx}{dt} = p(t)\,x$$

The solution is straightforward. We separate variables and integrate both sides:

$$\int \frac{1}{x}\,dx = \int p(t)\,dt \;\Longrightarrow\; \ln x = \int p(t)\,dt \;\Longrightarrow\; x(t) = c\,e^{\int p(t)\,dt}$$

First-Order Linear Non-homogeneous ODE

A general first-order linear non-homogeneous ODE can be written as

$$\frac{dx}{dt} = p(t)\,x + q(t)$$

Its solution is

$$x(t) = e^{\int p(t)\,dt}\left[c + \int_{t_0}^{t} q(s)\,e^{-\int p(s)\,ds}\,ds\right] = c\,e^{\int p(t)\,dt} + e^{\int p(t)\,dt}\int_{t_0}^{t} q(s)\,e^{-\int p(s)\,ds}\,ds$$

The first component of the solution is the same as the solution to the homogeneous ODE (the solution when $q(t)=0$), which is called the homogeneous solution (the general solution of the homogeneous equation). The second component of the solution is called the particular solution (a particular solution of the non-homogeneous equation). The total solution is the sum of the homogeneous solution and the particular solution.
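As a sanity check of this formula, here is a small numerical sketch (the concrete choices $p(t) = -1$, $q(t) = \sin t$, and $x(t_0 = 0) = 1$ are my own): the closed form above should agree with a generic ODE solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical concrete instance: p(t) = -1, q(t) = sin(t), x(0) = 1.
# Closed form: x(t) = e^{-t} [x0 + integral_0^t sin(s) e^{s} ds].
p, x0 = -1.0, 1.0

def closed_form(t):
    # integral_0^t e^s sin(s) ds = (e^t (sin t - cos t) + 1) / 2
    integral = (np.exp(t) * (np.sin(t) - np.cos(t)) + 1.0) / 2.0
    return np.exp(-t) * (x0 + integral)

# Integrate the same ODE numerically and compare at an arbitrary time.
sol = solve_ivp(lambda t, x: p * x + np.sin(t), (0.0, 5.0), [x0],
                rtol=1e-9, atol=1e-12, dense_output=True)
print(abs(sol.sol(3.0)[0] - closed_form(3.0)))
```

The printed gap is tiny, confirming the closed form and the solver agree.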

1.2 Second-Order Linear ODE with Constant Coefficients

For second-order linear ODEs, we generally do not have closed-form solutions if the coefficients are functions of $t$ (like $p(t), q(t)$ in the first-order linear ODEs we have tackled). We thus tackle second-order linear ODEs with constant coefficients:

$$\frac{d^2x}{dt^2} + p\frac{dx}{dt} + qx = f(t)$$

1.3 First-Order Linear Differential Equation System

A first-order linear differential equation system can be written as

$$\frac{d\mathbf{x}}{dt} = P\mathbf{x} + \mathbf{q}(t)$$

where the unknown $\mathbf{x}(t)$ is an $n$-dim column vector, $P$ is an $n \times n$ constant square matrix, and $\mathbf{q}(t)$ is a known $n$-dim column vector function.

1.4 Higher-Order Linear ODE with Constant Coefficients

Apologies for dragging in another notation for derivatives: $x^{(n)} = \frac{d^n x}{dt^n}$.

A univariate linear ODE of order n

$$x^{(n)} + a_{n-1}x^{(n-1)} + \cdots + a_1 x^{(1)} + a_0 x = f(t)$$

is equivalent to the following first-order n-dim linear differential equation system

$$\frac{d}{dt}\begin{bmatrix} x^{(n-1)} \\ x^{(n-2)} \\ \vdots \\ x^{(1)} \\ x \end{bmatrix} = \begin{bmatrix} -a_{n-1} & \cdots & -a_2 & -a_1 & -a_0 \\ 1 & \cdots & 0 & 0 & 0 \\ \vdots & \ddots & & & \vdots \\ 0 & \cdots & 1 & 0 & 0 \\ 0 & \cdots & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x^{(n-1)} \\ x^{(n-2)} \\ \vdots \\ x^{(1)} \\ x \end{bmatrix} + \begin{bmatrix} f(t) \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Solve this linear differential equation system. The solution to the original n-th order ODE lies in the last element of the column vector.
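A minimal sketch of this reduction (the third-order coefficients and the forcing term below are made up for illustration), building the companion matrix and handing the system to a generic solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite x''' + a2 x'' + a1 x' + a0 x = f(t) as a first-order system
# using the companion matrix from the text. Coefficients are hypothetical.
a = np.array([0.5, 2.0, 1.0])        # [a2, a1, a0]
n = len(a)
P = np.zeros((n, n))
P[0, :] = -a                          # first row: -a_{n-1} ... -a_0
P[1:, :-1] = np.eye(n - 1)            # sub-diagonal of ones

def f(t):
    return np.cos(t)                  # hypothetical forcing term

def rhs(t, y):                        # state y = [x'', x', x]
    forcing = np.zeros(n)
    forcing[0] = f(t)
    return P @ y + forcing

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 1.0], rtol=1e-8)
x_of_t = sol.y[-1]                    # last component is x(t) itself
```

As the text says, the last element of the state vector carries the solution of the original third-order ODE.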

2 Stability Analysis

When a differential equation does not have an analytical solution, we turn to qualitative tools to analyze certain properties of interest. An important property we are usually interested in is stability --- stability of the solution under small perturbations.

This line of tools was first studied by Lyapunov.

2.1 Fixed Point

The general notion of stability is defined on the solution to a differential equation. The solution $x(t) = \phi(t),\ t \in [t_0, \infty)$ is stable if for every $\epsilon > 0$ there exists $\delta = \delta(\epsilon) > 0$ such that the solution stays close to $\phi(t)$ for all $t \ge t_0$, namely,

$$|x(t; x_0) - \phi(t)| < \epsilon,$$

so long as the initialization is close: $|x_0 - \phi(t_0)| < \delta$.

This condition is quite hard to verify directly.

More often, we talk about the stability of a fixed point, that is, a constant solution. Fixed points of the differential equation $\dot{x} = f(x)$ are the zeros of $f(x)$. Let's say $x_e$ is a zero of $f(x)$, that is, $f(x_e) = 0$. Then the constant function $x(t) = x_e$ is a valid solution to the differential equation. It is called a fixed point because $x$ initialized at $x_e$ stays at $x_e$ thereafter. If $x$ converges to $x_e$ when initialized at any point sufficiently close to $x_e$, the fixed point $x_e$ is a stable fixed point.
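For instance (a toy example of mine, not from the text), the logistic equation $\dot{x} = x(1 - x)$ has fixed points at the zeros of $f(x) = x(1 - x)$, namely $x = 0$ and $x = 1$, and a trajectory started near $x = 1$ converges to it:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

# Fixed points of x' = x(1 - x) are the zeros of f.
f = lambda x: x * (1.0 - x)
xe = brentq(f, 0.5, 1.5)            # locates the zero at x = 1

# A trajectory started near xe = 1 converges to it, so it is stable;
# x = 0 is not stable (nearby trajectories flee from it).
sol = solve_ivp(lambda t, x: f(x[0]), (0.0, 20.0), [1.1])
print(xe, sol.y[0, -1])
```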

Visually, you can also look at the time-course trajectories of a dynamical system and roughly tell that a fixed point is stable if all nearby trajectories converge to it. Here is a schematic visualization of 4 of the most common kinds of fixed points, stolen from Wikipedia.

Fixed Points

2.2 Stability Analysis of Linear Dynamical System

For linear dynamical systems, we often talk about whether the zero fixed point $x_e = 0$ is stable or not. Consider the linear dynamical system

$$\frac{d\mathbf{x}}{dt} = P\mathbf{x}$$

It is obvious that xe=0 is a fixed point. Is it stable?

We know that the solution to this dynamical system is

$$\mathbf{x}(t) = \sum_{i=1}^{n} c_i\, e^{\lambda_i t}\,\mathbf{r}_i$$

where $\lambda_i$ and $\mathbf{r}_i$ are the eigenvalues and eigenvectors of $P$ (assuming $P$ is diagonalizable), and the constants $c_i$ are determined by the initial condition.

If all eigenvalues are negative, the system always converges to zero: $\mathbf{x}(\infty) = 0$. This is indeed the condition for the zero fixed point to be stable. More rigorously, we take complex eigenvalues into account. The zero fixed point $x_e = 0$ is (asymptotically) stable if and only if all eigenvalues of the matrix $P$ have negative real parts: $\mathrm{Re}(\lambda_i) < 0$.
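A quick numerical illustration of this criterion (the matrix $P$ below is a made-up stable example with eigenvalues $-1 \pm 2i$):

```python
import numpy as np
from scipy.linalg import expm

# Eigenvalue criterion: the origin is asymptotically stable iff every
# eigenvalue of P has a negative real part. P here is a hypothetical example.
P = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])         # eigenvalues -1 +/- 2i (stable spiral)
stable = np.all(np.linalg.eigvals(P).real < 0)

# Consistency check: the flow x(t) = e^{Pt} x0 shrinks any x0 toward 0.
x0 = np.array([1.0, 1.0])
x_late = expm(P * 10.0) @ x0
print(stable, np.linalg.norm(x_late))
```

The late-time state has norm of order $e^{-10}$, matching the eigenvalue test.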

The 2D Case

This picture comes up a lot in anything related to dynamical systems. But simply looking at it makes me feel uneasy. Both its merit and its sin are that it contains a lot of information.

This picture is about judging what type the zero fixed point is by looking at the determinant and trace of matrix P (or matrix A in their notation) in a 2D dynamical system.

2D Dynamical Systems

The underlying principle is the same: the zero fixed point is stable if and only if all eigenvalues have negative real parts. For 2D systems, this can be checked by looking at the determinant and trace, because the determinant equals the product of the eigenvalues and the trace equals their sum:

$$\lambda_1\lambda_2 = \det(P), \qquad \lambda_1 + \lambda_2 = \mathrm{Tr}(P).$$

Equivalently, $\lambda_1, \lambda_2$ are the solutions to the quadratic equation $\lambda^2 - \mathrm{Tr}(P)\lambda + \det(P) = 0$. Hence the discriminant $\Delta = \mathrm{Tr}(P)^2 - 4\det(P)$ tells us whether the two eigenvalues are real or complex. You can also see that the zero fixed point is stable if and only if the determinant is positive and the trace is negative.
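The boundary lines in that picture can be turned into a tiny classifier. The function below is my own sketch of the rules above (degenerate boundary cases such as $\det(P) = 0$ or centers with $\mathrm{Tr}(P) = 0$ are ignored):

```python
import numpy as np

# Classify the zero fixed point of a 2D linear system from trace/determinant.
# Boundary cases (det = 0, tr = 0 centers) are omitted for brevity.
def classify_2d(P):
    tr, det = np.trace(P), np.linalg.det(P)
    disc = tr**2 - 4 * det            # real eigenvalues iff disc >= 0
    if det < 0:
        return "saddle"               # one positive, one negative eigenvalue
    if disc >= 0:
        return "stable node" if tr < 0 else "unstable node"
    return "stable spiral" if tr < 0 else "unstable spiral"

print(classify_2d(np.array([[-1.0, 2.0], [-2.0, -1.0]])))  # tr<0, disc<0
```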

2.3 Stability Analysis of Nonlinear Dynamical System

There is not much to add here. For nonlinear dynamical systems, we linearize the dynamics and treat them with linear stability analysis.

"Linearize" means taking the first-order Taylor expansion. This is a legit simplification because stability is a property of how small perturbations influence the system --- the first-order Taylor expansion is usually accurate enough for small perturbations.

Let's be more specific. Consider nonlinear dynamics:

$$\frac{d\mathbf{x}}{dt} = f(\mathbf{x})$$

Assume $\mathbf{x}_e$ is a fixed point, namely $f(\mathbf{x}_e) = 0$. Since we are interested in the dynamics around $\mathbf{x}_e$, we linearize $f(\mathbf{x})$ around $\mathbf{x}_e$:

$$f(\mathbf{x}) \approx f(\mathbf{x}_e) + J(\mathbf{x} - \mathbf{x}_e) = J(\mathbf{x} - \mathbf{x}_e)$$

We use J to denote the Jacobian matrix, that is the first-order partial derivatives of f(x) evaluated at xe.

$$J = \left.\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}\right|_{\mathbf{x}_e}$$

Since the dynamics around $\mathbf{x}_e$ is well approximated by $\dot{\mathbf{x}} = f(\mathbf{x}) \approx J(\mathbf{x} - \mathbf{x}_e)$, the question of whether $\mathbf{x}_e$ is stable in the nonlinear dynamical system $\dot{\mathbf{x}} = f(\mathbf{x})$ can be answered by whether $0$ is stable in the linear dynamical system $\dot{\mathbf{u}} = J\mathbf{u}$, where $\mathbf{u} = \mathbf{x} - \mathbf{x}_e$.
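A small sketch of this recipe (the damped-pendulum dynamics and the finite-difference Jacobian helper are my own illustration, not from the text):

```python
import numpy as np

# Nonlinear example: damped pendulum theta'' = -sin(theta) - 0.5 theta',
# written as x' = f(x) with state x = (theta, omega).
def f(x):
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.5 * omega])

def jacobian(f, xe, eps=1e-6):
    # Numerical Jacobian of f at xe via central finite differences.
    n = len(xe)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(xe + dx) - f(xe - dx)) / (2 * eps)
    return J

# At the fixed point (0, 0): eigenvalues of J decide nonlinear stability.
J = jacobian(f, np.array([0.0, 0.0]))
print(np.all(np.linalg.eigvals(J).real < 0))  # → True (hanging-down state)
```

Here $J \approx \begin{bmatrix} 0 & 1 \\ -1 & -0.5 \end{bmatrix}$, whose eigenvalues have real part $-0.25$, so the hanging-down equilibrium is stable.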

3 From A Signal Processing Perspective

Signal Processing provides a very nice perspective on dynamical systems, especially linear dynamical systems.

Green Function

The Green function is the impulse response of a non-homogeneous linear differential operator with specified initial or boundary conditions. If we know the Green function, the convolution between the Green function and the non-homogeneous term gives us a particular solution to the non-homogeneous ODE.

Proof I

Green Function is usually a late chapter in a differential equation textbook. But the impulse response is often the first chapter in a signal and system textbook. This is probably because its proof looks quite simple there.

In the signal and system context, the system is not necessarily defined by differential equations. It can be any linear time-invariant (LTI) system.

Let us assume the impulse response of an LTI system is $g(t)$, that is,

$$\delta(t) \xrightarrow{\text{LTI}} g(t)$$

Since the system is time-invariant, we can shift the time by $t'$:

$$\delta(t - t') \xrightarrow{\text{LTI}} g(t - t')$$

Since the system is linear, we can scale both sides by $f(t')$:

$$f(t')\,\delta(t - t') \xrightarrow{\text{LTI}} f(t')\,g(t - t')$$

We then integrate both sides over $t'$ and finish the proof:

$$\int f(t')\,\delta(t - t')\,dt' \xrightarrow{\text{LTI}} \int f(t')\,g(t - t')\,dt'$$

The left-hand side is $f(t)$ by the sifting property of the delta function. The right-hand side is the convolution between $f$ and $g$, evaluated at $t$.
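The integral above has a direct discrete-time analogue that is easy to check numerically. Below is a small sketch (the step size, the exponential impulse response, and the sine input are my own choices): convolving a sampled input with a sampled impulse response approximates the LTI output, and convolving a discrete approximation of $\delta(t)$ with $g$ returns $g$ itself.

```python
import numpy as np

# Discrete-time sanity check of the impulse-response picture.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
g = np.exp(-t)                      # impulse response g(t) = e^{-t}, t >= 0
f = np.sin(2 * np.pi * t)           # input signal

# Riemann-sum approximation of the convolution integral (f * g)(t).
y = np.convolve(f, g)[: len(t)] * dt

# Feeding in a discrete approximation of delta(t) returns g itself.
delta = np.zeros_like(t)
delta[0] = 1.0 / dt
y_delta = np.convolve(delta, g)[: len(t)] * dt
print(np.allclose(y_delta, g))
```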

Proof II

The Green function is also the inverse Laplace transform of the reciprocal of the system's frequency response. Say we have a linear differential operator $L$. We know that $L$ acting on the Green function $g(t)$ gives us the delta function,

$$L\,g(t) = \delta(t)$$

We can do Laplace transform on both sides and look at the equation in the frequency domain:

$$H(s)\,G(s) = 1 \;\Longrightarrow\; G(s) = \frac{1}{H(s)}$$

Here $H(s)$ is called the frequency response of the dynamical system. For linear ODEs with constant coefficients, it is a polynomial in $s$.

Now we want to find the function $x(t)$ that gives us $f(t)$ when acted on by the linear operator $L$,

$$L\,x(t) = f(t)$$

We do Laplace transform on both sides and get

$$H(s)\,X(s) = F(s)$$

We can solve for the Laplace transform of $x(t)$ by division and transform back to obtain the solution:

$$X(s) = \frac{F(s)}{H(s)} = F(s)\,G(s) \;\Longrightarrow\; x(t) = \mathcal{L}^{-1}\left[F(s)\,G(s)\right] = f(t) * g(t)$$

Through the Laplace transform procedure, we see that the frequency-domain perspective is very useful for solving ODEs because inverting a differential operator in the frequency domain is just division.
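As a concrete instance of this division trick (my own toy equation, using sympy's Laplace-transform helpers): solve $\dot{x} + 2x = e^{-t}$ with $x(0) = 0$, where $H(s) = s + 2$ and $F(s) = 1/(s+1)$.

```python
import sympy as sp

# Toy example: solve x' + 2x = exp(-t), x(0) = 0, by dividing in the
# frequency domain instead of inverting the operator d/dt + 2.
t, s = sp.symbols('t s', positive=True)

F = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)  # F(s) = 1/(s + 1)
H = s + 2
X = F / H                                                 # X(s) = F(s)/H(s)
x = sp.inverse_laplace_transform(X, s, t)
print(sp.simplify(x))
```

The result simplifies to $e^{-t} - e^{-2t}$, which indeed satisfies the equation with $x(0) = 0$.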

Solving First-Order Linear ODE with Green Function

Consider a first-order linear non-homogeneous ODE:

$$\dot{x} + px = q(t)$$

Definitions

Solutions

Remark: since I am a little paranoid, I checked whether the complete solution (including the initial condition) can also be written as a convolution. It can, but the result is not pretty. The complete solution is

$$x(t) = \mathcal{L}^{-1}\left[\frac{x_0 + Q(s)}{s + p}\right] = \mathcal{L}^{-1}\left[x_0 + Q(s)\right] * g(t) = \left(x_0\,\delta(t) + q(t)\right) * g(t)$$
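To double-check the convolution form numerically (the constants $p = 2$, forcing $q(t) = \sin t$, and initial condition $x_0 = 0$ are my own choices), one can compare the discrete convolution of $q$ with the Green function $g(t) = e^{-pt}$ against a standard ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check x = q * g for x' + p x = q(t), x(0) = 0, with g(t) = e^{-p t}.
p, dt = 2.0, 0.001
t = np.arange(0.0, 6.0, dt)
g = np.exp(-p * t)                     # Green function of d/dt + p
q = np.sin(t)                          # forcing term

# Riemann-sum approximation of the convolution (q * g)(t).
x_conv = np.convolve(q, g)[: len(t)] * dt

# Reference solution from a generic solver.
sol = solve_ivp(lambda t_, x: -p * x + np.sin(t_), (0.0, 6.0), [0.0],
                t_eval=t, rtol=1e-9, atol=1e-12)
print(np.max(np.abs(x_conv - sol.y[0])))
```

The printed maximum discrepancy shrinks with the step size, as expected from a Riemann-sum approximation of the convolution integral.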

© Yedi Zhang | Last updated: February 2024