Planar flows
Review of Eigenstuff
Once we go beyond one-dimensional maps and flows, even linear systems become a lot more interesting and complicated. Since linear maps and flows are well understood, they are extremely useful as approximations to nonlinear systems. Linear algebra is a crucial tool for studying higher-dimensional systems.
Since a map sends an n-dimensional space to itself, a linear map can be represented as a square matrix. Particularly when studying iteration of linear maps, it helps to know the eigenvalues and eigenvectors of the map (or at least to have bounds on these quantities). In this subsection we briefly review their definitions and how to compute them in low dimensions.
A nonzero vector v is an eigenvector of a square matrix A with eigenvalue \lambda if Av = \lambda v.
For small matrices we can first compute the eigenvalues \lambda through the characteristic equation

\det(A - \lambda I) = 0

where I is the n by n identity matrix.
For each root \lambda_i of the characteristic equation, we can compute the eigenspace of \lambda_i by finding the kernel of A - \lambda_i I by row-reduction. This kernel is at least one-dimensional, and each element of a basis for it is an independent eigenvector with eigenvalue \lambda_i.
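Eigenvalue routines are available in any linear algebra library, but in two dimensions the procedure above is short enough to carry out directly. A minimal pure-Python sketch (the function names and the example matrix are our own choices, and only the real-eigenvalue case is handled):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic equation
    lambda^2 - (a + d) lambda + (ad - bc) = 0 (real case only)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

def eigvec2(a, b, c, d, lam):
    """An eigenvector for eigenvalue lam: a nonzero kernel vector of A - lam*I."""
    # The row (a - lam, b) must annihilate (v1, v2); handle b == 0 separately.
    if abs(b) > 1e-12:
        return (b, lam - a)
    if abs(a - lam) > 1e-12:
        return (0.0, 1.0)
    return (1.0, 0.0)

lams = eig2(2, 1, 1, 2)            # a symmetric example matrix
v = eigvec2(2, 1, 1, 2, lams[0])
# check A v = lam v for the leading eigenvalue
Av = (2 * v[0] + 1 * v[1], 1 * v[0] + 2 * v[1])
assert abs(Av[0] - lams[0] * v[0]) < 1e-9 and abs(Av[1] - lams[0] * v[1]) < 1e-9
print(lams)  # (3.0, 1.0)
```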
Some matrices do not have an n-dimensional basis of eigenvectors, which can make the analysis of their iterates more complicated. For example, the matrix

A = \left(\begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array}\right)

has the characteristic equation

\lambda^2 = 0

which has only a single root \lambda_1 = 0, of algebraic multiplicity 2. In this case, A - \lambda_1 I = A and it is already fully row-reduced. The kernel consists of the vectors (a,0); the simplest choice of basis vector is v_1 = (1,0). Since the kernel is one-dimensional, the geometric multiplicity of the eigenvalue is 1, which is less than its algebraic multiplicity. This means the matrix is not diagonalizable.
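The iterates of such a matrix behave differently from those of a diagonalizable one: here A is nilpotent, so the linear map sends every vector to zero in two steps. A quick pure-Python check (the helper name is our own):

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]           # the non-diagonalizable example from the text
A2 = matmul2(A, A)
# A is nilpotent: every orbit of the linear map reaches 0 after two iterations
assert A2 == [[0, 0], [0, 0]]
```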
My ODE and linear algebra book has a chapter on eigenvectors and eigenvalues if you want a more thorough explanation.
The trace-determinant plane
For a two by two matrix A, the characteristic equation can be written as

\lambda^2 - \operatorname{tr}(A) \lambda + \det(A) = 0
so the eigenvalues are completely determined by the trace and determinant of the matrix. Below we show the trajectories of two-dimensional systems of linear differential equations x' = Ax, where you can control the trace and determinant using the legend in the upper left.
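Since the eigenvalues are \lambda = (T \pm \sqrt{T^2 - 4D})/2 for trace T and determinant D, the qualitative type of the origin can be read off from T and D alone. A small Python sketch of that classification (the labels are our own, and the borderline cases T = 0 or T^2 = 4D are more delicate than this simple version suggests):

```python
def classify(T, D):
    """Classify the origin of x' = Ax from the trace T and determinant D of A."""
    if D < 0:
        return "saddle"                      # real eigenvalues of opposite sign
    disc = T * T - 4 * D
    if disc >= 0:                            # real eigenvalues of the same sign
        kind = "node"
    else:                                    # complex conjugate pair
        kind = "center" if T == 0 else "spiral"
    if T < 0:
        return "stable " + kind
    if T > 0:
        return "unstable " + kind
    return kind

assert classify(0, 1) == "center"
assert classify(-1, 1) == "stable spiral"    # disc = -3 < 0, T < 0
assert classify(3, 2) == "unstable node"     # eigenvalues 1 and 2
assert classify(1, -1) == "saddle"
```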
Pendulum
An ideal nonlinear pendulum, consisting of a massless rod of length L suspending a point mass m, subject to a constant downward gravitational acceleration g, satisfies the ODE

\frac{d^2 \theta}{d t^2} = -k \sin(\theta)
where k = g/L.
There is a conserved energy for this equation,

E = \frac{v^2}{2} - k \cos(\theta)
where v = \frac{d \theta}{d t}. Systems with a conserved energy can also be thought of in a calculus of variations framework, in which certain quantities are minimized (or at least the solutions are critical points). One such generalization is that of Hamiltonian systems (see the "Classical and Celestial Mechanics" chapter for more details.)
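The conservation of E can be confirmed numerically, assuming the equation \theta'' = -k \sin(\theta) with the arbitrary choice k = 1. A minimal sketch using a hand-written classical Runge-Kutta (RK4) step:

```python
import math

def pendulum_rhs(theta, v, k=1.0):
    """Right-hand side of theta' = v, v' = -k sin(theta)."""
    return v, -k * math.sin(theta)

def rk4_step(theta, v, dt, k=1.0):
    """One classical fourth-order Runge-Kutta step for the pendulum system."""
    k1 = pendulum_rhs(theta, v, k)
    k2 = pendulum_rhs(theta + dt * k1[0] / 2, v + dt * k1[1] / 2, k)
    k3 = pendulum_rhs(theta + dt * k2[0] / 2, v + dt * k2[1] / 2, k)
    k4 = pendulum_rhs(theta + dt * k3[0], v + dt * k3[1], k)
    theta += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return theta, v

def energy(theta, v, k=1.0):
    return v * v / 2 - k * math.cos(theta)

theta, v = 2.5, 0.0             # a large-amplitude swing
E0 = energy(theta, v)
for _ in range(10000):          # integrate to t = 100
    theta, v = rk4_step(theta, v, 0.01)
assert abs(energy(theta, v) - E0) < 1e-5   # E is conserved along the flow
```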
Nullclines
Especially in the plane, it can be useful to examine the nullclines of a system, which are the 0-level sets of the component slope functions. For example, a system of the form

x' = y - x^2

y' = x - 2

has the parabola y = x^2 as its nullcline for the x component, and the line x = 2 as the nullcline for the y component.
Lotka-Volterra models
In 1910 Alfred Lotka considered a simple planar ODE model to study a type of autocatalytic chemical reaction. In the 1920s, both Lotka and the Italian mathematician Vito Volterra used the same system in the context of a simplified ecological model in which there are two species, one of which is a prey species for the other (the predator species).
The simplest version of the predator-prey system is:

x' = a x - b x y

y' = -c y + d x y

in which x is the prey population and y the predator population (in some units, not necessarily individuals). The parameters a, b, c, and d are usually assumed to be positive, and we are mainly interested in positive values of x and y.
Somewhat surprisingly, the predator-prey system has a conserved potential-like quantity,

V(x,y) = d x - c \log x + b y - a \log y
If we use variables u = \log x and v = \log y, we can write the system in Hamiltonian form with the Hamiltonian

H(u,v) = c u - d e^{u} + a v - b e^{v}

and

u' = \frac{\partial H}{\partial v}, \qquad v' = -\frac{\partial H}{\partial u}
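We can check the conservation numerically; a minimal sketch, assuming the standard form x' = ax - bxy, y' = -cy + dxy and conserved quantity V = dx - c \log x + by - a \log y, with all four parameters set to 1 for illustration:

```python
import math

A, B, C, D = 1.0, 1.0, 1.0, 1.0

def rhs(x, y):
    """Predator-prey vector field x' = ax - bxy, y' = -cy + dxy."""
    return A * x - B * x * y, -C * y + D * x * y

def rk4(x, y, dt):
    """One classical Runge-Kutta step."""
    k1 = rhs(x, y)
    k2 = rhs(x + dt * k1[0] / 2, y + dt * k1[1] / 2)
    k3 = rhs(x + dt * k2[0] / 2, y + dt * k2[1] / 2)
    k4 = rhs(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def V(x, y):
    """Conserved quantity dx - c log(x) + by - a log(y)."""
    return D * x - C * math.log(x) + B * y - A * math.log(y)

x, y = 2.0, 0.5
V0 = V(x, y)
for _ in range(5000):           # several full cycles around the center (1, 1)
    x, y = rk4(x, y, 0.01)
assert abs(V(x, y) - V0) < 1e-3
```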
A similar model can be used for two competing species, for example:

x' = x (a_1 - b_1 x - c_1 y)

y' = y (a_2 - b_2 y - c_2 x)
Hopf Bifurcations
In two dimensions there is an important common type of bifurcation of equilibria known as the Hopf bifurcation. In this type of bifurcation, as a parameter is varied an equilibrium changes its stability type and a periodic solution is created around it.
As an example of a Hopf bifurcation, consider the following system in variables x and y, with parameter b:

x' = -x + \frac{y}{10} + x^2 y

y' = b - \frac{y}{10} - x^2 y

which is a particular (simplified) case of the 1968 Selkov model for glycolysis; the variables x and y are related to the concentrations of the chemicals ADP and ATP in a cell. For each b there is a unique equilibrium at x = b, y = \frac{10 b}{10 b^2 + 1}. The Jacobian of the system is

J = \left(\begin{array}{cc} -1 + 2 x y & \frac{1}{10} + x^2 \\ -2 x y & -\frac{1}{10} - x^2 \end{array}\right)

which at the equilibrium becomes

J = \left(\begin{array}{cc} \frac{10 b^2 - 1}{10 b^2 + 1} & \frac{1}{10} + b^2 \\ -\frac{20 b^2}{10 b^2 + 1} & -\frac{1}{10} - b^2 \end{array}\right)
The eigenvalues are somewhat intimidating functions of b: in terms of the trace T = \frac{10 b^2 - 1}{10 b^2 + 1} - \frac{1}{10} - b^2 and determinant D = \frac{1}{10} + b^2 of the Jacobian at the equilibrium, they are

\lambda_{\pm} = \frac{T \pm \sqrt{T^2 - 4 D}}{2}

In the Selkov model, b is a positive parameter. Between b \approx 0.016 and b \approx 2.38, the discriminant is negative and the eigenvalues are complex. When b < \sqrt{\frac{4 - \sqrt{5}}{10}} \approx 0.42, or b > \sqrt{\frac{4 + \sqrt{5}}{10}} \approx 0.79, the real part of the eigenvalues is negative and the equilibrium is stable. When b is in the interior of the interval (0.42, 0.79), the equilibrium is unstable but there is a stable periodic orbit around it.
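The two stability changes can be located numerically by tracking the trace of the Jacobian at the equilibrium, which changes sign at the two critical values of b while the determinant stays positive. A small check, assuming the Selkov form x' = -x + y/10 + x^2 y, y' = b - y/10 - x^2 y:

```python
def jac_trace(b):
    """Trace of the Jacobian at the equilibrium x = b, y = 10b/(10b^2 + 1),
    assuming the Selkov form x' = -x + y/10 + x^2 y, y' = b - y/10 - x^2 y."""
    x = b
    y = 10 * b / (10 * b * b + 1)
    return (-1 + 2 * x * y) - (0.1 + x * x)

# negative trace (stable) outside (0.42..., 0.79...), positive (unstable) inside
assert jac_trace(0.3) < 0
assert jac_trace(0.5) > 0
assert jac_trace(1.0) < 0

# locate the first bifurcation by bisection on [0.3, 0.5]
lo, hi = 0.3, 0.5
for _ in range(40):
    mid = (lo + hi) / 2
    if jac_trace(mid) < 0:
        lo = mid
    else:
        hi = mid
# agrees with b = sqrt((4 - sqrt(5))/10) ~ 0.42
assert abs(lo - ((4 - 5 ** 0.5) / 10) ** 0.5) < 1e-6
```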
An animation from Wikipedia of these bifurcations is shown below.

The bifurcation at b \approx 0.42, in which a stable limit cycle appears as the equilibrium changes from stable to unstable, is called a super-critical Hopf bifurcation. The other bifurcation, at b \approx 0.79, is also super-critical (we view the bifurcation as happening in the stable-to-unstable parameter direction of the equilibrium). In a sub-critical bifurcation, the stable equilibrium has a nearby unstable limit cycle, which merges with the equilibrium at the bifurcation value.
Lindstedt-Poincare perturbation method
We will illustrate the Lindstedt-Poincare perturbation method on a classic example, a cubically perturbed harmonic oscillator (sometimes called the Duffing oscillator):

\frac{d^2 x}{d t^2} + x = \epsilon x^3

with initial conditions x(0) = a and x'(0) = 0.
A key idea of Lindstedt is to perturb the time variable in a controlled way by using \tau = w t where

w = 1 + \epsilon w_1 + \epsilon^2 w_2 + \cdots
If we write \dot{x} for \frac{d x}{d \tau}, considering x as a function of \tau, then we get

w^2 \ddot{x} + x = \epsilon x^3
Now we look for an expansion of x(\tau) in the form

x(\tau) = x_0(\tau) + \epsilon x_1(\tau) + \epsilon^2 x_2(\tau) + \cdots
The linear equation with \epsilon = 0 is solved by x_0 = a \cos(\tau). Then the first order (in \epsilon) approximation becomes

\ddot{x}_1 + x_1 = -2 w_1 \ddot{x}_0 + x_0^3 = 2 w_1 a \cos(\tau) + a^3 (\cos(\tau))^3
with initial conditions x_1(0) = x_1'(0) = 0.
Using the trigonometric identity (\cos(\tau))^3 = \frac{\cos(3 \tau)}{4} + \frac{3 \cos(\tau)}{4}, the solution to that nonhomogeneous linear ODE is

x_1 = \left( w_1 a + \frac{3 a^3}{8} \right) \tau \sin(\tau) + \frac{a^3}{32} \left( \cos(\tau) - \cos(3 \tau) \right)
For a periodic solution we cannot have a \tau \sin(\tau) term, so we choose w_1 = -\frac{3 a^2}{8}, and then

x_1 = \frac{a^3}{32} \left( \cos(\tau) - \cos(3 \tau) \right)
and the full first order solution is

x(t) \approx a \cos(w t) + \frac{\epsilon a^3}{32} \left( \cos(w t) - \cos(3 w t) \right), \qquad w = 1 - \frac{3 a^2 \epsilon}{8}
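The approximation can be compared against a direct numerical integration; a minimal sketch, assuming the softening form x'' + x = \epsilon x^3 (consistent with w_1 = -\frac{3a^2}{8}) and the arbitrary choices \epsilon = 0.1, a = 1. The discrepancy should stay at the \epsilon^2 scale over a few periods:

```python
import math

EPS, A0 = 0.1, 1.0
W = 1 - 3 * A0 * A0 * EPS / 8            # w = 1 + eps*w1 with w1 = -3a^2/8

def x_perturb(t):
    """First order Lindstedt-Poincare approximation."""
    tau = W * t
    return A0 * math.cos(tau) + EPS * A0 ** 3 / 32 * (math.cos(tau) - math.cos(3 * tau))

def rhs(x, v):
    """x'' + x = eps x^3 written as a first order system."""
    return v, -x + EPS * x ** 3

def rk4(x, v, dt):
    """One classical Runge-Kutta step."""
    k1 = rhs(x, v)
    k2 = rhs(x + dt * k1[0] / 2, v + dt * k1[1] / 2)
    k3 = rhs(x + dt * k2[0] / 2, v + dt * k2[1] / 2)
    k4 = rhs(x + dt * k3[0], v + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            v + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

x, v, t, dt = A0, 0.0, 0.0, 0.001
worst = 0.0
for _ in range(20000):                   # integrate to t = 20, a few periods
    x, v = rk4(x, v, dt)
    t += dt
    worst = max(worst, abs(x - x_perturb(t)))
assert worst < 0.05                      # error stays small, of order eps^2
```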
van der Pol oscillator
The van der Pol oscillator is an interesting example of a planar nonlinear system, originally introduced by the Dutch physicist Balthasar van der Pol in the 1920s to model oscillations in electrical circuits and in biology (heartbeats).
Lienard systems
Lienard generalized the van der Pol oscillator

x'' - \alpha (1 - x^2) x' + x = 0

to a differential equation of the form

x'' + g(x) x' + x = 0

which can be equivalently written as

x' = y - G(x)

y' = -x

with

G(x) = \int_0^x g(s) \, ds
So G(0) = 0, and G(x) is assumed to be an odd function (G(x) = - G(-x)) that is negative on (0, z), where z is its unique positive zero, after which it increases without bound. For the van der Pol oscillator (with \alpha=1), G(x) = x^3/3 - x and z = \sqrt{3}.

Above is a plot of the van der Pol oscillator's vector field (x' = y - x^3/3 + x, and y' = -x), along with the vertical nullcline y = G(x).
Note that the y-axis is a horizontal nullcline, and \dot{y} < 0 for x > 0.
For these systems we can prove that there is a unique periodic orbit which is the \omega-limit set of every initial condition except for the fixed point (0,0).
Our proof, which is based on one in the book Dynamical Systems by Shlomo Sternberg, relies on studying the behavior of the function

R(x,y) = \frac{x^2 + y^2}{2}

Along a trajectory, the t-derivative of R is

\dot{R} = x \dot{x} + y \dot{y} = x (y - G(x)) + y (-x) = -x G(x)
This shows that \dot{R} is positive in the vertical strip - z < x < z, so the origin is repelling.
Because of the symmetry of the flow, it suffices to look at the change in R from an initial point on the positive y-axis until the trajectory hits the negative y-axis for the first time. For an initial condition (0,y_0) with y_0 > 0, let t_1 be the first positive time at which x(t) = 0, and let y_1 = y(t_1). The change in R will be

\delta(y_0) = R(0,y_1) - R(0,y_0) = \int_0^{t_1} -x G(x) \, dt
We want to show that \delta(y) is monotonically decreasing for y > r, where r is the unique number for which the initial condition (0,r) passes through (z,0). For y_0 < r, \dot{R} is positive over the entire orbit segment with t \in [0,t_1], so \delta(y_0) > 0 and the orbit cannot be periodic.

The above figure shows some of the key orbit segments used in the proof, for the van der Pol oscillator (with \alpha=1). Note that the aspect ratio is not equal to 1, in order to show more detail. The blue trajectory starts at a small value of y on the positive y-axis; two sequential intersections with the negative y-axis are shown. The red trajectory is for the starting value y_r such that the trajectory passes through (z,0) (for the van der Pol oscillator, z = \sqrt{3} \approx 1.73).
The two trajectories shown in green start above y_r, and we want to compare their values of \delta(y). We split each trajectory into five pieces: A (green), B_1 (purple), B_2 (light blue), B_3 (purple again), and C (green again). The splits in the B region are defined only for the larger starting y value, and are determined relative to the first orbit. We split the \delta(y) integral correspondingly into pieces:

\delta(y) = \delta_A(y) + \delta_{B_1}(y) + \delta_{B_2}(y) + \delta_{B_3}(y) + \delta_C(y)
Along the green portions we can use x as a parameter of integration, since dt = \frac{dx}{\dot{x}} = \frac{dx}{y - G(x)}. For the first green segment (where y>0), the portion of the integral defining \delta(y) becomes

\delta_A(y_0) = \int_0^{z} \frac{-x G(x)}{y(x) - G(x)} \, dx
For each x, we can compare the two integrands (along the two upper green segments) and see that for r < \tilde{y}_0 < y_0 we have \delta_A(y_0) < \delta_A(\tilde{y}_0), since -x G(x) > 0 and the denominators are positive and monotonically increasing as functions of y.
A similar argument applies to the lower green segments, where the denominator has the opposite behavior in y, but we are integrating from larger to smaller x-values along the trajectory, so \delta_C(y_0) < \delta_C(\tilde{y}_0) as well (where \delta_C refers to the portion of the integral along the C segments).
For the B segments we can parameterize the curves with y, using dt = \frac{dy}{\dot{y}} = -\frac{dy}{x}, so the contribution to \delta(y) is

\int -x G(x) \left( -\frac{dy}{x} \right) = \int G(x(y)) \, dy

where the integration runs in the direction of decreasing y along the trajectory.
Along the purple (B_1 and B_3) segments of the second trajectory starting at \tilde{y}_0, G > 0 but we are integrating in the negative y direction, so these segments contribute a negative amount to \delta(\tilde{y}_0).
Finally, for the segment B_2: for each value of y in the integral, the corresponding x(y) on the trajectory is larger for the y_0 trajectory than for the inner \tilde{y}_0 trajectory. Since G is assumed to be increasing when x > z, and we are integrating in the negative y direction, \delta_{B_2}(y_0) < \delta_{B_2}(\tilde{y}_0).
So \delta(y) is a decreasing function of y. Since we are assuming that \lim_{x \rightarrow \infty} G(x) = \infty, the contribution \delta_{B_2} can be made arbitrarily negative, so there is a unique zero of \delta(y) somewhere on the positive y-axis. (For the van der Pol oscillator with \alpha=1, this unique y value is approximately 2.1727.)
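The unique crossing value on the positive y-axis can be approximated by iterating the first-return map numerically. A minimal sketch for the van der Pol case x' = y - (x^3/3 - x), y' = -x (the step size and starting point are arbitrary choices):

```python
def rhs(x, y):
    """van der Pol system in Lienard coordinates, alpha = 1."""
    return y - (x ** 3 / 3 - x), -x

def rk4(x, y, dt):
    """One classical Runge-Kutta step."""
    k1 = rhs(x, y)
    k2 = rhs(x + dt * k1[0] / 2, y + dt * k1[1] / 2)
    k3 = rhs(x + dt * k2[0] / 2, y + dt * k2[1] / 2)
    k4 = rhs(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def return_map(y0, dt=0.001):
    """Follow the flow from (0, y0) until it next crosses the positive y-axis."""
    x, y = 0.0, y0
    x, y = rk4(x, y, dt)               # step off the axis first
    while True:
        xn, yn = rk4(x, y, dt)
        if x < 0 <= xn and yn > 0:     # crossing x = 0 left-to-right, upper half
            return yn
        x, y = xn, yn

y = 4.0
for _ in range(5):
    y = return_map(y)                  # iterates converge onto the limit cycle
assert 1.5 < y < 3.0                   # crossing height of the limit cycle
assert abs(return_map(y) - y) < 1e-3   # approximately a fixed point of the map
```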
Some trajectories starting from below and above the periodic orbit are shown in the figure below.

Poincare-Bendixson theorem
The Poincare-Bendixson theorem is a strong result that shows that planar flows are not that complicated, at least compared to flows in higher dimensions.
There are many ways to state the theorem; here we follow Sternberg's style of separating the cases in which there are equilibria or not:
A) If x(t) is a (forward) bounded trajectory of a vector field F in the plane, and if \omega(x) := \omega(x(0)) contains no zeros of F, then either:
(1) x(t)= \omega(x) is a periodic trajectory, or
(2) \omega(x) consists of a periodic trajectory of F which x approaches spirally from the inside or from the outside.
B) If \omega(x) contains zeros of F, then:

(1) if \omega(x) consists only of zeros of F, then, in fact, \omega(x) consists of a single point A, and x(t) approaches A as t \rightarrow \infty. Otherwise,

(2) \omega(x) consists of a finite set \{A_n\} of zeros of F and a set of trajectories \{C_a\}, where, as t \rightarrow \infty, each C_a approaches one of the zeros.
Bendixson's nonexistence criterion
For the planar system x' = F(x), where F has component functions f_1 and f_2, suppose that there is a trajectory forming a simple loop C. Then the line integral

\oint_C F \cdot n \, ds = 0

since F is parallel to the tangent vector to C, and thus perpendicular to the normal vector n. The planar divergence theorem (or Green's theorem) tells us that

\oint_C F \cdot n \, ds = \iint_D \nabla \cdot F \, dA

where D is the interior of C. This means that the divergence of F cannot have a single nonzero sign throughout D; if it did, the area integral would be nonzero.
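For example, the damped linear oscillator x' = y, y' = -x - c y with c > 0 has divergence \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} = -c everywhere, so by this criterion it has no periodic orbits. A small numeric confirmation of the sign via finite differences (the grid and the choice c = 0.5 are arbitrary):

```python
def f(x, y, c=0.5):
    """Damped oscillator vector field (f1, f2) = (y, -x - c*y)."""
    return y, -x - c * y

def divergence(x, y, h=1e-5):
    """Central-difference approximation of df1/dx + df2/dy."""
    d1 = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    d2 = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return d1 + d2

# the divergence is -c < 0 at every sampled point: no periodic orbits can exist
for i in range(-5, 6):
    for j in range(-5, 6):
        assert divergence(i / 2, j / 2) < 0
```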
Index theory in the plane
Instead of integrating the flux of the vector field through a loop, it turns out to be very useful to integrate the change in angle of a vector field over a loop. The angle is only defined for a nonzero vector field, so there cannot be any equilibria on the loop. Since the angle must return to its starting value, the total change in angle will be a multiple of 2 \pi. If we divide by 2 \pi, we get an integer that is called the index of the vector field over the loop.
In coordinates, the increment of angle change is

d\theta = \frac{f_1 \, df_2 - f_2 \, df_1}{f_1^2 + f_2^2}

If the curve C is parameterized on the interval [0,1], and we write \dot{f}_i for the t-derivative of f_i along the curve, then the index is

\frac{1}{2 \pi} \int_0^1 \frac{f_1 \dot{f}_2 - f_2 \dot{f}_1}{f_1^2 + f_2^2} \, dt
As an example, consider the linear vector field f_1 = -y, f_2 = x, which has counter-clockwise circular orbits. We can parameterize the orbits of this system on [0,1] as x_1 = r \cos(2 \pi t), x_2 = r \sin(2 \pi t), and the index for a circle centered at the origin is then

\frac{1}{2 \pi} \int_0^1 \frac{2 \pi r^2}{r^2} \, dt = 1
For a more complicated example, consider the "monkey saddle" system

x' = x^2 - y^2

y' = -2 x y

We can see from the plot that the index around the equilibrium at (0,0) is -2. We can confirm this from the integral, which is relatively easy to compute on the unit circle x = \cos(t), y = \sin(t) with t \in [0, 2\pi]: the numerator \dot{f}_2 f_1 - \dot{f}_1 f_2 simplifies to -2 (\cos(t)^2 - \sin(t)^2)^2 - 8 \cos(t)^2 \sin(t)^2 = -2 and the denominator simplifies to 1, so the index is \frac{1}{2 \pi} \int_0^{2 \pi} (-2) \, dt = -2.
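Both index computations can be checked numerically by accumulating the wrapped change in the field's angle around a circle; a sketch, taking the monkey saddle field to be (x^2 - y^2, -2xy), consistent with the integrand simplification above (the sample count is arbitrary):

```python
import math

def index(field, n=2000, r=1.0):
    """Winding number of `field` around a circle of radius r at the origin."""
    total = 0.0
    prev = None
    for k in range(n + 1):
        t = 2 * math.pi * k / n
        fx, fy = field(r * math.cos(t), r * math.sin(t))
        ang = math.atan2(fy, fx)
        if prev is not None:
            d = ang - prev
            # unwrap the jump so each increment lies in (-pi, pi)
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

assert index(lambda x, y: (-y, x)) == 1                        # rotation field
assert index(lambda x, y: (x * x - y * y, -2 * x * y)) == -2   # monkey saddle
```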
Hilbert's 16th problem
Hilbert's 16th problem (among his famous 23 problems listed in an address to the International Congress of Mathematicians in 1900) has two parts, the second of which is quite simple to state in our context: is there a bound, depending on the degree of the polynomials, for the number of limit cycles of a two-dimensional polynomial system of ODEs? The answer is unknown even in the degree-two case.
Exercises
- Compute the eigenvalues of the matrix

  A = \left(\begin{array}{rr} 0 & 1 \\ -4 & 0 \end{array}\right)

- Sketch the orbits of the system

  x' = y

  y' = -4 x

- Convert the differential equation x'' + 3x' - 4x = 0 to a system of first order equations and sketch the trajectories of the resulting system.

- Suppose the following system of ODEs models a pair of predator-prey species; the predator (y) hunts in packs, so we use predation terms proportional to y^2:

  x' = x(1-x) - x y^2/10

  y' = -y + 2 x y

  (a) Sketch the nullclines of the system in the x \ge 0, y \ge 0 quadrant.

  (b) What are the possible equilibria and their stability (restricted to the positive quadrant)?

- For the system

  x_1' = -3 x_1

  x_2' = -8 x_1^2 + 2 x_2

  (a) Find the solution explicitly for an arbitrary starting point (a,b).

  (b) What is the set of initial conditions for which the solution will limit to the origin as t \rightarrow \infty?

  (c) What is the set of initial conditions for which the solution will limit to the origin as t \rightarrow -\infty?

- If A is a real 2 by 2 matrix,

  A = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)

  and x is a two-dimensional column vector, what conditions on the entries of A guarantee that the equilibria of x' = A x are isolated?
License
This work (text, mathematical images, and javascript applets) is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Biographical images are from Wikipedia and have their own (similar) licenses.