where θ and θ' constitute the state vector x, the force u is the only component of the input vector, and the angle θ makes up the output vector y.
This creates the state vector in Mathematica.
This sets the input and output vectors.
To obtain the functions f and h, we observe that their Mathematica equivalents f and h are simply the derivative D[x, t] and the output vector y expressed via the state and input variables.
The expression for the derivative contains an undesirable variable, the acceleration of the cart, which is neither a state nor an input variable.
The replacement rule stored as sln helps to get rid of this variable.
The expression for function h is trivial.
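The steps above might be sketched as follows. The equations of motion here are the standard cart-pendulum model, and all names (cart for the cart position, force for the input, as well as sln, f, and h) are our assumptions, not necessarily those of the original notebook:

```mathematica
(* State, input, and output vectors *)
x = {θ[t], θ'[t]};
u = {force[t]};
y = {θ[t]};

(* Standard equations of motion for a pendulum of mass m and length l
   mounted on a cart of mass M; g is the gravitational acceleration *)
cartEq = (M + m) cart''[t] + m l θ''[t] Cos[θ[t]] -
    m l θ'[t]^2 Sin[θ[t]] == force[t];
pendEq = l θ''[t] + cart''[t] Cos[θ[t]] - g Sin[θ[t]] == 0;

(* Solving both equations together eliminates the cart
   acceleration cart''[t] *)
sln = First[Solve[{cartEq, pendEq}, {θ''[t], cart''[t]}]];

(* f is the state derivative with cart''[t] eliminated;
   h is simply the output vector *)
f = Simplify[D[x, t] /. sln];
h = y;
```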
So far we have used only built-in Mathematica functions. Now it is time to make the library of functions provided in Control System Professional accessible.
This loads the application.
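A sketch of the loading step, assuming the standard package context for Control System Professional:

```mathematica
Needs["ControlSystems`"]
```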
For most Control System Professional functions, the input state-space model must be linear. Therefore, our first task will be to linearize the model, that is, to represent it in the form x'(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t).
This is the purpose of the function Linearize, which, given the nonlinear functions f and h and the lists of state and input variables, supplied together with their values at the nominal point (the point in the vicinity of which the linearization will take place), returns the control object StateSpace[a, b, c, d], where the matrices a, b, c, and d are the coefficients A, B, C, and D.
This performs the linearization.
Mapping the built-in Mathematica function Factor onto components of the state-space object simplifies the result somewhat. (Here /@ is a shortcut for the Map command.)
TraditionalForm often gives a more compact representation for control objects.
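A sketch of these three steps, using the f and h defined earlier and taking the nominal point to be the upright equilibrium (θ = 0, θ' = 0, zero force) -- an assumption on our part:

```mathematica
(* Linearize about the vertical equilibrium *)
ss = Linearize[f, h, {{θ[t], 0}, {θ'[t], 0}}, {{force[t], 0}}];

(* Map Factor onto the state-space matrices; Factor is Listable,
   so each matrix entry gets factored *)
ss = Factor /@ ss;

(* Display the control object more compactly *)
TraditionalForm[ss]
```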
Now let us design a state feedback controller that will stabilize the pendulum in the vertical position near the nominal point. One way to do this is to place the poles of the closed-loop system at some points p1 and p2 in the left half of the complex plane.
In this particular case, Ackermann's formula is used. The result is a matrix comprising the feedback gains.
Note that we were able to obtain a symbolic solution to this problem and thus see immediately that, for example, only the first gain depends on the gravitational acceleration g and so would be affected should our pendulum get sent to Mars (and the change would be linear in g). We also see that the first gain depends on the product of the pole values, the second gain on their sum, and so on.
To check if the pole assignment has been performed correctly, we can find the poles of the closed-loop system, that is, the eigenvalues of the matrix A - B K, where K is the matrix of feedback gains.
This extracts the matrices from their StateSpace wrapper.
We see that the eigenvalues of the closed-loop system are indeed as required.
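A sketch of this check under our naming:

```mathematica
(* Strip the StateSpace wrapper to get the four matrices *)
{a, b, c, d} = List @@ ss;

(* Closed-loop poles; expect {p1, p2} up to ordering *)
Eigenvalues[a - b.k] // Simplify
```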
With Control System Professional, we can also design the state feedback using the optimal linear-quadratic (LQ) regulator. This approach is more computationally intensive, so it is advisable to work with inexact numeric input. For convenience in presenting results, we switch to the control print display.
This is the particular set of numeric values (all in SI) we will use.
Here our system is numericalized.
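The numeric values below are illustrative stand-ins, not necessarily those used in the original notebook:

```mathematica
(* Hypothetical SI values: cart mass, pendulum mass,
   pendulum length, gravitational acceleration *)
values = {M -> 10., m -> 1., l -> 1., g -> 9.81};

(* Numericalize the linearized system *)
ssn = N[ss /. values]
```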
Let the weighting matrices q and r be identity matrices.
LQRegulatorGains solves the algebraic Riccati equation and returns the corresponding gain matrix.
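A sketch of the LQ design step, with the weighting-matrix and result names (q, r, klq) being our assumptions:

```mathematica
q = IdentityMatrix[2];   (* state weighting *)
r = IdentityMatrix[1];   (* control weighting *)

(* Optimal gains for the numericalized system *)
klq = LQRegulatorGains[ssn, q, r]
```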
Here are the poles our system will possess when we close the loop.
Let us make some simulations of the linearized system as well as the original, nonlinear system stabilized with one of the controllers we have designed--say, the one obtained with Ackermann's formula. We start with the linearized system and compute its transient response for initial values of θ of 0.5, 1, and 1.2 rad, assuming in all cases that the initial angular velocity θ'(0) is zero. The same initial conditions will then be used for the nonlinear system, and the results will be compared.
Here is the list of initial conditions for θ.
This is the linearized system after closing the state feedback loop. The function StateFeedbackConnect is described later, together with other utilities for interconnecting systems.
To compute how the initial condition in θ decays in the absence of an input signal, we can use OutputResponse.
In this particular case, the input arguments to OutputResponse are the system to be analyzed, the input signal (which is 0 for all t), the time variable t, and the initial conditions for the state variables supplied as an option. The initial value for θ is denoted as angle.
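A sketch of these two steps, assuming the linearized system ss and the gain matrix k from earlier (both names are ours):

```mathematica
(* Close the state feedback loop around the linearized system *)
clsys = StateFeedbackConnect[ss, k];

(* Unforced response: zero input, initial state {angle, 0} *)
response = OutputResponse[clsys, 0, t,
   InitialConditions -> {angle, 0}]
```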
Here is the plot of the previous function for the chosen initial values of the angle. We store it as plot for future reference.
The case of the actual nonlinear system stabilized with the linear controller is more interesting, but requires some work on our part. We note that when the control loop is closed, the input variable--the force applied by the motor of the cart--tracks changes in the state variables θ and θ'.
First we prepare the input rules. As we have only one input, there is only one rule in the list.
Recall that we store the description of our nonlinear system as sln.
Now we numericalize the rule, substitute the feedback rules, and, to convert the rule to an equation, apply the head Equal to it (@@ is the shorthand form of the Apply function). The resultant differential equation is labeled de.
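A sketch of these steps, assuming k holds the Ackermann gains and values holds the numeric parameters introduced earlier:

```mathematica
(* Feedback rule: the single input tracks the state variables *)
fbrules = {force[t] -> -First[k.{θ[t], θ'[t]}]};

(* Numericalize sln, substitute the feedback, and turn the θ''[t]
   rule into an equation with Equal @@ *)
de = Equal @@ First[N[sln /. values] /. fbrules]
```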
This solves the differential equation with the initial conditions for every value in the list one by one and returns a list of solutions. The time t is assumed to vary from 0 to 4 seconds.
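This step might look like the following (the list name init and result name solutions are ours):

```mathematica
(* Solve the closed-loop nonlinear ODE for each initial angle *)
init = {0.5, 1, 1.2};
solutions =
  (NDSolve[{de, θ[0] == #, θ'[0] == 0}, θ, {t, 0, 4}] &) /@ init;
```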
In several graphs that follow, we show the results for θ(0) = 0.5 as a solid line, for θ(0) = 1 as a dash-dotted one, and for θ(0) = 1.2 as a dashed line. This changes the Plot options to reflect that convention and adjusts a few other plot options away from their automatic values.
The results for θ are now presented graphically. We can see that the controller succeeds in driving the pendulum to its equilibrium position for all three initial displacements. The plot is stored as plot1.
We can also see that, once the angle θ[t] has come to zero, its derivative θ'[t] vanishes as well. This means that the pendulum is not about to oscillate around its equilibrium position, at least not when driven from the displacements we are considering for now.
Here is the plot of input force versus time.
Finally, we compare the graphs of θ for the nonlinear and linear systems and see that only the case of the smallest initial displacement, θ(0) = 0.5, is treated adequately by the linear model.
The transient responses suggest that our linear feedback is not sufficiently prompt in reacting to moderate and large initial displacements, and that may cause problems for still larger angles. The case θ(0) = 1.2 rad is almost critical. Indeed, for a slightly larger initial displacement, the system becomes hard to control.
We solve the same equation for another set of initial conditions.
In the following graphs, we will plot the results for the new initial condition as a solid line and one of our previous curves as a dashed line for comparison. This sets the new options.
We find that the pendulum can still be driven to the vertical position, but now it oscillates badly around the equilibrium point.
Of course, the cart in our particular model of the pendulum would not allow the pendulum to rotate in circles, but, for the sake of argument, we will assume that it would.
The variations in the derivative θ'[t] become more complex and far more intense.
This is the force the motor must exert to maintain the process.
The real actuator may not be up to the task. If the feedback saturates at the maximum force the motor can provide, the controller fails to balance the pendulum.
To model this situation, we create a clip function.
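A minimal sketch of such a function (the name clip matches the text; the argument pattern is our choice):

```mathematica
(* Clip a value to the interval [-max, max] *)
clip[x_, max_] := Min[Max[x, -max], max]

clip[1.5, 1.]    (* → 1. *)
clip[-2, 1.]     (* → -1. *)
```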
Here is how it works: everything beyond the interval from the negative to the positive clipping limit gets cut off.
We use clip to saturate the feedback.
This is the new differential equation for θ under the saturated feedback.
This solves it.
Finally, we plot the state response--θ as a solid line and its derivative θ' as a dashed one. It is clear that the controller fails to return the pendulum to its equilibrium position.