[MUSIC] Welcome to week two. We'll start talking about controls this week. Let's go back and think about a simple dynamic system. Here's the block on a frictionless plane again; now we've set its mass to 1, and we've called the force that you can apply to it u. Again, using Newton's law, you know that x double dot = u. Now what we're going to do is reinterpret this and think about u as an input of some kind, and x as an output of some kind. This kind of thinking gives rise to a field of engineering mathematics called control theory, where you have some task that's expressed in terms of the outputs and you have to design the input to perform the task. Let's assume, for our example in this lecture, that we want to send x to 0. An intuitive idea for how this might be accomplished comes from a very familiar physical object: a spring. The idea is that we attach a spring to the block and configure things so that the rest length of the spring is attained when the block is at the goal position. Here's a cartoon describing the situation, and the spring constant of this Hooke's-law spring is kp. Writing down Hooke's law, you can see that the force exerted by the spring is proportional to the error in position: u_p(x) = -kp x. You can try using MATLAB to integrate this ODE, x double dot = u_p(x). What you should see is behavior very much like the simple harmonic oscillator from last week. You can increase the stiffness, kp, to approach the goal position, 0, faster. But the problem is that the block doesn't really go and stop at 0; it overshoots. What we need to do now, again in very physical terms, is add some dissipation so that the block actually slows down and stops at the desired position. We need to remove energy, and usually we can do that by adding some kind of friction force. The easiest to model is viscous friction, which is the kind of friction that occurs in fluids.
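The spring-only experiment described above can be sketched in Python as a stand-in for the MATLAB exercise. The gain kp, the initial condition, and the semi-implicit Euler integrator are all illustrative choices, not from the lecture:

```python
# Integrate x'' = u_p(x) = -kp * x and watch the block overshoot the goal at 0.
# kp, x0, dt, t_end are illustrative values for this sketch.
def simulate_spring(kp=4.0, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
    x, v = x0, v0
    xs = [x]
    for _ in range(int(t_end / dt)):
        u = -kp * x          # virtual spring force (Hooke's law, rest length at the goal 0)
        v += u * dt          # mass = 1, so x double dot = u
        x += v * dt
        xs.append(x)
    return xs

xs = simulate_spring()
# The block reaches 0 but does not stop there: x changes sign, i.e. it overshoots
print(min(xs) < 0 < max(xs))   # True: oscillates about the goal like a harmonic oscillator
```

Raising kp makes the first approach to 0 faster, but the oscillation never dies out, which is exactly the problem the dissipation term fixes next.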
And the force exerted by viscous friction can be modeled very simply as being proportional to the velocity of the block. Here's u_d, the dissipative force, and it's proportional to x dot: u_d = -kd x dot. You can simply add this dissipative force to your spring force from last time. And now you can try to tune your kp and kd so that you get the desired behavior: the block approaches 0 fast, but it doesn't overshoot. You had a very similar exercise in your aerial robotics course, and we are just revisiting those ideas. The physical analogies of the spring and the dissipation used here are valuable not just for the control idea, but also for thinking about how to prove stability. In particular, the physical spring and damper also invoke a concept of mechanical energy. But remember that the spring and the damper were both added by you, the controls designer, and they are really best thought of as a virtual spring and a virtual damper. Accordingly, the total mechanical energy here is really a virtual total mechanical energy, not the energy of any physical spring and damper. The great thing about PD control is that it's very easy to prove stability in terms of this total mechanical energy. Take the time derivative of the eta expression, eta = (1/2) x dot squared + (1/2) kp x squared, and substitute in x double dot from the previous slide; you get eta dot = -kd x dot squared. Notice that the total energy, eta, is monotonically decreasing, since eta dot is less than or equal to zero. Which means that eta defines some kind of hill that you're rolling down. Have your MATLAB simulation plot eta and eta dot. Here's a visualization from the mobility course, where we talked about similar ideas. The basin of this energy hill, which was created virtually by you, the controls designer, is a very powerful way to think about controls. The PD controller created this artificial energy basin. Let's talk about where PD control is applicable. In the previous slides, we talked about a linear double-integrator system, and we applied a PD controller, which is a linear function of the state.
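The PD controller and the energy argument can be sketched together in Python (the lecture uses MATLAB; the gains and step size here are illustrative). The virtual energy eta = (1/2) x dot squared + (1/2) kp x squared should decrease monotonically along the trajectory, since eta dot = -kd x dot squared:

```python
# PD control of the double integrator (mass = 1) plus the virtual-energy check.
# kp, kd, x0, dt, t_end are illustrative values for this sketch.
def simulate_pd(kp=4.0, kd=3.0, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
    x, v = x0, v0
    etas = []
    for _ in range(int(t_end / dt)):
        u = -kp * x - kd * v                   # virtual spring + virtual damper
        etas.append(0.5 * v**2 + 0.5 * kp * x**2)  # virtual total mechanical energy
        v += u * dt                            # mass = 1, so x double dot = u
        x += v * dt
    return x, v, etas

x, v, etas = simulate_pd()
print(abs(x) < 1e-2)   # True: the block settles at the goal 0 without sustained oscillation
print(all(b <= a + 1e-9 for a, b in zip(etas, etas[1:])))  # True: eta rolls downhill
```

Plotting etas over time shows the "energy hill" picture directly: the trajectory slides down into the artificial basin the controller created.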
For a linear system, stability proofs and implementation are both quite simple. But that doesn't mean these ideas are useless in nonlinear systems. Here's a nonlinear system, x double dot = f(x, x dot) + g(x, x dot) u, where f and g are nonlinear functions of their inputs. If g is invertible, note that you can just set u to be this function, u = g inverse times (v - f), and we're back to x double dot = v. This process is sometimes called feedback linearization or inverse dynamics. You can set v to be your PD controller from last time and achieve identical behavior. However, note that u, the actual input that you are giving to your system, is a complicated function of g and f. When you're implementing these kinds of algorithms on a physical platform, errors in the state estimates of x and x dot, and also parametric uncertainties in f and g, can make this cancellation inaccurate, and that sometimes has disastrous results. We'll try to focus on control strategies that are easier to implement. Let's go back and think about our PD controller. Recall from your mobility course that nonlinear systems close to their equilibria behave like linear systems. This was the hyperbolic approximation that you talked about before. Let's think about this nonlinear plant, where f and g are chosen to be these particular nonlinear functions of x. And we'll set u to be the PD controller from earlier in this lecture. If you simulate the system in MATLAB, which I've done, you'll see that the behavior of the nonlinear system resembles the behavior that you saw from the linear system, the double integrator, before. But the system is not linear, and your input u is a very simple function. We're trying to use these ideas as much as possible.
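The feedback-linearization step can be sketched in Python. The particular f and g below are not the ones on the lecture slides; they are illustrative choices picked so that g stays away from zero and is therefore invertible:

```python
import math

# Illustrative nonlinear plant x'' = f(x, xdot) + g(x, xdot) * u (not the slide's f, g).
def f(x, xdot):
    return math.sin(x) + xdot * abs(xdot)   # nonlinear drift term
def g(x, xdot):
    return 2.0 + math.cos(x)                # always >= 1, so it is invertible

def simulate_fl(kp=4.0, kd=3.0, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v_cmd = -kp * x - kd * v             # PD controller on the virtual input v
        u = (v_cmd - f(x, v)) / g(x, v)      # cancel f, invert g: x'' becomes v_cmd
        v += (f(x, v) + g(x, v) * u) * dt    # integrate the true nonlinear dynamics
        x += v * dt
    return x

print(abs(simulate_fl()) < 1e-2)   # True: identical behavior to the linear PD system
```

Note that u here is a complicated expression involving both f and g at every step; any error in those models, or in the estimates of x and x dot, corrupts the cancellation, which is exactly the implementation concern raised above.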