Now, we know kinematics. We have reviewed the very basics of getting equations of motion, momentum, inertia, frames, and the details of how all this matters. Now we're going to talk about control theory and review it again, because you will be doing control theory. You're going to be making outer control loops with MRPs, quaternions, whatever. You're going to have inner servo control loops; you're going to have the gimbal rate control that you have to do with CMGs to control these motions. You're going to need these loops everywhere, so I want to do a quick review. Like I said, we're going to go beyond just time-independent systems too. We're going to talk about things like functions being decrescent and other topics. Not today, but we'll get to that shortly. After this review, we continue with new material and really build up how to put this together for complicated systems.

This is review. A state vector, x, is equal to x_1 through x_N. Looks so simple. Who can tell me what a state is? This is like prelim preparation. I should charge you extra, I should say. I know the conversation going on right there. Yes. Henry, what do you think? What is a state? Well, a state vector is the minimum number of variables that uniquely describe a dynamical system. Yeah, that's a good way to put it. If I only have attitude motion, I might need three attitude states, and then you might also need three rates. Why would I need both states and rates for rigid body dynamics? Because it's not enough to just describe where the system is; you have to describe how it's changing. Yeah, how it's changing as well, because your equations of motion are second-order. Sometimes you have first-order differential equations: the motor torque equations will end up being a first-order relationship. This torque applies to everything, and the wheel speed is our variable. We will see sometimes second-order, sometimes first-order, but that depends. You have a series of variables that you have to put together.
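As a minimal sketch of the idea above (the variable names and numbers here are illustrative, not from the lecture): for rigid-body attitude motion, the state vector stacks three attitude coordinates and the three body rates, because second-order equations of motion need both "where" and "how it's changing" to predict the future.

```python
import numpy as np

# Illustrative state vector for rigid-body attitude motion:
# three attitude coordinates (MRPs, sigma) plus three body rates (omega).
sigma = np.array([0.1, 0.2, -0.1])      # attitude (Modified Rodrigues Parameters)
omega = np.array([0.01, -0.02, 0.005])  # body angular rates [rad/s]

# Stack them into one state vector x = [sigma; omega]
x = np.concatenate([sigma, omega])
print(x.shape)  # six states: 3 attitude coordinates + 3 rates
```

The same pattern applies if a wheel speed obeys a first-order motor torque equation: that wheel speed just becomes one more entry in x.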
This is typically written in first-order form. To do this, if you have a second-order dynamical system, you define the position and the velocity vector; that's your state space. Then, taking its derivative, you put in the kinematics and you put in the kinetics. What is v-dot, the acceleration? That's where all the F = ma stuff comes in, and there you go. If the system doesn't depend on time, we call it an autonomous system. In a non-autonomous system, all of a sudden there is time dependency. Take a dual spinner. You may be spinning up a dual spinner, and this could be an open-loop thing: you just say, for 30 seconds, apply 0.1 newton-meters of torque onto this wheel and let it go. There's no feedback, there's no state dependency; it just goes on for a certain amount of time and then turns off. That's it. That all of a sudden becomes a time-dependent system.

Sometimes we also have launches. How do you converge? You've launched a satellite and you want to intercept an asteroid. Well, you can guarantee you're going to converge if you launch on the right day. Otherwise, you don't have enough fuel, so all of a sudden stability and convergence have to do with when you launched and where the system was. If you launched too early, the asteroid isn't where you need it to be. If you launched too late, it's already passed your rendezvous point. All of a sudden, there's a time dependency that comes into this, and it's not just a classic feedback thing.

u is our control variable. Typically, we just have it as a function of the states; in this class, I don't think there's any need to make it time-dependent. We will have g as a general function: this could be a linear control, this could be nonlinear. That's why I just write it as a function g. Then, if you plug this u into your equations of motion, what comes out is the closed-loop dynamics; your actual dynamics have changed.
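The steps above can be sketched in code. This is a toy example, not from the lecture: a spring-mass-damper written in first-order form x = [position, velocity], an assumed feedback law u = g(x), and the closed-loop dynamics you get by plugging u into the equations of motion. All gains and parameter values are made up for illustration.

```python
import numpy as np

# Assumed plant parameters: mass, damping, stiffness
m, c, k = 1.0, 0.5, 4.0

def f(x, u):
    """Open-loop dynamics in first-order form.
    Top row is the kinematics (velocity), bottom row the kinetics (F = ma)."""
    pos, vel = x
    return np.array([vel, (u - c * vel - k * pos) / m])

def g(x):
    """A control law u = g(x); here a simple linear state feedback,
    but g could just as well be nonlinear."""
    return -2.0 * x[0] - 1.0 * x[1]

def f_closed(x):
    """Closed-loop dynamics: plug the control into the equations of motion."""
    return f(x, g(x))
```

Note f_closed depends only on x, not on t: this closed loop is autonomous. The open-loop "apply 0.1 N·m for 30 seconds" wheel spin-up would instead make u a function of t, and the system non-autonomous.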
This is what would happen naturally, but you are putting in a control that makes this spin actually change orientation and settle out like this. Think of control as taking what nature gave you and changing it to do something you need for the mission. Off deployment, it's spinning stably here, but I need it to stop spinning and point at Boulder so we can talk and send data back and forth. That's what you need; the control changes that. How do we do it in a stable manner?

An equilibrium is what, in plain words? Here's the math definition. What do you think? [inaudible] I'm so glad you said that. The answer was: if something reaches an equilibrium and is perturbed, it returns back to it. I'm happy you said it, because it's complete nonsense. But it's such a popular piece of nonsense that people keep saying it, especially on prelim exams, so none of you should say that. Nick, what you described is not an equilibrium. What did he describe? Stability. Stability. We all do this so often because we find equilibria and then we always study: is it stable about this point? Is this a good equilibrium? But these are two distinct concepts. Take the planar pendulum: this is one equilibrium, and this is an equilibrium. But not both of them are stable. We prefer this one, and with our feedback control we want to turn our point-at-Boulder, stopped-spinning condition into an equilibrium, because that means all my states and rates go to zero. But those are my states relative to what I'm supposed to be doing. That's a tracking problem; a regulation problem just means stop spinning, bring everything to zero. Those are the two distinctions. Think of an equilibrium as what makes all your state rates vanish; that's the f function, your x-dot = f(x) = 0. The planar pendulum has one here and one here. Sometimes in class we'll have spin equilibria, like a spacecraft. Abby was talking about spinning about a principal axis.
It turns out if you spin about any principal axis, all these gyroscopic terms go to zero, and those are equilibria. But as we also know, not all spins about principal axes are stable: some are stable, some are not. Again, they're two distinct concepts. Good, I'm glad, because now you'll never do that again; that's perfect. We have equilibria, and those are basically conditions that keep the states constant. Now, let's go through these basic definitions for a minute. With this control review, why am I doing it? I want to make sure you guys are back up to speed [inaudible], because it's been a few months since you've seen this if you just took it in the spring, or it might be a few years if you've had it elsewhere.

If I'm talking about neighborhoods, what am I talking about? Gavin, I haven't heard from you. You are talking about a region around the state. Right. Typically in this class, we can just use balls, because we use an L2 norm. The L2 norm is like making this hypersphere around your reference states, and then we can guarantee that within this sphere, or that such a sphere exists, such that if I'm within it, I am guaranteed some stability properties, or instability. If there's a stability theorem, there is an instability theorem. But being ethical control engineers, we try to make stable controls, not be evil and make things tumble and crash, so we'll mostly talk about stability. But if you're doing general dynamical systems and trying to prove that this approach to an asteroid, or this balancing act, is going to be unstable, you can go look it up: it flips a lot of the signs and the arguments become reversed. You can also prove neighborhoods within which something will be unstable, like an inverted pendulum, if you want to study that.

Good, a neighborhood. x_r is our reference, and it can vary with time. Say I have a time-varying reference: my pendulum should follow some square reference trajectory and move along it at 10 centimeters per second.
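The equilibrium-versus-stability distinction above can be made concrete with the planar pendulum example from the lecture. This sketch uses assumed values (length 1 m): both the hanging and inverted configurations satisfy x-dot = f(x) = 0, so both are equilibria, even though only the hanging one is stable.

```python
import numpy as np

# Planar pendulum, state x = [theta, theta_dot], assumed length of 1 m
g_over_L = 9.81 / 1.0

def f(x):
    """State rates: an equilibrium is any x_e with f(x_e) = 0."""
    theta, theta_dot = x
    return np.array([theta_dot, -g_over_L * np.sin(theta)])

hanging  = np.array([0.0, 0.0])    # equilibrium, and it happens to be stable
inverted = np.array([np.pi, 0.0])  # equally an equilibrium, but unstable

# Both state-rate vectors are (numerically) zero -- equilibrium says nothing
# about whether a perturbation returns you there; that question is stability.
print(f(hanging), f(inverted))
```

Whether a perturbed pendulum returns to these points is a separate analysis (linearization or a Lyapunov argument), which is exactly the point being made: f(x_e) = 0 defines the equilibrium, stability is studied about it.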
Does that immediately make my control problem non-autonomous, time-dependent? Why not, Anthony? Because you're not doing anything dependent on time; it's just how you defined the reference. Well, but time does appear: your reference is in there. You're right that it doesn't have to be time-dependent, because what matters is whether time shows up in your closed-loop dynamics. If you're doing tracking, you want x to go to x_r, ideally; we're jumping around a little bit here. What do we do typically? What should go to zero? We like things going to zero. The difference. The difference. We define a new variable, Delta x, which is x minus x_r. While x_r varies with time, your control will just be like this, and that's fine. You can use your classic 5010 stuff; that's why it all worked, because we just defined our motion relative to the reference and then everything was fine. In the closed-loop dynamics you had, there was actually that one line with no explicit time dependency in there for the stability arguments. In the control u, you had x_r, but once you plugged it in, a lot of things canceled and compensated, and you had it purely in terms of Delta x, which was independent of time, and that's why we had uniform stability. We'll talk about that.

Neighborhoods: typically L2; we're not doing anything fancy with other norms here. In other controls work, especially adaptive control theory from [inaudible], she's got a great textbook out on this stuff, they often talk about L1 norms and other norms, for reasons tied to guaranteeing stability with adaptation. You might see other norms depending on what you look at.

What did it mean to be Lagrange stable, in plain words, versus Lyapunov stable? It means that once you enter a neighborhood, you will not leave the neighborhood again. True, but that's true for both Lagrange and Lyapunov stability. What was the difference between the neighborhood for Lagrange and Lyapunov? Independent [inaudible] conditions.
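The tracking-error idea above can be sketched with a deliberately simple toy system (a single integrator x-dot = u; the reference speed and gain are assumed for illustration). Even though the reference x_r(t) varies with time, once the control is plugged in, the error dynamics in Delta x = x - x_r carry no explicit time dependence.

```python
import numpy as np

v_r = 0.10  # assumed reference speed: 10 cm/s

def x_r(t):
    """Time-varying reference: move at constant rate v_r."""
    return v_r * t

def u(x, t):
    """Tracking control: feed forward the reference rate,
    feed back on the tracking error (gain of 2 is assumed)."""
    return v_r - 2.0 * (x - x_r(t))

def delta_x_dot(delta_x):
    """Closed-loop error dynamics. Plugging u into x_dot = u gives
    d/dt(x - x_r) = u - v_r = -2 * delta_x: the reference terms cancel,
    and no explicit t remains -- the error dynamics are autonomous."""
    return -2.0 * delta_x
```

That cancellation is why the stability argument can be made in terms of Delta x alone, which is the mechanism behind the uniform stability remark in the lecture.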
Yes. Lagrange stability is basically boundedness. I'm guaranteeing that if you have a spring-mass-damper system, it will settle. You take a spring-mass-damper and tilt it slightly with gravity: it's going to settle, but it always settles with a 10-centimeter offset. It will settle there regardless of your initial conditions, always the same. You can say: I will converge to within 10 centimeters. That's it. We see that a lot with Lagrange stability. Lyapunov stability basically says: for any final neighborhood, say I want to stay within one degree, I can come up with an initial neighborhood such that you will stay in that final neighborhood. That's like a spring-mass system; it's just an oscillator response. If you don't want to exceed 10 degrees, you just deflect it eight degrees, let go, and it's going to bounce eight degrees back and forth, back and forth. It doesn't converge; it just keeps bouncing. But if you only want to stay within one degree, you get to pick that: you just need initial conditions that deflect less than one degree at rest, let go, and it will bounce. Does that make sense? Lyapunov stability is like marginal stability there: you get to pick a neighborhood, but it doesn't guarantee the response goes to zero. It may or may not; we don't know. If we go further, asymptotic stability actually means these things will asymptotically, with infinite time, get there. Some newer control work also talks about finite-time or fixed-time stability; those are different things, and I'm not covering them in this class. But some of you might want to research those papers for your projects; that's an option, different control problems.
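The eight-degree oscillator example above can be checked numerically. This is a sketch with assumed parameters: an undamped spring-mass released from rest at 8 degrees. The response bounces forever without converging, but never exceeds the initial deflection, which is the Lyapunov (marginal) stability picture: pick the final neighborhood, and the matching initial neighborhood is the set of smaller deflections.

```python
import numpy as np

omega_n = 2.0            # assumed natural frequency [rad/s]
theta0 = np.deg2rad(8.0) # release from 8 degrees, at rest

# Analytic undamped response: theta(t) = theta0 * cos(omega_n * t)
t = np.linspace(0.0, 20.0, 2001)
theta = theta0 * np.cos(omega_n * t)

# Bounded by the initial deflection forever (Lyapunov stable) ...
print(np.max(np.abs(theta)) <= theta0 + 1e-12)
# ... but it never decays toward zero (not asymptotically stable):
# the swing still reaches (nearly) full amplitude in both directions.
print(np.min(theta) < -0.99 * theta0)
```

Adding damping would make the same oscillator asymptotically stable: the amplitude would then shrink toward zero as time goes to infinity.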