At this point, we have discussed the difference between something being stable versus uniformly stable. Now, if your closed-loop dynamics have explicit time dependency, then when you're trying to figure out the stability of the closed-loop system, you can look at the stability by just saying: look, if I start at noon, this thing will stabilize. If you start at three, who knows? You can put it in a special condition. We also mentioned magnetic fields and so forth. You might assume if I'm right over the poles, this is going to work in that region; later on, maybe, maybe not. Then you can argue Lyapunov stability of a time-dependent system, but you have a temporal dependency. That's the key takeaway here. Uniform stability means what, Paul, if you hear uniform? [inaudible] Exactly. The delta, that was the big difference: the delta neighborhoods depended on time. Do I start today? Do I start six months from now? If you have uniform stability, that's a stronger form of stability. You can have uniform stability, uniform Lagrange stability, uniform asymptotic stability. All these things can be defined in a temporally local way, besides spatially local, which we called local and global. But for time we usually just say stability or uniform stability. In essence, it's local time versus global time: uniform essentially means global on the temporal spectrum. That's one way to think of it. These definitions are beautiful, but by themselves they're not very useful, because how do you prove these things? In the math books, if they have a simple system, they can sometimes derive the solution to the system and prove it satisfies these properties. Great. But what if you have attitude dynamics? There is no analytic answer to attitude dynamics, especially not with arbitrary controls thrown in. That approach never works. That's the beauty of the Lyapunov stuff.
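The distinction being pointed at can be written out in the standard epsilon-delta form (my notation, not from the lecture slides): for ordinary stability, the size of the allowed initial neighborhood may depend on when you start, while uniform stability demands one neighborhood that works for every start time.

```latex
% Stability: delta may depend on the initial time t_0
\forall \varepsilon > 0,\ \exists\, \delta(\varepsilon, t_0) > 0:\quad
  \|x(t_0)\| < \delta \ \Rightarrow\ \|x(t)\| < \varepsilon
  \quad \forall t \ge t_0

% Uniform stability: a single delta works regardless of t_0
\forall \varepsilon > 0,\ \exists\, \delta(\varepsilon) > 0:\quad
  \|x(t_0)\| < \delta \ \Rightarrow\ \|x(t)\| < \varepsilon
  \quad \forall t \ge t_0,\ \forall t_0
```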
Lyapunov's direct method doesn't require you to actually find an analytic solution to the differential equation to prove that it satisfies all these properties. You sidestep it and go: this is what we want. Lyapunov proved that if you can find a positive definite function with all these properties, take a derivative, and the result is at least negative semi-definite, then you get these guarantees. You just plug in differential equations instead of having to solve differential equations. That was really cool. But for uniform stability, we're going to have to adjust that. We're going to need slightly stronger arguments, and the problem will always be time. Let's look at what's happening here. A positive definite function here is a scalar continuous function like before. But now, instead of just depending on the states that we care about from a control or stability problem, the angle, or angle and rates and so forth, you may also have time in your Lyapunov function. That's fine, because you might have to include time to make all the math work out and get functions with the right mathematical properties. It is said to be locally positive definite or positive semi-definite; I'm writing both in one statement, either it's greater than, or greater than or equal to. It used to be just greater than zero, and that was fine. Now it says it has to be greater than this W_1 function. W_1 only depends on the states, that's the key, and it is positive definite in our classic positive definite sense. What does this mean graphically? We draw our axes. You're going to have some positive definite function, and anything positive definite needs to be zero at the origin; otherwise it's just a positive function, not a positive definite function. Let's say this is your nominal, and then the actual value varies around it, like a sine wave. These fluctuations can go up and down, so depending on when you look at this function, it might look like this.
Or with these variations, it might look like this. These are some variations that are time-dependent. What this definition says is: this is only a positive definite function if you can come up with a W function, which only depends on the states, that is a lower bound. What does that mean? It means that with your time-dependent Lyapunov function, time can never make your Lyapunov function go to zero. That's essentially what it means. If that's true, then you should be able to come up with, it might be a super shallow lower bound, but something positive, that's the requirement: zero at the origin and non-zero in that neighborhood. And you can always do local and global, everything like before; I'm not going to go through the global and local arguments again. Does that make sense? If you have a time-dependent Lyapunov function that is positive definite, what you've told the audience is: time will not drive my function to zero in this neighborhood. Otherwise, you would have what? You would have the yellow curve, right? It does something, and then there's a variation, and you go, wow, that was a bad day, something happened, and that variation completely destroyed your definiteness and drove it to zero, and all this good stuff, right? Positive definite means always positive, and time doesn't matter: in that neighborhood it will always be positive. Let me just kill that one so I don't have to switch between them. Okay, cool. That's the slight modification: instead of greater than zero, it's greater than a classic positive definite function. These are some simple examples. You can have, for example, a V function that is x^2 over 2. It only has states in it, and I'll show you one example: sometimes, even though our closed-loop dynamics might have time in them, I might be able to write a V function that is positive definite and doesn't have time in it explicitly.
If I write it as x^2 over 2, can you come up with a lower bound that guarantees this V is always greater than or equal to that function? What do you think, Andrew? [inaudible] Yeah. x^2 over 2 times 0.99999 or something. And since it says greater than or equal, it could even be equal to it; if you need it to be strictly less, just make it a smidge less, right? It's trivial: in that case the condition is trivially satisfied, and it is a positive definite function, because the bound only depends on the states. Easy. What about this one though, e to the minus t times x^2 over 2? What's the definiteness of this function, Joao? Positive definite? Sorry, is it positive definite? Give me a W_1 that is positive definite and guaranteed to be less than that. What is it? The concern is the exponential term, right? Is this ever going to reach zero, at least in finite time? No. But will you be able to come up with a lower bound such that e to the minus t will never force this function beneath that bound? No. So what is the definiteness of this one? It's positive semi-definite, essentially, right: it can go to zero. You could even have a zero function; a V that is equal to zero is semi-definite, zero at the origin and zero or greater everywhere, you could do that. I think I'm missing a word on the slide: positive definite should have a bracketed semi there as well. That makes sense, because otherwise I'm always greater than or equal, and just touching that W_1 doesn't matter. I think I'm missing a semi there too; there are some subtleties that happen. But this example is not positive definite, because you cannot make this argument: no matter what bound you pick, give it enough time and the exponential is going to force this function down beneath it. The exponential term will win. The decrescent property is the opposite. We were just discussing positive definite; again, V depends on x and t now, and positive definite meant t will never make it go to zero.
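As a numerical sanity check of the two examples just discussed (my own sketch, not from the lecture): V1(x, t) = (1 + e^(-t)) x^2/2 stays above the state-only bound W1(x) = x^2/2 for all time, while V2(x, t) = e^(-t) x^2/2 eventually dips below W1 and any positive scaling of it.

```python
import math

def V1(x, t):
    # Time-varying candidate whose time term can never drive it to zero
    return (1.0 + math.exp(-t)) * x**2 / 2.0

def V2(x, t):
    # The decaying exponential defeats any state-only lower bound
    return math.exp(-t) * x**2 / 2.0

def W1(x):
    # State-only, classically positive definite lower-bound candidate
    return x**2 / 2.0

x = 1.0
# V1 stays at or above W1 no matter how long we wait:
assert all(V1(x, t) >= W1(x) for t in range(100))
# V2 eventually dips below W1, and below any positive scaling of it:
assert V2(x, 20.0) < W1(x)
assert V2(x, 20.0) < 1e-3 * W1(x)
```

The same check fails for V2 no matter how shallow you make W1, which is exactly why V2 cannot be called positive definite in the time-varying sense.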
Decrescent means t never makes it go infinite, essentially; that's the way I think of it. Because we can now come up with another positive definite upper bound, W_2, we can guarantee that this V function is not going to exceed that. What that means is, going back to red: this is the lower bound, and you can come up with another bound, and I'll say this is your W_2 of x. Whatever happens (I'm going to get rid of the yellow, we didn't want yellow), whatever happens there, it's happening above a minimum and below a maximum. You have bounded the temporal variations, and this happens a lot. If you think of magnetic torque influences: I don't know exactly what the torque is going to be, but I know it cannot be bigger than this. There's a max and a min torque, positive and negative, that could be applied. Often we have temporal effects that are somewhat bounded, and we can adjust for that like this. Even if you have atmospheric torques, or you're looking at your orbit as a point mass with atmospheric forces: there's a disturbance acting on it, and depending on the attitude this force can vary, and your attitude depends purely on time and mission objectives, so it's a temporal thing. But you can still bound it: is it flying face-on against the atmosphere, or edge-on? One is probably the worst case, the other is probably the minimum case. I can come up with the variations and bound them above and beneath. This is a common thing we find in the engineering applications that we have. Let me put that next to it. There we go. Lyapunov's direct method for stability was just: if you can find a V function that is a Lyapunov function, then you've proven the system is stable. That required V to be positive definite with continuous partial derivatives, and V dot to be negative semi-definite. Here's how that's modified: now, for uniform stability, you're going to have to find a V function, which can depend on time itself.
But it has to be positive definite and decrescent. Being PD and decrescent means there's a lower and an upper bound; that's what we're talking about here with this function, you can lower- and upper-bound it. This is the change: just being positive definite is not enough; the V function has to be positive definite and decrescent. Your V dot, which you get by taking a derivative where V depends on x and t, so there's an explicit partial time derivative plus the partial of V with respect to x times your differential equations, has to be less than or equal to zero. This looks like the semi-definite stuff that we talked about; it's not exactly that, but it's close. Then the system is uniformly stable. This is what you can argue. Again, these conditions are sufficient, not necessary: if you cannot find such a function, it doesn't mean the system is not uniformly stable. It just means you don't have good enough math to really prove it for this system; maybe you didn't figure out the right Lyapunov function. It can get very complicated. People do this on power plants and other weird systems, and you have to come up with the right functions. So that's uniform stability. If you want asymptotic stability, we required V dot to be negative everywhere except at the origin: it would not go to zero unless you have converged. That negative definiteness meant we will come to rest and actually converge. Well, with time, we need a slightly stronger argument. Essentially we have to say that V dot is negative definite in a stronger sense. Just like being greater than zero was not enough for positive definiteness, and V had to be lower-bounded by a state-dependent positive definite function W_1, here I have another state-only positive definite function, W_3, and V dot has to be less than or equal to minus W_3. Minus W_3 is a negative definite expression. So V dot is not just less than zero, it's less than or equal to minus W_3, and equality is fine: even if it rides along that bound, it will still converge. Then the system is uniformly asymptotically stable.
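Collected in one place, the conditions just described read as follows (standard form; W_1, W_2, W_3 are state-only, classically positive definite functions):

```latex
W_1(x) \;\le\; V(x, t) \;\le\; W_2(x)
  \quad \text{(positive definite and decrescent)}

\dot V(x, t) \;=\; \frac{\partial V}{\partial t}
             + \frac{\partial V}{\partial x}\,\dot x \;\le\; 0
  \quad \Longrightarrow \quad \text{uniformly stable}

\dot V(x, t) \;\le\; -W_3(x)
  \quad \Longrightarrow \quad \text{uniformly asymptotically stable}
```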
Make sense? Again, this whole world, there's lots of stuff; even for linear systems, how do you prove stability, there are whole books on just this topic. Things get a lot more complicated, but I wanted to throw some highlights in there and talk a little bit about time dependency, because we often do a tracking problem. In this class, for the most part, we do a coordinate change where our closed-loop tracking errors somewhat hide that we're doing motion with respect to maybe a time-dependent system. Again, to highlight: if your closed-loop dynamics have time in them explicitly, and you can come up with a V function that doesn't depend on time but is positive definite, take a derivative, plug in the differential equations, and maybe something happens to the time part, those terms cancel, and you end up with a good V dot that only depends on states. Then you can use your classic definitions and still argue uniform stability, because every positive definite V function that only depends on states is inherently positive definite and decrescent, and the V dot will satisfy that condition. So it's a nice thing. Sometimes this works that easily, not always. So let's do one simple example. In the exercises, you're going to do, I believe, a first-order system, I forget the exact problem, and look at stability. Here's a classic. We have the natural system, a spring-mass system. I'm doing a control, and my control is going to be minus c x dot; we're just adding a damping term. If you have some rates, bring it to rest, and where is it going to settle? If c is a constant, you know from the linear spring-mass-damper stuff that it's going to settle at the origin always, without any other disturbances. But here, all of a sudden, our c varies with time. Maybe your damping, if this is a wheel with grease, has coefficients that depend on how long you've been running.
Thermal effects may warm up the grease or cool it, and now it's sluggish, or maybe heat starts breaking down your grease properties and the friction behavior changes. There are all kinds of reasons why this could be time-dependent, and you're trying to figure out: am I still going to be stable, or is anything weird going to happen? Once we put this u into the system, our closed-loop dynamics do become explicitly time-dependent. This is an easy, quick example of how to look at this. Here I have to come up with a V function, and I've chosen, like we did before with the spring-mass-damper system, kinetic energy plus the spring potential energy, and you notice this only depends on the states, not time. So trivially you could take this V, multiply it by 0.99 for a lower bound and by 1.01 for an upper bound, and prove that it is positive definite and decrescent. That's good, no problem there. But when you differentiate it and plug in the equations of motion, like we did before with a spring-mass-damper, you get x dot times u, and this is the u, and you end up with a V dot that is minus the damping coefficient, which depends on time, times x dot squared. Now, what is happening here? What can we argue? The statement here is: if c is greater than or equal to zero, the system is uniformly stable. For uniform stability, we needed to prove that V is positive definite and decrescent, trivially true in this case, and V dot had to be what? Negative definite? No, for stability, V dot just has to be less than or equal to zero. That's all we require: negative semi-definite. That is the case here, and we're fine. Even if the damping goes to zero, you've lost your damping, but the system would just keep oscillating. It won't grow, it doesn't go unstable; so if you lose damping at any point, you are actually still Lyapunov stable.
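As a numerical sketch of this argument (my own illustration, not from the lecture; I assume unit mass and spring constant and pick c(t) = 1 + sin t, which dips to zero but never goes negative): plugging u = -c(t) x dot into m x ddot = -k x + u gives the closed loop m x ddot + c(t) x dot + k x = 0, and evaluating V = m x dot^2/2 + k x^2/2 along the trajectory shows V never increases, even at the instants where the damping vanishes.

```python
import math

def simulate(t_end=20.0, dt=1e-3, x0=1.0, v0=0.0, m=1.0, k=1.0):
    """Euler-integrate m*xdd + c(t)*xd + k*x = 0 with c(t) = 1 + sin(t).
    Returns the history of the Lyapunov function V = m*v^2/2 + k*x^2/2."""
    x, v, t = x0, v0, 0.0
    V_hist = []
    while t < t_end:
        c = 1.0 + math.sin(t)          # damping dips to zero, never negative
        a = (-k * x - c * v) / m       # closed-loop acceleration
        x += dt * v
        v += dt * a
        t += dt
        V_hist.append(m * v**2 / 2.0 + k * x**2 / 2.0)
    return V_hist

V = simulate()
# V is non-increasing (up to integration error) and ends far below its start
assert all(V[i + 1] <= V[i] + 1e-5 for i in range(len(V) - 1))
assert V[-1] < 0.01 * V[0]
```

The small tolerance in the first assertion only absorbs Euler integration error; the continuous-time V dot = -c(t) x dot^2 is exactly less than or equal to zero.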
That's what you're arguing here for this system. Even if at some point you lose all the damping, your system is still uniformly stable; nothing will drive it unstable in this case. Not a super strong argument, but that's what's happening. So why can't we argue this is asymptotically stable? We had this proof here, and this is where you see the complications coming in: V dot has to be less than or equal to minus W_3. Why can't we apply that here? I could say: instead of assuming c is greater than or equal to zero, what if c is strictly greater than zero? You always have damping, but sometimes you may have more damping and sometimes less. You can come up with a lower bound on your damping and say, I will have at least this much, and maybe more. Now the time dependency will never make this actually go to zero; it may make it small, but not zero. Wouldn't that argue this is asymptotically stable? Intuition says yeah, but I feel it's more complex than that, because it's not bounded by that W_3. Yeah, you're focused on the time part, but the argument actually has nothing to do with time. It goes back to basics. If this c were constant and V dot is minus c x dot squared, is this negative definite or negative semi-definite? Definite? No. What are the states that we care about, Julian? The state plus- State and rates. Don't forget the basics. That's why, in this case, at first glance you might go, "Oh, cool. As long as I have some damping, I can come up with whatever the lower bound is; my V dot will be at least this much negative, maybe more." True, and this system, if you have positive damping, will in fact converge, which is cool. But the arguments up to now can't just be applied, because in my V dot I can put in any state x with zero rate and make it go to zero, so I have to be more careful in how I apply that. You need extra steps, basically.
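The semi-definiteness point can be made concrete with a two-line check (a sketch with an assumed constant c): negative definiteness in the full state (x, x dot) would require V dot to be strictly negative everywhere except the origin, but any state with zero rate makes it vanish.

```python
def V_dot(x, x_dot, c=1.0):
    # V dot for the spring-mass example with constant damping c
    return -c * x_dot**2

# A nonzero state with zero rate still gives V_dot = 0 ...
assert V_dot(x=5.0, x_dot=0.0) == 0.0
# ... while nonzero rates do give strictly negative values:
assert V_dot(x=0.0, x_dot=1.0) < 0.0
# So V_dot is only negative SEMI-definite in the full state (x, x_dot),
# and the minus-W_3 bound for uniform asymptotic stability fails.
```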
Anyway, the takeaway point: you're not now masters of time-dependent closed-loop dynamics; this is just a glimpse at it. It gets complicated. There's a lot more stuff, but if you're interested, if you're thinking about projects, these are great ways to take a paper that deals with something like this and study up on it, depending on what you want to do for your research. But I wanted at least to throw out some hints there.