[MUSIC] Welcome to this module. This course is mainly about learning ways to create real-time schedules and about which scheduling method works best in which case. So let's now actually start looking into how to create a real schedule from a set of tasks. In this lesson, we will introduce the most basic form of scheduler, the static scheduler, as a basis on top of which to build more advanced schedulers in other lessons. Static schedulers are very common in real-time systems. They are both simple and predictable, which are two desirable properties. And we'll go through how such schedulers work.

The static schedule is stored in a table as a set of pairs (t, T), where the small t is the scheduling time and the big T is the task, or the lack of a task, called a hole, starting at time t. The scheduler is the one placing the jobs on the timeline, and it should do so without violating the deadlines of any tasks. It is, however, possible to easily check whether a schedule is not going to meet all deadlines without scheduling all the jobs, so that systems can be quickly redesigned if it's obvious that no schedule is possible without violating the deadlines.

A simple test for theoretically verifying a real-time system is to calculate the utilization. This compares the amount of time the tasks occupy the CPU with how much time is left as holes. The utilization depends on the execution times of the tasks and on how often the tasks recur, that is, on their periods. With a short period and a high execution time, a task occupies a large amount of CPU time, and the CPU gets more utilized. The utilization is therefore defined as the sum of all tasks' execution times divided by their respective periods. And if this sum is greater than 1, then the system is guaranteed not to be feasible.

We can look at an example of this. Here we have a task set with tasks T1, T2, and T3. The periods are 3, 4, and 10, and the execution times are 1, 1, and 2. So what is the utilization?
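The utilization test can be sketched in a few lines of code. This is a minimal sketch using the task parameters from the example above; the variable names are my own, not from the lesson.

```python
# Utilization test for the example task set from the lesson.
periods = [3, 4, 10]        # p_i for tasks T1, T2, T3
exec_times = [1, 1, 2]      # C_i for tasks T1, T2, T3

# U = sum over all tasks of C_i / p_i
U = sum(c / p for c, p in zip(exec_times, periods))
print(f"U = {U:.4f}")       # prints "U = 0.7833"

# U > 1 proves the task set is infeasible; U <= 1 alone proves nothing.
if U > 1:
    print("Guaranteed not feasible")
else:
    print("Not ruled out; feasibility must still be verified")
```

Note that the test only rejects task sets; a result below 1 does not by itself certify the schedule, as the lesson explains next.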
We calculate the utilization as U = 1/3 + 1/4 + 2/10, which is approximately 0.7833, and this is smaller than 1. Note that this only tells us that the schedule is not guaranteed to be infeasible. We still cannot guarantee that it is feasible, because some jobs could occur at the same time. So even if there are holes left, they could be in the wrong places. This rule set is something we want to extend in other lessons, to provide more accurate tools for verifying real-time schedules. The rules highly depend on the scheduler and the algorithm making the scheduling decisions. Some schedulers work better in some cases, and some in others. Therefore, we want to start looking at how a scheduler actually works.

Upon start-up, the OS creates all tasks and allocates their respective memory, which is stored on the heap. The OS uses a timer set to expire at the first scheduling point. When this point is reached, the OS sets the timer to expire at the next scheduling point, determined by the static schedule. This means that a job from task T is scheduled at time t, and these are basically all the components needed to create a static scheduler.

Let's look at the pseudocode for a static scheduler. On the first line, the schedule is defined as the set of pairs (tk, T(tk)) for all entries k from 0 to N-1. When the scheduling starts, the decision point counter i and the table entry index k are both 0, because the system has not yet progressed. The timer is set to expire at tk, which is at time t0 = 0. After the system is initialized, we enter a loop doing the following. The system gets a timer interrupt, and in case an aperiodic job is currently executing, that job is interrupted. The task T to schedule is the kth entry in the table, and then i and k are incremented. The next table entry is selected modulo N, because if the last task in the schedule was just selected, the scheduler starts from the first one again.
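The loop just described can be sketched as a simulation. The table entries, task names, and hyperperiod below are made up for illustration; real scheduler code would block on a hardware timer rather than iterate, and a `None` entry stands for a hole where aperiodic jobs may run.

```python
# Sketch of the timer-driven loop of a static (table-driven) scheduler.
# Illustrative schedule: task "A" at time 0, "B" at time 2, a hole at 5.
SCHEDULE = [(0, "A"), (2, "B"), (5, None)]
H = 8                      # assumed hyperperiod of this schedule
N = len(SCHEDULE)

def decision_points(count):
    """Return the first `count` (time, task) scheduling decisions."""
    i, k = 0, 0            # i: decisions so far, k: index into the table
    out = []
    while i < count:
        t_k, task = SCHEDULE[k]
        now = (i // N) * H + t_k   # current hyperperiod offset + table time
        # A real scheduler would now preempt any running aperiodic job
        # and dispatch `task` (or let aperiodic jobs run in a hole).
        out.append((now, task))
        i += 1
        k = i % N          # wrap around after the last table entry
    return out

print(decision_points(5))
# → [(0, 'A'), (2, 'B'), (5, None), (8, 'A'), (10, 'B')]
```

The modulo wrap-around and the hyperperiod offset together make the finite table repeat forever, which is exactly the behavior the pseudocode describes.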
The timer is now set to expire at the time of the next table entry, adjusted for which hyperperiod the scheduler is currently in: the number of completed hyperperiods times the hyperperiod length, plus the offset tk of the next entry. In case the system is now idle, meaning we are in a hole, aperiodic jobs can execute, if such exist. Otherwise, task T executes. This loop then runs forever until the system is terminated. And this sums up the static scheduler implementation.

In this example, we have tasks T1, T2, T3, T4, and T5 scheduled in some order decided by the timetable. At time 0, the timer interrupts the system and T1 is selected. At time 1, T3 is selected. At time 3, T2 is selected. Then at time 5, T4 is selected. And finally, at time 7.5, T5 is selected. So again, the scheduling point for a task T at time t is stored in the timetable.

In this lesson we introduced the static scheduler. This is a very simple form of scheduler, and its advantages are its predictability and its very low scheduling complexity. All scheduling points are created offline, which means that we can run more powerful algorithms only once, offline, to optimize the schedule for the tasks we have. A static scheduler is, however, not very flexible. If you want to make changes to the schedule by adding, modifying, or removing tasks, you must run the algorithm again offline for each modification.
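As a closing illustration, the timetable from the five-task example can be queried to answer "which task was dispatched most recently at time t?". The helper function below is hypothetical, not part of the lesson; it uses a binary search over the sorted scheduling times.

```python
import bisect

# Timetable from the example: scheduling points for tasks T1..T5.
TIMETABLE = [(0, "T1"), (1, "T3"), (3, "T2"), (5, "T4"), (7.5, "T5")]
times = [t for t, _ in TIMETABLE]   # scheduling times, already sorted

def running_at(t):
    """Task selected at the most recent scheduling point <= t."""
    k = bisect.bisect_right(times, t) - 1
    return TIMETABLE[k][1]

print(running_at(4))    # → T2 (last decision point before t=4 was t=3)
print(running_at(7.5))  # → T5
```

Because the table is fixed offline, such lookups are cheap and fully predictable, which is exactly the advantage the lesson attributes to static scheduling.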