In the first part of this course, we focused mostly on periodic tasks, which means that the generation of jobs is always predictable. But real-time systems do not only process predictable periodic jobs recurring with a given frequency. Now we would like to include unpredictable jobs in the schedule as well. We call unplanned or unpredicted jobs with no real-time requirements aperiodic jobs. This lesson will present some methods to optimise the execution of unpredictable non-periodic jobs without causing deadline violations for the periodic jobs.

The arrival of aperiodic jobs can be seen as relatively random, meaning it can happen at any time. But in some cases it can be predicted with, for example, a statistical distribution. If the execution time is not known, the job must simply fill up the empty slots until it is completed. Otherwise, the completion time can also be calculated. The periodic jobs have higher priority than the aperiodic jobs and will preempt an aperiodic job so as not to violate their own deadlines. It is, however, usually better to complete the aperiodic jobs as early as possible for a better user experience, provided that no periodic jobs miss their deadlines. And there are methods to rearrange the periodic jobs on demand to decrease the completion time of an aperiodic job without affecting the deadlines of the periodic jobs.

The naive algorithm for handling aperiodic jobs is to not touch the periodic jobs, and to schedule the aperiodic jobs only where nothing else is executing. If a second aperiodic job arrives while the first job has not yet been completed, it is placed in a FIFO queue for aperiodic jobs. So the naive algorithm executes the aperiodic jobs in the order that they come in, with no priorities. Let's take an example where we have an already scheduled system with periodic jobs and a naive scheduler for aperiodic jobs.
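The naive algorithm can be sketched as a small simulator. This is a minimal illustration, not a real scheduler: the idle slots left over by the periodic schedule are given as input intervals, the aperiodic jobs are kept in a FIFO queue ordered by release time, and each job simply fills up the empty slots until it completes. All names and the data layout here are assumptions for the sketch.

```python
from collections import deque

def schedule_background(idle_slots, aperiodic_jobs):
    """Naive background scheduling: run queued aperiodic jobs in FIFO
    order, only inside the idle intervals left by the periodic schedule."""
    # FIFO queue: jobs are served strictly in release order, no priorities.
    queue = deque(sorted(aperiodic_jobs, key=lambda j: j["release"]))
    completion = {}
    for start, end in idle_slots:          # idle intervals, in time order
        t = start
        while t < end and queue:
            job = queue[0]
            if job["release"] > t:         # head of queue not released yet
                if job["release"] >= end:
                    break                  # nothing runnable in this slot
                t = job["release"]         # leave the slot idle until release
                continue
            run = min(job["exec"], end - t)
            job["exec"] -= run             # note: mutates the job dict
            t += run
            if job["exec"] == 0:           # job done: record completion time
                completion[job["name"]] = t
                queue.popleft()
    return completion
```

With the idle slots from the lesson's example (3.5 to 4 and 6.5 to 7), a job like A1 released at time 2 with 1 millisecond of work completes at time 7, giving the 5 millisecond response time discussed next.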
An aperiodic job A1 arrives at time 2 with a length of 1 millisecond. Since T5 is scheduled at this point, A1 is placed in the aperiodic job queue and held there until it can be scheduled. At time 3.5, the system is idle, and 0.5 of the total 1 millisecond can be scheduled. Later, at time 6.5, the other 0.5 milliseconds can be executed. So A1 completes at time 7, and its total response time is then 5 milliseconds.

There is, however, no point in scheduling the periodic jobs faster than their deadlines, right? Because in a real-time system, you only need to guarantee the timeline for the real-time jobs; beyond that, performance is not important. So why not move the periodic jobs as late as possible, to make room for the aperiodic jobs? As long as the periodic jobs meet their deadlines, everything is fine. So let's assume T1 has a deadline of 7. Then T1 can be split into two parts and scheduled from 4.5 to 5 and from 6.5 to 7. We can now schedule A1 into the space between 3.5 and 4.5, and this reduces the response time of A1 to only 2.5 milliseconds. This is the approach of slack stealing.

Now we have a set of aperiodic jobs as in the table, A1, A2 and A3, and they have different release times and execution times, as you see. The jobs can be scheduled either at the end of a frame or at the beginning. So we have an already existing schedule of periodic tasks and the mentioned aperiodic jobs. In this case, we have a frame size of 4, which is indicated by the vertical bars. And we ask the question: what is the average response time for the aperiodic jobs in case A, with aperiodic jobs scheduled at the end of the frame, and in case B, with aperiodic jobs scheduled at the beginning? We see A1 released at time 4, A2 released at time 9.5, and A3 released at time 10.5. We first illustrate the normal approach of scheduling the aperiodic jobs. The straightforward form of scheduling is to put A1 in the first empty slot, from time 7 until 8, and the other part of A1 from 10 to 10.5.
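The effect of slack stealing on A1's response time can be checked with a short helper. This is a sketch under the assumptions of the example above: the only thing that changes between the two cases is which idle intervals A1 may run in, because deferring T1 (deadline 7) merges the gaps into one interval from 3.5 to 4.5.

```python
def response_time(idle_slots, release, exec_time):
    """Earliest completion of a single aperiodic job that may only run
    inside the given idle intervals, minus its release time."""
    remaining = exec_time
    for start, end in idle_slots:
        t = max(start, release)            # cannot run before release
        if t >= end:
            continue
        run = min(remaining, end - t)
        remaining -= run
        if remaining == 0:
            return (t + run) - release
    return None  # cannot finish within the given slots

# Naive: A1 (release 2, 1 ms) runs in the gaps 3.5-4.0 and 6.5-7.0.
naive = response_time([(3.5, 4.0), (6.5, 7.0)], 2.0, 1.0)    # 5.0 ms
# Slack stealing: T1 is deferred, opening one gap from 3.5 to 4.5.
stolen = response_time([(3.5, 4.5)], 2.0, 1.0)               # 2.5 ms
```

The two results reproduce the 5 millisecond and 2.5 millisecond response times from the lesson.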
Because A1 was scheduled before A2, A2 must wait until 10.5 to be scheduled. And of course, A3 is put after A2, also in two parts.

With slack stealing, on the other hand, A1 at time 7 can be moved to time 4, because the periodic tasks T1, T3, and T4 can still be scheduled within the frame, and no deadlines are missed. Likewise, the other part of A1 at time 10 can be moved to time 8. Then A2 at time 10.5 can be moved right after T1, because A1 was released exactly at this point. A3, in frame 3, cannot be moved, because it was released too late to push the periodic jobs further back, and A1 and A2 have already used up the space. But the other part of A3, in frame 4, can be moved to the beginning of that frame by pushing T1 and T2 further back.

Calculating the average response time of the aperiodic jobs, we get 4.5 for the normal case and 2.5 for the slack-stealing case. So the slack-stealing approach in case B clearly has a shorter response time. For a generic schedule, further reading on average response times in queuing theory is available in the link.

In conclusion, aperiodic jobs do not have a hard deadline per se. But as we have seen in this lesson, it is still important to optimize their response times, to improve the usability of the system. You have seen the slack stealing method, which moves the periodic jobs towards the end of the frame and therefore allows the aperiodic jobs to execute earlier.
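The decision of whether an aperiodic job can be moved to the front of a frame (as with A1, but not with the first part of A3) comes down to how much slack is left: the frame time remaining after reserving the frame's outstanding periodic workload. Here is a minimal sketch of that slack computation; the numbers in the usage note are hypothetical, not taken from the lesson's schedule.

```python
def frame_slack(frame_start, frame_end, periodic_exec, now):
    """Time available for aperiodic work in the rest of this frame:
    whatever remains after reserving the frame's pending periodic load."""
    remaining_frame = frame_end - max(now, frame_start)
    return max(0.0, remaining_frame - periodic_exec)
```

For instance, in a hypothetical frame from 0 to 4 holding 3 milliseconds of periodic work, a job released at time 0 finds 1 millisecond of slack and can be scheduled first; a job released at time 2 finds none, which mirrors why A3 arrived too late to push the periodic jobs back in frame 3.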