Reconfigurable systems, WHILE PROVIDING NEW INTERESTING FEATURES in the field of hardware/software co-design, and more generally in embedded computing system design, also introduce NEW PROBLEMS TO BE FACED in their implementation and management. This is particularly true for systems that implement partial self-reconfiguration, such as Xilinx platforms. This class will present several scenarios where reconfiguration can be effective, such as the need for runtime flexibility or the lack of resources, while also showing some drawbacks introduced by this new feature. THESE DRAWBACKS WON’T BE AN ISSUE: we are interested in doing science, and wherever we encounter a drawback we see room for improvement, and that is exactly what we are going to pursue. We will show the presence of two different kinds of limits, theoretical and physical ones, trying to highlight possible solutions to both.

We may be interested in supporting runtime reconfiguration because of the increasing need for runtime flexibility. This flexibility may come from the need to support new standards, which can easily be the case for media processing or telecommunication applications, or from the addition of new features. A reconfigurable solution may also be a very interesting platform on which a mix of different hardware/software implementations can be evaluated, either to speed up the overall computation of the final system or to find the proper trade-off for the specific scenario we have to deal with.

This is the classical scenario of the 90%-10% rule, sometimes also known as the 80%-20% rule. To put it simply, the 90%-10% rule is based on the observation that, on one hand, 90% of the execution time is spent in 10% of the code; examples of this are inner loops, as in the case of stencil computations, or other computationally intensive code. On the other hand, 10% of the execution time is spent in the remaining 90% of the code; examples of this are exception handling and user interaction.
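As a toy illustration of the 90%-10% rule, we can time how much of a run is spent inside a small stencil kernel compared to the rest of the program. The sizes `N` and `STEPS` and the 1-D averaging stencil are made-up choices for this sketch, not taken from any specific application:

```python
import time

N, STEPS = 400, 200  # hypothetical problem size and number of iterations

def stencil_step(grid):
    """3-point averaging stencil: the small 'hot' inner loop."""
    return [(grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
            for i in range(1, len(grid) - 1)]

def hot_fraction():
    """Fraction of the total run time spent inside the stencil kernel."""
    grid = [float(i) for i in range(N)]
    hot, t0 = 0.0, time.perf_counter()
    for _ in range(STEPS):
        t1 = time.perf_counter()
        inner = stencil_step(grid)             # the "10% of the code"
        hot += time.perf_counter() - t1
        grid = [grid[0]] + inner + [grid[-1]]  # boundary handling: the cheap "90%"
    return hot / (time.perf_counter() - t0)

print(f"time spent in the stencil kernel: {hot_fraction():.0%}")
```

Even in this tiny sketch, most of the run time ends up in the few lines of the kernel, which is exactly the kind of code we would consider moving to hardware.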
This can be useful to identify the portions of the application that are good candidates for hardware execution: while the 90% of the code handling exceptions and interaction keeps running as executable files on processors, the remaining 10% of computationally intensive code can be executed in hardware on reconfigurable devices. Furthermore, from a practical perspective, one of the most obvious reasons to use a reconfigurable architecture is that we may have to implement a new version of a system that is too large to fit on the device all at once. Within this context, we may be interested in implementing different chunks of the system while sharing the same underlying platform.

Let’s try to formalise these scenarios by using a graph we have already been working with. On the X axis we have the area, or, if you prefer, the resources we are going to use to implement our system. On the Y axis we have time, by which we refer to the overall execution time needed to run our system. Within this context, a point in this Cartesian coordinate system is nothing more than a specific implementation of the system: it uses a certain amount of area/resources, which implies a certain time needed to complete the execution with those resources. Given this representation, we can draw two points, S_o and S_w. S_o is the OPTIMAL SYSTEM IMPLEMENTATION: it uses as many resources as the application needs to remove all the structural hazards, and it therefore guarantees the best performance in terms of execution time. At the opposite end, we can design the WORST SYSTEM IMPLEMENTATION.
Remember that we are considering execution time on the Y axis; in words, we are identifying the implementation that uses the minimum set of resources needed to implement the desired solution. It yields the highest execution time but, perhaps, a solution that is interesting for other parameters, such as area or power. Given these two points, we can identify a line of FEASIBLE IMPLEMENTATIONS connecting them. These solutions are theoretical ones: we can see them as the set of implementations that always provide the best execution time achievable with a given number of resources. Within this context, this line partitions our space into two regions: a FEASIBLE SOLUTION SPACE and an UNFEASIBLE one. The unfeasible solution space is characterised by all the solutions that cannot be implemented because of the physics of the system. In other words, being below the FEASIBILITY LINE would mean achieving an execution time lower than the theoretical one obtainable by using the available resources at their best... which cannot be done. On the other hand, for an implementation, being in the feasible solution space means that it can be realised. Now, remember that the line is the set of theoretical implementations; this means that a real implementation of a system on a set of resources may not be able to meet the theoretical expectation. This is totally reasonable: at the end of the day, we need time to move data, we have delays because of the routing, and these are just a few examples of why we may fall short of our theoretical expectation. Lowering that point as close as possible to the line is exactly what we, as system designers, are interested in. Being able to draw a line as close as possible to the real performance is what we, as theoretical scientists working in this field, are interested in.
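To make the picture concrete, here is a minimal numeric sketch of the feasibility line. It assumes, purely for illustration, an ideal linear model T(A) = W/A (workload W evenly split over A resources with perfect speedup); the workload W and the resource counts A_min and A_opt are hypothetical numbers, not taken from any real design:

```python
W = 1000.0             # hypothetical total workload (abstract work units)
A_min, A_opt = 2, 20   # hypothetical resource counts for S_w and S_o

def t_theory(a):
    """Best achievable execution time with `a` resources, under T = W / a."""
    return W / a

def feasible(a, t):
    """A real implementation (a, t) is feasible iff it is not faster than theory."""
    return t >= t_theory(a)

S_w = (A_min, t_theory(A_min))  # WORST implementation: fewest resources, longest time
S_o = (A_opt, t_theory(A_opt))  # OPTIMAL implementation: all structural hazards removed

print(feasible(10, 120.0))  # above the line: a realisable implementation
print(feasible(10, 80.0))   # below the line: physically impossible
```

A real design at a = 10 will typically sit somewhat above t_theory(10) because of data movement and routing delays; closing that gap is the system designer's job in this model.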
Now, given the system requirements, expressed, for example, in terms of execution time, we can derive the amount of resources, A_de, needed to meet them, and this is exactly the point where issues start to pop up... What if the device is not big enough to provide us with the necessary amount of resources?
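Sticking to a hypothetical ideal model T(A) = W/A, deriving A_de from a time requirement, and discovering that the device is too small, can be sketched as follows (W, A_device, and the 90-time-unit requirement are all made-up numbers for illustration):

```python
import math

W = 1000.0     # hypothetical total workload (abstract work units)
A_device = 8   # hypothetical resources available on the device

def area_needed(t_req):
    """Minimum resources A_de meeting t_req, assuming the ideal model T = W / A."""
    return math.ceil(W / t_req)

t_req = 90.0               # required execution time (made-up requirement)
a_de = area_needed(t_req)
print(a_de)                # 12 resources needed
print(a_de <= A_device)    # False: the whole system does not fit at once
```

This is precisely the situation where partial reconfiguration becomes attractive: the available resources can be time-shared among chunks of a design that, as a whole, does not fit on the device.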