In this video, I will describe the project that covers the concepts you have learned in this course module on network functions virtualization. The aim of the project is to build an NFV orchestrator. The workshops gave you an insight into the basic tools and concepts for configuring traffic forwarding through network functions. In this project, you will tie all of those lessons together to automate the configuration steps, so as to support dynamic creation and scaling of NF instances. Network functions are rarely deployed in isolation. They are commonly deployed as part of a chain, an arrangement often called service function chaining. For instance, as shown in the figure, firewall and NAT network functions have been chained together so that all traffic originating from the IP subnet 192.168.1.0/24 and leaving the enterprise flows through the firewall, then through the NAT, before emerging out of the enterprise. Service function chaining keeps network functions independent, so that they can be scaled independently. It avoids having to build large monolithic network functions that perform, for instance, the tasks of both a firewall and a NAT. Often, multiple network function instances are shared among multiple chains. For instance, the diagram shows how the NAT instance is shared between two network function chains that each process packets from a particular source subnet. However, for the sake of simplicity, we don't consider that scenario in this project. That is, we will assume that each NF instance belongs to exactly one network function chain. A typical NFV infrastructure has a control plane that is responsible for managing the compute and network resources, as well as the functions deployed on those resources. We refer to the centralized part of the control plane as the orchestrator. An important component of the orchestrator is the NFV Infrastructure Manager, or NFVI Manager for short.
The NFVI Manager listens to high-level requests coming from system administrators or tenants for registering network function chains and launching NF instances. The NFVI Manager interacts with the compute infrastructure to launch NF instances in response to an incoming request. The NFVI Manager is also responsible for storing information about registered network function chains and deployed instances. Another important component in the NFV orchestrator is the SDN controller, which manages the network infrastructure. This means communicating with the switches using SDN southbound APIs like OpenFlow. The SDN controller performs topology discovery and configures custom packet forwarding on OpenFlow switches. In order to properly configure traffic forwarding for NF chains, it needs information about what chains are registered and where the instances of a network function chain are deployed. All such information is retrieved from the cluster state. The cluster state is meant for persistently storing information about registered network function chains, running NF instances, and the network topology. All of this information is populated by the NFVI Manager and the SDN controller. The cluster state is queried for NF placement and traffic forwarding decisions. So in this project, you will learn how to implement a basic orchestrator for NFV infrastructure. In order to simplify the problem, you will implement the orchestrator as a single process. This involves first building a REST web service as part of the Ryu controller app that will listen to incoming requests for NF chain registration and dynamic deployment of NF instances. Since the NFVI Manager and SDN controller are part of the same process, the cluster state should be maintained as in-memory data structures. Furthermore, the orchestrator will run on the same machine as the network topology, as was the case with previous workshops.
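Since the cluster state lives in memory, one possible sketch of those data structures is shown below. The field and key names here are my own illustrative choices, not a schema the project prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class NFInstance:
    container: str                                 # Docker container name
    switch: str                                    # switch it attaches to
    ports: dict = field(default_factory=dict)      # iface -> (IP, MAC)

@dataclass
class ChainRecord:
    chain_id: str
    nf_sequence: list                              # e.g. ["firewall", "nat"]
    source: dict = field(default_factory=dict)     # IP/MAC/switch/port info
    destination: dict = field(default_factory=dict)
    instances: dict = field(default_factory=dict)  # NF name -> [NFInstance]

# The whole cluster state: registered chains plus the (fixed) topology,
# which can simply be hard-coded in this project.
cluster_state = {
    "chains": {},    # chain_id -> ChainRecord
    "topology": {},  # switches, links, and ports of the fixed topology
}
```

Keeping the chains and topology in one top-level structure makes it easy for both the NFVI Manager (which writes instance records) and the SDN controller (which reads them when installing flows) to share state inside the single process.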
Therefore, network function instances will be deployed by using Python's subprocess module to create Docker containers. After the containers are deployed, the SDN controller should configure correct packet forwarding in order to realize NF chains. For instance, all traffic from the designated source should flow through the correct sequence of NFs and then to the designated destination. Likewise, all reverse traffic coming from the designated destination to the source should traverse the network functions in the reverse order. The traffic forwarding should also maintain connection affinity. We have already seen how to implement connection affinity in the previous workshop. Finally, you will create a test infrastructure for testing the NFVI control plane. In this project, you will be working with a fixed network topology, which is shown on the right. In a real-world deployment, topology changes are common, and topology discovery helps the SDN controller stay updated with such changes. However, for this project, topology discovery is out of scope for the sake of simplicity. You can use the fact that the network topology is fixed to simplify the implementation of the orchestrator, possibly by hard-coding information. Endpoint hosts and network function instances will later be attached to this network topology. Next, you will need to instantiate endpoints that generate the traffic that invokes the network function chains. Create the endpoints as Docker containers using the Docker utility and connect them to the network topology using the ovs-docker utility. The number of endpoints will be varied during testing, so your orchestrator implementation should be able to support any number of endpoints. Next, the system administrator will register network function chains and launch NF instances, for which you need to implement a web service that exposes a REST API.
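Endpoint creation could be driven from Python roughly as follows. The container and bridge names are examples, and the injectable `run` parameter is only there so the helper can be exercised without Docker installed:

```python
import subprocess

def create_endpoint(name, image, bridge, iface="eth0", run=subprocess.run):
    """Create an endpoint container and attach it to an OVS bridge."""
    # Start the container with no Docker-managed networking; ovs-docker
    # will add the interface and plug it into the topology's bridge.
    run(["docker", "run", "-d", "--name", name, "--network", "none", image],
        check=True)
    # Attach a port named `iface` inside the container to the bridge.
    run(["ovs-docker", "add-port", bridge, iface, name], check=True)
```

A call such as `create_endpoint("int", "endpoint", "s1")` would then place an endpoint container on switch s1.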
Here, we're looking at how the administrator would use the web service API to register NF chains. The web service should expose the path http://<IP>:<port>/register_sfc and accept PUT requests to this URL. The metadata of each request is provided as the JSON body of that HTTP request. The format of the JSON metadata is shown on the right. For example, let's consider an NF chain that requires packets flowing out of the internal host and destined to the external host to traverse a firewall and a NAT instance. The metadata of the NF chain registration request should include the fields shown. The metadata contains a description of the network function chain, which consists of an array of constituent network function names, in this case firewall and NAT, and a unique ID of the chain. After that, the JSON contains a subdocument for each NF identifier in the NF chain. For example, the NAT subdocument describes the second network function in the chain. Each NAT NF instance would be a container based on the image NAT. Each such container should be configured to have two interfaces, namely eth0 and eth1. After creating the container and network interfaces, the NFVI Manager would need to execute an init script at the path /init_nat.sh. When the NFVI Manager executes the script, it has to provide arguments, which will be described in a later slide. After describing the network functions, the request JSON should have a description of the source and destination of traffic for this chain. In this project, the source and destination will be Docker containers. Both source and destination need to specify the IP and MAC of the network interface of that container. They also need to specify which switch, and at which port, that container connects to the network topology. After the administrator submits a request for registering an NF chain, they need to use the web service to launch network function instances belonging to that chain.
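The slide's JSON is not reproduced in this transcript, but a hypothetical register_sfc payload carrying the fields just described might look like the following. The key names, the firewall's init script path, and all addresses are my own placeholders; only the overall shape follows the description above:

```python
import json

register_sfc = {
    "id": "chain-1",                 # unique ID of the chain
    "chain": ["firewall", "nat"],    # ordered array of NF names
    "firewall": {
        "image": "firewall",
        "interfaces": ["eth0", "eth1"],
        "init_script": "/init_firewall.sh",   # placeholder path
    },
    "nat": {
        "image": "nat",
        "interfaces": ["eth0", "eth1"],
        "init_script": "/init_nat.sh",
    },
    # Source and destination: IP, MAC, and attachment point of each container.
    "source": {"ip": "192.168.1.2", "mac": "02:00:00:00:00:01",
               "switch": "s1", "port": 1},
    "destination": {"ip": "10.0.0.2", "mac": "02:00:00:00:00:02",
                    "switch": "s3", "port": 2},
}

body = json.dumps(register_sfc)  # sent as the body of the PUT request
```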
For that, the web service should expose a path of the form http://<IP>:<port>/launch_sfc and accept PUT requests to this URL. As before, the metadata of the request will be contained in the PUT request's body as a JSON document. The JSON document will contain the ID of the chain for which these network function instances are to be launched. As we discussed, each NF chain was represented by an array of NF identifiers in the chain registration request. For each network function that needs to be launched, the JSON contains a subdocument with that network function's identifier. For example, in this request, the administrator wants to launch instances of the NAT network function. The subdocument with NAT as the key contains a list, with each element of the list representing one instance of the NAT function. Each NF instance entry contains the information necessary for launching that instance: the arguments for the initialization script and the IPs for the interfaces of that container. In this example JSON, we are creating two instances of NAT and one instance of the firewall. Note that the firewall instance does not specify IPs for the container ports. Specifying IPs for container ports is optional; if they are not provided, you should not assign IP addresses to those ports. The same API would be used for scaling out the NF instances of a given chain. In this project, for the sake of simplicity, we are not dealing with scaling in of network function chain instances. When the NFVI Manager receives a request for launching network function instances, you need to perform the following steps. First, select the switch in the network topology to which the container will be connected, then deploy the container using the Docker command line and Python's subprocess module. After deploying the container, add ports to the container using the ovs-docker utility. Then run the initialization script in the container with the arguments provided in the HTTP request JSON.
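The steps listed so far could be sketched as follows; this is a minimal sketch assuming the ovs-docker `--ipaddress` option is used to assign the optional per-interface IPs, and `docker exec` to run the init script, with all names illustrative:

```python
import subprocess

def launch_instance(name, image, switch, ifaces, init_script, init_args,
                    ips=None, run=subprocess.run):
    """Deploy one NF container, wire it to the chosen switch, and init it."""
    # 1. Deploy the container (no Docker-managed networking).
    run(["docker", "run", "-d", "--name", name, "--network", "none", image],
        check=True)
    # 2. Add each interface to the chosen switch via ovs-docker.
    for iface in ifaces:                   # e.g. ["eth0", "eth1"]
        cmd = ["ovs-docker", "add-port", switch, iface, name]
        if ips and iface in ips:           # IPs are optional in the request
            cmd.append("--ipaddress=" + ips[iface])
        run(cmd, check=True)
    # 3. Run the NF's init script with the arguments from the request JSON.
    run(["docker", "exec", name, init_script] + list(init_args), check=True)
```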
Finally, extract the IP and MAC information from the container for each of the ports and populate this in the cluster state. This information will then be used by the SDN controller for configuring packet forwarding. The NAT network function modifies the IP headers of packets. In order to perform correct packet forwarding, the original header information is needed. For instance, in the figure shown on the right, depending on the source IP, packets coming out of the NAT need to be sent either to the IDS instance or directly out to the Internet. In the lectures, you have covered techniques for performing correct packet forwarding in the presence of header-modifying network functions. In this project, however, we have made the assumption that NF instances are not shared among multiple chains. In other words, all packets belonging to a given chain will always traverse the same set of network functions. That means the example shown would consist of two separate chains, one for each IP subnet. Use this fact to simplify your implementation. As part of this project, you need to come up with a technique for making routing decisions that ensures connection affinity. In other words, packets belonging to both directions of the same connection must traverse the same set of network function instances, and you need to achieve that in spite of having network functions that modify packet headers. In order to test the orchestrator, you need to design a traffic generator. The traffic generator takes as input a traffic profile, as shown on the right. Each profile states the source and destination of the traffic flows and contains multiple traffic flow descriptions. In this example, the traffic flow from container int to container ext is divided into two periods. The first period lasts from zero seconds after test start to 10 seconds and creates five flows during that period.
The second period lasts from 10 seconds to 30 seconds and creates 15 flows. The specification of the traffic profile is not fixed. You are encouraged to tweak the traffic profile specification to make tests more flexible and realistic. However, you are expected to provide very similar functionality in the traffic generator. During the testing period, you should be able to launch additional network function instances of the registered chains, thus demonstrating the ability of the control plane to scale out the NF chain instances. For this project, you need to submit the following: the SDN controller logic as a Python script, ryu_app.py (make sure you clearly demarcate the code sections belonging to the SDN controller and those belonging to the NFVI Manager); the JSON specifications of all the traffic profiles with which you tested the system; and a report describing an outline of your implementation of the NFV control plane. The report should contain a description of the data structures used to maintain the cluster state. You should also describe the techniques used for maintaining connection affinity despite the presence of header-modifying network functions.
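Since the traffic profile JSONs are part of the submission, here is one possible encoding of the example profile described earlier, as a concrete starting point; the schema is entirely up to you:

```python
# Two periods of flows from container "int" to container "ext".
profile = {
    "source": "int",
    "destination": "ext",
    "periods": [
        {"start": 0,  "end": 10, "flows": 5},    # 5 flows in seconds 0-10
        {"start": 10, "end": 30, "flows": 15},   # 15 flows in seconds 10-30
    ],
}
```

A traffic generator can then walk the `periods` list, sleeping until each period's start time and opening the stated number of flows from the source to the destination container.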