In this module, we will learn how to train, tune, and serve a model manually from a Jupyter notebook on AI Platform. Here's the agenda for today. We will first briefly go over all the components of the model development process and how they interact with each other. Then we will focus on the dataset creation part of the story. Next, we will explain how to write an ML model in scikit-learn, how hyperparameters can be tuned with AI Platform, and how to package the model in a Docker training container. Then, we'll show you how to build, push, train, and tune the model using the training container. Finally, we'll go over how to deploy the model as a REST API on AI Platform and query it.

When you're building a machine learning model, you essentially have three steps. Step 1 is to create your dataset, which will be used to build the model. After you create the dataset and apply any needed transformations to it, you build the model. When the model is built and performing as expected, the last major step is to operationalize it, which means you train it at scale and deploy it.

Operationalizing the model on AI Platform itself consists of three main steps. The first step is to implement a tunable training application, which requires writing your model into a train.py file. This will consist of the training code and the configuration of the parameters. The second step is to package the training code into a Docker container, with all of the training code's dependencies: operating system, libraries, assets, and so on. This Docker container is used to kick off the training at scale on AI Platform. The definition and configuration of this Docker container is specified in a file called a Dockerfile. The last step is to specify the training configuration, such as the hyperparameter ranges to be tuned, in the config.yaml file. We'll go over the details of each of these three steps in the next sections.
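The second step, packaging the code into a training container, could be sketched with a Dockerfile like the following. The base image and file paths are assumptions for illustration; any image with a Python runtime and your training dependencies would do.

```dockerfile
# Hypothetical Dockerfile for the training container.
# Base image and paths are illustrative.
FROM gcr.io/deeplearning-platform-release/base-cpu
COPY train.py .
RUN pip install scikit-learn
# Running the container kicks off training.
ENTRYPOINT ["python", "train.py"]
```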
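For the third step, a config.yaml for AI Platform hyperparameter tuning might look roughly like this. The parameter names and ranges are hypothetical and would have to match the flags your train.py exposes, and the metric tag assumes the training code reports that metric (for example via the cloudml-hypertune package).

```yaml
# Hypothetical config.yaml sketch for AI Platform hyperparameter tuning.
trainingInput:
  scaleTier: BASIC
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: accuracy   # metric reported by the training code
    maxTrials: 10
    maxParallelTrials: 2
    params:
    - parameterName: alpha              # must match a flag in train.py
      type: DOUBLE
      minValue: 0.0001
      maxValue: 1.0
      scaleType: UNIT_LOG_SCALE
```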
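To make the first step concrete, here is a minimal sketch of what such a tunable train.py could look like. The flag names, the synthetic dataset, and the output path are illustrative assumptions, not the course's actual code; the key idea is that each hyperparameter is exposed as a command-line flag so the tuning service can vary it between trials.

```python
# train.py -- hypothetical sketch of a tunable training application.
# Flag names, dataset, and paths are illustrative, not prescribed.
import argparse

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split


def train_and_evaluate(alpha, max_iter, job_dir):
    """Train a model, save it as an artifact, and return a validation metric."""
    # In a real job the data would come from the CSV splits in Cloud Storage;
    # a synthetic dataset keeps this sketch self-contained.
    X, y = make_classification(n_samples=500, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

    model = SGDClassifier(alpha=alpha, max_iter=max_iter, random_state=42)
    model.fit(X_train, y_train)
    accuracy = model.score(X_val, y_val)

    # The saved model is the artifact AI Platform Prediction will later serve.
    joblib.dump(model, f"{job_dir}/model.joblib")
    return accuracy


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # Hyperparameters exposed as flags so they can be tuned between trials.
    parser.add_argument("--alpha", type=float, default=0.0001)
    parser.add_argument("--max_iter", type=int, default=1000)
    parser.add_argument("--job-dir", default=".")
    args = parser.parse_args()
    print("validation accuracy:", train_and_evaluate(args.alpha, args.max_iter, args.job_dir))
```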
When developing an ML model, developers and data scientists usually develop most of their code in Jupyter notebooks. An AI Platform Notebook is a configurable Jupyter notebook server on AI Platform. This is a typical flow. Load the training data from BigQuery. You can then store the training files, which have already been transformed, in Cloud Storage, and package the training code into a train.py file. This code will be pushed as a Docker image into Container Registry to trigger the training on AI Platform Training. AI Platform also stores the training artifacts, such as the trained model, in Cloud Storage. Deploy the trained model using AI Platform Prediction, so the model can be served. AI Platform Prediction does that by retrieving the saved model from Cloud Storage and deploying it as an API.

You probably noticed that I mentioned only the orange boxes in this diagram, not the white ones. This is because the description I just gave you covers a manual process for the steps we discussed. The focus of this course is how to automate this process, which includes services that allow for version control of the source code, continuous integration and deployment, and pipelines.

Here's a different view of the components we've discussed so far. Let's see with the help of this diagram how we are going to use them and how they interact with each other. At the center is the JupyterLab notebook, where we experiment with our code and interact with all the other components. Again, these components are: BigQuery, which you can use to create repeatable splits of the dataset into train, validation, and test sets; Cloud Storage, where we export the dataset splits as CSV files, which are then consumed by the model, and where we also store the trained models; and the next component, Container Registry, or gcr.io, where we store the training container that packages our training code as a Docker container.
Then there is AI Platform Training, which is in charge of running the training containers for training and hyperparameter tuning, and finally AI Platform Prediction, which takes a trained model stored in Cloud Storage and deploys it as a REST API you can query. In the next section, we'll dissect each part of this diagram and learn how to use each of these components.

In the experimental phase that we have discussed so far in this module, every step of the process is done manually, which means that we manually run a cell in the notebook whenever a given action of the process should be triggered, such as building the training container from the training code or pushing it to Container Registry. In the next module, we'll see how to automate this process. We want each push to the code repository that contains our training code to trigger a rebuild of the assets that constitute our machine learning pipeline: the training container, hyperparameter tuning, and so on. We can then extend this automation to automatically retrain the model, export the newly trained model to the model registry, and deploy it to the specified serving infrastructure. This is called continuous integration and continuous delivery, or CI/CD.
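The repeatable dataset splits mentioned earlier deserve a quick illustration. A common way to get them in BigQuery is to hash a stable key column and bucket rows by the hash modulo 10, so the same row always lands in the same split. The sketch below mimics that idea locally with hashlib standing in for BigQuery's FARM_FINGERPRINT; the table and column names in the embedded query are hypothetical.

```python
import hashlib

# Hypothetical BigQuery query using the hash-and-bucket trick: the same rows
# always fall in the same split, so the split is repeatable.
TRAIN_SPLIT_QUERY = """
SELECT * FROM `project.dataset.table`
WHERE MOD(ABS(FARM_FINGERPRINT(CAST(key AS STRING))), 10) < 8  -- 80% train
"""


def assign_split(key: str) -> str:
    """Deterministically map a row key to train/validation/test (80/10/10).

    hashlib.md5 stands in for FARM_FINGERPRINT so this runs locally.
    """
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 10
    if bucket < 8:
        return "train"
    elif bucket == 8:
        return "validation"
    return "test"
```

Because the split depends only on the key, re-running dataset creation never shuffles rows between train, validation, and test, which keeps evaluation results comparable across runs.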
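As for querying the deployed model, AI Platform Prediction's online prediction endpoint expects a JSON body of the form `{"instances": [...]}`, with one prediction returned per instance. The project, model, and version names below are hypothetical placeholders; the sketch only builds the request, since actually sending it requires authenticated credentials.

```python
import json

# Hypothetical identifiers; substitute your own project, model, and version.
PROJECT, MODEL, VERSION = "my-project", "my-model", "v1"

# URL pattern of the online prediction REST endpoint.
PREDICT_URL = (
    "https://ml.googleapis.com/v1/"
    f"projects/{PROJECT}/models/{MODEL}/versions/{VERSION}:predict"
)


def make_predict_body(rows):
    """Build the JSON request body: each row is one feature vector (instance)."""
    return json.dumps({"instances": rows})
```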