Welcome, everyone. In this video we'll be going over the MNIST API project, which is essentially creating a web API to access a machine learning model that classifies images. First, a quick overview of the video: we're going to go through the MNIST API structure, just the structure of the whole project itself, then a description of what it is, and finally a demonstration of the project running.

So first off, let's talk about MNIST. If you haven't heard of it before, it's a dataset of 70,000 handwritten digit images, each labeled 0 through 9. It's really common in machine learning to use it as the first dataset you train a model on.

We need two parts for this project: an EC2 server and a client, which can be any computer, such as the DragonBoard or a laptop. On the EC2 server we need TensorFlow, which is a machine learning framework, and Flask for the web server that actually exposes the TensorFlow model. On that server, we first train a model on the MNIST dataset, and after we've trained it, we load the trained model to predict new images and send the response back. Just a note about the MNIST model weights: when you train a model, you produce weights for it, and ours are very large, so we can't provide them on GitHub. However, we will provide a link to the pre-trained weights, so you don't have to train the model yourself; you can just load our weights and use that model.

So essentially what happens on the EC2 server is this: it gets a request with an image, it saves the image on the server, the model takes that image and outputs a number from 0 to 9 telling us what it thinks the digit is, and then we return that number to whoever sent the request, namely the client.
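The request-handling flow just described can be sketched roughly like this. This is a minimal sketch, not the project's actual server.py: the `/predict` route name, the raw-byte payload format, and the `classify` stub (which stands in for restoring the TensorFlow checkpoint and running the graph) are all assumptions for illustration.

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify(pixels):
    """Stub for the TensorFlow model: takes a 28x28 array of
    grayscale values (0-255) and returns a digit 0-9.
    A real implementation would restore model.ckpt and run the graph;
    here we just return a deterministic placeholder prediction."""
    return int(np.argmax(pixels.sum(axis=0)) % 10)

@app.route("/predict", methods=["POST"])
def predict():
    # Assume the client POSTs 784 raw grayscale bytes (28 * 28).
    raw = request.get_data()
    pixels = np.frombuffer(raw, dtype=np.uint8).reshape(28, 28)
    return jsonify({"digit": classify(pixels.astype(np.float32))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)  # the demo later runs on port 80
```

The key design point is that all the heavy lifting (loading weights, running inference) lives behind one HTTP endpoint, so the client never needs TensorFlow installed.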
Now, on the client side, we just want something that allows you to draw a digit, so we'll be using OpenCV for that, and then we want to send that image to the web server and print out the response. So it's a pretty simple setup.

Next up is a description of the project. I already described it briefly before, but here's a more specific one. This is an API where anyone can send an image and have it recognized. Once you have this set up, anybody can send a request to this URL with an image, and as long as it fits the parameters, they can see what your model predicts. What's really nice is that you don't need to set up security or credentials for people; it's just open to everyone. The next thing is that you don't need to do the image processing on the client. For example, say you have a very small, weak device such as an Arduino connected to Wi-Fi; you can't run a machine learning model on it, but you still want to do some image recognition. You can just send the image to this URL and it quickly responds with a value from zero to nine. And lastly, it acts similarly to Amazon Rekognition. The Amazon Rekognition API is a bit more complex because it can do a lot more with an image and return a lot more information, but essentially you send an image to a URL, their server takes it in, processes it, and returns all the labels it finds, if you've ever used Amazon Rekognition.

Finally, let's go over to my computer for a demo of the project. I just have to log in real quick. So here we are at the EC2 dashboard, and we can see a bunch of servers that we've set up over the course. Here is the server that we want to connect to, which is mnistConda. I've already connected to it, but let me show you how again.
You just copy this SSH command, make sure you're in the same directory as your permission file, the PEM file, and then run it. So now I've SSHed into the server, and we just want to run our file. But first, let's see what's on the server. We can see there's an image, and the three files to note are all of these model.ckpt files. These are essentially the weights for the TensorFlow model that we can't give you directly through GitHub, but they'll be provided at the links mentioned before. And here are the two files we're going to use: server.py, which runs on the server, and useAPI, which runs on whatever device you want to use to access the API. To run the server, we just run sudo python server.py. It takes a second, and there you go: it says it's running on port 80.

So now we want to use the API. Here we have a simple program that just lets us draw a 28 by 28 pixel image, because that's what the API expects. I'm just going to run it. So here is a 28 by 28 canvas, and I'm going to try to draw a 1 and see what the classifier thinks it is. Now, I did make a mistake there, but again, these are supposed to be handwritten, so some error is to be expected. Press Esc, and you can see here that it predicted a 1. Now, that's not very interesting; it's kind of easy to predict a 1 when it's just a straight line. So let's try a more complicated one and see if it fails or does something else. Here I'm going to try to draw a 3. It's kind of tedious using this program because you have to click each pixel that you want to change. Press Esc, and it predicted a 2. You can see the model's not that great, but it tried; a 3 is fairly close to a 2, so it's close enough. Let's give it one more go and see if it can get, let's say, a 7. And I think that looks like a handwritten 7.
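The click-per-pixel behavior of that drawing program can be sketched without the OpenCV window like this. This is an illustrative sketch only: the real demo uses OpenCV mouse callbacks, which are omitted here, and the function names are made up.

```python
import numpy as np

CANVAS_SIZE = 28  # the API expects 28x28 grayscale images

def new_canvas():
    """Start with an all-black 28x28 canvas."""
    return np.zeros((CANVAS_SIZE, CANVAS_SIZE), dtype=np.uint8)

def toggle_pixel(canvas, row, col):
    """Flip one pixel between black (0) and white (255), mirroring
    the one-click-per-pixel drawing in the demo program."""
    canvas[row, col] = 0 if canvas[row, col] else 255

canvas = new_canvas()
for r in range(6, 22):           # draw a rough vertical stroke, a "1"
    toggle_pixel(canvas, r, 14)
```

Toggling pixels one at a time is why drawing a 3 or a 7 in the demo is so tedious; every white pixel is an individual click.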
And no, it does not recognize the handwritten 7. [LAUGH] This is not a very complex or very accurate model, but it does work some of the time; it just hasn't been trained on the exact kind of images we're inputting. Since we're inputting these clicked-pixel images instead of actual hand-drawn images, the input is very different from what it expects: our pixels are hard jumps from 0 to 255, whereas the handwritten training images have gradients. The good thing about this project is that you can swap out this model for a better one, maybe Google's image recognition model, and then let people play with that model instead.

Well, this has been the MNIST API project. Be sure to play around with it and see what you can do; maybe do something interesting with it and connect it to one of your projects. And stay around for the code walkthrough, where we'll briefly go over how it works in general.
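One cheap way to make those hard 0-to-255 jumps look more like the gradient-filled MNIST training strokes is to blur the canvas before sending it. The project itself doesn't do this; the following is just a suggested sketch using a simple 3x3 box blur in NumPy.

```python
import numpy as np

def box_blur(canvas):
    """3x3 box blur: replace each pixel with the mean of its
    neighborhood, turning hard 0/255 edges into soft gradients
    that are closer to real pen strokes."""
    img = canvas.astype(np.float32)
    padded = np.pad(img, 1, mode="edge")   # repeat edges so size is kept
    out = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + 28, 1 + dc:1 + dc + 28]
    return (out / 9.0).astype(np.uint8)

canvas = np.zeros((28, 28), dtype=np.uint8)
canvas[4:24, 14] = 255        # hard-edged one-pixel-wide stroke
soft = box_blur(canvas)       # same stroke with gradual falloff
```

Whether this actually improves accuracy depends on the model, but it narrows the gap between the clicked-pixel input and the handwritten data the model was trained on.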