Machine learning is a discipline within artificial intelligence. The term AI was originally coined in 1956 and has since evolved to encompass many fields of study that are commonplace in today's technological discussions. Topics like machine learning, natural language processing or NLP, and computer vision all fall under the modern umbrella that is AI. Now, if you're thinking of AI as killer robots taking over the world, your field of interest is more HLI, or human-like intelligence. You'll notice that deep learning is yet another sub-discipline within machine learning, and it has captured a lot of attention as it has started to rival and even surpass human ability on complex tasks like image recognition, speech recognition, language translation and much, much more.

Now that we know when AI, ML and deep learning came about, how about a one-sentence definition of ML? ML, at its core, labels things for you. Show a model a bunch of good historical sales data for your clothing store, and it can predict next month's sales. Show a model lots of photos of cars along with the correct make and model, and with enough examples it can classify new, unlabeled car photos for you.

Let's dive a bit deeper with an example. On the weekends you'll likely find me scouting the web for new sci-fi movies and TV series to watch. Now, I can tell you what previous sci-fi movies I liked, and I can do a decent job of narrowing down the list of potential new options myself. But I don't have all the time in the world to scan through and classify good sci-fi movies to watch. I do have a general intuition of what I like, which is also reflected in my history of movies watched. So, things like: it has to be set in space, in the near future, it's shorter than two hours, and there are no crazy aliens, no horror, anything like that. And I can provide the list of movies that I liked. We can then train a model to label, or in this case classify, whether or not I'll like a new sci-fi movie or series about space drama when it comes out.

The key difference, though, is that although I have an intuition of what I like, and I'm providing the model with a list of movies that I liked and didn't like, I'm not providing it with a hard-coded recipe for narrowing down the movies, like "if it's shorter than two hours, then prioritize the space movie as long as it's not horror", that sort of thing. The beauty of ML is that it comes up with this recipe by itself, based on the correctly labeled examples it has seen so far. Now, imagine if I didn't provide any insight into my movie selection process other than all the movies and TV series I've watched in the past. Would you have any basis to even build those hard-coded rules? Not anymore. And what if I asked you to predict across all genres of movies, which could have very different aspects? How could you maintain a rules base of things like "if comedy and actor equals John, else if not horror and duration less than two hours"? All of that gets unwieldy. Let the machine learning model figure out the recipe that ties your historical labeled data to the predictions on unseen data.
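To make that concrete, here is a minimal sketch of "lead with examples, not instructions" in Python. The feature names (set_in_space, near_future, runtime_minutes, is_horror), the tiny dataset and the choice of a scikit-learn decision tree are all hypothetical illustrations, not anything from this course; the point is simply that we hand over labeled examples and let the model derive its own recipe.

```python
# Hypothetical example: predicting whether I'll like a new sci-fi release
# from a handful of labeled past movies. No hand-coded "shorter than two
# hours and not horror" rules anywhere; the model learns its own recipe.
from sklearn.tree import DecisionTreeClassifier

# Each row: [set_in_space, near_future, runtime_minutes, is_horror]
past_movies = [
    [1, 1, 110, 0],
    [1, 0, 150, 0],
    [0, 1, 95, 0],
    [1, 1, 105, 1],
    [0, 0, 130, 1],
    [1, 1, 100, 0],
]
liked = [1, 0, 0, 0, 0, 1]  # 1 = liked it, 0 = didn't (made-up labels)

model = DecisionTreeClassifier().fit(past_movies, liked)

# A new, unlabeled release: the learned recipe makes the call for us.
new_release = [[1, 1, 98, 0]]
print("Predicted to like it:", bool(model.predict(new_release)[0]))
```

With a dataset this small the prediction is not very meaningful, of course; the payoff comes when the same pattern is applied to thousands of labeled examples across any genre, with no rules base to maintain.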
Now, let's extrapolate this to a real-world application. Let's take Google Search, for example. Say you go to Google and you search for "giants". What should we show you on your results page to make it the most relevant for you? Well, if you're in California like me, should we show you results for the San Francisco Giants, the baseball team, and maybe list some local games nearby? What about if you're based in New York? Should we tailor the results to show the New York Giants football team instead?

Well, up until a few years ago, this is exactly how Google Search worked. There were a ton of rules in the search engine code base to decide which sports team to show, based on where the user was. If the query is "giants" and the user is in the Bay Area, show them results about the San Francisco Giants. If the user is in the New York area, show them results about the New York Giants. And if they're anywhere else, show them results about tall people, actual giants. Those of you who have worked with SQL before, just imagine how many CASE statements this would be and how hard they would be to maintain. And that's just for one query. Multiply this by the huge variety of queries people make, where they make them from and what device they're on, and you can imagine how complex and unwieldy the whole code base had become.

Hard-coded rules are hard to maintain, and this is exactly where ML comes into play. It scales much better because it requires no hand-coded rules and it's all automated. Our dataset in this case is the history of which links people clicked on in the search engine results pages. Why couldn't we just train an ML model to provide input into the search ranking? That's exactly what Google did internally, with a deep learning model called RankBrain. After rolling it out, the quality of search results improved dramatically, with the signal coming from RankBrain becoming one of the top three influencers on how results are ranked. If you're interested, I'll provide a link where you can read more about it.

Now, to recap: with machine learning, we lead with examples, not with instructions. Any business application where you have these long chains of hard-coded if-then-else statements stitched together, but you also have a history of good labeled data, is a possible application for machine learning.

Now, deep learning, remember, that's that sub-discipline of machine learning, is useful when we as humans can't even map out our own intuition about what makes a prediction correct or not. So what do you see here? Your eyes and your brain have the benefit of many, many years of evolution and intuition that allow you to perceive and interpret all those pixels on the screen. How could we teach a machine to understand that this picture here is a cat? If you let yourself fall back into the rule-making habits that we want to avoid, you might say, well, look for cat-like eyes in these images. Okay, what about this image? Your brain still knows it's a cat, but the machine now has no basis to go off of with the old rule of just looking at the eyes and deciding whether they're cat-like. Okay, what happens if we add a bunch more hard-coded rules, like look for the ears, the eyes and the nose? All right, is this still a cat? What about this one? Again, you get the point. Hard-coding rules completely fails us here, and that's where deep learning comes into play: we just have labeled examples, and we completely let the model figure out how to build a good recipe to answer the question, what is a cat? And in 2012, that's exactly what the Google research team, with Jeff Dean and Andrew Ng, did.
What you see here is how the deep learning neural network figured out what a cat is, based on looking at over 10 million images and training the model across some 16,000 processors. Now, a familiar architecture for deep learning is the neural network, a model loosely inspired by our own human brains. Here, it takes the input image that you see there and classifies it as a cat or a dog. And again, we're not telling the model to focus on looking for dog collars or cat whiskers; it builds its own recipe for determining the correct label and applies it to new images. As you can see from the image, modern ML models can scale to handle even tricky data points like this dog hiding in the laundry basket.
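To close the loop on the cat-versus-dog idea, here is a minimal sketch of an image classifier in Python. This is not RankBrain or the 2012 Google Brain model; it's just a tiny Keras convolutional network trained on randomly generated stand-in arrays (the image size, layer sizes and labels are all assumptions made for illustration), to show that the only thing we supply is labeled examples, never whisker-or-collar rules.

```python
# Toy sketch: a small convolutional neural network that learns "cat vs. dog"
# purely from labeled examples. The random arrays below stand in for real
# labeled photos; in practice you would load an actual image dataset.
import numpy as np
import tensorflow as tf

# Stand-in data: 200 fake 64x64 RGB "images" with made-up cat/dog labels.
images = np.random.rand(200, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=(200, 1)).astype("float32")  # 0 = cat, 1 = dog

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "dog"
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=3, batch_size=32)

# Classify a new, unseen "photo" using whatever recipe the network built.
new_photo = np.random.rand(1, 64, 64, 3).astype("float32")
print("P(dog) =", float(model.predict(new_photo)[0][0]))
```

Swap the random arrays for a real labeled photo collection and the structure stays the same: labeled examples in, a learned recipe out.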