Learning machines: a future look. Our goals for this lesson are, first, to analyze the trajectory of machine learning and AI research from an ethical viewpoint: basically, where is all of this heading, and how do we insert ethics into the conversation early enough that we don't miss the window? Second, we're going to define general versus narrow intelligence and compare the two.

So to start off, let's talk about these two pictures. It's very hard to make the leap from one to the other. How do we get from the predictive models we see today, with just an input and an output, to robots that are fully aware? The learning algorithms we're talking about, fed more data in combination with model breakthroughs, eventually lead to intelligence that's hard to comprehend. So let's talk about the different evolutions necessary to get to this point. The progression can be broken down into two tracks, data and decisions: the components of learning and model prediction that we can follow through these evolutions.

First, let's talk about the data side. Today, researchers need to clean and parse large datasets and then train models on them, and that is incredibly time-consuming. It also gives researchers a great deal of control over the model. The first step in this evolution, which we'll call Evolution 1, is a model that can clean its own data and identify its own attributes from the provided datasets. You can imagine this saving a lot of time and allowing researchers to take a step back, because the model can work out for itself what in a dataset it can train on. We're starting to see some models that can train themselves, but this is still a few years away. The second evolution, which we'll call Evolution 2, is even further over the horizon. This is a model that can actually self-learn by identifying datasets from anywhere on the Internet.
Think of an Internet-connected predictive model that, given a goal, can go out, identify data, pull it off the web, and then teach itself: a very powerful ability. We'll refer to that as Evolution 2.

Now, let's turn to the decision side. A predictive model today makes a prediction based on a large amount of data, and that prediction can affect humans, for example a loan or credit card approval. But a human is never far behind that prediction. If the model can't make a prediction with a sufficient degree of confidence, the loan application gets kicked over to a human supervisor who can look into it further and decide. So there are humans very close behind the predictions made today.

The first evolution in the decision-making realm is a model that can make a decision affecting a human life without a human being involved in the decision-making process. The best example of this is a self-driving car. When we begin handing control over a human life to a model, there are some big ethical implications with Evolution 1. The second evolution of decision-making in machine learning is a model whose decisions don't just impact one human life but can impact countries and societies. Think of a sophisticated war-games AI that can predict conflicts and opponent moves better than the best human general. At some point you turn the decision-making over to that AI for speed, and that back-and-forth with a potential opponent has the ability to impact countries, societies, and eventually the entire world. So that's a big jump.

So let's talk now about Evolution 1 in the context of ethics. How do we program ethical considerations into models that start to take more and more control? First, on the data side: for a model to self-train, cleaning and parsing its own datasets, it needs what we can call ethical programming.
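The human-in-the-loop fallback described earlier for loan predictions can be sketched in a few lines of code. Everything here is a hypothetical illustration, not a real system: the confidence threshold, the model-score interface, and the escalation function are all assumptions made for the sketch.

```python
from dataclasses import dataclass

# A minimal sketch of the human-in-the-loop pattern: the model decides
# only when it is confident; otherwise the case goes to a human.
# Threshold and interfaces are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide


@dataclass
class Decision:
    approved: bool
    decided_by: str  # "model" or "human"
    confidence: float


def review_by_human(model_score: float) -> bool:
    # Placeholder for a queue that routes the case to a human supervisor.
    raise NotImplementedError("escalate to a human supervisor")


def decide_loan(model_score: float) -> Decision:
    """Route a loan application based on model confidence.

    model_score is the model's probability of approval; confidence is
    how far that score sits from the 0.5 decision boundary.
    """
    confidence = abs(model_score - 0.5) * 2  # 0.0 (unsure) .. 1.0 (certain)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(approved=model_score >= 0.5,
                        decided_by="model",
                        confidence=confidence)
    # Low confidence: kick the case over to a human supervisor.
    return Decision(approved=review_by_human(model_score),
                    decided_by="human",
                    confidence=confidence)
```

The design point is simply that the model's authority is bounded: any prediction it is unsure about falls through to a person, which is exactly the safety net that Evolution 1 starts to remove.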
Again, these models are not going to be aware, at least at this stage, so as researchers we need to spell out every ethical guideline they should check. How can the model ensure that it's accurate? How can it ensure that it's explainable, and also fair? An example would be programming in a list of 100 different fairness and bias tests for the model to run and adjust for. If it fails any of those tests, it goes back to the data, retrains, and produces a new model.

On the decision-making side, for any decision that involves a human life, we need the model to give a clear explanation of the reasoning and priorities behind that decision. For example, take a self-driving car facing an imminent impact: two pedestrians step into the road, and there is one occupant in the car. What should the car do? Does it swerve to avoid the two pedestrians, putting the occupant at risk, or does it protect the occupant's life at the cost of two others? It's tricky to formulate the moral code that needs to be programmed into this AI, but we will need a clear statement of these priorities before such decisions are allowed to be made.

Now, let's talk about Evolution 2. This is where we really get to that next stage in intelligence. On the data side, before a model can be hooked up to the Internet, we need to mitigate any unknown unknowns. For example, a model that consumes all of the Internet's data could, if connected to the right financial markets, quickly amass wealth surpassing the GDP of some countries. These are potentially very powerful algorithms we're dealing with in Evolution 2. On the decision-making side, we need to make sure there is 100 percent transparency into the reasoning of these models before decisions are actually allowed to be made.
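The "list of fairness and bias tests" idea above can be pictured as a simple retrain loop: run every test, and if any fail, go back to the data, adjust, and train a new model. This is a hypothetical sketch under assumed interfaces; the single demographic-parity test, the `train_model` and `adjust_data` callables, and the thresholds are all illustrative, not a real library API.

```python
# Sketch of "ethical programming" as a test-and-retrain loop.
# All names and thresholds here are illustrative assumptions.

def demographic_parity_gap(model, data) -> float:
    # Difference in approval rates between two groups (0.0 = no gap).
    rate_a = sum(model(x) for x in data["group_a"]) / len(data["group_a"])
    rate_b = sum(model(x) for x in data["group_b"]) / len(data["group_b"])
    return abs(rate_a - rate_b)


FAIRNESS_TESTS = [
    # (name, test function, maximum acceptable value); a real system
    # might carry 100 of these, as described above.
    ("demographic_parity", demographic_parity_gap, 0.05),
]


def train_until_fair(train_model, adjust_data, data, max_rounds=10):
    """Retrain until every fairness test passes or the budget runs out."""
    for _ in range(max_rounds):
        model = train_model(data)
        failures = [name for name, test, limit in FAIRNESS_TESTS
                    if test(model, data) > limit]
        if not failures:
            return model  # all tests passed: this model may be used
        # Failed at least one test: go back to the data and adjust.
        data = adjust_data(data, failures)
    raise RuntimeError("no model passed all fairness tests")
```

The loop makes the key property explicit: a model that fails any test in the battery is never released; it is sent back to the data stage, which is the behavior the lesson asks the researchers to program in up front.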
The example here is a model that assists in war decisions and that, given the top-polluting nation as a potential adversary, views the destruction of that nation as a positive for the environment. A failure mode like that is exactly why we need to make sure these models are properly trained and transparent.

Now that we're talking about intelligence that surpasses the abilities of humans, let's define two different types of intelligence to draw the distinction. The first is specific, or narrow, intelligence. Narrow AI is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. So this is one task that a model can excel at. Right now it's loans; it could be self-driving in the future, and eventually deciding policy. The important thing to note is that the evolution is in the importance and scope of the task at hand, but it is still only one task.

Which brings up the question: what does a machine learning algorithm look like when it can make decisions in more than one task category? In this example we have loan decisions, as before, and let's bring in another one: prison sentencing. Deciding whether someone should walk free or spend more time in jail is a very difficult decision to make. If the same model can make decisions in both categories, you start to move beyond the attributes of a traditional model toward a higher-level human quality, in this case, character. As a model can make decisions in more and more categories, it approaches what is called broad, or general, intelligence. General AI is defined as the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.
The evolution on this side goes from handling two decisions, to five, to ten, until suddenly a machine learning algorithm can make a decision better than a human on any given task. The evolution here is in the breadth of tasks: the range of decisions that can be handled better than any human expert in the field.

This is genuinely difficult to think about because, as this humorous graphic shows, we tend to look down on machine learning algorithms with the intellect of a small animal and say, "That's interesting. I can definitely see how things are getting smarter, but there's really no way this is going to approach human intelligence any time soon." But the real key, which is worth exploring and which we'll talk about in more depth, is that these models can explode in intelligence in a very short period of time. Take the example from the data side we just looked at: being hooked up to the Internet and able to grab, and potentially consume, all of the Internet's data in a very short period of time, combined with the right learning model, could lead to the scenario here, where in a matter of weeks or months a model goes from looking like a very unintelligent, training-heavy model to something that surpasses even the best human experts in the field.

We'll talk more about the ethical implications in this and future videos, but that's it for now. We'll see you in the comment section.