Ethics in the age of AI: exploring the algorithms that power everyday life. As more predictive models are deployed in the real world, whether for a credit card application, a loan decision, a job application, or a college admissions filter, we see certain groups benefiting more than others. And this isn't surprising. Due to the nature of machine learning, those who won in the past will continue to win, simply because the models reflect our data as a society. So in this course, it's our goal to discover the full scope of the problem and figure out what to do about it as ethical machine learning researchers.

In the first course of the series, we're going to start off by exploring the evolution from simple algorithms to the complex predictive models we use today. We're also going to discuss how machine learning is used by companies and research institutions, and the current limitations of using modeling to make those decisions. Then we're going to take a look at future trends to see where this is all heading: we started out with the simplest algorithms, so what happens when we get to full-fledged artificial intelligence? And, of course, how do we handle building morality into these increasingly smart algorithms?

Then, in the second course of the series, we're going to move into fairness and bias: how to create more ethical models, as well as how to fix fairness issues in existing models. We're going to talk about how predictive models used for everything from loan decisions to credit card approvals can balance opportunity with protecting the financial interests of those providing them. It's all about balance. Then we're going to cover the human factors involved in machine learning, including the cognitive biases that introduce bias into our datasets, all with the goal of minimizing the bias of the model as much as possible.
Then, in course three of the series, we're going to look at privacy and transparency and how to consider them when building machine learning models. To make a truly ethical model, we need to protect everyone involved, right? Including those whose data is used to train these models. We're going to discuss differential privacy and other techniques used to ensure our models are protected and private data can't be leaked. Then we're going to explore how to make our algorithms more transparent, so that we can avoid unfair models getting a pass just because no one is looking at the decisions they're making.

And finally, we will have a capstone project to wrap things up. In this capstone project, we will build fictional models that, one, reduce bias, two, increase fairness, and three, consider privacy and transparency: essentially everything we've learned in the course so far. After completing this project, we're confident you'll be ready to join a machine learning team inside a research institution or company and have an immediate impact on increasing the fairness of existing and new models.

So why talk about ethics in AI today? The first reason is that models are outgrowing their creators for the very first time. Researchers are now genuinely unable to explain some of the predictions their models are making. They are in control of the inputs but not the outputs; the things in the middle are simply too complex. That leads into our next point: this is really our last chance to build fairness into these systems and avoid bias. As these systems grow more and more complex, we as humans may still be able to step back and take a look at what's going on, but we won't be able to understand how the decisions are being made. And finally, real harm is now being caused by these algorithms as companies put them into action without fully understanding all of the consequences.
So, for example, when you go to apply for a loan, a company may have put in place an algorithm that judges you unworthy of the loan, and that does real harm to you if the algorithm hasn't been designed to be ethically sound.

This is a course where philosophy really meets computer science. We're going to ask some very big questions that don't necessarily have one correct answer. What does it mean to be fair? Are we leveling the playing field so that our predictive models give people equality of opportunity? Or at what point do we think it's right to influence the outcomes themselves? And what does it mean to be biased? Is a model biased if we simply feed it the current state of the world and it reflects that state? Or do we need to step in and build our models toward what we actually desire? There are a ton of different factors to consider, all with the goal of bringing more ethical considerations into machine learning.

And a little bit about me, your instructor. I work in technology as a software developer and teacher, and I began taking machine learning seriously when the first no-code solutions went online in about 2015. In the five years since, I've explored the technical side of machine learning, and I've been amazed by the progress; it's hard to keep up with each new white paper. As I watched public interest in these algorithms increase with events like the AlphaGo release and the Cambridge Analytica scandal, I began to explore the human and ethical side in much more depth. And after hearing a conference speaker mention that the morals we program into machine learning today are those we will see in the robots of tomorrow, I realized that now is really the time to highlight how important this topic is.

I look forward to teaching you in this course. With that said, ethics and machine learning are both massive topics, and we will only be scratching the surface in this specialization.
But after these lessons, I'm confident you'll be able to have an immediate impact in tipping the scales toward the more just and moral path. See you in the course.