What Is Machine Learning? Definition, Types, and Examples

Written by Coursera Staff

Machine learning is a common type of artificial intelligence. Learn more about this exciting technology, how it works, and the major types powering the services and applications we rely on every day.

Machine learning is a subfield of artificial intelligence that uses algorithms trained on data sets to create models that enable machines to perform tasks that would otherwise only be possible for humans, such as categorizing images, analyzing data, or predicting price fluctuations.

Today, machine learning is one of the most common forms of artificial intelligence and often powers many of the digital goods and services we use every day. 

In this article, you’ll learn more about what machine learning is, including how it works, the different types of machine learning, and how it's actually used in the real world. We’ll take a look at the benefits and dangers that machine learning poses, and in the end, you’ll find some cost-effective, flexible courses that can help you learn even more about machine learning. 

Beginner-friendly machine learning courses

Interested in learning more about machine learning but aren't sure where to start? Consider enrolling in one of these beginner-friendly machine learning courses on Coursera today:

In DeepLearning.AI and Stanford's Machine Learning Specialization, you'll master fundamental AI concepts and develop practical machine learning skills in as little as two months.

The University of London's Machine Learning for All course will introduce you to the basics of how machine learning works and guide you through training a machine learning model with a data set on a non-programming-based platform.

Machine learning definition 

Machine learning is a subfield of artificial intelligence (AI) that uses algorithms trained on data sets to create self-learning models that are capable of predicting outcomes and classifying information without human intervention. Machine learning is used today for a wide range of commercial purposes, including suggesting products to consumers based on their past purchases, predicting stock market fluctuations, and translating text from one language to another. 

In common usage, the terms “machine learning” and “artificial intelligence” are often used interchangeably because machine learning is so prevalent in today's AI applications. But the two terms are meaningfully distinct: while AI refers to the general attempt to create machines capable of human-like cognitive abilities, machine learning specifically refers to the use of algorithms and data sets to do so.

Read more: Machine Learning vs. AI: Differences, Uses, and Benefits

Examples and use cases

Machine learning is the most mainstream type of AI technology in use around the world today. Some of the most common examples of machine learning that you may have interacted with in your day-to-day life include:

  • Recommendation engines that suggest products, songs, or television shows to you, such as those found on Amazon, Spotify, or Netflix. 

  • Speech recognition software that allows you to convert voice memos into text.

  • Fraud detection services at your bank that automatically flag suspicious transactions. 

  • Self-driving cars and driver assistance features, such as blind-spot detection and automatic stopping, that improve overall vehicle safety. 

Learn more about the real-world applications of machine learning in the lectures from Stanford and DeepLearning.AI's Machine Learning Specialization.

Read more: 9 Real-Life Machine Learning Examples

How does machine learning work? 

Machine learning is both simple and complex. 

At its core, the method simply uses algorithms, essentially lists of rules, that are adjusted and refined using past data sets so they can make predictions and categorizations when confronted with new data. For example, a machine learning algorithm may be “trained” on a data set of thousands of flower images, each labeled with its flower type, so that it can then correctly identify the flower in a new photograph based on the differentiating characteristics it learned from the training images.  

To ensure such algorithms work effectively, however, they must typically be refined many times until they accumulate a comprehensive set of instructions that allow them to function correctly. Sufficiently trained algorithms become “machine learning models,” which are essentially algorithms trained to perform specific tasks like sorting images, predicting housing prices, or making chess moves. In some cases, algorithms are layered on top of one another to create networks capable of increasingly nuanced tasks, like generating text and powering chatbots, via a method known as “deep learning.”
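
To make this training loop more concrete, here is a minimal sketch in Python. It assumes the scikit-learn library and uses its bundled iris flower measurements as a stand-in for the labeled flower photos described above; the data set and model choice are illustrative, not the only way to do this.

```python
# A minimal sketch of the train-then-predict cycle described above, using
# scikit-learn's bundled iris measurements as a stand-in for labeled flower
# photos. The library, data set, and model choice are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load labeled flower data: measurements (X) and flower types (y).
X, y = load_iris(return_X_y=True)

# Hold out some examples to check how well the trained model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" adjusts the algorithm's internal rules using the past data...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ...after which the resulting model can classify flowers it has never seen.
print("Accuracy on new flowers:", model.score(X_test, y_test))
```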

As a result, although the general principles underlying machine learning are relatively straightforward, the models that are produced at the end of the process can be very elaborate and complex.  

Machine learning vs. deep learning 

As you’re exploring machine learning, you’ll likely come across the term “deep learning.” Although the two terms are interrelated, they're also distinct from one another. 

Machine learning refers to the general use of algorithms and data to create autonomous or semi-autonomous machines. Deep learning, meanwhile, is a subset of machine learning that layers algorithms into “neural networks” that somewhat resemble the human brain so that machines can perform increasingly complex tasks. 
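
To see what “layering algorithms into a network” can look like in code, here is a minimal sketch, again assuming scikit-learn; a small MLPClassifier neural network with two hidden layers stands in for the much larger networks used in real deep learning systems.

```python
# A minimal sketch of deep learning's layered approach: a small neural network
# with two hidden layers. The library, data set, and layer sizes are
# illustrative assumptions, not a production setup.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labeled images of handwritten digits (8x8 pixels, flattened to 64 numbers).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two stacked layers of 32 units each; deeper networks simply add more layers.
network = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
network.fit(X_train, y_train)

print("Digit recognition accuracy:", network.score(X_test, y_test))
```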

Read more: Deep Learning vs. Machine Learning: Beginner’s Guide

Types of machine learning 

Several types of machine learning power the many digital goods and services we use every day. While each type pursues a similar goal, creating machines and applications that can act without human oversight, the precise methods they use differ somewhat. 

To help you get a better idea of how these types differ from one another, here’s an overview of the four different types of machine learning primarily in use today. 

1. Supervised machine learning 

In supervised machine learning, algorithms are trained on labeled data sets that include tags describing each piece of data. In other words, the algorithms are fed data that includes an “answer key” describing how the data should be interpreted. For example, an algorithm may be fed images of flowers that include tags for each flower type so that it can correctly identify the flower type when fed a new photograph. 

Supervised machine learning is often used to create machine learning models for prediction and classification purposes. 
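
As a rough illustration of supervised prediction, the sketch below fits a regression model to a tiny, made-up table of labeled house sizes and prices; the numbers, the scikit-learn library, and the linear model are illustrative assumptions rather than a recommended setup.

```python
# A minimal sketch of supervised learning for prediction: a regression model
# trained on a tiny, made-up table of labeled examples (house size -> price).
from sklearn.linear_model import LinearRegression

# Each row is one labeled example; the price list is the "answer key."
sizes_sqft = [[900], [1200], [1500], [1800], [2100], [2500]]
prices_usd = [150_000, 195_000, 240_000, 280_000, 330_000, 390_000]

# Fit the model to the labeled data, then predict a price for an unseen house.
model = LinearRegression().fit(sizes_sqft, prices_usd)
print("Predicted price for 1,600 sq ft:", round(model.predict([[1600]])[0]))
```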

2. Unsupervised machine learning 

Unsupervised machine learning uses unlabeled data sets to train algorithms. In this process, the algorithm is fed data that doesn't include tags, which requires it to uncover patterns on its own without any outside guidance. For instance, an algorithm may be fed a large amount of unlabeled user data culled from a social media site in order to identify behavioral trends on the platform. 

Unsupervised machine learning is often used by researchers and data scientists to identify patterns within large, unlabeled data sets quickly and efficiently. 
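
Here is a minimal sketch of that idea, assuming scikit-learn and a small synthetic data set standing in for unlabeled user-behavior data; the k-means algorithm and the choice of three clusters are illustrative assumptions.

```python
# A minimal sketch of unsupervised learning: clustering unlabeled "user
# behavior" data to surface groups no one defined in advance.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: imagine each row as one user's [posts per week, minutes on site].
users, _ = make_blobs(n_samples=300, centers=3, n_features=2, random_state=0)

# Ask the algorithm to find three behavioral groups on its own (no labels given).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)
print("First ten users were assigned to clusters:", clusters[:10])
```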

3. Semi-supervised machine learning 

Semi-supervised machine learning uses both unlabeled and labeled data sets to train algorithms. Generally, during semi-supervised machine learning, algorithms are first fed a small amount of labeled data to help direct their development and then fed much larger quantities of unlabeled data to complete the model. For example, an algorithm may be fed a smaller quantity of labeled speech data and then trained on a much larger set of unlabeled speech data in order to create a machine learning model capable of speech recognition. 

Semi-supervised machine learning is often employed to train algorithms for classification and prediction purposes when large volumes of labeled data are unavailable. 
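
The sketch below shows one way this can look in code, assuming scikit-learn's self-training approach and using its bundled handwritten-digit images as a stand-in for the speech data mentioned above; only the first 50 examples keep their labels, and the rest are marked as unknown.

```python
# A minimal sketch of semi-supervised learning: a small labeled set guides the
# model, which then teaches itself from a much larger unlabeled set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Pretend only the first 50 examples are labeled; mark the rest as unknown (-1).
y_partial = np.full_like(y, fill_value=-1)
y_partial[:50] = y[:50]

# The classifier trains on the labeled few, then labels and learns from the rest.
model = SelfTrainingClassifier(SVC(probability=True)).fit(X, y_partial)
print("Accuracy across the full data set:", model.score(X, y))
```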

4. Reinforcement learning 

Reinforcement learning uses trial and error to train algorithms and create models. During the training process, algorithms operate in specific environments and then are provided with feedback following each outcome. Much like how a child learns, the algorithm slowly begins to acquire an understanding of its environment and begins to optimize actions to achieve particular outcomes. For instance, an algorithm may be optimized by playing successive games of chess, which allows it to learn from its past successes and failures playing each game. 

Reinforcement learning is often used to create algorithms that must effectively make sequences of decisions or actions to achieve their aims, such as playing a game or summarizing an entire text. 
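
A full chess-playing agent is far too large to show here, but the minimal sketch below illustrates the same trial-and-error loop with tabular Q-learning in a tiny five-cell corridor; the environment, reward, and parameter values are all illustrative assumptions.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning in a tiny
# corridor where the agent is rewarded only for reaching the rightmost cell.
import random

N_STATES = 5                 # positions 0..4; the goal is the rightmost cell
ACTIONS = [-1, +1]           # move left or move right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):   # many episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Feedback after each move nudges the estimated value of the chosen action.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += 0.5 * (reward + 0.9 * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy should be "always move right" (+1).
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```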

Read more: 3 Types of Machine Learning You Should Know

Machine learning benefits and risks 

Machine learning is already transforming much of our world for the better. Today, the method is used to construct models capable of identifying cancer growths in medical scans, detecting fraudulent transactions, and even helping people learn languages. But, as with any new society-transforming technology, there are also potential dangers to know about. 

At a glance, here are some of the major benefits and potential drawbacks of machine learning: 

Benefits:

  • Decreased operational costs: AI and machine learning can help businesses automate some of their jobs, causing overall operational costs to decrease.

  • Improved operational efficiency and accuracy: Machine learning models are able to perform certain narrow tasks with extreme efficiency and accuracy, ensuring those tasks are completed to a high standard in a timely manner.

  • Improved insights: Machine learning can quickly identify trends and patterns in large amounts of data that would be time-consuming for humans to uncover. These insights can equip businesses, researchers, and society as a whole with new knowledge that has the potential to help them achieve their overall goals.

Dangers:

  • Job layoffs: As some jobs are automated, workers in the impacted fields will likely face layoffs that could force them to switch to a new career or risk long-term unemployment.

  • Lack of human element: Models tasked with a very narrow job may miss many of the “human” aspects of the work that are important to it but potentially overlooked by developers.

  • Ingrained biases: Just like the humans who create them, machine learning models can exhibit bias due to the occasionally skewed data sets they’re trained on.

Learn more with Coursera 

AI and machine learning are quickly changing how we live and work in the world today. As a result, whether you’re looking to pursue a career in artificial intelligence or are simply interested in learning more about the field, you may benefit from taking a flexible, cost-effective machine learning course on Coursera. 

In DeepLearning.AI and Stanford’s Machine Learning Specialization, you’ll master fundamental AI concepts and develop practical machine learning skills in the beginner-friendly, three-course program by AI visionary Andrew Ng.

In IBM’s Machine Learning Professional Certificate, you’ll master the most up-to-date practical skills and knowledge machine learning experts use in their daily roles, including how to use supervised and unsupervised learning to build models for a wide range of real-world purposes. 

Written by:

Editorial Team

Coursera’s editorial team comprises highly experienced professional editors, writers, and fact-checkers.

This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.