Let's summarize what we've learned in this DataOps module. DataOps focuses on delivering high-quality, trusted data quickly in response to the business demands that feed the decision-making processes driving successful businesses. The methodology describes a repeatable process for building and deploying analytics and data pipelines. By following data governance and model management practices, you can deliver high-quality enterprise data to enable AI. Successful implementation allows an organization to know, trust, and use its data to drive value.

We've described three phases: establish, iterate, and improve DataOps.

Establish DataOps sets you up for success by determining the operating environment, the tools available to you, and the people who can make DataOps a success. This phase should build on any existing data topology and architecture work, as well as any data governance and stewardship already in place; it's all about ensuring that the DataOps team understands the operating conditions they need to adhere to. Data strategy determines the plans for how data is managed and stored: do we want to use one cloud or multiple clouds, and what regulations dictate what we can and cannot do with the data? The team step looks at the right makeup for delivering data to requirements, combining knowledge of the business with knowledge of the IT landscape, and changing that makeup to suit what is being delivered. Toolchain focuses on automating the data pipeline and reducing manual intervention to the minimum needed, ideally intervening only for exceptions. Establish baseline determines the guardrails for each data sprint: how to organize data catalogs, as well as the governance artifacts that streamline the discovery of data, such as business terms, data classes, and reference data.
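To make the idea of a governance artifact concrete, here is a minimal sketch in Python of a data class that recognizes values by pattern, the kind of artifact that streamlines data discovery and classification. The names and the threshold-based matching are illustrative assumptions; the DataOps methodology does not prescribe any particular implementation.

```python
import re

class DataClass:
    """A catalog 'data class': a named pattern that recognizes a kind of value
    (hypothetical, illustrative implementation)."""

    def __init__(self, name, pattern):
        self.name = name
        self.pattern = re.compile(pattern)

    def matches(self, values, threshold=0.8):
        """Classify a column if most sampled values fit the pattern."""
        hits = sum(1 for v in values if self.pattern.fullmatch(v))
        return hits / max(len(values), 1) >= threshold

email = DataClass("Email Address", r"[^@\s]+@[^@\s]+\.[^@\s]+")
column_sample = ["ana@example.com", "bo@example.org", "not-an-email"]
print(email.matches(column_sample))  # 2 of 3 values match: below the 0.8 threshold
```

During discovery, running each candidate column's sample through a library of such data classes is one simple way to tag columns so data consumers can search for them by meaning rather than by physical column name.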
This step also considers how to organize data privacy rules, which help protect data from inadvertent or malicious misuse. Finally, establish business priorities is all about building a pipeline, or backlog, of well-defined requests for data, each of which fully defines what data is required and is scored according to its benefit to the business.

Now that we've created just enough of the framework to operate, we can begin to work on that backlog. For each data sprint, we choose tasks from the backlog and, using the toolchain to benefit from automation, work on delivering each one. We step through the tasks needed to curate the data: running data discovery and classification to add data sources that supply what is needed and make it consumable and searchable by data consumers; assessing its quality; and dynamically protecting it with data protection rules before letting data consumers work and experiment with it to get a real feel for whether it satisfies their requirements. Once the data consumers have provided feedback and we've made progress, we can decide whether to put repeatable data pipelines in place, whether that involves physical data movement or setting up data virtualization. Then we decide whether we're finished, in which case we promote the new data assets, along with any data governance assets created in this sprint, to our production catalog, making the data available to all relevant data consumers across the enterprise. The key aspect of the data sprint is its focus on high-scoring data issues, which ensures that DataOps delivers business value, and its pinpoint focus on delivering just what was asked for, so that we avoid scope creep and costly delays.

The third and last phase is improve DataOps. Like a retrospective, it allows a period of reflection at the end of every sprint, where we look for what we can improve in each and every step along the way.
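The backlog-and-sprint flow described above can be sketched in a few lines of Python. The request fields, scores, and sprint capacity here are illustrative assumptions, not part of the methodology: the point is simply that each request fully defines the data it needs and carries a business-benefit score, and that each sprint pulls the highest-scoring items first.

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    """A well-defined request for data on the DataOps backlog."""
    title: str
    required_data: list   # what data is required, fully specified
    benefit_score: int    # scored by benefit to the business

def plan_sprint(backlog, capacity):
    """Pick the highest-scoring requests that fit into one data sprint."""
    ranked = sorted(backlog, key=lambda r: r.benefit_score, reverse=True)
    return ranked[:capacity]

backlog = [
    DataRequest("Churn dashboard", ["crm.customers", "billing.invoices"], 8),
    DataRequest("Ad-hoc sales extract", ["sales.orders"], 3),
    DataRequest("Fraud model features", ["payments.transactions"], 9),
]
for request in plan_sprint(backlog, capacity=2):
    print(request.title)  # highest-scoring requests are delivered first
```

Keeping the backlog scored this way is what lets each sprint focus on the high-value items and defer the rest, rather than attempting everything at once.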
It consists of honest self-questioning and ensures that each iteration has the best chance of delivering quickly and with the best quality possible.

The DataOps methodology provides a robust framework that helps you know what data you have to address specific business needs; trust that data, because its meaning is made clear, its quality is understood, and it is protected comprehensively; and use the data directly, with self-service enabling data consumers to react to business challenges and opportunities quickly.