- [Alana] Next, we can talk about testing, and how testing fits into a continuous integration and continuous delivery pipeline. By performing testing at different stages of our development cycle, we are creating a measure of quality. When you write code, that code is going to do exactly what you tell it to do. The important question is, are we telling it to do the right thing? For example, I've added a purchasing feature to our application. The code now calculates tax for every purchase. The important question here is, are we calculating tax correctly? There can be many, many variables that go into this calculation. How much was the purchase? Where in the world did the purchase originate?

Taking this example, how have you tested application features like this in the past? When I first started coding, I would make the changes locally and click around in a build in my local environment. I would also roll the updated application out to a staging environment for stakeholders to perform some testing. Manual testing like this comes with some serious disadvantages. Where do we document the tests that were performed? Are those documents kept up to date? And if we discover an issue with tax calculation in production that I thought we had tested, that might leave us wondering, how did it get through?

Take a moment to think about how expensive it would be to fix an issue that made it all the way into production. Consider the scenario where we have been miscalculating tax for a few weeks. Now we need to issue refunds or charge extra, amend tax documentation, and take resources away from other projects for the cleanup and fix. Pretty expensive. But let's say we fix it, and we're now happy with this new purchasing feature. We can roll it out and stop testing it now, right? Well, not quite. We're going to keep adding features to our application, so we want to be sure we aren't breaking any existing features. We need to regression test. This is where we go back through the application's tests to be sure all the existing functionality hasn't been broken by a new change. It's easy to say, I'm changing X, so there is no way that will break Y. Those words could come back to haunt you if you aren't testing for regressions. If we were doing this manually, we would need to maintain documented test scripts for testers to run through for every regression test. This becomes pretty resource-intensive and slows us down from getting a feature in front of users.

So how can we apply some automation here? One thing we can do is use a testing framework for the application's language to start writing automated tests. For Python, you might be familiar with PyUnit, or the unittest package, which is part of the standard library. Java programmers might be using JUnit or similar, and .NET developers will be using xUnit.net. Each of these frameworks provides the same basic functionality: a way to write and run your test cases in code. The popular frameworks all come with some IDE integration, so running and debugging tests can be as easy as clicking a button in your IDE.

We'll be looking at two different types of testing, integration testing and unit testing. An integration test exercises multiple modules working together. Completing a game in the sample trivia game app in a production-like environment tests everything in the application. Is a new game getting written to the database? Are scores being calculated correctly? Are questions getting sent to players? A unit test, by contrast, tests a piece of functionality in isolation.
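To make that concrete, here is a minimal sketch of a unit test written with Python's built-in unittest framework. The calculate_tax function is a hypothetical stand-in for the purchasing feature mentioned earlier, not code from the sample application; the point is the shape of a test case: set up the inputs, call the code under test, and assert on the result.

```python
import unittest


def calculate_tax(amount, rate):
    """Hypothetical purchasing helper: return the tax owed on a purchase."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)


class CalculateTaxTests(unittest.TestCase):
    def test_standard_rate(self):
        # A 100.00 purchase at a 7% rate should owe 7.00 in tax.
        self.assertEqual(calculate_tax(100.00, 0.07), 7.00)

    def test_negative_amount_rejected(self):
        # Invalid input should raise an error, not return a bogus value.
        with self.assertRaises(ValueError):
            calculate_tax(-5.00, 0.07)


if __name__ == "__main__":
    unittest.main()
```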
In the trivia app, a unit test asks questions like, are we calculating our scores correctly? This might be a bit different from how you're coding now, but it's a very good thing. It's a good goal to write your code in a way that a piece of functionality can be tested in isolation. Here, we can also take advantage of mocking frameworks. The routine that calculates scores needs to read the most recent answer from the database. We also need to update the score in the database and send updated scores to all the players connected to the game. If I want to isolate the score calculation from these external systems, I can use a library to mock the conversations with them. Once they are mocked, I can simulate different responses from the external systems for different scenarios. If I want to be sure some error handling is working, I can simulate a failure from the database.

Russ will be going into more detail here and looking at the unit tests included in the sample. This course, again, focuses more on DevOps practices, so if you aren't familiar with Python coding, that's perfectly fine. What is most important here are the concepts we have discussed: integration tests for testing multiple modules working together, unit tests for testing a module in isolation, and mocking to simulate the modules outside of the module under test. All right, take it away, Russ.

- [Russ] Starting with the scores event variable, we have an object that looks a lot like the event that is going to get passed to the calculate scores Lambda function. The event contains a game ID, a single question and answer, and our position within the questions, currently zero. The test down here, test_trivia_calculate_scores_correct, begins by using the mock object to patch two variables inside the application. TABLE is the DynamoDB client. MANAGEMENT is an API Gateway Management API client used to handle the WebSocket communications. The patched DynamoDB client is set up to return some simulated game information.

Then we enter the implementation of trivia_calculate_scores, over here. We grab some information from the event parameter. In the production application, this will be called by Step Functions. Now we query the DynamoDB table for the connections that belong to this game, but we aren't talking to Dynamo. This is a unit test, so we are getting the mocked response back from the DynamoDB object. We just want to focus on the implementation of one thing. We iterate over the connected players here, and compare the answer from the Dynamo table to the correct answer. If it's right, we increment the score and call the Dynamo client to update the score, here. We have patched the Dynamo client with a mock, so our test will assert that this call happened with the correct parameters, in this case, the score that we expect. We build a list of connection IDs, players, and scores, and pass it to send_broadcast to update the connected players with the scores. When we return to the unit test, you will see there is an assert checking for the call that send_broadcast makes. Next, there's some logic to update the data structure that tracks the current question position, and a check to see if we need to send a game over message. Back inside the unit test, we have two asserts checking that the DynamoDB client was called with the expected parameters and, again, here, for the API Gateway MANAGEMENT client. That was a very quick introduction to the types of automated testing we have added to our workflow.
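Here is a condensed, self-contained sketch of the pattern Russ just walked through: patch the module-level clients with unittest.mock, simulate the data they return, and assert on the calls the code under test makes. The calculate_scores function and the TABLE and MANAGEMENT placeholders below are simplified stand-ins for the sample application's Lambda handler and its boto3 clients, so the names and payloads are illustrative only.

```python
import unittest
from unittest import mock

# Simplified stand-ins for the application's module-level boto3 clients.
# In the real sample app these are created when the module is imported.
TABLE = None        # DynamoDB table client
MANAGEMENT = None   # API Gateway Management API client (WebSocket broadcasts)


def calculate_scores(game_id, correct_answer):
    """Simplified scoring routine: read answers, update scores, broadcast."""
    players = TABLE.query(gameid=game_id)["Items"]
    scores = []
    for player in players:
        if player["answer"] == correct_answer:
            player["score"] += 1
            TABLE.update_item(connection=player["connectionId"], score=player["score"])
        scores.append({"connectionId": player["connectionId"], "score": player["score"]})
    MANAGEMENT.post_to_connection(scores)
    return scores


class CalculateScoresTests(unittest.TestCase):
    @mock.patch(__name__ + ".MANAGEMENT")  # patch the API Gateway client
    @mock.patch(__name__ + ".TABLE")       # patch the DynamoDB client
    def test_correct_answer_updates_score(self, mock_table, mock_management):
        # Simulate the game data DynamoDB would have returned.
        mock_table.query.return_value = {
            "Items": [{"connectionId": "abc123", "answer": "A", "score": 0}]
        }

        scores = calculate_scores("game-1", correct_answer="A")

        # The mocked DynamoDB client was asked to persist the incremented score.
        mock_table.update_item.assert_called_once_with(connection="abc123", score=1)
        # The mocked API Gateway client was asked to broadcast the new scores.
        mock_management.post_to_connection.assert_called_once()
        self.assertEqual(scores[0]["score"], 1)


if __name__ == "__main__":
    unittest.main()
```

Simulating an error path works the same way; for example, setting mock_table.query.side_effect to an exception lets a test exercise the error handling without ever touching a real database.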
Our goals with all testing are the same: we want an indication of code quality that starts at the beginning of the project and captures the knowledge and edge cases we are testing for in the application. Once this is automated, we can easily run tests at any time.
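For a Python suite built on unittest, that can be as simple as running python -m unittest discover, locally or from a CI job, so the same checks run on every change.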