Welcome back. We're here to talk about a case study. This case study is a big architecture, something that Mike and I worked on together a while ago. Most of the rest of the course (the next few exercises, readings, and videos) is based on the architecture we used in this case study.

To set the stage, this was a contact tracing app, and it was started in the very early days of COVID. Think March 2020. Everyone was on lockdown; we were all in our basements writing this contact tracing app. Contact tracing is a process that's been around for a long time. If someone gets infected with a disease, in this case COVID, people called contact tracers would go out, find everybody the infected person had come into contact with, and warn them that they might be infected. COVID really tested the scale of contact tracing; in a lot of cases we just weren't able to do it fast enough. One attempt at a solution was to use our phones and Bluetooth Low Energy to tell when two people were close to each other, so that if one of them got infected, the other could be notified.

This was a high-profile government initiative. It was supposed to be built really quickly and then launched in a Big Bang release, which meant a live television event: the head of state of this country was going to get on TV and announce, all of a sudden, that this contact tracing app was available. Celebrities were going to be launching social media campaigns. Because of this, we expected to get to five million requests per hour. If you break that down, that's about 83,000 requests per minute, or about 1,400 requests per second, which is quite a lot. But the big thing was that this load would be sustained for one to two hours. This is a level of traffic that not many people have dealt with. We were brought in partway through the engagement to make sure that we could actually hit these numbers.
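The traffic breakdown above is just arithmetic, but it's worth sanity-checking. A tiny back-of-the-envelope sketch (the class and method names are ours, purely for illustration):

```java
// Back-of-the-envelope check of the launch traffic numbers quoted above.
public class TrafficMath {
    static long perMinute(long requestsPerHour) {
        return requestsPerHour / 60;
    }

    static long perSecond(long requestsPerHour) {
        return requestsPerHour / 3600;
    }

    public static void main(String[] args) {
        long perHour = 5_000_000L;
        System.out.println(perMinute(perHour)); // 83333 requests/minute (~83,000)
        System.out.println(perSecond(perHour)); // 1388 requests/second (~1,400)
    }
}
```

So "five million per hour" really does work out to roughly 1,400 requests per second, sustained.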
As a starting point, we saw a few things that worried us a little in terms of hitting that 1,400 requests per second: a really heavy ORM, some synchronous messaging, an app built as a monolith with no load testing or benchmark testing in place, and the use of things like Spring Data REST, which can be great for getting going quickly but doesn't give you the control you need to tune performance. We saw this and we thought we were probably not going to hit our mark, so we wanted to take some action.

The first thing we did was build an automated benchmark suite. We said: before we do anything, let's measure. Let's see where we are now, then start making changes; ideally, we'd see performance improve with those changes. We actually had a whole team that wasn't even working on the core app, just on this automated benchmark app. I think it was a team of four to six people whose job was to write an app that could register with our contact tracing application as fast as it could.

With that framework in place, we made some changes. We went to writing our controllers by hand rather than using Spring Data REST, which doesn't give us a whole lot of control. This let us speed things up and have really fine-grained control over the requests coming in. We went from a pretty heavy ORM, Hibernate, to raw JDBC. A couple of things were nice here. One, an ORM might generate non-performant queries: queries that fetch too much data, do too many joins, or select things you don't need. ORMs also tend to be pretty chatty. If we want to update a record, an ORM will often do a SELECT first and then an UPDATE. That might not be too big a deal when you're writing your average web app, but when you need to hit 1,400 requests per second, it's a really big deal. The other thing is that we were using SQL Server for this one, which we knew was really fast.
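To make the "chatty ORM" point concrete, here's a small simulation (not real Hibernate or JDBC code; the names and the in-memory "database" are invented for illustration) that counts database round trips for an ORM-style load-then-save update versus a single hand-written UPDATE:

```java
import java.util.Map;
import java.util.HashMap;

// Illustrative simulation: counts round trips to a pretend database for an
// ORM-style "load the entity, then flush the change" update versus a single
// hand-written UPDATE statement.
public class RoundTrips {
    // ORM-style: SELECT the row first, then UPDATE it. Two round trips.
    static int ormStyleUpdate(Map<Long, String> db, long id, String value) {
        int trips = 0;
        trips++;           // round trip 1: SELECT to load the entity
        db.get(id);
        trips++;           // round trip 2: UPDATE to flush the change
        db.put(id, value);
        return trips;
    }

    // Hand-written SQL: one UPDATE statement, one round trip.
    static int directUpdate(Map<Long, String> db, long id, String value) {
        db.put(id, value);
        return 1;
    }

    public static void main(String[] args) {
        Map<Long, String> db = new HashMap<>();
        db.put(1L, "old");
        System.out.println(ormStyleUpdate(db, 1L, "new"));   // 2 round trips
        System.out.println(directUpdate(db, 1L, "newer"));   // 1 round trip
    }
}
```

At 1,400 requests per second, that difference is the difference between roughly 2,800 and 1,400 statements per second hitting the database, which is exactly why the chattiness mattered here.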
We knew it was going to be at its fastest when we were just reading and writing: just INSERTs and SELECTs, no UPDATEs or DELETEs. We went from synchronous Spring messaging to RabbitMQ, which let us process messages asynchronously. We're going to dig into the RabbitMQ side of things quite a bit over the next few videos, readings, and labs, so if you've always wanted to learn about RabbitMQ messaging, you're in luck.

We moved from a monolith to our app continuum architecture. I think we linked to an app continuum article previously in this course, but basically, we had multiple parts of the app that needed to scale separately, and having separate deployables and separate services working within the same codebase gave us the best of both worlds: we could scale things independently but also move quickly on the codebase itself.

As I mentioned, we built out automated benchmark testing. This gave us confidence that every commit coming in was making things better and moving us toward our benchmark goals. It ran with every single build on CI, and developers could run it on demand on their machines. It caught a couple of regressions for us, things we introduced that were slow without our knowing they were slow, and it gave us confidence that the system was going to work. We also moved from a direct exchange, which we'll talk about later, to a consistent hash exchange. I won't go too far into this now; we'll dig into what these two terms mean and why they're important later in the course.

Take a look. You've got a couple of readings coming up, and then we'll dig further into message queues and the codebase you'll be working with, which mirrors this architecture. Thanks.
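As a tiny preview of the consistent hash exchange idea before we cover it properly: a consistent hash exchange hashes each message's routing key to pick a queue, so all messages with the same key land on the same queue and consumer. This pure-Java sketch mimics that with simple modulo hashing (the real RabbitMQ plugin uses a hash ring with weighted buckets, and none of these names are from the actual plugin):

```java
// Pure-Java sketch of hash-based routing (no RabbitMQ involved): hash the
// routing key, pick a queue index, and the same key always maps to the same
// queue. The real consistent hash exchange uses a hash ring so that adding
// or removing queues remaps as few keys as possible.
public class HashRouting {
    static int queueFor(String routingKey, int queueCount) {
        // floorMod keeps the index non-negative even when hashCode() is negative.
        return Math.floorMod(routingKey.hashCode(), queueCount);
    }

    public static void main(String[] args) {
        int queues = 4;
        // Same routing key, same queue, every time.
        System.out.println(queueFor("user-42", queues) == queueFor("user-42", queues)); // true
    }
}
```

That per-key stickiness is what a direct exchange alone doesn't give you when you fan work out across many consumers; we'll see why it mattered for this system later in the course.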