When it comes to working in the kitchen, I've been told that I'm really good at getting in the way. I have a bad habit of making dozens of trips from where I'm preparing the ingredients over to the stove, when the logical thing to do would be to bring what I'm working on over to where it ultimately needs to go, so I'm not at risk of dropping something on the floor, or of introducing latency when I invariably get in someone else's way. I can't help it. It's just what I do.

As application architects, it goes without saying that the choices we make about where our applications and data will live and run, especially in relation to one another, make up a significant part of just how well a solution will perform and scale. Data on Z is usually thought of as system-of-record data, but that system-of-record data is often used across all lines of business and leveraged in just about every conceivable way, from mobile apps, to IoT devices, to reports, to training new AI models. Business data is what keeps us moving, and we need to consider how to keep that data available and connected while also ensuring the security, speed, and scalability required. Additionally, many applications will have SLAs stating just how quickly those services need to respond to requests. In an environment where applications must traverse a network and several software stacks to reach that data, we leave our SLAs vulnerable to just about anything that might happen between here and there.

You already know that z/OS services and data are supported by Db2, VSAM, IMS, and CICS, but with the addition of z/OS Container Extensions (zCX), IBM Cloud Paks, MongoDB, open source software, and Red Hat OpenShift, consider the option of co-locating applications onto the Z platform, where they are not only closer to the data, but can also begin to take advantage of the benefits of Z.
For example, one organization with an IBM WebSphere application accessing a Db2 for z/OS database wanted to improve performance over what it had in a distributed environment, where the application was reaching out across a network to get to that system-of-record data. They were able to containerize the WebSphere application and bring it onto Z, where Db2 runs, drastically cutting down on latency and getting huge performance gains from running Java on Z. Let's put a number to that claim of huge: a 40-times increase in queries per minute after co-locating the application onto Z. That's not just because data is now accessed over a simple, direct HiperSockets link rather than over a wire on traditional network infrastructure. A lot of those performance gains are thanks to industry-leading Java performance on Z, where you can see a 7- to 15-times reduction in latency right there.

Besides the obvious lower-total-cost-of-ownership advantages of Z, there are benefits to be had in simplifying the end-to-end pipeline for developers, securing data, handling dynamic workload surges and scalability, and easing the deployment of packaged solutions. It's important to realize that Z is capable of supporting applications in more ways than you may realize. You already know it's where CICS, IMS, and UNIX System Services can run applications, but with recent innovations there are a growing number of options, thanks to container runtimes such as Docker and Podman available on Linux, IBM z/OS Container Extensions (zCX) on z/OS, and the large number of Linux distributions available, such as Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, Debian, and Fedora. Plus, z/OS itself offers support for JavaScript, Go, Java, Python, and many AI and ML frameworks. When you consider what gives us the greatest performance, flexibility, and security, it comes down to a place for everything and everything in its place.
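To make the containerization step above a little more concrete, here is a minimal Dockerfile sketch for packaging a Java web application so it can run next to the data on Z. This is an illustration, not the organization's actual build: the base image tag, the application file `target/myapp.war`, and the `server.xml` path are all assumptions for the example (IBM publishes multi-architecture WebSphere Liberty images, which is what allows the same Dockerfile to build for s390x).

```dockerfile
# Hypothetical sketch: containerize a Java web app for co-location on IBM Z.
# The base image tag and file paths below are assumptions for illustration.
FROM icr.io/appcafe/websphere-liberty:full-java17-openj9-ubi

# Copy the application archive and the Liberty server configuration
# (datasource definitions for Db2 would live in server.xml).
COPY --chown=1001:0 target/myapp.war /config/apps/
COPY --chown=1001:0 server.xml /config/

# Liberty serves HTTP on 9080 by default.
EXPOSE 9080
```

Because the image is multi-architecture, the same definition can be built and run on an s390x host (for example under zCX or on Linux on Z), which is what puts the application a HiperSockets hop, rather than a network, away from Db2.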