If you're using TensorFlow 2, and in particular if you've been learning it through my courses, you'll have been using high-level APIs such as Keras, and Eager mode for easier development and debugging. But sometimes you might want to squeeze that extra bit of performance out of your code, and one way to do that is to use graph-based models. This week, you'll look at the AutoGraph technology that makes development of graph-based code a little easier.

Previously you saw how Eager mode can make it easier for you to write code that gives you immediate results. TensorFlow was originally designed around programming being done in graph mode, where you had to define a graph with all of your operations before you executed it. For example, if you wanted a formula like y equals ReLU of Wx plus b, you'd build a graph like the first sketch below. You would treat W and x as variables that get loaded into a multiplication operation, or op for short, the result of which gets loaded into an add op along with the variable b, and the result of that is loaded into another op, such as ReLU, to give us the answer y.

While graphs may not seem as intuitive to you as a Python developer, they do run really quickly, and using them can definitely speed up training and inference time. But they are difficult to code, and because operations such as multiply, add, and ReLU don't take place until the graph is fully designed, they can be difficult to debug.

It goes beyond just variables and ops. Consider, for example, control flow. Here we have a function with an "if" statement in it: if x is greater than zero, return x squared, otherwise return x. With Eager execution in Python, you write very familiar, very simple Pythonic code. But graphs don't support "if" conditionals, so you would have to write code using a tf.cond conditional instead, as in the second sketch below. You compare x against zero using tf.greater; the next parameter is the function that's called if the condition is true, and the parameter after that names the function that's called if it's false. Calling the true function returns x squared, and calling the false function returns x.

Eager mode lets you write more or less standard Python code with standard control flow syntax, but you lose some of the benefits that graphs give you. You might wonder, then, why would you write code like this instead of the easy and familiar Pythonic way of doing it? Graphs have explicit dependencies. By this, I mean that if you look at any node in the graph, you can find out which operations it depends on by tracing backwards through the graph. Knowing the dependencies for each operation lets you look for efficiencies: you can run some operations in parallel, for example, or distribute them across different machines. Explicit dependencies also allow for whole-program optimizations like kernel fusion when using GPUs, and something in TensorFlow called XLA. But all of that is beyond the scope of this course.
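To make that first example concrete, here is a minimal sketch of the ReLU graph written in the old graph style, using the tf.compat.v1 APIs that TensorFlow 2 still ships. The shapes and the initial values for W and b are made-up assumptions, purely for illustration.

    import tensorflow as tf

    # Graph building happens first; nothing runs until the session executes it.
    tf.compat.v1.disable_eager_execution()

    x = tf.compat.v1.placeholder(tf.float32, shape=[3])  # input vector
    W = tf.Variable(tf.ones([3, 3]))                     # weight matrix (illustrative values)
    b = tf.Variable(tf.zeros([3]))                       # bias vector

    # Each line below adds an op to the graph: multiply, then add, then ReLU.
    y = tf.nn.relu(tf.linalg.matvec(W, x) + b)

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [1.0, -2.0, 3.0]}))  # the graph executes only here

Notice that the lines defining y don't compute anything; they just wire ops together, which is exactly why bugs in them only surface later, at session run time.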
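And here is the second sketch, the control-flow example, shown both ways. The function names f_eager and f_graph are just mine for illustration.

    import tensorflow as tf

    # Eager style: ordinary Python control flow.
    def f_eager(x):
        if x > 0:
            return x * x
        return x

    # Graph style: tf.cond takes the condition, then a function to call
    # if it's true, then a function to call if it's false.
    def f_graph(x):
        return tf.cond(tf.greater(x, 0),
                       lambda: x * x,   # "if true": return x squared
                       lambda: x)       # "if false": return x unchanged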
Eager and graph mode might seem to conflict with each other, but you can benefit from both approaches. One workflow is to use Eager mode when you're developing and debugging a new model, and then switch to graph mode when you want to squeeze more performance out of it, or when you're ready to deploy to production and want the best possible model. Here's where AutoGraph really can help you. It's a technology that allows you to take your Eager-style Pythonic code and automatically turn it into graphs, and vice versa.
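As a quick sketch of what that looks like in practice: wrapping the Pythonic function from the earlier example in tf.function asks AutoGraph to translate its control flow into graph ops for you, and tf.autograph.to_code lets you peek at the graph-style code it generates.

    import tensorflow as tf

    @tf.function            # AutoGraph converts the Python "if" into graph ops
    def f(x):
        if x > 0:
            return x * x
        return x

    print(f(tf.constant(-3.0)))  # traced once, then executed as a graph

    # Inspect the graph-style code AutoGraph generated from the Python source.
    print(tf.autograph.to_code(f.python_function))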