Hi, I'm Jordi Arnal, professor at the Master in Video Game Creation at the Autonomous University of Barcelona, and I'm here to explain the technology part of the course "Video games: what are we talking about?" This module covers: graphics APIs in video games, IDEs for video game development, physics libraries, audio libraries, other components used in video games, what we are going to do in this technology course, the video game loop, and the concepts we are going to learn. Graphics APIs in video games are the APIs that allow us to render graphical elements inside our video game. In the industry we basically talk about two main APIs. The first is Microsoft's DirectX, which is tied to Microsoft platforms such as Windows and Xbox. The second is OpenGL, an open, cross-platform API that can be implemented on any platform. Finally, there are the Nintendo- and Sony-specific graphics APIs, which are based on OpenGL and specific to their platforms. One kind of component used in video games is the physics library. These libraries aren't usually written in-house; instead we use well-known libraries that are proven to work correctly in video game development. The two main physics libraries on the market today are NVIDIA PhysX and Havok Physics. We can find both of them in many of the commercial video games on the market; they can be integrated easily and are easy to operate. There is also the Bullet physics library, which lets us implement physics in our video game without much effort. Other libraries that aren't usually written in-house are audio libraries. Again, as with physics, there are two predominant libraries on the market: Wwise, currently very widespread among commercial video games, and FMOD. Their methodology and working model are very similar, so we can rely on either of them in the video game field.
Another library that has seen use especially in recent years, and that is very easy to install and use inside an engine, is OpenAL. Other components used in video games are the libraries we will now mention: Lua, SpeedTree, Autodesk Scaleform, AntTweakBar and Cal3D. Lua is a scripting library integrated into many video games that lets us implement the game's gameplay easily through script files. The second, SpeedTree, is a library for rendering trees inside a video game and has become very widespread in recent years. Autodesk Scaleform is a graphical user interface library that lets us render information at the user level, that is, information such as the player's health, buttons, menus, and so on. The next library is AntTweakBar, which lets us tweak values in our video game easily through a very approachable graphical interface; it is very easy to integrate and very convenient for adjusting values. Finally, Cal3D is a library that allows us to implement skeletal animation in our video game. What will we do during the technology course? In the technology course we will implement a small engine programmed in C++ on DirectX 11 in a 64-bit setting. The final result will look like an FPS, a first-person shooter with elements from Quake, so we will end up with something similar to the image shown. Next we will talk about the game loop. A game loop has three main elements. The initialization part is where we load geometry, textures, sound, and so on, and get the game scene ready. In the update part we start by calculating the time elapsed since the last frame, often called the delta time, and we update each element of our video game as a function of that delta time. This means we modify the player's position according to the delta time, as well as the enemy projectiles, the characters' animations, and so on.
Finally, the third block is the render method, which is responsible for drawing the player, the enemies, the scene, the items and the post-processing effects. That is the classical loop of a video game. However, thanks to the power of current computers, the update and the render can each be divided into many updates and renders and distributed so that they execute on different processors. Finally, I will talk about cloud computing. Cloud computing consists of distributing the update method across different machines on a network. This gives us far more computing power, since we use several machines to execute our update method, so the scene can be much richer. Which concepts are we going to learn in the technology course? The first thing we will learn is how to create a 64-bit application on the DirectX 11 graphics API. After that, we will learn how to read *.xml files, which will allow us to modify the game's behavior through files external to the game's code. Then we will draw debug information such as axes, grids, cubes and spheres. Next, we will read the input state, which will let us change the behavior of the characters or the game itself depending on the actions the player performs with the keyboard or the mouse. And lastly, we will build two camera controllers: one for an FPS, or first-person game, and a spherical camera controller that lets us orbit around an object. During the second session of the course we will learn to export geometry from 3D Studio Max into our game engine through *.ace format files. Inside the game engine we will implement the importer and the rendering of static meshes. In order to draw those static meshes we will also need a texture component that loads the textures used by the game's meshes.
Lastly, we will learn to implement, or rather integrate, a physics library such as NVIDIA PhysX, which will let us control the physics throughout our video game. This will help us create a character controller so we can move through the game and collide with its static elements without breaking the game's magic. This library will also allow us to run collision tests, whether with raycasts, capsule or sphere sweeps, and so on. During the third session of the technology course we will integrate a skeletal animation library such as Cal3D into our game engine. Then we will also implement artificial intelligence through a state machine, which will let us shape our enemies' behavior inside the game. Finally, in the last lesson of the course, we will implement an audio engine such as OpenAL, which will let us play sound effects and music inside our video game. We will learn the concept of billboards, which allow us to build particle and effect systems in our game. And lastly, we will learn the concept of the HUD, the graphical user interface, which lets us show user information in the game, such as lives, our character's bullets, and so on. To sum up, in this module we have seen the different graphics APIs used in a video game, the most used development environments in the video game industry, the libraries we use in an engine, and the other components we also use and that we will cover in the technology course we are about to start. Here you have the different references used in this topic's slides. Thank you; we hope you liked it.