In this unit, we are going to complete the description and the specification of the Hack machine language. Once again, the overall context is that we have a hardware platform consisting of an instruction memory, a CPU, and a data memory, and we have a machine language consisting of A-instructions and C-instructions, which we described in the previous unit. A Hack program is a sequence of such instructions batched together, and we execute them one at a time. That's the overall picture.

Now, as it turns out, and this is true for every machine language out there, you can write programs in machine language in two different flavors, or two different languages, if you will. You can write them symbolically, using mnemonics and friendly symbols, and that's what we see on the left-hand side at the bottom of the slide. Or you can write them using agreed-upon binary codes. If you write programs symbolically, you need someone to translate these programs from symbols to binary code. Once the program is specified in binary code, you can take this code, load it into the computer, and actually execute it. Now, we're going to spend a whole week talking about this translation and about a very special program called an assembler, so I'm not going to spend too much time discussing the translation process here. I just want you to know that it's a challenge that has to be met somewhere when you build this computer.

So here is the symbolic and binary syntax of the A-instruction. The symbolic syntax is something that we've seen before: @ followed by a value. This value can be either a number, which is at most 2 to the power of 15 minus 1 (you may be wondering where this number comes from, and you'll see in just a minute), or a symbol which refers to such a number; we're going to defer the discussion of symbols to a later unit. For example, @21. Here is the same instruction in its binary flavor. We begin with the special code 0, which tells the computer that this is an A-instruction, and then we specify the same value that we had in the symbolic instruction, but in binary. So altogether we get something like the example here. Once again, the first 0 is called an opcode, an operation code, and then come 15 bits that represent the value that we want to load into the A register. And indeed, 10101 is 21 in binary.

What about the C-instruction? As you recall, the symbolic definition of the C-instruction is very user friendly: we have a computation, which we can store in a certain destination, and we have an optional jump directive. That's the great benefit of a symbolic expression. If we want to express it in binary, then we have to decide on some agreed-upon codes, and Norm and I have already done that when we designed this language. So here is the 16-bit specification of the same symbolic instruction. The first bit is the opcode; it tells the computer that this is a C-instruction. If you recall, we have only two types of instructions, A-instructions and C-instructions, and that's why we need only one bit to represent the opcode, which is either 0 or 1. An opcode of 1 means this is a C-instruction. Then we have two bits which we don't use; we don't need them, and by convention we set them to 1. The next seven bits, taken together, specify the computation that we want to carry out.
These are the control bits that will be sent later on to the ALU and will tell the ALU which computation it has to carry out. The next three bits represent the destination, and finally, the last three bits represent the jump condition that we called, symbolically, jump. So these are the different fields of the instruction in its binary flavor.

Let us now discuss the mapping between the symbolic expression of the C-instruction and the binary expression of the same instruction, beginning with the comp field. Here is the table that relates the symbolic computation mnemonics to their binary equivalents. On the left-hand side we see the symbols, and on the right-hand side we see the binary codes; we also have the a bit, which you can see at the bottom of the table. So, for example, suppose that the computation is D+1; symbolically, we want to cause the computer to compute D+1. We look up the table, we find D+1 somewhere in the middle, and we see that D+1 is listed in the column where a equals 0, so we know that the a bit should be 0. Then we look up the rest of the row and see that the c bits should be 011111. So that's it: that's how we represent the operation D+1 in binary; the complete comp field is the a bit followed by the c bits, 0011111. Just look up the table and you have it. That's how you map from symbols to binary codes.

Moving along, let's talk about the destination field; it's a very similar idea. We have a mapping that gives the symbolic mnemonics on the left-hand side and, in the next column, their binary code equivalents, which very conveniently range from 000 to 111, because we have eight different possible destinations. So, once again, if someone gives you a particular destination, like MD, you look it up in the table and immediately see which binary code it maps to, in this example 011. That's how you can translate from symbols to binary code when you have to.

Finally, let us focus on the jump field, which is almost the same as the destination; we have exactly the same concept. We have the mnemonics on the left-hand side and eight different possible binary combinations in the next column. Once again, conveniently enough, we have eight different jump conditions, and therefore their binary equivalents range from 000 to 111. So that's it; this basically sums up the mapping between the symbols and the binary codes. If we want to put it all together, we can do it in one slide: this is the complete specification of the C-instruction in all its glory, both in the symbolic rendition and in the binary rendition. And if you had to write a computer program to translate from one language to the other, you can begin to see how you could use this logic to write that program. By the way, this is something that we are going to do in the last week of this course.
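To make the mapping concrete, here is a minimal Python sketch of the lookup-and-encode logic just described. It is an illustration, not the course's reference assembler: the dest and jump codes and the extra comp entries follow the standard Hack specification, and only the values worked out above (D+1 with a = 0 and c bits 011111, MD as 011) are taken directly from this unit.

COMP = {                 # a bit followed by the six c bits (partial table)
    "0":   "0101010",
    "1":   "0111111",
    "D":   "0001100",
    "A":   "0110000",
    "D+1": "0011111",    # the entry worked out above: a = 0, c = 011111
    "A+1": "0110111",
    "D+A": "0000010",
    "M":   "1110000",    # a = 1 selects M instead of A
}

DEST = {                 # eight destinations, 000 through 111
    "null": "000", "M": "001", "D": "010", "MD": "011",
    "A":    "100", "AM": "101", "AD": "110", "AMD": "111",
}

JUMP = {                 # eight jump conditions, 000 through 111
    "null": "000", "JGT": "001", "JEQ": "010", "JGE": "011",
    "JLT":  "100", "JNE": "101", "JLE": "110", "JMP": "111",
}

def encode_a(value):
    # A-instruction: opcode 0 followed by the value in 15 bits.
    return "0" + format(value, "015b")

def encode_c(dest, comp, jump):
    # C-instruction: opcode 1, two unused bits set to 1, then comp, dest, jump.
    return "111" + COMP[comp] + DEST[dest] + JUMP[jump]

print(encode_a(21))                    # 0000000000010101
print(encode_c("MD", "D+1", "null"))   # 1110011111011000

A full translator would simply carry the complete comp table from the slide; the structure of the lookup does not change.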
So, now that we understand what the specific instructions look like, both in binary and in symbolic form, let's move on and talk about the overall concept of a Hack program. Here's an example of a Hack program; at this level of the course you don't have to understand it. We're just giving you a first look at what a program looks like, and we can make some quick observations. First of all, a Hack program is a sequence of Hack instructions, and this program is written using symbolic instructions. White space is permitted: you can throw in empty lines wherever you want if you think it improves the readability of the program, and comments are welcome and can be used at will. Finally, I'd like to say that this is not a great way to write Hack code; there are better ways to write code, with fewer numbers and more symbols, and this is something that we'll do later on in the course. For now, I just want you to see an example of a typical Hack program.

If we want to run this program on the computer, we first of all must translate it into binary code, so we need either a human assembler or a computer program that translates from one language to the other (see the sketch at the end of this unit). Once the program is expressed in binary code, we can actually load it into the computer and execute it, and the program will hopefully do something useful. If not, we go back, debug the program, recompile or reassemble it, run it again, and so on, until we're satisfied with it.

So this has been the last unit in which we talked about the Hack machine language. In the next unit, we're going to talk about how we can use this language to control the input and output devices which are connected to the Hack computer.
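To close, here is one more illustrative Python sketch, this time of the translate-and-run workflow described above; again, it is not the course's reference assembler. It skips white space and comments and translates only symbol-less A-instructions, which is all this unit has covered; C-instructions would be encoded with the comp, dest, and jump tables shown earlier, and symbols are handled in the later unit on the assembler.

def assemble(lines):
    # Translate symbolic Hack code into 16-bit binary words.
    words = []
    for line in lines:
        line = line.split("//")[0].strip()   # comments and white space are ignored
        if not line:
            continue                          # empty lines are permitted
        if line.startswith("@"):              # A-instruction: @value
            words.append("0" + format(int(line[1:]), "015b"))
        else:                                 # C-instruction: dest=comp;jump
            raise NotImplementedError("use the comp/dest/jump tables shown earlier")
    return words

program = [
    "// load the constant 21 into the A register",
    "",
    "@21",
]
print("\n".join(assemble(program)))   # prints 0000000000010101

Whatever such a loop produces is the binary code that gets loaded into the instruction memory and executed.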