Brief conversation about the verification process. I like this picture, because it really addresses the aspects of the test bench and the testing process, the verification process. So, here's your test bench, or benches; it's not uncommon to have multiple test benches, and all of the test cases that you're going to run, all the stimulus, and all the checks that you're going to do, and you're armed with your test plan.

So, on one side of this triangle is: how do you test it? That's what's called vectors, that's the stimulus, right here. I'm going to apply a whole bunch of stimulus to this device under test (DUT). Just stress it, make sure it's logically correct. So, this is taking care of the stimulus aspect. This side: does it work? Is it accurate? Is it verified? Does it do what I want it to do? This is what the checker does over there. It looks at the stimulus and it looks at what the device under test output, and compares them. Yes or no. All of those "no"s are bugs that you have to go fix, and this is a big piece of it as well. It's great to have a bunch of vectors, and a bunch of stimulus, and a bunch of test cases to test your design, and this is true for hardware and software. It's great to have checkers to just tell you if your device is responding appropriately. Is it logically correct? The third side: did you test it? This is where coverage comes in. Remember back to the slide where I said the test plan was written by the module designers, because they're intimately familiar with the design? Did every single line item in that test plan that the design engineer wrote get executed as a test, and did it pass? That's coverage, and coverage tells you how effective your vectors are. We used to have our regressions running nightly, then we'd run a much more extensive set over the weekend, and we'd come in and look at coverage reports to know how well our vectors were testing what we wanted to see tested. You can't just have the vectors and the checkers. This is essential; today's designs are so incredibly complicated, you have to have coverage checking in there.

So, the stimulus can come through directed testing. This means: I'm going to write a specific test case and a checker to test a specific function. I did architecture work when I was at Seagate, but once the architecture was done, I ended up going and writing directed tests to test parity and ECC errors in our chip designs. Those had to be done with directed tests because the test bench didn't understand what errors were. It's relatively easy to generate some stimulus for your DUT, give that same stimulus to the checker, and have the checker predict what the DUT is supposed to do. But when you stick an error in there, the results can be unpredictable, or very difficult to determine ahead of time; what is the hardware going to do? You can spend so much time trying to develop your checker to cover all of the error injection cases that you want to cover, that this becomes the long pole. It's faster, in this particular case, to write a directed test, and that's what I did. I wrote a whole bunch of tests that inserted ECC errors and parity errors around the chip, and we had to shut all the checkers off because they were all complaining: "Oh, the state machines are in an invalid state. Oh, the data pattern is incorrect." Turn them off. Turn off the checkers, turn off all these checkers.
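To make that concrete, here's a minimal sketch of a directed test in that spirit: deliberately corrupt a parity bit and check that the design flags it. The tiny parity-checker "DUT" and all the signal names here are invented for illustration; they're not from any real design.

```systemverilog
// Hypothetical DUT: registers an error flag on an even-parity mismatch.
module parity_checker (
  input  logic       clk,
  input  logic [7:0] data,
  input  logic       parity,   // should equal ^data for even parity
  output logic       err       // flags a parity mismatch
);
  always_ff @(posedge clk)
    err <= (^data) ^ parity;   // 1 when the parity bit doesn't match
endmodule

module directed_parity_test;
  logic       clk = 0;
  logic [7:0] data;
  logic       parity, err;

  always #5 clk = ~clk;

  parity_checker dut (.clk(clk), .data(data), .parity(parity), .err(err));

  initial begin
    // Known-good word first: the parity bit matches the data.
    data = 8'hA5; parity = ^data;
    @(posedge clk); #1;
    assert (!err) else $error("false parity error");

    // Now corrupt the parity bit on purpose -- the kind of stimulus a
    // generic checker can't predict, which is why this is a directed
    // test that carries its own explicit expected result.
    parity = ~(^data);
    @(posedge clk); #1;
    assert (err) else $error("injected parity error not detected");

    $display("directed parity test passed");
    $finish;
  end
endmodule
```

The point is that the expected result for the injected error is written directly into the test itself, instead of asking a general-purpose checker to predict what the hardware will do.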
I showed one box here as a checker, but inside there are actually many, many checkers that are monitoring things, not just at the periphery of the chip but inside the unit under test as well. So, directed testing can be used for very specific things that are difficult to write a checker for.

Where things are at today: so, years ago, this is how we used to test all of our chips, on our first graphics chip back in the day. Every single test was a directed test that we wrote, and we had a bunch of bugs. Fortunately, we got the chip back, we put it on the boards, and we gave it to the software guys, and they kept coming to us and saying, "Hey, your chip's doing this weird thing. Tell us about it." We'd go run a simulation to recreate that case and say, "Oh, I see what's going on. Okay, you need to set this bit and do this, this way." So, they were able to code around it, and we used to joke. You know the guy at the circus or the carnival, the knife thrower, who has the girl up against the wall like this? So, we joked; it was just the two of us, a good friend of mine and colleague, Tom Becklin, and I, and we did the very first graphics chips. We felt like we were that girl up against the wall, and the firmware engineers were coming at us with these bugs, throwing knives at us, and we'd tell them how to get around it. That was close, wasn't it? Then they'd come in with another bug, and they'd throw another knife, and we'd go [NOISE]. We missed a lot of things because we weren't able to conceive of all of the possibilities.

Constrained random verification is where things are at today, because it will test things you've never even dreamed of, things that are very difficult for you to come up with. Machines are very good at doing this kind of thing. So, in a constrained random verification environment, you have this random generator, and then you have coverage built into your test bench that tells you how well you did. How do you measure it? How do you know you've simulated long enough? We've gotten to this already, so I'm just going to say: you write assertions. Designers write coverage assertions into the code, and those coverage assertions are collected and can be reported, so you know how well your test bench did. You can run many thousands of simulation runs over the course of a weekend, and you'd come in on Monday and all the results would be tabulated together into one big report. It's really nice, it's really slick. We were using Synopsys products at the time, and it would generate this great HTML report. You'd get this email sitting in your inbox with a link, you'd go click the link, and there was a whole hierarchy of your chip and all the coverage assertions. They were reported in colors: green was completely covered, yellow was partially covered, and red wasn't covered at all. So, you'd come in, go to the modules that you're responsible for, and look at the coverage reports, and it didn't end until everything was green. That's how you knew you were done; you covered everything. When everything's green, all the coverage metrics say, "We've simulated everything." That's when you know your design is 100 percent verified. So, you sum all that up and it tells you when you're done. Before, with the directed tests, you never really knew when you were done. You'd just go, "Well, I think I tested everything, yeah, I feel pretty comfortable about it." But there was no way to prove it. You write coverage assertions.
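Here's a minimal sketch of what that looks like in SystemVerilog, assuming an invented bus transaction with made-up fields and coverage bins: a constraint block shapes what the random generator can produce, and a covergroup records which cases the random stimulus actually hit.

```systemverilog
// Hypothetical randomized transaction: the fields, address range, and
// write/read bias are all invented for illustration.
class bus_txn;
  rand bit [7:0] addr;
  rand bit [7:0] data;
  rand bit       write;

  // Constrain the generator to legal, interesting traffic:
  // stay in the mapped address range, and bias 3:1 toward writes.
  constraint legal_addr { addr inside {[8'h10:8'hEF]}; }
  constraint mostly_wr  { write dist {1 := 3, 0 := 1}; }
endclass

module crv_sketch;
  bus_txn txn = new();

  // Functional coverage: did the random stimulus actually hit the
  // cases we care about? This is what turns green in the report.
  covergroup txn_cg;
    cp_addr:  coverpoint txn.addr { bins low  = {[8'h10:8'h7F]};
                                    bins high = {[8'h80:8'hEF]}; }
    cp_write: coverpoint txn.write;
    rw_x_addr: cross cp_addr, cp_write;
  endgroup

  txn_cg cg = new();

  initial begin
    repeat (1000) begin
      assert (txn.randomize());  // the machine picks the stimulus
      cg.sample();               // record what actually got exercised
      // ...drive txn onto the DUT interface here...
    end
    $display("coverage: %0.1f%%", cg.get_coverage());
    $finish;
  end
endmodule
```

The constraints keep the machine inside legal behavior while it explores corners you'd never think to write by hand; the covergroup is what feeds those green, yellow, and red coverage reports.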
That's what they're called in SystemVerilog: assertions. You use those, and then you have hard data that tells you. I understand that there's a parallel in software testing as well, where code is instrumented and gives you this very detailed insight into how well the test bench tested your software. There's a short sketch of designer-written coverage assertions at the end of this section.

So, on the ASIC/FPGA verification side, Cadence had developed a verification methodology called the Open Verification Methodology (OVM). Meanwhile, Synopsys developed the Verification Methodology Manual, which is what VMM stands for; two different ways of doing constrained random. Thank goodness, they finally merged into the Universal Verification Methodology, known as UVM. There is a lot of money in verification. It's very hard to find good verification people in the State of Colorado. When I was at Seagate, we had to go out of state to find verification people. Any verification engineer who came up was highly sought after and could get bids from multiple companies in the area, over in Fort Collins, Longmont, Boulder, and down into Denver; they get gobbled up. The designs have gotten so incredibly complicated that this verification step matters enormously, especially in the case of an application-specific integrated circuit. When you're going to pull the lever and have masks made for your chip, and it's $10 million, $15 million, $20 million for the masks, you had better be certain your design is good to go. Because if there's a bug in it that you missed, that's catastrophic; you can't ship your product, it's a show stopper. You need to modify your design and modify one or more of those masks. Sometimes you can get away with just modifying the metal layers, which are the last layers in the fabrication process of a chip. But if you have to go back and change the base wafer, that's another $10 or $15 million you're spending. So, verification people are highly sought after, with good pay and interesting work. You can go out to this link here; Accellera owns the standards, and you can go read about it. We're not going to take the time to do that, and then there's a link here that has a little bit of the history of OVM, UVM, and VMM, and how they got where they are today.
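As promised, here's a minimal sketch of those designer-written coverage assertions: SystemVerilog cover properties embedded next to the RTL, with a checker assertion alongside for contrast. The FIFO signal names are hypothetical.

```systemverilog
// Hypothetical FIFO interface; only the coverage and checking logic
// is shown, not the FIFO itself.
module fifo_cover_sketch (input logic clk, rst_n, push, pop, full, empty);

  // Was the FIFO ever filled completely? Shows up green in the
  // coverage report once some test has driven it full.
  cover_full: cover property (@(posedge clk) disable iff (!rst_n)
                              full);

  // Did we ever push and pop in the same cycle? A classic corner
  // case that directed tests tend to miss.
  cover_push_pop: cover property (@(posedge clk) disable iff (!rst_n)
                                  push && pop);

  // A checker assertion alongside the coverage: pushing while full
  // is illegal, and would show up as a failure, not a coverage gap.
  no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
                                !(push && full));
endmodule
```

Each cover property becomes one line item in that green/yellow/red report: green once some test has exercised it, red if nothing ever did.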