Semiconductor Engineering sat down to discuss the state of verification with Jean-Marie Brunet, senior director of marketing for emulation at Mentor, a Siemens Business; Frank Schirrmeister, senior group director for product management at Cadence; Dave Kelf, vice president of marketing at OneSpin Solutions; Adnan Hamid, CEO of Breker Verification Systems; and Sundari Mitra, CEO of NetSpeed Systems. What follows are excerpts of that conversation.
SE: There has been some discussion about a continuum of verification engines being a way for large EDA vendors to lock out competitors. How real is this?
Schirrmeister: There is a big difference between a single-vendor flow and continuous use of engines. The continuum is real. People are using simulation for IP because of the sheer amount of configuration. They are generating thousands of scenarios, and they have so many tasks to run that they move to emulation. And you do certain things only in formal, and some people are saying formal is enough for certain applications. The chip is part of this, too. You have on-chip instrumentation, and you need to get data out of it.
Mitra: We are tool-agnostic. None of our customers question whether we use Cadence, Synopsys or Mentor for functional verification. For timing, it’s different. They question whether we’ve vetted a sign-off tool. But for verification, that’s great because it allows us to get by with a few licenses. As a company, if we had to move from company A to company B, it would be a change, but it’s not that dramatic. As for the different pieces, each one does add value, and it’s difficult to even draw a line, because each one has to be set up correctly to give you what you want. But you also need a certain amount of functionality in there. You’ve got to be able to work at the modular level and build everything up to the point where you can bring in formal techniques and move those things forward. So now we are at a point where we can leverage all of that. But when you’re starting off on a design and moving things around, I could not imagine doing that without all of the tools.
Schirrmeister: And I agree you don’t necessarily want to lock into a single vendor. People want to use different engines. So it comes down to how you exchange data.
Mitra: Yes, and formal is a huge piece of getting this right. But it’s a piece of the whole picture.
Hamid: We love to talk about verification engines, because that’s what we have built as an industry. But really, these verification engines exist to run tests. Does my implementation implement my intent? In Portable Stimulus, we are coming up with graph-based models that are geared toward making sure your IPs work. How do you put them together into a system environment? How do you get multiple sub-chips to work together? We are talking about integration of these engines, and we need to recognize that each of these engines does different things well. But even more than that, the people using these engines come from different backgrounds. The starkest difference is between someone working in a UVM environment and someone in a post-silicon lab. These guys do not come from the same background. The portable stimulus approach says you can capture the right intent once, and this is super important. Down at the UVM level, there’s some guy who became an expert on coherency testing or USB testing. By the time the design gets to the post-silicon world, the expertise is not coherency or USB. Portable Stimulus allows you to transfer that knowledge up to the system level. So hopefully we see more integration of engines, but Portable Stimulus will pick up the slack to generate the right kind of test for the right kind of person, and make the tests look the way that person expects them to look.
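[Editor’s Note: As a rough illustration of the graph-based idea Hamid describes, here is a minimal sketch in Python, not in the Portable Stimulus language itself: one declarative model of test intent is walked to produce a legal scenario, and the same scenario is then rendered for two very different consumers. The action names and render targets are hypothetical, not any tool’s API.]

```python
import random

# One model of test intent: each action names the actions that may legally follow it.
# These action names are illustrative assumptions, not part of any standard.
SCENARIO_GRAPH = {
    "power_on":    ["init_fabric"],
    "init_fabric": ["dma_write", "cpu_write"],
    "dma_write":   ["cache_read"],
    "cpu_write":   ["cache_read"],
    "cache_read":  [],  # terminal action: check the coherency result
}

def random_walk(graph, start="power_on", seed=None):
    """Walk the graph from start, picking one legal successor at each step."""
    rng = random.Random(seed)
    path, node = [start], start
    while graph[node]:
        node = rng.choice(graph[node])
        path.append(node)
    return path

def render_uvm(path):
    """Render the walk as UVM-style sequence starts for block-level simulation."""
    return [f"seq_{action}.start(sequencer)" for action in path]

def render_post_silicon(path):
    """Render the same walk as bare-metal C calls for a post-silicon test."""
    return [f"{action}();" for action in path]

if __name__ == "__main__":
    walk = random_walk(SCENARIO_GRAPH, seed=42)
    print("scenario:    ", " -> ".join(walk))
    print("UVM view:    ", render_uvm(walk))
    print("post-Si view:", render_post_silicon(walk))
```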
Brunet: The issue for customers is not the engine. It’s the data. If someone runs 20% with this engine and someone else runs some other amount, it doesn’t matter. It’s all about how you manipulate the data, and that can be problematic.
Schirrmeister: And this comes back to how complex a system you can really manage. UVM ends at the small subsystem level. It works for IP. At the chip level, you’re done. That’s where Portable Stimulus comes in, but you can’t over-pitch Portable Stimulus, either. It’s not an end-all solution. It’s a way to smartly generate test benches. You still need a set of fast engines to execute it. Otherwise you can’t run it at the gate counts you have. Our customers do multi-chip emulation, so you have to go beyond the chip level. Portable Stimulus brings you well into the notion of how you integrate all of your IP blocks and what the sequence is for waking these things up. It complements the IP blocks, which already have been verified. You also need to connect your design to the interface peripherals and emulate those peripherals. You can model them virtually, or in the context of software development, because that’s one of the tasks they serve. At some point you need all of the physics involved. And then you bring in multiple chips, which is where the capacity demand goes beyond the SoC.
SE: There has been a lot of talk for years about moving some of this to the cloud, because maybe you can’t afford your own emulator—or enough emulators for your gigantic design. But that will require the ability to move data back and forth seamlessly. Is this happening today?
Schirrmeister: Absolutely. The key issue is, like you said, when you move data in between. We have figured out how to move it as a DUT (device under test). For IP, it’s relatively straightforward. It’s synthesizable, and it works across the engines. We have different coding styles, and there are simulators that interpret the LRM (language reference manual) quite differently for certain constructs, but we all can figure that out and users can work around it. What’s hard is the reuse of the verification environment. Having C that could run on all of the processors was the original idea. Today, if you have UVM for all of the IP verification and you now want to use that at the SoC level, it’s difficult because none of this stuff is synthesizable. SystemVerilog assertions work great in simulation, but in emulation they’re not synthesizable. Moving the verification environment is a huge issue, and that’s where the connection comes in. Our engines on the hardware side connect to all simulators. Moving the verification environment is where we still have work to do.
Brunet: We’re seeing very similar trends. It’s not a change of technology that’s the problem for cloud-based approaches, though. It’s a challenge for the sales cycle. You move from a $10 million one-time deal to 10 million units of $1 over time. Certain environments are tailored to this, while others are not. [Editor’s Note: Numbers here are for explanation purposes only.]
SE: But you’re setting up the ability for any company to tap into this, right?
Brunet: Yes. And it’s happening already. The other impact is that it’s not really relevant that one engine is faster than another because you can always add more cores. But the business transaction is very different.
Kelf: The cloud is a great approach. We had a cloud solution early on, and it didn’t go anywhere. There were two issues we saw. One was the legal issues around IP. Companies don’t want to send their IP to a cloud. Even if the engineers are willing, they don’t want to go talk to the company’s lawyers to sell them on it. However, companies have their own clouds now, which makes this much easier. You can put emulators in huge rooms with fans, and that solves that problem. The big problem is the business model. Cloud provides a pay-per-use model, which is very effective for verification. You can get some core verification-based licenses, and then use pay-per-use for the bulge when you need extra runs. We’ve employed that quite successfully recently. On the big data side, if you’re dealing with engines running very quickly and trying to speed them up, or if you’re dealing with a massive amount of data coming out of those engines, it’s still the same issue. You have to find smart ways of dealing with verification collaboratively between the customers and the folks building platforms. If you look at Portable Stimulus or formal, you can think of these as smart verification platforms that can be configured to solve problems, whether they’re working with post-processed data or processing with the engines directly. It doesn’t matter.