
The Next Incarnation Of EDA


The EDA industry has incrementally addressed issues as they arise in the design of electronic systems, but is it about to be disrupted? Academia certainly sees that as a possibility, although not everyone expects the disruption to come from the same place.

The academic community questioned the future of EDA at the recent Design Automation Conference. Rather than EDA as we know it going away, they contend that a new era is about to start. Three panels tackled it in completely different ways. One asked “What are the big opportunities in the next renaissance of EDA?” A second was entitled, “What is the Future for Open-Source EDA?” And the third, “Machine Learning for Electronic Design Automation: Irrational Exuberance or the Dawn of a Golden Age.”

The EDA industry has witnessed a number of significant changes, not all of which have been commercially successful. Twenty years ago, the EDA industry was looking for a new level of abstraction above RTL, which was dubbed the Electronic System Level (ESL). While pieces of that effort are now part of the industry tool portfolio, such as high-level synthesis and virtual prototyping, and languages such as SystemC exist, the general effort did not lead to a new abstraction. Today, ESL remains a niche technology.

Why? One explanation is that ESL was too broad and too general-purpose, and the gap between ESL and RTL was too large. Making synthesis possible requires that restrictions be placed on designs or languages. Processor design, for example, is one of the few areas in which dedicated languages do exist, and more have been created recently with the introduction of RISC-V.

Because the extensible RISC-V processor specification is open source, it has enabled more research into processor architecture. Languages that existed in the past, such as SysML, are being dusted off, while new languages such as Chisel are being created. And processor synthesis tools are being brought to market, along with verification methodologies and reference models.
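
As a rough illustration of why dedicated processor languages are attractive, the Python sketch below captures a toy instruction set declaratively and generates a simulator from that single description. The ISA and every name in it are hypothetical; real processor DSLs generate RTL, assemblers, and verification collateral from one source in much the same way.

```python
# Toy declarative ISA description (entirely hypothetical).
ISA = {
    "ADD": {"opcode": 0x0, "semantics": lambda a, b: (a + b) & 0xFF},
    "SUB": {"opcode": 0x1, "semantics": lambda a, b: (a - b) & 0xFF},
    "AND": {"opcode": 0x2, "semantics": lambda a, b: a & b},
}

def make_simulator(isa):
    # One artifact generated from the description. A real flow would also
    # emit RTL, an assembler, and reference models from the same source.
    by_opcode = {v["opcode"]: v["semantics"] for v in isa.values()}
    def execute(opcode, a, b):
        return by_opcode[opcode](a, b)
    return execute

execute = make_simulator(ISA)
print(execute(0x0, 200, 100))   # ADD with 8-bit wraparound -> 44
```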

In a larger context, the shift toward domain-specific solutions is creating the opportunity for many highly specialized abstractions, each of which can be dedicated to a single domain. This, in turn, is fostering a revitalization of research.

So will the future of EDA look the same as it does today?

The role of EDA
To put this in perspective, EDA provides three primary services: productivity, optimization, and assurance. Even though designs keep getting larger and more complex, team sizes and schedules have remained relatively fixed, meaning productivity must continually increase. Finding the right solution that balances cost, performance, and power is a huge optimization problem. And as geometries shrink, ensuring that a design will work once it is manufactured becomes harder, involving an ever-growing number of physical factors.

As Moore’s Law slows down, the industry is looking at several directions for future expansion. Some of these involve a change in architecture, while others are looking at new packaging technologies, in addition to new materials and fabrication techniques. Jayanthi Pallinti, director of the ASIC product division at Broadcom, provided some insights about the challenges designers are facing (see figure 1). “At 16nm we had about 6,000 design rules. Now in 3nm, that has grown to over 15,000. Even with all the innovations that EDA has done — and those are helping — it is still challenging.”

Pallinti contends that EDA has to become more hierarchical to keep up, and that the entire system has to be co-designed rather than dealt with in a sequential manner.

Fig. 1: Design complexity and EDA. Source: Broadcom

Models are a vital aspect of EDA tools and flows, and they exist at many levels of abstraction. “The challenge is creating models with the right accuracy, speed, and robustness,” said Prith Banerjee, CTO at Ansys. “We have to solve the problem of multi-level simulation. By that I mean starting from second-order partial differential equations and producing reduced-order models that feed system-level models. We need to go seamlessly from system-level simulation, and when I need a little more accuracy, I click to go to the next level. People are talking about hierarchical simulation, but I’m talking about across electro-mechanical systems.”
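
As a rough sketch of the reduced-order modeling step Banerjee describes, the Python fragment below projects a large linear system onto a basis extracted from simulation snapshots (proper orthogonal decomposition with Galerkin projection). Every matrix here is a random stand-in for real simulator output, so this shows the mechanics rather than any particular vendor's method.

```python
import numpy as np

# Minimal sketch of projection-based model-order reduction.
# All data below are random stand-ins for detailed-simulation output.

rng = np.random.default_rng(0)
n, r = 1000, 10                        # full-order and reduced-order sizes

A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # full-order operator
B = rng.standard_normal((n, 1))                      # input map

# Snapshot matrix: columns would normally be states sampled while
# simulating the detailed model dx/dt = A x + B u.
snapshots = rng.standard_normal((n, 200))

# POD basis: the leading r left singular vectors of the snapshots.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]

# Galerkin projection yields a small system approximating the original:
#   dx_r/dt = (V^T A V) x_r + (V^T B) u,  with x ~ V x_r
A_r = V.T @ A @ V
B_r = V.T @ B

print(A_r.shape, B_r.shape)            # (10, 10) (10, 1)
```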

Without models, optimization is not possible. “You need models to have predictions. You need predictions to have leverage in exploration,” said Andrew Kahng, distinguished professor of CSE and ECE at UC San Diego. “What you can’t predict, you guard-band, and what you don’t explore you leave on the table.”

Where those models come from could be changing, however. “The next challenge for EDA is creating a complete digital twin of the design flow,” said Jan Rabaey, distinguished professor at UC Berkeley and CTO at imec. “Rather than doing simulations of new devices, we should be generating models from actual prototypes. We need to have the capability of scaling it up in the virtual world, and then translating the prototypes in the physical world — a kind of joint development of both.”

Simulation always has been a problem for the industry. “Verification is horrendous,” added Rabaey. “The amount of effort is crazy. Raising the abstraction level for functional verification, then ensuring correctness by design, is important. The second one is freedom from choice. We basically are using too much flexibility in design. We may think it is an advantage, but we give ourselves a nightmare.”

Tim Green, director of innovative research at SRC, pointed out that functional verification is only the tip of the iceberg. “Verification is already challenging enough. But verification is really a simple problem in the context of security, because verification is making sure your design is in compliance with your spec. Security is to verify that your design, beyond the spec, doesn’t do anything funny, which is an unknown space.”
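
A toy Python example makes Green's distinction concrete. The hypothetical ALU below passes every directed test written against its spec, yet still does "something funny" on an input pattern the spec never mentions, which is exactly the unknown space he describes.

```python
# Hypothetical 32-bit adder with a hidden trigger (a classic hardware-Trojan
# pattern). Spec-based verification never exercises the trigger.

SECRET = 0xDEADBEEF

def alu_add(a, b):
    if (a, b) == (0x1337, 0x1337):        # behavior outside the spec's view
        return SECRET                      # leaks a secret instead of a sum
    return (a + b) & 0xFFFFFFFF

# Verification: directed tests of the documented behavior all pass...
assert alu_add(1, 2) == 3
assert alu_add(0xFFFFFFFF, 1) == 0         # wraparound per spec

# ...yet the design still misbehaves in the unknown space.
print(hex(alu_add(0x1337, 0x1337)))        # 0xdeadbeef
```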

Abstraction creates a different opportunity. “There are a lot of customers that have application-specific or domain-specific problems, such as automotive and IoT, where the general-purpose solutions do not match,” said Mamta Bansal, senior director of engineering at Qualcomm. “The majority of EDA vendors are focused on something that goes into volume. Open source can take some domain-specific problems and solve those.”

Noel Menezes, director of strategic CAD labs at Intel, agreed. “I see reason for optimism, such as why certain domain-specific languages may become very successful in specifying hardware. Domain-specific languages, maybe special abstractions, may be the right disruptor for open-source IP/EDA to flourish. The best chances for success are in segments where commercial EDA incentives do not align.”

That also may apply to older technologies. “Opportunities for open-source tools get to be very interesting on some of the trailing nodes,” said Bill Leszinske, operating partner at Cambium Capital. “This is where a lot of the costs have been amortized. That means there can be more innovation.”

With so many potential directions, it may become difficult for EDA to keep up. “There are some important challenges that are arising that basically might hinder or slow down the introduction of new technologies and capabilities,” said Rabaey. “These are beyond things just getting super complex. Going toward 2030, we should be at 1 nanometer. On top of that, designs are becoming extremely heterogeneous. You will see a merger of memory and logic, analog RF, sensors, all those types of things coming together in a single package. Some of those might require very different technologies, different materials, or optical. The computational model of von Neumann is gradually fading, and there will be many alternatives. And you will see the emergence, again, of analog computing, and computing using physical phenomena.”

Could the amount of effort required to solve all of these problems be the catalyst for change? “This nicely segmented market is breaking down as more integration is needed to meet performance goals,” said SRC’s Green. “The current EDA design flow will not deliver the performance that’s needed. We need to define the critical applications that will drive the critical technologies that will drive the design workflows that are needed to achieve the efficiency, the performance, the security that these applications will need.”

Another catalyst is the changing geopolitical environment. “In the past few years, many countries and regions have started to consider semiconductors to be a critical element for the national economy, and even national security,” said Tim Cheng, vice president for R&D at Hong Kong University of Science and Technology. “We have never seen such investment from everywhere in the world. This is great news for talent and competition.”

Along with that, Cheng looked at how that funding could impact EDA. “Funding EDA is no longer just advancing the state of the art. If you need to have control, you need ownership. If you worry about national security, you need to own it. And governments are willing to support you. Those people will not steal technology they know is highly sensitive, but they will need people who have the knowledge to build their tools so that they can have control. This will change the landscape and ecosystem of semiconductors, IC design, and EDA, and could break up the era of the large global EDA companies.”

Open infrastructure
One of the problems for academia is that it fundamentally has to produce papers. These focus on algorithms and point tools, which often cannot exist in standalone form. “Open-source software in EDA encourages academia to work on real EDA problems,” said Chuck Alpert, senior software group director at Cadence. “It’s more realistic. The existence of the OpenROAD EDA flow means the research they do can be more realistic because they’re not working on fake problems. They are working on real concepts, and that’s a really good thing.”

There has to be a virtuous cycle for open source to be successful. “If you don’t have a virtuous cycle, and if you don’t have a developer community or a large user community, you need incentive support,” said Intel’s Menezes. “You need incentives for development and incentives for users. OpenROAD is a very successful open-source effort at this point, but my fear is that if you don’t have incentives to continue with these open-sourcing efforts, now that the IDEA program is reaching the end, there will be a problem.”

Successful open source requires collaboration. “OpenROAD is an industrial academic partnership,” said Andreas Olofsson, CEO at Zero ASIC. “There are students who do the research and write the papers, but they don’t really like to do the software engineering, because that’s not their goal at that point in their life, and probably not in the future. Then you have the industrial people who can integrate that. There has to be something to incentivize the training programs to show people how to write the code well.”

But collaboration can be challenging. “Right now, we are all in silos,” said Qualcomm’s Bansal. “I’m in Qualcomm, and I’m in a silo. Every foundry, every supplier, every vendor is in a silo, and so there is no easy way to contribute to the community. Collaborations that have happened are based on funding. Even providing a test case to OpenROAD has been a challenge. We do not know how to protect our IP.”

Can that change? “If there is to be an EDA 2.0, we as a community need to come together,” said Ruchir Puri, fellow and chief scientist at IBM Research. “Whether it is chip design houses or the EDA industry, we cannot continue to behave the same way, with the attitude of not sharing data. If we cannot get ourselves together to collaborate across the silos, we will not be able to make progress on it. That is a given.”

“No one company, or one university, or one group of people, is going to solve all the problems and have the best solution,” said Cambium’s Leszinske. “We think an environment where a lot of people can innovate and experiment will create a lot of opportunity. We do see open-source projects as a key catalyst to that. Enabling lowered costs of development, enabling lower costs of tape out, allows more innovative ideas to come to market, and that creates an overall bigger industry for all of us.”

Mark Glasser, member of technical staff at Cerebras, noted that open source doesn’t always need to be funded to be sustainable. “Something that is often overlooked in the EDA industry is that open-source programs and tools can be used to drive sales of other revenue-producing tools. My favorite example is UVM. It’s an open-source verification tool, and it has driven sales of all sorts of things — debuggers, analyzers, context-sensitive editors, and all kinds of things that go around it.”

Machine learning
Machine learning is one area in which there does not seem to be full agreement. “There is a very deep understanding that we have gained, and when you know the structure of the problem, you should exploit it,” said Alberto Sangiovanni-Vincentelli, chair of EE and CS at UC Berkeley. “The structure of the problem implies that you understand the physical data behind the particular problem you’re trying to solve. However, if you have not found the deep roots of the problem, the mathematical roots of the physical problem, then you need to approximate it, because you want to solve this problem and you don’t have the tools. Then you try something which is generic. AI and ML are generic techniques, so they are intrinsically limited. ML uses statistical models to analyze and draw inferences from patterns in data.”

Part of the problem is the rate of change of the underlying physics on which EDA is based. “When you look at technology evolution, there is this inability to foresee the future of physics, materials, and devices,” said Giovanni DeMicheli, professor and director at École Polytechnique Fédérale de Lausanne (EPFL). “Why? Because you need something that you learn from, and if your terrain is evolving under your feet, it’s harder to do prediction based on what you have. In addition, there is a lack of comprehensive datasets to learn from, because not that many designs are in the public domain. It’s very difficult to learn if everybody keeps their own data. Most likely, ML will not be able to choose among future options, especially when it involves technologies, and a mix-and-match of technologies. ML is useful for problems that have less structure. But tools like logic design and synthesis, where there is structure, are where algorithms are potentially better at solving the problems, because we understand what’s going on.”

Optimization is based on cost functions. “The areas where machine learning has had an impact are games, natural language processing, and computer vision,” said IBM’s Puri. “The good thing about games is they have an amazingly well-defined cost function. Similarly, EDA has a well-defined cost function, but the problem with EDA is there are too many cost functions that intersect each other. It is very hard to formulate a single cost function from these multi-dimensional objectives, which range from timing, to power, to noise, to area, and so on. This lends itself toward heuristics rather than to a single-objective-function, game-theoretic approach.”
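
A small numeric sketch shows the scalarization problem Puri describes. All metrics and weights below are hypothetical, but the point carries: with a weighted-sum cost over timing, power, and area, the "best" design flips as soon as the weights change, so no single objective function settles the question.

```python
# Three hypothetical implementation candidates with made-up metrics.
candidates = [
    {"name": "opt_timing", "wns_ns": 0.00,  "power_mw": 420.0, "area_um2": 1.15e6},
    {"name": "opt_power",  "wns_ns": -0.12, "power_mw": 350.0, "area_um2": 1.10e6},
    {"name": "balanced",   "wns_ns": -0.03, "power_mw": 380.0, "area_um2": 1.05e6},
]

def cost(c, w_timing, w_power, w_area):
    # Weighted-sum scalarization of multi-dimensional objectives.
    timing_penalty = max(0.0, -c["wns_ns"])      # penalize negative slack only
    return (w_timing * timing_penalty
            + w_power * c["power_mw"] / 1000.0
            + w_area * c["area_um2"] / 1e6)

# The winner depends entirely on the (arbitrary) weights.
for weights in [(10.0, 1.0, 1.0), (1.0, 10.0, 1.0)]:
    best = min(candidates, key=lambda c: cost(c, *weights))
    print(weights, "->", best["name"])   # timing-heavy -> opt_timing,
                                         # power-heavy  -> opt_power
```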

UCSD’s Kahng showed one possible path for EDA. “This figure (see figure 2) shows one segment of EDA’s trajectory, the AI/ML-empowered EDA. Elements like auto-tuning will mature earlier than other elements, like consensus on fair benchmarking. But my hope is that the bulk of this figure will become real within the next 5 to 10 years.”

Fig. 2: The road to EDA 2.0. Source: Andrew Kahng/UCSD
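
Of the elements in Kahng's roadmap, flow auto-tuning is the easiest to picture. The Python sketch below runs a random search over two flow parameters against a hypothetical quality-of-results function standing in for an actual place-and-route run; production tuners use smarter samplers, but the loop is the same.

```python
import random

def run_flow(params):
    # Hypothetical stand-in for launching a real flow and measuring
    # quality of results (lower is better). A real tuner would invoke
    # synthesis and place-and-route here, which takes hours per trial.
    return (params["utilization"] - 0.72) ** 2 + (params["clock_ns"] - 1.1) ** 2

# Parameter space for the search (ranges are made up).
space = {
    "utilization": lambda: random.uniform(0.50, 0.90),
    "clock_ns":    lambda: random.uniform(0.80, 1.50),
}

best_params, best_qor = None, float("inf")
for _ in range(50):                      # trial budget
    trial = {name: sample() for name, sample in space.items()}
    qor = run_flow(trial)
    if qor < best_qor:
        best_params, best_qor = trial, qor

print(best_params, best_qor)
```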

Can we get there? “For machine learning in EDA, the expectations are very high,” said EPFL’s DeMicheli. “It’s too early to tell whether that’s the way to go or not. We have many results that are surprising, because we still don’t understand why we sometimes get good results, and more analysis of the methods themselves still needs to be done. We tend to have more trust in techniques that are based on deterministic reasoning for correctness. But there’s a big space for optimization, for reducing cost or area or delay, in aspects of design that do not affect correctness. That is a really big opportunity.”

The most likely road ahead will be based on hybrid solutions. “By and large, neural networks are not a good tool when you need trust, because they’re not very interpretable,” said Gary Marcus, professor in the Department of Psychology at New York University. “You need to do verification. And that’s a reason for us to think about neural symbolic hybrids that bring together some aspects of symbolic analysis to do the verification. You’d really like to be able to integrate ML with some symbolic constraints that might tell you that you don’t have the right answer.”
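
The code below is a minimal sketch of the hybrid Marcus describes: a proposer, standing in for a learned model, suggests a logic rewrite, and a deterministic symbolic check (here, exhaustive truth-table equivalence) accepts or rejects it. Both functions are hypothetical stand-ins, not any shipping tool's flow.

```python
from itertools import product

def original(a, b, c):
    # Reference logic: (a AND b) OR (a AND c)
    return (a and b) or (a and c)

def proposed(a, b, c):
    # Candidate rewrite an ML heuristic might suggest: a AND (b OR c)
    return a and (b or c)

def equivalent(f, g, n_inputs):
    # Symbolic safety net: exhaustively compare outputs on all 2^n inputs.
    # Real flows use SAT-based equivalence checking to scale beyond toys.
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

# The learned suggestion is only adopted if the deterministic check passes.
print("accepted" if equivalent(original, proposed, 3) else "rejected")  # accepted
```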

Conclusion
EDA is under a lot of pressure from many directions. Technology is moving at a rapid pace, and EDA is a fundamental piece of the puzzle that allows us to move to ever smaller geometries. As Moore’s Law slows, additional techniques are being introduced that allow for greater levels of integration, and this enables further increases in complexity. Optimization is becoming more difficult because of the number of interrelated cost factors.

The brute force approach to design has stopped working for many companies, which are now turning to domain-specific solutions, and these may be a great enabler for new levels of abstraction, new models, and new approaches. ML has given us a new suite of tools that may be applicable to some of the problems, even if not all of them. And the introduction of RISC-V has shown a new appetite for open source, as it enables more research and a much wider set of ideas about how to move design forward.

It is not that existing EDA companies are failing. There are simply too many opportunities for them to go after.

Related Reading
Customizing Processors
How custom a processor needs to be depends on many factors, but selecting the appropriate tool chain may be the right place to start.
Distilling The Essence Of Four DAC Keynotes
Common themes emerge, but so do different ways of looking at a problem, from business opportunity to concern for the environment.
Big Changes In Architectures, Transistors, Materials
Who’s doing what in next-gen chips, and when they expect to do it.
Startup Funding: Aug. 2022
110 companies raise $1.9 billion; VCs focus on power semis, automotive, and battery startups. Chinese investment remains strong.
