Designers prefer to build flexibility into their designs. The reasons are legion and mostly obvious: you may not know today how a chip will be used tomorrow, so it is best to avoid setting anything in concrete until you are sure. You may not fully understand the design until it is nearing completion, and premature optimization can leave you in a difficult position. And there are more practical considerations: getting buy-in from stakeholders on a set of restrictive requirements can be very difficult, and shelving the hard decisions for later is almost always the easier option.
So, the approach is usually to add more flexibility rather than less. This means allowing systems to be configured by software and, more recently, allowing the hardware itself to be configured directly through embedded programmable resources (embedded FPGA blocks and the like).
But there are other pressures in designing any chip, and now that we live in a post-Moore world, some of those pressures are becoming more prominent.
With the lengthening intervals between process shrinks and the rising costs of the newest nodes, there is an increased focus on getting more out of the technologies we already have. For some, the strategy of relying on off-the-shelf components that steadily decreased in cost and increased in capability is now looking flawed. These designers are turning to custom chips, bespoke solutions tightly architected to solve a specific problem, as the way to keep reducing costs and adding functionality. While building some flexibility into a solution can be a good idea, if the problem space is well understood, a chip can generally be crafted to meet that need without wasting resources on unnecessary flexibility.
There are also costs associated with the teams of people needed to program the flexible parts of the chip in the final solution. These teams often sit in the end user's organization while the chip is designed elsewhere, so they must understand and develop code for a chip they had no hand in designing. Time and money must therefore be spent getting them up to speed on the specifics of the design.
Altogether, that is a lot of potential cost, for both the designer and the end user, just to tailor a solution using the flexibility of the chip, when the required functionality could probably have been decided upon and baked into the design from the beginning.
Of course, some designs demand a lot of flexibility, for example to support a new standard. The standard you are working towards may still be in flux, but you want to be first to market, so you may choose to keep the details that are not yet agreed in an embedded FPGA, where they can be changed after the chip is manufactured. There is a cost associated with this, of course, but it is likely offset by hitting your market window.
Processors embedded in SoCs have been a staple for years precisely because a general-purpose processor is more cost-effective than trying to replicate complex logic directly in hardware, particularly when the software to be run may need to change at a later date. Similarly, there are specific problems where you need dedicated hardware but where that hardware requirement will change; in such cases, allowing the hardware to be reconfigured can save silicon area and lead to a more elegant design. However, these use cases, while important, are relatively few – most chips out there don't need that level of reconfigurability.
Instead, what is needed is the up-front work of understanding the problem space the chip is being designed for and making the tough decisions about what the chip will and will not do. Doing this work in advance yields a cheaper chip that is ready to use with a minimum of effort from end users, so you get to market faster with a less expensive product.
In times past, adding more technology could help delay difficult decisions. In a post-Moore world, better engineering and better architectures are the road to success. At Adesto, we take a holistic view of the discovery process in ASIC definition, covering not only the chip's requirements but also any future-proofing that can be allowed for. Learn more about our ASICs here today.