New MPW-TWO Program Will Provide Fabrication For Fully Open Source Projects

By Rob Mains, General Manager of CHIPS Alliance

CHIPS Alliance is excited to announce that the hardware development community can submit their open source design projects to Efabless.com for space on the forthcoming shuttle. This opportunity follows the success of the MPW-ONE shuttle, which received 40 submissions; 60% of those designs came from first-time ASIC designers. MPW-TWO is the second Open MPW Shuttle providing fabrication for fully open source projects using the SkyWater Open Source PDK announced by Google and SkyWater.

The shuttle gives designers the freedom to innovate without having to worry about the risks associated with the cost of fabrication. This is a great opportunity for individuals, universities, and industry to create their own IP and have it manufactured. 

The deadline for submission is June 18. Submissions must be open source designs and leverage open source tooling such as OpenROAD, OpenLane and the other EDA applications available through the Efabless.com design portal. Read more about the project requirements and submit here: https://efabless.com/open_shuttle_program/2.

I look forward to seeing the community’s contributions for this generous offering from Efabless.com and Google.

Modular, Open-source FPGA-based LPDDR4 Test Platform

This post was originally published at Antmicro.

The flexibility of FPGAs makes them an excellent choice not only for parallel processing applications but also for research and experimentation in a range of technological areas.

We often provide our customers with flexible R&D platforms that can be easily adapted to changing requirements and new use cases as a result of our practice of using open source hardware, software, FPGA IP and tooling.

As an example of such activity, we have recently been contracted to develop a hardware test platform for experimenting with memory controllers and measuring the vulnerability of various LPDDR4 memory chips to the Row hammer attack and similar exploits.

LPDDR4 test platform

Modular and cost-optimized

Because LPDDR4 targets high-volume, customer-facing devices where size, power use and unit cost are a priority, it does not come in the form of modules, and the hardware tools and software frameworks for testing it can be prohibitively expensive.
Despite efforts to mitigate the Row hammer exploit, a number of memories available on the market remain vulnerable to the problem, which calls for a test platform that allows experimenting with memory chips and memory controllers to devise new mitigation techniques.

Another issue is that preexisting work mostly relies on proprietary memory controllers which cannot be adapted to specific memory access patterns that trigger Row hammer.
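To make the notion of an attacker-controlled access pattern more concrete, below is a purely illustrative Python sketch of a double-sided Row hammer pattern generator; the function and its parameters are hypothetical and not part of the framework described in this post.

```python
# Illustrative only: a double-sided Row hammer pattern repeatedly activates
# the two rows adjacent to a victim row in the hope of flipping bits in the
# victim. Row numbering and activation counts are placeholders, not real
# DRAM geometry.
def double_sided_pattern(victim_row, activations=1_000_000):
    aggressors = (victim_row - 1, victim_row + 1)
    for _ in range(activations):
        for row in aggressors:
            yield row  # each yielded row would be activated (read) on the device under test

# Example: the first few activations around victim row 1000
print(list(double_sided_pattern(victim_row=1000, activations=2)))  # [999, 1001, 999, 1001]
```

An open memory controller makes it possible to drive exactly this kind of pattern at the command level, which is what closed controllers tend to prevent.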

To address this need, we have created a fully open source flow built around Enjoy Digital’s open source memory controller, LiteDRAM, to which we added LPDDR4 support in order to enable testing of LPDDR4 memory chips.

What our customer needed was a flexible platform for developing security measures that would be cost-optimized for high volume production.

To accomplish that we’ve built a modular device that consists of the main test board and a series of easily swappable testbeds for different memory types, the first of which is already available on our GitHub.

What is more, thanks to being open source, the platform enables various research teams to combine efforts and work collaboratively on coming up with new attacks and mitigations, as well as fully reproduce the results of the work.

LPDDR4 test module

Experimenting reliably on a robust platform

The platform is based on a Xilinx Kintex-7 FPGA and features several I/O options: HDMI, which can be used for processing video data and experimenting with streaming and HDMI preview applications featuring RAM; USB for uploading your bitstream or debugging; as well as an SD card slot and GbE.

There is also an additional 64 MB of on-board HyperRAM that enables safe experimentation with interchangeable RAM chips under extreme conditions.

With Antmicro’s commercial development services the platform can be customized to meet your specific requirements, while the open source character of the solutions we use gives you full control over the product and vendor independence.

We help our customers build complicated FPGA solutions, embrace the dynamically growing open source tooling ecosystem and develop various technologies that allow developers to work more efficiently across the whole FPGA spectrum.

GitHub Actions Self-hosted Runners, Build Event Server and Google Cloud

This post was originally published at Antmicro.

Continuous Integration and smart lifecycle management are key for high-tech product development, which is often a complex and multi-faceted process that requires automation to be efficient and failure-proof. At Antmicro, we’ve been creating various open source cloud and hybrid cloud solutions for our customers, helping them to encapsulate the complexity of their software stack. Many of those projects cross the hardware/software boundary and involve a mix of open source and proprietary code, which means that fine-grained control of the CI setups is needed to make them work.

To provide the level of flexibility that we and our customers require, we often find ourselves working extensively on the underlying CI infrastructure, building open source solutions that can scale between organizations and teams. One such project involved creating a custom, local GitHub Actions runner, with containerized builds, support for Google’s Build Event Server and workload measurement and analytics; in collaboration with Google we then also enabled running an identical setup with the extra capabilities in Google Cloud.

Self-hosted runner diagram

Custom runner, more applications

GitHub is the world’s largest open source code sharing space, home to many of our open source projects such as Renode, the open source FPGA toolchain SymbiFlow or our open source ASIC development-focused SystemVerilog work. GitHub Actions – used by millions of developers worldwide – is a natural choice for those projects as the go-to CI flow. However, by default it provides compute resources – in the CI world traditionally called runners – with a specific hardware configuration which does not always fit the needs of the workloads that we deal with.

This is especially true of our work on ASIC and FPGA development flows. Working towards fully open source chip and IP design in collaboration with our customers and fellow CHIPS Alliance members such as Google, Western Digital and QuickLogic, we find ourselves needing hybrid setups which allow us to keep the code as well as the CI definitions public while relying on internal infrastructure to do the heavy lifting. Long-running builds involving tools like VTR or OpenROAD, which use lots of memory and CPU power, can greatly benefit from the flexibility of custom, self-hosted runners, and this solution also gives you a high degree of freedom in integrating your runner with external hardware or tools that cannot be shared publicly. The latter is especially helpful in some of our other open source projects, for things like benchmarking RISC-V, OpenPOWER and other cores or tracking the QoR of your FPGA designs. Quality of Results tracking and flexible Continuous Integration are extremely important for the custom engineering projects we embark on, which typically integrate a variety of open source components; fortunately, the open source nature of the predominant part of the tools we use makes such work much easier.

Virtual machines, distant-bes and Google Cloud integration

Our internal and our customers’ needs have called for the ability to integrate on-premise runners into our GitHub CI flows, which can be done using the GitHub runner project. For many of our projects, we provide flexible development infrastructure based on open source that allows us to better collaborate around shared code, and to do that, we need to be able to scale compute resources between the private and public cloud. To enable feature parity with some of our internal infrastructure, we also extended the self-hosted runners with some extra features.

Firstly, the custom runners developed as part of the project can be used with our distant-bes framework to push results in the Build Event Server format to custom results viewers transparently to the CI run itself. You can see an example of how this works in the symbiflow-examples repository. Secondly, we modified the runner so that instead of running the CI script on bare metal, it spawns virtual machines and performs the run steps inside them, collects results, and kills the machine, without changing the state of the host system’s kernel. This also allows us to gather performance metrics to see what the real utilization of the runner’s resources is – and we push those results in the form of graphs to our BES server.
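As a side note, this kind of resource monitoring can be prototyped in a few lines of Python; the sketch below samples CPU and memory usage with psutil and writes the samples to a CSV file that could later be rendered as graphs. It is only an illustration of the concept, not the actual code behind our runners or the BES integration.

```python
# Minimal sketch: periodically sample host CPU/memory utilization while a CI
# job runs and store the samples in a CSV file for later plotting.
import csv
import time

import psutil  # third-party package: pip install psutil

def sample_utilization(duration_s=60, interval_s=1.0, out_path="utilization.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "mem_percent"])
        end = time.time() + duration_s
        while time.time() < end:
            writer.writerow([
                time.time(),
                psutil.cpu_percent(interval=None),
                psutil.virtual_memory().percent,
            ])
            time.sleep(interval_s)

if __name__ == "__main__":
    sample_utilization(duration_s=10)
```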

Runner's resource utilization

Lastly, based on the needs of several of our collaborative open source projects with Google, we pursued yet another goal, namely, instantiating our self-hosted runners in Google Cloud, which enables our CI to spin powerful servers up and down on demand. This mix of robust internal infrastructure and always-available, scalable on-demand Google Cloud resources is very useful for heavy workloads run by multiple organizations. In the world of collaborative development in forums like the CHIPS Alliance and RISC-V International, this is no longer a nice-to-have, but a necessity.

Goings-on in the FuseSoC Project and Other Open Source Silicon Related News

This post was originally published by Olof Kindgren.

FOSSi Fever 2020

2020 was a year with a lot of bad news, so it feels slightly strange to cheerfully write about a very specific topic in the light of this. But there will always be good and bad things happening in the world. So let’s keep fighting the bad things and for now take a look at what happened last year within the amazing world of open source silicon. I will start by mentioning the most significant, but by no means the only, milestones for the FOSSi movement as a whole and then take a more personal look at the work where I have been directly involved.

OpenMPW

The biggest story within free and open source silicon this year has undoubtedly been the openMPW project involving Google, SkyWater Foundries and eFabless together with a number of other collaborators.

Ever since I got involved in open source silicon ten years ago, building a fully open source ASIC has been one of those big milestones. While we have had FOSSi IP cores taped out on chips for at least 20 years and parts of the flow managed by open source tools, a fully open source end-to-end flow for producing ASICs has always seemed to require too much work and to leave too many gaps to fill in. But, over the years, people all over the world have filled in the gaps and done the work bit by bit. Sometimes in the context of overarching programmes to advance open source silicon, sometimes in academic settings, sometimes coming from the industry and sometimes as completely unpaid hobby projects. And this year all these efforts came together, helped by funding, to produce four shuttle runs, each loaded with 40 different completely open source designs. The first of these runs is currently being fabricated, and it will be extremely interesting to see the chips coming back.

One of the final pieces in this puzzle was the PDK. And while SkyWater should be rightfully lauded for their decision to open up their 130nm PDK, it begs the question: why on Earth did it take this long? What could the fabs possibly have to lose by doing this? What they gain is easy to answer: a completely new market of users who can create chips at their fab. According to people within the project, it’s estimated that 75% of the people on the first shuttle define themselves as software engineers. It’s very likely that none of these people would ever dream of making an ASIC without this possibility. So I kind of feel that making the EDA industry open up their formats is a bit like trying to get your kids to eat vegetables. There is a lot of groaning and complaining, but was it really all that bad in the end to get some nutrition? Or in the case of ASIC fabs, was it really all that horrible to release your PDK to get some more customers? Let’s just hope this opens up the eyes of more fabs. My dream, and that of many others, is to eventually see the same thing happen to ASIC fabs as has been happening with cheap PCB services over the past ten years. And using that analogy, I’m quite sure it pays off to be early in the race. So, get started folks!

QuickLogic and SymbiFlow

The other big thing happening this year is that we finally have an FPGA vendor shipping an open source toolchain for their devices. The company that will go down in the annals of history for being the first to do this is QuickLogic, with their EOS S3 FPGA. This is by no means the first FPGA with an open toolchain, and the QuickLogic-flavored version of SymbiFlow developed by FOSSi veterans Antmicro is based on all this prior work. But it is the first time we see a toolchain being created from the FPGA manufacturer’s specifications rather than being figured out from compiled FPGA binaries, and it’s the first toolchain that is supported and funded by the vendor rather than being at best tolerated by them. And again I must ask, why did it take so long for this to happen? If I were running a small FPGA startup with limited resources, I can’t for the life of me understand why I would want to spend a lot of time and money to build and continuously maintain a big unwieldy toolchain all by myself instead of adding the required device-specific bits to a known good open source toolchain and sharing the maintenance burden. If nothing else it would free up resources to build other value-add products on top of the tools. It’s as if every vendor of computer systems first built their own operating system and compiler before shipping their products. This is what we had in the 80s, and we abandoned it for very good reasons. Because it made absolutely no one happy. And you know what? I think the users of FPGAs should put more effort into pushing their vendors to support open source toolchains, because it will save everyone a heap of time and money.

Let me illustrate that last point with an example that actually happened when I was porting SERV to the QuickLogic devices. After synthesis I noticed that it used far more resources than expected. Looking at the synthesis logs I realized the memories in the design weren’t mapped to on-chip SRAM. So I asked the toolchain developers about this. They pointed me to the file in the toolchain that contained the rules for mapping to SRAM. I quickly found a badly tuned parameter, changed it to a more sensible value and ten minutes later it was working fine. An hour later I had submitted a patch back to the toolchain that fixed the problem for everyone else who would encounter it.

Let’s break this down into numbers. Finding the cause of the bug took about 15 minutes. Fixing it, another five. At that point I could use it myself, but after spending another 15 minutes or so, it was also fixed for everyone else.

Now let’s do the same exercise for a proprietary closed source toolchain. Finding the cause of the bug takes… well…it depends… Let me explain.

I started my professional career at a company which at that point was the world’s largest FPGA buyer. Whenever we had problems, they flew in two FAEs to sit in our lap, they could provide us with custom internal builds of their tools and they generally tried to make sure the problem quickly went away so that we would continue to buy FPGAs from them. However, most companies are not the world’s largest FPGA buyer and do not get this treatment. Instead you will have to wade through layers of support people until you reach someone who is actually qualified enough to acknowledge the issue. I have been in this situation numerous times and would estimate this process usually takes around 2-3 months. Actually fixing the bug probably takes five minutes or so in this case too, but here comes the fun part. In most cases you will now have to wait, I don’t know, a year or so until the fixed bug ends up in a released product that you can download. What happens in practice is that the user tends to find a workaround instead. In the example above, the likely solution would be to instantiate a RAM macro instead of relying on inference. This however doesn’t come for free, as it requires finding all the instances where this is a problem and adding special handling for each of them, which results in a larger code base with more options to verify and maintain. This costs time and this costs money. So the moral of this story is that closed source tools are more expensive for everyone involved, and users of FPGAs should get better at telling the FPGA vendors that they are done with this closed source nonsense.

The QuickFeather, the first FPGA board to ship with a FOSSi toolchain

There are numerous other news and projects that are well worth mentioning, but the above two are milestones that we have been waiting for a long time, so they deserved special attention. And if you want to keep up with the latest happenings in open source silicon, I highly recommend subscribing to the El Correo Libre newsletter which does a fantastic job of providing an overview of what goes on in all corners of the world. So let’s move on to some of my more personal victories that aren’t necessarily mentioned in other people’s year in review.

When I am working on and talking about open source silicon I am often wearing many hats because I’m associated with several different organizations. Luckily they are all pretty much aligned on this topic, which makes things far easier. But these organizations also have different motives and goals, so I would like to say a few words about them here.

Qamcom

My day job is working for Qamcom Research & Technology, and there too it has been more FOSSi work than usual, which I think is a good indication that open source silicon is becoming increasingly common in chip design in general. The year started off by finishing up some work on SweRVolf together with a couple of my Qamcom colleagues. SweRVolf is a project under CHIPS Alliance, an organization that Qamcom has been part of since 2019 to help improve the state of open source and custom silicon. After that I was pulled into a project for doing climate research with a huge radar system. My task here was to handle sub-nanosecond time synchronization between systems located hundreds of miles from each other, using the White Rabbit system developed at CERN. I was pretty excited about getting to know White Rabbit. The timing section at CERN, responsible for the White Rabbit project and associated technologies, is a household name within open source silicon and has a long history here. I know many of the people working there personally and have great respect for their work. But I hadn’t had the chance until now to actually get down and dirty with the technology. Once my job was done there I moved on to another proprietary project that I can’t discuss here. I do get to use FuseSoC and Verilator though, so it’s fine 🙂

FOSSi Foundation

The hat I tend to wear most when the topic revolves around open source silicon is my FOSSi Foundation director hat. And despite doing a lot less of the things we normally spend time on, it turns out we did a lot of other great things instead. I will however not go into more detail here, but instead point to the excellent summary written by my FOSSi Foundation colleague Philipp Wagner.

RISC-V

Arguably the most well-known project nowadays with ties to open source silicon is RISC-V. My RISC-V ties deepened in the beginning of the year when I was asked to become a RISC-V ambassador. Part of being an ambassador is to create awareness of RISC-V in the fields where I’m active. For well-known reasons the number and nature of events were a bit different from previous years, which meant fewer opportunities to wield this new-found power, but I did participate in an ask-the-expert session during the RISC-V global forum and a couple of other events that will be described later. And I also got to be interviewed about RISC-V and open source silicon by Sweden’s largest electronics and tech news outlets as well as the Architecnologia blog.

Current crop of RISC-V Ambassadors; AKA the Twantasic 12

In addition to my day job and participating in different organizations I also run a bunch of open source projects, so let’s take a look at the progress of the most important ones during 2020.

SweRVolf

SweRVolf is an extendable and portable reference SoC platform for the Western Digital / CHIPS Alliance SweRV cores. SweRVolf is designed for software engineers who want a turn-key system to evaluate SweRV performance and features, for system designers who want a base platform to build upon, and for learners of SoC design, computer architecture, embedded systems or open source silicon methodology. To easily achieve the goals of portability and extendability it is powered by FuseSoC, which just happens to be one of my other open source silicon projects mentioned later on.

During 2020 SweRVolf gained support for booting from SPI Flash, but most effort was spent on usability: making it rock-solid, improving documentation, increasing compatibility with more EDA tools, keeping the underlying cores up to date and following along with changes in the Zephyr operating system, which is the officially supported software platform for SweRVolf.

But the biggest thing to happen to SweRVolf this year is that it will be used as the base of a new university course from the Imagination University Programme called RVfpga: Understanding Computer Architecture. I’m very excited (and slightly scared) about soon having thousands of students getting familiar with computer architecture, RISC-V and open source silicon through a SoC that I have designed. And I would like to mention a few things about how SweRVolf is built, because I think it’s a great example of how to create chip designs. When I say that I have designed SweRVolf, most of the work has consisted of putting together various pieces and making sure they work well as a whole. Most of the underlying code has been written by other people, and from my perspective, that is really the most successful aspect of SweRVolf because it highlights the rich open source silicon ecosystem. The main CPU core is from Western Digital and governed by CHIPS Alliance. Most of the AXI infrastructure was developed through the PULP project at ETH Zürich and the University of Bologna. The UART and SPI controllers were developed for the OpenRISC project during the first wave of open source silicon almost 20 years ago. The Wishbone infrastructure was developed by me when I started out with open source silicon ten years ago, and the memory controller was created by Enjoy Digital and is written in Migen as part of the LiteX ecosystem. And to go full circle, the memory controller uses a tiny RISC-V CPU called SERV internally to aid with calibration. SERV, the world’s smallest RISC-V CPU, is written by me. Small world. And of course the whole project is packaged with FuseSoC and uses Verilator by default for simulations, so it’s FOSSi all the way. As I hope you understand by now, it’s not about some lone hero churning out code; all this has been made possible by a huge amount of work by a ton of people over many years, and I’m proud to be one of them.

SERV

Probably the hobby (read: unpaid) project I spent the most time on during 2020 was SERV, the world’s smallest RISC-V CPU, which went from small to even smaller during the year. SERV is very much a project driven by numbers, so let’s look at some of these numbers.

In February I got hold of a ZCU106 development board with a huge Xilinx UltraScale+ FPGA for a project I was assigned to. As this was the largest FPGA I had ever had in my home, I got curious to see how many SERV cores I could squeeze into it. The year before, at the 2019 RISC-V workshop in Zürich, I had done a presentation on how to fit 8 RISC-V cores in a small Lattice iCE40 FPGA (spoiler: it ended up being slightly more than 8 eventually), giving each of them a single I/O to communicate with the outside world. The problem this time was that after stuffing in 360 cores I ran out of I/O pins. It would also have been practically impossible to verify that all these external pins actually did what they were supposed to do, so I needed some way of using less than one I/O pin per core. Then it struck me that just a few months earlier I had created a heterogeneous sensor aggregation platform based on SERV cores called Observer. The idea with Observer was to connect a lot of sensors to an FPGA, each serviced by its own SERV core, and then merge the data into an output stream. I gave up on the platform when I realized that while I could fit a lot of SERV cores into the devices, I just had a few sensors so there wasn’t much data to aggregate. But this platform was a very good starting point.

Block diagram of the Observer platform

By removing all sensor interfaces and just having each core print out an identification message instead, I had a system that I could instantiate with any number of cores. Trying this on the ZCU106 I could now run over 600 cores on the FPGA. The next problem then was that I was running out of on-chip RAM way before any of the other FPGA resources. In case you don’t know, most FPGAs contain a number of fixed-size SRAMs spread out over the device, each typically 1-8 kB large. For SERV, each core used one for the RF (register file) and another one for the program/data memory. With RISC-V using 32 32-bit registers, only 128 bytes of the RF RAM are used, but since the fixed-size SRAMs on FPGAs are typically far larger than that, most of the RAM ends up unused. That’s bad, but I had a plan. With a bit of work I managed to share RF, program and data in the same RAM so that the RF is allocated to the top 128 bytes of the RAM. This freed up half of the on-chip RAM blocks and I could eventually hit 1000 SERV cores on the ZCU106 board. Of course, at this point I was curious to see what the situation was for other boards after all these optimizations. Taking it one step further, I figured I should turn this into a real thing by creating a benchmark so that people have a quick way to see roughly how large the FPGAs are on different boards. And with that ServMark was born.
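To make the register file trick concrete, here is a tiny sketch of the address arithmetic involved (illustrative only, not SERV’s actual RTL): 32 RISC-V registers of 4 bytes each need exactly 128 bytes, which are mapped to the top of the shared RAM so that everything below stays free for program and data.

```python
# Illustrative address arithmetic for sharing one block RAM between the
# register file (RF) and program/data memory. Not SERV's implementation.
RF_BYTES = 32 * 4  # 32 RISC-V registers, 4 bytes each = 128 bytes

def rf_address(ram_size_bytes, reg_index):
    """Byte address of register x<reg_index> when the RF occupies the top 128 bytes."""
    assert 0 <= reg_index < 32
    rf_base = ram_size_bytes - RF_BYTES
    return rf_base + reg_index * 4

# Example: in an 8 kB RAM the RF occupies addresses 8064-8191,
# leaving 8064 bytes for program and data.
print(rf_address(8 * 1024, 0), rf_address(8 * 1024, 31))  # prints: 8064 8188
```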

ServMark lasted for about three minutes until I realized CoreScore was a much catchier name, so that’s what we have now.

I had originally planned to do a presentation about SERV at Latch-Up in Boston. But for well-known reasons we cancelled all FOSSi Foundation physical events. Instead I accepted an invitation to speak at the First virtual RISC-V Munich meetup. By this point I had attended a couple of virtual meetups and I hated them. It was in most cases awful to watch a narrated slide deck without a stage and a speaker to bring it to life at least a bit. So I decided to take a fresh look at the possibilities instead of being limited by the medium. I decided to make videos instead. First of all, people would watch on a computer screen with proper resolution instead of a washed-out image projected on canvas, which meant I could have much more detailed pictures and smaller font sizes. I could also freely mix pictures and animations, fine-tune timings, do several takes of the audio and add sound effects. And despite being done by someone who has pretty much zero experience in this sort of thing, I think it turned out pretty well. So on the day of the event, I just introduced myself and let the audience indulge in my fully immersive multimedia edutainment experience about SERV. Not sure why the Oscars committee hasn’t got in touch yet.

Is CoreScore the only attempt to put a mind-boggling number of RISC-V cores inside chips? Absolutely not, and during the summer I learned of the Manticore project from Florian Zaruba and Fabian Schuiki (jointly known as Flobian Schuba) from ETH Zürich, both well-known names in the FOSSisphere from their work in the PULP ecosystem. Manticore: A 4096-core RISC-V Chiplet Architecture for Ultra-efficient Floating-point Computing had been accepted into the prestigious Hot Chips conference. Manticore is an impressive project, but I still thought it was a bit unfair that I wasn’t invited as well. So I reached for the biggest FPGA board I could find and then wrote my first academic paper of the year, called Plenticore: A 4097-core RISC-V SoClet Architecture for Ultra-inefficient Floating-point Computing. Unfortunately I did not receive an invitation to Hot Chips despite this. I assume it must have gotten lost in the mail somewhere.

Oh well, let’s look at some more numbers instead.

  • A number of optimizations were found over the year which, depending on the measure, further shrank the core by 5-10%.
  • The number of supported FPGA boards for the servant SoC grew from 4 to 17, mostly thanks to other contributors (thanks everyone, love you all!)
  • The SERV support for the Zephyr operating system was rewritten and upgraded from the aging Zephyr 1.13 to 2.4, the latest version at the time of writing.

SERV resource usage over time on a Lattice iCE40 FPGA

Coming back to CoreScore, the results right now range from 10 cores on a smaller Lattice iCE40 device up to 5087 cores on a large Xilinx device and the high score table can even be viewed interactively online! If you can’t find your favorite board there, just send a PR and we’ll get it added. And if any people with access to crazy large FPGAs happen to read this, please get in touch with me. I’m very curious to see who will be first to get above 10000 cores.

SERV also saw another great improvement in the form of documentation. While not completely there yet, the functionality of most SERV modules has been documented together with detailed schematics showing the implementation down to the individual gates, muxes and flip-flops. And most changes to the source are now accompanied by a code comment to clarify what is going on. Since SERV is optimized for size rather than readability, many parts of the core are difficult to figure out by just looking at the source code. Hopefully this will make it easier for others who want to understand or work on the core, but frankly it has also been very useful for me, since I tend to forget why I did things in a certain way and have had to spend a lot of time following my own tracks.

Schematic of the SERV control unit from the SERV documentation

FuseSoC

The oldest of my open source silicon projects still going strong is FuseSoC. It is now about to turn ten years old and keeps growing in features and users each year. Looking back at the changes through 2020 I can see some new trends in the development. The most important one is that, for the first year ever, most of the work was not done by me. During 2020, my fellow FOSSi Foundation director and lowRISC employee Philipp Wagner has been pulling the heaviest load of FuseSoC development. And with Philipp came quality. Dr. Wagner has improved FuseSoC in pretty much every aspect. Bugs have been fixed and features have been added. The development experience has been improved by CI testing, automatic code formatting checks and improved testing coverage. And what makes me happiest is that the user experience has been improved, not least by a total rewrite of the documentation into something that is actually useful and can be proudly shown to the world. All this is very much needed as FuseSoC is becoming increasingly popular. It has already been picked up by many of the flagship open source silicon projects like OpenTitan, SweRVolf and OpenPiton, and with the RVfpga university programme there will soon be a whole new generation who will get familiar with it as well.

Edalize

In 2018, the part of FuseSoC that interacted with the EDA tools was spun off into Edalize. The reason was that it was believed this part could be useful for others who weren’t interested in the whole FuseSoC package. This prediction seems to have been correct and Edalize has very much started a life of its own by now. In addition to FuseSoC, Edalize is now used by several other projects such as Silice, Clash and fpga-perf-tool, and over the year Edalize has gained support for 7 new EDA tool flows, bringing the total number up to 25.

2020 was also the year when Edalize had its first taste of being in the spotlight on its own merits. For the Workshop on Open-Source EDA Technology (WOSET) 2020 I decided to submit a presentation about Edalize. Being an academic conference, this also prompted me to write an accompanying paper, as is the common courtesy for these kinds of events. The paper received a lukewarm response but was accepted anyway. Once again I did not feel like reciting slides to a camera, so I turned back to my new-found interest in advanced multimedia productions. And it paid off. The Edalize video won an award for best video at WOSET 2020. Well done, Edalize!

LED to Believe

All of the above projects use FuseSoC and Edalize because – well, it’s kind of why I created FuseSoC in the first place – to easily reuse components and retarget to different devices. But I also realized there was a need for a dead simple project to help people get started with FuseSoC – the Hello World of silicon, so to speak. And the Hello World of silicon is of course the blinking LED. So in 2018 I created project LED to Believe with the ambitious goal to create FuseSoC-powered LED blinkers for every FPGA board ever made. The project has several aspects that are useful in different ways. It serves as a very simple introduction to FuseSoC and how to make a design that targets multiple hardware platforms. It is also an excellent pipe cleaner for when you receive a new board. If you can run the project successfully and get the LED to blink, it likely means you have managed to install all the EDA tools correctly, which is no small feat, and you also have a template to take on bigger projects. And it’s also fun to see what boards are available out there. While I have submitted a bunch of the board ports myself, the vast majority have come from all the fantastic contributors out there. And during 2020 the number of supported boards grew from 16 to 44. Perhaps not all the FPGA boards ever made, but a considerable chunk of them. And already in the short amount of 2021 that has passed, there have been numerous more contributions, so we’re getting closer all the time.

In closing, 2020 was a busy year FOSSi-wise. And this has just scratched the surface of all the things that have been happening during the year. And just as we were about to close the books on 2020, I was informed that Lattice had incorporated one of my FOSSi projects into their shiny new award-winning Propel design suite. Which project, you might ask? Was it the similarly award-winning FuseSoC, to give Lattice users immediate access to a rich ecosystem of open IP cores? Or was it the Rosetta stone of Edalize, with its award-winning video, which would easily provide a coherent interface for a dozen simulators and make it easy to switch between Lattice’s multitude of FPGA tools such as Diamond, icecube2 and Radiant? Or was it SERV itself, the award-winning CPU capable of offering a RISC-V experience for all but their absolutely smallest offerings? Well, actually, none of the above. It turns out that Propel now contains ipyxact, my somewhat feature-limited Python library for working with IP-XACT files. Not my first choice, but fair enough. I wonder if they have read about my somewhat complicated relationship with IP-XACT.

Finally my work is recognized by big EDA vendors (picture by Gatecat)

High-Throughput Open Source PCIe on Xilinx VU19P-Based ASIC Prototyping Platform

This post was originally published at Antmicro.

In our daily work at Antmicro we use FPGAs primarily for their flexibility and parallel data processing capabilities, which make them remarkably effective in advanced vision and audio processing systems involving high-speed interfaces such as PCI Express, USB, Ethernet, HDMI, SDI etc. that we develop and integrate as open source, portable building blocks. Many of our customers, however, also use FPGAs in a different context, namely for designing ASICs, which is a highly specialized market that typically involves large FPGAs, proprietary flows and IP. In one such project, we were working with one of the largest FPGAs in production today, the Xilinx VU19P with almost 9 million system logic cells. As the design was of considerable complexity, it needed a high-throughput link between the FPGA and the host PC that could be thoroughly benchmarked, analyzed and optimized for the use case.

Implementing PCIe with open source

Implementing PCIe is not completely straightforward, as you have to synchronize multiple lanes of high-speed bi-directional data. If you hit a bug somewhere in your data flow, things get very tricky to debug, especially if you have no ability to inspect and change the source code of the IPs involved. Being active developers of a variety of portable and reusable open source FPGA IP cores, for the project in question we were able to integrate a fully open PCIe interface into the Xilinx VU19P-based ASIC prototyping platform using LiteX/LitePCIe, achieving a pretty respectable throughput of 31 Gbit/s over 8 lanes. Although the FPGA chip itself is capable of transferring data over 16 lanes, the proFPGA board used in the setup supports only 8, so with hardware capable of wider links we could achieve even greater throughput if needed. In fact, the repository also contains instructions for a 16-lane capable VU9P-based setup – using a popular and not as prohibitively expensive devboard available on the after-market – where we could measure as much as 59 Gbit/s.
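As a rough sanity check of these figures, the sketch below compares the measured numbers against the theoretical line rate of a PCIe Gen3 link (8 GT/s per lane with 128b/130b encoding). The assumption that the links ran at Gen3 speeds is ours, and the calculation ignores TLP/DLLP protocol overhead, so the real attainable payload bandwidth is lower than the raw figure.

```python
# Back-of-the-envelope PCIe bandwidth check. Assumes Gen3 signalling
# (8 GT/s per lane, 128b/130b line coding) and ignores packet overhead.
GEN3_GT_PER_LANE = 8.0           # gigatransfers per second per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line coding

def gen3_raw_gbps(lanes):
    return lanes * GEN3_GT_PER_LANE * ENCODING_EFFICIENCY

for lanes, measured_gbps in [(8, 31.0), (16, 59.0)]:
    raw = gen3_raw_gbps(lanes)
    print(f"x{lanes}: ~{raw:.1f} Gb/s raw line rate, measured {measured_gbps} Gb/s "
          f"({measured_gbps / raw:.0%})")
```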

PCIe connection between host PC and VU19

Scalable, portable and customizable flows

Our ability to rapidly iterate as well as track down and fix bugs in the system we have created for this customer project demonstrates the scalability and portability of the open source-based approach, and is an example of Antmicro’s wider efforts aimed at developing reusable building blocks and introducing improvements to the whole FPGA ecosystem.

Open source-licensed IP cores play well with the open source FPGA and ASIC tooling that we are building to enable a faster, collaborative and modular system development workflow – a goal that is shared by CHIPS Alliance, of which we are proud to be a Platinum member. As one of the many examples, we are making great progress in enabling open source synthesis and simulation of complex SystemVerilog-based designs, such as security-focused RISC-V cores like OpenTitan’s Ibex. Some of our other projects focus on open source synthesis and place & route flows, linters, formatters, CI systems, simulation platforms, test suites and more.

Flexible system design

The PCIe core used in the ASIC prototyping project also works great in sophisticated computer systems we have been building for our customers. The wide array of customizable and licence-free FPGA IPs and SoC generators that we work with allows us to implement specific functionalities in the devices we build and it includes MIPI CSI and other camera interfaces, SDI, HDMI, ISP processing, video codecs, AI and 2D GPU acceleration, I2S, SPDIF, PCIe, USB, Ethernet, DMA, SATA and DRAM controllers.

Enabling Open Source Ibex Synthesis and Simulation in Verilator/Yosys via UHDM/Surelog

This post was originally published at Antmicro.

Throughout 2020 we were hard at work developing proper, portable SystemVerilog support for multiple open-source FPGA and ASIC design tools used by us and our customers, most notably Yosys and Verilator. We strongly believe that the support is a necessary step in building a collaborative ecosystem and scalable and reproducible CIs, especially publicly accessible ones that are common in multi-organization projects such as OpenTitan and CHIPS Alliance. Leading the efforts towards achieving this goal, we’ve been developing a fully open source SystemVerilog parsing flow for Yosys and Verilator using UHDM and Surelog, achieving an important milestone: being able to fully parse, synthesize and simulate OpenTitan’s Ibex core directly from the SystemVerilog source.

Getting closer to open-source synthesis and simulation

In this effort, Antmicro has been gradually covering various SystemVerilog functionalities and real-world implementations, developing support for different open source RISC-V cores and moving closer to complete open source synthesis and simulation support for Ibex – a small, efficient, 32-bit RISC-V core used in the OpenTitan project. Originally developed at ETH Zürich as RI5CY, it is now maintained and developed further by lowRISC – a not-for-profit organization promoting collaborative engineering that targets open source silicon designs and tools.

Open source SystemVerilog test suite

What has proved very helpful in this process is the SystemVerilog test suite that we developed last year and continue to maintain together with a broad open source community in order to keep track of the supported and missing SystemVerilog features in a number of Verilog tools. It runs tests dedicated to various tool classes, covering a range of features, from single SV functionalities up to complex designs.

Earlier this year, while closely tracking our progress using sv-tests, we completed a number of milestones, such as parsing Ibex in the Yosys synthesis tool directly, or enabling SystemVerilog linting and formatting with Google’s Verible SystemVerilog parser and FuseSoC, an open source tooling and IP package manager that is easy to integrate with existing workflows. The results of this ongoing work are now being used in several open source silicon projects, most notably OpenTitan. You can find a demo integration on our GitHub, as described in a dedicated blog note earlier this year.

Parsing CPU cores with a higher order tool

However, our overarching goal in this space was to enable parsing various complex SystemVerilog designs (with Ibex being the tip of the proverbial iceberg) with higher order tools which could be used as a front-end to multiple other tools, without the need to redo the work and maintain SystemVerilog support separately in each tool.

This can be achieved with UHDM and Surelog – two open source tools originally developed by Alain Marcel. UHDM (Universal Hardware Data Model) is a multi-purpose intermediate library that enables plugging a parser into many different tools, while Surelog is a versatile, comprehensive SystemVerilog parser, pre-processor, elaborator and UHDM compiler.

Adding enough coverage of SystemVerilog in UHDM/Surelog to support the Ibex core in two critical open source hardware development tools is an important milestone in Antmicro’s long-running collaboration with Google and Western Digital, both of which are driving the OpenTitan project as well as the FOSSi community, interested in open source simulation, synthesis, place & route and verification of designs of similar or bigger complexity than Ibex. A guide on using the Surelog/UHDM flow to synthesize the Ibex design is available on our GitHub.

Open source technologies are bringing a new dimension to FPGA/ASIC development flows. Learn more at antmicro.com.

The CHIPS Alliance Workshop: 10 Talks From Industry Leaders, All For Free

Mark your calendars! The CHIPS Alliance Workshop is coming up on Thursday, Sept. 17 from 11 a.m. to 2 p.m. PT. This free, virtual event will feature talks from industry leaders including Antmicro, Efabless, Google, Intel, Mentor, Metrics, OpenROAD, QuickLogic, SiFive, UC Berkeley and Western Digital.  

The CHIPS Alliance Workshop will fit 10 sessions into three hours for a jam-packed event covering a range of interesting topics in the open source community. You’ll hear about open source ASICs, chiplets, FPGAs and SoCs, in addition to open source design verification, FPGA tooling, machine learning accelerators and more. Read on for additional details, and make sure to register before it’s too late.

Check out the schedule below to learn more about what the sessions will cover:

  • 11:00 a.m. PT: Keynote Kick-Off – CHIPS Alliance
    • Brief welcome
  • 11:05 a.m. PT: Open Design Verification – Tao Liu, Google
    • Open source design verification is a key enabler for more collaborative flows in ASIC development. The RISC-V DV framework, based on an open source instruction set generator developed by Google, is enabling end-to-end verification flows for RISC-V CPUs. The generator supports all RISC-V ISA extensions and can be configured to generate highly random tests for various RISC-V processors. This talk will cover the fundamentals of the flow and recent developments, including the Bit-manipulation extension, the Vector extension, multi-core verification, a functional coverage model, a Python-based random instruction generator and more. Learn more about the technology and its latest developments in Tao Liu’s talk.
  • 11:25 a.m. PT: Enabling Fully Open Source And Continuous Integration-Driven Flows in ASIC and FPGA Development – Michael Gielda, Antmicro
    • ASIC and FPGA development is making rapid strides towards adopting fully open source, software-oriented approaches where large-scale collaboration and CI are possible. The developments include new frameworks such as UHDM and sv-tests aimed at improving SystemVerilog support in linting, formatting, synthesis and simulation, ongoing work in Verilator towards providing UVM support for open source verification, advances in the open source SymbiFlow toolchain which opens up FPGAs and ASIC prototyping to more software-oriented experimentation and collaboration. We also have the OpenROAD flow and SkyWater PDK tackling end-to-end open source ASIC design, and general progress in the open IP ecosystem – including new and exciting RISC-V and OpenPOWER cores – energizing the community. In this talk, Michael Gielda, VP Business Development at Antmicro, will highlight recent developments and explain the vision for open source chips that Antmicro and the CHIPS Alliance are spearheading.
  • 11:45 a.m. PT: The Emergence of the Open-Source AIB Chiplet Ecosystem – David Kehlet, Intel
    • The AIB chiplet ecosystem has built and powered on ten chiplets across seven process nodes, leveraging three different foundries, and contributing to two different product families.  Among the chiplet functions are AI acceleration, high-speed transceivers, optical interfaces, and high-speed ADCs/DACs.  The demand for AIB-enabled chiplets has spurred the release of an automated AIB PHY generator tool, which will help speed the next generation of AIB adopters to complete their projects.  Dave Kehlet, Research Scientist at Intel, will cover these topics, the new release of the AIB 2.0 specification with even higher bandwidth and lower power, and consideration of future layers needed as open source.
  • 12:05 p.m. PT: Chipyard: Design of customized open-source RISC-V SoCs – Borivoje Nikolic, UC Berkeley
    • Chipyard is an integrated SoC design, simulation and implementation environment for specialized compute systems. Chipyard includes configurable, composable, open-source, generator-based blocks that can be used in multiple stages of the hardware development flow, while maintaining design intent and integration consistency.  Chipyard is built around the open-source RocketChip generator, and targets cloud FPGA implementation and rapid ASIC implementation, allowing for continuous validation of physically realizable customized systems.
  • 12:25 p.m. PT: SweRV and OmniXtend Milestones – Zvonimir Bandic, Western Digital
    • The open source RISC-V SweRV cores have been increasingly adopted by organizations who prioritize a validated, production-worthy core. The latest updates on the first commercial dual-threaded, embedded SweRV EH2 will be highlighted. This talk will also discuss progress on the open cache-coherent memory fabric, OmniXtend. The breakthrough architecture uses low-cost Ethernet to connect memory to hosts. OmniXtend frees main memory from the CPU and enables next-generation memory-centric architectures to become a reality.
  • 12:45 p.m. PT: Chisel & FIRRTL for next-generation SoC designs – Jack Koenig, SiFive
    • The Chisel Working Group is dedicated to improving the productivity of digital design and verification to enable next-generation SoC designs based on open-source tools. Its namesake project, Chisel HDL, is a domain-specific language embedded in Scala that provides designers with modern programming techniques like object orientation, functional programming, parameterized types, and type inference. CWG also includes FIRRTL, the hardware compiler framework that enables decoupling design from implementation via target specialization and custom transformations. In this talk, you’ll learn about the exciting improvements to the various projects, as well as the adoption of formal governance.
  • 1:05 p.m. PT: Open ML Accelerator – Anoop Saha, Mentor
    • High Level Design or High Level Synthesis (HLS) helps users to design hardware at a higher level of abstraction and consequently, improve productivity and reduce costs. This methodology has gained traction in the design of custom application specific accelerators for machine learning. In this talk, Mentor’s Anoop Saha will go over the HLS ecosystem and the open source HLS components that help in building an accelerator. This ecosystem provides resources from IP libraries to full toolkits with real working designs.
  • 1:25 p.m. PT: Cloud Based Verification of RISC-V Processors – Dan Ganousis, Metrics
    • Open-source ISAs such as RISC-V allow users to modify/optimize processor IP for their SW applications. With that benefit, however, comes the responsibility of the user to fully verify the modified processor IP. Many ASIC design groups do not have the requisite processor verification skills and simulation capacity and have realized delayed schedules and budget overruns. Metrics CloudSim provides a simple and economical verification solution in the Cloud that provides scalable computing, elastic storage, and a SaaS business model.
  • 1:35 p.m. PT: OpenROAD open RTL-to-GDS update – Andrew Kahng, OpenROAD/UCSD, and Mohamed Kassem, Efabless 
    • The OpenROAD project seeks to develop an open-source RTL-to-GDS tool that generates manufacturable layout from a given hardware description in 24 hours,  with no human in the loop. By reducing cost, expertise and schedule barriers to hardware design, OpenROAD enables greater access to ASIC implementation and accelerates system innovation in hardware. This talk will give an update on OpenROAD’s status and near-term outlook. The OpenROAD tool is integrated around  an open-source, commercial-quality database and timing engine. A SkyWater 130nm tapeout was made by efabless.com in May, and DRC-clean layout generation in GLOBALFOUNDRIES 12nm was achieved in July.  Efabless will describe the “OpenLANE” flow that integrates much of OpenROAD’s tooling, and the striVe family of SoCs being taped out on SKY130. 
  • 1:50 p.m. PT: Open Source FPGA Tooling, Our Journey from Resistance to Adoption – Brian Faith, QuickLogic
    • Since the inception of the Programmable Logic industry, the vendor-supported FPGA development tools have been proprietary and closed source. Initially this was simply because that is the way things were done – there were no open standards. But over time, keeping them closed and proprietary enabled a level of influence and control over users. If a designer liked your software, they tended not to change, and that implicitly makes your user base captive. Open source FPGA tools have been around for a long time, being used primarily by hobbyists and in academia. However, over the past few years, an increasing number of new developers with software backgrounds have been gravitating towards open source FPGA development tools. With companies like Google and Antmicro, as well as several universities, making significant contributions to them, these tools are only going to keep getting better. In this talk, Brian Faith, CEO of QuickLogic, will share their journey from resistance to adoption and how they decided to take the leap into open source FPGA tooling, becoming the first programmable logic company to do so.
  • 2:00 p.m. PT: Summary Wrap-Up


For more information about the CHIPS Alliance and our activities over the past year, check out our 2020 Annual Report and our recent news. We look forward to seeing you at the Workshop! 

CHIPS SweRV Cores and the Open Tools Ecosystem

This post was originally published at Antmicro.

Antmicro’s open source work spans all parts of the computing stack, from software and AI, to PCBs, FPGAs and, most recently, custom silicon. We connect those areas with an overarching vision of open source tooling and methodology, and a software-driven approach that allows us to move fast and build future-centric solutions. Our partners and customers, many of whom work with us also in the context of organizations such as CHIPS Alliance and RISC-V International, share our approach to developing open systems. We were recently very happy to be invited to give a talk at the “Production grade, open RISC-V SweRV Core Solutions in CHIPS Alliance” meetup organized by Western Digital, where we presented our systems approach using the example of the open source tools ecosystem that targets their SweRV cores and which we are helping to develop.

What is SweRV?

SweRV is a family of production-grade RISC-V implementations originally developed by Western Digital, who have announced they are going to transition 2 billion cores in their products to RISC-V, showing they are fully committed to this open processor architecture. SweRV comes in three variants: the original EH1 and the recently released EH2 and EL2.

EH2 is the world’s first dual-threaded commercial, embedded RISC-V core designed for IoT and AI systems, boasting as much as 6.3 CoreMark/MHz in dual-threaded mode, at 1.2 GHz in 16nm. EL2, on the other hand, is a tiny, low-power but high-performance RISC-V core (with just 0.023 mm2 in 16nm, it runs at up to 600 MHz and 3.6 CoreMark/MHz) targeting applications such as state-machine sequencers and waveform generators. The best thing about them is that anybody can use and extend them for free, with more high performance cores being planned in the future.

But a CPU is only as good as the tooling around it, and Western Digital knows it. That is why the entire SweRV family was handed over to CHIPS Alliance, which now aims to facilitate using the cores in practical scenarios by maintaining a dynamic ecosystem of relevant tools. Many of the necessary building blocks are already in place, while others are still being developed with the active participation of Antmicro, the FOSSi community and others. In this article you will see examples of how you can work with SweRV in simulation, on an FPGA and in an ASIC context.

Getting started

To get started very quickly with no hardware whatsoever, you can simulate any of the SweRV cores in Verilator – one of the most successful and widely used open source projects in the EDA space, which we use extensively. Simply go to the relevant core’s GitHub space in the CHIPS Alliance organization (e.g. https://github.com/chipsalliance/Cores-SweRV for EH1) and simulate the RTL (which is written in SystemVerilog).

Verilator simulates the RTL with high performance by compiling it to an optimized model and running it, outperforming many proprietary alternatives. What is more, it is developing very fast thanks to the work of its maintainer, Wilson Snyder, the FOSSi community and CHIPS Alliance. Antmicro specifically has been working together with Western Digital and Google on adding SystemVerilog / Universal Verification Methodology support to enable design verification with Verilator for real-world use cases (see Looking into the future below).

Putting SweRV into an FPGA

If you want to get working on something more tangible, you might want to run SweRV on an FPGA – in a portable, vendor-neutral manner, of course.

To simplify interfacing with the various toolchains, simulators and other tools you might need depending on the platform you want to target, you can use Edalize – a Python utility that allows you to seamlessly work with different kinds of EDA tools, both for FPGA and ASIC design. It helps you maintain consistent workflows and pinpoint whether a specific bug is tool-related or pertains to your code. We’ve been adding quite a lot of new functionality to Edalize recently, while using it heavily as a default way to interface with various tools in our work, e.g. in sv-tests (again, see Looking into the future for more on that topic).

Edalize will help you use your SweRV-based design on the FPGA/board of your choice without having to remember and maintain tool-specific configurations and runtime flags.
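
To give a feel for what this looks like in practice, here is a minimal, hypothetical sketch of the Edalize Python API, using Verilator’s lint-only mode on a single source file; the project name, file name and options are illustrative assumptions rather than a complete SweRV build.

    # Hypothetical Edalize sketch: lint a SystemVerilog file with Verilator.
    # The project name, top-level module and file names are illustrative.
    import os
    from edalize import get_edatool

    work_root = "build"
    os.makedirs(work_root, exist_ok=True)

    edam = {
        "name": "swerv_lint",        # project name (illustrative)
        "toplevel": "top",           # top-level module (illustrative)
        "files": [
            {"name": os.path.abspath("top.sv"), "file_type": "systemVerilogSource"},
        ],
        "tool_options": {"verilator": {"mode": "lint-only"}},
    }

    backend = get_edatool("verilator")(edam=edam, work_root=work_root)
    backend.configure()   # generate the tool-specific project files
    backend.build()       # invoke Verilator

Switching to another tool is then mostly a matter of changing the string passed to get_edatool and the corresponding tool_options.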

Another great tool from the same author is FuseSoC, a Python-based package manager and set of build tools for HDL code. It enables you to reuse your FPGA IP across many designs and, of course, it supports the SweRV cores well. Apart from making it simple to reuse existing cores, it allows you to easily create compile-time or run-time configurations, port designs to new targets, set up configurable Continuous Integration and let other projects use your code. FuseSoC is also used by the SweRVolf reference SoC.

Thanks to integrations with other open source tools, such as Google’s Verible linter/formatter that we are also helping to develop, FuseSoC can be used to lint and format SystemVerilog – we have recently written an article about this.
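
As a hypothetical illustration of the FuseSoC workflow, the snippet below registers the SweRVolf core library and launches one of its targets from Python. The library URL points at the SweRVolf repository, but the short core name and the target are illustrative placeholders; use fusesoc core list and the SweRVolf documentation to find the exact values.

    # Hypothetical FuseSoC sketch driven from Python; the core name and target
    # are placeholders; check `fusesoc core list` for the exact core VLNV.
    import subprocess

    # Register the SweRVolf core library (FuseSoC fetches it into the workspace).
    subprocess.run(
        ["fusesoc", "library", "add", "swervolf",
         "https://github.com/chipsalliance/Cores-SweRVolf"],
        check=True,
    )

    # Build and run one of the library's targets, e.g. a simulation target.
    subprocess.run(["fusesoc", "run", "--target=sim", "swervolf"], check=True)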

Incorporating SweRV into an SoC

A core alone, however, is not enough to get any practical work done. If you want to build a System-on-Chip, you should definitely look at LiteX – a SoC generator that allows you to put SweRV to work in an actual use case. LiteX is an IP library and SoC builder that is portable between various FPGAs and can turn SweRV into a full-blown system. It offers a number of IP blocks and other building blocks such as Ethernet, RAM, UART and SATA controllers, which you can configure to work with different kinds of CPUs, and its initial SweRV support enables you to quickly build plug & play SoC systems around the core. Antmicro is heavily involved in work to build a robust ecosystem around it. LiteX can run the Zephyr RTOS – which is also supported on SweRV – and, with a suitable CPU, it can run Linux as well. The LiteX SoC ecosystem can also be used together with another tooling project we heavily contribute to, SymbiFlow – the open source FPGA flow.
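
To get a quick feel for LiteX without any hardware, you can use its Verilator-based simulation; the sketch below launches it from Python. Note that vexriscv is used here only as a stand-in CPU; substitute the identifier of the SweRV integration once you have confirmed it in the CPU list of your LiteX checkout.

    # Hypothetical sketch: launch LiteX's Verilator-based simulation from Python.
    # "vexriscv" is a stand-in CPU; replace it with the name under which the
    # SweRV support is registered in your LiteX installation.
    import subprocess

    subprocess.run(["litex_sim", "--cpu-type=vexriscv"], check=True)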

Simulating, experimenting, testing

If you want to use SweRV to build a full production system and leverage the flexibility to customize that comes with RISC-V and open tooling, you will most likely need to experiment with the architecture and co-develop hardware and software. This is where Renode can be of immense use thanks to its architectural exploration, simulation, testing and debug capabilities for complex systems: entire SoCs, boards and systems of boards. All you need to do is download Renode and put together a few configuration files – it even comes with many demos and pre-compiled examples for various platforms. Renode provides initial support for SweRV EH1 (with more to come) as well as extensive support for LiteX, which will let you quickly build and simulate entire open source SoCs. On top of that, Renode enables hardware/software co-simulation with Verilator, letting you build your custom IP and test its HDL as-is while keeping the rest of the system simulated in Renode to save development time.
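
Getting a simulation going is typically a matter of pointing Renode at a script; as a hypothetical example, the snippet below launches Renode from Python with a platform script of your own. The script name is an illustrative placeholder; the demos bundled in Renode’s scripts/ directory can be passed in exactly the same way.

    # Hypothetical sketch: start Renode from Python with a platform/demo script.
    # "my_swerv_demo.resc" is a placeholder for your own script or one of the
    # bundled demos.
    import subprocess

    subprocess.run(["renode", "my_swerv_demo.resc"], check=True)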

Building and verifying a production-grade ASIC

Assuming SweRV fits your use case, you may eventually want to build and verify a production-grade ASIC which includes one of those CPUs. As part of the CHIPS working groups focused on cores and tools, the developers of SweRV, in collaboration with Google, Antmicro and others, are building an entirely open source design verification ecosystem around the core family, including projects such as riscv-dv and the Whisper ISS. The former is a complete SV/UVM flow built around an instruction generator for RISC-V processor verification, which allows you to perform various tests on SweRV-based designs. It features a number of test suites dedicated to different functionalities and runs an instruction set simulator (ISS) and an RTL simulator in tandem, comparing the results. Whisper is an ISS used for verification of SweRV implementations; it can be run in an interactive mode, allowing the user to single-step RISC-V code and inspect or modify the RISC-V registers and system memory, or in lock-step with an RTL simulation, e.g. in Verilator.
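
For a flavour of the riscv-dv flow, here is a hypothetical invocation driven from Python. The target, test name, ISS and iteration count are illustrative values; generating the test programs additionally requires a supported SystemVerilog simulator, and the options need to be matched to the SweRV configuration under test.

    # Hypothetical riscv-dv sketch: generate random instruction tests and run
    # them on an ISS. All flag values are illustrative; run.py offers many more
    # options (RTL co-simulation, custom targets, test lists, etc.).
    import subprocess

    subprocess.run(
        ["python3", "run.py",
         "--target", "rv32imc",                     # ISA/target configuration
         "--test", "riscv_arithmetic_basic_test",   # one of the bundled tests
         "--iss", "whisper",                        # instruction set simulator
         "--iterations", "1"],
        check=True,
    )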

Looking into the future

There is ongoing work in CHIPS Alliance and the broader open source community to rapidly transform ASIC development workflows to fully embrace open source. One such effort is sv-tests, a SystemVerilog test suite designed to stress-test various open source tools against different kinds of SystemVerilog designs, with a results table showing detailed coverage. SweRV, being written in SystemVerilog, is of course one of the suite’s test targets.

The sv-tests suite informs some of our ongoing open source work on ASIC tooling, one of the goals of which is to enable open source development and verification of SystemVerilog designs. An interesting tool to look at in this space is Surelog – a full-blown SystemVerilog parser developed in collaboration between Google and Antmicro, oriented towards simulation and UVM. We are working to plug it in as the SystemVerilog front end of various open source tools using a framework called UHDM (Universal Hardware Data Model), which will enable code reuse between tools with similar needs.
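
As a small, hypothetical example of where this is heading, Surelog can already be asked to parse and elaborate SystemVerilog sources from the command line; the snippet below drives it from Python on an illustrative file list.

    # Hypothetical Surelog sketch: parse and elaborate SystemVerilog sources,
    # producing Surelog's internal model that UHDM-based tools can consume.
    # The file names are illustrative.
    import subprocess

    subprocess.run(["surelog", "-parse", "top.sv", "pkg.sv"], check=True)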

With the recent release of the world’s first open source PDK, which we are proud to have participated in, and the progress being made in the OpenROAD project, which aims at a fully open, automated flow for chip design, it looks like a future in which a SweRV-based SoC can be designed, verified and manufactured using open tools is not that far off.

Summary

Apart from being an expanding, production-grade family of cores, SweRV taps into a rich and dynamic ecosystem of tools that we are helping to build. CHIPS Alliance is aiming to revolutionize the way developers work with ASICs and FPGAs by enabling a software-driven approach to silicon, which aligns perfectly with Antmicro’s strategy and long-term objectives. With extensive experience in RISC-V-powered open source work, we offer high-quality services that our customers can use to build on top of SweRV using these new collaborative methodologies and tools. Reach out to Antmicro at contact@antmicro.com to find out how we can assist you with your next RISC-V-centered project.

Open Source Process Design Kit from Google, SkyWater Technologies and Partners Released

By Blog

This post was originally published at Antmicro.

The ASIC design and manufacturing flow has long been dominated by proprietary tools and processes. The growing complexity of chip-building has been reinforcing the claim that “hardware is too hard to be open source”, as the cost and time needed to build an ASIC have kept smaller, more agile, software-oriented teams and individuals away from the hardware domain. Thus, ASICs have not been able to benefit from the enthusiasm and collaboration which have been fuelling software development for decades now. Thanks to the continued effort of many entities, which Antmicro is very proud to be among, this is now changing quickly.

RISC-V: Openness-driven innovation

The first shift in the walled-garden, proprietary chip design landscape came with the creation of the RISC-V Foundation in 2015, centered around the open RISC-V ISA. Antmicro has been on board as a Platinum Founding Member of the Foundation (now several hundred members strong and transitioning into a Swiss-based entity called RISC-V International) since the very beginning, as it reflected our belief that an open source approach can – and is bound to, eventually – revolutionize all areas of computing, even the less obvious ones.

RISC-V proved ASIC design can be a collaborative process, with players big and small working together to complement each other’s strengths, not only in developing the ISA but also many of the tools needed to make it practically useful. For example, Microsemi worked with SiFive to provide the SoC complex at the heart of their new and exciting PolarFire SoC FPGA, and then turned to Antmicro to provide a simulation environment – using our open source Renode Framework – to make development possible before the SoC hits the market later this year. The OpenTitan project, driven by key RISC-V adopters Google and Western Digital together with the UK not-for-profit lowRISC, strives to build a more transparent, trustworthy, high-quality reference design and integration guidelines for silicon Root of Trust chips. Such examples abound in the RISC-V world, but the uncore, design tools, verification and other parts of the ecosystem have mostly remained closed.

Enter CHIPS

Established in 2019, CHIPS Alliance takes the open, collaborative aspect of RISC-V even further. CHIPS wants to generate and integrate fully open source, high-quality IP and tooling for ASIC design; the organization extends beyond cores and specifications and acknowledges the importance not only of the result but of the process itself. Thus, the aim is to make both ASICs and the ASIC design process open source all the way. Why? Again, a lesson learned from software: if you open up to collaboration, adaptation and change on all levels, the long-term results will be surprisingly good.

CHIPS has been home to such important projects as the Chisel HDL, the Rocket core generator and related tools, the SweRV cores and the AIB interconnect. There is work under way to enable fully open source SystemVerilog/UVM support in tools like Verilator and Yosys (with some milestones, such as fully open source linting, formatting and synthesis of SystemVerilog code, already accomplished), opening the door to more open source collaboration around design verification, which constitutes the highest cost in modern chip design.

Also in the tools area, the very ambitious OpenROAD project – itself a CHIPS Alliance member – is a DARPA-backed effort aiming to create a fully open source, fast, automated digital design flow. If you want to see what open source, automated chip design might look like in the future, see OpenROAD’s excellent ChipKit tutorial from ISCA 2020.

Aggregating those activities, a vastly different landscape begins to emerge – one where chip design can be innovated upon at various levels, and teams can go back and forth between hardware and software optimizations for new use cases such as machine learning without NDAs and costly licences. But – until now – one element was notably missing.

First ever open source PDK

We are excited to announce Antmicro’s participation in yet another historic first in the area of semiconductor process technology. In a project led by Google and SkyWater Technology, in collaboration with partners including Antmicro, Blue Cheetah, efabless and numerous universities, an open source SkyWater PDK (Process Design Kit) for the 130 nm MOSFET fabrication process, along with related sources, is being made available. This development greatly lowers the cost of entry into chip manufacturing and paves the way for even more exciting collaborations in the open source silicon domain.

For some background, a PDK is a set of data files and tools that model a specific fabrication process of a given foundry and are used with EDA (Electronic Design Automation) tools in the chip design flow. PDKs have traditionally been closed – to the point where some would say it’s impossible to make them open! This collaboration, in which Antmicro worked together with Google and efabless to convert the PDK data for the public release, is an important step towards truly open source chips. The 130 nm process is a mature technology that is useful for a range of applications, especially in the area of microcontroller development and research, as well as mixed-signal embedded designs and other use cases which combine digital and analog circuits. The SKY130 technology stack consists of:

  • 1 level of local interconnect
  • 5 levels of metal
  • Inductor-capable
  • High sheet rho poly resistor
  • Optional MiM capacitors
  • Includes SONOS shrunken cell
  • Supports 10V regulated supply
  • HV extended-drain NMOS and PMOS

SkyWater is an American technology foundry accredited by the US Department of Defense, which offers custom integrated circuit design and manufacturing services. It is predicted that the launch of the open source SKY130 process node will be followed by other, more advanced nodes, ultimately enabling more advanced processor applications, including ones that are Linux-capable.

The inaugural talk by Tim Ansell

On Tuesday, June 30 at 16:00 GMT, Google’s Tim Ansell will give a talk at the FOSSi Dial-up meeting, presenting a thorough overview of the technical details of the PDK, as well as outlining the project’s goals and its roadmap. The event will be livestreamed on YouTube and will be followed by a Q&A session, so tune in to find out more about this historic step towards an open, accessible and collaborative chip-making process.

Semiconductor Engineering: About The SweRV Core EH2

By Blog

In mid-May, CHIPS Alliance announced the open sourcing of the SweRV Core EH2 and SweRV Core EL2 designed by Western Digital. These cores, as well as the earlier EH1, are now supported by Codasip’s SweRV Core Support Package, which provides all of the components necessary to design, implement, test, and write software for a SweRV Core-based system-on-chip. But what is the SweRV Core EH2?

The SweRV Core EH1 was the first to be released through CHIPS Alliance and is aimed at high-end embedded applications, including Western Digital’s flash controllers and SSDs. It is a dual-issue, superscalar, high-performance core with 9 pipeline stages. The EH2 is an exciting further development aimed at delivering even more performance for IoT, artificial intelligence and data-intensive embedded applications.

To read more, please check out the article at Semiconductor Engineering written by Roddy Urquhart at Codasip: https://semiengineering.com/about-the-swerv-core-eh2/.