
Advanced Co-simulation with Renode and Verilator: PolarFire SoC and FastVDMA


This post was originally published at Antmicro.

Co-simulating HDL has been possible in Renode since the 1.7.1 release, but the functionality – critical for hardware/software co-development as well as FPGA use cases – is constantly evolving based on the needs of our customers like Google and Microchip as well as our work in open source groups including CHIPS Alliance and RISC-V International. To quickly recap, by co-simulation we mean a scenario where a part of the system is simulated in Renode but some specific peripheral or subsystem is simulated directly from HDL, e.g. Verilog. To achieve this, Renode integrates with Verilator, a fast and popular open source HDL simulator, which we are helping our customers adopt as well as expanding its capabilities to cover new use cases. Peripherals simulated directly from HDL are typically called Verilated peripherals.

Why co-simulate?

Co-simulation is a highly effective approach to testing IP cores in complex scenarios. HDL simulation is typically much slower than functional simulation, so to cut down on development turnaround time you can partition your design into a fixed part that you can simulate fast using Renode and the IP you are developing that you can co-simulate using Verilator. Renode offers a lot of ready-to-use components – for which you do not need to have a hardware description – which can be put together to provide a complete system to test new IP core designs.

Co-simulation in Renode so far

For more than a year now, Renode has supported using AXI4-Lite and Wishbone buses for the Verilated peripherals’ communication with the rest of the system. As the Verilated UART examples included with Renode show, these buses can be used both to receive data from and to send data to the peripheral. Additionally, because Renode can simulate external interfaces, you can easily open a terminal window for such a UART (with the showAnalyzer command during simulation), and the whole interaction is driven by the UART’s TX and RX signals.

Diagram depicting FastVDMA co-simulation

The riscv_verilated_liteuart.resc script shows the interaction between the system based on the VexRiscv CPU implemented in Renode and the UART based on an HDL model simulated with Verilator that together enable users to work with Zephyr’s shell interactively.

In these examples, however, all communication is initiated from Renode’s side, by either reading from or writing to the UART. There is no way for the peripheral to trigger the communication, which limited the use cases in which co-simulation could be applied.

New co-simulation features

Thanks to our work with Microchip and Google, new features are available in the VerilatorPlugin since Renode 1.12; the most important change is the introduction of the ability for a Verilated peripheral to trigger communication with other elements of the system. The Renode API was expanded to accommodate the new features: there are now two new actions in the co-simulation protocol which enable a Verilated peripheral to initiate communication with Renode by sending data to or requesting data from the system bus.
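
To illustrate the direction of this new flow, here is a deliberately simplified, hypothetical sketch: the struct and function names below are invented for this post (they are not the actual VerilatorPlugin API) and only show, conceptually, a peripheral-initiated access being forwarded to the Renode system bus.

#include <cstdint>
#include <functional>
#include <iostream>

// Hypothetical description of a single peripheral-initiated access.
struct BusAccess {
    bool isWrite;      // true: send data to the system bus, false: request data from it
    uint64_t address;
    uint64_t data;     // payload for writes, filled in with the result for reads
};

// Hypothetical callback standing in for the Renode side of the connection.
using SystemBusHandler = std::function<void(BusAccess&)>;

// Called when the verilated controller (e.g. a DMA engine) drives a transaction.
void forwardToRenode(BusAccess& access, const SystemBusHandler& systemBus) {
    systemBus(access);  // Renode performs the access on its system bus
}

int main() {
    // Stand-in for Renode: pretend every read returns 0x42.
    SystemBusHandler renodeStub = [](BusAccess& a) {
        if (!a.isWrite) a.data = 0x42;
    };

    BusAccess read{false, 0x80000000, 0};
    forwardToRenode(read, renodeStub);
    std::cout << "read 0x" << std::hex << read.data << "\n";
}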

Support for the AXI4 bus was also added; this is different from the AXI4-Lite bus, which only implements a subset of AXI4 features. The Verilated peripheral can act on the AXI4 bus either as a controller that initiates accesses (and therefore uses the new Renode API actions) or as a peripheral (in the AXI4 bus sense) that only responds to requests.

How it works

Creating a Renode agent (which handles an HDL model’s connection with Renode) with the role chosen for your peripheral is as simple as including an appropriate header and connecting signals. For example, FastVDMA acts as a controller on the AXI4 bus with the following code in the file creating its Renode agent:

#include "src/buses/axi-slave.h"
VDMATop *top;

void Init() {
    AxiSlave* slaveBus = new AxiSlave(32, 32);
	
	
    slaveBus->aclk = &top->clock;
    slaveBus->aresetn = &top->reset;

    slaveBus->awid = &top->io_write_aw_awid;
    slaveBus->awaddr = (uint32_t *)&top->io_write_aw_awaddr;

    // Connecting the rest of the signals to the bus

Another new feature is the ability to connect the Verilated peripheral through multiple buses. For example, it is now possible to have a peripheral connected to both AXI4 and AXI4-Lite buses, which enables even more complex HDL models to be tested with Renode. This is the case with FastVDMA, which is connected as follows:

#include "src/buses/axi-slave.h"
#include "src/buses/axilite.h"

RenodeAgent *fastvdma;

void Init() {
	AxiLite* bus = new AxiLite();
	AxiSlave* slaveBus = new AxiSlave(32, 32);

	// Initializing both buses’ signals

	//=================================================
	// Init eval function
	//=================================================
	bus->evaluateModel = &eval;
	slaveBus->evaluateModel = &eval;

	//=================================================
	// Init peripheral
	//=================================================
	fastvdma = new RenodeAgent(bus);
	fastvdma->addBus(slaveBus);

	slaveBus->setAgent(fastvdma);
}

FastVDMA

Fast Versatile DMA (FastVDMA) is a Direct Memory Access controller designed with portability and customizability in mind. It is an open source IP core developed by Antmicro in 2019 and written in Chisel. The range of supported buses makes it very versatile: it can be controlled through an AXI4-Lite or Wishbone bus, while the data can be transmitted through an AXI4, AXI4-Stream or Wishbone bus. Since its inception it has been used in a number of projects, e.g. bringing a GUI to krtkl’s snickerdoodle. For more information, head to the original blog note about FastVDMA.

Expanding the PolarFire SoC ecosystem

Through Antmicro’s long-term partnership with Microchip, Renode is a vital part of the PolarFire SoC developer experience. Microchip customers are using the Renode integration with the vendor’s default SoftConsole IDE to develop software for the PolarFire SoC platform and its development board, the Icicle Kit.

PolarFire Icicle with Antmicro's HDMI breakout board

With the latest co-simulation capabilities, developers are able to explore a combination of hard and soft IP, developing their FPGA payload and verifying it within Renode. This allows them to work on advanced projects, from graphic output support similar to the one we implemented for snickerdoodle with FastVDMA (possibly employing our open-hardware HDMI board for Icicle Kit), through advanced FPGA-based crypto solutions, to design-space exploration and pre-silicon development, with the debugging and testing capabilities native to Renode.

To see how Renode allows you to simulate complex setups with multiple co-simulated IP blocks, see our tests of Verilated FastVDMA plus Verilated RAM.

It’s worth noting that since these blocks are isolated and interconnected only via Renode, you can easily prepare setups in which peripherals use different buses to connect. This gives you the ability to focus on the interesting details and to assemble your systems from building blocks more easily.

Commercial support

If you’d like to speed up the development of your ASIC or FPGA solution using the advantages of advanced co-simulation, Antmicro offers commercial support and engineering services around Renode, Verilator and other tools, as well as FPGA and ASIC design, hardware, software and cloud services. We work with organizations like CHIPS Alliance and RISC-V International to enable a more software-driven hardware ecosystem based on open source. For a full list of our open source activity, head to our open source portal.

Progress on Building Open Source Infrastructure for System Verilog


By Rob Mains, General Manager of CHIPS Alliance

SystemVerilog is a rich hardware description and verification language that is seeing increased usage in industry. In the second Deep Dive Cafe Talk by CHIPS Alliance on July 20, Henner Zeller, a software developer with Google, provided an excellent in-depth technical talk on building out an open source tooling ecosystem around SystemVerilog to provide a common framework that can be used by both functional simulation applications and logic synthesis. In case you missed the live presentation, you can watch it here.

Henner started the talk by providing background and motivation for the work on expanding the tooling available for SystemVerilog. He noted that SystemVerilog is used by a large cohort in industry, but there is little awareness that open source tools supporting SystemVerilog exist. In terms of language robustness and complexity, SystemVerilog is to Verilog roughly what C++ is to C. As such, developing a framework and a metric-based scorecard to measure the quality of applications supporting the language is important. A key aspect of the adoption of SystemVerilog and the associated open source tooling is that it works out of the box. To this end, the sv-tests framework was created by Antmicro and Google – and subsequently transferred to CHIPS Alliance – as a unit test system for SystemVerilog tools, assessing the quality of the parsing and handling of different attributes of the language. To incentivize participation, a leaderboard has been created to show the quality of results.

The talk also provided an overview of the different parsers Surelog, sv-parser and Verible, along with a discussion of their positive attributes and limitations. This was followed by a presentation of UHDM (Unified Hardware Data Model), a data model that allows transferring parsed and elaborated SystemVerilog between a frontend and simulation or synthesis software.

An example integration of the UHDM-based flow, developed collaboratively by CHIPS Alliance members Google and Antmicro, was presented next. The development aims to integrate the UHDM handling capability with the Verilator simulator and the Yosys synthesis tool. The work is done publicly in CHIPS Alliance’s GitHub repositories. The UHDM integration tests repository collects all the pieces of the system and allows running the integration tests.

The development and measurement of different applications that parse SystemVerilog and robustly represent it in a common data model will do much to ensure a solid framework for the construction of different design automation tools.

The next CHIPS Alliance Deep Dive Cafe is on Tuesday, Aug. 10 at 4 p.m. PT. Experts from Intel and Blue Cheetah will present on PHYs, protocols, EDA and heterogeneous integration, as well as the latest developments in die-to-die interfacing. You can register for the event here.

What You Need to Know About Verilator Open Source Tooling


By Rob Mains, General Manager of CHIPS Alliance

Verilator is a high performance, open source functional simulator that has gained tremendous popularity in the verification of chip designs. The ASIC development community has widely embraced Verilator as an effective, often even superior alternative to proprietary solutions, and it is now the standard approach in RISC-V CPU design, as the community has worked to provide Verilator simulation capabilities out of the box. CHIPS Alliance and RISC-V leaders Antmicro and Western Digital have been collaborating to make Verilator even more useful for ASIC design purposes, working towards supporting industry-standard verification methods in a completely open source flow.

On June 15 at the inaugural CHIPS Alliance Deep Dive Cafe Talk, Karol Gugala, an engineering manager with Antmicro, provided an excellent in-depth technical talk on extending Verilator towards supporting universal verification methodology, UVM. The event was well attended, and is now also available on YouTube.

Karol started the talk by providing motivation for the improvement and expansion of the SystemVerilog design environment, and in particular the importance of UVM as a common verification framework. He covered the current limitations of Verilator in its handling of UVM, and the goal of the community to more robustly support UVM as well as SystemVerilog as a design language, both of which are efforts Antmicro is deeply involved with, together with other partners, as part of its work within CHIPS Alliance. In the talk, Karol provided in-depth details of the work that has been done to handle UVM.

The focus of the work summarized in Karol’s talk was the comparison of the static event scheduling already existing in Verilator with the dynamic scheduling required for event-driven simulation, specifically event and time awareness, which necessitated implementing a new event scheduling approach in Verilator. The new scheduler is an option that the user can activate with a runtime switch. Future work includes clocking blocks, assertions, class parameters, integration with UHDM, and further runtime optimization. Two key goals are full UVM support for the SweRV RISC-V cores and for OpenTitan – also based on a RISC-V implementation, Ibex. A detailed overview of how SystemVerilog statements are evaluated as members of different processes was provided. In the initial implementation, each process runs in a separate thread, with processes and threads communicating via standard IPC synchronization mechanisms. Scheduling of the threads / processes is left to the OS scheduler, and further work in the project will include more involved scheduling techniques.

You can read more about dynamic scheduling in Verilator at Antmicro’s blog, and test all the new features of Verilator with examples of designs and some GitHub Actions-based CI simulations (based on Antmicro’s public repo).

The next talk in the CHIPS Deep Dive Cafe Series is the AIB Deep Dive + Opportunities Presented By Intel on Tuesday, Aug. 10 at 8 a.m. PT. You can register for the event here.

Efabless Launches chipIgnite with SkyWater to Bring Chip Creation to the Masses

  • Program includes a pre-designed carrier chip and automated open source design flow from Efabless
  • SkyWater’s open source SKY130 process is the first node to be used to fabricate chips for the program
  • Initiative removes access barriers by significantly reducing cost and the need for deep semiconductor experience to design chips

Efabless, a community chip creation platform, today announced the launch of its new chipIgnite program to bring chip design and fabrication to the masses and a collaboration with SkyWater Technology for the first node supported in the program. The chipIgnite program expands upon the SKY130-based open source chip manufacturing program sponsored by Google and supports private commercial designs that include non-open source IP. This initiative represents another step forward in the industry to broaden access to chip design by giving people the ability to more easily create and fabricate chips.

The first shuttle in the chipIgnite program will support fabrication of student projects as part of the EE272B course in the Electrical Engineering department at Stanford University for senior undergraduate and graduate students.

The new program also has industry support from organizations including QuickLogic and the CHIPS Alliance.

“CHIPS Alliance is a major champion of open source hardware design and associated design automation tools. I am excited to see the chipIgnite program offered by Efabless to include many different collaborative IP developers to prove new ideas. The platform alleviates the barriers to entry into chip design and allows for ready exploration of many concepts,” said Rob Mains, general manager of CHIPS Alliance.

Read the full announcement at Efabless: https://info.efabless.com/press-release-efabless-launches-chipignite-with-skywater-to-bring-chip-creation-to-the-masses/

Antmicro’s ARVSOM RISC-V Module Announced


This post was originally published at Antmicro.

We are excited to announce the ARVSOM – Antmicro’s fully open source, RISC-V-based system-on-module featuring the StarFive 71x0 SoC. Using the RISC-V architecture, which Antmicro has been heavily involved in since the early days as a Founding Member of RISC-V International, the SoM is going to enable unprecedented openness, reusability and functionality across different verticals.

ARVSOM

Open source-driven innovation

Since its conception, Antmicro has been enabling its customers to tap into the technological freedom that is inherent in open source. We’ve been developing cutting-edge industrial and edge AI systems using vendor-neutral and customizable solutions as well as actively developing and contributing to the tooling ecosystem, improving processes and unlocking even more system design options – often in alignment with our efforts driving RISC-V International and CHIPS Alliance. Targeted at a range of different use cases and enabling unmatched flexibility, ARVSOM is the latest product and example of Antmicro’s expertise in creating practical and easily modifiable technologies using open source.

Based on StarFive 71x0

Sitting at the heart of ARVSOM is the StarFive 71x0 system-on-chip – the first Linux-capable RISC-V SoC intended for mainstream, general-purpose edge applications, expected to be used in millions of devices built on the open source RISC-V ISA. The SoC is also used in the BeagleV StarLight development platform (GitHub repo), which has just been shipped to beta users, including Antmicro, who will be verifying its robustness and reliability in real-life use cases while the development community focuses on further expanding RISC-V software support. The SoC features a dual-core U74 CPU from our partner and RISC-V pioneer SiFive, two AI accelerators (the open source NVDLA and SiFive’s Neural Network Engine), 1 MIPI DSI and 2 MIPI CSI interfaces, HDMI, Gigabit Ethernet, dual ISP and USB 3.0, while the production version of the SoC, to be released later in the year, will also have PCI Express.

Compatible with Scalenode for server room applications

The SoM will be compatible with our server-oriented Scalenode platform, released earlier this month, originally designed as a baseboard for the Raspberry Pi 4 Compute Module. Used together, Scalenode and ARVSOM will enable building easily scalable and flexible infrastructures consisting of clusters of small RISC-V-based compute units, with as many as 18 Scalenode boards fitting in a 1U rack. ARVSOM will also work with other Raspberry Pi CM 4 baseboards, opening the door to a host of solutions and custom devices to be built with it right away.

ARVSOM in Renode

You can already start software development with our ARVSOM and the BeagleV StarLight SBC thanks to support in Renode, our open source simulation framework. The two have joined an array of platforms and sensors that can be simulated in our open source framework for system development and testing. You can freely co-develop hardware and software in Renode’s virtual environment, with the code behaving exactly the way it would on real hardware. Renode can also be used throughout the development lifecycle to enable continuous testing and integration, as well as prototyping new solutions based on RISC-V, as it features extensive support of this open source ISA. You can read more about the latest updates in Renode in the version 1.12 announcement blog note.

Our SoM is in development and will be gradually unveiled throughout the year as the StarFive 71x0 SoC becomes more widely available.

ARVSOM can be customized and integrated into your project. We have been helping our customers embrace the groundbreaking RISC-V architecture and enable them to reap the benefits of having an advanced system based on vendor-neutral, future-proof and robust technologies. Get in touch with us at contact@antmicro.com if you need a modern product that you have full control over.

Dynamic Scheduling in Verilator – Milestone Towards Open Source UVM


This post was originally published at Antmicro.

UVM is a verification methodology traditionally used in chip design which has historically been missing from the open source landscape of verification-focused tooling. While new, open source approaches to verification have emerged, including the excellent Python-based Cocotb (which we also use and support) maintained by the FOSSi Foundation, not everyone can easily adopt them, especially in long-running projects and existing codebases that use a different verification approach. Leading the efforts towards comprehensive UVM / SystemVerilog support in open source tools, we have been gradually completing milestones, getting closer to what will essentially be a modular, collaboration-driven chip design methodology and workflow. Some examples of our activity in this space include enabling open source synthesis and simulation of the Ibex CPU in Verilator/Yosys via UHDM/Surelog, and the most recent joint project with Western Digital in which we have developed dynamic scheduling in Verilator.

Verilating non-synthesizable code

The work is part of a wider, long-term effort to develop UVM / SystemVerilog support in open source tools, which, besides Western Digital, has involved other fellow CHIPS Alliance members such as Google. To get closer to this goal, this time we have introduced an improvement to Verilator, an open source tool that is great at what it was originally designed to do, i.e. simulating synthesizable code, but which has not allowed its users to convert non-synthesizable code and run fully fledged testbenches written in Verilog/SystemVerilog. The biggest challenge in opening up that possibility turned out to be Verilator’s original scheduler, which runs everything sequentially; this is not a bad approach in general, but it can only get you so far without actually executing the code in parallel.

Dynamic scheduling in Verilator

To run proper UVM testbenches in Verilator, we had to be able to properly handle language constructs specifically designed for use in simulation. Those features include delay statements, forks, wait statements and events. To achieve all of this, we needed to add a proof-of-concept dynamic scheduler to Verilator.

How dynamic scheduling in Verilator works

Instead of grouping code blocks of the same type as the original scheduler does, the dynamic scheduler separates the code into many runnable blocks and then creates and spawns a process for each of them (when they are due to be executed). To handle this, we create a new process, which translates to a new high-level thread, for each initial block, always block, forever block, each block in a fork statement, etc. This allows us to pause the execution of one block and continue the execution of other blocks (a conceptual sketch of this mechanism follows the list below). Blocks can be paused when they encounter simulation control statements like:

  • The wait statement (e.g. wait (rdy == 1); or wait (event_name.triggered))
  • Waiting for an event (e.g. @event_name;)
  • A delay (#10;)
  • Or even a join statement after a fork (join or join_any; join_none is also supported, however it is not used to control the execution per se)
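
To make this more concrete, below is a minimal conceptual sketch in C++, written for this post rather than taken from Verilator’s sources, of the thread-per-block idea: each runnable block becomes its own thread, and a delay such as #10 suspends that thread until simulated time has advanced far enough.

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Simulated time shared by all process threads.
class SimTime {
public:
    // Models a `#ticks` delay: block the calling process thread until
    // simulated time reaches the requested wake-up point.
    void delay(uint64_t ticks) {
        std::unique_lock<std::mutex> lock(m_);
        const uint64_t wakeAt = now_ + ticks;
        cv_.wait(lock, [&] { return now_ >= wakeAt; });
    }

    // Called by the scheduler loop to advance simulated time and wake
    // any processes whose delays have elapsed.
    void advance(uint64_t ticks) {
        {
            std::lock_guard<std::mutex> lock(m_);
            now_ += ticks;
        }
        cv_.notify_all();
    }

    uint64_t now() {
        std::lock_guard<std::mutex> lock(m_);
        return now_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    uint64_t now_ = 0;
};

int main() {
    SimTime time;
    std::atomic<int> finished{0};
    std::vector<std::thread> processes;

    // Two "blocks", each spawned as its own process thread.
    processes.emplace_back([&] {
        time.delay(10);  // like `#10;`
        std::cout << "block A resumed at t=" << time.now() << "\n";
        finished++;
    });
    processes.emplace_back([&] {
        time.delay(20);  // like `#20;`
        std::cout << "block B resumed at t=" << time.now() << "\n";
        finished++;
    });

    // Grossly simplified scheduler loop: keep advancing simulated time
    // until every process thread has run to completion.
    while (finished.load() < 2) {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        time.advance(5);
    }
    for (auto& t : processes) t.join();
}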

None of the statements listed above are supported by Verilator’s original scheduler, so to be able to dynamically control the execution of the dynamically spawned sub-processes, a way to react to changes in signal/event states needed to be implemented. We did it by wrapping the primitive types used for storing signal values in objects at a higher abstraction level, which now allows us to attach other processes to monitor the values of specific signals and execute an arbitrary callback when a signal changes. Note that this is a mechanism used internally to implement the flow control statements and is not visible to the user writing Verilog code.
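
As a rough illustration of that wrapping idea (a sketch written for this post, not the code Verilator actually generates), a monitored signal type in C++ might look like this:

#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

// Wraps a raw signal value and notifies registered callbacks on every change,
// which is what allows a suspended process to be woken up by a signal or event.
template <typename T>
class MonitoredSignal {
public:
    explicit MonitoredSignal(T initial) : value_(initial) {}

    // Attach an observer, e.g. a callback that resumes a waiting process.
    void onChange(std::function<void(T)> callback) {
        callbacks_.push_back(std::move(callback));
    }

    // Writing a new value triggers all registered callbacks.
    void write(T newValue) {
        if (newValue == value_) return;
        value_ = newValue;
        for (auto& cb : callbacks_) cb(value_);
    }

    T read() const { return value_; }

private:
    T value_;
    std::vector<std::function<void(T)>> callbacks_;
};

int main() {
    MonitoredSignal<uint8_t> rdy(0);
    // Conceptually what a process blocked on `wait (rdy == 1);` relies on.
    rdy.onChange([](uint8_t v) {
        if (v == 1) std::cout << "rdy went high, waiting process resumes\n";
    });
    rdy.write(1);
}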

One additional thing that had to be rewritten and adapted to the dynamic scheduler was the way non-blocking assignments are handled.

With a scheduler that is aware of the simulation time and other flow control statements, the approach originally used by Verilator could no longer be applied. Devising a way to properly schedule assignments and execute them at the right moment was a crucial step in getting all of this to work.
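
A minimal sketch of that deferral idea (an assumed illustration, not Verilator’s actual implementation): non-blocking assignments are queued during evaluation and committed together afterwards, so every right-hand side is read before any target is updated.

#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

// Collects pending non-blocking assignments and applies them at the end of
// the current evaluation step.
class NonBlockingQueue {
public:
    // Record `target <= value;` without applying it yet.
    void schedule(uint32_t& target, uint32_t value) {
        pending_.push_back([&target, value] { target = value; });
    }

    // Apply every queued assignment once evaluation of the step is done.
    void commit() {
        for (auto& apply : pending_) apply();
        pending_.clear();
    }

private:
    std::vector<std::function<void()>> pending_;
};

int main() {
    uint32_t a = 1, b = 2;
    NonBlockingQueue nba;

    // Models the classic swap `a <= b; b <= a;`: both right-hand sides are
    // captured before either assignment takes effect.
    nba.schedule(a, b);
    nba.schedule(b, a);
    nba.commit();

    std::cout << "a=" << a << " b=" << b << "\n";  // prints a=2 b=1
}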

Note that the features described above are the ones that are immediately visible to users of Verilator with the dynamic scheduler. This is, however, only the tip of the iceberg when it comes to the internal changes that needed to be made in Verilator.

Example usage

To present the usage of all the new Verilator features we have created a public GitHub repository with a number of example designs and GitHub Actions-based CI simulating them.

The examples range from single feature examples to more complex scenarios simulating a system with two UARTs sending data between each other.

The UART testbench example uses all the newly added features. It spawns a thread for every simulated UART block and each thread feeds the UART IP that is being tested with the predefined data (the “hello world” string in our example).

The UART block reads the data from an AXI stream interface, serializes it and sends it over the TX line.

The received data is available on the output AXI Stream bus.

Each testbench thread introduces a random delay between consecutive data chunk transmission.

The threads are synchronized using SystemVerilog events – e.g. once the data is fed into the UART block over the AXI stream interface, a thread triggers an event informing the other thread that the data is being transmitted by the IP.
Once the transmission is done, another event is triggered, informing the first thread that the IP is ready for a new data chunk.

Once all the test data is transferred between both tested UART instances, the threads are joined and data correctness is validated.

Next steps, future goals

This ongoing project is a big step towards UVM support in open source tools, as it not only removes a number of limitations in this area but also opens the door to future developments, including a number of verification-oriented SystemVerilog features which were previously out of scope since open source UVM verification wasn’t on the horizon. Together with the effort to support SystemVerilog in Verilator using UHDM, which is part of another ongoing project, in CHIPS Alliance we continue to work towards enabling open source design verification for everyone, which will revolutionize the way chips are built.

New MPW-TWO Program Will Provide Fabrication For Fully Open Source Projects


By Rob Mains, General Manager of CHIPS Alliance

CHIPS Alliance is excited to announce that the hardware development community can submit their open source design projects to Efabless.com for space on their forthcoming shuttle. This opportunity comes after the success of having 40 submissions for the MPW-ONE shuttle; 60% of those designs were submitted by first-time ASIC designers. MPW-TWO is the second Open MPW Shuttle providing fabrication for fully open-source projects using the SkyWater Open Source PDK announced by Google and SkyWater. 

The shuttle gives designers the freedom to innovate without having to worry about the risks associated with the cost of fabrication. This is a great opportunity for individuals, universities, and industry to create their own IP and have it manufactured. 

The deadline for submission is June 18. Submissions must be open source designs and leverage open source tooling such as OpenROAD, OpenLane and other EDA applications that Efabless.com has at its design portal. Read more about the project requirements and submit here: https://efabless.com/open_shuttle_program/2.

I look forward to seeing the community’s contributions for this generous offering from Efabless.com and Google.

Modular, Open-source FPGA-based LPDDR4 Test Platform


This post was originally published at Antmicro.

The flexibility of FPGAs makes them an excellent choice not only for parallel processing applications but also for research and experimentation in a range of technological areas.

We often provide our customers with flexible R&D platforms that can be easily adapted to changing requirements and new use cases as a result of our practice of using open source hardware, software, FPGA IP and tooling.

As an example of such activity, we have recently been contracted to develop a hardware test platform for experimenting with memory controllers and measuring vulnerability of various LPDDR4 memory chips to the Row hammer attack and similar exploits.

LPDDR4 test platform

Modular and cost-optimized

Targeting high-volume customer-facing devices where size, power use and unit cost are a priority, LPDDR4 does not come in the form of modules, while the hardware tools and software frameworks for testing it can be prohibitively expensive.
Despite efforts to mitigate the Row hammer exploit, a number of memories available on the market remain vulnerable to the problem, which calls for a test platform that allows experimenting with memory chips and memory controllers to devise new mitigation techniques.

Another issue is that preexisting work mostly relies on proprietary memory controllers which cannot be adapted to specific memory access patterns that trigger Row hammer.

To address this need, we have created a fully open source flow, including Enjoy Digital’s open source memory controller LiteDRAM, for which we implemented LPDDR4 support to enable testing LPDDR4 memory chips.

What our customer needed was a flexible platform for developing security measures that would be cost-optimized for high volume production.

To accomplish that we’ve built a modular device that consists of the main test board and a series of easily swappable testbeds for different memory types, the first of which is already available on our GitHub.

What is more, thanks to being open source, the platform enables various research teams to combine efforts and work collaboratively on coming up with new attacks and mitigations, as well as fully reproduce the results of the work.

LPDDR4 test module

Experimenting reliably on a robust platform

The platform is based on a Xilinx Kintex-7 FPGA and features several I/O options: HDMI, which can be used for processing video data and experimenting with streaming and HDMI preview applications featuring RAM, USB for uploading your bitstream or debugging, as well as an SD card slot and GbE.

There is also an additional 64 MB of on-board HyperRAM that enables safe experimentation with interchangeable RAM chips under extreme conditions.

With Antmicro’s commercial development services the platform can be customized to meet your specific requirements, while the open source character of the solutions we use gives you full control over the product and vendor independence.

We help our customers build complicated FPGA solutions, embrace the dynamically growing open source tooling ecosystem and develop various technologies that allow developers to work more efficiently across the whole FPGA spectrum.

CHIPS Alliance and RISC-V International Invite the RISC-V Community to Participate in Updating a New Unified Memory Architecture Standard


New joint working group will enhance the OmniXtend Cache Coherency architecture

SAN FRANCISCO, March 24, 2021 – RISC-V International, a non-profit corporation controlled by its members to drive the adoption and implementation of the free and open RISC-V instruction set architecture (ISA), and CHIPS Alliance, the leading consortium advancing common and open hardware for interfaces, processors and systems, today announced a joint collaboration to update the OmniXtend Cache Coherency specification and protocol, along with building out developer tools for OmniXtend.

As part of this collaboration, RISC-V International and CHIPS Alliance have formed a new OmniXtend working group which will focus on creating an open, cache coherent, unified memory standard for multicore compute architectures. The group will update the OmniXtend specification and protocol, build out architectural simulation models and a reference register-transfer level (RTL) implementation, as well as create a verification workbench. These tools for an open, standard unified memory coherency bus leveraging OmniXtend will make it easier for designers to take advantage of OmniXtend for data-centric applications.

“As RISC-V International develops implementation independent specifications and ecosystem components, it is an important priority for us to ensure that whatever we develop will work with emerging and established standards. The joint working group will interact with various RISC-V groups to review the OmniXtend protocol with an emphasis on cache management and paying close attention to coherency enablement for RISC-V members,” said Mark Himelstein, CTO at RISC-V International. “As a result of this joint effort, the RISC-V community will have the tools they need to leverage an open, coherent, unified memory standard for all types of data-centric applications.”

“The newly formed OmniXtend working group will set the standard for open, coherent heterogeneous compute architectures. We plan to allow for a mixture of hardware IP blocks, giving developers more design flexibility so they can choose what works best for their specific application needs,” said Rob Mains, General Manager at CHIPS Alliance. “We encourage the RISC-V community to get involved in this important initiative which will open new design possibilities with OmniXtend.”

Dejan Vucinic of Western Digital will be giving a talk on OmniXtend at the CHIPS Alliance Spring Workshop on March 30, 2021. The event will also cover the AIB chiplet ecosystem, SWeRV Core support, FPGA tooling and much more. To register for this free virtual event, please visit: https://events.linuxfoundation.org/chips-alliance-spring-workshop/register/.

To learn more about the OmniXtend working group, please visit: https://lists.chipsalliance.org/g/riscv-omnixtend-wg.

About RISC-V International

RISC-V is a free and open ISA enabling a new era of processor innovation through open collaboration. Founded in 2015, RISC-V International is composed of more than 1,300 members building the first open, collaborative community of software and hardware innovators powering a new era of processor innovation. The RISC-V ISA delivers a new level of free, extensible software and hardware freedom on architecture, paving the way for the next 50 years of computing design and innovation.

RISC-V International, a non-profit organization controlled by its members, directs the future development and drives the adoption of the RISC-V ISA. Members of RISC-V International have access to and participate in the development of the RISC-V ISA specifications and related HW / SW ecosystem.

About the CHIPS Alliance

The CHIPS Alliance is an organization which develops and hosts high-quality, open source hardware code (IP cores), interconnect IP (physical and logical protocols), and open source software development tools for design, verification, and more. The main aim is to provide a barrier-free collaborative environment, to lower the cost of developing IP and tools for hardware development. The CHIPS Alliance is hosted by the Linux Foundation. For more information, visit chipsalliance.org.

About the Linux Foundation    

The Linux Foundation was founded in 2000 and has since become the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Today, the Foundation is supported by more than 1,000 members and its projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js and more. The Linux Foundation focuses on employing best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, visit linuxfoundation.org.

###

GitHub Actions Self-hosted Runners, Build Event Server and Google Cloud


This post was originally published at Antmicro.

Continuous Integration and smart lifecycle management are key for high-tech product development, which is often a complex and multi-faceted process that requires automation to be efficient and failure-proof. At Antmicro, we’ve been creating various open source cloud and hybrid cloud solutions for our customers, helping them to encapsulate the complexity of their software stack. Lots of those projects cross the hardware/software boundary and involve a mix of open source and proprietary code, which means that fine-grained control of the CI setup is needed to make them work.

To provide the level of flexibility that we and our customers require, we often find ourselves working extensively on the underlying CI infrastructure, building open source solutions that can scale between organizations and teams. One such project involved creating a custom, local GitHub Actions runner, with containerized builds, support for Google’s Build Event Server and workload measurement and analytics; in collaboration with Google we then also enabled running an identical setup with the extra capabilities in Google Cloud.

Self-hosted runner diagram

Custom runner, more applications

GitHub is the world’s largest open source code sharing space, home to many of our open source projects such as Renode, the open source FPGA toolchain SymbiFlow or our open source ASIC development-focused SystemVerilog work. GitHub Actions – used by millions of developers worldwide – is a natural choice for those projects as the go-to CI flow. However, by default it provides compute resources – in the CI world traditionally called runners – with a specific hardware configuration which does not always fit the needs of the workloads that we deal with.

This is especially true of our work on ASIC and FPGA development flows, where, working towards fully open source chip and IP design in collaboration with our customers and fellow CHIPS Alliance members such as Google, Western Digital and QuickLogic, we find ourselves needing hybrid setups which allow us to keep the code as well as the CI definitions public while relying on internal infrastructure to do the heavy lifting. Long-running builds involving tools like VTR or OpenROAD, which use lots of memory and CPU power, can greatly benefit from the flexibility that comes with custom, self-hosted runners, and this solution also gives you a high degree of freedom in integrating your runner with external hardware or tools that can’t be shared publicly. The latter is especially helpful in some of our other open source projects, for things like benchmarking RISC-V, OpenPOWER and other cores or tracking the QoR of your FPGA designs. The Quality of Results and flexible Continuous Integration elements are extremely important for the custom engineering projects we embark on, which typically integrate a variety of open source components; fortunately, the open source nature of the predominant part of the tools we use makes such work much easier.

Virtual machines, distant-bes and Google Cloud integration

Our internal and our customers’ needs have called for the ability to integrate on-premise runners into our GitHub CI flows, which can be done using the GitHub runner project. For many of our projects, we provide flexible development infrastructure based on open source that allows us to better collaborate around shared code; to do that, we need to be able to scale compute resources between the private and public cloud. To reach feature parity with some of our internal infrastructure, we also extended the self-hosted runners with some extra features.

Firstly, the custom runners developed as part of the project can be used with our distant-bes framework to push results in the Build Event Server format to custom results viewers, transparently to the CI run itself. You can see an example of how this works in the symbiflow-examples repository. Secondly, we modified the runner so that instead of running the CI script on bare metal, it spawns virtual machines, performs the run steps inside them, collects the results and kills the machines, without changing the state of the host system’s kernel. This also allows us to gather performance metrics to see what the real utilization of the runner’s resources is, and we push those results in the form of graphs to our BES server.

Runner's resource utilization

Lastly, based on the needs of several of our collaborative open source projects with Google, we pursued yet another goal, namely, instantiating our self-hosted runners in Google Cloud, which enables our CI to spin powerful servers up and down on demand. This mix of robust internal infrastructure and always-available, scalable on-demand Google Cloud resources is very useful for heavy workloads run by multiple organizations. In the world of collaborative development in forums like the CHIPS Alliance and RISC-V International, this is no longer a nice-to-have, but a necessity.