A primary objective of OSVISE is to foster innovation, and PlanV, as part of this initiative, is working to bridge the gap between open-source verification and industry benchmarks by enhancing UVM support in Verilator. To accelerate our contribution to Verilator, PlanV has developed a Continuous Integration (CI) system that automates feature tests, ensuring our work remains consistent and reliable. In this blog post, we will introduce our CI system and the automated tests we’ve implemented, along with two UVM test models—one written in pyuvm and the other in SystemVerilog UVM (sv-uvm). Both models have successfully passed simulations using Verilator.
Sparking the Idea: Why Automate?
“If you can automate it in five hours, don’t spend five minutes doing it manually.” This well-known principle highlights the long-term benefits of automation. While some tasks may seem trivial to perform manually, their cumulative cost can be significant. This inspired us to create a CI system that automates the regular execution of feature tests, allowing us to continuously monitor the state of feature support in Verilator and detect regressions. As an open-source project, Verilator benefits from contributions by numerous developers, with new code frequently being pushed to the master branch. Given these frequent changes, having a CI system that clearly tracks which features are supported and which issues have been fixed is essential. It not only helps us stay on top of Verilator’s evolving state but also prevents us from duplicating effort. Therefore, we developed our own CI system on GitHub: PlanV_Verilator_Feature_Tests.
This CI system, integrated with our Verilator repository, automatically runs a series of feature tests. Not only does it keep us informed about the latest developments in Verilator, but it also serves as a robust testing platform. Our process begins with general tests that assess whether the current version of Verilator supports a given feature. If the feature is not yet supported, we develop a comprehensive set of tests tailored to it, structured according to the SystemVerilog Language Reference Manual (LRM). For example, when working on a currently unsupported feature such as array randomization, we write extensive tests to cover all aspects of it. These tests then guide our modifications to Verilator’s C++ code, which we refine until full support for the feature is achieved. This approach streamlines development and verification, ensuring thorough coverage and providing clear direction for future work.
In addition, our CI system addresses gaps in testing coverage. For some unsupported features, the Verilator repository has limited or no tests. As we work to support these features, we also develop and integrate corresponding tests into the Verilator test suite, thereby enhancing the overall robustness of the test library.
Behind the Scenes: How We Built Our CI Magic
To bring this system to life, we turned to GitHub Actions—a powerful tool baked right into GitHub. With a combination of YAML workflow files and scripts, GitHub Actions spins up a virtual Ubuntu environment on its servers and periodically runs simulations across different branches of our Verilator repository. This automation helps us keep our feature test repository neat and tidy.
The best part? The simulation framework is generated automatically by scripts, so users only need to drop their SystemVerilog test files into the tests folder. From there, the scripts take care of the heavy lifting—setting up the simulation framework, running the simulations, and saving the results in the appropriate log files.
And if you prefer working locally, no problem! Users can also run these scripts directly on their local machines, making it easy to simulate and test from their own setup rather than relying solely on GitHub Actions. Currently, the system is set up to automatically run all tests as a batch, but we are considering adding functionality that will allow users to specify and run individual tests as needed. For now, running the script will generate the simulation framework for all tests and produce a comprehensive report.
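To give a concrete picture of what such a runner does, here is a minimal Python sketch. It is not the actual script from our repository; the folder layout (tests/, logs/) and the exact Verilator flags are assumptions, but it captures the flow of compiling each test and capturing its output in a per-test log file.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a local test runner; paths and flags are assumptions."""
import pathlib
import subprocess

TESTS_DIR = pathlib.Path("tests")   # drop SystemVerilog test files here (assumed layout)
LOGS_DIR = pathlib.Path("logs")     # one log file per test (assumed layout)
LOGS_DIR.mkdir(exist_ok=True)

for test in sorted(TESTS_DIR.glob("*.sv")):
    log_path = LOGS_DIR / f"{test.stem}.log"
    with log_path.open("w") as log:
        # Build a simulation binary with Verilator; --binary and --timing
        # are available in recent Verilator 5.x releases.
        build = subprocess.run(
            ["verilator", "--binary", "--timing", str(test)],
            stdout=log, stderr=subprocess.STDOUT,
        )
        if build.returncode != 0:
            continue  # the compile error is already captured in the log
        # Run the generated executable; Verilator places it under obj_dir/,
        # here assuming the top module shares the file's base name.
        subprocess.run(
            [f"obj_dir/V{test.stem}"],
            stdout=log, stderr=subprocess.STDOUT,
        )
```

Running a script along these lines from the repository root leaves one log per test, which a report generator like the one described in the next section can then consume.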
Test Reports: From Data to Insights
We know that clear and concise reporting is key, so we rolled up our sleeves and developed a Python script to take our log file results and transform them into easy-to-read HTML reports. These reports categorize each test as a pass or fail, and with just a click, users can dive straight into the corresponding log file to see the simulation details (as shown in Fig.1). This makes our CI system not only powerful but also easy to use, allowing for quick checks and smooth verification.
The pass/fail status of each test is determined by scanning the terminal output for specific keywords such as “Error” or “test fail”. We’ve written a comprehensive set of tests ourselves, using the IEEE 1800-2023 LRM as a reference to ensure thorough coverage. Since these tests are custom-built, we have full control over the error formats and the criteria that determine whether a test passes or fails.
Fig.1 Current Feature Test Report
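As an illustration of how such a report generator can work, here is a hedged Python sketch. The file names, the exact keyword list, and the HTML layout are assumptions rather than our production script, but the pass/fail logic mirrors the keyword scanning described above, and each row links back to its log file.

```python
"""Minimal sketch of turning simulation logs into an HTML report; names are assumptions."""
import pathlib

LOGS_DIR = pathlib.Path("logs")
FAIL_KEYWORDS = ("Error", "test fail")  # keywords mentioned in the text; exact set may differ

rows = []
for log_file in sorted(LOGS_DIR.glob("*.log")):
    text = log_file.read_text(errors="ignore")
    # A test fails if any failure keyword appears anywhere in its log.
    status = "FAIL" if any(keyword in text for keyword in FAIL_KEYWORDS) else "PASS"
    # Link each row to the raw log so readers can inspect the simulation details.
    rows.append(
        f'<tr><td>{log_file.stem}</td><td>{status}</td>'
        f'<td><a href="{log_file}">log</a></td></tr>'
    )

html = (
    "<html><body><table border='1'>"
    "<tr><th>Test</th><th>Result</th><th>Log</th></tr>"
    + "".join(rows)
    + "</table></body></html>"
)
pathlib.Path("report.html").write_text(html)
```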
Exploring UVM: pyuvm and sv-uvm in Action
In addition to feature tests, our CI system includes more complex UVM tests. To demonstrate Verilator’s versatility in supporting UVM, we developed two test models: one using pyuvm and the other using sv-uvm. Both models, centered around an asynchronous FIFO as the Design Under Test (DUT), have successfully run in Verilator. By including both, we aim to show that pyuvm, like sv-uvm, is a viable approach for achieving open-source verification through Verilator. Furthermore, as Verilator’s support for UVM continues to grow, implementing a simple sv-uvm testbench has become increasingly feasible. Next, we will showcase some details of these tests as well as their simulation results.
Built on cocotb, pyuvm is a Python library that lets users write testbenches following the UVM methodology in a Python environment. While cocotb enables creating testbenches in Python, pyuvm adds the ability to use UVM within them. Since cocotb already supports Verilator, pyuvm naturally works with it too. With pyuvm, we can easily generate random sequences in Python, send them to the DUT through a driver, and use a monitor to return the results to the scoreboard for checking. Additionally, coverage collection is straightforward and fully supported by pyuvm. The test report is shown in Fig.2. This is an exciting step forward for open-source verification. We included this test to demonstrate an open-source approach, and exploring the interaction between pyuvm and sv-uvm is something we might dive into more in the future.
Fig.2 pyuvm Test Report
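To show what this looks like in practice, below is a simplified pyuvm sketch in the spirit of our FIFO testbench, not the actual code from our repository. Class names such as FifoItem and FifoWriteSeq are illustrative, and the pin-level driving is reduced to a comment, but the structure (sequence item, sequence, driver, sequencer, environment, and test) follows standard pyuvm usage.

```python
"""Illustrative pyuvm skeleton; class names and sequence length are assumptions."""
import random
import pyuvm
from pyuvm import (uvm_test, uvm_env, uvm_sequence, uvm_sequence_item,
                   uvm_sequencer, uvm_driver)


class FifoItem(uvm_sequence_item):
    """One randomized word to push into the FIFO."""
    def __init__(self, name="FifoItem"):
        super().__init__(name)
        self.data = 0

    def randomize(self):
        # Plain Python randomization, no constraint solver required.
        self.data = random.randint(0, 255)


class FifoWriteSeq(uvm_sequence):
    async def body(self):
        for _ in range(10):
            item = FifoItem()
            await self.start_item(item)
            item.randomize()
            await self.finish_item(item)


class FifoDriver(uvm_driver):
    async def run_phase(self):
        while True:
            item = await self.seq_item_port.get_next_item()
            # Here the driver would wiggle DUT pins through cocotb handles,
            # e.g. assigning item.data to the FIFO's write-data signal.
            self.seq_item_port.item_done()


class FifoEnv(uvm_env):
    def build_phase(self):
        self.seqr = uvm_sequencer("seqr", self)
        self.driver = FifoDriver("driver", self)

    def connect_phase(self):
        self.driver.seq_item_port.connect(self.seqr.seq_item_export)


@pyuvm.test()
class FifoTest(uvm_test):
    def build_phase(self):
        self.env = FifoEnv("env", self)

    async def run_phase(self):
        self.raise_objection()
        await FifoWriteSeq("write_seq").start(self.env.seqr)
        self.drop_objection()
```

A real testbench would add a monitor and scoreboard connected through analysis ports, plus coverage collection, but even this skeleton runs under cocotb with Verilator as the simulation backend.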
In terms of SystemVerilog UVM (sv-uvm), we successfully ran uvm_test_1 in our repository. This test follows the classic UVM structure outlined in “Practical UVM” by Zhang Qiang, utilizing two interfaces—one for input and one for output—and two corresponding agents that manage the input and output sequences. This structure compiled and ran smoothly in Verilator, as shown in Fig.3. On the other hand, uvm_test_2 follows the structure from “The UVM Primer” by Ray Salemi. However, when testing this UVM setup, we encountered the “Recursive Module Not Supported” issue, indicating that while Verilator now supports UVM to some extent, full support is still a work in progress.
Fig.3 sv-uvm Test Report
Wrapping It Up: The Road Ahead
In conclusion, these two testbenches show that achieving UVM support in Verilator is well within reach. The journey of open-source verification is filled with potential, and we at PlanV are focused on actively contributing to this effort by using our CI system to identify bugs early, rather than waiting for others to find them. By taking on this responsibility, we help ensure the stability and reliability of Verilator. As we continue to develop and improve our CI system and UVM support, our goal is to make a valuable contribution to the Verilator ecosystem and the broader open-source verification community.