In this first article about the Xilinx Zynq MPSoC, we will see how to build and deploy a basic Yocto Linux image.
If your project requires high levels of integration and performance, then an FPGA is probably the optimal solution. Starware Design has experience with toolchains and devices from all the major FPGA providers. Starware Design's support can range from a bespoke IP block to a turnkey solution.
Hardware / software partitioning
RTL coding (VHDL and Verilog/SystemVerilog)
Verification (UVVM, co-sim with Python)
System On Chip (Zynq, Zynq MPSoC)
Design for Xilinx, Altera/Intel and Lattice FPGAs
Interfacing with PCIe, DDR memories, high speed ADCs, Gigabit Ethernet
Why DevOps for FPGA development?
During the development and support phases of a product containing an FPGA, bitstreams are released containing new features, bug fixes, etc.
Releases are more frequent during the development phase as new features are added to the design. The support phase can last from a couple of years for a consumer product to five or more years for an industrial product.
In the previous blog post we learned how to integrate Xilinx Vivado with Docker and Jenkins to build the FPGA bitstream automatically (or with a single button).
During the project life span, the FPGA bitstream is going to be built a large number of times. Wouldn't it be interesting to collect metrics from each build and track them?
In this blog post of the series “FPGA meets DevOps” I am going to show you how to get metrics from a Xilinx Vivado build and track them in Jenkins using the Plot plugin.
In particular, we are going to track resource usage (i.e. LUTs, FFs, DSPs and memory). This gives you insight into how resource usage has evolved over the project's life span and whether the FPGA is getting too full.
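As a sketch of what this could look like in practice: the numbers can be pulled out of Vivado's utilization report (written with `report_utilization -file`) and turned into a CSV file that the Plot plugin can graph. The report excerpt and the exact row labels below are assumptions based on recent Vivado versions; check them against your own reports.

```shell
#!/bin/sh
# Sketch: turn the summary table of a Vivado utilization report into a CSV
# that the Jenkins Plot plugin can graph.

# Stand-in for a report produced by: report_utilization -file utilization.rpt
# (the values here are invented for the example).
cat > utilization.rpt <<'EOF'
| CLB LUTs          | 10453 |     0 | 230400 |  4.54 |
| CLB Registers     | 15210 |     0 | 460800 |  3.30 |
| Block RAM Tile    |    12 |     0 |    312 |  3.85 |
| DSPs              |     8 |     0 |   1728 |  0.46 |
EOF

# The "Used" count is the second pipe-separated column after the label.
used() { grep -m1 "| $1" utilization.rpt | awk -F'|' '{gsub(/ /,"",$3); print $3}'; }

printf 'LUT,FF,BRAM,DSP\n' > utilization.csv
printf '%s,%s,%s,%s\n' "$(used 'CLB LUTs')" "$(used 'CLB Registers')" \
       "$(used 'Block RAM Tile')" "$(used 'DSPs')" >> utilization.csv
cat utilization.csv
```

In a Jenkins job, the Plot plugin would then be pointed at the resulting utilization.csv, producing one data point per build.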
In the previous blog posts we created a system to build the FPGA bitstream automatically (or with a single button).
Let’s imagine a bug is flagged after a bitstream has been released. The questions we need to answer to fix the problem are:
- What is the affected version?
- What source code set was used to build that particular version?
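One way to make both questions answerable, sketched below with invented names and version numbers, is to tag the exact commit each bitstream release was built from:

```shell
#!/bin/sh
# Sketch: tag every released bitstream with the commit it was built from, so a
# reported version can always be traced back to its exact source set.
# The repository, file names and version string are invented for the example.
git init -q demo-repo
cd demo-repo
git config user.email ci@example.com && git config user.name CI

echo "-- top level" > top.vhd
git add top.vhd && git commit -q -m "Initial design"

VERSION=1.2.0
git tag -a "v$VERSION" -m "Bitstream release $VERSION"

# Later, to answer "what source built v1.2.0?":
git rev-list -n 1 "v$VERSION"    # commit hash of that release
# and to rebuild it exactly:
# git checkout "v$VERSION"
```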
In this blog post of the series “FPGA meets DevOps”, I am going to show you how to use source version control with Xilinx Vivado.
Most of the existing documentation about source version control and Vivado, such as User Guide 1198 (https://www.xilinx.com/support/documentation/sw_manuals/xilinx2016_3/ug1198-vivado-revision-control-tutorial.pdf), requires the developer to write a TCL script to recreate the project.
The problem with this approach is that changes made to the project in Vivado (e.g. changing the implementation strategy or place-and-route parameters) have to be manually ported to the TCL file.
My typical Xilinx Vivado FPGA project has a block design as the top level, with an automatically generated and managed wrapper. It has a mix of Xilinx and custom IP cores, and I use the Out-of-Context flow for synthesis since it reduces build time by caching IP cores that haven't been modified or updated.
When I started researching how to better integrate Vivado with source version control, I defined the following requirements:
- The block design is the primary source to recreate the design (IP cores configuration, wiring, etc)
- The top level wrapper HDL file shouldn't be under version control, since it can be recreated from the block design
- Minimal TCL scripting for each project
- Easy to save changes made in Vivado GUI (i.e. implementation settings)
- Use the project-based Out-of-Context flow to reduce build time
- Continuous Integration friendly
In this second blog post of the series “FPGA meets DevOps” I am going to show you how to integrate Xilinx Vivado with Docker and Jenkins.
Docker provides lightweight operating-system-level virtualisation. It allows developers to package up an application with all the parts it needs in a container, and then ship it out as one package. A container image is described by a file (the Dockerfile) which contains the sequence of commands needed to create the image (e.g. packages to install, configuration tasks, etc.), and it is all you need to replicate the exact build environment on another machine.
The objective is to create a container that will run Vivado in headless mode (i.e. without a user interface) to build the FPGA image.
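As a sketch of what such a container could look like (the package list, Vivado version and install path below are assumptions; Vivado itself is normally installed into the image from the host installer, or bind mounted at run time, because of its size and licensing):

```dockerfile
# Sketch of a headless Vivado build container -- package names, Vivado
# version and install path are assumptions; adapt them to your installation.
FROM ubuntu:20.04

# Libraries Vivado typically needs to run in batch mode.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libtinfo5 libncurses5 libx11-6 locales && \
    rm -rf /var/lib/apt/lists/*

# Assumes Vivado has been installed into the image (or is bind mounted here).
ENV PATH=/opt/Xilinx/Vivado/2020.2/bin:$PATH

# Run Vivado in batch (headless) mode against a build script in the workspace.
WORKDIR /workspace
CMD ["vivado", "-mode", "batch", "-source", "build.tcl"]
```

A build would then be started with something like `docker run -v $(pwd):/workspace <image>`, assuming the project sources and a build.tcl script live in the current directory.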