Build and deploy Yocto Linux on the Xilinx Zynq UltraScale+ MPSoC ZCU102
In this first article about the Xilinx Zynq MPSoC we will see how to build and deploy a basic Yocto Linux image.
Starware Design has experience in edge AI for audio and video applications.
Services:
Architecture definition/evaluation
Implementation on FPGA/ASIC
Implementation on microprocessor/microcontroller
Verification of the implementation against the model (i.e. using Cocotb)
Person detection proof-of-concept running on Zynq UltraScale+ (ZCU104).
Starware Design tasks:
Model preparation for FPGA deployment
Software running on the FPGA with a PyQt GUI
28nm audio AI ASIC for keyword spotting.
Starware Design tasks:
Benchmarking of the existing AI architecture and proposals for the next-generation architecture.
AI network bit-accurate modelling
Evaluation board hardware, software and FPGA design (Xilinx Artix7 plus STMicroelectronics STM32MP1).
Automated lab test setup design and implementation (similar to Amazon Alexa compatible devices testing).
RTL design and validation using Cocotb and AI model in Python
If your project requires high levels of integration and performance then an FPGA is probably the optimal solution. Starware Design has experience in using toolchains and devices from all the major FPGA providers. Starware Design design support can range from a bespoke IP block to a turnkey solution.
Services:
Architecture design
Hardware / software partitioning
RTL coding (VHDL and Verilog/SystemVerilog)
Verification (UVVM, Cocotb, co-simulation)
System On Chip (Zynq, Zynq MPSoC)
Design for Xilinx, Altera/Intel and Lattice FPGAs
Interfacing with PCIe, DDR memories, high speed ADCs, Gigabit Ethernet
28nm audio AI ASIC for keyword spotting.
Starware Design tasks:
Porting ASIC design to FPGA for rapid prototyping
Bit-accurate validation using Cocotb and AI model in Python
Xilinx Zynq FPGA with multiple video inputs and outputs at up to 1080p resolution. Mixture of Xilinx IP cores and custom cores.
Starware Design tasks:
Proof of concept on evaluation board
FPGA design and validation, IP cores creation and customisation
Bare metal and Linux drivers/software
Xilinx Kintex 7 with high speed ADCs and PCIe interface to x86 platform.
Starware Design tasks:
Creation of a co-simulation platform: QEMU running Linux with the target device driver and applications, interacting with Modelsim running the FPGA simulation together with the embedded microcontroller code.
Xilinx Artix-7 with DDR3 memory, PCIe and an ADC LVDS interface. Mixture of Xilinx IP cores and custom cores.
Starware Design tasks:
FPGA design and validation, IP cores creation and customisation.
Why DevOps for FPGA development?
During the development and support phases of a product containing an FPGA, bitstreams are released containing new features, bug fixes and so on.
Releases are more frequent during the development phase as new features are added to the design. The support phase can last from a couple of years for a consumer product to five or more years for an industrial product.
In the previous blog post we learned how to integrate Xilinx Vivado with Docker and Jenkins to build automatically (or with a single button) the FPGA bitstream.
During the project life span, the FPGA bitstream is going to be built a large number of times. Wouldn't it be interesting to collect metrics from each build and track them?
In this blog post of the series “FPGA meets DevOps” I am going to show you how to get metrics from a Xilinx Vivado build and track them in Jenkins using the Plot plugin.
In particular, we are going to track resource usage (e.g. LUT, FF, DSP and memory). This gives you insight into how resource usage has evolved over the project life span and whether the FPGA is getting too full.
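As a sketch of how such metrics can be collected, the script below parses the “Used” column from a `report_utilization` text report and writes a one-row CSV that the Jenkins Plot plugin can consume. This is a minimal example: the table layout and row labels vary between Vivado versions and device families, so the resource names here are assumptions to adjust for your report.

```python
import csv
import sys

# Resource rows to track; adjust these labels to match your own
# report_utilization output (they differ across Vivado versions).
RESOURCES = ("CLB LUTs", "CLB Registers", "Block RAM Tile", "DSPs")

def parse_utilization(report_text):
    """Extract 'Used' counts from a Vivado report_utilization text table."""
    usage = {}
    for line in report_text.splitlines():
        # Table rows look like: | CLB LUTs | 12345 | 0 | 274080 | 4.50 |
        cols = [c.strip() for c in line.split("|")]
        if len(cols) > 2 and cols[1] in RESOURCES:
            usage[cols[1]] = int(cols[2])
    return usage

def write_csv(usage, path):
    """Write a single-row CSV in the format the Jenkins Plot plugin expects."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(usage.keys())
        writer.writerow(usage.values())

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        write_csv(parse_utilization(f.read()), "utilization.csv")
```

Running this after each build and archiving `utilization.csv` gives the Plot plugin one data point per build, so the resource-usage trend appears directly on the Jenkins job page.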
In the previous blog posts we have created a system to build automatically (or with a single button) the FPGA bitstream.
Let’s imagine a bug is flagged after a bitstream has been released. The questions we need to answer to fix the problem are:
In the previous blog post we created a system that automatically builds the FPGA bitstream and Linux image.
Let’s imagine a bug has been found after a bitstream or Linux image has been released. The questions we need to answer to fix the problem are:
By the end of this blog post we will be able to answer those questions for the FPGA bitstream and the Linux image, but also to identify a particular board, e.g. for an RMA.
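One common way to make a released bitstream traceable back to its sources is to embed the git short hash of the build in a read-only register (or, on Xilinx devices, in the USR_ACCESS word). The sketch below is only an illustration of the idea, not the exact method used in the posts: the 8-character short hash and the hypothetical 32-bit version register are assumptions.

```python
def hash_to_register(short_hash):
    """Pack an 8-character git short hash into a 32-bit register value.

    At build time the hash would come from: git rev-parse --short=8 HEAD
    """
    if len(short_hash) != 8:
        raise ValueError("expected an 8-character git short hash")
    return int(short_hash, 16)

def register_to_hash(value):
    """Recover the git short hash from a 32-bit version register read."""
    return format(value & 0xFFFFFFFF, "08x")
```

With this in place, reading the version register on a returned board tells you exactly which commit produced the bitstream running on it.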
In this blog post of the series “FPGA meets DevOps”, I am going to show you how to use source version control with Xilinx Vivado.
Most of the existing documentation about source version control and Vivado, e.g. User Guide 1198 (https://www.xilinx.com/support/documentation/sw_manuals/xilinx2016_3/ug1198-vivado-revision-control-tutorial.pdf), requires the developer to write a TCL script to recreate the project.
The problem with this approach is that changes to the project in Vivado (e.g. changing the implementation strategy or place and route parameters) have to be manually ported to the TCL file.
My typical Xilinx Vivado FPGA project has a block design as top level with automatically generated and managed wrapper. It has a mix of Xilinx and custom IP cores and I use the Out Of Context flow for synthesis since it reduces build time by caching IP cores that haven’t been modified or updated.
When I started researching how to better integrate Vivado with source version control, I defined the following requirements:
In this second blog post of the series “FPGA meets DevOps” I am going to show you how to integrate Xilinx Vivado with Docker and Jenkins.
Docker provides lightweight operating-system-level virtualisation. It allows developers to package up an application with all the parts it needs in a container, and then ship it out as one package. A container image is described by a file (Dockerfile) which contains a sequence of commands to create the image itself (e.g. packages to install, configuration tasks) and is all you need to replicate the exact build environment on another machine.
The objective is to create a container that will run Vivado in headless mode (without user interface) to build the FPGA image.
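As a sketch of the kind of wrapper the container can run, the snippet below builds a batch-mode Vivado command line: `-mode batch` executes a TCL script and exits without starting the GUI, and `-nolog`/`-nojournal` keep the container filesystem clean. The `build.tcl` name and the `-tclargs` arguments are assumptions for this example.

```python
import subprocess

def vivado_batch_cmd(tcl_script, args=()):
    """Build the command line for a headless (batch-mode) Vivado run."""
    cmd = ["vivado", "-mode", "batch", "-nolog", "-nojournal",
           "-source", tcl_script]
    if args:
        # Extra arguments are forwarded to the TCL script via -tclargs.
        cmd += ["-tclargs", *args]
    return cmd

def run_build(tcl_script="build.tcl"):
    """Invoke Vivado inside the container; raises if the build fails."""
    subprocess.run(vivado_batch_cmd(tcl_script), check=True)
```

The same command line works whether Jenkins launches the container or you run it by hand, which is exactly what makes the build reproducible.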
A couple of years ago I wrote a few blog posts regarding FPGA and devops; in particular on how to use Xilinx/AMD Vivado with git, Jenkins and docker.
With these new blog posts, I am going to update that content using Vivado 2022.2. I will also replace Jenkins with Gitlab for continuous integration.
I want to show you that getting started with DevOps for FPGA development is neither difficult nor expensive.
In this blog post, I am going to show you how to use version control for Xilinx/AMD Vivado and Petalinux projects. I am going to use git, but you can use SVN or other version control tools.
In the previous blog post [link] I showed you how to use version control (git in particular) for Xilinx/AMD Vivado and Petalinux projects.
In this blog post, we’re going to integrate AMD/Xilinx Vivado and Petalinux with Gitlab CI.
Starware Design provides design and consulting services for FPGA, board-level, embedded software and edge AI projects.
Whether you need a consultant to be part of your team on-site or a turnkey solution, Starware Design has the capability to suit your requirements.
Starware Design Ltd
St John's Innovation Centre
Cowley Road
Cambridge
CB4 0WS