MicroZed Chronicles: Debugging Image Processing Applications.
FPGA Horizons London- October 6th and 7th 2026 - get Tickets here.
The $99 Artix UltraScale+ Explorer Board - learn more here
FPGAs are ideal for image processing: the parallelism of the programmable logic enables high-performance, low-latency image processing pipelines.
Beyond basic recovery and correction of the image, we often want to perform higher-level image processing algorithms on the captured image, or images if several cameras are used.
A typical image processing pipeline consists of several stages, for example MIPI CSI-2 decoding and de-Bayering to convert the image from RAW to RGB pixels, along with processes such as dead pixel correction and white balance.

In our FPGA designs these are implemented using IP cores, which come from the AMD Vivado IP library, the Vitis Vision Library, Vitis Model Composer, Simulink HDL Coder, or are custom generated. In a well-designed image processing chain these are interconnected using AXI4-Stream, along with AXI4 if we are using frame buffers in memory.
The ability to see the impact of processing at different points in the pipeline is often very useful.
A little while ago I mentioned I had invested in an Exostiv EP16000 probe. This probe uses spare gigabit transceivers to output captured signals for analysis. Think of it as an ILA on steroids: it enables us to capture millions of samples from an ADC, an interface or a processing pipeline, in this case an image processing pipeline.
Recently I have been doing a lot of image processing and wanted to be able to capture several frames from an application we were developing. Based around a ZCU106 with four Raspberry Pi cameras attached, I wanted to monitor the inputs from the cameras and the output from an image processing core which merges the camera feeds.
The insertion flow is very simple. We first create our design in Vivado, and it is good practice to implement the design at this point to ensure the build process is pipe-cleaned.
The next stage is to use the Exostiv Core Inserter to configure and insert the IP core into the completed design. To do this you need to have a synthesised design open in Vivado.
Using the Core Inserter we can select the device type, the GT configuration and the line rate.

We also select the signals we wish to observe. I enabled trigger qualification in the IP core as well, because I only want to capture valid pixels and not blanking, which would otherwise reduce the amount of video we are able to capture.

With the monitoring defined, the next stage is to implement the design. We do not need to do anything in Vivado: the Core Inserter will run the implementation, insert the core and generate the bitstream for us.

Once the bitstream is available we can program the board and start the imaging operation.
To capture signals we use the Exostiv Probe Client. Here we can set the triggers and qualification conditions, start the trigger and control exactly how the capture should take place.
The probe captures its samples and stores them in its 8 GB memory. Once the capture is complete, they can be downloaded over the USB 3 link to the machine running the client, where the samples are stored in a file under the project directory.
For this example, with 10-bit greyscale pixels, we are able to capture around 130 full frames in the 8 GB of memory. When we are capturing four camera inputs simultaneously, each camera takes approximately 2 GB.
We can inspect the image data in the probe client just as we would with a normal logic analyser.
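The capture-budget arithmetic can be sketched as follows. The 8-byte sample size follows from the probe count described later (62 probes rounded up to a 16-bit boundary), while the 1920x1080 per-camera resolution is purely an assumption for illustration:

```python
# Rough capture-budget arithmetic for the Exostiv probe memory.
# The 8-byte sample size follows from 62 probe bits rounded up to 64 bits;
# the 1920x1080 per-camera resolution is an assumed figure for illustration.
MEMORY_BYTES = 8 * 1024**3       # 8 GB of probe capture memory
SAMPLE_BYTES = 8                 # 62 probe bits -> 64 bits per sample
PIXELS_PER_FRAME = 1920 * 1080   # assumed per-camera resolution

samples = MEMORY_BYTES // SAMPLE_BYTES
# With trigger qualification dropping blanking, one sample is stored per
# valid pixel; four simultaneous cameras quadruple the pixels per frame.
frames_four_cameras = samples // (4 * PIXELS_PER_FRAME)
print(frames_four_cameras)
```

Under these assumptions the result lands close to the ~130 full frames observed in practice.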

The coolest thing to do, however, is to recreate the images using Python so we can inspect them visually. To do this we need a little Python and the RAWD file in the Exostiv project area into which the samples are written.
The Python script parses the RAWD file to reconstruct the image. The RAWD file is a continuous stream of fixed-size data frames (samples), each sample representing one cycle in which all probes were captured simultaneously. There is no header and no metadata, just raw sample data from start to end, with each sample rounded up to the nearest 16-bit boundary.
For this application I had 62 probes defined, which means 8 bytes per sample, and the data within the sample maps sequentially to the probes.
The Python application then just has to parse the file, extract the signals and apply the AXI4-Stream video protocol logic to reconstruct the 2D image. As the file contains several frames, the application cycles through them, but you can pause and select individual frames as required.
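A minimal sketch of reading those fixed-size samples might look like the following. The little-endian byte order and the bit positions of individual probes are assumptions; the actual mapping comes from the Core Inserter configuration:

```python
import struct

# 62 probe bits rounded up to the next 16-bit boundary -> 8 bytes per sample
SAMPLE_BYTES = 8

def read_samples(path):
    """Yield each capture sample as a 64-bit integer (little-endian assumed)."""
    with open(path, "rb") as f:
        while chunk := f.read(SAMPLE_BYTES):
            if len(chunk) < SAMPLE_BYTES:
                break  # ignore a trailing partial sample
            yield struct.unpack("<Q", chunk)[0]

def bit_field(sample, lsb, width):
    """Extract a probe signal of `width` bits starting at bit `lsb`."""
    return (sample >> lsb) & ((1 << width) - 1)
```

For example, if the 10-bit pixel data occupied the bottom of the sample, `bit_field(sample, 0, 10)` would recover it.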
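The frame-reconstruction step can be sketched as below, assuming the capture includes the AXI4-Stream video sideband signals (tvalid, tuser marking start of frame, tlast marking end of line) alongside the pixel data, and that the capture starts at a frame boundary. The bit positions are illustrative, not the real probe mapping:

```python
import numpy as np

def reconstruct_frames(samples, pixel_lsb=0, pixel_bits=10,
                       tvalid_bit=59, tuser_bit=60, tlast_bit=61):
    """Rebuild 2D frames from captured samples using AXI4-Stream video
    semantics: tuser flags the first pixel of a frame, tlast the last
    pixel of each line. All bit positions here are assumed for illustration."""
    frames, lines, line = [], [], []
    for s in samples:
        if not (s >> tvalid_bit) & 1:
            continue  # skip non-valid cycles (blanking)
        if (s >> tuser_bit) & 1 and lines:
            frames.append(np.array(lines))  # start of a new frame: flush
            lines, line = [], []
        line.append((s >> pixel_lsb) & ((1 << pixel_bits) - 1))
        if (s >> tlast_bit) & 1:
            lines.append(line)  # end of line reached
            line = []
    if lines:
        frames.append(np.array(lines))  # flush the final frame
    return frames
```

Each returned array is one frame, so displaying or stepping through them with matplotlib gives the pause-and-select behaviour described above.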

Overall this is a very beneficial way to observe and validate a video processing IP pipeline without significant overhead on the processing system being examined. It is especially useful if you are working with image sensors rather than cameras, where the raw pixel values require considerably more post-processing in the FPGA.
FPGA Journal
Read about cutting edge FPGA developments, in the FPGA Horizons Journal or contribute an article.
Workshops and Webinars:
If you enjoyed the blog why not take a look at the free webinars, workshops and training courses we have created over the years. Highlights include:
Upcoming Webinars - Timing, RTL Creation, FPGA Math and Mixed Signal
Professional PYNQ - learn how to use PYNQ in your developments
Introduction to Vivado - learn how to use AMD Vivado
Ultra96, MiniZed & ZU1 - three day course looking at HW, SW and PetaLinux
Arty Z7-20 - class looking at HW, SW and PetaLinux
Mastering MicroBlaze - learn how to create MicroBlaze solutions
HLS Hero Workshop - learn how to create High Level Synthesis based solutions
Perfecting Petalinux - learn how to create and work with PetaLinux OS
Boards
Get an Adiuvo development board:
Adiuvo Embedded System Development board - Embedded System Development Board
Adiuvo Embedded System Tile - Low risk way to add an FPGA to your design.
SpaceWire CODEC - SpaceWire CODEC, digital download, AXIS Interfaces
SpaceWire RMAP Initiator - SpaceWire RMAP Initiator, digital download, AXIS & AXI4 Interfaces
SpaceWire RMAP Target - SpaceWire Target, digital download, AXI4 and AXIS Interfaces
Embedded System Book
Do you want to know more about designing embedded systems from scratch? Check out our book on creating embedded systems. This book will walk you through all the stages of requirements, architecture, component selection, schematics, layout, and FPGA / software design. We designed and manufactured the board at the heart of the book! The schematics and layout are available in Altium here. Learn more about the board (see previous blogs on Bring up, DDR validation, USB, Sensors) and view the schematics here.
All words in this blog were written by a human.