MicroZed Chronicles: MIPI Imaging on Zynq - Part 1

Updated: Mar 24


In a previous installment, we looked at the Trenz ZynqBerry Zero module, which features a Zynq-7010 device in the popular Raspberry Pi Zero form factor. The board also provides USB OTG, UART/JTAG over USB, MIPI, HDMI, and an SD card, along with several GPIOs.


As the board is quite tiny, I wanted to create a simple image processing system using the MIPI input. What is interesting about the ZynqBerry Zero design is that, unlike the Zynq UltraScale+ MPSoC, which supports the MIPI D-PHY natively in its IO, the Zynq-7000 SoC has no native MIPI D-PHY support. Board designers therefore have a decision to make about how to implement the MIPI interface: an external MIPI D-PHY can be used to provide full MIPI compliance, or a MIPI-compatible approach can be used as a low-cost alternative. Xilinx Application Note 894 provides a range of information on these MIPI interfacing solutions.


The lane rate is limited to 800 Mbps if you select the MIPI-compatible resistor network approach, while higher lane rates can be achieved with an external D-PHY.


Of course, an external D-PHY adds cost, and many imaging solutions, such as the Raspberry Pi camera or Digilent Pcam 5C, operate perfectly well at lane rates much lower than 800 Mbps.
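A quick back-of-the-envelope check makes the point. The figures below are my own assumptions (a 1280x720, 60 fps, RAW10 Bayer stream over the two MIPI lanes used by cameras like the Pcam 5C), not numbers from the board documentation:

```python
# Assumed example stream: 720p60 RAW10 Bayer over two MIPI lanes.
WIDTH, HEIGHT, FPS = 1280, 720, 60
BITS_PER_PIXEL = 10      # RAW10 Bayer data
LANES = 2                # typical for Pcam 5C / Raspberry Pi cameras

total_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
per_lane_bps = total_bps // LANES
print(f"{per_lane_bps / 1e6:.2f} Mbps per lane")  # 276.48 Mbps per lane
```

Even ignoring blanking overhead, that is well inside the 800 Mbps limit of the resistor network approach.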


It is therefore common for development boards that pair a Zynq-7000 device with a MIPI interface to use the MIPI-compatible approach, and this is exactly what the ZynqBerry Zero module does.



To get started with the development of the MIPI design for the programmable logic, I created a simple block diagram that receives a MIPI stream, performs the de-Bayer process to produce an RGB pixel, and then stores a frame in the PS DDR. The output path contains a video timing controller, an AXI Stream to video out block, and a TMDS driver available from the Digilent IP library.
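To illustrate what the de-Bayer (demosaic) step does, here is a deliberately simplified sketch. It assumes an RGGB colour filter pattern and reduces each 2x2 Bayer cell to a single RGB pixel by averaging the two green samples; a real demosaic IP interpolates across neighbouring cells instead, but the principle is the same:

```python
# Toy de-Bayer sketch, assuming an RGGB pattern (not the IP's algorithm).
def debayer_rggb_cell(cell):
    """cell is [[R, G1], [G2, B]] raw samples from one 2x2 RGGB Bayer cell."""
    (r, g1), (g2, b) = cell
    return (r, (g1 + g2) // 2, b)   # average the two green samples

print(debayer_rggb_cell([[10, 20], [30, 40]]))  # (10, 25, 40)
```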


Because I create several image processing systems, I maintain two Tcl scripts that build the MIPI input and DVI output structures for my projects. These scripts create hierarchy blocks in the block design that just need connecting to the VDMA or any other image processing algorithms.
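As a flavour of what such a script looks like, here is a minimal sketch. The hierarchy name and the IP VLNV are purely illustrative assumptions, not taken from my actual scripts, and the exact VLNV (including version) must match the IP in your Vivado catalog:

```tcl
# Illustrative sketch only -- names and VLNVs are assumptions.
set hier mipi_in
create_bd_cell -type hier $hier
# Drop the MIPI CSI-2 RX Subsystem into the hierarchy block
create_bd_cell -type ip -vlnv xilinx.com:ip:mipi_csi2_rx_subsystem $hier/csi_rx
# ... further cells (demosaic, AXIS connections to the VDMA) would follow
```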



Notice the different connections on the MIPI IP block in the block diagram, which support both the high-speed and the low-speed MIPI signals.


Since the device we are targeting is a Zynq-7010, we do not have a huge amount of logic resources available. However, it is sufficient to implement the basic image processing path.


The design uses multiple clock domains.


  1. 200 MHz for the MIPI reference

  2. 100 MHz for the AXI Stream and AXI-Lite control

  3. 74.25 MHz for the output video pixel clock
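For reference, 74.25 MHz is the standard 720p60 pixel clock. A quick check using the standard 720p total frame timing (these blanking figures come from the video timing standard, not from the article):

```python
# 720p60 total frame timing, including blanking intervals.
H_TOTAL = 1650   # 1280 active pixels + horizontal blanking
V_TOTAL = 750    # 720 active lines + vertical blanking
FPS = 60

pixel_clock_hz = H_TOTAL * V_TOTAL * FPS
print(pixel_clock_hz)  # 74250000
```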


Implementing the image processing path on the ZynqBerry Zero shows that the device is pretty full, with over 80% of the LUTs occupied, along with over 50% of the flip-flops and BRAMs.



The floor plan of the implemented device looks impressive, with most basic elements (BELs) occupied.



Timing closure is achieved as you would expect for the clock frequencies used in this design, with only one constraint needed to indicate an asynchronous clock internal to the MIPI IP block.
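A constraint like that is typically expressed as a `set_clock_groups` declaration in the XDC file. The sketch below is illustrative only; the clock names are placeholders, and the real names should be found with `report_clocks` in your own design:

```tcl
# Clock names are placeholders -- query your design with report_clocks.
# Declare the MIPI IP's internal clock asynchronous to the AXI clock so
# the timing engine does not try to time paths across the clock crossing.
set_clock_groups -asynchronous \
    -group [get_clocks -include_generated_clocks mipi_rx_clk] \
    -group [get_clocks -include_generated_clocks clk_fpga_0]
```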



With the design built in Vivado, all we need to do now is write the application software that configures the IP blocks in the programmable logic and sets up the image sensor for operation. The sensor is configured over the PS I2C, which is connected directly to the MIPI connector.
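As a sketch of what that I2C traffic looks like: the Pcam 5C's OV5640 sensor uses 16-bit register addresses with 8-bit values, so each configuration write is a three-byte payload following the device address. The register used below is the OV5640's software-reset register, chosen purely as an example; in a standalone Zynq application the resulting bytes would be handed to the XIicPs driver:

```python
# Pack one sensor register write for I2C, assuming an OV5640-style
# sensor: 16-bit register address (MSB first) then an 8-bit value.
def sensor_write_bytes(reg_addr: int, value: int) -> bytes:
    return bytes([(reg_addr >> 8) & 0xFF, reg_addr & 0xFF, value & 0xFF])

# Example: OV5640 SYSTEM_CTROL0 (0x3008) <- 0x82 triggers a software reset
payload = sensor_write_bytes(0x3008, 0x82)
print(payload.hex())  # 300882
```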


This will have to wait until the next blog, though, because the ZynqBerry Zero uses a smaller MIPI connector, so connecting to the MIPI interface requires a different flexible cable from the one provided with the Pcam 5C or Raspberry Pi cameras.

I must admit I did not realize this until I tried to connect the Pcam 5C sensor to the ZynqBerry Zero. The cable I ordered was this one.
