Accessing BRAM data from /dev/mem of RedPitaya

adiana007
Posts: 21
Joined: Mon Jun 05, 2023 11:44 am

Accessing BRAM data from /dev/mem of RedPitaya

Post by adiana007 » Mon Jun 05, 2023 12:22 pm

Hello,

Inspired by Pavel's project, I am using the AXI BRAM Reader IP to fetch data from the BRAM into the device memory of the PS. I store 32768 samples of 32 bits each in the BRAM. I map the device memory at the offset given in the address editor using mmap(), which returns a void*, cast it to uint32_t*, and use fwrite() to store all the samples in a binary file:

Code: Select all

fwrite(srcInt32, sizeof(uint32_t), buflen, fptr_b);
where srcInt32 is the uint32_t* obtained from the mapped BRAM region, buflen is 32768 and fptr_b is the FILE* of the binary file.
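
For reference, the full access path looks roughly like the sketch below; the base address and file name are placeholders, not the values from my actual design.

Code: Select all

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BRAM_BASE 0x40000000u               /* placeholder: offset from the address editor */
#define BUFLEN    32768u                    /* number of 32-bit samples in the BRAM */

int main(void)
{
  int fd = open("/dev/mem", O_RDWR | O_SYNC);
  if (fd < 0) return 1;

  /* map the BRAM region into user space */
  void *map = mmap(NULL, BUFLEN * sizeof(uint32_t), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, BRAM_BASE);
  if (map == MAP_FAILED) { close(fd); return 1; }

  uint32_t *srcInt32 = (uint32_t *)map;

  FILE *fptr_b = fopen("samples.bin", "wb"); /* placeholder file name */
  if (fptr_b == NULL) return 1;

  /* write all samples straight from the mapped region to the file */
  fwrite(srcInt32, sizeof(uint32_t), BUFLEN, fptr_b);

  fclose(fptr_b);
  munmap(map, BUFLEN * sizeof(uint32_t));
  close(fd);
  return 0;
}
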
The call takes around 4.5-5 ms, which is far too slow for my application (it requires sub-millisecond delay). So I tried using memcpy() to copy the data from the '/dev/mem' mapping before writing it to the file. The input data is a counter whose values increase sequentially {1, 2, 3, ...}. But when I acquire the data with memcpy() (copying multiple 32-bit chunks), the differences between consecutive indexes of the copied data follow a repeating pattern of {-62, 64, 66, -64, 2, 0, 2, 0, 2, 0, 2, 0, 2, 0, 2, 0}, whereas ideally the difference between consecutive indexes should be a constant 1.
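
The memcpy() variant is roughly the following sketch (names are illustrative; local_buf is an ordinary buffer in RAM and srcInt32 is the mapped pointer from above).

Code: Select all

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUFLEN 32768u  /* number of 32-bit samples in the BRAM */

/* copy the mapped BRAM contents into an ordinary buffer in RAM first,
   then write the local copy to the file */
void dump_bram(const uint32_t *srcInt32, FILE *fptr_b)
{
  static uint32_t local_buf[BUFLEN];

  memcpy(local_buf, srcInt32, BUFLEN * sizeof(uint32_t));
  fwrite(local_buf, sizeof(uint32_t), BUFLEN, fptr_b);
}
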
Also, when I print only srcInt32[j] in a for loop, the output values follow the expected sequential pattern. But when I print srcInt32[j-1], srcInt32[j] and srcInt32[j+1] in the same iteration, the output for index j-1 shows the value of index j+1 (the last address accessed), while the outputs for indexes j and j+1 are correct.
Could you kindly help me achieve faster access to the desired data from the BRAM of the FPGA?

pavel
Posts: 790
Joined: Sat May 23, 2015 5:22 pm

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by pavel » Mon Jun 05, 2023 1:57 pm

I remember answering this question, probably for you or one of your colleagues, a few months ago by e-mail.

The sample rate required by your application is too high for the approach you are currently using. Your measurements and observations confirm this.

You need to start by calculating the exact sample rate and data throughput required by your application.

If the data throughput is less than 20 MB/s, then the approach you are currently using should work if the data is transferred to a computer via the Ethernet interface.

If the data throughput is greater than 20 MB/s and less than 80 MB/s, then you should use DMA (direct memory access) to transfer data between the FPGA and the CPU.

If the data throughput is more than 100 MB/s, then you need to replace the Red Pitaya board with another device that can work continuously with such a data rate.

If you are using fwrite() to write data to the micro SD card, then the micro SD card is the bottleneck. Writing to the micro SD card should be replaced by sending data via the Ethernet interface to a computer.

adiana007
Posts: 21
Joined: Mon Jun 05, 2023 11:44 am

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by adiana007 » Thu Jun 08, 2023 8:59 am

Hi Pavel,

Thanks for the inputs on the data rate specifications.

One observation during normal data transfer: say bram is the uint32_t* of the mapped memory region. If, within a single iteration of a for loop, I access bram[j-1], bram[j], bram[j+1] and bram[j+2], the first access returns the data at address j+2 (the last address accessed in that iteration) instead of the data at address j-1, while the other locations return the correct values. I added an AXI GPIO block to monitor the addresses being fetched and observed the same behaviour. I suspect the AXI signalling is causing the issue (has the AXI BRAM Reader core been updated?), but I would like your comments on this.
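
For completeness, the access pattern in question looks roughly like the sketch below (names are illustrative). The volatile qualifier is only there to make the compiler issue every read separately; I do not know whether it is related to the behaviour I am seeing.

Code: Select all

#include <stdint.h>
#include <stdio.h>

/* read three neighbouring elements in one loop iteration; bram is the
   mmap()ed pointer, declared volatile so each access is issued separately */
void check_pattern(const volatile uint32_t *bram, unsigned buflen)
{
  for (unsigned j = 1; j + 1 < buflen; j++) {
    uint32_t prev = bram[j - 1];
    uint32_t curr = bram[j];
    uint32_t next = bram[j + 1];
    printf("%u: %u %u %u\n", j, prev, curr, next);
  }
}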

In parallel, I am trying to use AXI burst transfers (in incremental mode) to copy all the data to /dev/mem.

I think of the Red Pitaya as an ADC <-> SoC FPGA module, but could you give me some explanation and point me to relevant resources on why the Red Pitaya cannot handle a data throughput of 100 MB/s (should I build a custom kernel module to meet the throughput requirements)?

I am also exploring streaming the data over TCP to avoid file writes, but I am facing issues with packet loss and packet size.
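
The sending side I am experimenting with is roughly the sketch below; the IP address and port are placeholders, and looping over send() is how I am trying to handle partial sends.

Code: Select all

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define BUFLEN 32768u  /* number of 32-bit samples per trigger */

/* send one buffer of samples to a PC over TCP; host and port are placeholders */
int send_buffer(const uint32_t *buf)
{
  int sock = socket(AF_INET, SOCK_STREAM, 0);
  if (sock < 0) return -1;

  struct sockaddr_in addr;
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(1001);                          /* placeholder port */
  inet_pton(AF_INET, "192.168.1.100", &addr.sin_addr);  /* placeholder PC address */

  if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    close(sock);
    return -1;
  }

  /* send() may transfer fewer bytes than requested, so loop until all data is sent */
  const uint8_t *p = (const uint8_t *)buf;
  size_t left = BUFLEN * sizeof(uint32_t);
  while (left > 0) {
    ssize_t n = send(sock, p, left, 0);
    if (n <= 0) { close(sock); return -1; }
    p += n;
    left -= (size_t)n;
  }

  close(sock);
  return 0;
}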

pavel
Posts: 790
Joined: Sat May 23, 2015 5:22 pm

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by pavel » Thu Jun 08, 2023 9:39 am

You provide very little detail on the requirements of your project. I can only make some basic assumptions based on the little information that you provide.

You mentioned that you have to read the 32 KB buffer multiple times with sub-millisecond delays and write the data to a file. From this information, I assume that your requirement is to read the data continuously, the data does not fit into the available on-board RAM, and the data rate is greater than 31.25 MB/s (32/0.001/1024).

If the required sub-millisecond delay is 0.1 ms and these transfers must run for more than one second, then the data rate is 312.5 MB/s (32/0.0001/1024), the data will not fit into the available on-board RAM (512 MB), and the Red Pitaya board does not have an interface capable of handling 312.5 MB/s.

The fastest way for the Red Pitaya board to store data that does not fit into available on-board RAM is to send it to a computer via the Gigabit Ethernet interface. The maximum data rate of the Gigabit Ethernet interface is approximately 100 MB/s.

I think that it is pointless to discuss technical details, like burst transfers or kernel parameters without having a clear idea of the required data rate and other relevant requirements.

adiana007
Posts: 21
Joined: Mon Jun 05, 2023 11:44 am

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by adiana007 » Thu Jun 08, 2023 11:12 am

OK, so to clarify: my application receives 32k samples from the ADC and stores them in the BRAM when the positive edge of a trigger signal arrives through the GPIO (the trigger is pulsed, not a continuous signal). Once the 32k samples have been written into the BRAM, I read them into the PS using the AXI BRAM Reader and write them to a binary file.

The trigger repetition rate is around 512 Hz, which means I have to store 32k samples of 32 bits each into the binary file within roughly 1.95 ms (1/512 Hz). Only when the next positive trigger edge arrives do I store the next 32k samples into the BRAM, and this cycle repeats. I would therefore like the process of writing 32k samples of 32 bits per sample into the binary file to complete within 1 ms. I hope the process is clear; please let me know if you need further details or clarification.

I would also appreciate your input on the multiple-index access behaviour discussed in my previous post. Thanks in advance!

pavel
Posts: 790
Joined: Sat May 23, 2015 5:22 pm

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by pavel » Thu Jun 08, 2023 11:23 am

What is the duration of the data recording session?

What data rate do you get by combining all these numbers?

What technology is used to store the binary file? Is it a storage server, a USB disk or a micro SD card?

adiana007
Posts: 21
Joined: Mon Jun 05, 2023 11:44 am

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by adiana007 » Fri Jun 09, 2023 9:06 am

I am decimating the clock by 2, so when a trigger arrives the 32k samples are generated at 62.5 MHz rather than at the default clock of 125 MHz. There is no fixed duration for the recording session; it might vary from 30 seconds to 30 minutes, but the trigger rate is constant at 512 Hz, and per trigger 32k samples of 32 bits each are stored into the BRAM at a sampling rate of 62.5 MHz.

When a trigger arrives, the 32k samples are generated at 62.5 Msps, which corresponds to a period of about 524 µs (32768 / 62.5e6). After the 32k samples are stored into the BRAM, we have roughly 1.43 ms (1953 µs - 524 µs) before the next trigger arrives to move the data from memory into the file.

I am currently creating a binary file on the micro SD card and using fwrite() to write to it (the data for every trigger is appended; the file is opened in "ab+" mode).

pavel
Posts: 790
Joined: Sat May 23, 2015 5:22 pm

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by pavel » Fri Jun 09, 2023 11:04 am

adiana007 wrote:
There is no fixed duration for the recording session; it might vary from 30 seconds to 30 minutes, but the trigger rate is constant at 512 Hz, and per trigger 32k samples of 32 bits each are stored into the BRAM at a sampling rate of 62.5 MHz.

I am currently creating a binary file on the micro SD card and using fwrite() to write to it (the data for every trigger is appended; the file is opened in "ab+" mode).

OK. That answers two of my questions. Thanks.

It looks like I have to do the data rate calculation myself to answer the third question.

Here is how I calculate it based on the provided information:

32768 samples/trigger * 4 bytes/sample * 512 triggers/s = 67,108,864 bytes/s = 64 MB/s

Since the storage technology is a micro SD card, let's check the specifications of the SD/SDIO controller in the Zynq-7000 TRM (UG585):
https://www.xilinx.com/support/document ... 00-TRM.pdf

This is what I see on page 36:
SD/SDIO Controllers (Two)
  • Full speed clock 0-50 MHz with maximum throughput at 25 MB/s
BTW. Table 22-8 on page 652 also contains relevant information for this discussion.

The data movement method you are currently using is called "CPU Programmed I/O" in this table. The maximum throughput of this method, according to this table, is 25 MB/s.

Hopefully this will convince you that the storage technology and the data movement method you are currently using do not have sufficient throughput to meet your project requirements.

As far as I know, the only way to continuously transfer data from the FPGA on the Red Pitaya board to a file at 64 MB/s is using the following technologies:
  • PL AXI_ACP DMA for data transfer method between FPGA and CPU
  • Gigabit Ethernet to send data from the CPU to a computer that writes the data to a file
An example of one of my applications that continuously transfers data from the FPGA on the Red Pitaya board to a file using PL AXI_ACP DMA and Gigabit Ethernet can be found at the following link:
https://github.com/pavel-demin/red-pita ... s/adc_test

The maximum data transfer rate that I have been able to obtain with this method so far is 78.125 MB/s.
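
On the computer side, the receiving end can be as simple as a program that appends everything arriving on a TCP socket to a file. Below is a generic sketch, not taken from the adc_test project; the port and file name are placeholders.

Code: Select all

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* listen on a TCP port and append everything received to a binary file */
int main(void)
{
  int srv = socket(AF_INET, SOCK_STREAM, 0);
  if (srv < 0) return 1;

  struct sockaddr_in addr;
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(1001);            /* placeholder port */

  if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) return 1;
  if (listen(srv, 1) < 0) return 1;

  int conn = accept(srv, NULL, NULL);
  if (conn < 0) return 1;

  FILE *out = fopen("samples.bin", "ab"); /* placeholder file name */
  if (out == NULL) return 1;

  char buf[65536];
  ssize_t n;
  while ((n = recv(conn, buf, sizeof(buf), 0)) > 0)
    fwrite(buf, 1, (size_t)n, out);

  fclose(out);
  close(conn);
  close(srv);
  return 0;
}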

Personally, if I were tasked with building a data acquisition system with such requirements, I would be more inclined to use a faster communication interface (USB3, 10GbE, PCIe) or storage technology (NVMe SSD) so as not to feel constrained by the hardware.

adiana007
Posts: 21
Joined: Mon Jun 05, 2023 11:44 am

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by adiana007 » Mon Jun 12, 2023 12:26 pm

Hey Pavel,

Thanks for your findings and suggestions on implementation approaches. I will go through your project and keep you posted if I have any doubts or need clarification. As a basic question: I ran the Makefile but got an error saying xsct was not found. I'm using Vivado 2022.2 and I have neither Vitis nor the SDK installed. I tried debugging, but I can't find xsct under the bin folder. Should I reinstall Vivado?

Also, what are the software prerequisites to modify and run the adc_test project? I was going through your SDR projects and found your bootable Alpine Linux SD card image with custom applications built on top of it. Do I need to use that image for the adc_test project too?

In the meantime, I tried writing to a file in the RAM of the Red Pitaya (the /tmp folder), but the benchmark result was the same. Should I connect an external NVMe SSD to the Red Pitaya, or instead stream the data to a PC without writing to a file?

Thanks in advance!

pavel
Posts: 790
Joined: Sat May 23, 2015 5:22 pm

Re: Accessing BRAM data from /dev/mem of RedPitaya

Post by pavel » Mon Jun 12, 2023 3:05 pm

Building the Vivado project and bitstream does not require xsct. The command is

Code: Select all

make NAME=adc_test bit
This adc_test project requires the CMA driver, which is only available on my SD card image.

The most up-to-date code can be found in the develop branch of the git repository
https://github.com/pavel-demin/red-pita ... ee/develop

The installation of the development machine is described at
http://pavel-demin.github.io/red-pitaya ... t-machine/

The structure of the source code and of the development chain is described at
http://pavel-demin.github.io/red-pitaya ... d-blinker/

The information about the SD card image can be found at
http://pavel-demin.github.io/red-pitaya-notes/alpine/
