# FPGA_final_project
#### Sabyasachi Mondal, Ravi Yadav
FPGA vs CPU performance comparison and FPGA streamlining for computation-intensive tasks

# Overview
We want to use an FPGA to implement an algorithm in hardware so that computation runs more efficiently. CPU hardware is fixed: code always runs on the same set of registers and ALUs, so we cannot optimize the hardware for our code. Our objective is to build a processing unit (something similar to a flexible ALU, made out of the CLBs) in the FPGA from high-level code.


# Background
CPUs are known for their general-purpose use: the same CPU can power all kinds of applications. A CPU can simulate any finite state machine, but it cannot be reprogrammed at the hardware level. Because the hardware is static, every workload is compiled down to the same fixed instruction set and executed one instruction at a time.

In an FPGA, by contrast, we can implement multiple multipliers or registers that work in parallel, or in a specific order, at the hardware level if we want. Depending on the kind of data we expect to receive, we can implement hardware tailored to process exactly that type of data much faster.

For application-specific needs like signal processing, the CPU relies on the same compilation techniques and the same machine-level instructions, which cannot be optimized further except by designing better software at the high or mid level.

But we can break that stereotype: as software designers we can develop our own algorithms bottom-up, from the register level all the way to high-level code (Python, for example), which can prove immensely powerful for a task-specific algorithm.

# Objective

Our objective is to develop better-integrated code so that our hardware and software work hand in hand to deliver the best result. We start by thinking of an algorithm in Python and consider how it can be optimized when run in the FPGA's logic. We then develop the hardware in C++, synthesize and flash it onto the FPGA, and use our Python code to drive it.

In this project we use the FPGA to implement a processing unit in hardware, generated from high-level C++ code, that can perform image processing (such as inversion and a color-specific background sieve) at a much faster rate:

1. *Perform image processing using the registers, AXI streaming and DMA* [future scope: multi-agent control]
    1.a Implement image inversion and build / test the IP
    1.b Implement interactive image layer extraction / exclusion using a modified watershed algorithm.

and compare how the CPU performs against our FPGA hardware, which is wired up exactly for the kind of data we expect to provide as input.
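
The sketch below shows what such an inversion IP could look like in Vitis HLS C++. It is a minimal sketch under our own assumptions: the function name `invert`, the packing of four 8-bit pixels into each 32-bit stream word, and the pragma choices are illustrative, not the final IP.

```cpp
// Minimal sketch of the inversion IP (assumed names and packing):
// 8-bit pixels arrive four to a 32-bit word on an AXI4-Stream fed by the DMA.
#include "ap_axi_sdata.h"
#include "hls_stream.h"

typedef ap_axiu<32, 1, 1, 1> pixel_pkt;            // 32-bit data + TLAST sidechannel

void invert(hls::stream<pixel_pkt> &in_s, hls::stream<pixel_pkt> &out_s) {
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
#pragma HLS INTERFACE ap_ctrl_none port=return     // free-running, no ap_start handshake

    pixel_pkt p;
    do {
#pragma HLS PIPELINE II=1
        p = in_s.read();                           // blocking read from the DMA stream
        p.data = ~p.data;                          // bitwise NOT == 255 - v for each 8-bit pixel
        out_s.write(p);                            // TLAST is forwarded unchanged
    } while (!p.last);                             // stop at the end of the transfer
}
```

The Python side then only has to push the input image buffer into the DMA send channel and read the result back from the receive channel; timing that call against an OpenCV baseline gives the CPU-vs-FPGA comparison described above.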

# Implementation Strategy
Previously we have seen that the image resizer takes in the whole image at once and that DMA makes the data transfer rate much faster, but there were several instances where the CPU performed better and faster, specifically across a wider range of image dimensions, colors and sizes.

We intend to implement the following:
1. Make faster multichannel operations at the hardware level, integrated with matching high-level software constructs
    1.a High-level code structure to enable parallel operation and optimization
    1.b Maintain the same level of parallelism (multiple data streams and logical processing constructs) at the hardware level
2. Make the FPGA capable of processing images across as wide a range as our CPU supports
    2.a The CPU has large storage and the FPGA does not, so high-level Python code can split large data into the maximum chunk size the DMA accepts (see the sketch after this list)
    2.b Increase the number of data channels into and out of the FPGA for faster processing (higher utilization).
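
A hedged sketch of the hardware side of point 2.a: the kernel processes one DMA-sized chunk per invocation, with the word count passed at run time so the Python driver can split an arbitrarily large image. The name `process_chunk`, the 4096-word cap and the AXI-Lite control registers are our assumptions for illustration.

```cpp
// Sketch (assumed names): process one DMA-sized chunk per call so the host can
// stream an image larger than the FPGA-side buffers, one chunk at a time.
#include "ap_axi_sdata.h"
#include "hls_stream.h"

typedef ap_axiu<32, 1, 1, 1> pkt;
const int MAX_WORDS = 4096;                        // ~16 KB, under the AXI DMA transfer limit

void process_chunk(hls::stream<pkt> &in_s, hls::stream<pkt> &out_s, int words) {
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
#pragma HLS INTERFACE s_axilite port=words         // chunk length written from Python
#pragma HLS INTERFACE s_axilite port=return        // ap_start driven per chunk

    for (int i = 0; i < words && i < MAX_WORDS; ++i) {
#pragma HLS PIPELINE II=1
        pkt p = in_s.read();
        p.data = ~p.data;                          // placeholder per-pixel operation
        p.last = (i == words - 1);                 // close the stream at the chunk boundary
        out_s.write(p);
    }
}
```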

This is how a typical OpenCV resizer works:
<Data Transfer Image>

If we study the resizer code further, we notice that the 2D image is fed to our DMA and internally the whole image is read row by row, column by column. The image array size is static because we have finite space in the FPGA.

This may be made more efficient and robust (accommodating any image width) by implementing the following changes:
1. Multichannel image operation, where we use parallel threads for processing. Each processing / logic entity (utilizing multiple CLBs) is expected to be faster.
2. By chunking and sending data in packets from our high-level code, we can also ensure that our FPGA can process an image much larger than its own memory or DMA allocation space.

We use two streams of data, each with its own processing unit in our IP, which can be represented schematically as:
<image for our Implementation>
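
A minimal sketch of that two-stream arrangement, under our own naming and with a placeholder per-channel operation: each AXI stream gets its own processing unit and the two run concurrently inside a DATAFLOW region.

```cpp
// Sketch (assumed names): two independent stream channels, each with its own
// processing unit, executing concurrently under DATAFLOW.
#include "ap_axi_sdata.h"
#include "hls_stream.h"

typedef ap_axiu<32, 1, 1, 1> pkt;

static void channel_pu(hls::stream<pkt> &in_s, hls::stream<pkt> &out_s) {
    pkt p;
    do {
#pragma HLS PIPELINE II=1
        p = in_s.read();
        p.data = ~p.data;                          // placeholder per-channel operation
        out_s.write(p);
    } while (!p.last);
}

void dual_stream(hls::stream<pkt> &in0, hls::stream<pkt> &out0,
                 hls::stream<pkt> &in1, hls::stream<pkt> &out1) {
#pragma HLS INTERFACE axis port=in0
#pragma HLS INTERFACE axis port=out0
#pragma HLS INTERFACE axis port=in1
#pragma HLS INTERFACE axis port=out1
#pragma HLS INTERFACE ap_ctrl_none port=return
#pragma HLS DATAFLOW
    channel_pu(in0, out0);                         // processing unit for stream 0
    channel_pu(in1, out1);                         // processing unit for stream 1
}
```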

In the background-extraction technique we use a modified form of the watershed algorithm, suited to different layers of the image that share a similar range of pixel intensities, so we have a customizable layer to extract.
<Image modified watershed>
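
Only the hardware-friendly part of this step is sketched below: an intensity-band "sieve" that keeps pixels inside a programmable range and zeroes everything else, which is how a layer with a similar intensity range can be extracted or excluded. The 8-bit grayscale format, register names and thresholds are assumptions; the marker selection and flooding of the watershed itself are not shown.

```cpp
// Sketch (assumed names): keep only pixels whose intensity lies inside the
// [lo, hi] band selected from Python; everything else is zeroed out.
#include "ap_axi_sdata.h"
#include "hls_stream.h"
#include "ap_int.h"

typedef ap_axiu<8, 1, 1, 1> gray_pkt;

void layer_sieve(hls::stream<gray_pkt> &in_s, hls::stream<gray_pkt> &out_s,
                 ap_uint<8> lo, ap_uint<8> hi) {
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
#pragma HLS INTERFACE s_axilite port=lo            // band limits written from Python
#pragma HLS INTERFACE s_axilite port=hi
#pragma HLS INTERFACE s_axilite port=return

    gray_pkt p;
    do {
#pragma HLS PIPELINE II=1
        p = in_s.read();
        ap_uint<8> v = p.data;
        p.data = (v >= lo && v <= hi) ? v : ap_uint<8>(0);   // keep only the selected layer
        out_s.write(p);
    } while (!p.last);
}
```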

# Tasks
The tasks and their maximum estimated times:

1. Problem statement and brainstorming for project selection: *24 hrs*
2. Design a basic model and build the overlay: *4 hrs*
3. Python code adjustment and integration: *3 hrs*
4. Plan the next stage of overlay design: *2 hrs*


# Resources used and Future project topics

#### Resources used
Image segmentation : https://theailearner.com/2020/11/29/image-segmentation-with-watershed-algorithm/
Operation with stream: https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/hls_stream_library.html#ivv1539734234667__ad398476
Specialized Constructs : https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/special_graph_constructs.html?hl=template
Database in FPGA : https://dspace.mit.edu/bitstream/handle/1721.1/91829/894228451-MIT.pdf?sequence=2&isAllowed=y
Database in FPGA : https://www.xilinx.com/publications/events/developer-forum/2018-frankfurt/accelerating-databases-with-fpgas.pdf
Database in FPGA  : https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/vitis_hls_process.html#djn1584047476918
Vitis Examples : https://github.com/Xilinx/Vitis_Accel_Examples/blob/master/cpp_kernels/README.md
Running Accelerator : https://pynq.readthedocs.io/en/v2.6.1/pynq_alveo.html#running-accelerators
Pragma Interfaces : https://www.xilinx.com/html_docs/xilinx2017_4/sdaccel_doc/jit1504034365862.html
Interface of Streaming : https://www.xilinx.com/html_docs/xilinx2020_2/vitis_doc/managing_interface_synthesis.html#ariaid-title34

#### Future scope
The image processing can serve as a stepping stone for controlling multi-agent systems, where each streaming interface is used for instruction input and output for each agent. Instead of running an RTOS on each bot, we can have multiple data streams from the bots processed in an IP designed to emulate an FSM for each agent and decide its action. This can lead to higher robustness, fault tolerance and lower costs.


# Error logs and issues encountered
 [BD 41-759] The input pins (listed below) are either not connected or do not have a source port, and they don't have a tie-off specified. These pins are tied-off to all 0's to avoid error in Implementation flow.
Please check your design and connect them as needed: 
/color_filter/ap_start
This appears when `ap_ctrl_none` is not specified for the design, so the IP keeps an `ap_start` pin that ends up unconnected.
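
One way to avoid this, assuming the IP is meant to free-run on its streams, is to drop the block-level handshake in the HLS source so no `ap_start` pin is generated (sketch; the body shown is a placeholder pass-through):

```cpp
// Sketch: removing the block-level control protocol so the IP has no ap_start pin.
#include "ap_axi_sdata.h"
#include "hls_stream.h"

typedef ap_axiu<32, 1, 1, 1> pkt;

void color_filter(hls::stream<pkt> &in_s, hls::stream<pkt> &out_s) {
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
#pragma HLS INTERFACE ap_ctrl_none port=return     // no ap_start/ap_done handshake generated
    pkt p = in_s.read();
    out_s.write(p);                                // real filtering logic goes here
}
```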

Can't find the custom IP in Vivado: add the IP zip path to the IP repository, open the IP Integrator view, and manually add the IP from the IP configuration window.

Can't connect an hls::stream<> type object in the IP: the hls::stream class should always be passed between functions as a C++ reference argument, for example `&my_stream`. Note that the hls::stream class is only used in C++ designs, and arrays of streams are not supported.
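
A minimal signature that synthesizes and connects correctly (function and stream names are just placeholders): the streams are declared as C++ references in the argument list.

```cpp
// Sketch: hls::stream arguments must be C++ references, never by-value copies
// or arrays of streams.
#include "hls_stream.h"

void pass_through(hls::stream<int> &in_s, hls::stream<int> &out_s) {   // references
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
    out_s.write(in_s.read());                      // simple copy from input to output
}
```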

Non-blocking writes are not allowed on non-FIFO interfaces like axis; use blocking accesses instead, or move the port to an m_axi interface.
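
A small sketch of the access style that did work for us on an axis port (names are placeholders); the commented line shows the kind of non-blocking call the tool rejected:

```cpp
// Sketch: use blocking reads/writes on axis stream ports.
#include "hls_stream.h"

void copy_stream(hls::stream<int> &in_s, hls::stream<int> &out_s) {
#pragma HLS INTERFACE axis port=in_s
#pragma HLS INTERFACE axis port=out_s
    int v = in_s.read();                           // blocking read
    out_s.write(v);                                // blocking write: accepted on axis
    // bool ok = out_s.write_nb(v);                // non-blocking write: rejected here
}
```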

The DMA transfer size must be less than 16383, so we can't feed very large datasets directly through a single DMA transfer; they have to be chunked.