
Commit a6df10c: Update README.md
1 parent f0f52ed


README.md (53 additions & 18 deletions)

## GPU

tiny-gpu is built to execute a single kernel at a time.

In order to launch a kernel, we need to do the following (a sketch of this flow follows the list):

1. Load global program memory with the kernel code
2. Load data memory with the necessary data
3. Specify the number of threads to launch in the device control register
4. Launch the kernel by setting the start signal to high.
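
As a rough illustration, here is a minimal Python sketch of that launch sequence against a toy device model. All names (`Device`, `load_program`, `load_data`) are illustrative, not the repo's actual interface.

```python
# Toy model of the kernel launch flow; names are illustrative, not the repo's API.

class Device:
    def __init__(self):
        self.program_mem = [0] * 256  # 256 rows x 16-bit instructions
        self.data_mem = [0] * 256     # 256 rows x 8-bit values
        self.thread_count = 0         # device control register
        self.start = 0                # start signal

    def load_program(self, kernel):
        self.program_mem[:len(kernel)] = kernel

    def load_data(self, addr, values):
        self.data_mem[addr:addr + len(values)] = values


device = Device()
device.load_program([0b0101000011011110])      # 1. kernel code (16-bit rows)
device.load_data(0, [1, 2, 3, 4, 5, 6, 7, 8])  # 2. necessary data
device.thread_count = 8                        # 3. device control register
device.start = 1                               # 4. set start signal high
```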

The GPU itself consists of the following units:

1. Device control register
2. Dispatcher
3. Variable number of compute cores
4. Memory controllers for data memory & program memory
5. Cache

**Device Control Register:**

The device control register usually stores metadata specifying how kernels should be executed on the GPU.

In this case, the device control register just stores the `thread_count` - the total number of threads to launch for the active kernel.

**Dispatcher:**

Once a kernel is launched, the dispatcher is the unit that actually manages the distribution of threads to different compute cores.

The dispatcher organizes threads into groups called **blocks** that can be executed in parallel on a single core, and sends these blocks off to be processed by available cores (a sketch of this distribution follows below).

Once all blocks have been processed, the dispatcher reports back that the kernel execution is done.
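
A minimal sketch of that distribution logic, assuming a hypothetical block size of 4 threads. The real dispatcher waits on per-core done signals rather than round-robining:

```python
import math

THREADS_PER_BLOCK = 4  # hypothetical block size

def dispatch(thread_count: int, num_cores: int) -> None:
    """Split threads into blocks and hand blocks to cores."""
    total_blocks = math.ceil(thread_count / THREADS_PER_BLOCK)
    for block_id in range(total_blocks):
        core_id = block_id % num_cores  # stand-in for "any available core"
        first = block_id * THREADS_PER_BLOCK
        last = min(first + THREADS_PER_BLOCK, thread_count)
        print(f"block {block_id} (threads {first}..{last - 1}) -> core {core_id}")
    print("kernel done")  # reported once all blocks are processed

dispatch(thread_count=9, num_cores=2)  # 3 blocks across 2 cores
```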

## Memory

The GPU is built to interface with an external global memory. Here, data memory and program memory are separated out for simplicity.

**Global Memory:**

tiny-gpu data memory has the following specifications:

- 8 bit addressability (256 total rows of data memory)
- 8 bit data (stores values of <256 for each row)

tiny-gpu program memory has the following specifications:

- 8 bit addressability (256 rows of program memory)
- 16 bit data (each instruction is 16 bits as specified by the ISA)
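
These widths can be summarized in a couple of lines of Python (variable names are illustrative):

```python
DATA_ADDR_BITS, DATA_WIDTH = 8, 8    # 256 rows, values < 256
PROG_ADDR_BITS, INSTR_WIDTH = 8, 16  # 256 rows, one 16-bit instruction each

data_mem = [0] * (2 ** DATA_ADDR_BITS)
program_mem = [0] * (2 ** PROG_ADDR_BITS)
assert len(data_mem) == len(program_mem) == 256
```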

**Memory Controllers:**

Global memory has fixed read/write bandwidth, but there may be far more incoming requests across all cores to access data from memory than the external memory is actually able to handle.

The memory controllers keep track of all the outgoing requests to memory from the compute cores, throttle requests based on actual external memory bandwidth, and relay responses from external memory back to the proper resources.

Each memory controller has a fixed number of channels based on the bandwidth of global memory.
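
A rough sketch of the throttling behavior, with a hypothetical channel count:

```python
from collections import deque

NUM_CHANNELS = 4  # hypothetical; fixed by external memory bandwidth

def controller_cycle(pending: deque, channels: list) -> None:
    """One cycle: move queued requests into free channels; the rest wait."""
    for i in range(len(channels)):
        if channels[i] is None and pending:
            channels[i] = pending.popleft()  # request is now in flight

pending = deque(f"core{i}-lsu" for i in range(7))  # 7 requests arrive
channels = [None] * NUM_CHANNELS
controller_cycle(pending, channels)
print(channels)       # 4 requests in flight
print(list(pending))  # 3 requests throttled until a channel frees up
```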

**Cache:**

The same data is often requested from global memory by multiple cores. Repeatedly accessing global memory is expensive, and since the data has already been fetched once, it is more efficient to store it on-device in SRAM to be retrieved much more quickly on later requests.

This is exactly what the cache is used for. Data retrieved from external memory is stored in cache and can be retrieved from there on later requests, freeing up memory bandwidth to be used for new data.
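
A minimal sketch of that hit/miss behavior, with a dict standing in for SRAM (real caches have fixed capacity and eviction):

```python
cache: dict[int, int] = {}  # address -> value; stand-in for on-device SRAM

def read(addr: int, global_mem: list[int]) -> int:
    if addr in cache:
        return cache[addr]    # hit: no global memory bandwidth used
    value = global_mem[addr]  # miss: fetch once from external memory
    cache[addr] = value       # keep it on device for later requests
    return value
```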

## Core

Each core has a number of compute resources, often built around a certain number of threads it can support. In order to maximize parallelization, these resources need to be managed optimally to maximize resource utilization.

In this simplified GPU, each core processes one **block** at a time, and for each thread in a block, the core has a dedicated ALU, LSU, PC, and register file. Managing the execution of thread instructions on these resources is one of the most challenging problems in GPUs.

**Scheduler:**

Each core has a single scheduler that manages the execution of threads.

In more advanced schedulers, techniques like **pipelining** are used to stream the execution of multiple instructions to maximize resource utilization.

The main constraint the scheduler has to work around is the latency associated with loading & storing data from global memory. While most instructions can be executed synchronously, these load-store operations are asynchronous, meaning the rest of the instruction execution has to be built around these long wait times.
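
A sketch of a scheduler state machine built around that constraint; the state names here are illustrative, not necessarily the repo's exact FSM:

```python
from enum import Enum, auto

class State(Enum):  # illustrative states
    FETCH = auto()
    DECODE = auto()
    EXECUTE = auto()
    WAIT = auto()    # stall here while async memory requests are outstanding
    UPDATE = auto()

def next_state(state: State, is_load_store: bool, mem_ready: bool) -> State:
    if state is State.FETCH:
        return State.DECODE
    if state is State.DECODE:
        return State.EXECUTE
    if state is State.EXECUTE:
        return State.WAIT if is_load_store else State.UPDATE
    if state is State.WAIT:
        return State.UPDATE if mem_ready else State.WAIT  # long wait times
    return State.FETCH  # UPDATE: advance the PC and start the next instruction
```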

**Fetcher:**

Asynchronously fetches the instruction at the current program counter from program memory (most fetches should actually hit the cache after a single block has been executed).

**Decoder:**

Decodes the fetched instruction into control signals for thread execution.
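
For illustration only, here is what a decode step could look like for a 16-bit instruction; the field layout below is invented, not the tiny-gpu ISA encoding:

```python
def decode(instr: int) -> dict[str, int]:
    """Split a 16-bit instruction into hypothetical 4-bit fields."""
    return {
        "opcode": (instr >> 12) & 0xF,  # selects control signals
        "rd":     (instr >> 8) & 0xF,   # destination register
        "rs":     (instr >> 4) & 0xF,   # source register
        "rt":     instr & 0xF,          # second source register
    }

print(decode(0b0011_0001_0010_0011))  # opcode=3, rd=1, rs=2, rt=3
```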

**Register Files:**

Each thread has its own dedicated register file. The register files hold the data that each thread is performing computations on, which enables the single-instruction multiple-data (SIMD) pattern.

Importantly, each register file contains a few read-only registers holding data about the current block & thread being executed locally, enabling kernels to be executed with different data based on the local thread id.
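
A sketch of a per-thread register file with read-only block/thread registers; the specific indices 13-15 follow common GPU convention and are assumed here, not taken from the spec:

```python
class RegisterFile:
    """Per-thread registers; indices 13-15 are assumed read-only."""
    def __init__(self, block_id: int, block_dim: int, thread_id: int):
        self.regs = [0] * 16
        self.read_only = {13: block_id, 14: block_dim, 15: thread_id}

    def read(self, i: int) -> int:
        return self.read_only.get(i, self.regs[i])

    def write(self, i: int, value: int) -> None:
        if i not in self.read_only:      # writes to read-only registers ignored
            self.regs[i] = value & 0xFF  # 8-bit data

rf = RegisterFile(block_id=1, block_dim=4, thread_id=2)
print(rf.read(15))  # local thread id lets each thread index different data
```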

**ALUs:**

Dedicated arithmetic-logic unit for each thread to perform computations. Handles the `ADD`, `SUB`, `MUL`, `DIV` arithmetic instructions.

Also handles the `CMP` comparison instruction, which outputs whether the difference between two registers is negative, zero, or positive - and stores the result in the `NZP` register in the PC unit.
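
A sketch of the `CMP` result as one-hot NZP bits:

```python
def cmp_nzp(a: int, b: int) -> tuple[int, int, int]:
    """Classify a - b as (negative, zero, positive); stored in the NZP register."""
    diff = a - b
    return (int(diff < 0), int(diff == 0), int(diff > 0))

assert cmp_nzp(3, 5) == (1, 0, 0)  # negative
assert cmp_nzp(4, 4) == (0, 1, 0)  # zero
assert cmp_nzp(7, 2) == (0, 0, 1)  # positive
```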

**LSUs:**

Dedicated load-store unit for each thread to access global data memory.

Handles the `LDR` & `STR` instructions - and handles the async wait times for memory requests to be processed and relayed by the memory controller.

**PCs:**

Dedicated program-counter for each thread to determine the next instruction to execute.

By default, the PC increments by 1 after every instruction.

With the `BRnzp` instruction, the PC unit checks whether the `NZP` register (set by a previous `CMP` instruction) matches some condition - and if it does, it will branch to a specific line of program memory. _This is how loops and conditionals are implemented._
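
A sketch of the next-PC selection; `condition` stands for the nzp bits a `BRnzp` instruction would carry, which is an assumed encoding detail:

```python
def next_pc(pc: int, is_branch: bool, nzp: tuple[int, int, int],
            condition: tuple[int, int, int], target: int) -> int:
    """Default: pc + 1. BRnzp: jump to target when NZP matches the condition."""
    if is_branch and any(n and c for n, c in zip(nzp, condition)):
        return target  # branch to a specific line of program memory
    return pc + 1

# e.g. loop: branch back to line 2 while the CMP result is still negative
print(next_pc(pc=5, is_branch=True, nzp=(1, 0, 0), condition=(1, 0, 0), target=2))
```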

Since threads are processed in parallel, tiny-gpu assumes that all threads "converge" to the same program counter after each instruction - which is a naive assumption for the sake of simplicity.

In real GPUs, individual threads can branch to different PCs, causing **branch divergence**, where a group of threads initially being processed together has to split into separate execution paths.
