Commit 7915de8: histogram readme.md (oneapi-src#583)

Updated histogram readme for how to run sample on DevCloud in batch mode.
1 parent: d8ee396

File tree: 1 file changed, +108 −1 lines
  • DirectProgramming/DPC++/ParallelPatterns/histogram


DirectProgramming/DPC++/ParallelPatterns/histogram/README.md

Lines changed: 108 additions & 1 deletion
@@ -30,7 +30,11 @@

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Building the histogram program for CPU and GPU

### Running Samples In DevCloud
Running samples in the Intel DevCloud requires you to specify a compute node. For specific instructions, jump to [Run the Histogram sample on the DevCloud](#run-histogram-on-devcloud).

### On a Linux* System
Perform the following steps:
@@ -76,3 +80,106 @@
```
Sparse Histogram:
[(0, 161) (1, 170) (2, 136) (3, 108) (5, 105) (6, 110) (7, 108) (8, 102) ]
```
### Running the Histogram sample in the DevCloud<a name="run-histogram-on-devcloud"></a>
1. Open a terminal on your Linux system.
2. Log in to DevCloud.
```
ssh devcloud
```
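Step 2 relies on your SSH client knowing a `devcloud` host alias; the DevCloud access script you run during account setup normally appends one to `~/.ssh/config`. The sketch below shows the rough shape of such an entry; the user name and key path are placeholders, not real values:

```
Host devcloud
    User uXXXXX                        # placeholder: your DevCloud account id
    IdentityFile ~/.ssh/devcloud-key   # placeholder: key issued during setup
```

If `ssh devcloud` is not recognized, re-check this entry before proceeding.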
3. Download the samples.
```
git clone https://github.com/oneapi-src/oneAPI-samples.git
```

4. Change directories to the histogram sample directory.
```
cd ~/oneAPI-samples/DirectProgramming/DPC++/ParallelPatterns/histogram
```
#### Build and run the sample in batch mode
The following describes the process of submitting build and run jobs to PBS.
A job is a script that is submitted to PBS through the qsub utility. By default, the qsub utility does not inherit the current environment variables or your current working directory. For this reason, it is necessary to submit jobs as scripts that handle the setup of the environment variables. To address the working directory issue, you can either use absolute paths or pass the -d \<dir\> option to qsub to set the working directory.
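The two submission styles described above can be sketched as follows. The sample path is the one used in this README, and `build.sh` is the script created in the next section; the qsub commands are only echoed here so the sketch can run on a machine without PBS installed:

```
#!/bin/sh
# Option 1: use absolute paths, so the default working directory does not matter.
SAMPLE_DIR="$HOME/oneAPI-samples/DirectProgramming/DPC++/ParallelPatterns/histogram"
echo "qsub -d $SAMPLE_DIR $SAMPLE_DIR/build.sh"

# Option 2: submit from inside the sample directory and pass -d . instead.
echo "qsub -d . build.sh"
```

On a real DevCloud login node you would run the quoted commands directly rather than echoing them.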
#### Create the Job Scripts
1. Create a build.sh script with your preferred text editor:
```
nano build.sh
```
2. Add this text into the build.sh file:
```
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
mkdir build
cd build
cmake ..
make
```

3. Save and close the build.sh file.

4. Create a run.sh script with your preferred text editor:
```
nano run.sh
```

5. Add this text into the run.sh file:
```
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
cd build
make run
```
6. Save and close the run.sh file.

#### Build and run
Jobs submitted in batch mode are placed in a queue, waiting for the necessary resources (compute nodes) to become available. The jobs are executed on a first-come basis as soon as a node with the requested property or label is available.
1. Build the sample on a gpu node.

```
qsub -l nodes=1:gpu:ppn=2 -d . build.sh
```

Note: -l nodes=1:gpu:ppn=2 (lower-case L) is used to assign one full GPU node to the job.
Note: The -d . is used to configure the current folder as the working directory for the task.

2. To inspect the job progress, use the qstat utility.
```
watch -n 1 qstat -n -1
```
Note: The watch -n 1 command runs qstat -n -1 and displays its results every second. If no results are displayed, the job has completed.
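If you would rather have a script block until the queue is empty than watch it interactively, a small polling loop works. This is a sketch, not part of the sample; the `command -v` guard skips the loop on machines without PBS so the snippet stays runnable anywhere:

```
#!/bin/sh
# Poll the queue once per second until no jobs remain (sketch).
if command -v qstat >/dev/null 2>&1; then
  while [ -n "$(qstat -n -1)" ]; do
    sleep 1
  done
fi
echo "queue is empty"
```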
3. After the build job completes successfully, run the sample on a gpu node:
```
qsub -l nodes=1:gpu:ppn=2 -d . run.sh
```
4. When a job terminates, two files are written to disk:

<script_name>.sh.eXXXX, which is the job stderr

<script_name>.sh.oXXXX, which is the job stdout

Here XXXX is the job ID, which gets printed to the screen after each qsub command.
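The identifier printed by qsub usually also carries the scheduler's host name; only the leading numeric part is the XXXX used in the file names. The sketch below uses a made-up job identifier (yours will differ) to show how the stdout file name is derived:

```
#!/bin/sh
# Hypothetical qsub output; the numeric prefix is the XXXX part.
JOB="123456.v-qsvr-1"        # placeholder value, not a real job id
JOBID="${JOB%%.*}"           # strip everything after the first dot
echo "run.sh.o${JOBID}"      # prints run.sh.o123456, the stdout file of run.sh
```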
5. Inspect the output of the sample.
```
cat run.sh.oXXXX
```
You should see output similar to this:

```
Input:
1 1 8 1 8 6 1 0 1 5 5 2 2 8 1 2 1 1 1 6 2 1 1 8 3 6 6 2 2 1 1 8 1 0 0 0 2 2 7 6 5 1 6 1 1 6 1 5 1 0 0 1 1 1 0 5
5 0 7 0 1 6 0 5 7 0 3 0 0 0 0 6 0 2 5 5 6 6 8 7 6 6 8 8 7 7 2 2 0 7 2 2 5 2 7 1 3 0 1 1 0 1 7 2 0 1 5 1 7 0 8 3 1 5 0 6 1 0 8 2 7 2 1 1 1 3 2 5 1 2 5 1 6 3 3 1 3 8 0 1 1 8 2 0 2 0 1 2 0 2 1 8 1 6 0 6 7 1 1 8 3 6 0 7 7 1 6 1 7 6 1 8 3 3 6 3 1 2 7 2 1 0 1 8 7 0 5 5 1 1 3 2 1 3 7 0 3 2 1 1 8 0 1 0 2 5 3 6 7 0 6 2 0 8 8 5 6 3 0 5 7 3 5 0 0 3 7 7 5 6 7 2 7 8 0 0 2 3 0 1 3 1 1 2 7 1 5 1 0 3 7 2 0 3 0 0 6 7 5 0 5 3 0 3 0 0 1 3 2 5 2 3 6 3 5 5 2 0 7 6 3 6 7 6 0 7 6 5 6 0 3 0 2 1 1 0 2 2 1 1 7 3 8 2 5 2 7 7 2 1 3 2 1 1 1 8 6 5 2 3 3 6 1 5 8 2 1 1 2 5 2 0 7 3 3 3 3 8 8 0 1 2 8 2 3 7 0 8 1 2 2 1 6 2 8 5 1 3 5 7 8 0 5 2 1 8 7 0 6 7 8 7 7 5 8 0 3 8 8 2 8 1 7 2 1 6 0 0 7 3 2 2 1 7 0 2 5 7 5 2 3 1 0 2 1 6 2 2 3 1 5 3 0 3 5 0 7 3 1 5 7 6 7 8 2 7 0 7 2 5 7 5 0 6 5 8 3 7 0 7 6 5 8 5 6 2 5 2 5 0 5 1 1 3 1 6 0 8 3 0 0 1 7 2 5 2 0 7 2 0 3 7 3 0 3 0 2 6 0 7 6 5 0 1 8 8 5 8 7 8 1 0 8 0 2 2 2 2 0 2 0 3 0 3 3 3 3 3 7 3 2 0 6 0 3 0 8 0 1 1 6 3 1 3 1 0 6 3 7 1 5 7 8 6 0 0 7 1 1 6 3 2 8 0 2 3 0 1 1 6 3 5 7 7 0 8 2 1 0 7 8 5 2 5 0 0 6 6 5 8 3 8 1 2 7 5 3 2 1 0 8 7 8 1 3 8 1 3 3 1 2 0 5 1 6 3 6 1 0 2 7 3 0 8 1 7 2 5 7 6 8 5 2 7 0 5 6 2 8 7 1 8 7 2 3 2 8 0 3 8 1 1 1 1 7 5 6 0 8 2 6 7 7 8 5 8 2 2 8 2 7 0 1 6 3 5 8 2 3 1 1 2 0 2 3 8 5 7 8 5 1 1 1 8 1 7 5 0 7 1 0 6 3 5 1 6 8 0 6 1 8 7 5 0 8 7 6 2 5 5 5 6 7 7 1 0 5 0 2 3 3 6 0 1 0 1 8 7 0 5 8 6 3 2 2 0 0 1 3 6 5 8 1 3 2 5 1 0 6 3 0 7 7 2 2 8 2 1 1 2 6 3 6 7 5 2 8 6 3 0 1 8 6 0 1 2 6 0 0 1 2 2 8 0 5 1 6 7 0 1 7 6 1 2 2 8 6 8 5 8 8 1 5 1 1 6 6 8 7 6 0 0 0 6 7 3 5 5 8 5 2 6 2 7 8 3 6 1 2 0 1 2 1 6 6 6 2 1 6 7 5 0 5 3 2 3 6 7 6 5 2 2 0 1 0 7 7 6 0 8 1 1 1 8 7 5 3 7 1 0 5 0 3 1 2 5 5 8 1 0 3 5 0 1 8 0 6 0 0 6 3 8 5 2 5 1 5 0 2 0 7 6 8 1 7 1 0 1 0 6 0 1 0 0 1 8 1 7 2 3 3 5 1 8 6 6 1 2 2 2 3 1 8 2 2 6 3 7 6 1 2 6 1 2 6 2 0 5 0 2 7 3 5 8 3 2 3 1 5 6 6 6 7 3 8 0 8 0 5 5 8 5 0 0 6 2 0 6 8 1 6 6 2 0 3 5 3 2 8 6 1 3 3 8 7 0 7 6 7 1 0 6 7 0 5 0 0 5 8 1
Dense Histogram:
[(0, 161) (1, 170) (2, 136) (3, 108) (4, 0) (5, 105) (6, 110) (7, 108) (8, 102) ]
Sparse Histogram:
[(0, 161) (1, 170) (2, 136) (3, 108) (5, 105) (6, 110) (7, 108) (8, 102) ]
```
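As an aside, the only difference between the two histograms above is that the dense form keeps zero-count bins (here bin 4) while the sparse form drops them. A small awk illustration over a made-up input, not part of the sample, shows the same distinction:

```
#!/bin/sh
# Count values 0..3 in a tiny input, then print dense and sparse histograms.
printf '1 1 3 3 3 0\n' | awk '
  { for (i = 1; i <= NF; i++) count[$i]++ }
  END {
    # dense: every bin, including the empty (2, 0)
    for (v = 0; v <= 3; v++) printf "(%d, %d) ", v, count[v] + 0
    print ""
    # sparse: only bins that actually occurred
    for (v = 0; v <= 3; v++) if (count[v] > 0) printf "(%d, %d) ", v, count[v]
    print ""
  }'
```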
6. Remove the stdout and stderr files and clean up the project files.
```
rm build.sh.*; rm run.sh.*; make clean
```
7. Disconnect from the Intel DevCloud.
```
exit
```
### Build and run additional samples
Several sample programs are available for you to try, many of which can be compiled and run in a similar fashion to this sample. Experiment with running the various samples on different kinds of compute nodes, or adjust their source code to experiment with different workloads.
