Commit 701d3d8

hmm readme (oneapi-src#579)
Updated Hidden Markov Model readme to include instructions for Running on DevCloud.
1 parent 1deb9aa commit 701d3d8

File tree

1 file changed: +108 -1 lines changed
  • DirectProgramming/DPC++/GraphTraversal/hidden-markov-models


DirectProgramming/DPC++/GraphTraversal/hidden-markov-models/README.md

Lines changed: 108 additions & 1 deletion
@@ -42,6 +42,9 @@ Third party program Licenses can be found here: [third-party-programs.txt](https

## Building the `DPC++ Hidden Markov Model` Program for CPU and GPU

### Running Samples In DevCloud
Running samples in the Intel DevCloud requires you to specify a compute node. For specific instructions, jump to [Run the Hidden Markov Model sample in the DevCloud](#run-hmm-on-devcloud).

### Include Files
The include folder is located at %ONEAPI_ROOT%\dev-utilities\latest\include on your development system.

@@ -94,4 +97,108 @@ Device: Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz Intel(R) OpenCL
The Viterbi path is:
19 18 17 16 15 14 13 12 11 10
The sample completed successfully!
```

### Running the Hidden Markov Model sample in the DevCloud<a name="run-hmm-on-devcloud"></a>
1. Open a terminal on your Linux system.
2. Log in to DevCloud.
```
ssh devcloud
```
3. Download the samples.
```
git clone https://github.com/oneapi-src/oneAPI-samples.git
```

4. Change directories to the Hidden Markov Model sample directory.
```
cd ~/oneAPI-samples/DirectProgramming/DPC++/GraphTraversal/hidden-markov-models
```
#### Build and run the sample in batch mode
The following describes the process of submitting build and run jobs to PBS.
A job is a script submitted to PBS through the qsub utility. By default, qsub does not inherit your current environment variables or working directory, so jobs are submitted as scripts that set up the environment themselves. To handle the working directory, either use absolute paths inside the script or pass the -d \<dir\> option to qsub.
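
If you prefer not to pass -d on every submission, the script itself can restore the working directory. A minimal sketch, assuming the standard PBS behavior in which PBS_O_WORKDIR holds the directory qsub was invoked from:
```
#!/bin/bash
# qsub does not inherit your login environment, so set up oneAPI inside the job.
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
# PBS exports PBS_O_WORKDIR as the directory qsub was run from; cd there instead of using -d.
cd "$PBS_O_WORKDIR"
# ...build or run commands go here...
```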

#### Create the Job Scripts
1. Create a build.sh script with your preferred text editor:
```
nano build.sh
```
2. Add this text into the build.sh file:
```
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
mkdir build
cd build
cmake ..
make
```

3. Save and close the build.sh file.

4. Create a run.sh script with your preferred text editor:
```
nano run.sh
```

5. Add this text into the run.sh file:
```
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
cd build
make run
```
6. Save and close the run.sh file.
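
If you prefer not to use an interactive editor, the same two scripts can be created non-interactively with shell here-documents; the sketch below simply reproduces the build.sh and run.sh contents above:
```
cat > build.sh <<'EOF'
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
mkdir build
cd build
cmake ..
make
EOF

cat > run.sh <<'EOF'
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
cd build
make run
EOF
```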

#### Build and run
Jobs submitted in batch mode are placed in a queue until the necessary resources (compute nodes) become available. Jobs are executed on a first-come, first-served basis on the first available node(s) that have the requested property or label.
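
To see which properties (such as gpu) the compute nodes advertise, you can query PBS from the DevCloud login node. A sketch, assuming the pbsnodes utility is available there:
```
# Summarize the distinct property sets advertised by the compute nodes.
pbsnodes | grep properties | sort | uniq -c
```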
1. Build the sample on a GPU node.

```
qsub -l nodes=1:gpu:ppn=2 -d . build.sh
```

Note: -l nodes=1:gpu:ppn=2 (lowercase L) assigns one full GPU node to the job.
Note: The -d . option sets the current folder as the working directory for the task. (An interactive alternative to batch submission is sketched after this list.)

2. To inspect the job's progress, use the qstat utility.
```
watch -n 1 qstat -n -1
```
Note: The watch -n 1 command runs qstat -n -1 and refreshes its output every second. If no results are displayed, the job has completed.

3. After the build job completes successfully, run the sample on a GPU node:
```
qsub -l nodes=1:gpu:ppn=2 -d . run.sh
```
4. When a job terminates, two files are written to disk:

<script_name>.sh.eXXXX, which is the job stderr

<script_name>.sh.oXXXX, which is the job stdout

Here XXXX is the job ID, which is printed to the screen after each qsub command.

5. Inspect the output of the sample.
```
cat run.sh.oXXXX
```
You should see output similar to this:

```
[100%] Built target hidden-markov-models
Scanning dependencies of target run
Device: Intel(R) UHD Graphics P630 [0x3e96] Intel(R) Level-Zero
The Viterbi path is:
16 4 17 0 16 8 16 4 17 0 1 4 17 8 16 8 16 8 12 11
The sample completed successfully!
[100%] Built target run
```

6. Remove the stdout and stderr files and clean up the project files.
```
rm build.sh.*; rm run.sh.*; make clean
```
7. Disconnect from the Intel DevCloud.
```
exit
```
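
As mentioned in the note on step 1, batch scripts are not the only option: PBS also supports interactive jobs via qsub -I. A sketch of building and running by hand on a GPU node, assuming the same node properties and paths used above:
```
qsub -I -l nodes=1:gpu:ppn=2 -d .
# Once the interactive shell opens on the compute node:
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
mkdir build && cd build
cmake .. && make && make run
```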

### Build and run additional samples
Several other sample programs are available for you to try, many of which can be compiled and run in the same fashion as this Hidden Markov Model sample. Experiment with running the various samples on different kinds of compute nodes, or adjust their source code to experiment with different workloads.
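
For example, resubmitting the run job to a CPU-only node only requires changing the node property in the qsub request; the property name below (xeon) is illustrative rather than authoritative, so check the pbsnodes output for the properties actually available:
```
qsub -l nodes=1:xeon:ppn=2 -d . run.sh
```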
