## Building the `DPC++ Hidden Markov Model` Program for CPU and GPU
### Running Samples In DevCloud
Running samples in the Intel DevCloud requires you to specify a compute node. For specific instructions, jump to [Run the Hidden Markov Model sample in the DevCloud](#run-hmm-on-devcloud).
### Include Files
The include folder is located at `%ONEAPI_ROOT%\dev-utilities\latest\include` on your development system.
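If you compile the sample manually rather than through the provided build scripts, you can add this folder to the compiler's include path yourself. A minimal sketch on Linux, where the `dpcpp` invocation and the source file name are assumptions, not the sample's documented build command:

```
# Add the oneAPI dev-utilities headers to the include path (Linux shell syntax).
dpcpp -I"${ONEAPI_ROOT}/dev-utilities/latest/include" src/main.cpp -o hmm
```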
4. Change directories to the Hidden Markov Model sample directory.

```
cd ~/oneAPI-samples/DirectProgramming/DPC++/GraphTraversal/hidden-markov-models
```

#### Build and run the sample in batch mode
The following describes the process of submitting build and run jobs to PBS.
A job is a script that is submitted to PBS through the `qsub` utility. By default, `qsub` does not inherit the current environment variables or your current working directory, so jobs must be submitted as scripts that handle the setup of the environment variables. To address the working directory issue, either use absolute paths or pass the `-d <dir>` option to `qsub` to set the working directory.

#### Create the Job Scripts

1. Create a `build.sh` script with your preferred text editor:
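A minimal sketch of such a script, assuming the standard oneAPI `setvars.sh` location and the sample's CMake build flow (adjust paths for your installation):

```
#!/bin/bash
# Set up the oneAPI environment (install path is an assumption).
source /opt/intel/oneapi/setvars.sh > /dev/null 2>&1
# Configure and build the sample out-of-source with CMake.
mkdir -p build
cd build
cmake ..
make
```

A `run.sh` script can follow the same pattern: source `setvars.sh`, then invoke the built binary or `make run` from the build directory.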
Jobs submitted in batch mode are placed in a queue to wait for the necessary resources (compute nodes) to become available. Jobs are executed on a first-come, first-served basis on the first available node(s) that have the requested property or label.
1. Build the sample on a gpu node.

```
qsub -l nodes=1:gpu:ppn=2 -d . build.sh
```

Note: `-l nodes=1:gpu:ppn=2` (lowercase L) is used to assign one full GPU node to the job.

Note: `-d .` is used to configure the current folder as the working directory for the task.
2. To inspect the job progress, use the `qstat` utility.

```
watch -n 1 qstat -n -1
```

Note: The `watch -n 1` command runs `qstat -n -1` and displays its results every second. If no results are displayed, the job has completed.
3. After the build job completes successfully, run the sample on a gpu node:

```
qsub -l nodes=1:gpu:ppn=2 -d . run.sh
```

4. When a job terminates, two files are written to disk:

- `<script_name>.sh.eXXXX`, which is the job stderr
- `<script_name>.sh.oXXXX`, which is the job stdout

Here XXXX is the job ID, which is printed to the screen after each `qsub` command.
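For example, assuming a job ID of 12345 (a hypothetical value), you could inspect the run job's output with:

```
# Print the run job's stdout; the job ID shown is hypothetical.
cat run.sh.o12345
```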
6. Remove the stdout and stderr files and clean up the project files.

```
rm build.sh.*; rm run.sh.*; make clean
```

7. Disconnect from the Intel DevCloud.

```
exit
```

### Build and run additional samples
Several sample programs are available for you to try, many of which can be compiled and run in a similar fashion to this sample. Experiment with running the various samples on different kinds of compute nodes, or adjust their source code to try different workloads.