
Commit 74219c4

Review of C++ Sample READMEs (oneapi-src#852)
* Review of C++ Sample READMEs

  For each readme in the AI-and-Analytics folder, I reviewed and updated to include consistent instructions for:
  * using VS Code
  * sourcing or running setvars
  * troubleshooting with the diagnostics utility

* Shortened setvars
1 parent 5a8c6e4 commit 74219c4


7 files changed: +171 −35 lines changed


DirectProgramming/C++/CombinationalLogic/MandelbrotOMP/README.md

Lines changed: 15 additions & 1 deletion
@@ -54,13 +54,25 @@ The basic steps to build and run a sample using VS Code include:
 oneAPI toolkits using the **Generate Launch Configurations** extension.

 To learn more about the extensions, see
-[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).

 After learning how to use the extensions for Intel oneAPI Toolkits, return to this readme for instructions on how to build and run a sample.


 ## Building the `Mandelbrot` Program

+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script located in
+> the root of your oneAPI installation.
+>
+> Linux Sudo: . /opt/intel/oneapi/setvars.sh
+>
+> Linux User: . ~/intel/oneapi/setvars.sh
+>
+> Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
+>
+>For more information on environment variables, see Use the setvars Script for [Linux or macOS](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html), or [Windows](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-windows.html).
+
 Perform the following steps:
 1. Build the program using the following `make` commands.
 ```
@@ -78,6 +90,8 @@ $ make
 make clean
 ```

+If an error occurs, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits.
+[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)

 ## Running the Sample

DirectProgramming/C++/CompilerInfrastructure/Intrinsics/README.md

Lines changed: 15 additions & 1 deletion
@@ -45,12 +45,24 @@ The basic steps to build and run a sample using VS Code include:
 - Run the sample in the VS Code terminal using the instructions below.

 To learn more about the extensions and how to configure the oneAPI environment, see
-[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).

 After learning how to use the extensions for Intel oneAPI Toolkits, return to this readme for instructions on how to build and run a sample.

 ## Building the `Intrinsics` Program

+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script located in
+> the root of your oneAPI installation.
+>
+> Linux Sudo: . /opt/intel/oneapi/setvars.sh
+>
+> Linux User: . ~/intel/oneapi/setvars.sh
+>
+> Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
+>
+>For more information on environment variables, see Use the setvars Script for [Linux or macOS](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html), or [Windows](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-windows.html).
+
 Perform the following steps:
 1. Build the program using the following `make` commands.
 ```
@@ -67,6 +79,8 @@ $ make (or "make debug" to compile with the -g flag)
 make clean
 ```

+If an error occurs, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits.
+[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)

 ### Application Parameters

DirectProgramming/C++/CompilerInfrastructure/OpenMP_Offload_Features/README.md

Lines changed: 9 additions & 2 deletions
@@ -63,13 +63,17 @@ After learning how to use the extensions for Intel oneAPI Toolkits, return to th

 ## Building the Program

-> Note: if you have not already done so, set up your CLI
-> environment by sourcing the setvars script located in
+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script located in
 > the root of your oneAPI installation.
 >
 > Linux Sudo: . /opt/intel/oneapi/setvars.sh
+>
 > Linux User: . ~/intel/oneapi/setvars.sh
+>
 > Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
+>
+>For more information on environment variables, see Use the setvars Script for [Linux or macOS](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html), or [Windows](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-windows.html).


 ### Running Samples In DevCloud
@@ -105,6 +109,9 @@ Perform the following steps:
 make clean
 ```

+If an error occurs, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits.
+[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)
+
 ### Example of Output

 ```

DirectProgramming/C++/GraphTraversal/MergesortOMP/README.md

Lines changed: 36 additions & 6 deletions
@@ -22,14 +22,28 @@ Performance number tabulation

 ## Purpose

-Merge sort is a highly efficient recursive sorting algorithm. Known for its greater efficiency over other common sorting algorithms, it can compute in O(nlogn) time instead of O(n^2), making it a common choice for sorting implementations that deal with large quantities of elements. While it is already a very fast algorithm-- capable of sorting lists in a fraction of the time it would take an algorithm such as quicksort or insertion sort, it can be further accelerated with parallelism using OpenMP.
+Merge sort is a highly efficient recursive sorting algorithm. Known for its
+greater efficiency over other common sorting algorithms, it can compute in
+O(nlogn) time instead of O(n^2), making it a common choice for sorting
+implementations that deal with large quantities of elements. While it is
+already a very fast algorithm-- capable of sorting lists in a fraction of the
+time it would take an algorithm such as quicksort or insertion sort, it can be
+further accelerated with parallelism using OpenMP.

-This code sample demonstrates how to convert a scalar implementation of merge sort into a parallelized version with minimal changes to the original, using OpenMP pragmas.
+This code sample demonstrates how to convert a scalar implementation of merge
+sort into a parallelized version with minimal changes to the original, using
+OpenMP pragmas.


 ## Key Implementation Details

-The OpenMP* version of the merge sort implementation uses the #pragma omp task in its recursive calls, which allows the recursive calls to be handled by different threads. The #pragma omp taskawait preceding the function call to merge() ensures the two recursive calls are completed before the merge() is executed. Through this use of OpenMP* pragmas, the recursive sorting algorithm can effectively run in parallel, where each recursion is a unique task able to be performed by any available thread.
+The OpenMP* version of the merge sort implementation uses the #pragma omp task
+in its recursive calls, which allows the recursive calls to be handled by
+different threads. The #pragma omp taskawait preceding the function call to
+merge() ensures the two recursive calls are completed before the merge() is
+executed. Through this use of OpenMP* pragmas, the recursive sorting algorithm
+can effectively run in parallel, where each recursion is a unique task able to
+be performed by any available thread.

 ## License

@@ -42,8 +56,8 @@ Third party program Licenses can be found here: [third-party-programs.txt](https

 ### Using Visual Studio Code* (Optional)

-You can use Visual Studio Code (VS Code) extensions to set your environment, create launch configurations,
-and browse and download samples.
+You can use Visual Studio Code (VS Code) extensions to set your environment,
+create launch configurations, and browse and download samples.

 The basic steps to build and run a sample using VS Code include:
 - Download a sample using the extension **Code Sample Browser for Intel oneAPI Toolkits**.
@@ -58,6 +72,18 @@ After learning how to use the extensions for Intel oneAPI Toolkits, return to th

 ## Building the `Merge Sort` Program

+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script located in
+> the root of your oneAPI installation.
+>
+> Linux Sudo: . /opt/intel/oneapi/setvars.sh
+>
+> Linux User: . ~/intel/oneapi/setvars.sh
+>
+> Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
+>
+>For more information on environment variables, see Use the setvars Script for [Linux or macOS](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html), or [Windows](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-windows.html).
+
 Perform the following steps:
 1. Build the program using the following `make` commands.
 ```
@@ -77,10 +103,14 @@ $ make
 make clean
 ```

+If an error occurs, troubleshoot the problem using the Diagnostics Utility for
+Intel® oneAPI Toolkits.
+[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)

 ### Application Parameters

-There are two configurable options defined near the top of the code, both of which affect the program's performance:
+There are two configurable options defined near the top of the code, both of
+which affect the program's performance:

 - constexpr int task_threshold - This determines the minimum size of the list passed to the OpenMP merge sort function required to call itself and not the scalar version recursively. Its purpose is to reduce the threading overhead as it gets less efficient on smaller list sizes. Setting this value too small can reduce the OpenMP implementation's performance as it has more threading overhead for smaller workloads.
 - constexpr int n - This determines the size of the list used to test the merge sort functions. Setting it larger will result in longer runtime and is useful for analyzing the algorithm's runtime growth rate.
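
The `Key Implementation Details` and `Application Parameters` sections quoted above describe the task-based recursion: each half of the list is sorted in its own OpenMP task, a taskwait (the directive is spelled `#pragma omp taskwait`) makes both halves finish before `merge()` runs, and `task_threshold` limits how small a sublist may still spawn tasks. A minimal sketch of that pattern, using illustrative names (`merge_sort_parallel`, `merge`) rather than the sample's actual source, is:

```cpp
#include <algorithm>
#include <vector>

// Illustrative cutoff: below this size, recurse serially to limit task overhead
// (the sample exposes a similar knob as task_threshold).
constexpr int task_threshold = 1 << 14;

// Two-way merge of v[lo, mid) and v[mid, hi) through a scratch buffer.
void merge(std::vector<int>& v, std::vector<int>& tmp, int lo, int mid, int hi) {
    std::merge(v.begin() + lo, v.begin() + mid,
               v.begin() + mid, v.begin() + hi,
               tmp.begin() + lo);
    std::copy(tmp.begin() + lo, tmp.begin() + hi, v.begin() + lo);
}

void merge_sort_parallel(std::vector<int>& v, std::vector<int>& tmp, int lo, int hi) {
    if (hi - lo < 2) return;
    const int mid = lo + (hi - lo) / 2;
    if (hi - lo < task_threshold) {               // small range: plain recursion
        merge_sort_parallel(v, tmp, lo, mid);
        merge_sort_parallel(v, tmp, mid, hi);
    } else {
        // Each half becomes an independent task that any idle thread may pick up.
        #pragma omp task shared(v, tmp)
        merge_sort_parallel(v, tmp, lo, mid);
        #pragma omp task shared(v, tmp)
        merge_sort_parallel(v, tmp, mid, hi);
        #pragma omp taskwait                      // both halves must finish before merging
    }
    merge(v, tmp, lo, mid, hi);
}

int main() {
    std::vector<int> v(1 << 20);
    for (std::size_t i = 0; i < v.size(); ++i) v[i] = static_cast<int>(v.size() - i);
    std::vector<int> tmp(v.size());

    #pragma omp parallel   // create the thread team
    #pragma omp single     // one thread starts the recursion; tasks fan out from there
    merge_sort_parallel(v, tmp, 0, static_cast<int>(v.size()));

    return std::is_sorted(v.begin(), v.end()) ? 0 : 1;
}
```

Compile with an OpenMP-enabled flag (for example `g++ -fopenmp`, or the Intel compiler's equivalent OpenMP option); without it the pragmas are ignored and the code simply runs serially.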

DirectProgramming/C++/Jupyter/OpenMP-offload-training/README.md

Lines changed: 11 additions & 11 deletions
@@ -3,16 +3,16 @@
 The the content of this repo is a collection of Jupyter notebooks that were
 developed to teach OpenMP Offload.

-The Jupyter notebooks are tested and can be run on the Intel Devcloud. Below
-are the steps to access these Jupyter notebooks on the Intel Devcloud:
+The Jupyter notebooks are tested and can be run on the Intel DevCloud. Below
+are the steps to access these Jupyter notebooks on the Intel DevCloud:

-1. Register with the Intel Devcloud at
-https://intelsoftwaresites.secure.force.com/devcloud/oneapi
+1. Register with the Intel DevCloud at
+https://intelsoftwaresites.secure.force.com/devcloud/oneapi.

-2. SSH into the Intel Devcloud "terminal"
+2. SSH into the Intel DevCloud "terminal".

 3. Type the following command to download the oneAPI-essentials series of
-Jupyter notebooks and OpenMP offload notebooks into your devcloud account
+Jupyter notebooks and OpenMP offload notebooks into your DevCloud account
 `/data/oneapi_workshop/get_jupyter_notebooks.sh`

 | Optimized for | Description
@@ -26,8 +26,8 @@ are the steps to access these Jupyter notebooks on the Intel Devcloud:

 ## Running the Jupyter Notebooks

-1. Open "OpenMP Welcome.ipynb" with JupyterLab
-2. Start the modules of interest
+1. Open "OpenMP Welcome.ipynb" with JupyterLab.
+2. Start the modules of interest.
 3. Follow the instructions in each notebook and execute cells when instructed.

 ## License
@@ -45,20 +45,20 @@ Third party program Licenses can be found here:
 * Introduce Developer Training Modules
 * Describe oneAPI Tool Modules

-[Introduction to OpenMP Offload](intro)
+[Introduction to OpenMP Offload](intro)
 * oneAPI Software Model Overview and Workflow
 * HPC Single-Node Workflow with oneAPI
 * Simple OpenMP Code Example
 * Target Directive Explanation
 * _Lab Exercise_: Vector Increment with Target Directive

-[Managing Data Transfers](datatransfer)
+[Managing Data Transfers](datatransfer)
 * Offloading Data
 * Target Data Region
 * _Lab Exercise_: Target Data Region
 * Mapping Global Variable to Device

-[Utilizing GPU Parallelism](parallelism)
+[Utilizing GPU Parallelism](parallelism)
 * Device Parallelism
 * OpenMP Constructs and Teams
 * Host Device Concurrency
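
The module list above walks from the target directive through data mapping to device parallelism. As a rough, hypothetical illustration of the kind of exercise those modules cover (not code taken from the notebooks), a vector increment offloaded with the target directive looks like:

```cpp
#include <cstdio>

int main() {
    constexpr int N = 1024;
    int v[N];
    for (int i = 0; i < N; ++i) v[i] = i;

    // Offload the loop to the default device; map() copies the array to device
    // memory and back to the host when the target region ends.
    #pragma omp target teams distribute parallel for map(tofrom: v[0:N])
    for (int i = 0; i < N; ++i)
        v[i] += 1;

    std::printf("v[0]=%d v[%d]=%d\n", v[0], N - 1, v[N - 1]);
    return 0;
}
```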

DirectProgramming/C++/ParallelPatterns/openmp_reduction/README.md

Lines changed: 40 additions & 8 deletions
@@ -14,14 +14,21 @@ For comprehensive instructions see the [DPC++ Programming](https://software.inte
 | Time to complete | 10 min

 ## Purpose
-This example demonstrates how to do reduction by using the CPU in serial mode, the CPU in parallel mode (using OpenMP), the GPU using OpenMP offloading.
+This example demonstrates how to do reduction by using the CPU in
+serial mode, the CPU in parallel mode (using OpenMP), the GPU using OpenMP
+offloading.

-All the different modes use a simple calculation for Pi. It is a well known mathematical formula that if you integrate from 0 to 1 over the function, (4.0 / (1+x*x) )dx, the answer is pi. One can approximate this integral by summing up the area of a large number of rectangles over this same range.
+All the different modes use a simple calculation for Pi. It is a well known
+mathematical formula that if you integrate from 0 to 1 over the function, (4.0
+/ (1+x*x) )dx, the answer is pi. One can approximate this integral by summing
+up the area of a large number of rectangles over this same range.

-Each of the different functions calculates pi by breaking the range into many tiny rectangles and then summing up the results.
+Each of the different functions calculates pi by breaking the range into many
+tiny rectangles and then summing up the results.

 ## Key Implementation Details
-This code shows how to use OpenMP on the CPU host as well as using target offload capabilities.
+This code shows how to use OpenMP on the CPU host
+as well as using target offload capabilities.

 ## License
 Code samples are licensed under the MIT license. See [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
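
The Purpose text above sums the rectangle heights of 4/(1+x*x) over [0,1], once per execution mode. A minimal sketch of the two OpenMP variants (illustrative only; the function names and step count are assumptions, not the sample's actual source) could be:

```cpp
#include <cstdio>

constexpr long num_steps = 1000000;

// CPU-parallel version: partial sums are combined with an OpenMP reduction.
double pi_omp_host() {
    const double step = 1.0 / num_steps;
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < num_steps; ++i) {
        const double x = (i + 0.5) * step;    // midpoint of rectangle i
        sum += 4.0 / (1.0 + x * x);           // height of rectangle i
    }
    return sum * step;                        // width times summed heights
}

// GPU version: the same reduction, offloaded with the target directive.
double pi_omp_target() {
    const double step = 1.0 / num_steps;
    double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+ : sum) map(tofrom : sum)
    for (long i = 0; i < num_steps; ++i) {
        const double x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }
    return sum * step;
}

int main() {
    std::printf("host:   pi ~= %.10f\n", pi_omp_host());
    std::printf("target: pi ~= %.10f\n", pi_omp_target());
    return 0;
}
```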
@@ -31,10 +38,14 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 ## Building the dpc_reduce program for CPU and GPU

 ### Include Files
-The include folder is located at `%ONEAPI_ROOT%\dev-utilities\latest\include` on your development system".
+The include folder is located at
+`%ONEAPI_ROOT%\dev-utilities\latest\include` on your development system".

 ### Running Samples In DevCloud
-If running a sample in the Intel DevCloud, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/)
+If running a sample in the Intel DevCloud,
+remember that you must specify the compute node (CPU, GPU, FPGA) and whether to
+run in batch or interactive mode. For more information, see the
+[Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/)


 ### Using Visual Studio Code* (Optional)
@@ -49,11 +60,25 @@ The basic steps to build and run a sample using VS Code include:
 - Run the sample in the VS Code terminal using the instructions below.

 To learn more about the extensions and how to configure the oneAPI environment, see
-[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).

-After learning how to use the extensions for Intel oneAPI Toolkits, return to this readme for instructions on how to build and run a sample.
+After learning how to use the extensions for Intel oneAPI Toolkits, return to
+this readme for instructions on how to build and run a sample.

 ### On a Linux* System
+
+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script located in
+> the root of your oneAPI installation.
+>
+> Linux Sudo: . /opt/intel/oneapi/setvars.sh
+>
+> Linux User: . ~/intel/oneapi/setvars.sh
+>
+> Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
+>
+>For more information on environment variables, see Use the setvars Script for [Linux or macOS](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html), or [Windows](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-windows.html).
+
 Perform the following steps:
 ```
 mkdir build
@@ -77,6 +102,13 @@ Clean the program using:
 make clean
 ```

+If an error occurs, you can get more details by running `make` with the `VERBOSE=1` argument:
+``make VERBOSE=1``
+For more comprehensive troubleshooting, use the Diagnostics Utility for
+Intel® oneAPI Toolkits, which provides system checks to find missing dependencies and permissions errors.
+[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html).
+
+
 ## Running the Sample

 ### Application Parameters
