
Commit 0b26e49

bdmoore1 authored and praveenkk123 committed
Bdmoore1 add vscode 0928 (oneapi-src#683)
* Added VS Code to vector-add
* Added terminal instructions. Added an overview of the steps for downloading and building a sample. Steps are focused on using the VS Code extensions.
* Add VS Code instructions. In most of the sample README files, I added instructions for how to learn more about using VS Code. If the sample seemed too complicated for VS Code (e.g., setting up a conda env or building folder trees), or the sample was not in the oneAPI samples browser extension, then I did not include the VS Code instructions.
1 parent 26f1429 commit 0b26e49

File tree: 108 files changed (+3047, −1316 lines)


AI-and-Analytics/End-to-end-Workloads/LidarObjectDetection-PointPillars/README.md

Lines changed: 22 additions & 7 deletions
@@ -19,7 +19,7 @@ PointPillars is an AI algorithm that uses LIDAR point clouds to detect and class
 3. Afterward, the pre-processed data is used by a so-called Pillar Feature Extraction (PFE) CNN to create a 2D image-like representation of the sensor environment. For the inference, this sample uses the Intel® Distribution of OpenVINO™ toolkit. The output of this CNN is a list of dense tensors (learned pillar features).
 4. To convert these dense tensors into a pseudo-image, a scatter operation is performed. This operation is again realized with SYCL and DPCPP.
 5. This pseudo-image is consumed by the second CNN, the so-called Region Proposal Network (RPN). The inference is performed with the help of the Intel® Distribution of OpenVINO™ toolkit. The output is an unfiltered list of possible object detections with their positions, dimensions, and classifications.
-6. Finally, this output data (object list) is post-processed with the help of the anchors created in the 2nd step. The anchors are used to decode each object's position, dimensions, and class. Afterwards, Non-Maximum Suppression (NMS) is used to filter out redundant/cluttered objects. Finally, the objects are sorted according to their likelihood and provided as output. All of these steps are implemented as SYCL and DPCPP kernels.
+6. Finally, this output data (object list) is post-processed with the help of the anchors created in the 2nd step. The anchors are used to decode each object's position, dimensions, and class. Afterwards, Non-Maximum Suppression (NMS) is used to filter out redundant/cluttered objects. Finally, the objects are sorted according to their likelihood and provided as output. All of these steps are implemented as SYCL and DPCPP kernels.

 By default, the application will use 'host' as the execution device for SYCL/DPCPP kernels and the CPU (single-threaded) for the Intel® Distribution of OpenVINO™ toolkit inferencing part. The execution device and the inferencing device are displayed in the output, along with the elapsed time of each of the five steps described above. For more details, refer to the section [Execution Options for the Sample Program](#execution-options-for-the-sample-program).
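
For reference, the NMS filtering mentioned in step 6 is a standard greedy algorithm. The following minimal Python sketch (not part of this commit; the sample implements NMS as SYCL/DPCPP kernels in C++) shows the idea for axis-aligned boxes given as [x1, y1, x2, y2]:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy Non-Maximum Suppression for axis-aligned boxes [x1, y1, x2, y2]."""
    order = np.argsort(scores)[::-1]  # process detections best-first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)  # keep the highest-scoring remaining box
        rest = order[1:]
        # Intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Discard boxes that overlap the kept box too strongly (redundant detections)
        order = rest[iou <= iou_threshold]
    return keep
```

In the actual sample the detections are oriented 3D boxes, so the overlap computation is more involved, but the greedy keep-then-suppress structure is the same.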

@@ -30,7 +30,7 @@ This sample demonstrates a real-world, end-to-end example that uses a combinatio
 - You will learn how to implement oneAPI-based function kernels that can be executed on the host system, on a multi-threaded CPU or a GPU.
 - You will learn how to implement standard algorithms for AI-based object detection, for example, _Non-Maximum-Suppression_, using oneAPI.

-## License
+## License
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

@@ -49,17 +49,32 @@ To build and run the PointPillars sample, the following libraries have to be ins
 3. Boost (including the `boost::program_options` and `boost::filesystem` libraries). For Ubuntu, you may install the libboost-all-dev package.
 4. Optional: If the sample should be run on an Intel GPU, it might be necessary to upgrade the corresponding drivers. Therefore, please consult the following page: https://github.com/intel/compute-runtime/releases/

+### Using Visual Studio Code* (VS Code)
+
+You can use VS Code extensions to set your environment, create launch configurations,
+and browse and download samples.
+
+The basic steps to build and run a sample using VS Code include:
+- Download a sample using the extension **Code Sample Browser for Intel oneAPI Toolkits**.
+- Configure the oneAPI environment with the extension **Environment Configurator for Intel oneAPI Toolkits**.
+- Open a terminal in VS Code (**Terminal > New Terminal**).
+- Run the sample in the VS Code terminal using the instructions below.
+
+To learn more about the extensions and how to configure the oneAPI environment, see
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+
+After learning how to use the extensions for Intel oneAPI Toolkits, return to this README for instructions on how to build and run a sample.

 ### Build process (Local or Remote Host Installation)
 Perform the following steps:
 1. Prepare the environment to be able to use the Intel® Distribution of OpenVINO™ toolkit and oneAPI:
-```
+```
 $ source /opt/intel/openvino_2021/bin/setupvars.sh
 $ source /opt/intel/oneapi/setvars.sh
 ```

-2. Build the program using the following `cmake` commands.
-```
+2. Build the program using the following `cmake` commands.
+```
 $ mkdir build && cd build
 $ cmake ..
 $ make
@@ -134,8 +149,8 @@ In order to run on the DevCloud, you need to request a compute node using node p
 | FPGA Runtime | qsub -l nodes=1:fpga\_runtime:ppn=2 -d . hello-world.sh |

 ### Build process (DevCloud)
-1. Build the program using the following `cmake` commands.
-```
+1. Build the program using the following `cmake` commands.
+```
 $ mkdir build && cd build
 $ cmake ..
 $ make

AI-and-Analytics/Features-and-Functionality/IntelPython_XGBoost_Performance/README.md

Lines changed: 22 additions & 7 deletions
@@ -1,5 +1,5 @@
 # `Intel® Python XGBoost Performance Sample`
-This sample code illustrates how to analyze the performance benefit from using Intel optimizations upstreamed by Intel to the latest XGBoost, compared to un-optimized XGBoost 0.81. It demonstrates how to use software products that can be found in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This sample code illustrates how to analyze the performance benefit from using Intel optimizations upstreamed by Intel to the latest XGBoost, compared to un-optimized XGBoost 0.81. It demonstrates how to use software products that can be found in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
@@ -14,11 +14,11 @@ This sample code illustrates how to analyze the performance benefit from using I
 XGBoost is a widely used gradient boosting library in the classical ML area. Designed for flexibility, performance, and portability, XGBoost includes optimized distributed gradient boosting frameworks and implements Machine Learning algorithms underneath.

 In this sample, you will train an XGBoost model and run prediction using Intel optimizations upstreamed by Intel to the latest XGBoost package and the un-optimized XGBoost 0.81 for comparison.
-
-## Key Implementation Details
+
+## Key Implementation Details
 This XGBoost sample code is implemented for the CPU using the Python language. The example assumes you have XGBoost installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit). It also assumes you have set up an additional XGBoost 0.81 conda environment, with details on how to do so explained within the sample and this README.
-
-## License
+
+## License
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
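
To make the comparison concrete, here is a rough sketch (not part of this commit) of the kind of train/predict timing measurement this sample performs. The synthetic data, parameters, and script below are illustrative assumptions, not the sample's actual code:

```python
import time
import numpy as np
import xgboost as xgb

# Synthetic stand-in data; the actual sample uses its own dataset and parameters.
np.random.seed(0)
X = np.random.randn(100000, 50).astype(np.float32)
y = (X[:, 0] + np.random.randn(100000) > 0).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "binary:logistic", "tree_method": "hist", "max_depth": 8}

start = time.time()
model = xgb.train(params, dtrain, num_boost_round=100)
print("train time:   %.2f s" % (time.time() - start))

start = time.time()
model.predict(dtrain)
print("predict time: %.2f s" % (time.time() - start))
```

Running the same script once in the latest-XGBoost conda environment and once in the XGBoost 0.81 environment isolates the library version as the only variable in the timing comparison.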

@@ -34,7 +34,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

 The Intel Python environment will be active by default. However, if you activated another environment, you can return with the following command:
@@ -100,9 +100,24 @@ Run the Program

 `python IntelPython_XGBoost_Performance.py`

-The output files of the script will be saved in the included models and result directories.
+The output files of the script will be saved in the included models and result directories.

 ##### Expected Printed Output (with similar numbers):
 ```
 [CODE_SAMPLE_COMPLETED_SUCCESFULLY]
 ```
+### Using Visual Studio Code* (VS Code)
+
+You can use VS Code extensions to set your environment, create launch configurations,
+and browse and download samples.
+
+The basic steps to build and run a sample using VS Code include:
+- Download a sample using the extension **Code Sample Browser for Intel oneAPI Toolkits**.
+- Configure the oneAPI environment with the extension **Environment Configurator for Intel oneAPI Toolkits**.
+- Open a terminal in VS Code (**Terminal > New Terminal**).
+- Run the sample in the VS Code terminal using the instructions below.
+
+To learn more about the extensions and how to configure the oneAPI environment, see
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+
+After learning how to use the extensions for Intel oneAPI Toolkits, return to this README for instructions on how to build and run a sample.

AI-and-Analytics/Features-and-Functionality/IntelPython_XGBoost_daal4pyPrediction/README.md

Lines changed: 23 additions & 8 deletions
@@ -1,5 +1,5 @@
 # `Intel® Python XGBoost daal4py Prediction Sample`
-This sample code illustrates how to analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction for much faster prediction than XGBoost prediction. It demonstrates how to use software products that can be found in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This sample code illustrates how to analyze the performance benefit of minimal code changes to port a pre-trained XGBoost model to daal4py prediction for much faster prediction than XGBoost prediction. It demonstrates how to use software products that can be found in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
@@ -16,11 +16,11 @@ XGBoost is a widely used gradient boosting library in the classical ML area. Des
 This sample code illustrates how to analyze the performance benefit of minimal code changes to port pre-trained XGBoost models to daal4py prediction for much faster prediction than XGBoost prediction.

 In this sample, you will run an XGBoost model with daal4py prediction and XGBoost API prediction to see the performance benefit of daal4py gradient boosting prediction. You will also learn how to port a pre-trained XGBoost model to daal4py prediction.
-
-## Key Implementation Details
-This sample code is implemented for CPU using the Python language. The example assumes you have XGBoost and daal4py installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).
-
-## License
+
+## Key Implementation Details
+This sample code is implemented for CPU using the Python language. The example assumes you have XGBoost and daal4py installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).
+
+## License
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
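
As a rough illustration of the porting step this sample demonstrates (not part of this commit), the sketch below trains a small XGBoost model on placeholder data and converts it for daal4py prediction. It assumes a daal4py version that provides `get_gbt_model_from_xgboost`; the data and model here are illustrative, not the sample's own:

```python
import daal4py as d4p
import numpy as np
import xgboost as xgb

# Tiny synthetic stand-in for the sample's dataset (placeholder only).
np.random.seed(0)
X = np.random.randn(1000, 10).astype(np.float32)
y = (X[:, 0] > 0).astype(int)

# Train a small XGBoost model (the sample starts from a pre-trained model instead).
booster = xgb.train({"objective": "binary:logistic"},
                    xgb.DMatrix(X, label=y), num_boost_round=20)

# Minimal code change: convert the trained model and predict with daal4py.
daal_model = d4p.get_gbt_model_from_xgboost(booster)
daal_pred = d4p.gbt_classification_prediction(nClasses=2).compute(X, daal_model).prediction

# Reference prediction through the XGBoost API for comparison.
xgb_prob = booster.predict(xgb.DMatrix(X))  # probabilities for binary:logistic
# Note: daal4py returns class labels here, while XGBoost returns probabilities,
# so threshold xgb_prob at 0.5 before comparing results.
```

The "minimal code change" the README refers to is exactly this: the training code is untouched, and only the prediction call is routed through daal4py's gradient-boosted-trees algorithm.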

@@ -35,7 +35,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

 The Intel Python environment will be active by default. However, if you activated another environment, you can return with the following command:
@@ -97,9 +97,24 @@ Run the Program

 `python IntelPython_XGBoost_Performance.py`

-The output files of the script will be saved in the included models and result directories.
+The output files of the script will be saved in the included models and result directories.

 ##### Expected Printed Output (with similar numbers):
 ```
 [CODE_SAMPLE_COMPLETED_SUCCESFULLY]
 ```
+### Using Visual Studio Code* (VS Code)
+
+You can use VS Code extensions to set your environment, create launch configurations,
+and browse and download samples.
+
+The basic steps to build and run a sample using VS Code include:
+- Download a sample using the extension **Code Sample Browser for Intel oneAPI Toolkits**.
+- Configure the oneAPI environment with the extension **Environment Configurator for Intel oneAPI Toolkits**.
+- Open a terminal in VS Code (**Terminal > New Terminal**).
+- Run the sample in the VS Code terminal using the instructions below.
+
+To learn more about the extensions and how to configure the oneAPI environment, see
+[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+
+After learning how to use the extensions for Intel oneAPI Toolkits, return to this README for instructions on how to build and run a sample.
