
Commit 83efc11

Ipldt2021.4 (oneapi-src#642)
* IPLDT changes
* Update README.md
* update update
* adding "conda deactivate"
* update json update
* CLEANUP removing changes to json that were introduced during testing of ci
* Update sample.json
* Update sample.json
* Update sample.json
* Update sample.json
* round 1 changes
* pass 2
* round3
* round4
* Backing out changes, as someone else is making extensive changes
* updates
* Revert "updates". This reverts commit e174711.
* Revert "Backing out changes, as someone else is making extensive changes". This reverts commit 9b2b94c.
* update
* update
* Update CHANGELOGS.md
* Update README.md
* update
* updates
* update
* Update README.md
* Update README.md
* Update README.md
1 parent 74f5be0 commit 83efc11

File tree

16 files changed (+455, -448 lines)


AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedLinearRegression/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # `Intel Python daal4py Distributed Linear Regression Sample`
-This sample code shows how to train and predict with a distributed linear regression model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the MPI library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This sample code shows how to train and predict with a distributed linear regression model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the Intel® MPI Library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
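
For reference, a minimal sketch (not part of this commit) of the kind of distributed daal4py linear regression run the README describes, assuming daal4py and an MPI runtime are installed; the per-rank CSV file names are hypothetical:

```python
# Hypothetical sketch of a distributed daal4py linear regression run.
# Launch with:  mpirun -n 4 python linreg_spmd.py
import daal4py as d4p
import numpy as np

d4p.daalinit()  # start the SPMD (MPI-based) communication layer

# Each MPI rank loads its own slice of the training data (file names are illustrative).
rank = d4p.my_procid()
X = np.loadtxt(f"linreg_X_{rank}.csv", delimiter=",")
y = np.loadtxt(f"linreg_y_{rank}.csv", delimiter=",").reshape(-1, 1)

# Training runs collectively across ranks; every rank ends up with the same model.
train_result = d4p.linear_regression_training(distributed=True).compute(X, y)

# Prediction is a local (batch) operation; rank 0 predicts on its slice as a check.
if rank == 0:
    pred = d4p.linear_regression_prediction().compute(X, train_result.model)
    print(pred.prediction[:5])

d4p.daalfini()  # shut down the distributed engine
```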

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Horovod_Multinode_Training/README.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Sourcing the oneAPI AI Analytics Toolkit environment variables

-By default, the Intel AI Analytics toolkit is installed in the `/opt/intel/oneapi` folder. The toolkit may be loaded by sourcing the `setvars.sh` script on a Linux shell. Notice the flag `--ccl-configuration=cpu_icc`. By default, the `ccl-configuration` is set to `cpu_gpu_dpcpp`. However, since we are distributing our TensorFlow workload on multiple CPU nodes, we are configuring the Horovod installation to use CPUs.
+By default, the Intel® AI Analytics toolkit is installed in the `/opt/intel/oneapi` folder. The toolkit may be loaded by sourcing the `setvars.sh` script on a Linux shell. Notice the flag `--ccl-configuration=cpu_icc`. By default, the `ccl-configuration` is set to `cpu_gpu_dpcpp`. However, since we are distributing our TensorFlow workload on multiple CPU nodes, we are configuring the Horovod installation to use CPUs.

 ```
 source /opt/intel/oneapi/setvars.sh --ccl-configuration=cpu_icc
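
For context, a minimal sketch (not part of this commit) of a CPU-only Horovod/TensorFlow training script of the kind this README targets, assuming TensorFlow and a CPU/oneCCL build of Horovod are installed:

```python
# Hypothetical sketch of data-parallel Keras training with Horovod on CPU nodes.
# Launch with:  horovodrun -np 4 -H node1:2,node2:2 python train_hvd.py
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per CPU slot across the nodes

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers and wrap the optimizer so
# gradients are averaged across ranks on every step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

# Broadcast the initial weights from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, batch_size=64, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```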

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8/README.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ If you are running this sample on the DevCloud, skip the Pre-requirements and go

 ## Pre-requirements (Local or Remote Host Installation)

-TensorFlow* is ready for use once you finish the Intel AI Analytics Toolkit installation and have run the post installation script.
+TensorFlow* is ready for use once you finish the Intel® AI Analytics Toolkit installation and have run the post installation script.

 You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Intel® oneAPI AI Analytics Toolkit Get Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.
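
As a quick post-installation sanity check (not part of this commit), one might confirm that the toolkit's TensorFlow imports and runs a small op on the CPU:

```python
# Hypothetical sanity check run inside the activated AI Analytics Toolkit environment.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
a = tf.random.uniform((256, 256))
b = tf.random.uniform((256, 256))
print("matmul OK, result shape:", tf.linalg.matmul(a, b).shape)
```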

AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted/README.md

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@ TensorFlow* is a widely-used machine learning framework in the deep learning are
 |:--- |:---
 | OS | Linux* Ubuntu* 18.04
 | Hardware | Intel® Xeon® Scalable processor family or newer
-| Software | Intel® oneAPI AI Analytics Toolkit
-| What you will learn | How to get started to use Intel optimization for TensorFlow*
+| Software | Intel® AI Analytics Toolkit
+| What you will learn | How to get started to use Intel Optimization for TensorFlow*
 | Time to complete | 10 minutes

 ## Purpose

CHANGELOGS.md

Lines changed: 165 additions & 165 deletions
Large diffs are not rendered by default.

CODESAMPLESLIST.md

Lines changed: 167 additions & 167 deletions
Large diffs are not rendered by default.

DirectProgramming/DPC++FPGA/Tutorials/Tools/dynamic_profiler/README.md

Lines changed: 1 addition & 2 deletions
@@ -26,8 +26,7 @@ Intel® oneAPI provides two runtime profiling tools to help you analyze your DPC

 1. The **Intel® FPGA Dynamic Profiler for DPC++** is a profiling tool used to collect fine-grained device side data during DPC++ kernel execution. When used within the Intel® VTune™ Profiler, some host side performance data is also collected. However, note that the VTune Profiler is not designed to collect detailed system level host-side data.

-2. The **Intercept Layer for OpenCL™** is a profiling tool used to obtain detailed system-level information.
-
+2. The **Intercept Layer for OpenCL™ Applications™** is a profiling tool used to obtain detailed system-level information.
 This tutorial introduces the Intel® FPGA Dynamic Profiler for DPC++. To learn more about the Intercept Layer, refer to the FPGA tutorial "[Using the OpenCL Intercept Layer to Profile Designs running on the FPGA](https://github.com/oneapi-src/oneAPI-samples/blob/master/DirectProgramming/DPC%2B%2BFPGA/Tutorials/Tools/system_profiling)".

 #### The Intel® FPGA Dynamic Profiler for DPC++

DirectProgramming/Fortran/CombinationalLogic/openmp-primes/README.md

Lines changed: 2 additions & 2 deletions
@@ -10,8 +10,8 @@ It illustrates two OpenMP* directives to help speed up the code.
 | Optimized for | Description
 |:--- |:---
 | OS | macOS* with Xcode* installed
-| Software | Intel® oneAPI Intel Fortran Compiler
-| What you will learn | How to build and run a Fortran OpenMP application using Intel Fortran compiler
+| Software | Intel® Fortran Compiler
+| What you will learn | How to build and run a Fortran OpenMP application using Intel Fortran Compiler
 | Time to complete | 10 minutes

 ## Purpose

DirectProgramming/Fortran/DenseLinearAlgebra/vectorize-vecmatmult/README.md

Lines changed: 4 additions & 4 deletions
@@ -7,14 +7,14 @@ serial version and the version that was compiled with the auto-vectorizer.
 | Optimized for | Description
 |:--- |:---
 | OS | macOS* with Xcode* installed
-| Hardware | Intel-based Mac*
-| Software | Intel® oneAPI Intel Fortran Compiler
-| What you will learn | Vectorization using Intel Fortran compiler
+| Hardware | Intel®-based Mac*
+| Software | Intel® Fortran Compiler
+| What you will learn | Vectorization using Intel Fortran Compiler
 | Time to complete | 15 minutes


 ## Purpose
-The Intel® Compiler has an auto-vectorizer that detects operations in the application
+The Intel® Fortran Compiler has an auto-vectorizer that detects operations in the application
 that can be done in parallel and converts sequential operations
 to parallel operations by using the
 Single Instruction Multiple Data (SIMD) instruction set.

Libraries/oneDAL/IntelPython_daal4py_Distributed_Kmeans/README.md

Lines changed: 5 additions & 1 deletion
@@ -2,7 +2,7 @@
 *This sample and any necessary extra files/data needed to run it are already located in the [AI-and-Analytics](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics) folder of this repository. Please go to the [IntelPython_daal4py_DistributedKMeans](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedKMeans) folder within the AI-and-Analytics folder to get everything you need to build and run this sample.*

 # `Intel Python daal4py Distributed K-Means` Sample
-This sample code shows how to train and predict with a distributed k-means model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the MPI library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This sample code shows how to train and predict with a distributed k-means model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the Intel® MPI Library installed, and it demonstrates how to use software products that can be found in the [Intel® oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel® AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
@@ -22,7 +22,11 @@ In this sample, you will run a distributed K-Means model with oneDAL daal4py lib
 This distributed K-means sample code is implemented for CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python as part of the [oneAPI AI Analytics Toolkit powered by oneAPI](https://software.intel.com/en-us/oneapi/ai-kit).

 ## Additional Requirements
+<<<<<<< HEAD
+You will need a working Intel® MPI library, which is included in the [Intel® oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).
+=======
 You will need a working MPI library. We recommend to use Intel&reg; MPI, which is included in the [oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).
+>>>>>>> parent of 5fa54cb (round 1 changes)

 ## License
 Code samples are licensed under the MIT license. See
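
For reference, a minimal sketch (not part of this commit) of a distributed daal4py K-Means run of the kind this README describes, assuming daal4py and an MPI runtime are installed; the per-rank CSV file names are hypothetical:

```python
# Hypothetical sketch of a distributed daal4py K-Means run.
# Launch with:  mpirun -n 4 python kmeans_spmd.py
import daal4py as d4p
import numpy as np

d4p.daalinit()  # start the SPMD (MPI-based) communication layer

n_clusters = 4
rank = d4p.my_procid()
data = np.loadtxt(f"kmeans_data_{rank}.csv", delimiter=",")  # per-rank slice

# Choose initial centroids and run K-Means collectively across all ranks.
init = d4p.kmeans_init(n_clusters, method="plusPlusDense", distributed=True).compute(data)
result = d4p.kmeans(n_clusters, maxIterations=25, distributed=True).compute(data, init.centroids)

if rank == 0:
    print("centroids:\n", result.centroids)

d4p.daalfini()
```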

Libraries/oneDAL/IntelPython_daal4py_Distributed_LinearRegression/README.md

Lines changed: 5 additions & 1 deletion
@@ -1,7 +1,7 @@
 *This sample and any necessary extra files/data needed to run it are already located in the [AI-and-Analytics](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics) folder of this repository. Please go to the [IntelPython_daal4py_DistributedLinearRegression](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedLinearRegression) folder within the AI-and-Analytics folder to get everything you need to build and run this sample.*

 # `Intel Python daal4py Distributed Linear Regression` Sample
-This sample code shows how to train and predict with a distributed linear regression model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the MPI library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This sample code shows how to train and predict with a distributed linear regression model using the python API package daal4py for oneAPI Data Analytics Library. It assumes you have a working version of the Intel® MPI Library installed, and it demonstrates how to use software products that can be found in the [Intel® oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
@@ -22,7 +22,11 @@ This distributed linear regression sample code is implemented for the CPU using


 ## Additional Requirements
+<<<<<<< HEAD
+You will need a working Intel® MPI Library, which is included in the [Intel® oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).
+=======
 You will need a working MPI library. We recommend to use Intel(R) MPI, which is included in the [oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).
+>>>>>>> parent of 5fa54cb (round 1 changes)

 ## License
 Code samples are licensed under the MIT license. See

Libraries/oneDAL/IntelPython_daal4py_Getting_Started/README.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@

 # `Intel Python daal4py Getting Started` Sample

-This Getting Started sample code show how to do batch linear regression using the python API package daal4py from oneDAL. It demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or the [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+This Getting Started sample code show how to do batch linear regression using the python API package daal4py from oneDAL. It demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or the [Intel® AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

 | Optimized for | Description
 | :--- | :---
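
For reference, a minimal sketch (not part of this commit) of the batch, single-process daal4py linear regression that this Getting Started sample covers, using synthetic data for illustration:

```python
# Hypothetical sketch of batch linear regression with daal4py.
import daal4py as d4p
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1).reshape(-1, 1)

train = d4p.linear_regression_training().compute(X, y)            # fit on the whole batch
pred = d4p.linear_regression_prediction().compute(X, train.model)  # predict with the model
print("first predictions:", pred.prediction[:3].ravel())
```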

Publications/DPC++/README.md

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ Example: If running a sample in the Intel DevCloud, remember that you must speci
 1. Setup oneAPI environment variables:
 > Windows: C:\Program Files(x86)\Intel\oneAPI\setvars.bat
 This will need to be run each time you open a new cmd window(non Persistent)
-- Aternatively you can search for the oneAPI cmd prompt - startmenu> look for Intel oneAPI 202*> "Intel oneAPI command prompt for Intel 64 for Visual Studio 2017"
+- Aternatively you can search for the oneAPI cmd prompt - startmenu> look for `Intel oneAPI 202*`> "Intel oneAPI command prompt for Intel 64 for Visual Studio 2017"

 On Windows: