Commit d4ed1d3

jkinsky, jimmytwei, krzeszew, alexsin368, and ZhaoqiongZ authored
Intel® TensorFlow* Model Zoo Inference With FP32 Int8 readme update (oneapi-src#1469)
* Fixes for 2023.1 AI Kit (oneapi-src#1409)
* Intel Python Numpy Numba_dpes kNN sample (oneapi-src#1292): *.py and *.ipynb files with implementation; README.md and sample.json files with documentation; license and third party programs
* Adding PyTorch Training Optimizations with AMX BF16 oneAPI sample (oneapi-src#1293)
* Add IntelPytorch Quantization code samples (oneapi-src#1301): fix the spelling error in the README file; use John's README with grammar fix and title change; rename third-party-grograms.txt to third-party-programs.txt (Co-authored-by: Jimmy Wei <[email protected]>)
* AMX bfloat16 mixed precision learning TensorFlow Transformer sample (oneapi-src#1317)
* [New Sample] Intel Extension for TensorFlow Getting Started (oneapi-src#1313): first draft; update README.md; remove redundant file
* [New Sample] [oneDNN] Benchdnn tutorial (oneapi-src#1315): new sample; update readme; rename sample to benchdnn_tutorial; name fix
* Add files via upload (oneapi-src#1320)
* [New Sample] oneCCL Bindings for PyTorch Getting Started (oneapi-src#1316): update README.md; add torch-ccl version check
* [New Sample] Intel Extension for PyTorch Getting Started (oneapi-src#1314): add new IPEX GSG notebook for dGPU; update expertise field in sample.json; update package versions in requirements.txt to comply with the Snyk tool
* Updated title field in sample.json in TF Transformer AMX bfloat16 Mixed Precision sample to fit within character length range (oneapi-src#1327)
* Add arch checker class (oneapi-src#1332)
* Change gpu.patch to convert the code samples from CPU to GPU correctly (oneapi-src#1334)
* Fixes for spelling in AMX bfloat16 transformer sample and printing error in Python code in numpy vs numba sample (oneapi-src#1335)
* 2023.1 AI Kit ITEX Get Started example fix (oneapi-src#1338): fix the typo; update ResNet50_Inference.ipynb; fix ResNet inference demo link (oneapi-src#1339)
* Fix printing issue in numpy vs numba AI sample (oneapi-src#1356)
* Fix invalid Kmeans parameters on oneAPI 2023 (oneapi-src#1345)
* Update README to add new samples into the list (oneapi-src#1366)
* PyTorch AMX BF16 Training sample: remove graphs and performance numbers (oneapi-src#1408); update top README in Features and Functionality
* Intel® TensorFlow* Model Zoo Inference With FP32 Int8 readme update: restructured to match the new template; updated the README sample name to match the name in sample.json; added prerequisites information based on release notes requirements; updated DevCloud information; added setvars information; updated formatting and branding; corrected spelling and grammar issues
* Review updates to the IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8 README and to the IntelPyTorch_TrainingOptimizations_AMX_BF16 README and notebook (Co-authored-by: Clayne Robison <[email protected]>)

Co-authored-by: Jimmy Wei <[email protected]>
Co-authored-by: krzeszew <[email protected]>
Co-authored-by: alexsin368 <[email protected]>
Co-authored-by: ZhaoqiongZ <[email protected]>
Co-authored-by: Louie Tsai <[email protected]>
Co-authored-by: Orel Yehuda <[email protected]>
Co-authored-by: yuning <[email protected]>
Co-authored-by: Wang, Kai Lawrence <[email protected]>
Co-authored-by: xiguiw <[email protected]>
Co-authored-by: Clayne Robison <[email protected]>
1 parent e6e1825 commit d4ed1d3

File tree

  • AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8

1 file changed: +130 −86 lines changed
Lines changed: 130 additions & 86 deletions
@@ -1,122 +1,166 @@
-# `Intel® Model Zoo` Sample
-This code example provides a sample code to run ResNet50 inference on Intel's pretrained FP32 and Int8 model
+# `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` Sample
+
+The `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` sample demonstrates how to run ResNet50 inference on pretrained FP32 and Int8 models included in the Model Zoo for Intel® Architecture.
+
+| Area                | Description
+|:---                 |:---
+| What you will learn | How to perform TensorFlow* ResNet50 inference on synthetic data using FP32 and Int8 pre-trained models.
+| Time to complete    | 30 minutes
+| Category            | Code Optimization
 
 ## Purpose
-- Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware
-- Show how to efficiently execute, train, and deploy Intel-optimized models
-- Make it easy to get started running Intel-optimized models on Intel hardware in the cloud or on bare metal
 
-***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms.
-For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).***
+The sample intends to help you understand some key concepts:
 
-## Key implementation details
-The example uses Intel's pretrained model published as part of [Intel Model Zoo](https://github.com/IntelAI/models). The example also illustrates how to utilize TensorFlow and MKL run time settings to maximize CPU performance on ResNet50 workload.
+- What AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware.
+- How to train and deploy Intel-optimized models.
+- How to start running Intel-optimized models on Intel hardware in the cloud or on bare metal.
 
-## License
-Code samples are licensed under the MIT license. See
-[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
+> **Disclaimer**: The sample and supplied scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).
 
-Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
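[Editor's note] Both the old and new README text center on FP32 versus Int8 models. As background for readers new to the distinction, here is a minimal, self-contained sketch of symmetric int8 quantization. It is illustrative only and is not the Model Zoo implementation; the function names are hypothetical.

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats onto [-127, 127] using one scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero input: any scale works
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the int8 codes."""
    return [q * scale for q in quantized]

# Int8 inference trades a small accuracy loss (the rounding here) for
# 4x smaller weights and faster integer arithmetic than FP32.
weights = [0.5, -1.27, 0.0, 1.27]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
print(codes, restored)
```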
+## Prerequisites
 
-## Running Samples on the Intel&reg; DevCloud
-If you are running this sample on the DevCloud, skip the Pre-requirements and go to the [Activate Conda Environment](#activate-conda) section.
+| Optimized for | Description
+|:---           |:---
+| OS            | Ubuntu* 20.04 or higher
+| Hardware      | Intel® Core™ Gen10 Processor <br> Intel® Xeon® Scalable Performance processors
+| Software      | Intel® AI Analytics Toolkit (AI Kit)
 
-## Pre-requirements (Local or Remote Host Installation)
+### For Local Development Environments
 
-TensorFlow* is ready for use once you finish the Intel® AI Analytics Toolkit (AI Kit) installation and have run the post installation script.
+You will need to download and install the following toolkits, tools, and components to use the sample.
 
-You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Intel&reg; AI Analytics Toolkit Get Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.
+- **Intel® AI Analytics Toolkit (AI Kit)**
 
-## Activate conda environment With Root Access<a name="activate-conda"></a>
+  You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux*](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
 
-Navigate the Linux shell to your oneapi installation path, typically `/opt/intel/oneapi`. Activate the conda environment with the following command:
+TensorFlow* or PyTorch* are ready for use once you finish installing and configuring the Intel® AI Analytics Toolkit (AI Kit).
 
-#### Linux
-```
-conda activate tensorflow
-```
+### For Intel® DevCloud
 
+The necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information.
 
-## Activate conda environment Without Root Access (Optional)
+## Key Implementation Details
 
-By default, the Intel® AI Analytics Toolkit is installed in the `/opt/intel/oneapi` folder, which requires root privileges to manage it. If you would like to bypass using root access to manage your conda environment, then you can clone your desired conda environment using the following command:
+The example uses some pretrained models published as part of the [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models). The example also illustrates how to utilize TensorFlow* and Intel® Math Kernel Library (Intel® MKL) runtime settings to maximize CPU performance on the ResNet50 workload.
 
-#### Linux
-```
-conda create --name user_tensorflow --clone tensorflow
-```
+## Set Environment Variables
 
-Then activate your conda environment with the following command:
+When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the `setvars` script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.
 
-```
-conda activate user_tensorflow
-```
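[Editor's note] The new Key Implementation Details text refers to "TensorFlow* and Intel® MKL runtime settings" for CPU performance. As an illustration of what such tuning typically looks like, here is a small sketch that assembles common OpenMP/MKL environment variables. The helper function and the chosen values are hypothetical, not taken from the sample; the variable names (`OMP_NUM_THREADS`, `KMP_BLOCKTIME`, `KMP_AFFINITY`) are real Intel OpenMP runtime settings.

```python
import os

def mkl_runtime_env(physical_cores):
    """Return a hypothetical set of OpenMP/MKL tuning variables for CPU inference."""
    return {
        "OMP_NUM_THREADS": str(physical_cores),          # one OpenMP thread per physical core
        "KMP_BLOCKTIME": "1",                            # short spin time suits latency-bound inference
        "KMP_AFFINITY": "granularity=fine,compact,1,0",  # pin threads to cores
    }

env = mkl_runtime_env(physical_cores=4)
# These must be set before the framework initializes its thread pools.
os.environ.update(env)
print(env["OMP_NUM_THREADS"])
```

In practice, settings like these are exported in the shell before launching the workload, or set in Python before importing the framework.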
+## Run the `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` Sample
 
-## Navigate to Intel Model Zoo
+### On Linux*
 
-Navigate to the Intel Model Zoo source directory. It's located in your oneapi installation path, typically `/opt/intel/oneapi/modelzoo`.
-You can view the available Model Zoo release versions for the Intel® AI Analytics Toolkit:
-```
-ls /opt/intel/oneapi/modelzoo
-1.8.0 latest
-```
-Then browse to the preferred [Intel Model Zoo](https://github.com/IntelAI/models/tree/master/benchmarks) release version location to run inference for ResNet50 or another supported topology.
-```
-cd /opt/intel/oneapi/modelzoo/latest
-```
+> **Note**: If you have not already done so, set up your CLI
+> environment by sourcing the `setvars` script in the root of your oneAPI installation.
+>
+> Linux*:
+> - For system wide installations: `. /opt/intel/oneapi/setvars.sh`
+> - For private installations: `. ~/intel/oneapi/setvars.sh`
+> - For non-POSIX shells, like csh, use the following command: `bash -c 'source <install-dir>/setvars.sh ; exec csh'`
+>
+> For more information on configuring environment variables, see *[Use the setvars Script with Linux* or macOS*](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html)*.
 
-## Install Jupyter Notebook*
+#### Activate Conda with Root Access
+
+By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it. However, if you activated another environment, you can return with the following command.
 ```
-conda install jupyter nb_conda_kernels
+conda activate tensorflow
 ```
 

73-
## How to Build and Run
74-
1. Go to the code example location.<br>
75-
2. If you have GUI support, enter the command `jupyter notebook`. <br>
76-
or<br>
77-
a. If you do not have GUI support, open a remote shell and enter command `jupyter notebook --no-browser --port=8888`.<br>
78-
b. Open the command prompt where you have GUI support, and forward the port from host to client.<br>
79-
c. Enter `ssh -N -f -L localhost:8888:localhost:8888 <userid@hostname>`<br>
80-
d. Copy-paste the URL address from the host into your local browser to open the jupyter console.<br>
81-
3. Go to `ResNet50_Inference.ipynb` and run each cell to create synthetic data and run int8 inference.
82-
83-
---
84-
**NOTE**
72+
#### Activate Conda without Root Access (Optional)
8573

86-
In the jupyter page, be sure to select the correct kernel. In this example, select 'Kernel' -> 'Change kernel' -> Python [conda env:tensorflow].
74+
You can choose to activate Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using the following commands similar to the following.
8775

88-
---
76+
```
77+
conda create --name user_tensorflow --clone tensorflow
78+
conda activate user_tensorflow
79+
```
8980

-### **Request a Compute Node**
-In order to run on the DevCloud, you need to request a compute node using node properties such as: `gpu`, `xeon`, `fpga_compile`, `fpga_runtime` and others. For more information about the node properties, execute the `pbsnodes` command.
-This node information must be provided when submitting a job to run your sample in batch mode using the qsub command. When you see the qsub command in the Run section of the [Hello World instructions](https://devcloud.intel.com/oneapi/get_started/aiAnalyticsToolkitSamples/), change the command to fit the node you are using. Nodes which are in bold indicate they are compatible with this sample:
+#### Navigate to Model Zoo
 
-<!---Mark each compatible Node in BOLD-->
-| Node              | Command                                                 |
-| ----------------- | ------------------------------------------------------- |
-| GPU               | qsub -l nodes=1:gpu:ppn=2 -d . hello-world.sh           |
-| CPU               | qsub -l nodes=1:xeon:ppn=2 -d . hello-world.sh          |
-| FPGA Compile Time | qsub -l nodes=1:fpga_compile:ppn=2 -d . hello-world.sh  |
-| FPGA Runtime      | qsub -l nodes=1:fpga_runtime:ppn=2 -d . hello-world.sh  |
+Navigate to the Model Zoo for Intel® Architecture source directory. By default, it is in your installation path, like `/opt/intel/oneapi/modelzoo`.
 
+1. View the available Model Zoo release versions for the AI Kit:
+   ```
+   ls /opt/intel/oneapi/modelzoo
+   2.11.0 latest
+   ```
+2. Check the [Model Zoo Scripts](https://github.com/IntelAI/models/tree/v2.11.0/benchmarks) GitHub repo to determine your preferred release version, then change to that version's directory to run inference for ResNet50 or another supported topology.
+   ```
+   cd /opt/intel/oneapi/modelzoo/latest
+   ```
 
-### Troubleshooting
-If an error occurs, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits.
-[Learn more](https://software.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)
+#### Install Jupyter Notebook
 
-### Using Visual Studio Code* (Optional)
+```
+conda install jupyter nb_conda_kernels
+```
 
-You can use Visual Studio Code (VS Code) extensions to set your environment, create launch configurations,
-and browse and download samples.
+#### Open Jupyter Notebook
+
+1. Change to the sample directory.
+2. Launch Jupyter Notebook.
+   ```
+   jupyter notebook
+   ```
+   > **Note**: If you do not have GUI support, you must open a remote shell and launch the Notebook a different way.
+   > 1. Enter a command similar to the following:
+   >    ```
+   >    jupyter notebook --no-browser --port=8888
+   >    ```
+   > 2. Open the command prompt where you have GUI support, and forward the port from host to client.
+   > 3. Enter a command similar to the following:
+   >    ```
+   >    ssh -N -f -L localhost:8888:localhost:8888 <userid@hostname>
+   >    ```
+   > 4. Copy and paste the URL address from the host into your local browser.
+
+3. Locate and select the Notebook.
+   ```
+   ResNet50_Inference.ipynb
+   ```
+4. Change the kernel to **Python [conda env:tensorflow]**.
+5. Click the **Run** button to move through the cells in sequence.
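[Editor's note] Per the old step 3 that this diff removes, the Notebook's cells "create synthetic data and run int8 inference." Here is a framework-free sketch of that idea, generating synthetic input and timing an inference callable. All names are hypothetical and the dummy `sum` stand-in replaces real model inference so the sketch runs anywhere.

```python
import random
import time

def synthetic_batch(size):
    """Random floats standing in for image tensor data."""
    return [random.random() for _ in range(size)]

def mean_latency(run_fn, batches):
    """Average wall-clock seconds per batch for the given inference callable."""
    total = 0.0
    for batch in batches:
        start = time.perf_counter()
        run_fn(batch)  # in the real notebook, this is a model forward pass
        total += time.perf_counter() - start
    return total / len(batches)

# Dummy stand-in for model inference; the notebook compares FP32 vs Int8 models here.
avg = mean_latency(sum, [synthetic_batch(1024) for _ in range(8)])
print(f"{avg:.6f} s/batch")
```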
+
+### Run the Sample on Intel® DevCloud (Optional)
+
+1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
+2. On a Linux* system, open a terminal.
+3. SSH into Intel® DevCloud.
+   ```
+   ssh DevCloud
+   ```
+   > **Note**: You can find information about configuring your Linux system and connecting to Intel DevCloud at Intel® DevCloud for oneAPI [Get Started](https://devcloud.intel.com/oneapi/get_started).
+
+4. Specify a CPU node using a single-line script.
+   ```
+   qsub -I -l nodes=1:xeon:ppn=2 -d .
+   ```
+
+   - `-I` (upper case I) requests an interactive session.
+   - `-l nodes=1:xeon:ppn=2` (lower case L) assigns one full CPU node.
+   - `-d .` makes the current folder the working directory for the task.
+
+   |Available Nodes |Command Options
+   |:---            |:---
+   |GPU             |`qsub -l nodes=1:gpu:ppn=2 -d .`
+   |CPU             |`qsub -l nodes=1:xeon:ppn=2 -d .`
+
+5. Activate conda.
+   ```
+   conda activate
+   ```
+6. Follow the instructions to open the URL with the token in your browser.
+7. Locate and select the Notebook.
+   ```
+   ResNet50_Inference.ipynb
+   ```
+8. Change the kernel to **Python [conda env:tensorflow]**.
+9. Run every cell in the Notebook in sequence.
 
-The basic steps to build and run a sample using VS Code include:
-- Download a sample using the extension **Code Sample Browser for Intel oneAPI Toolkits**.
-- Configure the oneAPI environment with the extension **Environment Configurator for Intel oneAPI Toolkits**.
-- Open a Terminal in VS Code (**Terminal>New Terminal**).
-- Run the sample in the VS Code terminal using the instructions below.
-- (Linux only) Debug your GPU application with GDB for Intel® oneAPI toolkits using the Generate Launch Configurations extension.
+## License
 
-To learn more about the extensions, see
-[Using Visual Studio Code with Intel® oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+Code samples are licensed under the MIT license. See
+[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-After learning how to use the extensions for Intel oneAPI Toolkits, return to this readme for instructions on how to build and run a sample.
+Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
