
Commit 3a70325

Update README.md
1 parent d1a8806 commit 3a70325

File tree: 1 file changed, 155 additions (+) and 122 deletions (−)

  • AI-and-Analytics/Getting-Started-Samples/INC-Sample-for-Tensorflow
# `Intel® Neural Compressor TensorFlow* Getting Started` Sample

This sample demonstrates using Intel® Neural Compressor, which is part of the AI Tools, with Intel® Extension for TensorFlow* to speed up inference by simplifying the process of converting the FP32 model to INT8/BF16.

| Property | Description |
|:--- |:--- |
| Category | Getting Started |
| What you will learn | How to use Intel® Neural Compressor to quantize an AI model based on TensorFlow* and speed up inference on Intel® Xeon® CPUs |
| Time to complete | 10 minutes |

## Purpose

This sample shows the process of building a convolutional neural network (CNN) model to recognize handwritten numbers and demonstrates how to increase the inference performance by using Intel® Neural Compressor. Low-precision optimizations can speed up inference. Intel® Neural Compressor simplifies the process of converting the FP32 model to INT8/BF16. At the same time, Intel® Neural Compressor tunes the quantization method to reduce the accuracy loss, which is a major blocker for low-precision inference.

You can achieve higher inference performance by converting the FP32 model to INT8 or BF16 model. Additionally, Intel® Deep Learning Boost (Intel® DL Boost) in Intel® Xeon® Scalable processors and Xeon® processors provides hardware acceleration for INT8 and BF16 models.
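
To make the precision formats concrete, here is a minimal pure-Python sketch of what INT8 affine quantization and BF16 truncation do to individual values. This is illustrative only and is not Intel® Neural Compressor's implementation: the tool selects scales and zero-points per tensor or per channel and tunes them automatically.

```python
import struct

def quantize_int8(values, lo, hi):
    """Map floats in [lo, hi] to INT8 codes in [-128, 127] (affine scheme)."""
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    codes = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return codes, scale, zero_point

def dequantize_int8(codes, scale, zero_point):
    """Recover approximate float values from INT8 codes."""
    return [(c - zero_point) * scale for c in codes]

def to_bfloat16(x):
    """Round a float32 value to BF16 by keeping the top 16 bits (NaN not handled)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000  # round-to-nearest-even
    return struct.unpack("<f", struct.pack("<I", bits))[0]

codes, scale, zp = quantize_int8([0.0, 0.25, 1.0], lo=0.0, hi=1.0)
approx = dequantize_int8(codes, scale, zp)   # close to the original values
bf16_pi = to_bfloat16(3.14159265)            # fewer mantissa bits than float32
```

Storing 8-bit codes (or 16-bit BF16 values) instead of 32-bit floats is what lets Intel® DL Boost process more elements per instruction, at the cost of the small rounding error visible in `approx`.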
You will learn how to train a CNN model with Keras and TensorFlow*, use Intel® Neural Compressor to quantize the model, and compare the performance to see the benefit of Intel® Neural Compressor.
## Prerequisites

| Optimized for | Description |
|:--- |:--- |
| OS | Ubuntu* 20.04 (or newer) <br> Windows* 11, 10 |
| Hardware | Intel® Core™ Gen10 processor <br> Intel® Xeon® Scalable Performance processors |
| Software | Intel® Neural Compressor, Intel® Extension for TensorFlow* |

> **Note**: AI and Analytics samples are validated on AI Tools Offline Installer. For the full list of validated platforms refer to [Platform Validation](https://github.com/oneapi-src/oneAPI-samples/tree/master?tab=readme-ov-file#platform-validation).
### Intel® Neural Compressor and Sample Code Versions
>**Note**: See the [Intel® Neural Compressor](https://github.com/intel/neural-compressor) GitHub repository for more information and recent changes.

This sample is updated regularly to match the Intel® Neural Compressor version in the latest AI Tools release. If you want the sample code for an earlier toolkit release, check out the corresponding git tag.

1. List the available git tags.
   ```
   git tag
   ```
2. Check out the tag that corresponds to the release you want. For example:
   ```
   git checkout 2022.3.0
   ```

## Key Implementation Details
The sample demonstrates how to:
- Train a CNN model with Keras and TensorFlow*.
- Use Intel® Neural Compressor to quantize the FP32 model.
- Test the performance of the FP32 model and INT8 (quantization) model.
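
The performance test in the last bullet boils down to timing inference runs before and after quantization. A minimal sketch of such a timing harness follows; `fake_infer` is a placeholder for a real model call, not the sample's code.

```python
import time

def benchmark(infer, batches, warmup=2):
    """Return (average latency in seconds, throughput in batches/sec) for `infer`."""
    for batch in batches[:warmup]:      # warm-up iterations are not timed
        infer(batch)
    start = time.perf_counter()
    for batch in batches:
        infer(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / len(batches)
    return latency, 1.0 / latency

# Placeholder inference function; a real comparison would call the FP32
# model here, then the INT8 model, and compare the two measurements.
fake_infer = lambda batch: sum(batch)
latency, throughput = benchmark(fake_infer, [[1.0, 2.0, 3.0]] * 20)
```

Running the same harness over the FP32 and INT8 models is what makes the speedup from quantization directly measurable.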
## Environment Setup

You will need to download and install the following toolkits, tools, and components to use the sample.

If you have already set up the PIP or Conda environment and installed AI Tools, go directly to [Run the Sample](#run-the-sample).

### 1. Get AI Tools

Required AI Tools: Intel® Neural Compressor, Intel® Extension for TensorFlow* (CPU)

If you have not already done so, select and install these tools via the [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on the AI Tools Offline Installer, so selecting the Offline Installer option is recommended.

>**Note**: If the Docker option is chosen in the AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker images and samples.
### 2. (Offline Installer) Activate the AI Tools bundle base environment

If the default path is used during the installation of AI Tools:

```
source $HOME/intel/oneapi/intelpython/bin/activate
```

If a non-default path is used:

```
source <custom_path>/bin/activate
```

### 3. (Offline Installer) Activate relevant Conda environment

```
conda activate tensorflow
```

### 4. Clone the GitHub repository

```
git clone https://github.com/oneapi-src/oneAPI-samples.git
cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/INC-Sample-for-Tensorflow
```

### 5. Install dependencies

Install the dependencies for the Jupyter Notebook:

```
pip install -r requirements.txt
```

For Jupyter Notebook itself, refer to [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.

## Run the Sample
> **Note**: Before running the sample, make sure [Environment Setup](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Getting-Started-Samples/INC-Sample-for-TensorFlow#environment-setup) is completed.

Go to the section that corresponds to the installation method chosen in the [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:

* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
* [Conda/PIP](#condapip)
* [Docker](#docker)

### AI Tools Offline Installer (Validated)

#### 1. Register Conda kernel to Jupyter Notebook kernel

> **Note**: If you have done this step before, skip it.

If the default path is used during the installation of AI Tools:

```
$HOME/intel/oneapi/intelpython/envs/<offline-conda-env-name>/bin/python -m ipykernel install --user --name=tensorflow
```

If a non-default path is used:

```
<custom_path>/bin/python -m ipykernel install --user --name=tensorflow
```

#### 2. Launch Jupyter Notebook

- Option A: Launch Jupyter Notebook.

```
jupyter notebook --ip=0.0.0.0
```

- Option B: Launch Jupyter Notebook by running the script located in the sample code directory.

```
./run_jupyter.sh
```

The Jupyter Server shows the URLs of the web application in your terminal.

```
(tensorflow) xxx@yyy:$ [I 09:48:12.622 NotebookApp] Serving notebooks from local directory:
...
[I 09:48:12.622 NotebookApp] Jupyter Notebook 6.1.4 is running at:
[I 09:48:12.622 NotebookApp] http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:12.622 NotebookApp] or http://127.0.0.1:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:12.622 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:48:12.625 NotebookApp]

To access the notebook, open this file in a browser:
...
Or copy and paste one of these URLs:
http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
or http://127.0.0.1:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:26.128 NotebookApp] Kernel started: bc5b0e60-058b-4a4f-8bad-3f587fc080fd, name: python3
[IPKernelApp] ERROR | No such comm target registered: jupyter.widget.version
```

#### 3. Follow the instructions to open the URL with the token in your browser

#### 4. Select the Notebook

```
inc_sample_tensorflow.ipynb
```

#### 5. Change the kernel to `tensorflow`

#### 6. Run every cell in the Notebook in sequence

### Conda/PIP

> **Note**: Before running the instructions below, make sure your Conda/Python environment with AI Tools installed is activated.

#### 1. Register Conda/Python kernel to Jupyter Notebook kernel

> **Note**: If you have done this step before, skip it.

For Conda:

```
<CONDA_PATH_TO_ENV>/bin/python -m ipykernel install --user --name=tensorflow
```

To find `<CONDA_PATH_TO_ENV>`, run `conda env list` and locate your Conda environment's path.

For PIP:

```
python -m ipykernel install --user --name=tensorflow
```

#### 2. Launch Jupyter Notebook

- Option A: Launch Jupyter Notebook.
```
jupyter notebook --ip=0.0.0.0
```

- Option B: Launch Jupyter Notebook by running the script located in the sample code directory.

```
./run_jupyter.sh
```

The Jupyter Server shows the URLs of the web application in your terminal.

```
(tensorflow) xxx@yyy:$ [I 09:48:12.622 NotebookApp] Serving notebooks from local directory:
...
[I 09:48:12.622 NotebookApp] Jupyter Notebook 6.1.4 is running at:
[I 09:48:12.622 NotebookApp] http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:12.622 NotebookApp] or http://127.0.0.1:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:12.622 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:48:12.625 NotebookApp]

To access the notebook, open this file in a browser:
...
Or copy and paste one of these URLs:
http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
or http://127.0.0.1:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca
[I 09:48:26.128 NotebookApp] Kernel started: bc5b0e60-058b-4a4f-8bad-3f587fc080fd, name: python3
[IPKernelApp] ERROR | No such comm target registered: jupyter.widget.version
```

#### 3. Follow the instructions to open the URL with the token in your browser

#### 4. Select the Notebook

```
inc_sample_tensorflow.ipynb
```

#### 5. Change the kernel to `tensorflow`

#### 6. Run every cell in the Notebook in sequence
### Docker

AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker images and samples.
## Example Output

...

Third party program Licenses can be found here:
[third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
*Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
