Commit 52b6e4a

Update README.md

1 parent 7091b57 commit 52b6e4a

1 file changed: +45 −40 lines

AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted/README.md (45 additions, 40 deletions)
@@ -17,7 +17,7 @@ This sample code shows how to get started with TensorFlow*. It implements an exa
 
 | Optimized for | Description
 |:--- |:---
-| OS | Ubuntu* 22.0.4 (and newer) <br> Windows* 10 and newer
+| OS | Ubuntu* 22.0.4 and newer
 | Hardware | Intel® Xeon® Scalable processor family
 | Software | TensorFlow
 
@@ -37,7 +37,7 @@ The sample includes one python file: TensorFlow_HelloWorld.py. it implements a s
     y_batch = y_data[step*N:(step+1)*N, :, :, :]
     s.run(train, feed_dict={x: x_batch, y: y_batch})
 ```
-In order to show the harware information, you must export the environment variable `ONEDNN_VERBOSE=1` to display the deep learning primitives trace during execution.
+In order to show the harware information, you must export the environment variable `export ONEDNN_VERBOSE=1` to display the deep learning primitives trace during execution.
 >**Note**: For convenience, code line os.environ["ONEDNN_VERBOSE"] = "1" has been added in the body of the script as an alternative method to setting this variable.
 
 Runtime settings for `ONEDNN_VERBOSE`, `KMP_AFFINITY`, and `Inter/Intra-op` Threads are set within the script. You can read more about these settings in this dedicated document: *[Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads](https://software.intel.com/en-us/articles/maximize-tensorflow-performance-on-cpu-considerations-and-recommendations-for-inference)*.
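
[Editor's aside: the note in this hunk mentions setting `ONEDNN_VERBOSE` from inside the script via `os.environ`. A minimal sketch of that approach, separate from the sample itself:]

```python
import os

# The variable must be set before TensorFlow creates any oneDNN
# primitives, so this belongs at the very top of the script, ahead of
# `import tensorflow`.
os.environ["ONEDNN_VERBOSE"] = "1"

print(os.environ["ONEDNN_VERBOSE"])
```

Setting the variable in the shell with `export ONEDNN_VERBOSE=1` is equivalent; the in-script form is just more convenient when sharing the sample.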
@@ -53,48 +53,51 @@ You will need to download and install the following toolkits, tools, and compone
 
 **1. Get Intel® AI Tools**
 
-Required AI Tools: <Tensorflow* ><!-- List specific AI Tools that needs to be installed before running this sample -->
+Required AI Tools: 'Intel® Extension for TensorFlow* - CPU'
 <br>If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on AI Tools Offline Installer. It is recommended to select Offline Installer option in AI Tools Selector.<br>
-or simple pip install in your current ready python environment
-```
-pip install tensorflow==2.14
-```
-please see the[supported versions](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html).
+please see the [supported versions](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html).
 
-## Run the Sample
+>**Note**: If Docker option is chosen in AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the docker and samples.
 
->**Note**: Before running the sample, make sure Environment Setup is completed.
-Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
-* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
-* [Conda/PIP](#condapip)
-* [Docker](#docker)
+**2. (Offline Installer) Activate the AI Tools bundle base environment**
 
-### AI Tools Offline Installer (Validated)
-1. If you have not already done so, activate the AI Tools bundle base environment. If you used the default location to install AI Tools, open a terminal and type the following
+If the default path is used during the installation of AI Tools:
 ```
 source $HOME/intel/oneapi/intelpython/bin/activate
 ```
-If you used a separate location, open a terminal and type the following
+If a non-default path is used:
 ```
 source <custom_path>/bin/activate
 ```
-2. Activate the Conda environment:
+
+**3. (Offline Installer) Activate relevant Conda environment**
+
+For the system with Intel CPU:
 ```
-conda activate tensorflow
-```
-3. Clone the GitHub repository:
+conda activate tensorflow
+```
+For the system with Intel GPU:
+```
+conda activate tensorflow-gpu
+```
+**4. Clone the GitHub repository**
 ```
 git clone https://github.com/oneapi-src/oneAPI-samples.git
 cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
 ```
-### Run the Script
+## Run the Sample
 
-Run the Python script.
+>**Note**: Before running the sample, make sure Environment Setup is completed.
+Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
+* [AI Tools Offline Installer (Validated)/Conda/PIP](#ai-tools-offline-installer-validatedcondapip)
+* [Docker](#docker)
+### AI Tools Offline Installer (Validated)/Conda/PIP
 ```
 python TensorFlow_HelloWorld.py
 ```
+### Docker
+AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the docker and samples.
 ## Example Output
-
 1. With the initial run, you should see results similar to the following:
 
 ```
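
[Editor's aside: taken together, the new-side steps 2–4 of this hunk plus the run command amount to the following terminal session. This is a sketch assuming the default Offline Installer path and a CPU-only machine; substitute your own install path or the `tensorflow-gpu` environment as needed.]

```shell
# Activate the AI Tools base environment (default install path assumed)
source "$HOME/intel/oneapi/intelpython/bin/activate"
# Activate the CPU Conda environment (use tensorflow-gpu on Intel GPU systems)
conda activate tensorflow
# Fetch the sample and run it
git clone https://github.com/oneapi-src/oneAPI-samples.git
cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
python TensorFlow_HelloWorld.py
```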
@@ -105,7 +108,6 @@ python TensorFlow_HelloWorld.py
 4 0.32920069
 [CODE_SAMPLE_COMPLETED_SUCCESSFULLY]
 ```
-
 2. Export `ONEDNN_VERBOSE` as 1 in the command line. The oneDNN run-time verbose trace should look similar to the following:
 ```
 export ONEDNN_VERBOSE=1
@@ -115,28 +117,31 @@ python TensorFlow_HelloWorld.py
 
 3. Run the sample again. You should see verbose results similar to the following:
 ```
-2024-03-12 16:01:59.784340: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type CPU is enabled.
-onednn_verbose,info,oneDNN v3.2.0 (commit 8f2a00d86546e44501c61c38817138619febbb10)
-onednn_verbose,info,cpu,runtime:OpenMP,nthr:24
-onednn_verbose,info,cpu,isa:Intel AVX2 with Intel DL Boost
-onednn_verbose,info,gpu,runtime:none
-onednn_verbose,info,prim_template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time
-onednn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:cdba::f0 dst_f32:p:blocked:Acdb16a::f0,,,10x4x3x3,0.00195312
-onednn_verbose,exec,cpu,convolution,brgconv:avx2,forward_training,src_f32::blocked:acdb::f0 wei_f32:ap:blocked:Acdb16a::f0 bia_f32::blocked:a::f0 dst_f32::blocked:acdb::f0,attr-scratchpad:user attr-post-ops:eltwise_relu ,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,1.19702
-onednn_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32::blocked:abcd::f0 diff_f32::blocked:abcd::f0,attr-scratchpad:user ,alg:eltwise_relu alpha:0 beta:0,4x128x128x10,0.112061
-onednn_verbose,exec,cpu,convolution,jit:avx2,backward_weights,src_f32::blocked:acdb::f0 wei_f32:ap:blocked:ABcd8b8a::f0 bia_undef::undef::: dst_f32::blocked:acdb::f0,attr-scratchpad:user ,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.358887
+2024-03-12 16:01:59.784340: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type CPU is enabled.
+onednn_verbose,info,oneDNN v3.2.0 (commit 8f2a00d86546e44501c61c38817138619febbb10)
+onednn_verbose,info,cpu,runtime:OpenMP,nthr:24
+onednn_verbose,info,cpu,isa:Intel AVX2 with Intel DL Boost
+onednn_verbose,info,gpu,runtime:none
+onednn_verbose,info,prim_template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time
+onednn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:cdba::f0 dst_f32:p:blocked:Acdb16a::f0,,,10x4x3x3,0.00195312
+onednn_verbose,exec,cpu,convolution,brgconv:avx2,forward_training,src_f32::blocked:acdb::f0 wei_f32:ap:blocked:Acdb16a::f0 bia_f32::blocked:a::f0
+dst_f32::blocked:acdb::f0,attr-scratchpad:user attr-post-ops:eltwise_relu ,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,1.19702
+onednn_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32::blocked:abcd::f0 diff_f32::blocked:abcd::f0,attr-scratchpad:user ,alg:eltwise_relu alpha:0
+beta:0,4x128x128x10,0.112061
+onednn_verbose,exec,cpu,convolution,jit:avx2,backward_weights,src_f32::blocked:acdb::f0 wei_f32:ap:blocked:ABcd8b8a::f0 bia_undef::undef:::
+dst_f32::blocked:acdb::f0,attr-scratchpad:user ,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.358887
 ...
-```
->**Note**: See the *[oneAPI Deep Neural Network Library Developer Guide and Reference](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)* for more details on the verbose log.
+
+>**Note**: See the *[oneAPI Deep Neural Network Library Developer Guide and Reference](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)* for more details on the verbose log.
 
 4. Troubleshooting
 
-If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the *[Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)* for more information on using the utility.
+If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the *[Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)* for more information on using the utility.
 or ask support from https://github.com/intel/intel-extension-for-tensorflow
-
+
 ## Related Samples
 
-* [Intel Extension Fot TensorFlow Getting Started Sample](https://github.com/oneapi-src/oneAPI-samples/blob/development/AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_TensorFlow_GettingStarted/README.md)
+* [Intel Extension For TensorFlow Getting Started Sample](https://github.com/oneapi-src/oneAPI-samples/blob/development/AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_TensorFlow_GettingStarted/README.md)
 
 ## License
 
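
[Editor's aside: the `prim_template` line in the verbose trace above defines how each comma-separated `exec` record is laid out. A minimal sketch, not part of the sample, of splitting one such record into named fields:]

```python
# Field names follow the prim_template header printed by ONEDNN_VERBOSE=1,
# with the leading "onednn_verbose" marker added.
FIELDS = ["marker", "operation", "engine", "primitive", "implementation",
          "prop_kind", "memory_descriptors", "attributes", "auxiliary",
          "problem_desc", "exec_time"]

# One exec record copied from the trace above.
line = ("onednn_verbose,exec,cpu,eltwise,jit:avx2,backward_data,"
        "data_f32::blocked:abcd::f0 diff_f32::blocked:abcd::f0,"
        "attr-scratchpad:user ,alg:eltwise_relu alpha:0 beta:0,"
        "4x128x128x10,0.112061")

record = dict(zip(FIELDS, line.split(",")))
print(record["primitive"], record["exec_time"])
```

Summing the `exec_time` field (milliseconds) per `primitive` over a full trace is a quick way to see where a run spends its time.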