Commit fda4b68

Update README.md (oneapi-src#546)
Parent: dad5ec9

File tree

1 file changed (+2, -2 lines)
  • AI-and-Analytics/Getting-Started-Samples/LPOT-Sample-for-Tensorflow


AI-and-Analytics/Getting-Started-Samples/LPOT-Sample-for-Tensorflow/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-# `Intel® Low Precision Optimization Tool (LPOT)` Sample for TensorFlow*
+# `Intel® Low Precision Optimization Tool (LPOT)` Sample for TensorFlow*
 
 ## Background
 Low-precision inference can significantly speed up inference by converting the fp32 model to an int8 or bf16 model. Intel provides Intel® Deep Learning Boost technology in the Second Generation Intel® Xeon® Scalable Processors and newer Xeon® processors, which accelerates int8 and bf16 models in hardware.
@@ -178,7 +178,7 @@ conda activate /opt/intel/oneapi/intelpython/latest/envs/tensorflow
 
 ### Open Sample Code File
 
-In a web browser, open link: **http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca**. Click 'lpot_sample_TensorFlow.ipynb' to start up the sample.
+In a web browser, open link: **http://yyy:8888/?token=146761d9317552c43e0d6b8b6b9e1108053d465f6ca32fca**. Click 'lpot_sample_tensorflow.ipynb' to start up the sample.
 
 ### Run
 
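The Background paragraph preserved in the diff context above attributes the speedup to converting fp32 models to int8. As a rough, framework-independent illustration of the arithmetic involved (a NumPy sketch of symmetric per-tensor int8 quantization, not LPOT's actual algorithm, which additionally uses calibration data and accuracy-driven tuning):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: w ~= scale * q, with q in [-127, 127].
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an fp32 approximation of the original tensor.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error of symmetric quantization is bounded by scale / 2.
max_err = float(np.abs(w - w_hat).max())
```

The int8 tensor occupies a quarter of the fp32 storage, and on hardware with Intel® Deep Learning Boost the int8 matrix multiplies themselves run faster; the bounded rounding error is why int8 inference typically loses little accuracy.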
