@@ -34,10 +34,10 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
## Build and Run the Sample
- ### Pre-requirement
+ ## Running Samples on the Intel® DevCloud
+ If you are running this sample on the DevCloud, see [Running Samples on the Intel® DevCloud](#run-samples-on-devcloud).
- > NOTE: No action is required if users use Intel DevCloud as their environment.
- Please refer to [Intel oneAPI DevCloud](https://intelsoftwaresites.secure.force.com/devcloud/oneapi) for Intel DevCloud.
+ ### Prerequisites
1. **Intel AI Analytics Toolkit**
You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation,
@@ -47,39 +47,8 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
Users can install it via pip: `$pip install notebook`.
Users can also refer to the [installation link](https://jupyter.org/install) for details.
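Before creating the conda environments described below, it may help to confirm the basic tooling is in place. This is only a quick sanity-check sketch, assuming conda and Jupyter are expected to be on your PATH per the prerequisites above:

```bash
# Sanity check: confirm the tools used in the following steps are available.
which conda              # needed to create the environments below
jupyter --version        # prints the versions of the installed Jupyter components
python -m pip --version  # pip is used to install TensorFlow and extra packages
```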
- #### **Conda Environment Creation**
+ #### **Conda Environment Creation (Local Installation)**
- ##### **1. Intel oneAPI DevCloud**
- ---
- ###### **Stock TensorFlow**
-
- 1. Create conda env: `$conda create -n stock-tensorflow python matplotlib ipykernel psutil pandas gitpython`
- 2. Activate the created conda env: `$source activate stock-tensorflow`
- 3. Install stock TensorFlow with a specific version: `(stock-tensorflow) $pip install tensorflow==2.3.0`
- 4. Install the extra required package: `(stock-tensorflow) $pip install cxxfilt`
- 5. Deactivate the conda env: `(stock-tensorflow)$conda deactivate`
- 6. Register the kernel to Jupyter NB: `$~/.conda/envs/stock-tensorflow/bin/python -m ipykernel install --user --name=stock-tensorflow`
-
- > NOTE: Please change the python path if your anaconda3 is installed in a different folder.
- > After profiling, users can remove the kernel from Jupyter NB with `$jupyter kernelspec uninstall stock-tensorflow`.
-
- ###### **Intel TensorFlow**
-
- > NOTE: Intel-optimized TensorFlow is preinstalled on the DevCloud, but users do not have permission to install extra packages into it.
- > Therefore, clone the Intel TensorFlow environment into your home directory so that the extra packages can be installed.
-
- 1. Source the oneAPI environment variables: `$source /opt/intel/oneapi/setvars.sh`
- 2. Create conda env: `$conda create --name intel-tensorflow --clone tensorflow`
- 3. Activate the created conda env: `$source activate intel-tensorflow`
- 4. Install the extra required packages: `(intel-tensorflow) $pip install cxxfilt matplotlib ipykernel psutil pandas gitpython`
- 5. Deactivate the conda env: `(intel-tensorflow)$conda deactivate`
- 6. Register the kernel to Jupyter NB: `$~/.conda/envs/intel-tensorflow/bin/python -m ipykernel install --user --name=intel-tensorflow`
-
- > NOTE: Please change the python path if your anaconda3 is installed in a different folder.
- > After profiling, users can remove the kernel from Jupyter NB with `$jupyter kernelspec uninstall intel-tensorflow`.
-
- ##### **2. Linux with Intel oneAPI AI Analytics Toolkit**
- ---
###### **Stock TensorFlow**
1. Create conda env: `$conda create -n stock-tensorflow python matplotlib ipykernel psutil pandas gitpython`
@@ -106,7 +75,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
After profiling, users can remove the kernel from Jupyter NB with `$jupyter kernelspec uninstall intel-tensorflow`.
- ### Running the Sample
+ ### Running the Sample (Local Installation)
1. Copy the Intel Model Zoo from your AI Analytics Toolkit installation path: `$cp -rf /opt/intel/oneapi/modelzoo/latest/models ~/`
2. Initialize the copied folder as a git repository: `cd ~/models; git init; git add .; git commit -m 'initial commit'`
@@ -123,11 +92,66 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
> NOTE: To compare stock and Intel-optimized TF results in the section "Analyze TF Timeline results among Stock and Intel Tensorflow," users need to run all cells before the comparison section with both stock-tensorflow and intel-tensorflow kernels.
+ ### **Running Samples on the Intel® DevCloud (Optional)<a name="run-samples-on-devcloud"></a>**
+
+ #### **Conda Environment Creation (DevCloud)**
+
+ ##### **Stock TensorFlow**
+
+ 1. Create conda env: `$conda create -n stock-tensorflow python matplotlib ipykernel psutil pandas gitpython`
+ 2. Activate the created conda env: `$source activate stock-tensorflow`
+ 3. Install stock TensorFlow with a specific version: `(stock-tensorflow) $pip install tensorflow==2.3.0`
+ 4. Install the extra required package: `(stock-tensorflow) $pip install cxxfilt`
+ 5. Deactivate the conda env: `(stock-tensorflow)$conda deactivate`
+ 6. Register the kernel to Jupyter NB: `$~/.conda/envs/stock-tensorflow/bin/python -m ipykernel install --user --name=stock-tensorflow`
+
+ > NOTE: Please change the python path if your anaconda3 is installed in a different folder.
+ > After profiling, users can remove the kernel from Jupyter NB with `$jupyter kernelspec uninstall stock-tensorflow`.
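+
+ The numbered steps above can also be run as one block. The following is only a condensed sketch of those same commands; the single addition is conda's `-y` flag to skip the confirmation prompt:
+
+ ```bash
+ # Condensed version of steps 1-6: create and register the stock-tensorflow kernel.
+ conda create -y -n stock-tensorflow python matplotlib ipykernel psutil pandas gitpython
+ source activate stock-tensorflow
+ pip install tensorflow==2.3.0 cxxfilt
+ conda deactivate
+ ~/.conda/envs/stock-tensorflow/bin/python -m ipykernel install --user --name=stock-tensorflow
+ ```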
+
+ ##### **Intel TensorFlow**
+
+ > NOTE: Intel-optimized TensorFlow is preinstalled on the DevCloud, but users do not have permission to install extra packages into it.
+ > Therefore, clone the Intel TensorFlow environment into your home directory so that the extra packages can be installed.
+
+ 1. Source the oneAPI environment variables: `$source /opt/intel/oneapi/setvars.sh`
+ 2. Create conda env: `$conda create --name intel-tensorflow --clone tensorflow`
+ 3. Activate the created conda env: `$source activate intel-tensorflow`
+ 4. Install the extra required packages: `(intel-tensorflow) $pip install cxxfilt matplotlib ipykernel psutil pandas gitpython`
+ 5. Deactivate the conda env: `(intel-tensorflow)$conda deactivate`
+ 6. Register the kernel to Jupyter NB: `$~/.conda/envs/intel-tensorflow/bin/python -m ipykernel install --user --name=intel-tensorflow`
+
+ > NOTE: Please change the python path if your anaconda3 is installed in a different folder.
+ > After profiling, users can remove the kernel from Jupyter NB with `$jupyter kernelspec uninstall intel-tensorflow`.
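+
+ Likewise, the clone-and-register flow above can be run as one block. This is only a sketch of the same commands, with conda's `-y` flag added and a final `jupyter kernelspec list` included just to confirm that both kernels are now registered:
+
+ ```bash
+ # Condensed version of steps 1-6: clone the preinstalled TensorFlow env and register it.
+ source /opt/intel/oneapi/setvars.sh
+ conda create -y --name intel-tensorflow --clone tensorflow
+ source activate intel-tensorflow
+ pip install cxxfilt matplotlib ipykernel psutil pandas gitpython
+ conda deactivate
+ ~/.conda/envs/intel-tensorflow/bin/python -m ipykernel install --user --name=intel-tensorflow
+ jupyter kernelspec list   # expect stock-tensorflow and intel-tensorflow in the output
+ ```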
+
+ ### **Run in Interactive Mode**
+ 1. Copy the Intel Model Zoo from your AI Analytics Toolkit installation path: `$cp -rf /opt/intel/oneapi/modelzoo/latest/models ~/`
+ 2. Launch Jupyter notebook: `$jupyter notebook --ip=0.0.0.0`
+ 3. Follow the instructions to open the URL with the token in your browser
+ 4. Browse to the `models/docs/notebooks/perf_analysis` folder
+ 5. Click the `benchmark_perf_comparison.ipynb` or `benchmark_perf_timeline_analysis.ipynb` file
+ 6. Change your Jupyter notebook kernel to either "stock-tensorflow" or "intel-tensorflow" (highlighted in the diagram below)
+ <br><img src="images/jupyter_kernels.png" width="300" height="300"><br>
+ 7. Run through every cell of the notebook one by one
+
+ > NOTE: To compare stock and Intel-optimized TF results in the section "Analyze TF Timeline results among Stock and Intel Tensorflow," users need to run all cells before the comparison section with both stock-tensorflow and intel-tensorflow kernels.
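+
+ As an optional alternative to clicking through the cells, a notebook can also be executed headlessly with nbconvert against one of the registered kernels. This is not part of the original steps, just a sketch: the output file name is arbitrary, and long-running benchmarks may need a larger timeout value.
+
+ ```bash
+ # Execute the comparison notebook non-interactively with the stock-tensorflow kernel.
+ cd ~/models/docs/notebooks/perf_analysis
+ jupyter nbconvert --to notebook --execute \
+   --ExecutePreprocessor.kernel_name=stock-tensorflow \
+   --ExecutePreprocessor.timeout=3600 \
+   benchmark_perf_comparison.ipynb --output benchmark_perf_comparison_stock.ipynb
+ ```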
+
+ ### **Request a Compute Node**
+ In order to run on the DevCloud, you need to request a compute node using node properties such as `gpu`, `xeon`, `fpga_compile`, and `fpga_runtime`, among others. For more information about the node properties, execute the `pbsnodes` command.
+ This node information must be provided when submitting a job to run your sample in batch mode using the qsub command. When you see the qsub command in the Run section of the [Hello World instructions](https://devcloud.intel.com/oneapi/get_started/aiAnalyticsToolkitSamples/), change the command to fit the node you are using. Nodes shown in bold are compatible with this sample:
+
+ <!---Mark each compatible Node in BOLD-->
+ | Node              | Command                                                 |
+ | ----------------- | ------------------------------------------------------- |
+ | GPU               | qsub -l nodes=1:gpu:ppn=2 -d . hello-world.sh           |
+ | CPU               | qsub -l nodes=1:xeon:ppn=2 -d . hello-world.sh          |
+ | FPGA Compile Time | qsub -l nodes=1:fpga_compile:ppn=2 -d . hello-world.sh  |
+ | FPGA Runtime      | qsub -l nodes=1:fpga_runtime:ppn=2 -d . hello-world.sh  |
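+
+ For interactive work, such as running the Jupyter steps above on a compute node instead of the login node, the same node properties can be used with an interactive qsub session. This is only a sketch using the CPU properties from the table above:
+
+ ```bash
+ # Request an interactive session on a Xeon CPU node, starting in the current directory.
+ qsub -I -l nodes=1:xeon:ppn=2 -d .
+ # Inspect available node properties if a different node type is needed.
+ pbsnodes | grep properties | sort | uniq -c
+ ```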
+
### Example of Output
- Users should be able to see some diagrams for performance comparison and analysis.
+ Users should see diagrams for performance comparison and analysis.
One example of the performance comparison diagrams:
<br><img src="images/perf_comparison.png" width="400" height="300"><br>
- For performance analysis, users can also see pie charts for top hotspots of Tensorflow* operations among Stock and Intel Tensorflow.
+ For performance analysis, users can also see pie charts of the top hotspots among TensorFlow* operations for both stock and Intel-optimized TensorFlow.
One example of the performance analysis diagrams:
<br><img src="images/compared_tf_op_duration_pie.png" width="900" height="400"><br>