Commit 13cb632

Added distinct DevCloud instructions (oneapi-src#555)
The previous DevCloud instructions did not work with this sample. I added a DevCloud-specific section with instructions for using the Jupyter Lab terminal and a local terminal.
1 parent 8e03683 commit 13cb632

File tree

4 files changed: +157 −28 lines

AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted/README.md

Lines changed: 157 additions & 28 deletions

Intel-optimized TensorFlow is available as part of the Intel® AI Analytics Toolkit.
Runtime settings for `MKLDNN_VERBOSE`, `KMP_AFFINITY`, and `Inter/Intra-op` Threads are set within the script. You can read more about these settings in this dedicated document: [Maximize TensorFlow Performance on CPU: Considerations and Recommendations for Inference Workloads](https://software.intel.com/en-us/articles/maximize-tensorflow-performance-on-cpu-considerations-and-recommendations-for-inference)
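The settings above are applied inside the sample's script itself. As a rough illustration of how such runtime settings can be exported from the shell instead, here is a hedged sketch; the specific values below (and the use of `OMP_NUM_THREADS` for the intra-op thread count) are placeholder assumptions, not the values the script uses:

```shell
# Hypothetical values for illustration only -- the sample's script sets its own.
export MKLDNN_VERBOSE=1                                   # print oneDNN primitive traces
export KMP_AFFINITY=granularity=fine,verbose,compact,1,0  # pin OpenMP threads to cores
export OMP_NUM_THREADS=4                                  # cap OpenMP thread count
```

Note that TensorFlow's inter/intra-op thread counts are typically set through its Python session/config options rather than environment variables, which is why the script sets them internally.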

## License

Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third-party program licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

## Build and Run the Sample on Your Local Machine

These instructions demonstrate how to build and run a sample on a machine where you have installed the Intel AI Analytics Toolkit. If you would like to try a sample without installing a toolkit, see [Running Samples in DevCloud](#running-samples-in-devcloud-optional).

### Prerequisites

TensorFlow is ready for use once you finish the Intel AI Analytics Toolkit installation and have run the post-installation script.

You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Getting Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.

### Activate Conda Environment with Root Access

Follow the Getting Started Guide steps above to set up your oneAPI environment with the setvars.sh script. Then, in your Linux shell, navigate to your oneAPI installation path, typically `~/intel/oneapi`, and activate the conda environment with the following command:

```
source activate tensorflow
```

Please replace `~/intel/oneapi` with your oneAPI installation path.

### Activate Conda Environment Without Root Access (Optional)

By default, the Intel AI Analytics Toolkit is installed in the `inteloneapi` folder, which requires root privileges to manage. If you would like to avoid using root access to manage your conda environment, you can clone your desired conda environment using the following command:
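A minimal sketch of the clone-and-activate step, assuming the environment name `user_tensorflow` used later in this README (conda places the clone in a user-writable location by default):

```shell
# Clone the read-only tensorflow environment into a user-owned copy
conda create --name user_tensorflow --clone tensorflow
# Activate the user-owned clone
source activate user_tensorflow
```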
## Running the Sample

To run the program on Linux*, follow these steps in a terminal with Python available:

1. Navigate to the directory with the TensorFlow sample:
```
cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
```
2. Run the sample:
```
python TensorFlow_HelloWorld.py
```

If you export `DNNL_VERBOSE` as 1 in the command line, the mkldnn run-time verbose trace will be printed:

```
export DNNL_VERBOSE=1
```
Then run the sample again:
```
python TensorFlow_HelloWorld.py
```
You will see the verbose output:
```
2021-01-06 10:44:28.875296: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
dnnl_verbose,info,DNNL v1.2.0 (commit N/A)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX-512 with Intel DL Boost
dnnl_verbose,info,gpu,runtime:none
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:acdb:f0 dst_f32::blocked:abcd:f0,,,4x4x128x128,12.0649
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32::blocked:cdba:f0 dst_f32:p:blocked:Acdb16a:f0,,,10x4x3x3,0.187012
dnnl_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,src_f32::blocked:abcd:f0 wei_f32:p:blocked:Acdb16a:f0 bia_undef::undef::f0 dst_f32:p:blocked:aBcd16b:f0,,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.266113
```
Please see the [DNNL Developer's Guide](https://intel.github.io/mkl-dnn/dev_guide_verbose.html) for more details on the verbose log.
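Each `dnnl_verbose` line is a comma-separated record. As a hedged illustration (the snippet below is not part of the sample; the field layout follows the DNNL verbose guide linked above), an `exec` line can be split into named fields like this:

```python
# Sketch: pull the interesting fields out of one dnnl_verbose "exec" record.
line = ("dnnl_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,"
        "src_f32::blocked:abcd:f0 wei_f32:p:blocked:Acdb16a:f0 "
        "bia_undef::undef::f0 dst_f32:p:blocked:aBcd16b:f0,,"
        "alg:convolution_direct,"
        "mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.266113")

fields = line.split(",")
record = {
    "kind": fields[1],             # "exec" records time a primitive; "info" records describe the build
    "engine": fields[2],           # cpu or gpu
    "primitive": fields[3],        # e.g. convolution, reorder
    "implementation": fields[4],   # e.g. jit:avx512_common
    "time_ms": float(fields[-1]),  # execution time in milliseconds (last field)
}
print(record)
```

This makes it easy to, say, sum `time_ms` per primitive across a run to see where the time goes.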

### Running Samples in DevCloud (Optional)

#### Run TensorFlow_HelloWorld in Jupyter Lab

1. Open [Intel DevCloud](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html).
2. In the upper right corner, click **Sign In**.
3. Log in with your Intel account username and password.
4. Open Jupyter Lab: https://jupyter.oneapi.devcloud.intel.com/

   a. If you are redirected to the Intel DevCloud page, scroll to the bottom and select **Launch Jupyter Lab**.
   ![](images/jupyter-button.png)

   b. When Jupyter Lab opens, if prompted for a kernel, select **No Kernel**.
5. Close the Welcome page. The Launcher tab will appear.
6. On the Launcher tab, click **Terminal**.
   ![](images/jupyter-terminal.png)
7. You will see your login name at the prompt.
8. Activate the TensorFlow environment:
   `source activate tensorflow`
   You will see `(tensorflow)` in your prompt.
   ![](images/tf-login.png)
9. Change directories to the TensorFlow Getting Started sample directory:
```
cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
```
10. Run the program:
```
python TensorFlow_HelloWorld.py
```
With successful execution, it will print out the following results:
```
0 0.4147554
1 0.3561021
2 0.33979267
3 0.33283564
4 0.32920069
[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]
```
To see verbose output, enable verbose mode:

```
export DNNL_VERBOSE=1
```

Run the sample again to see the verbose output:
```
python TensorFlow_HelloWorld.py
```
The mkldnn run-time verbose trace should look similar to what is shown below:

```
2021-01-06 10:44:28.875296: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
dnnl_verbose,info,DNNL v1.2.0 (commit N/A)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX-512 with Intel DL Boost
dnnl_verbose,info,gpu,runtime:none
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:acdb:f0 dst_f32::blocked:abcd:f0,,,4x4x128x128,12.0649
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32::blocked:cdba:f0 dst_f32:p:blocked:Acdb16a:f0,,,10x4x3x3,0.187012
dnnl_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,src_f32::blocked:abcd:f0 wei_f32:p:blocked:Acdb16a:f0 bia_undef::undef::f0 dst_f32:p:blocked:aBcd16b:f0,,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.266113
```

### Running the Sample in DevCloud with a Local Terminal

1. Open a terminal on your Linux system.
2. Log in to DevCloud:
```
ssh devcloud
```
3. Request an interactive session on a compute node:
```
qsub -I
```
4. Change directories to the TensorFlow Getting Started sample directory:
```
cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
```
5. Run the program:
```
python TensorFlow_HelloWorld.py
```
With successful execution, it will print out the following results:
```
0 0.4147554
1 0.3561021
2 0.33979267
3 0.33283564
4 0.32920069
[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]
```

To see verbose output, enable verbose mode:
```
export DNNL_VERBOSE=1
```
Run the sample again to see the verbose output:
```
python TensorFlow_HelloWorld.py
```

The mkldnn run-time verbose trace should look similar to what is shown below:

```
2021-01-06 10:44:28.875296: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
dnnl_verbose,info,DNNL v1.2.0 (commit N/A)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX-512 with Intel DL Boost
dnnl_verbose,info,gpu,runtime:none
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:acdb:f0 dst_f32::blocked:abcd:f0,,,4x4x128x128,12.0649
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32::blocked:cdba:f0 dst_f32:p:blocked:Acdb16a:f0,,,10x4x3x3,0.187012
dnnl_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,src_f32::blocked:abcd:f0 wei_f32:p:blocked:Acdb16a:f0 bia_undef::undef::f0 dst_f32:p:blocked:aBcd16b:f0,,alg:convolution_direct,mb4_ic4oc10_ih128oh128kh3sh1dh0ph1_iw128ow128kw3sw1dw0pw1,0.266113
```

### Run in Batch Mode on DevCloud

The batch script activates your environment and runs each sample.

1. Navigate to the directory with the TensorFlow sample:
```
cd ~/oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted
```
2. Create a new file titled hello-world.sh:
```
vim hello-world.sh
```
3. Add these two lines to the top of the file:
```
source activate tensorflow
python TensorFlow_HelloWorld.py
```
4. Save and exit your text editor:
```
:wq
```
5. You can now use the script to run in batch mode:
   `source hello-world.sh`
6. To add more samples to run in batch mode, open hello-world.sh and add each script or sample on a new line.
261+