Commit ed10529: KMeans Readme Update for node and mode (oneapi-src#525)

1 parent 5e1154d commit ed10529

1 file changed: +27 −1 lines changed

AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedKMeans/README.md (27 additions, 1 deletion)
@@ -24,6 +24,9 @@ Code samples are licensed under the MIT license. See

 Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

+## Running Samples on the Intel® DevCloud
+If you are running this sample on the DevCloud, see [Running Samples on the Intel® DevCloud](#run-samples-on-devcloud)
+
 ## Building daal4py for CPU

 oneAPI Data Analytics Library is ready for use once you finish the Intel® oneAPI AI Analytics Toolkit installation and have run the post-installation script.
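A typical activation sequence looks like the sketch below. This is an assumption for illustration: the install prefix `/opt/intel/oneapi` is the default documented location but may differ on your system, and the conda environment name depends on your installation.

```shell
# Load oneAPI environment variables (default install prefix; adjust if needed).
source /opt/intel/oneapi/setvars.sh

# Activate the conda environment that ships daal4py (name may vary per install).
conda activate base

# Quick sanity check that daal4py is importable.
python -c "import daal4py; print(daal4py.__name__)"
```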
@@ -72,7 +75,7 @@ Launch Jupyter Notebook in the directory housing the code example

 jupyter notebook
 ```

-### Running the Sample as a Python File
+### Running the Sample as a Python File<a name="running-the-sample"></a>

 When using daal4py for distributed memory systems, the program must be run from a bash shell. To execute this example, run the following command, where the number **4** is chosen as an example and means that the program will run on **4 processes**:
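The idea behind the distributed run can be illustrated without daal4py or MPI. The sketch below is a conceptual NumPy stand-in, not the daal4py API: one k-means iteration is computed the way a distributed run would compute it, with each simulated process seeing only its own chunk of the data, contributing partial sums and counts, which are then reduced into global centroids.

```python
# Conceptual sketch (NOT the daal4py API): one k-means update computed
# chunk-by-chunk, the way a distributed SPMD run partitions the work.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(400, 2))
chunks = np.array_split(data, 4)      # stand-in for 4 MPI processes
centroids = data[:3].copy()           # 3 clusters, trivial initialization

# Each chunk contributes partial sums and counts for each centroid.
sums = np.zeros_like(centroids)
counts = np.zeros(len(centroids))
for chunk in chunks:                  # in a real run these execute in parallel
    dists = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    for k in range(len(centroids)):
        sums[k] += chunk[labels == k].sum(axis=0)
        counts[k] += (labels == k).sum()

# Global reduction: identical to updating centroids from the full array.
new_centroids = sums / counts[:, None]
print(new_centroids.shape)            # (3, 2)
```

The reduction step is why chunking works: sums and counts combine exactly, so the chunked result matches a single-node computation on the whole array.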
@@ -84,6 +87,29 @@ The output of the script will be saved in the included models and result directories

 _Note: This code sample focuses on using daal4py to do distributed ML computations on chunks of data. The `mpirun` command above will only run on a single local node. To launch on a cluster, you will need to create a host file on the master node, among other steps. The **TensorFlow_Multinode_Training_with_Horovod** code sample explains this process well._

+## Running Samples on the Intel&reg; DevCloud (Optional)<a name="run-samples-on-devcloud"></a>
+
+<!---Include the next paragraph ONLY if the sample runs in batch mode-->
+### Run in Batch Mode
+This sample runs in batch mode, so you must have a script for batch processing. Once you have a script set up, refer to [Running the Sample](#running-the-sample).
+
+<!---Include the next paragraph ONLY if the sample DOES NOT RUN in batch mode-->
+### Run in Interactive Mode
+This sample runs in interactive mode. For more information, see [Run as Jupyter Notebook](#run-as-jupyter-notebook).
+
+### Request a Compute Node
+In order to run on the DevCloud, you need to request a compute node using node properties such as `gpu`, `xeon`, `fpga_compile`, `fpga_runtime`, and others. For more information about the node properties, execute the `pbsnodes` command.
+This node information must be provided when submitting a job to run your sample in batch mode using the `qsub` command. When you see the `qsub` command in the Run section of the [Hello World instructions](https://devcloud.intel.com/oneapi/get_started/aiAnalyticsToolkitSamples/), change the command to fit the node you are using. Nodes shown in bold are compatible with this sample:
+
+<!---Mark each compatible Node in BOLD-->
+| Node              | Command                                                  |
+| ----------------- | -------------------------------------------------------- |
+| GPU               | qsub -l nodes=1:gpu:ppn=2 -d . hello-world.sh            |
+| __CPU__           | __qsub -l nodes=1:xeon:ppn=2 -d . hello-world.sh__       |
+| FPGA Compile Time | qsub -l nodes=1:fpga\_compile:ppn=2 -d . hello-world.sh  |
+| FPGA Runtime      | qsub -l nodes=1:fpga\_runtime:ppn=2 -d . hello-world.sh  |
+
 ##### Expected Printed Output (with similar numbers, printed 4 times):
 ```