
Commit 7dcde4d

node and mode updates to linear regression readme (oneapi-src#524)
1 parent ed10529 commit 7dcde4d

File tree

  • AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedLinearRegression

1 file changed: +27 -1 lines changed

AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedLinearRegression/README.md

Lines changed: 27 additions & 1 deletion
@@ -24,6 +24,9 @@ Code samples are licensed under the MIT license. See
 
 Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
 
+## Running Samples on the Intel® DevCloud
+If you are running this sample on the DevCloud, see [Running Samples on the Intel® DevCloud](#run-samples-on-devcloud)
+
 ## Building daal4py for CPU
 
 oneAPI Data Analytics Library is ready for use once you finish the Intel® oneAPI AI Analytics Toolkit installation and have run the post installation script.
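
A rough, hedged illustration of getting the environment ready so daal4py is importable (the install prefix and environment name are assumptions for a default local install, not part of this commit):

```bash
# Minimal sketch: activate the oneAPI environment that ships daal4py.
# /opt/intel/oneapi is the assumed default install prefix -- adjust if the toolkit lives elsewhere.
source /opt/intel/oneapi/setvars.sh
conda activate base                                        # assumed Intel Python environment containing daal4py
python -c "import daal4py; print(daal4py.__version__)"     # quick sanity check that daal4py imports
```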
@@ -72,7 +75,7 @@ Launch Jupyter Notebook in the directory housing the code example
 jupyter notebook
 ```
 
-## Running the Sample
+## Running the Sample<a name="running-the-sample"></a>
 
 ### Running the Sample as a Python File
 
@@ -86,6 +89,29 @@ The output of the script will be saved in the included models and result directo
 
 _Note: This code sample focuses on using daal4py to do distributed ML computations on chunks of data. The `mpirun` command above will only run on a single local node. To launch on a cluster, you will need to create a host file on the master node, among other steps. The **TensorFlow_Multinode_Training_with_Horovod** code sample explains this process well._
 
+### Running Samples on the Intel&reg; DevCloud (Optional)<a name="run-samples-on-devcloud"></a>
+
+<!---Include the next paragraph ONLY if the sample runs in batch mode-->
+### Run in Batch Mode
+This sample runs in batch mode, so you must have a script for batch processing. Once you have a script set up, refer to [Running the Sample](#running-the-sample).
+
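As a rough illustration of the batch script mentioned above (the file name, environment setup, and Python script name are all assumptions, not part of this commit):

```bash
#!/bin/bash
# run-linreg.sh -- hypothetical DevCloud batch script for this sample (submit with qsub, see the node table below)
source /opt/intel/oneapi/setvars.sh          # set up the oneAPI environment on the compute node
mpirun -n 4 python ./IntelPython_daal4py_Distributed_LinearRegression.py
```

It would then be submitted with a command like the bold CPU row in the table below, e.g. `qsub -l nodes=1:xeon:ppn=2 -d . run-linreg.sh`.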
+<!---Include the next paragraph ONLY if the sample DOES NOT RUN in batch mode-->
+### Run in Interactive Mode
+This sample runs in interactive mode. For more information, see [Run as Jupyter Notebook](#run-as-jupyter-notebook).
+
+### Request a Compute Node
+In order to run on the DevCloud, you need to request a compute node using node properties such as `gpu`, `xeon`, `fpga_compile`, and `fpga_runtime`. For more information about the node properties, execute the `pbsnodes` command.
+This node information must be provided when submitting a job to run your sample in batch mode using the `qsub` command. When you see the `qsub` command in the Run section of the [Hello World instructions](https://devcloud.intel.com/oneapi/get_started/aiAnalyticsToolkitSamples/), change the command to fit the node you are using. Nodes shown in bold are compatible with this sample:
+
+<!---Mark each compatible Node in BOLD-->
+| Node              | Command                                                  |
+| ----------------- | -------------------------------------------------------- |
+| GPU               | qsub -l nodes=1:gpu:ppn=2 -d . hello-world.sh            |
+| __CPU__           | __qsub -l nodes=1:xeon:ppn=2 -d . hello-world.sh__       |
+| FPGA Compile Time | qsub -l nodes=1:fpga\_compile:ppn=2 -d . hello-world.sh  |
+| FPGA Runtime      | qsub -l nodes=1:fpga\_runtime:ppn=2 -d . hello-world.sh  |
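
For the interactive path described above, a compute node can also be requested directly; a hedged sketch using the standard PBS interactive flag, with node properties as in the table:

```bash
# Request an interactive session on a CPU (Xeon) node, then start Jupyter from the sample directory
qsub -I -l nodes=1:xeon:ppn=2 -d .
jupyter notebook
```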
+
+
 ##### Expected Printed Output (with similar numbers, printed 4 times):
 ```
 