
Commit 61ee48a

more in readme
1 parent 9d51fab commit 61ee48a

File tree

2 files changed, +26 −1 lines changed


cassio/README.md

Lines changed: 2 additions & 0 deletions

@@ -234,6 +234,8 @@ The first part of the script consists of Slurm preprocessing directives such as:
 #SBATCH -c 4
 ```
 
+**Important: do not forget to activate the conda env before submitting a job, or make sure you do so in the script.**
+
 Similar to the arguments we passed to `srun` during the interactive job request, here we specify requirements for the batch job.
 
 After the `#SBATCH` block one may execute any shell commands or run any script of your choice.
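The `#SBATCH` directives and the conda note in the hunk above can be combined into a complete batch script. A minimal sketch, assuming a conda install under `~/miniconda3` and an env named `myenv` (both hypothetical, not from the original):

```shell
#!/bin/bash
#SBATCH --job-name=example   # job name (hypothetical)
#SBATCH -c 4                 # 4 CPU cores, as in the diff above
#SBATCH --mem=8G             # memory request (hypothetical value)

# Activate the conda env inside the script, per the note above;
# the conda path and env name are assumptions.
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate myenv

# Any shell commands after the #SBATCH block run as the job body.
python my_script.py   # my_script.py is a placeholder
```

Slurm reads the `#SBATCH` lines as comments, so the file is an ordinary shell script that can also be run directly for debugging.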

prince/README.md

Lines changed: 24 additions & 1 deletion

@@ -97,4 +97,27 @@ As we noted before, one particular difference with Cassio is about GPU allocation
 #SBATCH --gres=gpu:1
 #SBATCH --mem=64G
 #SBATCH -c 4
-```
+```
+
+**Important: do not forget to activate the conda env before submitting a job, or make sure you do so in the script.**
+
+Similar to the arguments we passed to `srun` during the interactive job request, here we specify requirements for the batch job.
+
+After the `#SBATCH` block one may execute any shell commands or run any script of your choice.
+
+**You cannot mix `#SBATCH` lines with other commands; Slurm will not register any `#SBATCH` directive after the first regular (non-comment) command in the script.**
+
+To submit `job_wgpu` located in `gpu_job.slurm`, go to the Cassio node and run:
+
+`sbatch gpu_job.slurm`
+
+Sample output:
+
+```
+Torch cuda available: True
+GPU name: Tesla V100-SXM2-32GB-LS
+
+
+CPU matmul elapsed: 1.1984939575195312 sec.
+GPU matmul elapsed: 0.01778721809387207 sec.
+```
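The sample output above is presumably produced by a small benchmark script run inside the job; that script is not part of this diff (it would use Torch and CUDA, per the output). A dependency-free sketch of the same timing pattern, with hypothetical `matmul` and `timed` helpers standing in for the Torch calls:

```python
import time

def matmul(a, b):
    """Naive matrix multiply over lists of lists (illustration only)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def timed(fn, *args):
    """Return (result, elapsed wall-clock seconds) for fn(*args)."""
    start = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - start

if __name__ == "__main__":
    n = 100
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    _, elapsed = timed(matmul, a, b)
    print(f"CPU matmul elapsed: {elapsed} sec.")
```

In the real job one would time `torch` matmuls on CPU and GPU tensors instead; the point here is only the `perf_counter` bracketing around the operation being measured.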
