Commit afa9241b authored by Carina Lansing's avatar Carina Lansing

Reinstated a section that was accidentally deleted.

parent 327a49f4
```shell
$ module load singularity/3.7.1
```
Once in your compute node shell with the singularity module loaded,
you can run LASSO-O using the following instructions.
## Running LASSO-O via Singularity
You can start LASSO-O by executing the `` script via a scheduler command.
Below we provide examples for the Slurm and PBS schedulers.
These commands instruct the scheduler to run LASSO-O with one node
and one CPU. Currently, the LASSO-O container executes each process
in sequence; support for multiple cores will be added later.
Until then, LASSO-O may take a while to run, depending upon the number of simulations
you are processing. In addition, the first time you run the lasso-o_shcu container,
the container runtime will download the container image, which will also take a few minutes.
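Because the image download happens on first use, you can optionally pre-pull the image before submitting a job so the scheduled run starts immediately. The registry path below is an assumption for illustration only; substitute the actual location of the lasso-o_shcu image:

```shell
# Pre-fetch the container image so the scheduled job does not spend
# wall time downloading it. The registry path is a placeholder --
# replace it with the real lasso-o_shcu image location.
singularity pull lasso-o_shcu.sif docker://registry.example.org/lasso-o_shcu:latest
```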
> **NOTE:**
> If you do not use a scheduler to invoke the script, it will run on the login node,
> which may be killed per the host's usage policy if it runs too long.
When your job has completed, you may view the outputs created in the
`run-lasso-o_shcu/data/outputs` folder using the notebooks provided.
See the [notebooks/](notebooks/) folder for more information.
#### Start via Slurm
* **Step 1: Run job**
  ```shell
  $ srun --verbose --nodes=1 --ntasks=1 --cpus-per-task=1 ./
  ```
* **Step 2: Check job status**
  To check the status of your job, list the queue for your user ID:

  ```shell
  $ squeue -u [user_name]
  ```

  You can also get more information about the running job using the `scontrol` command
  and the jobID printed out by the `squeue` command:

  ```shell
  $ scontrol show jobid -dd [jobID]
  ```
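If you prefer to submit the Slurm job in the background rather than running `srun` interactively, the same one-node, one-CPU request can be expressed as a batch script submitted with `sbatch`. The sketch below is an assumption, not a site-tested script; the job name, walltime, and module name are placeholders to adapt for your cluster:

```shell
#!/bin/bash
# Minimal Slurm batch-script sketch -- site-specific values are placeholders.
#SBATCH --job-name=lasso-o
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=08:00:00          # adjust to your expected runtime

module load singularity/3.7.1    # use the module name available on your cluster

# Run the LASSO-O driver script from the directory the job was submitted from.
cd "$SLURM_SUBMIT_DIR"
./
```

You can then check its progress with `squeue -u [user_name]` as shown above.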
#### Start via PBS
PBS jobs must be started with a batch script:
* **Step 1: Edit batch script**
To run LASSO-O, first edit
the `` file to use the appropriate parameters for your
environment. In particular, the account name, group_list, and QoS
parameters should definitely be changed, but other parameters may also be adjusted
as needed. In addition, the `module load` commands should be adjusted
to load the singularity module that is available on your cluster.
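For reference, a PBS batch script for this kind of single-node job typically looks like the sketch below. Every value here (job name, resource list, account, group list, QoS, walltime, module name) is a placeholder assumption to replace with your site's settings, and QoS syntax in particular varies between PBS installations:

```shell
#!/bin/bash
# Minimal PBS batch-script sketch -- all directive values are placeholders.
#PBS -N lasso-o
#PBS -l nodes=1:ppn=1
#PBS -l walltime=08:00:00
#PBS -A account_name
#PBS -W group_list=group_name
#PBS -l qos=standard

module load singularity/3.7.1   # use the module name available on your cluster

# Run the LASSO-O driver script from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"
./
```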
* **Step 2: Submit job**
After you have edited the batch script, you should be able to submit a batch job via:

  ```shell
  $ qsub -d .
  ```
* **Step 3: Check job status**
  To check the status of your job, list the queue for your user ID:

  ```shell
  $ qstat -u [user_name]
  ```