Commit 327a49f4 authored by Carina Lansing's avatar Carina Lansing

Updated the NOTE style in the readme to something that GitLab can render.

parent d7e87dac
@@ -36,11 +36,10 @@ Until then, LASSO-O will take a while to run depending upon the number of simulations
you are processing. In addition, the first time you run the lasso-o_shcu container,
the container runtime will download the container image, which will also take a few minutes.
-<div style="background-color: #F9F5D2; border: 1px solid grey; margin: 10px; padding: 10px;">
-<strong>NOTE: </strong>
-If you do not use a scheduler to invoke the script, it will run on the login node,
-which may be killed per host policy if it runs for too long.
-</div>
+> **NOTE:**
+> If you do not use a scheduler to invoke the script, it will run on the login node,
+> which may be killed per host policy if it runs for too long.
When your job has completed, you may view the outputs created in the
`run-lasso-o_shcu/data/outputs` folder using the notebooks provided.
@@ -27,77 +27,3 @@ $ module load singularity/3.7.1
Once in your compute node shell with the singularity module loaded,
you can run LASSO-O using the following instructions.
## Running LASSO-O via Singularity
You can start LASSO-O by executing the `run.sh` script via a scheduler command.
Below we provide examples for the Slurm and PBS schedulers.
These commands instruct the scheduler to run LASSO-O with one node
and one CPU. Currently, the LASSO-O container executes each process
in sequence. Later we will add the ability to support multiple cores.
Until then, LASSO-O will take a while to run depending upon the number of simulations
you are processing. In addition, the first time you run the lasso-o_shcu container,
the container runtime will download the container image, which will also take a few minutes.
> **NOTE:**
> If you do not use a scheduler to invoke the script, it will run on the login node,
> which may be killed per host policy if it runs for too long.
When your job has completed, you may view the outputs created in the
`run-lasso-o_shcu/data/outputs` folder using the notebooks provided.
See the [notebooks/README.md](notebooks/README.md) file for more
information.
#### Start via Slurm
* **Step 1: Run job**
```bash
$ srun --verbose --nodes=1 --ntasks=1 --cpus-per-task=1 ./run.sh
```
* **Step 2: Check job status**
To check the status of your job, list the queue for your user ID:
```bash
$ squeue -u [user_name]
```
You can also get more information about the running job using the `scontrol` command
and the job ID printed by the `squeue` command:
```bash
$ scontrol show jobid -dd [jobID]
```
#### Start via PBS
PBS jobs must be started with a batch script:
* **Step 1: Edit batch script**
To run LASSO-O, first edit
the `pbs_sub.sh` file to use the appropriate parameters for your
environment. In particular, the account name, group_list, and QoS
parameters should definitely be changed, but other parameters may also be adjusted
as needed. In addition, the `module load` commands should be adjusted
to load the singularity module that is available on your cluster.
A hypothetical sketch of such a script appears after these steps.
* **Step 2: Submit job**
After you have edited the batch script, you can submit a batch job via:
```bash
$ qsub pbs_sub.sh -d .
```
* **Step 3: Check job status**
To check the status of your job, list the queue for your user ID:
```bash
$ qstat -u [user_name]
```
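For concreteness, here is a minimal sketch of what a Torque-style `pbs_sub.sh` could look like after Step 1's edits. This is not the file shipped with the repository: the account, group_list, QoS, and walltime values are placeholders, and directive syntax differs between PBS flavors, so adapt it to your site.
```bash
#!/bin/bash
#PBS -N lasso-o_shcu
#PBS -A my_account            # placeholder account name -- use your allocation
#PBS -W group_list=my_group   # placeholder group_list -- use your project group
#PBS -l qos=standard          # placeholder QoS -- use one valid at your site
#PBS -l nodes=1:ppn=1         # one node, one CPU, matching the Slurm example
#PBS -l walltime=04:00:00     # placeholder; size to your number of simulations

# Adjust to the singularity module available on your cluster
module load singularity/3.7.1

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
./run.sh
```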
@@ -26,6 +26,12 @@ environment:
* [README-DOCKER.md](./README-DOCKER.md). Use these instructions if you are running from your local desktop. **NOTE: this should be used for small test runs only**
* [README-SINGULARITY.md](./README-SINGULARITY.md). Use these instructions if you have Singularity installed on your HPC cluster.
+> **NOTE:**
+> Some HPC clusters may require review/approval of container images before you
+> are allowed to use them. If that is the case, admins can find more information
+> about the lasso-o_shcu container image and how it was built in the [GitLab repository](https://code.arm.gov/lasso/containers/lasso-o_shcu).
### 3) Prepare Simulation Data
Place or symbolically link your WRF LES simulation wrfstat and wrfout files into the `data/inputs` directory
(a hypothetical example appears after the note below).
You may add 1 to 10 different simulation outputs to the inputs folder. Wrfout
@@ -65,11 +71,9 @@ wrfstat_d01_2018-07-10_12:00:00.nc
You can order WRF Simulation data to test with from the [ARM Data Discovery Tool](https://adc.arm.gov/discovery/#/results/datastream::sgplassodiagraw3C1.m1/start_date::2018-07-10)
-<div style="background-color: #F9F5D2; border: 1px solid grey; margin: 10px; padding: 10px;">
-<strong>NOTE: </strong>
-You do not have to copy any observational data (e.g., sgpcldfracset15mC1.c1, sgplassodiagobsC1.c1),
-as this data is embedded in the container image.
-</div>
+> **NOTE:**
+> You do not have to copy any observational data (e.g., sgpcldfracset15mC1.c1, sgplassodiagobsC1.c1),
+> as this data is embedded in the container image.
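As a purely hypothetical illustration of the linking step above, assuming one simulation's output sits in `/scratch/my_les_runs/sim1` (the wrfstat filename matches the example earlier in this README; your wrfout names may differ):
```bash
# Source paths are hypothetical -- point them at your own LES output
cd run-lasso-o_shcu/data/inputs
ln -s /scratch/my_les_runs/sim1/wrfstat_d01_2018-07-10_12:00:00.nc .
ln -s /scratch/my_les_runs/sim1/wrfout_d01_2018-07-10_12:00:00.nc .
```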
### 4) Edit config.yml file
Edit the `config.yml` file to provide parameters about your run. The `config.yml` file contains
@@ -112,11 +116,9 @@ environment:
* [README-DOCKER.md](./README-DOCKER.md). Use these instructions if you are running from your local desktop. **NOTE: this should be used for small test runs only**
* [README-SINGULARITY.md](./README-SINGULARITY.md). Use these instructions if you have Singularity installed on your HPC cluster.
-<div style="background-color: #F9F5D2; border: 1px solid grey; margin: 10px; padding: 10px;">
-<strong>NOTE: </strong>
-The specific run command will vary depending upon the container runtime and scheduler
-used at your HPC cluster.
-</div>
+> **NOTE:**
+> The specific run command will vary depending upon the container runtime and scheduler
+> used at your HPC cluster.
### 6) Plotting output and skill scores via Jupyter Notebook
See the [notebooks/README.md](notebooks/README.md) file for instructions on how to start
@@ -51,14 +51,12 @@ $ conda activate lasso
$ jupyter-notebook --ip=0.0.0.0 --no-browser --port=56269
```
-<div style="background-color: #F9F5D2; border: 1px solid grey; margin: 10px; padding: 10px;">
-<strong>NOTE: </strong>
-You MUST be in the notebooks folder when you run the jupyter-notebook
-command or else the notebook links will not behave correctly.
-Also, we are binding to the arbitrary port 56269. If this port
-is in use on your server, feel free to change the port number as needed.
-</div>
+> **NOTE:**
+> You MUST be in the notebooks folder when you run the jupyter-notebook
+> command or else the notebook links will not behave correctly.
+>
+> Also, we are binding to the arbitrary port 56269. If this port
+> is in use on your server, feel free to change the port number as needed.
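Putting the note above into practice, a typical start-up sequence might look like this (the `run-lasso-o_shcu/notebooks` path is an assumption based on the checkout layout used elsewhere in these docs; substitute any free port for 56269):
```bash
cd run-lasso-o_shcu/notebooks   # must be in the notebooks folder (see note above)
conda activate lasso
jupyter-notebook --ip=0.0.0.0 --no-browser --port=56269   # change the port if 56269 is taken
```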
When you start your jupyter notebook server, the console should output
the HTTP URL from which you can access your notebook. For example: