1. Introduction

This document provides a brief summary of the information you need to get started quickly on Narwhal. For more detailed information, see the Narwhal User Guide.

2. Get a Kerberos Ticket

For security purposes, you must have a current Kerberos ticket on your computer before attempting to connect to Narwhal. A Kerberos client kit must be installed on your desktop before you can obtain a ticket. Information about installing Kerberos clients on your Windows desktop can be found at HPC Centers: Kerberos & Authentication.
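Once a client kit is installed, a ticket is typically obtained with the kinit command. The sketch below assumes a standard MIT-style Kerberos client; the user name and realm shown are placeholders, so substitute your own:

% kinit user@REALM
Password for user@REALM: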

3. Connect to Narwhal

Narwhal can be accessed via Kerberized ssh as follows:

% ssh narwhal.navydsrc.hpc.mil

4. Home, Working, and Center-Wide Directories

Each user has file space in the $HOME, $WORKDIR, and $CENTER directories. The $HOME, $WORKDIR, and $CENTER environment variables are predefined for you and point to the appropriate locations in the file systems. You are strongly encouraged to use these variables in your scripts.
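For example, a brief sketch of using these variables to create and move into a per-project directory in your scratch space (the directory name is a placeholder):

% mkdir -p $WORKDIR/my_project
% cd $WORKDIR/my_project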

NOTE: $WORKDIR is a "scratch" file system, and $CENTER is a center-wide file system that is accessible to all center production machines. Neither of these file systems is backed up. You are responsible for managing files in your $WORKDIR and $CENTER directories by backing up files to the archive system and deleting unneeded files. Currently, $WORKDIR files older than 21 days and $CENTER files older than 180 days are subject to being purged.

If files in your $WORKDIR directory must be deleted as part of the normal purge cycle, you WILL NOT be notified prior to deletion. You are responsible for monitoring your workspace to prevent data loss.

5. Transfer Files and Data to Narwhal

File transfers to DSRC systems must be performed using Kerberized versions of the following tools: scp, sftp, and mpscp. For example, the command below uses secure copy (scp) to copy a local file into a destination directory on a Narwhal login node.

% scp local_file user@narwhal.navydsrc.hpc.mil:/target_dir
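The Kerberized sftp client can be used in a similar way for interactive transfers. A brief sketch, where local_file and /target_dir are placeholders as above:

% sftp user@narwhal.navydsrc.hpc.mil
sftp> put local_file /target_dir
sftp> quit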

For additional information on file transfers to and from Narwhal, see the File Transfers section of the Narwhal User Guide.

6. Submit Jobs to the Batch Queue

The Portable Batch System (PBS Professional™) is the workload management system for Narwhal. To submit a batch job, use the following command:

qsub [options] my_job_script

where my_job_script is the name of the file containing your batch script. For more information on using PBS or on job scripts, see the Narwhal User Guide, the Narwhal PBS Guide, or the sample script examples found in the $SAMPLES_HOME directory on Narwhal.
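As an illustration, a minimal job script might look like the sketch below. This is an assumption-laden example, not an official template: the Project_ID, queue, select parameters, and executable name are placeholders you must adapt to your own project and to the queue limits in Section 7.

#!/bin/bash
#PBS -A Project_ID              # your project allocation (placeholder)
#PBS -q debug                   # queue name (see Section 7)
#PBS -l select=1:ncpus=128      # one node; core count is an assumption
#PBS -l walltime=00:30:00       # requested wall clock time
#PBS -N mytest                  # job name

cd $WORKDIR
./my_program                    # your executable (placeholder)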

7. Batch Queues

The following table describes the PBS queues available on Narwhal:

Queue Descriptions and Limits on Narwhal
(Priority decreases from top to bottom.)

Priority   Queue Name   Max Wall Clock Time   Max Cores Per Job   Description
--------   ----------   -------------------   -----------------   -----------
Highest    urgent       24 Hours              16,384              Jobs belonging to DoD HPCMP Urgent Projects
           frontier     168 Hours             65,536              Jobs belonging to DoD HPCMP Frontier Projects
           high         168 Hours             32,768              Jobs belonging to DoD HPCMP High Priority Projects
           debug        30 Minutes            8,192               Time/resource-limited for user testing and debug purposes
           HIE          24 Hours              3,072               Rapid response for interactive work; see the HPC Interactive Environment (HIE) User Guide
           viz          24 Hours              128                 Visualization jobs
           standard     168 Hours             32,768              Standard jobs
           mla          24 Hours              128                 Machine Learning Accelerated jobs
           smla         24 Hours              128                 Machine Learning Accelerated jobs
           dmla         24 Hours              128                 Machine Learning Accelerated jobs
           serial       168 Hours             1                   Serial jobs
           bigmem       96 Hours              1,280               Large-memory jobs
           transfer     48 Hours              N/A                 Data transfer for user jobs; see the Navy DSRC Archive Guide, Section 5.2
Lowest     background   4 Hours               1,024               User jobs that are not charged against the project allocation

8. Monitoring Your Job

You can monitor your batch jobs on Narwhal using the qpeek, qview, or qstat commands.

The qstat command lists all jobs in the queue. The "-u username" option shows only jobs owned by the given user, as follows:

% qstat -u user1
                                                           Req'd  Req'd    Elap
Job ID           Username Queue     Jobname SessID NDS TSK Memory Time  S  Time
--------         -------- --------  ------- ------ --- --- ------ ----- -  -----
815.narwhal-pbs1  user1   debug     mytest  1766   1   128   --   00:30 R  00:12
824.narwhal-pbs1  user1   frontier  inspect --     8   1024  --   06:00 Q   --
825.narwhal-pbs1  user1   standard  45dh8   2584   10  128   --   02:00 R  14:22

Notice that the output contains the JobID for each job. This ID can be used with the qpeek, qview, qstat, and qdel commands.

To delete a job, use the command "qdel jobID".

To view a partially completed output file, use the "qpeek jobID" command.
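For example, using JobIDs from the qstat listing above:

% qdel 824.narwhal-pbs1
% qpeek 815.narwhal-pbs1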

9. Archiving Your Work

When your job is finished, you should archive any important data to prevent automatic deletion by the purge scripts.

Copy one or more files to the archive system:
archive put [-C path] [-D] [-s] file1 [file2 ...]

Copy one or more files from the archive system:
archive get [-C path] [-s] file1 [file2 ...]
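For example, to store a tar file in a subdirectory of your archive space and retrieve it later (the directory and file names below are placeholders):

% archive put -C my_project results.tar
% archive get -C my_project results.tar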

For more information on archiving your files, see the Archive Guide.

10. Modules

Software modules are a convenient way to set needed environment variables and add the necessary directories to your path so that commands for particular applications can be found. Narwhal uses "modules" to initialize your environment with COTS application software, system commands and libraries, compiler suites, environment variables, and PBS batch system commands.

A number of modules are loaded automatically when you log in. To see the modules that are currently loaded, run "module list". To see the entire list of available modules, run "module avail". You can modify the configuration of your environment by loading and unloading modules. For complete information on how to do this, see the Modules User Guide.
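For example (the module name below is a placeholder; use a name shown by "module avail"):

% module list
% module avail
% module load module_name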

11. Available Software

A list of software on Narwhal is available on the software page.

12. Advance Reservation Service

A subset of Narwhal's nodes has been set aside for use as part of the Advance Reservation Service (ARS). The ARS allows users to reserve a user-designated number of nodes for a specified number of hours starting at a specific date/time. This service enables users to execute interactive or other time-critical jobs within the batch system environment. The ARS is accessible via most modern web browsers at https://reservation.hpc.mil/. Authenticated access is required. The ARS User Guide is available on HPC Centers.