Each node has four CPU sockets with 24 cores per socket (96 cores per node) and 8 GB of memory per core.
Where practical, we ask that you entirely fill the nodes so that CPU-core fragmentation is minimized. For this cluster, Stellar, that means requesting cores in multiples of 96. Use the "snodes" command to see the number of available nodes. No jobs should be run on the login nodes, with the exception of brief tests that last no more than a few minutes and use only a few CPU-cores. The login nodes, stellar-intel and stellar-amd, should be used only for interactive work such as compiling programs and submitting jobs as described below. Please remember that these are shared resources for all users.
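Since whole-node jobs on this cluster mean requesting cores in multiples of 96, a batch script for one full node might look like the sketch below. This is a minimal illustration only: the job name, module-free setup, time limit, and `./my_program` executable are placeholders, not taken from this page.

```shell
# Sketch: write a whole-node Slurm batch script for a 96-core node.
# All names below are placeholders -- adapt them to your own code.
cat > job.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=myjob        # placeholder job name
#SBATCH --nodes=1               # one full node, to avoid core fragmentation
#SBATCH --ntasks=96             # 96 tasks fill the node's 96 cores
#SBATCH --mem-per-cpu=8G        # matches the 8 GB/core hardware spec
#SBATCH --time=01:00:00         # adjust to your expected run time
srun ./my_program               # placeholder executable
EOF
echo "wrote job.slurm"
```

The script would then be submitted from a login node with `sbatch job.slurm`; requesting `--nodes=1` together with `--ntasks=96` is one way to guarantee the job occupies an entire node.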
See the section titled "For large clusters: Submit a proposal or contribute" for details. Former Perseus and Eddy users will not get an account automatically; have your PI send a request to Research Computing. Any non-Princeton user must be sponsored by a Princeton faculty or staff member for a Research Computer User (RCU) account. If, however, you are part of a research group with a faculty member who has contributed to or has an approved project on Stellar, that faculty member can sponsor additional users by sending a request to Research Computing.

Once you have been granted access to Stellar, you can connect by opening an SSH client and using the SSH command as detailed below. For PU and PPPL, connect to the Intel login node (VPN required from off-campus):

$ ssh

For GFDL, connect to the AMD login node (VPN required from off-campus):

$ ssh

If you have trouble connecting, see our SSH page. For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ).

Since Stellar is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. Using Stellar also requires some knowledge of how to properly use the file systems, environment modules, and the Slurm job scheduler. For an introduction to navigating Princeton's High Performance Computing systems, view the material associated with our Getting Started with the Research Computing Clusters workshop. Additional information specific to Stellar's file system, priority for job scheduling, and related topics is given below. To attend a live session of either workshop, see our Trainings page for the next available workshop. For more resources, see our Support - How to Get Help page. All users are required to read and abide by the Stellar usage guidelines.
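Repeated SSH connections like the ones above can be made more convenient with an entry in your local `~/.ssh/config`. The sketch below writes an example entry to a scratch file rather than touching your real config; the `stellar` alias, the `HostName` value, and `<netid>` are placeholders you must replace with the actual login-node address and your own NetID.

```shell
# Sketch: an example SSH client config entry, written to a scratch file.
# The alias, hostname, and username are placeholders, not real values.
cat > ssh_config_example <<'EOF'
Host stellar
    # Fill in the login-node address given above
    HostName <intel-login-node-address>
    # Your Princeton NetID
    User <netid>
    # Send a keepalive every 60 s so idle sessions are not dropped
    ServerAliveInterval 60
EOF
echo "wrote ssh_config_example"
```

With such an entry appended to `~/.ssh/config`, the connection command shortens to `ssh stellar`.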
Stellar is a heterogeneous cluster composed of Intel and AMD nodes. The cluster was built to support large-scale parallel jobs for researchers in the astrophysical sciences, plasma physics, physics, chemical & biological engineering, and atmospheric & oceanic sciences.

[Schematic diagram of the Stellar cluster]

To use Stellar you have to request an account and then log in through SSH. Access to the large clusters like Stellar is granted on the basis of brief faculty-sponsored proposals.