High-Performance Computing Cluster

To support research that requires high-performance computing for data analysis, GW has implemented Pegasus, a shared high-performance computing cluster.

The High Performance Computing (HPC) cluster, Pegasus, is managed by professional staff in GW IT, with university-sponsored computational staff housed in the Computational Biology Institute, the School of Engineering and Applied Science (SEAS), the GW School of Public Health (GWSPH), the Columbian College of Arts and Sciences (CCAS), and the School of Medicine and Health Sciences (SMHS). Access to the High Performance Computing cluster is open to the university community.

Facility

Pegasus is housed on the Virginia Science and Technology Campus in one of GW's two enterprise-class data centers and features the following:

  • Professional IT management by GW IT, including 24-hour on-premises and remote environment monitoring with hourly staff walkthroughs

  • Redundant power distribution, including UPS (battery) and generator backup

  • Cooling provided to the cluster by the facility’s CRAC (computer room air conditioning) unit

  • Direct network connectivity to GW's robust 100-Gigabit fiber optic network

Compute and Interconnect Capacity

Pegasus compute capacity features a total of 8,112 CPU cores, 76,800 NVIDIA Tensor Cores, and 614,400 CUDA cores in the following compute node configurations (an illustrative GPU check follows the list):

  • 164 Dell R740 standard CPU nodes featuring dual 20-core 3.70GHz Intel Xeon Gold 6148 processors, 92GB of RAM, and 800GB SSD onboard storage.
  • 16 Dell R740 “Small” GPU nodes featuring 2 NVIDIA Tesla V100 GPUs, dual 20-core 3.70GHz Intel Xeon Gold 6148 processors, 192GB of RAM, and 800GB SSD onboard storage.
  • 22 Dell C4140 “Large” GPU nodes featuring 4 NVIDIA Tesla V100 SXM2 16GB GPUs with NVLink enabled and a 6TB NVMe card, dual 18-core 3.70GHz Intel Xeon Gold 6140 processors, 384GB of RAM, and 800GB SSD onboard storage.
  • 6 Dell R740 High Throughput nodes featuring dual 4-core 3.70GHz Intel Xeon Gold 5122 processors, 384GB of RAM, and 800GB SSD onboard storage.
  • 2 Dell R740 High Memory nodes featuring dual 18-core 3.70GHz Intel Xeon Gold 6140M processors, 3TB of RAM, and 800GB SSD onboard storage.
  • Mellanox EDR InfiniBand interconnect providing a 100Gb fabric.
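
As a purely illustrative sketch (not official cluster documentation), the short Python snippet below shows what a user might run on one of the “Large” GPU nodes to confirm that the four V100 devices are visible. It assumes PyTorch is installed in the user's own environment, which the description above does not guarantee.

    # Hypothetical check of the GPUs visible on a "Large" GPU node.
    # Assumes PyTorch is available; on a C4140 node this would be expected
    # to report four Tesla V100-SXM2-16GB devices.
    import torch

    if not torch.cuda.is_available():
        print("No CUDA devices visible (not running on a GPU node?)")
    else:
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, "
                  f"{props.total_memory / 1024**3:.0f} GB, "
                  f"{props.multi_processor_count} SMs")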

Storage Systems

The Pegasus cluster has both a primary storage system and a high-speed scratch storage system connected to the InfiniBand network fabric. Both are accessible from all nodes in the cluster, and remote file transfer services are provided through dedicated login nodes. Additional specifications include the following (an illustrative data-staging sketch follows the list):

  • For primary (NFS) storage, the cluster utilizes a DDN GS7K storage appliance with a total of 2PB of capacity, connected to compute and login nodes via Mellanox EDR InfiniBand over the 100Gb fabric.
  • For scratch/high-speed storage, the cluster utilizes a DDN ES14K Lustre appliance providing 2PB of parallel scratch storage, also connected to the compute nodes via Mellanox EDR InfiniBand over the 100Gb fabric.
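
As a hedged illustration only: the mount points below are hypothetical (consult the Pegasus user documentation for the real home and scratch locations), but a common pattern is to copy input data from primary (NFS) storage onto the Lustre scratch file system before a run and to copy results back afterward. A minimal Python sketch of that staging step, under those assumptions:

    # Hypothetical staging of input data from NFS home storage to Lustre scratch.
    # Both paths are assumptions for illustration; the real locations are given
    # in the Pegasus user documentation. Requires Python 3.8+ (dirs_exist_ok).
    import os
    import shutil

    home_input = os.path.expanduser("~/project/input_data")   # primary (NFS) storage
    scratch_dir = os.path.join("/lustre/scratch",              # hypothetical scratch mount
                               os.environ.get("USER", "user"), "run01")

    os.makedirs(scratch_dir, exist_ok=True)
    staged = shutil.copytree(home_input,
                             os.path.join(scratch_dir, "input_data"),
                             dirs_exist_ok=True)
    print(f"Input staged to high-speed scratch at {staged}")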

UCS Stack

UCS Stack is a high-performance virtual environment that provides virtual machines with direct access to a 100Gb network infrastructure. It is integrated with Pegasus, the high-performance, high-capacity cluster located on the GW Virginia Science and Technology Campus (VSTC).

Stay Informed

View Pegasus user documentation on our webpage.

High Performance Computing Cluster Acknowledgement

Please acknowledge in your publications the role GW IT Research Technology Services (RTS) facilities played in your research.

If you publish a paper or give a presentation that made use of GW’s High Performance Computing Cluster (Pegasus or Colonial One), please acknowledge the use with the language below.

Please also email RTSHelp with the paper title, journal, poster, Digital Object Identifier (DOI), or conference where the presentation was given. These citations help Research Technology Services (RTS) demonstrate the importance of its role in supporting research within the GW community, helping to ensure the continued availability of these valuable resources. We appreciate your conscientiousness in this matter.

Sample acknowledgements for the High Performance Computing Cluster (Pegasus / Colonial One):

“We gratefully acknowledge the computing resources provided on the High Performance Computing Cluster operated by Research Technology Services at the George Washington University.”

“This work was completed in part with resources provided by the High Performance Computing Cluster at The George Washington University, Information Technology, Research Technology Services.”

“The authors acknowledge the use of the High Performance Computing Cluster and the advanced support from the Research Technology Services at The George Washington University, to carry out the research presented here.”


Request Technology Services