Colonial One High-Performance Computing

Colonial One User Documentation
Colonial One users may follow the link above for user documentation.

About the Cluster
To support research that relies on high-performance computing for data analysis, GW recently acquired and is in the process of implementing a new shared high-performance computing cluster named Colonial One. Colonial One will be implemented and managed by professional IT staff in the Division of Information Technology, with university-sponsored computational staff housed in the Computational Biology Institute and the Columbian College of Arts and Sciences. Access to Colonial One will be open to the university community, with priority access configured for schools and faculty members that contribute to the cluster’s core infrastructure and additional compute nodes. The initial implementation of Colonial One represents a partnership between the Division of IT and OTS in response to current and developing faculty research needs in the college’s various academic disciplines.

The following highlights the facility, compute capacity, and storage capacity of Colonial One:

Facility
Located on the Virginia Science and Technology Campus in one of GW’s two enterprise-class data centers, Colonial One will be housed in a facility featuring:

  • Professional IT management by the Division of IT, including 24-hour on-premises and remote environmental monitoring with hourly staff walkthroughs.
  • Redundant power distribution, including UPS (battery) and generator backup.
  • Redundant cooling systems utilizing a dedicated chilled water plant and a glycol refrigeration system.
  • Direct network connectivity to the university’s robust 22 Gbps inter-campus fiber optic network. A major infrastructure upgrade later this year will increase this inter-campus capacity to 100 Gbps.

Compute and Interconnect Capacity
Colonial One’s initial compute capacity features a total of 1,408 CPU cores and 159,744 CUDA cores in the following compute node configurations (a quick tally of these totals appears after the list):

  • 64 standard CPU nodes featuring dual Intel Xeon E5-2670 2.6GHz 8-core processors with varying RAM capacities (64GB, 128GB, and 256GB nodes) and dual on-board solid-state drives.
  • 32 GPU nodes featuring dual Intel Xeon E5-2620 2.0GHz 6-core processors with dual NVIDIA Kepler K20 GPUs, 128GB of RAM, and dual on-board solid-state drives.
  • FDR InfiniBand network interconnect featuring 54.5 Gbps total throughput, with 2:1 oversubscription per compute node.
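
As a quick sanity check on the totals quoted above, the short Python sketch below tallies the CPU and CUDA cores from the node counts in this list. The only figure not stated in the list is the assumption of 2,496 CUDA cores per NVIDIA Kepler K20, its published core count.

    # Tally Colonial One's compute totals from the node configurations above.
    standard_nodes = 64        # dual 8-core Xeon E5-2670 per node
    gpu_nodes = 32             # dual 6-core Xeon E5-2620 per node, plus dual K20 GPUs
    cuda_cores_per_k20 = 2496  # assumption: published core count of the Kepler K20

    cpu_cores = standard_nodes * 2 * 8 + gpu_nodes * 2 * 6
    cuda_cores = gpu_nodes * 2 * cuda_cores_per_k20

    print(cpu_cores)   # 1408 CPU cores
    print(cuda_cores)  # 159744 CUDA cores

Note also that with the 2:1 oversubscription on the FDR InfiniBand fabric, each compute node would see roughly half of the 54.5 Gbps link rate when the interconnect is fully loaded.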

Storage Systems
The Colonial One cluster will utilize a primary storage system and a high-speed scratch storage system connected to the InfiniBand interconnect network with the following specifications:

  • Dell NSS primary storage with 144TB of usable capacity.
  • Dell HSS high-speed scratch storage with 300TB of usable capacity.
