Colonial One High-Performance Computing

To support research that relies on high-performance computing for data analysis, GW recently deployed Colonial One, a new shared high-performance computing cluster.

Colonial One is managed by professional staff in the Division of Information Technology, with university-sponsored computational staff housed in the Computational Biology Institute and the Columbian College of Arts and Sciences.

Access to Colonial One is open to the university community. 

Facility

Colonial One is housed on the Virginia Science and Technology Campus in one of GW's two enterprise-class data centers and features the following:

  • Professional IT management by the Division of IT, including 24-hour on-premises and remote environmental monitoring with hourly staff walkthroughs

  • Redundant power distribution, including UPS (battery) and generator backup

  • Redundant cooling systems using a dedicated chilled water plant and a glycol refrigeration system

  • Direct network connectivity to GW's robust 100-Gigabit fiber optic network

Compute and Interconnect Capacity

Colonial One’s initial compute capacity totals 2,924 CPU cores and 132,288 CUDA cores across the following compute node configurations (a sample job-submission sketch follows the list):

  • 64 standard CPU nodes featuring dual Intel Xeon E5-2670 2.6 GHz 8-core processors with varying RAM capacities (64 GB, 128 GB, and 256 GB nodes) and dual on-board solid-state drives

  • 79 CPU nodes featuring dual Intel Xeon E5-2650v2 2.6 GHz 8-core processors with 128 GB of RAM each

  • 32 GPU nodes featuring dual Intel Xeon E5-2620 2.0 GHz 6-core processors with dual NVIDIA K20 GPUs and 128 GB of RAM

  • 1 large-memory node featuring four Intel Xeon E7-8857v2 3.0 GHz 12-core processors with 2 TB of RAM

  • FDR InfiniBand network interconnect featuring 54.5 Gbps of throughput per link, with 2:1 oversubscription per compute node (so each node's share of fabric bandwidth is roughly 27 Gbps when the fabric is fully loaded)
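
As a minimal sketch of how these resources might be requested, the Python snippet below writes and submits a batch job targeting one GPU node. It assumes the scheduler is Slurm and that the GPU nodes sit in a partition named "gpu"; neither detail is stated on this page, so consult the Colonial One user documentation for the actual scheduler, partition, and module names.

    #!/usr/bin/env python3
    """Sketch: submit a two-GPU job on a Slurm-managed cluster."""
    import subprocess
    import textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=k20-test
        # Hypothetical partition name; check the user docs for the real one.
        #SBATCH --partition=gpu
        #SBATCH --nodes=1
        # One task per core on the dual 6-core Xeon E5-2620 processors.
        #SBATCH --ntasks-per-node=12
        # Request both NVIDIA K20 GPUs on the node.
        #SBATCH --gres=gpu:2
        #SBATCH --time=01:00:00

        module load cuda    # module name is site-specific
        ./my_cuda_app       # placeholder for a real executable
    """)

    # Write the script to disk and hand it to sbatch.
    with open("gpu_job.sh", "w") as f:
        f.write(job_script)
    subprocess.run(["sbatch", "gpu_job.sh"], check=True)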

Storage Systems

The Colonial One cluster has both a primary storage system and a high-speed scratch storage system connected to the InfiniBand network fabric. Both are accessible from every node in the cluster, and remote file transfer services are provided through dedicated login nodes (a sample transfer sketch follows the list below). Additional specifications include:

  • Dell NSS primary storage with 120 TB of usable capacity

  • Dell/Terascala Lustre HSS high-speed scratch storage with 250 TB of usable capacity
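
As a minimal sketch of moving data through those login nodes, the Python snippet below stages a local dataset onto the Lustre scratch file system with rsync over SSH. The hostname, username, and scratch path are hypothetical placeholders rather than values from this page; the real ones are in the Colonial One user documentation.

    #!/usr/bin/env python3
    """Sketch: stage a dataset onto Colonial One's scratch storage."""
    import subprocess

    LOGIN_HOST = "login.colonialone.gwu.edu"  # hypothetical hostname
    SCRATCH = "/lustre/groups/mylab"          # hypothetical scratch path

    # rsync runs over SSH through the dedicated login nodes; --partial
    # lets an interrupted transfer of a large dataset resume later.
    subprocess.run(
        ["rsync", "-av", "--partial", "dataset/",
         f"gwusername@{LOGIN_HOST}:{SCRATCH}/dataset/"],
        check=True,
    )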

Stay Informed

  • Follow the GW Colonial One blog for project updates, progress reports, and photos of the cluster.
  • View Colonial One user documentation on our wiki page.