Technical Specification
Compute Nodes
- 30 compute nodes, each with Intel Skylake microarchitecture processors
- 96GB of DDR4 2666MHz memory
Tiered Storage
DaLI employs three hierarchical storage tiers to efficiently service the rapid data growth from the 20 projects' instruments. The tiers are designed to provide high-performance storage for data that is actively being processed; moderate-performance, moderate-cost storage for interim data; and high-capacity, low-cost, lower-performance storage on a modern tape library.
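The DDN GridScaler appliance described under Storage below is built on IBM Spectrum Scale (GPFS), whose information lifecycle management (ILM) policy language is one common way to express this kind of tiering. The following is a minimal sketch only: the pool names ('fast', 'capacity', 'tape'), the access-time thresholds, and the HSM hook script path are illustrative assumptions, not DaLI's actual configuration.

    /* Placement: new files land on the high-performance pool */
    RULE 'placement' SET POOL 'fast'

    /* Tier 2: migrate files not accessed for 30 days to the capacity (HDD) pool */
    RULE 'to_capacity' MIGRATE FROM POOL 'fast' TO POOL 'capacity'
      WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

    /* Tier 3: the tape library is modeled as an external pool whose data
       movement is delegated to an HSM script (hypothetical path) */
    RULE EXTERNAL POOL 'tape' EXEC '/opt/hsm/migrate.sh'
    RULE 'to_tape' MIGRATE FROM POOL 'capacity' TO POOL 'tape'
      WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 180

Policies along these lines let the filesystem move data between tiers automatically based on age and activity, so users see a single namespace regardless of where a file physically resides.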
Network
One x3650 M6 Management node
- 2x Intel Skylake CPUs
- 128GB DDR4 memory
- 2x 1TB NL-SAS HDDs
- 1 Mellanox 40GbE/FDR adapter
EDR InfiniBand Core Network
- 1 Mellanox SB7790 36-port IB switch
GbE Management Network
- 1 BNT G8052R 1/10GbE switch
Data Movers
- Lenovo SR650
- 2x Intel Xeon Gold 6148 (20-core, 2.4GHz)
- 192GB DDR4 2666MHz memory
- 100Gb/s EDR InfiniBand
- 40Gb Ethernet
- 2x 480GB SSDs
Storage
- DDN GRIDScaler GS14KXE
- 420x 10TB near-line 7.2K RPM SAS HDDs
- 12x 1.92TB SAS SSDs (metadata)
- Raw capacity: 4.2PB (420 x 10TB)
- Usable capacity: approximately 3.0PB after RAID and filesystem overhead
Operating System
DaLI runs Scientific Linux release 7.4 (Nitrogen), configured and tuned to process massive amounts of data.
Job Scheduler/Management
DaLI uses the Simple Linux Utility for Resource Management (Slurm) as its job scheduler. The Slurm Workload Manager is an open-source job scheduler used by many of the world's supercomputers and high-performance computing clusters. Slurm commands enable you to submit, manage, monitor, and control jobs running on DaLI.
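As a minimal sketch of the submission workflow, the batch script below requests one node for one hour and runs a placeholder command. The job name, time limit, output pattern, and workload are illustrative, and DaLI-specific options such as partition names are omitted.

    #!/bin/bash
    #SBATCH --job-name=example      # job name shown by squeue
    #SBATCH --nodes=1               # request a single compute node
    #SBATCH --ntasks=1              # one task within that node
    #SBATCH --time=01:00:00         # wall-clock limit (HH:MM:SS)
    #SBATCH --output=%x-%j.out      # write output to jobname-jobid.out

    # Placeholder workload: checksum a staged data set (hypothetical path)
    md5sum /path/to/dataset/* > checksums.md5

Submit the script with sbatch example.sh, monitor it with squeue -u $USER, and cancel it with scancel followed by the job ID.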