A+ Server 2124GQ-NART

2U Dual Processor (AMD) GPU System with NVIDIA HGX A100 4-GPU 40GB/80GB, NVLink

Complete System Only: To maintain quality and integrity, this product is sold only as a completely assembled system (minimum of 2 CPUs, at least 512GB of memory for the 80GB HGX A100 4-GPU or 256GB of memory for the 40GB HGX A100 4-GPU, 1 storage device, and 1 NIC included on the I/O board).
Service: OSNBD3 is highly recommended.

Key Applications
  • AI/ML, Deep Learning Training and Inference
  • High-performance Computing (HPC)
  • Cloud Computing
  • Research Laboratory/National Laboratory 
  • Autonomous Vehicle Technologies
  • Molecular Dynamics Simulation

1.  High-density 2U system with NVIDIA® HGX™ A100 4-GPU; high-speed GPU-to-GPU
    communication using NVIDIA® NVLink™, plus 4 NICs for GPUDirect RDMA (1:1 GPU-to-NIC ratio)

 

2.  Supports HGX A100 4-GPU 40GB (HBM2) or 80GB (HBM2e)

 

3.  Direct-connect PCI-E Gen 4 platform with NVIDIA® NVLink™ v3.0 interconnect at up to 600GB/s

 

4.  On-board BMC supports integrated
     IPMI 2.0 + KVM with dedicated LAN

 

5.  Dual AMD EPYC™ 7003/7002 Series Processors
     (The latest AMD EPYC™ 7003 Series Processor with AMD 3D V-Cache™ Technology requires BIOS version 2.3 or newer)

 

6.  Up to 8TB Registered ECC DDR4-3200MHz SDRAM in 32 DIMMs

 

7.  4 PCI-E Gen 4 x16 (LP), 1 PCI-E Gen 4 x8 (LP)

 

8.  4 Hot-swap 2.5″ drive bays
     (SAS/SATA/NVMe Hybrid)

 

9.  2x 2200W Platinum Level power supplies with Smart Power Redundancy

 
Product SKUs
 
AS -2124GQ-NART
  • A+ Server 2124GQ-NART (Black)
 
Motherboard
 

Super H12DSG-Q-CPU6
 
Processor/Cache
 
CPU
  • Dual AMD EPYC™ 7003/7002 Series Processors
    (The latest AMD EPYC™ 7003 Series Processor with AMD 3D V-Cache™ Technology requires BIOS version 2.3 or newer)
  • Socket SP3
  • Supports CPU TDP up to 280W*
Cores
  • Up to 128 Cores (64 per CPU)
Note: * Certain CPUs with high TDP may be supported only under specific conditions. Please contact Tech Standard Solutions Support for additional information about specialized system optimization.
GPU Support
  • Supports HGX A100 4-GPU 40GB (HBM2) or 80GB (HBM2e) with NVLink GPU interconnect and PCI-E Gen 4 host CPUs
 
GPU
 
Supported GPUs
  • HGX A100 4-GPU 40GB/80GB SXM4 Multi-GPU Board
CPU-GPU Interconnect
  • PCI-E Gen 4 x16 Direct Connect CPU-GPU Dual-Root
GPU-GPU Interconnect
  • NVIDIA® NVLink™ GPU-to-GPU Interconnect
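
As a quick way to confirm that the NVLink fabric on the HGX A100 4-GPU board is up after installation, a minimal sketch such as the one below queries link state through NVML. It assumes the NVIDIA driver and the nvidia-ml-py (pynvml) Python bindings are installed; it is an illustrative check, not part of the system's bundled software, and the number of links reported varies with driver and GPU revision.

```python
import pynvml  # provided by the nvidia-ml-py package

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link) == pynvml.NVML_FEATURE_ENABLED:
                    active += 1
            except pynvml.NVMLError:
                break  # no more NVLink links on this device
        print(f"GPU {i} ({name}): {active} NVLink link(s) active")
finally:
    pynvml.nvmlShutdown()
```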
 
System Memory
 
Memory Capacity
  • 32 DIMM slots
  • Up to 8TB 3DS ECC DDR4-3200MHz SDRAM
Memory Type
  • 3200MHz ECC DDR4 SDRAM
 
On-Board Devices
 
Chipset
  • System on Chip (SoC)
Network
  • Dual RJ45 10GbE-aggregate host LAN, RJ45 1GbE IPMI
IPMI
  • Support for Intelligent Platform Management Interface v.2.0
  • IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Graphics & System Management
  • ASPEED AST2600 BMC
 
Input / Output
 
SATA
  • 4 SATA3 (6Gbps) ports
LAN
  • 2 RJ45 10GbE-aggregate host LAN ports
  • 1 RJ45 1GbE Dedicated IPMI management port
USB
  • 2 USB 3.0 ports (rear)
Video
  • 1 VGA Connector (rear)
Others
  • 1 COM port (header)
  • 1 TPM 2.0 (header)
 
System BIOS
 
BIOS Type
  • AMI 256Mb SPI Flash ROM
 
Management
 
Software
  • IPMI 2.0
  • KVM with dedicated LAN
  • SSM, SPM, SUM
  • SuperDoctor® 5
  • Watchdog
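
Because the BMC exposes standard IPMI 2.0 with KVM over the dedicated LAN (listed above), out-of-band health data can be polled with any IPMI client. The sketch below wraps the common ipmitool utility from Python purely as an illustration; the BMC address and credentials are placeholders, not values shipped with the system.

```python
import subprocess

# Placeholder BMC address and credentials -- replace with your own.
BMC_HOST = "192.0.2.10"
BMC_USER = "ADMIN"
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the dedicated IPMI LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    print(ipmi("sdr", "elist"))                # temperature, fan, and voltage sensors
```

The same approach can drive power control (for example, ipmitool chassis power cycle), all over the dedicated 1GbE management port.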
 
PC Health Monitoring
 
CPU
  • Monitoring of CPU core, chipset, and memory voltages
  • 4+1 Phase-switching voltage regulator
FAN
  • Fans with tachometer monitoring
  • Status monitor for speed control
  • Pulse Width Modulated (PWM) fan connectors
Temperature
  • Monitoring for CPU and chassis environment
  • Thermal Control for fan connectors
Chassis
Form Factor
  • 2U Rackmountable
Model
  • CSE-228GTS-R2K21P
 
Dimensions
Height
  • 3.5" (89mm)
Width
  • 17.2" (437mm)
Depth
  • ~32.7" (830.3mm)
Weight
  • Net Weight: 78.5 lbs (35.6 kg)
  • Gross Weight: 88.5 lbs (40.1 kg)
Package Dimensions
  • 45.5" L x 22.5" W x 11" H
Available Colors
  • Black
 
Drive Bays / Storage
Hot-swap
  • 4 Hot-swap 2.5" drive bays (SATA/NVMe Hybrid or SAS with optional HBA)
 
Expansion Slots
PCI-Express
  • 4 PCI-E Gen 4 x16 (LP) slots, supporting a 1:1 connection between the HGX A100 4-GPU board and 4 NICs
  • 1 PCI-E Gen 4 x8 (LP) slot
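
To see how the four GPUs line up with NICs installed in these x16 slots (the 1:1 pairing that GPUDirect RDMA relies on), the driver-reported topology matrix can be dumped with a short sketch like the one below; it only assumes that nvidia-smi from the NVIDIA driver is on the PATH.

```python
import subprocess

# Print the PCIe/NVLink topology matrix reported by the NVIDIA driver.
# The matrix lists GPUs and RDMA-capable NICs (e.g. mlx5_*), so the
# intended 1:1 GPU-to-NIC pairing for GPUDirect RDMA can be checked at a glance.
result = subprocess.run(["nvidia-smi", "topo", "-m"],
                        check=True, capture_output=True, text=True)
print(result.stdout)
```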
 
System Cooling
Fans
  • 4 Hot-swap heavy-duty fans
 
Power Supply
2200W Redundant Power Supplies with PMBus
Total Output Power
  • 1000W: 100 – 127Vac
  • 2200W: 220 – 240Vac
Dimensions (W x H x L)
  • 107 x 82 x 204 mm
Input
  • 100-127Vac / 12-9.5A / 50-60Hz
  • 220-240Vac / 11-10A / 50-60Hz
Output Type
  • Gold Finger Connector
Certification
  • Platinum Level
 
Operating Environment
RoHS
  • RoHS Compliant
Environmental Spec.
  • Operating Temperature:
    10°C ~ 35°C (50°F ~ 95°F)
  • Non-operating Temperature:
    -40°C to 60°C (-40°F to 140°F)
  • Operating Relative Humidity:
    8% to 90% (non-condensing)
  • Non-operating Relative Humidity:
    5% to 95% (non-condensing)
 
Optional Parts List
 
GPU Baseboard *
  • GPU-NVTHGX-A100-SXM4-4 - NVIDIA Redstone GPU Baseboard, 4 A100 40GB SXM4
  • GPU-NVTHGX-A100-SXM4-48 - NVIDIA Redstone GPU Baseboard, 4 A100 80GB SXM4
  * Note: Required as Complete System

Drive Tray
  • MCP-220-00187-0B - SATA or SAS Maroon Drive Carrier, tool-less with lock feature

Storage Controller Card & Cable(s)
  • AOC-S3108L-H8IR-16DD & 1x CBL-SAST-1285-100 - 8 int 12Gb/s ports, x8 Gen3, ROC, LP, 16 HDD w/ expander; Slimline x4 (STR) to MiniSAS HD (STR) x4, 85cm, 100 Ohm, RoHS
  • AOC-SLG3-2H8M2 - 2x Hybrid NVMe/SATA M.2 RAID Carrier, Standard LP, RoHS
  • AOC-SLG3-2M2 - LP, PCIe3 x8, dual-port NVMe M.2 carrier
  • AOC-SLG3-2NM2 - 2x NVMe M.2 RAID Carrier, Standard LP, RoHS
  • AOC-S3908L-H8IR-16DD & 1x CBL-SAST-1276F-100 - 8 int 12Gb/s ports, x8 Gen4, ROC, LP, 16 HDD w/ expander, RoHS

Network Card(s)
  • AOC-623106AN-CDAT - ConnectX-6 Dx 100GbE Ethernet Adapter Card, dual-port QSFP56, PCIe 4.0 x16, RoHS
  • AOC-653105A-ECAT - ConnectX-6 VPI 100Gb/s InfiniBand & Ethernet Adapter Card, dual ports, QSFP56, PCIe 3.0/4.0 x16, HF, RoHS
  • AOC-653105A-HDAT - Mellanox ConnectX-6 VPI 200Gb/s InfiniBand & Ethernet Adapter Card, single port, QSFP56, PCIe 3.0/4.0 x16, RoHS
  • AOC-653106A-ECAT - ConnectX-6 VPI 100Gb/s InfiniBand & Ethernet Adapter Card, dual ports, QSFP56, PCIe 3.0/4.0 x16, HF, RoHS
  • AOC-653106A-HDAT - Mellanox ConnectX-6 VPI 200Gb/s InfiniBand & Ethernet Adapter Card, dual ports, QSFP56, PCIe 3.0/4.0 x16, HF, RoHS
  • AOC-683105AN-HDAT - NVIDIA MCX683105AN-HDAT PCIe 1-port HDR 200G QSFP56, Gen 3.0/4.0 x16, CX-6 DE, RoHS
  • AOC-MCX512A-ACAT - Mellanox MCX512A-ACAT PCIe 2-port 25GbE SFP28, Gen3 x8, CX-5 EN, RoHS
  • AOC-MCX555A-ECAT - CX-5 VPI EDR IB & 100GbE adapter, 1 port, QSFP28, PCIe3 x16
  • AOC-MCX556A-ECAT - MCX556A-ECAT, CX-5 VPI, EDR IB, 100GbE, 2 ports, QSFP28, PCIe3 x16
  • AOC-S100GC-i2C - Standard PCIe 4.0 x16 dual-port 100GbE with QSFP28, based on Intel E810-CAM2, RoHS
  • AOC-S25G-m2S - Standard low-profile 2-port 25GbE SFP28, based on Mellanox ConnectX-4 Lx EN chipset, RoHS
  • AOC-S40G-i2Q - Intel® XL710-BM2 2-port QSFP+ (40Gb/port), Gen3 x8, Standard Low Profile
  • AOC-SGP-I2 - Intel® i350 AM2 2-port RJ45 (1Gb/port), Gen3 x4, Standard Low Profile
  • AOC-STG-i4S - Intel XL710-BM1 4-port SFP+ (10Gb/port), Gen3 x8, Standard Low Profile
  • AOC-STG-i4T - Standard LP 4-port 10G RJ45, Intel XL710 + X557
  • AOC-STGS-i2T - Intel® X550-AT2 2-port RJ45 (10Gb/port), Gen3 x4, Standard Low Profile

TPM Security Module
  • AOM-TPM-9655V - TPM 1.2 module with Infineon 9655, RoHS/REACH, PBF
  • AOM-TPM-9665V - TPM 2.0 module with Infineon 9665, RoHS/REACH, PBF

Global Services & Support
  • OS4HR3/2/1 - 3/2/1-year onsite 24x7x4 service
  • OSNBD3/2/1 - 3/2/1-year onsite NBD service
 

Guaranteed support from day one

We’re committed to having your back no matter the issue or time of day, starting when we first connect.

Experts on standby when needed

Get the answers you need fast with guidance from our support team.

Price match anyone

Stay within your budget by paying the lowest, most up-to-date market prices.

Free ground shipping on everything

Avoid paying hefty shipping fees each time you need a new product.

Let's connect

We're more than just a tech company. We're a motivated team of innovators and creators. Ready to take your business to the next level?