Inspur KR6288X2-A0 AI Server | 8x NVIDIA HGX H200 | Dual Intel Xeon 8558P | 2TB DDR5
  • Product Category: Server
  • Part Number: Inspur KR6288X2-A0
  • Availability: In Stock
  • Condition: New
  • Product Feature: Ready for immediate shipment
  • Minimum Order: 1 unit
  • List Price: $467,869.00
  • Sale Price: $411,765.00 (You Save $56,104.00)

Shop with confidence: returns are accepted.

Shipping: International shipments may be subject to customs processing and additional charges.

Delivery: International deliveries that go through customs processing may require additional time.

Returns: Returns are accepted within 14 days, with return shipping paid by the seller.

Free shipping. We accept NET 30 purchase orders, with a decision in seconds and no impact on your credit.

If you need the Inspur KR6288X2-A0 in volume, contact us via WhatsApp at (+86) 151-0113-5020 or request a quote through live chat; a dedicated sales manager will get back to you promptly.


Keywords

Inspur KR6288X2-A0, NVIDIA HGX H200, Intel Xeon 8558P, 2TB DDR5 RAM, AI Training Server, Generative AI, HPC Server, Buy Inspur Server

Description

Step into the future of hyperscale artificial intelligence with the Inspur KR6288X2-A0. This flagship AI server is engineered to train the world's most complex Large Language Models (LLMs), featuring the brand new NVIDIA HGX H200 8-GPU architecture. With a massive combined 1128GB of HBM3e memory across the HGX baseboard, this system shatters previous memory bottlenecks, allowing data scientists to run massive parameter models efficiently without requiring as many interconnected nodes.

At the heart of this compute giant are dual Intel Xeon 8558P processors. Each CPU provides 48 cores, 260MB of cache, and a 2.7GHz base clock at a 350W TDP, for 96 physical x86 cores of orchestration power to prepare data and manage the immense GPU workload. To keep the processing pipeline saturated, the system is populated with 32x 64GB DDR5-5600MHz ECC-RDIMMs, totaling 2TB of fast system memory.
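The aggregate core and memory figures above follow directly from the per-part numbers; a quick sanity check (plain arithmetic only, no vendor tooling assumed):

```python
# Sanity-check the aggregate CPU and memory figures from the spec sheet.
CPU_SOCKETS = 2
CORES_PER_CPU = 48        # Intel Xeon 8558P
DIMM_COUNT = 32
DIMM_SIZE_GB = 64         # DDR5-5600 ECC-RDIMM

total_cores = CPU_SOCKETS * CORES_PER_CPU
total_ram_gb = DIMM_COUNT * DIMM_SIZE_GB

print(total_cores)    # 96 physical cores
print(total_ram_gb)   # 2048 GB = 2 TB
```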

Storage is tiered for both reliability and speed. The host operating system is secured on two 480GB SATA 6Gbps 2.5-inch read-intensive SSDs, while training data and checkpoints are handled by two 3.84TB U.2 16GT/s 2.5-inch NVMe solid-state drives, ensuring rapid data ingestion directly to the GPUs. Powering this hardware is a redundant array of Titanium-rated high-efficiency PSUs (supporting 220VAC or 240VDC). With a comprehensive 3-year warranty, this server is a secure investment for enterprise data centers pushing the boundaries of AI Training Server capabilities.

Key Features

  • Next-Gen AI Acceleration: 1x NVIDIA HGX H200 8-GPU baseboard delivering 1128GB of HBM3e GPU memory.
  • Elite Processing: 2x Intel Xeon 8558P processors (48 cores, 2.7GHz, 260MB cache, 350W).
  • Massive Memory Bandwidth: 2TB total system RAM via 32x 64GB DDR5-5600MHz ECC-RDIMMs.
  • High-Speed Data Tier: 2x 3.84TB U.2 NVMe SSDs (16GT/s) for rapid checkpointing and data ingestion.
  • Reliable OS Boot: 2x 480GB SATA 6Gbps 2.5" SSDs.
  • Titanium Efficiency: Ultra-efficient Titanium power supplies (3200W/2700W, 220VAC/240VDC).
  • Enterprise Guarantee: Backed by a 3-year warranty.

Configuration

Component | Specification | Quantity
Brand / Model | Inspur KR6288X2-A0 (H200 complete machine) | 1
Processor (CPU) | Intel Xeon 8558P, 2.7GHz, 48 cores, 260MB cache, 350W | 2
Memory (RAM) | 64GB DDR5-5600MHz ECC-RDIMM | 32
System Disk | 480GB SATA 6Gbps 2.5" read-intensive SSD | 2
Data Disk | 3.84TB U.2 16GT/s 2.5" NVMe SSD | 2
GPU Baseboard | NVIDIA HGX H200 8-GPU, 1128GB | 1
Power Supply | 3200W / 2700W Titanium, 220VAC or 240VDC | —
Warranty | 3 years | 1

Compatibility

The Inspur KR6288X2-A0 is a premier platform designed for the NVIDIA AI Enterprise software stack. It natively supports the latest deep learning frameworks such as PyTorch, TensorFlow, and JAX. Operating system compatibility includes enterprise standards such as Ubuntu Server 22.04 LTS and Red Hat Enterprise Linux (RHEL) 9. The HGX H200 architecture utilizes NVLink interconnects internally and is designed to interface with high-speed NDR InfiniBand networking cards for massive cluster scaling.

Usage Scenarios

This server is specifically architected for Foundation Model Training. The 1128GB of total VRAM across the 8-GPU baseboard allows data scientists to load incredibly large LLMs directly into memory, enabling massive batch sizes and significantly cutting down training times for generative AI models.

It also serves as a dominant High-Throughput Inference Node. For customer-facing Generative AI applications requiring real-time text, image, or video generation, the sheer memory bandwidth of the H200 GPUs ensures multiple concurrent user requests are served with minimal latency.

Frequently Asked Questions

Q: What is the primary difference between an HGX H100 and this HGX H200 system?
A: The primary upgrade is memory capacity and bandwidth. While a standard 8-GPU H100 system features 640GB of GPU memory, the NVIDIA HGX H200 8-GPU baseboard featured here includes 1128GB of faster HBM3e memory (approx. 141GB per GPU). This allows significantly larger models to run on a single node without encountering memory bottlenecks.
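The capacity gap in that answer is simple per-GPU arithmetic (figures as quoted above; actual usable capacity varies slightly with driver and ECC overhead):

```python
# Compare total baseboard memory: HGX H100 vs. HGX H200 (8 GPUs each).
GPUS_PER_BASEBOARD = 8
H100_GB_PER_GPU = 80     # standard HGX H100 SXM
H200_GB_PER_GPU = 141    # HGX H200 with HBM3e

h100_total = GPUS_PER_BASEBOARD * H100_GB_PER_GPU
h200_total = GPUS_PER_BASEBOARD * H200_GB_PER_GPU

print(h100_total)                # 640 GB
print(h200_total)                # 1128 GB
print(h200_total - h100_total)   # 488 GB of additional headroom per node
```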

Q: Are the NVMe drives configured for redundancy?
A: The system includes two 3.84TB U.2 NVMe drives. In an AI training environment these are typically configured as a RAID 0 stripe for maximum read/write performance to feed data to the GPUs as fast as possible, though they can be configured as RAID 1 if data redundancy is prioritized over speed.
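The capacity trade-off between the two layouts works out as follows (illustrative arithmetic only; the array itself would be created with OS tooling such as mdadm or a hardware RAID controller):

```python
# Usable capacity of the two-drive NVMe tier under RAID 0 vs. RAID 1.
DRIVES = 2
DRIVE_TB = 3.84

raid0_usable = DRIVES * DRIVE_TB  # striped: full capacity, no redundancy
raid1_usable = DRIVE_TB           # mirrored: half capacity, survives one drive failure

print(raid0_usable)  # 7.68 TB of fast scratch space for training data
print(raid1_usable)  # 3.84 TB with redundancy
```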

Products related to this item
Inspur NF5280M6 AI-ready dual Xeon server with Tesla L2 GPU for enterprise workloads
Inspur NF5466M6 dual Intel Xeon 4314 enterprise storage and compute server
Dell PowerEdge R760xs – dual Xeon Silver 4410Y enterprise configuration
Dell PowerEdge R760xs – Xeon Gold 6507P performance server
Dell PowerEdge R660 1U rack server – dual Xeon Gold 6430, 1TB RAM, 25GbE, and Fibre Channel HBA
HPE ProLiant DL380 Gen11 2U rack server | dual Intel Xeon Gold 6542Y 48-core | 1TB DDR5 RAM | 2x 300GB SAS HDD
Inspur NF8480M5 4U enterprise storage server — 24× LFF SAS, quad Xeon Gold 6248R high-density platform
Lenovo ThinkSystem SR850 V3 high-performance 4-CPU server with Xeon Gold 6448H and enterprise networking