
AMD Instinct MI300X 192GB HBM3 OAM
AMD's flagship AI accelerator with industry-leading 192GB HBM3 memory and 5.3TB/s bandwidth. Best-in-class performance per dollar for generative AI and large language models. Available now with competitive pricing and ROCm support.
$29,999
$32,000
Ships directly from distributor
What it's for
The AMD Instinct MI300X accelerator, built on the AMD CDNA™ 3 architecture, is designed for training and inference of generative AI models. Its advanced chiplet design delivers exceptional memory bandwidth and capacity with HBM3, enabling the largest and most demanding GenAI and HPC workloads to run on a single accelerator. Maximize your AI budget with the MI300X's exceptional price-performance. Call (555) 123-4567 for volume discounts and migration support from CUDA to ROCm.
Key Features
- ✓ 192GB HBM3 memory - largest memory capacity in its class
- ✓ 5.3 TB/s memory bandwidth - industry-leading bandwidth
- ✓ 304 Compute Units with 19,456 Stream Processors
- ✓ Optimized for large language models such as Llama 2 and GPT-class models
- ✓ ROCm 6.0 open-source software ecosystem
- ✓ AMD Infinity Fabric for multi-GPU scaling
- ✓ Lower total cost of ownership vs. competitors
- ✓ OCP Accelerator Module (OAM) form factor
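The 192GB capacity claim above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch (the 70B-parameter model size is used purely as an illustrative example; real deployments also need room for activations and KV cache, which this ignores):

```python
# Rough upper bound: how many FP16/BF16 parameters fit in 192 GB of HBM3?
# Assumes 2 bytes per parameter and ignores activation/KV-cache overhead.

HBM_BYTES = 192e9          # 192 GB HBM3 on the MI300X
BYTES_PER_PARAM = 2        # FP16/BF16 weight storage

max_params = HBM_BYTES / BYTES_PER_PARAM
print(f"Max FP16 parameters: ~{max_params / 1e9:.0f}B")  # ~96B

# Illustrative example: a 70B-parameter model (e.g. Llama 2 70B) in FP16
weights_bytes = 70e9 * BYTES_PER_PARAM                   # 140 GB
fits = weights_bytes < HBM_BYTES
print(f"70B FP16 weights: {weights_bytes / 1e9:.0f} GB -> fits on one card: {fits}")
```

In other words, FP16 weights for models up to roughly 96B parameters fit on a single MI300X, which is the basis of the "largest memory capacity in its class" positioning.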
Use Cases
- → Large Language Model training (Llama, Mistral, GPT)
- → Generative AI inference at scale
- → Multi-modal AI models (text, image, video)
- → Scientific computing and research
- → Drug discovery and molecular dynamics
- → Financial modeling and risk analysis
Technical Specifications
| Specification | Value |
| --- | --- |
| Architecture | CDNA 3 |
| GPU Memory | 192 GB HBM3 |
| Memory Bandwidth | 5.3 TB/s |
| Memory Interface | 8192-bit |
| Compute Units | 304 CUs |
| Stream Processors | 19,456 |
| FP64 Performance (matrix) | 163 TFLOPS |
| TF32 Performance (matrix) | 653 TFLOPS |
| FP16 Performance | ~1,307 TFLOPS |
| BF16 Performance | ~1,307 TFLOPS |
| INT8 Performance | ~2,614 TOPS |
| Max TDP | 750W |
| Thermal Solution | Passive (system airflow or liquid cooling required) |
| Form Factor | OAM (OCP Accelerator Module) |
| Interconnect | Infinity Fabric |
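The bandwidth figure in the table sets a hard ceiling on single-batch decode throughput: each generated token must stream the full set of weights from HBM once, so tokens/s cannot exceed bandwidth divided by weight size. A minimal sketch of that bound (the 70B FP16 model is an illustrative assumption; real throughput is lower once KV cache and kernel overheads are counted):

```python
# Memory-bandwidth ceiling on single-batch LLM decode throughput:
# tokens/s <= HBM bandwidth / bytes of weights read per token.
# This is a theoretical upper bound, not a measured benchmark.

BANDWIDTH = 5.3e12             # 5.3 TB/s HBM3 bandwidth (spec table)
weights_70b_fp16 = 70e9 * 2    # 140 GB of FP16 weights (assumed example model)

ceiling = BANDWIDTH / weights_70b_fp16
print(f"Decode ceiling for a 70B FP16 model: ~{ceiling:.0f} tokens/s")  # ~38
```

This kind of roofline arithmetic is why HBM capacity and bandwidth, not raw TFLOPS, usually dominate single-stream inference sizing.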
Related Products

NVIDIA H100 80GB PCIe Gen5
NVIDIA H100 Tensor Core GPU in PCIe form factor with 80GB HBM3 memory. Ideal for deploying AI inference and training in standard servers without NVLink clustering requirements. Contact our sales team for volume pricing and immediate availability.
$29,999

NVIDIA H100 80GB SXM5
NVIDIA H100 Tensor Core GPU in SXM5 form factor with NVLink for multi-GPU scaling. Designed for HGX server platforms and large-scale AI training clusters. Enterprise volume discounts available - contact sales for custom configurations.
$32,999

NVIDIA H200 141GB HBM3e SXM5
Industry-leading Hopper architecture GPU with 141GB HBM3e memory and 4.8TB/s bandwidth. Perfect for large language models, generative AI, and high-performance computing workloads. In stock now - contact us for immediate delivery and competitive pricing.
$39,999

NVIDIA B200 192GB Blackwell
Revolutionary Blackwell architecture with 192GB HBM3e and FP4 precision for next-gen AI. Pre-order now for 2025 delivery - reserve your allocation with our sales team.
Price TBD