NVIDIA GPU A100 80GB HBM2e x16 PCIe Gen4 300W Passive dual
900-21001-0020-100
Out of stock
Purchasing from M2M
M2M sells exclusively to our system-building and value-added resale channel customers, partnering with industry-leading vendors to provide project-focused solutions backed by expert insight and fast, reliable service.
Log In to Buy
If you are a reseller, VAR, or system builder, simply sign up online to view prices and start purchasing. SIGN UP HERE
Get Project Support
M2M offers industry insight and know-how combined with vendor-driven technology solutions for sector-specific projects. To discover how M2M could optimise your project, call us on 0208-676-6067.
Apply For A Trade Credit Account
To apply to become an M2M credit customer, please follow the LINK, then complete and submit the form.
Product Details
Product Description
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. The A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) on SXM form factors (1,935 GB/s on this PCIe variant) to run the largest models and datasets.
Product Specifications
Model: A100 80GB PCIe
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1248 TOPS*
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 1,935 GB/s
Max Thermal Design Power (TDP): 300W
Multi-Instance GPU: Up to 7 MIGs @ 10GB
Form Factor: PCIe; dual-slot air-cooled or single-slot liquid-cooled
Interconnect: NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
Server Options: Partner and NVIDIA-Certified Systems™ with 1-8 GPUs
* With sparsity
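The quoted figures are internally consistent and can be sanity-checked with a little arithmetic. The sketch below assumes a 5120-bit HBM2e memory bus and a per-pin data rate of about 3.024 Gbit/s (neither appears in the listing above), and uses the convention that the starred Tensor Core figures are the dense rates doubled via structured sparsity.

```python
# Sanity checks on the A100 80GB PCIe spec figures listed above.
# Assumed (not stated in the listing): 5120-bit HBM2e bus width,
# ~3.024 Gbit/s per-pin data rate.

bus_width_bits = 5120        # assumed HBM2e bus width
data_rate_gbps = 3.024       # assumed per-pin rate, Gbit/s

# Peak memory bandwidth = bus width x per-pin rate / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(round(bandwidth_gb_s))   # ~1935, matching the quoted 1,935 GB/s

# Starred Tensor Core figures are the dense rate doubled by sparsity
fp16_dense_tflops = 312
print(fp16_dense_tflops * 2)   # 624, matching "312 TFLOPS | 624 TFLOPS*"
```

The same doubling relates every dense/starred pair in the table (156 → 312 TF32, 624 → 1248 INT8), so a single sparsity factor of 2 explains all the starred entries.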