Purchasing from M2M
M2M sells exclusively to our system-building and value-added resale channel customers, partnering with industry-leading vendors to provide project-focused solutions backed by expert insight and fast, reliable service.
Log In to Buy
If you are a reseller, VAR, or system builder, simply sign up online to view prices and start purchasing. SIGN UP HERE
Get Project Support
M2M offers industry insight and know-how combined with vendor-driven technology solutions for sector-specific projects. To discover how M2M could optimise your project, call us on 0208-676-6067.
Apply For A Trade Credit Account
To apply to become an M2M credit customer, please follow the LINK, then complete and submit the form.
Nvidia H100 PCIe GPU
Out of stock
Product Details
Nvidia H100 PCIe GPU
900-21010-0000-000
Product Description
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA® H100 Tensor Core GPU. With the NVIDIA NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models. The H100’s combined technology innovations can speed up large language models (LLMs) by an incredible 30X over the previous generation to deliver industry-leading conversational AI.
Product Specification:
FP64: 26 teraFLOPS
FP64 Tensor Core: 51 teraFLOPS
FP32: 51 teraFLOPS
TF32 Tensor Core: 756 teraFLOPS (with sparsity)
BFLOAT16 Tensor Core: 1,513 teraFLOPS (with sparsity)
FP16 Tensor Core: 1,513 teraFLOPS (with sparsity)
FP8 Tensor Core: 3,026 teraFLOPS (with sparsity)
INT8 Tensor Core: 3,026 TOPS (with sparsity)
GPU memory: 80GB
GPU memory bandwidth: 2TB/s
Decoders: 7 NVDEC, 7 JPEG
Max thermal design power (TDP): 300-350W (configurable)
Multi-Instance GPUs: Up to 7 MIGs @ 10GB each
Form factor: PCIe dual-slot air-cooled
Interconnect: NVLink: 600GB/s, PCIe Gen5: 128GB/s
Server options: Partner and NVIDIA-Certified Systems with 1–8 GPUs
NVIDIA AI Enterprise: included
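The headline figures in the table above hang together arithmetically, which is a quick way to sanity-check a spec sheet. A minimal sketch (all constants copied from the table; no GPU or vendor library required):

```python
# Sanity-check of headline figures from the H100 PCIe spec table above.

GPU_MEMORY_GB = 80      # "GPU memory: 80GB"
MIG_INSTANCES = 7       # "Up to 7 MIGs @ 10GB each"
MIG_SLICE_GB = 10

NVLINK_GBPS = 600       # "Interconnect: NVLink: 600GB/s"
PCIE_GEN5_GBPS = 128    # "PCIe Gen5: 128GB/s"


def mig_memory_total(instances=MIG_INSTANCES, slice_gb=MIG_SLICE_GB):
    """Memory consumed when the card is fully partitioned into MIG slices."""
    return instances * slice_gb


# Seven 10GB MIG slices fit within the card's 80GB of HBM.
assert mig_memory_total() <= GPU_MEMORY_GB

# NVLink offers roughly 4.7x the bandwidth of the PCIe Gen5 link.
print(NVLINK_GBPS / PCIE_GEN5_GBPS)
```

On a system where the card is actually installed, the same figures can be confirmed against live hardware with `nvidia-smi`.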