
Description

Alphabet operates Google’s global AI hardware fleet, including the GPUs and TPUs that power its search and cloud services.

Scope

  • Reports indicate Google purchased about 50,000 NVIDIA H100 GPUs for AI workloads; in aggregate, these GPUs provide roughly 1×10^20 dense INT8 operations per second.1, 2
  • Google Cloud continues deploying TPU v4 and v5p pods to support research and products.3

Taken together, these GPU and TPU resources yield on the order of 1×10^20 dense INT8 operations per second.
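
As a rough sanity check on that figure, the short Python sketch below multiplies a per-GPU dense INT8 rating by the reported order size. The per-H100 rating (about 1,979 TOPS dense, i.e. without structured sparsity) and the decision to ignore the TPU contribution are assumptions made here for illustration, not figures taken from the cited reports.

    # Back-of-the-envelope check on the aggregate throughput estimate.
    # Assumptions for illustration: ~1.979e15 dense INT8 operations per
    # second per H100 (datasheet rating without sparsity) and the reported
    # fleet size of 50,000 GPUs; the TPU contribution is ignored here.

    H100_DENSE_INT8_OPS_PER_SEC = 1.979e15  # ~1,979 TOPS, dense
    REPORTED_GPU_COUNT = 50_000

    aggregate_ops_per_sec = H100_DENSE_INT8_OPS_PER_SEC * REPORTED_GPU_COUNT
    print(f"Aggregate dense INT8 throughput: {aggregate_ops_per_sec:.2e} OP/s")
    # Prints roughly 9.9e19, i.e. about 1e20 dense INT8 operations per second.

Under these assumptions, the reported GPU order alone accounts for essentially all of the quoted 1×10^20 operations per second.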

Implications

Alphabet’s expanding compute allows it to train advanced models for search, advertising, and cloud offerings, but it concentrates substantial resources within a single corporation.

Works cited

  1. Observer, “Google Reportedly Orders 50,000 Nvidia H100s,” 2023. https://observer.com/google-50k-h100-gpus/ 

  2. Colfax, “NVIDIA H100 Tensor Core GPU,” 2023. https://colfaxresearch.com/nvidia-h100-performance/ 

  3. TechCrunch, “Google Cloud introduces TPU v5p,” 2023. https://techcrunch.com/2023/03/14/google-cloud-introduces-tpu-v5p/