Microsoft Unveils NVIDIA’s “Blackwell-Based” High-End Azure AI Compute Platform; HPC-Focused Azure AMD EPYC With HBM As Well

Microsoft announced some serious developments around its AI compute portfolio at the recent "Ignite" event, revealing NVIDIA Blackwell integration with Azure and new AMD EPYC "Genoa" chips featuring custom HBM…


Cerebras video shows AI writing code 75x faster than world's fastest AI GPU cloud — world's largest chip beats AWS's fastest in head-to-head comparison

Llama 3.1 405B runs at nearly a thousand tokens per second on Cerebras Inference, with a time to first token of roughly a quarter of a second.
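For a rough sense of what those figures mean for end-to-end latency, here is a minimal sketch (not Cerebras code; the function name and numbers are assumptions based on the blurb above) that estimates total generation time as time-to-first-token plus output tokens divided by throughput.

```python
# Rough latency model: total_time ≈ time_to_first_token + output_tokens / tokens_per_second.
# The defaults below (~1,000 tok/s, ~0.25 s TTFT) come from the blurb above and are
# illustrative assumptions, not measured benchmark data.

def estimate_generation_time(output_tokens: int,
                             tokens_per_second: float = 1000.0,
                             time_to_first_token_s: float = 0.25) -> float:
    """Estimate end-to-end generation time in seconds for a streaming LLM response."""
    return time_to_first_token_s + output_tokens / tokens_per_second


if __name__ == "__main__":
    # e.g. a ~500-token code completion would take roughly 0.75 s at these rates.
    print(f"{estimate_generation_time(500):.2f} s")
```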
