Forgive my ignorance. Would these be mainly for gaming, or are there other use cases, like 4k video effects rendering or something?
When Nvidia talks ‘datacenter’ it is talking almost exclusively about B200 and H200: gigantic HBM GPUs that will only ever be socketed into servers in sets of 8. That silicon is never used for gaming.
https://www.servethehome.com/new-shots-of-the-nvidia-hgx-b200-astera-labs/
A little bit of 5090 silicon gets vacuumed up for ‘low end’ accelerators, but this is relatively low volume.
People do rent them for Blender rendering and other GPGPU work, but the vast majority is probably being used for AI inference (or, honestly, just hoarded by companies that don’t really understand ML and let them sit there under-utilized :/)
Point I’m making is they’re not talking about gaming GPUs here. Making so many ‘AI’ GPUs does impact how much TSMC capacity can be allocated to gaming, but this announcement has nothing to do with that.
So, they’re saying other AI customers will “continue to be a top priority”? That makes sense given your explanation. I care even less now about this statement from Nvidia.
Precisely, yes.