230k GPUs, including 30k GB200s, are operational for training Grok in a single supercluster called Colossus 1 (inference is done by our cloud providers).
At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks.
As Jensen
btc (twitter.com) 00:04:34
>be xAI
>literally founded July 2023
>absolute nobody in the AI space
>Elon tweets “we’re gonna build the world’s most powerful AI training cluster”
>entire industry laughs
*122 days later*
>100,000 H100s go brrrr in Memphis
>Colossus is operational
>entire industry: “wait what”
btc (twitter.com)
Elon Musk on how he makes predictions: I generally try to get the estimate to the nearest order of magnitude.
Herbert Ong: “We want to understand how you think. For example, in the Master Plan Part II, you calculated it takes 6 billion of FSD miles driven before maybe the
btc (twitter.com) 00:00:53