Metaverse at scale implies some fairly dramatic increases in computational resources and, to a lesser extent, bandwidth.
Some believe the next-generation internet could require a three-order-of-magnitude (1,000 times) increase in computing power to support widespread artificial intelligence, 3D rendering, metaverse and distributed applications.
The question is how that compares with historical rates of increase in computational power. Historically, a 1,000-fold improvement in computing capability has taken perhaps a couple of decades.
Will that be fast enough to support ubiquitous metaverse experiences? There are reasons for both optimism and concern.
The mobile business, for example, has taken about three decades to achieve a 1,000-times change in data speeds. We can assume raw compute improves faster, but even then, based strictly on Moore’s Law rates of improvement in computing power, it might still require two decades to achieve a 1,000-times change.
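The arithmetic behind that estimate is straightforward. A rough sketch, assuming the classic Moore's Law cadence of a doubling roughly every two years (the doubling periods below are assumptions for illustration, not measured figures):

```python
import math

# Rough sketch: years needed for a given gain at an assumed doubling cadence.
# The two-year doubling period is the classic Moore's Law assumption, not a measured figure.
def years_to_multiply(target_multiple: float, doubling_period_years: float) -> float:
    doublings_needed = math.log2(target_multiple)   # 1,000x needs ~10 doublings (2^10 = 1,024)
    return doublings_needed * doubling_period_years

print(years_to_multiply(1_000, 2.0))   # ~19.9 years at a two-year doubling cadence
print(years_to_multiply(1_000, 3.0))   # ~29.9 years, closer to the mobile data-speed experience
```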
For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require a number of changes. Chip density improvements alone probably will not carry the load, and greater use of application-specific processors seems likely.
A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required.
Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times.
Still, there is reason for optimism. Consider the advances in computational capability supporting artificial intelligence and generative AI, for use cases such as ChatGPT.
“We've accelerated and advanced AI processing by a million x over the last decade,” said Jensen Huang, Nvidia CEO. “Moore's Law, in its best days, would have delivered 100x in a decade.”
“We've made large language model processing a million times faster,” he said. “What would have taken a couple of months in the beginning, now it happens in about 10 days.”
In other words, vast increases in computational power might well hit the 1,000 times requirement, should it prove necessary.
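As a rough sanity check on those figures (a back-of-envelope sketch, not a claim from Nvidia): a million-fold gain over a decade implies a doubling roughly every six months, while the 100x-per-decade pace Huang attributes to Moore’s Law implies a doubling roughly every 18 months, and a 1,000x decade would require a doubling about every year.

```python
import math

# Implied doubling period for a stated total gain over a stated interval (illustrative only).
def implied_doubling_period_years(total_multiple: float, interval_years: float) -> float:
    return interval_years / math.log2(total_multiple)

print(implied_doubling_period_years(1_000_000, 10))  # ~0.5 years per doubling for a million-x decade
print(implied_doubling_period_years(100, 10))        # ~1.5 years per doubling for the cited Moore's Law pace
print(implied_doubling_period_years(1_000, 10))      # ~1.0 year per doubling would deliver 1,000x in a decade
```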
And improvements along a number of dimensions, beyond Moore’s Law and chip density, will enable such growth. As it turns out, many parameters can be improved.
“No AI in itself is an application,” Huang said. Preprocessing and post-processing often represent half or two-thirds of the overall workload, he pointed out.
By accelerating the entire end-to-end pipeline, from data ingestion and preprocessing all the way through post-processing, “we're able to accelerate the entire pipeline versus just accelerating half of the pipeline,” said Huang.
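Huang’s point is, in effect, Amdahl’s Law: whatever fraction of a workload is left unaccelerated caps the overall gain. A minimal sketch (the pipeline split and speedup figures are illustrative assumptions, not Nvidia numbers):

```python
# Amdahl's Law: overall speedup when a fraction p of the work is accelerated by factor s.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# If only half the pipeline is accelerated, even a huge speedup caps out near 2x overall.
print(overall_speedup(p=0.5, s=1_000))   # ~2.0
# Accelerating nearly the full end-to-end pipeline removes that ceiling.
print(overall_speedup(p=0.99, s=1_000))  # ~91.0
```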
The point is that metaverse requirements--even assuming a 1,000-fold increase in computational support within a decade or so--seem feasible, given what is happening with artificial intelligence processing gains.