Generative AI and large language models, as applied to many aspects of computing, will change some things more than others.
Nvidia CEO Jensen Huang is quite positive about what routine use of large language models will mean for sales of advanced processors, citing accelerating demand for Nvidia chips. Some analysts believe generative AI could add as much as $6 billion to Nvidia's revenue within three years.
Others in the infrastructure business should benefit as well. Some argue the infrastructure to support generative AI could reach $50 billion by 2028, for example, and note that a single LLM training run can cost millions of dollars. Those costs will be borne by app providers of all sorts.
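To see why a single training run reaches into the millions, consider a back-of-the-envelope estimate. The sketch below uses the common ~6 × parameters × tokens approximation for training FLOPs; the model size, token count, GPU throughput, utilization, and hourly price are all illustrative assumptions, not figures from the source or any vendor.

```python
# Back-of-the-envelope LLM training cost estimate.
# Every input below is an illustrative assumption, not a vendor figure.

def training_cost_usd(params, tokens, flops_per_gpu_per_s,
                      utilization, gpu_cost_per_hour):
    """Estimate cost via the common ~6 * params * tokens training-FLOPs rule."""
    total_flops = 6 * params * tokens
    effective_flops_per_s = flops_per_gpu_per_s * utilization
    gpu_seconds = total_flops / effective_flops_per_s
    return gpu_seconds / 3600 * gpu_cost_per_hour

# Assumed example: a 70B-parameter model trained on 1.4T tokens,
# GPUs sustaining 300 TFLOP/s at 40% utilization, $2 per GPU-hour.
cost = training_cost_usd(70e9, 1.4e12, 300e12, 0.4, 2.0)
print(f"~${cost / 1e6:.1f}M")  # prints ~$2.7M
```

Even with these conservative assumptions, the estimate lands in the low millions of dollars, and larger models or lower utilization push it up quickly.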
Data center CxOs would likely agree that generative AI processing will create changes. We already know that generative AI requires substantial compute, which we can quantify in floating point operations (FLOPs). The actual compute consumed by a single request depends on the size of the dataset being interrogated, the complexity of the question or task, and the type of query.
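One rough way to see how those request parameters drive compute: for a decoder-only model, a common approximation is ~2 × parameters FLOPs per token processed, so per-request cost scales with both model size and response length. The model sizes and token counts below are assumptions chosen for illustration.

```python
# Rough per-request inference compute, using the common
# ~2 * params FLOPs-per-token approximation for decoder-only models.

def inference_flops(params, prompt_tokens, output_tokens):
    """Approximate FLOPs for one request: ~2 * params per token processed."""
    return 2 * params * (prompt_tokens + output_tokens)

# Assumed example: a 7B- vs. a 70B-parameter model answering the
# same 200-token prompt with a 300-token reply.
small = inference_flops(7e9, 200, 300)
large = inference_flops(70e9, 200, 300)
print(f"{small:.1e} vs {large:.1e} FLOPs")  # a 10x larger model costs ~10x more
```

The same arithmetic shows why longer responses and larger models multiply data center load: doubling either roughly doubles the FLOPs per request.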
Text responses, perhaps ironically, might require an order of magnitude more processing than generating an image. And image generation might, in turn, require an order of magnitude more operations than text-to-speech.
So capital investment budgets will likely shift to add more processing power than we presently tend to see. As a background issue, there will also be some additional demand for connectivity between data centers, between data centers and peering points, and between domains, though it remains unclear how much incremental demand will materialize. It is a non-zero number, but hard to quantify at the moment.
There could be architectural impact as well. Edge computing makes sense for lower-latency use cases and applications, but it is unclear so far whether training on large data sets or end-user operations will actually drive edge computing very much.
Inference operations might still need to be conducted at large hyperscale centers, not at the edge. For that reason, use of large language models might not actually cause data center architecture to shift to greater reliance on edge computing, for example.
Higher energy requirements will likely lead to newer approaches to cooling, though.