You might wonder whether it is possible to formulate a rate-of-improvement rule for artificial intelligence, the way Moore’s Law and Nielsen’s Law describe computing power and home broadband speeds.
It is easier to chart the increase in computational power available to support AI, for the simple reason that parallel processing hardware can be applied to the task. But AI progress depends on more than hardware.
For AI applications, training time matters.
To be sure, deep learning and neural networks might be said to increase algorithmic efficiency, which could bend the Moore’s Law curve higher. But artificial intelligence also depends on large datasets to train models, so chip-level performance is only one input.
In other words, AI, and machine learning in particular, is driven by multiple inputs, not just chip performance. Moore’s Law, by itself, suggests transistor density roughly doubles every 12 to 18 months.
In the case of machine learning, better software and computer architecture, plus the ability to train on bigger datasets, can produce a seven-fold to 11-fold rate of improvement.
source: Discover magazine; Jaime Sevilla, University of Aberdeen
The point is that, at least for machine learning, progress has been faster than Moore’s Law.
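To see how different doubling periods compound over a few years, here is a minimal Python sketch. The 18-month period mirrors the Moore’s Law framing above; the six-month machine learning cadence is purely an illustrative assumption, not a figure taken from the cited sources.

```python
# Illustrative only: compare compound growth under different doubling periods.
# The 18-month period echoes the Moore's Law framing above; the ~6-month
# machine learning cadence is a hypothetical stand-in, not a sourced figure.

def growth_multiple(years: float, doubling_period_years: float) -> float:
    """Return the improvement multiple after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

horizon = 3  # years

moore = growth_multiple(horizon, 1.5)   # doubling every 18 months
ml = growth_multiple(horizon, 0.5)      # assumed: doubling every 6 months

print(f"Moore's Law pace over {horizon} years: ~{moore:.0f}x")
print(f"Faster ML pace over {horizon} years:  ~{ml:.0f}x")
```

Under those assumptions, three years of Moore’s Law gives roughly a 4x gain, while the faster cadence gives roughly 64x, which is the basic reason small differences in doubling period matter so much.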