As general artificial intelligence moves from speculation toward reality, observers are weighing in on just how large an impact it will have on global societies. Will it drive explosive economic growth, as some economists project, or are such claims unrealistically optimistic?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable in significance to the Industrial Revolution of the 18th and 19th centuries.

The seven members of the Epoch team have backgrounds in machine learning, statistics, economics, forecasting, physics and software engineering.

While they concluded that “explosive growth seems plausible,” they were quick to add that “high confidence” in rapid development “seems currently unwarranted.”

Epoch Associate Director Tamay Besiroglu and staff researcher Ege Erdil have explained their conclusions in a paper, “Explosive Growth from AI Automation: A Review of the Arguments,” published Sept. 20 on the preprint server arXiv.

They stated that many observers believe AI can drive explosive growth of “an order of magnitude faster than current rates” and lead to “super-exponential growth.” But they caution that there will be obstacles.
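To make the phrase "an order of magnitude faster than current rates" concrete, here is a rough numeric illustration (our own, not from the paper): global output has recently grown around 3% per year, so growth ten times faster would be roughly 30% per year, and the gap compounds dramatically over time.

```python
# Illustrative arithmetic only; the 3% baseline and 30% "explosive" rate
# are stand-in figures, not numbers taken from the Epoch paper.

def compound(rate: float, years: int) -> float:
    """Total growth multiple after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

baseline = compound(0.03, 20)   # ~1.8x total output after 20 years at 3%/yr
explosive = compound(0.30, 20)  # ~190x total output after 20 years at 30%/yr

print(f"3% for 20 years:  {baseline:.1f}x")
print(f"30% for 20 years: {explosive:.1f}x")
```

Note that even this comparison holds the growth rate fixed; "super-exponential" growth, as the authors use the term, would mean the rate itself keeps rising over time.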

First, governmental regulations will likely be swift and possibly extreme as concerns grow over ethics, privacy and risk.

Among the factors that may tamp down the economic impact of AI, they said, are “fear or reluctance regarding powerful new technologies, concerns over… intellectual property leading to a shortage of training data and unwillingness to let AI systems perform tasks that can be automated without human supervision,” the last driven by legal concerns.

But after weighing the concerns against potential benefits of AI, the authors conclude it is “unlikely that regulation of the training and deployment of AI will block explosive growth.”

“The potential value of AI deployment could be immense,” the authors said, “with the prospect of increasing output by several orders of magnitude. Consequently, this would likely create formidable disincentives for imposing restrictions.”

Another potential obstacle to the rapid adoption of AI applications is their potential unreliability. The authors cite as an example the tendency of AI systems to “hallucinate,” or generate responses that are blatantly false. A recent study found a chatbot attributing its information to research papers that in fact did not exist.

Here, again, the researchers said that improvements in reliability and the incorporation of safety measures such as human oversight will eventually lead to greater trust in AI operations.

“Overall, our assessment is that this argument is most likely not going to block explosive growth, but its influence cannot be ruled out,” they said.

They similarly discounted other potential obstacles to the acceptance of AI, such as human preferences for human-produced goods and physical bottlenecks in production.

In the end, their outlook for explosive growth may best be described as “cautious optimism.”

“We conclude that explosive growth seems plausible with AI capable of broadly substituting for human labor,” they said.