Is AI “Summoning the Demon”?

Are we ready for the true cost of AGI? … what it is, and what it means … the moral gray area … “offensive” and “defensive” investing … AI’s Phase 2 starts now

We all know the term “Pandora’s box,” but fewer of us know the story’s details…

When Prometheus stole fire from heaven, Zeus, the king of the gods, was furious. As revenge, he decided to punish Prometheus and his brother, Epimetheus.

So, Zeus created a woman – Pandora – and sent her to Epimetheus, along with a mysterious jar. They married, and eventually, curiosity got the best of Pandora. She opened the jar, unintentionally unleashing all sorts of evils upon the world.

(As an interesting aside, Pandora’s “jar” became Pandora’s “box” in the 16th century, when Erasmus mistranslated the original Greek.)

One of the modern-day idiomatic takeaways is “beware of a gift that seems valuable but is, in reality, a curse.”

With this as our context, let’s begin today with a handful of quotes…

“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”

  • Larry Page, co-founder of Google

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”

  • Elon Musk, founder of Tesla, SpaceX, xAI, and Neuralink, and co-founder of OpenAI

“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”

  • James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era”

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

  • Stephen Hawking, theoretical physicist, cosmologist, and author

This week, Wall Street has been focusing on mega-cap tech companies, evaluating whether AI is goosing their profits yet. But let’s look at AI from a different angle.

Today, we’re beginning a series of Digests that we’ll publish over the next several weeks, tackling AI from a variety of perspectives.

Clearly, there’s an investment implication – potentially, one that will provide returns orders of magnitude greater than what we’re seeing play out today with Microsoft, Amazon, Apple, and even Nvidia.

But there’s a cost to everything.

The question we want to look at today – one that many of our brightest thinkers are asking – is “Do we really know the true cost of tomorrow’s AI, and to what extent might this ‘valuable present’ actually be a curse?”

The coming era of “AGI”

Today, artificial intelligence is considered “narrow.”

Google’s DeepMind, Facebook’s facial recognition technology, Apple’s Siri, Amazon’s Alexa, Tesla’s and Uber’s self-driving vehicles, and OpenAI’s ChatGPT are all examples.

Narrow AI uses complex learning algorithms to analyze enormous volumes of data, from which it makes predictions in order to accomplish a specific task.

But that specific task is all it can do. Narrow AI can’t transfer its knowledge to domains it hasn’t been trained on.

Think of this as “AI Lite.” And for how amazing it is, it’s nothing compared to what’s on the way.
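
To make “narrow” concrete, here’s a minimal sketch in Python. (The scikit-learn library and the digit-recognition task are our own illustrative choices – not the stack of any particular AI company.)

```python
# A toy "narrow AI": a model trained to do exactly one thing.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The single task: recognizing 8x8-pixel images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(f"Digit-recognition accuracy: {model.score(X_test, y_test):.2f}")

# The "narrow" part: this model maps 64 pixel values onto the labels 0-9,
# and nothing else. Hand it a sentence, a stock chart, or a chess position,
# and it can't even represent the question -- its "knowledge" doesn't
# transfer beyond the one task it was trained on.
```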

The AI that’s coming – the kind that has many of our futurists and forward thinkers concerned – is “AGI,” or Artificial General Intelligence.

AGI is expected to exceed human intelligence in every aspect. It’s predicted to be an autonomous agent that can learn without human supervision. It will have some version of consciousness, subjective experience, emotional understanding, and self-reliant decision-making capability.

The central fear in creating such an AGI reduces to one question: “How do we control a conscious agent that is substantially more intelligent than we are?”

For an example of how this might look, here’s Nick Bilton from The New York Times:

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic.

Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

A “pro-AI” response is that in the early days of development, we can effectively program AGI to self-iterate along a central morality that prevents dystopia

But there’s a core flaw in that line of thinking…

Who decides “central morality”?

Without getting too philosophical, consider the impossible challenge…

What “morality” decides AGI’s view on hot-button issues that have historically divided our society, such as sex, marriage, abortion, and divorce? Does AGI’s morality more closely resemble a religious morality or a secular one?

Or consider a suffering, terminally ill patient with 12 months to live. Is euthanasia moral or not?

Let’s bring it closer to home…

If you make 10X the salary of the average American and have 20X the average net worth, would “morality” dictate that you give a large chunk of that wealth away to someone with next to nothing?

Questions like these – and the different answers that Americans have today – are already polarizing our nation. Look at the state of current politics. 

Where will AGI’s morality land?

This is just the first of a handful of Digests we’ll write about AGI, each covering a different angle, but let’s steer it back toward the investment implications before we wrap up

Our macro expert Eric Fry has turned his attention to AGI in recent months.

He recently wrote about the book The Singularity Is Near, by computer scientist and futurist Ray Kurzweil – now one of the chief AI researchers at Alphabet Inc. The book details how AI might achieve humanlike intelligence by 2029.

Eric highlights the dystopian vision of the future we’ve touched upon today, adding:

[Kurzweil’s book contains] an even more chilling prediction: Computers could achieve superhuman abilities by 2045. This event – which he calls the singularity – would see machine intelligence become infinitely more powerful than all human intelligence combined…

In Kurzweil’s version of the future…humans would transcend the “limitations of our biological bodies and brain,” and our knowledge of genetics and nanotechnology could even mean that future humans will be… well… not human at all.

Eric then pivots toward the investment implications. Interestingly, there’s an “offensive” and “defensive” angle.

For “offense,” Eric tells his readers that they’ll be investing in AI companies directly as AGI develops. This also includes their suppliers, as well as businesses that can cut costs thanks to AI and AGI.

As for “defense,” Eric writes, “But as we get on the road to AGI, smart investors will also invest in things that AI can never be.”

This is an interesting and important idea.

As we look toward a future in which jobs and business models might be replaced or restructured by AGI and automation, “AGI-proof” sectors and businesses will be needed in a balanced, holistic portfolio.

We’ll bring you more on this as Eric provides more of his research. He’s creating a series of reports for members of his trading service, The Speculator. If you’re a subscriber, keep your eye out. We’ll bring you as much as we can here in the Digest.

In the meantime, for investing in AI today, I’ll direct you toward the briefing that Eric just put together with Louis Navellier and Luke Lango. Released only a few days ago, it explains how the AI Revolution has just hit an inflection point where AI software stocks will be taking over leadership from AI hardware stocks.

This report sheds light on how to be on the right side of this “Phase 2” of the AI revolution. Our three experts provide the blueprint to follow if you want to make the most money from this next phase in AI stocks.

For now, AI development continues to accelerate at what Musk called “close to exponential” rates

We’ll do all we can to help you invest accordingly, with the goal of building wealth on a scale that mirrors how this technology will transform our world. However, let’s not be blind to the risks.

Are we ready to pay the price for all this?

We’ll end by returning to Elon Musk:

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

“I mean, with artificial intelligence, we’re summoning the demon.”

Have a good evening,

Jeff Remsburg

