Artificial intelligence, and machine learning advanced to the point of simulating human consciousness, have been on the horizon for decades. Our collective imagination holds the technology in awe even as we fear it. There is reason to believe, however, that the tools being built with this technology could pose a serious risk, just as they offer the chance for utopia.
"The advent of artificial general intelligence is called the singularity because it is so hard to predict what will happen after that," Musk said. "I think it's very much a double-edged sword. I think there's a strong probability that it will make life much better, and that we'll have an age of abundance. And there's some chance that it goes wrong and destroys humanity. Hopefully that chance is small, but it's not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong."
Musk has called for a pause on AI development, citing these concerns, though he said signing a letter to that effect, as he did, "would be futile." He also brought up OpenAI, which he helped found at the beginning. His concern is that it began as a not-for-profit but is now turning into a for-profit company under the auspices of Microsoft.
His plan is to create a new AI effort, called X AI, adding a new force to the competitive marketplace of generative AI. He also spoke about Tesla and AI: the company is moving toward having "3 million cars be able to drive themselves with no one." When not in use by their owners, these cars could form a fleet of AI Ubers, with a revenue share between Tesla, which would organize the venture, and the owners, who would supply the cars when not driving them.
Oddly, this would reduce the need for parking spaces, as cars not in use would not sit idle but would continue driving around.
Musk also spoke about his falling-out with Google co-founder Larry Page, who called him a "speciesist" for caring more about humans than the machines they'd created. This, he said, was the last straw between him and Page.
"The final straw was Larry calling me a speciesist for being pro-human consciousness instead of machine consciousness. I'm like, 'Well, yes, I guess I am a speciesist.'"
For Musk, OpenAI, which he said he named, was meant to be open and not profit-driven, so that AI could be developed ethically. "That profit motivation can be potentially dangerous," he said.
"It does seem weird that something can be a non-profit, open source and somehow transform itself into a for-profit, closed source," he said. "I mean, this would be like, let's say you found an organization to save the Amazon rainforest, and instead they became a lumber company, chopped down the forest and sold it for money. And you'd therefore be like, 'Oh, wait a second. That's the exact opposite of what I gave them money for.' Is that legal? That doesn't seem legal." If it is, he said, he doesn't believe that it should be.
This isn't just a hypothetical, either. "I also think it's important to understand, when push comes to shove, let's say they do create some digital superintelligence, almost god-like intelligence, well, who's in control?"
This is the key question as human beings stand poised to contend with AI for control over their own future.
"And what exactly is the relationship between OpenAI and Microsoft? And I do worry that Microsoft may actually be more in control than, say, the leadership team at OpenAI realizes. As part of Microsoft's investment, they have rights to all of the software, all of the model weights and everything necessary to run the inference system," Musk said.
"At any point Microsoft could cut off OpenAI."
Musk, a dedicated and hard worker, noted that it's hard to advise young people just entering the workforce, especially given that AI could take over so many jobs. "If we do get to the sort of magic genie situation where you can ask the AI for anything–let's say it's a benign scenario–how do we actually find fulfillment? How do we find meaning in life if AI can do your job better than you can?
"I mean, if I think about it too hard, it can be dispiriting and demotivating. Because I put a lot of blood, sweat and tears into building companies. And then I'm like, 'Wait, why should I be doing this? If I'm sacrificing time with friends and family that I would prefer to–' But then ultimately the AI can do all these things. Does that make sense? So I don't know. To some extent, I have to have a deliberate suspension of disbelief in order to remain motivated. So I guess I would say just work on things that you find interesting and fulfilling, and that contribute some good to the rest of society."