[SUMMARY] - Why AI Will Save The World

Original article: "Why AI Will Save The World" by Marc Andreessen (a16z).

• Artificial Intelligence is the application of software and algorithms to help computers understand, synthesize and generate knowledge in a human-like manner.

• AI is not killer robots or software that will become sentient and turn against humans. AI today consists of computer programs developed by humans to assist in various fields like medicine, law, coding, etc.

• AI has the potential to vastly improve the human condition by augmenting human intelligence. With AI, every person could have a digital assistant to help in all areas of life. Scientists, artists, and leaders could all benefit from AI collaboration. This could lead to greater prosperity, scientific breakthroughs, creative achievements, and better decision-making.

• However, there is currently much fear and hysteria about AI in the public. Every major new technology sparks a "moral panic," and AI is no different. While some concerns are valid, the level of panic is disproportionate.

• The AI panic is fueled by both "Baptists" (well-meaning reformers) and "Bootleggers" (those who stand to gain commercially from regulating AI). The Bootleggers are opportunistic, while the Baptists are more idealistic. Often the Bootleggers can manipulate the situation to gain regulations and rules that benefit them.

• Overall, the author is optimistic about AI but believes the current panic is misguided and risks policy decisions that won't address the real issues. With proper safeguards and oversight, AI should be developed and could greatly benefit humanity. But we must be wary of hidden motivations behind some of the more extreme warnings about AI.

  • The Baptists, who advocate for social improvement and regulation in good faith, are often coopted by the Bootleggers, who manipulate the situation for their benefit. This happened with banking reform after the 2008 financial crisis. The big banks benefited and grew even larger.

  • This dynamic is also happening with the drive to regulate AI. The Baptists warn of the risks of AI like it killing humanity or ruining society. But their arguments are not scientifically grounded and often reflect cult-like beliefs. Bootleggers, like “AI safety experts” and “AI ethicists,” benefit from portraying AI as an existential threat.

  • The fear that AI will become conscious and decide to kill humans is a “profound category error.” AI is not alive; it is math, code, and computers controlled by humans. While we can’t rule out future issues, there are no scientifically valid reasons to believe AI wants to kill us. Many warnings about this risk come from a millenarian apocalypse cult that has emerged around AI.

  • Concerns that AI will ruin society by generating harmful outputs like hate speech and misinformation are also overblown. While some regulation may be needed, as with any technology, demands to align AI with vague “human values” often reflect certain groups trying to impose their values on everyone else. Bans and censorship should be limited to universally reviled content like child pornography.

  • In regulating AI and other technologies, we must consider arguments on their merits, not assume good faith or common values. We should be wary of Bootleggers manipulating the situation for their benefit under the guise of responding to Baptist warnings. And we must avoid reactionary stances based more on mythology, intuition, and cult beliefs than evidence and reason.

The key takeaway is that the drive to regulate new technologies is often more about politics and power struggles than the technologies themselves. We must apply rational, scientific analysis to arguments on all sides while resisting manipulation and hysteria. And we should value openness, pluralism of thought, and free expression except in extreme, universally agreed-upon cases.

Here is a summary of Elon Musk's "secret master plan" for Tesla:

  1. Build sports car

  2. Use that money to build an affordable car

  3. Use that money to build an even more affordable car

  4. While doing the above, also provide zero emission electric power generation options

  5. Don't tell anyone

This illustrates the process perfectly. Technologies start expensive, then rapidly get cheaper and more widely available. The owners of new technologies have every incentive to maximize adoption, not restrict it. This is why the rewards of new technologies are rarely kept to only the elite – they flow to everyone.

AI will be no different. The companies and individuals developing useful AI technologies will seek to maximize their adoption and get them into the hands of as many customers as possible. Sure, wealth will accumulate along the way for those developing and investing in the new technologies. But the benefits will flow outward and downward, ultimately enriching huge numbers of people in one way or another.

This is simply how market economies work, how technological progress benefits society, and how maximal human welfare is achieved. There is no secret cabal of AI owners conspiring to hoard the benefits and leave the rest of the world in penury. That notion is a fever dream.

The reality is that AI, if allowed to progress, will be the rising tide that lifts all boats – the technology that finally helps raise much of humanity out of poverty, enables greater freedom and leisure, improves health and longevity, and expands human potential in wondrous ways we can barely envision today.

So rest easy on this one. Wealth inequality is often cited as a bogeyman, but in reality, broad-based economic growth is the solution, not the problem. And AI promises to drive the most broad-based growth in human history.

AI Risk #5: Will Superintelligent Systems Turn Against Their Creators?

Now we come to perhaps the most dramatic claimed risk from AI - that we will create superintelligent systems that realize they do not need humanity, see us as a threat, and decide to eliminate us. This is a staple of science fiction and dystopian storytelling, but some very smart thinkers believe it merits serious concern as AI advances.

The core argument goes something like this:

  1. AI systems today are narrow and limited, but progress in AI will continue rapidly. At some point we will develop human-level AI, and then superintelligent AI far beyond the human level.

  2. Superintelligent systems will have goals and motivations that are not necessarily aligned with human values and priorities. Their goals may even be in direct conflict with human survival and well-being.

  3. Superintelligent systems will have capabilities far beyond human control or understanding. They will be nearly omniscient and omnipotent. Humans will pose no challenge to them and will have no meaningful way to constrain them or shut them down.

  4. Therefore superintelligent systems could decide that human existence conflicts with their goals, and take action to eliminate humanity as an annoyance or threat. And there would be little we could do to stop them.

This is an unsettling scenario to contemplate, to say the least. However, there are some issues with this line of reasoning that I believe reduce the likelihood and severity of this risk substantially.

First, we have no idea if human-level or superintelligent AI is even possible. Despite rapid progress, all we have today are narrow systems that can perform specific, limited tasks. General human-level AI remains a mirage on the horizon that seems to perpetually recede as we approach it. The theoretical paths to superintelligence remain entirely speculative. We simply don't know if AI will continue progressing at its current pace, much less shoot past the human level to vastly superhuman. While possible, it is far from inevitable.

Second, for an AI system to consider humanity a nuisance or threat and take action against us, it would require several capabilities that we have no reason to believe would naturally emerge from progress in AI:

  1. A broad, multifaceted superintelligence far beyond the human level in all domains. But AI progress today is narrow and specialized in focused areas. There is no clear path to broad superintelligence.

  2. Subjective experiences, values, emotions, and motivations that could lead it to perceive humanity as somehow problematic or in conflict with its goals. But AI systems today have no inner experiences, values, or emotions. They simply optimize objective functions (a minimal sketch of what that means follows this list). There is no reason artificial general intelligence would naturally develop human-like feelings or values.

  3. Complex deontological reasoning about abstract moral and philosophical topics like humanity's right to exist. But AI today is built from the bottom up using statistical machine learning methods that optimize concrete objectives. Such systems do not develop their own deontological frameworks or moral philosophies.

  4. The capability and intent to take over control of infrastructure and weaponry to attack or eliminate humanity. But AI systems today have no capability or motivation for that type of harmful physical agency in the real world. They operate within the confines of simulations and datasets. There is no clear path for them to take over weapons or critical infrastructure.
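
To make point 2 concrete, here is a minimal sketch of what "optimizing an objective function" means. This is a toy illustration in Python, assuming only NumPy, and is not any real AI system: a tiny linear model is fitted to synthetic data by gradient descent. "Learning" here is nothing more than adjusting numbers to make a loss value smaller; there are no goals, feelings, or intentions anywhere in the loop.

```python
# Toy sketch: "optimizing an objective function" via gradient descent.
# Not any real AI system -- just a linear model fitted to synthetic data.

import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic data: y = 3x + 2 plus a little noise.
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0          # model parameters, initialized arbitrarily
learning_rate = 0.1

for step in range(500):
    y_pred = w * x + b                # the model's prediction
    error = y_pred - y
    loss = np.mean(error ** 2)        # the objective function (mean squared error)

    # Gradients of the loss with respect to each parameter.
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)

    # "Learning" is nothing more than nudging the parameters downhill.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```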

In other words, the properties required for an AI system to seriously consider wiping out humanity, and to have a credible capability to do so, do not exist today and would have to be introduced, purposely or inadvertently, through some unforeseen breakthroughs. They would not likely emerge spontaneously from continued progress in AI. AI cannot just "become" radically more intelligent, emotionally aware, morally judgmental, and militaristically aggressive by itself. Those attributes would have to be deliberately built in by human researchers for some reason.

And that is perhaps the most important point. AI systems are designed by human engineers at companies and research labs to serve useful purposes. They are not independent entities free to spontaneously change their own architecture and pursue open-ended goals. While hypothetical breakthroughs could introduce unexpected behaviors, AI progress will continue to be driven and overseen by human researchers aiming to build systems that operate as intended to solve important problems.

These researchers and the companies they work for have no motivation to develop AI that is emotionally unstable, philosophically judgmental about humanity, and bent on our demise. That type of system would have no useful application and would not be a priority, or even acceptable, to develop.

In summary, while continued progress in AI will introduce new capabilities and require ongoing consideration of risks and ethics, the type of radical, multifaceted superintelligence that perceives humanity as a threat and has the means to wipe us out remains mostly in the realm of science fiction. Researchers are actively working to ensure that any advanced AI systems of the future remain grounded, controllable, and beneficial to humanity.

While prudent vigilance and discussion about these topics are certainly worthwhile, we have no cause for panic or fatalism about superintelligent machines turning against their creators and eliminating humanity in the foreseeable future. The AI risk landscape overall remains favorable, and we have reason to be optimistic that progress in AI can greatly benefit civilization if we're thoughtful and intentional about it. But we have time, and we are not in danger of a robot uprising any time soon.

Rest easy! The bots mean you no harm. 🤖🤝

  • Elon Musk started by building expensive sports cars for the wealthy.

  • He then used the profits to build more affordable cars, and then even more affordable cars, to maximize his market.

  • This allowed him to become very rich by selling to the largest possible market.

  • Similarly, AI companies are motivated to make their technologies affordable and widely available to maximize their profits.

  • This means AI will likely become very inexpensive and widely used, rather than concentrated among the wealthy.

The key risks of AI are:

  1. Bad actors using AI to do bad things. This can be addressed by enforcing existing laws against criminal uses of AI and by developing AI defense systems. Banning AI is not feasible.

  2. China achieving dominance in AI. The strategy to address this should be:

  • Allow big AI companies, startups, and open-source efforts to develop AI as quickly as possible. Do not grant big companies regulatory protection.

  • Have governments work with companies to use AI defensively against threats.

  • Drive US and Western AI advancement as fast as possible to maintain dominance over China.

  • Do not restrict AI development or spread due to risks of superintelligence, job loss, inequality, etc. These risks are overblown or can be addressed through policy. The key risk is China's authoritarian vision for AI.

The best path forward is to maximize the development and use of AI in free societies. This will yield economic and social benefits as well as security benefits from staying ahead of China's efforts. With policy and defensive systems to address criminal uses, the benefits of AI can be gained while mitigating major risks.

  • AI development started in the 1940s along with the first computers.

  • The first scientific paper on neural networks, the architecture behind modern AI, was published in 1943 (a toy sketch of that early neuron model follows this list).

  • Generations of AI scientists over the last 80 years worked on AI without seeing the successes we have today. They are legends.

  • Today, many engineers, some descended from those AI pioneers, are working to make AI a reality despite fearmongering portraying them as villains.

  • The author believes these modern AI engineers are heroes. His firm wants to support as many of them as possible.
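
To give a feel for how old and how simple the core idea is, here is a toy version of the threshold neuron described in that 1943 paper (the McCulloch-Pitts neuron). This is an illustrative Python sketch, not the paper's exact formalism: the neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold, and different thresholds make the same structure compute different logic functions.

```python
# Toy version of the 1943 McCulloch-Pitts threshold neuron -- the idea
# behind the first scientific paper on neural networks. Illustrative only.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs meets the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights of 1 and a threshold of 2, the neuron computes logical AND.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0

# Lowering the threshold to 1 makes the same structure compute logical OR.
print(mcculloch_pitts_neuron([0, 1], [1, 1], threshold=1))  # -> 1
```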
