Digital Technology

The 12 Greatest Dangers Of AI

In his new book Taming Silicon Valley, the AI expert Gary Marcus warns about the greatest dangers of AI, explains why we might eventually need a universal basic income and an AI agency, and argues that we all need to be aware of these problems and speak up.

What are the greatest dangers of AI?

Gary Marcus: I gave a list of 12 immediate dangers of AI in Taming Silicon Valley; the one I am most worried about is how automatically generated disinformation and deepfakes could influence elections. In the long term, we have no idea how to make AI safe and reliable, and that just can’t be good.

Why do you think we might eventually need a universal basic income?

GM: Because over some time frame, AI will replace most jobs, and a handful of oligarchs will have most of the money.

Why do you think we need an AI agency?

GM: Because, for better or worse, AI is reshaping everything, and we need an agency that acts dynamically to take advantage of opportunities and to mitigate risks. One function would be prescreening new technologies, forcing companies to show that their benefits outweigh their risks.

What can an average person do about all this?

GM: We need to speak up and insist that we will throw out leaders who would give away the store to big tech companies. And the time for boycotting GenAI may come soon.

The 12 immediate dangers of AI, according to Gary Marcus

1. Deliberate, automated, mass-produced political disinformation. “Generative AI systems are the machine guns (or nukes) of disinformation, making disinformation faster, cheaper, and more pitch perfect…During the 2016 election campaign, Russia was spending $1.25 million per month on human-powered troll farms that created fake content, much of it aimed at creating dissension and causing conflict in the United States.”

2. Market manipulation. “Bad actors won’t just try to influence elections; they will also try to influence markets. I warned Congress of this possibility on May 18, 2023; four days later, it would become a reality: a fake image of the Pentagon, allegedly having exploded, spread virally across the internet…and the stock market briefly buckled.”

3. Accidental misinformation. “Even when there is no intention to deceive, LLMs can spontaneously generate (accidental) misinformation. One huge area of concern is medical advice. A study from Stanford’s Human-Centered AI Institute showed that LLM responses to medical questions were highly variable, often inaccurate.”

4. Defamation. “A special case of misinformation is misinformation that hurts people’s reputations, whether accidentally or on purpose…In one particularly egregious case, ChatGPT alleged that a law professor had been involved in a sexual harassment case while on a field trip in Alaska with a student, pointing to an article allegedly documenting this in The Washington Post. But none of it checked out.”

5. Nonconsensual deepfakes. “Deepfakes are getting more and more realistic, and their use is increasing. In October 2023 (if not earlier) some high school students started using AI to make nonconsensual fake nudes of their classmates.”

6. Accelerating crime. “The power of Generative AI…[is already being used for] impersonation scams and spear-phishing…The biggest impersonation scam so far seems to revolve around voice-cloning. Scammers will, for example, clone a child’s voice and make a phone call with the cloned voice, alleging that the child has been kidnapped; the parents are asked to wire money, for example, in the form of bitcoin.”

7. Cybersecurity and bioweapons. “Generative AI can be used to hack websites to discover ‘zero-day’ vulnerabilities (which are unknown to the developers) in software and phones, by automatically scanning millions of lines of code—something heretofore done only by expert humans.”

8. Bias and discrimination. “Bias has been a problem with AI for years. In one early case, documented in 2013 by Latanya Sweeney, African American names induced very different ad results from Google than other names did, such as advertisements for researching criminal records.”

9. Privacy and data leaks. “In Shoshana Zuboff’s influential The Age of Surveillance Capitalism, the basic thesis, amply documented, is that the big internet companies are making money by spying on you, and monetizing your data. In her words, surveillance capitalism ‘claims human experience as free raw material for translation into behavioral data [that] are declared as proprietary behavioral surplus, fed into [AI], and fabricated into prediction products that anticipate what you will do now, soon, and later’—and then sold to whoever wants to manipulate you.”

10. Intellectual property taken without consent. A lot of what AI will “regurgitate is copyrighted material, used without the consent of creators like artists and writers and actors…The whole thing has been called the Great Data Heist—a land grab for intellectual property that will (unless stopped by government intervention or citizen action) lead to a huge transfer of wealth—from almost all of us—to a tiny number of companies.”

11. Overreliance on unreliable systems. “In safety-critical applications, giving LLMs full sway over the world is a huge mistake waiting to happen, particularly given all the issues of hallucination, inconsistent reasoning, and unreliability we have seen. Imagine, for example, a driverless car system using an LLM and hallucinating the location of another car. Or an automated weapon system hallucinating enemy positions. Or worse, LLMs launching nukes.”

12. Environmental costs. “None of these risks to the information sphere, jobs, and other areas factors in the potential damage to the environment…Generating a single image takes roughly as much energy as charging a phone. Because Generative AI is likely to be used billions of times a day, it adds up…the overall trend for the last few years has clearly been towards [training] bigger and bigger [LLM] models, and the bigger the model, the greater the energy costs.”
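
To make the “it adds up” point concrete, here is a minimal back-of-envelope sketch. The numbers are illustrative assumptions, not figures from the book: roughly 0.012 kWh per image (a typical smartphone battery holds about 12 Wh, matching the “charging a phone” comparison) and a hypothetical one billion generations per day.

```python
# Back-of-envelope sketch of Generative AI's image-generation energy use.
# All inputs are assumptions chosen to illustrate scale, not measured data.

KWH_PER_IMAGE = 0.012        # assumed: ~ one smartphone charge per image
GENERATIONS_PER_DAY = 1e9    # assumed: a hypothetical billion generations/day

daily_kwh = KWH_PER_IMAGE * GENERATIONS_PER_DAY
yearly_gwh = daily_kwh * 365 / 1e6

print(f"Daily energy:  {daily_kwh / 1e6:.1f} GWh")   # ~12 GWh per day
print(f"Yearly energy: {yearly_gwh:,.0f} GWh")       # ~4,380 GWh per year
```

Under these assumed inputs, usage alone comes to several terawatt-hours per year, before counting the cost of training ever-larger models.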
