
Monday, June 5, 2023

AI Will Not Take Over

By Bettina Sperry

[Editor’s Note: In response to Maureen Dowd’s “Don’t Kill ‘Frankenstein’ with Real Frankensteins at Large” (NY Times, May 27), Bettina weighed in on the threat of an AI takeover.]

Maureen Dowd is right that a lot of colleges are dropping liberal arts degrees. And Noor Anand Chawla acknowledges it too, in her article “Why Asian Universities Are Embracing US Liberal Arts Programs” (JSTOR Daily, May 21):
…in the early years of this century, it seemed that the United States was shifting focus from promoting liberal arts education. These “worrying signs” of the United States “turning away from the tradition of liberal arts education that has made it a global leader in post-secondary education over the past century,” were noted by Pericles Lewis, the President of Yale-NUS College, Singapore, in a 2013 essay published in the Harvard International Review.
    Lewis points out that while language and literature departments were being cut in the US, as students preferred to pursue technical or pre-professional majors [emphasis mine….]
    See also “The Decline of Liberal Arts and Humanities” (Wall Street Journal, March 28), in which students discuss higher education and their choice of major.
    Of course, student preferences for programs leading to well-paid jobs aren’t the only factor, but many of those jobs are to help develop AI.
    And Dowd’s concern is reflected in a recent CNN article, “Experts are warning AI could lead to human extinction. Are we taking it seriously enough?” (Oliver Darcy, May 31):
On Tuesday, hundreds of top AI scientists, researchers, and others — including OpenAI chief executive Sam Altman and Google DeepMind chief executive Demis Hassabis — again voiced deep concern for the future of humanity, signing a one-sentence open letter to the public that aimed to put the risks the rapidly advancing technology carries with it in unmistakable terms.
    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the letter, signed by many of the industry’s most respected figures.
    It doesn’t get more straightforward and urgent than that. These industry leaders are quite literally warning that the impending AI revolution should be taken as seriously as the threat of nuclear war….
I’m not a believer that AI will take over everything. Rather, given human nature and violence, I think that over time AI will become self-limiting, owing to human interference, human destructiveness, and our ability to pull the plug. Humans are too selfish for an AI takeover. The hand of greed is always present.
    The article “Jim Chanos, Bank of America, and others still aren’t jumping on the AI hype train” (by George Glover, Business Insider, June 3) supports my view:
For Chanos and others, the craze that’s currently taking Wall Street by storm has more in common with a fad like the metaverse than revolutionary technologies that genuinely disrupted the stock market.
    Corporate use of AI is already an issue, and arguments are in progress. See, for example, the NY Times article “Microsoft Calls for A.I. Rules to Minimize the Technology’s Risks” (David McCabe, May 25).
    Apple co-founder Steve Wozniak called for ethical regulation (and for the use of AI to detect bad uses of AI) in the May 2nd CNN broadcast, “‘The Godfather of AI’ quits Google and warns of its dangers. Apple co-founder weighs in.” Watch his short video statement here.
    The issue of human safety is currently being tossed around by those at the leading edge of AI research. See, for example, “‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation” (by Josh Taylor and Alex Hern, The Guardian, May 2). And the following, earlier reference to Hinton comes from the April 17 Yahoo News article “Elon Musk, who cofounded OpenAI, says he tried to make it ‘the furthest thing from Google’ after disagreeing with Larry Page over AI safety”:
He [Dr Geoffrey Hinton] is not alone in the upper echelons of AI research in fearing that the technology could pose serious harm to humanity. Last month, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough.” Musk told Fox News that Page wanted “digital superintelligence, basically a digital god, if you will, as soon as possible.”
    Clearly, there is a concern that AI has the ability to misinform and mislead humans. One has to wonder what tools are available to continuously fact-check the information AI produces. And if none exists, then the line between truth and fiction becomes ever more blurred, chaotic, problematic, and harmful. From the same Yahoo News report:
Toby Walsh, the chief scientist at the University of New South Wales AI Institute, said people should be questioning any online media they see now:
    “When it comes to any digital data you see – audio or video – you have to entertain the idea that someone has spoofed it.”
    Somewhere, at some point, someone will establish limits to prevent the destruction AI is believed capable of causing, including the destruction of the human race.


A closing thought on the value of humanities education, from the Noor Anand Chawla article quoted at the top:
Ask any educator what sets American higher education apart from the rest of the world, and the response is likely to be its focus and encouragement of a robust liberal arts curriculum. The arts, in general, play a large role in contributing to holistic education that fosters creativity, highlights the importance of collaboration for skill development, and teaches innovative problem-solving. Further, through exposure to the arts, one can learn to appreciate cultural diversity and value freedom of expression while cultivating critical thinking skills, which in the long run, have been known to change lives.

Copyright © 2023 by Bettina Sperry
