Written by Mark Evilsizor
From his column Tech
In 1913, H. G. Wells wrote The World Set Free, a novel in which he postulated the possibility of nuclear power and atomic weapons. This was 20 years before physicist Leo Szilard worked out the possibility of a nuclear chain reaction, and 32 years before the first use of an atomic weapon. Wells was familiar with the scientific discoveries and writings of his day, and he projected forward how atomic science might find technological expression, for benefit or harm.
Today, some compare AI (artificial intelligence), and the current state of generative AI in particular, to the beginning of the atomic era. Companies are already using AI to help newly hired customer service representatives perform at the level of experienced employees. Whether this will lead companies to value new staff members more highly, or to dispense with their higher-paid co-workers, remains to be seen. Either way, AI is already reshaping the workplace as companies scramble to incorporate these new technologies.
In late May 2023, 350 leaders in the AI field signed a concise statement of warning, asking the public to consider the harm that may result from AI and not go blindly into the future. At the same time, other thoughtful people contended that such a warning is an exaggeration, suggesting instead that generative AI is just “autofill on steroids.”
Where do we begin to consider the future of AI and what governance should be applied to this new technology? I think science fiction (SF) stories or movies may serve us well as we grapple with this fascinating computer tool that confronts us today.
In SF, an author’s imagination often starts with the current world and then follows the trail into a potential future. SF stories may delight us with the prospect of future technologies and ways of being (I am still waiting for my flying car à la The Jetsons), or they may warn of future catastrophe (consider the Terminator movies). Let’s look at some illustrations to guide our thinking.
During the 1940s and ’50s, Isaac Asimov wrote the stories that would be compiled into the book I, Robot. Early on, he posits three laws that are hardwired into all robots to keep humans safe: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law[1]. He then goes on to tell stories of dilemmas in which these laws prove insufficient. It is both entertaining and thought-provoking. Perhaps these rules can serve as ethical guardrails for the AI being developed today.
Arthur C. Clarke wrote 2001: A Space Odyssey in 1968. It explores advanced space travel and several themes I won’t spoil, but it includes an iconic AI character, a computer named "HAL" (one letter off from IBM, the world leader in computing at the time). Through the interactions between the ship’s crew and HAL, Clarke explores what happens when an AI’s goals are not in sync with those of the people who use it. As we set simply defined goals, grant autonomy to act, and put resources at the disposal of AI (look up the paperclip maximizer thought experiment), we may want to remember HAL.
Brave New World was published in 1932 and still merits reading today. In it, Aldous Huxley describes how technology is used to pacify and control society. He explores genetic engineering, what makes us human, and how comfort and truth are not always compatible.
Carl Sagan’s Contact and Liu Cixin’s The Three-Body Problem both explore what it might be like to make contact with life beyond our planet. These books also explore faith and science, and they prompt thought about unintended consequences. If we imagine broadly and consider potential outcomes up front, perhaps we can avert the calamitous scenarios suggested in these and other works of SF.
Lastly, I leave you with the story of Nichelle Nichols, who played Lieutenant Uhura in the original Star Trek series. The TV program portrayed many technologies we have not yet created and explored ethical issues related to the use of science and technology. At one point, Nichols was considering leaving the show, that is, until she was introduced to Dr. Martin Luther King Jr. at a gathering. She told the civil rights leader of her intention, but he encouraged her to stay because, he said, her portrayal showed African-Americans “as intelligent, quality, beautiful people who can sing, dance, and can go to space…” Nichols stayed with the show and later worked with NASA, helping to recruit Dr. Sally Ride and Colonel Guion Bluford, the first American woman and the first African American, respectively, to go into space.
So it may be that the future directions of AI have already been imagined and are available for us to ponder. I have only scratched the surface of the inspiring, challenging, insightful content that is out there. Perhaps, by considering these stories, we can be more thoughtful as we shape the paths and technologies of the future.
Mark Evilsizor has worked in Information Technology for more than 25 years. He currently serves as head of IT for the Linda Hall Library in Kansas City, Mo. Opinions expressed are his own.