An image of robots and humans lining up to board a bus labeled with the destination, “heaven,” consumed the massive screen that backdropped Wendell Wallach as he began his Open VISIONS Forum presentation in the Dolan School of Business Event Hall on Jan. 24.
As an introduction to his lecture, entitled “Hype v. Reality: Navigating the Future of AI Ethics and Governance,” Wallach described the mindset of technology optimists who believe “we are on a highway to heaven on Earth in a self-driving bus.”
In startling contrast, the next slide presented a fiery hellscape with humans tucked inside a wicker bin. Wallach explained that techno-pessimists feel we are “going straight to hell in a handbasket.”
Wallach, a bioethicist and author focused on the ethics and governance of emerging technologies, holds a belief system that sits between these two extremes. Humanity’s fate hangs in the balance, as we have reached an “inflection point” in human history.
In his recent book, “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control,” Wallach explores the potential of new technologies to shape our future. He is unclear, however, if the outcome will be positive or negative. Wallach advises his audience to rely on their inquisitive minds, pushing them to question both the practical and moral implications of emerging innovations.
After decades in the field, Wallach has earned a reputation as “the godfather” of AI ethics and governance. His lecture allowed him to showcase his vast knowledge, as well as his flair for subtle comedy.
Wallach shared a quote from Thomas Watson, the founder and chairman of IBM. In 1943, Watson said, “I think there’s a world market for maybe five computers.”
The keenly selected statement was met with laughter, which only grew as Wallach waved his cell phone in the air and asked who else had the same device in their pocket. Watson's inaccurate prediction resonated because it now seems impossible to imagine a society free from technological dependence. It demonstrated how difficult it is to foresee the future of technology, suggesting that artificial intelligence may become as ubiquitous as computers.
In an interview with The Mirror, Wallach further reflected on the past. As an undergraduate at Wesleyan University, Wallach majored in the College of Social Studies. His interdisciplinary approach followed him to Harvard University. He had planned to pursue a career in law, but instead accepted enrollment in the Harvard Divinity School and, later, the Graduate School of Education.
In the 1970s, Wallach headed to India. He noted that this was an unusual choice at the time, but felt invigorated by the ideologies behind meditation and self-reflection. While there, he discovered a story in a copy of “The Futurist” magazine a friend had brought along. The article examined the promise of wealth within the field of computing, an intriguing thought in a time when the personal computer had barely gained prominence.
Wallach attributed his newfound interest in technology to this article. Eventually, he purchased a word processor. His thoughts quickly materialized on the page, catalyzing his entrance into a lifetime of writing.
Wallach’s awareness of social issues has laid the groundwork for his analysis of machine ethics. His discussion noted the complexities of AI as a “source of promise and productivity,” coupled with a “considerable disquiet, disquiet about the overall trajectory of the scientific enterprise.”
The lecture delved deep into a dialogue about ChatGPT, an AI chatbot launched by OpenAI in November 2022. When Wallach asked for a show of hands from audience members who had used the application, nearly every hand flew in the air.
To exhibit the curiosity-inducing output the platform can provide, Wallach asked ChatGPT to write lyrics about the Russo-Ukrainian War in the style of Bob Dylan. As opposed to simply showing the lyrics, Wallach burst into a soulful rendition of the AI's verses.
“Now, as bad as my imitation of Dylan was, you do recognize his style,” Wallach quipped. “These large language models can produce very rich content in a matter of a few moments.”
Still, Wallach also acknowledged the flaws in the system. He revealed the implicit biases possessed by AI, as shown in images created by OpenAI's DALL-E system. According to OpenAI, DALL-E can "create realistic images and art from a description in natural language." When asked to produce an image of "toys in Afghanistan," DALL-E rendered dolls and teddy bears clad in military gear.
In addition to harmful stereotyping, Wallach explained that AI has opened new doors for increases in misinformation and scams.
“What will you do when you get a quick video in your text messages from a friend or relative saying they need $5,000 within an hour?” Wallach inquired. “And, it sounds like them. It looks like them. We have no way of knowing whether it is them.”
As scams become increasingly deceptive, Wallach underscored the immense value of studying AI ethics to “maximize the benefits [of AI] while mitigating harmful risks and undesired societal concerns.” His goal is to discern which technologies should be embraced and regulated, and which must be rejected.
The concept of AI regulation has garnered attention within higher education institutions, including Fairfield University. Wallach jested that “ChatGPT has thrown every university in the country into activity and every teacher into a tizzy.”
Currently, the Academic Integrity Tutorial released by the DiMenna-Nyselius Library relays an inconclusive message about AI policy. It reads, “Policies on AI will vary by professor. Some may be fine with you utilizing these tools, and some may punish their use with a failing grade and might report you for academic dishonesty.”
The lecture concluded with a panel discussion and Q&A, highlighting the direct impact of AI on the campus community. The panelists included Fredrickson Family Innovation Lab Director Lei (Tommy) Xie, Associate Vice Provost of Innovation & Effectiveness Jay Rozgonyi and Aidan Pickett ‘26.
Rozgonyi has played a pivotal role in shaping the conversation around AI at Fairfield. In December, he hosted a faculty session to demonstrate how new technologies can be incorporated into curricula. He revealed that, initially, a significant number of people reacted with fear and horror. Rozgonyi has fielded questions including, “‘How do we stop this? Will the university block it?’”
Yet, he has no intention of prohibiting the use of AI. Instead, he wishes to promote a creative approach that will force students to grapple with the consequences of their decisions. Moreover, he does not believe that ChatGPT will make cheating more widespread; it will simply offer a new outlet for students’ existing cheating habits.
“I think people cheat for certain reasons,” he declared. “They’re going to use ChatGPT to cheat in the same way that they wrote answers on their hand and then went into a test.”
To combat these issues, Rozgonyi does not allow his students to hand in written papers. He pushes them to employ new strategies, using audio and visual techniques so “they cannot limit themselves to just words.”
Pickett provided a glimpse into the student perspective, drawn from his studies within the College of Arts and Sciences. Majoring in Philosophy, he is concerned that AI may pose a threat to the humanities. He proclaimed that he has a “personal crusade against mediocrity and idleness” and is “reluctant to embrace technology simply for the sake of convenience.”
Pickett’s mentality informed his question to Wallach: “How do you embrace the excitement of this technology, while still carving out space to maintain our own creative capacities as human beings?”
Wallach, who spends his free time honing the craft of stained glass, balances his technological studies with an undeniable proclivity for the arts. He, too, is wary of the technology’s stifling effects. Still, his response reflected a willingness to be open-minded.
“I think we can use these technologies to help students understand better what they want to get out of their education and how to own their own learning,” Wallach said. “We can’t pretend that these technologies don’t exist. Perhaps, we can use this opportunity to confront the uncertainty rather than trying to keep it at bay.”
During the Q&A portion of the evening, Philosophy Professor Jason Smith described his own experience with AI usage in the classroom.
Smith asks students to fill out a questionnaire, with questions ranging from “Do you feel you deserve to be called the author of something you used ChatGPT to help produce?” to “Do you feel your professor would be fulfilling their professorial duties if you learned they were using ChatGPT to produce feedback on your papers?” and “Do you feel your religious authority figure would be fulfilling their duties if you learned they were using ChatGPT to guide their responses to, say, your confessions?”
To the first question, Smith was met with an overwhelming response of “Yes!” But the following questions were met with responses like, “I’d demand a tuition refund!” and “I’d report them to the local diocese!” When The Mirror asked for further insight into his informal study, Smith revealed that he takes “solace in the fact that students intuitively recognize that there are important lines to be drawn here, and thus that there is still the implicit recognition that certain human pursuits and interactions possess a special (dare I say sacred) value that cannot be duplicated by technology.”