An artificial intelligence algorithm called GPT-3 wrote an academic thesis on itself in two hours.
The researcher who prompted the AI to write the paper submitted it to a journal with the algorithm’s consent.
“We just hope we did not open a Pandora’s box,” the researcher wrote in Scientific American.
A researcher from Sweden gave an AI algorithm known as GPT-3 a simple directive: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”
Researcher Almira Osmanovic Thunström said she stood in awe as the text began to generate. In front of her was what she described as a “fairly good” research introduction that GPT-3 had written about itself.
After the successful experiment, Thunström, a Swedish researcher at Gothenburg University, sought to get a whole research paper out of GPT-3 and publish it in a peer-reviewed academic journal. The question was: Can someone publish a paper from a non-human source?
Thunström wrote about the experiment in Scientific American, noting that the process of getting GPT-3 published brought up a series of legal and ethical questions.
“All we know is, we opened a gate,” Thunström wrote. “We just hope we did not open a Pandora’s box.”
After GPT-3 completed its scientific paper in just two hours, Thunström began the process of submitting the work and had to ask the algorithm whether it consented to being published.
“It answered: Yes,” Thunström wrote. “Slightly sweaty and relieved (if it had said no, my conscience could not have allowed me to go on further), I checked the box for ‘Yes.’”
She also asked if it had any conflicts of interest, to which the algorithm replied “no,” and Thunström wrote that the authors began to treat GPT-3 as a sentient being, even though it was not.
“Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work,” Thunström wrote.
The sentience of AI became a topic of conversation in June after a Google engineer claimed that a conversational AI technology called LaMDA had become sentient and had even asked to hire an attorney for itself.
Experts said, however, that the technology has not yet advanced to the point of creating machines that resemble humans.
In an email to Insider, Thunström said the experiment has drawn positive responses from the artificial intelligence community, and that other scientists are trying to replicate its results. Those running similar experiments are finding that GPT-3 can write about all subjects, she said.
“This was our goal,” Thunström said, “to awaken multilevel debates on the role of AI in academic publishing.”
Read the original article on Insider