By Julianne Hill
Now that artificial intelligence is actively used by many in academia—in the classroom, for syllabus development and in research—what is its proper role in legal scholarship, and how can academic integrity be maintained?
It’s a question that’s largely left unanswered, says Nachman N. Gutowski, an assistant professor at the University of Nevada at Las Vegas William S. Boyd School of Law.
As new tools, such as ChatGPT’s Deep Research, are unveiled and scholars experiment with generative AI for research, drafting and writing, the meaning of scholarly authorship is called into question, adds Gutowski, the author of a forthcoming article, titled “Disclosing the Machine: Trends, Policies, and Considerations of Artificial Intelligence Use in Law Review Authorship,” accepted by the Jacksonville University Law Review.
Law journals are the gatekeepers of legal scholarship, Gutowski says, but his research found that many law reviews have been slow to adapt to the influence of AI and don’t have policies requiring disclosure of its use.
“Some of them are thinking about it. Some of them default to, ‘Well, we’ll catch you,’” and others simply don’t have a policy, he says.
In November 2024, he conducted, via email, a survey of ABA-accredited law schools and their journals and law reviews.
According to his forthcoming article, the majority of respondents agreed to participate only if their journal names were anonymized and their responses aggregated into the data.
“Perhaps the most telling aspect of the survey was the overwhelming silence from so many law reviews, who chose not to respond to inquiries about their stance on AI,” according to the article. “This lack of engagement suggests a reluctance, or perhaps an inability, to confront the nuanced and evolving challenges posed by AI in scholarly work.”
Of those who answered, 68% of respondents said they do not have a policy addressing the use of AI at all, while 16% said they did, and another 16% said a policy is in development. Meanwhile, 56% did not require law review authors to disclose whether AI tools were used, according to the article.
Without those guardrails, it’s left to scholars and editors to navigate best practices and the ethical use of AI, he says.
“We’re kind of in the Wild West,” Gutowski adds.
Daniel W. Linna Jr., the director of law and technology initiatives at the Northwestern University Pritzker School of Law, agrees.
“I just had an article published in the University of Chicago Legal Forum on deepfakes, and, interestingly enough, no one asked me any questions about what I used or didn’t use,” says Linna, who is also a 2018 ABA Journal Legal Rebel.
To underscore how quickly things are changing, Andrew Perlman, the dean at the Suffolk University Law School, posted an article in December 2024 that used ChatGPT to write a paper about how to use ChatGPT when writing a paper. It’s titled “Generative AI and the Future of Legal Scholarship.”
Back when ChatGPT was first announced amid much fanfare in November 2022, “the actual writing of a scholarly piece didn’t seem plausible. The technology wasn’t sufficiently advanced,” Perlman says.
But the tool has quickly evolved. And in December 2024, Perlman crafted a series of prompts to develop the article, which serves two purposes, he says.
“One was to demonstrate the ability of generative AI to contribute meaningfully to scholarship and, secondly, to set out a theory for what that might look like, so that people could actually see how generative AI can, quote, unquote, think about scholarship and while producing it at the same time,” he says.
Andrew W. Torrance, an associate dean at the University of Kansas School of Law, has found AI to be such a useful tool that he and co-author Bill Tomlinson, a professor of informatics and education at the University of California at Irvine, submitted a paper, titled “ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI,” with AI originally listed as a co-author, handling a variety of tasks throughout the researching and writing process.
They submitted the article in 2023 to law and science journals but were turned away, he says.
Eventually, the piece, which spells out the authors’ previous experiences in using AI, was published in the SMU Law Review Forum, with the contributions of AI explained in the first “beefy” footnote, Torrance adds.
As for best practices, Torrance says he and his colleagues use AI only early in the article development process.
Before submitting a prompt for help crafting the article, he and his colleagues conduct background research, create a list of essential literature to inform the piece, and write a short abstract outlining the major points and arguments, he adds. That’s all fed into the AI, along with a definition of the target audience and reminders not to plagiarize, violate copyrights or make things up.
Once generative AI responds, Torrance’s team is “very, very careful” to weed out “hallucinations”—algorithmic pattern misperceptions that create inaccurate or nonsensical output—with rigorous fact-checking by a team of research assistants who verify every factual assertion, he says. He uses Turnitin, a plagiarism detector, to ensure that generative AI has not crossed a line.
“That is a sine qua non of writing with AI,” he says. “You cannot trust it to be true.”
Despite the heavy lifting before and after feeding information to generative AI, ultimately, “it saves time,” Torrance says. Not only does it help with generating ideas, he says, it gives a head start on organizing information and stating it clearly and concisely.
Gutowski makes the case that law reviews should encourage transparency, and AI use should not be hidden or stigmatized.
While overly restrictive bans or unreliable detection tools are not practical, thoughtful policies supporting the use of generative AI to carefully enhance and complement human intellectual work are needed as the tools become more widely used, Gutowski says.
Eventually, using the tool will be as common as using a computer, and there will be no need to spell out the contributions of AI to legal papers, some say.
“What submitted pieces should be judged on is their quality and their accuracy and their sophistication; whether a scholar engaged with generative AI in one form or another to produce the piece shouldn’t be relevant,” Perlman says. “In my view, judge the pieces on their merits.”