By Marnie Roestel and Sarah S. Learman, Ph.D. – Office of Curriculum and Instructional Support
In our first article, AI: A New Opportunity for Academic Dishonesty, we learned that the newest artificial intelligence tools are sometimes used to complete writing assignments and other coursework, and we explored our response to the technology and various approaches to its use in the classroom. In our second article, AI in Teaching and Learning: Overcoming Artificial Intelligence Instructional Intimidation, faculty were encouraged to practice creating AI-generated responses to course prompts and to consider options for mitigating the use of AI to complete work. In this final installment on the AI paradox, we review the tools available for detecting AI-generated work.
With the ‘rise of the AI machine,’ it is becoming increasingly difficult to determine what was composed by a human and what was written artificially. Justin Gluska, founder of the Gold Penguin, states, “it’s getting harder and harder to trust the things we read. In a world where anyone can say anything, it’s important to spot the difference between fact and fiction. With the rise of AI-generated content, we’re at risk of losing levels of authenticity previously represented with some of the most popular online websites, blogs, and scholarly produced content.” So how might we determine if a human or a machine wrote the work submitted?
ChatGPT took higher education by storm in early 2023, leaving us all feeling a bit bewildered and behind in the game. Shortly thereafter, we saw the release of several sites designed to identify AI-created works (GPTZero, GPT-2 Output Detector, Content at Scale, Crossplag, etc.). These are promising tools, but they should be used with caution. Before relying on one, users should understand how the tool works, how it analyzes content to reach its conclusions, and what the returned results actually mean.
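To make that concrete, here is a minimal sketch of one signal that several of these detectors build on: how "surprising" a passage is to a reference language model, measured as perplexity. This is an illustration only, not any vendor's actual algorithm; the choice of the GPT-2 model, the Hugging Face transformers library, and the threshold of 50 are all assumptions made for demonstration.

```python
# Illustrative perplexity check (not any detector's real implementation).
# Low perplexity means the text is "unsurprising" to the reference model,
# which is weak evidence of machine authorship, never proof.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the GPT-2 perplexity of `text` (lower = more 'machine-like')."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Supplying labels makes the model return the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

sample = "The industrial revolution transformed economies across Europe."
score = perplexity(sample)
# The 50 threshold below is purely illustrative; real tools calibrate on large corpora.
print(f"Perplexity: {score:.1f} -> {'possibly AI-generated' if score < 50 else 'likely human'}")
```

Even this simple example shows why results need interpretation: short, formulaic, or heavily edited human writing can score "machine-like," so a low score is a prompt for conversation, not a verdict.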
It may seem logical to assume that SafeAssign, Blackboard’s plagiarism detection tool, can also detect AI-generated work. While Blackboard is currently pursuing multiple approaches to AI detection, such as natural language processing techniques and machine learning algorithms that identify patterns in text indicative of AI-generated content, SafeAssign is not yet capable of flagging work as AI-created (Matthijs, 2023). Blackboard is working to expand SafeAssign beyond its current plagiarism detection, with an upgrade to come.
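As a rough illustration of what "machine learning algorithms that identify patterns in text" can look like, the sketch below trains a toy classifier on a handful of hypothetical labeled examples. Nothing here reflects SafeAssign's or Blackboard's actual implementation, which has not been published; the training sentences, features, and model choice are assumptions chosen for demonstration.

```python
# Toy text classifier sketch: learns surface patterns that separate the two
# labels in a tiny hypothetical training set. Real systems need large,
# carefully labeled corpora and still make mistakes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = AI-generated, 0 = human-written.
texts = [
    "In conclusion, it is important to note that several factors contribute.",
    "Honestly, I rewrote this paragraph three times before it felt right.",
    "Furthermore, the aforementioned considerations underscore the significance.",
    "My grandmother's kitchen always smelled like cardamom and burnt toast.",
]
labels = [1, 0, 1, 0]

# Word n-gram TF-IDF features feeding a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

new_essay = "It is important to note that the results underscore the significance of the findings."
prob_ai = clf.predict_proba([new_essay])[0][1]
print(f"Estimated probability of AI authorship: {prob_ai:.2f}")
```

Because such classifiers can produce false positives, any score they return is best treated as one signal among many rather than proof of misconduct.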
There is no foolproof cheating detection tool on the market. Whether the goal is securing exams, identifying plagiarism, or spotting paid services that provide answers or write papers, the best line of defense is thoughtful assignment design:
- Make it Personal: Revise graded activities to include personal experiences or self-reflection, where AI tools struggle to produce a suitable response drawn from the author’s own perspective.
- Create Multi-Step Writing Activities: Instead of having students submit one final paper, require several submissions, such as an outline, one or more drafts, and the final paper. This lets the instructor see how each student’s writing develops over time and spot changes in writing style should AI be used.
- Re-design/Refresh the Activity: Adjust assessments to enhance security, incorporate open-ended questions, or have students work collaboratively in groups.
While no method will identify AI-created work beyond a shadow of a doubt, your best defense is pairing the available AI-detection tools with your own judgment!
References
Gluska, Justin. (2023). How to Check if Something Was Written with AI. Gold Penguin. https://goldpenguin.org/blog/check-for-ai-content/
Matthijs, Nicolaas. (2023). Artificial Intelligence and Its Impact on the Learn and SafeAssign Roadmap. Insights & Innovation Blog, Daily Digest, Anthology Community.