Artificial Intelligence, Academic Integrity, & Equity

K. Park / October 24, 2020


My question for my blog post was: “Academic integrity is an important part of higher education.  How can we equip instructors to use academic integrity tools equitably in the classroom?”


The International Center for Academic Integrity defines academic integrity as “a commitment, even in the face of adversity, to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage” (International Center for Academic Integrity, 2020). How this plays out differs from institution to institution, but one common focus is the regular reference to issues of plagiarism and cheating in the classroom. Instructors can choose how they approach these two issues in their courses, and opinions range far and wide. On one end are instructors who focus on trusting students and teaching them not to cheat or plagiarize; on the other are those who say cheating is going to happen and that they have a responsibility to prevent or catch it.

This post is going to focus on the instructors who fall closer to the latter category.  Many instructors will use test design methods such as shuffling the order of questions and answers and other similar settings on their exams to prevent cheating.  In addition to this, they may adopt tools that help identify potential issues of plagiarism or cheating, such as text checking software or online proctoring software.

However, recent issues with tools like Proctorio[1], and the realization that machine learning and artificially intelligent software can be just as biased[2] as the individuals who create it, raise many questions.


Artificial intelligence (AI) can be biased or even racist. MIT Media Lab researcher Joy Buolamwini discovered that “most facial-recognition software does not accurately identify darker-skinned faces and the faces of women” (Coded Bias, 2020) and began pushing for legislation to guard against bias in the algorithms used. Her work is documented in the film Coded Bias.

In a recent study, researchers testing various plagiarism tools concluded: “In the literature, most of the studies use the term ‘plagiarism detection tools’. However, plagiarism and similarity are very different concepts. What these tools promise is to find overlapping texts in the document examined. Overlapping texts do not always indicate plagiarism, thus the decision about whether plagiarism is present or not should never be taken on the basis of a similarity percentage. The similarity reports of these tools must be inspected by an experienced human being, such as a teacher or an academic, because all systems suffer from false positives (correctly quoted material counted as similarity) and false negatives (potential sources were not found)” (Foltýnek et al., 2020, p. 28).
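To see why a similarity percentage alone can’t establish plagiarism, here is a minimal, hypothetical sketch of similarity scoring using word-trigram overlap. No commercial checker works this simply, and the texts and function names here are invented for illustration; the point is only that a properly quoted and cited passage still overlaps heavily with its source, producing exactly the kind of false positive the researchers say a human must inspect:

```python
import re

# Minimal sketch: similarity as Jaccard overlap of word trigrams.
# This is NOT how any commercial tool works; it only illustrates the concept.

def trigrams(text: str) -> set:
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

source = "a commitment even in the face of adversity to six fundamental values"

# A correctly quoted and cited passage still shares most of its trigrams
# with the source, so it scores as highly "similar" -- a false positive.
quoted = ('As the ICAI puts it, "a commitment even in the face of adversity '
          'to six fundamental values" (ICAI, 2020).')

print(f"similarity: {similarity(source, quoted):.0%}")
```

The score comes out well above half even though the second text is a legitimate quotation, which is why, as the study argues, a teacher or academic has to review the similarity report rather than act on the percentage.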

In an NPR article published in 2014, Howard is quoted as saying, “The use of a plagiarism-detecting service implicitly positions teachers and students in an adversarial position.” She argues it’s policing without probable cause: “The students have to prove themselves innocent before their work can be read and graded” (Turner, 2014). From what I understand about stereotype threat and its impact on achievement and persistence (see my previous blog post about welcoming courses), this could be a harmful practice, particularly for minority students, if an instructor uses the tool punitively instead of as a learning opportunity.


All of the issues above make it sound like it isn’t possible to use any of these tools equitably. Yet technology keeps changing, and as we move into the future it becomes more intertwined with our day-to-day actions. AI is already widely used, not just for facial recognition (however poorly), but for screening job applicants and recommending videos to watch or items to purchase. We receive advertisements based on our online searches and even on the email in our inboxes. What can one do?

In a recent paper, Associate Professor Erica Southgate (2020) proposes a framework for thinking ethically about AI in educational systems. It builds upon the United Nations Universal Declaration of Human Rights and the common principles developed by the Australian Human Rights Commission, known as PANEL.

At a high level, the author proposes that institutions work through this framework prior to adopting a tool that uses AI:

  1. Awareness – Raise awareness about AI, where it is used, and what it is capable of doing, so that all stakeholders can be empowered to participate in conversations and decisions.
  2. Explainability – This is related to awareness, but universities should work to make the information about the systems accessible and easy to understand.  Universities should be able to clearly explain why they’re using AI and what it is intended to do and what it actually does.
  3. Fairness – Fairness concerns how AI may influence the way we understand students; standards and training should be in place to mitigate issues of bias, representational skews, and the like.
  4. Transparency – In this point, the focus is on being able to understand why an AI system made a particular decision.  It should be clear how certain conclusions were drawn, not just because the system told us so, but because we understood how the algorithm and system works.
  5. Accountability – This last point focuses on regulation and standards that identify which types of operations and decisions should not be handed to autonomous systems.  As AI grows, its code improves, and these systems become more intertwined with our daily lives, it’s important to know what’s appropriate.

I think these pillars work well in the classroom, not just at the institutional level. To support instructors in the classroom, tools and technologies will be needed to help them identify academic integrity concerns. When these systems are used, technology staff need to work with instructors to ensure that they and their students understand how an AI system interacts with their work. Instructors should also communicate to students when and how they use these tools to make decisions, and be clear about how that relates to any institutional policies or procedures. This doesn’t cover every issue the use of AI will raise, but it’s a necessary start, or as the title of the paper says, “a beginning-of-the-discussion.”


Coded Bias. (2020). About the film. Retrieved from Coded Bias:

Foltýnek, T., et al. (2020). Testing of support tools for plagiarism detection. International Journal of Educational Technology in Higher Education, 17(1), 1–31.

International Center for Academic Integrity. (2020). Fundamental Values of Academic Integrity. Retrieved from

Southgate, E. (2020). Artificial intelligence, ethics, equity and higher education: A ‘beginning-of-the-discussion’ paper. National Centre for Student Equity in Higher Education, Curtin University, and the University of Newcastle.

Turner, C. (2014). Turnitin and the debate over anti-plagiarism software. Retrieved from nprED How learning happens:

[1] Editorial | Virtual proctors worsen the overall academic environment.

[2] Artificial intelligence has a problem with bias, here’s how to tackle it.
