After two years of implementing artificial intelligence (AI) policies and practices across all Upper School (US) classes, many students have expressed concerns about the effectiveness of these measures in preventing cheating, questioning their impact on learning and their overall benefit to the US community.
For the past two years, major classes at MFS have been required to include AI policies in their syllabi, outlining acceptable and unacceptable uses of AI for classwork and other assignments. These policies arose from the need to regulate student use of ever-changing generative AI technology, specifically large language models, a type of generative AI that produces human-like text and completes language-based tasks.
Additionally, a number of minor courses have recently added AI policies to their class rules as well.
Although each class has different AI policies specific to its coursework and the nature of its curriculum, all classes follow the MFS Acceptable Use of Technology Policy, a document that is re-released with updates at the beginning of every school year to students and families. This policy states that class curricula should strive to “foster the responsible and ethical use of artificial intelligence” and “provide a learning environment that equips students with the skills and knowledge required for the future.” With “concerns regarding academic integrity presented by AI technologies,” MFS divisions and departments have created their own specific “policies and practices related to generative AI.”
However, many US students don’t believe that current AI policies in classes are effective in preventing the academic integrity issues that AI advancements bring.
Jillian Godleski ’27 said, “I feel like they’re not effective because the people who want to cheat are going to find a way to cheat anyway.”
Similarly, Lily Miller-Rowe ’27 said, “I don’t think that it’s actually preventing anything. I think that the people who are really, really dedicated to cheating will continue to cheat no matter what.”
“AI has become such a big part of our world, and it’s advancing so quickly that the people who cheat will get away with it because of how advanced it’s [developing],” said Miller-Rowe.
Godleski pointed to her English class’s AI policies, noting the particular difficulties in a writing-based class, where large language models are the AI cheating tool of choice.
“I think people still use them either way, and especially on the English [assignments],” said Godleski. “There [are] a lot of ways to get around it, even [while] having to work on essays in-class.”
This year’s sophomores were required to write their final English essay entirely in class; students were prohibited from writing outside of class time in any form unless they made arrangements with their English teacher. Prompted by concerns about students using AI dishonestly in their writing, this policy marked a change from last year’s sophomores (the Class of 2026), who were allowed to write outside of class for the same assignment.
However, the in-class requirement was removed before students started the final draft of the essay, as teachers realized that the logistics of the in-class writing times were too difficult to manage and that the policy wasn’t effective at preventing academic dishonesty with AI, according to tenth grade English teacher Dan Sussman.
Regarding the ethics behind the policies, Arianna Arzu ’26 believes that “it’s reasonable to put strict policies in place for AI because [they outline what] cheat[ing] is and [shows what is] unreasonable and unfair in AI [usage].”
“I do think it’s important that things are regulated that way,” Arzu said.
Still, Arzu agreed that the current AI policies at MFS have not actually prevented academic dishonesty from AI technologies.
Arzu said, “I hear a lot of people talking about using AI and stuff like that. I just feel like overall, I don’t see it really working as successfully as people probably wanted it to be working.”
Certain AI policies have also impacted students’ learning styles, with stricter rules on assignments causing students like Miller-Rowe to adjust their approaches to work.
“I think I’ve gotten used to the [AI policies] this year, but it was definitely a really big adjustment, and I think that it definitely affected the way that I’m working in class,” said Miller-Rowe before the tenth grade final essay’s at-home policies were changed.
“I’m pretty disappointed with the [number] of people [who] do cheat because it really affects me. I’m not able to continue with my usual writing process [because of stricter policies],” Miller-Rowe continued.
With many classes, such as English and History, switching to in-class assessments and essays instead of at-home work, Arzu noted the difficulties students face in having to change their workflow and environment.
“I feel like it makes certain things that used to be normal more regulated,” said Arzu. “I feel like the [at-home policy] is definitely a [big] change, especially because some people find it easier to work at home. Some people find it easier to work in several different spaces that aren’t just school.”
Regarding taking classwork home, Arjun Khandhar ’27 said, “I think we should be able to take our essays and do them at home.”
Additionally, Khandhar questioned the overall effectiveness of AI policies and the real value of prohibiting AI usage for students.
“We should be able to take them at home because there’s no real-world practical use of writing an essay in class with a teacher supervising us,” said Khandhar. “In reality, we’re going to have access to the internet. I think we should strive to be good writers, but we should still have the ability to do that [with AI]. I think students will always use AI, and as we progress as society, becoming more technologically advanced, it’s gonna be way more prevalent. I think at this point, [we should] sort of embrace it.”
The Acceptable Use of Technology Policy also notes that “MFS embraces the still-evolving nature and transformative potential of AI in education and beyond and recognizes the need to continually reevaluate our engagement with these technologies,” giving room for future changes to such policies to occur in the coming years.