Research led by Dr Katelyn Mroczek has examined how assessment design can better prepare students to navigate science information in today’s digital landscape.
“Students are now expected to evaluate information from a broad range of sources, including Wikipedia and AI,” she says. “However, as the information landscape has evolved, we need assessment tasks that move beyond fact-checking to help students critically navigate complex and conflicting information.”
To address this gap, Dr Mroczek and a team of researchers designed an assessment that required students to evaluate both Wikipedia and AI-generated immunology articles, using the same criteria for accuracy, readability and suitability for general audiences.
The results were promising. Participants reported developing a range of new skills, including critical thinking, science communication, collaboration and information literacy.
“Students demonstrated sophisticated critical evaluation, identifying limitations in AI-generated content such as missing references and outdated datasets, while also recognising Wikipedia’s stronger citation practices and cultural inclusivity.”
Dr Mroczek says that many students continued to use AI despite their concerns about its reliability.
“This paradox suggests that preventing students from using AI isn’t realistic. Instead, we need to embed AI literacy training into the curriculum.”
The next step will be to refine the assessment and track how students’ use of Wikipedia and AI evolves over time.
“We also hope to adapt this approach across a range of institutions and disciplines, creating assessments where students comparatively evaluate emerging and established information sources while developing communication skills for diverse audiences.”
“Ultimately, the goal is for all students to become sophisticated consumers and communicators of scientific information, regardless of its source.”

