Generative AI and aphasia

New research has highlighted how generative artificial intelligence (AI) could help reconstruct language from fragmented speech in people with communication barriers.

The work was led by researchers from the La Trobe Business School Centre for Data Analytics and Cognition (CDAC) and La Trobe University’s Centre for Research Excellence in Aphasia Recovery and Rehabilitation.

Millions of people worldwide live with communication disabilities. One such condition, aphasia, affects a person’s ability to produce and understand language, often creating significant challenges in everyday communication and social participation.

While assistive communication technologies exist, many struggle to support natural conversation when speech is fragmented, incomplete or contains errors.

The researchers developed and evaluated a framework that enables AI models to interpret partial speech and reconstruct coherent sentences.

“We found that large language models can infer a speaker’s intended meaning from incomplete or error-filled utterances while preserving the user’s original intent,” Dr Achini Adikari explains.

“This represents a shift from traditional assistive communication systems that rely on fixed templates or word prediction toward more adaptive, context-aware language support.”
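The study does not publish its prompts or model configuration, but the idea of asking a large language model to repair a fragmented utterance while preserving the speaker's intent can be sketched roughly as follows. This is an illustrative assumption, not the researchers' actual framework; the function name, prompt wording, and example utterance are all invented for demonstration.

```python
def build_repair_prompt(fragment: str) -> str:
    """Compose an instruction (hypothetical) asking a large language model
    to reconstruct the speaker's intended sentence from a fragmented
    utterance, while preserving the speaker's original meaning."""
    return (
        "A speaker with aphasia produced this fragmented utterance:\n"
        f'  "{fragment}"\n'
        "Reconstruct the most likely intended sentence. Keep the speaker's "
        "own intent and wording where possible, and do not add information "
        "the speaker did not express."
    )

# Example fragmented utterance (invented for illustration):
prompt = build_repair_prompt("want... coffee shop... go tomorrow morning")
print(prompt)
```

The prompt string would then be sent to a language model of the developer's choosing; the key design point reflected in the research is that the model is constrained to infer and preserve the user's intent rather than generate new content.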

The next phase of the research will test more advanced models and explore how combining speech fragments with visual cues and gestures could further improve how meaning is reconstructed in conversations involving people with aphasia.

“Ultimately, these efforts aim to move toward the development of practical, human-centred assistive communication systems that can support more natural and empowering interactions for people with communication impairments,” Dr Adikari says.