As a speech-language pathologist, I’ve dedicated my career to helping people find their voices—sometimes literally, sometimes figuratively. Whether I’m working with children struggling to develop language, adults recovering from strokes or traumatic brain injuries, or older adults managing degenerative diseases like Parkinson’s, the goal is always the same: to connect, to communicate, and to live fully. Communication is fundamental to being human. Yet, when it’s lost, regaining it becomes a monumental, life-defining challenge. The ability to express oneself and engage with the world is not just a skill—it is the foundation of identity, relationships, and independence.
Communication is not a privilege reserved for the able-bodied or neurotypical; it is a fundamental human right. The Communication Bill of Rights, established by the National Joint Committee for the Communication Needs of Persons with Severe Disabilities, outlines the rights of every individual to access communication supports, express themselves, and be understood. These rights include the right to request, refuse, express preferences, and access communication systems, as well as the right to have these systems supported by the community. Yet, systemic barriers often render these rights inaccessible, treating communication as something to be rationed rather than guaranteed.
One patient’s story stands out as both an example of the barriers we face and the promise AI could hold. He was a man recovering from a severe stroke, presenting with global aphasia, apraxia of speech, dysphagia, and significant mobility impairments requiring physical and occupational therapy. Global aphasia left him unable to consistently produce or comprehend language. Apraxia disrupted the coordination of his speech muscles, and his dysphagia required modified diets to prevent aspiration. These challenges made him entirely dependent on others, robbing him of his independence and complicating every aspect of his recovery.
His care trajectory, like so many others, was dictated by systemic constraints rather than clinical need. In his thirty days in an acute hospital, he received only 15–30 minutes of speech therapy per day, five to seven days a week. Acute care prioritizes medical stabilization, and while speech-language pathologists play a vital role in this phase, their interventions are often brief and focused on immediate safety concerns like swallowing. In inpatient rehabilitation, his therapy increased to an hour per day but was capped after three weeks by insurance—a common limitation that prioritizes cost over sustained recovery. In subacute rehabilitation, his therapy dropped to 25 minutes daily, restricted by productivity requirements and limited resources. Finally, in a skilled nursing facility, therapy became inconsistent and heavily dependent on insurance approvals.
The barriers in his care were in no way the fault of the clinicians. They were world-class professionals who advocated tirelessly for everything he needed. They worked within a system that often denied their efforts, valuing short-term cost savings over patient outcomes. Despite their advocacy and his determination, the system caused harm. This relatively young patient, as far as I know, never left the skilled nursing facility. He deserved so much more than what the system allowed.
Artificial intelligence could have offered this patient something the system could not. AI-driven tools could have supported him not only during therapy sessions but also during his interactions with untrained individuals, such as family members, caregivers, or even casual staff. These exchanges, while informal, hold valuable information about his real-world communication needs and progress. By capturing and analyzing these attempts, AI could have provided clinicians with actionable insights to refine and personalize his therapy. Beyond these external interactions, AI could also have collected data during the long periods when he was alone. People communicate in countless ways, even when alone—through internal thoughts, gestures, or unsuccessful attempts to speak that reflect their persistence and intent. These moments, though private, are deeply meaningful and could reveal patterns of effort and potential areas of growth.
Still, AI cannot replace the empathy, creativity, and nuanced decision-making that clinicians bring to their work. It cannot interpret the frustration in a patient’s gestures, the hope in their attempts to speak, or the courage it takes to try again after failure. These deeply human moments are what make therapy effective, and they cannot be programmed. AI’s role must be to complement and amplify the work of clinicians, not substitute for it. Over-reliance on AI risks losing not only the trust of patients but also the essence of our profession.
As clinicians, we see the potential in every patient and feel the weight of every missed opportunity. We advocate, we innovate, and we fight—but within systems that often tie our hands. It’s not enough for AI to promise progress; it must actively empower us to deliver the care our patients deserve.
For AI to be implemented effectively, however, we must confront the significant barriers and ethical challenges it raises, including:
- Funding restrictions: Health care systems often prioritize short-term cost savings over long-term investments, leaving innovative tools underfunded or underutilized.
- Clinician burnout: Overwhelming caseloads and administrative demands make it difficult for speech-language pathologists to integrate new tools into their practice without additional support.
- Mistrust of technology: Patients and clinicians alike may hesitate to adopt AI tools due to concerns about reliability, data security, and potential misuse.
- Algorithmic bias: If AI systems are trained on biased or incomplete data, they risk excluding underrepresented groups such as non-native speakers, individuals with rare disorders, or culturally diverse populations.
- Corporate misuse: Insurance companies and administrators could exploit AI to deny care, prioritizing profit over patient outcomes.
- Erosion of human judgment: Over-reliance on AI risks diminishing the critical thinking and creativity clinicians bring to their work, qualities that are irreplaceable in patient care.
- Privacy concerns: AI systems require vast amounts of data, raising important questions about consent, security, and the ethical use of sensitive information.
What we’re asking for isn’t extraordinary—it’s human decency. It’s a system that values people over profit and ensures that communication, the foundation of our humanity, is treated as a right, not a privilege. Speech-language pathologists, AI developers, administrators, and policymakers all have a role to play in ensuring that these systems are designed and implemented responsibly, ethically, and collaboratively.
So where does that leave us? How do we ensure that AI reflects the values of patient-centered care rather than corporate interests? Speech-language pathologists: What safeguards and workflows do you need to feel confident in using these tools in your practice? AI developers: How are you collaborating with clinicians to design systems that are ethical, secure, and adaptable? Administrators and insurers: What steps are you taking to ensure that AI is used to enhance care, not deny it? Patients and caregivers: What concerns and hopes do you have for this emerging technology?
The future of AI in speech therapy depends on how we answer these questions. Communication isn’t just a skill—it’s the foundation of what makes us human. It’s up to all of us to ensure that AI enhances, rather than erodes, that truth.
Jaclyn Caserta-Wells is a speech pathologist.