AI is being deployed in elderly care at three specific failure points: undetected pain, falls, and isolation. More than 1 billion people are aged 60 or older, and more than 55 million live with dementia. PainChek, an Australian app, combines facial muscle analysis with a caregiver checklist to generate a pain score. A dementia care chain in England that adopted it saw psychotropic prescriptions drop: much of what staff had been treating as behavioral problems was unaddressed pain.
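To make the "facial analysis plus checklist" idea concrete, here is a minimal sketch of how such a composite score could be assembled. This is not PainChek's actual model, which is proprietary; the action unit names, checklist items, weights, and severity cut points below are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration only: PainChek's real scoring model is proprietary.
# Assumed structure: an automated count of pain-related facial action units
# from a vision model, plus a caregiver checklist of observed behaviors.
# Item names and thresholds are made up for this sketch.

FACIAL_ACTION_UNITS = ["brow_lowering", "cheek_raising", "eyelid_tightening",
                       "nose_wrinkling", "lip_corner_pulling"]

CHECKLIST_ITEMS = ["groaning", "guarding_movement", "restlessness",
                   "refusing_care", "altered_breathing"]

@dataclass
class Observation:
    detected_action_units: set[str]   # output of the (assumed) facial model
    checklist_flags: set[str]         # items ticked by the caregiver

def pain_score(obs: Observation) -> tuple[int, str]:
    """Return a composite score and a coarse severity band (illustrative cut points)."""
    facial = sum(1 for au in FACIAL_ACTION_UNITS if au in obs.detected_action_units)
    observed = sum(1 for item in CHECKLIST_ITEMS if item in obs.checklist_flags)
    total = facial + observed
    if total <= 2:
        band = "no/mild pain"
    elif total <= 5:
        band = "moderate pain"
    else:
        band = "severe pain"
    return total, band

if __name__ == "__main__":
    obs = Observation(
        detected_action_units={"brow_lowering", "eyelid_tightening"},
        checklist_flags={"groaning", "guarding_movement"},
    )
    print(pain_score(obs))  # (4, 'moderate pain')
```

The point of the hybrid design is that neither signal is trusted alone: the automated facial read contributes alongside what the caregiver actually observes, and the output is a score a clinician can act on rather than a black-box judgment.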

The design problems here are not standard accessibility problems. A 2025 systematic review of 132 studies found the dominant barrier is not motor difficulty or screen size. It is fear of error. Residents often cannot consent to the tools being used on them, and caregivers, not residents, hold the device. That means designers are building two simultaneous experiences: one for clinical speed, one for human dignity. JMIR researchers found co-design sessions with cognitively impaired participants collapse under conditions nobody anticipated: glare, ambient noise, chairs without armrests. Wearable fall detectors get removed. Anything that reads as surveillance gets rejected.

The full article is worth reading for its breakdown of where AI's promise of objectivity introduces its own bias: facial recognition trained on narrow datasets misreads minority groups, and an Australian study found Aboriginal and Torres Strait Islander residents received systematically lower pain scores. The ethical core of the piece is this: when the person being designed for cannot choose to be designed for, every binary decision in the interface becomes a moral one.

[READ ORIGINAL →]