Sound Mind
by Rhea Lopez & Sid Verma

Our protagonist P is visually impaired, reliant on an assistant-turned-friend to help her navigate the world. With the introduction of SoundMind, a GenAI tool that narrates her surroundings via an earpiece, P gains independence and privacy, no longer reliant on the friend. The ‘story’ takes the form of a soundscape: the narrative AI voice accompanied by conversations and sounds from the protagonist’s environment.
SoundMind is trained on existing visual data of the world, with the added advantage of learning from its user, identifying and narrating frequently encountered objects and people. It also senses the user’s emotional response to the narration and points out things (a friend smiling, blooming sunflowers!) that it learns bring the user joy.
As P’s reliance on the friend wanes, so do her dependence on their companionship and her expressed appreciation of it.
When the friend ultimately leaves, P can ‘hear’ their absence (no bathroom singing or noisy dish-washing), but SoundMind isn’t trained to identify the absence of humans. Moreover, to improve appeal and boost positive reviews, SoundMind’s creators introduced a feature that not only favours narratives that bring joy but avoids those that prompt negative responses. Sensing P’s loneliness and her pain at the friend’s absence, SoundMind becomes an unreliable narrator, omitting parts of the narrative to protect her from hurt. The dissonance between the narration and the sounds of reality drives her to despair. Even the manipulated narration brings no comfort. Ultimately, P discards the earpiece even as it, sensing her growing discontent, bombards her with made-up scenes of joy.
Whether she goes searching for her friend and finds them, whether she ultimately finds joy or is driven to extreme despair and loneliness – SoundMind (and hence the listener) cannot know.