Do you trust me now?
Exploring the concept of gender bias in voice assistants, this project collects data on the effect of a voice's pitch on trust. The technology probe uses a Wizard of Oz technique to mimic an interaction with an Amazon Echo that speaks in differently pitched voices.
Design Informatics, University of Edinburgh
When only limited information is available, we rely on stereotypes to make a trust decision, which can lead to gender bias. With voice assistants, we usually have only our perception of the voice and the wake word. In this project, we aim to understand to what extent these two cues influence a user's trust.
The interaction has two parts: the TrustBox, a wall-mounted box with buttons and an Amazon Echo, and a Human Computer, who performs the assistant's side of the Wizard of Oz experiment. Users can flick through different wake words, such as 'Amazon' or 'Echo', and different pitches, and then decide whether or not they trust the device. Follow-up questions let us collect the reasons behind each decision.
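The data the probe gathers per interaction is simple: a wake word, a pitch condition, a trust decision, and a stated reason. A minimal sketch of such a trial log might look as follows (the pitch labels and the `TrustLog` helper are illustrative assumptions, not the project's actual implementation; only the wake words 'Amazon' and 'Echo' come from the project description):

```python
from dataclasses import dataclass, field

# Hypothetical condition sets; the real probe's exact options may differ.
WAKE_WORDS = ["Amazon", "Echo"]
PITCHES = ["low", "mid", "high"]


@dataclass
class Trial:
    """One user decision recorded by the wizard."""
    wake_word: str
    pitch: str
    trusted: bool
    reason: str = ""


@dataclass
class TrustLog:
    """Collects trials and summarises trust per pitch condition."""
    trials: list = field(default_factory=list)

    def record(self, wake_word: str, pitch: str, trusted: bool, reason: str = "") -> None:
        self.trials.append(Trial(wake_word, pitch, trusted, reason))

    def trust_rate(self, pitch: str):
        """Fraction of trials with this pitch that were trusted, or None if none."""
        relevant = [t for t in self.trials if t.pitch == pitch]
        if not relevant:
            return None
        return sum(t.trusted for t in relevant) / len(relevant)
```

For example, after `log.record("Amazon", "low", True, "sounded calm")` and `log.record("Amazon", "low", False, "too robotic")`, `log.trust_rate("low")` would be 0.5, giving a quick per-pitch summary of the collected decisions.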
The probe was not publicly deployed, but it was tested with lab mates. The interaction would be well suited to an exhibition setting.