Mirroring to Build Trust in Digital Assistants | Metcalf et al

The chattiness of an AI-generated voice affects how much it is trusted, and the effect varies by user, according to researchers at Apple. They found that usability increased when the conversational style of a voice assistant more closely matched individual preferences, and they suggest such conversational mirroring can be achieved by analyzing users' speaking patterns.

The Apple research was conducted to improve Siri, the company's proprietary voice assistant.


We describe experiments towards building a conversational digital assistant that considers the preferred conversational style of the user. In particular, these experiments are designed to measure whether users prefer and trust an assistant whose conversational style matches their own. To this end, we conducted a user study in which subjects interacted with a digital assistant that responded in a way that either matched their conversational style or did not. Using self-reported personality attributes and subjects' feedback on the interactions, we built models that can reliably predict a user's preferred conversational style.
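The idea of predicting a preferred conversational style from user attributes can be illustrated with a minimal sketch. The features here (a self-reported extroversion score and average words per user utterance) and the nearest-centroid classifier are assumptions chosen for illustration, not the actual features or model described in the paper.

```python
# Minimal sketch: predict a user's preferred conversational style
# ("chatty" vs. "terse") from toy features. The feature set and the
# nearest-centroid classifier are illustrative assumptions only.

from statistics import mean

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [mean(col) for col in zip(*rows)]

def train(examples):
    """examples: list of (features, label); returns label -> centroid."""
    by_label = {}
    for feats, label in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, feats):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy training data: [extroversion (1-5), avg words per utterance]
examples = [
    ([4.5, 18.0], "chatty"),
    ([4.0, 15.0], "chatty"),
    ([1.5, 5.0], "terse"),
    ([2.0, 6.5], "terse"),
]
model = train(examples)
print(predict(model, [4.2, 16.0]))  # a talkative user -> "chatty"
print(predict(model, [1.8, 4.0]))   # a brief user -> "terse"
```

An assistant could then select a response template (verbose vs. concise) based on the predicted label, which is the mirroring behavior the study evaluates.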


  • Katherine Metcalf
  • Barry-John Theobald
  • Garrett Weinberg
  • Robert Lee
  • Ing-Marie Jonsson
  • Russ Webb
  • Nicholas Apostoloff


Metcalf et al. (2019). Mirroring to Build Trust in Digital Assistants. Apple. Online: arXiv:1904.016764v1.
