We present a speech-driven digital personal assistant that is robust despite having little or no training data and that autonomously improves as it interacts with users. The system establishes and builds common ground with users by signaling understanding and by learning, through interaction, a mapping between the words users actually speak and system actions. We evaluated our system with real users and found an overall positive response. We further show through objective measures that autonomous learning improves performance in a simple itinerary-filling task.
This is an author-produced, peer-reviewed version of this article. The final, definitive version of this document can be found online at HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction, published by ACM - Association for Computing Machinery. Copyright restrictions may apply. doi: 10.1145/3125739.3132592
Kennington, Casey and Shukla, Aprajita. (2017). "A Graphical Digital Personal Assistant That Grounds and Learns Autonomously". HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction, 353-357. http://dx.doi.org/10.1145/3125739.3132592