Georgia Institute of Technology
The aim of my research is to address the longitudinal proactive assistance problem, which requires anticipating user behavior and adapting assistive actions. Anticipating user behavior over an extended period of time requires understanding the user's routine behavior as a whole and using it to predict their actions several minutes to hours into the future. When assisting with any action, a robot must adapt its behavior to user preferences, including the boundaries of what a robot should and should not do. Adapting assistive actions is useful even in user-initiated interactions, where it alleviates the need for detailed goal specification, but it is especially necessary for robot-initiated actions, where the user may not be present to specify such details. I seek to address these questions without placing unnecessary burden on the user, by relying on unobtrusive observations and sparse user interaction.
My work contributes computational models that process multi-modal observations to extract information about user behavior, reason over temporal and semantic patterns to predict future user actions, and finally reason over how a robot should assist with the predicted behavior. The stochasticity of human behavior, the diversity of actions users perform and objects they use, and the nuances of user preferences regarding task execution and robot behavior make such anticipatory assistance especially challenging.
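To make the observe-predict-assist loop concrete, the sketch below illustrates one possible way such a pipeline could be organized. All names, data structures, and heuristics here (the AnticipatoryAssistant class, the mean-time routine model, the confidence threshold) are illustrative assumptions for exposition, not the models contributed by this work.

```python
# Hypothetical sketch of an observe -> model routine -> predict -> assist pipeline.
# Names and heuristics are illustrative assumptions, not the author's actual models.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ObservedActivity:
    """One unobtrusively observed user activity (e.g., from ambient sensing)."""
    label: str          # e.g., "make_coffee"
    start_minute: int   # minutes since the start of the day


@dataclass
class PredictedAction:
    label: str
    expected_minute: int
    confidence: float


class AnticipatoryAssistant:
    """Toy end-to-end loop: observe, learn a routine, predict, choose assistance."""

    def __init__(self) -> None:
        # Routine model: for each activity, the times it has been observed.
        self.routine: Dict[str, List[int]] = {}
        # Sparse user feedback on which activities the robot may assist with.
        self.assistance_allowed: Dict[str, bool] = {}

    def observe(self, activity: ObservedActivity) -> None:
        """Update the routine model from an unobtrusive observation."""
        self.routine.setdefault(activity.label, []).append(activity.start_minute)

    def record_preference(self, activity_label: str, allowed: bool) -> None:
        """Incorporate sparse user feedback about robot boundaries."""
        self.assistance_allowed[activity_label] = allowed

    def predict(self, current_minute: int, horizon: int = 120) -> List[PredictedAction]:
        """Predict activities expected within the horizon from their typical times."""
        predictions = []
        for label, minutes in self.routine.items():
            typical = sum(minutes) / len(minutes)
            if current_minute <= typical <= current_minute + horizon:
                # Toy heuristic: confidence grows with the number of observations.
                confidence = min(1.0, len(minutes) / 10)
                predictions.append(PredictedAction(label, int(typical), confidence))
        return sorted(predictions, key=lambda p: p.expected_minute)

    def choose_assistance(self, predictions: List[PredictedAction]) -> List[str]:
        """Propose help only for confident predictions the user has not ruled out."""
        return [
            f"prepare for '{p.label}' around minute {p.expected_minute}"
            for p in predictions
            if p.confidence > 0.5 and self.assistance_allowed.get(p.label, True)
        ]


if __name__ == "__main__":
    assistant = AnticipatoryAssistant()
    for day in range(7):  # a week of observations of a morning routine
        assistant.observe(ObservedActivity("make_coffee", 480 + day))
        assistant.observe(ObservedActivity("water_plants", 540 + day))
    assistant.record_preference("water_plants", allowed=False)  # user-stated boundary
    print(assistant.choose_assistance(assistant.predict(current_minute=450)))
```

In this toy version the routine model is just a per-activity mean time; the research described above would replace that component, as well as the prediction and assistance-selection steps, with learned models over temporal and semantic patterns.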