
Research Proposal

The Effects of Anthropomorphism on Trust in Sidewalk Autonomous Delivery Robots

This research proposal was submitted as part of the graduate course Methods and Tools in Applied Cognitive Science. It details a study designed to answer the question "Do anthropomorphic features in delivery robots lead to more trust?" The proposal reviews the growing integration of robots into everyday life and the role of trust in human-automation interaction.

Delivery Robot Study Proposal

Significance of Project

Sidewalk Autonomous Delivery Robots (SADRs) have grown in popularity over the past few years. Trust is an important factor to consider when designing robots and autonomous products: users who do not trust a product may stop using the service altogether, so SADRs should be designed with trust in mind. Anthropomorphism can play a significant role in the level of trust humans place in a machine. This study is designed to test whether increased anthropomorphism leads to increased trust. It focuses on SADRs and general perceptions of trust, but future research can extend these ideas to other domains.

Summary of Project

The use of autonomous robots in the workplace and in our personal lives has grown significantly in the past few years. This trend has driven growing research on trust in automation and how the human-automation relationship can be improved. This paper proposes a research study centered on Sidewalk Autonomous Delivery Robots (SADRs).

Sidewalk autonomous delivery robots are small machines designed to travel on sidewalks and make deliveries, and they are commonly found on university and corporate campuses. The 2020 coronavirus outbreak expanded the market for these robots, as they eliminate the need for a delivery person, reducing human contact and the risk of transmission. Trust is an important factor in human-automation interaction and can ultimately determine whether people continue to use a product. One potential way to increase trust is to add anthropomorphic features, a sense of human-likeness, to an autonomous agent.

Anthropomorphism is the practice of attributing human-like features to nonhuman objects. Previous research has found that anthropomorphism in autonomous agents is positively correlated with trust. One way to make a robot feel more anthropomorphic is to give it a way to communicate: people signal how they intend to move with words and body language, and robots and other autonomous agents should have the same ability.

This study would use a randomized alternative-treatments design with a pretest to investigate the effect of human-likeness in SADRs on participants' trust. The method proposes two levels of human-likeness added to an autonomous agent, plus a control group. The control group would view an SADR with no modifications; the other two groups would view a robot with auditory feedback or a robot with visual feedback. Participants would be recruited through Amazon Mechanical Turk (MTurk) with screening requirements to ensure valid human-subject data. Each participant would give informed consent, complete a demographic survey, and then be randomly assigned to one of the three experimental conditions. A 3-minute animation of a robot performing a delivery would then be shown; the robot in the video would display visual, auditory, or no feedback depending on the participant's group. Visual feedback would consist of a large external interface on the front of the robot displaying its intent, for example the text "TURNING LEFT" along with an arrow pointing in that direction. Auditory feedback would consist of a recording of a female voice stating the robot's intent, for example "I'm waiting to cross" when the robot stops at a stoplight. After viewing the animation, each participant would complete the Jian et al. (2000) trust survey.
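The random assignment and survey-scoring steps described above can be sketched in code. This is a minimal illustration, not part of the proposal: the condition labels, the fixed seed, and the example responses are hypothetical, and the scoring assumes the common convention for the 12-item, 7-point Jian et al. (2000) checklist in which the first five (distrust-worded) items are reverse-coded before averaging.

```python
import random
import statistics

# Hypothetical labels for the proposal's three groups.
CONDITIONS = ["control", "visual_feedback", "auditory_feedback"]


def assign_condition(rng: random.Random) -> str:
    """Randomly assign one participant to one of the three conditions."""
    return rng.choice(CONDITIONS)


def score_trust(responses: list[int]) -> float:
    """Average a participant's 12 Likert responses (1-7).

    Assumes the first five (distrust-worded) items are reverse-coded
    before averaging, a common scoring convention for this scale.
    """
    if len(responses) != 12 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected 12 responses on a 1-7 scale")
    coded = [8 - r if i < 5 else r for i, r in enumerate(responses)]
    return statistics.mean(coded)


# Example: assign one participant and score a hypothetical response set.
rng = random.Random(42)  # fixed seed so the assignment is reproducible
group = assign_condition(rng)
trust_score = score_trust([2, 1, 2, 3, 2, 6, 5, 6, 7, 6, 5, 6])
```

Group-level trust scores produced this way could then be compared across the three conditions with a standard between-subjects test.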

References

Abrar, M. M., Islam, R., & Shanto, M. A. (2021). An autonomous delivery robot to prevent the spread of coronavirus in product delivery system. 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). https://doi.org/10.1109/uemcon51285.2020.9298108 


Chen, J., Mishler, S., & Hu, B. (2021). Automation error type and methods of communicating automation reliability affect trust and performance: An empirical study in the cyber domain. IEEE Transactions on Human-Machine Systems, 51(5), 463–473. https://doi.org/10.1109/thms.2021.3051137

Craig, S. D., & Schroeder, N. L. (2017). Reconsidering the voice effect when learning from a virtual human. Computers & Education, 114, 193–205. https://doi.org/10.1016/j.compedu.2017.07.003

de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331–349. https://doi.org/10.1037/xap0000092

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864

Figliozzi, M., & Jennings, D. (2020). Autonomous delivery robots and their potential impacts on urban freight energy consumption and emissions. Transportation Research Procedia, 46, 21–28. https://doi.org/10.1016/j.trpro.2020.03.159

Følstad, A., Nordheim, C. B., & Bjørkli, C. A. (2018). What makes users trust a chatbot for customer service? An exploratory interview study. Internet Science, 194–208. https://doi.org/10.1007/978-3-030-01437-7_16

Jennings, D., & Figliozzi, M. (2019). Study of sidewalk autonomous delivery robots and their potential impacts on freight efficiency and travel. Transportation Research Record, 2673(6), 317–326. https://doi.org/10.1177/0361198119849398

Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04

Kannan, S. S., Lee, A., & Min, B.-C. (2021). External human-machine interface on delivery robots: Expression of navigation intent of the robot.

Li, M., & Suh, A. (2021). Machinelike or humanlike? A literature review of anthropomorphism in AI-enabled technology. Proceedings of the Annual Hawaii International Conference on System Sciences, 4053–4062.

Matthews, G., Lin, J., Panganiban, A. R., & Long, M. D. (2020). Individual differences in trust in autonomous robots: Implications for transparency. IEEE Transactions on Human-Machine Systems, 50(3), 234–244. https://doi.org/10.1109/THMS.2019.2947592

Natarajan, M., & Gombolay, M. (2020). Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 33–42. https://doi.org/10.1145/3319502.3374839

Pani, A., Mishra, S., Golias, M., & Figliozzi, M. (2020). Evaluating public acceptance of autonomous delivery robots during COVID-19 pandemic. Transportation Research Part D: Transport and Environment, 89, 102600. https://doi.org/10.1016/j.trd.2020.102600

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Wadsworth Cengage Learning.


Zemmar, A., Lozano, A. M., & Nelson, B. J. (2020). The rise of robots in surgical environments during COVID-19. Nature Machine Intelligence, 2(10), 566–572. https://doi.org/10.1038/s42256-020-00238-2

Yan, Z., Kantola, R., & Zhang, P. (2011). A research model for human-computer trust interaction. 2011 IEEE 10th International Conference on Trust, Security and Privacy in Computing and Communications, 274–281. https://doi.org/10.1109/TrustCom.2011.37

