Challenges and Opportunities for Delay-Invariant Telerobotic Interactions

Dries Cardinaels
UHasselt - Flanders Make
Digital Future Lab
Diepenbeek, Belgium
Tom Veuskens
UHasselt - Flanders Make
Digital Future Lab
Diepenbeek, Belgium
Raf Ramakers
UHasselt - Flanders Make
Digital Future Lab
Diepenbeek, Belgium
Kris Luyten
UHasselt - Flanders Make
Digital Future Lab
Diepenbeek, Belgium
30th International Conference on Intelligent User Interfaces Companion
DOI: 10.1145/3708557.3716332 | ISBN: 979-8-4007-1409-2/2025/03

Abstract

Effective operation in direct-control telerobotics relies heavily on real-time communication between the operator and the robot, as the operator retains full control over the robot’s actions. However, in scenarios involving long distances, communication delays disrupt this feedback loop, creating significant challenges for precise control. To investigate these challenges, we conducted a user study where participants operated a TurtleBot3 Waffle Pi under varying delay conditions. Post-experiment brainstorming and analysis revealed recurring challenges, including over-correction, unpredictable robot behavior, and reduced situational awareness. Potential solutions identified include improving robot behavior predictability, integrating feedforward mechanisms, and enhancing visual feedback. These findings underscore the importance of designing intelligent interfaces to mitigate the impact of delays on telerobotic performance.

CCS Concepts

  • Human-centered computing → Graphical user interfaces
  • Computing methodologies → Robotic planning
  • Computer systems organization → Robotics
Keywords: Telerobotics, Human-Robot Interaction (HRI), Delay, Unmanned Ground Vehicle (UGV), Control Interface Design
Dries Cardinaels, Tom Veuskens, Raf Ramakers, and Kris Luyten. 2025. Challenges and Opportunities for Delay-Invariant Telerobotic Interactions. In 30th International Conference on Intelligent User Interfaces Companion (IUI Companion ’25), March 24–27, 2025, Cagliari, Italy. Association for Computing Machinery, New York, NY, USA, https://doi.org/10.1145/3708557.3716332

1 Introduction

Mobile robots, such as Unmanned Ground Vehicles (UGVs), are steadily advancing in capabilities, enabling them to operate in challenging environments [1]. As a result, they are increasingly being deployed in scenarios where human intervention is difficult or impractical, such as urban search and rescue, mining, and space exploration [2, 3, 4, 5, 6].

One common method for controlling remote robots is through direct-control teleoperation, in which the operator maintains full control over the robot’s decisions and actions. In practice, this involves the operator interpreting video feedback from the robot’s environment and then issuing commands for the robot to execute [7]. In theory, such teleoperation should be feasible anywhere, provided a reliable communication medium, such as a wireless network or satellite link, is available.

In real-world scenarios, the effectiveness of direct-control systems is significantly affected by the physical distance between the operator and the robot: as the distance increases, so does the time required for signals to travel through the communication medium, resulting in communication delays [8]. This communication delay, often on the order of milliseconds to seconds, is formally defined as the time between issuing a command and observing its outcome [9, 10, 5].

Previous research has already shown that delay negatively affects the operator’s ability to control the robot effectively. Delays as short as 200 milliseconds can disrupt natural movement patterns, reduce the effectiveness of the operator’s actions, and undermine the reliability of remote operations [11, 12, 13, 14, 8]. Recent research suggests incorporating some level of automation when teleoperation is constrained by delay [6, 15], thereby shifting towards a shared-control teleoperated system.

While current robots can perform complex tasks without continuous human input, they are still limited in their adaptability and decision-making abilities outside of narrowly defined situations [16]. Additionally, the shift from a direct to a shared-control approach would reduce the operator’s control over decision-making, which further amplifies the existing trust issues humans have with fully autonomous systems [3].

This paper identifies key challenges and opportunities for advancing delay-invariant telerobotics within the direct-control paradigm, where operators retain full authority over the robot’s actions. To investigate these, we conducted an exploratory user study examining the cognitive and operational challenges operators face under delayed conditions. Insights from the experiment, along with participant feedback gathered during post-experiment brainstorming sessions, revealed critical challenges and potential solutions for mitigating the effects of delay. These findings provide a foundation for designing future systems and interfaces that enhance operator performance, improving applications in planetary exploration, disaster response, and other high-delay environments. Ultimately, this work seeks to navigate the complexity of direct-control teleoperation rather than avoiding it.

2 Exploration of Delay-Invariant Telerobotic Interaction

To explore the key challenges and opportunities in delay-invariant telerobotics, we first sought to understand where the primary challenges arise and why they remain insufficiently addressed in current research. To achieve this, we conducted an exploratory study in which 8 participants operated a UGV remotely, navigating a predefined path under four delay conditions: (1) 250 ms (baseline), (2) 500 ms, (3) 750 ms, and (4) 1250 ms. Following the trials, a brainstorming session was held to gather participants’ insights and suggestions, identifying strategies to address the challenges posed by delay.

Experimental Setup and Apparatus The experimental setup was designed to simulate a teleoperation scenario under controlled communication delay. Participants were instructed to operate a UGV along a predefined path, aiming to keep it as closely aligned with the given route as possible (see Figure 1, left). To assist with this, the interface displayed a red cross at the center of the camera view, which participants were instructed to keep on the path’s line as accurately as possible. The testing area and path layout were standardized for all participants, while the onboard camera provided continuous, but delayed, video feedback. Environmental factors, such as lighting and surface conditions, were controlled: the scene was well-lit, ensuring all elements of the path were clearly visible, and the surface was clean and smooth to minimize external factors that could introduce additional deviations.

For our experiment, we used the TurtleBot3 Waffle Pi (TB3) from Robotis as UGV. The standard Raspberry Pi Camera Module v2.1 was replaced with an OAK-D Pro camera from Luxonis, utilizing only its RGB video stream. To achieve a first-person perspective (see Figure 1, right), we created a custom 3D-printed mount for the camera. The Qualisys Tracking System was used to collect precise positional data. Five infrared markers were placed on the TB3, and data was collected using eleven tracking cameras: three Arqus cameras and eight Miqus cameras. This setup ensured accurate motion capture and positioning information throughout the experiment.

Figure 1. An overview of the setup. Left: The testing area with a predefined path marked on the floor, showing the user where to navigate the UGV. Right: The TurtleBot3 equipped with a custom 3D-printed camera mount and OAK-D Pro camera.

User Tasks and Procedure Before each session, we positioned the TB3 at the starting spot in the testing area and ensured that all relevant systems—tracking, video feed, and input capture—were communicating with each other. Participants then received a brief introduction to the setup, including a short demonstration of the TB3 controls, an explanation of the video feed and interface, and a description of how input signals and video feedback were communicated. This introduction also clarified the focus of our data capturing process and provided an overview of the post-task brainstorming session.

We used keyboard keys for navigation, allowing participants to operate the TB3 using familiar keyboard inputs. The operator interface (see Figure 2) included a toolbar for initializing the system, and the video feed was displayed at a resolution of 1280x720 pixels. After the introduction, participants began their navigation task, following a trapezoidal path. This shape was chosen to incorporate diverse driving challenges, including straight and sloping segments, sharp turns, and varying path lengths. Each participant performed the task under four conditions: one baseline condition with a delay of 250 ms, and three additional levels of delay—500 ms, 750 ms, and 1250 ms. We selected 250 ms as the baseline because it was the lowest stable delay achievable, where stable is defined as fluctuations remaining within a maximum range of 20 ms [17].
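Delay conditions like those above can be reproduced with a timestamped queue that holds each keyboard command (and, symmetrically, each video frame) for the configured delay before releasing it. The sketch below is a minimal illustration of this idea, not our actual implementation; the name `DelayLine` is hypothetical.

```python
import heapq

class DelayLine:
    """Buffers items and releases them only after a fixed delay (seconds)."""
    def __init__(self, delay_s):
        self.delay_s = delay_s
        self._queue = []  # min-heap of (release_time, item)

    def push(self, item, now):
        heapq.heappush(self._queue, (now + self.delay_s, item))

    def pop_ready(self, now):
        """Return all items whose delay has elapsed, in send order."""
        ready = []
        while self._queue and self._queue[0][0] <= now:
            ready.append(heapq.heappop(self._queue)[1])
        return ready

# Example: the 500 ms delay condition
line = DelayLine(0.5)
line.push("forward", now=0.0)
line.push("left", now=0.2)
print(line.pop_ready(now=0.4))  # [] - nothing released yet
print(line.pop_ready(now=0.8))  # ['forward', 'left']
```

The same structure can buffer the video stream in the opposite direction, so that command and feedback delays add up to the condition's total round-trip delay.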

Figure 2. The operator’s interface displaying the video feed from the UGV’s onboard camera. A red cross at the center of the view serves as a reference marker, helping operators align the UGV with the path. The interface also shows the system delay in the top-right corner.

Methodology In total, 8 participants (6 male, 2 female) were recruited at our research institute. None of them had prior experience with teleoperating a UGV, ensuring that learning effects would be minimized. Every participant was informed about what data would be captured during the study.

In each trial, we recorded the UGV’s trajectory, the time taken to complete the task, and the degree to which the vehicle deviated from the intended path. At present, these deviations are assessed visually, though future work will explore automated methods for quantifying them. After completing the trials, participants took part in a brief brainstorming session. They were asked three questions: (1) What did you find most challenging about controlling the robot? (2) What would have helped you control it more effectively? and (3) What types of visual feedback would be useful? Their responses highlighted common challenges (see Section 3) and potential approaches for improvement (see Section 4).

3 Challenges with Delay-Invariant Telerobotic Interaction

Teleoperation relies heavily on video feedback, as operators depend on visual information to control a robotic system remotely. In direct-control teleoperation, this video feedback is essential for interpreting the robot’s environment and issuing commands. However, delays significantly affect the usability of the video feed, as the real state of the environment may have already changed by the time the operator receives the visual information, introducing challenges for direct-control. The challenges highlighted in this section are directly informed by the findings of our study (Section 2) and the associated brainstorming session, where participants shared their experiences.

Understanding Delay Delay in telerobotics, often on the order of milliseconds to seconds, is defined as the time between issuing a command and observing its outcome [18, 9, 10, 19, 12, 5]. This delay has two main sources: computation delay and transmission delay [12]. Computation delay is the time spent processing signals, while transmission delay is the time required to transfer data over the communication medium between the operator and the robotic system. Computation delay can generally be kept at a fixed level, whereas transmission delay depends strongly on the distance between the operator and the robotic system [8].

A significant challenge in teleoperation is the delay caused by long distances between the operator and the robot. In ideal conditions, communication signals travel at the speed of light. However, even under these ideal circumstances, additional delays occur due to the time required to close the feedback loop and process signals through various systems. As these delays accumulate with increasing distance, they become particularly problematic in scenarios like outer-space teleoperation. This growing delay severely limits the operator’s ability to maintain precise and effective control over the robotic system.
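A back-of-the-envelope calculation shows how quickly even the idealized light-speed transmission delay dwarfs the conditions used in our study. Using the Moon's average distance from Earth (figures are approximate):

```python
C_KM_S = 299_792.458    # speed of light in vacuum (km/s)
MOON_AVG_KM = 384_400   # average Earth-Moon distance (km)

one_way = MOON_AVG_KM / C_KM_S   # ~1.28 s
round_trip = 2 * one_way         # ~2.56 s
print(f"Earth-Moon one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
```

The lower bound on the Earth-Moon round trip alone already exceeds our highest delay condition (1250 ms), before any computation or protocol overhead is added.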

Challenges Our study (Section 2) identified several recurring challenges in performing direct interactions with a remote robot under delayed conditions. These challenges were derived from participant feedback on the most difficult aspects of controlling the UGV. They can be broadly categorized into two areas: challenges related to the control interface and challenges associated with maintaining situational awareness.

Control Interface A common challenge observed was the tendency for participants to over-correct the UGV’s movements [20]. When participants noticed the UGV deviating from the intended path, they sent corrective inputs to adjust its trajectory. By the time the UGV responded to an input, it had already moved further along its trajectory than anticipated, causing the corrective commands to overshoot the intended position. This back-and-forth adjustment cycle disrupted smooth operation [21, 1], which not only hindered task completion but, in some cases, led to a complete loss of control.
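The overshoot-and-correct cycle can be reproduced with a toy closed-loop model: a proportional controller steering a one-dimensional position toward a target, but acting on delayed observations. This is an illustrative simulation only, not a model of the TB3 or of our participants:

```python
def simulate(k=2.0, delay_s=0.5, dt=0.05, steps=200, x0=1.0):
    """P-controller driving x toward 0 using the stale value x(t - delay)."""
    d = int(round(delay_s / dt))  # delay expressed in simulation steps
    xs = [x0]
    for n in range(steps):
        observed = xs[n - d] if n >= d else x0  # operator sees stale state
        xs.append(xs[-1] - k * observed * dt)
    return xs

no_delay = simulate(delay_s=0.0)
delayed = simulate(delay_s=0.5)
print(min(no_delay) >= 0.0)  # True: converges smoothly to the target
print(min(delayed) < 0.0)    # True: overshoots past the target
```

With no delay the position decays smoothly to the target, but with a 500 ms delay the corrective input keeps acting on outdated observations, so the position overshoots and oscillates before settling, mirroring the back-and-forth adjustments we observed.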

Another significant challenge participants encountered was related to the UGV’s behavior when responding to control inputs. While linear movements (e.g., moving forward and backward) were relatively predictable, rotational movements (e.g., turning left and right) proved problematic for all participants. This is critical for two main reasons. First, when participants struggled to control the UGV’s rotation to achieve the desired outcome, they often reverted to the oscillatory behavior described above. Second, the unpredictability of rotational movements exacerbates the learning curve for operating a UGV under delayed conditions. Research has shown that operators can adapt to a constant delay over time, but varying or unpredictable system responses make this adaptation much harder. Consequently, rotational unpredictability could steepen the learning curve, making it even more difficult for operators to control the UGV effectively.

A final challenge identified was the uncertainty about whether control inputs were successfully transmitted to the UGV. This problem highlights the need for clear feedback within the control interface. In our study, participants received no feedback confirming their input commands, which led to notable behavioral responses. Some participants pressed keys more aggressively or repeatedly when they did not observe immediate movement, interpreting the lack of motion as a failure to transmit their input. This often resulted in further oscillations, as the delayed and unpredictable motion of the UGV did not align with participants’ expectations. Such situations have also been shown to increase frustration and anger in operators navigating a UGV under delay, further compounding the challenges of maintaining control [22].

Situational Awareness In teleoperation, the operator must maintain situational awareness of the environment in which the robotic system is operating. This requires gathering information about the remote environment primarily through video camera feeds. Such awareness is crucial for the operator to provide accurate and informed instructions to the robot [23, 1]. However, if situational awareness is compromised—for instance, due to delays—it can have severe consequences for performance. This happens because the operator may misinterpret or misidentify the situation, leading to suboptimal commands and reduced effectiveness [9].

Our study (Section 2) highlights similar challenges. When operators navigated the path, approaching corners proved particularly difficult to manage. Here, corners initially fell outside the visible area of the video feed, making it challenging for operators to assess whether the robot was positioned correctly. This lack of visibility led to difficulties in managing and fine-tuning turns, requiring multiple corrective commands to realign the robot with the intended path. The inclusion of corners in the path was intended to observe how operators managed rotational movements and how the camera placement and video feed influenced their ability to control the UGV. The findings clearly demonstrate that operators are heavily constrained by the quality and coverage of the video feed, underscoring its critical importance when designing teleoperated UGV systems.

Additionally, operators encountered difficulties in estimating the robot’s path deviations during straight-line navigation, as the robot’s forward movement did not result in perfectly linear motion due to inherent physical limitations. Although these deviations were small, their inconsistency in magnitude and direction required constant attention, making alignment challenging. This unpredictability not only complicated navigation but also prevented operators from forming a reliable mental model of the robot’s behavior. Consequently, the absence of predictable motion patterns increases the cognitive load, steepens the learning curve, and underscores the inherent difficulty of direct-control systems under these conditions.

4 Opportunities for Delay-Invariant Telerobotic Interaction

Despite the challenges outlined in Section 3, delay-invariant telerobotics offers significant opportunities for advancement. Drawing on findings from our study and insights gathered during a brainstorming session, we identified key areas for improvement. Participants were asked two questions: (1) What would have helped you control it more effectively? and (2) What types of visual feedback would be useful? Their responses revealed several opportunities, which we evaluate in terms of their advantages and limitations.

More Predictable Behavior A major challenge in delayed teleoperation is that the relationship between input commands and actual movement is often inconsistent, making it difficult for operators to anticipate how the system will respond. By improving the consistency of the UGV’s response to commands—ensuring that the same input leads to the same motion—the operator can better predict how the system behaves, even under delay. While this does not eliminate the challenges of delayed feedback, it reduces additional sources of unpredictability, such as oscillatory corrections or drift. However, achieving greater predictability requires advanced control mechanisms, adding complexity to the system. Additionally, in highly dynamic environments, delays may still pose significant operational limitations despite more stable behavior.

Incorporate Feedforward Incorporating feedforward to display the robot’s intended path and behavior provides operators with a clear visual reference, improving their ability to anticipate movements [24]. This visualization helps operators understand the immediate consequences of their inputs, reducing errors and unnecessary corrections. By offering a clearer picture of the robot’s trajectory, feedforward enhances decision-making and promotes smoother control. However, generating accurate feedforward projections requires modeling the robot’s dynamics, which adds complexity to the system. To address inherent deviations or the effects of rough terrain, projections should include a degree of uncertainty, indicating possible areas where the robot might end up. The language used in these visualizations must be intuitive, requiring minimal training for operators to understand while also avoiding excessive cognitive load that could hinder their ability to perform intended actions [25]. This balance ensures that operators can effectively interpret the feedforward information and assess potential risks, especially in scenarios where precise path control is critical to achieving the desired outcome and avoiding hazards.
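One way to generate such a feedforward projection is dead reckoning with a unicycle (differential-drive) model: integrate the currently queued velocity commands forward over the known delay to preview where the robot will be. The sketch below assumes an idealized unicycle model, not the TB3's actual dynamics, and omits the uncertainty bounds discussed above:

```python
import math

def project_path(x, y, theta, commands, dt=0.05):
    """Integrate (v, omega) velocity commands to preview the robot's path.

    commands: list of (v [m/s], omega [rad/s], duration [s]) tuples.
    Returns predicted (x, y) waypoints to overlay on the video feed.
    """
    path = [(x, y)]
    for v, omega, duration in commands:
        for _ in range(int(round(duration / dt))):
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += omega * dt
            path.append((x, y))
    return path

# Preview: drive straight for 1 s at 0.2 m/s, then turn in place for 1 s
preview = project_path(0.0, 0.0, 0.0, [(0.2, 0.0, 1.0), (0.0, 0.5, 1.0)])
print(preview[-1])  # roughly (0.2, 0.0)
```

Rendering these waypoints (projected into the camera image) would give the operator the trajectory preview described above; widening the path with distance travelled is one way to express the growing positional uncertainty.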

Visualization of Command Transmission Providing visual feedback to confirm that a command has been transmitted and to display the estimated time for its execution can significantly help operators manage delays. This type of feedback ensures that operators are aware when their input has been sent and acknowledged by the system, reducing frustration caused by uncertainty about whether a command was successfully registered. Additionally, it allows operators to plan corrective movements more effectively, enhancing overall control and reducing unnecessary errors. However, incorporating this feature could increase the cognitive load on operators, especially if the interface already presents a high volume of information. Furthermore, misinterpreting the feedback can lead to mistakes, particularly in critical situations where precise and timely actions are essential.
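Such feedback could be driven by a lightweight acknowledgment tracker on the operator side: each command is stamped when sent, the robot echoes its identifier back, and the interface renders pending commands differently from acknowledged ones. The sketch below is our own illustration; the class and method names are hypothetical and not tied to any specific framework:

```python
class AckTracker:
    """Tracks sent commands until the robot acknowledges them."""
    def __init__(self):
        self._pending = {}  # command id -> time sent

    def sent(self, cmd_id, now):
        self._pending[cmd_id] = now

    def acknowledged(self, cmd_id, now):
        """Remove the command and return its measured round-trip time."""
        return now - self._pending.pop(cmd_id)

    def pending_ids(self):
        """Commands awaiting acknowledgment, for rendering in the UI."""
        return sorted(self._pending)

tracker = AckTracker()
tracker.sent("cmd-1", now=0.00)
tracker.sent("cmd-2", now=0.10)
rtt = tracker.acknowledged("cmd-1", now=0.52)
print(f"cmd-1 RTT: {rtt:.2f} s, still pending: {tracker.pending_ids()}")
```

The measured round-trip times could also feed the delay indicator already shown in our interface (Figure 2), replacing a static estimate with a live measurement.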

5 Conclusion and Future Work

This paper has explored the challenges and opportunities in achieving robust, delay-invariant telerobotic systems using a direct-control paradigm. By conducting a user study with varying delay conditions, we identified key challenges faced by operators, such as over-correction, unpredictable robot behavior, and decreased situational awareness. These challenges underscore the critical need for intelligent control interfaces and enhanced visual feedback mechanisms. Opportunities such as improving behavior predictability, incorporating feedforward visualization, and visualizing command transmission present promising pathways for mitigating the effects of delay in telerobotic interactions.

Future work will focus on developing and evaluating system prototypes that integrate these opportunities to validate their effectiveness in real-world applications. Specifically, we aim to design intelligent interfaces that minimize cognitive load while providing operators with clear, actionable feedback. By building on the insights gained from this study, we aim to advance the capabilities of delay-invariant telerobotic systems, ensuring improved operator performance and broader applicability in challenging remote scenarios.

Acknowledgments

This work was funded by the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" program and by the Special Research Fund (BOF) of Hasselt University, BOF23OWB29. The infrastructure for this work is funded by the European Union – NextGenerationEU project MAXVR-INFRA and the Flemish government.

References

  1. MD Moniruzzaman, Alexander Rassau, Douglas Chai, and Syed Mohammed Shamsul Islam. 2022. Teleoperation methods and enhancement techniques for mobile robots: A comprehensive survey. Robotics and Autonomous Systems 150 (2022), 103973. https://doi.org/10.1016/j.robot.2021.103973
  2. Carlos Marques, João Cristóvão, Paulo Alvito, Pedro Lima, João Frazão, Isabel Ribeiro, and Rodrigo Ventura. 2007. A search and rescue robot with tele‐operated tether docking system. Industrial Robot: An International Journal 34, 4 (2007), 332–338. https://doi.org/10.1108/01439910710749663
  3. R.R. Murphy. 2004. Activities of the rescue robots at the World Trade Center from 11-21 september 2001. IEEE Robotics & Automation Magazine 11, 3 (2004), 50–61. https://doi.org/10.1109/MRA.2004.1337826
  4. D. W. Hainsworth. 2001. Teleoperation User Interfaces for Mining Robotics. Autonomous Robots 11, 1 (2001), 19–28. https://doi.org/10.1023/A:1011299910904
  5. T.B. Sheridan. 1993. Space teleoperation through time delay: review and prognosis. IEEE Transactions on Robotics and Automation 9, 5 (1993), 592–606. https://doi.org/10.1109/70.258052
  6. Robert R. Burridge and Kimberly A. Hambuchen. 2009. Using prediction to enhance remote robot supervision across time delay. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, St. Louis, MO, USA, 5628-5634. https://doi.org/10.1109/IROS.2009.5354233
  7. Thomas B Sheridan. 1989. Telerobotics. Automatica 25, 4 (1989), 487–507.
  8. T. Kim, P. M. Zimmerman, M. J. Wade, and C. A. Weiss. 2005. The effect of delayed visual feedback on telerobotic surgery. Surgical Endoscopy And Other Interventional Techniques 19, 5 (2005), 683–686. https://doi.org/10.1007/s00464-004-8926-6
  9. Jing Du, Zhengbo Zou, Yangming Shi, and Dong Zhao. 2018. Zero latency: Real-time synchronization of BIM data in virtual reality for collaborative decision-making. Automation in Construction 85 (2018), 51–64. https://doi.org/10.1016/j.autcon.2017.10.009
  10. M. Meehan, S. Razzaque, M.C. Whitton, and F.P. Brooks. 2003. Effect of latency on presence in stressful virtual environments. In IEEE Virtual Reality, 2003. Proceedings. IEEE, Los Angeles, CA, USA, 141-148. https://doi.org/10.1109/VR.2003.1191132
  11. G. Hirzinger, J. Heindl, and K. Landzettel. 1989. Predictive and knowledge-based telerobotic control concepts. In Proceedings, 1989 International Conference on Robotics and Automation. IEEE, Scottsdale, AZ, USA, 1768-1777 vol.3. https://doi.org/10.1109/ROBOT.1989.100231
  12. Ignacio Abadía, Francisco Naveros, Eduardo Ros, Richard R Carrillo, and Niceto R Luque. 2021. A cerebellar-based solution to the nondeterministic time delay problem in robotic control. Science Robotics 6, 58 (2021), eabf2756.
  13. Parham M. Kebria, Abbas Khosravi, Saeid Nahavandi, Peng Shi, and Roohallah Alizadehsani. 2020. Robust Adaptive Control Scheme for Teleoperation Systems With Delay and Uncertainties. IEEE Transactions on Cybernetics 50, 7 (2020), 3243–3253. https://doi.org/10.1109/TCYB.2019.2891656
  14. A. Shahdi and S. Sirouspour. 2009. Adaptive/Robust Control for Time-Delay Teleoperation. IEEE Transactions on Robotics 25, 1 (2009), 196–205. https://doi.org/10.1109/TRO.2008.2010963
  15. Jason P. Luck, Patricia L. McDermott, Laurel Allender, and Deborah C. Russell. 2006. An investigation of real world control of robotic assets under communication latency. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction. ACM, Salt Lake City Utah USA, 202–209. https://doi.org/10.1145/1121241.1121277
  16. Ramviyas Parasuraman, Sergio Caccamo, Fredrik Båberg, Petter Ögren, and Mark Neerincx. 2017. A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings. arXiv:1710.06785. https://doi.org/10.48550/arXiv.1710.06785
  17. Ilja T. Feldstein and Stephen R. Ellis. 2021. A Simple Video-Based Technique for Measuring Latency in Virtual Reality or Teleoperation. IEEE Transactions on Visualization and Computer Graphics 27, 9 (2021), 3611–3625. https://doi.org/10.1109/TVCG.2020.2980527
  18. Robert Gruen, Eyal Ofek, Anthony Steed, Ran Gal, Mike Sinclair, and Mar Gonzalez-Franco. 2020. Measuring System Visual Latency through Cognitive Latency on Video See-Through AR devices. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, Atlanta, GA, USA, 791-799. https://doi.org/10.1109/VR46266.2020.00103
  19. Christiane Attig, Nadine Rauh, Thomas Franke, and Josef F. Krems. 2017. System Latency Guidelines Then and Now – Is Zero Latency Really Considered Necessary? In Engineering Psychology and Cognitive Ergonomics: Cognition and Design. Springer International Publishing, Cham, 3–14.
  20. Gary Witus, Shawn Hunt, and Phil Janicki. 2011. Methods for UGV teleoperation with high latency communications. In Unmanned Systems Technology XIII. SPIE, Orlando, Florida, United States, 80450N. https://doi.org/10.1117/12.886058
  21. Ryan K. Orosco, Benjamin Lurie, Tokio Matsuzaki, Emily K. Funk, Vasu Divi, F. Christopher Holsinger, Steven Hong, Florian Richter, Nikhil Das, and Michael Yip. 2021. Compensatory motion scaling for time-delayed robotic surgery. Surgical Endoscopy 35, 6 (2021), 2613–2618. https://doi.org/10.1007/s00464-020-07681-7
  22. Euijung Yang and Michael C. Dorneich. 2017. The Emotional, Cognitive, Physiological, and Performance Effects of Variable Time Delay in Robotic Teleoperation. International Journal of Social Robotics 9, 4 (2017), 491–508. https://doi.org/10.1007/s12369-017-0407-x
  23. Curtis W. Nielsen, Michael A. Goodrich, and Robert W. Ricks. 2007. Ecological Interfaces for Improving Mobile Robot Teleoperation. IEEE Transactions on Robotics 23, 5 (2007), 927–941. https://doi.org/10.1109/TRO.2007.907479
  24. Andreea Muresan, Jess Mcintosh, and Kasper Hornbæk. 2023. Using Feedforward to Reveal Interaction Possibilities in Virtual Reality. ACM Transactions on Computer-Human Interaction 30, 6 (2023), 1–47. https://doi.org/10.1145/3603623
  25. Uwe Gruenefeld, Lars Prädel, Jannike Illing, Tim Stratmann, Sandra Drolshagen, and Max Pfingsthorn. 2020. Mind the ARm: realtime visualization of robot motion intent in head-mounted augmented reality. In Proceedings of the Conference on Mensch und Computer. Association for Computing Machinery, 259–266. https://doi.org/10.1145/3404983.3405509