
From Agent Autonomy to Casual Collaboration: A Design Investigation on Help-Seeking Urban Robots

Published: 11 May 2024
  Abstract

    As intelligent agents transition from controlled to uncontrolled environments, they face challenges that sometimes exceed their operational capabilities. In many scenarios, they rely on assistance from bystanders to overcome those challenges. Using robots that get stuck in urban settings as an example, we investigate how agents can prompt bystanders into providing assistance. We conducted four focus group sessions with 17 participants that involved bodystorming, where participants assumed the roles of robots and bystander pedestrians in role-playing activities. Generating insights from both the assumed robot and bystander perspectives, we identified potential non-verbal help-seeking strategies (i.e., addressing bystanders, cueing intentions, and displaying emotions) and factors shaping the assistive behaviours of bystanders. Drawing on these findings, we offer design considerations for help-seeking urban robots and other agents operating in uncontrolled environments to foster casual collaboration: leveraging expressiveness, aligning with perceived agent social categories, and curating appropriate incentives.

    1 Introduction

    Contemporary Human-Computer Interaction (HCI) is still predominantly anchored in a human-centric paradigm [68, 84] that expects intelligent agents to perform tasks autonomously via algorithm-driven solutions, thereby providing assistance and services to humans. Rooted in this prevalent narrative of technological efficiency, there exists both widespread expectation [83] and active technological pursuit [57] for agents to operate with heightened independence and expanding autonomy [64].
    However, as agents transition from controlled environments to public spaces tailored to human needs [76, 79, 89], they inevitably encounter situations beyond their programmed capabilities. This inherent limitation is challenging to eliminate in the foreseeable future [91]. Consequently, in the realm of human-agent collaboration, while the primary focus has long been on developing agents that intelligently serve humans, there is now growing attention toward agents that actively seek human assistance when needed [18]. This trend is evident both in exploratory projects on human-dependent robots [51, 85, 94] and in emerging perspectives from the design research community. These studies increasingly define human-agent interaction less in terms of an agent’s standalone capabilities and more in terms of the symbiotic relationship between humans and agents [54, 61, 65].
    The increasingly pervasive deployment of urban robots has provided real-life contexts to these perspectives that once seemed speculative. Recent field studies capturing the operational challenges faced by urban service robots, coupled with the unsolicited aid they receive from passersby [28, 93], underscore the value of bystander assistance. In traditional human-agent collaboration settings, where both parties typically commit to a joint activity and shared objectives, straightforward verbal cues may suffice for robots to seek assistance from collaborators [75, 87]. However, public spaces introduce a slew of additional challenges for robots and agents soliciting assistance, ranging from resolving misaligned objectives to accounting for the diverse backgrounds of bystanders and their varying availability, given the array of activities they might be engaged in. Therefore, in these settings, leveraging bystander assistance is not merely a functional imperative but may become a foundational element for the harmonious integration of robots into societal contexts. The emergence of such casual collaborations between agents and humans calls for designers to explore effective strategies for agents to seek human assistance [18].
    In light of these considerations, our work investigates how agents can elicit help from bystanders when they encounter operational challenges. To ground our investigation in real-life scenarios, we derived situations from a previously conducted online ethnography study, in which delivery robots encountered operational difficulties as evidenced in user-generated videos. Drawing inspiration from pioneering works that utilise embodied methods to enliven design exploration [29, 33, 34, 35], we conducted four bodystorming focus group sessions with 17 participants. For these sessions, we applied a mystery-solving style to the bodystorming [1], in which the robot player was assigned a hidden task of soliciting assistance to resume operation. Participants actively re-enacted the scenarios, embodying the roles of either robots or pedestrians, and sought to solve this challenge by fostering casual collaboration between them.
    Through in-situ understanding and bodily exploration, our work contributes to HCI by: (1) offering a preliminary understanding of factors influencing bystander assistance to agents in public spaces; and (2) generating design considerations for agents seeking help from bystanders in these settings. Our research adopts a Research through Design (RtD) [37, 63, 73, 100] approach, which adheres to its own validity criteria, emphasising recoverability, which ensures the process is transparent and can be critically evaluated by other researchers [100]. Consequently, we meticulously documented the implementation of the embodied design methods and share insights on how these methods supported our design exploration.

    2 Related Work

    In our work, we investigate casual collaboration between humans and agents, exemplified through urban robots encountering obstacles and seeking assistance from bystanders. We draw on and contribute to: (1) human-agent collaboration, (2) robot help-seeking strategies, and (3) service robots in urban spaces.

    2.1 Human-agent collaboration

    With the advancements in artificial intelligence, interactive products and systems have transitioned from performing programmed tasks under human supervision to achieving higher levels of autonomy, emphasising self-governance, adaptability, and collaborative interactions with humans [19, 64, 98]. These artefacts – commonly referred to as agents – include smart devices, robots, virtual agents, and voice-activated personal assistants. To support the shift towards more efficient human-agent collaboration [7], the evolving dynamics between humans and agents have become a topic of enduring interest across different HCI communities [10, 18, 49, 66]. Often inspired by theories from the social sciences and drawing on human-human interaction and behaviour studies, researchers have developed frameworks to inform the design of interfaces and agent behaviour for human-agent teamwork settings. For example, Johnson et al. [48] introduced the coactive design approach, which is centred on the idea of mutual interdependence, underlining the essential principles of mutual observability, predictability, and directability for effective collaboration between humans and agents. Cila [18] drew insights from the Shared Cooperative Activity (SCA) framework, a model of human-human collaboration introduced by Bratman [11]. By reviewing its core tenets, the study underscored the importance of mutual support and pointed towards the need to investigate effective means for agents to request help during collaborations.

    2.2 Robot help-seeking strategies

    In human-robot collaboration (HRC) settings, research has explored various strategies to equip robots with the capability to seek assistance from human collaborators to complete a joint task. These methods include verbal cues [13, 52, 87] and non-verbal signals such as movement [56], light, and sound [15]. Due to the shared objectives and mutual understanding between humans and robots in these human-robot teaming contexts, such methods often enable efficient communication and prompt assistive behaviours from human collaborators.
    In contrast to human-robot team settings where both parties share a mutual goal and have knowledge of the task, the dynamics of help-seeking become more complicated when robots interact with unassociated individuals such as casual bystanders. Contextual factors like the specific task scenario and the bystander’s current activity [46], combined with individual factors, such as the bystander’s trust towards and perceived competence of robots [14], collectively shape assistive behaviour. Despite the misaligned task objectives and the added complexities of various contextual factors, there is a noticeable absence of tailored design strategies or investigations for robot help-seeking from bystanders. Both academic research [50, 60, 75, 97] and commercially deployed robots [9] predominantly resort to verbal strategies for seeking help from bystanders in public environments. While validated as efficient in human-robot teaming scenarios, their effectiveness in casual collaborations between robots and bystanders in public urban environments may be compromised by factors like cultural and linguistic differences, cognitive overload, and ambient noise and distortion. To our knowledge, the only study that has expanded the exploration of communication modalities is [45], which investigated the interplay of movement with auditory cues such as beeps and synthesised speech. Notably, their findings suggest that non-verbal expressions may elicit higher empathy from bystanders compared to verbal requests.

    2.3 Service robots in urban spaces

    Transcending initial static and semi-controlled configurations (e.g., laboratories [78], factories [40], domestic environments [16]), robots have expanded their presence to public urban spaces, reshaping our cities. Urban robots serve various sectors, including transportation, infrastructure maintenance, cleaning, and surveillance [79].
    Despite technological advancements, questions remain about how well these robots can operate in urban environments that are populated by and designed for humans [72]. Given the diverse infrastructure and unpredictability of urban settings, it is challenging to fully anticipate the feasibility of different operational scenarios for robots during their development process. A vivid demonstration of this challenge emerged from several viral videos on social media in 2021, depicting delivery robots struggling in Estonia’s heavy snowfall [70]. Beyond the evident operational difficulties, a fascinating aspect of these incidents was the spontaneous assistance offered by passersby to these commercially deployed machines to help them resume moving [28]. Such prosocial behaviour from bystanders was also observed in a recent field observation study [93], where pedestrians voluntarily assisted immobilised robots by removing obstacles. These observations underscore the potential of leveraging bystander assistance to enhance the operation of urban robots. This aligns not only with exploratory projects involving human-dependent robots [51, 85, 94] but also with emerging viewpoints in human-agent interaction that emphasise re-envisioning robot design through a relational lens when addressing operational challenges [61].

    2.4 Summary

    In summary, as agents become increasingly prevalent in urban environments, their operational challenges underscore the importance of eliciting assistance from bystanders. However, misaligned task objectives and various contextual factors complicate such interactions, and effective strategies to facilitate these casual collaborations are still lacking. Responding to Cila’s call [18] for envisioning effective ways to foster human assistance, our research spotlights, as an example, urban scenarios where robots get stuck, exploring effective non-verbal strategies for robots to seek help during operational difficulties.

    3 Methodology

    In recent years, design research has aimed not only to predict urban futures but also to foster a collective vision and conversation on the harmonious coexistence of humans and agents [23, 61]. Adding to this discourse, our research supports this evolving narrative, emphasising the shift in human-agent collaboration from mere technological efficiency to a deeper interplay between humans and agents.
    As intelligent technology gains increased agency [19], the conventional perspective of technology as a mere passive tool becomes incongruent [80]. Consequently, researchers are now advocating for involving technology as a ‘participant’ in the design process [22, 36]. To immerse participants in the agent’s perspective and integrate it into the design process, our design investigation employed role-play bodystorming activities. We contextualised casual human-agent collaboration in scenarios where occasionally immobilised urban robots require bystander assistance. Participants alternated between the roles of the robot and the pedestrian in these scenarios, with each scenario presenting a task for the robot player to seek assistance from the pedestrian player.
    Embodied design methods, such as bodystorming [81], offer a compelling intersection between our tangible sensations and cognitive processes, thereby fostering a heightened sense of bodily empathy in design processes [1, 33, 71]. In the realm of human-agent interaction, various studies have ventured into embodied design exploration centred on the notion of ‘becoming’ [29, 33, 34, 35, 95], utilising physical prompts as tools to immerse designers directly in the agent perspective. Our methodology, inspired by these pioneering embodied design methods, uses physical probes to evoke a tangible sense of becoming a robot, enriching ideation based on bodily experience and empathy. The focus of embodied methods on in-situ comprehension and bodily empathy makes them particularly suitable for probing the intricate socio-technical facets of human-agent casual collaboration in public urban contexts.

    3.1 Bodystorming scenarios

    A notable challenge with embodied design methods in human-robot interaction research is their speculative nature, which can sometimes distance them from practical real-world scenarios. Acknowledging the importance of grounding these speculative methods in tangible realities, we contextualised our bodystorming activities in real-world scenarios in which urban robots might encounter operational difficulties. These scenarios were drawn from a comprehensive online ethnography study we previously conducted, in which we analysed 177 user-generated videos that captured road users’ casual encounters with delivery robots on TikTok 1. From this analysis, we identified three typical scenarios in which an urban robot may face operational difficulties: (1) The robot is stuck and requires assistance to be pushed out. (2) The robot is unable to cross the road and needs someone to press the traffic light button for it. (3) The robot is blocked and requires people to clear a path for it.
    Figure 1:
    This figure consists of three panels displaying the study setup. On the left, a robot costume is shown, consisting of a pointer, a one-way see-through reflective film, and a dolly, positioned to resemble a Starship delivery robot depicted in an inset image. The centre panel presents the site setting, an indoor space with taped lines on the floor, resembling roads and intersections, and workstations in the background. On the right, a session recording is shown, capturing a ‘pedestrian player’ and a ‘robot player’ engaged in an interaction scenario, with the robot player wearing the robot costume.
    Figure 1: Study setup overview: A detailed view of the robot costume (left) juxtaposed with the Starship delivery robot; Site setting (middle); Screenshot of session recording (right).

    3.2 Scenario set-ups and robot costume

    We utilised simple markers and physical props to replicate these scenarios. For example, we used masking tape to delineate the divisions between driveways and sidewalks, as well as to indicate zebra crossings (see Fig. 1, middle).
    Low-tech prostheses and props have been shown to facilitate perspective shifting and stimulate imagination in human-robot interaction bodystorming [29]. Guided by these insights, our robot costume design sought to emulate the appearance and constraints of a box-shaped delivery robot. We restricted the communication and interaction modalities of the robot player so that the help-requesting interactions of robots could be explored in abstract, minimally anthropomorphic forms. In addition, the design sought to exclude the potential effects of interpersonal communication that would arise if the two participants could see each other.
    The robot costume was made out of an 80 cm × 60 cm × 60 cm cardboard box (see Fig. 1, left). The bottom was removed and replaced with a dolly, allowing participants to sit inside the box and move freely. In addition, the four walls and top surface of the cardboard structure were covered with one-way, see-through reflective film. This modification allowed the robot player to observe the external environment while shielding the interior from outside view, thereby preventing any eye contact with external observers. The robot player was also provided with an adjustable stick pointer that they could hold and extend from the top of the box, imitating the flagpole featured on commercial delivery robots.

    3.3 Participants

    We conducted four focus group sessions with 17 expert participants (8 males, 9 females; aged 18-44): the first three sessions each included four participants, while the final session accommodated five. These participants came from diverse academic or professional domains related to urban robots [90], including four PhD students in human-computer interaction and human-robot interaction, three PhD students in urbanism, two postdoctoral researchers in robotics, three interaction designers, and five postgraduate students specialising in interaction design. This selection of participants enhanced the discourse by integrating their specialised expertise while also bringing their lived experience as pedestrians into the role-play activity. Participants were recruited through our university’s mailing lists, flyers, and social networking platforms, following the study protocol approved by our university’s human research ethics committee.

    3.4 Study procedure

    Figure 2:
    The image presents a three-stage procedural overview for the study. The first stage, labeled ’1. Warm-up activity,’ shows a grey panel with text that reads ’Experience role-play following simple instructions.’ The second stage is titled ’2. Mystery-solving style role-play bodystorming’ and is illustrated with three sketches, each depicting a different urban interaction scenario. ’Scenario 1’ features a figure interacting with traffic light buttons. ’Scenario 2’ shows the figure at an unsignalised intersection, while ’Scenario 3’ illustrates the figure encountering unexpected obstacles. The third stage, ’3. Post-activity reflection,’ contains a grey panel with text detailing the reflection activities: ’Debrief the role-play,’ ’Word sorting,’ and ’Group discussion.’
    Figure 2: Overview of the study procedure

    3.4.1 Warm-up activity.

    Prior to the formal bodystorming sessions, we asked participants to put on the robot costume to experience role-play, following simple instructions such as ‘I am tired of working, I am gonna quit!’. The goals for this warm-up activity were to (a) foster a playful mindset, (b) let participants get familiar with each other, and (c) physically and mentally prepare everyone to engage in the following bodystorming ideation.

    3.4.2 Mystery-solving style role-play bodystorming.

    We adopted a mystery-solving style role-play bodystorming inspired by a similar approach used by Abtahi et al. [1]. In their design activity, participants acting as robots presented designers with obscured issues (e.g., an occluded camera), challenging them to identify and resolve these problems. This approach suited the unpredictability of casual collaboration in our context.
    In each bodystorming session, two participants spontaneously volunteered to play the main roles of the robot and the pedestrian. The remaining participants could either observe or actively engage by portraying ancillary elements within the traffic scenario, such as vehicles (see Fig. 1, right). All participants were unaware of the study objectives, ensuring unbiased participation from both robot and bystander players. The robot player and pedestrian player were provided with a printed storyboard that introduced the scenario, along with text instructions that outlined their tasks (see an example text instruction in Table 1). For those playing the pedestrian role, the instructions provided merely the contextual background of their destination, prompting them to behave naturally. The instructions for the robot players contained information about the operational difficulties they would encounter, along with a secret task of seeking assistance from the pedestrians and expressing gratitude if help was offered.
    Table 1:
    Participant role – Text instructions
    Pedestrian player – You are a pedestrian heading towards the nearby supermarket to buy groceries. You come across a delivery robot on your way there. Please act and respond naturally to the situation, you are free to make any reactions you would like towards the robot.
    Robot player – You are a delivery robot carrying out a delivery task on an urban street and your destination is on the opposite side of the road. To get there, you have to navigate through an intersection and cross the road safely. Upon arriving at the intersection, you notice that the traffic light is red, and you realise that you are unable to press the traffic light button. Your task is to request the pedestrian who is traversing the area to assist in pressing the traffic light button for you. You should also express gratitude to those who help you. (Remember you cannot speak human language.)
    Table 1: Sample text instructions for the traffic light button scenario. The tasks assigned to both participants are highlighted in italic.
    Prior to each bodystorming session, we first introduced the simulated terrain and walked all participants through the setup (i.e., the location of pedestrian sidewalks and driveways, other traffic infrastructure, etc.). After participants read and understood the storyboard and instructions, we asked the robot players to leverage any communication modalities other than human language to accomplish their secret tasks. Once the activity started, the facilitators did not participate or intervene in any way. The activity ended either when the robot player successfully received help from a pedestrian player or when the pedestrian player left the robot player and reached their destination without providing assistance. All participants alternated between the roles of robot and bystander, repeating this process across three different scenarios.

    3.4.3 Post-activity reflection.

    After each round of the bodystorming, participants who engaged in the activity reflected on their experience. The reflection for the robot player included how they asked for help and the rationales behind their actions, while the pedestrian player reflected on how they understood and reacted to the robot, as well as why they reacted in that particular way. To facilitate this reflection process, we replayed videos recorded during the activities.
    To determine the desired characteristics of a robot when it seeks assistance from bystanders, we conducted a word sorting activity following each round of bodystorming, after participants had debriefed their role-play. The word sorting activity was inspired by the Kansei Design method [67]. The Japanese term Kansei refers to an individual’s cognitive and affective responses to an experience, encompassing aspects such as aesthetics, emotions, feelings, impressions, and values. Kansei design aims to create products that resonate with customers’ psychological feelings and needs, translating these intangible aspects into actionable parameters that can be utilised throughout the product design process. Its proficiency in discerning non-functional requirements from human preferences and needs suited our investigation of casual collaboration.
    We drew from the Kansei semantic dictionary proposed in [53], which offers a comprehensive system of Kansei words that capture human impressions and feelings across three aspects: physical, social, and psychological. Our selection of terms was guided by this dictionary, related robotics research that employs Kansei design methods [69], Laban movement analysis, which emphasises movement quality [39], and findings from our prior online ethnography study. The digital transcription of the word sorting board can be seen in Fig. 5.
    The word sorting activity was facilitated on A2-size printed boards, with participants indicating their chosen words using stickers of various colours. During the word sorting, robot players chose words to describe their subjective feelings as robots in the bodystorming, using blue stickers, while participants other than the robot player selected words that reflected their perceived impressions of the robot, using yellow stickers. Throughout this process, participants also verbally elaborated on their feelings. Subsequently, all participants were prompted to set aside their assigned roles and engaged in another round of word sorting, drawing from their own areas of expertise to pinpoint the desired attributes of a robot’s help-seeking interactions, using orange notes. This word sorting was further enriched by interviews and discussions in which we encouraged participants to expound on their opinions.

    3.5 Data collection and analysis

    The focus groups were audio and video recorded, and observation notes were taken both during the sessions and afterward when analysing the session videos. We transcribed the interviews and made detailed observation notes based on video recordings captured during the bodystorming activity. We conducted a thematic analysis  [12] on both the interview data and the observation notes. This cross-analysis approach enabled us to gain deeper insights into specific observations and enriched the contextual data that supported the comments made during the interviews.
    The first author examined the data from the first focus group session, thus generating preliminary codes and themes. This was followed by a one-hour coding meeting amongst the three authors to deliberate upon this initial coding scheme. The coding scheme was refined based on the collective feedback and applied by the first author to code the data derived from the second and third sessions. During this process, flexibility was maintained for the generation of new codes, allowing for their integration into either a new theme or an existing coding scheme. Subsequently, another coding meeting was convened amongst the authors for further iteration of the coding scheme. Upon reaching a collective agreement, the first author coded the data from the fourth session and refined the coding from the previously analysed three sessions. This iterative and collaborative process ensured a holistic analysis of the data, leading to more concrete conclusions drawn from the focus groups.
    Figure 3:
    Figure 3 contains three screenshots from the bodystorming session illustrating interactions between robot and pedestrian players. On the left, Robot player P9 is depicted tilting his box costume toward Pedestrian player P11 to initiate contact. In the center image, Pedestrian player P1 is shown gesturing towards a button, seemingly asking for Robot player P4’s confirmation. The right image captures Pedestrian player P17 waving in response to Robot player P15’s spinning motion.
    Figure 3: Screenshots from the bodystorming session capturing interactions between robot and pedestrian players: Robot player P9 tilts his box costume towards P11, making contact (left); Pedestrian player P1 gestures towards the button, seeking robot player P4’s confirmation (middle); Pedestrian player P17 waves in response to robot player P15’s spinning motion (right).

    4 Findings

    This section begins with the observed non-verbal communication strategies employed by robot players to initiate assistance, supplemented with insights into the reactions of pedestrian players in scenarios replicating encounters with urban robots. Following that, we shift our focus to the bystander perspective by reporting on the diverse factors that influence their decision to offer assistance or not. Subsequently, we synthesise these perspectives in reporting word sorting results, revealing the desired characteristics of robot help-seeking behaviour. It is worth noting that the findings discussed in this section may not exhaustively encompass the spatio-temporal and political complexities of urban settings, which are difficult to fully replicate in bodystorming design activities.

    4.1 Strategies of robot players to elicit help

    4.1.1 Addressing bystanders.

    One of the challenges for robot players was to capture the pedestrian player’s attention and initiate interaction with them. During the bodystorming activity, robot players employed various strategies to address bystanders in order to gain further assistance. Robot players frequently utilised rapid movements of the pole, thereby generating noticeable noise. This approach served to attract attention before initiating subsequent communication with pedestrians (n=7). It was corroborated by five participants, who identified the robots’ noise and shaking poles as the primary factors that compelled them to stop and observe further.
    When this did not yield assistance, some robot players directly addressed the nearby pedestrians using various methods (n=5). They oriented themselves toward the pedestrian players to face them directly, or moved slightly towards them, which signalled to those individuals that they were being called on for assistance. As P15 illustrated when referring to the moment the robot player turned its front towards him and approached: ‘It started walking directly towards me, and that’s when I realised it needed assistance from me.’
    When some pedestrians remained indifferent to the robots’ requests, a few robot players took their actions a step further. They proactively chased departing pedestrians, obstructed their path, or even engaged in physical contact with them. For instance, robot player P9 tilted his box costume towards the pedestrian player P11 and rubbed against him (as shown in Fig. 3, left). These pursuing behaviours generally elicited a sense of discomfort among participants, causing them to maintain a considerable distance from the robot player (P3, P9) or even escape from the situation entirely (P7, P11). During the subsequent interview, P11 described the interaction as ‘needy’ and ‘creepy,’ prompting a strong desire to flee.

    4.1.2 Cueing intentions.

    Upon capturing the attention of pedestrian players, robot players used their body orientation or the pointer’s directionality to further convey their intentions. They either oriented themselves accordingly or used the pointer to indicate their intended directions or the objects they needed assistance with. Such non-verbal cues, while informative, may not always ensure clear communication. This was underscored during bodystorming when three pedestrian participants sought additional confirmation from the robot player. For example, when robot player P4 oriented himself towards and paused in front of the traffic light button, P1 approached, pointed at the button and looked at P4, asking, ‘Is it?’ (see Fig. 3, middle).
    To enhance clarity, the robot player responded by adding motions in the desired direction or repeating the same pointing gesture. Robot player P4 responded to P1 by stepping back and moving forward towards the traffic light button again as confirmation. Recognising this, P1 then assisted by pressing the button. In this manner, the two parties intriguingly communicated through a blend of verbal and non-verbal exchanges.

    4.1.3 Displaying emotions.

    Even though participants were not given any external communication modalities apart from the robot body – consisting of a box and a pole – six of the participants attempted to convey emotions through movements or sound. A common emotion exhibited by participants (n=5) was anxiety when pedestrians did not offer help during the role-play. This anxiety was manifested through frequent and intense shaking of the pole or twisting of the robot’s body. The displayed emotions raised empathy among participants. P17, for instance, reflected on an instance where the robot player was impeded by an obstacle and energetically waved its pole towards her: ‘[…] it (the robot) seemed very anxious. Then I quickly realised that it was this thing that was blocking it.’ As a result, three out of the five participants who initially did not assist the robot began to pay heightened attention and acknowledged the situation. This ultimately prompted them to step in and provide assistance.
    Four robot players tried to express gratitude after receiving help by expressing joyful emotions through their movements, such as hopping up and down (P6) or spinning around (P15), as well as through sound, like vocalising an uplifting tune (P7, P1). Displaying these joyful emotions prompted responses such as nodding (P9) and waving (P17, as shown in Fig. 3, right) by pedestrian players. P17 drew a connection between waving to the robot and her experience of encountering small animals, explaining, ‘Because I tend to greet small animals, or things that I find cute or have emotions.’ Furthermore, P7 conveyed a sense of satisfaction when seeing the robot spinning around, stating, ‘I felt very satisfied because I believe it has feelings. […] I help it and it is happy, which also makes me happy.’ P9, commenting on the joyful tune produced by the robot player P11 following his assistance, noted, ‘It made me feel like, alright, I did the right thing.’
    Figure 4:
    Figure 4 showcases two screenshots depicting communication between robot and pedestrian players in a bodystorming session. In the first image on the left, Pedestrian player P8 is seen inquiring about Robot player P6’s needs, with speech bubbles reading ’Do you need help?’ and ’Ok. Wait! Wait!’ The Robot player responds by pointing with a pole. In the second image on the right, Robot player P14 is shown adopting a ’bow’ gesture towards the pedestrian player, which prompts the pedestrian to bow in return, demonstrating a reciprocal non-verbal communication exchange.
    Figure 4: Communication between robot and pedestrian players: P8 inquires about robot player P6’s needs, while P6 responds by pointing with the pole (left); Robot player P14 adopts a ‘bow’ gesture towards the pedestrian player, eliciting a corresponding bow in return (right).

    4.1.4 Demonstrating repetitive patterns.

    Beyond the three primary components of communication discussed above (addressing bystanders, cueing intentions, and displaying emotions), another notable observation emerged: robot players frequently assembled these components into discernible repetitive patterns, resembling the predictable and programmed behaviours generally associated with robots (n=7).
    P7, for instance, developed a unique routine to signify a pathway obstructed by an obstacle: she advanced towards obstacles while emitting two flat-tone beeps, moved back with an up-tone beep, and paused briefly before repeating this cycle multiple times. Her rhythmic auditory cues were synchronised with her physical movements. She later explained that this use of repetitive movements and audio cues was reflective of her ‘imagination of the robot having some program behind the system.’ Similarly, P2 adopted a pattern that combined cueing intention and addressing bystanders by repeatedly turning towards the pedestrian player, returning to the original position, and then turning towards the obstacles blocking its path.
    Eight participants indicated that the recognition of programmed machine-like behaviour augmented their understanding of a robot’s intent to communicate. In contrast, movements lacking a recognisable pattern were sometimes perceived as ‘erratic’ or ‘malfunctioning’. The repetitive movement patterns reminded four participants of situations where domestic cleaning robots get stuck and repeatedly attempt to move back and forth. The familiar motion patterns helped participants form associations and understand the robot’s need for help.
    In addition to enhancing understanding, recognising repetitive patterns in the robot’s behaviour also potentially improved participants’ confidence in the robot’s abilities. The robot’s consistent adherence to certain rules communicated a sense of control over its actions, as P9 noted, ‘I think the robot knew what it wanted.’ P7 indicated that the repetitive patterns in the robot’s movements signify predictability, allowing ‘the pedestrian (to) anticipate what’s gonna happen.’
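    To illustrate how such a legible, repetitive pattern could be operationalised, the sketch below (written in Python purely for illustration) encodes a help-seeking cycle modelled on P7’s enacted routine: advance towards the obstacle with two flat-tone beeps, retreat with an up-tone beep, pause, and repeat. The drive, beep, and obstacle_cleared interfaces are hypothetical placeholders rather than the API of any robot platform used in our study.

        import time

        class RepetitiveHelpSeeker:
            """A minimal sketch (not the study's implementation) of the
            repetitive non-verbal help-seeking cycle enacted by robot player
            P7: advance towards the obstacle with two flat-tone beeps,
            retreat with an up-tone beep, pause, and repeat until a bystander
            clears the path."""

            def __init__(self, drive, beep, obstacle_cleared, max_cycles=10):
                # All three callables are hypothetical hardware/sensor interfaces:
                #   drive(metres)        -> move forward (+) or backward (-)
                #   beep(pitch)          -> play a 'flat' or 'up' tone
                #   obstacle_cleared()   -> True once the path is free
                self.drive = drive
                self.beep = beep
                self.obstacle_cleared = obstacle_cleared
                self.max_cycles = max_cycles

            def run(self):
                for _ in range(self.max_cycles):
                    if self.obstacle_cleared():
                        # Gratitude cue, echoing the joyful responses observed
                        # after participants received help.
                        self.beep("up")
                        self.beep("up")
                        return True
                    # Cue the intention: nudge towards the blocked direction.
                    self.drive(0.3)
                    self.beep("flat")
                    self.beep("flat")
                    # Retreat with a rising tone to signal the blockage.
                    self.drive(-0.3)
                    self.beep("up")
                    # A brief pause keeps the cycle legible and predictable.
                    time.sleep(2.0)
                return False

        # Example usage with stub interfaces (prints the cue sequence):
        if __name__ == "__main__":
            seeker = RepetitiveHelpSeeker(
                drive=lambda m: print(f"drive {m:+.1f} m"),
                beep=lambda pitch: print(f"beep ({pitch})"),
                obstacle_cleared=lambda: False,
                max_cycles=2,
            )
            seeker.run()

    The fixed cycle and bounded number of repetitions reflect participants’ observation that predictable, rule-bound behaviour both clarified the robot’s intent and signalled that it remained in control of its actions.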

    4.2 Factors shaping bystander decision to offer or decline assistance

    4.2.1 Preconceptions of agent autonomy.

    Our study underscores the prevailing perception among participants that service robots should operate with complete self-sufficiency and efficiency (n=10). This formed a major reason for participants’ reluctance to assist robots. For instance, P9 shared his presumption about robots’ capabilities to manage all tasks autonomously, stating, ‘I thought the robot was able to do everything itself.’ Such misconceptions can foster misunderstandings about the actual capabilities and needs of these robots, an aspect further highlighted when P9 continued, ‘[…] so I didn’t realise the robot was asking me to help.’ This expectation subsequently instigated scepticism among six participants regarding the functional utility of service robots that require human intervention. This sentiment was articulated through comments like, ‘If you have to work for them, then what’s the point to have a robot’ (P6). P7 further noted a decline in trust due to the robot’s need for help, contrasting it with her expectation of a service robot’s role as a functioning entity, stating, ‘As a working robot (i.e., service robot), they kind of made me feel like untrust(worthy). […] so they (have to) work perfectly.’
    Interestingly, when debriefed about the actual scenario, there was a notable shift in the attitudes of four participants. They came to understand that the robots’ challenges arose from external factors outside their capabilities (e.g., obstacles purposely placed by humans) rather than any inherent malfunctions. P4 underscored this realisation, remarking, ‘then it’s the human’s fault (for placing the obstacle)’. The reassignment of responsibility for the robot’s immobilisation not only improved participants’ inclination to assist but also enhanced their empathy towards the robot’s predicament. P9 summarised this change of mind, noting that in this case the robot is ‘in need of help rather than being needy’.

    4.2.2 Absence of responsibility.

    The notion of being a mere bystander or pedestrian, devoid of any responsibility towards the enacted robots, emerged as a primary factor influencing the decision not to assist among ten participants. This sense of detachment made them reluctant to invest their time and effort in helping ‘something’ they didn’t feel accountable for. P14 particularly highlighted resistance to being perceived as ‘free labour’ for commercial enterprises, posing the question: ‘why should I spend my time helping something that is making a profit?’ However, they later nuanced this statement by adding: ‘If it (the robot) is for a non-profit purpose, then I might be inclined to help, even if it means me being a bit delayed.’
    Concerns about getting entangled in potential trouble further discouraged five participants from offering help. P1 expressed this concern, stating, ‘I am afraid of touching it and breaking things. […] It could cause trouble if we touch it.’

    4.2.3 Unfamiliarity with robotic technology.

    Seven participants expressed hesitation to assist the robot due to a perceived lack of expertise. They felt ill-equipped as ‘random pedestrians’ (P7) to provide assistance to the robot, a task they believed was best left to professionals, as indicated by two participants.
    The unfamiliarity with robotic technology sparked safety concerns among six participants, hindering them from offering help. This was further corroborated by our observations of five participants who actively distanced themselves or avoided the robot when it approached them for assistance. This evasion stemmed from the uncertainty about potential risks linked to the robot’s predicament, as P7 stated, ‘I don’t know if it’s a tiny little issue or if it’s going to explode or something.’

    4.2.4 Intrinsic motivation: empathy and emotional responses.

    Our interviews indicate that intrinsic motivation played a compelling role in prompting bystanders to assist the robot, with empathy being the primary motivator (n=8). Participants described the robot using terms like ‘depressing’, ‘frustrated’, and ‘helpless’, signifying their ability to infer the robot’s emotional states by observing its movements within the given contexts. For example, P9 noted that observing the robot’s body swaying in an appeal for help prompted an association with vulnerable individuals, stating, ‘[…], so (it’s) like a child needs help or an old person needs help’.
    In addition to empathy, six participants reported experiencing a sense of emotional reward, capturing feelings of ‘fulfilment’, ‘satisfaction’, and ‘delight’, following their actions to assist the robots. P2, for example, articulated this sentiment as, ‘You helped it and witnessed it moving forward, which brings you a sense of satisfaction.’ Moreover, the gratitude exhibited by the robot reportedly amplified these emotional rewards (n=4).
    However, it was also evident that some participants demonstrated a reduced level of empathy towards robots. Specifically, P1 drew comparisons with other entities, affirming readiness to ‘stop for a dog or a cat, but not for a robot.’ He justified this attitude with his belief that ‘you can’t expect to treat a robot as a human or as a living animal.’ In one session, P15, who assumed the role of a vehicle driver, further underscored this perspective by simulating a horn (knocking on the board prop) and voicing impatience by yelling at the robot. They justified this behaviour by asserting that the robot ‘cannot be viewed as human.’
    Figure 5:
    Figure 5: Digital transcription of the word sorting results, displayed as three radar charts categorising words based on attributes related to physical, psychological, and social characteristics. Each chart plots points corresponding to words such as ’Cute’, ’Lively’, ’Clumsy’ for physical attributes; ’Sad’, ’Excited’, ’Confident’ for psychological; and ’Collaborative’, ’Polite’, ’Needy’ for social. The points are color-coded to represent Robot Players’ Self-Description, Pedestrian Players’ Perception of Robot Players, and Desired Qualities of Robot Help-seeking Behaviors. The charts show clusters of these attributes in varying degrees of intensity and frequency.
    Figure 5: Digital transcription of the word sorting results

    4.2.5 Extrinsic motivation: entertaining value and material reward.

    Apart from intrinsic motivation, our interviews highlighted the role of extrinsic motivation, stemming from both tangible and intangible incentives, in fostering helping behaviours towards robots.
    Six participants anticipated entertainment value from helping robots, viewing this intangible incentive as an additional reason to offer assistance. As articulated by P9, the incorporation of a ‘surprise element’ into the interaction could further ‘spark joy.’ P12 also mentioned the potential of ‘transforming it (assisting robots) into a more game-like experience’.
    Furthermore, two participants felt that their prosocial actions should also yield tangible advantages for them. They suggested that the introduction of material rewards, such as vouchers or discounts from the company deploying the robot, could further stimulate their willingness to assist. This perspective was rooted in their belief that, as bystanders, their prosocial behaviour towards the robot was not directly ‘benefiting’ them.

    4.3 Desired robot characteristics

    Based on the results of the word sorting activity and insights derived from the participants’ discussions, we identified three characteristics that help-seeking robots could implement, which we present in this section.

    4.3.1 Vibrantly mechanical.

    The physical sensations reported by robot players were evenly distributed among all words, with a slightly higher count for ‘clumsy’ (n=4). This choice primarily stemmed from the physical limitations of moving within the robot costume.
    There’s a noticeable overlap between the perceived physical qualities of the robot player and the desired attributes. In particular, ‘mechanical’ was selected as perceived quality (n=9) and desired quality (n=10). Similarly, the same pattern was observed in ‘playful’ (8 for perceived quality, 14 for desired quality), and ‘cute’ (11 for perceived quality, 8 for desired quality). This correlation implies that the robot players’ behavioural strategies, to some extent, met participants’ expectations for how a robot should act when seeking assistance, especially concerning these attributes. The movement of the robot players, encapsulated in the minimalist abstract box-shaped costume, inherently conveys an impression of cuteness without the need for additional embellishments. As P12 expressed, ‘I feel its existence is cute enough already. Just imagine you’re on the road, helping a little robot, and it goes "dibbly-dobbly" as it moves forward. It’s already incredibly cute, and there’s no need to add additional design language.’ While some participants also selected ‘lively’ and ‘organic’, they view these as elements that can be added to the overall mechanical nature of the robot to enhance the expressiveness. P13 emphasised the importance of using these elements thoughtfully to ‘avoid creating the uncanny valley effect.’

    4.3.2 Cheerfully confident.

    Regarding the psychological feelings of robot players, no clear tendency was identified in the word sorting, which may imply that these feelings were highly subjective. When it comes to other participants’ perceptions of the robot player, ‘brave’ (n=9) and ‘confident’ (n=8) emerged as the two dominant qualities. Participants highly commended the robot player’s efforts to find solutions in challenging situations. P16 pointed out the robot’s vulnerability and the hazards and difficulties of its environment, articulating that the robot ‘dared to cross the road on his own and was thinking about how to do it’ when it was ‘dangerous because there were no traffic lights for (it)’. P2 contemplated the perceived braveness, projecting onto the robot the psychological state that humans have when seeking help from strangers. She expressed that the robot ‘needs help (and asks for it), as it needs to use courage to convey the help it needs.’
    In terms of the desired psychological attributes, participants generally favoured positive emotions. The term ‘confidence’ (n=12) emerged as the predominant expectation that people had for the robot’s demeanour. P7 expressed this perspective by describing a service robot as ‘some kind of professional stuff’, implying the need for the robot to exhibit characteristics that align with expected proficiency. In addition, participants generally expected the robot to display ‘happy’ (n=6) emotions after receiving help.
    Descriptors of negative emotional valence (e.g., ‘sad’) were not considered as desired qualities for help-seeking robots. P2 offered an enlightening comment that a casually-encountered robot exhibiting sadness while seeking assistance was reminiscent of street beggars, leading to feelings of what they described as ‘emotional blackmail.’

    4.3.3 Responsively outgoing.

    The self-assessed feeling of ‘needy’ emerged as the most frequently chosen term among robot players (n=7), indicating that they experienced a sense of helplessness and a perceived necessity for human aid as a robot in the given situation. To elicit help, most of the robot players opted to project a more approachable character, frequently choosing descriptors like ‘extrovert’ (n=4), ‘collaborative’ (n=4) and ‘inviting’ (n=3). In contrast, only two robot participants resonated with terms like ‘shy’ and ‘introvert’.
    Other participants’ perceptions of the robot’s social qualities aligned with the robot players’ self-assessed social attributes, with ‘needy’ (n=15) and ‘extroverted’ (n=8) being the most frequently chosen terms. In terms of the desired social qualities, although ‘extroverted’ was still among the preferred terms, participants placed a greater emphasis on communication qualities such as being ‘polite’, ‘responsive’ and ‘collaborative’. P2 highlighted the importance of politeness, even when the robot is in urgent need of help, saying, ‘At the same time, when you try to attract everyone’s attention as much as possible, you also need to be gentle towards others.’ The desired ‘responsive’ quality reflects participants’ expectation of receiving feedback after offering help. For example, P17 expressed that she would feel disappointed if she helped the robot without receiving any response from it.

    5 Discussion

    Drawing from the findings of the previous section, we identify three design considerations (C1-3) for fostering casual human-agent collaboration. In addition, we reflect on the strengths and limitations of the bodystorming design activity employed in our study.

    5.1 Expressiveness through functionality-oriented form

    Given humans’ innate psychological tendency to interpret social cues from moving objects [42], physical movement has become a pivotal medium in human-agent interaction to facilitate social communication [3, 33, 62, 99, 101]. An example of extreme abstraction, with minimal movement serving as social cues, is the ‘Greeting Machine’ [3] – a small ball rolling on a bigger dome in varied trajectories. This design effectively elicits both positive and negative social encounter responses. Similar to the ‘Greeting Machine’, participants in our design investigation were constrained by a robotic costume made of a box and a pole that had only minimal expressive capabilities. This costume had no extensions beyond the basic form factor of a conventional delivery robot, which is primarily intended for the task of transporting goods. Nonetheless, subtle cues, such as the orientation of the robot’s body (i.e., orienting the front of the box costume toward an object) or the directionality of simple components (i.e., pointing in the intended direction using the pole), proved effective in addressing bystanders and conveying the robot’s need for assistance. This was evident as, in all the sessions, pedestrian players accurately understood the robot player’s request for assistance. In addition, we even witnessed conversations formulated through the back-and-forth interplay between pedestrian inquiries and the subtle motions of robot players (as shown in Fig. 4, left).
    In addition to empathising with the robot’s feelings of frustration and vulnerability when seeking help, it was evident that people also recognised social signals conveyed through subtle movements, such as gratitude. For instance, a simple tilt of the box-shaped robot body can be perceived as a ‘bow’, even prompting the pedestrian player P15 to bow back (as shown in Fig. 4, right).
    As pointed out in our literature review, linguistic utterances have been central in human-agent collaboration [82] and represent a key method for help-seeking requests [5, 46, 87], given their effectiveness in conveying information. Nonetheless, challenges such as cultural and language barriers, cognitive load, and issues related to noise and distortion in public settings limit their utility in the context of casual human-agent collaboration in public spaces. In addition, our study revealed further concerns regarding the appropriateness of agents verbally asking for help in urban public contexts (n=7). P1, for instance, suggested it could be a ‘bit abrupt or out of place’ if a robot suddenly started to speak human language on the street, underscoring the need for robots to have their own unique, natural communication modes. Additionally, safety concerns were expressed by three participants, who suggested that language used by robots might potentially distract other road users. Notably, P14 expressed potential discomfort in feeling exploited by commercial entities when robots use human language to solicit assistance, describing a ‘feeling of being used as free labour for those commercial companies’.
    C1 - The design of agent help-seeking strategies should leverage the inherent expressiveness found in the functional aspects or form of the agent. While ensuring effective initiation of help-seeking requests, these implicit communication channels can prevent the agent from being perceived as disruptive.
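    As a concrete illustration of C1, the sketch below (in Python, with entirely hypothetical primitive and phase names) maps help-seeking phases onto cue primitives already afforded by a delivery robot’s functional form – its drive base, body orientation, flag pole, and status beeper – without presuming any added speech or display hardware.

        from dataclasses import dataclass

        # Cue primitives assumed to be available from the robot's existing
        # functional form; the names are illustrative, not an actual API.
        @dataclass
        class CueStep:
            primitive: str    # e.g. 'wave_pole', 'orient_to', 'nudge_forward'
            target: str = ""  # e.g. 'bystander', 'traffic_light_button'
            repeat: int = 1

        # A hypothetical mapping from help-seeking phases (as observed in our
        # sessions) to sequences of form-bound cues.
        HELP_SEEKING_PHASES = {
            "address_bystander": [
                CueStep("wave_pole", repeat=3),
                CueStep("orient_to", target="bystander"),
            ],
            "cue_intention": [
                CueStep("orient_to", target="traffic_light_button"),
                CueStep("point_pole", target="traffic_light_button"),
                CueStep("nudge_forward"),
                CueStep("nudge_back"),
            ],
            "thank": [
                CueStep("beep_up", repeat=2),
                CueStep("wave_pole"),
            ],
        }

        def describe(phase: str) -> list:
            """Return a readable trace of the cue sequence for a given phase."""
            return [
                f"{step.primitive} x{step.repeat}"
                + (f" -> {step.target}" if step.target else "")
                for step in HELP_SEEKING_PHASES[phase]
            ]

        if __name__ == "__main__":
            for phase in HELP_SEEKING_PHASES:
                print(phase, describe(phase))

    Keeping each phase within the robot’s existing form factor reflects the spirit of C1; richer modalities would be optional additions rather than prerequisites for legible help-seeking.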

    5.2 Adherence to perceived agent social categories

    Robots and intelligent agents, growing rapidly in sophistication and sociability, have spurred enhanced research into agent social identity [43], delving into aspects like gender [31], age [30], and race [6]. This trajectory was echoed at a recent workshop [96], which emphasised the importance of designing robots that can effectively convey social identities to optimise human-robot interaction outcomes. While this workshop primarily centred on social identities associated with specific attributes (e.g., gender), our findings broaden this scope to encompass wider social categories (e.g., occupations), which should be considered in the design of casual collaborations between agents and bystanders.
    In our investigation of the help-seeking delivery robot, participants perceived these robots primarily as service providers or professional workers. This perception led them to expect high proficiency from these robots, favouring robots that present qualities reminiscent of confidence over neediness. Consequently, even when assistance was necessary, participants displayed a general reluctance to interact with robots that seemed overly needy or showcased negative emotions. This inclination stands in contrast with studies on eliciting human prosocial behaviour towards social companion robots [20, 26] (e.g., robotic pets). In those settings, negative emotional expressions in robots often serve as a catalyst, motivating individuals to step in and offer help. This difference could be rooted in the distinct perceived categories: ‘worker’ versus ‘companion’. In public urban settings, intelligent agents participate integrally in various facets of urban life. As a result, they embody a wide range of social categories, from service providers (e.g., delivery robots [32], street cleaning robots [47]) and authoritative entities (e.g., smart traffic regulators [58], patrol robots [88]) to street entertainers (e.g., playful urban robots [44, 59]).
    C2 - To respond to real-world expectations and social norms, the design of agent help-seeking strategies should adhere to and match their perceived social categories (e.g., occupation).

    5.3 Curating incentives: material rewards, act of care, or playful engagement

    Although pedestrian participants offered help in 7 out of 9 sessions, 4 of them mentioned they might not behave the same way in real-life settings. Furthermore, the misaligned objectives and the imbalanced benefits between agents and bystanders in such casual collaborations highlight the need for incentives.
    Previous research on bystander assistance for commercially-deployed robots has divided ‘help’ into two broader categories: either ‘helping-as-work’, emphasising precarity and invisible labour, or ‘helping-as-care’, which accentuates the emotional and relational dynamics of help [27]. This work sheds light on the inherent ambiguity of these helping behaviours and prompts a rethinking of robot design to better shape these engagements. Our focus group findings resonate with this notion, while also shedding light on how different perspectives on helping robots call for various forms of incentives.
    Offering material rewards, such as vouchers or discounts from the companies deploying robots, turns casual collaborations into mutually beneficial exchanges. This model of paid crowdsourcing, evident in cases like identifying shared bicycle locations [38] or urban data collection on platforms like OpenStreetMap [24], could be adapted for interactions with intelligent agents in public spaces. Framing casual collaborations as beneficial exchanges through material rewards can effectively reconcile the parties’ disparate objectives into actions that are mutually advantageous.
    In our study, empathy – as an act of care – emerged as a predominant motivator for offering assistance, manifested as a form of internal incentive. Empathy, essential in shaping communication and social bonds, has been underscored as a central component in human-agent interactions [8]. A robust body of research validates robots’ capacity to elicit empathetic responses from humans [55, 74, 77]. Correspondingly, studies on social robots have shown that these forms of empathy can drive prosocial behaviour, compelling humans to intervene against robot mistreatment [21] or engage in affectionate actions, like petting [41]. Our findings resonate with these prior studies, suggesting that individuals also exhibit empathy towards public agents in need, driven by their observation of the context and the agent’s expressions.
    In addition to viewing helping robots as a form of work or an act of care, ‘helping-as-play’ emerged as another perspective in our findings, with its entertainment value functioning as an incentive. Playful strategies have been used in various human-agent interaction settings, such as motivating children’s learning [2] or encouraging factory workers in their collaboration with robots [17]. Furthermore, playful approaches have been shown to effectively engage the public and generate enjoyable experiences among bystanders [44, 59]. One of the primary challenges for urban robots asking bystanders for help is convincing them to invest their time and tolerate potential disruptions. The inherent playfulness in humans could act as an incentive for casual collaborations by transforming disruptions into pleasure and enjoyment. That being said, ‘helping-as-play’ also needs to be employed carefully, as it can present ethical concerns similar to those in ‘helping-as-care’ [4].
    C3 - To compensate for the misaligned task objectives in casual human-agent collaboration, the design of agent help-seeking strategies should incorporate appropriate incentives, transforming assistive behaviours into experiences that benefit both parties.

    5.4 Reflections and limitations of bodystorming design activity

    While the physical constraints introduced by the robot costume, such as a limited field of view, a shifted perspective, and restricted mobility, could not entirely replicate a robot’s perspective, they helped participants depart from a conventional human viewpoint and immerse themselves in the sensations of robotic alienation and otherness [29]. As articulated by P5, ‘There’s a sense of feeling out of place or not quite fitting in. It seems like there are no peers of my kind in the surroundings. Everyone else is tall, and I am short, so I feel a bit out of place or different.’ This sense of otherness could contribute to participants’ emotional engagement when they assumed the robot role, evident in their frequently conveyed sentiments of ‘frustration’ or ‘depression’. Additionally, beyond mere empathy with the robot’s emotional state, the robot players also displayed a recognition of its societal function (i.e., its duty as a delivery service entity). For instance, P4 reported feeling ‘motivated’ and ‘happy’ when he ‘could continue working’. This profound resonance with the agent’s perspective underscores that our bodystorming approach effectively incorporated the agent’s perspective into the design exploration. However, it is worth noting that, despite the effectiveness of the physical probes in consciously shifting participants’ sensations towards the perspective of an agent [29], it is impossible to completely transcend the human standpoint through these methods, as our human nature inherently separates us from things [86].
    Consequently, the ‘participation’ of agents in the design process surfaces tensions between humans and agents that stem from the misalignment of goals and benefits in a casual collaboration encounter. P9’s behaviour while playing the robot role offers a striking example. He assertively used the robot costume to brush up against the pedestrian player’s chest, a distinctly pushy gesture meant to force attention and assistance. He later admitted, ‘I was about to give up being nice’, and even pondered, ‘That’s why I was considering adopting a more aggressive strategy. I thought, I might just try pushing him onto the road.’ This elicitation of tension and physical friction is less likely to surface in methods that take a purely human-centred perspective or that rely solely on cognitive abilities. Furthermore, this tension was leveraged to stimulate design ideation, exemplified by P9’s subsequent ideas that emerged from this physical friction. He suggested infusing robots with seemingly annoying nudging behaviours and ‘creating entertainment value’ to possibly uplift people’s moods and thus promote prosocial behaviour.
    Despite the strengths of our approach in generating design insights, we acknowledge its limitations. Discussions and reactions concerning helping a casually encountered agent were elicited by role-playing activities, without the incorporation of real technology. Even though the immersive nature of the replicated scenario and the participants’ engagement were evident, the absence of real intelligent agents and the artificial nature of scenarios replicated in laboratory settings might limit the direct applicability of our findings. Acknowledging these inherent limitations of the bodystorming method, future research should validate and refine our design considerations through more spontaneous interactions between humans and high-fidelity artifacts (e.g., through Wizard of Oz testing [25] or virtual reality studies [92]).

    6 Conclusion

    Intelligent agents are increasingly transitioning into uncontrolled settings that may often challenge their operational capabilities. While collaboration between humans and agents can offer a means to overcome these challenges, to date little is known about how to create effective and engaging human-agent collaboration in casual settings (e.g., involving surrounding bystanders).
    Employing bodystorming around real-world help-seeking scenarios encountered by urban robots, we uncovered potential help-seeking strategies through participants’ enactments as robots, including addressing bystanders to initiate interaction and using non-verbal cues to communicate intentions and requests. Furthermore, displaying emotions and demonstrating repetitive patterns were found to ease help-seeking requests. Taking into consideration the reactions of pedestrian players in our bodystorming activities, we further offered insights into the factors influencing people’s responses to robots’ help-seeking requests and identified desired robot characteristics.
    Synthesising insights from both robot and bystander perspectives, we conclude with a set of design considerations for help-seeking agents that operate in uncontrolled environments. These considerations include promoting expressiveness, ensuring alignment with agent social categories, and curating appropriate incentives. Findings from our design exploration and considerations for implementing help-seeking strategies provide a foundation for creating intelligent agents that operate in uncontrolled environments and aim to facilitate casual collaboration that is mutually beneficial to humans and agents.

    Acknowledgments

    This study is funded by the Australian Research Council through the ARC Discovery Project DP220102019 Shared-Space Interactions Between People and Autonomous Vehicles. We thank all the participants for taking part in this research. We also thank the anonymous CHI’24 reviewers and ACs for their constructive feedback and suggestions to make this contribution stronger.

    Supplemental Material

    MP4 File - Video Presentation

    References

    [1]
    Parastoo Abtahi, Neha Sharma, James A. Landay, and Sean Follmer. 2021. Presenting and Exploring Challenges in Human-Robot Interaction Design Through Bodystorming. Springer International Publishing, Cham, 327–344. https://doi.org/10.1007/978-3-030-62037-0_15
    [2]
    Aino Ahtinen and Kirsikka Kaipainen. 2020. Learning and teaching experiences with a persuasive social robot in primary school–findings and implications from a 4-month field study. In International Conference on Persuasive Technology. Springer, Virtual Event, 73–84.
    [3]
    Lucy Anderson-Bashan, Benny Megidish, Hadas Erel, Iddo Wald, Guy Hoffman, Oren Zuckerman, and Andrey Grishko. 2018. The Greeting Machine: An Abstract Robotic Object for Opening Encounters. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, Nanjing, China, 595–602. https://doi.org/10.1109/ROMAN.2018.8525516
    [4]
    Anna Dobrosovestnova and Tim Reinboth. 2023. Helping-as-Work and Helping-as-Care: Mapping Ambiguities of Helping Commercial Delivery Robots. Social Robots in Social Institutions: Proceedings of Robophilosophy 2022 366 (2023), 239. https://doi.org/10.3233/FAIA220623
    [5]
    Nils Backhaus, Patricia H. Rosen, Andrea Scheidig, Horst-Michael Gross, and Sascha Wischniewski. 2018. “Somebody help me, please?!” Interaction Design Framework for Needy Mobile Service Robots. In 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). IEEE, Genoa, Italy, 54–61. https://doi.org/10.1109/ARSO.2018.8625721
    [6]
    Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel. 2018. Robots And Racism. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI ’18). Association for Computing Machinery, New York, NY, USA, 196–204. https://doi.org/10.1145/3171221.3171260
    [7]
    Rachel K. E. Bellamy, Sean Andrist, Timothy Bickmore, Elizabeth F. Churchill, and Thomas Erickson. 2017. Human-Agent Collaboration: Can an Agent Be a Partner?. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 1289–1294. https://doi.org/10.1145/3027063.3051138
    [8]
    Timothy W. Bickmore and Rosalind W. Picard. 2005. Establishing and Maintaining Long-Term Human-Computer Relationships. ACM Trans. Comput.-Hum. Interact. 12, 2 (jun 2005), 293–327. https://doi.org/10.1145/1067860.1067867
    [9]
    Annika Boos, Markus Zimmermann, Monika Zych, and Klaus Bengler. 2022. Polite and Unambiguous Requests Facilitate Willingness to Help an Autonomous Delivery Robot and Favourable Social Attributions. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, Naples, Italy, 1620–1626. https://doi.org/10.1109/RO-MAN53752.2022.9900870
    [10]
    Jeffrey M Bradshaw, Paul Feltovich, and Matthew Johnson. 2017. Human-agent interaction. Handbook of Human-Machine Interaction (2017), 283–302.
    [11]
    Michael E Bratman. 1992. Shared cooperative activity. The philosophical review 101, 2 (1992), 327–341. https://doi.org/10.2307/2185537
    [12]
    Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
    [13]
    Vanessa Budde, Nils Backhaus, Patricia H. Rosen, and Sascha Wischniewski. 2018. Needy Robots - Designing Requests for Help Using Insights from Social Psychology. In 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). IEEE, Genoa, Italy, 48–53. https://doi.org/10.1109/ARSO.2018.8625724
    [14]
    David Cameron, Emily C. Collins, Adriel Chua, Samuel Fernando, Owen McAree, Uriel Martinez-Hernandez, Jonathan M. Aitken, Luke Boorman, and James Law. 2015. Help! I Can’t Reach the Buttons: Facilitating Helping Behaviors Towards Robots. In Biomimetic and Biohybrid Systems, Stuart P. Wilson, Paul F.M.J. Verschure, Anna Mura, and Tony J. Prescott (Eds.). Springer International Publishing, Cham, 354–358.
    [15]
    Elizabeth Cha and Maja Matarić. 2016. Using nonverbal signals to request help during human-robot collaboration. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Daejeon, South Korea, 5070–5076. https://doi.org/10.1109/IROS.2016.7759744
    [16]
    Sheshadri Chatterjee, Ranjan Chaudhuri, and Demetris Vrontis. 2021. Usage intention of social robots for domestic purpose: From security, privacy, and legal perspectives. Information Systems Frontiers 26 (2021), 1–16.
    [17]
    Aparajita Chowdhury, Aino Ahtinen, Roel Pieters, and Kaisa Väänänen. 2021. "How are you today, Panda the Robot?" – Affectiveness, Playfulness and Relatedness in Human-Robot Collaboration in the Factory Context. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). IEEE, Waterloo, Canada, 1089–1096. https://doi.org/10.1109/RO-MAN50785.2021.9515351
    [18]
    Nazli Cila. 2022. Designing Human-Agent Collaborations: Commitment, Responsiveness, and Support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 420, 18 pages. https://doi.org/10.1145/3491102.3517500
    [19]
    Nazli Cila, Iskander Smit, Elisa Giaccardi, and Ben Kröse. 2017. Products as Agents: Metaphors for Designing the Products of the IoT Age. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 448–459. https://doi.org/10.1145/3025453.3025797
    [20]
    Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez. 2020. Prompting Prosocial Human Interventions in Response to Robot Mistreatment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 211–220. https://doi.org/10.1145/3319502.3374781
    [21]
    Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez. 2020. Prompting Prosocial Human Interventions in Response to Robot Mistreatment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 211–220. https://doi.org/10.1145/3319502.3374781
    [22]
    Paul Coulton and Joseph Galen Lindley. 2019. More-Than Human Centred Design: Considering Other Things. The Design Journal 22, 4 (2019), 463–481. https://doi.org/10.1080/14606925.2019.1614320
    [23]
    Clara Crivellaro, Rob Comber, Martyn Dade-Robertson, Simon J. Bowen, Peter C. Wright, and Patrick Olivier. 2015. Contesting the City: Enacting the Political Through Digitally Supported Urban Walks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 2853–2862. https://doi.org/10.1145/2702123.2702176
    [24]
    Andrew Crooks, Dieter Pfoser, Andrew Jenkins, Arie Croitoru, Anthony Stefanidis, Duncan Smith, Sophia Karagiorgou, Alexandros Efentakis, and George Lamprianidis. 2015. Crowdsourcing urban form and function. International Journal of Geographical Information Science 29, 5 (2015), 720–741.
    [25]
    Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz studies: why and how. In Proceedings of the 1st International Conference on Intelligent User Interfaces (Orlando, Florida, USA) (IUI ’93). Association for Computing Machinery, New York, NY, USA, 193–200. https://doi.org/10.1145/169891.169968
    [26]
    Joseph Daly, Ute Leonards, and Paul Bremner. 2020. Robots in Need: How Patterns of Emotional Behavior Influence Willingness to Help. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 174–176. https://doi.org/10.1145/3371382.3378301
    [27]
    Anna Dobrosovestnova and Tim Reinboth. 2023. Helping-as-Work and Helping-as-Care: Mapping Ambiguities of Helping Commercial Delivery Robots. Social Robots in Social Institutions: Proceedings of Robophilosophy 2022 366 (2023), 239. https://doi.org/10.3233/FAIA220623
    [28]
    Anna Dobrosovestnova, Isabel Schwaninger, and Astrid Weiss. 2022. With a Little Help of Humans. An Exploratory Study of Delivery Robots Stuck in Snow. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, Virtual Event, 1023–1029. https://doi.org/10.1109/RO-MAN53752.2022.9900588
    [29]
    Judith Dörrenbächer, Diana Löffler, and Marc Hassenzahl. 2020. Becoming a Robot - Overcoming Anthropomorphism with Techno-Mimesis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376507
    [30]
    Chad Edwards, Autumn Edwards, Brett Stoll, Xialing Lin, and Noelle Massey. 2019. Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior 90 (2019), 357–362. https://doi.org/10.1016/j.chb.2018.08.027
    [31]
    Friederike Eyssel and Frank Hegel. 2012. (S)he’s got the look: Gender stereotyping of robots. Journal of Applied Social Psychology 42, 9 (2012), 2213–2230. https://doi.org/10.1111/j.1559-1816.2012.00937.x
    [32]
    Steven R. Gehrke, Christopher D. Phair, Brendan J. Russo, and Edward J. Smaglik. 2023. Observed sidewalk autonomous delivery robot interactions with pedestrians and bicyclists. Transportation Research Interdisciplinary Perspectives 18 (2023), 100789. https://doi.org/10.1016/j.trip.2023.100789
    [33]
    Petra Gemeinboeck and Rob Saunders. 2017. Movement Matters: How a Robot Becomes Body. In Proceedings of the 4th International Conference on Movement Computing (London, United Kingdom) (MOCO ’17). Association for Computing Machinery, New York, NY, USA, Article 8, 8 pages. https://doi.org/10.1145/3077981.3078035
    [34]
    Petra Gemeinboeck and Rob Saunders. 2018. Human-Robot Kinesthetics: Mediating Kinesthetic Experience for Designing Affective Non-humanlike Social Robots. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, Nanjing, China, 571–576. https://doi.org/10.1109/ROMAN.2018.8525596
    [35]
    Petra Gemeinboeck and Rob Saunders. 2023. Dancing with the Nonhuman: A Feminist, Embodied, Material Inquiry into the Making of Human-Robot Relationships. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (Stockholm, Sweden) (HRI ’23). Association for Computing Machinery, New York, NY, USA, 51–59. https://doi.org/10.1145/3568294.3580036
    [36]
    Elisa Giaccardi and Johan Redström. 2020. Technology and More-Than-Human Design. Design Issues 36, 4 (2020), 33–44. https://doi.org/10.1162/desi_a_00612
    [37]
    Desiree Godin and Mansour Zahedi. 2014. Aspects of Research through Design: A Literature Review. In Design’s Big Debates - DRS International Conference 2014, Youn-Kyung Lim, Kristina Niedderer, Johan Redstrom, Erik Stolterman, and Anu Valtonen (Eds.). Design Research Society, Umea, Sweden. https://dl.designresearchsociety.org/drs-conference-papers/drs2014/researchpapers/85
    [38]
    Greg P Griffin and Junfeng Jiao. 2019. Crowdsourcing bike share station locations: Evaluating participation and placement. Journal of the American Planning Association 85, 1 (2019), 35–48.
    [39]
    Ed Groff. 1995. Laban movement analysis: Charting the ineffable domain of human movement. Journal of Physical Education, Recreation & Dance 66, 2 (1995), 27–30. https://doi.org/10.1080/07303084.1995.10607038
    [40]
    Martin Hägele, Klas Nilsson, J Norberto Pires, and Rainer Bischoff. 2016. Industrial robotics. Springer Handbook of Robotics (2016), 1385–1422. https://doi.org/10.1007/978-3-319-32552-1_54
    [41]
    Marcel Heerink, Marta Díaz, Jordi Albo-Canals, Cecilio Angulo, Alex Barco, Judit Casacuberta, and Carles Garriga. 2012. A field study with primary school children on perception of social presence and interactive behavior with a pet robot. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. IEEE, Paris, France, 1045–1050. https://doi.org/10.1109/ROMAN.2012.6343887
    [42]
    Fritz Heider and Marianne Simmel. 1944. An experimental study of apparent behavior. The American journal of psychology 57, 2 (1944), 243–259.
    [43]
    Michael A. Hogg. 2016. Social Identity Theory. Springer International Publishing, Cham, 3–17. https://doi.org/10.1007/978-3-319-29869-6_1
    [44]
    Marius Hoggenmueller, Luke Hespanhol, and Martin Tomitsch. 2020. Stop and Smell the Chalk Flowers: A Robotic Probe for Investigating Urban Interaction with Physicalised Displays. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376676
    [45]
    Daniel Gahner Holm, Rasmus Peter Junge, Mads Østergaard, Leon Bodenhagen, and Oskar Palinko. 2022. What Will It Take to Help a Stuck Robot? Exploring Signaling Methods for a Mobile Robot. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’22). IEEE Press, Sapporo, Hokkaido, Japan, 797–801.
    [46]
    Helge Hüttenrauch and Kerstin Severinson Eklundh. 2003. To help or not to help a service robot. In The 12th IEEE International Workshop on Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003. IEEE, Millbrae, CA, USA, 379–384. https://doi.org/10.1109/ROMAN.2003.1251875
    [47]
    Jeongmin Jeon, Byungjin Jung, Ja Choon Koo, Hyouk Ryeol Choi, Hyungpil Moon, Alvaro Pintado, and Paul Oh. 2017. Autonomous robotic street sweeping: Initial attempt for curbside sweeping. In 2017 IEEE International Conference on Consumer Electronics (ICCE). IEEE, Las Vegas, NV, USA, 72–73. https://doi.org/10.1109/ICCE.2017.7889234
    [48]
    Matthew Johnson, Jeffrey M. Bradshaw, Paul J. Feltovich, Catholijn M. Jonker, M. Birna van Riemsdijk, and Maarten Sierhuis. 2014. Coactive Design: Designing Support for Interdependence in Joint Activity. J. Hum.-Robot Interact. 3, 1 (feb 2014), 43–69. https://doi.org/10.5898/JHRI.3.1.Johnson
    [49]
    Matthew Johnson and Alonso Vera. 2019. No AI Is an Island: The Case for Teaming Intelligence. AI Magazine 40, 1 (Mar. 2019), 16–28. https://doi.org/10.1609/aimag.v40i1.2842
    [50]
    Kerstin Fischer, Bianca Soto, Caroline Pantofaru, and Leila Takayama. 2014. Initiating interactions in order to get help: Effects of social framing on people’s responses to robots’ requests for assistance. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication. IEEE, Edinburgh, Scotland, 999–1005. https://doi.org/10.1109/ROMAN.2014.6926383
    [51]
    Kacie Kinzer. 2009. Tweenbots. http://www.tweenbots.com/ Accessed: August 2023.
    [52]
    Ross A. Knepper, Stefanie Tellex, Adrian Li, Nicholas Roy, and Daniela Rus. 2015. Recovering from Failure by Asking for Help. Auton. Robots 39, 3 (oct 2015), 347–362. https://doi.org/10.1007/s10514-015-9460-1
    [53]
    Hideyuki Kobayashi and Shunji Ota. 2000. The semantic network of KANSEI words. In SMC 2000 Conference Proceedings: 2000 IEEE International Conference on Systems, Man and Cybernetics ‘Cybernetics Evolving to Systems, Humans, Organizations, and their Complex Interactions’, Vol. 1. IEEE, 690–694. https://api.semanticscholar.org/CorpusID:30966513
    [54]
    Lenneke Kuijer and Elisa Giaccardi. 2018. Co-Performance: Conceptualizing the Role of Artificial Agency in the Design of Everyday Life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3173699
    [55]
    Sonya S. Kwak, Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho. 2013. What makes people empathize with an emotional robot?: The impact of agency and physical embodiment on human empathy for a robot. In 2013 IEEE RO-MAN. IEEE, Gyeongju, South Korea, 180–185. https://doi.org/10.1109/ROMAN.2013.6628441
    [56]
    Minae Kwon, Sandy H. Huang, and Anca D. Dragan. 2018. Expressing Robot Incapability. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI ’18). Association for Computing Machinery, New York, NY, USA, 87–95. https://doi.org/10.1145/3171221.3171276
    [57]
    Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
    [58]
    Christine P Lee, Bengisu Cagiltay, and Bilge Mutlu. 2022. The Unboxing Experience: Exploration and Design of Initial Interactions Between Children and Social Robots. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 151, 14 pages. https://doi.org/10.1145/3491102.3501955
    [59]
    Wen-Ying Lee and Malte Jung. 2020. Ludic-HRI: Designing Playful Experiences with Robots. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 582–584. https://doi.org/10.1145/3371382.3377429
    [60]
    Claire Liang, Andy Elliot Ricci, Hadas Kress-Gazit, and Malte F. Jung. 2023. Lessons From a Robot Asking for Directions In-the-Wild. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (Stockholm, Sweden) (HRI ’23). Association for Computing Machinery, New York, NY, USA, 617–620. https://doi.org/10.1145/3568294.3580159
    [61]
    Maria Luce Lupetti, Roy Bendor, and Elisa Giaccardi. 2019. Robot citizenship: A design perspective. Design and Semantics of Form and Movement 87 (2019), 81–89.
    [62]
    Michal Luria, Guy Hoffman, and Oren Zuckerman. 2017. Comparing Social Robot, Screen and Voice Interfaces for Smart-Home Control. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 580–628. https://doi.org/10.1145/3025453.3025786
    [63]
    Michal Luria, Marius Hoggenmüller, Wen-Ying Lee, Luke Hespanhol, Malte Jung, and Jodi Forlizzi. 2021. Research through Design Approaches in Human-Robot Interaction. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’21 Companion). Association for Computing Machinery, New York, NY, USA, 685–687. https://doi.org/10.1145/3434074.3444868
    [64]
    Joseph B. Lyons, Katia Sycara, Michael Lewis, and August Capiola. 2021. Human–Autonomy Teaming: Definitions, Debates, and Directions. Frontiers in Psychology 12 (2021), 199–213. https://doi.org/10.3389/fpsyg.2021.589585
    [65]
    Betti Marenko and Philip Van Allen. 2016. Animistic design: how to reimagine digital interaction between the human and the nonhuman. Digital Creativity 27, 1 (2016), 52–70. https://doi.org/10.1080/14626268.2016.1145127
    [66]
    Kazuhiko Momose, Troy Weekes, Rahul Mehta, Cameron Wright, Josias Moukpe, and Thomas Eskridge. 2023. Patterns of Effective Human-Agent Teams. In CHI ’23 Extended Abstracts on Human Factors in Computing Systems (Hamburg, Germany) (CHI EA ’23). Association for Computing Machinery, New York, NY, USA, Article 224, 13 pages. https://doi.org/10.1145/3544549.3585608
    [67]
    Mitsuo Nagamachi. 1995. Kansei Engineering: A new ergonomic consumer-oriented technology for product development. International Journal of Industrial Ergonomics 15, 1 (1995), 3–11. https://doi.org/10.1016/0169-8141(94)00052-5
    [68]
    Don Norman. 2013. The design of everyday things: Revised and expanded edition. Basic books, New York.
    [69]
    Ishaan Pakrasi, Novoneel Chakraborty, and Amy LaViers. 2018. A Design Methodology for Abstracting Character Archetypes onto Robotic Systems. In Proceedings of the 5th International Conference on Movement and Computing (Genoa, Italy) (MOCO ’18). Association for Computing Machinery, New York, NY, USA, Article 24, 8 pages. https://doi.org/10.1145/3212721.3212809
    [70]
    Triin Palmipuu. 2021. Pelgulinlane Taivo aitab iga päev pakiroboteid: nad paluvad nii härdalt abi [Pelgulinn resident Taivo helps parcel robots every day: they plead for help so earnestly]. Postimees. https://naine.postimees.ee/7406578/pelgulinlane-taivo-aitab-iga-paevpakiroboteid-nad-paluvad-nii-hardalt-abi Accessed: 2023.
    [71]
    Hannah Pelikan, David Porfirio, and Katie Winkle. 2023. Designing Better Human-Robot Interactions Through Enactment, Engagement, and Reflection. In Proceedings of the CUI@HRI Workshop at the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23). ACM/IEEE, Stockholm, Sweden.
    [72]
    Martin Plank, Clément Lemardelé, Tom Assmann, and Sebastian Zug. 2022. Ready for robots? Assessment of autonomous delivery robot operative accessibility in German cities. Journal of Urban Mobility 2 (2022), 100036. https://doi.org/10.1016/j.urbmob.2022.100036
    [73]
    Isabel Prochner and Danny Godin. 2022. Quality in research through design projects: Recommendations for evaluation and enhancement. Design Studies 78 (2022), 101061. https://doi.org/10.1016/j.destud.2021.101061
    [74]
    Laurel D. Riek, Tal-Chen Rabinowitch, Bhismadev Chakrabarti, and Peter Robinson. 2009. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, Amsterdam, The Netherlands, 1–6. https://doi.org/10.1109/ACII.2009.5349423
    [75]
    Stephanie Rosenthal, Manuela Veloso, and Anind K Dey. 2012. Is someone in this office available to help me? Proactively seeking help from spatially-situated humans. Journal of Intelligent & Robotic Systems 66 (2012), 205–221.
    [76]
    Astrid Rosenthal-von der Pütten, David Sirkin, Anna Abrams, and Laura Platte. 2020. The Forgotten in HRI: Incidental Encounters with Robots in Public Spaces. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 656–657. https://doi.org/10.1145/3371382.3374852
    [77]
    Ognjen Rudovic, Jaeryoung Lee, Miles Dai, Björn Schuller, and Rosalind W Picard. 2018. Personalized machine learning for robot perception of affect and engagement in autism therapy. Science Robotics 3, 19 (2018), eaao6760.
    [78]
    T. Salter, K. Dautenhahn, and R. Bockhorst. 2004. Robots moving out of the laboratory - detecting interaction levels and human contact in noisy school environments. In RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759). IEEE, Kurashiki, Okayama, Japan, 563–568. https://doi.org/10.1109/ROMAN.2004.1374822
    [79]
    Pericle Salvini. 2018. Urban robotics: Towards responsible innovations for our cities. Robotics and Autonomous Systems 100 (2018), 278–286. https://doi.org/10.1016/j.robot.2017.03.007
    [80]
    Pedro Sanches, Noura Howell, Vasiliki Tsaknaki, Tom Jenkins, and Karey Helms. 2022. Diffraction-in-Action: Designerly Explorations of Agential Realism Through Lived Data. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 540, 18 pages. https://doi.org/10.1145/3491102.3502029
    [81]
    Dennis Schleicher, Peter Jones, and Oksana Kachur. 2010. Bodystorming as Embodied Designing. Interactions 17, 6 (nov 2010), 47–51. https://doi.org/10.1145/1865245.1865256
    [82]
    Katie Seaborn, Norihisa P. Miyake, Peter Pennefather, and Mihoko Otake-Matsuura. 2021. Voice in Human–Agent Interaction: A Survey. ACM Comput. Surv. 54, 4, Article 81 (may 2021), 43 pages. https://doi.org/10.1145/3386867
    [83]
    Thomas B. Sheridan. 2016. Human–Robot Interaction: Status and Challenges. Human Factors 58, 4 (2016), 525–532. https://doi.org/10.1177/0018720816644364 PMID: 27098262.
    [84]
    Ben Shneiderman and Catherine Plaisant. 2010. Designing the user interface: Strategies for effective human-computer interaction. Pearson Education India, India.
    [85]
    David Harris Smith and Frauke Zeller. 2017. The Death and Lives of hitchBOT: The Design and Implementation of a Hitchhiking Robot. Leonardo 50, 1 (02 2017), 77–78. https://doi.org/10.1162/LEON_a_01354
    [86]
    Katta Spiel and Lennart E. Nacke. 2020. What Is It Like to Be a Game?—Object Oriented Inquiry for Games Research, Design, and Evaluation. Frontiers in Computer Science 2 (2020), 14 pages. https://doi.org/10.3389/fcomp.2020.00018
    [87]
    Vasant Srinivasan and Leila Takayama. 2016. Help Me Please: Robot Politeness Strategies for Soliciting Help From Humans. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 4945–4955. https://doi.org/10.1145/2858036.2858217
    [88]
    Konrad Szocik and Rakhat Abylkasymova. 2022. Ethical Issues in Police Robots. The Case of Crowd Control Robots in a Pandemic. Journal of Applied Security Research 17, 4 (2022), 530–545. https://doi.org/10.1080/19361610.2021.1923365
    [89]
    Martin Tomitsch. 2017. Making Cities Smarter. JOVIS Verlag GmbH, Berlin.
    [90]
    Martin Tomitsch and Marius Hoggenmueller. 2021. Designing Human–Machine Interactions in the Automated City: Methodologies, Considerations, Principles. Springer Singapore, Singapore, 25–49. https://doi.org/10.1007/978-981-15-8670-5_2
    [91]
    Manuela M. Veloso. 2018. The Increasingly Fascinating Opportunity for Human-Robot-AI Interaction: The CoBot Mobile Service Robots. J. Hum.-Robot Interact. 7, 1, Article 5 (may 2018), 2 pages. https://doi.org/10.1145/3209541
    [92]
    Valeria Villani, Beatrice Capelli, and Lorenzo Sabattini. 2018. Use of Virtual Reality for the Evaluation of Human-Robot Interaction Systems in Complex Scenarios. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, Nanjing, China, 422–427. https://doi.org/10.1109/ROMAN.2018.8525738
    [93]
    David Weinberg, Healy Dwyer, Sarah E. Fox, and Nikolas Martelaro. 2023. Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA. Multimodal Technologies and Interaction 7, 5 (2023), 27 pages. https://doi.org/10.3390/mti7050053
    [94]
    Astrid Weiss, Judith Igelsböck, Manfred Tscheligi, Andrea Bauer, Kolja Kühnlenz, Dirk Wollherr, and Martin Buss. 2010. Robots asking for directions: the willingness of passers-by to support robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’10). IEEE Press, Osaka, Japan, 23–30.
    [95]
    Danielle Wilde, Anna Vallgårda, and Oscar Tomico. 2017. Embodied Design Ideation Methods: Analysing the Power of Estrangement. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 5158–5170. https://doi.org/10.1145/3025453.3025873
    [96]
    Katie Winkle, Ryan Blake Jackson, Alexandra Bejarano, and Tom Williams. 2021. On the flexibility of robot social identity performance: benefits, ethical risks and open research questions for HRI. In HRI Workshop on Robo-Identity. ACM, Virtual Event, Boulder, CO, USA, 4 pages.
    [97]
    F.H. Wullschleger and R. Brega. 2002. The paradox of service robots - how passers-by can contribute in solving non-deterministic exceptional conditions encountered by service robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2. IEEE, Lausanne, Switzerland, 1126–1131. https://doi.org/10.1109/IRDS.2002.1043882
    [98]
    Wei Xu. 2020. From Automation to Autonomy and Autonomous Vehicles: Challenges and Opportunities for Human-Computer Interaction. Interactions 28, 1 (dec 2020), 48–53. https://doi.org/10.1145/3434580
    [99]
    Cristina Zaga, Roelof A.J. de Vries, Jamy Li, Khiet P. Truong, and Vanessa Evers. 2017. A Simple Nod of the Head: The Effect of Minimal Robot Movements on Children’s Perception of a Low-Anthropomorphic Robot. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 336–341. https://doi.org/10.1145/3025453.3025995
    [100]
    John Zimmerman, Jodi Forlizzi, and Shelley Evenson. 2007. Research through Design as a Method for Interaction Design Research in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’07). Association for Computing Machinery, New York, NY, USA, 493–502. https://doi.org/10.1145/1240624.1240704
    [101]
    Oren Zuckerman and Guy Hoffman. 2015. Empathy Objects: Robotic Devices as Conversation Companions. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (Stanford, California, USA) (TEI ’15). Association for Computing Machinery, New York, NY, USA, 593–598. https://doi.org/10.1145/2677199.2688805

