Interactions and Screens in Research and Education

Autonomy and artefactual presence in a polyartefacted seminar

Amélie Bouquain

Christelle Combe

Joséphine Rémon

Amélie Bouquain, Christelle Combe, Joséphine Rémon, « Autonomy and artefactual presence in a polyartefacted seminar », Interactions and Screens in Research and Education (enhanced edition), Les Ateliers de [sens public], Montreal, 2023, isbn:978-2-924925-25-6, http://ateliers.sens-public.org/interactions-and-screens-in-research-and-education/chapter5.html.
version:0, 11/15/2023
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

As presented in the introduction of this book, three telepresence devices were used in the seminar: the telepresence robots Beam and Kubi, as well as the web conferencing software program Adobe Connect. In this chapter, in the light of work on interactive multimodal communication and, more specifically, on robot-mediated communication (Herring 2013; Takayama and Go 2012; Takayama and Harris 2013; Neustaedter et al. 2016; Sirkin et al. 2011; Gaver 1992), we interrogate the notion of artefactual presence through a comparative study of the affordances of these devices. We ask how the effects of presence related to each device define an artefactual presence or an interactional presence, depending on the interactional co-construction implemented by the participants. To what extent do effects of presence vary according to the artefact or device used, and according to the co-construction of its use by the different participants? Our study is based on interviews with participants (Amélie, Jean-François, Samira, Christelle), which we cross-reference with the analysis of certain critical moments in the video corpus of the sessions.

Theoretical framework

Several studies in multimodal communication and human-computer interaction have identified the characteristics of the different telepresence devices used and their effects on communication.

The researchers point to parameters such as rotation or field of view, but also to characteristics of the mediated space (within which the interaction takes place), such as the lack of symmetry in the transmission and reception of sound and image between the different participants.

An approach focused on interaction and not on geographical location

In studies on meetings involving co-located and remotely located participants, the geographical location is often emphasised. The meeting with the co-located participants is defined as a “hub” while the remote location, present in the form of a “proxy” or artefact enabling the remote participant to participate (screen, video camera, speaker, microphone), is a “satellite” (Sirkin et al. 2011, 163). This “hub/satellite” view thus focuses on the technical device and not on the experience of the interaction.

In our study, in contrast, we do not consider one place as the “hub” and another as a “satellite”. This is because there were sometimes fewer participants in the seminar room than online, and because there were different “satellite locations” (as opposed to a single satellite individual in the case of Sirkin et al.). Thus, in order to correspond to the actual lived experience, the present/remote dualism must be overcome in favour of an approach centred on interaction and not on geographical location.

Engagement and mobility

Other researchers have studied telepresence devices from the perspective of interactional engagement and movement effects. Herring (2013, 1), for example, points to difficulties with Adobe Connect-type devices, including sound and visualisation issues, participant fatigue, difficulty in feeling engaged in the interaction, and frustrations with speaking difficulties:

However, currently popular teleconferencing tools (e.g., Skype, Adobe) are limited in various respects. Even when video is added to audio communication, remote participants often cannot see or hear everyone at the remote location, may feel disengaged and fatigued (because more effort is required to pay attention), and may experience interactional frustration due to difficulty getting the floor and identifying who is speaking (e.g., Egido 1990; Sirkin et al. 2011).

This difficulty in feeling engaged is partly due to the lack of possibility of movement. According to Gaver (1992, 21), the possibility of exploring the environment through movement is not a constitutive characteristic of mediated spaces since the cameras and microphones are fixed and controlled by the people on site. According to the author, being online feels more like watching television than having control over a perceptual exploration.

Conversely, “kinetic proxies” (which can be set in motion, such as the Beam or Kubi) allow for a hybrid approach (Sirkin et al. 2011, 166) by combining motion and video image, as opposed to a robot which would be a simple avatar of the remote person. The artefact thus represents the remote participant and reminds the group of their presence through movement or rotation. According to the study led by Sirkin et al., the quality of conversational engagement is higher when motion is possible:

The motorized action brought the remote person to life. Hub participants were able to perceive the satellite’s attention in motion through the swivelling of the display (2011, 176).

Compared to a video-conferencing device, the Kubi’s rotational mobility creates an additional effect of presence (Herring 2013, 3). In addition to its small size, its screen can be rotated to follow the conversation. However, the Kubi cannot be entirely piloted in the same way as the Beam (Herring 2013, 3) since it has to be carried from one place to another by a human agent.

Sirkin et al. (2011) have also revealed undesirable effects of motion. Rotation can be interpreted as a disruption, and as an interruption for the remote participant who must operate this rotation. When the participant rotates the device to face an interlocutor, it can be perceived as though they are “turning their back” on other participants:

screen motion towards one person is more akin to turning one’s back (rather than one’s head) towards someone else (Sirkin et al. 2011, 164).

Another difficulty is that head movements and rotational movements of artefacts are not always interpreted in the same way, as participants seek to attribute intelligibility to these movements, even though some may be mere incidents lacking any communicative purpose.

We study these various movement effects in our context, and analyse below how they combine with characteristics of autonomy (see “Analysis and results”).

Disrupted reciprocity of perceptions in the mediated space

In the field of human-computer interaction, Gaver (1992, 17) compares the affordances of an unmediated situation with a media space defined as a space created by “computer-controllable networks of audio and video equipment used to support synchronous collaboration”. He identifies the following characteristics of the latter space: distant collaboration, restricted field of vision, impossible detailed inspection, limited peripheral awareness, biased sound transmission, limited perceptual exploration, and discontinuity of spaces that make speech turns and communicative behaviours more difficult. These characteristics apply to the situation we analyse in this study.

Media spaces are also characterised by anisotropy, i.e., the non-reciprocity of perceptions in the mediated space (see chapter “Attentional affordances in an instrumented seminar”), in contrast to air (i.e., face-to-face communication). This term comes from physics and applies when the properties of an object vary according to direction. As Gaver (1992, 23) explains, air is isotropic and allows for reciprocity of perceptions:

Air is isotropic with respect to light and – unless it is moving – with respect to sound as well. This means that air affords reciprocal communication, that people can predict what their partners will see and hear by what they themselves see and hear.

Screen mediation therefore disrupts this reciprocity by making the exchanges anisotropic. This is reflected, for example, in the difficulty for participants in the seminar room of knowing precisely what a remote participant is looking at. Sirkin et al. (2011) point out how important it is for the remote participant to have a broad view of the space, so as to be able to follow which attentional foci (see chapter “Attentional affordances in an instrumented seminar”) are being activated. Many parameters may be unknown to the participants in the seminar room (Sirkin et al. 2011, 164): this is the case, for example, of the angle of view of the remote participant’s camera or the size of their computer screen. Furthermore, according to these authors, the “TV presenter effect” makes face-to-face participants feel as though they are all being looked at simultaneously if the satellite is looking at the camera, or all neglected if it looks away. They also note the “skip-over effect”, whereby the remote participant tends to be neglected despite the presence of the face-to-face artefact that represents them.

At the end of this brief literature review, we note that the complexity of the situation is due in particular to the anisotropy of the mediated space, as well as to the affordances of the telepresence devices used, depending on the possibilities of movement or rotation, but also depending on the way these movements happen and are interpreted by the participants. On the basis of these elements, we can now analyse the interactional characteristics of our corpus and the effects of presence they generate.

Analysis and results

Remote communication devices present different potentialities of movement, vision and hearing, which have an impact in terms of presence effects, around issues of transmission/reception, visibility/invisibility or presence/absence: one can be present and invisible to others, or visible and absent. These effects of presence define an artefactual presence or an interactional presence, depending on the interactional co-construction implemented by the participants. We define artefactual presence as the presence of the object with a reduced possibility of interaction, as opposed to interactional presence, which allows an individual to take their place in the interaction without hindrance. As we will see below, it is mainly the issues of autonomy of movement and of visual and sound adjustment that determine the objectal or interactional status of the pilot and her artefact. These presence effects bring into play conviviality, stealth (in the military sense of being designed to avoid detection by using a variety of technologies that reduce one’s signature), reactions to solicitation or orders, as well as temporality (beginning/end).

Autonomy of rotation and movement

Rotations and movements are characterised by parameters of autonomy: are movements possible or impossible, autonomously driven or dependent on others? The movement must also be considered in terms of the starting position: was this chosen by the user? Additionally, is the movement translational and/or rotational (is it a head or artefact rotation; slow or fast rotation; discreet or noisy rotation)? A distinction will therefore be made between movement in production and the effect of movement in reception.

Movement range of the Beam

Our experimentation showed us that the Beam offered its user only a relative autonomy of movement.

Relative movement autonomy of the Beam

In Session 5, for example, Morgane, who has full visibility of the room space, chooses to move the Beam telepresence device herself, because it is in the field of the camera recording the data. Amélie is momentarily reduced to her artefactual presence, but she immediately takes over by piloting the move herself.

Relative autonomy of the Beam

In Session 2, we find another illustration of the limitations of the Beam’s possibilities, when the presence of the artefact seems to be spatially related to group presence.

When the speakers move their table to get closer to the microphone that is transmitting the sound to the remote participants, Amélie, the Beam user, has situated the Beam between this table and the tables behind her, where the participants in Lyon are seated.

Figure 1: Proximity of the speakers to the Beam

Even if she experiences this change of configuration as uncomfortable, she cannot move because she is hampered by the tables which limit her movements and make it difficult to move to another part of the room:

It’s true that, wow, when they came closer, I felt that the normal distance, the interpersonal distance between people, was completely disrespected, they were practically up against me […] it wasn’t very comfortable.

In this case, the comfort of the remote person gives way to that of the participants in the seminar room.

In other cases, the Beam pilot can sometimes exercise autonomy in piloting the artefact, as we explain now.

Effect of presence of the Beam’s autonomous piloting

In Session 3 (group work), we see an example of the effect of presence of the Beam’s movement, which is also commented on by the pilot in an interview.

During the group discussion, the Beam starts to rotate slightly on its wheels and then suddenly moves towards the centre of the room. Christine exclaims, “Oh, she scared me!” and Morgane, putting her hand over her heart, says “Oh my goodness!” However, the interruption does not last more than 5 seconds.

“She scared me”

The short length of the interruption is likely due to the fact that the sequence occurs during the last session recorded for the research corpus. At this point, the participants have become fairly accustomed to the devices (see chapter “Digital bugs and interactional failures in the service of a collective intelligence”) or, in any case, have a tacit protocol for dealing with interruptions so that their interactions remain fluid.

Effect of presence of the Beam’s motion

The effect of movement

Amélie commented in an interview on the effect of movement as perceived by other participants and by herself:

Moving around for the sake of moving around, if the others don’t perceive the meaning, it can be disruptive, and at the same time I see myself as signalling my presence through the movements of the robot.

Our analyses show that artefactual or interactional presence depends on perception by others as much as by oneself. In an anisotropic mediated space, participants rely as much on the perception of others as on their own perception in an attempt to reconstruct a comprehensive perception and allow the interaction to function.

Artefactual presence of the Kubi

Our analyses identify different situations in which the presence of the artefact pilot is reduced to artefactual presence.

Relative rotation autonomy of the Kubi

During Session 3 (group work), the participants are trying to draw a representation of the experimental set-up on the whiteboard on the wall. At this point let us recall that the Kubi pilot can prepare her interface and configure the positioning of the interlocutors, and has to reconfigure everything if they change places. During this session, Christelle had set up the coordinates of the participants on the Kubi interface at the beginning of the session to be able to position herself more easily, but she became disorientated when the participants in the classroom got up to interact around the whiteboard. Christelle is thus present via the Kubi placed on a table, but she is unable to position herself so as to see the whiteboard correctly. Caroline asks her permission to move the Kubi: “Do you want me to turn you or not? I don’t want to be rude”. Christelle doesn’t answer immediately, because she is trying to adjust her interface: “I’ve done something stupid here, wait”.

Artefactual presence of the Kubi

Christelle still doesn’t seem to be able to see what is written on the board. She has an annoyed expression on her face, her eyebrows are furrowed, and then the Kubi starts to rotate back and forth and left to right for almost a minute.

Artefactual presence can take over, for example, through a request for permission to move the artefact by someone else. This request points to a potential interactional presence, even if sometimes the artefact is rotated even though the pilot has not responded to a request for permission.

So although the Kubi does have a certain rotational autonomy, it still needs to be mastered and must correspond to the configuration at a given moment and in a given space. In some cases, such as above, artefactual presence is brought to the fore.

Audio-visual autonomy

In addition to the effects of movement, the angle of vision is also a determining parameter. As a whole, the artefactual communication situation at stake is complex. Firstly, the Kubi user has to choose a setting that allows her to adjust the view on her computer screen. Secondly, the camera in the seminar room that transmits the image to Adobe Connect must also be adjusted to produce suitable visual content. The combination of these two actions allows the Kubi user to feel like part of the situation. While face-to-face participants can visually scan a space, remote participants present via an artefact may not see the technical solutions deployed in the room, or certain actions, such as drawing on the board. For the purposes of this chapter, we distinguish between audio-visual autonomy and autonomy of movement, but audio-visual perception and autonomous or provoked movement are of course interdependent: if the angle of vision is a barrier to interactional felicity, then remediation involves positional adjustment, either autonomous or assisted.

Limitations in the field of vision and rotation of the Kubi

The Kubi’s field of vision is limited and must be set by the pilot, which has implications in terms of artefactual presence.

Limitation in the Kubi’s rotation

During Session 3 (group work), as mentioned above, participants exchanged views on how they mentally represented the hybrid experimental set-up.

Christelle, using the Kubi from home, explains her point of view. Then Christine draws a diagram on the whiteboard. Christelle does not see this action because she is turned towards the seated participants and not towards the whiteboard on which Christine is drawing. Christine suggests to Christelle to look at the diagram she has just drawn: “Look here, is my drawing the kind of thing you had in mind?” Christelle starts to turn around. Christine guides her (“More, more”). Christine takes the initiative to turn the Kubi (“There you go”) because Christelle’s field of vision is limited, and she has reached the limit of how far she can turn the Kubi, as can be seen on the video that captures Christelle at home in front of her Kubi piloting interface.

Limitation in the Kubi’s rotation

Christelle’s interactional presence is thus temporarily limited to an artefactual presence, when her ability to move is mediated by another participant, in order to give her visual autonomy. However, the shift operated by a third party is endowed with an interactional intentionality, contrary to a scenario where the Kubi could be moved to put another object in its place (a book or cup for example).

Interactional presence implies at times an artefactual presence; in order to be or become present, one has to transiently pass through moments of artefactual presence, for instance, when one is assisted by another participant and is momentarily recategorised, not as a pilot, but as an object.

Adjusting the Kubi’s field of vision

The Kubi’s limited field of vision deprives the pilot of some of the interactions that take place in Lyon.

Example during Session 3

In Session 3, while Christelle is making her positioning adjustments, Morgane goes over to the Beam. Christelle misses this action because she is readjusting her field of vision at the time and does not see the participants in the seminar room.

Figure 2: The back of the Kubi

Adjustment of the camera’s field of vision in the seminar room

The Kubi pilot depends on the help of the participants in Lyon to adjust her angle of vision, which moderates her interactional power. In contrast, the Beam pilot can adjust her field of view autonomously. She has greater agency in this respect.

The Kubi pilot needs help to adjust her angle of vision

In the same session, Christelle (the Kubi pilot) asks the participants to reposition the camera sending the video to Adobe Connect from the seminar room, as she cannot see what is happening in the room. As Caroline gets up to go and draw on the whiteboard, Christelle says: “It would be nice if the camera was on the screen there, the one that shows what I see in Adobe, because otherwise I see you”. At that moment, the camera that transmits the images to Adobe Connect is directed towards the seated participants and not towards the whiteboard. Christelle displays the Adobe Connect interface on her computer at home; it shows the same elements as the Kubi interface: the participants in the room in Lyon.

Comparative autonomy of the Kubi and the Beam

The Beam pilot is autonomous in adjusting her field of view

At the same time, Amélie, who is piloting the Beam, zooms in and turns autonomously towards the whiteboard without speaking. Blue arrows are displayed in the video when the user turns the robot.

The “back” of the Beam

Even though the Beam pilot can move the device autonomously, she still has a limited field of vision of what is happening behind the artefact. Our analysis of the video demonstrates this limitation, and the Beam pilot confirms it in an interview.

Video clip from Session 2

We can see that during Session 2, when the speakers decide to move their table closer to the centre of the room, the technical adjustments made by the participants in Lyon are invisible to the Beam user, Amélie, who is facing the speakers with the Beam while the Lyon participants are behind the Beam. While the speakers move the table, Morgane, in charge of the technical set-up, exchanges a glance with Samira as she waves her hand. Dorothée stands up, as does Morgane. She readjusts the research data camera. Dorothée moves the webcam. Morgane sits down again. Samira points at the webcam. As all of these actions are occurring, Amélie, piloting the Beam, smiles at the speakers. Then Dorothée tells Samira to move the webcam, which is transmitting sound and images for the remote participants. Samira stands up. She appears for a second in Amélie’s field of vision (the Beam pilot). Amélie zooms in on the speakers who have resumed speaking after this technical interruption.

The ‘back’ of the Beam

Extract from Amélie’s interview

Amélie realises when she watches the videos afterwards that there may have been activity occurring ‘behind her back’:

If someone is behind me, I’m not going to see them […], I think of the last video, the last seminar in November, when you went under the table to get the camera or to do something, I moved […] and I didn’t see you because you were behind me, well, behind the robot […] I didn’t think there was someone close to me I could bump into.

The Beam user thus perceives her artefactual presence only afterwards; she had not experienced it in this way at the time.

The choices made about the issues of transmission and reception, such as adapting the Adobe Connect software for hybrid use (a group in a room/individuals in separate locations), had consequences for audio reception (e.g., participants on Adobe Connect could not hear participants in the room) or visual reception (e.g., participants in the room could not tell apart the silhouettes of participants on Adobe Connect, who were backlit). These different types of perceptual (motion and audio-visual) autonomy influence the availability and participation regimes allowing the interaction to function.

Participatory autonomy

The technical choices and the potentialities of each device have an impact on participation. By participatory autonomy we mean the regulation initiated by the individual of their involvement in the interaction.

Issues of availability for being spoken to can appear according to whether the artefact allows a participant to participate in the interaction in reception or in transmission.

Difficulty in calling the Beam

The participants found it easier to address the Adobe Connect users than the Beam pilot. Morgane explains in an interview that she tried several times and through several channels (signs, chat, email, SMS) to contact Amélie, the Beam pilot, without success.

Extract from Jean-François’ interview

Jean-François also notes this difficulty in his interview by comparing the systems:

I find it easier to manage with Adobe Connect because we have more discreet ways of signalling each other, whereas with you [Amélie in the Beam] it can be quite violent […] we have to call out to you […] it’s less discreet and because you can’t see us, we have to call out to you and speak loudly. If you could see us, it could be a little sign, a little chat message.

Difficulty in giving a strong visual signal when using the Kubi or Adobe Connect

The Kubi and Adobe Connect users also found it difficult to speak up. With the Kubi, unless you raise your voice and impose it, you cannot address the rest of the group.

Difficulty of speaking in the Kubi

During Session 3, Christelle starts to raise her finger to say “no”, in response to what Jean-François has just said. Then she raises her finger fully and waits for permission to speak. When this permission does not come, she starts speaking. Her impatience is shown by the fact that she lowers and raises her finger several times and twists her mouth. This signal remains weak in relation to the interaction in progress, and the configuration of the general set-up at that particular moment. Her signal is not perceptible either to the Beam pilot or to the Adobe Connect participants and is only perceived by the participants in the room a little later.

Failure of a weak signal in the Kubi

In the Adobe Connect chat function it is equally difficult to send a strong signal.

Difficulty in sending a strong signal in the Adobe Connect chat

During Session 3, Tatiana wishes to intervene and writes a message to that effect in the chat window. Christelle relays this intervention.

Christelle: “Tatiana is speaking.” Christelle: “What is Tatiana saying?”
Christelle: “She’s writing, look.”
Morgane reads what Tatiana wrote in the Adobe chat window.

Christelle thus urges the participants in the room to take into account all the modalities available in the different devices.

Failure of a weak signal in Adobe

More broadly, we can thus question whether participation in the interaction is subject to regimes that could be described as artefactual, in the sense that they are dependent on the artefact, or the telepresence device used.

The chat as a space for autonomous or relayed communication

Still from the point of view of participatory autonomy, we see in this section that the communication space of the chat is ambivalent, in that it sometimes allows participation in the overall interaction, not in an autonomous way but through the on-site participants, and sometimes generates a separate space for autonomous communication.

The remote participants using Adobe Connect all have access to the chat. Some participants in the room are also connected to Adobe Connect and use the chat, which is projected, but despite this, the content of the chat sometimes has to be relayed (see chapter “Attentional affordances in an instrumented seminar”).

The content of the chat must be relayed

In Session 3, Christelle, a long-time Adobe Connect user, but present via the Kubi that day, reports that Tatiana has written a message in the chat, which Morgane then reads out loud to the participants in Lyon. Thus, in this case, the chat contributes to the overall group interaction because it is relayed by on-site participants.

The social aspect of the chat is pointed out in interviews by various members of the team.

Social aspect of the chat

Christelle says that, as a user of the Beam, Kubi and Adobe Connect, she feels less isolated when using Adobe Connect due to the chat function:

There weren’t very many people who came to talk to me when I was in the Kubi or in the robot, whereas through Adobe I chatted a bit with Tatiana or Samira, […] the social aspect I didn’t benefit from so much, because people didn’t necessarily come to talk to me […], you’re alone, that’s quite clear.

Conversely, Amélie, the Beam pilot, indicates in an interview that it was difficult for her to access the communication space created by the chat function (unless she manipulated both the Beam and Adobe Connect interfaces simultaneously). Jean-François and Samira both point out the compartmentalising effects of the chat.

Partitioning effects of the chat

Amélie, the Beam pilot, points out that the communication space created by the chat is difficult for her to access:

There can be interactions between me [off-site] and you on-site, between the people using Adobe and you, but not between the people using Adobe and me, we can’t interact.

Jean-François also mentions (in the first sessions of the seminar) that he feels less complicity with remote participants using Adobe Connect.

Samira goes further when she relates similar phenomena with the use of Google Hangouts, in an earlier phase of the seminar, talking about an actual separate group.

Morgane sometimes turned me to face the wall […] so that I could see the PowerPoint, and then all the other people in the room sitting at a round table were behind me, behind my screen, so I couldn’t see the people who were there, [sometimes] it was over, we had moved on but we were still facing the wall, and the connection disappeared after a while so we were facing a blue screen, a blue wall and we could hear people talking behind us, […] what was good about these moments is that, since I was often with Christelle, we could interact together and so we felt less alone, because otherwise we would have felt very alone […] and so it created a complicity, a kind of separate group during those moments, we generally laughed about it […].

This sense of “separate group” is possibly reinforced at this point by the fact that the participants are asked to stop using the chat in a text message from Christine.

I remember a session where I think it was bothering people a little bit that Christelle and I were writing too much on the chat because we hadn’t deactivated the little ringtone that makes a sound with each message […] Christine sent a text message to Christelle to ask her to try to stop interacting as much on the chat […] we didn’t stop but […] I deactivated the sound.

The chat is sometimes a space for autonomous exchanges whose participants are no longer part of the general group interaction.

Autonomy of the chat

The aspect of sub-community in the chat appears in the following example, during Session 2. The participants on Adobe Connect cannot hear the exchanges in the room very well, as Prisca points out on the chat. An exchange then takes place on the chat about grammar, which appears to be unrelated to the exchanges taking place in the seminar room.

Christelle: “Without solid linguistic knowledge it is useless to…”
Christelle: “I think he means that you have to master the grammar first.”
Christelle: “before speaking.”
Liping: “For us, normally, we start a lesson with the grammar, and I explain the vocabulary, then the texts and the exercises.”

Below we explore constraints or orders in terms of participation in the interaction from the point of view of autonomy and intention. These elements involve screen presence, the need to consider the framing of one’s own image from the viewer’s point of view, or hyper-exposure when speaking.

Over-ratification and over-exposure

Based on the concept of ratification (Goffman 1981), we analyse examples of what we call “over-ratification”: an exposed and undesirable ratification from the point of view of remote participants (see chapter “New norms of politeness in digital contexts”); and of what we call “hyper-exposure”, i.e., taking a conversational turn without wanting to.

Over-ratification

During Session 1, Christine gives the floor to the remote participants: “Perhaps we’ll give the floor to the remote participants, too. Do you have anything to say?” This question is followed by a 5-second silence. Then she adds: “No? … Amélie…? Tatiana…?”.

Following Goffman’s concept (1981), this example illustrates an over-ratification, an exposed and undesirable ratification from the point of view of the remote participants (see chapter “New norms of politeness in digital contexts”).

Over-ratification of remote participants

Amélie is exposed by an unwanted solicitation

In an interview, Amélie alludes to feeling exposed when being asked to speak:

In fact, when I’m told “Those of you online, do you have anything to say?”, when I have nothing to say, it embarrasses me a little […] there are people who are in the seminar room and who don’t participate, we don’t say to them, “Hey, you don’t have anything to say”, and I’m really all alone in the robot and that exposes me much more […] if we say, “In Adobe, you don’t have anything to say”, there are three or four people but I am obliged to say, “No, I have nothing to say”, it bothers me a little, it exposes my face more in any case.

Unwanted exposure sometimes occurs via an incidental audio hyper-exposure, in which a person takes a speech turn without intending to. For example, when the Beam user sneezes at home (see chapters “New norms of politeness in digital contexts” and “Artefacted intercorporeality, between reification and personification”), due to the anisotropy of the mediated space, she does not realise that the sound is dramatically amplified in the room.

Amélie sneezes into the Beam

Amélie comments on the episode as follows:

I didn’t feel like I sneezed very loudly and that it had such a strong impact in the room, I didn’t even realise this in fact, […] but I understood when I watched the video of the seminar, I realised the impact it had.

Voluntary hypo-exposure

The anisotropy of the mediated space (Gaver 1992, 234), i.e., the fact that the space has different characteristics depending on the orientation, makes it possible to spy on others, or to be artefactually present while being absent, or on the contrary to “arrive” unnoticed by the participants in the room.

Amélie arrives without anyone noticing

Amélie, the Beam pilot, mentions these aspects in her interview:

It’s a bit weird that first moment because in fact it’s like an intrusion but a secret discreet intrusion, it’s not an intrusion for me, it’s for the others, the robot is supposed to be connected so that means potentially at any moment I can arrive without anyone noticing, if they don’t look at the robot they can’t see me on the screen, so if I connect and I don’t move or I don’t speak, potentially people don’t necessarily realise that I’ve arrived and as soon as I move, that’s when it signals my presence.

The Beam’s “zoom” function in particular can give the user a stealthy presence, unbeknownst to the other participants.

Christelle uses the Beam’s zoom in a stealthy way

Christelle explains in an interview how she uses this “zoom” function:

I felt like the Six Million Dollar Man, I had bionic sight, without speaking, without touching anything, I was suddenly getting closer, I could see things very, very well that I could not see in real life, so it was very exciting, […] I said to myself, well, that’s what being a robot is all about, I have enhanced eyesight and I can have access to something that I couldn’t see in real life, a bit like Big Brother, […] and it’s very insidious because anyone who hasn’t used the Beam doesn’t know, they can’t even imagine […] Christine is always bugging me about looking at my phone, and in fact Christine kept chatting on her phone, I couldn’t help myself, I zoomed in […] in fact by zooming in, I was almost in their conversation, it’s as if I could see them very closely, whereas they thought I was very far away, and there’s really an intrusive side, it’s really like a Big Brother robot, because if you stay behind your screen, like that, you’re far away, people can’t imagine that you can hear.

She distinguishes between “barging in and feeling like you’re going to crush them and zooming in and being there without them sensing you’re there”.

But while the robot’s movement creates an effect, its immobility is not synonymous with inactivity, even if the participants in the room do not realise it.

Amélie is sceptical about the zoom feature

Amélie is more sceptical than Christelle about the Beam’s zoom functionality:

I don’t really use it that much I think […] as interactions are quite quick […] is it worth focusing on one person who is talking, I should test it, but then I would have to zoom out, I prefer to have a wider view, I think, of three or four people.

Generally speaking, affordances are negotiated by taking into account a set of parameters, such as the delayed adjustments of the Beam in this case.

We have seen through this study that the effects of presence linked to each device define an artefactual presence or an interactional presence, depending on the interactional co-construction implemented by the participants. The issues of autonomy of movement and of visual and sound adjustment determine the objectal or interactional status of the user and of the device they are using.

The duality between objectal and interactional status does not presume interactional felicity. Artefactual presence sometimes corresponds to the user’s intention in that it allows discretion, just as interactional presence can go against the user’s intention in that it sometimes creates situations of over-ratification. Artefactual presence can be experienced against one’s will (e.g., the Beam is moved during the break and the pilot reconnects without any cue as to the robot’s new position) or can be taken advantage of (e.g., the Beam pilot uses the zoom function unobtrusively while the participants in the room are unaware that the pilot is connected, or the Beam is moved by others in a way that facilitates the interaction). Similarly, interactional presence can be experienced against one’s will (e.g., the floor is explicitly given to participants online when they have nothing in particular to say at that moment) or can be taken advantage of (e.g., when a request in the chat is relayed verbally to the room).

To conclude this analysis, we propose the following table to sum up these categories:

|                      | Artefactual presence                                      | Interactional presence                              |
|----------------------|-----------------------------------------------------------|-----------------------------------------------------|
| Against one’s will   | Example: being inappropriately moved by others            | Example: over-ratification                          |
| Taken advantage of   | Example: being moved by others with interactional intent  | Example: a chat contribution relayed orally in situ |

Beyond an analysis centred on the characteristics of the telepresence devices used, the regulation of regimes of autonomy plays out in each person’s intentions and in their interpretation, which allow interactions to be co-constructed in a polyartefacted hybrid context, within an artefacted-interactional community.

References
Egido, Carmen. 1990. “Teleconferencing as a Technology to Support Cooperative Work: Its Possibilities and Limitations.” In Intellectual Teamwork: Social and Technological Foundations of Cooperative Work, edited by Jolene Rae Galegher, Robert E. Kraut, and Carmen Egido, 351–71. Hillsdale, N.J.: L. Erlbaum Associates.
Gabillon, Bernard. 2019. “Tentations Design.” In Design En Regards, edited by David Bihanic, 222–43. Art Book Magazine Distribution.
Gaver, William W. 1992. “The Affordances of Media Spaces for Collaboration.” In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (CSCW ’92), 17–24. ACM Press. https://doi.org/10.1145/143457.371596.
Goffman, Erving. 1981. Forms of Talk. University of Pennsylvania Publications in Conduct and Communication. Philadelphia: University of Pennsylvania Press. https://www.upenn.edu/pennpress/book/715.html.
Herring, Susan C. 2013. “Telepresence Robots for Academics.” Proceedings of the American Society for Information Science and Technology 50 (1): 1–4. https://doi.org/10.1002/meet.14505001156.
Neustaedter, Carman, Gina Venolia, Jason Procyk, and Dan Hawkins. 2016. “To Beam or Not to Beam: A Study of Remote Telepresence Attendance at an Academic Conference.” In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 417–30. ACM. https://doi.org/10.1145/2818048.2819922.
Sirkin, David, Gina Venolia, John Tang, George Robertson, Taemie Kim, Kori Inkpen, Mara Sedlins, Bongshin Lee, and Mike Sinclair. 2011. “Motion and Attention in a Kinetic Videoconferencing Proxy.” In Human-Computer Interaction – INTERACT 2011, edited by Pedro Campos, Nicholas Graham, Joaquim Jorge, Nuno Nunes, Philippe Palanque, and Marco Winckler, 162–80. Lecture Notes in Computer Science. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-23774-4_16.
Takayama, Leila, and Janet Go. 2012. “Mixing Metaphors in Mobile Remote Presence.” In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12), 495–504. ACM. https://doi.org/10.1145/2145204.2145281.
Takayama, Leila, and Helen Harris. 2013. “Presentation of (Telepresent) Self: On the Double-Edged Effects of Mirrors.” In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’13), 381–88. IEEE Press. https://doi.org/10.1109/HRI.2013.6483613.