HRI 2025 — 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2025

Anywhere Projected AR for Robot Communication: A Mid-Air Fog Screen-Robot System

Adrian Lozada, Uthman Tijani, Villa Keth, Hong Wang and Zhao Han


Abstract

Augmented reality (AR) allows visualizations to be situated where they are relevant, e.g., in a robot's operating environment or task space. Yet, headset-based AR suffers from a scalability issue because every viewer must wear a headset. Projector-based spatial AR solves this problem by projecting augmentations onto the scene, e.g., recognized objects or navigation paths, viewable to crowds. However, this solution mostly requires vertical flat surfaces that may not exist in large open areas like auditoriums, warehouses, construction sites, or search and rescue scenes. Moreover, when humans are not co-located with the robot or are situated at a distance, the projection may not be legible to them. Thus, there is a need to create a projectable, viewable surface for humans in such scenarios.

In this HRI systems paper, we introduce a fog screen-robot system that integrates a mid-air fog screen device into a robot to create such a projectable surface, and we present two evaluations, in a construction site and a search and rescue scenario, with high-stakes communication needs. Specifically, we first implemented an existing fog screen device, which could only form a fog screen one-third of a meter long (33cm), and improved it to achieve a fog screen length of over half a meter (53cm). In the noisy construction site scenario, the robot inspected the site and projected icons for missing wall sockets and plumbing fixtures. In the unstructured search and rescue scenario, the robot projected a person icon for a first responder to save a life. All 3D models and code are available at https://github.com/TheRARELab/hri-25-fog-screen-robot-system. Videos are available at https://osf.io/b4efu/.



Index Terms: Robot communication, fog screen, mid-air display, augmented reality (AR), projector-based AR, human-robot interaction


I. Introduction

For robot communication, various modalities have been explored [1], including speech and audio [2, 3], gaze [4], visual displays [5], and body language like gestures and postures [6]. In the last decade, research efforts have started to leverage augmented reality (AR) to enhance robot communication of non-verbal cues [7, 8, 9, 10, 11, 12], including projector-based AR works [13, 14] where a robot's navigation paths and manipulation intent were projected onto the ground and table surface, viewable to crowds.


A photo of the fog screen-robot system. A Fetch Robot stands in a room with a projector on the top left of its base, projecting a red human icon to the fog screen. Next to the projector is the fog machine device, which consistently outputs the fog to form a screen.
Fig. 1.  The fog screen-robot system. A human icon is projected, which is useful in search-and-rescue scenes to indicate the sign of life of a human victim. A video is available at https://osf.io/ny5cv. We also simulated a construction site scenario, discussed in Section VI.

Particularly, AR allows visual overlays situated in a robot's task environment to externalize the robot's internal states. Projector-based AR further improves on headset AR by offering scalability: no one in a crowd needs to wear an AR headset to view the augmentation. However, some environments may lack projectable surfaces, e.g., warehouses, construction sites, and search and rescue scenes. Falling back to a monitor is limited by its screen size, while falling back to LED light signals loses the rich semantics that an image can communicate. Moreover, projections in some environments may not be legible or visible when humans are not co-located with the robot and are farther away, e.g., a first responder in a potentially noisy hallway may not be able to hear a robot's speech or see a floor projection clearly when the robot has found a victim and needs to communicate from a distance.

While these AR works [13, 7, 8, 9, 10, 11, 12, 14] improve the understanding of a robot’s intent and behaviors like manipulation and navigation, we must solve the problem that some environments lack projectable surfaces. So, how can we leverage projector-based AR for robot communication when there is no available or suitable (e.g., irregular) surface to project onto?

To retain the benefits of projector-based AR, we propose a fog screen-robot system that integrates a fog screen device with a robot to create a mid-air flat display for robots to communicate in environments lacking projectable surfaces. Specifically, we first implemented an existing fog screen device prototype, Hoverlay II [15]. However, it only achieved a short (33cm) fog screen, limiting how far the projection could reach, e.g., from inside a room to outside it. Our problem analysis suggested two causes: a low-density fog screen resulting from low fog output, and weak fans providing insufficient airflow regulation to form a screen.

To address these problems and integrate the device with the robot, we purchased a more powerful fog machine, developed a custom controller, and programmed it in the Robot Operating System (ROS) for the robot to project autonomously. We also selected more powerful fans and narrowed the fog outlet for better airflow regulation of the fog, leading to a new prototype (Fig. 1). To further address the robot's mobility and navigation needs, we purchased a portable battery station.

With these improvements, our system increased the fog screen length from 33cm to over half a meter (53cm). For further evaluation, we carried out two case studies with high-stakes communication needs, one in a construction site and another in a search and rescue scenario, to show the utility of our system in real-world settings.

II. Related Work

A. AR and Projector-Based AR for HRI

In robotics, AR has been explored for its potential benefits to enhance interactions [16]. For example, Groechel et al. [17] enabled an armless robot to express body language with two virtual arms to improve perceived emotion, usability, and physical presence. Liu et al. [18] used AR to help users better understand a robot’s knowledge structure, e.g., perceived objects, so users can better teach new tasks and understand failures. Hedayati et al. [19] showed that AR visualizations for an aerial robot’s camera capabilities reduced crashes during teleoperation. For a comprehensive review of AR for robotics, we refer readers to the survey by Walker et al. [16].

More closely related to our work is projected spatial AR, in which projections are situated in the robot’s operating environment, viewable to a crowd of people without everyone wearing an AR headset. For example, Chadalavada et al. [14] and Coovert et al. [9] proposed projecting a mobile robot’s intention onto the shared floor space with arrows and a simplified map. Han et al. [13] proposed an open-source software and hardware setup for projected AR to popularize this technology in the robotics and HRI community. As another example, Walker et al. [7] used AR to help aerial robots convey their motion intents. A user study showed that AR designs, such as NavPoints, Arrows, and Gaze, increased task efficiency and communication clarity.

B. Mid-Air Displays

As mentioned in Section I, one limitation of projector-based AR is the requirement of flat surfaces for projection, which may not be available in some environments. A potential solution is using mid-air displays for projected spatial AR with a robot for communication in these environments. These displays create floating surfaces in the air without the need for physical screens. Researchers have studied different aspects of this technology, e.g., adding gestural interactions [20], developing reconfigurable displays [21, 22, 23, 24], displaying 3D visuals [25], supporting multi-user interaction [26], creating a wall-sized walk-through display [27], projecting onto disturbed and deformed screens [28], and improving user engagement with tactile feedback [29, 30].

Yet, existing fog screen systems are bulky and practically impossible to integrate with a robot. For example, Sand et al.'s [30] is over a square meter (155×105cm) and DiVerdi et al.'s [27] is more than 2 meters wide. Commercial products like Lightwave's Fogscreen Pro [31] are even wider (2–3.10m). If integrated, such devices would severely limit the navigation capabilities of mobile robots or mobile manipulators. There are portable options, such as the 21×10×7cm handheld display [29] and the device in [21] with a screen width between 6-14cm. However, these screens are too small for projections to be seen when humans are meters away from the robot. Finally, Walter [15] developed Hoverlay II, a small open-source fog screen device (32.6×27.1×24cm) that can produce a wide floating screen extending over a meter. Among all these works, its moderate form factor makes it ideal for integration with robots, and its wide screen allows robots to project content that humans can still see from farther away.

Despite this body of work from the computer graphics community, it remains unexplored how to integrate a fog screen with a robotic system, how to use it for communication purposes in HRI, and what the use cases are.

III. Implementing Hoverlay II Fog Screen Device

Our goal is to build a compact, portable fog screen device that can be mounted conveniently on a robot, enhancing its communication capabilities through projections in open environments where no projectable surfaces exist. To begin, we chose to implement the Hoverlay II project [15].

A. Underlying Principles


A top view of the fog screen device showing its internal mechanism. The device is a cone-shaped structure with a fog machine at the left end. Air enters through fans located at both the upper and lower ends, labeled with generating high-pressure air. The blue particles (fog) within the device generated by the fog machine fill the middle container and exit the right opening. The laminar airflow former regulates the dispersed fog into a laminar flow to form a screen.
Fig. 2.  Top view of the fog screen device showing its internal mechanism. The blue particles (fog) within the device generated by the fog machine fill the middle container and exit the right opening. The laminar airflow former (Fig. 3) regulates the dispersed fog into a laminar flow to form a screen.

As seen in Fig. 2, the device’s operational principle is a three-stage process that converts free-form fog into a flat screen. In the first stage (See Fig. 2 middle), fluid is transformed into microscopically fine droplets to form an uncontrolled mist or fog. In the second stage, two arrays of fans (“Upper fans” and “Lower fans” in Fig. 2) generate two streams of high-pressure air. In the final stage, two laminar airflow formers (See Fig. 2 right) regulate the flow of the high-pressure air when the air exits the device case to ensure the formation of a flat screen ideal for projection.

Concretely, we chose to purchase a handheld, off-the-shelf fog machine, the MicroFogger 5 Pro [32], to generate fog from a special liquid called "fog juice" [33], which consists of water, glycerin, and propylene glycol. When the liquid is heated, it turns into fog as it meets the cooler air outside. Although fog can also be produced by an ultrasonic atomizer placed in a container of water, as used in Hoverlay II, we found that the fog machine brings two benefits beyond its small form factor that help us reach our goal of mounting it onto a robot. First, it is cleaner because there are no concerns about water leakage. Second, it produces denser fog using fog juice [33].


Diagram of a laminar airflow former, used for directing fog in a flat-screen formation. The figure shows a long, cylindrical object composed of multiple hexagonal cells, similar to a honeycomb. A magnified section highlights this honeycomb-like structure.
Fig. 3.  The design of the laminar airflow former, a passage for the high-pressure air from the fans that makes the fog exit in a particular direction to form a flat screen. Looking closer at the airflow former design, it resembles a series of plastic straws glued together side by side, like a honeycomb.

As seen in the mid-left of Fig. 2, the fog container hosts the fog machine and accumulates the generated fog before it comes out of the opening on the right. Because the fog at this stage disperses as it exits, it is unsuitable to project onto, as it is not flat. To solve this, the fog must be controlled to follow a laminar flow resembling a thin layer of surface in mid-air for projection. The two arrays of fans on both sides (Fig. 2 top and bottom) and the honeycomb-structured airflow formers (Fig. 2 right and Fig. 3) are designed to achieve this. The fans generate high-pressure air by sucking in air through the vents at the left end of the device and pushing it through the airflow formers, where the airflow is made laminar. The high-pressure air on both sides serves as a barrier, forcing the fog to remain between the two streams and compressing it into a flat layer in mid-air suitable for projection.


Full side view of the fog screen device with fans. At the top of the cylinder, fans are located at two sides, directing airflow into the device. A vertical fog screen emits from the laminar airflow formers.
Fig. 4.  Full side view of the fog screen device with fans. This is how it is positioned on the robot, allowing a wide viewing and projection angle.

Fig. 4 shows a front-side view of the device, giving a full look at the smaller view in Fig. 2. It is also a high-level view of where the fans, airflow formers, and a rendering of the fog screen are. This perspective helps us better understand the role of each visible element in creating a stable, flat screen ideal for projecting visual content legibly.

B. Building First Prototype

To test the Hoverlay II design, we developed 3D models using the SolidWorks CAD software and built a prototype from laser-cut acrylic and 3D-printed parts. Specifically, we designed two CAD models of the case to house the fog machine, the fans, and the airflow formers. The honeycomb laminar airflow former in Fig. 5 was designed to fit the two front openings of the housing. 3D printing is necessary due to the honeycomb pattern of the airflow formers. Additionally, because each piece is larger than the 3D printer's build volume, the pieces were split into symmetrical halves and joined together later.


The 3D model of the symmetric halves of the airflow former, which consist of four rectangular airflow formers with a honeycomb structure inside.
Fig. 5.  The 3D model of the symmetric halves of the airflow former. We cut it in half because its height exceeds the 3D printer's build volume. As two airflow formers are needed, four halves were printed and each pair was glued together.

To fabricate the device, we adopted a hybrid approach. As the case has large panels, we laser-cut 3mm and 2mm acrylic boards to make the individual case components. The two thicknesses were not a design choice but the result of a limited supply of acrylic sheets at the authors' institution. Acrylic was also used for its durability compared to non-solid 3D prints; because it is not easily broken, it eases maintenance.

Because the flat parts must be joined after cutting, we added interlocking joints to each of them. To assemble the case, we chose hot glue as it is easy to apply and bonds strongly. The acrylic case can withstand the glue's high temperatures, ensuring a durable and airtight assembly.


A figure shows a 3D design and physical assembly of the initial fog screen device prototype. The left part (a) shows the front side view of the prototype, with a grey 3D rendering design on the left and the physical prototype on the right. The right part (b) shows the back view of the prototype, with a grey 3D rendering design on the left and the physical prototype on the right.
Fig. 6.  3D design and physical assembly of the initial fog screen device prototype. Assembled components include the housing, airflow formers, fans, and fog container.

Finally, as shown in Fig. 6, we attached the airflow formers to the front side of the case and two columns of eight 80mm fans [34] to the back side. The eight fans were powered through two PWM fan splitter connectors [35] to simplify wiring. The entire fog screen device and a projector [36] were placed on the Fetch robot's base for testing (Fig. 1), keeping within the robot's original footprint.

C. Fog Emission Control

To enhance the longevity and performance of the MicroFogger 5 Pro, mainly to avoid overheating issues associated with its heating coil, we integrated an Arduino Nano microcontroller to automate the MicroFogger’s operational cycles. The MicroFogger was activated for 10 seconds to generate fog, followed by a 5-second rest period. This controlled cycling preserved the device’s components by preventing overheating and maintaining consistent fog quality.
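As an illustrative sketch, this duty cycle can be expressed as a simple Arduino program. The trigger pin number and the assumption that the MicroFogger is activated by driving a single digital output are hypothetical and may differ from our actual wiring; the code in the repository is authoritative.

```cpp
// Illustrative Arduino sketch of the 10-second-on / 5-second-rest fog cycle.
// FOG_TRIGGER_PIN and the single-pin trigger interface are assumptions;
// see the repository for the actual controller code.
const int FOG_TRIGGER_PIN = 2;  // hypothetical pin driving the MicroFogger trigger

void setup() {
  pinMode(FOG_TRIGGER_PIN, OUTPUT);
  digitalWrite(FOG_TRIGGER_PIN, LOW);  // start with the fogger idle
}

void loop() {
  digitalWrite(FOG_TRIGGER_PIN, HIGH); // activate fog generation
  delay(10000);                        // 10-second fog burst
  digitalWrite(FOG_TRIGGER_PIN, LOW);  // rest period to protect the heating coil
  delay(5000);                         // 5-second cool-down
}
```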

IV. Preliminary Evaluation


Two sets of visual comparisons between two fog machine prototypes in two rows. The top one shows three timed snapshots (00:03, 00:04, 00:05) of the first prototype with a 12mm fog outlet, whose projected icon is unrecognizable at 33cm. The bottom one shows the modified prototype with a 3mm fog outlet, whose fog screen length is 40cm.
Fig. 7.  Improvement in the fog screen length when the fog outlet width is reduced to a quarter (3mm) of its original width (12mm). (a) Snapshots showing how the projected icon gradually became unrecognizable at 33cm for the first prototype. (b) We tested the 6mm fog outlet and saw no change. (c) Reducing to a quarter (3mm) increased the fog screen length to 40cm. The video for (a) is available at https://osf.io/4gkza.

First, we tested the airflow former's effectiveness in keeping the fog flow laminar, i.e., flat, by projecting a person icon. As shown in Fig. 7a, the icon gradually becomes unrecognizable. As seen in the last image, the implemented fog screen device is only able to form a flat fog screen up to 33cm before the person icon is no longer recognizable, unexpectedly shorter than the one meter (100cm) claimed by the Hoverlay II project [15].

Then we tested the responsiveness of the fog screen formation. The MicroFogger 5 Pro began generating fog within 2 seconds but took 1 to 2 minutes to warm up before producing denser fog. Once warmed up, it produced dense fog immediately upon reactivation. For the fog screen device with the fog container, there was a 2-second delay before a screen began to form. So, with the fog machine warmed up, it took about 4 seconds to produce a flat fog screen.

A. Problem Analysis

We identified three problems that may have caused the short-length fog screen and a fourth problem with integrating with the robot.

The first problem was related to the MicroFogger's maximum fog output of 500CFM (Cubic Feet per Minute, a measure of gas movement); that is, it produces 23.60cm³ of fog per second at its maximum performance. As we projected images further along the fog screen, the fog became less and less dense, making the projections unrecognizable. While 33cm may be adequate for a static setup, for scenarios such as search and rescue, where a robot scans a room for injured victims and projects information into a hallway, a longer fog screen can more quickly inform a first responder of an emergency.

The second problem was weak regulation of the fog, i.e., the inability to keep the airflow laminar over a longer distance. A possible cause is that each fan provides air movement of only 25CFM, about 1.18cm³ per second.

The third problem might be the 12mm opening of the fog outlet. We chose 12mm to push more fog out after it accumulates in the fog container. However, this width made it hard to regulate the fog into a screen because the fog within the outlet was less dense. Narrowing the outlet further can make the fog within it denser.

Finally, this initial prototype lacked mobility, as it required a wall outlet to power the projector and the fans, making it unsuitable for robots that move around. To address this issue, we need a mobile power source.

V. Improvements

A. 2000-CFM New Fog Machine

To solve the low fog volume issue, we opted for a 400-watt fog machine with 2000CFM fog output [37], up from the original 500CFM, i.e., a fourfold improvement from 23.6cm³ per second to 94.4cm³ per second. As the fog machine also became bigger and could not be hosted in the existing housing, we moved it outside the housing and used a flexible pipe [38] to connect the fog machine outlet to the top of the housing to supply fog (See Fig. 1 and 12). However, we found the fog volume remained low. It turned out that the fog was trapped in the pipe due to inadequate mixing with cooler air, which is needed for condensation to form fog. To solve this, we added holes in the pipe near the fog outlet (see Fig. 12) to allow immediate air mixing, which increased the generated fog volume and density.

B. 125CFM 24V Fans

We upgraded the weak 80×80mm 25CFM 12V fans to larger 120×120mm 24V fans with 125CFM output. Compared to the previous four 80×80mm fans per column, the new 12×12cm fans required only three per column.

C. Custom Fog Machine Controller Circuit


A custom-built fog machine controller circuit featuring an Arduino Nano microcontroller. Labels point to the rectifier, Arduino Nano, output wire, live wire, and neutral wire.
Fig. 8.  Custom-built fog machine controller circuit featuring an Arduino Nano microcontroller. Key components include a relay, rectifier, transistors, resistors, and diodes. The relay controls the power supply to the fog machine, with connections for neutral and output.

For the Fetch robot to autonomously control the fog machine, we developed custom controller hardware by soldering the necessary components onto a circuit board (Fig. 8), including an Arduino Nano microcontroller, a relay, a rectifier, transistors, resistors, and diodes.


Labeled DMX port connections for the bigger 400W fog machine showing the Live, Neutral, and Output terminals.
Fig. 9.  DMX port connections for the 400W fog machine showing the Live, Neutral, and Output terminals. This is where the cable from the custom controller is plugged in.

Initially, we faced the problem that there was no documentation on how to interface with the machine. To build the custom controller, we disassembled the machine's built-in remote controller to understand its design and inner workings. We found that the built-in controller, through a DMX port (Fig. 9), connects the neutral 120V AC wire to the output wire via a relay and a button that closes the circuit. The DMX port has three connections: the neutral (the port on the left), live (the port on the right), and output (the port in the middle). The live wire indicates whether the machine is ready to produce fog: there is a 3- to 5-minute initial warm-up once powered, after which it produces fog for up to 35 seconds. It then needs to warm up again for 45 seconds before producing fog again.

Our custom controller turns the fog machine on or off by connecting the output wire to the neutral wire. The fog machine signals its status by setting the live wire high (ready) or low (not ready). To implement our controller, we gathered an Arduino Nano Every [39] microcontroller, transistors, resistors, diodes, a relay, and a rectifier. Transistors were used to safely control the relay, as the Arduino does not supply enough current to actuate the relay coils. The diodes protect the transistors from the relay's voltage spikes, and the resistors limit the current so the Arduino operates safely.

For the relay, due to the high voltage (120V AC) of the live and neutral wires, a direct connection to the Arduino Nano is not feasible. So we used an intermediary relay element that uses an electromagnet to mechanically operate a switch. Specifically, we used an HFD2/005-M L2 latching relay [40], which stays in its current state, either on or off, until receiving another signal for change. To connect the relay, we wired pins D13 and D12 of the Arduino Nano Every to the two coils of the relay. Both coils’ ground (GND) connections were connected to pin D3 of the Arduino Nano. The fog machine’s neutral and output wires were connected to the relay’s input and output pins. For safe handling of the custom controller, we designed and 3D-printed a case to house the circuit.

We used the rectifier because the Arduino only works with DC voltages and cannot handle the 120V AC on the live terminal. The rectifier converts this 120V AC signal into a 5V DC signal suitable for the Arduino. Connected to the controller circuit, it lets the Arduino know when the fog machine is warming up and when it is ready.
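To make the controller logic concrete, the following is a minimal sketch of the Arduino-side behavior described above. The coil pins (D13, D12) and the common return on D3 follow the text; the readiness input pin, the pulse duration, and the fixed 35-second window are assumptions made for illustration.

```cpp
// Minimal sketch of the custom fog machine controller logic (illustrative only).
// D13 and D12 drive the two coils of the HFD2 latching relay through transistors;
// D3 is the coils' common return. READY_PIN and the timings are assumptions.
const int COIL_ON_PIN  = 13;  // pulse to latch the relay closed (output meets neutral: fog on)
const int COIL_OFF_PIN = 12;  // pulse to latch the relay open (fog off)
const int COIL_GND_PIN = 3;   // common return for both relay coils, held LOW
const int READY_PIN    = 2;   // assumed input for the rectifier's 5V "ready" signal

void pulseCoil(int pin) {
  digitalWrite(pin, HIGH);    // energize one coil briefly ...
  delay(50);                  // ... long enough to flip the latching relay
  digitalWrite(pin, LOW);     // the relay then holds its state with no current
}

void setup() {
  pinMode(COIL_ON_PIN, OUTPUT);
  pinMode(COIL_OFF_PIN, OUTPUT);
  pinMode(COIL_GND_PIN, OUTPUT);
  digitalWrite(COIL_GND_PIN, LOW);
  pinMode(READY_PIN, INPUT);
}

void loop() {
  if (digitalRead(READY_PIN) == HIGH) { // fog machine warmed up and ready
    pulseCoil(COIL_ON_PIN);             // start producing fog
    delay(35000);                       // the machine runs for up to 35 seconds
    pulseCoil(COIL_OFF_PIN);            // stop while the machine re-warms
  }
  delay(500);
}
```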


TABLE I
Improvements made for longer fog screen length

                             | First Prototype                             | New Iteration (1/4 fog outlet, powerful fans)
                             | Original  | 1/2 Fog Outlet | 1/4 Fog Outlet | 12 Volts (62.5 CFM) | 18 Volts (93.75 CFM) | 24 Volts (125 CFM)
Fog outlet width             | 12mm      | 6mm            | 3mm            | 3mm                 | 3mm                  | 3mm
Fog screen length            | 33cm      | 33cm (+0)      | 40cm (+7)      | 49cm (+16)          | 49cm (+16)           | 53cm (+20)
Images to verify the lengths | Fig. 7a   | Fig. 7b        | Fig. 7c        | Fig. 14a            | Fig. 14b             | Fig. 14c

D. Narrower Fog Outlet


Fig. 10.  Triangle-shaped 3D-printed structures attached to both walls of the fog outlet to reduce the fog outlet width to one-fourth, 3mm from 12mm. A 3-mm version was also created to reduce the fog outlet width to half (6mm).

To reduce the fog outlet, we used a divide-and-conquer technique. We first reduced it to half of the original fog outlet width (6mm) but did not see a difference (Fig. 7b). We then further reduced it to one-fourth (3mm) and achieved a 40cm fog screen length (Fig. 7c). This length was achieved on the first prototype by attaching the two 3D prints in Fig. 10 to both walls to narrow the outlet. The results are summarized in Table I.


From left to right, the first figure shows a comparison of the initial prototype housing on the left-hand side with the improved housing for the new iteration on the right-hand side. The second figure shows the top-down view of the initial prototype housing, and the third figure shows the top-down view of the improved housing for the new iteration.
Fig. 11.  Comparison of the initial prototype housing (left) with the improved housing for the new iteration (right). While the height remained the same (37cm), the new housing is narrower (24.6cm vs. 29cm) and shallower (16.8cm vs. 27cm) than the initial design due to the smaller fog container and the shorter distance between the fans and the airflow formers.

A photo of an assembled fog screen system. A projector is positioned on top of a power station. A fog screen device is mounted vertically, with a fog pipe located at the top. A fog machine is under the fog screen device.
Fig. 12.  Components of the fog screen-robot system. Limited space on the base was optimized using 3D-printed shelves mounted with M5 bolts. This allowed us to stack the projector on the power station and the fog screen device on the fog machine. The zoomed-in circle highlights the holes added to the pipe for fog condensation, as detailed in Section V.A.

E. Smaller Housing

As the housing could no longer host the new fog machine, we made several changes to it (See Fig. 11). First, we made the fog container smaller. Second, we opened a hole in the top to attach a pipe, as seen in Fig. 12. Third, we modified the back structure of the housing, positioning the fans parallel to the airflow formers for two benefits: it saves space, leading to a shallower design that facilitates placement on the robot's base, and it ensures the airflow is not re-directed before passing through the airflow formers. Previously, the fans were installed at an angle and the air would hit the wall, disrupting the flow, before going through the airflow formers.


A figure displays an improved airflow-former in three parts. Part (a) shows a 3D view of the airflow-former, a long triangular-shaped structure with honeycomb structures inside. Part (b) shows the previous design, where generated air is depicted with arrows pointing downwards, and two areas at the bottom corners show trapped air. Part (c) shows the new design, where all generated air is directed outwards with no air getting trapped.
Fig. 13.  Improved airflow former used in the new iteration and a comparison figure showing its benefit over the old one. (a) The improved airflow former. The difference with the old one is the added extended block on the left of the figure. (b) This shows how part of the generated air in the previous design gets trapped which reduces the airflow pressure. (c) This shows how the new design makes all generated air flow out and increases air pressure.

F. Airflow Former Improvement

As seen in Fig. 13, we made a minor revision to the airflow former. Previously, as shown in the lower part of Fig. 13b, some air was trapped in the housing. The new airflow former adds two extensions, seen in the lower part of Fig. 13c, allowing more air to flow smoothly through the airflow formers and thereby increasing the airflow strength.

G. ROS Integration

Finally, to allow the Fetch robot to autonomously control the fog machine, we developed two ROS services to communicate with the Arduino over USB using the "rosserial_arduino" serial communication package [41] to turn the fog machine on and off. The rosserial_arduino package facilitates communication between the Fetch robot, which runs ROS, and the Arduino by enabling the Arduino to function as a ROS node. The Arduino controls the fog machine through the three aforementioned pins via the latching relay [40], turning it on or off, and reads the rectifier for fog production readiness. This node advertises two ROS services, "/fog_machine/turn_on" and "/fog_machine/turn_off", which the robot can call whenever it needs to use the fog screen. The "fog-machine-control" folder has the ROS service Arduino code with documentation for installation and usage. To call these services, one can write a ROS service client node by following the tutorial on the ROS wiki either in C++ [42] or Python [43]. Sample code is provided in the "evaluation" folder in the supplementary material.
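For reference, a minimal C++ client for these two services could look like the sketch below, assuming they use the std_srvs/Empty service type; the actual service definitions and sample clients are in the repository.

```cpp
// Minimal roscpp client for the fog machine services (illustrative sketch).
// Assumes the services use std_srvs/Empty; see the repository for the actual type.
#include <ros/ros.h>
#include <std_srvs/Empty.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "fog_screen_demo");
  ros::NodeHandle nh;

  ros::ServiceClient turn_on =
      nh.serviceClient<std_srvs::Empty>("/fog_machine/turn_on");
  ros::ServiceClient turn_off =
      nh.serviceClient<std_srvs::Empty>("/fog_machine/turn_off");

  std_srvs::Empty srv;
  turn_on.call(srv);            // start forming the fog screen
  ros::Duration(30.0).sleep();  // project while fog is available (within the 35 s window)
  turn_off.call(srv);           // stop the fog machine
  return 0;
}
```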

H. Placement on Fetch and Mobility

With all the improvements, we designed shelves to fit all the components on the Fetch robot's base. As shown in Fig. 12, the left shelf was mounted onto the base above the power station, and a projector was placed on top of it. The right shelf was placed above the fog machine and held the fog screen device. To power all the components while the robot is navigating, we used a portable power station [44] to power the projector, fog machine, fans, and Arduino.

VI. Fog Screen-Robot System Evaluation

A. Length of Fog Screen


A sequence of images showing tests on the fog screen device at different voltage and airflow settings. Each row of images represents a different test setting: (a) 12 Volts (62.5 CFM), (b) 18 Volts (93.75 CFM), and (c) 24 Volts (125 CFM). Each test shows a human icon projected onto the fog; as the distance increases through 45cm, 49cm, 53cm, and 57cm, the icon becomes less distinguishable.
Fig. 14.  Sample video frames of the tests on our system with different fan outputs. Each row shows how the projected icon gradually becomes unrecognizable as we test the fog screen length by projecting icons further and further away. Four video frames are shown for each distance to show the consistency of observations. The system achieved the longest fog screen length, 53cm, when the fans were at 24V (125 CFM).

We first evaluated our system by measuring the fog screen length achieved. We took videos of all the tested conditions and extracted a series of frames (Fig. 14) to show how consistent the icon legibility was. Specifically, we applied the divide-and-conquer technique. As we upgraded our fans from 12V to 24V, we tested three fan speeds at 12V, 18V, and 24V to compare with the previous 12V fans. Regarding the voltage-speed conversion, we found that voltage is proportional to fan RPM and CFM [45]: 12V (62.5CFM per fan, and 375CFM in total), 18V (93.75CFM per fan, and 562.5CFM in total), and 24V (125CFM per fan, and 750CFM in total). Results are shown in Fig. 14 and the last three columns of Table I.

During the evaluation, we projected the icon at 33cm (the maximum for the initial prototype with weaker fans) and increased the distance in 4-cm increments until the icon became unrecognizable. We found that the 4-cm step size provides increments small enough to capture changes. As seen in Fig. 12, because the projection was thrown at an angle of about 18° to the fog screen, it gets stretched when projected further and further away. To correct this, we shortened the icon width by 10% at each 4cm increment after 37cm. As seen from the projected icons throughout this paper, this adjustment kept these symmetric icons looking symmetric.
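As a worked example, assuming the 10% reduction compounds at each step, an icon projected at 53cm (four 4-cm increments past 37cm) is drawn at about 0.9^4 ≈ 66% of its width at 37cm. A short calculation illustrating this scaling:

```cpp
// Worked example of the icon-width correction: shrink the width by 10%
// at each 4cm increment beyond 37cm (compounding assumed for illustration).
#include <cmath>
#include <cstdio>

int main() {
  for (int d = 37; d <= 57; d += 4) {
    int steps = (d - 37) / 4;             // 4cm increments past 37cm
    double scale = std::pow(0.9, steps);  // remaining fraction of the icon width
    std::printf("%d cm -> width scale %.2f\n", d, scale);
  }
  return 0;
}
```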

Fig. 14 shows the last few increments before the icon was no longer recognizable (see the rightmost set of frames). Our system was able to project at a distance of 53cm (over half a meter, compared to the 33cm baseline) when the fans were at 24V (last row of Fig. 14). At 57cm, the projected icon is no longer consistently recognizable.

B. Case Studies

We then tested our fog screen-robot system by simulating two indoor environments with real-world conditions, i.e., a search-and-rescue scenario and an office construction site. These scenarios may lack flat surfaces, or, even with a flat ground surface, a floor projection becomes too small to recognize when humans are farther away. Furthermore, wireless communication is not possible because wireless signals may be blocked by walls and debris in the search-and-rescue scenario or may not yet be available in construction sites before routers are installed. We did not consider outdoor scenarios where field robots may operate, because such open areas typically allow good reception of wireless signals.

To map the environments and for the robot to navigate to specified goals, we used the ROS navigation stack [46].
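For completeness, goals can be sent to the navigation stack through its move_base action interface, as in the generic sketch below; the frame and coordinates are placeholders rather than the exact goals used in our case studies.

```cpp
// Generic example of sending one goal to the ROS navigation stack via move_base.
// The target pose is a placeholder in the pre-built map, not a goal from our studies.
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "inspection_goal_sender");
  actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> client("move_base", true);
  client.waitForServer();  // wait for the navigation stack to come up

  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 2.0;     // placeholder goal position
  goal.target_pose.pose.orientation.w = 1.0;  // face along the map x-axis

  client.sendGoal(goal);
  client.waitForResult();  // block until the robot reaches the goal or aborts
  return 0;
}
```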


A figure shows a search and rescue scenario with a robot navigation path. It shows the locations of an injured victim in the middle, the first responder at the left bottom corner, and the robot at different times. Arrows show the robot’s movement path.
Fig. 15.  An illustration of a search and rescue scenario in a potentially noisy hallway. The robot navigates through the hallway, detects a victim, and heads back toward the first responder. At the corner, it projects a person icon to alert first responders at the other end of the hallway.

A series of video frames showing search and rescue case study. The first two photos show the robot starting to move, the third photo shows the robot turning right, and the last photo shows a projected human icon on the fog screen visible from the starting point.
Fig. 16. Video frames of our simulated search and rescue case study where a robot operates amidst loud noise. While a first responder was searching other rooms, the robot was searching a room to the right of the end of the hallway. It finds an injured person in the room and projects a live human icon for first responders to rescue the person. Video: https://osf.io/x735a.

As shown in Fig. 15, we simulated a high-stakes search-and-rescue scenario where a robot navigates a building during a simulated search-and-rescue mission. As it moves through hallways, which may have rubble and debris, and checks rooms, it locates an injured person. It then heads back to the end of the hallway. Approaching the corner and detecting loud noise, the robot decides not to use speech or to project onto the debris-covered ground, but instead projects a human icon onto the fog screen to communicate the victim's location to first responders at the other end of the hallway. This quickly relays vital information in critical situations, ensuring that first responders can act swiftly to save lives. Fig. 16 shows a real robot navigating the hallway, scanning a room, and projecting a live human icon for first responders to see. However, as this work does not focus on navigation through debris, the floor was left flat.


A figure shows an office construction site scenario with a robot navigation path. It shows the locations of the worker at the third right cubicle, and the robot at different times. Arrows show the robot’s movement path.
Fig. 17.  An illustration of an office construction site scenario showing that the robot navigates to cubicles, inspects for missing installations, and alerts the worker in the potentially noisy hallway.

A series of photos showing a case study in an office building. The first four photos show the robot going to the cubicle on the left and then proceeding to the next on the right. The fifth photo shows it found missing light bulbs and projected a bulb icon, and the last photo shows it found missing electrical socket installations and projected a wall plug icon.
Fig. 18.  Snapshots of a case study in an office building under construction. The robot inspects for missing installations and alerts the worker. It first goes to the cubicle on the left, finds no missing installations, then proceeds to the right, and finds missing light bulb and electrical socket installations. It then projects a bulb icon and a wall plug icon to alert the workers. Videos: https://osf.io/db564 and https://osf.io/7jznd.

We also tested our system in a simulated office building construction site, which is often loud and chaotic, and thus not suitable for speech. Debris can obstruct surfaces, so projections on some surfaces cannot be seen, and irregular surfaces like unfinished walls are also not suitable to project onto. Due to the complexity and size of large-scale construction projects, it is not uncommon for the electrical or plumbing team to miss installations in rooms, e.g., electrical outlets, light fixtures, or plumbing fixtures, leading to project delays and increased costs if not immediately identified.

In the scenario shown in Fig. 18, the robot is deployed to navigate through the simulated cubicle spaces in an office construction site and inspect whether all required components have been installed. When the robot detects a missing installation, it projects icons of the missing components onto the fog screen, e.g., a wall socket plug and a light bulb icon.

VII. Discussion and Limitations

Our novel fog screen-robot system demonstrated projected AR communication in areas with no suitable surfaces and in situations where humans are not co-located with, or are farther away from, the robot. While user studies have yet to explore how fog screens improve human-robot teaming tasks and how they benefit the task and the subjective perception of the robot, our work serves as a basis for exploring this area further.

While selecting and designing icons for evaluation, we found that symmetrical icons appeared clearer than asymmetrical ones, likely because fog density decreases toward the far end of the icon; for a symmetrical icon, humans subconsciously complete the other half. Besides, as the fog is constantly flowing, detailed icons lose their details as they are stretched and blurred by the flow. Based on these observations, we recommend that projection icon designers prioritize symmetry and simplicity for projection legibility.

The system currently has a few limitations. First, the Fetch robot has a base diameter of about 50cm. Robots with less mounting space would require some design modifications to fit all components to their base. Nonetheless, we believe the form factor will become much smaller, as we have witnessed in many other technologies, e.g., room-sized computers and bulky AR with computing devices placed in a backpack.

Second, the projected icon is most legible on the fog screen, at a maximum of 53cm, only if viewed at a certain angle facing away from the projector lens (see Fig. 14). This differs from normal projector use, where the projection surface is perpendicular to the projector's throw direction. Yet, a robot can detect humans and rotate itself so that a human's viewing angle is optimal.

Third, the fog machine can only produce fog intermittently. When powered on, it takes 3-5 minutes to warm up before producing fog for 35 seconds. After this first cycle, the warm-up time reduces to 45 seconds before it can produce fog again for 35 seconds. This timing mechanism is a safety measure against overheating; tampering with the cycle is dangerous and poses a fire hazard. This limitation means the robot cannot use the fog screen for more than 35 seconds at a time. As a potential solution, two fog machines can be used to produce fog in a round-robin manner without interruption.

VIII. Conclusion

In this work, we proposed a fog screen-robot system, an integration of a fog screen device into a robot so it can communicate anywhere, even without a projectable surface. This addresses the inability of robots to communicate in environments lacking projectable surfaces and in scenarios where humans are not co-located with, or are farther away from, the robot. We first implemented an existing fog screen device and then made improvements to achieve a longer fog screen length (an increase from 33cm to 53cm). We demonstrated search-and-rescue and construction site case studies to show how our approach can be applied in real-world settings with high-stakes communication needs.

References

[1] C. Tsiourti, A. Weiss, K. Wac, and M. Vincze, “Designing emotionally expressive robots: A comparative study on the perception of communication modalities,” in Proceedings of the 5th international conference on human agent interaction, 2017, pp. 213–222.

[2] G. Bolano, L. Iviani, A. Roennau, and R. Dillmann, “Design and evaluation of a framework for reciprocal speech interaction in human-robot collaboration,” in 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). IEEE, 2021, pp. 806–812.

[3] F. A. Robinson, O. Bown, and M. Velonaki, “Implicit communication through distributed sound design: Exploring a new modality in human-robot interaction,” in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 597–599.

[4] S. Li and X. Zhang, “Implicit intention communication in human–robot interaction through visual behavior studies,” IEEE Transactions on Human-Machine Systems, vol. 47, no. 4, pp. 437–448, 2017.

[5] E. Sibirtseva, D. Kontogiorgos, O. Nykvist, H. Karaoguz, I. Leite, J. Gustafson, and D. Kragic, “A comparison of visualisation methods for disambiguating verbal requests in human-robot interaction,” in 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE, 2018, pp. 43–50.

[6] H. Knight and R. Simmons, “Laban head-motions convey robot state: A call for robot body language,” in 2016 IEEE international conference on robotics and automation (ICRA). IEEE, 2016, pp. 2881–2888.

[7] M. Walker, H. Hedayati, J. Lee, and D. Szafir, “Communicating robot motion intent with augmented reality,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 316–324.

[8] K. Chandan, V. Kudalkar, X. Li, and S. Zhang, “Arroch: Augmented reality for robots collaborating with a human,” in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 3787–3793.

[9] M. D. Coovert, T. Lee, I. Shindev, and Y. Sun, “Spatial augmented reality as a method for a mobile robot to communicate intended movement,” Computers in Human Behavior, vol. 34, pp. 241–248, 2014.

[10] C. Reardon, K. Lee, J. G. Rogers, and J. Fink, “Communicating via augmented reality for human-robot teaming in field environments,” in 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2019, pp. 94–101.

[11] R. Newbury, A. Cosgun, T. Crowley-Davis, W. P. Chan, T. Drummond, and E. A. Croft, “Visualizing robot intent for object handovers with augmented reality,” in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2022, pp. 1264–1270.

[12] R. T. Chadalavada, H. Andreasson, M. Schindler, R. Palm, and A. J. Lilienthal, “Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human–robot interaction,” Robotics and Computer-Integrated Manufacturing, vol. 61, p. 101830, 2020.

[13] Z. Han, J. Parrillo, A. Wilkinson, H. A. Yanco, and T. Williams, “Projecting robot navigation paths: Hardware and software for projected ar,” in 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2022, pp. 623–628.

[14] R. T. Chadalavada, H. Andreasson, R. Krug, and A. J. Lilienthal, “That’s on my mind! robot to human intention communication through on-board projection on shared floor space,” in 2015 European Conference on Mobile Robots (ECMR). IEEE, 2015, pp. 1–6.

[15] M. Walter, “Hoverlay ii open hardware interactive midair screen,” https://hackaday.io/project/205-hoverlay-ii, 2014.

[16] M. Walker, T. Phung, T. Chakraborti, T. Williams, and D. Szafir, “Virtual, augmented, and mixed reality for human-robot interaction: A survey and virtual design element taxonomy,” ACM Transactions on Human-Robot Interaction, vol. 12, no. 4, pp. 1–39, 2023.

[17] T. Groechel, Z. Shi, R. Pakkar, and M. J. Matarić, “Using socially expressive mixed reality arms for enhancing low-expressivity robots,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2019, pp. 1–8.

[18] H. Liu, Y. Zhang, W. Si, X. Xie, Y. Zhu, and S.-C. Zhu, “Interactive robot knowledge patching using augmented reality,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1947–1954.

[19] H. Hedayati, M. Walker, and D. Szafir, “Improving collocated robot teleoperation with augmented reality,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 78–86.

[20] V. Remizova, A. Sand, I. S. MacKenzie, O. Špakov, K. Nyyssönen, I. Rakkolainen, A. Kylliäinen, V. Surakka, and Y. Gizatdinova, “Mid-air gestural interaction with a large fogscreen,” Multimodal Technologies and Interaction, vol. 7, no. 7, p. 63, 2023.

[21] M. A. Norasikin, D. Martinez-Plasencia, G. Memoli, and S. Subramanian, “Sonicspray: a technique to reconfigure permeable mid-air displays,” in Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces, 2019, pp. 113–122.

[22] M.-L. Lam, B. Chen, K.-Y. Lam, and Y. Huang, “3d fog display using parallel linear motion platforms,” in 2014 International Conference on Virtual Systems & Multimedia (VSMM). IEEE, 2014, pp. 234–237.

[23] M.-L. Lam, B. Chen, and Y. Huang, “A novel volumetric display using fog emitter matrix,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 4452–4457.

[24] M.-L. Lam, Y. Huang, and B. Chen, “Interactive volumetric fog display,” in SIGGRAPH Asia 2015 Emerging Technologies, 2015, pp. 1–2.

[25] C. Lee, S. DiVerdi, and T. Hollerer, “Depth-fused 3d imagery on an immaterial display,” IEEE transactions on visualization and computer graphics, vol. 15, no. 1, pp. 20–33, 2008.

[26] Y. Tokuda, M. A. Norasikin, S. Subramanian, and D. Martinez Plasencia, “Mistform: Adaptive shape changing fog screens,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 4383–4395.

[27] S. DiVerdi, I. Rakkolainen, T. Höllerer, and A. Olwal, “A novel walk-through 3d display,” in Stereoscopic Displays and Virtual Reality Systems XIII, vol. 6055. SPIE, 2006, pp. 428–437.

[28] K. Otao and T. Koga, “Mistflow: a fog display for visualization of adaptive shape-changing flow,” in SIGGRAPH Asia 2017 Posters, 2017, pp. 1–2.

[29] A. Sand, I. Rakkolainen, P. Isokoski, R. Raisamo, and K. Palovuori, “Light-weight immaterial particle displays with mid-air tactile feedback,” in 2015 IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE). IEEE, 2015, pp. 1–5.

[30] A. Sand, V. Remizova, I. S. MacKenzie, O. Spakov, K. Nieminen, I. Rakkolainen, A. Kylliäinen, V. Surakka, and J. Kuosmanen, “Tactile feedback on mid-air gestural interaction with a large fogscreen,” in Proceedings of the 23rd International Conference on Academic Mindtrek, 2020, pp. 161–164.

[31] Lightwave International, "Fogscreen pro," https://www.lasershows.net/fog-screens/, 2024.

[32] Vosentech, “Microfogger 5 pro,” https://vosentech.com/index.php/product/microfogger-5-pro/, 2024.

[33] E. Vitz, “Fog machines, vapors, and phase diagrams,” Journal of Chemical Education, vol. 85, no. 10, p. 1385, 2008.

[34] Apevia, “Apevia af58s-bk 80mm 4pin molex + 3pin motherboard silent black case fan – connect to power supply or motherboard (5-pk),” https://a.co/d/eJYsTpK, 2024.

[35] RISHTEN, “Motherboard pwm fan hub splitter,” https://a.co/d/4qmYRPm, 2024.

[36] RCA, “Rca 480p lcd home theater projector – up to 130” rpj136, 1.5 lb, white,” https://www.rca.com/us_en/home-theater-331-us-en/projectors/home-theater-projector-480p-4429-us-en, 2024.

[37] Amazon, “Fog machine, smoke machine with wireless & wired remote control for parties halloween wedding and stage effect, 400w,” https://a.co/d/4RfLoyl, 2024.

[38] Sionlan, “Vacuum cleaner hose for bissell cleanview swivel pet crosswave 2252 2489 2486 2254 22543 24899 1831 vacuum hose replace part #203-8049,” https://a.co/d/iIgKimT, 2024.

[39] Arduino, “Arduino nano every,” https://docs.arduino.cc/hardware/nano-every/, 2024.

[40] HONGFA, “Hfd2 subminiature dip relay,” https://www.hongfa.com/product/signal-relay/HFD2, 2024.

[41] ROS Wiki, “rosserial-arduino package summary,” https://wiki.ros.org/rosserial_arduino, 2024.

[42] ——, “Writing a simple service and client (C++),” https://wiki.ros.org/ROS/Tutorials/WritingServiceClient%28c%2B%2B%29, 2024.

[43] ——, “Writing a simple service and client (Python),” https://wiki.ros.org/ROS/Tutorials/WritingServiceClient%28python%29, 2024.

[44] BLUETTI, "Bluetti portable power station eb3a, 268wh lifepo4 battery backup w/ 2 600w (1200w surge) ac outlets, recharge from 0-80%, solar generator for outdoor camping (solar panel optional)," https://www.amazon.com/dp/B09WW3CTF4, 2024.

[45] L. Powell, “Fundamentals of fans,” https://www.airequipmentcompany.com/wp-content/uploads/2018/01/Fundamentals-of-Fans-Air-Equipment-Company.pdf, 2015.

[46] ROS Wiki, “Ros navigation stack,” https://wiki.ros.org/navigation, 2024.

