A cross-platform immersive experience: VR Trash Gallery

An immersive cross-platform experience that combines Virtual Reality and Augmented Reality, 2021.

Software: Unity (including the AR Foundation package), Blender, Tilt Brush, Xcode


What

VR Trash Gallery is an immersive cross-platform experience that combines Virtual Reality and Augmented Reality. In this experience, the player is a human agent who does not live on planet Earth due to trash pollution. Their mission is to learn why humans evacuated Earth years ago, learn about types of waste, explore Earth, pick up trash, and finally curate a virtual gallery with the trash they picked up, in order to educate the rest of the humans, who are no longer Earth's inhabitants.

Why

This experience is meant to entertain the user whilst also sensitizing them to the issues of trash pollution and climate change.

Process

Coming up with the narrative of the experience

Ideation & Conceptualizing

The player finds themselves inside the spaceship hangar. They must teleport close to the neon orange capsule in order to activate the informational video where they will be briefed for their mission.

(Narrator speaking over the headset speakers.) The player is briefed on the mission and is ready to proceed to explore Earth. Their mission is to explore Earth, pick up any trash they find interesting, and then curate an art gallery with the trash they picked up. The goal is to educate, through the exhibited artifacts, the rest of the humans who have never visited Earth on climate change and trash pollution.

In order to explore and pick trash, the player must take their headset off and switch to the AR application.

In the AR application, the player uses an image of a net to create a virtual hand, with which they pick up any trash they find interesting around them. The player can stop picking trash whenever they want; they may even choose not to pick up anything at all. After all, there are other agents like them on the same mission.

When the player decides they are done with trash picking, they go back to the VR experience. Now the player is in a VR empty trash gallery. The trash they picked in AR is lying on the floor. The player must choose where each artifact will be placed. The experience ends when the player is ready to exhibit their gallery. 

In summary, the experience can be broken down into three parts.

Part 1: VR - Hangar/Space Landing Scene

Part 2: AR - Simulation of Trash Picking with virtual trash

Part 3: VR - Gallery Curation

Moodboard

Colour Palette


Character & Asset Design

Character Sketches

Waste: Battered Cardboard Box

Waste: Toilet Roll

Waste: Books

Producing the soundtrack

  • Composed with a ‘spacey’ yet hopeful feeling for the listener

  • Goal: to entice the player to learn more about trash

  • The final track revises the initial track with more ‘sci-fi’ elements

Technical R&D

For the Augmented Reality part of the experience, I needed image tracking and plane detection. For that I used AR Foundation, a Unity package, and built some initial prototypes to make sure the concept would work.
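The prototype logic can be sketched roughly as follows. This is a minimal illustration assuming a 2021-era AR Foundation (4.x) API; the prefab and field names (`netHandPrefab`, `trashPrefabs`, `TrashPickingPrototype`) are placeholders invented for this sketch, not the project's actual code.

```csharp
// Sketch: image tracking spawns the virtual "net hand"; plane detection
// scatters virtual trash on horizontal surfaces. Assumes AR Foundation 4.x.
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class TrashPickingPrototype : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager; // tracks the printed net image
    [SerializeField] ARPlaneManager planeManager;        // detects surfaces around the player
    [SerializeField] GameObject netHandPrefab;           // virtual hand anchored to the net image
    [SerializeField] GameObject[] trashPrefabs;          // virtual trash items

    GameObject netHand;

    void OnEnable()
    {
        imageManager.trackedImagesChanged += OnImagesChanged;
        planeManager.planesChanged += OnPlanesChanged;
    }

    void OnDisable()
    {
        imageManager.trackedImagesChanged -= OnImagesChanged;
        planeManager.planesChanged -= OnPlanesChanged;
    }

    void OnImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        // Spawn the hand when the net image is first recognised, parented to the
        // tracked image so it follows the player's arm movements.
        foreach (var image in args.added)
            netHand = Instantiate(netHandPrefab, image.transform);

        // Hide the hand whenever tracking is lost (e.g. poor lighting).
        foreach (var image in args.updated)
            if (netHand != null)
                netHand.SetActive(image.trackingState == TrackingState.Tracking);
    }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // Drop a random trash item on each newly detected plane.
        foreach (var plane in args.added)
        {
            var prefab = trashPrefabs[Random.Range(0, trashPrefabs.Length)];
            Instantiate(prefab, plane.center, Quaternion.identity, plane.transform);
        }
    }
}
```

Hiding the hand when `trackingState` drops out of `Tracking` also surfaces the lighting-dependent tracking issues discussed in the user studies below.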

User Testing

Round 1

Research Question: How might we successfully design the interior architecture of the gallery to emphasize the importance of each item equally?

Conducted A/B/C testing with three gallery designs.

In the first round of user testing, we focused on the third section of our experience: the curation and exhibition of the VR gallery. We wanted to make sure that the gallery template we provided to the players was an adequate setup for the player's needs and wants as an art curator. This means the gallery space we design must give the curator/player enough freedom to place artifacts however they believe is best, but not so much that they get “lost in curation” and end up dissatisfied. In other words, we wanted to decide on the one gallery setup closest to the sweet spot between no constraints and too many constraints in curation. For that, we came up with three different gallery designs.

We used the A/B/C testing method together with a questionnaire.

In order to make the three galleries, we researched existing galleries, both real-life spaces and galleries built in VR, as well as the history of ‘immersive spaces’ across different eras.

We also looked at existing VR galleries, e.g. Virtual Gallery of Arts & Zofia Weiss Gallery by Grzegorz Lipski, Digital Art Exhibition: Looking Back by I.C.Contemporary, and Pop Art Gallery by GIGOIA Studios.

Conclusions of Round 1
While all three galleries were user-friendly, Gallery B proved the easiest to understand. Common feedback on Gallery A was that navigation was difficult and at times confusing; common feedback on Gallery C was that it had too many levels, which could disorient users unaccustomed to VR and made it too time-consuming. Based on the questionnaire, we chose Gallery B for the final gallery curation, taking into account the comments made during the test.


Round 2

Research Question: How might we assess the interruption of the user’s focus caused by the switch from VR to AR and back to VR?

In the second user study we wanted to understand whether the switch from VR to AR and then back to VR is too disorienting or defocusing for the user. More specifically, our experience comprises three parts. Initially the user is in VR, where they are introduced to their mission. They are then asked to switch to the Augmented Reality application in order to pick up trash from Earth. The application is designed to simulate real-life trash picking: the user is given an image of a net, and a virtual hand appears on the screen. As the player maneuvers the net with their arm, the virtual hand follows, while the smartphone’s camera spawns virtual trash on any horizontal surface around the player. The player then uses the virtual hand to pick up trash by moving it towards the item they prefer. The player decides when to go back to VR and curate the art gallery based on the trash they chose. As soon as they are satisfied with the amount of trash they have picked, they switch to VR and start curating the gallery.

Having to switch to AR mid-way through the experience can cause the player to lose their focus and to interrupt the flow of the narrative. That is why we wanted to test this aspect of the experience in the second user study.   

Conclusions of Round 2

As expected, participants shared some common misunderstandings while taking part in the study. One was navigating inside the hangar and realising that the neon capsule requires some type of interaction from the player. Moreover, some participants wanted to explore the terrain outside the hangar. In the AR part of the study, some participants shifted between two tactics for trash picking: moving the phone and moving the target image. In the third part, the gallery curation, there were no further misunderstandings (Gallery B was presented to the participants with the improvements applied). Finally, we faced a technological challenge in the AR part to do with image tracking: it does not work smoothly in the application, and when combined with poor indoor lighting the smartphone camera had a hard time identifying the image, so the virtual hand could not be generated.

Based on the final questionnaire, the AR application seems easy to understand, although not all participants were able to pick their choice of trash in the AR part due to the aforementioned technological glitches. In general, participants were neutral about the interruption of their focus and of the narrative caused by the switch from VR to AR (and vice versa). However, participants commented that they would rather have the entire experience in VR, a claim that negates the whole concept of a cross-platform experience.

For more information on how we planned and conducted the user studies: https://katrinevrblog.myblog.arts.ac.uk/2021/06/10/how-we-planned-and-conducted-the-studies/


Takeaway

Based on the questionnaire feedback from the second iteration, participants would prefer to have the experience in VR as a whole, which negates the purpose of a cross-platform experience. Of course, this experience could not be staged in its proper context due to Covid restrictions.

A good example of contextualising our project would be an outdoor digital art festival. In this scenario, the VR aspect could be the main attraction of the stand, and the AR part could be a pastime for participants waiting in the queue, played on their own phones. The outdoor setting is fundamental, since virtual trash-picking outdoors is a more realistic simulation of trash-picking. Furthermore, under natural lighting, image tracking and object detection would work better than they did indoors (participant 3, who was unable to generate the net in multiple phases of the AR part and for whom trash was not generated properly, noted: “The app inevitably relies heavily on the quality of the camera and the light source, which, of course, is not always perfect.”).

Finally, a further addition for the digital art festival context would be to make the experience multiplayer. In this case, one player could do the VR part and the other the AR part. The VR player would not anticipate trash picking themselves; instead they would rely on the other player to do their duty in AR. This way, participants would not be bothered by having to take the headset off and put it back on (participant 2: “takes some time to adjust in real life when taking the headset off.”).