Tilt World: Smoke and Mirrors


Concept
Tilt World rehearsals (Jan 2020 – May 2020) are geared toward building a set improvisational score or group of scores to be used once more dancers are brought into the fold.

Methodology
Today’s rehearsal began with the accumulation improvisation from last rehearsal, digging deeper into different brushes, colors, and textures with different tempos and repetitions of movement. By taking detailed notes and ending each rehearsal with clear written documentation, including open questions, I have a concrete place to begin asking and answering new questions. Some technical questions are best researched outside of designated rehearsal time.

Inside Virtual Reality
Looking at my hand while it is painting and remembering the movement inside my body at the same time is a very different mental process than doing this movement without Tilt Brush. It feels similar to patting your head while rubbing your tummy, except you are aiming to look at a trail of paint while feeling your body, so you can remember what the whole body does after the trace is created. The paint trace ends up being an “aboutness” of the movement instead of a specific archive. Since paint comes from only one end of the brush, I have to make specific decisions about where the brush sits on my body: painting distally from the hand as an extension of the spine, or attaching it to the hip, foot, knee, or elsewhere.

The smoke brush continues to create dispersed movement after you paint with it. I used this brush today and ended by slowing down and looking at what I had created. Turning in a circle, I was able to see the smoke move and the intricate pathways my earlier chaotic movement had created.

Later in the rehearsal I also used the Mirror function, which creates a division in the infinite blackness of the virtual space; the other side reflects what you are painting in real time. I found a very curious effect when I watched my reflection painting, because it is not exactly what I am painting, but the reverse.

Questions and Moving Forward
Does it matter if I reproduce the movement “correctly” after the first iteration of improvisation? The difference and trail of remembering and correcting is interesting. I think the answer is no? It is about the effort and physical thinking.

I remembered that on the Oculus Quest you can record live what you are doing, as we did in the walkthrough of our VR Poltergeist room. Next rehearsal I’m going to try recording live while I’m painting.

I also need to research the audio and controller settings on the Vive. The controllers allow a lot of modification. Can I configure them so I can paint with two hands/two controllers?

I also want to incorporate resizing and moving whole traced improvisations. I believe these actions require two hands, which will make the palette hand more active even if I don’t achieve two paintbrushes.

Tilt World: Solo Rehearsal


Documentation

I set up my phone to record a few improvisations and tests and took pictures of my first solo setup in ACCAD’s Motion Lab. Setup took about an hour and breakdown 15 minutes; I imagine that as I get more familiar with the room I will be able to set up a little quicker. In the future I hope to save the visual score in Tilt Brush as well.

Developed Improvisation Studies

Part 1
I created a detailed improvisation study incorporating repetition and color change in order to show an archive of the movement I’ve created.

Part 2
Once a sequence of movement is solidified mentally and physically, I am able to retrograde the movement. Immediately retrograding was very confusing: it is hard to tell where the line begins and ends, and to reconstruct which movements created the line rather than simply tracing it back with your hand. Since the movements themselves are not recorded, only the hand’s path, it becomes important to decide what about the movement in that moment I want to record. Do I put my hand on my pelvis to record pelvis movement, and perhaps weight? How do I record a fast shift in weight? Will changing brushes help with that? For example, if the Neon brush repeats continuously after being drawn toward the ground, will that show a strong weight toward the ground, such as falling?

Retrograding improvised movement in the virtual environment is something that I will have to work up to and will require more rehearsal.

Paying attention to my focus is very important: it must stay on the drawing hand or on other groups of paint at all times, or pathways are not projected and witnessed.

Brushes that keep moving after being painted are very exciting and include:

  • Neon traces itself, creating a representation of the tracing I might have done earlier.
  • Electricity wiggles like lightning.
  • Fire has a subtle texture that moves along the line.
  • Stars don’t show the drawn line but move along its trajectory.
  • Snow is similar to Stars.


New Questions:
Can I record from inside the Vive or Oculus in real time, so I can watch the progression and creation of the world after it has been completed instead of just a snapshot of what was done? I’ve seen other artists record their progression on YouTube; how is this done?

Is it possible to move the headset around without your head in it? Does the headset track whether your head is in it or not? This is interesting for possible audience participation. I tried this below with questionable results.

Creating an Immersive Virtual Environment

Concept
Co-creator Laura Rodriguez and I aimed to create a room for a virtual Spielberg Museum resembling Carol Anne’s room in the movie Poltergeist. The design included interactive elements such as lights that change with proximity, a TV that turns on and plays a clip from the movie when you come close, objects moving in a cyclone, picking up and releasing objects, and a window that shatters on touch.

Methodology
Our five-week plan divided and conquered the various interactive and animated elements. Laura dove into iterations of glass-shattering and animation techniques for our window in both Maya and Unity, while I sourced scripts to move the objects in a cyclone in Unity and created and textured the walls in Maya. We both sourced 3D models for the static room objects (bed, wall decals, shelves, door, exit sign, window curtains, tree, and a little bunny) through TurboSquid and other free 3D model sites, and sourced models the same way to modify in Maya and Unity, such as the lamp, records, horse, and television. When you get close to the television, the lights turn off and the television plays a clip from Poltergeist in which the objects circle in Carol Anne’s bedroom; this clip gives context if the user has never seen the movie. With guidance, we created a script to activate the lights when you got close to the TV, which also activated the TV and the circling of the objects. We created hands that could grab the objects and ran many tests to determine how large the collision boxes on the objects needed to be for them to be grabbable as they passed the user. The speed of the cyclone also needed alteration (and perhaps still does) so the user can grab the objects. To simulate a museum experience, once an object is grabbed the user hears descriptive information about the movie or disturbing facts about the actors who played the characters in Poltergeist.
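In Unity, this kind of chained proximity interaction is usually expressed as a single trigger script. The following is a minimal hypothetical sketch, not our actual code; the class name, field names, and the "Player" tag are all assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch: when the player enters a trigger volume around the
// TV, the room lights switch off, the TV clip plays, and the script that
// circles the objects is enabled. All names here are assumptions.
public class TvProximityTrigger : MonoBehaviour
{
    public Light[] roomLights;          // lights to switch off on approach
    public VideoPlayer tvPlayer;        // the Poltergeist clip on the TV
    public MonoBehaviour cycloneScript; // assumed script that circles objects

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        foreach (var light in roomLights)
            light.enabled = false;      // lights off

        tvPlayer.Play();                // TV turns on and plays the clip
        cycloneScript.enabled = true;   // objects begin to circle
    }
}
```

Chaining everything from one trigger like this is also where the interactions start to affect each other, which matches the debugging difficulties described below.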

Discoveries and Looking Ahead
The biggest challenges we faced during this process were the learning curve of Maya and writing the scripts for the interactions. Some intricacies of Maya made it difficult to transfer objects from Maya to Unity, and it was easy to miss one little click that would be the determining factor in whether something worked. Similarly, creating the scripts required a problem-solving brain, and although I have some understanding of coding, we needed more in-depth help to get some of these interactions working. The most difficult part was when we wanted multiple things to happen at the same time, like proximity to the TV switching off the lights and starting the objects circling (which in turn enables grabbing the objects). The more if/thens we added, the more complicated things became and the more one interaction began to affect the next. I think this factored into the shattering of the window and the video clip playing on the TV. At one point the window was shattering into pieces on the floor, but as you see in this video, the window is now one big piece of glass, which I think is a result of some of the code mentioned above. Similarly, the clip on the TV is distorted in this final video when it was working before this script was implemented.

Even with all these challenges, our previous experience with choreography, movement, and creating performative spaces made it second nature for us to imagine what things could do in the virtual space. Turning lights off, triggering sound, and the flickering of a TV combine to create an immersive experience in the virtual world. Just as in live performance, the audience is able to suspend their disbelief further when you nuance light, sound, and interaction. The magic is in the details. Having more in-depth knowledge of how these worlds are created gives me insight into the possibilities for performance and into how live and virtual performances can share the same spaces.

Advisor: Shadrick Addy ACCAD

Digging Augmented Reality

Concept
My co-creator, Laura Rodriguez/LROD, and I aimed to create an Augmented Reality application to serve as an educational supplement to Brenda Dixon Gottschild’s Digging the Africanist Presence in American Performance: Dance and Other Contexts.

Methodology
We chose ten pages of the book to augment, each of us taking responsibility for five, and discussed which approaches would most effectively visualize the textual content. We also looked at different ways to activate our selected pages, resulting in a blueprint containing: a motion-capture example of polyrhythm; several relief video examples, with sound, of the pieces discussed in the book, including the ability to stop and restart each video with a button; one relief picture of a definition, with audio to clarify the pronunciation of the word “ephebism”; and one relief of a black-and-white picture that changes to color.
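In a Vuforia/Unity project, a page activation like the video reliefs above is typically a small handler hooked to the image target's found/lost events. This is a hypothetical sketch under that assumption, not our project's code; the class and method names are invented for illustration:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch of one augmented page: when the image target on the
// printed page is recognized, a video overlay plays over it; a UI button
// lets the reader stop and restart the clip. Names are assumptions; in
// Vuforia these methods would be wired to the target's found/lost events.
public class PageVideoOverlay : MonoBehaviour
{
    public VideoPlayer overlayVideo;   // relief video anchored to the page

    // Wire to the image target's "found" event.
    public void OnTargetFound()  { overlayVideo.Play(); }

    // Wire to the image target's "lost" event.
    public void OnTargetLost()   { overlayVideo.Pause(); }

    // Wire to the stop/restart button described in the blueprint.
    public void ToggleVideo()
    {
        if (overlayVideo.isPlaying) overlayVideo.Pause();
        else overlayVideo.Play();
    }
}
```

One handler per augmented page keeps each activation independent, which suits a blueprint where different pages use different media (motion capture, video, audio, image swap).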

Discoveries and Looking Ahead
As we collected assets and artifacts for the project (i.e., video of Pearl Primus, video of Earl “Snake Hips” Tucker, a photo of Dayton Contemporary Dance Company, etc.), we realized this project would run into copyright issues. This was also brought up in our first user test and demonstration during our Grad Day showing in OSU’s Dance Department. However, using materials without permission to create a proof of concept was successful. We have also discussed converting the app for cross-platform use, as it is currently Android-only.

In addition, Vuforia recognizes images, not text (unless the text is graphically designed), so using Gottschild’s “Digging” posed some problems: the content we wanted to highlight appeared on text-only pages, and all the photos for the whole book were located in a center section. For the pages that didn’t have imagery, we used found photos and created inserts for those parts of the book. Although effective, it was most satisfying when photographs printed on the book’s own pages came to life. If designing an educational tool for augmented reality again, I would prefer to work with the writers while the book is being designed, keeping augmented reality in the design process to create a more cohesive visual and educational experience.

Advisor/Instructor: Shadrick Addy ACCAD

Embodiment and VR

“VR is perfect for things you couldn’t do in the real world, but not for things you wouldn’t do in the real world. Flying to the moon like Superman is okay. Participating in virtual mass murder—especially if it is designed to be realistic—is not. This is a big deal. Everything we know about training in VR points to the realization that the medium is incredibly influential on future attitudes and behavior. We don’t want to train future terrorists on VR, or desensitize people to violent acts.”

Excerpt from Jeremy Bailenson, Experience on Demand (iBooks).

I assume Bailenson’s statement about virtual mass murder is specifically geared toward a game that would portray you as a serial killer, but I wonder how that squares with military, war, and first-person shooter games. Bailenson also describes, in the same book, three things to consider when creating VR: Does it need to be in VR? Don’t make people sick. Be safe.

Superhot (published by Superhot Team) is a first-person “shooter” available on many platforms, but I played it for the first (and only) time on the Oculus Quest. I put “shooter” in quotes because you can pick up weapons other than a gun to kill your assailants. You are loaded into a white room with a white box to your left, a weapon on it. The first task is to figure out how to grab a weapon and shoot or stab the bright red bodies walking toward you. These bodies explode into shards of red when you make contact; as they drop their weapons you can grab them, and other weapons appear around the board as well.


The game requires you to duck, stab, shoot, dodge, hide, and find your way through a maze of white and red. I found my anxiety level rising as more and more red bodies came my way; they seemed to get so close, so quickly, that I had no time to respond. I found the world very disorienting and at one point lunged out of the way into my parents’ television. My 16-year-old cousin, however, had no problem. Response time has to be very quick in this game. I found it difficult to grab the weapons and often died trying to get one before I was killed by an oncoming red body.

I did not play this game long (the physical room was in jeopardy), but I can imagine how it would appeal to people who like first-person shooters. I am not one of those people; however, the striking, clean artwork of red on white was stunning. The precision needed to pick up a weapon and hit your assailant required a skill I did not have, but perhaps if I kept playing, this game might heighten my reflexes. I wonder if I would have the same reaction on a non-VR platform? A little more distance between myself and the game might make all the difference.

Where is Superhot on the mass-murder spectrum? Even though it is not designed to be realistic, there are elements of realism that make you react as if you are being attacked. How realistic does the experience need to be to be considered mass murder? How does the anxiety and frantic speed provoked in order to maim a body differ in the virtual world? Is it easier to stab or shoot this body because of its faceless, shard-like appearance that explodes when attacked? Is this body’s absent frame enough to remove the empathy the player would feel when completing these violent actions?

When a trigger is pulled or a stabbing motion practiced, it is imprinted on the body. The impulse, instinct, and musculature are trained into the player’s muscle memory. These repeated actions stay housed in the player’s body during the game and after. This is why VR is used for flight simulators and by quarterbacks as they study game plays.

Where is the line drawn and who determines whether VR content falls under the realm of “training future terrorists” or “desensitizing people to violent acts?”