Virtual Reality Entertainment – A New Look in Storytelling

Introduction

As a storytelling tool, virtual reality is the next step in video content development after 3D and IMAX, with the primary purpose of immersing the audience in a story. Media giants and small studios alike already use this technology to produce 3D VR movies.

How VR Movies Immerse a Viewer in the Story

Virtual reality is a digital space that offers unlimited possibilities for creating any setting; in general, everything depends only on the author’s imagination. The main purpose of a VR story, though, is to fully immerse the viewer in the reality created by the author.

“Now VR can do that today because we have a vastly improved sense of presence in the technology,” said Julie Krohner during a TED Talk. “Presence is a mind-body trick that happens where our brains count a virtual experience as if it was a real one.”

What Virtual Reality Films Actually Look Like

There are several key differences between VR movies and standard films made for cinema and TV:

  • Special filming equipment is required: a 360° camera, dedicated editing software, and a VR headset for reviewing the filmed material;
  • Scriptwriters always consider the number of people and set pieces that will appear in the 360° panorama. “When we really got into the “meat” of writing the script, it was already really clear that it was going to be in VR,” said Gaelle Mourre, writer and director of the 360-degree movie Mechanical Souls. “So, we’ve got to consider, you know, the space, and make sure that there are multiple things happening at the time without crowding the main action. So, we’ve worked that into the script.”;
  • Filming radius matters: objects located outside a 1.5-meter radius will look blurred in the virtual reality movie;
  • In a 360° VR movie, every actor can end up in the spotlight. Melanie Forchetti, a journalist for the American online outlet Backstage, recalled her visit to the set of the VR film “Career Opportunities in Organized Crime.” “This is a great opportunity for actors because no matter if you’re in the starring role or cast as a background performer, you could be the center of attention. Someone in the audience is potentially looking at you. You have to assume that everything you do — raise an eyebrow, interact with a set piece, etc. — will be seen by someone at any given moment (or when the film is viewed subsequent times)”, wrote Forchetti.
  • Directors should account for the fact that viewers can look in any direction in VR. That’s one of the main differences from a standard movie, where a director can emphasize the story’s atmosphere and characters using specific camera angles. “I think, in the storytelling realm, something is opening up. And that you give the person who’s viewing the experience, a lot of control. So, as a director, you also have to let go of certain control. And that’s very hard, but it’s really interesting”, said Jorge Tereso, the director of the VR animated film Gloomy Eyes.

Where VR Storytelling is Applied the Most

There are two main directions in virtual reality storytelling:

  • producing original VR films;
  • producing interactive virtual reality entertainment that complements existing popular movie and TV franchises and gives fans an additional experience.

There are many notable examples of VR use in both fields that demonstrate the possibilities of virtual reality in the movie industry.

Virtual Reality Movies

Oculus Story Studio became one of the very first studios to actively produce VR films. Its virtual reality animated movie “Henry”, released in 2015, tells the story of a hedgehog who struggles to make friends.

3dar and Atlas V produced Gloomy Eyes, a 30-minute animated movie released in 2020. It is set in a world where the sun, tired of seeing people, refuses to rise; in the darkness, zombies come out of their graves, among them a boy named Gloomy, who encounters a mortal girl, Nena. Hollywood actor Colin Farrell narrates the story.

The 13-minute VR movie Mechanical Souls tells the story of a Chinese wedding at which the bride’s wealthy father has hired androids as bridesmaids. When his daughter tries to modify one of the androids, it suddenly goes out of control. This virtual reality movie was shot in 360°, featuring real-life actors.

VR Interactive Content for Famous Franchises

The biggest streaming giants have begun actively using virtual reality to showcase their content. For example, Disney designed an app, Disney Movies VR, in which a headset user can visit locations from the studio’s most popular movies, including Star Wars and the Marvel Cinematic Universe. This interactive experience complements already existing films, and with the help of Disney Movies VR, a viewer can become their favorite movie character.

Netflix, in turn, released the Stranger Things 360° Virtual Reality Experience on YouTube as part of the show’s promo campaign. It immerses fans in one of the first season’s main locations: a viewer gets into a room of the Byers house, decorated with Christmas lights, and approaches a ringing white telephone. When the phone is picked up, we hear the voice of the missing boy, Will Byers: “Hello? Can you hear me? You have to listen! You’re here! Turn around!”

Conclusion

Innovative virtual reality technology is a new, largely unexplored platform that artists can use to tell their stories. For now, original VR movies aren’t very popular, but some media giants, e.g., Disney and Netflix, produce additional VR content for fans who want to be part of a favorite story.

Latest Articles

February 29, 2024
Everything you’d like to know about visionOS development

If you’re venturing into the realm of developing applications for Apple Vision Pro, it’s crucial to equip yourself with the right knowledge. In this article, we unravel the key aspects you need to know about the visionOS operating system, the secrets of programming for Apple Vision Pro, and the essential tools required for app development.

visionOS: The Heart of Apple Vision Pro

The foundation of the Vision Pro headset is the sophisticated visionOS operating system. Tailored for spatial computing, visionOS seamlessly merges the digital and physical worlds to create captivating experiences. Drawing from Apple’s established operating systems, visionOS introduces a real-time subsystem dedicated to interactive visuals on Vision Pro. This three-dimensional interface liberates apps from conventional display constraints, responding dynamically to natural light.

At launch, visionOS supports a variety of apps, including native Unity apps, Adobe’s Lightroom, Microsoft Office, medical software, and engineering apps. These applications take advantage of the unique features offered by visionOS to deliver immersive and engaging user experiences.

Programming Secrets for Apple Vision Pro

Programming for Apple Vision Pro involves understanding the concept of spatial computing and the shared space where apps coexist. In this floating virtual environment, users can open windows, each appearing as a plane in the virtual space. These windows support both traditional 2D views and integrated 3D content. Here are some programming “secrets” for Apple Vision Pro:

  • All apps exist in 3D space, even if they are basic 2D apps ported from iOS;
  • Consider the field of view and opt for a landscape layout for user-friendly experiences;
  • Prioritize user comfort and posture by placing content at an optimal distance;
  • Older UIKit apps can be recompiled for visionOS, gaining some 3D presence features;
  • Be mindful of users’ physical surroundings to ensure a seamless and comfortable experience.

Tools for Apple Vision Pro Development

To start developing applications for Vision Pro, you’ll need a Mac running a sufficiently recent version of macOS, the latest release of Xcode, and the Vision Pro developer kit. The development process entails downloading the visionOS SDK and employing familiar tools such as SwiftUI, RealityKit, ARKit, Unity, Reality Composer Pro, and Xcode, which are also used for building applications on Apple’s other operating systems.

While it’s feasible to adapt your existing apps for Vision Pro using the visionOS SDK, be prepared for some code adjustments to accommodate platform differences. Most macOS and iOS apps integrate with Vision Pro seamlessly, preserving their appearance while presenting content within the user’s surroundings as a distinct window.

Now, let’s delve into the essentials for assembling your own Apple Vision Pro development kit:

  • SwiftUI: ideal for creating immersive experiences by overlaying 3D models onto the real world;
  • Xcode: Apple’s integrated development environment, vital for app development and testing;
  • RealityKit: Apple’s framework for creating lifelike, interactive 3D content, central to Vision Pro development;
  • ARKit: Apple’s augmented reality framework for overlaying digital content onto the real world;
  • Unity: a powerful tool for visually stunning games and Vision Pro app development. Unity is actively developing its SDK to interface with Apple Vision Pro.

What’s the catch? Few people know that to develop on Unity, you need not just any Mac, but a Mac with an “M” processor on board! Here are a few more words about supported versions:

  • Unity 2022 LTS (2022.3.19f1 or newer): Apple Silicon version only;
  • Xcode 15.2: note that beta versions of Xcode are a no-go;
  • visionOS 1.0.3 (21N333) SDK: beta versions are not supported;
  • Unity editor: an Apple Silicon Mac and the Apple Silicon macOS build are in; the Intel version is out.

Pay attention to these restrictions during your development journey!

Apple Vision Pro SDK: Empowering Developers

The visionOS Software Development Kit (SDK) is now available, empowering developers to create groundbreaking app experiences for Vision Pro. With tools like Reality Composer Pro, developers can preview and prepare 3D models, animations, and sounds for stunning visuals on Vision Pro. The SDK ensures built-in support for accessibility features, making spatial computing and visionOS apps inclusive and accessible to all users.

As Apple continues to lead the way in spatial computing, developers hold the key to unlocking the full potential of the Vision Pro headset. By understanding the intricacies of visionOS, its programming secrets, the essential development tools, and the application process for the developer kit, you can position yourself at the forefront of this revolutionary technological landscape.
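To make the window-plus-3D-content idea above concrete, here is a minimal sketch of a visionOS app written with SwiftUI and RealityKit. It follows the standard visionOS app template; the names HelloVisionApp and ContentView are our own, and the sketch is illustrative rather than production code.

```swift
import SwiftUI
import RealityKit

// A window-based visionOS app: the window appears as a plane
// in the user's shared space, as described above.
@main
struct HelloVisionApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        VStack(spacing: 24) {
            Text("Hello, spatial computing")
                .font(.largeTitle)

            // RealityView lets a 2D window host live 3D content.
            RealityView { content in
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.1),
                    materials: [SimpleMaterial(color: .cyan, isMetallic: true)]
                )
                content.add(sphere)
            }
            .frame(height: 240)
        }
        .padding()
    }
}
```

An iOS app recompiled for visionOS gets this kind of windowed presence automatically; RealityView is what you reach for when you want genuine 3D content inside the window.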

February 23, 2024
Beyond the Hype: The Pragmatic Integration of Sora and ElevenLabs in Gaming

Enthusiasts have introduced a remarkable feature that combines Sora’s video-generating capabilities with ElevenLabs’ neural network for sound generation. The result? A mesmerizing fusion of professional 3D locations and lifelike sounds that promises to usher in an era of unparalleled creativity for game developers.

How It Works

In the context of game development, the workflow looks like this:

  • Capture video with Sora: creators start by generating video content using Sora, a platform known for its advanced video-generation capabilities;
  • Luma transformation: the captured video is then passed through Luma’s neural network, which transforms the ordinary footage into a spectacular 3D location with professional finesse;
  • Unity integration: the transformed scene is imported into Unity, a widely used game development engine whose versatility allows the 3D locations to be woven into an immersive visual experience that goes beyond the boundaries of traditional content creation.

Voilà! The result is nothing short of extraordinary: a unique 3D location ready to captivate audiences and elevate the standards of digital content.

A Harmonious Blend of Sights and Sounds

But the innovation doesn’t stop there. Thanks to ElevenLabs and its state-of-the-art neural network for sound generation, users can now pair the visually stunning 3D locations with sounds that are virtually indistinguishable from reality. By simply describing the desired sound, users let the neural network create a bespoke audio experience (a hedged sketch of what such a request might look like appears at the end of this article). This synergy between Sora’s visual prowess and ElevenLabs’ sonic wizardry opens up a realm of possibilities for creators, allowing them to craft content that not only looks stunning but also sounds authentic and immersive.

OpenAI’s Sora & ElevenLabs: How Will They Impact Game Development?

The emergence of tools like OpenAI’s Sora and ElevenLabs sparks discussions about their potential impact on the industry. Amidst the ongoing buzz about AI revolutionizing various fields, game developers find themselves at the forefront of this technological wave. However, the reality may not be as revolutionary as some might suggest.

Concerns Amidst Excitement: Unraveling the Real Impact of AI Tools in Game Development

Today’s AI discussions often echo the same sentiments: fears of job displacement and the idea that traditional roles within game development might become obsolete. Yet for those entrenched in the day-to-day grind of creating games, the introduction of new tools is seen through a more pragmatic lens. For game developers, the process is straightforward: a new tool is introduced, tested, evaluated, and eventually integrated into the standard development pipeline. AI, including platforms like Sora and ElevenLabs, is perceived as just another tool in the toolkit, akin to game engines, version control systems, or video editing software.

Navigating the Practical Integration of AI in Game Development

In practical terms, the impact on game development seems to be more about efficiency and expanded possibilities than a complete overhaul of the industry. Developers anticipate that AI will become part of the routine, allowing for more ambitious and intricate game designs. This shift could potentially lead to larger and more complex game projects, giving creators the time and resources to delve into the more intricate aspects of game development.

However, there’s a sense of weariness among developers regarding the constant discussion and hype surrounding AI. The sentiment is clear: rather than endlessly debating the potential far-reaching impacts of AI, developers prefer practical engagement, that is, testing, learning, integrating, and sharing insights on how these tools can be effectively utilized in the real world.

OpenAI, for all its superlatives, acknowledges the model isn’t perfect. It writes: “[Sora] may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right, and may struggle with precise descriptions of events that take place over time, like following a specific camera trajectory.”

So, AI can’t fully create games, and its impact might be limited. While it could serve as a useful tool for quickly visualizing ideas and conveying them to a team, the core aspects of game development still require human ingenuity and creativity. In essence, the introduction of AI tools like Sora and ElevenLabs is seen as a natural progression: a means to enhance efficiency and open doors to new creative possibilities. Rather than a radical transformation, game developers anticipate incorporating AI seamlessly into their workflow, ultimately leading to more expansive and captivating gaming experiences.
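For the sound-generation step, here is a hedged sketch of what a text-to-sound request might look like, written in Swift for consistency with the other examples in this feed. The endpoint path, the header name, and the JSON fields are assumptions modeled on ElevenLabs’ REST style, not confirmed API details; check the current ElevenLabs documentation before relying on them.

```swift
import Foundation

// Hypothetical sketch of a text-to-sound request. The URL path, the
// "xi-api-key" header, and the JSON body are assumptions -- verify
// them against the official ElevenLabs API documentation.
func generateSound(describing text: String, apiKey: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.elevenlabs.io/v1/sound-generation")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue(apiKey, forHTTPHeaderField: "xi-api-key")
    request.httpBody = try JSONSerialization.data(
        withJSONObject: ["text": text] // e.g. "rain on a tin roof, distant thunder"
    )
    let (data, response) = try await URLSession.shared.data(for: request)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return data // raw audio bytes, ready to be imported into the game engine
}
```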

January 30, 2024
Touching Art: How Haptic Gloves Empower the Blind to “See” the World of Art

In the realm of art, visual experiences have long been the primary medium of expression, creating a challenge for those with visual impairments. However, a groundbreaking fusion of haptic technology and VR/AR is reshaping the narrative. Explore the innovative synergy between haptic technology and VR/AR and how this collaboration is not only allowing the blind to “see” art but also to feel it in ways previously unimaginable.

Artful Touch – Haptic Technology’s Role in Art Appreciation

Haptic technology introduces a tactile dimension to art appreciation by translating visual elements into touch sensations. Equipped with sensors and precision actuators, haptic gloves enable users to feel the textures, contours, and shapes of artworks. This groundbreaking technology facilitates a profound understanding of art through touch, providing a bridge to the visual arts that was once thought impossible for the blind to cross.

VR/AR technologies extend this tactile experience into virtual realms, guiding users through art galleries with spatial precision. Virtual environments created by VR/AR technologies enable users to explore and “touch” artworks as if they were physically present. The combination of haptic feedback and immersive VR/AR experiences not only provides a new means of navigating art spaces but also fosters a sense of independence, making art accessible to all.

Prague Gallery Unveils a Tactile Virtual Reality Experience

Prague’s National Gallery has taken a revolutionary step towards inclusivity in art with its groundbreaking exhibition, “Touching Masterpieces.” Developed with the support of the Leontinka Foundation, a charity dedicated to children with visual impairments, this exhibit redefines the boundaries of art appreciation. Visitors to the exhibition, especially those who are blind or visually impaired, are invited to embark on a sensory journey through iconic sculptural masterpieces. Among them are the enigmatic bust of Nefertiti, the timeless Venus de Milo, and Michelangelo’s immortal David.

What sets this exhibition apart is the integration of cutting-edge technology: haptic gloves. These gloves, dubbed “avatar VR gloves,” have been meticulously customized for the project. Using multi-frequency technology, they create a virtual experience in which a user’s hand can touch a 3D object in a virtual world and receive tactile feedback in the form of vibrations. The key innovation lies in the gloves’ ability to stimulate the tactile responses of different types of skin receptors, ensuring that users, particularly the blind, receive the most accurate perception of the 3D virtual objects on display (a toy sketch of this mapping appears at the end of this article). As visitors explore the exhibit, they can virtually “touch” and feel the intricate details of these masterpieces, transcending the limitations of traditional art appreciation.

Future Possibilities and Evolving Technologies

As technology advances, the future holds even more possibilities for inclusive art experiences. The ongoing collaboration between haptic technology and VR/AR promises further refinements and enhancements. Future iterations may introduce features such as simulating colors through haptic feedback or incorporating multisensory elements, providing an even more immersive and enriching experience for blind art enthusiasts. The collaboration between haptic technology and VR/AR is ushering in a new era of art perception, where touch and virtual exploration converge to create a truly inclusive artistic experience.

By enabling the blind to “see” and feel art, these technologies break down barriers, redefine traditional boundaries, and illuminate the world of creativity for everyone, regardless of visual abilities. In this marriage of innovation and accessibility, art becomes a shared experience that transcends limitations and empowers individuals to explore the beauty of the visual arts in ways never thought possible.
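To picture the multi-frequency idea, here is a toy model, again in Swift, that maps a virtual contact event to vibration parameters. Everything in it, the types, the constants, and the 40-300 Hz band, is an illustrative assumption of ours, not the control code of the exhibition’s gloves.

```swift
// Toy model of multi-frequency haptic feedback. All types and constants
// are illustrative assumptions, not the real glove firmware.
struct ContactEvent {
    var pressure: Double   // 0...1: how firmly the fingertip presses
    var roughness: Double  // 0...1: surface texture at the contact point
}

struct VibrationCommand {
    var frequencyHz: Double // different skin receptors respond to different bands
    var amplitude: Double   // 0...1: perceived intensity
}

func vibration(for contact: ContactEvent) -> VibrationCommand {
    // Rougher surfaces map to higher frequencies, targeting receptors that
    // are sensitive to fine texture; pressure drives the intensity.
    let frequency = 40.0 + contact.roughness * 260.0 // an assumed 40-300 Hz band
    let amplitude = min(1.0, 0.2 + 0.8 * contact.pressure)
    return VibrationCommand(frequencyHz: frequency, amplitude: amplitude)
}

// Example: a light touch on the rough marble of a sculpture.
let command = vibration(for: ContactEvent(pressure: 0.3, roughness: 0.7))
```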


