5 Women You Need to Know in the World of XR

We know that the majority of IT employees are men, and women still form a minority. According to Zippia, women make up only 34.4% of the workforce at US tech companies like Amazon, Apple, Facebook, Google, and Microsoft. This is largely due to gender stereotypes that frame IT as a male-dominated industry. However, these stereotypes are gradually fading, and the situation is changing for the better.

Despite this persistent sexism, women today successfully influence the IT industry, including extended reality and other immersive technologies. To illustrate this influence, we'll introduce five famous women who are shaping the XR industry.

Empowering Women in XR

In the 19th century, the famous mathematician Ada Lovelace worked on Charles Babbage's mechanical computer and wrote the first algorithm intended for the machine. And in the 20th century, Austrian-born Hollywood actress Hedy Lamarr, together with composer George Antheil, invented a frequency-hopping radio guidance system for Allied torpedoes during World War II. It became a precursor of modern wireless technologies like Bluetooth, GPS, and Wi-Fi.

Get ready to meet the outstanding women who have become the Ada Lovelaces and Hedy Lamarrs of modern XR.


Image: SXSW

Nonny de la Pena, Godmother of VR

Nonny de la Pena has been dubbed "the Godmother of Virtual Reality" by top online media outlets like Forbes, The Guardian, and Engadget. She's a journalist, a director of VR documentaries, and the founder and CEO of Emblematic Group, a company that develops VR/AR/MR content.

De la Pena's greatest achievement is that she invented immersive journalism. She showcased her first VR documentary, Hunger in Los Angeles, at the Sundance Film Festival back in 2012. You can read more about de la Pena's most famous works in our previous article about VR in journalism.

In March 2022, de la Pena was one of 16 recipients of the Legacy Peabody Awards for her work and influence on modern journalism. In her acceptance speech, she highlighted the importance of immersive technologies and the advantages they offer modern journalism, using After Solitary, her joint project with Frontline, as an example. The VR experience is based on the true story of Kenny Moore, who spent many years in a solitary confinement cell in the Maine State Prison.

“When we did a piece in solitary confinement with Frontline, we did scanning of an actual solitary confinement cell. Well, now you’re in that cell. You’re in that room. And it has a real different effect on your entire body and your sense of, ‘Oh my God. Now I understand why solitary confinement is so cruel and unnecessary.’ And you just can’t get that feeling reading about it or looking at pictures.”

De la Pena’s social media accounts: 


Image: LinkedIn

Dr. Helen Papagiannis, Experienced AR Expert

Dr. Helen Papagiannis has worked in the augmented reality field for 17 years. She is the founder of XR Goes Pop, which has produced immersive content for many top brands, including Louis Vuitton, Adobe, Mugler, and Amazon. Notably, the company designed a virtual showroom for Balmain, where you can see clothes and accessories from a cruise collection on digital models, along with behind-the-scenes videos.

Virtual try-ons and shops are successfully used by fashion brands because they let customers try on digital clothes before buying real ones. You can read more about it here.

Dr. Papagiannis regularly gives TED Talks and publishes her research in well-respected media like Harvard Business Review, The Mandarine, and Fast Company.

In 2017, the scientist and developer published a book called Augmented Human. According to Book Authority, it is the best book about augmented reality ever released. Stefan Sagmeister, designer and co-founder of Sagmeister & Walsh Inc., considers Augmented Human the most useful and complete augmented reality guide, full of up-to-date information about the technology and practical methods that can be applied in real work. 

Dr. Papagiannis’s social media accounts:


Image: Medium

Christina Heller, Trailblazer of Extended Reality

Christina Heller has 15 years of experience in XR. The Huffington Post included her among the top 5 most influential women changing VR.

Heller is the founder and CEO of Metastage, which develops XR content for various purposes: VR games, AR advertisements, MR astronaut training, and more. Since 2018, Metastage has collaborated with more than 200 companies, including H&M, Coca-Cola, AT&T, and NASA, and has worked with famous pop artists like Ava Max and Charli XCX. 

As for Heller herself, before Metastage she worked at VR Playhouse, whose immersive content was showcased at the Cannes Film Festival, Sundance, and South by Southwest. 

Under Christina Heller's leadership, Metastage's extended reality content has been widely acclaimed and has received many awards and nominations, including two Emmy nominations. Moreover, Metastage was the first US company to officially use Microsoft's Mixed Reality Capture technology. It produces photorealistic digital models using an array of special cameras that capture a person's movement in a dedicated studio, and the resulting capture is then placed into XR content.

“It takes human performances, and what I like about it most is that it captures the essence of that performance in all of its sort of fluid glory, including clothing as well,” said Heller. “And so every sort of crease in every fold of what people are wearing comes across. You get these human performances that retain their souls. There is no uncanny valley with volumetric capture.” 

Christina Heller’s research has been published in the “Handbook of Research on the Global Impacts and Roles of Immersive Media” and in “What Is Augmented Reality? Everything You Wanted to Know Featuring Exclusive Interviews with Leaders of the AR Industry” (both 2019). 

Heller’s social media accounts: 


Image: Twitter

Kavya Pearlman, Cyber Guardian 

Kavya Pearlman is called “The Cyber Guardian”: she is a pioneer in protecting private data in immersive technologies like the metaverse. For three years in a row, from 2018 to 2020, and again in 2022, Pearlman was included in the top 20 cybersafety influencers.

Pearlman is the founder and CEO of XR Safety Initiative, a non-profit organization that develops privacy frameworks and cybersecurity standards for XR. 

Pearlman worked as head of security for Second Life, the oldest virtual world. Essentially, she was the first person to consider ethical rules, data security, and psychological implications in the game, and she researched how bullying in VR can affect a person's mental state. 

During the 2016 US presidential election, Pearlman worked with Facebook as an advisor on third-party security risks posed by companies and private users.

Kavya Pearlman is a regular member of the Global Coalition for Digital Safety and, representing XR Safety Initiative, takes part in the World Economic Forum's Metaverse Initiative. 

Pearlman’s social media accounts: 


Image: BCC

Cathy Hackl, Godmother of Metaverse

In the world of immersive technologies, Cathy Hackl is known as “the godmother of the metaverse”. Hackl is a futurist and Web 3.0 strategist who collaborates with numerous leading companies on metaverse development, virtual fashion, and NFTs. For the last two years, Big Thinker has included Cathy Hackl in its top 10 most influential women in tech. 

Cathy Hackl is also a co-founder and the head of the metaverse department at Journey. The company works with big names such as Walmart, Procter & Gamble, HBO Max, and PepsiCo. Among its latest projects are the Roblox experiences Walmart Land and Walmart's Universe of Play, where players take on challenges, collect virtual merchandise, and interact with the environment. 

Moreover, the futurist and metaverse specialist contributes analysis to top media like 60 Minutes+, WSJ, WIRED, and Forbes. 

Hackl has also written four books about business in the metaverse and the development of the technology. The latest, Into the Metaverse: The Essential Guide to the Business Opportunities of the Web3 Era, was published in January this year and holds a perfect 5-star rating on Amazon. The book explains the metaverse concept in clear, accessible detail and is a quick read. 

Hackl’s social media accounts:


Qualium Systems values inclusion and the contributions women make to XR, the metaverse, and other immersive technologies every day. Our co-founder and CEO, Olga Kryvchenko, has been working in the IT field for 17 years.


“It’s important for women to work in tech industries, and particularly in immersive tech, because it helps break down barriers and empowers women to pursue careers in fields that may have traditionally been male-dominated,” said Kryvchenko. “When women have more representation in tech, it creates a more welcoming and inclusive environment for future generations of women in the industry. Additionally, having a diverse workforce leads to better decision-making, as different perspectives and experiences are taken into account, ultimately resulting in better products and services for everyone.”
