September 15, 2022
Library of Typical Features: Benefits for XR Project Customers

Our experience in XR project development shows that the applications our clients order often contain similar features, for example, 2D drawing or interaction with objects. In every project we used to develop these features from scratch, even when we had already implemented them before. To optimize this process, we decided to build a dedicated library, and in this post we briefly cover its advantages for our customers. This is the first post in a series: later we will describe the structure of the library and its components in more detail.

In short, the main idea of our library is to implement once the typical features we encounter from project to project. The library will contain an API with the core logic of each feature, plus implementations for specific platforms such as Oculus, Pico, iOS, and Android. It will start with about 20 features and will keep growing. Why is such a library beneficial for our customers?

1. Development speed increases, so the product can reach the market faster. Since the main features are already developed for various platforms, there is no need to build them from scratch for a new project. This saves time and, therefore, our customers' money.

2. The reliability and quality of feature implementations grow, while risks, code problems, and testing costs shrink. All the features we add to the library are constantly used in our projects, so you get reliable functionality that has been refined across many projects and is continuously optimized.

3. The available functionality expands, so you can use every capability a feature offers. Clients need different things from the same feature: 2D drawing, for example, can be used in a VR smart office to draw on a flipchart, or in a 2D Android application for signing documents. With our library, you get the full functionality of a feature and pick the parts you need.

4. Continuous improvement, so you can receive feature updates even after your project is completed. Work on the library does not stop when we finish your project, and we provide our customers with feature updates that bring new possibilities.

To sum up, the main advantages of the library we are developing are development speed, reliability and quality, accessible functionality, and continuous improvement. That's all for today. Follow our future posts; they will contain more details about the internal structure of the library.
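To make the idea more concrete, here is a minimal Unity C# sketch of how a platform-abstracted feature API might be organized. It is an illustration only: the names IDrawing2DFeature and FeatureRegistry are hypothetical, not the actual library code.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: a feature exposes one platform-agnostic API,
// while platform packages (Oculus, Pico, iOS, Android) supply the
// concrete implementations behind it.
public interface IDrawing2DFeature
{
    void BeginStroke(Vector2 texturePoint);
    void ContinueStroke(Vector2 texturePoint);
    void EndStroke();
}

// A tiny service locator through which a platform package registers
// its implementation of a feature, and application code resolves it.
public static class FeatureRegistry
{
    private static readonly Dictionary<Type, object> features =
        new Dictionary<Type, object>();

    public static void Register<T>(T implementation) where T : class
    {
        features[typeof(T)] = implementation;
    }

    public static T Resolve<T>() where T : class
    {
        return features.TryGetValue(typeof(T), out var impl) ? (T)impl : null;
    }
}
```

Application code would then call FeatureRegistry.Resolve<IDrawing2DFeature>() without knowing which platform implementation is running underneath.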

July 21, 2022
Features that add realism to graffiti in VR

We are developing a virtual reality graffiti drawing application for our client. To make the drawing process as realistic and convenient as possible, we have implemented several features, including:

- The ability to shake the can, with realistic sound
- A visible cloud of sprayed paint
- Line thickness that depends on the position of the can
- Realistic filling of the texture with paint
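As a hedged illustration of one of these features, line thickness depending on the can's position, here is a minimal Unity C# sketch (not the production code): the farther the nozzle is from the wall, the wider the spray cone is at the hit point. The field names and constants are assumptions for the example.

```csharp
using UnityEngine;

// Illustrative sketch: derive the brush radius from the distance between
// the spray can's nozzle and the wall, like a real spray cone.
public class SprayWidth : MonoBehaviour
{
    [SerializeField] private Transform nozzle;           // tip of the spray can
    [SerializeField] private float sprayConeAngle = 15f; // cone half-angle, degrees
    [SerializeField] private float maxDistance = 1.5f;   // beyond this, no paint

    // Returns the brush radius (in meters) at the wall hit point,
    // or -1 if the can is not pointing at a paintable surface.
    public float GetBrushRadius()
    {
        if (Physics.Raycast(nozzle.position, nozzle.forward, out RaycastHit hit, maxDistance))
        {
            return hit.distance * Mathf.Tan(sprayConeAngle * Mathf.Deg2Rad);
        }
        return -1f;
    }
}
```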

June 24, 2022
Implementing smooth drawing in Unity for a VR app with lots of calculations

While working on a virtual reality application for painting graffiti on walls, our team ran into difficulties implementing smooth painting with spray paint.

The drawing process itself is calculated mathematically using linear interpolation. We cast a ray from the can of paint at the wall texture we are painting on, calculate the affected pixels, and transfer the paint to the wall. The main difficulty occurred when the can was moved quickly: the paint did not fill all the intermediate points, only separate ones, so voids appeared in the color fill. The drawing process lost its realism and smoothness and became uncomfortable. The reason was our use of Unity's built-in math, which could not cope with such a large number of calculations.

To solve this problem, we decided to use third-party libraries:

- Hybrid Renderer provides systems and components for rendering ECS entities. It is not a render pipeline: it is a system that collects the data necessary for rendering ECS entities and sends this data to Unity's existing rendering architecture.
- Burst Compiler translates IL/.NET bytecode into highly optimized native code using LLVM. It is released as a Unity package and integrated into Unity via the Unity Package Manager.
- Unity C# Job System lets you write simple and safe multithreaded code that interacts with the Unity Engine for enhanced game performance.

In the video below, you can see the difference in performance before and after using the third-party libraries. After integrating them, the application's performance improved by 90-100%.
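To make the gap-filling idea concrete, here is a minimal sketch, assuming the wall texture is exposed as a Color32 pixel buffer; it is not our production code. The job stamps paint at every interpolated texel between the previous and the current ray hit, and the [BurstCompile] attribute lets Burst translate the loop into optimized native code.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;
using UnityEngine;

// Sketch of the approach: fill the gap left by a fast stroke by stamping
// paint along the interpolated segment between two consecutive ray hits.
[BurstCompile]
public struct PaintStrokeJob : IJob
{
    public int textureWidth;
    public int textureHeight;
    public float2 fromUv;  // wall-texture UV hit on the previous frame
    public float2 toUv;    // wall-texture UV hit on the current frame
    public Color32 paint;
    public NativeArray<Color32> pixels; // the wall texture's pixel buffer

    public void Execute()
    {
        float2 fromPx = fromUv * new float2(textureWidth, textureHeight);
        float2 toPx = toUv * new float2(textureWidth, textureHeight);

        // One stamp per texel of stroke length, so fast strokes leave no voids.
        int steps = (int)math.ceil(math.distance(fromPx, toPx)) + 1;
        for (int i = 0; i < steps; i++)
        {
            float2 p = math.lerp(fromPx, toPx, i / (float)math.max(1, steps - 1));
            int x = math.clamp((int)p.x, 0, textureWidth - 1);
            int y = math.clamp((int)p.y, 0, textureHeight - 1);
            pixels[y * textureWidth + x] = paint;
        }
    }
}
```

In practice the buffer can come from Texture2D.GetRawTextureData<Color32>(), the job is scheduled through the C# Job System, and texture.Apply() is called once it completes.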

June 1, 2022
Sound processing for NFT project

We are working on an interesting NFT project related to 3D avatars, video, and sound recording. We decided to briefly tell you about the solutions we use, and to share how the project works on the inside.

About the NFT project

The essence of the project is that you buy unique 3D avatars in the application. For them, you can record video with sound using the front camera of your smartphone. The main feature is that the avatar pronounces the recorded sound, repeating the facial expressions and emotions of the speaker. There are currently more than 2000 avatars in the application, which for now is only available for iPhone. In the future, the purchased NFT avatar can grow in popularity and recognition, after which it can be monetized: used for advertising, starring in cartoons and animated videos, and so on.

About the complexities

While working on the project, it became necessary to apply various effects to the recorded sound. This enhances the authenticity of the avatar, makes videos more impactful, and diversifies what users can do in the application. The main requirement was to use the kinds of effects found in professional audio editors. We started looking for a suitable solution and considered various options; some did not fit for objective reasons, and others were not compatible with the mobile platform. In the end, we settled on one well-known tool that met all our requirements.

Tools for working with sound and video

Since the project needs to record videos with sound and process both at the same time, we chose FFmpeg, a complete, cross-platform solution to record, convert, and stream audio and video. We had previously used this library for video processing (trimming, inserting background music) and knew about its sound-processing capabilities, but had not used the advanced features. And those capabilities are wide: it is arguably the most complete library for audio and video processing, covering compression, timbre, frequency, playback speed, echo, and over 100 different filter combinations.

For future clients

If you have ideas that you would like to implement, please contact us. We will conduct the research, choose the best options, and build an effective and simple solution.
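As a small, hedged illustration of driving FFmpeg from application code, here is a sketch that shells out to the ffmpeg executable and applies its standard aecho audio filter to a recorded clip. The file paths and filter parameters are placeholders, not the project's actual settings.

```csharp
using System.Diagnostics;

// Sketch: re-encode a recorded clip, applying an echo to the audio track.
// "aecho=0.8:0.9:1000:0.3" is a standard FFmpeg filter with the parameters
// in_gain : out_gain : delay_ms : decay; the video stream is copied as-is.
public static class AudioEffects
{
    public static void ApplyEcho(string inputPath, string outputPath)
    {
        var ffmpeg = new Process();
        ffmpeg.StartInfo.FileName = "ffmpeg";
        ffmpeg.StartInfo.Arguments =
            $"-i \"{inputPath}\" -af \"aecho=0.8:0.9:1000:0.3\" -c:v copy \"{outputPath}\"";
        ffmpeg.StartInfo.UseShellExecute = false;
        ffmpeg.Start();
        ffmpeg.WaitForExit();
    }
}
```

Swapping the filter string is enough to get other effects, for example atempo for changing playback speed.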

April 7, 2022
Switching to the Oculus Interaction SDK on a major education project

Within one of our big VR projects, we migrated from the old version of item interaction to the new Oculus Interaction SDK.

A multiplayer virtual reality project in the field of education

Qualium Systems has long been developing its own multiplayer virtual reality project in the field of Chemistry education: virtual classrooms.

The features of the project

The VR Chemistry Lessons App offers a large number of interactions with objects: taking and holding them, moving them, resizing them, and so on. For this type of application, the convenience and simplicity of object interaction and manipulation play a key role, since they directly shape how the student feels while using the application. Check out a small video on the work of our VR Chemistry Lessons App.

Before the Oculus Interaction SDK was released, the older version of the SDK did not allow effective object manipulation. For example, in the old version it was impossible to take an object with both hands and change its size. That was a problem.

Development of a custom solution for object manipulation

To get around these limitations, we developed our own custom solution for manipulating objects in the application. Besides its advantages, it also had downsides: it complicated the code, implied extra work whenever the Oculus SDK was updated on the project, and so on.

Benefits and adoption of the Oculus Interaction SDK

The new Oculus Interaction SDK allowed us to implement almost all the object interactions we needed. It is now possible to abandon the custom solution, although the migration requires considerable integration time.

What is the Oculus Interaction SDK

Interaction SDK Experimental is a library component that allows developers to use standardized interactions for controllers and hands. The Interaction SDK also includes tooling to help developers build their own hand poses. We filmed a short video while testing the Oculus Interaction SDK.

The experience we have gained

Our development team has the flexibility to change approaches: we develop custom solutions when needed, and adopt ready-made libraries and frameworks when that speeds up development without reducing quality.
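For illustration, here is a generic Unity C# sketch of the two-handed resize technique itself; it is not Oculus SDK code, just the math behind the interaction: while both hands hold an object, its scale follows the ratio of the current to the initial distance between the hands.

```csharp
using UnityEngine;

// Generic two-handed scaling: grab with both hands, then pull them
// apart or bring them together to resize the object.
public class TwoHandScale : MonoBehaviour
{
    [SerializeField] private Transform leftHand;
    [SerializeField] private Transform rightHand;

    private float initialHandDistance;
    private Vector3 initialScale;

    // Call once when the second hand grabs the object.
    public void OnBothHandsGrabbed()
    {
        initialHandDistance = Vector3.Distance(leftHand.position, rightHand.position);
        initialScale = transform.localScale;
    }

    // Call every frame while both hands keep holding the object.
    public void OnBothHandsHeld()
    {
        float current = Vector3.Distance(leftHand.position, rightHand.position);
        float factor = current / Mathf.Max(initialHandDistance, 0.001f);
        transform.localScale = initialScale * factor;
    }
}
```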

February 17, 2022
Proof of Concept for software. Risk reduction in the development of AR / VR / MR applications

The development of AR / VR / MR applications is an innovative and fast-moving area. Active development brings many new ideas and concepts, technological solutions, and equipment. Under these conditions, it is not always clear how a given feature can be implemented, or whether it is feasible at all.

What is the purpose of Proof of Concept

To reduce the risk of our clients losing money and time, we use a Proof of Concept (PoC). It helps clarify the technical feasibility and the characteristics of the result before proceeding to the implementation of the whole product.

Why Proof of Concept is important

A PoC is for you if you:
- have non-trivial tasks whose feasibility is uncertain;
- have a limited budget and want to test the idea first.

Besides reducing risk, a PoC can also serve as a pilot project: a way to evaluate the knowledge and skills of the company's developers, the level of communication, and the quality of the development process. This is especially useful if your idea is feasible, has good prospects, and implies a long development stage ahead.

Proof of Concept vs MVP

Comparing a PoC and an MVP: a PoC is bare functionality whose implementation saves money at the initial stage, since it does not require spending on UX / UI or on features that are unnecessary at this point. It is not released to the public but is tested by a limited number of users. An MVP is a complete, finished product, only with limited features. It is released to a wide audience and requires more cost and time than a PoC.

If you have an idea, please contact us: we will advise you on all issues and share our experience and expertise in AR / VR / MR development.

February 8, 2022
Implementation of the Oculus Interaction SDK

Interaction SDK Experimental is a library component that allows developers to use standardized interactions for controllers and hands. Interaction SDK also includes tooling to help developers build their own hand poses. Technology stack: Unity SDK, Oculus SDK.

February 2, 2022
3D virtual models in Augmented Reality for retail

The main purpose of today's retail business is to involve customers in interacting with the products and the platform so that they feel engaged. Of the variety of tools that make this possible, the most interesting are 3D models in Augmented Reality. 3D virtual models can show clothes in a 3D environment, or serve as virtual assistants for consultations and for increasing sales. Technology: AR Foundation, Unity SDK.
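As a minimal illustration of the core AR Foundation interaction behind such experiences, here is a sketch in which a screen tap places a 3D product model on a detected plane. The productPrefab reference is a placeholder for an actual product model.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: tap the screen to place a product model on a detected surface.
public class ProductPlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager;
    [SerializeField] private GameObject productPrefab;

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    private void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Cast against detected planes and drop the model at the hit pose.
        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            Pose pose = hits[0].pose;
            Instantiate(productPrefab, pose.position, pose.rotation);
        }
    }
}
```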

December 9, 2021
Extended Reality R&D department: several examples of experiments and research

The field of augmented and virtual reality development is growing by leaps and bounds: Facebook announced its plans to build a metaverse, major market players release new AR / VR / MR devices, and new technologies and tools appear regularly. These factors represent both opportunities and challenges: it is very easy to fall behind new directions by concentrating only on the familiar and convenient.

Research & Development in VR/AR

To stay at the peak of new solutions, we at Qualium Systems decided about a year ago to create an R&D department for virtual and augmented reality. In this department we master and test new features, tools, and capabilities, and we adopt everything that shows promise for solving the business problems of our clients.

Passthrough API

For example, we were among the first to test a then-new feature of the Oculus Quest 2, the Passthrough API. This technology enables mixed reality experiences that combine the real and virtual worlds into one. Immediately after the update became available, we started testing the API. It is an exciting promise by Oculus with a lot of potential, and we are already brainstorming ideas for using the Passthrough API to solve our clients' business issues. Technology stack: Unity SDK, Oculus SDK 31-34, Passthrough API. Our experiment ran on V31, when the technology was still in experimental mode; since the V34 version of the Oculus operating system, the Passthrough API is no longer experimental and no longer needs to be activated by the user via the console. You can watch our experiment in the video.

Comparison of the Oculus Quest 2 with Passthrough API and HoloLens 2

Continuing our experiments with the Passthrough API, we decided to test a hypothesis: can the Oculus Quest 2 with the Passthrough API take the place of the HoloLens 2? After the MRTK library was adapted to the Oculus Quest 2 and version 30 of the operating system brought the pass-through feature, we checked whether the Oculus Quest 2 could serve as an alternative to the expensive HoloLens 2. To validate the idea, we made two applications with the same features, one on the Oculus Quest 2 and one on the HoloLens 2. You can see the result in the article: http://ad88-46-150-10-153.eu.ngrok.io/blog/ar-vr/can-the-oculus-2-with-passthrough-api-take-the-place-of-hololens-2-checking-the-hypothesis.

A standalone application with computer vision based on Barracuda and a TF model

As part of our experiments, we try to highlight the industries that can benefit most from XR technologies, especially during the pandemic and its aftermath. We understood that during the COVID-19 pandemic many people lost the opportunity to train in the gym with a trainer, and the lack of coaching support can lead to improperly performed exercises, which in turn can lead to injury. Thanks to modern technology, it has become possible to train correctly: we are creating a standalone application with computer vision based on Barracuda and a TensorFlow model. You watch a recording of an exercise with a trainer and repeat it in real time, while the computer vision model compares your movements and displays how well you are performing the exercise (a simplified sketch of the comparison step is shown below). You can see the result in the video.
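To give a flavor of that comparison step, here is a simplified Unity C# sketch, not the production pipeline: both skeletons are reduced to normalized 2D keypoints (as produced by a pose-estimation model) and scored by average joint distance. The normalization and the scoring scale are illustrative assumptions.

```csharp
using UnityEngine;

// Sketch: compare two poses given as matching arrays of 2D keypoints
// (e.g. the 17 PoseNet joints), returning 1 for a perfect match.
public static class PoseComparer
{
    public static float Score(Vector2[] user, Vector2[] trainer)
    {
        Vector2[] u = Normalize(user);
        Vector2[] t = Normalize(trainer);

        float totalError = 0f;
        for (int i = 0; i < u.Length; i++)
            totalError += Vector2.Distance(u[i], t[i]);

        // Average joint error mapped to a 0..1 quality score.
        return Mathf.Clamp01(1f - totalError / u.Length);
    }

    // Center on the centroid and scale to unit size so that position in the
    // frame and distance from the camera do not affect the comparison.
    private static Vector2[] Normalize(Vector2[] points)
    {
        Vector2 centroid = Vector2.zero;
        foreach (var p in points) centroid += p;
        centroid /= points.Length;

        float maxDist = 0.0001f;
        foreach (var p in points)
            maxDist = Mathf.Max(maxDist, (p - centroid).magnitude);

        var result = new Vector2[points.Length];
        for (int i = 0; i < points.Length; i++)
            result[i] = (points[i] - centroid) / maxDist;
        return result;
    }
}
```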
TensorFlow Lite body recognition

As part of testing hypotheses that can be applied to sports and fitness, we compared TensorFlow Lite body recognition based on the PoseNet model with Unity Barracuda, using tfjs-to-tf and tf2onnx.

Face tracking on PC with Unity SDK and UltraFaceBarracuda

One industry that can benefit from XR adoption is retail. In this direction, we tested face tracking on PC with the Unity SDK and UltraFaceBarracuda: depending on the position of the viewer's face, the picture the person is looking at moves, which creates the effect of a hologram (a minimal sketch of this parallax idea closes the post). It is an interesting solution for marketing purposes, with the ability to track people's actions and test various marketing activities. Technology stack: Unity SDK, UltraFaceBarracuda. See the result in the video.

This is a small part of our experiments. We continue to research, hypothesize, and help our clients.
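And, as promised, a minimal sketch of the parallax idea behind the hologram effect: the displayed picture is shifted against the viewer's head movement. The face position is assumed to arrive from the face tracker (UltraFaceBarracuda in our experiment) in normalized screen coordinates, and parallaxStrength is an illustrative constant.

```csharp
using UnityEngine;

// Sketch: offset the displayed picture opposite to the viewer's face
// position to fake depth on a flat screen.
public class HologramParallax : MonoBehaviour
{
    [SerializeField] private float parallaxStrength = 0.05f;

    private Vector3 basePosition;

    private void Start()
    {
        basePosition = transform.localPosition;
    }

    // faceCenter01: detected face center in normalized screen space (0..1).
    public void OnFaceTracked(Vector2 faceCenter01)
    {
        Vector2 offset = (faceCenter01 - new Vector2(0.5f, 0.5f)) * parallaxStrength;
        transform.localPosition = basePosition - new Vector3(offset.x, offset.y, 0f);
    }
}
```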