Extended reality (XR) blurs the boundary between what is real and what is virtual. It combines the leading technologies in human-machine interaction, delivered through computer graphics and wearable devices.
The latest development from Umajin allows for real-time avatars driven by the human individuals participating inside the VR experience. Each person's body movement is tracked and displayed as a fully animated character. Real-time tracking of multiple individuals enables training scenarios in which trainees interact with each other naturally inside the virtual world.
These features are being added to our enterprise XR platform to make it easy to create compelling VR and AR experiences.
The platform supports a broad range of media: 360 photos, 3D models, 3D animation, 2D animation, audio, images, text and video. Supported 3D formats include DAE, STL, PLY, OBJ, FBX and glTF.
Umajin allows an animated character's motion to be modified at runtime to increase interaction for the user. Examples include a character looking in a specific direction, pointing at a target, or performing tasks such as placing hands or feet in a specific location. This is important because it makes the scene unique and reactive to the user during training.
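To illustrate the kind of calculation behind a runtime "look in a specific direction" behaviour, here is a minimal sketch that computes the yaw and pitch needed to aim a character's head at a target point. The function name and coordinate convention (y up) are illustrative assumptions, not Umajin's actual API.

```python
import math

def look_at_angles(head_pos, target_pos):
    """Compute yaw and pitch (degrees) to aim a character's head at a target.

    Coordinates are (x, y, z) with y up; yaw rotates about the vertical
    axis, pitch tilts the head up or down. Illustrative only.
    """
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))            # left/right turn
    horizontal = math.hypot(dx, dz)                   # distance in the ground plane
    pitch = math.degrees(math.atan2(dy, horizontal))  # up/down tilt
    return yaw, pitch

# A head at eye height looking at a level target directly to its side:
yaw, pitch = look_at_angles((0.0, 1.7, 0.0), (2.0, 1.7, 0.0))
# -> yaw is 90 degrees, pitch is 0 degrees
```

An engine would feed these angles into the head joint each frame, typically clamped and smoothed so the motion stays natural.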
Umajin events are raised by triggers such as look-at, walk, touch, voice and more. These events can launch actions such as changing the scene, playing a sound or playing an animation.
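The trigger-to-action pattern can be sketched as a small event bus that maps trigger names to registered actions. All names here (`EventBus`, the trigger and action strings) are hypothetical and stand in for whatever the platform exposes.

```python
# Minimal sketch of a trigger -> action dispatcher. Names are illustrative;
# Umajin's actual event API may differ.
class EventBus:
    def __init__(self):
        self._handlers = {}

    def on(self, trigger, action):
        """Register an action to run when a trigger fires."""
        self._handlers.setdefault(trigger, []).append(action)

    def raise_event(self, trigger, **context):
        """Fire a trigger, running every registered action with the context."""
        return [action(**context) for action in self._handlers.get(trigger, [])]

bus = EventBus()
# A look-at trigger launches an animation; a touch trigger plays a sound.
bus.on("look_at", lambda target, **_: f"play_animation:wave:{target}")
bus.on("touch", lambda target, **_: f"play_sound:click:{target}")

fired = bus.raise_event("look_at", target="guide_npc")
# -> ["play_animation:wave:guide_npc"]
```

Keeping triggers and actions decoupled like this is what lets authors mix and match them without writing new glue code for each scene.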
Umajin has a natural language processing layer on top of voice-recognition services such as Azure or AWS. This layer powers the voice triggers and allows for casual phrasing, synonyms, technical terms and multiple languages.
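The idea of tolerating casual phrasing and synonyms can be sketched as a tiny intent matcher that sits on top of a speech-to-text transcript. The synonym table, intents and command names below are invented for illustration, not drawn from Umajin's NLP layer.

```python
# Hypothetical synonym table: casual or alternative words map to a
# canonical keyword before intent matching.
SYNONYMS = {
    "begin": "start", "commence": "start",
    "halt": "stop", "end": "stop",
    "door": "hatch",
}

# Hypothetical intents: a set of required keywords -> a canonical command.
INTENTS = {
    ("start", "pump"): "cmd_start_pump",
    ("stop", "pump"): "cmd_stop_pump",
    ("open", "hatch"): "cmd_open_hatch",
}

def match_intent(transcript):
    """Normalise a transcript via synonyms, then find a matching intent."""
    words = [SYNONYMS.get(w, w) for w in transcript.lower().split()]
    for keywords, command in INTENTS.items():
        if all(k in words for k in keywords):
            return command
    return None

result = match_intent("Please commence the pump")
# "commence" normalises to "start", so this matches cmd_start_pump
```

A production layer would add per-language synonym tables and fuzzier matching, but the shape is the same: normalise first, then match against canonical intents.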
Many users and props can be tracked in real time within the motion capture space. This places everyone and everything in the same virtual space as the physical space they occupy. Participants can therefore pass props between each other, collaborate, and understand spatial directions such as pointing and looking.
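Aligning the physical capture volume with the virtual world amounts to registering one coordinate frame against the other. The sketch below assumes a simple registration model (one translation plus a rotation about the vertical axis); the function name and parameters are illustrative.

```python
import math

def mocap_to_world(point, origin, yaw_deg):
    """Map a tracked point from the capture volume into world space.

    Assumes the capture volume is registered to the virtual world by a
    translation (origin) and a rotation about the vertical y axis (yaw).
    Illustrative registration model, not Umajin's actual pipeline.
    """
    x, y, z = point
    yaw = math.radians(yaw_deg)
    # Rotate about the vertical axis, then translate into world space.
    wx = x * math.cos(yaw) + z * math.sin(yaw) + origin[0]
    wy = y + origin[1]
    wz = -x * math.sin(yaw) + z * math.cos(yaw) + origin[2]
    return (wx, wy, wz)

# Two trackers one metre apart in the capture volume stay one metre
# apart in world space, so a handed-over prop lines up for both users.
a = mocap_to_world((0.0, 1.0, 0.0), (10.0, 0.0, 5.0), 90.0)
b = mocap_to_world((1.0, 1.0, 0.0), (10.0, 0.0, 5.0), 90.0)
dist = math.dist(a, b)
# -> 1.0
```

Because the transform is rigid, distances and directions are preserved, which is exactly what lets participants pass props and point at things naturally.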