Create .NET applications for the metaverse with StereoKit

Much of the Windows Mixed Reality platform depends on Unity. However, it's not always the best option, for many reasons, especially its licensing model, which is still very much focused on the gaming market. There are alternatives. You can use WebXR in an embedded browser, or work with the Power Platform's cross-platform tools built around the React Native implementation of Babylon.js. But if you work with .NET code and want to extend it to augmented reality and virtual reality, you still need a set of .NET mixed reality libraries.

OpenXR: an open standard for mixed reality

Fortunately, there's an open, standards-based approach to working with mixed reality and a set of .NET tools to go with it. The Khronos Group is the industry body responsible for graphics standards such as OpenGL and OpenCL, which help code get the most out of GPU hardware. As part of its remit, it maintains the OpenXR standard, which is designed to let you write code once and run it on any augmented reality headset or device. With runtimes from Microsoft, Oculus, and Collabora, among others, OpenXR code should run on most platforms that can host .NET code.

The cross-platform and cross-device nature of OpenXR allows a single codebase to deliver mixed reality to supported platforms, provided you use a language or framework that runs on all of them. As modern .NET now supports most of the places where you're likely to want to host OpenXR applications, the Microsoft-sponsored StereoKit tool is an ideal way to build these applications, pairing its OpenXR support with cross-platform user interface frameworks such as MAUI to host non-OpenXR content. You can find the project on GitHub.

As it is developed by the same team as the Mixed Reality Toolkit, StereoKit is expected to gain support for Microsoft's Mixed Reality design language. This should give both tools a similar feature set, so you can bring what would have been Unity-based applications to the larger C# development framework.

Working with StereoKit

StereoKit is purely designed to take your 3D assets and display them in an interactive mixed reality environment, with a focus on performance and a concise API (the documentation calls it "laconic") to simplify writing code. It's designed for C# developers, although there's additional support for C and C++ if you need to get closer to your hardware. Although originally designed for HoloLens 2 applications and augmented reality, the tool is also suitable for building virtual reality code and delivering augmented reality on mobile devices.

Currently, platform support is focused on 64-bit applications, with StereoKit shipping as a NuGet package. Windows desktop developers currently only have access to x64 code, although you should be able to use the HoloLens Universal Windows Platform (UWP) ARM64 build on other ARM hardware such as the Surface Pro X. The Linux package supports both x64 and ARM64; Android apps will only run on ARM64 devices (although testing should work through the bridge technology used by the Windows Subsystem for Android on Intel hardware). Unfortunately, StereoKit cannot be fully cross-platform at this time, as there is no iOS implementation because there is no official OpenXR release for iOS. Apple is focusing on its own ARKit tool, so as a workaround, the StereoKit team is currently working on a cross-platform WebAssembly implementation that should run anywhere there is a WebAssembly-compatible JavaScript runtime.

Developing with StereoKit shouldn’t be too difficult for anyone who has created .NET UI code. It’s probably best to work with Visual Studio, although there’s no reason why you can’t use another .NET development environment that supports NuGet. Visual Studio users will need to ensure they have enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need an OpenXR runtime to test the code, with the option of using a desktop simulator if you don’t have a headset. One of the benefits of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling in boilerplate code.

Most developers will probably want the .NET Core template, as it works with modern .NET implementations on Windows and Linux and prepares you for the cross-platform model being developed. Cross-platform .NET development is now focused on tools like MAUI and WinUI, so it’s likely that the UWP implementation will become less important over time, especially if the team ships a WebAssembly build.

Build your first C# mixed reality app

Building code in StereoKit is aided by well-defined 3D primitives that make it easy to create objects in mixed reality space. Drawing a cube (the mixed reality version of "Hello, world") can be done in a few lines of code, while another example, a free-space drawing app, comes in at just over 200 lines of C#. The library handles most interactions with OpenXR for you, allowing you to work directly with your environment rather than having to implement low-level drawing functions or write code that has to deal with different cameras and screens.
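
For a sense of scale, here is a minimal sketch of that cube example, based on StereoKit's documented C# API; the app name and cube size are arbitrary, and exact signatures may vary between StereoKit releases:

    using StereoKit;

    // Initialize StereoKit (an OpenXR session, or the desktop simulator depending on settings)
    if (!SK.Initialize(new SKSettings { appName = "CubeDemo" }))
        return;

    // A 10 cm cube using the built-in default material
    Mesh     cube = Mesh.GenerateCube(Vec3.One * 0.1f);
    Material mat  = Material.Default;

    // Redraw the cube half a meter in front of the user on every frame
    SK.Run(() => cube.Draw(mat, Matrix.T(0, 0, -0.5f)));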

There are some key differences between traditional desktop applications and working in StereoKit that you will need to consider when writing code. Perhaps the most important is state management. StereoKit redraws its UI elements every frame, storing as little state as possible between frames. Some aspects of this approach simplify things considerably. All UI elements are hierarchical, so disabling an element automatically disables its child elements.
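
A rough sketch of that immediate-mode style is below; the window title, button label, and counter are illustrative, but the pattern of redeclaring the UI inside the per-frame step callback follows StereoKit's documented UI class:

    using StereoKit;

    // The only persistent state: the window's pose and a click counter
    Pose windowPose = new Pose(0, 0, -0.4f, Quat.LookDir(0, 0, 1));
    int  clicks     = 0;

    SK.Initialize(new SKSettings { appName = "UIDemo" });
    SK.Run(() =>
    {
        // The UI is rebuilt from scratch each frame
        UI.WindowBegin("Demo", ref windowPose);
        if (UI.Button("Click me"))
            clicks++;
        UI.Label($"Clicked {clicks} times");
        UI.WindowEnd();
    });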

This approach allows you to attach UI elements to other objects in your model. StereoKit supports many standard 3D object formats, so all you need to do is load a model from a file and then, before defining its interactions, add a layout area to the model, which acts as a host for UI elements and places the object at the top of a UI hierarchy. It is important not to reuse element IDs within a UI object, as they form the basis of StereoKit's minimal interaction state model and are used to track which elements are currently active and available for user interactions.
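
The pattern might look something like the following sketch, which assumes a hypothetical toolbox.glb asset; UI.HandleBegin and UI.HandleEnd root the model and its UI elements in the same hierarchy, and UI.LayoutArea defines where controls are laid out relative to the model:

    using StereoKit;

    SK.Initialize(new SKSettings { appName = "ModelUIDemo" });

    // Hypothetical asset file; any model format StereoKit supports will do
    Model toolbox     = Model.FromFile("toolbox.glb");
    Pose  toolboxPose = new Pose(0, 0, -0.5f, Quat.Identity);

    SK.Run(() =>
    {
        // The handle makes the model grabbable and roots a UI hierarchy at its pose
        UI.HandleBegin("toolbox", ref toolboxPose, toolbox.Bounds);
        toolbox.Draw(Matrix.Identity);

        // A layout area on the model hosts UI elements that move with it
        UI.LayoutArea(new Vec3(0.1f, 0.1f, 0), new Vec2(0.2f, 0.2f));
        if (UI.Button("Open"))
            Log.Info("Toolbox opened");

        UI.HandleEnd();
    });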

StereoKit takes a "hands-first" approach to mixed reality interactions, using hand-tracking sensors such as HoloLens's tracking cameras where available, or simulating hands for mouse and gamepad controllers. Hands are displayed in the interaction space and can be used to place other UI elements relative to hand positions, for example, creating a control menu that stays close to the user's hand no matter where it is in the application space.
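
As a sketch of that idea, the code below queries the left hand each frame and keeps a small window floating near the palm; the offsets and window contents are invented for illustration:

    using StereoKit;

    SK.Initialize(new SKSettings { appName = "HandMenuDemo" });

    Pose menuPose = Pose.Identity;

    SK.Run(() =>
    {
        // Hand tracking where available; StereoKit simulates a hand for mouse or gamepad input
        Hand hand = Input.Hand(Handed.Left);
        if (hand.IsTracked)
        {
            // Keep the menu floating just in front of the palm, facing the user's head
            menuPose.position    = hand.palm.position + hand.palm.Forward * 0.08f;
            menuPose.orientation = Quat.LookAt(menuPose.position, Input.Head.position);
        }

        UI.WindowBegin("Hand menu", ref menuPose);
        UI.Button("Action");
        UI.WindowEnd();
    });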

If you need inspiration for implementing specific features, a helpful library of demo scenes can be found in the StereoKit GitHub repository. These include sample code for working with controllers and handling hand input, among other essential elements of mixed reality interaction. The code is well documented, giving you plenty of guidance on how to use key elements of the StereoKit APIs.

Removing Microsoft's reliance on Unity for mixed reality is a good thing. Having its own open-source tool ensures that mixed reality is a first-class citizen in the .NET ecosystem, supported across as much of that ecosystem as possible. Targeting OpenXR is also key to StereoKit's success, as it ensures a common level of support across mixed reality devices like HoloLens, virtual reality headsets like Oculus, and augmented reality on Android. You will be able to use the same project to target different devices and integrate familiar tools and technologies such as MAUI. Mixed reality doesn't have to be a separate aspect of your code; StereoKit makes it easy to integrate into existing .NET projects without major changes. After all, it's just another UI layer!

Copyright © 2022 IDG Communications, Inc.
