Using the Nvidia Omniverse Platform to Create a Virtual University

Goal

Development of a virtual twin of the university using game technologies and the NVIDIA Omniverse platform

Supervisor: Alimzhanov E.S.

Source of financing: intra-university competition of scientific grants

Partners: NVIDIA

Years of implementation: 2021

Funding volume: 1 million tenge

In the course of the project, a virtual model of the University was created. Why a virtual model? A model built this way can serve as the basis for a digital, or virtual, twin of the university. A digital twin is a digital copy of a physical object or process that helps optimize the efficiency of its management. The project was implemented in several stages:

  • Based on the floor-plan drawings of the University, all rooms were modeled in Autodesk 3ds Max. After modeling, the finished three-dimensional model was exported to the Unreal Engine game engine using the official Datasmith Importer plugin. The main advantage of this export method is that all dimensions, the coordinate system, and even the applied materials are transferred to Unreal Engine without distortion or changes;
  • nine first- and second-year students of the “Media Technologies” specialty were involved in modeling the interior objects;
  • the NVIDIA Omniverse platform was used to combine all the modeled objects. A high-performance workstation was purchased so that students could collaborate on the project.

Fig. 1 – Stages of object modeling

Fig. 2 – Student-modeled furniture

NVIDIA Omniverse is an open, cloud-based platform that accelerates design and simulation workflows and enables real-time collaboration on scenes of photorealistic quality. Omniverse lets designers, engineers, and researchers work together in shared virtual worlds. The platform provides designers and developers with “portals” that connect their working environments live, for example Autodesk Maya, Adobe Photoshop, and Unreal Engine. Using Omniverse, each team member can see the changes made by others without noticeable delay. The platform increases efficiency, productivity, and flexibility, since teams in the same room or anywhere in the world can log into Omniverse and collaborate on projects with real-time photorealistic rendering.

A key feature of Omniverse is the ability to work easily in different software packages at the same time. Content creators, designers, and engineers have access to industry-leading applications such as Autodesk Maya, Autodesk Revit, Autodesk 3ds Max, Adobe Photoshop, Substance Designer, Substance Painter, McNeel Rhino, Trimble SketchUp, and Epic Games’ Unreal Engine. Connectors for many more are in development, including Blender, SideFX Houdini, and Autodesk MotionBuilder.

The platform uses Pixar’s open Universal Scene Description (USD) technology to exchange information about models, animations, visual effects, and rendering. In addition, it supports the Material Definition Language (MDL), a language developed by NVIDIA for transferring data about object materials.
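To give a sense of what USD looks like, a scene is an ordinary text (or binary) file describing a hierarchy of "prims". The snippet below is a minimal, hypothetical .usda fragment sketching how a classroom scene might reference a separately modeled chair asset; the file names, prim names, and coordinates are illustrative and are not taken from the project:

```
#usda 1.0
(
    defaultPrim = "Classroom"
    metersPerUnit = 0.01
)

def Xform "Classroom"
{
    # Reference a chair modeled in another application (illustrative path).
    # Each connected tool edits its own "layer"; USD composes them live.
    def Xform "Chair_01" (
        references = @./assets/chair.usd@
    )
    {
        double3 xformOp:translate = (120, 0, 45)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because every connected application reads and writes this shared description, a change saved in 3ds Max or Maya appears in Omniverse and Unreal Engine without a manual re-export.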

Fig. 3 – Components and workflow in NVIDIA Omniverse

Fig. 4 – Models of three floors of the central block C.1.2.

To obtain realistic renders, all modeled objects were placed in the classrooms according to the resulting layout. The virtual copy of the University also makes it possible to connect sensors that display various information. The sensors were created jointly with A. Neftisov, director of the Industry 4.0 Research Center, and they were given the ability to send data to the University’s local server. The sensor data is hosted on a server physically located on the University’s premises. This simplifies local connections for recording data, while at the same time protecting against hacking and ensuring data security.

Fig. 5 – Virtual hall of the university with indicators of sensors on the board
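The report does not describe the sensor protocol, so the following is only a minimal Python sketch, using the standard library, of how a sensor node might package a reading and post it to a local server. The endpoint URL and JSON field names are assumptions for illustration, not the project's actual schema:

```python
import json
import urllib.request

def build_reading(sensor_id: str, value: float, unit: str) -> dict:
    """Package a single sensor reading as a JSON-serializable record.

    Field names are illustrative; the actual schema used by the
    University's server is not described in the report.
    """
    return {"sensor": sensor_id, "value": value, "unit": unit}

def post_reading(reading: dict, url: str = "http://192.168.0.10/api/readings") -> int:
    """Send one reading to a local server (hypothetical endpoint).

    Performs a network call; returns the HTTP status code.
    """
    data = json.dumps(reading).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Keeping the server on the local network, as described above, means such requests never leave the University's premises.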

The results of the project are presented in the article “Leveraging Real-Time Simulation and Collaboration Platform for Project-Based Learning: Case Study of Astana IT University” in the proceedings of TALE 2021 – IEEE International Conference on Engineering, Technology and Education, 2021, pp. 1130–1134 (indexed in Scopus). They were also demonstrated several times to guests of the University and to officials of the Ministry of Education and Science of the Republic of Kazakhstan and the MDDIAI in 2021 and 2022. For clarity, a demo video is uploaded to YouTube and is available at the link:

After the virtual copy of the university was obtained, a test connection of a module for detecting moving objects in CCTV camera footage was performed. Machine learning libraries were used to determine the location of people.
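The report does not name the libraries or models used, so as a minimal illustration of the underlying idea, the sketch below implements a classical frame-differencing baseline in NumPy: pixels whose brightness changes between two frames beyond a threshold are counted, and motion is flagged when enough of them change. A real person-detection pipeline would use a trained model on top of this kind of preprocessing:

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray,
                  threshold: int = 25, min_pixels: int = 50) -> bool:
    """Flag motion between two grayscale frames by simple differencing.

    A pixel counts as "changed" when its brightness differs by more than
    `threshold`; motion is reported when at least `min_pixels` changed.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = int(np.count_nonzero(diff > threshold))
    return changed >= min_pixels

# Synthetic example: a static background, then a bright "object" appears.
background = np.zeros((120, 160), dtype=np.uint8)
with_object = background.copy()
with_object[40:80, 60:100] = 200  # 40x40 block of bright pixels

print(detect_motion(background, background))   # no change -> False
print(detect_motion(background, with_object))  # large change -> True
```

In practice the CCTV frames would be read from the camera stream and the flagged regions passed to a person detector; the thresholds here are arbitrary.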

As a continuation of the project, the same approach to creating a virtual copy of a building was applied to school No. 89 in Astana. A virtual copy of the first floor of the school, including furniture and other interior objects, was created. A test connection of the moving-object detection module for monitoring human behavior and safety in real time was carried out.

Fig. 6 – An example of detecting moving objects in the university lobby

Fig. 7 – Layout of the 1st floor of school No. 89 and its virtual copy