Semantic Representation of Domain Knowledge for Professional VR Training
Formal Metadata
Title: Semantic Representation of Domain Knowledge for Professional VR Training
Number of Parts: 30
License: CC Attribution 4.0 International: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/53679 (DOI)
Transcript: English (auto-generated)
00:00
So the title of my presentation is Semantic Representation of Domain Knowledge for Professional VR Training. This is work done by our university, the Poznań University of Economics and Business, in collaboration with the Poznań University of Technology and Enea Operator, which is one of the main electricity providers in Poland.
00:27
I would like to start with a brief introduction to the main motivations of training in virtual reality, in short VR training in this presentation. So VR training is typically based on various
00:46
advanced virtual and augmented reality devices for presentation and interaction. Such devices include head-mounted displays, which are used in the majority of VR and AR applications,
01:03
interactive controllers, motion tracking and capture systems, and, in larger installations, projection rooms (so-called CAVEs) and projection walls (so-called power walls). VR training permits the recreation of real working conditions with a high level of fidelity, and it is
01:29
suitable for training that is expensive or dangerous in reality: for example, when we have expensive, hard-to-obtain equipment that is easily damaged by
01:45
inexperienced workers, or dangerous equipment, for example high-voltage equipment or equipment on a construction site, where there are many
02:01
devices that may be dangerous to people who are not properly skilled. In such cases VR training can be especially beneficial. What we want to do is learning by doing in VR. As Benjamin Franklin said, "Tell me and I forget, teach me and I may remember, involve me and I learn." So this
02:28
is exactly what we want to do: we want to involve users, involve trainees, and teach them in this way. So what are the main challenges of building VR training environments? Typically such
02:42
environments are very complex. They include 3D scenes which consist of different 3D objects with geometry, structure, spatial arrangement, appearance, and behavior, so multiple different elements. In addition, such 3D scenes in VR training environments may typically be used for
03:05
various scenarios, with multiple different scenarios for a particular 3D scene. This in turn requires competencies both in the domain and in programming and 3D modeling.
03:21
But the problem is that, on the one hand, domain experts typically lack knowledge and skills in programming and 3D modeling, while on the other hand developers and graphic designers typically lack domain knowledge and skills. So it is often necessary to transfer domain knowledge
03:42
from domain experts to developers and designers, which is typically a time-consuming and expensive task. Moreover, the level of code reuse between different VR training environments, 3D scenes, and training scenarios is typically low. So the problem that we address in this
04:08
presentation is the lack of approaches that allow domain experts with no programming and 3D modeling skills to design VR training scenarios. How we want to solve this problem
04:23
is by providing a semantic representation and a modeling method for VR training scenarios based on domain knowledge. Our approach has two main elements: the first is the ontology of training scenarios, and the second is the semantic scenario editor,
04:46
which is a service-oriented (SOA-based) application with a user-friendly desktop client. In the following slides I would like to present both main elements. Just to
05:00
show how our solution fits into the overall process in which VR training environments are created and used: first we must create VR training scenes, then we must
05:22
build VR training scenarios for the scenes (a typical case is that there are several training scenarios for a single scene), and finally we train people with such a VR environment. In this presentation we focus on the second stage, namely building training
05:44
scenarios. The overview of our approach is presented in this slide. So a VR training manager is a domain expert responsible for designing VR training scenarios. He uses
06:05
the semantic scenario editor client which is a desktop application based on the .NET architecture with a user-friendly graphical interface implemented in the XAML language. The desktop
06:22
editor client connects to the scenario editor server, which is a Java-based application implemented using Spring web services. In turn, the server connects to two databases. The 3D repository includes all 3D models of infrastructure objects, pieces of virtual
06:50
equipment used by trainees, and virtual scenes, which gather all the former elements into complex environments in which we can train. The second
07:06
database we use is the semantic repository. It is a triple store implemented using the Apache Jena Fuseki server. It stores the scenario ontology, which is the pair of the TBox and RBox
07:27
defining classes and properties for scenarios. In the semantic repository we also store the object descriptors, equipment descriptors, and scene descriptors, which are semantic
07:45
descriptions of objects, equipment, and scenes, respectively. These are in fact ABoxes: particular individuals described using the classes and properties specified in the
08:08
scenario ontology. Now let me turn to the scenario ontology, which is the first main element of our approach. It enables the semantic description of VR training scenarios. The ontology is implemented using the Semantic Web standards: the Resource
08:22
Description Framework (RDF), the RDF Schema (RDFS), and the Web Ontology Language (OWL). The ontology enables description of the workflow of training scenarios: what happens in the scenarios, and which particular activities need to be performed by the trainees.
08:43
Second, the ontology describes the objects, elements, and states of the elements of the infrastructure, depending on the domain of use. Moreover, it describes the equipment necessary for trainees to execute actions in the scenario, and it also describes possible problems that may
09:10
occur with the infrastructure objects on the one hand and on the other hand it is capable of describing errors that may occur when some actions are improperly performed by the trainees.
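To give a concrete flavour of what such a TBox could look like when expressed with these standards, here is a minimal sketch in Turtle; all class and property names are hypothetical illustrations for this presentation, not taken from the actual ontology:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix :     <http://example.org/vr-training#> .

# Workflow concepts: scenarios and the activities trainees perform in them.
:Scenario a owl:Class .
:Activity a owl:Class ;
    rdfs:comment "An activity to be performed by a trainee." .

# Infrastructure objects, their elements, and element states.
:InfrastructureObject a owl:Class .
:Element a owl:Class .
:State   a owl:Class .
:hasElement a owl:ObjectProperty ;
    rdfs:domain :InfrastructureObject ;
    rdfs:range  :Element .
:hasState a owl:ObjectProperty ;
    rdfs:domain :Element ;
    rdfs:range  :State .

# Equipment needed to execute actions, plus things that can go wrong.
:Equipment a owl:Class .
:Problem a owl:Class ;
    rdfs:comment "A malfunction of an infrastructure object." .
:Error a owl:Class ;
    rdfs:comment "An improperly performed trainee action." .
```

Such class and property definitions are what the triple store serves to the editor, so that domain experts only ever pick from domain-level terms.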
09:28
The main elements of the scenario ontology are presented in this slide. In blue we marked the main workflow elements. So every scenario typically includes multiple
09:43
steps, every step typically includes multiple activities, and every activity typically includes multiple actions. In discussions with our business partner, the operator, we agreed that such a structure with three levels of detail (steps, activities, and actions)
10:08
is sufficient to represent training scenarios in general. Actions, at the lowest level, provide the most detailed description of what happens in the scenario and present contextual
10:29
data. Such contextual data consists of objects of the infrastructure and elements of the objects.
10:40
We distinguish two types of elements. Interactive elements may be affected by the trainees: for example, buttons may be pushed, switches may be turned, and so on. When we perform an interaction with
11:02
an interactive element, some dependent elements in the environment typically change their state. Dependent elements may be, for example, transformers in the field or in the station, or displays in the dashboards: all elements that are affected by the
11:26
interaction. These are marked in green in the slide. The last part of the scenario description covers errors and problems. Errors represent what may be done improperly by
11:45
trainees, and problems may occur if the infrastructure objects work improperly. Using the scenario ontology, which serves as a schema for scenarios, we create
12:06
particular scenarios, described by scenario knowledge bases. Scenario knowledge bases in our approach are created by domain experts using the scenario editor client. They are encoded in the Turtle format and include the particular steps,
12:27
particular activities, and particular actions to describe the workflow of the scenario. They
12:43
also describe problems and errors, and all these are specified using classes and properties defined in the scenario ontology. Here we have an excerpt of a semantic scenario knowledge base. It consists of four steps with multiple activities
13:07
and with multiple actions. We could of course present much more than only this excerpt. We could add objects with interactive and dependent elements, with the states of the elements,
13:23
with errors and problems, because all of these are typically included in a scenario knowledge base. Now I would like to move on to the second element of our approach, which is the semantic scenario
13:41
editor. I will present in particular its client, which is a .NET-based application with a graphical user interface. So the first thing that the training manager must do is to provide general information about the scenario. General information encompasses
14:02
such properties of the scenario as the type of work, the title of the scenario, its goal, a specification of whether the scenario is basic, complementary, periodic, verifying, or ad hoc,
14:21
as well as elements of protective equipment, for example helmets and gloves, that may be used by trainees within the scenario. On the second page of the scenario editor client, the scenario manager specifies the workflow of the scenario at three levels of detail,
14:44
starting from steps, through activities, to actions. We can see steps here in dark blue, then activities within the particular steps in light blue, and in purple we can see actions.
15:01
They are presented in the form of a tree. Actions are related to particular infrastructure objects, shown here in green. For every object we specify its name, its interactive or dependent element with the beginning and final states
15:23
(the final state is the state after the interaction by the trainee), the visual representation of the object, the audio representation, and the fidelity level. More important objects, in terms of interaction and presentation, may be specified with a higher level of fidelity;
15:46
we can also specify possible simplifications of the 3D models, and we may specify some intended unrealism. For example, if we have a scene with high-voltage equipment,
16:01
for training with such equipment, and we know that some... [moderator: you have two minutes] Two minutes, okay, I will finish. So this is how our 3D scenes look: on the left the real scene, and on the right the virtual 3D scene with a
16:20
high level of fidelity. And an example scenario: the task is to block a field in the electricity line; for this purpose the trainee must enter the control room.
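A scenario knowledge base describing this example task could, in the same illustrative spirit (all identifiers below are invented for this sketch, not the real knowledge base), be encoded in Turtle roughly as:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/vr-training#> .

# The example scenario: blocking a field in the electricity line.
:blockFieldScenario a :Scenario ;
    rdfs:label "Block a field in the electricity line" ;
    :hasStep :enterControlRoom .

:enterControlRoom a :Step ;
    :hasActivity :operateDashboard .

:operateDashboard a :Activity ;
    :hasAction :switchField .

# The action points to an interactive element with its beginning and final
# states; a dependent element changes state as a result of the interaction.
:switchField a :Action ;
    :affectsElement :fieldSwitch .

:fieldSwitch a :InteractiveElement ;
    :beginningState :switchReleased ;
    :finalState     :switchEngaged ;
    :triggers       :fieldIndicator .

:fieldIndicator a :DependentElement ;
    :finalState :indicatorFieldBlocked .
```

The point of the three-level structure is visible here: the domain expert only names steps, activities, actions, and elements, while the VR environment interprets the state transitions.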
16:41
The equipment we use here is an HTC Vive head-mounted display with interactive controllers; when we move the controllers and push the appropriate buttons, we can see the hands in the presentation. Here we have dashboards; the task of the user
17:04
is to approach the appropriate one and switch the proper interactive elements, which of course affect some infrastructure objects in our environment. To summarize, the main advantage
17:25
of our approach is that it enables modeling of scenarios at a domain-specific level by domain experts, who typically are not IT experts. As future work, we would like
17:43
to provide possibilities for collaborative creation of scenarios by distributed users, to enable scoring of trainees' performance and skills in VR training scenarios to see the results of training, and to extend the scenario ontology with parallel sequences of activities.
18:06
Thank you for your attention.