Construct IT for Business
Get Started in Virtual Reality
This How To Guide is part funded by the
Department of Trade and Industry under the
Partners in Innovation Scheme. This guide is
published with the Department's consent, but the
views expressed herein are those of Construct IT
and are not necessarily those of the Department.
IT Construction Best Practice
The IT Construction Best Practice programme
identifies, publicises and supports the use of IT
to improve business and management practices
for the construction industry. It is funded by the
government and is an initiative within the
Construction Best Practice Programme.
For more information, contact
ITCBP
Davis Langdon Consultancy
FREEPOST LON14305
LONDON
WC2B 6BR
Fax: +44 (0)20 7 379 3030
E-mail: itcbp@davislangdon-uk.com
Web: http://www.itcbp.org.uk
Get Started in Virtual Reality
Construct IT for Business
Bridgewater Building
University of Salford
Salford M7 9NU
United Kingdom
Tel: +44 (0) 161 295 3916
Fax: +44 (0) 161 295 5011
E-mail: j.underwood@salford.ac.uk
Web: http://www.construct-it.org.uk
Contents
Introduction
Use of the guide
1 How to benefit from VR
1.1 Sales and marketing
1.2 Product design
1.3 Training
2 What is Virtual Reality?
2.1 Definition of VR
2.2 VR versus CAD
2.3 Main components of a VR system
3 VR technology
3.1 VR system configurations
3.2 Visual displays
3.3 Position and orientation tracking
3.4 Input devices
4 How to choose a system
4.1 Available VR software systems
4.2 Input processes
4.3 Simulation processes
4.4 Rendering processes
5 Further information

Authors
Irene Koh, Construct IT
Jason Underwood, Construct IT
Mark Shelbourn, Construct IT
Carl Abbott, Construct IT

Other Contributory Organisations
Atkins, Gleeds, University of Salford
Introduction
This guide is part of a series of ‘How to…’ guides on construction industry IT development.
This particular guide covers the subject of Getting Started in Virtual Reality. It is a high level
document designed to give general help on the subject and is not intended to be a detailed manual.
This approach has been taken because every company has different requirements depending on its
size and activities and the nature of the IT projects with which it is involved.
Throughout this series of guides the principle adopted is that all IT development should be business
driven. Consequently it should play a part in, and be integral with, construction activities and business
processes.
The IT Development Process
While each guide is designed to be a stand-alone document, the reader is encouraged to think of a
complete process of IT development. This process starts with the development of an IT Strategy,
which has been designed to support your business strategy and continues with the implementation of
that strategy. To assist with the understanding of this process you are encouraged to refer to other
Construct IT guides. Particularly helpful are:
● How to Develop an Information Strategy Plan
This guide details the processes involved in producing an information strategy that is aligned with your business strategy.
● How to Implement an IT Strategy
This guide details the procedures required to successfully implement your Information Strategy Plan.
● An IT Self-Assessment Tool
This guide enables your organisation to make an assessment of its current IT capability and to plan future improvements.
● Measuring the Benefits of IT Innovation
This document helps your organisation quantify the financial benefits of IT innovation.
Other How to Guides are being produced which deal with specific aspects of the IT development process. Together with the IT Self-Assessment Tool they are available from Construct IT and through the IT Construction Best Practice Programme (ITCBPP). Refer to www.construct-it.org.uk and www.itcbp.org.uk for up-to-date details.
Use of the Guide
This guide is split into four parts, each of which will help you use Virtual Reality (VR) within your business. The first part explains some of the potential benefits of using a VR system. It includes a detailed section on how VR can be used for the sales and marketing of the business, incorporating a list, taken from construction industry professionals already using VR, of reasons to use VR. The next section describes how VR visualisation of product designs can help with the assessment of maintenance in the early design stages of a project. A case study is provided to highlight these issues. A final section covers how VR can help with training within your business.
Part 2 explains what VR is. It gives a detailed definition and describes the differences between CAD (computer aided design) and VR. The main components of a VR system are highlighted, with a case study provided by Atkins on how they have used a VR system successfully in at least three of their projects.
Part 3 describes the different parts that make up VR technology. It includes sections on different system configurations and describes VRML (Virtual Reality Modelling Language), the most common language used to specify a Virtual Environment (VE). The different types of visual display, position and orientation tracking, and input devices are also described.
The final part of the guide provides information on how to choose a VR system that best suits your business needs. The different types of VR available are described, along with the input processes, simulation processes and rendering processes used with each type of system.
1 How to benefit from VR
VR is being used in many application areas such as engineering, construction, medicine and the military. The main uses of VR in industry can be categorised under product design (engineering components or buildings), training, and sales and marketing (Figure 1.1). This section presents a brief summary of applications of VR in these areas.
Figure 1.1: Use of VR in Industry – the main uses of VR in industry are design (visualisation, operation and maintenance, concurrent engineering teams), training, and sales and marketing.
1.1 Sales and marketing
The use of VR for promotional purposes has been highly successful in many companies. Normally the marketing of properties takes place before the buildings have been fully constructed, so giving clients easy access to view the properties is an added advantage. VR can also be used to:
● Attract customers to exhibition stands
● Relay important marketing messages
● Generate more press interest.
Reasons to use VR
● Assists understanding of design intent(s), because the VR environment directly reflects how we, as human beings, perceive our own 'real world' physical environment
● Intuitive to technical and non-technical audiences alike
● Allows for the easier inspection of the 'spaces' that surround the real world physical objects
● Allows easier assessment of visual impact, aesthetic appreciation, expression and value
● Assists the solving of design interface problems between disciplines
● Assists the evaluation of sight lines and visibility corridors in designs, which is particularly important for retail developments, road/railway signal visibility and traffic road signage (permanent and temporary for road works)
● Real-time spatial analysis, clash checking and proximity detection
● Check and evaluate site access, particularly important for maintenance and emergency services
● Plan and check access for the installation and use of large items of plant, particularly cranes, during the construction phase
● Assessment of environments that are physically hazardous for human beings to enter
● Provide an easily understood 3D view and link to design data held in conventional formats, such as web pages or images. This link can be bi-directional
● Visualisation of a proposed facility in its existing physical environment (a virtual photomontage)
● Monitoring of construction progress by integrating pictures (static or live) from site with the VR environment
● Visual assessment of construction scheduling/sequencing. A lot of development work into what is known as 4D modelling has taken place at CIFE, Stanford University, and the Centre for Virtual Environments, University of Salford
● Facilitate operations/maintenance and refurbishment/refits of existing facilities
● Test 'what if' scenarios and present them as options in an easily understood manner.
Case study – Use of VR in a British House Building Company
One of the top twenty house building companies in the UK decided to use VR after their CEO saw presentations of 3D Studio work on a video that had been prepared for marketing purposes. The CEO was impressed by the presentation but felt that the greater navigational control and interactivity offered by VR would be useful to the company, enabling designers to identify design problems before construction began on site. It was also felt that VR would enable non-technical staff to become actively involved in collaborative design decisions.
As a result the house building company worked with a software company to adapt a generic VR package for site layout design applications. The house building company stipulated that all data was to be created in CAD and exported into the virtual environment. The rendering information was added in 3D Studio before the data was imported.
The initial benefits of using VR have come from collaborative working during the evaluation of design. Non-technical employees have been able to view a proposed site. This helped highlight previously unnoticed design inconsistencies at a stage where they were easy to rectify. For example, in one case the orientation of a row of garages was altered to improve the view from the entrance of one of the houses. To date the use of VR has varied between regions of the company, with some regions having little or no use of VR. However, the organisation would like to increase its overall use and to achieve benefits in the feasibility and planning stages of the house building process. At these early stages the company feels that VR can be used to better communicate visions and designs. The CAD manager also feels that the visualisation can then be reused downstream for sales and marketing purposes.
1.2 Product design
Digital prototyping is an important stage of the life cycle of any product. It involves the creation of
CAD models of the product to carry out design analysis before committing to production. While
current CAD systems are reasonably mature for supporting detailed design of products they do not
allow the designers to assess human factor issues such as maintenance, operation, security or
emergency evacuation procedures, etc. VR is being used by many companies to assess such human
factor issues during design by importing CAD models into the VR system. The term virtual
prototyping is used by many to refer to design assessment using VR technology. The main benefits of virtual prototyping are significant cost savings, improved quality and faster time to market. Rolls-Royce claims to have achieved 50% faster design reviews by making real-time changes on the virtual prototype.
In product design, VR is mainly used for visualising complex designs, assessing operation and maintenance issues, and promoting collaboration between design teams. This section presents a brief
summary of applications of VR in these areas.
Visualisation
One of the most intriguing aspects of VR is its ability to depict different types of data in an intuitive
form. On a simple level, VR is ideal for presenting an architectural ‘walk through’ of large
construction projects. Here the benefit of VR is that it allows professionals to translate building plans
and schematic diagrams into a 3D form, which everyone can understand. Furthermore,
environmental issues such as visual impact related to construction projects can be assessed using VR.
As well as having lots of parts, a complex design can have multiple layers. This characterises the design
of many process plants and utilities, with extensive pipelines or other process equipment. In this case,
VR can be used to see any layer from any perspective.
Many aerospace and automotive companies, for example Ford, BMW, Volkswagen, Rolls-Royce and McDonnell Douglas, use VR for interactive visualisation of complex designs. Here the
purpose is to visualise how complex assembly parts will fit together and to understand their spatial
relationships. Such visualisation allows the designers to detect design faults at early stages of the
product life cycle, ultimately saving money.
Operation and maintenance
VR can be used to assess operation and maintenance of buildings and mechanical products: for
example, once a building is designed, designers can use VR and go inside the building and assess
various human tasks and activities. This can help companies to plan new manufacturing and
maintenance procedures. The use of VR allows engineering companies to include ergonomics in their
designs by testing considerations such as how to assemble and maintain an engine, load a car and
operate an earth-moving machine.
Concurrent engineering teams
Many companies are now adopting the philosophy of Concurrent Engineering by considering the
down-stream product life cycle issues during the early design stages. The reason for adopting this
approach is to reduce misunderstandings and unforeseen problems creeping into the design as it
progresses through its life cycle, consequently saving both time and costs and improving the overall
quality of the product and client satisfaction. However, in order to implement the Concurrent
Engineering concept in areas such as construction, life cycle issues such as concept and detail design,
environmental impact, space planning, maintenance, operational issues, emergency evacuation,
security and construction need to be considered during the design phase. As a result, various parties
such as planners, architects, designers, civil-engineers, contractors, maintenance engineers and
security personnel need to be brought together to review designs and come to a common agreement.
At present, such multi-disciplinary design reviews and briefings take place around 2D drawings that
are subject to misinterpretation by those people less accustomed to interpreting them. VR can be used
as a way of facilitating such multidisciplinary meetings and to overcome the limitations inherent in 2D
drawings. VR can be used as an enhanced communication tool to convey design ideas and design
problems to other members of the team more effectively.
A further problem in supporting Concurrent Engineering design is the difficulty in co-locating all the
team members frequently to conduct design reviews. This can be time-consuming and costly for
organisations that are geographically distributed. Several companies are now assessing distributed
virtual environments as a way of supporting distributed Concurrent Engineering teams.
Case study – Gallicon
‘Gallicon’ has been developed by the University of
Salford in conjunction with Galliford, EC Harris,
Stamford Homes and Welsh Water. Gallicon has been
designed to support partners working together on
construction projects. In partnering it is important
that there is a free and open flow of information
between partners – allowing them to deliver the best
possible construction solution to meet the client’s
business objectives. Applying database and
communications technology to the construction
processes has created an information environment
where this can be achieved.
The data held in a central store may be accessed in a variety of ways:
● Visualisation (VRML) – a web-based visual representation of the project with related information easily accessible
● Planning View (MS Project) – a project management view, controlling the resources in the database
● CAD View (AutoCAD) – a 2D representation of the project, built using information rich objects
● Cost View (MS Excel) – a spreadsheet view of cost information viewable by all partners
Refer to www.scpm.salford.ac.uk/gallicon for further information
1.3 Training
VR has considerable potential to aid in the training of people in many situations which are either
potentially hazardous or where training is expensive. Obvious examples include flight simulation,
recreation of battlefield scenarios for the military forces, training for emergency personnel,
equipment operation, assembly and production line training and maintenance training. VR is also
being used extensively in the medical field for surgical training.
Examples of VR training applications include an assembly-line trainer for Ford, an orthopaedic and eye surgery trainer by Hull University, the Minimally Invasive Surgery Trainer (MIST) by Virtual Presence, the Intravenous Catheterisation Training System (ICTS) by Mentice and a motorcycle training simulator by Kawasaki.
Case study
VR in the Construction Industry
A construction contracting company specialises in
managing large construction and engineering contracts,
particularly in the petrochemical industries. The company
initially experimented with the use of VR in a pilot project.
Based on the experience they gained, they decided to
increase their use of VR and build a VR model of a large
chemical plant being developed in the North East of
England.
Being able to navigate around a complex chemical plant
even before it was built brought many advantages to the
design process. One of the most apparent benefits is the
way it allows very easy access to what is very complex
data. The number of different drawings and diagrams
needed to describe a chemical plant is enormous. Even
the most skilled engineer has considerable difficulty in
absorbing all of the information. Using VR, it is much
easier to check issues of access and reach. Indeed, the people who will have to do the
maintenance tasks can check the design themselves.
Health and safety and training are the other areas where VR has provided significant
advantages. Models can be used to check issues such as the flow of toxic vapours in the event
of an accident in the plant and escape routes from danger areas.
Training applications can range from practising one-off operations, such as lifting a new piece
of equipment into an existing plant, to allowing maintenance staff to learn to perform routine
tasks in a safe environment. The use of collaborative, immersive VR provides a significant
degree of realism for these applications.
2 What is Virtual Reality?
2.1 Definition of VR
There are many definitions of Virtual Reality (VR). However, this guide uses a strict definition, defining VR as 'a user interface that allows humans to visualise and interact with autonomous computer-generated environments through human sensory channels in real-time'.
The main keywords used in this definition are computer-generated environment, interaction, human sensory
channels and real-time. The meanings of these keywords are given below:
Computer-generated environment
The computer-generated 3D graphical environment is the major component of VR. This
environment is referred to as a virtual environment in this guide. The virtual environment can
present scientific data in a visual form within scientific applications, engineering CAD models in
engineering applications, CAD models of buildings in construction applications or a fantasy world in
the case of entertainment. The objects within this virtual environment can be programmed to depict
certain behaviour. Such behaviour can vary from rotating fans and real-time clocks, to a car with a
realistic driving performance.
Visualisation and interaction
The ability to visualise and directly interact with objects in the virtual environment is a powerful
feature of VR. The type of interaction that the user wants to carry out within the virtual environment
varies according to the application. Such interaction can vary from a simple walk through to
performing complex engineering operations such as assembly/disassembly tasks.
Real-time
As the user is changing their viewpoint or interacting with the virtual objects, the virtual world needs
to be continuously updated to give a sense of continuous motion to the user. It is important to note
that such real-time interaction differentiates true VR from pre-recorded ‘fly-throughs’ that depict the
virtual world from a fixed point or path.
Human sensory channels
The sensory channels currently used in VR applications are visual, tactile and auditory. Although
researchers are also talking about the future use of smell and taste, the technology for this is not yet
available.
Augmented Reality (AR)
A paradigm that is becoming more common across VR domains is Augmented Reality (AR). AR works on
the same principles as VR. However, the difference
between VR and AR is in their treatment of the real
world. VR immerses a user inside a virtual world that
completely replaces the real world outside. In contrast, AR lets the user see the real world around them and augments that view by overlaying or compositing three-dimensional virtual objects onto their real world counterparts. Ideally, it should seem to the user that the virtual and real objects coexist.
2.2 VR versus CAD
In order to understand what VR is, it is necessary to distinguish VR from related technologies. This section compares and contrasts VR against 2D and 3D CAD.
Historically, CAD programs were created for building models. In contrast, VR programs were created to display models. Originally CAD information could only be entered and viewed in a 2D form. However, as CAD software increased in sophistication the model could be viewed and then edited in a 3D format. Future developments of CAD include so-called 4D CAD (3D + time), which will enable the CAD model to be viewed at each stage of the development life cycle. Nevertheless, the prime purpose of CAD software remains the entry of data for model creation. VR surpasses CAD by being able to place users inside the model, allowing them to interact directly with the objects they are viewing rather than through a 2D computer interface. Many 3D solid model CAD systems allow the user to rotate an object, but only within VR can the user walk around the object, stop, touch it, manipulate parts of it, or even enter it. CAD software can be linked, via an interface, to VR software in order to import data models into the virtual world; this adds a powerful new way to understand and interact with CAD data. Furthermore, VR allows the user to improve the visual appearance of the products. This can be done by applying texture mapping, surface properties and, with sophisticated programs, varying lighting conditions. VR can also allow the user to define kinematic behaviour of objects to demonstrate the operation of a particular product.
Table 2.1: Comparison between CAD and VR
● Dimension – CAD: 2D / 3D; VR: real-time rendering in response to the user's actions
● Interface – CAD: 2D interfaces; VR: 3D interfaces
● Interactivity – CAD: limited; VR: full interaction
● Autonomy / Interaction / Presence – CAD: low / high / low; VR: high / high / high
2.3 Main Components of a VR System
Figure 2.1 illustrates the main components of a VR system, which offers visual, auditory and haptic sensory information to the user. This section summarises the function of each component. A detailed description of individual technology components is presented in the next section.
Figure 2.1: Main Components of a VR System – a virtual environment generator (3D model database and VR run-time environment) linked through visual, auditory, haptic and tracking interfaces to the visual display (graphics hardware), audio localiser, haptic feedback hardware and position and orientation tracking technology.
Virtual Environment generator
The virtual environment generator can be described as a high performance computer that maintains
the database of the virtual world and executes the VR run-time software. The database maintains a
shape representation of the virtual objects together with their visual and behaviour characteristics.
The shape representation of the objects usually comes from CAD systems (i.e. AutoCAD, Bentley,
Unigraphics, Catia, etc) or visual simulation modelling packages such as MultiGen, Designer’s
Workbench. The VR run-time environment loads the virtual objects from the database when
necessary.
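As a simple sketch of what such a database might look like in software, the hypothetical Python fragment below stores, for each virtual object, where its shape came from, its visual properties and an optional behaviour, and loads the geometry only when the run-time first asks for it. The class and field names are illustrative assumptions rather than part of any particular VR package.

# Illustrative sketch only: a minimal "world database" in which each virtual
# object records where its shape came from (e.g. a CAD or VRML export), how it
# should look and an optional behaviour, and is loaded only when first needed.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    shape_file: str                                  # e.g. "fan.wrl" exported from CAD
    appearance: dict = field(default_factory=dict)   # colour, texture, etc.
    behaviour: str = ""                              # e.g. "rotate about z"
    loaded: bool = False

    def load(self):
        # A real run-time environment would parse the geometry file here.
        print(f"loading {self.shape_file} for {self.name}")
        self.loaded = True

class WorldDatabase:
    def __init__(self):
        self.objects = {}

    def add(self, obj: VirtualObject):
        self.objects[obj.name] = obj

    def fetch(self, name: str) -> VirtualObject:
        obj = self.objects[name]
        if not obj.loaded:          # load on demand, as the run-time requires
            obj.load()
        return obj

db = WorldDatabase()
db.add(VirtualObject("ceiling_fan", "fan.wrl", {"colour": (0.8, 0.8, 0.8)},
                     behaviour="rotate about z"))
fan = db.fetch("ceiling_fan")       # triggers the deferred load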
Visual interface (Seeing)
The role of the visual display interface is to render images in real-time and display them on the display
devices. These images must be computed taking into account the viewpoint of the user. For smooth
simulations, 30 frames per second (fps) is desirable, so each frame has a life cycle of about 33 milliseconds (from construction to destruction). Human factors studies indicate that the perception of smooth motion degrades dramatically below 12 fps. This large computational load is handled by the graphics hardware, whose task
is to render each virtual object within the virtual environment, taking lighting conditions and texture
of the surfaces into consideration. These images are then projected onto the visual display devices (i.e.
workstation screen, large screen, and head-mounted displays).
Auditory interface (hearing)
The role of the auditory interface is to generate realistic auditory cues and present them to the user
via headphones or speakers. The audio localisation system takes either real or synthetic audio signals
and applies specialised processing techniques to spatialise the signals in a full 360° sphere. These spatialised cues can be made to appear 'space stabilised' or to move around in space. For instance, it is
possible to generate a sound such as a ticking clock and place this in a precise position in the virtual
environment. The sound appears to the listener to remain stationary even when they move their
head. Sophisticated digital signal processing techniques are required to generate realistic audio cues
within virtual environments.
Haptic interface (feeling)
The term haptic is used to refer to tactile and force feedback. The role of the haptic interface is to provide
this feedback to the user when interacting with virtual objects. Tactile feedback is experienced when
an object is lightly touched or stroked. Through tactile feedback, the user should be able to experience
surface properties such as roughness, smoothness, flatness and temperature. Force feedback is
required to stop the user's hand or other body parts penetrating into virtual rigid bodies. Such
interfaces can be very useful for applications such as maintenance assessment and training. Although
some devices exist for force feedback to the hand, e.g. through a pen device, such systems that restrict
movement are extremely rare.
Tracking interface (movement)
In order to interact with the virtual environment it is necessary to sense where the operator is looking
and where their hand position is within the virtual environment. This means that the position and
orientation of the head and the hand must be tracked. The position and the orientation of the head is
continuously followed by the tracking hardware and sent to the VR run-time environment to generate
appropriate images to the user. Similarly, the position of the hand is continuously followed by the
tracking hardware to detect any collisions between the hand and the virtual objects. Appropriate
tactile and force feedback are generated in the event of collisions.
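To illustrate how tracked head orientation feeds the image generation described above, the short Python sketch below turns a head yaw and pitch, as a tracker might report them, into the direction the user is looking; a renderer would use this to set up the camera for each frame. It is a minimal illustration, not the interface of any real tracking system.

import math

def view_direction(yaw_deg: float, pitch_deg: float):
    """Unit vector the user is looking along, from tracked head yaw/pitch.

    Yaw is rotation about the vertical (z) axis; pitch tilts the view up or down.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

# Looking 90 degrees to the left and 30 degrees downwards:
print(view_direction(90.0, -30.0))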
Case studies
VR, and the development of Virtual Environments (VE), has made a significant impact on how
engineers can portray their projects and the associated design information to other
stakeholders, including clients, in the design process. Atkins introduced VR as an acceptable
design tool with the development of custom VR functions/tools to facilitate project
collaboration and to solve some of the problems of user interface and speed of navigation.
Most of the VE models that the company has produced are direct exports from 3D models constructed in a CAD package. The company avoids using proprietary VR developer toolkits
basically because VRML is free and the current CAD and visualisation systems that the company
uses support the direct export of VRML.
Atkins has used VR in many ways. These have included design review (internal/final),
external/internal inspections, below ground inspection, assessment of land take and space
usage, sight lines/visibility corridors and construction sequencing. The following are a small
selection of where these uses have been put into practice.
St Gregory’s School – 6th Form Extension Block – Multidiscipline collaboration
This project aimed to show the same 3D CAD model in one VE, with the various multi-discipline representations displayed together. For the collaborating disciplines, the presentation of the different model views in the same VE has proved highly beneficial. It has provided an instant overview of each other's design input and allowed for the easier identification of potential interface problems.
Docklands Light Railway – Lewisham Extension – Deptford Viaduct – Operations and
Maintenance
This model was created to show the insides of a typical
voided bridge deck to depict internal accessibility for
lifetime maintenance and inspection purposes. Various
user interface functions have been included in the model.
For example, all items of equipment are displayed in vivid
colours to aid identification. The ability to switch the display of various items on and off improves the frame rate and increases the flying speed of the user in the VR environment. A slide bar control has been included which aids understanding of the complex assemblies of pre-stressing equipment embedded in the deck concrete. The
action of holding the cursor over a visible portion of a
tendon causes the display of a drop-down menu that shows various design, construction or
maintenance details about that particular tendon. This enables the quick dissemination of
information to on-site operatives in a fashion that is readily understood: a virtual
representation of the site environment that they will be maintaining and physically working in.
Remodelling of Proof House Junction,
Birmingham – VR for Rail Signalling and Driver
Training
The accurate modelling of movement of trains
presents a number of challenges to VR. This VE
uses geometry of the permanent way and rolling
stock to generate accurate, realistic motion of
bogies, trains and carriages along the track.
Combined with 3D CAD modelling of the signals,
structures, OHLE (Overhead Line Electrification)
and surroundings, this model can be used to show the driver's view for signal sighting, cab
training or static views from any position for swept path checks or public exhibition. For SPAD
(Signals Passed At Danger) or accident investigations, the known speeds and position of trains,
coupled with synchronised time clocks, would show the drivers’ views throughout the incident.
All pictures courtesy of Atkins.
3 VR technology
In order to provide interaction within virtual environments it is necessary to use special hardware
designed both to allow input to the computer and to provide feedback to the user. As illustrated in Figure 2.1 in Section 2, a complete VR system may be required to provide interfaces such as a visual interface to generate images, a tracking interface to follow the body movements of the user, an
auditory interface to generate realistic audio cues and a haptic interface to provide touch and force
feedback to the user. Hardware for supporting such interfaces is commercially available. This section
describes the state-of-the-art in visual displays and also considers tracking and auditory technology.
3.1 VR system configurations
There is considerable confusion when attempting to categorise VR systems, particularly when one
tries to understand what benefits each system has to offer. This is hardly surprising when faced with
the degree of hype that still surrounds the subject. However, it is important to note that each type of
VR system has a role to play, and it is important to choose the right solution for the desired
application.
For the purposes of this guide it was considered appropriate to partition VR systems into three categories – non-immersive, semi-immersive and fully immersive – depending on the degree of immersion present. This partitioning facilitates consideration of the peripheral interfaces and, ultimately, cost.
Non-immersive (Desk-top) VR
The main features of a desktop VR system are its use of a
computer-generated virtual environment, which is
delivered by a conventional desk-based high-resolution
monitor.
Desktop VR is essentially based on the familiar personal computer enhanced with a good 'gamer's' graphics card. If
a 3D environment is being used then 3D interaction
devices such as a Spaceball may be appropriate (see
Section 3.4). Desktop VR applications do not generally
demand the highest graphics performance, meaning that
top of the range ‘PC clones’ can be used. VRML is
typically used to specify the virtual environment (see
boxed text for more details of VRML). However, for true
3D a means of viewing the display stereoscopically will be required.
A second variation of desktop VR is 'panoramic VR'. In panoramic VR, a VR world is constructed with the user fixed in one position and the surrounding world, as images, mapped to the inside of a sphere or drum. Users can spin the world around themselves, giving a full 360° view of the environment. These systems normally have limited zoom and vertical rotation functions.
The significant advantage of a desktop VR system is its cost, which is significantly lower than that of other forms of VR system. However, a desktop VR system provides almost no sense of immersion in a virtual environment. For some applications this may be acceptable, but where perception of scale is important it can be a serious problem. On the other hand, the sense of presence can be high.
Semi-immersive VR
The term semi or partial immersive VR is used for
describing projection-based VR systems. Reality
Centres and Immersive Workbenches can be
considered as semi-immersive VR systems.
Projection based systems provide a greater sense
of presence than desktop systems because of the
wider field of view.
Typical input devices used within semi-immersive
virtual environments for interaction are gloves,
joysticks and 3D mouse (see Section 3.4).
Fully immersive VR
A fully immersive VR system is the kind that tends to be thought of first when most people think of VR. To achieve full immersion the user has to employ a head-coupled display that is either head-mounted or arranged to move with the head. A sense of full immersion is achieved because the display provides a visual image wherever the user is looking. Consequently, a head-coupled display provides a 360° field of regard. The field of view of a head-coupled display is also very important, and it is essential to note that the sense of presence will be a function of the quality of the display provided, in terms of resolution, field of view, update rate, image lags, etc.
All fully immersive VR systems give a sense of
presence in the virtual environment that
cannot be equalled by other VR approaches.
This is a direct consequence of having a field
of regard of 360°, where images can be
presented wherever the user is looking. The
ability to exclude visible features of a real
environment can lead to the sense of
immersion taking place very quickly.
Typical input devices used within fully
immersive virtual environments for
interaction are those that track the natural
body function, such as a tracked data glove
(see Section 3.4).
VRML
Most desktop users of VR will view models through a VRML viewer. Most VRML viewers are free
and are used as a plugin in standard web browsers. All of the major CAD programs offer the
ability to convert a file from the CAD format to the VRML format. Virtual Reality Modelling
Language (VRML) is a language for defining environments for multi-participant interactive simulations – virtual worlds networked via the global Internet and hyper-linked with the World Wide Web. All aspects of virtual world display, interaction and internetworking can be
specified using VRML. It is the intention of its designers that VRML become the standard
language for interactive simulation within the World Wide Web.
VRML is the file format standard for 3D multimedia and shared virtual worlds on the Internet.
Just as HyperText Markup Language (HTML) led to a population explosion on the Internet by
implementing a graphical interface, VRML adds the next level of interaction, structured
graphics, and extra dimensions (z and time) to the online experience. The applications of VRML
are broad, ranging from business graphics to entertaining web page graphics, to
manufacturing, scientific, entertainment, and educational applications, and of course to 3D
shared virtual worlds and communities.
VRML blends the intuitive human sense of space and time with user interface interaction and
programming language integration for the Internet. The evolution of the Internet from
command-line to 2D graphical to emergent 3D interfaces reflects ongoing, fundamental progress towards human-centred interface design and a more immersive and responsive computer-mediated experience.
The first version of VRML allowed for the creation of virtual worlds with limited interactive
behaviour. These worlds can contain objects, which have hyper-links to other worlds, HTML
documents or other valid MIME (Multipurpose Internet Mail Extensions) types. When the user
selects an object with a hyper-link, the appropriate MIME viewer is launched. When the user
selects a link to a VRML document from within a correctly configured WWW browser, a VRML
viewer is launched. Thus VRML viewers are the perfect companion applications to standard
WWW browsers for navigating and visualising the web.
The second version of VRML added significantly more interactive capabilities. VRML 2.0 was
reviewed and later replaced by VRML 97 in Dec 1997, which was formally released as
International Standard ISO/IEC 14772. VRML97 is almost identical to VRML 2.0, but with many
editorial improvements to the document and a few minor functional differences. Most major
VRML 2.0 browsers are now VRML 97 Browsers.
Recent VRML developments include GeoVRML, H-Anim, and Database Integration. GeoVRML is
an effort to provide support for representing and visualising geographic data using standard
VRML97. GeoVRML 1.0 defines a number of extensions for VRML to enable geo-referenced
data, such as maps and 3-D terrain models, to be viewed over the web by a user with a standard
VRML plugin for their web browser. H-Anim is a standard way of representing and animating
human figures in VRML97. H-Anim 1.1 contains some extensions for VRML, which abstract the
functionality of the components of a human figure. SQL (Structured Query Language) Database
Access in VRML consists of two distinct, complementary interfaces: 1) Embedded SQL Scripting
provides a mechanism for executing arbitrary SQL statements within a VRML application and 2)
Server Side Includes provides a mechanism for embedding data-driven components within a
VRML world delivered from a server. Current VRML development is focused on X3D, an open
extensible standard as a new-generation successor to VRML, bringing rich and compelling 3D
graphics to the web for a wide variety of applications and devices.
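To give a feel for what a VRML world actually looks like, the short Python sketch below writes a minimal VRML97 file containing a single red box. Opening the resulting file in any VRML97 viewer or browser plug-in displays the box; the file name and scene are arbitrary illustrations.

# Write a minimal VRML97 world: one red box sitting at the origin.
vrml_world = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Box { size 2 2 2 }
}
"""

with open("hello.wrl", "w") as f:
    f.write(vrml_world)

print("Wrote hello.wrl - open it in a VRML97 viewer or browser plug-in.")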
3.2 Visual displays
There are many visual display technologies that can be used for VR applications. These devices come
under the following categories: Head Mounted Displays, Workstation Screens, and Projection
Screens.
Head mounted displays (HMD)
Head mounted displays use two miniature screens that are
placed very close to the user’s eyes. Special optics are
required to allow the eyes to focus at such short distances.
Tracking the position of the head using trackers continuously
monitors the position of the left and the right eye. The
corresponding images for the left and the right eyes are then
generated by the graphics hardware. Resolution, field of
view, weight, comfort and cost are some of the criteria to be
considered in comparing the various HMD models. Most
HMDs are expensive, very uncomfortable, and are usually of
poor visual quality compared to other display techniques.
Stereo projection screens
HMDs are not suitable for applications where a team of specialists needs to see the same image and communicate with each other; an example of such an application is collaborative design. It is too expensive to provide each user with an HMD for these multi-user environments. It is also difficult to
communicate naturally with other members
while the user is wearing a HMD during design
reviews. The following technology is available
in the market as an alternative: stereo
workstation monitors, stereo projection
screens, and immersive workbench
technologies.
The stereo workstation monitor is capable of
refreshing the screen at double the normal
scan rate. Stereo-ready computers generate
two alternating, slightly offset images. Stereo
‘active’ glasses are used to view these
alternating images. An infrared controller is
used to control the active glasses.
The main components of screen and projector VR systems include a reasonably high performance
graphics computer and a wide-angle display in excess of 60°. This can be provided by a large screen monitor, a large screen projector, or multiple video projection systems. Projection based systems
provide a greater sense of presence than desktop systems because of the wider field of view. The
quality of the projected image is also a very important factor. For the higher resolution it may be
necessary to employ a number of projection systems, each projector making up a part of the
composite picture. The cost of providing and maintaining such a system can be very high compared to
a desktop system but the increased sense of immersion and presence can often be justified. Unlike VR
systems where a head-mounted display is used, a projection VR system allows a number of people to
share and be involved in the same virtual environment. However, while the system can be used by
many people, only one person is tracked and therefore everything is shown from their eyepoint. This
may cause visual inconsistencies for others who are not close to the tracked user.
The term ‘Reality Centre’ was first used by Silicon Graphics Inc. to refer to VR systems based on
projection systems. The Reality Centre has proved itself to be a very powerful method of presenting
virtual environments. Unlike desktop VR systems, a projection based VR system can go a long way
towards producing a visual image that allows a true sense of scale to be achieved. The use of multiple
projection-based systems can result in extremely high-resolution images being produced - but this
comes with a significant cost increase.
Recently, more advanced Reality Centres
known as CAVES, have been developed to
achieve a greater sense of immersion.
Scientists at the University of Illinois’
Electronic Visualisation Lab first developed
the CAVE in 1992. The CAVE is a
10x10x10-foot structure that sits in a
35x25x13-foot darkened room, consisting
of rear-projected screen walls and a front-projected floor. Currently four projectors are used to project full-colour, computer-generated images onto three walls and the
floor. A head-tracker provides information
about the user’s position to generate
perspective view for each eye. Stereoscopic
glasses are worn by the users to receive
stereo images. Computer-controlled audio provides a sonification capability to multiple speakers.
CAVES may have any number of sides, usually from two to six with the equivalent number of
projectors. Four-sided CAVES are the most common, although there are a number of six-sided
CAVES that have come into service and the number seems to be growing. The CAVE is a multi-person, high resolution, 3-D graphics video and audio environment.
3.3 Position and orientation tracking
One of the important aspects of VR is the interactivity within the virtual environment using your body
and human sensory channels. This means when a user is immersed within a virtual environment, they
want to see the virtual world from different directions just by turning the head. For example, when
the users look down they want to see what is below. Similarly, users want to grab objects with their
hand. In order to support such interactivity, the virtual environment generator must know the
position of users. It needs to track their heads to generate the correct visuals and should track the
hands to allow the grabbing of objects. This is called position and orientation tracking.
There are presently four basic position and orientation tracking methods: mechanical, ultrasonic, magnetic and optical.
Mechanical trackers
Mechanical armatures can be used to provide fast and very accurate tracking. The drawbacks of mechanical sensors are the encumbrance of the device and its restrictions on motion.
Ultrasonic trackers
Ultrasonic trackers have three components: a transmitter, a receiver and an electronic unit. For a given temperature the speed of sound is known and can be used to measure the distances between the transmitters and the receivers. A total of nine distances are measured in order to determine the position and orientation. Drawbacks of ultrasonic tracking are low resolution, long lag times and interference from echoes and other noises in the environment.
Magnetic trackers
Magnetic trackers employ alternating low-frequency fields to determine the moving object’s position
and orientation. Limitations of these trackers are a high latency for the measurement and processing,
range limitations, and interference from ferrous materials within the fields.
Optical trackers
Several optical position-tracking systems have been developed. One method uses a ceiling grid of LEDs
(light emitting diodes) and a head-mounted camera. The LEDs are pulsed in sequence and the
camera’s image is processed to detect the flashes. Two problems with this method are limited space
(grid size) and lack of full motion (rotations). Another optical method uses a number of video cameras
to capture simultaneous images that are correlated by high-speed computers to track objects.
However, processing time (and cost of fast computers) is a major limiting factor with this method.
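The ultrasonic method above relies on the speed of sound being known for a given temperature, so that a transmitter-to-receiver distance can be recovered from a measured time of flight. The Python sketch below illustrates that single calculation using a standard approximation for the speed of sound in air; a real tracker combines nine such distances to solve for position and orientation, which is not shown here.

def speed_of_sound(temp_celsius: float) -> float:
    """Approximate speed of sound in air (m/s) at a given temperature."""
    return 331.3 + 0.606 * temp_celsius

def distance_from_time_of_flight(flight_time_s: float, temp_celsius: float = 20.0) -> float:
    """Distance (m) between one ultrasonic transmitter and one receiver."""
    return speed_of_sound(temp_celsius) * flight_time_s

# A pulse that takes 2.5 ms to arrive at 20 degrees C has travelled about 0.86 m.
print(distance_from_time_of_flight(0.0025))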
3.4 Input devices
The traditional input devices such as mouse and keyboard provide limited interaction within virtual
environments. A virtual environment maintains a true spatial representation of a 3D world and this
means that a user can use a 3D device, which may have up to six degrees of freedom (although many
3D devices only use three degrees of freedom for just tracking position), to exploit the interactive
capabilities available within the virtual environment. The main types of interaction that the user
would like to perform within a virtual environment are navigation and direct manipulation of objects.
Movement and navigation in the virtual environments are necessary during product visualisations
and architectural walkthroughs. This allows the designers to move around the products or buildings
to carry out design analysis. Direct manipulation within virtual environments allows the designers to
interact with virtual objects as in real life. This method of interaction has a direct relationship to tasks
in the real world where a person needs to reach out and pick up objects and place them in different
spatial positions or orientations. Two-dimensional input devices (keyboards and 2D mouse) do not
lend themselves to this task. The devices, which can support navigation and direct manipulation
within virtual environments, are presented below.
3D mouse
The 3D Logitech mouse consists of two parts. The first part
is a triangle with three ultrasonic transmitters, which is
placed on the desktop in front of the mouse. The second
part is the mouse with three microphones. The Logitech 3D
mouse can function as a traditional mouse (in two dimensions) until it is lifted off the desktop. At this stage, the
ultrasonic transmitter tracks the position of the mouse in the
three-dimensional space.
Space mouse
This device uses electromechanical methods to measure
the multiple forces or torques applied to a ball. A small ball
is attached to a base with a series of control buttons. In
order to move in the virtual environment the user can twist
the ball or apply soft directional movements. An electromechanical device is used to detect the pressure the user
applies in each of the six degrees of freedom.
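A device of this kind reports displacements and twists in six degrees of freedom, which the VR software maps onto navigation. The hypothetical Python sketch below shows one simple way such input could move and turn a viewpoint each frame; it illustrates the idea only and is not the driver interface of any particular device.

import math

def navigate(position, heading_deg, device_input, speed=1.0, turn_rate=30.0, dt=0.033):
    """Apply one frame of 6-DOF device input to a viewpoint.

    device_input is (dx, dy, dz, rx, ry, rz), each in the range -1..1, as a
    space-mouse-style device might report pressure on each axis. Only the
    translations and the twist about the vertical axis (rz) are used here.
    """
    dx, dy, dz, rx, ry, rz = device_input
    heading = math.radians(heading_deg)
    x, y, z = position
    # Move relative to the direction the viewpoint is currently facing.
    x += speed * dt * (dx * math.cos(heading) - dy * math.sin(heading))
    y += speed * dt * (dx * math.sin(heading) + dy * math.cos(heading))
    z += speed * dt * dz
    heading_deg += turn_rate * dt * rz
    return (x, y, z), heading_deg

pos, heading = (0.0, 0.0, 1.7), 0.0
pos, heading = navigate(pos, heading, (1.0, 0.0, 0.0, 0.0, 0.0, 0.3))
print(pos, heading)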
Glove devices
There are various glove devices available for
use in the virtual environment. A certain
number of these accurately measure and
track finger and hand movement and adjust
the virtual environment accordingly.
Variations are also available that allow a
range of ‘pinch’ gestures that a developer
can map against actions.
4 How to choose a system
Rather than installing a complete immersive or semi-immersive VR system, companies are perhaps more likely to hire time and consultancy from a specialist VR organisation. This guide is intended to explain what you can achieve through the use of VR. If you are intending to use VR on a project you must ensure that whatever system you choose integrates with your other systems, so that the data used in creating your VR model is not lost to the project.
If you are selecting a VR system, the basic parts can be broken down into an input processor, a simulation processor, a rendering processor and a world database. All of these parts must take account of the time required for processing: every delay in response time degrades the feeling of 'presence' and the reality of the simulation.
4.1 Available VR software systems
There are currently quite a number of different efforts to develop VR technology. The available software falls into two basic categories: toolkits and authoring systems. Toolkits are programming libraries, generally for C or C++, that provide a set of functions with which a skilled programmer can create VR applications. Authoring systems are complete programs with graphical interfaces for creating worlds without resorting to detailed programming. These usually include some sort of scripting language in which to describe complex actions, so they are not really non-programming, just much simpler programming. The programming libraries are generally more flexible and have faster renderers than the authoring systems, but you must be a skilled programmer to use them.
Approximate prices are given below, but these are likely to fluctuate with market rates.
Freeware VR programs
Companies new to VR may wish to experiment with the low end of the VR spectrum where freeware
products are available. There are currently a few fast rendering programs that have been released
with source code and no charge. These programs are generally copyrighted freeware, which means
that the original creators retain the copyright and commercial use is restricted. They are not polished
commercial programs, and are often written by students. However, these programs exist to give
people a very low cost entry into the VR world. Many of these systems also come out of research
establishments and are very powerful complete systems. Historically, some commercial systems started
out this way.
VR programs for under £150
There are a number of commercial VR programs that sell for under £150. Many computer games can
be considered in this category, but these are often closed systems that do not allow much customising
or world building by the user.
Such low cost VR authoring systems allow the user to define their own virtual worlds. Typical
programs have graphical interfaces and include a simple scripting language. Worlds created within
the program can be freely distributed with a player program. There are quite a number of these
worlds available from bulletin boards and other sources.
VR packages under £750
The next level of VR system covers those costing between £150 and £750. There are some excellent
professional packages appearing in this price range. Most of these systems do not require any
specialised hardware beyond the basic computer system.
Programs in this range provide a good environment for the creation of objects and worlds, as well as
fairly powerful scripting languages. Programs towards the top end of the range can support a very
wide variety of input and output devices, including HMDs. Other capabilities available include
interactive control of the viewpoint within the created environment and texture mapping.
VR software for over £750
The heavy-duty professional VR software packages start at around £750 and can go up dramatically.
The hardware required to run these systems varies. Many support a PC-based environment with add-in rendering/graphics cards such as 3D Labs' Wildcat (expensive, over £1000) or NVIDIA's GeForce cards (cheap, under £200, or under £500 for a Quadro). Others run on SGI and other workstation
systems. There are also other packages available that run on vendor specific hardware configurations.
The really high-end packages require extremely expensive hardware ‘Image Generators’ such as
those used in flight simulators.
4.2 Input processes
The input processes of a VR program control the devices used to input information to the computer.
As already described there are a wide variety of possible input devices available: keyboard, mouse,
trackball, joystick, 3D and 6D position trackers (glove, wand, head tracker, body suit, etc.). A
networked VR system would add inputs received from the Internet. A voice recognition system is also
a good augmentation for VR, especially if the user’s hands are being used for the other tasks.
Generally, the input processing of a VR system is kept simple. The object is to get the co-ordinated
data to the rest of the system with a minimal lag time. Some position sensor systems add some filtering
and data smoothing processing. Some glove systems add gesture recognition. This processing step
examines the glove inputs and determines when a specific gesture has been made. Thus, it can
provide a higher level of input to the simulation.
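The filtering and data smoothing mentioned above can be as simple as an exponential moving average applied to each tracked coordinate, trading a little extra lag for steadier input. The Python sketch below shows one minimal way of doing this; it is an illustration, not code from any particular VR toolkit.

def smooth_samples(samples, alpha=0.3):
    """Exponentially smooth a stream of tracker positions.

    alpha close to 1 follows the raw data closely (little smoothing, low lag);
    alpha close to 0 smooths heavily but lags further behind the user.
    """
    smoothed = []
    current = None
    for sample in samples:
        if current is None:
            current = sample
        else:
            current = tuple(alpha * s + (1.0 - alpha) * c
                            for s, c in zip(sample, current))
        smoothed.append(current)
    return smoothed

raw = [(0.00, 0.0, 1.5), (0.02, 0.0, 1.5), (0.01, 0.0, 1.5), (0.25, 0.0, 1.5)]
print(smooth_samples(raw))   # the sudden jump in the last sample is damped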
4.3 Simulation process
The core of a VR program is the simulation system. This is the process that knows about the objects
and the various inputs. It handles the interactions, the scripted object actions, simulations of physical
laws (real or imaginary) and determines the world status. This simulation is basically a discrete process
that is iterated once for each time step or frame. A networked VR application may have multiple
simulations running on different machines, each with a different time step. Co-ordination of these can
be a complex task.
It is the simulation engine that takes the user inputs along with any tasks programmed into the world
such as collision detection, scripts, etc. and determines the actions that will take place in the virtual
world.
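In outline, this amounts to a loop that, once per time step, gathers the inputs, runs the scripted behaviours and collision checks, and updates the world state for the renderers to read. The Python sketch below makes that structure explicit; the handler names are hypothetical placeholders rather than the API of a real VR system.

import time

def run_simulation(world, get_inputs, behaviours, detect_collisions,
                   frame_time=1.0 / 30.0, steps=300):
    """Minimal discrete simulation loop: one iteration per frame or time step."""
    for _ in range(steps):
        start = time.monotonic()
        inputs = get_inputs()                   # user/device input for this step
        for behaviour in behaviours:            # scripted object actions, physics
            behaviour(world, inputs, frame_time)
        detect_collisions(world)                # e.g. hand versus virtual objects
        # Renderers would now read the updated world state for this frame.
        elapsed = time.monotonic() - start
        if elapsed < frame_time:                # keep a steady time step
            time.sleep(frame_time - elapsed)

# Example wiring with trivial stand-in handlers: a clock hand that rotates.
world = {"clock_angle": 0.0}
run_simulation(world,
               get_inputs=lambda: {},
               behaviours=[lambda w, i, dt: w.update(clock_angle=w["clock_angle"] + 360 * dt / 60)],
               detect_collisions=lambda w: None,
               steps=3)
print(world)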
4.4 Rendering processes
The rendering processes of a VR program are those that create the sensations that are output to the user. A networked VR program would also output data to other network processes. There would be separate rendering processes for visual, auditory, haptic (touch/force) and other sensory systems. Each renderer takes a description of the world state from the simulation process, or derives it directly from the world database, for each time step.
Visual renderer
The visual renderer is the most common process, and the one most used in the construction industry. It has a long history in the world of computer graphics and animation.
The major consideration of a graphics renderer for VR applications is the frame generation rate. It is necessary to create a new frame every 1/20th of a second or faster: 20 fps is roughly the minimum rate at which the human brain will merge a stream of still images and perceive smooth animation. (24 fps is the standard rate for film, 25 fps for PAL TV, 30 fps for NTSC TV and 60 fps for Showscan film.) This requirement eliminates a number of rendering techniques such as ray tracing and radiosity; these techniques can generate very realistic images but often take hours to generate a single frame.
Most VR renderers use OpenGL-based real-time rendering systems. OpenGL is a platform-independent 3D graphics library that is available on all systems from PCs to supercomputers.
The visual rendering process is often referred to as a rendering pipeline. This refers to the series of
sub-processes that are invoked to create each frame. A sample-rendering pipeline starts with a
description of the world, the objects, lighting and camera (eye) location in world space. A first step
would be to eliminate all objects that are not visible by the camera. This can be done quickly by
clipping the object bounding box or sphere against the viewing pyramid of the camera. The
remaining objects then have their geometries transformed into the eye co-ordinate system (eye point
at origin). Then the hidden surface algorithm and actual pixel rendering is done.
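The first culling step can be illustrated with a rough test of an object's bounding sphere against the camera's viewing direction and field of view. The Python sketch below uses a simplified viewing cone rather than a full viewing pyramid, purely to show the idea of discarding objects the camera cannot see.

import math

def sphere_possibly_visible(centre, radius, eye, view_dir, fov_deg=60.0):
    """Rough cone test: could this bounding sphere be visible from the eye?

    view_dir is assumed to be a unit vector.
    """
    to_obj = tuple(c - e for c, e in zip(centre, eye))
    dist = math.sqrt(sum(v * v for v in to_obj))
    if dist <= radius:
        return True                      # the eye is inside the sphere
    # Angle between the viewing direction and the direction to the object.
    dot = sum(v * d for v, d in zip(to_obj, view_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / dist))))
    # Widen the cone to allow for the sphere's own angular radius.
    angular_radius = math.degrees(math.asin(min(1.0, radius / dist)))
    return angle <= fov_deg / 2.0 + angular_radius

# An object 10 m straight ahead is kept; one directly behind is culled.
print(sphere_possibly_visible((10, 0, 0), 1.0, (0, 0, 0), (1, 0, 0)))   # True
print(sphere_possibly_visible((-10, 0, 0), 1.0, (0, 0, 0), (1, 0, 0)))  # False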
The pixel rendering is also known as the ‘lighting’ or ‘shading’ algorithm. There are a number of
different methods that are possible depending on the realism and calculation speed available. The
simplest method is called flat shading and simply fills the entire area with the same colour. The next
step up provides some variation in colour across a single surface. Beyond that is the possibility of
smooth shading across surface boundaries, adding highlights, reflections, etc.
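Flat shading, the simplest of these methods, can be summarised in a few lines: every pixel of a surface receives the same colour, computed once from the surface normal and the light direction. The Python sketch below shows that single calculation; smoother methods vary the result across and between surfaces.

def flat_shade(base_colour, normal, light_dir, ambient=0.1):
    """One colour for the whole polygon: simple Lambertian (diffuse) shading.

    normal and light_dir are assumed to be unit vectors.
    """
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    intensity = min(1.0, ambient + diffuse)
    return tuple(intensity * c for c in base_colour)

# A face tilted 60 degrees away from the light comes out at roughly half brightness.
print(flat_shade((1.0, 0.2, 0.2), (0.0, 0.0, 1.0), (0.0, 0.866, 0.5)))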
An effective short cut for visual rendering is the use of ‘texture’ or ‘image’ maps. These are pictures
that are mapped onto objects in the virtual world. Instead of calculating lighting and shading for the
object, the renderer determines which part of the texture map is visible at each visible point of the
object, i.e lighting is still performed when using texture - the texture just allows the polygon surface
present an image plane that is lit by the scene illumination. The resulting image appears to have
significantly more detail than is otherwise possible. Some VR systems have special ‘billboard’ objects
that always face towards the user. By mapping an image onto the billboard, the user can get the
appearance of moving around the object.
Auditory rendering
A VR system can be greatly enhanced by the inclusion of an audio component. This may produce
mono, stereo, or 3D audio. In this way the effect of different surfaces and configurations on the
acoustics of a building can be investigated. Sounds have also been suggested as a means to convey
other information, such as surface roughness. Dragging your virtual hand over sand would sound
different than dragging it through gravel.
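As a very small illustration of producing stereo (rather than fully 3D) audio, the Python sketch below derives left and right channel gains for a sound source from its horizontal direction using a constant-power pan. Real audio localisers use far more sophisticated signal processing, as noted above.

import math

def stereo_gains(azimuth_deg: float):
    """Left/right gains for a source at azimuth (0 = straight ahead,
    -90 = fully left, +90 = fully right), using a constant-power pan."""
    azimuth = max(-90.0, min(90.0, azimuth_deg))
    pan = math.radians(azimuth + 90.0) / 2.0      # 0 .. pi/2
    return math.cos(pan), math.sin(pan)           # (left gain, right gain)

print(stereo_gains(0.0))     # roughly equal in both ears
print(stereo_gains(-90.0))   # all in the left channel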
Haptic rendering
Haptics is the generation of touch and force feedback information. This area is a very new science and
there is much to be learned. There have been very few studies done on the rendering of true touch
sense (such as liquid, fur, etc.). Almost all systems thus far have been exo-skeletons that can be used
for position sensing as well as providing resistance to movement or active force application.
Other senses
The sense of balance and motion can be served to a fair degree in a VR system by a motion platform.
These are used in flight simulators and some theatres to provide some motion cues that the mind
integrates with other cues to perceive motion. It is not necessary to recreate the entire motion
perfectly to fool the mind into a willing suspension of disbelief.
The sense of temperature has seen some technology developments. There exist very small electrical
heat pumps that can produce the sensation of heat and cold in a localised area. These systems are
fairly expensive.
5 Further information
inition: Innovative Graphics Solutions - www.inition.co.uk
NVIDIA - www.nvidia.com
SGI Ltd - www.sgi.com
VE solutions
Distributed Interactive Virtual Environment (DIVE) - www.sics.se/dive
Virtual Presence - www.vrweb.com
Reality centres
Advanced Virtual Prototyping Group, Centre for Virtual Environments, University of Salford - www.avp.nicve.salford.ac.uk
Virtual Reality Centre at Teesside Ltd, Teesside University - www.vr-centre.com
VR Centre for the Built Environment, University College London - www.vr.ucl.ac.uk
Standards
OpenGL - www.opengl.org
Web3D Consortium (VRML) - www.vrml.org
Discussion groups
UK Virtual Reality Special Interest Group - www.crg.cs.nott.ac.uk/grouops/ukvrsig/
Construct IT Management Board
Chairman: Tim Broyd, CIRIA
Director: Martin Betts, University of Salford
Manager: Jason Underwood, Construct IT
Contractors: Derek Blundell, Ballast Construction; John Findlay, Balfour Beatty
Consultants: Martin Jarrett, Citex; Martin Ridgway, WSP Group
Clients: Martin Ong, BAA
IT Industry: George Stevenson, BIW Technologies
Published by Construct IT for Business
Copyright © Construct IT for Business 2002
Design © Design Team, University of Salford. Tel 0161 295 2630
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system or transmitted in any form or by any
means electronic, mechanical or otherwise without the prior written
permission of the copyright holder.
British Library Cataloguing-in-Publication Data
A CIP catalogue record for this book can be obtained from the
British Library
ISBN 1-900491-78-8