Articles in Control Magazine

Download our regular contributions to Control Magazine.

Issue 30, March 2012
Issue 29, January 2012
Issue 28, December 2011
Issue 27, November 2011
Issue 26, August 2011
Issue 25, July 2011
Issue 24, April 2011
Issue 23, February 2011
Issue 22, January 2011
Issue 21, November 2010
Issue 20, October 2010
Issue 18, June 2010
Issue 17, April 2010
Issue 16, January 2010

Control International Edition July 2011
Control International Edition March 2011
Control International Edition August 2010
Control International Edition April 2010

About GATE

GATE final publication 2012
Results from the GATE research project
a 75-page overview (pdf 4.7 Mb)

GATE Magazine 2010
a 36-page overview of the GATE project (pdf 5.3 Mb)

Research themes:
Theme 1: Modeling the Virtual World
Theme 2: Virtual characters
Theme 3: Interacting with the world
Theme 4: Learning with simulated worlds

Pilots:
Pilot Education Story Box
Pilot Education Carkit
Pilot Safety Crisis management
Pilot Healthcare Scottie
Pilot Healthcare Wiihabilitainment

Knowledge Transfer Projects:
Sound Design 
CIGA 
Agecis 
CycART 
VidART
Motion Controller
Compliance
Mobile Learning
Glengarry Glen Ross
CASSIB
EIS
Enriching Geo-Specific Terrain
Pedestrian and Vehicle Traffic Interactions
Semantic Building Blocks for Declarative Virtual World Creation 
Computer Animation for Social Signals and Interactive Behaviors

Address

Center for Advanced Gaming and Simulation
Department of Information and Computing Sciences
Utrecht University
P.O. Box 80089
3508 TB Utrecht
The Netherlands
Tel +31 30 2537088

Acknowledgement

ICTRegie is a compact, independent organisation consisting of a Supervisory Board, an Advisory Council, a director and a bureau. The Minister of Economic Affairs and the Minister of Education, Culture and Science bear the political responsibility for ICTRegie. The organisation is supported by the Netherlands Organisation for Scientific Research (NWO) and SenterNovem.

WP 2.1 Modeling motor behavior

Project: New methods to automatically generate virtual character movements

Project: Continuous interactive dialogs with Embodied Conversational Agents

 

New methods to automatically generate virtual character movements
In computer games and simulations, humans are represented by virtual characters. The realism of their motor behaviour critically determines user engagement and the validity of interactive simulations. Based on the experimental study of human manoeuvring performance, we are developing realistic parameter-based motion models for virtual characters. Our objective is to identify the principles of natural human manoeuvring performance (motor behaviour, speed, accuracy). By studying human motor behaviour in complex structured real environments, we can derive general parameterized motion models that allow automatic generation of a diversity of virtual character movements. This will enable more realistic simulation of the motor behaviour of autonomous virtual characters, and more natural interaction with avatars driven by users who are restricted by the limited field of view of display devices. Particularly in serious gaming and training applications, this may ultimately lead to increased user engagement and enhanced transfer of skills to the real world.
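To illustrate the idea of a parameter-based motion model, the following minimal Python sketch derives a character's walking speed from a handful of parameters. All names and coefficients (preferred_speed, fov_horizontal, the penalty terms) are hypothetical placeholders, not the project's actual model, which would be fitted to the measured human data.

```python
from dataclasses import dataclass

@dataclass
class ManoeuvreParameters:
    """Hypothetical parameters for a virtual character's manoeuvring model."""
    preferred_speed: float  # m/s on an unobstructed path
    fov_horizontal: float   # horizontal viewing angle in degrees
    fov_vertical: float     # vertical viewing angle in degrees

def walking_speed(params: ManoeuvreParameters, clutter_density: float) -> float:
    """Toy speed model: speed drops with clutter and with a narrow field of view.
    The coefficients are made up; a real model would be fitted to the
    experimentally measured human manoeuvring data."""
    fov_penalty = max(0.0, (120.0 - params.fov_horizontal) / 120.0) * 0.5
    clutter_penalty = min(0.6, 0.3 * clutter_density)
    return params.preferred_speed * (1.0 - fov_penalty - clutter_penalty)

# Example: a character wearing a narrow (40 degree) display in a cluttered room.
params = ManoeuvreParameters(preferred_speed=1.4, fov_horizontal=40.0, fov_vertical=30.0)
print(f"walking speed: {walking_speed(params, clutter_density=1.0):.2f} m/s")
```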

We investigated human obstacle avoidance behaviour under restricted viewing conditions, varying the horizontal and vertical viewing angles independently. We found that even a small restriction of the horizontal viewing angle causes a considerable decrease in speed while traversing an obstacle course. The results further show that restrictions in both directions affect obstacle avoidance behaviour; however, enlarging the vertical viewing extent yields the largest performance improvements. These results indicate, for instance, that most commercially available head-mounted displays (HMDs) are not suitable for use in cluttered simulated environments (e.g. military and first-responder training applications) due to their limited vertical viewing angles. Our findings can be used to select and develop HMDs and other display devices with a field of view appropriate for any given application.
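As a toy illustration of how such findings could inform display selection, the sketch below screens an HMD's field of view against minimum requirements for cluttered environments. The threshold values are invented for illustration; the study's actual recommendations would supply the real numbers.

```python
def hmd_suitable_for_cluttered_env(horizontal_fov: float, vertical_fov: float,
                                   min_horizontal: float = 75.0,
                                   min_vertical: float = 60.0) -> bool:
    """Screen an HMD's field of view (in degrees) for cluttered-environment use.
    The minimum values are illustrative placeholders, not thresholds derived
    in the study; note the emphasis on the vertical extent."""
    return horizontal_fov >= min_horizontal and vertical_fov >= min_vertical

# Example: many HMDs have a comparatively small vertical field of view.
print(hmd_suitable_for_cluttered_env(horizontal_fov=90.0, vertical_fov=45.0))  # False
```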

Using a motion capture system, we are currently analysing several kinematic parameters representing human obstacle crossing behaviour. This enables us to model behavioural changes as shifts in strategy. Our initial results show that, in normal (unrestricted) viewing conditions, humans adopt strategies prioritizing energy conservation and time efficiency. With a restriction of the vertical viewing angle, people appear to counter the risk of tripping by increasing their step length and toe clearance while maintaining their speed, thus sacrificing energy conservation. Additional viewing restrictions appear to cause participants to reduce their speed and to increase their step length and toe clearance even further. Next, we will investigate the effects of viewing restrictions on a range of different manoeuvring tasks and in various circumstances. The results of this research will be useful for implementing realistic manoeuvring performance in virtual environments, and for driving virtual agents.
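A minimal sketch of how these strategy shifts could be encoded for a virtual character follows. The kinematic parameters (speed, step length, toe clearance) come from the description above, but the adjustment factors are illustrative placeholders rather than the fitted motion-capture estimates.

```python
from dataclasses import dataclass

@dataclass
class CrossingStrategy:
    """Kinematic parameters for stepping over an obstacle (units: m/s, m, m)."""
    speed: float
    step_length: float
    toe_clearance: float

def adapt_strategy(base: CrossingStrategy, vertical_restricted: bool,
                   horizontal_restricted: bool) -> CrossingStrategy:
    """Encode the observed strategy shifts as multiplicative adjustments.
    The factors are made up; a real model would use the fitted estimates."""
    speed, step, toe = base.speed, base.step_length, base.toe_clearance
    if vertical_restricted:
        # Counter the risk of tripping: longer steps and higher toe clearance,
        # with speed initially maintained (sacrificing energy conservation).
        step *= 1.15
        toe *= 1.25
    if vertical_restricted and horizontal_restricted:
        # Additional restriction: slow down and exaggerate the adjustments further.
        speed *= 0.8
        step *= 1.1
        toe *= 1.1
    return CrossingStrategy(speed, step, toe)

base = CrossingStrategy(speed=1.4, step_length=0.75, toe_clearance=0.12)
print(adapt_strategy(base, vertical_restricted=True, horizontal_restricted=True))
```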

Workpackage
2.1 Modeling motor behavior

Partners
Utrecht University
TNO Human Factors

Key Publications
S.E.M. Jansen et al. (2008). Effects of horizontal field-of-view restriction on manoeuvring performance through complex structured environments. Proc. 5th Symposium on Applied Perception in Graphics and Visualization, p. 189.
S.E.M. Jansen et al. (2010). Restricting the vertical and horizontal extent of the field-of-view: effects on manoeuvring performance. The Ergonomics Open Journal, 3, pp. 19-24.
More publications

Contact details
Lex Toet, TNO Human Factors
lex.toet(at)tno.nl

Continuous interactive dialogs with Embodied Conversational Agents

New methods to make interactive Embodied Conversational Agents appear more natural through continuous, parallel interaction using verbal and non-verbal communication.
Interactive Embodied Conversational Agents (ECAs) are currently used in an interaction paradigm in which the user and the system take turns to talk. If the interaction capabilities of ECAs are to become more human-like and they are to function in social settings, their design should shift from this turn-based paradigm to one of continuous interaction, in which all partners perceive each other, express themselves, and coordinate their behavior with each other, continually and in parallel.
The main objective of this project is to develop a continuous interactive ECA that is capable of perceiving and generating conversational verbal and non-verbal behavior fully in parallel, and of continuously coordinating this behavior with its perception. To this end, we will develop and implement the sensing, interaction, and generation components required to realize continuous behavioral interaction.
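The sketch below illustrates this architecture at its most basic: a sensing component and a combined coordination/generation component running concurrently, so that percepts can influence behavior while it is being produced. It is a hypothetical Python toy, not the project's actual implementation; all names and events are invented.

```python
import queue
import threading
import time

# Percepts flow from the sensing component to coordination/generation.
percepts: "queue.Queue[str]" = queue.Queue()

def sensing() -> None:
    """Stand-in sensor: emits user events (e.g. head nods) as they are detected."""
    for event in ["nod", "vocal:hmm", "gaze-away"]:
        time.sleep(0.5)
        percepts.put(event)

def coordination_and_generation() -> None:
    """Produce output while consuming percepts, adjusting behavior on the fly."""
    end = time.time() + 2.0
    while time.time() < end:
        try:
            event = percepts.get(timeout=0.1)
            print(f"coordinating ongoing behavior to percept: {event}")
        except queue.Empty:
            print("generating speech/gesture...")

threading.Thread(target=sensing, daemon=True).start()
coordination_and_generation()
```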

We developed the virtual human platform "Elckerlyc" (http://hmi.ewi.utwente.nl/showcase/Elckerlyc) for generating multimodal verbal and nonverbal behavior for Virtual Humans (VHs). Elckerlyc is designed for continuous interaction with tight temporal coordination between the behavior of a VH and its interaction partner. It provides a mix of the precise temporal and spatial control offered by procedural animation and the realism of physical simulation. It is highly modular and extensible, and can execute behaviors specified in the Behavior Markup Language. Elckerlyc allows continuous interaction by directly revising bodily behavior based on (short-term) prediction. This leads to a flexible planning approach in which part of the planning can be done beforehand, while part has to be done on the fly; in the latter case, parts of the behavior have already been executed, while other parts can still be modified. We focus on the specification and execution of such flexible plans. We have provided abstractions for the prediction of sensor input and have shown how we can synchronize our multimodal output to these predictions in a flexible manner. To demonstrate the feasibility of the multimodal output generation part of our system without investing a lot of work in the sensing part, we have currently implemented placeholders for the predictors.
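The following sketch illustrates the flexible-planning idea in miniature: behaviors in a plan are anchored to a predicted event time supplied by a placeholder predictor, and the not-yet-executed parts are retimed whenever the prediction changes. The class and method names are hypothetical and do not reflect Elckerlyc's actual API.

```python
class PlaceholderPredictor:
    """Stand-in for a sensing component: predicts when a user event will occur.
    Mirrors the placeholder predictors mentioned above (hypothetical API)."""
    def __init__(self, predicted_time: float):
        self.predicted_time = predicted_time

    def update(self, new_time: float) -> None:
        self.predicted_time = new_time

class FlexiblePlan:
    """A behavior plan whose not-yet-executed parts can be retimed on the fly."""
    def __init__(self, behaviors):
        # behaviors: (name, offset) pairs; offsets are relative to the sync point
        self.behaviors = list(behaviors)
        self.executed = []  # names of behaviors that have already run

    def retime(self, predictor: PlaceholderPredictor):
        """Return absolute start times for the remaining behaviors, anchored to
        the current prediction; already-executed behaviors are left untouched."""
        return [(name, predictor.predicted_time + offset)
                for name, offset in self.behaviors if name not in self.executed]

predictor = PlaceholderPredictor(predicted_time=3.0)
plan = FlexiblePlan([("nod", -0.2), ("speak", 0.0), ("gesture", 0.4)])
print(plan.retime(predictor))   # schedule against the initial prediction
predictor.update(3.6)           # sensing revises its prediction
print(plan.retime(predictor))   # remaining behaviors shift accordingly
```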

At the Enterface 2010 workshop, a virtual human system will be designed and built that employs Elckerlyc's continuous interaction capabilities. This ECA will be able to perceive and generate conversational verbal and non-verbal behavior fully in parallel, and will continuously coordinate this behavior to its perception. Thus, the ECA responds to (or explicitly ignores) the user's head nods and short vocal utterances such as "yeah" and "hmm", and can try to evoke or encourage such verbal or non-verbal utterances from him or her. Actively dealing with and responding to the user's verbal and nonverbal behavior requires the ECA to be capable of handling overlap, re-planning and re-timing expressions, ignoring interrupt attempts by the user, and abandoning planned utterances (letting itself, in effect, be interrupted). We will model and implement the sensing, interaction, and generation required for this continuous interaction. An evaluation study will be performed to investigate how the newly developed ECA is perceived by human users in terms of politeness and certain personality traits.
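A toy policy for the overlap handling described above might look as follows. The event labels and the importance threshold are invented for illustration; a real ECA would base this decision on much richer dialog state.

```python
from enum import Enum, auto

class Response(Enum):
    CONTINUE = auto()  # treat the user's utterance as a backchannel, keep talking
    RETIME = auto()    # briefly pause and re-time the remainder of the utterance
    ABANDON = auto()   # let itself be interrupted; drop the planned utterance
    IGNORE = auto()    # explicitly ignore an interrupt attempt

def handle_overlap(user_event: str, utterance_importance: float) -> Response:
    """Toy policy for overlapping user behavior (all values hypothetical)."""
    if user_event in ("nod", "vocal:hmm", "vocal:yeah"):
        return Response.CONTINUE          # backchannels do not require yielding
    if user_event == "interrupt":
        # Important utterances are defended; unimportant ones are abandoned.
        return Response.IGNORE if utterance_importance > 0.8 else Response.ABANDON
    return Response.RETIME

print(handle_overlap("vocal:hmm", utterance_importance=0.5))  # Response.CONTINUE
print(handle_overlap("interrupt", utterance_importance=0.3))  # Response.ABANDON
```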

Workpackage
WP 2.1 Modeling motor behavior

Partners
University of Twente, Human Media Interaction

Key Publications
H. van Welbergen et al. (2009). An Animation Framework for Continuous Interaction with Reactive Virtual Humans. Proc. 22nd Annual Conf. on Computer Animation and Social Agents, pp. 69-72.
A. Nijholt et al. (2008). Mutually Coordinated Anticipatory Multimodal Interaction. In: Nonverbal Features of Human-Human and Human-Machine Interaction, pp. 70-89.
H. van Welbergen et al. (2009). Real Time Character Animation: A Trade-off Between Naturalness and Control. Proc. Eurographics, pp. 45-72.
More publications

Contact details
Job Zwiers, University of Twente
Zwiers(at)ewi.utwente.nl