Virtual and Augmented Reality
Visuo-haptic colocated augmented reality
A major focus of my recent work has been the extension of the paradigm
of augmented reality towards multimodal interaction. Our team has
focused on the integration of haptic feedback into visual augmented
reality environments. We have provided several solutions that allow
natural, simultaneous interaction with both real and virtual objects.
In this context we have developed methods for the precise calibration
and integration of haptic interfaces in multimodal AR environments,
techniques to improve the accuracy and stability of the visual overlay,
and enhanced visualization methods for better depth perception. An
application target of our work is the support of intraoperative
surgical procedures as well as novel surgical training approaches.
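To illustrate the calibration step: aligning the coordinate frame of a
haptic device with that of the tracked camera can be posed as a
least-squares rigid registration problem. The sketch below is a generic
Kabsch-style solution, not our specific procedure, and all names are
hypothetical; it assumes corresponding 3D points have been collected,
e.g. by touching visual fiducials with the haptic stylus.

    import numpy as np

    def rigid_calibration(p_haptic, p_camera):
        """Least-squares rotation R and translation t such that
        p_camera[i] ~ R @ p_haptic[i] + t (Kabsch algorithm)."""
        cp, cq = p_haptic.mean(axis=0), p_camera.mean(axis=0)
        H = (p_haptic - cp).T @ (p_camera - cq)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t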
In recent research we attempt to go beyond mere visual augmentation
by providing actual haptic augmentation, i.e. the combination of
forces from real and virtual objects. Visuo-haptic augmented reality
opens up completely new possibilities of interaction and media
presentation, e.g. for education, communication, and collaboration.
Data-driven haptic rendering
Akin to image-based rendering in computer graphics, we have developed
the concept of data-driven haptic rendering. The underlying idea is to
acquire interaction data during the manipulation of real objects, which
are subsequently used for rendering virtual counterparts. We have developed
a special recording device as well as methods for the integration of
different sensor signals for the display. Radial basis functions are
used for interpolation of haptic signals acquired from fluids and
solids. We also proposed computationally efficient techniques to
accelerate the interpolation process. A further extension was made
to measure slip during object manipulation. This research domain is
a key target of our current work. We strive to build a general framework
for multimodal data-driven acquisition and rendering, which allows
objects to be captured both visually and haptically during unconstrained
interaction for subsequent display.
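As a concrete illustration of the interpolation step, the sketch below
fits and evaluates a plain radial basis function model; the Gaussian
kernel and the variable names are illustrative assumptions. Here X
could hold recorded contact states (e.g. penetration depth and
velocity) and f the corresponding measured force magnitudes.

    import numpy as np

    def fit_rbf(X, f, eps=1.0):
        """Solve for weights w with sum_j w_j * phi(|x_i - x_j|) = f_i."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        Phi = np.exp(-(eps * d) ** 2)        # Gaussian kernel matrix
        return np.linalg.solve(Phi, f)

    def eval_rbf(X, w, x, eps=1.0):
        """Interpolated haptic signal at a new contact state x."""
        d = np.linalg.norm(X - x, axis=-1)
        return np.exp(-(eps * d) ** 2) @ w

Note that the dense solve is cubic in the number of samples and every
evaluation touches all centers, which illustrates why the acceleration
techniques mentioned above matter for haptic update rates.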
Psychophysical studies and user evaluations
An indispensable element of any interactive system is the user. For a
successful system, it is central to consider the human in the loop. To
meet this goal, we conducted numerous user studies and experiments to
determine perceptual thresholds, quantify rendering performance, examine
system usability, and investigate cross-modal effects. For instance, we
established perceptual thresholds for small force detection, which
were used to optimize haptic rendering algorithms. Further, we showed
that unavoidable visual delays cause soft objects to be perceived as
being stiffer in augmented reality, which has practical implications
for medical simulation. We also devised a quantitative approach for
evaluating the overall fidelity of haptic rendering methods.
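For illustration, detection thresholds of this kind are often estimated
with an adaptive staircase; the sketch below implements a standard
1-up/2-down procedure, which converges on the ~70.7% point of the
psychometric function. The source does not specify which procedure we
used, so this is a generic stand-in; detect_trial is a hypothetical
callback reporting whether the subject perceived a force of the given
magnitude.

    def force_threshold(detect_trial, level=0.5, step=0.05, max_reversals=8):
        """1-up/2-down staircase: step down after two consecutive
        detections, step up after each miss; average reversal levels."""
        correct, last_dir, reversals = 0, 0, []
        while len(reversals) < max_reversals:
            if detect_trial(level):
                correct += 1
                if correct < 2:
                    continue          # need a second detection to step down
                correct, d = 0, -1
            else:
                correct, d = 0, +1
            if last_dir and d != last_dir:
                reversals.append(level)   # direction change: record reversal
            last_dir = d
            level = max(0.0, level + d * step)
        return sum(reversals) / len(reversals)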
Surgical training systems
Our research group has been developing surgical simulation systems
in close collaboration with clinical partners for more than a decade.
A key target of our work is to achieve a high level of realism. We
strive to go beyond rehearsal of basic manipulative skills, and to
enable the training of procedural skills like decision making and
problem solving. Furthermore, we address the integration of the
simulation systems into the medical curriculum.
An example of this is the development of a simulator for hysteroscopic
training, HystSim, which also led to the foundation of our spin-off
company VirtaMed.
Another example is a simulator for arthroscopic interventions. In the
context of surgical simulation, our group has focused on the
development of various methods required in such systems, including
real-time cutting of triangular and tetrahedral meshes, generation of
surgical training scenes, rendering of haptic feedback, and real-time
visualization of the hydrometra.
Training scene generation
A key element of effective VR-based training is the ability to
generate variable scenarios. In this way, adaptation of a trainee to a
specific scene can be avoided, and the natural variation encountered in
most real-life situations can be included. This
research covers the main components needed to define a training
scene: the generation of the scene geometry, the modeling of
organ appearance, and the definition of biomechanical parameters.
The first point covers the derivation of the healthy anatomy as
well as typical pathological variations. The second point deals
with the synthesis of appropriate textures for organ surfaces as
well as the mapping of these to the previously created geometries.
The final element focuses on techniques to set the biomechanical
parameters describing the deformation behavior of the soft tissue
objects. We successfully applied optimization-based techniques
as well as analytical derivations.
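The optimization-based setting of biomechanical parameters can be
sketched as follows. This is a minimal single-parameter sketch under
stated assumptions, not our full procedure: simulate is a hypothetical
forward solver returning the deformation field for a candidate Young's
modulus, and u_measured could come from indentation experiments or
expert annotation.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_stiffness(simulate, u_measured, bounds=(1e2, 1e6)):
        """Young's modulus E minimizing the mismatch between the
        simulated and the measured soft-tissue deformation."""
        def loss(E):
            return np.linalg.norm(simulate(E) - u_measured) ** 2
        return minimize_scalar(loss, bounds=bounds, method="bounded").x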
Real-time mesh cutting
A central training objective of virtual-reality-based surgical
simulation is the removal of pathologic tissue. This necessitates
stable, real-time updates of the underlying mesh representation.
We have developed a hybrid cutting approach for tetrahedral and
triangular meshes tailored to our hysteroscopic training system.
It combines topological updates by subdivision with adjustments
of the existing topology, complemented by a subsequent local
mesh optimization step.
Moreover, the mechanical and the visual
model are decoupled, thus allowing different resolutions of the
underlying mesh representations. An arbitrary, user-defined cut
surface can be closely approximated while avoiding the creation
of small or badly shaped elements, thus strongly reducing stability
problems in the subsequent deformation computation.
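The core idea of combining subdivision with adjustment of the existing
topology can be illustrated on a single edge crossed by the cut
surface; snap_tol is a hypothetical tolerance, and the actual method of
course also handles the full connectivity updates of the surrounding
elements.

    def split_or_snap(v0, v1, t, snap_tol=0.15):
        """Treat a cut crossing edge (v0, v1) at parameter t in [0, 1]:
        snap to an existing vertex when the intersection lies close to
        it (adjusting the existing topology and avoiding sliver
        elements), otherwise subdivide the edge with a new vertex."""
        if t < snap_tol:
            return "snap", v0
        if t > 1.0 - snap_tol:
            return "snap", v1
        vn = tuple(a + t * (b - a) for a, b in zip(v0, v1))
        return "subdivide", vn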
Human-Computer Interaction in Medicine
A further direction of my work is the development of new algorithms and
systems for medical diagnosis and planning. The leitmotif of this
activity is the optimal cooperation between interactive algorithms
and the human operator.
Multi-modal data segmentation
Extensive research effort has been invested in recent years in improving
interactive segmentation algorithms. However, the human-computer
interface, a substantial part of any interactive setup, is usually
not investigated. The aim of this work is the optimal cooperation
between interactive image analysis algorithms and human operators.
A visuo-haptic interaction tool for medical segmentation has been
designed, which opens the way to virtual endoscopy of the small
intestine. The system has been installed and is used at the Radiology
Department of the University Hospital Zurich.
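As a generic illustration of the kind of interactive algorithm involved
(a simple stand-in, not the specific visuo-haptic tool), the sketch
below grows a segmentation from a user-placed seed voxel; tol is a
hypothetical intensity tolerance the operator could adjust
interactively.

    import numpy as np
    from collections import deque

    def region_grow(volume, seed, tol):
        """Seeded region growing on a 3D scan: accept 6-connected
        neighbours whose intensity stays within tol of the seed."""
        mask = np.zeros(volume.shape, dtype=bool)
        ref = float(volume[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            z, y, x = queue.popleft()
            for n in ((z+1, y, x), (z-1, y, x), (z, y+1, x),
                      (z, y-1, x), (z, y, x+1), (z, y, x-1)):
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                   and not mask[n] and abs(float(volume[n]) - ref) <= tol:
                    mask[n] = True
                    queue.append(n)
        return mask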
Enhanced surgical planning
A further direction is the support of surgical planning procedures
with enhanced interfaces. For instance, in joint projects with the
University Hospital Zurich and the Balgrist University Hospital,
various systems were developed for planning surgical interventions
on complex fractures of the hip and shoulder joints, as well as for
forearm surgery.