VISCERAL

Objective:

VISCERAL (Visual Concept Extraction Challenge in Radiology) aims at distributing a substantial amount of medical imaging data together with expert annotations to the research community. The distribution will occur in the context of large benchmarking campaigns that allow researchers to evaluate their algorithms on test data. An evaluation framework will be set up to comparatively assess a wide variety of algorithms in the context of medical image analysis, in particular for localization and segmentation of anatomical structures, as well as detection of tumors and image retrieval.

With the increasing amount of patient data obtained in hospitals and the requirement to diagnose these data, tools for computerized diagnosis aid have become an important direction to increase the productivity of radiologists and avoid mistakes. To train such systems for diagnosis aid, manually annotated data sets are required for the underlying machine learning tools. In radiology, a first step for annotating data is the manual segmentation of image volumes to separate the various structures in the images. Manual segmentation, on the other hand, demands intensive and time-consuming labour from the radiologists. Manual segmentation of 3D structures can also lead to errors and variation in the results, depending on the observers’ experience. Semi-automatic segmentation methods allow the radiologists to take the final decision on the resulting 3D object and to use segmentation tools as support for this task.

Several tools have been developed for the annotation of anatomical structures or pathologies present in medical images. There is a strong need to evaluate the outcomes of these methods on standardized datasets. VISCERAL aims to bridge this gap by introducing datasets covering multiple image modalities and anatomical structures. We also aim to introduce an online evaluation platform, to which algorithms can be submitted such that the test data is never distributed to the participants.
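
As an illustration of the kind of comparison such an evaluation framework performs, the sketch below computes the Dice overlap between a submitted binary segmentation and an expert annotation. This is a generic, commonly used segmentation metric, not necessarily the exact measure used by VISCERAL, and the toy volumes are invented.

```python
import numpy as np

def dice_score(prediction, reference):
    """Dice overlap between two binary segmentation masks (1 = structure)."""
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    total = prediction.sum() + reference.sum()
    if total == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# toy 3D volumes standing in for an algorithm result and an expert annotation
pred = np.zeros((10, 10, 10), dtype=np.uint8)
ref = np.zeros_like(pred)
pred[2:6, 2:6, 2:6] = 1
ref[3:7, 3:7, 3:7] = 1
print(f"Dice = {dice_score(pred, ref):.3f}")
```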

Project-Website: http://www.visceral.eu/

Participants: Prof. Bjoern Menze, Prof. Orçun Göksel, Prof. Gábor Székely

Finished in: 2015

BEAMING

Objective:

Today, in spite of advanced video conferencing, shared virtual environments, and gaming environments such as Second Life, it is still simply much more efficient to physically travel to a remote location for business, scientific or family meetings—even if at a huge environmental, energetic and opportunity cost.

The science and technology developed in BEAMING will for the first time give people a real sense of physically being in a remote location with other people, and vice versa—without actually traveling.

BEAMING is a four year FP7 EU collaborative project which started on Jan 1st 2010.

The main task of the Virtual Reality in Medicine group is the development of methods for visual and haptic capture of objects, as well as the remote display of these acquired digital copies.

Project-Website: http://www.beaming-eu.org

Participants: PD Dr. Matthias Harders, Dr. Seokhee Jeon

Partners:

Starlab Barcelona, Spain
Universitat de Barcelona, Spain
University College London, United Kingdom
Eidgenössische Technische Hochschule Zürich, Switzerland
Scuola Superiore di Studi Universitari e di Perfezionamento Sant’ Anna, Italy
Technion - Israel Institute of Technology, Israel
Interdisciplinary Center Herzliya, Israel
IBM Haifa Research Lab, Israel
Consorci Institut d’Investigacions Biomediques August Pi i Sunyer, Spain
Aalborg Universitet, Denmark
Technische Universitaet Muenchen, Germany

Finished in: 2013

DDMOAR

Objective:

Data-Driven Multimodal Object Acquisition and Rendering

The target of this project is to build a general framework for multimodal data-driven acquisition and rendering. The ultimate goal is to visually as well as haptically capture an object during unconstrained interaction and manipulation for subsequent display. 

To this end, we will extend earlier attempts at data-driven haptic object acquisition and rendering. Inhomogeneous materials or geometrical object features such as corners or edges will be fully reconstructed. Special consideration will be given to the handling of deformable objects.

Participants: PD Dr. Matthias Harders, Anatolii Sianov

Partners:

Eidgenössische Technische Hochschule Zürich, Switzerland

Finished in: 2013

HAMAM

Objective:

HAMAM: European Highly Accurate Breast Cancer Diagnosis through Integration of Biological Knowledge, Novel Imaging Modalities, and Modelling

HAMAM consists of 9 project partners from 7 countries with leading expertise in the field of breast imaging diagnosis, with EIBIR as the coordinating partner. The 3-year project started in September 2008 and is supported by the European Commission.

Project-Website: http://www.hamam-project.org/cms/website.php

Participants: Dr. Jan Lesniak, Dr. Christine Tanner, Prof. Gábor Székely, Dr. Rémi Blanc

Partners:

EIBIR (AT)
University College London (UK)
MEVIS Research Gmbh (DE)
MEVIS Medical Solutions AG (DE)
ETH Zurich - Computer Vision Laboratory (CH)
Radboud Universiteit Nijmegen (NL)
The University of Dundee (UK)
CHARITE Berlin (DE)
Boca Raton Community Hospital, Inc (US)

Finished in: 2012

Virtual Patient Models

Objective:

A key element of training with virtual reality surgical simulators is the definition of the simulated patients. This step typically includes the generation of geometric models of healthy and pathological anatomy, organ textures, vessel structures, and the determination of tissue deformation parameters. The target of this project is to extend and modify previously developed generic methods to patient-specific scenarios. Moreover, the various modules will be combined into a user-friendly, complete training scene generation tool. In this context, aspects of optimal human-computer interaction, workflow, and usability will be addressed.

Participants: PD Dr. Matthias Harders, Thomas Wolf, Michael Emmersberger

Partners:

VirtaMed AG, Switzerland
Eidgenössische Technische Hochschule Zürich, Switzerland

Finished in: 2012

Patient-Specific Model Generation for Surgical Training Simulation

Objective:

The target of this project is to extend and modify previously developed generic methods to patient-specific scenarios. Moreover, the various modules will be combined into a user-friendly, complete training scene generation tool. In this context, aspects of optimal human-computer interaction, workflow, and usability will be addressed.

A key element of training with virtual reality surgical simulators is the definition of the simulated patients. This step typically includes the generation of geometric models of healthy and pathological anatomy, organ textures, vessel structures, and the determination of tissue deformation parameters.

Participants: PD Dr. Matthias Harders, Thomas Wolf, Michael Emmersberger

Partners:

VirtaMed AG, Switzerland
Eidgenössische Technische Hochschule Zürich, Switzerland

Finished in: 2012

SCOVIS

Objective:

SCOVIS will significantly improve the versatility and the performance of current monitoring systems for security purposes and workflow control in critical infrastructures. The resulting technology will enable the easy installation of intelligent supervision systems, which has not been possible so far due to the prohibitively high manual effort and the inability to model complex visual processes. An automotive industry site has been selected for the evaluation of the SCOVIS research tools in a real-world environment. SCOVIS supports the automatic detection of a) behaviours, b) workflow violations and c) localization of salient moving or static objects in scenes monitored by multiple cameras (static or active).

The project investigates weakly supervised learning algorithms and self-adaptation strategies for the analysis of visually observable workflows and behaviours. The goal of these algorithms is to use a relatively small number of labelled data, mainly at the initial stage of the algorithm, while unlabelled data are exploited afterwards. Camera network coordination is also supported, so that complex behaviours can be identified as combinations of spatio-temporal object relations in multiple scenes. SCOVIS supports self-configuration (the system is able to automatically calculate the camera spatial relations) and adaptation (the models are automatically enriched over time via online data acquisition and unsupervised learning strategies).

User interaction is also foreseen for improving behaviour detection through relevance feedback mechanisms. In this way, the user evaluates the system performance and the rules used by the system are automatically updated (without requiring any additional knowledge from the user about the system operation) so that better decisions are reached in subsequent responses. The proposed research will be performed with absolute respect for the privacy and personal data of monitored individuals.
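
The weakly supervised setting sketched above, training from a small labelled seed set and then exploiting unlabelled data, can be illustrated with a generic self-training loop. The snippet below is only a sketch of that principle under assumed synthetic data, a plain logistic-regression classifier and an arbitrary confidence threshold; it is not the SCOVIS algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# synthetic stand-in for workflow observations: 2D features, two behaviour classes
X = rng.normal(size=(500, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 250, axis=0)
y_true = np.repeat([0, 1], 250)

y_work = np.full(len(X), -1)                       # -1 marks unlabelled samples
seed = rng.choice(len(X), size=20, replace=False)  # small labelled seed set
y_work[seed] = y_true[seed]

clf = LogisticRegression()
for _ in range(10):                                # self-training rounds
    labelled = y_work != -1
    clf.fit(X[labelled], y_work[labelled])
    proba = clf.predict_proba(X[~labelled])
    confident = proba.max(axis=1) > 0.95           # keep only confident pseudo-labels
    if not confident.any():
        break
    idx = np.flatnonzero(~labelled)[confident]
    y_work[idx] = proba[confident].argmax(axis=1)

print("labelled samples:", (y_work != -1).sum(), "of", len(X))
print("accuracy on all data:", (clf.predict(X) == y_true).mean())
```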

Project-Website: http://www.scovis.eu

Participants: Dr. Helmut Grabner, Dr. Severin Stalder, Prof. Luc Van Gool

Partners:

Institute of Communication and Computer Systems/National Technical University of Athens (GR)
University of Southampton (UK)
Joanneum Research Forschungsgesellschaft mbH (AT)
Atos Origin Sociedad Anonima Espanola Unipersonal (ES)
Katholieke Universiteit Leuven - Interdisciplinary Centre for Law and Information Technology (BE)
NISSAN Motor Iberica SA (ES)

Finished in: 2011

Exploring the Potential of Visuo-Haptic Augmented Reality for Medical Education

Objective:

In this collaborative project, we explore the application of visuo-haptic augmented reality in medical education. The goal is to augment both the real visual and haptic environment seamlessly with virtual information, while maintaining full functionality. One targeted future application area is training of breast cancer screening.

The goals of this collaboration are two-fold. First, we focus on haptic AR systems that enable us to augment the attributes of real objects with virtual force feedback. Second, we apply the integrated visuo-haptic AR system to medical training and evaluate the performance in empirical usability experiments.

Participants: PD Dr. Matthias Harders, Dr. Seokhee Jeon

Partners:

Eidgenössische Technische Hochschule Zürich, Switzerland
POSTECH, South Korea

Finished in: 2011

PASSPORT

Objective:

The PASSPORT for Liver Surgery project aims to develop patient-specific models of the liver which integrate anatomical, functional, mechanical, appearance, and biological modeling. To these static models, PASSPORT will add dynamic modeling of liver deformation (including deformation due to breathing) and regeneration modeling.

These models, integrated in the Open Source framework SOFA, will culminate in the first multi-level and dynamic “Virtual patient-specific liver”, allowing not only to accurately predict the feasibility, results and success rate of a surgical intervention, but also to improve surgeons’ training via a fully realistic simulator, thus directly impacting the recovery of patients suffering from liver diseases.

The main task of the Virtual Reality in Medicine group is the development of methods for stable mesh modifications as well as patient-specific texturing.

Project-Website: http://www.passport-liver.eu/

Participants: PD Dr. Matthias Harders, Dr. Basil Fierz, Olexiy Lazarevych

Partners:

Institut de Recherche contre les Cancers de l’Appareil Digestif, France
Eidgenössische Technische Hochschule Zürich, Switzerland
Technische Universität München, Germany
Université Catholique de Louvain, Belgium
Imperial College of London, United Kingdom
Institut National de Recherche en Informatique et Automatique, France
Universität Leipzig, Germany
University College of London, United Kingdom
Université Louis Pasteur, France
Karl Storz GmbH & Co. KG, Germany

Finished in: 2011

COBOL

Although body language is an important element of communication, there has hardly been any scientific research into it until now. In 2006, the European Commission launched the COmmunication with Emotional BOdy Language (COBOL) project in the 6th EU framework programme. As part of the project, psychologists, computer scientists, and engineers are contributing their specific knowledge to researching emotional postures and movements. The aim is to develop tools to describe and measure how we perceive body language and how we express emotions in the way we move our bodies. For this purpose, researchers are collecting behavioural data and image material and developing methods for the synthesis and simulation of body language for use in communications technology. They are also investigating how the recently discovered specialized networks of the cerebral cortex, the most important interconnection organ of our central nervous system, process the body’s own influences and environmental influences.

Participants: Prof. Luc Van Gool, Dr. Konrad Schindler, Dr. Beat Fasel, Wicher Visser

Partners:

CNRS, France
Swiss Federal Institute of Technology Zurich (ETHZ)
Tilburg University, The Netherlands
University of Tubingen, Germany
Weizmann Institute of Science, Israel

Finished in: 2010

Computer Simulation of Physical and Chemical Control of Blood Vessel Anastomosis, Growth and Remodeling

Objective:

The objective is to gain a thorough quantitative understanding of the vascular system and its development using computational resources. The formation of blood vessels is treated as a multiphysics-driven remodeling of a planar capillary network (plexus).

The long-standing interest of biology and medicine in a thorough quantitative understanding of the vascular system and its development has gained new impetus due to increased efficiency of computational resources during the past two decades. Although many existing models are successful in predicting structurally realistic systems with relevant biophysical properties, they demonstrate no understanding of how such structures actually come into existence from the microscopic point of view. Such understanding is, however, mandatory if realistic simulations of, e.g., anti-angiogenic cancer treatments or the effects of irradiation are considered. In our approach, formation of blood vessels is treated as a multiphysics-driven remodeling of a planar capillary network (plexus), demonstrated to be present at many anatomical sites. The computational model comprises advanced interface modeling, solid and fluid mechanics, as well as production, transport and degradation of chemical agents. Our model does not only explain formation of capillary meshes and bifurcations, but also the emergence of feeding and draining microvessels in an interdigitating pattern that avoids arterio-venous shunts. In addition, it predicts detailed hydrodynamic properties and transport characteristics for oxygen, metabolites or signaling molecules. In comparison with classical models, the complexity of our approach is significantly increased by using a multiphysics modeling environment, where many independent computational components are combined and the data structure is unified. Our results demonstrate that interdisciplinary multicomponent computer models of blood vessel networks can integrate experimental data on the cellular level to simulate supracellular morphogenesis with unprecedented detail.
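
The project's multiphysics model is far richer than any short example, but one ingredient mentioned above, the hydrodynamics of a capillary network, can be sketched in a few lines: nodal pressures follow from Kirchhoff's law with a Poiseuille conductance per segment. The geometry, viscosity and boundary pressures below are invented toy values.

```python
import numpy as np

# Toy capillary network: 4 nodes, 4 segments; node 0 is the feeding vessel
# (fixed high pressure), node 3 the draining vessel (fixed low pressure).
segments = [(0, 1), (0, 2), (1, 3), (2, 3)]           # (node_i, node_j)
radius = np.array([20e-6, 15e-6, 15e-6, 20e-6])       # m
length = np.array([500e-6, 500e-6, 500e-6, 500e-6])   # m
mu = 3.5e-3                                           # blood viscosity, Pa*s

G = np.pi * radius**4 / (8.0 * mu * length)           # Poiseuille conductance per segment

n = 4
A = np.zeros((n, n))
b = np.zeros(n)
for (i, j), g in zip(segments, G):                    # assemble Kirchhoff node equations
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g

for node, p_bc in [(0, 6000.0), (3, 2000.0)]:         # Dirichlet boundary pressures in Pa
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = p_bc

p = np.linalg.solve(A, b)                             # nodal pressures
Q = np.array([g * (p[i] - p[j]) for (i, j), g in zip(segments, G)])
print("pressures [Pa]:", p.round(1))
print("segment flows [m^3/s]:", Q)
```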

Participants: Dr. Dominik Szczerba, Prof. Gábor Székely, Dr. Kathrin Burckhardt

Partners:

Tissue Dynamics Lab, Institute of Anatomy, PMU Salzburg

Finished in: 2010

Hermes

HERMES stands for Human Expressive Representations in Motion and their Evaluation in Sequences and it is a research project funded by the European Commission. The main objective is the development of a cognitive artificial system allowing both recognition and description of human behaviours arising from real-world events: the system should understand human motion and behaviour and communicate the inferred results to the end-users using natural texts, audio or synthetic movies. A first goal of this project is to determine which interpretations can feasibly be derived for each category of human motion. In particular, Hermes will interpret and combine the knowledge inferred from three different categories of human motion: agent, body, and face motion. A second goal is to establish how these three types of interpretations can be linked together to coherently evaluate the human motion as a whole in image sequences.

Project-Website: http://www.hermes-project.eu/

Participants: Dr. Gabriele Fanelli, Dr. Daniel Roth, Dr. Michael D. Breitenstein, Dr. Beat Fasel, Dr. Esther Koller-Meier, Dr. Bastian Leibe, Prof. Luc Van Gool

Partners:

Computer Vision Center, Universidad Autonoma de Barcelona (ES)
Institut fuer Algorithmen und Kognitive Systeme, Universität Karlsruhe (DE)
Computer Vision and Media Technology Laboratory, Aalborg Universitet (DK)
Active Vision Laboratory, University of Oxford (UK)
Answare Technologies (ES)

Finished in: 2009

Artificial micro swimmers

Objective:

Investigation of micro-scale propulsive mechanisms. Designing and building micro swimmers for positioning and steering of medical micro robots. We also try to understand the parameters that influence the efficiency and swimming velocity of flagellar swimming tails.

The motion of artificial micro/nano swimmers is governed by different forces than that of regular-sized swimmers. The flagellar, ciliary and other mechanisms (e.g. amoeboid) developed by small natural swimmers such as bacteria and algae enable them to use the viscous forces of Stokes flow to advance in a viscous environment. Experimental, theoretical and numerical methods will be used to evaluate the artificial flagellar swimmers. We study artificial swimmers by building up-scaled models and driving them in highly viscous fluid. In order to check and generalize the experimental results we develop theoretical and numerical coupled-field models that can describe the fluid-solid interaction and the actuation of the swimmers. Investigation of the artificial swimmers can be important for the bio-medical field, in which such swimmers can be used as carriers of diagnostic and interventional tools in fluid-filled cavities such as the gastro-intestinal tract, the eye or the ventricular system.
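
A quick back-of-the-envelope calculation shows why viscous (Stokes) forces dominate at this scale and why up-scaled models driven in a highly viscous fluid can reproduce the same regime; the parameter values below are assumed, typical orders of magnitude only.

```python
def reynolds(rho, v, L, mu):
    """Reynolds number Re = rho * v * L / mu for a swimmer of size L at speed v."""
    return rho * v * L / mu

water = dict(rho=1000.0, mu=1.0e-3)   # kg/m^3, Pa*s

# bacterium-sized swimmer: ~2 um long, swimming at ~30 um/s
print("micro swimmer Re =", reynolds(v=30e-6, L=2e-6, **water))    # ~6e-5

# up-scaled laboratory model driven slowly in a very viscous fluid
oil = dict(rho=970.0, mu=100.0)       # e.g. thick silicone oil (assumed values)
print("scaled model Re  =", reynolds(v=0.01, L=0.1, **oil))        # ~1e-2, still << 1
```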

Participants: Prof. Gábor Székely, Dr. Gabor Kosa, Dr. Raphael Hoever, Dr. Dominik Szczerba

Partners:

IRIS - Institute of Robotics and Intelligent Systems, ETHZ
IT'IS Foundation Zurich

Finished in: 2009

EPOCH

EPOCH is a network of about a hundred European cultural institutions joining their efforts to improve the quality and effectiveness of the use of Information and Communication Technology for Cultural Heritage. Participants include university departments, research centres, heritage institutions, such as museums or national heritage agencies, and commercial enterprises, together endeavouring to overcome the fragmentation of current research in this field.

Project-Website: http://www.epoch-net.org/

Participants: Dr. Pascal Müller, Prof. Luc Van Gool, Simon Haegler

Finished in: 2008

CyberWalk

Despite recent improvements in Virtual Reality technology it is at present still impossible to physically walk through virtual environments. In this project our goal is to significantly advance the scientific and technological state-of-the-art by enabling quasi-natural, unconstrained, and omni-directional walking in virtual worlds. To achieve this visionary goal we follow a holistic approach that unites science, technology and applications.

CyberWalk will develop a completely novel concept for a high-fidelity omni-directional treadmill, named CyberCarpet. This treadmill is envisioned to be a 2D platform that should allow unrestricted omni-directional walking, permitting the user to execute quick or slow movements, or even step over and cross his or her legs. At the end of the project it is foreseen that we will have an easy-to-use device that has been constructed to fit individual needs. Its widespread use will be facilitated by the fact that users can quickly get prepared to use it: the visual tracking that supports the control operates markerless. One only has to put on a Head Mounted Display, through which the virtual environment is displayed. The concept of motion control behind this treadmill will focus on diminishing the forces exerted on the walking user, by minimizing the overall accelerations.

To place the developments on a solid human-centred footing, CyberWalk will continuously push research in the field of cognitive/behavioural science and will determine the necessary psychophysical design guidelines and appropriate evaluation procedures. The CyberWalk project will showcase its developments via a physical walkthrough, most probably through the virtual reconstruction of the ancient city of Sagalassos. However, it seems clear that the CyberWalk approach will also prove relevant to many other application areas such as medical treatment and rehabilitation (Parkinson's disease, phobias, etc.), entertainment, sports (training facilities, fitness centers), behavioural science, education, training (maintenance teams, security guards, etc.), and architecture (exploring large virtual construction sites).

Project-Website: http://www.cyberwalk-project.org/index.htm

Participants: Dr. Roland Kehl, Simon Haegler, PD Dr. Matthias Harders, Prof. Luc Van Gool

Partners:

Max Planck Institute for Biological Cybernetics
Technische Universität München
Università di Roma Sapienza

Finished in: 2008

Blue-C

Objective: The blue-c project aims at fundamental research for, and development of, a new generation of virtual design and modeling environments centering on the interaction between humans and models. By integrating three-dimensional human representations into immersive virtual environments, many of today's collaboration and interaction techniques can be improved and new ones will be invented. Today's technology enables information exchange and simple communication. Our team will build a system that enables a number of participants to interact and collaborate in a virtual world at an unprecedented level of immersion.

The blue-c will support fully three-dimensionally rendered human inlays, supporting motion and speech in real time, as well as interaction metaphors between humans and simulated artifacts, be they functional, behavioral, or formal models or combinations of those. The blue-c will leverage telepresence and virtual meetings to a new dimension of immersion. We will investigate the usability and performance of the prototype in selected applications including architecture, mechanical design, and medicine. Our group is developing algorithms for multiple camera self-calibration, real-time segmentation, progressive silhouette extraction, and interpretation of natural human gestures.

Project-Website: http://blue-c.ethz.ch/

Participants: Dr. Roland Kehl, Dr. Esther Koller-Meier, Prof. Luc Van Gool

Partners:

CAAD, ETH Zürich
Center of Product Development, ETH Zürich

Finished in: 2007

Mosaicking of Endoscopic Placenta Images to Assist Treatment of the Twin to Twin Transfusion Syndrome

Objective:

The Twin to Twin Transfusion Syndrome (TTTS) is a disease of the placenta. It affects identical monochorionic (shared placenta) twins during pregnancy, where blood passes from one baby to the other through abnormal vascular connections within their shared placenta. One baby, the recipient twin, gets too much blood, which might overload his cardiovascular system, and he might die from heart failure. The other baby, the donor twin, does not get enough blood and may die from severe anemia. The tragedy is that these babies are otherwise healthy; the problem lies in the placenta. The death rate for twins who develop TTTS at mid-pregnancy may be as high as 80 to 100 percent. Babies may die in utero, at birth from prematurity, or years later from the effects of TTTS. Those who survive may suffer from serious problems, including cerebral palsy.

This disease can be treated by endoscopic laser surgery. The procedure uses an endoscope to identify, and a laser to coagulate, the connecting vessels in the placenta and block the passage of blood from one twin to the other. Identifying the abnormal connections is not easy: the endoscope has a small field of view, and the obtained images show severe lens distortions and suffer from weak visibility in the amniotic fluid. This makes it difficult for the surgeon to ensure that all the abnormal vascular connections have been found and treated accordingly. This project consists of two phases: the first phase is devoted to the calibration of the endoscope, in order to eliminate the lens distortion in the images. The second phase comprises the construction of a mosaic of the entire placenta from these small images. This will give the surgeon a map of the entire placenta, on which he can easily detect all the abnormal connecting vessels.
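
For the first phase, a minimal sketch of the standard OpenCV checkerboard calibration followed by undistortion is shown below; the file paths and board size are placeholders, and this generic pinhole-plus-radial-distortion pipeline stands in for whatever calibration procedure was actually used for the endoscope optics.

```python
import glob
import cv2
import numpy as np

board = (9, 6)   # inner corners of a printed checkerboard (placeholder size)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):         # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# intrinsic matrix K and distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         gray.shape[::-1], None, None)

frame = cv2.imread("endoscope_frame.png")             # placeholder path
undistorted = cv2.undistort(frame, K, dist)           # distortion-free image for mosaicking
cv2.imwrite("endoscope_frame_undistorted.png", undistorted)
```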

Participants: Dr. Mireille Reeff, Prof. Philippe Cattin, Prof. Gábor Székely

Finished in: 2006

Quantitative Endoscopy

Objective:

Real-time quantitative measurements and 3D visualization during endoscopic surgery can provide the clinician with valuable additional information. The goal is to use the endoscope as an imaging device rather than just as a keyhole!

Providing this additional information without adding another tool to the operating scene and without demanding more training for the surgeon is another important aspect (less cost, less risk, shorter surgery).

In a first step we want to work on scenes where something can be tracked rigidly, as in orthopedic or even neurosurgical interventions. In a second phase we might consider bringing this technology to more complex scenes where deformations can occur. But we will restrict ourselves to the case where this motion and deformation can be parameterized.

Once the relationship between the target anatomy, the tools and the preoperative model is established, it is also possible to create a see-through patient with Augmented Reality techniques, so that the surgeon has all information on the same display.
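
As an illustration of establishing such a relationship, the sketch below estimates the camera pose from a handful of 2D-3D correspondences between a preoperative model and an endoscopic image using OpenCV's solvePnP. All point values, intrinsics and the simulated pose are invented, and this generic pose-estimation step is not the project's actual registration pipeline.

```python
import cv2
import numpy as np

# 3D landmark coordinates in the preoperative model (mm); values are invented.
model_pts = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30],
                      [30, 30, 0], [30, 0, 30]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],    # intrinsics from a prior endoscope calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume lens distortion already corrected

# simulate the 2D detections by projecting with a known ("true") pose
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([5.0, -10.0, 120.0])
image_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)

# recover the camera pose from the 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts.reshape(-1, 2), K, dist)
R, _ = cv2.Rodrigues(rvec)
print("estimated translation:", tvec.ravel())   # should match tvec_true

# with the pose known, structures of the preoperative model can be projected
# into the live endoscopic image for an augmented-reality overlay
overlay_pts, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)
```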

Project-Website: http://www.vision.ethz.ch/cwengert/research.php

Participants: Dr. Christian Wengert, Prof. Philippe Cattin, Prof. Gábor Székely

Partners:

EPFL, Virtual Reality and Active Interfaces Group: Charles Baur
CHUV, Neurosurgery: John M. Duff

Finished in: 2006

Visemes

Objective: Improvement of the current state-of-the-art in face animation, especially for the creation of highly realistic lip motions. To that end, 3D models of faces will be used and - using the latest technology - speech related 3D lip motions will be learned from examples.

The problem of realistic face animation is a difficult one. The serious restrictions in animators' capabilities to deal with human faces are hampering a further breakthrough of high-tech domains such as special effects in the movies, the use of 3D face models in communications, and the use of avatars and likenesses in virtual reality, the internet, games, and all kinds of interfaces. This project wants to improve on the current state-of-the-art in face animation, especially for the creation of highly realistic lip motions. To that end, 3D models of faces will be used and - using the latest technology - speech related 3D lip motions will be learned from examples. Thus, the project subscribes to the surging field of image-based modeling and in fact widens its scope to include animation. Indeed, the capacity to extract detailed 3D motion sequences is quite unique and can be fully exploited for animation, which so far has been kept rather separated from modeling.

From measured 3D face deformations around the mouth area, typical motions will be extracted for different 'visemes'. These are the basic lip motion patterns observed for speech and are comparable to the phonemes of auditory speech. The visemes will be studied with sufficient detail to also cover natural variations and differences between individuals. The work also encompasses the animation of faces for which no visemes have been extracted. The 'transplantation' of visemes to novel faces for which no viseme data have been recorded and for which only a static 3D model is available will make it possible to animate faces without an extensive learning procedure for each individual. Last but not least the coarticulation effects will also be studied, i.e. the visual blending of visemes as is required for fluent, natural speech. The project will focus on spoken English and German.

Participants:

Partners:

Eyetronics Inc.

Finished in: 2004

3D MURALE - 3D Measurement and Virtual Reconstruction of Ancient Lost Worlds of Europe

Objective: Development of 3D multimedia tools to measure, reconstruct and visualize the ancient city of Sagalassos in Turkey in virtual reality. The archaeological site at Sagalassos is one of the largest archaeological projects in the Mediterranean dealing with a Greco-Roman site over a period of more than a thousand years (4th century BC-7th century AD).

This project aims at the development of 3D measurement, reconstruction and visualisation tools for use by archaeological teams. The new multimedia technologies will produce rich new ways of recording, cataloguing, conserving, restoring and presenting archaeological artefacts, monuments and sites. These technologies will be used to model the Sagalassos site and show how they can be used for preserving and presenting the cultural heritage of Europe in two important ways:

ETH subgoal:
ETH aims at developing texture synthesis procedures which are able to produce texture images that look visually similar to the original textures, in particular limestone, landscapes, and vegetation. The modeling process currently uses very simple pairwise pixel statistics gathered from the original image. To improve the modeling quality, the pixel pair types will be selected properly and in a mutually dependent way from the big class of candidates. All the selected types form the neighborhood structure of the texture model. To enrich the class of textures that can be reproduced in a visually similar way, a further modeling improvement includes a texture presegmentation step. The complex texture, or even the whole scene, will be segmented into subtextures having simpler pixel interdependencies.
The so-called composite texture model then includes three types of submodels:
After the composite model design, the synthesis includes the creation of a synthetic map of segments and the creation of every subtexture at the corresponding places on the map, taking into account the interdependencies near the segment boundaries (see Figure with original and synthetic landscapes).

Project-Website: http://www.brunel.ac.uk/project/murale/visualization.html

Participants:

Partners:

Brunel University, London, UK
EyeTronics Inc., Leuven, Belgium
KULeuven, Katholieke Universiteit Leuven, Belgium
Graz University of Technology, Austria
Imagination Computer Services GesmbH
Vienna University of Technology, Austria

Finished in: 2004

CIMWOS - Combined IMage and WOrd Spotting

Objective: Develop a tool for helping users in the annotation of multimedia documents and their further content-based retrieval. The ETH part in CIMWOS is in the object recognition/localization field. We start from a shot-partitioned image sequence (movie) containing an object of interest. This is selected by the user from a key-frame. The object should then be automatically localized in every frame, by tracking it in the frames immediately following/preceding the key-frame, and re-localizing it in other shots. This base goal could be extended to deal with several objects and recognition of object classes. The underlying technology of the project is the matching of affinely invariant regions. Currently its use in object recognition is limited to counting the number of matching regions between the input image and several model object images, and then selecting the object with the highest count. The main innovation of the project should consist in developing a model that can take into account the configurations of the regions on the object of interest. By learning their relative positions and motions automatically, the system should develop an internal structured representation of the object that will help in dealing with the complex situations of a real movie (strong occlusions, sharp changes in camera position, complex motions).
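
The match-counting recognition step described above can be roughly illustrated with off-the-shelf OpenCV features; the sketch below uses ORB descriptors (a simpler substitute for the affinely invariant regions used in the project) and placeholder file names.

```python
import cv2

def match_count(query_path, model_path, ratio=0.75):
    """Count distinctive feature matches between a query frame and a model image."""
    orb = cv2.ORB_create(nfeatures=1000)
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    m_img = cv2.imread(model_path, cv2.IMREAD_GRAYSCALE)
    _, q_des = orb.detectAndCompute(q_img, None)
    _, m_des = orb.detectAndCompute(m_img, None)
    if q_des is None or m_des is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(q_des, m_des, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < ratio * n.distance]        # Lowe's ratio test
    return len(good)

# select the model object with the highest match count, as described above
models = ["model_statue.png", "model_vase.png", "model_car.png"]   # placeholder files
best = max(models, key=lambda m: match_count("keyframe.png", m))
print("recognised object:", best)
```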

Project-Website: http://www.xanthi.ilsp.gr/cimwos/default.htm

Participants: Prof. Vittorio Ferrari, Prof. Luc Van Gool

Partners:

ILSP - Institute for language and speech processing (Athens)
IDIAP, Martigny, CH
Canal+, Belgium
KULeuven - Katholieke Universiteit Leuven, Belgium

Finished in: 2003

CogViSys, Semantic Interpretation of Geometry

Objective: Employ high-level reasoning for geometric modelling. This project aims at developing methods for geometric modelling of incomplete, imprecise and contaminated input data. Two main approaches are investigated which complement each other. The first building block consists of a probabilistic interpretation of the input data. As application example one might think of grouping 3D line segments into polygons which is a task often encountered in 3D modelling of planar objects. A Bayesian formulation of the grouping problem which avoids overfitting the data but which is still able to capture relevant details is investigated. The preliminary geometric models derived in the previous step may still be incomplete or topologically not correct, mainly due to the imprecise nature of the input data. To overcome the remaining deficiencies, a semantic interpretation of the polygon geometries is performed. A domain specific semantic labelling allows to infer missing parts of the model and to correct the overall model topology. This implies closing of gaps in the model. The semantic interpretation of the geometry renders this task selective and thus enables to preserve details of the models. A main application area for the developed methods is the automated reconstruction of building roofs from high resolution multiview aerial imagery.

Project-Website: http://cogvisys.iaks.uni-karlsruhe.de/mainpage.html/

Participants:

Finished in: 2003

Visual Grouping

Objective: Detection and grouping of repeated patterns in images. Grouping is a process in computer vision where the computer identifies meaningful entities in images. It is an important step between low-level feature extraction and scene interpretation. This project aims at developing novel, robust grouping strategies. In particular, the project will consider grouping from a geometric perspective. The propounded approach is principled in that it presents a single mathematical framework in which most traditional grouping rules are encapsulated. It introduces a natural hierarchy of increasingly specific geometric configurations. It is powerful as it takes perspective distortions fully into account, whereas previous grouping approaches have been restricted to the case of fronto-parallel viewing or cases where depth effects could be modelled as an affine rather than a perspective skew. Finally, it is efficient as it eliminates much of the search by which grouping strategies are so often plagued. This is achieved through a combination of invariant-based indexing and the Hough transform.

Participants:

Partners:

KULeuven - Katholieke Universiteit Leuven

Finished in: 2003

Implant Migration Measurement using Standard Radiographs

Objective: Design of a practicable and precise (0.1 mm) method for measuring the migration of artificial hip sockets. About 400,000 artificial hip joints are implanted every year in Europe alone. However, despite its success, hip replacement still involves complications. One major problem is the loosening of the cup, the acetabular part of the implant. Migration of the implant, which means the change of the cup's position in the bone, is interpreted as the only quantifiable sign of loosening. There already exist several 2D methods and one 3D method for measuring the migration over time. For the former methods, the standard x-ray images of the patients' follow-up studies after implantation are used, whereas for the latter, markers are implanted in the pelvis and special stereo x-ray images are acquired. The problem is that the 2D methods are not precise enough and the 3D method is impractical for the clinical routine. Therefore, we work on a method for measuring the cup migration with a precision in the submillimeter range that is usable under clinical conditions. Our approach is a 2D measurement using standard x-ray images, which is as insensitive as possible towards the variable orientation and position of the pelvis at exposure and which uses state-of-the-art image processing algorithms to locate the cup and bony landmarks.

Project-Website: http://www.vision.ee.ethz.ch/projects/ImgMig

Participants:

Partners:

University Hospital Balgrist

Finished in: 2003

New Metaphors for Interactive 3D Volume Segmentation

Objective: Development of a new framework for three-dimensional interactive segmentation based on new man-machine interfacing paradigms as offered by virtual reality (VR). In spite of considerable efforts during the past decades, image segmentation is still one of the major bottlenecks in medical image analysis. Neither purely manual nor fully automatic approaches are appropriate for the correct, efficient and reproducible identification of organs in 3D data volumes. The goal of this pilot project is the exploration of the power of new man-machine interfacing paradigms as offered by virtual reality (including graphics, audio and haptics), e.g. resulting in new closed-loop segmentation systems allowing an optimal cooperation between computer-based image analysis algorithms and human operators.

Participants:

Partners:

Semmelweis University Budapest
University Hospital Zurich
University Regensburg
Varian Medical Systems

Finished in: 2003

LaSSo - LAparoscopic Surgery SimulatOr

Objective: Development of a laparoscopic surgery simulator device using the techniques of virtual reality to provide a nearly realistic training environment. The basic idea of laparoscopic surgery is to minimize damage to healthy tissue while reaching the actual surgical location. This results in a major gain in patient recovery after the operation. The price for this advantage is paid by the surgeon, who loses direct contact with the operation site. The operations are usually performed under monoscopic vision and highly restricted manipulative freedom, which requires very special skills from the surgeon. Up to now no appropriate training devices are available which would allow these skills to be fully acquired before actual interventions on patients. The goal of the project is the development of a laparoscopic surgery simulator device using the techniques of virtual reality which provides a nearly realistic training environment.

Participants:

Partners:

University Hospital Zürich
IBT - Institute of Biomedical Engineering
IfE - Electronics Laboratory
IfR - Institute of Robotics
Institute of Mechanics

Finished in: 2003

AMOBE II - Automation of Digital Terrain Model Generation and Man-Made Object Extraction from Aerial Images

Objective: The purpose of the project is the development of automatic methods for extracting quantitative 3D information on man-made objects from aerial images. The project partners, the Institute of Geodesy and Photogrammetry and the Computer Vision Laboratory, work on the automatic detection and reconstruction of man-made objects - especially houses - in high resolution, aerial color images. In a previous project (AMOBE I) it has been successfully shown that an automatic reconstruction of isolated buildings in suburban scenes is possible, if the location of the building in the image is known. In the AMOBE II project, the given task is extended to densely built-up urban areas. This causes qualitatively and quantitatively new difficulties stemming from the more complicated roof shapes and the typical situation of buildings located close or contiguous to each other.

For 3D building reconstruction, straight line segments at roof edges need to be matched between corresponding views. Solving the correspondence problem is not straightforward. Due to the weak geometric constraints ruling stereo vision geometry, corresponding pairs of line segments in different views cannot be identified unequivocally. To overcome the geometric ambiguities at the stereo matching step, we take into account the color distribution in the regions flanking the line segments forming a putative pair. We have developed a line segment matching algorithm for 3D reconstruction of static scenes. This algorithm makes extensive use of color information. It also allows us to exploit additional geometric and chromatic information from further views of the scene.

The main part of the project concentrates on the further processing of the 3D line hypotheses. Since we achieve a high discriminative power using color information for line segment matching, the resultant 3D line hypotheses are very reliable and also small in number due to the lack of mismatches. This enables us to keep the combinatorics under control, and thus simplifies the ongoing development of an intelligent algorithm to generate reliable and stable hypotheses of roof parts and complete roofs.

Project-Website: http://www.vision.ee.ethz.ch/projects/Amobe_II

Participants:

Partners:

Institute of Geodesy and Photogrammetry, ETH

Finished in: 2002

CATS - Classification And Tracking in advanced Video Surveillance Systems

Objective: Detection of ``unusual'' motion events for surveillance applications in video sequences. In many video surveillance applications, a human operator is required to observe video sequences from a large number of sensors being displayed on monitors in a control room, in order to detect the occurrence of dangerous events. The support of automatic video processing systems should noticeably relieve the operator. Such a system should be able to detect the occurrence of ``events'' and perform a screening of the ``normal'' ones, requiring human evaluation only for the ``most interesting'' or ``abnormal'' cases. CATS is a 1-year KTI project in collaboration with industry to develop such a self-learning event detection system. This surveillance system is primarily intended to be used in public rooms. As human motions can be modeled as temporal trajectories, which give the spatio-temporal coordinates of a person, we try to learn characteristic behavior patterns. In the case that people act in accordance with the learned motion patterns, a ``normal'' event is detected, while in all other cases the operator's attention should be focused on the scene.
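
One simple way to flag trajectories that deviate from learned motion patterns is to compare each new track against a set of prototype tracks and threshold the distance; the sketch below illustrates that idea on synthetic corridor trajectories and is not the CATS algorithm itself.

```python
import numpy as np

def resample(track, n=16):
    """Resample a trajectory (list of (x, y) points) to n equally spaced points."""
    track = np.asarray(track, dtype=float)
    t = np.linspace(0, 1, len(track))
    ti = np.linspace(0, 1, n)
    return np.stack([np.interp(ti, t, track[:, 0]),
                     np.interp(ti, t, track[:, 1])], axis=1)

def distance(a, b):
    """Mean point-wise distance between two resampled trajectories."""
    return np.mean(np.linalg.norm(resample(a) - resample(b), axis=1))

# "normal" training trajectories: people walking left to right along a corridor
rng = np.random.default_rng(1)
normal = [[(x, 5 + rng.normal(0, 0.2)) for x in np.linspace(0, 10, 30)]
          for _ in range(50)]
prototypes = normal[:10]                              # crude prototype set

# threshold = largest distance of any training track to its nearest prototype
threshold = max(min(distance(t, p) for p in prototypes) for t in normal)

def is_abnormal(track):
    return min(distance(track, p) for p in prototypes) > threshold

loiter = [(5 + np.cos(a), 5 + np.sin(a)) for a in np.linspace(0, 6.28, 30)]
print("straight walk abnormal?", is_abnormal(normal[-1]))   # expect False
print("loitering abnormal?", is_abnormal(loiter))           # expect True
```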

Participants: Dr. Esther Koller-Meier

Partners:

ASCOM Systec AG

Finished in: 2002

Computer-Assisted Radiographic Hip Joint Measurement

Objective: Development of a computer-assisted measurement tool for precise and fast analysis of digitized medical images. The early identification and treatment of hip dysplasia has a long tradition in medical science and is of particular importance for the treating orthopedist. In practice, it can be shown that with early diagnosis and special therapy a satisfactory healing can be achieved in most cases. The decision whether the studied object is dysplastic or normal is heavily influenced by the subjective judgement of the operator. The fundamental problem is that the difference between ''normal'' and ''dysplastic'' is difficult to define. In order to find new statistically robust analysis criteria, two larger clinical studies are being performed.

Participants:

Partners:

Orthopedic Division of the University Hospital Balgrist, Zurich

Finished in: 2001

Lesion Evolution in Multiple Sclerosis

Objective: The goal of the project is to characterize lesion evolution by quantifying MR-based spatio-temporal changes over time. Traditionally, the characterization of MS lesion development is based mostly on the spatial pattern of lesions. Although lesion load measurements provide a more objective and sensitive measure of disease evolution than clinical measures, the poor correlation between changes of lesion load and changes of disability is of concern. Purely intensity-based segmentation has strong limitations and does not provide satisfactory results in many cases. By examining temporal changes in consecutive MR scans, active MS lesions can be segmented and characterized directly. But lesion development is a complex spatio-temporal process; consequently, concentrating exclusively on its spatial or temporal aspects cannot be expected to provide optimal results. The goal of the project is to characterize lesion evolution by quantifying MR-based spatio-temporal changes over time. Spatio-temporal lesion models will be used to get a better understanding of MS pathogenesis and will hopefully allow a classification of MS manifestations that can distinguish different lesion behavior and perhaps give a better explanation of clinical findings. These models will be used to provide a spatio-temporal segmentation method.

Participants:

Partners:

University Hospital Basel

Finished in: 2001

Modelling daily runoff from snow and glacier melt using remote sensing data

Objective: Using satellite images for simulating the effect of a possible climate warming on the areal extent of the seasonal snow cover, the glacier retreat and the runoff regime in the Swiss Alps. The concept of the project requires that GIS as well as remote sensing techniques are involved in combination with the runoff model SRM+G. The project uses high resolution optical satellite images for a detailed analysis of the regional distribution of the snow cover as well as of bare glacier ice in three Swiss runoff basins. A main part of the project is the runoff simulation with the SRM+G model for different years. This model version was developed to quantitatively analyse melt processes in highly glaciated basins. Another part of the project deals with the different contributions to the total runoff, such as glacier ice, rain, new snow and the seasonal snow cover. Applying various scenarios with changed climate conditions to the SRM+G model, we evaluate the consequences for the areal extent of the seasonal snow cover, the glacier retreat and the snow- and ice melt for each basin. This topic will gain further significance since a constant warming of the earth's atmosphere during the 20th century can be observed.
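
The degree-day principle underlying snowmelt runoff models of this kind can be sketched as follows: daily melt is proportional to positive air temperature, scaled by the satellite-derived snow-covered fraction, and routed with a simple recession coefficient. All parameter values below are invented and the code is only an illustration of the principle, not the SRM+G implementation.

```python
import numpy as np

days = 10
T = np.array([-2., 0., 1., 3., 5., 6., 4., 7., 8., 6.])   # mean daily temperature, deg C
P = np.zeros(days)                                         # no rain in this toy run
S = np.linspace(0.9, 0.6, days)      # snow-covered area fraction (from satellite maps)
a = 0.45                             # degree-day factor, cm / (deg C * day)  (assumed)
area_km2 = 100.0                     # basin area (assumed)
k = 0.9                              # recession coefficient (assumed)

Q = np.zeros(days + 1)               # discharge, m^3/s
for n in range(days):
    melt_cm = a * max(T[n], 0.0)                       # degree-day melt depth
    depth_m = (melt_cm / 100.0) * S[n] + P[n]          # runoff-producing water depth
    inflow = depth_m * area_km2 * 1e6 / 86400.0        # m^3/s averaged over the day
    Q[n + 1] = inflow * (1 - k) + Q[n] * k             # simple recession routing
print(Q.round(2))
```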

Project-Website: http://www.vision.ee.ethz.ch/projects/Glacier/

Participants:

Partners:

RSL - Remote Sensing Laboratory, University of Zurich

Finished in: 2000

CARTESIAN - Cost effective Application of Remote sensing to environmenTal aspects of ski rEgions; a S

Objective: Development of a management information system (MIS) to evaluate impacts of ski-resort activities on the environment and to support the sustainable management and tourism of a region. This project aims at developing a methodology based on satellite data to provide cost-effective assistance in the monitoring and sustainable maintenance of ski regions. This includes environmental aspects such as vegetation indices, landscape changes and snow cover, as well as socioeconomic issues such as tourism potential and economic development.

Project-Website: http://www.vision.ee.ethz.ch/projects/Cartesian/

Participants:

Partners:

Resource Analysis (Netherlands)
Cemagref (France)
Stand Montafon (Austria)
Les Arcs (France)
Ecoscan (Switzerland)
University of Amsterdam (Netherlands)
RSL - Remote Sensing Lab (Switzerland)
Silvretta Nova (Austria)
Het Frankrijk Huis (Netherlands)
Sion 2006 Bid Committee (Switzerland)

Finished in: 2000

MINORA - MINiaturized Optical Range camera for safety, surveillance and automotive Applications

Objective: Development of universal range cameras based on optical ranging techniques for safety and surveillance applications. Modern everyday life is characterized by an ever increasing interaction between man and machines, leading to a growing number of potentially harmful situations through accidents, malfunctions or human oversight. Related to this is our investigation into reliable presence detection systems based on range image sequences. MINORA is a 4-year project in the OPTIQUE II programme of the ETH Council in collaboration with several academic and industrial partners. The purpose of the project is to develop range cameras based on optical ranging techniques (time-of-flight or AM laser radar), with which a large part of today's safety and surveillance applications can be solved. These new sensors, working in the near infrared, will be fast and cheap and can supply 3D information with high accuracy. However, a necessary trade-off means that the sensors provide coarse range images, resulting from the need for inexpensive sensor and computing hardware.
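
The two ranging principles mentioned above reduce to simple formulas: for pulsed time-of-flight the distance is d = c t / 2, and for AM (continuous-wave) ranging it follows from the phase shift of the modulation envelope, d = c φ / (4 π f_mod), with an unambiguous range of c / (2 f_mod). The modulation frequency and timings below are example values only.

```python
import math

C = 299_792_458.0                          # speed of light, m/s

def range_from_pulse(round_trip_time_s):
    """Direct (pulsed) time-of-flight: distance = c * t / 2."""
    return C * round_trip_time_s / 2.0

def range_from_phase(phase_rad, f_mod_hz):
    """AM / continuous-wave ranging: distance from the phase shift of the
    modulation envelope, d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(range_from_pulse(33e-9))                        # roughly 5 m
print(range_from_phase(math.pi / 2, 20e6))            # roughly 1.9 m at 20 MHz modulation
print("unambiguous range at 20 MHz:", C / (2 * 20e6), "m")   # 7.5 m
```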

Project-Website: http://www.vision.ee.ethz.ch/projects/Minora/

Participants: Dr. Esther Koller-Meier

Partners:

CSEM - Centre Suisse d'Electronique et de Microtechnique SA
Institute for Computer Science and Applied Mathematics, University Berne
Design Center for Integrated Circuits, EPF Lausanne
Microswiss
Leica AG
CEDES AG

Finished in: 2000

COGNIS - COmputer Guided Nannofossil Identification System

A 3-year ETH project in collaboration with the Institute of Geology. The purpose of the project is to develop time-efficient semi-automatic and automatic methods to find, measure, identify and count micrometer-sized coccoliths in Scanning Electron Microscope (SEM) images from ocean sediment specimens.

Participants:

Finished in: 2000

BIOMORPH

Objective: Evaluation and extension of computing techniques for morphometry. The BIOMORPH project was a collaboration between leading European computer science and clinical groups to evaluate and extend state-of-the-art computing techniques for morphometry, i.e. for the quantification of size and shape of biological structures. The project was focused on applications in schizophrenia and multiple sclerosis (MS), conditions where the need for improved brain morphometry was particularly clear. In schizophrenia, changes in the morphology of various brain structures provide important clues to the most fundamental brain abnormalities that underlie the condition. In MS, quantification of changes in lesions has become of great importance for pharmaceutical trials, and improved morphometry will reduce the cost of developing new drugs.

Several algorithms were developed for the representation of shape and for improved quantification of brain structure, including a method for parametrizing the shape of objects such as the hippocampus. The programs were then applied to investigate the corpus callosum outline after it had been segmented in each high resolution scan of an MRI series from 71 individuals. The development of techniques for MR scan analysis in MS patients was mainly focused on a set of twenty-five patients who had been undergoing serial volumetric MR scanning. Our group developed a new approach for the automatic detection of temporal changes in these 4-dimensional datasets, motivated by techniques from functional MR imaging. The spatio-temporal data were obtained using highly effective co-registration procedures provided by the cooperating BIOMORPH partners. Time-variant properties of all voxels were examined and combined into a probabilistic map of lesion activity during the observation period.

Participants:

Partners:

University of Kent at Canterbury, UK
Oxford University, Department of Psychiatry, UK
Catholic University of Leuven, Belgium
INRIA, Sophia Antipolis, France
The Maudsley Hospital, London

Finished in: 1999

Portal Imaging

A clinical research project funded by the cancer research of the canton Zürich to improve quality assurance in radiotherapy treatment. Electronic Portal Imaging Devices (EPID) enable us to register megavoltage X-ray images of the treatment field during irradiation. These portal images are then analyzed using a high precision and area-based matching algorithm in order to measure patient setup deviations.

Participants:

Partners:

University Hospital Zürich

Finished in: 1999

PET-MRI

The aim of this project is to support better quantification of Positron Emission Tomography (PET) images on the basis of associated structural information. This involves addressing the Partial Volume Effect (PVE), which basically demands higher resolution data. Improved resolution is possible using statistical methods of reconstruction. The approach taken here uses Bayesian methods to employ a priori estimates of the activity distribution to regularise the solution whilst encouraging distinct variation across structural boundaries. The prior is derived on the basis of a ``forward model'' of the emission process, a correction of the PET data constrained according to the known structure. The result is a high resolution estimate of tracer distribution toward which the reconstruction solution may be drawn.
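
As background for such statistical reconstruction, the sketch below runs the standard ML-EM update on a toy 1D emission problem; Bayesian/MAP approaches like the one described here add a prior term to this baseline. The system matrix and activity are synthetic, and the code does not include the anatomical prior itself.

```python
import numpy as np

# Toy 1D emission problem: true activity x_true, system matrix A, Poisson data y.
rng = np.random.default_rng(0)
n_pix, n_det = 32, 48
x_true = np.zeros(n_pix)
x_true[10:20] = 5.0
x_true[14:16] = 12.0
A = np.clip(rng.normal(1.0, 0.3, size=(n_det, n_pix)), 0.05, None)  # made-up system
y = rng.poisson(A @ x_true)

x = np.ones(n_pix)                       # non-negative initial estimate
sens = A.sum(axis=0)                     # sensitivity image, sum_i a_ij
for _ in range(50):                      # ML-EM iterations
    proj = A @ x                         # forward projection
    ratio = y / np.maximum(proj, 1e-12)
    x = x / sens * (A.T @ ratio)         # multiplicative EM update

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```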

Participants:

Partners:

PSI - Paul Scherrer Institute, Switzerland

Finished in: 1999

Image Indexing

An ETH project: The project "An integrated image analysis and retrieval system" is a joint project with the computer systems and database groups of the Computer Science Department of ETH. The vision lab studies which visual features can support efficient indexing into an image database. The development of the appropriate feature extraction algorithms is examined as well.

Participants:

Finished in: 1999

Object/Scene Recognition for Wearable Computer

Objective: The vision part of the ETH poly project on wearable computing is to develop a system which recognizes an object or scene from a given image. An image database containing all already known objects or scenes is built first. To build the database, affine-invariant regions are first extracted from all images in the database; then colour moment invariants of all the extracted regions are computed and stored in the database. After that, the query image containing the object or scene of interest to the user is processed with the same procedure. Then a distance-based indexing technique, like the Vantage Point Tree, is adopted to index the regions stored in the binary-tree-structured region database. The best matched image in the database is returned by the system, and all the knowledge related to that image is provided to the user. In this way the user learns that the query contains the same or a similar object or scene as the matched image.
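
The descriptor-and-retrieval step can be illustrated with simple colour moments and a brute-force nearest-neighbour search; the project used a vantage-point tree to make this distance-based search scale, which is omitted here. Regions and database entries below are random placeholders.

```python
import numpy as np

def colour_moments(region):
    """First three moments (mean, std, skew) of each colour channel of an
    (H, W, 3) image region -- a 9-dimensional descriptor."""
    feats = []
    for c in range(3):
        ch = region[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats += [mean, std, skew]
    return np.array(feats)

# placeholder database: descriptors of regions extracted from known objects/scenes
rng = np.random.default_rng(0)
database = [(f"object_{i}", colour_moments(rng.integers(0, 256, (32, 32, 3))))
            for i in range(100)]

def query(region):
    """Return the best-matching database entry by Euclidean descriptor distance."""
    q = colour_moments(region)
    return min(database, key=lambda item: np.linalg.norm(item[1] - q))[0]

print(query(rng.integers(0, 256, (32, 32, 3))))
```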

Project-Website: http://www.wearable.ethz.ch/

Participants: Hao Shao, Prof. Luc Van Gool

OSCAR - an Opportunistic SCAnneR

Objective: The central theme of this project is the construction of an opportunistic 3D scanning system consisting of multiple cameras and stripe projectors. Active lighting is a popular technique for the acquisition of 3D shapes. Typically one light projector and one or two cameras are combined into a single acquisition module. For OSCAR I will develop a setup consisting of several projection devices and cameras (i.e. multiple modules) that are configured around the scanned object to be modeled in 3D. Typically, the light that is projected is fixed. Even in cases where a series of patterns is projected in succession, these patterns normally do not depend on the scene content. A notable exception is work at the University of Tel Aviv. In this work it is described how series of projected patterns can be optimised for noise levels and required accuracy. This has led to improvements over the popular Gray code technique. In another work by that same group, a series of colour patterns is optimised for the colour on the surface of an object, on a worst-case basis. Nevertheless, some assumptions had to be made about the reflectance properties of the surface and the constancy of ambient lighting, and the number of projections has to be increased by two additional projections for normalisation. In our planned work, one-shot ranging techniques are envisaged and the optimisation targets different object-specific parameters.
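
For reference, the classical Gray code stripe technique mentioned above encodes each projector column with the bits of its Gray code, one binary pattern per bit, and decodes the column back from the bit sequence observed at a camera pixel. The sketch below shows only this encoding and decoding, with an assumed projector resolution.

```python
import numpy as np

def gray_code_patterns(width, height, n_bits=10):
    """Binary stripe patterns (one per bit) encoding each projector column
    with its Gray code, as used in classical structured-light scanning."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                        # binary -> Gray code
    return [np.tile(((gray >> b) & 1).astype(np.uint8) * 255, (height, 1))
            for b in range(n_bits - 1, -1, -1)]      # MSB first

def decode(bits):
    """Recover the projector column from the per-pixel bit sequence (MSB first)."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | int(b)
    binary, shift = gray, 1                          # Gray -> binary conversion
    while gray >> shift:
        binary ^= gray >> shift
        shift += 1
    return binary

patterns = gray_code_patterns(width=1024, height=768)
print(len(patterns), "patterns of shape", patterns[0].shape)

# bit sequence a camera pixel would observe if lit by projector column 300
bits = [((300 ^ (300 >> 1)) >> b) & 1 for b in range(9, -1, -1)]
print(decode(bits))   # -> 300
```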

Participants: Dr. Andreas Griesser, Prof. Luc Van Gool

CogViSys, a virtual commentator for video sequences

Objective: The goal of this project is to build a virtual commentator for video sequences. This means building a vision system that is able to translate visual information into a textual description, i.e. a system that can understand and tell what is happening in a specific video sequence. In particular we are working with content from situation comedies (sitcoms). This has the advantage of representing a quasi-closed world: usually there is a rather small number of characters and only a few different sets, thus making the recognition task simpler. Nevertheless it is intended to keep the overall framework general, so that it can easily be transferred to other tasks. The project involves different levels of complexity in the fields of computer vision and artificial intelligence. They may be roughly stated as follows:

There are two main applications for the virtual commentator. The first one is indexing of video content. The system should annotate film sequences in the manner of a visual database. Based on this it should be possible to issue visual search operations (vgrep) like "Find all scenes where character John appears". The other application is to provide visually impaired viewers with a description of the visual content in order to augment the sound track. An example would be: "Elaine and Kramer have walked out of Seinfeld's apartment and are talking in the corridor."

Project-Website: http://cogvisys.iaks.uni-karlsruhe.de/mainpage.html/

Participants: Dr. Philipp Zehnder, Prof. Luc Van Gool

VITOS - Virtual Touchscreen within the Miniaturized Wearable Computing Project

Objective: Hand gestures are receiving increasing interest for the interaction between a user and a wearable system. The user should be able to command the system through simple, intuitive gestures. The recognition tool will pick up hand and finger motions seen by a camera. The hand movement will mainly be used to activate different functions, while the finger motion is applied to drive the mouse cursor visible on the display. The proposed system has to detect and track the finger and the hand in an image sequence. Furthermore, the hand movements have to be assigned to one of a number of predefined gestures by classifying the tracked trajectories.
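The classifier is not specified in the text; one common way to assign a tracked 2D trajectory to a predefined gesture is nearest-neighbour matching under dynamic time warping, sketched below with assumed gesture templates and an assumed Euclidean point distance.

```python
# Illustrative gesture classification of tracked 2D trajectories by dynamic
# time warping (DTW) against a small set of templates; templates and metric
# are assumptions, not the project's actual method.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two trajectories of shape (n, 2) and (m, 2)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trajectory, templates):
    """Return the name of the template gesture closest to the trajectory."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))

t = np.linspace(0, 1, 30)
templates = {
    "swipe_right": np.stack([t, np.zeros_like(t)], axis=1),
    "swipe_up": np.stack([np.zeros_like(t), t], axis=1),
    "circle": np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1),
}
observed = np.stack([t + 0.05 * np.random.randn(30), 0.02 * np.random.randn(30)], axis=1)
print(classify(observed, templates))   # expected: swipe_right
```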

Project-Website: http://www.wearable.ethz.ch/poly/

Participants: Prof. Luc Van Gool, Dr. Esther Koller-Meier

Partners:

Electronics Laboratory, ETH Zürich
Computer Engineering and Networks Laboratory, ETH Zürich
Perceptual Computing and Computer Vision, ETH Zürich
History of Technology, ETH Zürich

ViRoom

Objective: Track humans inside a room, recognize their actions, describe the actions, and provide the best view. ViRoom is a room with multiple cameras. Our goal is to create a system which detects and tracks humans in this room, recognizes and stores descriptions of their actions, selects the best viewpoint for these actions, and generates a new view from a virtual camera if necessary. We do not want to restrict the system to one particular room with a specific arrangement. We would like to be able to turn any room into a ViRoom just by setting up the cameras. Some of the possible tasks for ViRoom are: making training videos, automated training, and tele-teaching.
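The text does not say how the best viewpoint is chosen; one plausible heuristic, sketched below purely for illustration, projects the tracked person's 3D position into every calibrated camera and prefers the camera that sees the person most centrally and closest. The camera matrices, image size, and scoring rule are all assumptions.

```python
# Illustrative "best view" selection among calibrated cameras; the scoring
# rule and camera setup are assumptions for this sketch.
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X (shape (4,)) with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2], x[2]          # pixel coordinates and depth

def best_view(cameras, X, image_size=(640, 480)):
    cx, cy = image_size[0] / 2, image_size[1] / 2
    best, best_score = None, np.inf
    for name, P in cameras.items():
        (u, v), depth = project(P, X)
        if depth <= 0 or not (0 <= u < image_size[0] and 0 <= v < image_size[1]):
            continue                    # person is behind or outside this camera's view
        score = np.hypot(u - cx, v - cy) + 0.1 * depth   # central and close is better
        if score < best_score:
            best, best_score = name, score
    return best

# two toy cameras: shared intrinsics, one at the origin, one shifted along x
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], dtype=float)
cameras = {
    "cam_front": K @ np.hstack([np.eye(3), np.zeros((3, 1))]),
    "cam_side": K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])]),
}
person = np.array([0.0, 0.0, 3.0, 1.0])   # 3 m in front of cam_front
print(best_view(cameras, person))          # expected: cam_front
```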

Project-Website: http://www.vision.ee.ethz.ch/~doubek/VRPub/

Participants: Dr. Petr Doubek, Prof. Luc Van Gool

CogViSys, Cognitive Vision Systems

Objective: CogViSys aims at developing a virtual commentator, which is able to translate visual information into a textual description. ETH aims at developing a texture understanding system, which will be able to recognize materials given their images under different viewing and illumination directions.

The first step is the analysis. The sequence of images of the material under consideration, together with the corresponding viewpoint and illumination information, is the input of the analysis procedure. Its result is a so-called multiview texture model, which contains structural and statistical information about the interdependencies of pixels over the variety of material appearances. The analysis must be performed for every type of material that is of interest in the current application. Thus, the output of the analysis stage is a database of multiview models of different materials.

The second step is the classification. The texture model database is one input of the classifier. The other input is the textured image or images of the same material to be classified, possibly together with specific appearance information for those images. The goal of the classifier is to select the model from the database which best explains the input images. Thus, the output is the name of the material, or a rejection of its recognition. A criterion for the expressiveness of a model could be its ability to synthesize a texture that is visually similar to the analyzed one. ETH investigated such synthesis models based on a statistical description of the image, including viewpoint dependency (see the figure with real and synthetic tangerines and a banana covered with the tangerine skin). The algorithm for model creation must now be adapted for classification purposes.
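As a generic illustration of the classification step (not the multiview texture model itself, whose details are not given here), the sketch below summarises each material by a histogram of filter-bank responses and assigns a query image to the model whose histogram it matches best. The tiny filter bank, histogram size, and chi-square measure are assumptions.

```python
# Illustrative texture classification: per-material histograms of filter
# responses, query assigned to the closest model (all choices are assumptions).
import numpy as np
from scipy import ndimage

def filter_responses(img):
    """A tiny filter bank: horizontal/vertical gradients and a Laplacian at one scale."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    lap = ndimage.laplace(ndimage.gaussian_filter(img, 1.0))
    return np.stack([gx, gy, lap])

def texture_descriptor(img, bins=16):
    resp = filter_responses(img.astype(float) / 255.0)   # normalise intensities to [0, 1]
    hist, _ = np.histogram(resp, bins=bins, range=(-4, 4), density=True)
    return hist

def classify(query, model_db):
    def chi2(h, g):
        return 0.5 * np.sum((h - g) ** 2 / (h + g + 1e-12))
    q = texture_descriptor(query)
    return min(model_db, key=lambda name: chi2(q, model_db[name]))

rng = np.random.default_rng(1)
smooth = ndimage.gaussian_filter(rng.random((128, 128)) * 255, 4)
rough = rng.random((128, 128)) * 255
model_db = {"smooth_material": texture_descriptor(smooth),
            "rough_material": texture_descriptor(rough)}
query = ndimage.gaussian_filter(rng.random((128, 128)) * 255, 4)
print(classify(query, model_db))   # expected: smooth_material
```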

Project-Website: http://cogvisys.iaks.uni-karlsruhe.de/mainpage.html/

Participants: Dr. Alexey (Oleksiy) Zalesny, Prof. Luc Van Gool

Non-Rigid Registration of CT/MR Data of the Spine

Objective: Medical image registration is a powerful tool allowing both the quantitative study of temporal image sequences and the fusion of image information acquired by different radiological modalities. The main goal is to find the proper transformation allowing the perfect overlay of images of the same object. Depending on the type of admissible transformation, registration procedures are classified as rigid or elastic. The first part of this project focuses on rigid-body registration. Prototypes for both volumetric and surface-based registration have been developed, including corrections for the scanning artifacts in the acquired images. The second part consists of the non-rigid CT/MR registration of volumetric datasets of the spine.
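The registration prototypes themselves are not described here; as a minimal illustration of intensity-based registration of multimodal data, the sketch below computes mutual information from a joint histogram and maximises it over integer 2D translations only. Real rigid-body CT/MR registration optimises a full 3D transform with interpolation; the toy images and parameters are assumptions.

```python
# Illustrative mutual-information registration over integer translations only.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def register_translation(fixed, moving, search=5):
    """Exhaustively search integer shifts and return the one maximising MI."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)   # known misalignment
print(register_translation(fixed, moving))                 # expected: (3, -2)
```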

Participants: Dr. Adrian Andronache, Prof. Philippe Cattin, Prof. Gábor Székely

Augmented Reality System - Bones Repositioning Simulator

Objective:

In the last decade, Augmented Reality (AR) systems have proved their efficiency in various application areas such as mechanical maintenance and repair, outdoor architectural design, and military training. In this project, we explore a new approach for using this technology: AR-based medical training. The purpose is to develop a bones repositioning simulator allowing the future surgeon to both improve her skills and reduce the risk of errors.

However, AR setups for medical applications are still a challenge because the system imposes hard constraints. Registration between the real and the virtual world demands high accuracy to maintain the illusion that the virtual object belongs to the real scene, and low latency is indispensable for achieving a real-time system. Moreover, in the context of AR simulation, we focus on parameter setting (stiffness, mass, resting length) for soft tissue models. In our current approach we employ mass-spring systems for deformation computation. These models are used for simulating the virtual muscles that are attached to the real bones.
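As a minimal illustration of the mass-spring deformation model mentioned above (stiffness, mass, resting length), the sketch below simulates a small chain of particles with semi-implicit Euler time stepping. The chain topology, parameter values, and damping term are assumptions for this toy example.

```python
# Illustrative mass-spring chain with semi-implicit (symplectic) Euler stepping;
# all parameters and the chain topology are assumptions, not project values.
import numpy as np

n = 10                                   # particles in a simple chain
mass = 0.05                              # kg per particle
stiffness = 200.0                        # N/m
rest_length = 0.1                        # m
damping = 0.5                            # N*s/m
dt = 1e-3                                # s
gravity = np.array([0.0, -9.81])

pos = np.stack([np.arange(n) * rest_length, np.zeros(n)], axis=1)  # horizontal chain
vel = np.zeros_like(pos)
fixed = np.zeros(n, dtype=bool)
fixed[0] = True                          # first particle attached to the (real) bone

for _ in range(2000):
    forces = np.tile(mass * gravity, (n, 1)) - damping * vel
    for i in range(n - 1):               # spring between neighbouring particles
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = stiffness * (length - rest_length) * d / length
        forces[i] += f
        forces[i + 1] -= f
    acc = forces / mass
    vel[~fixed] += dt * acc[~fixed]      # update velocities, then positions (fixed points stay put)
    pos[~fixed] += dt * vel[~fixed]

print(pos[-1])                            # free end has sagged under gravity
```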

Participants: Dr. Gerald Bianchi, PD Dr. Matthias Harders, Prof. Gábor Székely

Characterization of the Remaining Coronary Artery Motion after Stabilization

Objective: Recent developments in robotic technology have enhanced surgical precision while operating through less invasive approaches in various surgical subspecialties. However, it is not surprising that the use of robotics for performing coronary artery bypass surgery has been slow, because of the additional challenge of perpetual cardiac motion and the precision demanded for a graft-to-coronary-artery anastomosis. It is here that adding further intelligence to robotic control could probably help. The motion remaining after coronary stabilization forces the surgeon to adapt to the movement of the heart, which could be responsible for inferior anastomosis quality and increased operative time. This study aimed to precisely characterize all aspects of the remaining coronary artery motion at a point of interest after Octopus stabilization on beating pig hearts, to understand its significance with regard to surgical precision during off-pump coronary artery bypass surgery (OPCAB), and to explore the possibilities of using it for mechanical motion cancellation.
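The analysis pipeline is not described in the text; a common way to characterise residual periodic motion at a tracked point is to inspect the amplitude spectrum of its trajectory, which separates cardiac from respiratory components. The sketch below uses a synthetic signal with assumed frame rate, rates, and amplitudes.

```python
# Illustrative characterisation of residual motion at a point of interest via
# its amplitude spectrum; the synthetic signal and all rates are assumptions.
import numpy as np

fs = 250.0                                        # tracking frame rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)                      # 20 s recording
cardiac = 0.4 * np.sin(2 * np.pi * 1.5 * t)       # ~90 bpm residual motion, mm
respiration = 1.2 * np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths/min, mm
signal = cardiac + respiration + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# report the two dominant frequency components and their amplitudes (mm)
peaks = np.argsort(spectrum)[-2:]
for i in sorted(peaks):
    print(f"{freqs[i]:.2f} Hz  amplitude {spectrum[i]:.2f} mm")
```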

Project-Website: http://co-me.ch/projects/cardio.en.html

Participants: Prof. Philippe Cattin, Prof. Gábor Székely

Partners:

University Hospital Zürich
Physical Electronics Laboratory, ETH Zürich
Institute of Mechatronic Systems, Zürich University of Applied Sciences, Winterthur

Biothermofluidics for Cerebrospinal Fluid, Diagnostics & Control Development of a Knowledge Base

This interdisciplinary research project aims at bringing together the skills and know-how of an international group of experts for a multifaceted investigation of the cerebrospinal fluid flow and transport phenomena within the cranial cavity and part of the spinal cord. The part carried out at the Computer Vision Laboratory deals with the extraction of the cerebrospinal fluid space geometry. In the first stage, a generic model of the ventricular system's geometry is prepared, which will be used in the development of the cerebrospinal fluid flow simulation. Subsequent efforts are aimed at developing a semiautomatic process for extracting the patient-specific cerebrospinal fluid space geometry. Both the acquisition of the initial data and the data processing algorithm must be adapted to the requirements of daily clinical practice.

Participants: Peter Cech, Prof. Philippe Cattin, Prof. Gábor Székely

Partners:

Laboratory for Thermodynamics in Emerging Technologies (LTNT), ETH Zürich
Measurement and Control Laboratory (IMRT), ETH Zürich
Institute of Biomedical Engineering, Biophysics Group, ETH Zürich
Neuroradiology, University Hospital Zürich
Bioengineering, Imperial College for Science, Technology and Medicine, London
CFD Research Corporation, Huntsville AL, USA
Institute of Anatomy, University of Bern

Hysteroscopy Simulator

Objective:

Hysteroscopy is the second most frequently performed endoscopic procedure in gynaecology and is typically part of any specialization program in gynaecology. It is to be expected that training on a reasonably realistic simulator could substantially contribute to reducing the rate of complications. The simulator will allow realistic real-time visualization of the intervention scene, including changes due to surgical actions and the control of the hydrometra by manipulating the liquid influx and efflux, as well as realistic tactile sensation.

The following components provided by all partners will be integrated into the simulator:

Clinical evaluation will be carried out in order to gain insight into which level of realism is needed to actually reach the goals of efficient surgical training on a VR-based trainer, with special emphasis on visual fidelity and the presence and quality of force feedback. Please also have a look at a former surgery simulation project in our lab.

Project-Website: http://www.hystsim.ethz.ch

Participants: PD Dr. Matthias Harders, Dr. Raimundo Sierra, Dr. János Zátonyi, Dr. Rupert Paget, Dr. Dominik Szczerba, Prof. Gábor Székely, Dr. Stefan Tuchschmid, Dr. Bryn Lloyd

Partners:

Computer Graphics Laboratory, ETH Zürich
Institute of Biomechanical Engineering, ETH Zürich
Institute for Mechanical Systems, ETH Zürich
Institute of Computational Science, ETH Zürich
Micromachines and Precision Instrumentation Lab, EPFL Lausanne
Clinic of Gynecology, Dept. OB/GYN, University Hospital Zürich

Automatic Segmentation of Vessels and the Identification of Vascular Pathology

Objective:

Vessel segmentation is a key component of every radiological diagnostic system. However, the lack of robust methods still forces radiologists to spend a considerable amount of their time manually segmenting and analyzing the vessels in CT or MR data.

Our final goal is the development of automatic vessel segmentation methods that not only detect vessels but also recognize diseases such as aneurysms or dissections. For the design and implementation of the application we plan to use the modular ILAB4 platform (developed by Mevis, Bremen), which greatly improves development cycles.
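The project's own pipeline on the ILAB4 platform is not detailed here; as a generic illustration of one common building block for automatic vessel detection, the sketch below applies Hessian-based (Frangi) vesselness filtering to a synthetic bright tube using scikit-image, with an arbitrary threshold.

```python
# Illustrative Hessian-based vessel enhancement (Frangi vesselness) on a
# synthetic 2D slice; not the project's actual method or parameters.
import numpy as np
from skimage.filters import frangi

# synthetic 2D slice: a bright, slightly curved "vessel" on a dark background
y, x = np.mgrid[0:128, 0:128]
centerline = 64 + 10 * np.sin(x / 20.0)
image = np.exp(-((y - centerline) ** 2) / (2 * 2.0 ** 2))   # Gaussian cross-section

vesselness = frangi(image, sigmas=range(1, 6), black_ridges=False)
mask = vesselness > 0.5 * vesselness.max()                   # crude threshold

print("vessel pixels:", int(mask.sum()))
```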

Participants: Dr. Tamas Kovacs, Prof. Gábor Székely

Partners:

Institute of Diagnostic Radiology, University of Zürich

Haptic Soft Tissue Interaction

Objective:

It is well known that haptic devices can enhance the perception of reality in virtual environments. Touch and force sensation is an important component of surgical simulators, which are currently being developed at many research centers and companies to help doctors acquire the special skills needed in surgery.

We are going to focus on possible techniques in haptic rendering for a newly developed 6 degree-of-freedom (DOF) force feedback device in the context of an open surgery simulator. The interaction with virtual organs via simple surgical devices like scalpels and scissors will be modeled. Furthermore, the possibilities of direct tissue palpation will be explored, and especially the impact of the new devices and paradigms will be studied. The main emphasis will be on the contact mechanics from a technical point of view, as well as on the simplifications made for achieving real-time haptic rendering.
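As a minimal illustration of haptic rendering of tissue contact (not the 6-DOF device's actual rendering algorithm, which is not described here), the sketch below computes a penalty-based contact force with a damping term for a flat tissue surface; the stiffness, damping, and contact geometry are assumptions.

```python
# Illustrative penalty-based haptic force for a flat tissue surface;
# stiffness, damping, and geometry are assumptions for this sketch.
import numpy as np

STIFFNESS = 400.0        # N/m, assumed virtual tissue stiffness
DAMPING = 2.0            # N*s/m, stabilising viscous term

def contact_force(tool_pos, tool_vel, surface_height=0.0):
    """Force on the tool for a horizontal tissue surface at z = surface_height."""
    penetration = surface_height - tool_pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                           # no contact, no force
    normal = np.array([0.0, 0.0, 1.0])
    force = STIFFNESS * penetration * normal         # penalty (spring) term
    force -= DAMPING * (tool_vel @ normal) * normal  # damping along the contact normal
    return force

# a tool tip pressed 2 mm into the tissue while still moving downward
print(contact_force(np.array([0.0, 0.0, -0.002]), np.array([0.0, 0.0, -0.01])))
# -> roughly [0, 0, 0.82]: an upward restoring force of about 0.8 N
```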

Project-Website: http://www.touch-hapsys.org/

Participants: Dr. Peter Leskovsky, PD Dr. Matthias Harders, Prof. Gábor Székely

Partners:

Technical University Berlin
Max Planck Institute for Biological Cybernetics
University of Pisa
Universite d'Evry Val-d'Essonne
University of Birmingham

Texturing for Hysteroscopic Surgery Simulator

Objective: Hysteroscopy is the visualisation of the inner surface of the uterus, performed by inserting both the endoscope and the surgical instrument through the cervix into the uterus. Therapeutic hysteroscopy is associated with a certain number of known serious complications, which can best be addressed through repetitive training by the surgeon. A hysteroscopic virtual-reality surgical simulator that provides a realistic and configurable training environment is seen as the ideal solution for providing this repetitive training. Texture synthesis for the hysteroscopic surgical simulator will need to address the following:

Any proposed solution to these areas needs to be implementable in real-time, while presenting a visual experience that is as close to life-like as possible.

Participants: Dr. Rupert Paget, Dr. János Zátonyi, Prof. Gábor Székely

Partners:

University Hospital Zürich

Automatic patellar cartilage segmentation from 3D MRI data volumes

Segmentation of organs is an important and very difficult step in the process of medical image analysis. The main effect of the fast development of medical imaging techniques (and especially MRI) is the huge amount of data that becomes available and must be analyzed by physicians. In this context, tools for automatic segmentation are extremely important. This project aims to develop such a tool, which will allow automatic segmentation of the patellar cartilage of the knee joint from multi-slice magnetic resonance images taken under clinically applicable conditions. The considered approach is to combine local feature analysis methods with a model-based approach, allowing robust segmentation even in the unavoidable presence of noise and artifacts.

Participants: Dr. Cristian Pirnog, Prof. Gábor Székely

Partners:

Laboratory for Biomechanics, ETH Zürich

Generation of Anatomical Models for Surgical Simulators

Objective:

In the past few years, virtual reality based systems have been proposed and realized for many medical interventions. These simulators have the potential to provide training on a wide variety of pathologies. So far, the realistic generation of anatomical variability and pathologies has not been treated as a specific issue. It has to be possible for a physician to generate an individual surgical scene for every training session.

This research will explore how to generate such anatomical models for surgical simulators considering the natural variability of the healthy anatomy and seamlessly integrating a wide spectrum of different pathologies according to the specifications from physicians.

Project-Website: http://www.rsierra.com/?main=phd

Participants: Dr. Raimundo Sierra, Prof. Gábor Székely

Partners:

Clinic of Gynecology, Dept. OB/GYN, University Hospital Zürich

Haptic Interaction with 3-Dimensional CT Data

Objective:

Diagnosis of medical ailments is increasingly done through CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) images. These images are constantly improving in resolution and quality. However, due to the three-dimensional nature of these images, it is often hard not to become confused and overwhelmed by the displayed information.

We are working on a program which renders 3D CT or MRI images to the screen using volume-rendering. Intelligent haptic (touch) feedback is provided from the data, allowing the user to "feel" as well as see the data (using the Sensable Phantom Device: www.sensable.com). This includes a technique which guides the user along the path of the intestine using Euclidean Distance Maps to calculate the forces. The user feels a force pushing the cursor towards the center of the intestine, allowing for fast and easy navigation along the winding turns of the intestine. This in turn allows the segmentation (separation from the remaining data) of the intestine by the drawing of a center-line, and the diagnosis of polyps inside the intestine.
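As an illustration of the distance-map-based guidance described above, the sketch below builds a Euclidean distance map of a toy 2D lumen and pushes the haptic cursor up its gradient, i.e. toward the centre line; the synthetic geometry and force gain are assumptions.

```python
# Illustrative centre-guidance force from a Euclidean distance map: inside a
# segmented lumen the distance-to-wall peaks on the centre line, so moving up
# the gradient steers the cursor toward the centre. Geometry and gain are assumed.
import numpy as np
from scipy import ndimage

# toy 2D lumen: a horizontal tube of radius ~6 voxels inside a 64x64 slice
lumen = np.zeros((64, 64), dtype=bool)
lumen[26:38, :] = True

dist = ndimage.distance_transform_edt(lumen)      # distance to the lumen wall
gy, gx = np.gradient(dist)                        # gradient points toward the centre line

def guidance_force(cursor, gain=0.5):
    """Force pulling the haptic cursor toward the centre of the lumen."""
    iy, ix = np.clip(np.round(cursor).astype(int), 0, 63)
    if not lumen[iy, ix]:
        return np.zeros(2)                         # outside the lumen: no guidance
    return gain * np.array([gy[iy, ix], gx[iy, ix]])

print(guidance_force(np.array([28.0, 10.0])))      # pushes toward rows 31-32 (the centre)
```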

Project-Website: http://www.touch-hapsys.org/

Participants: Dr. Christoph Spuhler, PD Dr. Matthias Harders, Prof. Gábor Székely

Partners:

Technical University Berlin
Max Planck Institute for Biological Cybernetics
University of Pisa
Universite d'Evry Val-d'Essonne
University of Birmingham

Modeling of blood vessels in the uterus

Angiogenesis, the growth of vascular structures, is an extremely complex biological process which has long puzzled scientists. A better physiological understanding of this phenomenon could result in many useful medical applications, from virtual surgery simulators for medical interventions to cancer therapy, where e.g. the influence of certain factors on the system could be simulated. Although a lot of research is being done on blood circulatory systems, and many models with a high level of mathematical sophistication have already been proposed, most of them offer very modest visual quality and unsatisfactory physiological insight into the resulting vascularities. This work proposes a macroscopic model allowing the generation of various vascular systems with high graphical fidelity for simulation purposes.

Participants: Dr. Dominik Szczerba, Prof. Gábor Székely

Surgical scene visualization - Synthesis of bleeding for Virtual Hysteroscopy

Objective: In a realistic hysteroscopic simulator, special attention has to be devoted to the simulation of intra-uterine bleeding, which impairs the visibility of the surgical scene until the surgeon correctly adjusts the inflow and outflow valves on the instrument. The aim of this project is to develop a computer model that can produce a visually appealing reconstruction of bleeding for the hysteroscopic simulator. Our task therefore incorporates the need for real-time synthesis and for responsiveness of the model to any actions introduced by the surgeon into the dynamic virtual reality environment.

Project-Website: http://www.vision.ee.ethz.ch:8080/projects/bleeding_synthesis/index.html

Participants: Dr. Rupert Paget, Dr. János Zátonyi, Prof. Gábor Székely

Astra

Real object surfaces often have a certain roughness to them. This results in a textured appearance in their images that depends on viewing and illumination conditions. There is an extensive computer vision and computer graphics literature on the description, analysis, and synthesis of such textures. Nevertheless, there are a number of crucial limitations in most of this work:

The current project consists of two tasks to remedy this situation.

Task 1. Texture analysis for robust material classification

Here we intend to first improve our own texture model and maximally adapt it to the task of texture classification. Secondly, we will do research to improve the performance of texton-based approaches. Thirdly, we will combine both types of approaches into a single texture classification scheme. This should combine the advantages of both strands, while leaving as many of the weaknesses behind as possible. As part of the outcome of this task, it will be possible to classify textures under variable imaging conditions. The goal is, of course, also to improve classification rates over those found in the state of the art.

Task 2. Multiview consistent texture synthesis

Here we will design powerful texture models for synthesis. As in Task 1, we start by improving our own texture modeling and synthesis approach. This will be done in several ways, including better clique type selection, the refinement of our composite texture approach, and increasing the efficiency of the process. Secondly, the methods will be generalized towards the handling of variable imaging conditions. Thirdly, we will again combine the resulting methods with other state-of-the-art techniques, in this case smart copying methods, hoping to get the best of both worlds. In particular, the goal is to arrive at a method that is fast, yields very realistic textures (still under variable viewing conditions), and avoids the disturbing verbatim copying effects of current smart copying techniques. The work is also relevant for the creation of more realistic textures on curved surfaces, with results far more sophisticated than simple texture foreshortening and shading dependent on the surface location.

Participants: Prof. Luc Van Gool, Dr. Alexey (Oleksiy) Zalesny

INTUITION

Objective:

One of INTUITION's targets is to overcome the observed fragmentation and to enable a lasting integration and structuring effect in the European research area, in order to realize the potential of VR/VE in developing good working practices for all.


Project-Website: http://www.intuition-eunetwork.net/

Participants: Dr. Christoph Spuhler, PD Dr. Matthias Harders, Prof. Gábor Székely

IMMERSENCE

Objective:

The main objective is to enable highly realistic multi-modal interactive immersion into virtual and augmented reality environments. Its focus will be on visual, haptic and auditory sensory components, while an even broader range of human senses will also be addressed.

While aiming at full multimodal feedback of all relevant information, the project will focus on hand-based (i.e. manual) tasks when dealing with interaction, in order to keep the related problems tractable within the frame of a single integrated project. For the same reason, we will not cover all facets of information exchange that can emerge in an interactive situation; in particular, semantics-related issues such as verbal communication will be ignored, while emotional aspects will be explicitly addressed. Haptic enhancement of the multimodal environment will be a special focus of the project, in order to compensate for the relatively limited effort spent on this area up to now, compared to the visual or auditory components.

Project-Website: http://www.immersence.info

Participants: PD Dr. Matthias Harders, Prof. Gábor Székely, Benjamin Hess, Dr. Konrad Schindler, Dr. Henning Hamer, Dr. Esther Koller-Meier

Partners:

TUM: Technische Universität München, Germany
LSC: University of Evry Val d'Essonne, France
MPI-T: Max Planck Society, Germany
TECH: Technion, Israel
UBIRM: University of Birmingham, UK
UPC: Universitat Politécnica de Catalunya, Spain
UNIPI: University of Pisa, Italy
UPM: Universidad Politécnica de Madrid, Spain