...Hartmann
Institute for Knowledge and Language Engineering, Faculty of Computer Science, Otto-von-Guericke University of Magdeburg, Universitätsplatz 2, D-39106 Magdeburg, Germany, email: hartmann@iws.cs.uni-magdeburg.de
...Preim
MeVis - Center for Diagnostic Systems and Visualization GmbH, Universitätsallee 29, D-28359 Bremen, Germany, email: bernhard@cevis.uni-bremen.de
...Strothotte
Department of Simulation and Graphics, Faculty of Computer Science, Otto-von-Guericke University of Magdeburg, Universitätsplatz 2, D-39106 Magdeburg, Germany, email: tstr@isg.cs.uni-magdeburg.de
...captions
In Bernard's classification [2], instructive figure captions are intended to focus the attention of a viewer on important parts of the illustration. Although our definition was inspired by this term, it clearly differs from Bernard's usage.
...meta-objects
Meta-objects are graphical objects like arrows which "do not directly correspond to physical objects in the world being illustrated" [28, p. 127].
...captions
In the following, the term figure caption refers to descriptive figure captions.
...conventions.
Muscles are depicted in red, nerves in yellow, and bones in white.
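For illustration purposes only, such a color convention could be encoded as a simple lookup table; the category names and RGB values below are our own assumptions, not part of the system described here:

```python
# Hypothetical encoding of the color conventions mentioned above;
# category names and RGB values are illustrative assumptions.
COLOR_CONVENTIONS = {
    "muscle": (0.8, 0.1, 0.1),  # red
    "nerve":  (0.9, 0.9, 0.1),  # yellow
    "bone":   (1.0, 1.0, 1.0),  # white
}

def color_for(category):
    """Return the conventional color for a structure category."""
    return COLOR_CONVENTIONS.get(category, (0.5, 0.5, 0.5))  # gray fallback
```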
...invalid
We refer to those parts of the figure caption that do not correctly reflect the current illustration as invalid.
...expert.
This terminology is related to the reference model for Intelligent Multimedia Presentation Systems [3]. The core of the reference model is an architectural scheme of the key components of multimedia presentation systems.
...specification.
Both the system and the user can initiate the generation of figure captions (recall Section 5.1). In the latter case, the interactive figure caption module updates the figure captions immediately.
...employed.
For a fixed number of viewing directions, the visibility of the parts of a complex object is analyzed. Assuming a constant distance between the camera position and the center of the model during user interaction, each viewing direction yields a fixed camera position. For each of these camera positions, rays are traced through the pixels of the rendered image. This method returns sequences of object hits, which are used to estimate the relative visibility and the relative size of the projection of each part of the model. Moreover, the list of objects occluding a given part can be determined (e.g., object 1 is in front of objects 2 and 3 at position (x, y)). The relative visibility of a given part is the fraction of rays that hit it first among all rays that hit it at all. Because this analysis is computationally expensive, these values are precomputed for the predefined set of viewing directions, whereas the values for other viewing directions are estimated by linear interpolation between the recorded values. It turned out that for our anatomical model, visibility can be estimated well enough with 26 predefined viewing directions, which result from increasing the azimuth and declination angles in steps of 45 degrees.
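The following Python sketch illustrates these steps under stated assumptions: the ray caster itself is taken as given (it is only assumed to deliver, per pixel, the front-to-back sequence of hit parts), and all function and variable names are hypothetical rather than taken from the actual implementation.

```python
import math

def predefined_directions(step=45):
    """The 26 sample directions: azimuth and declination in 45-degree
    steps (8 azimuths x 3 mid declinations, plus the two poles)."""
    dirs = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]  # poles
    for decl in (-45, 0, 45):
        for azim in range(0, 360, step):
            a, d = math.radians(azim), math.radians(decl)
            dirs.append((math.cos(d) * math.cos(a),
                         math.cos(d) * math.sin(a),
                         math.sin(d)))
    return dirs

def relative_visibility(hit_sequences):
    """hit_sequences: per-pixel, front-to-back lists of the parts hit
    by a ray. Returns, per part, the fraction of rays that hit it
    first among all rays that hit it at all."""
    first, total = {}, {}
    for hits in hit_sequences:
        for i, part in enumerate(hits):
            total[part] = total.get(part, 0) + 1
            if i == 0:
                first[part] = first.get(part, 0) + 1
    return {p: first.get(p, 0) / total[p] for p in total}

def estimate_visibility(view_dir, samples, part):
    """Estimate visibility for an arbitrary viewing direction as a
    weighted average over the precomputed sample directions, a simple
    stand-in for the interpolation described in the text."""
    acc, weights = 0.0, 0.0
    for direction, vis in samples:  # samples: [(direction, {part: visibility})]
        w = max(0.0, sum(a * b for a, b in zip(view_dir, direction)))
        acc += w * vis.get(part, 0.0)
        weights += w
    return acc / weights if weights else 0.0
```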
...template-based
Templates are fixed natural language expressions which may contain variables. When a template is activated, the values of the template variables and appropriate natural language expressions describing them have to be determined.
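A minimal sketch of such a template in Python; the template text, the object identifiers, and the lexicalization table are invented examples rather than templates from the system itself:

```python
# Hypothetical template with two variables; identifiers and wording invented.
TEMPLATE = "The {structure} is partially occluded by the {occluder}."

LEXICON = {  # maps internal object identifiers to natural language expressions
    "n_median": "median nerve",
    "m_biceps": "biceps muscle",
}

def instantiate(template, **variables):
    """Determine natural language expressions for the variable values
    and fill them into the template."""
    expressions = {slot: LEXICON.get(value, value)
                   for slot, value in variables.items()}
    return template.format(**expressions)

print(instantiate(TEMPLATE, structure="n_median", occluder="m_biceps"))
# -> The median nerve is partially occluded by the biceps muscle.
```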
...coherent
Text as a collection of related sentences.
...cohesive
Text that signals the relations between text portions.
...HREF="node16.html#figfootZoom">9.
The images within this section are furnished with handmade figure captions following the macrostructure presented in Figure 5.
...techniques
These text planning techniques are already employed in the Visdok project [9].