
Introduction

 

Despite great advances in computer graphics, in hardware as well as in software, rendered images are seldom used for educational purposes. In contrast, designed or even hand-drawn illustrations are commonly used in textbooks (see [31] for a detailed discussion). A variety of illustration techniques are employed: objects are not always drawn to scale, the level of detail varies greatly across the image, and objects are often drawn in unnatural colors.

Indeed, from an analysis of hand-made illustrations we conclude that abstraction techniques are essential for improving the quality of computer-generated illustrations for educational purposes.

Recent advances in computer graphics (see [26], [27], and [33]) now make it possible to influence graphics generation at different levels and thus to adapt visualizations to the user's information extraction task. For example, the Zoom Illustrator [19] incorporates a number of abstraction techniques for the interactive exploration of anatomic models. The objects within these models are annotated with labels, which provide information at varying levels of detail. In this system, users can interact with labels as well as with 3D models to request an explanation. The selection of a label, for instance, triggers an adaptation of the 3D model to emphasize the labeled object. In this process, the 3D model may be rotated, and various abstraction techniques are employed to emphasize the objects in which the user is interested.
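The label-selection interaction described above can be sketched as follows. This is a minimal illustrative sketch, not the actual Zoom Illustrator implementation; all names (SceneObject, Label, Model3D, select_label) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """An object in the anatomic 3D model."""
    name: str
    emphasized: bool = False

@dataclass
class Label:
    """A textual annotation attached to one scene object."""
    text: str
    target: SceneObject

class Model3D:
    def __init__(self, objects):
        self.objects = list(objects)

    def select_label(self, label: Label):
        """Selecting a label emphasizes the labeled object and
        de-emphasizes all others (e.g. they could be rendered
        desaturated or more transparent)."""
        for obj in self.objects:
            obj.emphasized = (obj is label.target)
        return [o.name for o in self.objects if o.emphasized]

# Usage: selecting the liver's label emphasizes only the liver.
liver = SceneObject("liver")
kidney = SceneObject("kidney")
model = Model3D([liver, kidney])
label = Label("Liver: the largest internal organ", liver)
print(model.select_label(label))  # → ['liver']
```

The essential point of the sketch is that the selection propagates from the textual layer (the label) to the graphical layer (the model), where abstraction techniques then realize the emphasis.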

Illustrations in textbooks are accompanied by figure captions, which frequently mention the illustration techniques that were applied. Because of these abstraction techniques, an illustration may not correspond to a physically correct image of the depicted object, and may therefore be misinterpreted without such guidance.

Thus, figure captions serve at least two different functions: first and foremost, they describe what is depicted; in addition, they describe the effects of abstraction techniques (what has been removed, or scaled up or down). Figure captions may therefore guide the interpretation of such abstracted images.

The orientation provided by such captions is well recognized in psychology. Gombrich, for instance, argues that "no picture tells its own story" [7]. Weidenmann considers this statement to hold for educational situations as well [35].

Therefore, we furnished the interactive illustrations of the Zoom Illustrator with figure captions. In this paper, we describe some aspects of the automated generation of such captions.

This paper is organized as follows: In Section 2 the term abstraction is defined. Based on this definition, typical abstraction techniques are briefly surveyed. In Section 3 we classify figure captions according to their function. Section 4 presents an analysis of visualizations and figure captions in anatomy.

The incorporation of figure captions in interactive systems raises the issue of visualizations that are subject to change. In Section 5, different strategies for the generation of dynamic figure captions are described. The content of these captions can be adapted to meet the preferences and needs of different users (Section 5.2). In Section 6 we propose a framework in which abstraction is applied, automatically or interactively, to generate visualizations of complex information spaces together with figure captions describing these visualizations. Different methods for the realization of figure captions are proposed in Section 7, along with a prototypical implementation. We conclude with a discussion of related work (Section 8), open questions of the current approach (Section 9), and a summary (Section 10).



Thursday, 4 March 1999, 18:06:27 MET