Efficient complementary viewpoint selection in volume rendering
Document type: Conference report
Rights access: Open Access
A major goal of visualization is to convey the knowledge contained in scientific data appropriately. Extracting the visual information in volume data often demands considerable expertise from the end user to set up the visualization parameters. One way to alleviate this problem is to present the inner structures from several viewpoint locations, which enhances perception and the construction of a mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, to help users easily place a good viewpoint, this paper proposes an automatic and interactive method that locates complementary viewpoints relative to a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure with the shape similarity of the candidate viewpoints' projections, measured with Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications; the user interactively receives feedback from several viewpoints that helps them interpret the visual information. A live-user evaluation on different data sets shows good convergence toward useful complementary viewpoints.
Citation: Grau, S. [et al.]. Efficient complementary viewpoint selection in volume rendering. In: International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. "21st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2013 - Full Papers Proceedings". Plzen: 2013, p. 69-78.
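The abstract's shape-similarity component relies on Dynamic Time Warping to compare viewpoint projections. As a minimal sketch of the idea (not the authors' implementation), the standard DTW dynamic program below compares two 1-D "shape signatures", e.g. distances sampled along a projected silhouette; how those signatures are extracted from the projections is assumed to happen elsewhere:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost.

    Illustrative sketch: `a` and `b` are 1-D shape signatures of two
    projections (hypothetical inputs, not the paper's exact features).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a[i-1] repeats against b
                                  dp[i][j - 1],      # b[j-1] repeats against a
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]

# Identical signatures align at zero cost; a time-shifted copy stays cheap,
# which is why DTW suits silhouettes sampled from slightly different angles.
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))  # 0.0
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1]))  # 1.0
```

A low DTW distance between two projections' signatures means the viewpoints show similar shapes, so a complementary viewpoint would instead be chosen among candidates whose projections score high on information but low on similarity to the reference view.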