Advanced Visualization Techniques for Laparoscopic Liver Surgery

Dimitrios Felekidis
Scientific Visualization Group, Linköping University, Sweden

Peter Steneteg
Scientific Visualization Group, Linköping University, Sweden

Timo Ropinski
Visual Computing Group, Ulm University, Germany


In: Proceedings of SIGRAD 2015, June 1st and 2nd, Stockholm, Sweden

Linköping Electronic Conference Proceedings 120:8, pp. 28-31


Published: 2015-11-24

ISBN: 978-91-7685-855-4

ISSN: 1650-3686 (print), 1650-3740 (online)


To make it easier for surgeons to locate tumors during laparoscopic liver surgery, and to form a mental image of the remaining structures, 3D models of the liver's inner structures are extracted from a preoperative CT scan and overlaid onto the live video stream obtained during surgery. In this way, surgeons can virtually look into the liver, locate the tumors (focus objects), and gain a basic understanding of their spatial relation to other critical structures. In this paper, we present techniques for enhancing the spatial comprehension of the focus objects in relation to their surroundings while they are overlaid onto the live endoscope video stream. To obtain an occlusion-free view without destroying the context, we place a cone at the position of each focus object, facing the camera. The cone creates an intersection surface (cut volume) that cuts the structures, visualizing the depth of the cut and the spatial relation between the focus object and the intersected structures. Furthermore, we combine this technique with several rendering approaches that have proven useful for enhancing depth perception in other scenarios.
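The core geometric test behind such a camera-facing cut cone can be illustrated with a minimal sketch: a point of the overlaid anatomy is cut away when it falls inside a cone whose apex sits at the focus object and whose axis points toward the camera. The function name, vector representation, and the default half-angle below are illustrative assumptions, not the paper's implementation.

```python
import math

def inside_cut_cone(point, focus, camera, half_angle_deg=25.0):
    """Return True if `point` lies inside the cut cone.

    The cone's apex is at the focus object (e.g. a tumor centre) and its
    axis points from the focus toward the camera, so occluding structures
    in front of the focus are removed while the surrounding context is
    preserved. All parameters are 3-tuples of floats.
    """
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    norm = lambda a: math.sqrt(dot(a, a))

    axis = sub(camera, focus)
    length = norm(axis)
    axis = tuple(x / length for x in axis)   # unit axis toward the camera
    v = sub(point, focus)
    t = dot(v, axis)                         # signed distance along the axis
    if t <= 0.0:                             # behind the apex: never cut
        return False
    # distance from the cone axis, compared against the cone radius at depth t
    radial = norm(sub(v, tuple(t * x for x in axis)))
    return radial <= t * math.tan(math.radians(half_angle_deg))
```

In a renderer this test would typically run per fragment (e.g. in a shader) to discard geometry inside the cone; the depth `t` at which a structure is intersected is what visualizes how deep the cut reaches.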


Keywords: Visualization; tumor detection; depth perception


