Copyright Statement: This is an open-access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2014.050323
Article published in the International Journal of Advanced Computer Science and Applications (IJACSA), Volume 5, Issue 3, 2014.
Abstract: One of the current limits of laparosurgery is the absence of a 3D sensing facility for standard monocular laparoscopes. Significant progress has been made in acquiring 3D from a single camera using Visual SLAM (Simultaneous Localization And Mapping); however, most current approaches rely on the assumption that the observed tissue is rigid or undergoes periodic deformations. In laparoscopic surgery, these assumptions do not apply, due to the unpredictable and elastic deformation of the tissues. We propose a new sequential 3D reconstruction method adapted to reconstructing organs in the abdominal cavity. We draw on recent computer vision methods that exploit a known 3D view of the environment at rest position, called a template. However, no such method has ever been attempted in vivo. State-of-the-art methods assume that the environment can be modeled as an isometric developable surface: one which deforms isometrically to a plane. While this assumption holds for paper and cloth-like surfaces, it certainly does not fit human organs and tissue in general. Our method tackles these limits: it uses a non-developable template and copes with natural 3D deformations by introducing a quasi-conformal prior. Our method adopts a new two-phase approach. First, the 3D template is reconstructed in vivo using RSfM (Rigid Shape-from-Motion) while the surgeon is exploring, but not deforming, structures in the abdominal cavity. Second, the surgeon manipulates and deforms the environment. Here, the 3D template is quasi-conformally deformed to match the 2D image data provided by the monocular laparoscope. This second phase relies on a single image only; it therefore copes with both sequential processing and self-recovery from tracking failures. The proposed approach has been validated using (i) in-vivo animal data with ground truth, and (ii) in-vivo laparoscopic videos of a real patient's uterus.
Our experimental results illustrate the ability of our method to reconstruct natural 3D deformations typical of real surgery.
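The second phase described above can be sketched in code. The following is a toy illustration only, not the paper's implementation: it deforms a tiny known 3D template mesh so that its pinhole projection matches observed 2D image points, regularized by a quasi-conformal prior. Here the prior is approximated by preserving per-triangle edge-length ratios (angle preservation under local scaling); the mesh, camera model, focal length `f`, and weight `lam` are all assumed values chosen for the example.

```python
# Toy sketch of template-based quasi-conformal shape-from-motion (phase two):
# fit a deformed 3D template to 2D points from a single image, with a
# quasi-conformal regularizer. All numerical choices here are illustrative.
import numpy as np
from scipy.optimize import minimize

f = 1.0  # assumed pinhole focal length

def project(V):
    """Pinhole projection of Nx3 vertices to Nx2 image points."""
    return f * V[:, :2] / V[:, 2:3]

# Template at rest: a small two-triangle mesh in front of the camera.
V0 = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
               [0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
tris = [(0, 1, 2), (1, 3, 2)]

def tri_ratios(V):
    """Edge-length ratios per triangle; invariant under conformal scaling."""
    r = []
    for i, j, k in tris:
        a = np.linalg.norm(V[j] - V[i])
        b = np.linalg.norm(V[k] - V[j])
        c = np.linalg.norm(V[i] - V[k])
        r.extend([a / b, b / c])
    return np.array(r)

# Simulated single-image observation: the surface undergoes a uniform
# in-plane scaling (a conformal deformation), then is projected.
V_true = V0 * np.array([1.2, 1.2, 1.0])
q = project(V_true)

lam = 10.0          # assumed weight of the quasi-conformal prior
r0 = tri_ratios(V0)  # reference ratios from the rest-position template

def cost(x):
    V = x.reshape(-1, 3)
    data = np.sum((project(V) - q) ** 2)       # 2D reprojection term
    prior = np.sum((tri_ratios(V) - r0) ** 2)  # quasi-conformal term
    return data + lam * prior

res = minimize(cost, V0.ravel(), method="L-BFGS-B")
V_est = res.x.reshape(-1, 3)
print("final reprojection error:",
      np.sum((project(V_est) - q) ** 2))
```

Because the fit uses only one image, the same optimization can be rerun independently on each frame, which is what gives the single-image phase its sequential-processing and failure-recovery properties.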
Abed Malti, "Variational Formulation of the Template-Based Quasi-Conformal Shape-from-Motion from Laparoscopic Images," International Journal of Advanced Computer Science and Applications (IJACSA), 5(3), 2014. http://dx.doi.org/10.14569/IJACSA.2014.050323