REFERENCES

1. Diana M, Marescaux J. Robotic surgery. Br J Surg 2015;102:e15-28.

2. Fang G, Chow MCK, Ho JDL, et al. Soft robotic manipulator for intraoperative MRI-guided transoral laser microsurgery. Sci Robot 2021;6:eabg5575.

3. Mo H, Wei R, Ouyang B, et al. Control of a flexible continuum manipulator for laser beam steering. IEEE Robot Autom Lett 2021;6:1074-81.

4. Li B, Wei R, Xu J, et al. 3D perception based imitation learning under limited demonstration for laparoscope control in robotic surgery. In: 2022 International Conference on Robotics and Automation (ICRA); 2022 May 23-27; Philadelphia, USA. IEEE; 2022. pp. 7664–70.

5. Wei R, Li B, Mo H, et al. Stereo dense scene reconstruction and accurate localization for learning-based navigation of laparoscope in minimally invasive surgery. IEEE Trans Biomed Eng 2023;70:488-500.

6. Zhong F, Li P, Shi J, et al. Foot-controlled robot-enabled enDOscope manipulator (FREEDOM) for sinus surgery: design, control, and evaluation. IEEE Trans Biomed Eng 2020;67:1530-41.

7. Schönberger JL, Frahm JM. Structure-from-motion revisited. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 Jun 27-30; Las Vegas, USA. IEEE; 2016. pp. 4104–13.

8. Liu X, Sinha A, Ishii M, et al. Dense depth estimation in monocular endoscopy with self-supervised learning methods. IEEE Trans Med Imaging 2020;39:1438-47.

9. Karaoglu MA, Brasch N, Stollenga M, et al. Adversarial domain feature adaptation for bronchoscopic depth estimation. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2021. Springer, Cham; 2021. pp. 300–10.

10. Shao S, Pei Z, Chen W, et al. Self-supervised monocular depth and ego-motion estimation in endoscopy: appearance flow to the rescue. Med Image Anal 2022;77:102338.

11. Wei R, Li B, Mo H, et al. Distilled visual and robot kinematics embeddings for metric depth estimation in monocular scene reconstruction. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp. 8072–7.

12. Colleoni E, Edwards P, Stoyanov D. Synthetic and real inputs for tool segmentation in robotic surgery. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2020. Springer, Cham; 2020. pp. 700–10.

13. Long Y, Wu JY, Lu B, et al. Relational graph learning on visual and kinematics embeddings for accurate gesture recognition in robotic surgery. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021 May 30 - Jun 05; Xi'an, China. IEEE; 2021. pp. 13346–53.

14. van Amsterdam B, Funke I, Edwards E, et al. Gesture recognition in robotic surgery with multimodal attention. IEEE Trans Med Imaging 2022;41:1677-87.

15. Mildenhall B, Srinivasan PP, Tancik M, Barron JT, Ramamoorthi R, Ng R. NeRF: representing scenes as neural radiance fields for view synthesis. Commun ACM 2021;65:99-106.

16. Niemeyer M, Geiger A. GIRAFFE: representing scenes as compositional generative neural feature fields. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021 Jun 20-25; Nashville, USA. IEEE; 2021. pp. 11448–59.

17. Deng K, Liu A, Zhu JY, Ramanan D. Depth-supervised NeRF: fewer views and faster training for free. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 18-24; New Orleans, USA. IEEE; 2022. pp. 12872–81.

18. Wei Y, Liu S, Rao Y, Zhao W, Lu J, Zhou J. NerfingMVS: guided optimization of neural radiance fields for indoor multi-view stereo. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV); 2021 Oct 10-17; Montreal, Canada. IEEE; 2021. pp. 5590–9.

19. Rematas K, Liu A, Srinivasan P, et al. Urban radiance fields. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 18-24; New Orleans, USA. IEEE; 2022. pp. 12922–32.

20. Wang Y, Long Y, Fan SH, Dou Q. Neural rendering for stereo 3D reconstruction of deformable tissues in robotic surgery. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2022. Springer, Cham; 2022. pp. 431–41.

21. Kajiya JT, Von Herzen BP. Ray tracing volume densities. ACM SIGGRAPH Comput Graph 1984;18:165-74.

22. Lee JH, Han MK, Ko DW, Suh IH. From big to small: multi-scale local planar guidance for monocular depth estimation. arXiv. [Preprint] Sep 23, 2021 [accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.1907.10326.

23. Valentin J, Kowdle A, Barron JT, et al. Depth from motion for smartphone AR. ACM Trans Graph 2018;37:1-19.

24. Curless B, Levoy M. A volumetric method for building complex models from range images. In: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. Association for Computing Machinery; 1996. pp. 303–12.

25. Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. In: Seminal graphics: pioneering efforts that shaped the field. Association for Computing Machinery; 1998. pp. 347–53.

26. Allan M, Mcleod J, Wang C, et al. Stereo correspondence and reconstruction of endoscopic data challenge. arXiv. [Preprint] Jan 28, 2021 [accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.2101.01133.

27. Li Z, Dekel T, Cole F, et al. Learning the depths of moving people by watching frozen people. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019 Jun 15-20; Long Beach, USA. IEEE; 2019. pp. 4516–25.

28. Eigen D, Puhrsch C, Fergus R. Depth map prediction from a single image using a multi-scale deep network. arXiv. [Preprint] Jun 9, 2014 [accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.1406.2283.

29. Zhou T, Brown M, Snavely N, Lowe DG. Unsupervised learning of depth and ego-motion from video. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, USA. IEEE; 2017. pp. 6612–9.

30. Ozyoruk KB, Gokceler GI, Bobrow TL, et al. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med Image Anal 2021;71:102058.

31. Lu Y, Wei R, Li B, et al. Autonomous intelligent navigation for flexible endoscopy using monocular depth guidance and 3-D shape planning. In: 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023 May 29 - Jun 02; London, UK. IEEE; 2023. pp. 1–7.

32. Prendergast JM, Formosa GA, Fulton MJ, Heckman CR, Rentschler ME. A real-time state dependent region estimator for autonomous endoscope navigation. IEEE Trans Robot 2021;37:918-34.

33. Cheng X, Zhong Y, Harandi M, Drummond T, Wang Z, Ge Z. Deep laparoscopic stereo matching with transformers. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2022. Springer, Cham; 2022. pp. 464–74.

34. Zhou H, Jagadeesan J. Real-time dense reconstruction of tissue surface from stereo optical video. IEEE Trans Med Imaging 2020;39:400-12.

35. Wang J, Suenaga H, Hoshi K, et al. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. IEEE Trans Biomed Eng 2014;61:1295-304.

36. Leonard S, Sinha A, Reiter A, et al. Evaluation and stability analysis of video-based navigation system for functional endoscopic sinus surgery on in vivo clinical data. IEEE Trans Med Imaging 2018;37:2185-95.

37. Recasens D, Lamarca J, Fácil JM, Montiel JMM, Civera J. Endo-depth-and-motion: reconstruction and tracking in endoscopic videos using depth networks and photometric constraints. IEEE Robot Autom Lett 2021;6:7225-32.

38. Mahmoud N, Collins T, Hostettler A, Soler L, Doignon C, Montiel JMM. Live tracking and dense reconstruction for handheld monocular endoscopy. IEEE Trans Med Imaging 2019;38:79-89.
