Beyond AI and robotics: the dawn of surgical automation in spine surgery
Abstract
Artificial intelligence (AI), deep learning (DL), and machine learning (ML) algorithms are revolutionizing spine surgery. Soon, these technologies may allow automated devices to be integrated into clinical practice. Many roles for such devices are yet to be imagined and developed, but automated surgical systems could plausibly assist spine surgeons in a variety of ways, such as contextual guidance, precise screw placement, and intraoperative monitoring. In the not-too-distant future, such devices may be able to perform entire surgeries autonomously. Current literature suggests that advancements toward autonomous robotic surgery may improve surgical approaches and reduce unwanted variation in the clinical outcomes of spine surgery. This review examines current trends, practices, and advancements in surgical automation and provides an overview of the stages of automation of devices currently employed within spine surgery.
INTRODUCTION
Artificial intelligence, machine learning, deep learning, and surgical automation
Artificial intelligence (AI) is transforming healthcare by approximating human intelligence in machines, enabling them to perform complex tasks autonomously. At the forefront of this movement is deep learning (DL), a subset of AI that uses neural networks structured in multiple layers of interconnected nodes[1]. Convolutional neural networks (CNNs), a type of DL algorithm, have shown exceptional proficiency in analyzing medical images, and more recently, transformers have further extended the capabilities of AI models. Transformers use a self-attention mechanism to capture context, enable parallel processing of data, and incorporate positional encoding to preserve the order of input sequences. They power state-of-the-art models such as GPT, Llama, and Gemini, significantly advancing text generation, classification, and translation, and extending to computer vision and speech recognition.
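To make the self-attention mechanism described above concrete, the following minimal sketch computes single-head scaled dot-product attention in plain NumPy over a toy input sequence. The dimensions and random weights are illustrative assumptions and are not taken from any model discussed in this review.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence X of shape (tokens, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarity, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: how much each token attends to the others
    return weights @ V                        # context-aware representation of every token

rng = np.random.default_rng(0)
d_model, d_k, tokens = 8, 4, 5                # toy sizes for illustration only
X = rng.normal(size=(tokens, d_model))        # e.g., embedded report tokens or image patches
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```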
Nearly 30% of the world’s data are produced by the healthcare sector, and roughly 80% of those data are unstructured. The average American hospital is estimated to produce 50 petabytes of data every year, double the size of the Library of Congress[2]. This data-rich environment presents a unique opportunity for AI.
Surgical automation aims to develop devices capable of performing surgery with varying degrees of autonomy without the intervention of humans[3]. DL algorithms optimize surgical strategies based on pre- and intraoperative patient data, leveraging predictive models to anticipate complications and adapt surgical plans dynamically. The application of these technologies in surgical robotics includes systems for image-guided navigation, autonomous instrument control, and real-time decision support.
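As a hedged illustration of the kind of predictive model referred to above, the sketch below trains a logistic regression on entirely synthetic pre- and intraoperative features (age, BMI, instrumented levels, estimated blood loss) to estimate a complication probability. The features, data, and coefficients are invented for demonstration and do not come from any cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
# Hypothetical pre-/intraoperative features: age, BMI, instrumented levels, blood loss (mL)
X = np.column_stack([
    rng.normal(60, 12, n),       # age
    rng.normal(28, 5, n),        # BMI
    rng.integers(1, 5, n),       # instrumented levels
    rng.normal(350, 120, n),     # estimated blood loss
])
# Synthetic label: complication risk loosely increasing with age and blood loss
logit = 0.03 * (X[:, 0] - 60) + 0.004 * (X[:, 3] - 350) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("predicted complication probability:", model.predict_proba([[72, 31, 3, 600]])[0, 1])
```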
While the integration of AI-based frameworks in surgical robotics progresses at a guarded pace - as self-learning systems are still striving to achieve clinically acceptable confidence levels - mechanical advances in surgical devices have facilitated the integration of automation in the operating room (OR).
Current robotic developments focus on creating a streamlined operating space; guidance cameras, robotic arms, and attachments can be stored in a central console and manipulated from a single site. Robotic arms now offer movement up to 6 or 7 degrees of freedom (DOF) and are integrated with imaging derived from preoperative annotations, 3D field mapping cameras, and traditional O-arms to precisely guide operative trajectories[4,5]. These innovations have enabled high-accuracy implant placement and corrections while reducing radiation exposure and the need to transport patients between different pieces of equipment. Haptic feedback and self-stabilizing arms have further improved safety outcomes for patients. In addition to reducing peak force applied, instrumentation collision, and risk of undesired tissue penetration, tactile sensation coupled with self-stabilization serves to reduce surgeon fatigue[6-8]. As automated systems work toward more complex procedures, advances in both the hardware and software layers are necessary for surgical automation to materialize.
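As a simplified illustration of what the 6 or 7 degrees of freedom mentioned above mean for trajectory guidance, the sketch below composes one homogeneous transform per joint to obtain the tool-tip pose of a toy serial arm. It uses a planar chain with made-up link lengths and joint angles, not the kinematics of any commercial system.

```python
import numpy as np

def joint_transform(theta, link_length):
    """Homogeneous transform: rotate about z by theta, then translate along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, 0, link_length * c],
        [s,  c, 0, link_length * s],
        [0,  0, 1, 0],
        [0,  0, 0, 1],
    ])

def forward_kinematics(joint_angles, link_lengths):
    """Compose one transform per joint; each articulated joint contributes one DOF."""
    T = np.eye(4)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ joint_transform(theta, L)
    return T  # 4x4 pose of the tool tip in the robot base frame

# Toy planar 6-joint chain (illustrative numbers only)
angles = np.radians([10, -20, 35, 5, -15, 25])
lengths = [0.30, 0.25, 0.20, 0.15, 0.10, 0.08]  # meters
pose = forward_kinematics(angles, lengths)
print("tool-tip position (m):", np.round(pose[:3, 3], 3))
```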
History and stages of surgical automation
The concepts of autonomous and robotic surgery go hand-in-hand and have progressed together since their inception roughly 40 years ago. The first surgical robot used in the OR was the Programmable Universal Machine for Assembly (PUMA) 200 robotic arm, which performed precise neurosurgical biopsies in 1985[9]. The 1990s introduced the PROBOT, designed for prostate surgery, and the Robodoc Surgical System, which enhanced hip replacement procedures[10,11]. As robotics became established in the surgical setting, emphasis was placed on developing robots implementing the “master-slave” framework, in which an operator controls the movements of the machine, potentially from a remote location. The first realization of this teleoperation framework combined the ZEUS robotic system with SOCRATES, allowing a surgeon in New York to perform a cholecystectomy on a patient located in France[12]. Expanding on the capabilities of the ZEUS system, Intuitive Surgical Inc. created the da Vinci Surgical System, which received Food and Drug Administration (FDA) approval in 2000 and revolutionized minimally invasive surgery with unparalleled precision and control[13]. Robot-assisted surgery continues to evolve with innovations such as improved haptic feedback, real-time imaging, and augmented reality, significantly enhancing surgical outcomes and patient safety.
The integration of DL into robotic systems has led to automated surgical systems capable of enhancing surgical precision and efficiency and making decisions intraoperatively. These systems rely on sensor fusion, combining data from multiple sources such as cameras, force sensors, and navigation systems to provide comprehensive situational awareness, while advanced control algorithms enable real-time adjustments to the robotic instruments[14]. Advancements in machine learning (ML) and DL have shifted the paradigm from robots governed by a master-slave framework toward autonomous surgical systems capable of aiding and making intraoperative decisions. The first autonomous surgery was conducted with the Smart Tissue Autonomous Robot (STAR) in 2016 for bowel anastomosis[15]. This system conducts the surgery autonomously, requiring a human surgeon only to approve its plan at the start of the procedure and, if corrections are needed, throughout its duration. TSolution One is another autonomous surgical system that can drill and carve bone for knee replacement surgery according to a predetermined plan but cannot distinguish between tissue types[16]. Thus, a human surgeon must clear a path for the device to access the bone by mobilizing the skin and fascia superficial to it.
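The toy sketch below illustrates, under stated assumptions, the two ingredients named above: inverse-variance fusion of two noisy position estimates (an optical tracker reading and a kinematic estimate) and a capped proportional control step toward a planned waypoint. The sensor names, noise values, and gains are hypothetical and do not represent any specific system.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent position estimates (one per sensor)."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    return weights @ estimates

def control_step(current_tip, planned_target, gain=0.5, max_step_mm=1.0):
    """Proportional step toward the planned target, capped for safety."""
    error = planned_target - current_tip
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step_mm:
        step *= max_step_mm / norm
    return current_tip + step

# Illustrative readings (mm) from two sources with different noise levels
optical_cam = np.array([10.2, 5.1, 30.4])   # navigation camera estimate
kinematics  = np.array([10.0, 5.3, 30.1])   # estimate from joint encoders
fused = fuse_estimates([optical_cam, kinematics], variances=[0.25, 0.60])
target = np.array([10.0, 5.0, 32.0])        # planned trajectory waypoint
print("fused tip estimate:", np.round(fused, 2))
print("next commanded tip:", np.round(control_step(fused, target), 2))
```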
Autonomous surgical systems currently employed in the OR exhibit a range of autonomous capabilities. The varying levels of autonomy in surgical devices necessitated a classification system to identify their operative capabilities. In 2017, Yang et al. proposed a framework outlining the stages of automation in surgical procedures based on the taxonomy of automated self-driving vehicles[17,18]. These six stages delineate the progression from manual to fully autonomous surgical devices, encompassing various levels of human involvement and, conversely, machine autonomy. This framework provides a comprehensive roadmap for understanding the evolution of surgical automation, delineating distinct stages of technological advancement and human-machine collaboration. The stages are outlined below [Table 1].
Table 1. Stages of surgical automation proposed by Yang et al.[17]

| Stage | Description |
| --- | --- |
| 0 | No autonomy |
| 1 | Robot assistance |
| 2 | Task autonomy |
| 3 | Conditional autonomy |
| 4 | High autonomy |
| 5 | Full autonomy |
Stage 0 - no autonomy
In this initial stage, surgeons perform procedures manually with minimal technological assistance, relying solely on their skills and expertise to execute surgical tasks without the aid of automation technologies. This stage also includes teleoperated devices that respond directly to the surgeon’s commands, even from a distance.
Stage 1 - robot assistance
In this stage, the system provides mechanical assistance during the procedure to augment the surgeon’s skills. Surgeons operate consoles equipped with haptic feedback, enabling them to control robotic arms with precision while visualizing the surgical site through advanced automated imaging modalities.
Stage 2 - task autonomy
In stage 2, the operator maintains control of the system and the robot can perform surgeon-defined tasks autonomously. These semi-autonomous systems incorporate AI-driven algorithms to assist surgeons during specific phases of surgery. These systems analyze intraoperative data and provide contextual guidance, enhancing surgical precision and safety. Surgeons retain control over critical decision-making aspects while leveraging automation for assistance.
Stage 3 - conditional autonomy
Conditional autonomy represents a partnership between surgeons and robotic systems, where both entities contribute to surgical tasks. Surgeons choose the surgical plan, but then the robot implements the plan with predefined constraints under the surgeon’s oversight. This stage fosters synergy between human expertise and machine capabilities.
Stage 4 - high autonomy
In stage 4, robotic systems assume greater responsibility for executing surgical tasks and can make decisions while under the surveillance of a human operator. Surgeons would oversee the procedure and intervene when necessary, ensuring patient safety and procedural integrity. Robotic systems leverage AI algorithms to adapt to dynamic surgical environments and make operative decisions, enhancing adaptability and responsiveness.
Stage 5 - full autonomy
Full autonomy represents the pinnacle of surgical automation, where robotic systems perform entire surgical procedures independently without direct human intervention. Currently, there are no devices approved for operative use with stage 5 autonomy.
This stratification of surgical autonomy has been utilized extensively to analyze the levels of automation of robotic devices in both pre-clinical and clinical phases of FDA approval[19]. However, this framework has not been utilized to thoroughly analyze autonomous systems employed in spine surgery. This review aims to examine the current trends, practices, and advancements in surgical automation and serves as an overview of the stages of automation of devices currently employed within spine surgery.
METHODS
A comprehensive literature review was conducted using PubMed, Web of Science, and Google Scholar, focusing on autonomous robotic systems in spine surgery. The inclusion criteria encompassed research articles on robotic automation in spine surgery. Data were extracted regarding robot names, manufacturers, purposes, FDA status, automation stages, methods, results, and significance.
RESULTS OF THE LITERATURE REVIEW
Emerging autonomous technologies
The following technologies are not yet FDA-approved but are at the cutting edge of autonomous spine surgery [Table 2]. In some cases, companies have developed the robotic platforms discussed below explicitly for surgical use; in other cases, research groups have adapted existing robotic platforms and integrated them with additional components - such as sophisticated imaging modalities and augmented reality systems - to achieve new degrees of autonomy and efficiency.
Table 2. Experimental autonomous spine surgery devices

| Device or system | Manufacturer | Function | Automation stage |
| --- | --- | --- | --- |
| DLR Light-Weight Robot LWR-II | KUKA | Robotic drilling and milling for pedicle screw placement | Stage 2 |
| LBR iiwa 7 R800 | KUKA | Autonomous spinal sonography using a robotic ultrasound probe guided by a shadow-aware dual-agent framework | Stage 3 |
| AOSRV | Shenzen Futuretec | Autonomous vertebral puncture and bone cement injection for PVP | Stage 4 |
| 7-DOF robotic manipulator | Politecnico di Milano and IRCCS Humanitas Research Hospital | Autonomous control of exoscope | Stage 3 |
| RONNA | University of Zagreb | Frameless stereotactic neurosurgery for precise navigation | Stage 2 |
| KUKA light weight robot 4+ and BTS smart-D motion capture system | KUKA | Pedicle screw fixation | Stage 2 |
| Hand-held bone-cutting tool | University of Tokyo | Autonomous detection of bone penetration | Stage 2 |
| 6D-PKM surgical robot | Homi Bhabha National Institute | Autonomous registration improving overall accuracy in robot-based neurosurgery | Stage 3 |
| AUBO-i5 robot with SRI force sensor | AUBO Robotic Technology and Shanghai Yuli Industrial | Autonomous laminectomy procedures | Stage 4 |
| minaroHD | RWTH Aachen University | On-site teleoperated milling with haptic assistance for precise bone surgery | Stage 1 |
Robots for pedicle cannulation or screw insertion
KUKA is a major German company that has developed numerous autonomous and assistive platforms over the years, primarily for commercial manufacturing and surgery. The DLR Light-Weight Robot LWR-II, developed by KUKA, was introduced in 2006 as an advanced robotic system for spinal surgery, specifically for pedicle screw placement via robotic drilling and milling, a task that demands high precision[20]. Since 2006, several iterations of the LWR have come to market, with the iiwa and LBR Med being the most recent products. Although not FDA-approved, the LWR-II represents stage 2 automation in surgical robotics, as it provides autonomy over specific tasks previously defined by the controlling surgeon. The system features a navigation system integrated with the robotic arm, ensuring precise control during the procedure. Ortmaier et al. validated robotic performance in artificial bone and bovine spine models via quantitative comparison of drill-hole diameters, showing that milling performed by the LWR-II was superior to traditional drilling, providing enhanced accuracy and reduced surgical errors[20].
Another group developed a semi-autonomous “shared control” pedicle screw fixation system that is deployed during the tapping phase of pedicle screw insertion. Lauretti et al. adapted the KUKA LWR 4+ so that the surgeon maintains full control over the procedure by maneuvering the robot’s end-effector through a control interface, aligning it along a pre-planned trajectory, and continuously monitoring the forces exerted on the patient’s spine during tapping[21]. This system displays stage 2 autonomy, similar to the DLR LWR-II. In testing on an anthropomorphic model, this method was found to enhance comfort, improve ergonomic posture, and reduce surgeon fatigue.
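A minimal sketch of the kinds of checks such a shared-control tapping setup might apply is shown below: lateral deviation of the tool tip from the planned entry-to-target line, and an axial force threshold. This is a conceptual illustration with invented thresholds, not the implementation reported by Lauretti et al.

```python
import numpy as np

def deviation_from_trajectory(tip, entry, target):
    """Perpendicular distance (mm) of the tool tip from the planned entry-to-target line."""
    axis = (target - entry) / np.linalg.norm(target - entry)
    offset = tip - entry
    along = np.dot(offset, axis) * axis       # component along the planned trajectory
    return np.linalg.norm(offset - along)     # lateral component = deviation

def tapping_check(tip, entry, target, axial_force_n,
                  max_deviation_mm=2.0, max_force_n=30.0):
    """Return True if this tapping step stays within illustrative safety bounds."""
    dev = deviation_from_trajectory(tip, entry, target)
    return dev <= max_deviation_mm and axial_force_n <= max_force_n

entry  = np.array([0.0, 0.0, 0.0])            # planned pedicle entry point (mm)
target = np.array([0.0, 8.0, 35.0])           # planned screw tip position (mm)
tip    = np.array([0.6, 3.9, 17.2])           # current tool-tip reading
print("deviation (mm):", round(deviation_from_trajectory(tip, entry, target), 2))
print("within bounds:", tapping_check(tip, entry, target, axial_force_n=22.5))
```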
Robots for laminectomy or bone drilling
While the primary role of robots in spine surgery has been to aid in pedicle screw placement, autonomous surgical platforms are also promising for other spinal procedures such as percutaneous vertebroplasty (PVP). The autonomous orthopaedic surgical robot for vertebroplasty (AOSRV), developed by Shenzen Futuretec and introduced in 2022, represents experimental stage 4 automation in robotic vertebroplasty[22], as it is capable of making intraoperative decisions based on preoperative planning. The system is designed for autonomous vertebral puncture and bone cement injection, specifically targeting procedures for spinal stabilization. The base robotic platform is the Orthobot XGK‐6508A. It integrates preoperative planning with real-time intraoperative guidance using fused CT and C-arm fluoroscopic images, and operates with a robotic arm capable of 6 DOF, ensuring precise movements and positioning. Key components include a bone drill and an injection propulsion unit, both equipped with force sensors for real-time pressure feedback, which enhance safety and enable real-time adjustment during the procedure. In comparative studies using a pig spine model, the AOSRV demonstrated superior performance, significantly reducing operation time, puncture adjustments, and intraoperative fluoroscopy acquisitions while achieving high accuracy and lower bone cement leakage rates.
There is a notable absence of mature systems for robot-assisted laminectomy. In 2023, a group at Peking University proposed a novel integrated system for automated laminectomy, a stage 4 innovation, based on a 6-DOF AUBO-i5 robotic arm that relies on preoperative CT planning to operate fully autonomously[23]. The arm is further equipped with a force sensor and an ultrasonic osteotome, providing real-time feedback and precise control. The study involved 40 vertebrae from four cadavers. Mean deviation from the planned cutting path was 0.67 mm at the superior point and 0.73 mm at the inferior point, with 83% of the laminectomy planes rated as grade A for accuracy and 81% considered safe. The system demonstrated high accuracy and efficiency, with no significant differences in deviation between thoracic and lumbar procedures.
Assistive robots for visualization
The KUKA LBR iiwa 7 R800 is a robotic arm that was adapted in 2022 for use in autonomous spinal sonography[24]. Manual ultrasound acquisition is costly and time-consuming, as it requires trained sonographers. Researchers built the LBR iiwa 7 into a broader system implementing a “dual-agent” framework (reinforcement learning and DL) to autonomously guide an ultrasound probe in a way that mimics the decision making of an expert sonographer, using view-specific acoustic shadowing as a guidance cue[24]. With the integration of DL software into the robotic arm, this system may reach stage 3 autonomous capability, as it would function autonomously within parameters set by the surgeon. The validated system demonstrated high navigational accuracy, a promising finding for future autonomous surgical systems that will need to adjust in real time to micromovements and other intraoperative positional changes.
Further innovation in autonomous imaging includes autonomous neuro-registration. The robotic neuronavigation (RONNA) system, developed by the University of Zagreb, was introduced in 2018 for frameless stereotactic neurosurgery with spinal applications[25]. This stage 2 automation robot focuses on providing precise navigation for spinal surgeries without the need for invasive frames. The system is designed to be mounted on any robotic arm (the researchers used a KUKA arm) and relies on several fiducial markers. RONNA has shown substantial accuracy; evaluations using different localization strategies revealed application errors in the sub-millimeter range, indicating a high level of precision in navigational tasks. RONNA’s ability to perform stereotactic procedures with minimal error could significantly improve the efficiency and safety of applicable procedures, making it a reliable alternative to traditional methods.
Exoscopes provide neurosurgeons with enhanced visualization and ergonomics compared with traditional surgical microscopes by projecting the surgical field onto a 2D or 3D monitor, offering a closer view and better access to the surgical site without requiring the surgeon to contort themselves to maintain a clear perspective. However, because conventional exoscopes such as the Aesculap Aeos require manual or foot-joystick repositioning, groups have proposed designs for autonomous exoscope control, aiming to improve ergonomics and reduce the surgeon’s physical and cognitive burden. One group developed a markerless method that uses visual data from the operating field to control and adjust the robotic arm of the exoscope in real time[26]. Validation was conducted using a 7-DOF robotic manipulator with a stereo camera in an eye-in-hand setup. The system achieved 89% accuracy in target detection and tracking and, relative to foot-joystick control, a significantly shorter operation time and less overall time with the instrument out of the field of view.
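To illustrate the general idea of image-based exoscope control, the sketch below implements a simple proportional visual-servoing rule: the pixel error between a tracked target and the image center is mapped to a pan/tilt rate command, with a deadband to avoid jitter. The gains, image size, and detection input are assumptions for demonstration and are not the markerless method of the cited work.

```python
import numpy as np

def servo_command(target_px, image_size, gain=0.002, deadband_px=20):
    """Proportional image-based servoing: pixel error from image center -> camera pan/tilt rates."""
    center = np.array(image_size, dtype=float) / 2.0
    error = np.asarray(target_px, dtype=float) - center
    if np.linalg.norm(error) < deadband_px:   # small errors: hold still to avoid jitter
        return np.zeros(2)
    return -gain * error                      # move the camera to re-center the target

image_size = (1920, 1080)
detected_target = (1320, 430)                 # e.g., centroid of the tracked instrument tip
print("pan/tilt rate command:", servo_command(detected_target, image_size))
```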
Another promising system for autonomous neuro-registration is the 6-DOF parallel kinematic mechanism (6D-PKM) robot, a 2018 stage 3 innovation developed by an Indian group[27]. Based on preoperative imaging, the system autonomously navigates to and measures fiducial marker coordinates in the patient’s physical space, eliminating the need for manual marker localization and reducing line-of-sight issues. Validation experiments using various phantoms, including a PVC skull model and acrylic blocks, demonstrated successful registration, with tracking error ranging from 0.50 ± 0.17 cm for low-speed movements to 1.38 ± 0.73 cm for high-speed movements. The proposed system also reduced overall registration time and minimized the cognitive and physical load on surgeons, and repeated experiments confirmed its precision and repeatability, indicating substantial potential benefits.
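Fiducial-based registration, as used by RONNA and the 6D-PKM system, ultimately reduces to estimating the rigid transform that maps marker coordinates from preoperative imaging onto the same markers measured in the patient's physical space. The sketch below shows the standard SVD-based (Kabsch) least-squares solution on synthetic marker coordinates; it is a generic textbook illustration, not the registration pipeline of either cited system.

```python
import numpy as np

def rigid_registration(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials onto patient space."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)              # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t

# Illustrative fiducial coordinates (mm): 4 markers identified on preoperative CT
image_fiducials = np.array([[0, 0, 0], [40, 0, 0], [0, 30, 0], [0, 0, 25]], dtype=float)
# The same markers as measured in patient space (here: a known rotation plus offset)
theta = np.radians(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patient_fiducials = image_fiducials @ R_true.T + np.array([100.0, 50.0, -30.0])

R, t = rigid_registration(image_fiducials, patient_fiducials)
residual = np.linalg.norm(image_fiducials @ R.T + t - patient_fiducials, axis=1)
print("fiducial registration error (mm):", np.round(residual, 6))
```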
FDA-approved automated surgical systems
Current FDA-approved surgical systems predominantly fall between stages 0 and 2 of the framework proposed by Yang et al., providing mechanical assistance or improved pre- and intraoperative visualization to streamline the execution of procedural tasks[17]. Nonetheless, their applications are wide-ranging and a testament to the potential of integrating robotic innovation into treatment paradigms[28]. We provide an overview of six major approved systems and their performance in spine surgery [Table 3].
Table 3. FDA-approved robotic spine surgery systems

| Device or system | Manufacturer | Function | Automation stage |
| --- | --- | --- | --- |
| da Vinci Surgical System | Intuitive Surgical Inc. | Minimally invasive laparoscopic surgery | Stage 0-1 |
| Mazor X Stealth Edition | Medtronic | Pedicle screw insertion, TLIF, and MIDLF | Stage 1 |
| ROSA Spine | Medtech | Pedicle screw insertion and TLIF | Stage 2 |
| ExcelsiusGPS | Globus Medical | Fluoroscopy-guided pedicle screw insertion | Stage 2 |
| Renaissance | Mazor Robotics | Pedicle screw insertion | Stage 1 |
| SpineAssist | Mazor Robotics | Pedicle screw insertion | Stage 1 |
The da Vinci Surgical System (stage 0-1), developed by Intuitive Surgical, Inc., was first introduced for formal clinical use in 2000, when it was cleared to assist with minimally invasive general laparoscopic surgery. Now on its fifth iteration, the system consists of a surgeon console (with a high-resolution patient imaging system and master controls), a patient-side cart holding both the robotic arms and EndoWrist instruments [with increased range of motion (ROM) and embedded force feedback], and a vision system for monitoring the surgical area. Although limited, applications in human spine surgery have been promising in controlled clinical settings. Molteni et al. noted the system’s merits in reaching benign anterior C1-C2 lesions, avoiding extensive cervical dissection, providing greater freedom of movement, and enabling more efficient oro/rhino-pharyngeal suturing[29]. Perez-Cruet et al. employed the da Vinci in an off-label, anterior approach to resect paraspinal tumors with intrathoracic extension in two patients[30]. Additionally, Sadagopan et al. presented a resection of a sciatic notch lipoma with the da Vinci system, demonstrating superior visualization and preservation of critical paraspinal and pelvic structures[31].
The Mazor X Stealth Edition (stage 1) by Medtronic was cleared for spine surgery in 2018 and serves to streamline operative navigation. The system brings together preoperative and intraoperative 3D visualization and implant trajectory tracking (MRI/CT-guided) software, a linear optic camera, high-speed drill systems (Midas Rex and Stealth-Midas), graft inserters (Catalyft PL Expandable Interbody System), and additional custom attachments to support transforaminal lumbar interbody fusion (TLIF), midline lumbar interbody fusion (MIDLF), and deformity-correction procedures[32]. The Mazor X system has particularly been cited as facilitating accurate pedicle screw placement with a high degree of safety, although placement can be relatively difficult at certain levels and registration is challenging in complex deformities[33].
The ROSA Spine by Medtech (stage 2) gained FDA clearance in 2015; similar to the Mazor X system, it enables pre- and intraoperative CT-based implant trajectory planning and screw insertion over guidewires. Its most recent version, the ROSA ONE, allows complete integration with the ROSA ecosystem for robotic arm accessory attachment. Applications of the ROSA Spine system are not limited to a single indication, but it has most commonly been reported for arthrodesis (e.g., TLIF), with high accuracy[34,35].
Introduced by Globus Medical in 2017, the ExcelsiusGPS (stage 2) improves pre- and intraoperative planning similar to the Mazor X and ROSA frameworks, bringing together a rigid robotic arm, surveillance markers and sensors, and visualization platforms. Relative to fluoroscopy-guided insertion, a Globus-led study demonstrated improvements in accuracy and safety (0% Grade 0 breaches), significant reductions in placement time and radiation exposure, and elimination of the need for Kirschner guidewire placement[36]. The system enables the execution of common fusion procedures with better overall alignment and minimized postoperative complications.
The Renaissance system (stage 1) by Mazor Robotics (acquired by Medtronic) was approved in 2011 and enables minimally invasive treatment of back pain, degenerative pathologies (herniated disc, scoliosis, nerve impingement), and a range of fusion procedures. The Renaissance consists of a CT/MRI-guided preoperative image station with trajectory mapping and auto-alignment software, multiple framework arms, and a robot with 6 DOF. While screw misalignment and skiving are cited concerns with this model, the Renaissance demonstrates notably low breach rates (some studies cite a 1.1% rate, comparable to the Mazor X), reduced time to procedure completion, and a minimal learning curve for users[37,38].
The SpineAssist (stage 1), another innovation by Mazor Robotics, is the formal predecessor to the Renaissance system and gained FDA approval in 2004. Similar to the Renaissance, the SpineAssist lacks integrated navigation but provides preoperative imaging (CT) compatible with intraoperative fluoroscopy. As an earlier iteration, the SpineAssist is unable to flatten bone at screw entry points, compounding concerns about skiving, and lags in processing-speed comparisons[39]. However, the system has achieved consistently clinically acceptable screw placement; Devito et al. reported 98.3% of placed screws falling within a defined “safe zone”, with breaches exceeding 2 mm in only 1.7% of placements across 3,271 total pedicle screws[40]. Notably, they cited no permanent peripheral nerve damage in their cohort and noted particular merits for percutaneous approaches lacking anatomical landmarks. Nonetheless, the SpineAssist has since been significantly improved upon by its successors.
DISCUSSION
Levels of automation specific for spine surgery
The development and implementation of autonomous devices in spine surgery are rapidly expanding due to the confluence of mechanistic, robotic, and AI innovations. Though the stratification of Yang et al. can be utilized to classify the stages of automated surgical devices in the broader surgical community, there remains a need for a spine-specific classification encompassing the unique challenges and future directions of the field[17]. Here, we propose our own classification for different levels of surgical automation currently employed, or soon to be developed, in spine surgery [Table 4].
Table 4. Levels of automation in spine surgery

| Level | Description | Example | Device |
| --- | --- | --- | --- |
| 0 | Manual | Freehand pedicle screw placement | Standard surgical tools |
| 1 | Computer-assisted navigation | Neuronavigation | RONNA |
| 2 | Task-specific automation | Robot-assisted pedicle screw placement | DLR Light-Weight Robot LWR-II |
| 3 | Semi-autonomous spine surgery | Autonomous laminectomy with surgeon oversight | AUBO-i5 robot with SRI force sensor |
| 4 | Highly autonomous spine surgery | Complete autonomy over all surgical steps (initial exposure, fusion, closure) with some surgeon oversight | Not yet developed |
| 5 | Fully autonomous spine surgery | No human surgeon intervention | Not yet developed |
Level 0 - manual
All surgical tasks are performed manually by the surgeon without any automated assistance. The surgeon relies on their expertise to navigate and execute the procedure. Procedures encompassed within this level would include freehand pedicle screw placement.
Level 1 - computer-assisted navigation
In level 1, the system provides passive support, such as enhanced imaging and navigation tools, to assist the surgeon in planning and executing the surgery through neuronavigation, endoscopic, or exoscopic visualization. The surgeon retains full control of the surgical instruments while receiving assistance in advanced visualization.
Level 2 - task-specific automation
In level 2, automation assists with specific tasks, such as pedicle screw placement or drilling, under the surgeon’s direct supervision. The surgeon initiates these tasks and monitors their execution, intervening if necessary.
Level 3 - semi-autonomous spine surgery
In level 3, the system can autonomously perform more complex sequences of tasks, such as those required for decompression or arthrodesis procedures, but still requires human surgeon oversight. The surgeon supervises the procedure and can intervene to ensure precision and safety.
Level 4 - highly autonomous spine surgery
In level 4, the system performs the majority of surgical tasks autonomously with minimal human intervention. The surgeon's role is primarily supervisory, stepping in only for unexpected situations or critical decision making. A robot at this level does not technically require a human to complete the procedure and should be able to bring the patient to a safe state without human intervention (e.g., packing a bleeding wound in preparation for an angiogram).
Level 5 - fully autonomous spine surgery
Level 5 represents the highest level of autonomy within spine surgery, where the system can conduct the entire spine surgery autonomously (regardless of complexity), from planning to execution, without human intervention. This would represent “true autonomy” in spine surgery.
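For readers who wish to tag devices or studies programmatically, the proposed levels can be encoded directly from Table 4. The minimal sketch below does so with a Python enum and maps a few devices discussed in this review to their levels; the encoding simply restates the table and introduces no new classification.

```python
from enum import IntEnum

class SpineAutomationLevel(IntEnum):
    """Levels of automation in spine surgery as proposed in this review (Table 4)."""
    MANUAL = 0                        # freehand technique with standard surgical tools
    COMPUTER_ASSISTED_NAVIGATION = 1  # e.g., neuronavigation (RONNA)
    TASK_SPECIFIC_AUTOMATION = 2      # e.g., robot-assisted screw placement (DLR LWR-II)
    SEMI_AUTONOMOUS = 3               # e.g., autonomous laminectomy with oversight (AUBO-i5)
    HIGHLY_AUTONOMOUS = 4             # complete autonomy over surgical steps, some oversight
    FULLY_AUTONOMOUS = 5              # no human surgeon intervention

# Example devices discussed in this review mapped to the proposed levels
example_devices = {
    "RONNA": SpineAutomationLevel.COMPUTER_ASSISTED_NAVIGATION,
    "DLR Light-Weight Robot LWR-II": SpineAutomationLevel.TASK_SPECIFIC_AUTOMATION,
    "AUBO-i5 laminectomy system": SpineAutomationLevel.SEMI_AUTONOMOUS,
}
for device, level in example_devices.items():
    print(f"{device}: level {int(level)} ({level.name})")
```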
Our classification presents a reinterpretation of the initial stratification by Yang et al., tailored specifically to autonomous advancements in spine surgery[17]. The complicated anatomy and critical locations of paraspinal neurovasculature make implementing autonomy in spine surgery exceedingly difficult. As progress is made toward increasingly autonomous surgical devices in spine surgery, there is a greater need for surgeons to identify the proper devices for their respective procedures and to understand the capabilities of these novel surgical systems. This classification can serve as a guideline for stratifying emerging technologies specific to the challenges and complexities of spine surgery.
Benefits of surgical automation
The standardization of surgical techniques through automation and AI reduces variation in clinical outcomes and enhances precision. By leveraging algorithms and robotic systems, surgical procedures can be executed with increased accuracy, leading to fewer errors and improved patient outcomes[41,42]. Automation enables the execution of predefined strategies with consistency, minimizing the influence of human factors and ensuring reproducibility across different surgical settings.
The adoption of automation in surgery gives surgical staff greater bandwidth to focus on human needs. By offloading repetitive and mundane tasks to automated systems, surgical teams can redirect their attention toward providing personalized care, communicating with patients and their families, and addressing the emotional and psychological aspects of the surgical experience. This shift toward patient-centered care fosters a more holistic approach to healthcare delivery, promoting better overall patient satisfaction and well-being.
Incorporating preoperative and intraoperative monitoring enhances surgical precision and safety[43]. AI-driven algorithms analyze imaging data in real time, providing surgeons with detailed insights into patient anatomy and pathology, facilitating informed decision making during surgery. Additionally, the integration of preoperative and intraoperative variables enables the early recognition and mitigation of postoperative complications, morbidity, and mortality.
In high-complexity scenarios, where surgeons’ decision-making capacity may be compromised due to stress or cognitive overload, automation may be preferable. Automated systems can execute predefined surgical strategies and adapt to rapidly changing conditions, ensuring timely and effective interventions even in the most challenging circumstances[10]. By augmenting surgeons’ capabilities with AI-driven technologies, the risk of errors and adverse events can be minimized, ultimately improving success rates.
Integrating automation and AI with big data analytics holds the potential for advancing surgical practice[44]. By harnessing vast amounts of patient data, including demographic information, clinical histories, and treatment outcomes, AI algorithms can identify patterns, predict patient responses to treatment, and optimize surgical strategies. This integration enables personalized healthcare delivery and informs automated operations, leading to improved patient outcomes and enhanced efficiency in surgical practice.
Limitations of automated surgery
The integration of automation and AI into surgical practice holds immense promise for enhancing patient outcomes and optimizing healthcare delivery in spine surgery. However, several limitations and challenges must be addressed to ensure safe and effective implementation in clinical settings.
One significant limitation of automation in surgery is its limited ability to handle complex scenarios involving unexpected intraoperative complications. While AI algorithms excel at analyzing structured data and predicting outcomes based on predefined parameters, they may struggle to adapt to unanticipated events or variations in patient anatomy[45]. In such cases, human intervention and expertise remain indispensable for navigating unforeseen challenges and ensuring patient safety.
The high cost associated with robotic surgery, including equipment acquisition, maintenance, and training, presents a significant barrier for many healthcare institutions[46]. As the level of autonomy of these devices increases, the regulatory challenges also escalate. The FDA currently reviews and clears robot-assisted devices via the 510(k) premarket notification process[17]. However, higher-risk devices, such as those classified as stage 3 or higher, may face more stringent regulatory scrutiny, significantly increasing the cost of bringing a device to market[47]. These higher costs could be passed on to patients, and increasing levels of autonomy could thereby further exacerbate healthcare disparities[48].
The legal and ethical ramifications of autonomous surgery further compound the challenges surrounding its full implementation. Analogous to the legal debates surrounding self-driving cars, questions regarding liability and accountability arise when patients are harmed or complications occur during autonomous surgical procedures[43]. The FDA approves devices, not the practice of medicine itself. Devices with higher levels of automation, such as stage 4 or 5, will make intraoperative clinical decisions at the level of a human physician[49]. New regulatory bodies will likely need to be created to oversee the practice of highly autonomous devices and ensure that patient safety is upheld.
CONCLUSION
The advancements in surgical automation and robotics in spine surgery signify a transformative shift in medical practice. From the initial introduction of robot-assisted systems to the development of semi-autonomous platforms, the field has witnessed significant technological progress. These systems enhance surgical precision and reduce operative time, offering potential benefits over traditional methods[37]. The integration of AI, ML, and DL algorithms into these robotic systems may further optimize surgical planning and execution, allowing for real-time adjustments and improved outcomes.
Despite these advancements, the journey toward fully autonomous surgery is still in its early stages. With the increasing integration of real-time imaging into robotic platforms, surgical systems will become progressively more autonomous as computer vision improves unsupervised decision making. Better visualization will enable robotic systems to process more data and thus execute better movements in real time, from micro-corrections to serious changes of course when a complication arises during surgery. Further, the autonomous spine surgery systems currently in the pipeline far exceed the scope of pedicle screw insertion alone: from neuro-registration to autonomous exoscopic guidance, groups have proposed innovative robotic approaches to many procedures and long-standing problems. These technologies promise to enhance the precision and safety of spinal surgeries while reducing the cognitive and physical load on surgeons.
The absence of FDA approval for many of these cutting-edge systems highlights the ongoing need for rigorous clinical validation and regulatory approval processes. While these systems have demonstrated promising accuracy and efficiency in early studies, they share similar limitations: high implementation costs, poor situational generalizability, regulatory hurdles, and the need for human oversight in complex scenarios remain significant barriers to widespread adoption.
The future of spine surgery lies in the continued integration of AI-driven technologies, which can analyze vast amounts of patient data to inform surgical decisions and predict outcomes. The potential for personalized surgical approaches, guided by big data analytics and real-time intraoperative monitoring, holds promise for improving patient care and reducing variability in surgical outcomes. Yet, the ethical and legal implications of autonomous surgical systems, including issues of liability and accountability, must be carefully addressed to ensure patient safety. While surgical automation in spine surgery is advancing rapidly, the full realization of its potential will require overcoming significant challenges. Our current trajectory suggests a future where autonomous systems will play an increasingly central role in spine surgery.
DECLARATIONS
Authors’ contributions
Conceived the project: Sadagopan NS, El Tecle NE
Conducted the literature review, drafted the initial manuscript: Sadagopan NS, Prasad D, Jain R
Created the tables: Sadagopan NS
Reviewed and edited the manuscript: Sadagopan NS, Prasad D, Jain R, Ahuja C, Dahdaleh NS, El Tecle NE
All authors read and approved the final manuscript.
Availability of data and materials
Not applicable.
Financial support and sponsorship
None.
Conflicts of interest
All authors declared that there are no conflicts of interest.
Ethical approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2024.
REFERENCES
1. Stahlschmidt SR, Ulfenborg B, Synnergren J. Multimodal deep learning for biomedical data fusion: a review. Brief Bioinform 2022;23:bbab569.
2. Nambiar J, Juyal A. Healthcare organizations must create a strong data foundation to fully benefit from generative AI. Available from: https://www.cio.com/article/1293408/healthcare-organizations-must-create-a-strong-data-foundation-to-fully-benefit-from-generative-ai.html#. [Last accessed on 5 Nov 2024].
3. Shademan A, Dumont MF, Leonard S, Krieger A, Kim PCW. Feasibility of near-infrared markers for guiding surgical robots. In: Optical Modeling and Performance Predictions VI; 2013. pp. 123-32.
4. Buza JA 3rd, Good CR, Lehman RA Jr, et al. Robotic-assisted cortical bone trajectory (CBT) screws using the Mazor X Stealth Edition (MXSE) system: workflow and technical tips for safe and efficient use. J Robot Surg 2021;15:13-23.
5. Drossopoulos PN, Sharma A, Ononogbu-Uche FC, et al. Pushing the limits of minimally invasive spine surgery-from preoperative to intraoperative to postoperative management. J Clin Med 2024;13:2410.
6. Nakazawa A, Nanri K, Harada K, et al. Feedback methods for collision avoidance using virtual fixtures for robotic neurosurgery in deep and narrow spaces. In: 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob); 2016 Jun 26-29; Singapore. IEEE; 2016. pp. 247-52.
7. Bergholz M, Ferle M, Weber BM. The benefits of haptic feedback in robot assisted surgery and their moderators: a meta-analysis. Sci Rep 2023;13:19215.
8. Kuo LJ, Ngu JC, Lin YK, Chen CC, Tang YH. A pilot study comparing ergonomics in laparoscopy and robotics: beyond anecdotes, and subjective claims. J Surg Case Rep 2020;2020:rjaa005.
9. Kwoh YS, Hou J, Jonckheere EA, Hayati S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans Biomed Eng 1988;35:153-60.
11. Paul HA, Bargar WL, Mittlestadt B, et al. Development of a surgical robot for cementless total hip arthroplasty. Clin Orthop Relat Res 1992:57-66.
12. Marescaux J, Leroy J, Gagner M, et al. Transatlantic robot-assisted telesurgery. Nature 2001;413:379-80.
13. Rosen J, Hannaford B, Satava RM. Surgical robotics: systems applications and visions. New York: Springer; 2011.
14. Ma R, Vanstrum EB, Lee R, Chen J, Hung AJ. Machine learning in the optimization of robotics in the operative field. Curr Opin Urol 2020;30:808-16.
15. Saeidi H, Opfermann JD, Kam M, et al. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci Robot 2022;7:eabj2908.
16. Liow MHL, Chin PL, Pang HN, Tay DK, Yeo SJ. THINK surgical TSolution-One® (Robodoc) total knee arthroplasty. SICOT J 2017;3:63.
17. Yang GZ, Cambias J, Cleary K, et al. Medical robotics-regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci Robot 2017;2:eaam8638.
18. Taxonomy and definitions for terms related to on-road motor vehicle J3016_202104. Available from: https://www.sae.org/standards/content/j3016_202104/. [Last accessed on 5 Nov 2024].
19. Rivero-Moreno Y, Rodriguez M, Losada-Muñoz P, et al. Autonomous robotic surgery: has the future arrived? Cureus 2024;16:e52243.
20. Ortmaier T, Weiss H, Döbele S, Schreiber U. Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement. Int J Med Robot 2006;2:350-63.
21. Lauretti C, Cordella F, Tamantini C, Gentile C, Luzio FSD, Zollo L. A surgeon-robot shared control for ergonomic pedicle screw fixation. IEEE Robot Autom Lett 2020;5:2554-61.
22. Huang J, Xing T, Cheng Z, et al. AOSRV: development and preliminary performance assessment of a new robotic system for autonomous percutaneous vertebroplasty. Int J Med Robot 2022;18:e2456.
23. Li Z, Wang C, Song X, et al. Accuracy evaluation of a novel spinal robotic system for autonomous laminectomy in thoracic and lumbar vertebrae: a cadaveric study. J Bone Joint Surg Am 2023;105:943-50.
24. Li K, Xu Y, Wang J, Ni D, Liu L, Meng MQ. Image-guided navigation of a robotic ultrasound probe for autonomous spinal sonography using a shadow-aware dual-agent framework. IEEE Trans Med Robot Bionics 2022;4:130-44.
25. Šuligoj F, Jerbić B, Šekoranja B, Vidaković J, Švaco M. Influence of the localization strategy on the accuracy of a neurosurgical robot system. TFAMENA 2018;42:27-38.
26. Iovene E, Casella A, Iordache AV, et al. Towards exoscope automation in neurosurgery: a markerless visual-servoing approach. IEEE Trans Med Robot Bionics 2023;5:411-20.
27. Kaushik A, Dwarakanath TA, Bhutani G. Autonomous neuro-registration for robot-based neurosurgery. Int J Comput Assist Radiol Surg 2018;13:1807-17.
28. D'Souza M, Gendreau J, Feng A, Kim LH, Ho AL, Veeravagu A. Robotic-assisted spine surgery: history, efficacy, cost, and future trends. Robot Surg 2019;6:9-23.
29. Molteni G, Greco MG, Presutti L. Transoral robotic-assisted surgery for the approach to anterior cervical spine lesions. Eur Arch Otorhinolaryngol 2017;274:4011-6.
30. Perez-Cruet MJ, Welsh RJ, Hussain NS, Begun EM, Lin J, Park P. Use of the da Vinci minimally invasive robotic system for resection of a complicated paraspinal schwannoma with thoracic extension: case report. Neurosurgery 2012;71:209-14.
31. Sadagopan NS, Poylin VY, Jun C, El Tecle NE, Wolinsky JP. Robotic resection of a sciatic notch lipoma using the DaVinci Surgical System: 2-dimensional operative video. Oper Neurosurg 2024;27:381-2.
32. Medtronic. MazorTM robotic guidance platform. Available from: https://www.medtronic.com/en-us/healthcare-professionals/products/surgical-robotics/robotic-systems/mazor-robotic-guidance-system.html. [Last accessed on 5 Nov 2024].
33. Lieberman IH, Kisinde S, Hesselbacher S. Robotic-assisted pedicle screw placement during spine surgery. JBJS Essent Surg Tech 2020;10:e0020.
34. Chenin L, Capel C, Fichten A, Peltier J, Lefranc M. Evaluation of screw placement accuracy in circumferential lumbar arthrodesis using robotic assistance and intraoperative flat-panel computed tomography. World Neurosurg 2017;105:86-94.
35. Lefranc M, Peltier J. Evaluation of the ROSA™ Spine robot for minimally invasive surgical procedures. Expert Rev Med Devices 2016;13:899-906.
36. Jiang B, Karim Ahmed A, Zygourakis CC, et al. Pedicle screw accuracy assessment in ExcelsiusGPS® robotic spine surgery: evaluation of deviation from pre-planned trajectory. Chin Neurosurg J 2018;4:23.
37. Lee NJ, Buchanan I, Boddapati V, et al. P16. Is there a difference in screw accuracy, robot time per screw, robot abandonment, and radiation exposure between the Mazor X and the renaissance? A propensity-matched analysis of 1,179 robot-assisted screws. Spine J 2021;21:S147-8.
38. Bäcker HC, Freibott CE, Perka C, Weidenbaum M. Surgeons’ learning curve of Renaissance robotic surgical system. Int J Spine Surg 2020;14:818-23.
39. Alluri RK, Avrumova F, Sivaganesan A, Vaishnav AS, Lebl DR, Qureshi SA. Overview of robotic technology in spine surgery. HSS J 2021;17:308-16.
40. Devito DP, Kaplan L, Dietl R, et al. Clinical acceptance and accuracy assessment of spinal implants guided with SpineAssist surgical robot: retrospective study. Spine 2010;35:2109-15.
41. Connor MJ, Dasgupta P, Ahmed HU, Raza A. Autonomous surgery in the era of robotic urology: friend or foe of the future surgeon? Nat Rev Urol 2020;17:643-9.
42. Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, Kim PC. Supervised autonomous robotic soft tissue surgery. Sci Transl Med 2016;8:337ra64.
43. O’Sullivan S, Nevejans N, Allen C, et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot 2019;15:e1968.
44. Bhandari M, Zeffiro T, Reddiboina M. Artificial intelligence and robotic surgery: current perspective and future directions. Curr Opin Urol 2020;30:48-54.
45. Moustris GP, Hiridis SC, Deliparaschos KM, Konstantinidis KM. Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature. Int J Med Robot 2011;7:375-92.
46. Attanasio A, Scaglioni B, De Momi E, Fiorini P, Valdastri P. Autonomy in surgical robotics. Annu Rev Control Robot Auton Syst 2021;4:651-79.
48. Gkegkes ID, Mamais IA, Iavazzo C. Robotics in general surgery: a systematic cost assessment. J Minim Access Surg 2017;13:243-55.