Computation in the Surgical Suite: Navigating the Brain

Computers help surgeons steer through complex terrain

The Brainlab Brainsuite® iMRI system is used for navigated neurosurgery at the M.D. Anderson Cancer Center in Houston. Photographs owned by Brainlab AG, all rights reserved.

No surgical specialty has embraced computer technology more rapidly, or benefited more from it, than brain surgery. And with good reason: The brain does not readily yield its internal structure and function to the unaided eye, and a scalpel aimed a hair’s breadth off course can mean the difference between miraculous recovery and personal catastrophe. As a result, neurosurgeons are keenly interested in anything that will help them operate in a minimally invasive manner and avoid collateral damage. Or as Sujit Prabhu, MD, professor of neurosurgery at the University of Texas M.D. Anderson Cancer Center in Houston, Texas, puts it: “My job is to prevent complications.”

 

Navigational Software

Prabhu, who holds a joint appointment at Baylor College of Medicine, has been receiving assistance on that front from both a German medical technology firm and an American software company. For its part, Munich-based Brainlab is a major provider of surgical navigation systems that allow doctors to more effectively see what they’re doing in the operating room. M.D. Anderson, for instance, uses an integrated system called Brainsuite® developed by Brainlab. It comprises a stereoscopic infrared camera yoked to dual high-definition monitors and some very sophisticated software. The camera detects infrared-reflective markers attached to the patient and to Prabhu’s instruments, while the software triangulates the relative spatial positions of both patient and tools and displays them on the screen.
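Brainlab does not publish the internals of its tracking pipeline, but the geometric core of optical tracking is standard stereo triangulation: each of the camera’s two sensors sees a reflective marker along a viewing ray, and the marker is estimated to sit where the two rays pass closest to one another. Below is a minimal Python/numpy sketch of that idea; the sensor positions and ray directions are invented purely for illustration.

```python
import numpy as np

def triangulate_marker(origin_a, dir_a, origin_b, dir_b):
    """Midpoint of the shortest segment joining two viewing rays.

    Each stereo sensor sees the reflective sphere along a ray
    (origin + t * direction); the marker is estimated to sit where
    the two rays pass closest to one another.
    """
    da = dir_a / np.linalg.norm(dir_a)
    db = dir_b / np.linalg.norm(dir_b)
    # Solve [da, -db] @ [t, s] ~= (origin_b - origin_a) in a least-squares sense.
    A = np.stack([da, -db], axis=1)                    # 3 x 2
    t, s = np.linalg.lstsq(A, origin_b - origin_a, rcond=None)[0]
    closest_a = origin_a + t * da
    closest_b = origin_b + s * db
    return (closest_a + closest_b) / 2.0

# Hypothetical geometry (metres): two sensors 30 cm apart on the camera bar,
# both looking at a marker roughly 1.5 m in front of them.
marker = triangulate_marker(
    np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
    np.array([0.3, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]),
)
print(marker)   # approximately [0.15, 0.0, 1.5]
```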

 

Better yet, the software merges that tracking information with medical imaging data derived from the patient’s preoperative computed tomography (CT), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI) scans, all of which can be overlaid on top of one another. Linear registration algorithms align the scans by matching up anatomical features—the tip of the nose, the tragus of an ear—or additional markers stuck to the patient’s scalp, and applying various spatial transformations (rotation, translation, scaling) to fit the images to one another. Once those images have been registered to the patient’s anatomy—typically, by pointing a tracked tool at a few features on the patient’s skin and identifying the same features in the scans—Prabhu can touch any spot on the patient’s brain, and the software will display crosshairs onscreen directly above whatever anatomical structure (lobe or ventricle, gyrus or fiber tract) he’s pointing at. Or at least, it would if those structures didn’t move.
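The article’s sources don’t name a specific algorithm, but a classic way to compute such a landmark-based fit is the Kabsch (orthogonal Procrustes) solution: find the rotation and translation that minimize the squared distance between corresponding points. The sketch below uses made-up landmark coordinates purely for illustration; a commercial system would add scaling, outlier handling, and error reporting on top of this core step.

```python
import numpy as np

def rigid_register(scan_pts, patient_pts):
    """Least-squares rotation R and translation t such that
    R @ scan_pts[i] + t ~= patient_pts[i] (Kabsch / Procrustes)."""
    scan_c = scan_pts.mean(axis=0)
    pat_c = patient_pts.mean(axis=0)
    H = (scan_pts - scan_c).T @ (patient_pts - pat_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pat_c - R @ scan_c
    return R, t

# Hypothetical landmarks (mm): nose tip, left/right tragus, and a scalp marker,
# as picked in the scan and as touched with the tracked pointer.
scan = np.array([[0, 95, -20], [-70, 0, -30], [70, 0, -30], [0, 40, 80]], float)
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patient = scan @ R_true.T + np.array([5.0, -3.0, 10.0])

R, t = rigid_register(scan, patient)
print(np.allclose(R @ scan.T + t[:, None], patient.T))  # True
```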

 

The Brain as a Moving Target

Unfortunately, due to a phenomenon known as brain shift, brain structures rarely remain in the same positions they occupied when the preoperative scans were taken. Not only can the brain swell and deflate like a crenulated grey balloon during surgery, says Louis Collins, PhD, professor of neurology, neurosurgery, and biomedical engineering at McGill University, it also has the consistency of medium-density tofu, making it prone to movement when poked or prodded. All of this means that while navigation systems can theoretically offer accuracy to within two millimeters or less, in reality, things are less clear-cut.

 

Surgeons have begun to compensate by using intraoperative scans. M.D. Anderson, for example, has built an operating suite with an integrated MRI scanner that can update a patient’s images with more accurate data that takes into account the brain’s movements. Collins, who leads the Image Processing Lab in the McConnell Brain Imaging Center at the Montreal Neurological Institute, has been using ultrasound technology for similar purposes. Aligning preoperative and intraoperative scans requires the use of more complex nonlinear registration algorithms that can warp one image to fit another, but such techniques are already finding their way into prime time: Collins has incorporated them into his prototype system in Montreal, and Uli Mezger, clinical research manager at Brainlab, says that the company will soon bundle “morphing” or “elastic fusion” algorithms into its commercial platforms.
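Neither Collins nor Brainlab has published the exact “elastic fusion” algorithm referred to here, but a simple, well-known example of nonlinear registration is Thirion’s demons method, which iteratively estimates a smooth displacement field that warps one image toward another. A bare-bones 2-D sketch follows; the images are synthetic stand-ins and the parameters are guesses, not anyone’s clinical settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=2.0, eps=1e-6):
    """One iteration of a bare-bones 2-D demons update.

    `disp` is a (2, H, W) displacement field mapping fixed-image coordinates
    into the moving image; the update pushes the warped moving image toward
    the fixed image and is then smoothed so the warp stays elastic.
    """
    grid = np.indices(fixed.shape).astype(float)
    warped = map_coordinates(moving, grid + disp, order=1, mode='nearest')
    diff = fixed - warped
    grad = np.stack(np.gradient(warped))                  # (2, H, W)
    norm2 = (grad ** 2).sum(axis=0) + diff ** 2 + eps
    disp = disp + diff * grad / norm2                      # demons force
    return gaussian_filter(disp, sigma=(0, sigma, sigma))  # regularize

# Synthetic stand-ins: a smooth "intraoperative" image and a shifted
# "preoperative" image; the field should drift toward the 3-pixel shift.
fixed = gaussian_filter(np.random.rand(64, 64), 4)
moving = np.roll(fixed, 3, axis=0)
disp = np.zeros((2, 64, 64))
for _ in range(50):
    disp = demons_step(fixed, moving, disp)
```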

 

Steering Around Brain Functions

Knowing where the anatomical structures in a patient’s brain are located is only half the battle. The other half involves determining their function, and predicting how slicing through them might affect the patient lying on the table before you. That’s especially true for a surgeon like Prabhu, who specializes in removing the malignant tumors known as gliomas from those areas of the brain that control speech, movement, and the senses.

 

Direct electrical stimulation is the most accurate way of determining function, but surgically inserting electrodes in a patient’s brain is both time-consuming and invasive. Functional MRI (fMRI) and positron emission tomography (PET) scans are noninvasive and relatively quick; but even when those technologies are available (and not every medical facility can offer them), they don’t necessarily provide the precision or spatial resolution that a surgeon needs. Nor can the information they supply always be closely matched to structural images like CT and standard MRI scans, in part because the tumors themselves can distort the brain’s anatomy and interfere with the functional imaging process. “We struggle with concordance, especially when we’re working in 3-D spaces,” Prabhu says.

 

Recently, however, Prabhu has been working with a Houston-based startup called Anatom-e to overcome these hurdles. Over the past 10 years, the company has developed a 3-D model of the human brain in which every internal structure has been functionally annotated. According to Mark Vabulas, Anatom-e’s CTO, the model—which is known variously as a brain atlas and a deformable anatomical template, or DAT—is spatially coherent. Consequently, any changes made to the shape of one sector ripple out to adjacent ones in a consistent and realistic fashion, allowing it to be reliably and accurately mapped to the scans of any given patient using a simple linear registration algorithm, then checked and, if necessary, tweaked by a human operator—a process that can be repeated with intraoperative scans to compensate for brain shift. Large tumors can distort local anatomy to such an extent that a user must manually register the atlas using visual landmarks, but the end result remains the same: a composite image of the patient’s brain that ties structure to function at high resolution. L. Anne Hayman, MD, one of the company’s founders, calls the DAT a “GPS for the brain,” and it may indeed prove as useful to neurosurgeons as Google Maps is to the rest of us.

During intraoperative navigation while using the Anatom-e brain atlas, a surgeon might access images like these to help remove a tumor. The top image includes the three orthogonal views of the patient’s brain (sagittal, axial, and coronal), with the tumor itself visible as a lighter-than-normal area surrounded by an orange outline or a solid orange mass. The surgeon’s navigation probe is represented both as crosshairs and as a blue 3-D wand, and the system provides a list of visible structures and their distances from the tip of the probe. The lower image, which includes a larger axial view and smaller sagittal and coronal ones, provides the surgeon with an idea of what to expect from a spatial perspective: In the large 3-D image on the right-hand side, one can see the relative positions of the blue probe, the tumor (in orange), and the surface of the patient’s brain. Courtesy of Mark Vabulas and Anatom-e.

 

During preoperative planning to remove a glioma, Prabhu and his team can match the atlas to a patient’s scans to outline the limits of the tumor, assess the degree to which it impinges on neighboring structures, and plan the best trajectory through the brain to reach it. In the OR, Prabhu can touch his tracked instruments to any part of the patient’s brain and the atlas will supply its function, its distance to critical nearby regions, and a host of other useful information, all displayed in a multicolored 3-D image that can be rotated, expanded, and otherwise manipulated in a variety of ways. The technology is already producing results: Last year, Prabhu and Vabulas coauthored a paper in the journal Neurosurgery describing how a team of surgeons at M.D. Anderson used the DAT to help remove tumors from three patients. And in an online demonstration for this author, Vabulas not only illustrated just how closely the atlas’ identification of various functional areas in a patient’s brain corresponded to the results of direct electrical stimulation, he also replayed the animated digital data from an actual glioma biopsy that employed a Brainlab navigation system, the DAT, and intraoperative MRI. Onscreen, one could see the slender, dark blue avatar of a biopsy needle sliding along a light blue trajectory in order to safely reach its target.
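The DAT’s internals are proprietary, but the kind of query described here—what structure lies under the probe tip, and how far away are the critical regions—can be illustrated with a labeled atlas volume that has already been registered into patient space. In the sketch below, the label codes, structure names, and coordinates are all invented; distance maps to the critical labels are precomputed once per registration so that each probe query becomes a constant-time lookup.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical labeled atlas volume, already registered to patient space
# (1 mm isotropic voxels). Label codes and names are invented for illustration.
atlas = np.zeros((120, 140, 120), dtype=np.int16)
atlas[60:80, 100:130, 90:110] = 7      # pretend: precentral gyrus (hand area)
atlas[85:95, 105:120, 95:105] = 12     # pretend: corticospinal tract
names = {0: "unlabeled", 7: "precentral gyrus", 12: "corticospinal tract"}

# Precompute, once per registration, a distance map (mm) to each critical label.
critical = [7, 12]
dist_maps = {lab: distance_transform_edt(atlas != lab) for lab in critical}

def probe_report(tip_mm):
    """Label under the tracked probe tip plus distances to critical structures."""
    i, j, k = np.round(tip_mm).astype(int)
    report = {"at_tip": names.get(int(atlas[i, j, k]), "unknown")}
    for lab in critical:
        report[f"mm_to_{names[lab]}"] = float(dist_maps[lab][i, j, k])
    return report

print(probe_report(np.array([70.0, 90.0, 100.0])))
```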

 

Vabulas is currently working on fully automating the process of mapping the DAT onto a patient’s medical images with a smart, “adaptively deformable” registration algorithm that knows enough about the physical parameters of different brain regions (e.g., their tensile properties, their water content) to realistically warp the atlas so that it can match even the most distorted anatomy. The algorithm, which Vabulas would like to begin testing this year, could also alert surgeons to potential abnormalities by recognizing areas of the brain that simply cannot be fitted to the atlas. “By knowing what the DAT can do,” Vabulas says, “the algorithm can pinpoint spots that don’t make sense.”

 

Making Navigation Faster and Cheaper

Vabulas’ DAT represents one way of using computational methods to give surgeons the information they need, when they need it—even if they don’t have access to the most advanced imaging technologies. But there are other ways of leveraging registration algorithms, navigation systems, and multimodal imaging methods to give surgeons and their patients an edge in the OR.

 

For example, while intraoperative MRI can help compensate for brain shift, it is also, says Collins, extremely expensive: outfitting an operating suite with an intraoperative scanner and non-magnetic MR-compatible tools can cost upwards of several million dollars. It also takes time to prepare a patient for scanning and to execute and process the scans themselves, forcing surgeons to wait for as long as thirty or forty minutes before they can see precisely what’s going on inside a person’s head.

 

Seeking faster results at lower cost, Collins has turned instead to intraoperative ultrasound. An ultrasound scanner costs roughly $50,000; and when an ultrasound probe is placed on the cortex or inserted into the surgical cavity in a patient’s brain, as many as 300 to 400 images can be acquired in less than a minute. Since the probe is tracked by an optical system that is accurate to less than one millimeter, Collins’ navigation platform, which goes by the name IBIS (for Interactive Brain Imaging System), can determine the precise 3-D location of every pixel in the resulting images and create 3-D reconstructions of the patient’s brain as it appears in surgery. The system can then employ linear and nonlinear registration techniques to align the patient’s intraoperative ultrasound data with his or her preoperative scans (e.g., MRI and CT for anatomy, fMRI and PET for function), warping the latter to fit the former as necessary.

Using this prototype of an augmented-reality system, surgeons can get a better view of arteriovenous malformations (AVMs) within the brain. In the top image, an AVM appears white in the upper left-hand side of a CT angiography image. In the bottom image, vessels that are far away from the AVM (now shown in purple) have been removed from the image, and the remaining vessels have been color-coded by type (red for feeding, blue for draining); in addition, deeper vessels fade into the background, appearing foggier. In the augmented-reality view (middle), the second image has been combined with preoperative scans and a live camera image of a model of the patient’s head, allowing the surgeon to see the AVM and related vessels below the brain’s surface. Courtesy of Marta Kersten-Oertel and Louis Collins of McGill University.
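The IBIS source code isn’t quoted in the article, but the reconstruction step described above rests on a standard transform chain: each ultrasound pixel is mapped into the probe’s coordinate frame by a fixed calibration transform, and then into patient space by the pose reported by the optical tracker. A minimal sketch with invented calibration and pose values:

```python
import numpy as np

def pixels_to_patient(pixel_ij, T_image_to_probe, T_probe_to_patient, spacing_mm):
    """Lift 2-D ultrasound pixel coordinates into 3-D patient space.

    pixel_ij           : (N, 2) array of (row, col) pixel indices
    T_image_to_probe   : 4x4 calibration transform (found once, offline)
    T_probe_to_patient : 4x4 probe pose reported by the optical tracker
    spacing_mm         : (row, col) pixel size in millimetres
    """
    n = pixel_ij.shape[0]
    pts = np.zeros((n, 4))
    pts[:, 0] = pixel_ij[:, 1] * spacing_mm[1]   # x = column * spacing
    pts[:, 1] = pixel_ij[:, 0] * spacing_mm[0]   # y = row * spacing
    pts[:, 3] = 1.0                               # homogeneous coord (z = 0 in the image plane)
    return (T_probe_to_patient @ T_image_to_probe @ pts.T).T[:, :3]

# Hypothetical values: identity calibration, probe tilted 30 degrees and
# translated into the surgical cavity.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
T_cal = np.eye(4)
T_pose = np.array([[c, -s, 0, 42.0],
                   [s,  c, 0, 10.0],
                   [0,  0, 1, 65.0],
                   [0,  0, 0,  1.0]])
pix = np.array([[0, 0], [128, 256], [255, 511]])
print(pixels_to_patient(pix, T_cal, T_pose, spacing_mm=(0.2, 0.2)))
```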

 

When Collins and his students first began developing IBIS a decade ago, it lacked the processing speed to perform those volumetric reconstructions and image registrations in the OR. Two years ago, they had the whole process down to 10 minutes; but that still wasn’t fast enough. “They were quite nice and patient with us,” Collins says of his surgical colleagues, “but they weren’t really using the data.” Now, however, thanks to a dedicated graphics processing unit, IBIS can process those requests so quickly—performing reconstructions in approximately one and a half seconds, and registrations in just one—that by the time a surgeon has completed an ultrasound scan and handed the probe back to a nurse, the results are already onscreen.

 

Collins believes this ultra-fast approach will change the way neurosurgeons work, allowing them to re-image patients far more often than they do now in order to safely remove as much tumorous tissue as possible. He is also investigating the possibility of using transcranial ultrasound to help surgeons insert exceedingly long, thin needles into the subthalamic nuclei of Parkinson’s patients in preparation for deep-brain stimulation, a task he likens to sucking a specific seed from the center of a melon with a straw. (It’s an apt analogy: the region of interest is roughly the size of a cantaloupe seed, and lies 8 to 10 centimeters inside the brain.)

 

The Right Information at the Right Time

As is often the case with new technologies, usability is an issue with computer-assisted surgery. On the one hand, researchers and developers can make things easier on clinicians (and drive down the risk of human error) by introducing more automation. On the other, they need to think carefully about the information they choose to present and how they present it, so as not to overwhelm or distract surgeons with an indiscriminate flood of data that’s difficult to interpret or irrelevant to the task at hand. “The key,” says Mezger, “is to display the right information at the right time.”

 

Those concerns underlie another one of Collins’ projects: an augmented-reality system to help surgeons remove arteriovenous malformations (AVMs) from patients’ brains. Left untreated, these abnormal tangles of blood vessels, which Collins likens to balls of knotted yarn, can cause headaches, epileptic seizures, and even strokes. Removing them can be tricky, however, since surgeons must first distinguish the vessels that feed blood to the AVMs from those that drain them, then sever them in the correct order, all while finding their way through opaque brain tissue. Preoperative CT, MRI, and angiography can map a patient’s vessels; image-processing algorithms can sort feeders from drainers by tracing their connections back to major arteries and veins; and operating microscopes can provide close-ups of the vessels once they have been exposed. But how to deliver all of that visual information in a unified, comprehensible way?
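The article doesn’t say which algorithm sorts feeders from drainers, but “tracing their connections back to major arteries and veins” maps naturally onto a graph search: treat segmented vessel segments as nodes and their connections as edges, then label each vessel touching the nidus by whether a path that avoids the nidus leads to a known artery or a known vein. A toy sketch with an invented, greatly simplified vessel graph:

```python
from collections import deque

# Hypothetical vessel graph extracted from an angiogram: nodes are vessel
# segments, edges are anatomical connections. All names are invented.
graph = {
    "MCA":   ["f1"],            # major feeding artery
    "f1":    ["MCA", "nidus"],
    "nidus": ["f1", "d1"],      # the AVM itself
    "d1":    ["nidus", "d2"],
    "d2":    ["d1", "sinus"],
    "sinus": ["d2"],            # major draining vein
}

def classify(graph, nidus, arteries, veins):
    """Label each vessel touching the nidus as feeding or draining by tracing
    its connections (breadth-first, never passing back through the nidus)
    until a known artery or vein is reached."""
    labels = {}
    for start in graph[nidus]:
        seen, queue, label = {nidus, start}, deque([start]), "unknown"
        while queue:
            node = queue.popleft()
            if node in arteries:
                label = "feeding"
                break
            if node in veins:
                label = "draining"
                break
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        labels[start] = label
    return labels

print(classify(graph, "nidus", arteries={"MCA"}, veins={"sinus"}))
# {'f1': 'feeding', 'd1': 'draining'}
```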

 

Initially, Collins fed the combined imagery into clunky stereoscopic goggles, which no one much liked. Now, however, he and his team present it all on a single monitor. The system employs the same navigation platform, tracking system, and registration algorithms as the intraoperative ultrasound set-up to ensure that everything lines up properly on screen. In place of a tumor and its surrounding anatomy, however, the surgeon sees blood vessels that have been color coded both by type (red for feeding, blue for draining) and by depth, so that he can zero in on the ones that matter most—namely, those associated with the AVM itself, and those that are in the way. This “chromadepth” method of representing distance isn’t perfect; while it provides a quantitative sense of the relative depths at which different vessels lie, they still look as if they are floating on top of the patient’s brain. But Collins is already experimenting with ways of enhancing depth perception, such as making deeper vessels appear foggier, like distant figures in a landscape. The goal, he says, is to minimize the cognitive load imposed by the technology so that surgeons can concentrate on the job at hand. If he succeeds, he’ll be able to add his augmented-reality system to the growing list of technologies that are helping surgeons see their patients in a whole new light.  
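The exact color mapping Collins’ team uses isn’t given, but the combination described—hue by vessel type, fading toward a fog color with depth—can be sketched in a few lines. The colors, fog shade, and depth range below are illustrative guesses, not the system’s actual parameters.

```python
import numpy as np

def vessel_color(vessel_type, depth_mm, max_depth_mm=40.0):
    """Color a vessel by type and fade it with depth.

    Feeding vessels are red, draining vessels blue; the deeper a vessel lies
    below the cortical surface, the more its color is blended toward a grey
    'fog', mimicking aerial perspective. All parameters are illustrative.
    """
    base = np.array([1.0, 0.1, 0.1]) if vessel_type == "feeding" else np.array([0.1, 0.2, 1.0])
    fog = np.array([0.6, 0.6, 0.6])
    f = np.clip(depth_mm / max_depth_mm, 0.0, 1.0)   # 0 at the surface, 1 at max depth
    return (1.0 - f) * base + f * fog

print(vessel_color("feeding", 5.0))    # nearly pure red
print(vessel_color("draining", 35.0))  # mostly faded blue
```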
 


