Data Driven

Fall 2011

TO YOUR HEALTH

“Personalized” medicine has become a buzzword among physicians: the idea that the key to the best possible care is refining established treatment silos—age, gender, and the like—to get at the unique underlying pathophysiology that causes illness. What’s sustaining that buzz—indeed, what’s likely to make personalized medicine a reality sooner rather than later—is the huge quantity of data being amassed on anyone who encounters our health care system.

That data stream is only going to increase as hospitals move to electronic record keeping for patients and more computerized testing comes online. And while it all may seem invasive and raise confidentiality concerns, there’s a growing recognition among researchers that well-managed data can reduce both morbidity and mortality.

Greg Hager, chair of the Department of Computer Science at the Whiting School, points to robotic surgery as just one area where improved data collection offers an opportunity for medicine to advance. The robot can be programmed to replicate a surgeon’s hand movements inside the body, but that’s merely the beginning; it can also be told to record data that reveals a surgeon’s skill and whether that skill is improving, slipping, or holding steady. “There are more than 7 million surgeries a year, including more than a quarter million done by robotic surgery,” says Hager. “Certainly there’s collective wisdom there that you’d like to bring to bear” on surgical practice, he notes.

Hager’s “Language of Surgery” project is attempting to do just that. His computer science colleague Rajesh Kumar is recording surgeons’ training on the da Vinci operating robot at several sites around the country, including Hopkins. The project uses computer modeling to turn each surgical gesture, each movement of a surgeon’s hand, into quantifiable data, creating “a system that doesn’t just replicate motion and provide visualization but models what the surgeon is actually trying to accomplish and can gauge what’s going on relative to those objectives.”

So far Hager and Co. have looked at general tasks common to many surgeries, collecting data to answer important questions: “What does it mean to do suturing well? What does it mean to do dissection well?”
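
To make the idea concrete, the short Python sketch below (an illustration only, not the Language of Surgery pipeline itself) shows one way recorded tool-tip motion from a surgical robot might be reduced to a simple smoothness score, on the premise that practiced suturing or dissection produces smoother trajectories than novice work. The jerk-based metric, the synthetic trajectories, and the sampling details are all assumptions made for this example.

    import numpy as np

    def smoothness_score(positions, dt):
        """Illustrative skill proxy: mean squared jerk of a tool-tip trajectory.

        positions : (N, 3) array of x, y, z tool-tip coordinates (meters)
        dt        : sampling interval in seconds
        Lower scores mean smoother (typically more practiced) motion.
        """
        velocity = np.gradient(positions, dt, axis=0)      # first derivative
        acceleration = np.gradient(velocity, dt, axis=0)   # second derivative
        jerk = np.gradient(acceleration, dt, axis=0)       # third derivative
        return float(np.mean(np.sum(jerk ** 2, axis=1)))

    if __name__ == "__main__":
        t = np.linspace(0.0, 2.0, 400)
        dt = t[1] - t[0]
        # Synthetic "expert" path: one smooth arc between two suture points.
        expert = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])
        # Synthetic "novice" path: the same arc plus small hand tremor.
        rng = np.random.default_rng(0)
        novice = expert + 0.002 * rng.standard_normal(expert.shape)

        print("expert jerk score:", smoothness_score(expert, dt))
        print("novice jerk score:", smoothness_score(novice, dt))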

Eventually, Hager’s surgical work may intersect with that of Natalia Trayanova, a biomedical engineering professor at the Whiting School. Trayanova has been looking at hearts damaged by infarctions, or heart attacks.

It’s long been known that infarctions kill off heart muscle, but they also create electrical disruptions known as arrhythmias that form around the infarct scars. These interfere with the proper overall beating of the heart. While certain kinds of arrhythmias in well-charted portions of the heart can be treated with catheter ablation (essentially a burning off of the affected tissue that sustains the arrhythmia), the random location of infarction damage has made ablation an arduous, often ineffective technique—a point-by-point physical poking and burning of the area. “[Right now] the procedure lasts four to eight hours, it’s very inaccurate with a high level of complications,” including perforations of the heart, says Trayanova. She and her students may have created an elegant solution.

By taking an MRI of a patient’s chest, Trayanova is able to create a computer model of the patient’s heart that simulates the heart’s behavior from the molecular level to that of the entire organ, including representation of the processes in cells that have remodeled themselves around the infarcted area. The model produces reams of data that help show how portions of the heart will function over a given period of time. The model accurately predicts the arrhythmic activity that arises from the infarct, allowing electrophysiologists to test for exactly the right places to ablate on the model as opposed to the patient. “We’ve done animal work and it worked very well. We’re now doing human retrospective studies,” says Trayanova.
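
Trayanova’s models span scales from the molecular level to the whole organ; as a far simpler stand-in, the sketch below integrates the classic two-variable FitzHugh–Nagumo excitable-cell model to show the basic mechanics of simulating cardiac electrical activity forward in time from a stimulus. The parameters and the spike-counting threshold are textbook-style illustrative values, not anything drawn from her patient-specific heart models.

    import numpy as np

    def fitzhugh_nagumo(t_max=200.0, dt=0.01, I_stim=0.5):
        """Integrate the FitzHugh-Nagumo excitable-cell model with forward Euler.

        v : fast "membrane voltage" variable
        w : slow recovery variable
        A sustained stimulus current produces repeated spikes, a toy
        analogue of rhythmic electrical activity in cardiac tissue.
        """
        a, b, eps = 0.7, 0.8, 0.08        # standard illustrative parameters
        steps = int(t_max / dt)
        v, w = -1.0, 1.0
        times = np.empty(steps)
        voltages = np.empty(steps)
        for i in range(steps):
            dv = v - v ** 3 / 3 - w + I_stim
            dw = eps * (v + a - b * w)
            v += dt * dv
            w += dt * dw
            times[i] = i * dt
            voltages[i] = v
        return times, voltages

    if __name__ == "__main__":
        t, v = fitzhugh_nagumo()
        spikes = np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0))  # upward threshold crossings
        print(f"simulated {t[-1]:.0f} time units, counted {spikes} spikes")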

But the most immediate clinical use of big data may come in the field of disease prevention. The price tag associated with genome sequencing and analysis has dropped dramatically. “We spent $300 million to sequence the first human genome; now we can do [an individual’s specific genome] for between $2,000 and $10,000,” says Scott Zeger, a professor of biostatistics at the School of Public Health and vice provost for research at Hopkins. This falling cost means that each individual’s personal heredity map may soon be easily accessible to health care providers.

Zeger says that increasing access to genomic information offers much promise. For example, women whose breast cancer is driven by the growth factor receptor encoded by the HER-2 gene can now be tested for it and, if positive, receive a drug that blocks the receptor’s growth-promoting signal. Similarly, since drug action and effectiveness are often determined by which proteins our bodies can—or can’t—produce, being able to eventually catalog each individual’s proteins (the aim of a massive field called proteomics) could ensure that patients receive only those medications their bodies can process.

Uncovering our unique genes and proteins could also predict predisposition to a host of disease states ranging from diabetes to autoimmune conditions. Early intervention is becoming more precise thanks to heavy data crunching; as our understanding of cellular function improves, so does detection of the subclinical markers that accompany the precursors of disease, such as the inflammation seen with cardiovascular disease. To Zeger, this confluence of math and medicine has its own grace, relying on both man and machine to move individualized health forward.
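
As a toy illustration of the kind of data crunching Zeger describes, the sketch below fits a logistic-regression risk model to an entirely synthetic cohort, using an invented inflammation-like marker plus age to estimate the probability of a later cardiovascular event. The marker, the cohort, and the coefficients are fabricated for illustration; nothing here reflects real clinical data or a validated risk model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic cohort: one hypothetical inflammation marker (a CRP-like lab
    # value, mg/L) plus age, with a made-up relationship to later events.
    rng = np.random.default_rng(42)
    n = 500
    marker = rng.gamma(shape=2.0, scale=1.5, size=n)    # skewed, like many lab values
    age = rng.normal(55, 10, size=n)
    logit = -6.0 + 0.8 * marker + 0.05 * age            # assumed "true" effect sizes
    event = rng.random(n) < 1 / (1 + np.exp(-logit))    # simulated outcomes

    # Fit a simple risk model on the synthetic data.
    X = np.column_stack([marker, age])
    model = LogisticRegression().fit(X, event)

    # Predicted risk for a hypothetical new patient: marker 4.0 mg/L, age 60.
    risk = model.predict_proba([[4.0, 60.0]])[0, 1]
    print(f"estimated event risk: {risk:.1%}")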

“What’s happening is we’re getting much closer to the real biology that’s going on [in disease-creating states],” says Zeger. “This is a marriage of the information technologists cutting algorithms while letting deep medical pathobiological knowledge guide us.”

Here’s to the happy couple.