NCBCs Take Stock and Look Forward: Fruitful Centers Face Sunset

From hardened software to scientific productivity, the NCBCs have changed the landscape for biomedical computing.  What will happen when their funding expires?

It has been eight years since the National Institutes of Health (NIH) funded the first National Centers for Biomedical Computing (NCBCs). With two or three years remaining in the program (depending on the center), the Centers have hit their stride. And now it is time to take stock: How has the NIH investment in large centers paid off? And what’s next? How can the PIs and the NIH ensure a continued return on the NIH investment in the current Centers, and how might NIH support for biomedical computing evolve in the future?


The Payoff From National Centers

The NCBCs were established with an ambitious goal: to build a national infrastructure for biomedical computing. Such a mission requires investment and organization on a scale that goes beyond what a small entity can provide, says Ron Kikinis, PhD, Director of the Surgical Planning Laboratory, Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, and PI for the National Alliance for Medical Image Computing (NA-MIC). Kikinis compares the need for large, complex centers to the need for complex organizations to build superhighways and bridges. “A do-it-yourselfer might be able to build a room partition, but building a six-lane highway and a bridge across a river is not really something that’s DIY,” he says.

 

Moreover, the success of bigger, more complex projects must be measured differently from smaller ones: a partition need only stay put and look good; a six-lane highway and a bridge should carry traffic and take people somewhere—because large numbers of people will rely on them.

 

Thus, whereas some research projects are rightfully judged by whether they produce publications in high impact journals, the NCBC-built infrastructure must be evaluated by whether it is doing something bigger—creating computational environments and tools that could not have been created otherwise; and providing resources researchers and clinicians can—and do—rely on. 

 

The NCBCs have done both. They have accomplished a number of things that would not be achievable with uncoordinated investigator-initiated R01-type research (the bread-and-butter of NIH research grants), says Isaac Kohane, MD, PhD, professor of pediatrics at Harvard Medical School and a principal investigator for the NCBC called i2b2—Informatics for Integrating Biology and the Bedside. And researchers everywhere are relying on NCBC resources.

 

“Big projects have huge benefits per dollar added,” says Art Toga, PhD, professor of neurology at the University of California, Los Angeles and PI of the Center for Computational Biology (CCB). “With a coordinated effort, people complement each other in terms of specialties and disciplines. And collectively they create a whole that is bigger than the sum of its parts.”

 

Here, we describe some of the major payoffs of the NIH investment in the NCBC program.

 

• Efficient Production of Hardened, Professional-Grade Software

“The NCBCs have produced a bunch of hardened, high quality software at professional or near-professional levels of quality that wouldn’t exist without the NCBC program,” says Russ Altman, MD, PhD, professor of bioengineering, genetics, and medicine at Stanford University and a PI for Simbios, the National Center for Physics-based Simulation of Biological Structures. “And people all over are downloading [NCBC software] and using it.” As a fairly conventional measure of NCBC success, Altman says, this one is huge.

 

And it’s novel: Academic centers, built around educating graduate students, aren’t typically set up to create such professional products. “Software typically doesn’t outlive an R01,” Kohane says. That’s because there’s an 80/20 rule with software, he says: 80 percent of the success comes with a 20 percent effort, “but if you want anyone else to use it, you have to work on the hard side of the rule: use 80 percent effort to achieve the last 20 percent of the work.”

 

To take on that hard side of the effort, the centers had to create a new kind of institution within academia—an institution with an executive director and professional programming staff. Once they were up and running, the professional products started to blossom and take hold. “Hardened software created over tens of man-years of effort has a much better chance of being taken up by others,” Kohane says.

 

By building hardened software tools at large academic centers, the NCBC program also enabled economies of scale. For example, Altman says, a shared programming staff built Simbios’ two main products—OpenSim (biomechanical simulation software) and OpenMM (software for accelerated molecular mechanics simulations on high-performance computer architectures)—despite the fact that they operate at very different scales (musculoskeletal and molecular). It’s an example of the NCBCs taking advantage of their size to produce infrastructure efficiently.
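To give a concrete sense of what that shared investment produces, here is a minimal sketch of a short molecular dynamics run using OpenMM’s Python interface. It assumes a recent OpenMM release (older versions used the simtk.openmm import path), and “protein.pdb” is a placeholder for a prepared structure file, not something shipped with the package.

    # Minimal OpenMM sketch: load a structure, build a system, run brief dynamics.
    from openmm.app import PDBFile, ForceField, Simulation, NoCutoff
    from openmm import LangevinIntegrator
    from openmm.unit import kelvin, picosecond, picoseconds

    pdb = PDBFile("protein.pdb")                  # placeholder input structure
    forcefield = ForceField("amber14-all.xml")    # Amber force field bundled with OpenMM

    # Build the system in vacuum (NoCutoff) to keep the sketch self-contained.
    system = forcefield.createSystem(pdb.topology, nonbondedMethod=NoCutoff)
    integrator = LangevinIntegrator(300 * kelvin, 1 / picosecond, 0.002 * picoseconds)

    simulation = Simulation(pdb.topology, system, integrator)
    simulation.context.setPositions(pdb.positions)
    simulation.minimizeEnergy()                   # relax the starting structure
    simulation.step(1000)                         # short molecular dynamics run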

 

Moreover, says Peter Lyster, PhD, program director in the Division of Biomedical Technology, Bioinformatics, and Computational Biology at the National Institute of General Medical Sciences, “a lot of the software that has been created has been done in a mechanism where others could contribute to the code and algorithms.” In this way, work that might have been developed in a specific biomedical context is being extended and enhanced to address similar or even fairly unrelated questions in entirely new contexts, he says.

 

For example, people who had nothing to do with the core of i2b2 are now building extensions, such as natural language processing components. And a Canadian group has developed research software for adaptive radiation therapy based on the NA-MIC Kit. Similarly, some broadly used software packages have built NCBC products into their back end. OpenMM, for example, is now part of the widely used molecular dynamics packages CHARMM, TINKER and GROMACS.


• Well-Established Open-Source Software Repositories and Web Services

Almost every NCBC created a software repository using state-of-the-art software management systems, Lyster says. These allow developers around the world to contribute to the development process and include version control systems to track the provenance of software changes. “[This] gave developers the confidence to know that there’s no mistake you can’t undo,” Lyster says. “And these centers brought that to fruition for biomedical computing.”

 

Software repositories were also an area where the NCBCs fulfilled their charge of collaborating with and learning from one another, Lyster says. When they were first funded, he says, NA-MIC already had a highly advanced repository whereas Simbios was starting from scratch. “At working group meetings,” Lyster says, “we would say: Take Kikinis’ chief software engineer and have him tell Simbios how he set that up.” And now Simbios has a highly professional repository called Simtk.org that looks and operates a lot like NA-MIC’s original ITK/VTK repository. “That’s an intangible advantage of these centers: It’s very hard to quantify that we had working groups to make sure we all knew how to build professional-grade software repositories,” Lyster says.

 

Open-source repositories are valuable for three different constituents, Lyster notes. They facilitate the work of developers who create projects in the repositories; they make software freely available to users; and they empower a large in-between group of user-developers—people who want to see what they are getting and then fiddle with it to create something new.

 

In addition, says Lucila Ohno-Machado, MD, PhD, associate dean for informatics and professor of medicine at the University of California, San Diego and PI for iDASH, the NCBC for integrating Data for Analysis, Anonymization, and Sharing, “The NCBC repositories benefit small and mid-sized institutions that often would not otherwise have access to biomedical data and computational infrastructure, such as high performance computing and processes to facilitate the execution of data use agreements.”

 

The NCBCs also created extremely valuable web services such as the National Center for Biomedical Ontology (NCBO) BioPortal. This has become the go-to place for finding ontologies—sophisticated methods for annotating data that maintain the connections needed to reveal underlying knowledge. NCBO’s BioPortal houses more than 350 biomedical ontologies and controlled terminologies, and its web services receive upwards of 3 million hits per month.
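As a rough illustration of how those web services are used in practice, the sketch below queries BioPortal’s search service over its REST API. It is a minimal sketch, assuming the data.bioontology.org endpoint and the third-party requests library; YOUR_API_KEY is a placeholder for a registered BioPortal API key, and the response fields shown are those the search service typically returns.

    # Sketch of a BioPortal web-service call (assumes the REST API at
    # data.bioontology.org and a valid API key; YOUR_API_KEY is a placeholder).
    import requests

    API_KEY = "YOUR_API_KEY"
    response = requests.get(
        "https://data.bioontology.org/search",
        params={"q": "melanoma"},                            # free-text term to look up
        headers={"Authorization": "apikey token=" + API_KEY},
    )
    response.raise_for_status()

    # List the ontology classes that match the search term.
    for match in response.json().get("collection", []):
        print(match.get("prefLabel"), match.get("@id"))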

 

• A New Way to Locate Software Resources

To make it easier for people to reliably locate, publish, and access both software and data, the NCBCs created Biositemaps, a tool that enables contributors to annotate their software and data resources in a standardized way. The Biositemaps annotations are input into a web-based search engine called the Resource Discovery System (http://biositemaps.ncbcs.org/rds)—a joint creation of the NCBCs and diverse biomedical researchers—in what Lyster calls a “fascinating volunteer effort.” Here, a user can, for example, search for “gene expression” and find 46 relevant tools including several from two different NCBCs and several more hosted at Simtk.org.
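To convey the flavor of the idea, the sketch below shows the kind of standardized, machine-readable record a Biositemaps-style annotation boils down to, along with a toy keyword match of the sort a resource search engine performs. The field names and values are purely illustrative; they are not the actual Biositemaps schema.

    # Purely illustrative: hypothetical field names, not the Biositemaps schema,
    # showing the kind of standardized metadata a center might publish per resource.
    resource_annotation = {
        "resource_name": "OpenSim",
        "resource_type": "software",
        "description": "Biomechanical simulation of musculoskeletal dynamics.",
        "keywords": ["biomechanics", "simulation", "musculoskeletal"],
        "url": "https://simtk.org",               # illustrative hosting site
        "publisher": "Simbios NCBC",
    }

    def matches(annotation, query):
        """Toy keyword match of the kind a resource search engine might perform."""
        text = " ".join([annotation["resource_name"],
                         annotation["description"],
                         " ".join(annotation["keywords"])]).lower()
        return query.lower() in text

    print(matches(resource_annotation, "simulation"))   # True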

 

The NCBCs were uniquely positioned to create such a tool because they covered such a diverse set of biocomputational areas. Despite the system’s breadth of coverage and ease of use, Lyster says, “Getting widespread adoption of any method for locating resources is still challenging.” A number of other options exist and there’s no community consensus on the best way to do it, he says. It’s a problem that’s ubiquitous, and not specific to the NCBCs or Biositemaps, he notes. “It’s just one of the issues being tackled.”

 

• Inspiring the Next Generation of Computational Biologists

In addition to directly training more than 400 computational biologists, the PIs say, the centers have inspired many others to consider the field as a career and have built a sense of professional identification with computational biology. Indeed, Kohane says, by funding the centers, the NIH sent a message to the quantitative community that, “yes, there are careers and support to be had in this area and so it’s okay to invest your life in this field. At the time, that was not obvious to computational individuals.”

 

As Altman puts it, “The centers have produced a cadre of ex-students and post-docs who now have a professional identification with computational biology.” This includes many who are now in young faculty positions and have a research program in academia, he says. Because of their past ties to the center, many Simbios postdoc alums still use Simbios resources, get seed grants from Simbios, collaborate on Simbios workshops, use Simtk.org for dissemination, and work with the Center’s software developers on enhancements to code they originally developed while at Simbios.

 

Moreover, the Centers’ impact reaches beyond the funded trainees, Altman notes. “Simbios didn’t fund that many graduate students, but students were affiliated with all of the Simbios projects, so there’s this deflected involvement. It made grad students feel that computational biology was something they could do.”

 

The centers also provide a rare opportunity for graduate and postdoctoral students to quickly turn new ideas into practical applications. By creating this setting and allowing trainees to be active participants, the NCBCs promote interest in industry careers for those who do not necessarily want to pursue academic positions, says Ohno-Machado. This expands the horizons for trainees and fills an important gap in building capacity in biomedical computing.

 

Leslie Derr, PhD, program director for the NIH Common Fund, agrees that the training component is one of the NCBC program’s strengths. She also says that the training influences not only computational biologists but experimental biologists as well, the latter particularly at the NCBC for Multiscale Analysis of Genomic and Cellular Networks (MAGNet) which promotes a close integration of the two fields.

 

The NCBC program also helps MAGNet attract top-level students, says Andrea Califano, PhD, professor of chemical systems biology at Columbia University and PI for MAGNet. “Before we had an NCBC, most of the students we accepted went elsewhere (Harvard, MIT, Stanford),” he says. “Now they come here because of the effort to create an integrative program.”

• Scientific Productivity

Great publications can happen with money in the absence of centers, Altman says. But the substantial sums provided for the NCBCs certainly enabled significant scientific productivity. All told, more than 1,750 papers mention the eight NCBC grant numbers, and some 35,000 other papers cite those.

 

Califano notes that about one-third of his center’s publications are in journals with an impact factor above 15. He also gives MAGNet credit for specific developments in biomedicine: “We’ve come up with a new way of thinking of DNA as a molecule; we’ve combined structural and functional biology; we have a new ability to reprogram cells; the list goes on. It’s a breadth of discovery that a center allows you to have rather than a single success story.”

 

[Figure: Annual downloads of 3D Slicer by region.]

Kohane says the critical mass of the NCBC program also accelerated the arrival of solutions, some of which were fortuitous rather than planned. For example, i2b2 never had any ambition to do pharmacovigilance, but once they had liberated data from archived health records (with the goal of doing genomic research), they found some amazingly low-hanging fruit in this area. “We could easily see appallingly obvious signals of drug-induced adverse events that had gone unnoticed.” For example, they mined electronic health records to confirm the association between heart attack deaths and Vioxx and to identify a similar risk from the drug Avandia—information that is now on the drug’s warning label. “The NCBCs experienced lots of such examples where, in addition to the primary objectives, having a critical mass of people led to unanticipated progress,” Kohane says.
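The simplest version of such a signal can be expressed in a few lines: once the records are liberated, a two-by-two table of exposed versus unexposed patients and event versus no event yields an odds ratio. The sketch below uses made-up counts and is not i2b2’s actual pipeline.

    # Hypothetical pharmacovigilance sketch: odds ratio for an adverse event
    # among drug-exposed vs. unexposed patients. Counts are invented.
    import math

    # 2x2 contingency table (exposure x event)
    a, b = 120, 9880        # exposed:   event, no event
    c, d = 310, 89690       # unexposed: event, no event

    odds_ratio = (a * d) / (b * c)

    # Approximate 95% confidence interval via the log odds ratio.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se)

    print(f"odds ratio = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")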

 

As another example, the University of California organized a system to perform federated queries on data derived from electronic health records at its five medical centers, which collectively represent over 11 million patients. The hub of this system is provided by the iDASH NCBC coordinated at UCSD. “The system is designed to use software developed by three different NCBCs and lessons learned from all,” says Ohno-Machado.
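The pattern behind such a system can be sketched simply: the hub broadcasts one query to every site and aggregates only the summary counts that come back, so patient-level records never leave the individual medical centers. The site names and local query functions below are placeholders, not the actual iDASH implementation.

    # Hypothetical federated count query: each site evaluates the query locally
    # and returns only an aggregate count to the hub (no patient-level data moves).
    from typing import Callable, Dict

    def federated_count(sites: Dict[str, Callable[[str], int]], query: str) -> Dict[str, int]:
        """Broadcast a query to every site; collect per-site counts plus a total."""
        counts = {name: run_local_query(query) for name, run_local_query in sites.items()}
        counts["TOTAL"] = sum(counts.values())
        return counts

    # Stand-ins for the local query engines at each medical center.
    sites = {
        "Site A": lambda q: 1240,
        "Site B": lambda q: 980,
        "Site C": lambda q: 2115,
    }

    print(federated_count(sites, "patients with type 2 diabetes on metformin"))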

 

It’s biomedical impacts like these that really made the NCBCs a scientific success story, the PIs say. And for that they credit the driving biological problems (DBPs) associated with each center. The DBPs keep the computational scientists focused on the science but also allow the centers to create robust software that can be extended and enhanced to address novel questions, Derr says.


• A Community and a Network of Leadership

Before the NCBCs came into existence, the field of biomedical computing had pockets of spontaneous collaboration in particular areas, but not the strong sense of community or common purpose that was later enabled by a common fund and complementary expertise, Altman says. By empowering a disparate group of researchers to work together on a national infrastructure, he says, the NIH changed that.

 

“The NCBCs created a critical mass of computationally competent individuals working for a common biomedical purpose,” Kohane says. “The critical mass raises the overall tenor and quality of the conversation. Otherwise everyone is an island.”

 

And from that community a network of leadership at the NIH and at NCBC institutions emerged. “There is now a functional group that can think about and respond to issues of biomedical computing at a policy level,” Altman says. “Before, there was nobody to point to and say, ‘they can help us.’ And now we have a group of centers and a variety of staff who researchers and administrators can come to for help, advice, or counsel.”

 

Infrastructure Success: Galaxies of Reliance

In addition to providing the payoff one can only get from large centers, the NCBCs made significant strides toward creating a national infrastructure for biomedical computing, says Mark Musen, MD, PhD, professor of medicine at Stanford School of Medicine and PI for the National Center for Biomedical Ontology (NCBO).

 

[Figure: Map of centers that have adopted the i2b2 platform for clinical research. A = Clinical & Translational Science Awards centers (CTSAs) adopting the i2b2 platform; B = CTSAs evaluating the i2b2 platform; C = academic medical centers adopting the i2b2 platform; D = foreign medical centers adopting the i2b2 platform.]

It’s an infrastructure that can seem ephemeral, Kikinis says, because it’s all “executed as electrons” and virtualized on computers. But he points to a map of NA-MIC’s downloads as proof of a real infrastructure. “This shows for me the worldwide demand for what we are doing,” he says. “In a nutshell: We are addressing somebody’s needs.”

 

And that’s true for all of the Centers. After eight years, 84 hospitals rely on the i2b2 platform; 22,000 researchers use Simtk.org; 3 million calls a month hit NCBO’s web services; 83 collaborators count on MAGNet methods and tools; and daily, all around the world, upwards of 100 people grab the latest 3D Slicer from NA-MIC’s web site and use it to analyze images of patients with a whole range of diseases. And then there are the thousands of other downloads that demonstrate widespread reliance on NCBC tools. A sampling of these is shown in the NCBCs by the Numbers chart above.

 

A worldwide community depends on NCBC products. Perhaps this is the best way to think about the success of NIH infrastructure grants: while high impact journal publications count for something (and the NCBCs have produced more than their fair share, as mentioned above), Altman says, having a high impact in the world of clinical and biomedical research is a better indicator of whether the centers are fulfilling their mission.

 

One piece of infrastructure that can still be enhanced as the centers mature is interoperability. At present, Lyster says, “The centers are like galaxies in that they are separate and non-overlapping. There are clumps of foci with some collaborations in interstellar space.” As galaxies, he says, “they’ve been stellar,” but the original centers were hamstrung in this regard because NIH funding didn’t cover the waterfront: many areas of computational biology and medicine were not represented by the eight centers. Even with the NCBC Collaborations program, which created at least 33 spokes for the hubs, opportunities for interoperability only scratched the surface, Lyster says. “We don’t know how much further we could have gotten with interoperability if we’d covered the waterfront better.”

 

The June 2012 Draft Report of the Data and Informatics Working group of the Advisory Committee to the Director of NIH (ACD DIWG) noted that the problem was structural: “… due to the limited funding of the NCBC program and to the size of the overall research area, there is virtually no overlap of focus among the current centers. As a result, there has been less opportunity for synergy and complementary approaches of the type that have universally benefited the research community in the past.”

 

Kikinis says that the centers naturally evolved into a hub and spoke model with ecosystems of collaborators. “It’s a bit different from the way NIH envisioned it would be,” he says, “but the way it evolved, the NIH got a lot of bang for the buck.” The centers really provided a lot of enabling infrastructure for NIH grantees, he says. “So from my point of view, the program accomplished what the RFA [Request for Applications] intended it to accomplish.”

What Might Have Been Done Differently

• More Centers (and/or More Funds)?

When the NIH made the first announcement of NCBC awards in 2004, it was clear that some areas would be well served and others would not, Altman says. Eight years in, that problem has only grown worse as the importance of computation has swelled, leaving unserved areas further behind.

 

As examples, Altman cites genomics and natural language processing. With the advent of next-generation sequencing, genomics data has been confounding biologists. “People are now scrambling to handle that output,” Altman says. “A center dedicated to handling and analyzing genomic data would have been a great idea.” And as for natural language processing (NLP), “Guess what,” Altman says. “Our entire understanding of biology and medicine is really contained in the published literature. And since people write in natural language, if you can’t get computers to turn that information into databases and computable information, you’re falling behind.” Had there been an NCBC for NLP, he says, database managers wouldn’t be hiring people to read the literature and distill it for others in computable format, which is what they’re doing now.

 

The fact that the current NCBCs only covered a small portion of what is needed in terms of biomedical computing for the country is also described in the ACD DIWG Draft Report. It says: “the small number of active [NCBCs] has not covered effectively all relevant areas of need for biomedical computation or for all of the active contributing groups.” The report specifically cites the lack of coverage for a number of grand challenges in biomedical computing such as multi-scale modeling; methods for “active” computational scientific inquiry; and comprehensive, integrated computational modeling/statistical/information systems.

 

• A Grand DBP

Kohane suggests that the NIH might have asked centers to participate in one grand driving biological problem. “It’s a sterile exercise to say ‘let’s share data,’” he says, “but to say ‘let’s solve this problem together,’ that’s much more tangible.” He even has a problem in mind: obesity. “It would have involved new ways of imaging, new ways of doing genetics, new ways of integrating different modalities, possible simulations of the effect of weight on organs or the human body,” he says. “It would have gotten interesting conversations going.”

 

• Dedicated Training Funds

Knowing what they know now, Musen and Altman say, the NCBC mission could have been well served by separate training grant funds or a companion center for training. “Frankly, had we had that, we would have had a larger impact,” Musen says. NCBO and other centers took advantage of existing training programs and leveraged those. “But if we really wanted to train the next generation of computational biologists who would inherit the work of our existing centers and carry that on, there could have been more dedicated training funds associated with the NCBC program to assure that,” Musen says.

 

Lyster points out that, for the NIH, it’s all about balancing competing needs. “I think that given that we started out with no centers and now have had eight with hundreds of students graduated, it’s hard to think that’s not a good thing,” he says. “In fact, it’s a very positive thing when a student is forced to confront both the biological and the computational question at once.”

 

Preparing the Centers for Sustainability

In their first eight years, the NCBCs made huge advances, but there is much more to do. And the needs that motivated funding for national centers haven’t evaporated, Ohno-Machado notes. So what’s next for the current centers when they hit their ten-year expiration date?

 

As Derr sees it, “These centers have always been aware that their funding ends within ten years, so they know that they need to think about sustainability.” The center PIs need to think about what kernels they need to sustain into the future and how those might be funded, she says. “There are certainly other programs where they can compete for funds, and they would certainly be competitive in applying for them.” Each center also has an external board helping them think about sustainability.

 

But the question remains: What can the PIs do to make sure the centers stay funded? Will there be a new program or will they cobble together a number of other approaches? As might be expected, the various PIs are each taking their own approaches to the problem. Califano, for example, plans to apply for one of the systems biology programs. And Kohane hopes that i2b2’s open source repository will be taken over by the Harvard Medical School’s Center for Biomedical Informatics.

 

Brian Athey, PhD, professor of computational medicine and bioinformatics at the University of Michigan and PI of the National Center for Integrative Biomedical Informatics (NCIBI), says his center, which was not renewed in 2010, has always taken sustainability seriously. They have already spun off two smaller efforts—a regional metabolomics center and a rare disease center—and are now deeply involved in tranSMART, an international effort to create a knowledge management platform that integrates, normalizes and aligns genetic and phenotypic data, primarily for drug discovery. For financial support, Athey is talking to foundations, the FDA, the Veterans Administration, and pharmaceutical companies. “Anybody who wants to sustain their efforts has to do that. And we are. We are deeply involved in that.”

 

Kikinis and Altman point to the P41 Biomedical Technology Research Center (BTRC) program established by the former NCRR (National Center for Research Resources) as a good conceptual model for how the NIH might continue to support the NCBCs or portions of them. Like NCBCs, BTRCs support the development of technologies that are then made available to the research community, but unlike NCBCs, the biological problems driving BTRC technologies are funded separately. As a model for what’s possible, the BTRCs have two additional features that Kikinis says would work well for the NCBCs: They have no set end date and are subject to review by ad hoc study sections consisting of people with appropriate expertise. “To create innovative new infrastructure for biomedical research, the P41s are a good model,” Kikinis says. “If I were in charge (which unfortunately, I am not!), I would treat the existing centers as resource centers and then do a limited RFA to capture a few additional centers—however many NIH is willing to fund.”

 

Toga says that discussing whether P41s are a good model for the NCBCs is putting the cart before the horse. “The first decision is whether a national network of computational biology centers is a worthwhile endeavor.” Answer that in the affirmative, he says, and shoehorning NCBCs into BTRCs is not the way to go.

 

Kohane is hopeful of a solution. “To the extent each NCBC has friends and supporters, I think there will be a lot of creativity both in the public and private domains to support continued efforts.”

 

The Possibility of a New Program

Toga hopes that before the NIH starts designing a new program of national centers, it will conduct a meaningful programmatic evaluation of the current centers as well as of its own role in managing the centers. Just as Apple, Inc., reviews the performance of the iPhone 4S before designing the iPhone 5, Toga says, the NIH needs to investigate how the research community benefited from the NCBCs and what it wants from future centers before designing the next iteration. And because the NCBCs were run as cooperative agreements, the NIH needs to turn that same critical eye on itself: Did the NIH manage, evaluate, and review the centers in the best way possible? Did its management affect the centers’ success?

 

In the meantime, the ACD DIWG Draft Report is adamant that promoting further development of biomedical computing in a coordinated manner is critical to justify large investments in “big data” collection that will need computational analyses. Without such computational infrastructure, data will remain underutilized and stored in independent silos rather than made available as a national resource. The DIWG suggests possible next steps might include creating a larger number of national centers that are smaller in size, complexity, and scope.

 

Big vs. Small

There is no definitive recipe for success moving forward. Altman, who is part of the DIWG, thinks smaller centers could work. He suggests 20 to 25 centers with smaller budgets—perhaps two million dollars per year rather than the current approximately four million. “With 25 centers covering biomedicine more broadly, you’re talking about an infrastructure that would be really robust,” he says. And there should be no official end date, he adds.

 

In a new model with smaller centers, Altman says, it’s possible that several Simbios researchers would seek to create centers—perhaps a national biomechanical simulation center (with ongoing work on OpenSim) and a national center for molecular modeling (with ongoing work on OpenMM). “The big centers allowed us to have an umbrella over these two physical programs,” he says. “In a new approach, these would be broken up, but they’d already have a bridge between them.”

 

As for the details of funding smaller centers, several options exist. For example, Altman says, if the centers paid only for their part of the DBP projects (rather than for the biomedical research itself), the centers’ budgets would drop by about 25 percent. “We can just say to application scientists, ‘you already have money to do x from regular research grants, and we’ll get the money to do the computational piece.’”

 

Kohane and Musen, however, think big centers are essential. “Big is good because of critical mass,” Kohane says. Large centers attract good trainees, he says. And they empower computational researchers to produce durable software—rather than just answer biologists’ question-of-the-day. If the centers were smaller, Kohane says, the power relationship between computational scientists and biologists would return to business-as-usual. “They’d just want their research done and not be particularly interested in us developing our software for others.”

 

Musen has some other concerns about reduced resources for smaller centers. “This recommendation is probably the most pragmatic thing the advisory committee to the director could have recommended,” he says, “but it will not in any way allow the kind of large scale development that the current NCBC program fostered.” With fewer resources, he says, “We’d be doing more maintenance and less innovation. That’s obviously not nearly as exciting for us.”

 

Covering the Waterfront

If the NIH decides to fund more and smaller centers, the question remains: how to ensure that the centers cover the waterfront? Should the selection be top-down or bottom-up? Opinions differ. “I think I would target some areas of need instead of being a free call,” Ohno-Machado says. “The NIH could identify the needs that are most important and then highlight those for reviewers.”

 

Toga agrees. “Researchers’ computational needs might be better met by identifying where the biggest needs lie and developing clusters of opportunities around those, rather than having an open call,” he says.

 

But Altman has a bottom-up view of the program. “I think you should put as much of the so-called decision-making into the hands of the scientists on the ground who write their best grant applications and let the chips fall.” There’s plenty of opportunity for coordination after the grants come in, he says. He has another big fear: “Two million dollars and a ton of mandates. To get the best people, you need to give them substantial freedom to do what they should do.”

 

Meanwhile, Califano would ensure greater connectivity among the centers by creating two programs—one for clinical informatics and another for computational and integrative biology. “That would create a set of constituencies that speak the same language, where right now we have cats and dogs in the same room.”

 

But Musen disagrees with this dichotomy. “Most of the centers are creating national infrastructure that’s applicable in a variety of domains,” he says. “i2b2 is very clinical and MAGNet is very molecular, but other NCBCs are in the middle. Imaging applies to cells and to people; physical simulation applies to molecules and muscles; ontology work involves data analysis from both clinical and life sciences domains.”

 

Where the answer lies—and how NIH will respond to the DIWG—remains to be seen.

 

Incentives for Integration

More centers might also enable interoperability by covering more of the waterfront. That hope is specifically stated in the DIWG Draft Report: “The NIH should also encourage and enable more overlap between centers, to facilitate collaboration.”

 

Califano says that if the NIH launches a new program and wants more interoperability among centers, it needs to have more thematic overlap between them. Altman concurs: “One of the lessons learned is there needs to be a finer sampling of computational space. The gaps can still be significant but need to be bridgeable with a reasonable amount of effort.” He would also encourage the NIH to provide incentives for integration, so that it happens spontaneously. “In the second generation, it might happen, but it can’t come from top-down rules.”

 

Keep It Going

All of the PIs believe the current centers need to be sustained in some way. “If they were to terminate abruptly and no longer get any kind of funding, it would be a shame because they have been tremendously productive,” Califano says.

 
