By Chappell Brown
EE Times
February 09, 2004 (10:33 AM EST)
Like the twin strands of a double helix, electronics and biotechnology are joining forces in a technological explosion that experts say will dwarf what is possible for either one of them alone.
Hints of this pairing can be seen in the economic recovery that's now taking hold. One peculiarity that hasn't grabbed many headlines is biotech's role in pulling Silicon Valley out of its three-year slump. A report last month from the nonprofit organization Joint Venture: Silicon Valley Network underscores the point, showing that biotech startups' share of venture funding rose from 7 percent in 2000 to 24 percent last year, while information technology startups' share fell from 10 percent to 4 percent over the same period. The immediate question is whether this is a temporary anomaly or the emergence of a major trend.
Certainly computers, biochips, robotics and data sharing over the Internet have been important tools in accelerating biological and medical research, and it should be no surprise that new application areas and markets would grow around them. The view from inside the engineering cubicle might be something like, "Yes, we have created a revolutionary technology that opens new markets; biomedicine is simply one area that benefits from advances in VLSI."
But a long-term perspective suggests a tighter linkage between electronics technology and molecular biology. Indeed, it could be argued that the second half of the 20th century forged not one but two digital revolutions, fueled by two fundamental breakthroughs: transistorized digital computers and the cracking of the genetic code. The latter advance showed that the genome was transmitted through the generations by means of digital storage in the DNA molecule.
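That digital storage is concrete enough to sketch in a few lines of code. DNA uses a four-letter alphabet (A, C, G, T) read in three-letter words, or codons, each of which maps to an amino acid through a fixed lookup table. The Python fragment below is an illustration only, using a small excerpt of the standard 64-entry codon table rather than the whole thing:

    # Illustrative sketch: DNA as digital storage. Each three-base codon
    # is a "word" that decodes to one amino acid -- a lookup table, much
    # like an instruction decoder. Excerpt of the standard genetic code:
    CODON_TABLE = {
        "ATG": "Met",                                # start codon
        "TTT": "Phe", "TTC": "Phe",                  # synonymous codons:
        "GGT": "Gly", "GGC": "Gly",                  # built-in redundancy
        "GGA": "Gly", "GGG": "Gly",
        "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
    }

    def translate(dna):
        """Decode a DNA string codon by codon until a stop codon."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            residue = CODON_TABLE.get(dna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("ATGTTTGGATAA"))  # ['Met', 'Phe', 'Gly']

The synonymous entries hint at the error control discussed below: because several codons decode to the same amino acid, many single-base copying mistakes change nothing downstream.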
In the following decades, both developments matured at an increasingly rapid pace. Digital circuits were inspired by crude models of the nervous system (see story, below). Although the models turned out to be wrong in many respects, technologists discovered that digital representation brings the advantages of simplicity, stability and an ability to control errors. Those same properties have made DNA the viable and stable core of living systems for billions of years.
But the nervous system is only one component of the body that is encoded in DNA, which somehow not only represents the information for building the basic components of cells, but also encodes the entire process of assembling highly complex multicellular machines. The growth process is an amazing feat of bootstrapping from the genetic code to functioning organisms. Essentially, an organism is a molecular digital computer that constructs itself as part of the execution of its code.
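One loose way to picture that bootstrapping, offered purely as an analogy rather than as biology, is a Lindenmayer system: a tiny "genome" of rewrite rules, applied repeatedly to its own output, grows an elaborate structure from a one-symbol seed.

    # Analogy only: an L-system "grows" structure by rewriting its own
    # output, loosely as development elaborates a compact genetic code.
    RULES = {"A": "AB", "B": "A"}    # the entire "genome": two rules
    state = "A"                      # the one-symbol seed

    for generation in range(6):
        state = "".join(RULES.get(symbol, symbol) for symbol in state)
        print(generation, state)
    # The string lengths grow as 2, 3, 5, 8, 13, 21 -- all of it
    # implicit in the two rules above.

The analogy claims only this much: a small, static description can encode a large, growing structure when the builder executes its own description. Real development layers chemistry, geometry and feedback on top of anything so simple.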
Leroy Hood, director of the Institute for Systems Biology (Seattle), believes that science aided by computers and VLSI technology will achieve major breakthroughs in reverse-engineering the cell's assembly processes. The fallout will be new circuit and computational paradigms along with nanoscale mechanisms for building highly compact molecular computing machines.
"There will be a convergence between information technology and biotechnology that will go both ways," said Hood. "We can use new computational tools to understand the biological computational complexities of cells, and when we understand the enormous integrative powers of gene regulatory networks we will have insights into fundamentally new approaches to digital computing and IT."
But cell machinery can also be enlisted in the kind of nanostructure work that is currently done manually with tools such as the atomic-force microscope. "The convergence of materials science and biotech is going to be great, and we will be able to learn from living organisms how they construct proteins that do marvelous things and self-assemble," Hood said. "There will be lessons about how to design living computer chips that can self-assemble and have enormous capacity."
Hood is credited with inventing the automated DNA-sequencing systems that were the first step in accelerating the decoding of the human genome. Completed two years ahead of schedule thanks to many process enhancements, including MEMS-based microfluidic chips, that effort has stimulated work on the far more complex task of decoding protein functions.
Hood's institute, which was founded in 2000, is one example of a wave of similar organizations springing up across the United States. The idea is to engage a diverse group of specialists, including mechanical and electronic engineers, computer scientists, chemists and molecular biologists, in the effort to decode the cellular-growth process. Stanford University's Bio-X Biosciences Initiative, for example, is dedicated to linking life sciences, physical sciences, medicine and engineering. The Department of Energy's Pacific Northwest National Laboratory has its Biomolecular Systems Initiative, Princeton University its Lewis-Sigler Institute for Integrative Genomics. Harvard Medical School now has a systems-biology department, and MIT has set up its Computational and Systems Biology Initiative (CSBi).
Proteins have remarkable chemical versatility and go far beyond other molecules as chemical catalysts, said Peter Sorger, director of MIT's CSBi. But applications of their properties will have to contend with a difficult cost differential between medical and industrial products.
"Using proteins as catalysts was the absolute beginning of the biotech industry. We know that proteins are the most extraordinary catalysts ever developed. The problem is that most of the chemical industry is a low-margin business and biology has historically been very expensive," Sorger explained.
While organic catalysts derived from oil are not as efficient, the low cost of producing them has kept proteins out of the field. "Most of the applications of proteins to new kinds of polymers, new plastics, biodegradable materials, etc. have all been limited by the fundamental economic problem that oil is so darn cheap," he said. "As a result, bioengineered materials are only used in very high-end, specialized applications."
However, Sorger believes that such bioengineered products will arrive, probably first in biomedical applications, which will then spawn low-end mass-market products. He used the example of Velcro, which was devised as an aid to heart surgery and later became a common material in a wide range of commercial goods. Sorger is looking forward to nanotechnology applications, the assembly of materials and circuits using biological processes, as the first direct applications of protein engineering outside of the biomedical field.
Sorger cited the work of MIT researcher Angela Belcher as an example of the technological spin-offs that will come from attempts to understand cellular processes. Working in the cross-disciplinary areas of inorganic chemistry, biochemistry, molecular biology and electrical engineering, Belcher has found ways to enlist cellular processes to assemble structures from inorganic molecules. By understanding how cells direct the assembly of their structural components, Belcher is finding ways to assemble artificial materials from inorganic nanoclusters that can function as displays, sensors or memory arrays. Another interdisciplinary group at MIT is putting together a library of biological components that engineers could use to build artificial organisms able to accomplish specific nanoscale tasks.
Underlying the excitement surrounding the merger of digital electronics systems and molecular digital organisms are the dramatic capabilities of lab-on-a-chip chemical-analysis systems, automated data extraction and supercomputer data processing. These technologies are part of what made it possible to sequence the entire human genome. A benchmark for the rapid progress promised by those tools may be the announcement by three biotech companies late last year of single chips containing the human DNA molecule in addressable format: the human genome on a chip. That might compare to the advent of the CPU-on-a-chip, which catalyzed the VLSI revolution in the mid-1970s.
The barrier to moving this capability forward lies in the physical differences between DNA and the proteins it encodes. Proteins are built from DNA sequences as linear chains of amino acids that then spontaneously fold into complicated 3-D shapes. And the process becomes more complex as proteins begin to interact with one another: there is a feedback loop, for example, in which proteins regulate the further expression of proteins by DNA. As a result, there are no parallel fluidic-array techniques to accelerate the analysis of protein families. "These technologies have a long way to go. I don't see any fundamental breakthroughs [in protein analysis] in the next few years, but in 10 years, who knows?" said Steven Wiley, director of the Biomolecular Systems Initiative at Pacific Northwest National Laboratory. "There are a lot of smart people out there working on this."
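The feedback loop described above is easy to caricature in code, even if measuring it is the hard part. The Python sketch below, a toy model whose rate constants are invented for illustration, shows negative autoregulation, in which a protein represses transcription of its own gene so that its level settles rather than growing without bound:

    # Toy model of negative autoregulation: a protein represses its own
    # gene. Every constant is invented for illustration, not measured.
    def simulate(steps=200, dt=0.1):
        protein = 0.0                     # initial concentration
        k_max, K, decay = 10.0, 2.0, 0.5  # hypothetical rate constants
        for _ in range(steps):
            # More protein -> stronger repression -> less new synthesis.
            production = k_max / (1.0 + protein / K)
            protein += (production - decay * protein) * dt
        return protein

    print("steady-state protein level: %.2f" % simulate())  # ~5.40

Scale that single loop up to thousands of genes whose products regulate one another rather than only themselves, and the result is the combinatorial, dynamic system that resists the sequence-once-and-done approach that worked for DNA.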
The fundamental challenge is the dynamic aspect of protein function. "DNA is static; once you sequence it, you have it," Wiley said. But proteins "are constantly interacting, so you have to run multiple experiments to observe all their functions and you end up with multiple terabytes of information. So, how are you going to manage and analyze all this information?"
But the excitement generated by recent successes with the genome is contagious. Plans are afoot to decode the "language" of proteins, making their functions widely available to engineers; anyone with a personal computer and a modem can access the human genome over the Internet; lab-on-a-chip technology continues to reduce the cost of bioexperimentation while ramping up throughput. And there is venture capital funding out there.