A new era of drug discovery beckons. Tools and technologies derived from biotechnology, genomics, molecular modeling, and computational chemistry promise significant reductions in the costs and development times necessary to bring new drugs to market.
by Peter Gwynne and Gary Heebner
Applied Precision, Inc.
Carl Zeiss, Inc., Microscopy & Imaging Systems
CAChe Group, Fujitsu
Roche Molecular Biochemicals
Rosetta Inpharmatics, Inc.
Synthetic Genetics, Inc.
• Just A Start
• A New Paradigm
• Two Key Technologies
• The Lure of Libraries
• Cutting Costs by Reducing Volumes
• Starting from Scratch
• Dealing with Data
• Diagnosis and Treatment
• Sourcing and Outsourcing
A new issue has emerged on the political horizon this year: As the American population ages, pressures are growing to keep the costs of health care under control. "In less than 10 years," says Carl Feldbaum, president of the Biotechnology Industry Organization (BIO), "the first wave of the baby boom generation will reach the age of 65. Seniors by definition will have chronic diseases. But it's in the country's interest to keep these seniors as healthy as possible. In addition, the pressures on this generation of seniors and those that follow them to pay for their health care will be daunting if we do nothing to substantially improve the prevention and treatment of diseases of the aging population."
Beyond coping with those pressures, the pharmaceutical industry faces a surge of expirations of key patents. That will mean reductions in pharmas' revenues and profitability. Pharmaceutical companies are responding to these changes by restructuring in such a way that they can work more innovatively, more productively, and more efficiently. Mergers and acquisitions have become common in the industry, and the consolidation shows no sign of slowing down.
However efficiently they operate, pharmas face the undeniable fact that drug development exacts huge costs in money and time. Bringing a new drug to market from scratch today typically takes 15 years and costs about $500 million.
The first stage of this process — drug discovery — represents the critical element in the entire cycle. Not surprisingly, then, pharmas have focused on improving the efficiency of the early stages of drug development programs. During the next few years, companies hope to bring more new chemical entities to market with fewer people and less in discovery spending. Life science manufacturers have developed large numbers of powerful tools and technologies to support these efforts.
At this point biotechnology enters the scene. Ever since 1982, when the U.S. Food and Drug Administration approved Humulin, the form of human insulin derived by biotechnological techniques, such drugs have made steady inroads into the pharmaceutical armamentarium. Today, patients have access to about 130 biotech-derived drugs, of which 20 have been approved for use in just the past year. Over 350 more are now in the pipeline. The great majority of those, says Feldbaum, "are dedicated to the diseases and conditions of our aging population." Indeed, he adds, "The vast majority of BIO's membership of almost a thousand is concentrating on the development of drugs to treat the diseases of aging."
Those numbers represent just a start to biotechnology's role in drug discovery. The successful sequencing of the genomes of humans and other organisms in the past two years has opened the way to an entirely new approach to drug design. "What's overlooked is that biotechnology and genome sequencing came of age at the same time as information technology," explains Feldbaum. "About 8 to 10 years ago we were working with the knowledge of just one or two genes. Now we have a wealth of information on the ingredients of patients at the genetic level."
With the initial sequencing of the human genome now accomplished, pharmaceutical firms can develop new models of drug discovery that not only reduce costs but also refine the discovery process. These models will be based on identification of the genetic sequences that code for a particular protein involved in the disease process. Extension of that approach holds out the promise of diagnosing diseases more effectively and assessing which drugs are more likely to work on specific patients with specific conditions — and to do so very fast. "Genome sequencing and the application of information technology have had a profound effect," Feldbaum says. "The old style of treatment was random. With information from the genome we'll be better able to pinpoint which biotechnology product to use to treat a patient. The next shift will come when we can develop more fully the ability to predict patients' predispositions to diseases."
Drug delivery will also feel the influence of genomics and related disciplines. "Biotechnology is in the forefront of new delivery mechanisms," asserts Feldbaum. "We're moving to pills and safe implants that release drugs over time. The old problem of patients' having to take their pills on time or to come in for needed injections will be diminished."
Certainly the biotechnology business is gearing up to serve the needs of drug discovery. Forecasts in the mid-1990s that the industry would quickly consolidate from more than a thousand companies to just 50 have proven entirely wrong. Today thousands of biotech companies exist around the world, many of them focused on disciplines such as genomics and proteomics that hardly existed five years ago. And in the aftermath of the demise of the dot-com companies, angel financing and venture capital that used to go into Internet-related companies are now moving into biotechnology. "Our winter meeting drew a lot of former dot-com venture capitalists who were curious but not knowledgeable," recalls Feldbaum. "They saw two fundamental differences: Biotech products have a deep and broad market. And biotech CEOs are a lot more seasoned than dot-com CEOs; they have been through a market cycle going up, down, and up again."
Feldbaum also points out the endurance of biotechnology firms. "If a clinical trial goes wrong a biotechnology company will morph into something else," he says. "Very few of them have disappeared. There's a certain sustainability that is demonstrable."
Indeed, the biotechnology business is becoming a frequent discovery partner to the pharmaceutical industry as it not only provides new tools for pharmas but also, in many cases, performs research and development in the drug discovery process. The tools that have come from the biotechnology industry have clearly enabled pharmas to pursue new programs and to streamline their operations. These tools and technologies include chemical libraries, laboratory automation products, DNA chips and microarrays, and a host of informatics-related software programs. In the remainder of this article we shall survey the techniques and their impact on drug discovery.
In the past, natural products provided the predominant source of bioactive compounds and drug candidates. A large percentage of the drugs that exist today originally derived from natural sources. As an example of Mother Nature's power, consider the discovery of paclitaxel (Taxol), a drug produced by Bristol-Myers Squibb Company. This natural compound, isolated from the bark of the Pacific yew, has profound anticancer activity. Similarly, most of the antibiotics in use today stem from natural sources.
Early efforts in drug discovery involved screening natural products derived from plants and microorganisms and testing them for activity in animal models. This was a slow and labor-intensive process, but it led to the remarkable plethora of medications available at the end of the 20th century. The process hasn't stopped. Several academic and corporate researchers now spend their time tracking down herbal remedies used by remote tribes, in hopes that active ingredients derived from the herbs will have measurable therapeutic effects.
Chemists, meanwhile, have complemented the search for natural products by developing families of compounds with potential biological activity. In doing so they hope that close structural relatives of natural products may show more effective therapeutic power than the originals. Producing these families depends on combinatorial chemistry, which uses automated processes to synthesize large numbers of related chemical compounds with a high degree of structural diversity.
Beyond that, advances in molecular biology, genomics, automation and detection, and informatics have shown the way to a new paradigm. This new method of approaching drug discovery and even drug design relies heavily on computational power, and shifts the scientist's efforts from basic laboratory research to virtual research in silico. A major change in drug discovery involves the virtual study of bioactive molecules and the design of drug candidates that have attributes similar to those of known bioactive compounds. These technologies are commonly known as molecular modeling and computational chemistry.
The new approaches have evolved in recent years. In the past, an organic chemist might have produced a huge number of synthetic molecules in the laboratory without having a great deal of understanding of the ideal compound desired. Today a more refined process is in place. Chemists now engage with biologists to gain a better understanding of the process and mechanisms of a disease before going into the laboratory to synthesize potential drug candidates. In fact pharmaceutical and biotechnology companies involved in drug discovery recruit individuals with a more multidisciplinary approach to their research. These individuals often have M.D.-Ph.D. degrees and experience in both research and clinical work.
Molecular modeling uses sophisticated computer programs that can determine the structures and properties of molecules of interest and then intelligently analyze the data to predict the structure of an ideal drug candidate. This is no simple feat. Data that characterize molecules can exist in many formats, making the integration and analysis of the information very challenging. The analyses demand extremely fast and powerful computers. Several firms have developed software programs for molecular modeling and have found a ready market among pharmas. "Molecular modeling has become part of the armory on which pharmaceutical research and development depend," says Mike Stapleton, executive vice president and COO of Accelrys, a subsidiary of Pharmacopeia. "You see all the large pharmaceutical and biotechnology companies using it."
The power of molecular modeling stems in large part from the use of high-powered, graphics-intensive workstations, such as those produced by Silicon Graphics. To solve the problem of data in differing file formats, the industry is increasingly moving to computing environments that use Microsoft's Windows NT (which can incorporate Silicon Graphics' servers). "As we move to Windows NT we can define far more open and transformable formats for general use," explains Stapleton. "But the ultimate solution is the platform change that the customers are demanding now. They want to adopt more modern information technology standards around NT and other items such as document management, web tools, and searching for databases."
Computational chemistry emerged at least a quarter of a century ago and started to make an impact on drug discovery about a decade back. "Today," says George Purvis, vice president of CAChe Software, a group owned by Fujitsu Business Systems of America, "every pharmaceutical company has a specialist modeling group that does computational chemistry. What's happening now that's different is the adoption of modeling as a part of the practice of laboratory chemistry by medicinal chemists in pharmaceutical companies. Modeling helps the experimental chemist focus on those experiments that are most likely to succeed. These researchers are benefiting from a new class of software designed to work the way they think. They work in a property-driven paradigm where scientists specify first the property of the molecule and then the procedure for its determination. This allows them to quickly specify the properties for a virtual compound library and trigger automated calculations and analyses, freeing them to return to the laboratory until the results are complete."
The use of molecular modeling programs to predict and design molecules saves both time and the expense of actually screening a huge library of compounds for activity against a target. In addition to CAChe, such companies as Molecular Simulations, Synopsys Scientific Systems, and Tripos have developed computer programs to assist in the design of synthetic molecules likely to have the desired biological properties while minimizing the risks of adverse effects such as toxicity.
Computational chemistry and molecular modeling provide just two of several approaches to a fundamental starting point of drug discovery: identifying lead compounds that can be screened in vitro for possible therapeutic activity against systems that mimic the behavior of biological targets. New tools have helped to speed up and rationalize what used to be a slow, rather random process. As a result, drug investigators need access to a relatively large number of compounds to screen against each specific drug target.
Scientists who don't want to create their own lead compounds can turn to chemical libraries. Several companies, among them MDS Panlabs, offer collections of chemicals and biomolecules for screening work.
Sigma-Aldrich Corporation is notable in offering more than 100,000 products for screening. "Our Library of Rare Chemicals has the role of fulfilling drug discovery's key need for fast and reliable delivery of high quality, diverse compounds," says Robert Wandler, the library's product manager. "As the first commercial screening compound library, it has grown from a somewhat random collection of interesting compounds to a small thriving business. We make thousands of additions to the library every month, using the computational and medicinal skills of our chemists and managers. We assess these additions using such in silico techniques as assessment of log P, hydrogen bond acceptor and donor groups, and rotatable bonds, as well as 'visual screening' by our chemists."
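The kind of in silico triage Wandler describes can be sketched in a few lines: screen candidate compounds on drug-likeness descriptors such as log P, hydrogen bond donors and acceptors, and rotatable bonds. The compound records and cutoff values below are illustrative assumptions, not Sigma-Aldrich's actual rules.

```python
# A minimal sketch of descriptor-based compound filtering. The cutoffs echo
# commonly used drug-likeness heuristics; the library entries are invented.

CUTOFFS = {"logp_max": 5.0, "hbd_max": 5, "hba_max": 10, "rotatable_max": 10}

def passes_filter(compound, cutoffs=CUTOFFS):
    """Return True if a compound's descriptors fall within all cutoffs."""
    return (compound["logp"] <= cutoffs["logp_max"]
            and compound["hbd"] <= cutoffs["hbd_max"]
            and compound["hba"] <= cutoffs["hba_max"]
            and compound["rotatable"] <= cutoffs["rotatable_max"])

library = [
    {"id": "CMP-001", "logp": 2.1, "hbd": 2, "hba": 4, "rotatable": 5},
    {"id": "CMP-002", "logp": 6.8, "hbd": 1, "hba": 3, "rotatable": 4},   # too lipophilic
    {"id": "CMP-003", "logp": 3.4, "hbd": 4, "hba": 9, "rotatable": 12},  # too flexible
]

hits = [c["id"] for c in library if passes_filter(c)]
print(hits)  # -> ['CMP-001']
```

In practice the descriptors themselves would come from a cheminformatics package rather than being entered by hand; the point is that a cheap computational pass removes unpromising structures before any reagent is spent.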
Keeping any chemical library up to date demands a steady source of new products. "We work with a select group of manufacturers," says Wandler. "We are careful to add only diverse drug-like molecules. It is important to our customers that we don't waste their time offering them structures that are not of interest to them."
Whatever their sources, lead compounds must be separated and characterized. "Small molecule characterization is relatively simple," says Ken Imatani, LC-MS product manager for Agilent Technologies, a market leader in the field for several years. "But large molecule characterization can be a much tougher task. These molecules have various levels of structure. To understand the really interesting information about them you have to have the fine structural details."
Common methods of separation include gel electrophoresis, capillary electrophoresis, high-performance liquid chromatography, microfluidics, and microarray technologies. Similarly, researchers have plenty of technical options for detecting compounds, among them visible, ultraviolet, and fluorescence optical methods, mass spectrometry, and nuclear magnetic resonance. "Each has its own place, and together they provide complementary and confirmatory information," says Imatani.
Recent years have seen the arrival of hyphenated technologies such as liquid chromatography-mass spectrometry (LC-MS) that couple separation and detection procedures. "These increase throughput and avoid the need to transfer very small amounts of already small samples multiple times," says Imatani. The growing use of these technologies stems in large part from the reduction of data processing costs for mass spectrometry. "In the early 1980s the data system hardware for a mass spectrometer often cost nearly as much as the instrument itself — perhaps $75,000 to $100,000," recalls Bryan Miller, Agilent's LC-MS product manager for software applications. "Nowadays you're hard pressed to spend more than $10,000 on a high-end system."
Typically, units from Agilent and other vendors are integrated with personal computers and powerful data analysis systems for a more automated approach to separation and identification. Other factors in the increased use of LC-MS are dramatic advances in sensitivity and ease of use; the application to the analysis of proteins, peptides, and oligonucleotides; and the development of suitable software for protein identification.
Increasing productivity means that more samples must be screened in shorter time, preferably with less expenditure of labor. To accomplish this, manufacturers have developed faster and more capable screening systems. These range from semiautomated work stations to fully automated robotic systems. "When you are performing cell-based assays to validate targets, to screen candidate compounds, to optimize leads, or to test for toxicity, you are doing cell biology; and cell biology has traditionally been a slow, manual, exceedingly low throughput science," says William Busa, chief scientific officer of Cellomics, Inc. "Keeping the pharmaceutical companies' pipelines full is all about increasing bandwidth. If one step in the process is too slow you have a bottleneck and the bandwidth of the overall drug discovery process suffers. By enabling automated cellular analysis, Cellomics' technologies aim to remove one of the last remaining bottlenecks in the drug discovery pipeline."
Chips to Hits
Chips to Hits, the eighth annual international microtechnology event organized by IBC USA Conferences Inc.®, will take place from October 28 to November 1 in San Diego. "The event addresses various applications of microtechnology to the pharmaceutical and biotech industries," says Abby Votto, IBC USA's marketing manager. "The meeting delivers comprehensive updates on the technologies that are redefining drug discovery through diagnostics."
Topics to be discussed at the event include Commercializing Drug Discovery Technologies: Financial and Legal Considerations, The New Molecular Diagnostics, and Emerging Technologies. The event will also include sessions on the application of microarrays to genomics, proteins, and diagnostics.
For further information you can check the event's website, www.chipstohits.com. Alternatively you can phone 508-616-5550, fax 508-616-5522, or write to IBC USA Conferences Inc. at One Research Drive, Suite 400A, Westborough, MA 01581-5195.
Cellomics' automated assay systems provide one example of laboratory automation. Others include liquid handling systems such as multichannel pipetters and 96-well plate washers. Intended for use in the research lab, these will be replaced by bigger and more automated tools as the process of drug discovery moves from research and development to scale-up. Hamilton Company, Nalge Nunc International, Wheaton Science Products, and others provide many of the basic systems used for semiautomated liquid handling.
Some workstations can perform multiple functions in addition to basic liquid handling tasks. These units can fill, wash and rinse, and read fluorescence or other characteristics of a sample. Workstations of this type are often designed to run unattended. They can greatly reduce the manual labor required to fill and dose plates with reagents and are usually designed to perform a somewhat limited range of tasks.
The high end of the range of laboratory automation includes robotic systems designed by Applied Biosystems, Beckman Coulter, Cell Robotics, Packard Bioscience, Qiagen, and Zymark, among other vendors. These companies offer sophisticated, versatile systems that can perform many of the functions needed to screen compounds for biological activity.
Robotics help pharmaceutical companies to cut the costs of drug discovery by reducing labor requirements. Firms can also meet financial targets by lowering the costs of materials. In the past, a typical assay might have required one milliliter of sample. Today, assays routinely use sample amounts in the microliter range. As sample volumes decrease, so do the costs of reagents — a significant factor in laboratories that deal with literally thousands of lead compounds. Better yet, the reduction of sample volumes can reduce the size of a laboratory needed to process the work.
Equipment from Corning, Eppendorf Scientific, Nalge Nunc, and others has undergone a transition from 96-well to 384-well to 1,536-well plates. That process has reduced the volumes of samples from hundreds of microliters to nanoliters in some cases. Reducing the sample volumes used in these microwell plates may not seem very complicated at first glance. However, collecting and dispensing such small nanoliter volumes presents many challenges. "The move to 1,536-well plates is being hindered by an instrumentation gap," says Carl Jones, manager for high throughput systems at Finnish company Thermo Labsystems Oy. "The problem is that the total assay volume is down at the 2 to 5 microliter range in a 1,536-well plate. This means that volumes in the nanoliter range now need to be addressed."
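The reagent arithmetic behind that plate transition is easy to sketch. The per-well volumes and the reagent price below are illustrative assumptions, not figures from the vendors quoted above.

```python
# Back-of-the-envelope reagent savings from plate miniaturization.
# Assumed assay volumes per well (microliters) for each plate format:
plates = {96: 100.0, 384: 25.0, 1536: 2.5}

def reagent_per_10000_assays(volume_ul, cost_per_ml=5.0):
    """Total reagent volume (mL) and cost for 10,000 assays at a given well volume.

    cost_per_ml is a hypothetical reagent price used only for illustration.
    """
    total_ml = volume_ul * 10_000 / 1000.0
    return total_ml, total_ml * cost_per_ml

for wells, vol in plates.items():
    ml, cost = reagent_per_10000_assays(vol)
    print(f"{wells:>4}-well plate: {ml:8.1f} mL of reagent, ${cost:,.2f}")
```

Under these assumptions, moving a 10,000-assay campaign from 100-microliter wells to 2.5-microliter wells cuts reagent consumption fortyfold, which is why vendors and users alike are pushing through the nanoliter dispensing problems Jones describes.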
Having proved its potential in a molecular or biochemical assay, a compound will undergo a cell-based assay to develop a more realistic sense of how it will perform in a cellular system. What used to be a very difficult and tedious procedure has become much more automated and consistent. Applied Biosystems, Cellomics, and other vendors have developed highly automated systems for cell-based assays. These instruments allow scientists to culture living cells very closely related to the cell types found in specific organs. This gives researchers a better understanding of how cell type may affect the activity of the compound being evaluated.
The cell-based assay can also provide an indication of cytotoxicity. That permits researchers to cut their losses on drug candidates at an early stage of development, well before they reach clinical trials. "Discovering complications during clinical trials is horribly expensive," says Cellomics' Busa. "By that point you've already spent years and hundreds of millions of dollars developing your compound."
Another technology permits pharmaceutical scientists to work at higher speed for lower cost. DNA chips or microarrays allow researchers to screen a large number of compounds for biological activity against a target in a single experimental run. This technology is an excellent example of assay miniaturization. Scientists can spot very small samples onto a solid surface and subject them to the target molecules to see which spots have biological relevance. These chip experiments produce large numbers of data points and require specialized equipment to analyze results. Genotyping chips from companies such as Affymetrix, Amersham Pharmacia Biotech, Clontech, Corning, Motorola, and Stratagene allow researchers to screen many genes in an organism's genome for involvement in a specific disease process.
Scientists who want to start from scratch have plenty of options for obtaining the basic components they need to design and assemble DNA chips. Vendors provide the glass slides, spotters, and even sets of oligonucleotides for building microarrays. "We sell whole genome sets for human, mouse, and other model organisms such as yeast and Drosophila," says Nathan Hamilton, president of Operon Technologies. "We can also make ready-to-print oligos using our bioinformatic services."
DNA chips can also be used for expression profiling to determine which genes are producing changes in the cell when it is responding to different environmental conditions. Clontech offers microarrays of this type. This technique is useful in comparing how cells change when they are diseased. Scientists hope that studying different protein profiles will lead to discovery of proteins involved in the disease process.
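At its simplest, the comparison described above reduces to computing fold changes between two expression profiles and flagging the genes that move. The gene names, signal intensities, and threshold below are invented for illustration.

```python
# Toy expression-profiling comparison: flag genes whose microarray signal
# changes beyond a fold-change threshold between healthy and diseased cells.

healthy = {"GENE_A": 120.0, "GENE_B": 45.0, "GENE_C": 300.0}
diseased = {"GENE_A": 480.0, "GENE_B": 50.0, "GENE_C": 60.0}

def changed_genes(control, treated, threshold=2.0):
    """Genes whose expression ratio exceeds the threshold in either direction."""
    flagged = {}
    for gene, base in control.items():
        ratio = treated[gene] / base
        if ratio >= threshold or ratio <= 1.0 / threshold:
            flagged[gene] = round(ratio, 2)
    return flagged

print(changed_genes(healthy, diseased))  # GENE_A up fourfold, GENE_C down fivefold
```

Real analyses add replicates, normalization, and statistics, but the underlying question — which genes move when the cell is diseased — is exactly this one.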
Sequencing of the human genome has given a major boost to the pharmaceutical business. But applying that work to drug discovery requires a certain amount of translation. "It's like developing one's vocabulary and then trying to read and understand the meaning of a novel," says David Miles, marketing director of Invitrogen. "The next step in this whole process involves cloning and expression and analysis of what the individual components are."
Paul Goodson, Invitrogen's vice president of investor relations, points out the difficulty of taking that step. "We still believe that an understanding of the genome is critical to the development of therapies," he says. "But it's not going to be as easy as everyone originally thought. Quite a lot of experimentation will be necessary before effective therapies are developed. The more efficient you can be in screening and evaluating, the more quickly you will be successful."
Certainly the many products and kits for gene cloning and expression produced over the last decade have taken much of the black magic out of these molecular techniques. Kits with all the components necessary for cutting genes from a chromosome and inserting genes into expression vectors are now relatively simple to use. These products will play a key role in translating the genomic sequences into more meaningful information. In addition to Invitrogen, the handful of corporate pioneers in the field includes New England Biolabs, Promega, and Stratagene.
The research community has moved to the molecular level in its attempts to understand cellular functions. Scientists can now determine the DNA sequence of a gene and the amino acid sequence of a protein or peptide. Databases that hold these bytes of information often use different design architectures and processing systems, thereby complicating research groups' ability to share their data. Hence Sun Microsystems and other companies that have played key roles in the development of computer equipment for life science investigation face a situation similar to that which confronted conventional computing 20 years ago: the need for common formats. Standardization will grow in importance as more researchers access both each other's data and the information in public and private databases.
Dealing with data effectively is indeed essential for pharmaceutical researchers who want to make use of human genome sequences. "If you're doing direct discovery the genome sequence alone is not of much value," says Roy Whitfield of Incyte Genomics. "You want the protein sequences." Thus Incyte provides the data content that permits scientists to get what they need from sequence data for drug discovery. The company backs up its data with sequence-verified clones for researchers to conduct safe lab experiments. DoubleTwist, InforMax, and other companies also work in this area.
Several genes have already been identified and implicated in diseases. Researchers know, for example, that BRCA2 is associated with breast cancer susceptibility. Celera Genomics is in the process of sequencing the rat genome. That will be useful in matching known sequences in the rat genome with analogous sequences in the human genome. That may lead to a better understanding of so-called orphan genes that currently have no apparent function in the human genome.
This type of work leads directly to proteomics, the study of the structure and functional characteristics of proteins. Scientists recognize that a single gene may produce more than one protein as a result of alternative splicing or other post-transcriptional modifications. Because of these variations, it is also important to determine which form of a protein may be implicated in a disease process. Proteomics becomes an even greater challenge than genomics because proteins and polypeptides have three-dimensional conformations that dramatically impact their biological activity. Databases such as that of the Swiss Institute of Bioinformatics house the sequence information of many peptides for researchers interested in using these data in their work. Companies such as Proteome and AxCell Biosciences also provide databases for proteomics research.
Yet another method of dealing with data has gained a growing role in drug discovery. Finding the right compounds used to mean spending long hours searching the literature and making calls to colleagues who might have access to substances with specific characteristics. To reduce this time and increase the efficiency of locating possible drug candidates, companies have developed searchable databases and powerful search engines that allow researchers to enter the characteristics of a compound of interest and search for natural or synthetic compounds with similar properties. "Cheminformatics is the science of keeping track of chemicals and how they affect different disease processes," says Tad Hurst, chief technical officer of ChemNavigator Inc. "It's a new asset," adds Scott Hutton, the company's president and CEO. In addition to ChemNavigator, MDL Information Systems and CambridgeSoft provide scientists with searchable chemical databases and provide sources for the hits that the searches reveal.
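The similarity search at the heart of such databases can be sketched simply: encode each compound as a set of structural features (a "fingerprint") and rank database entries against a query by Tanimoto similarity. The feature sets below are toy stand-ins for real fingerprint bits, and the entry names are invented.

```python
# Minimal sketch of fingerprint-based similarity search over a chemical database.

def tanimoto(a, b):
    """Tanimoto coefficient between two feature sets: intersection over union."""
    return len(a & b) / len(a | b)

database = {
    "aspirin-like": {"aromatic_ring", "carboxylic_acid", "ester"},
    "ibuprofen-like": {"aromatic_ring", "carboxylic_acid", "isobutyl"},
    "sugar-like": {"hydroxyl", "pyranose_ring"},
}

# Query: the feature set of a compound of interest.
query = {"aromatic_ring", "carboxylic_acid", "ester"}

ranked = sorted(database, key=lambda name: tanimoto(query, database[name]), reverse=True)
print(ranked[0])  # -> aspirin-like
```

Production systems use fingerprints with hundreds or thousands of bits and far larger catalogs, but the ranking logic is the same: a researcher enters the characteristics of a compound of interest and gets back the closest available structures.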
The ultimate goal of the data handling is information that has practical value in the diagnosis and treatment of disease. Single nucleotide polymorphisms (SNPs) have emerged in recent years as significant players in the diagnostics arena. These are DNA base changes that have been shown to play a role in determining an individual's risk of developing a specific disease. SNP analysis has formed the basis for several recently founded companies, including Genome Therapeutics, CuraGen Corporation, and Gene Logic. Analysis that associates specific SNPs with particular diseases will open the way to diagnostic tests that can screen populations to pick out those individuals at increased risk of disease. The tests will have particular value in identifying individuals in such an early stage of a disease that they show no symptoms at the time of screening. Early diagnosis followed by the selection of appropriate treatment plainly increases patients' chances of cure and survival.
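The simplest statistic behind such an association study is the odds ratio computed from a 2x2 case/control table of allele counts. The counts below are invented to illustrate a risk allele enriched in cases; they do not come from any study cited in this article.

```python
# Sketch of the basic SNP-disease association statistic: the odds ratio of
# carrying a risk allele among cases versus controls.

def odds_ratio(case_risk, case_other, control_risk, control_other):
    """Odds ratio from a 2x2 table of allele counts (cases vs. controls)."""
    return (case_risk / case_other) / (control_risk / control_other)

# Hypothetical counts: risk allele on 60 of 100 case chromosomes,
# 30 of 100 control chromosomes.
or_value = odds_ratio(60, 40, 30, 70)
print(round(or_value, 2))  # an odds ratio well above 1 suggests association
```

An odds ratio near 1 means the variant is equally common in patients and healthy controls; values well above 1, confirmed with appropriate significance testing in large cohorts, are what turn a SNP into a candidate diagnostic marker.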
Research on SNPs also points toward the concept of personalized medicine. The idea is to tailor treatments to specific subpopulations of individuals according to their genetic characteristics. Once put into practice, the concept will provide better therapeutic effects while minimizing side effects for patients who may, for example, lack tolerance to a particular drug.
Meanwhile the surge of therapeutic products developed with molecular biology techniques is beginning to reach the market. Herceptin, produced by Genentech, provides a recent success story. This biotech drug is based on countering the overexpression of HER-2, a specific gene product involved in breast cancer. Genentech pursued the development of an antibody directed to the extracellular domain of HER-2. The drug appears to have several effects, including down-regulating the HER-2 receptor.
In May 2001 the FDA approved Gleevec (imatinib mesylate) for use in the treatment of the blood cancer chronic myelocytic leukemia (CML). Manufactured by Novartis, Gleevec is a protein-tyrosine kinase inhibitor that acts on the Bcr-Abl tyrosine kinase, which is abnormally upregulated in CML. The drug inhibits proliferation and induces apoptosis (a form of cell death) in Bcr-Abl positive cell lines as well as fresh leukemic cells from patients with CML. Early clinical trials revealed that the drug had profound effects on CML patients. As a result of that finding, the drug's efficacy, and a good understanding of the molecular mechanisms of its action, the FDA put Gleevec on its fast-track approval process.
Fresh understanding of disease processes at the molecular and cellular level is leading to new means of delivering therapies as well as new drugs. French firm Neurotech S. A., for example, is developing cell-based therapies for diseases of the eye. "Biotechnology is largely based on proteins, but you can't deliver proteins to the eye," explains Tom Shepherd, Neurotech's president and CEO. "We feel that by using cells as a delivery system we can open the eye to biotechnology. We know that proteins work when injected into the eye. Our focus is less drug discovery than taking previous knowledge of proteins and finding a practical way to deliver them."
Neurotech has two broad technology platforms. Its traditional method of cell therapy transplants human immortalized retinal cells into the subretinal space of the eye to replace degenerating cells in treating age-related macular degeneration, the main cause of blindness in adults. With broader application, the company is working on encapsulated cell technology. This technique places human retinal cells within an implantable semipermeable membrane that protects them from the immune system while allowing the cells to continuously release a therapeutic protein into the eye. "We have good data in the case of retinitis pigmentosa [another eye affliction] following implantation," says Shepherd. "We see no reason why we could not get a device that can be implanted in a person once every year."
While high-profile boutique vendors of specialized biochemicals and reagents play a strong role in the pursuit of drug discovery, researchers in the field often benefit from general supply companies as sources of the equipment and supplies they use in their everyday work. In the United States, companies such as Fisher Scientific and VWR/Scientific Products provide the basics for research and development in this industry. Across the Atlantic, Merck Eurolab provides the same type of service.
In addition, pharmaceutical researchers rely on specialized suppliers for help with such mundane but essential tasks as removing the solvents used in preparation or purification of a bioactive compound. "Solvent removal is used throughout drug discovery but particularly in synthesis," says Harry Cole, managing director of Genevac Ltd. Founded in 1990, the British company supplies equipment for solvent extraction and other synthesis tools. "We look at maintaining purity because the drying process tends to cause cross-contamination," adds Cole.
Pharmaceutical companies have also streamlined the drug discovery process by outsourcing tasks that do not form part of their core competence. Outsourcing permits individual companies to focus on and invest in areas of strength without the distraction of having to manage unrelated processes. "The smaller pharmas tend to outsource process development and manufacturing as they don't have the resources and facilities," says Vincent Bille, customer service and global development manager of UCB-Bioproducts. "With big pharma it's a mixed picture. The decision on whether or not to outsource depends on the in-house expertise in the specific field and on how much control they want to retain on the activity or the product."
What is certain is that pharmas can outsource just about every facet of drug discovery. "The broad range of contract services spans the spectrum from sourcing material development for combinatorial chemistry to high throughput screening," says Richard Mitchell, senior director of pharmacology for MDS Pharma Services. "We are not ourselves involved in drug discovery," adds Bille. "Rather, we provide services to companies that are involved in drug discovery and drug development. We specialize in peptide and peptidomimetics manufacturing, starting as early as the synthesis of pre-lead compounds."
Outsourcing has become such a significant factor in many drug discovery programs that outsourcing organizations have become strategic partners, as opposed to short-term sources of help, with the pharmas that engage them. That type of arrangement offers one particular advantage. "Most contract research organizations have a broad geographic presence," explains Mitchell. "So they know local regulations and have contact with local patient groups."
The sequencing of the human genome will provide researchers with a new tool for rational drug design. Scientists can now use DNA sequence data to look for genes that may be related to disease and to help answer fundamental questions about how the disease process works. The next challenge they face is gaining a better understanding of the complex relationship between genes and the proteins they produce. Proteomics represents the next big wave in the biotech and pharma communities. And since researchers now recognize that one gene can produce more than a single protein, the processes of transcription and translation could be intimately involved in both cellular regulation and the disease process. As with most scientific endeavors, the more investigators learn the more they understand how much more there is to learn.
Researchers will plainly need more powerful tools to pursue that learning and understanding. Manufacturers will respond with more innovation, providing more powerful data handling systems and more powerful analytical tools. And informatics will take center stage in this new wave, allowing researchers to manage and unravel the flood of data that their large-scale experiments are producing.
Peter Gwynne is a freelance science writer based on Cape Cod, Massachusetts, U.S.A. Gary Heebner is president of Cell Associates, a scientific consulting firm in Chesterfield, Missouri, U.S.A.
This article was published as a special advertising supplement in the 3 August 2001 issue of Science.