Microscope 2.0: Artificial intelligence (AI)-powered microscopes are coming to a lab near you.

Technology Feature

Small images, big picture: Artificial intelligence to revolutionize microscopy

This special feature is brought to you by the Science/AAAS Custom Publishing Office

Researchers hope that bringing deep learning techniques to cell imaging and analysis could turn messy biological problems into solvable computations.  

It was 20 years ago—while completing her Ph.D.—that computational biologist Anne Carpenter first realized she needed to learn computer programming.

Carpenter, who runs a laboratory at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, remembers facing a choice between three months of manual image analysis and teaching herself to program so that her microscope could run by itself. She chose the latter. This automated approach has since shown its potential to solve, or at least begin to tackle, some of the problems that limit scientists who rely on microscopy to observe the workings of cells by eye. For example, automation can reduce the time-consuming, meticulous work of identifying changes in cell shape and structure, known as cellular morphology.

Carpenter’s lab focuses on accelerating drug discovery using software to analyze the cellular morphology data contained in millions of images. “There are so many bottlenecks in the drug discovery pipeline, and the data from these images is proving useful for each of them: from building better disease-relevant assays and better screening libraries, to predicting assay outcomes and toxicity,” she says.

Managing the limitations

Rebecca Richards-Kortum, a professor of bioengineering at Rice University in Houston, Texas, is collaborating with the MD Anderson Cancer Center to address some of the fundamental limitations of traditional microscopy. In a conventional microscope there is a fixed trade-off between depth of field (DOF) and spatial resolution: the higher the desired spatial resolution, the narrower the DOF. Working with Ashok Veeraraghavan at Rice and Ann Gillenwater at MD Anderson, the team has developed a computational microscope called DeepDOF, which achieves a DOF more than five times that of conventional microscopes while preserving resolution, significantly reducing the time needed for image processing.

DeepDOF uses an optimized phase mask placed at the microscope aperture and a deep-learning-based algorithm that turns sensor data into high-resolution, large-DOF images, explains Richards-Kortum.
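As a rough illustration of the reconstruction half of that pipeline, the sketch below trains a small convolutional network (written in Python with PyTorch) to map an encoded sensor image to a sharp, extended-DOF image. The architecture, data, and the ReconstructionNet name are placeholders invented for this article, not the published DeepDOF model, which also optimizes the physical phase mask jointly with the network.

```python
# Minimal sketch, not the published DeepDOF model: a small convolutional
# network learns to map a phase-mask-encoded sensor image to a sharp,
# extended-depth-of-field image. The real system also jointly optimizes
# the physical phase mask together with the reconstruction network.
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, sensor_image):
        # Predict the sharp image as a residual correction to the raw sensor data.
        return sensor_image + self.net(sensor_image)

model = ReconstructionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One training step, with random tensors standing in for pairs of
# (encoded sensor image, all-in-focus ground truth).
sensor = torch.rand(4, 1, 128, 128)
ground_truth = torch.rand(4, 1, 128, 128)
optimizer.zero_grad()
loss = loss_fn(model(sensor), ground_truth)
loss.backward()
optimizer.step()
```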

“Because of its low cost, high speed, and automated analysis capabilities, we hope the DeepDOF scope can greatly expand the number of surgical centers that have the capability to accurately assess oral cancer tumor margins at the time of surgery,” she says. “The ability to accurately assess diseased tissue could help optimize the outcome of surgical removal, especially in resource-constrained settings, such as rural areas.”

One of the biggest challenges in developing health care technologies that combine microscopy and AI, she says, is the need to demonstrate their benefits “prospectively.”

The deep learning algorithms that power computational microscopes require large datasets for training before they can perform tasks independently, but such datasets are not always readily available. The algorithms’ performance must then be benchmarked against the current standard of analysis.

“This is a common challenge across the health care technology community,” she says.

Working through the challenges

Ricardo Henriques runs the Optical Cell Biology Laboratory at the Instituto Gulbenkian de Ciência, Portugal. His cross-disciplinary team of optical physicists, computer scientists, and biomedical researchers works on pushing the limits of current imaging technology. The team is focused on two key challenges: how to analyze the real-time behavior of viruses infecting living cells, and how to build intelligent microscopy technology that reduces the damage caused by light to biological systems during observation, known as phototoxicity.

To imagine how these ideas work together, he likens cells to football players.

“So you want to film a football match, but there’s something about the camera that is toxic to the players,” says Henriques. “To reduce the risk to them, you must minimize the amount of time you’re filming, but you also need to make good decisions about what key moments to capture to truly understand the game.”

Henriques’ team is developing machine learning algorithms that can better predict when key events will happen in cells as a viral infection progresses, and capture those moments. At the same time, the algorithms aim to cut the time spent capturing irrelevant changes, limiting the cells’ exposure to damaging light.
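One way to picture that strategy in code is sketched below: image sparsely by default, and switch to rapid capture only when a model judges that a key event is imminent. The functions predict_event_probability and acquire_frame, and the interval and threshold values, are hypothetical stand-ins rather than the lab’s actual software.

```python
# Illustrative sketch of event-driven ("smart") acquisition: monitor cells at a
# gentle rate and burst-image only when an event looks likely, so that light
# exposure (and phototoxicity) stays as low as possible.
import random
import time

SLOW_INTERVAL_S = 60.0   # gentle monitoring rate between frames
FAST_INTERVAL_S = 2.0    # burst rate around predicted events
EVENT_THRESHOLD = 0.8    # probability above which an event is considered imminent

def predict_event_probability(frame):
    # Hypothetical stand-in for a trained model scoring the latest frame.
    return random.random()

def acquire_frame(exposure_ms):
    # Hypothetical stand-in for a call to the microscope control software.
    return {"exposure_ms": exposure_ms, "timestamp": time.time()}

def smart_acquisition_schedule(n_frames=20):
    schedule = []
    for _ in range(n_frames):
        frame = acquire_frame(exposure_ms=50)
        p_event = predict_event_probability(frame)
        # Spend the light budget only when an event looks likely.
        interval = FAST_INTERVAL_S if p_event > EVENT_THRESHOLD else SLOW_INTERVAL_S
        schedule.append((frame["timestamp"], p_event, interval))
    return schedule

print(smart_acquisition_schedule())
```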

For Henriques, it’s been important to build a cross-disciplinary team to tackle these ideas, as the work involves multiple scientific skill sets.

“There needs to be a mentality shift to fully bring AI into scientific research across the board,” he says.

Many of the disciplines central to microscopy, such as physics and biology, tend to work separately, owing to language barriers between the fields and to funding that is typically organized around discrete areas rather than collaborative projects.

“I think that organizations are slowly investing in building these bridges, but more needs to be done to encourage this,” says Henriques.

Building bridges

Geoscientist Matt Andrew works at optics technology company ZEISS in Dublin, California, where his research focuses on flow and transport processes in porous and sedimentary rocks. His work has increasingly centered on developing technologies to make better use of the data produced by microscopes, he says, and he now works with teams across the company to help colleagues bring AI into their research practice.

He says that the key to introducing AI into the daily practice of microscopy, whether you’re looking at cells or rocks, is to ensure that the technology can be used by any scientist, regardless of their knowledge of deep learning techniques.

“Building workflows that unlock the potential and power of deep learning, and that work quickly and are easy to use, has been critical to their adoption,” Andrew says.

For example, Andrew and his team use a process called the Solutions Lab to build workflows that use AI to automatically detect sample regions scientists may wish to investigate. “You can use AI to identify regions that correspond to individual features you then want to image at a much higher resolution,” he says, adding, “AI technologies commonly use open-source libraries and shared components; our technologies are so successful because we ensure they are a lot more straightforward to use and come in a package that’s easier to digest.”
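A hedged sketch of that overview-then-zoom idea, in Python, might look like the following. It assumes a per-pixel score map from some trained detector; here random numbers stand in for real scores, and the function and parameter names are invented for illustration.

```python
# Toy sketch: turn a detector's score map from a fast, low-resolution overview
# scan into bounding boxes of regions worth re-imaging at higher resolution.
import numpy as np
from scipy import ndimage

def regions_to_reimage(score_map, threshold=0.7, min_pixels=20):
    mask = score_map > threshold               # keep only high-scoring pixels
    labels, n_regions = ndimage.label(mask)    # group them into connected regions
    boxes = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        if np.count_nonzero(labels[sl] == idx) >= min_pixels:
            # (row_start, row_stop, col_start, col_stop) in overview coordinates
            boxes.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
    return boxes

overview_scores = np.random.rand(512, 512)     # placeholder for model output
print(regions_to_reimage(overview_scores))
```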

Andrew believes that we’re at the beginning of a revolution in the use and implementation of microscopy data.

“If I think back to 5 years ago, we didn’t have a clue that we could use these kinds of techniques for microscopy,” he says. “Now, we’re moving to a point where we are going to have these algorithms sitting at the heart of every single portion of all of our workplaces.”

Luciano Guerreiro Lucas, a director at Leica Microsystems, headquartered in Wetzlar, Germany, is also focused on creating intelligent software solutions that can solve some of the biggest problems the life sciences and biopharma communities face when it comes to image data. Over the past four and a half years, his team has been building a library of pre-trained deep learning models, along with the Aivia software, that allows anyone to leverage key AI microscopy technology.

“Present day tools ignore the fact that researchers may be experts in biology, or a similar discipline, but have very limited expertise in microscopy or image analysis,” says Lucas. “We are creating tools that leverage the biologist’s expertise and learn from them. Such tools should gradually learn what a cell is and what it can look like in multiple scenarios and ultimately do the imaging and image analysis autonomously, allowing the researcher to focus on the creative and critical thinking portion of the scientific discovery process.”

Lucas says that the main challenges to fulfilling this idea are a limited supply of high-quality, structured data and a lack of standard image formats.

“These problems make it hard to progress faster in our field. Moreover, the data that does exist tends to be kept in isolated silos. It is hard to coordinate broad agreement around file and data standards in the research space. Researchers all like to do things their own way.”

The commercial and academic sectors need to invest more time in educating the community about the benefits of solving these issues, he says.

Data in action

Steven Finkbeiner, a director and senior investigator at the Gladstone Institutes in San Francisco, California, has been at the forefront of AI and microscopy research for the past decade. Since inventing a fully automated robotic microscope that can track cells for months at a time, he and his team have generated extraordinarily large amounts of data. This information has given his team the ability to truly explore the potential of AI.

“We have shamelessly been using the petabytes of data we generate,” he says.

For example, his team is using facial recognition AI technology—treating a cell’s morphology as a face—to identify and track individual cells in complex systems, such as tissues, over time.
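The analogy can be sketched in a few lines of Python, assuming each segmented cell is reduced to an identity embedding that is then matched across time points, much as face embeddings are matched. The embed_cell function below is a simple placeholder, not the Gladstone team’s network.

```python
# Toy sketch of cell re-identification: give each cell an "identity" vector
# and match cells between two time points by nearest embedding.
import numpy as np

def embed_cell(cell_image):
    # Placeholder embedding (a normalized intensity histogram); a trained
    # network would produce a far more discriminative representation.
    hist, _ = np.histogram(cell_image, bins=32, range=(0.0, 1.0), density=True)
    return hist / (np.linalg.norm(hist) + 1e-8)

def match_cells(embeddings_t0, embeddings_t1):
    # Greedy nearest-neighbor matching by cosine similarity.
    sims = embeddings_t0 @ embeddings_t1.T
    return {i: int(np.argmax(sims[i])) for i in range(len(embeddings_t0))}

frame0 = [np.random.rand(64, 64) for _ in range(5)]   # cells at time t0
frame1 = [np.random.rand(64, 64) for _ in range(5)]   # cells at time t1
e0 = np.stack([embed_cell(c) for c in frame0])
e1 = np.stack([embed_cell(c) for c in frame1])
print(match_cells(e0, e1))
```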

“We expect these approaches will open up new possibilities to study processes that involve complex cell–cell interactions such as neuroinflammation,” he says.

Finkbeiner is also teaching deep learning networks to diagnose neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS) and Parkinson’s disease, by showing the networks examples of cell images from patients.

“We ask the network in a relatively unbiased way to see if it can find anything in the image that would enable it to make an accurate diagnosis, and we have very encouraging results,” he says. “We are hopeful that this could lead to new ways to stratify patients, discover biomarkers, and develop effective individualized therapies. It may even make it possible to diagnose a risk for a disease before symptoms start, which would be transformative.”

His team is also using AI to predict the future fate of cells. “To do this, we are leveraging our longitudinal single-cell data and the use of deep learning networks to look for features of a cell at an early time point that would predict its fate. We are using this now with a cancer project that will help us understand why some cells develop drug resistance and some do not,” says Finkbeiner.
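The shape of that experiment can be sketched with synthetic numbers standing in for the lab’s longitudinal single-cell measurements, and a simple classifier standing in for a deep learning network; everything below is illustrative rather than Finkbeiner’s actual pipeline.

```python
# Minimal sketch of fate prediction: train on features measured from each cell
# at an early time point, with the label being that cell's later observed fate
# (for example, drug-resistant versus drug-sensitive).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
early_features = rng.normal(size=(500, 8))   # e.g., morphology and intensity at t0
later_fate = (early_features[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    early_features, later_fate, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```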

Bringing AI into labs

Rich Gruskin is senior general manager for software systems at Nikon Instruments, headquartered in Melville, New York. He works closely with customers to ensure that researchers are easily able to adopt AI technology.

In a recent case, a customer was looking to identify multiple cell types in label-free (brightfield) image data. Because the images were low contrast and the cell types differed only by sometimes subtle morphological features, several AI networks were trained to work together in one analysis assay to distinguish them.
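One common way for several networks to “work together” is to pool their predictions; the sketch below shows that idea with a small ensemble of classifiers whose per-class probabilities are averaged. This is an assumption made for illustration, not Nikon’s actual implementation.

```python
# Hedged sketch: several small classifiers score the same label-free images,
# and the consensus of their softmax outputs assigns each cell type.
import torch
import torch.nn as nn

def make_classifier(n_classes=3):
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, n_classes),
    )

ensemble = [make_classifier() for _ in range(3)]
brightfield_batch = torch.rand(8, 1, 64, 64)   # placeholder low-contrast images

with torch.no_grad():
    probs = torch.stack([m(brightfield_batch).softmax(dim=1) for m in ensemble])
    cell_type = probs.mean(dim=0).argmax(dim=1)  # consensus prediction per image
print(cell_type)
```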

“We trained a neural network using the customer’s datasets. We ran it and it works very well,” he says. “Sometimes if there’s resistance to trying something new, we jump in and help clients process the information, build new routines, and show them how it works. Ensuring that the application is pain-free and the results are quickly obtainable is key to building confidence in using new technology.”

The future of AI

Across the board, researchers in academic and commercial settings alike see fear of the unknown as the greatest barrier to bringing AI into scientific life. Yet its increasing influence is undeniable.

“Change is happening in months, not years,” says Henriques. “Look at self-driving cars. They are doing exactly what we want to do with microscopes, observing their environment in real time and making decisions on how to interact with it and about how to keep an organism alive—the person who is in the car.”

However, there’s also a sense that while change is inevitable, society needs to make a stronger commitment to ensuring all scientists can benefit from these new technologies. Finkbeiner thinks it would be hugely helpful to the field to create some public image repositories that computer scientists could use to develop new algorithms and approaches.

“Kids in college and even in high school could use data like this for education and training as well,” says Finkbeiner. “The potential of this field is huge, so it would be great to invest now to train the generation we need to really take us forward.”

He also would like to see academic institutions place greater emphasis on fostering collaborations between biologists and computer scientists.

“Universities that have computer science departments and who recognize the value of multidisciplinary science have an opportunity to lead,” says Finkbeiner. “The gap between computer science and biology is large, and we need a sustained effort and sustained support to get the spark to ignite.”
