Over the past few decades, supercomputers have grown steadily faster and more powerful. Today's top machine can crunch 17.6 million billion calculations per second, or 17.6 petaflops. Researchers are now planning the next great leap in supercomputing power, up to 1,000 petaflops, also known as an exaflop. Exascale computers would be more than 50 times faster than current machines, and thus ideal for running complex simulations such as determining the role of clouds in climate change and modeling new engine designs to burn advanced biofuels.
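For readers curious how those figures fit together, the arithmetic is straightforward; here is a minimal Python sketch checking the numbers quoted above (the variable names are ours, not the machines'):

```python
# One petaflop is 10**15 floating-point calculations per second.
PETAFLOP = 1e15

current_petaflops = 17.6      # today's top machine, as quoted in the text
exascale_petaflops = 1000.0   # 1 exaflop = 1,000 petaflops

# 17.6 petaflops is 17.6 million billion calculations per second.
current_calcs_per_second = current_petaflops * PETAFLOP

# An exascale machine's speedup over today's top system.
speedup = exascale_petaflops / current_petaflops

print(f"Current top machine: {current_calcs_per_second:.3g} calculations/second")
print(f"Exascale speedup: about {speedup:.0f}x")  # "more than 50 times faster"
```

Dividing 1,000 petaflops by 17.6 gives a factor of roughly 57, which is where the article's "more than 50 times faster" comes from.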
But developing exascale supercomputers is expected to require a revolution in supercomputer design and technology and cost hundreds of millions, if not billions, of dollars. Will it be possible to develop exascale supercomputers? If so, when can we expect to see them? What will they be capable of doing? And which governments will step up to foot the bill?
Join us for a live chat at a special time of 4 p.m. EST on Thursday, 24 January, on this page. You can leave your questions in the comment box below before the chat starts. The full text of the chat will be archived on this page.
Jack Dongarra is a professor of electrical engineering and computer science at the University of Tennessee, Knoxville. His research specializes in the use of advanced computer architectures, programming methodology, and tools for parallel computers. He also keeps close track of international developments in supercomputing technology.
Horst Simon is a computer scientist and applied mathematician who serves as deputy director of the Lawrence Berkeley National Laboratory in California. Simon's research has focused on the development of algorithms for parallel supercomputers. He also keeps close tabs on supercomputing technology trends.
Robert (Bob) writes about chemistry and materials science, delving into topics ranging from solar energy and fuel cells to proteomics and artificial bone.