In November, while some of our team was exhibiting at MEF19, we sent another team to Denver, Colorado, for the International Conference for High Performance Computing, Networking, Storage, and Analysis, or SC19. In this article, we’ll highlight some of the more thought-provoking moments from this fascinating show, as well as how we at Champion ONE contributed to making it possible.
Power and Pace
In supercomputing, a critical measure of performance is floating point operations per second, or FLOPS. As a point of comparison, the human brain's capacity is estimated at 1 exaFLOPS (a billion billion operations per second). Current supercomputers can perform roughly 250 petaFLOPS, about a quarter of that benchmark. It is widely expected, however, that computers will exceed the brain's estimated capacity within the decade.
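As a quick sanity check on that comparison (the unit conversions are standard; the brain and supercomputer figures are the estimates quoted above), the arithmetic works out as:

```python
# Back-of-the-envelope comparison of supercomputer throughput to the
# estimated human-brain capacity, using the figures quoted in the text.
EXA = 10**18   # 1 exaFLOPS = 10^18 floating point operations per second
PETA = 10**15  # 1 petaFLOPS = 10^15 floating point operations per second

brain_flops = 1 * EXA             # estimated human-brain capacity
supercomputer_flops = 250 * PETA  # roughly today's top machines

fraction = supercomputer_flops / brain_flops
print(f"Supercomputers run at {fraction:.2f} of the brain's estimated capacity")
# prints "Supercomputers run at 0.25 of the brain's estimated capacity"
```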
The bottleneck for these high-performance computers, however, is the network backbone that supports their experiments and applications. Enormous bandwidth is required to collect, store, and transmit the sheer volume of data these machines can process. Many university researchers are working on solutions to remove this obstacle, including the FABRIC project from the University of North Carolina.
How to Keep Cool
High-performance computing applications have driven a densification of network equipment. A cabinet filled with today's equipment can require four to six times as much power as the same fully loaded cabinet just a few years ago. As a result, a new challenge has arisen: how do you keep all of that hardware sufficiently cool?
We saw two primary approaches to the cooling challenge: liquid cooling and forced air. Many of these cooling systems are now self-contained, integrated directly into the racks and cabinets they cool. Liquid systems appear to be considerably more complex, but are arguably more effective.
The Dark Side of Supercomputing
This acceleration of supercomputing capability does come at a cost. One of the more thought-provoking sessions at SC19 was the ominously titled plenary, “When Technology Kills.” This session focused on the ethical development of technology in areas in which technical failures could have catastrophically lethal consequences, including driverless cars, aerospace, remote surgery, and even meteorology. In the latter case, weather services are reliant on supercomputing for precise and accurate reports, particularly in the case of natural disasters. But what happens when the available data exceeds the capabilities of predictive modeling? Have we become too reliant on supercomputing in these applications?
Accelerating speed-to-market imperatives heighten the danger of new technology in these critical applications: is "good enough" actually good enough? By extension, this discussion revives one of the timeless questions of progress: just because we can, does it mean we should (right now, or even ever)? Meanwhile, the gap between technological capability and our ability to understand its ramifications and regulate it effectively is widening. Is it time to pause and reconsider our relationship with what we've created?
C1 Plays a Part
All these exciting applications and demonstrations of the power of high-performance computing rely on one thing: sufficient network capacity. To help provide it, Champion ONE supplied several dozen 100G optical transceivers to SCinet, the official network of SC19, which carried both Wi-Fi traffic and the supercomputing demonstrations.
Assembled annually for this conference, SCinet is the fastest network in the world. With speeds of 4.22 Tbps, it could download the entire Apple Music library in six minutes. This powerful network featured nearly 50 miles of fiber optic cable, as well as $80 million worth of network hardware from 34 participating vendors.
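To put that bandwidth figure in perspective, we can work out how much data a 4.22 Tbps link moves in six minutes (the implied size of the library in that comparison; the article itself does not state a library-size figure, so this is just the arithmetic behind the claim):

```python
# How much data does a 4.22 Tbps link transfer in 6 minutes?
link_bps = 4.22e12   # 4.22 terabits per second, as quoted for SCinet
seconds = 6 * 60     # the 6-minute download window from the comparison

bits_moved = link_bps * seconds
terabytes = bits_moved / 8 / 1e12  # 8 bits per byte, 10^12 bytes per TB

print(f"~{terabytes:.0f} TB transferred in 6 minutes")
# prints "~190 TB transferred in 6 minutes"
```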
Want to know where we’ll be appearing next? Contact us today to join our mailing list and stay up to date on our events calendar for 2020.