15 Huge Supercomputers That Were Less Powerful Than Your Smartphone


In terms of raw processing power, the smartphone you’re currently Instagramming on is a miracle of science, with capabilities far beyond anything the designers of yesterday’s room-sized supercomputers could have imagined. You’ve probably heard it pointed out many times that your Galaxy S7 or iPhone 7 is way more powerful than the computers used to send the Apollo astronauts to the moon and back. But how much faster is the smartphone in your hand than the supercomputers of years past? The difference is striking: the latest smartphones outpace more than 40 years of supercomputing progress, from the 1940s through the early 1990s, and they get faster every year. Check out these 15 supercomputers, from the first vacuum-tube behemoth of the 1940s to the sleek models of the early 1990s, and see how they stack up against the last few generations of smartphones.

15. ENIAC

via ARL Technical Library

There’s no contest when comparing a smartphone to the first-generation ENIAC (Electronic Numerical Integrator And Computer), which was completed in 1945 and used to compute artillery firing tables for the U.S. Army’s Ballistic Research Laboratory. Some argue it wasn’t really a supercomputer at all, its enormous footprint aside: ENIAC filled about 1,800 square feet of floor space. Rather than transistors, the computer relied on roughly 17,500 vacuum tubes. And instead of entering commands via a software program, operators flipped switches and re-plugged cables to set up each computation. Good luck trying to calculate your tip at the end of dinner with this monster of a computer.

14. CDC 1604

via thisdayintechhistory.com

The CDC 1604 was part of the second generation of computers, among the first to rely on transistors rather than vacuum tubes, and the first transistorized computer to enjoy real commercial success. That meant Control Data Corporation (CDC), the company behind the 1604, sold a whopping 50 of them between 1960 and 1964 (at around $750,000 apiece, or about $5 million each in today’s dollars, which isn’t too shabby). One of its customers was the U.S. Navy, which incorporated the 1604 into an automated control system for the Minuteman I missile program to handle the precomputing of guidance and control information. The 1604 boasted a 48-bit word size (the chunk of data its processor handled in a single operation) and could execute about 100,000 operations per second, meaning that during the hottest years of the Cold War, part of the U.S. nuclear arsenal was guided by a computer that would struggle to render a single pixel on your smartphone today.

13. Atlas

via Iain MacCallum

Another early supercomputer, the Atlas, also utilized 48-bit word size but aimed to handle 1 million instructions per second. Just three Atlas computers were built in the U.K. in 1962: the first for the University of Manchester as part of a joint development project between the university, Ferranti and Plessey, one for British Petroleum and University of London, and the third for the Atlas Computer Laboratory near Oxford. Atlas used what was arguably the first modern operating system, the Atlas Supervisor, and pioneered many of the software concepts still in use today.

12. IBM 7090

Photo by Hopper Stone, SMPSP via imdb

If you’ve seen the movie “Hidden Figures,” you may be familiar with the IBM 7090. Another of the first supercomputers to feature transistors rather than vacuum tubes, the 7090 played a key role in calculating orbital paths during NASA’s Mercury and Gemini space missions, which among other things helped pave the way to the Apollo moon landings. Capable of making 24,000 calculations per second, the 7090 was rented to NASA for the equivalent of a half-million dollars per month in today’s currency. As the movie depicts, it was NASA employee Dorothy Vaughan who taught herself the Fortran programming language in order to program the computer to accomplish what NASA needed.

11. CDC 6600

via Jitze Couperus / Flickr

In the 1960s and 1970s, one person, Seymour Cray, pretty much ruled the supercomputing world. Think of him as the Elon Musk of the “Mad Men” era, sans electric car. While working for CDC, he designed the first true supercomputer, the CDC 6600. Released in 1964, the 6600 was three times as fast as its closest competitor, boasting a processing speed of three megaflops (3 million floating point operations per second), which made it the fastest supercomputer in the world at the time. By comparison, the iPhone 5S, now almost a junker of a smartphone, can handle 76.8 gigaflops, or 76.8 billion floating point operations per second. Cray would go on to form his own company, Cray Research, which set the pace for supercomputers for the next two decades.
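The gap between those two figures is easier to appreciate as a quick back-of-the-envelope calculation. Here is a rough sketch in Python, using the peak numbers quoted above (these are approximate marketing figures, not rigorous benchmarks):

```python
# Rough flops comparison: CDC 6600 (1964) vs. iPhone 5S (2013).
# Figures are the approximate peaks cited in the article.
cdc_6600_flops = 3e6       # 3 megaflops = 3 million floating point ops/sec
iphone_5s_flops = 76.8e9   # 76.8 gigaflops = 76.8 billion ops/sec

speedup = iphone_5s_flops / cdc_6600_flops
print(f"The iPhone 5S is roughly {speedup:,.0f}x faster")  # roughly 25,600x
```

In other words, the phone in your pocket is on the order of 25,000 times faster than the machine that topped the world in 1964.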

10. CDC 7600

via Lawrence Livermore National Laboratory

The next iteration of CDC’s supercomputers, the 7600, again tripled performance over the groundbreaking 6600. To do this, Seymour Cray incorporated a number of new features, including instruction pipelining, stacked circuit boards, and a refrigeration system that cooled the machine to boost its performance even further. Many of these features are still standard in computer design, all the way down to the smartphone in your pocket.

9. IBM System/360 Model 75 & the Apollo Guidance Computer

via Chilton Computing

While the Apollo Guidance Computer (a 70-pound unit aboard the spacecraft that ran at roughly 2 MHz and took commands as simple numeric “verb” and “noun” codes) got most of the attention during Apollo 11’s historic landing on the moon in 1969, the spacecraft’s operations and communications were also managed by several IBM System/360 Model 75 mainframes back on Earth, at Goddard Space Flight Center in Maryland and at NASA’s Manned Spacecraft Center in Houston, Texas. Introduced in 1964 with the Model 30, IBM’s System/360 series was extremely versatile, making it perfect both for earthbound commercial computing and for plotting the perfect path to the moon. It’s still considered one of the most successful and influential computer designs ever.

8. Cray-1

via Lawrence Livermore National Laboratory

Seymour Cray left CDC in 1972 to form his own company, Cray Research, and introduced the Cray-1 supercomputer in 1976. Perhaps the best-known supercomputer of its era and the fastest in the world at the time, the Cray-1 sold more than 80 units over the next few years at a price tag between $5 million and $8 million apiece. This behemoth of a computer offered an 80 MHz clock speed, a vector processor, and integrated circuits. By comparison, one of the earliest iPhones, the 3GS, ran at 600 MHz and sold unlocked for $699.

7. Cray X-MP

via Ohio Supercomputer Center

The Cray X-MP was the successor to the wildly popular (in supercomputer terms, at least) Cray-1, and the first multi-processor supercomputer offered by Cray Research. Principal designer Steve Chen built it as a shared-memory parallel vector machine, with two CPUs housed in a single mainframe. Processing speed jumped to 105 MHz, making the X-MP the fastest supercomputer in the world, a title it held for almost three years, until 1985. It also carried around 16 MB of memory. All that processing power and storage came at a hefty price: around $15 million in 1984, not including the disks. Yep, they cost extra.

6. Cray-2

via Dr. Ralf Udo-Hoffman

In 1985, Seymour Cray topped the speed and performance of the X-MP with the release of the Cray-2, which took over as the world’s fastest supercomputer. It boasted a 244 MHz clock, still well short of the 800 MHz of the iPhone 4, but getting there wasn’t easy. Think your smartphone runs a bit hot when multiple apps are open? That’s nothing compared to supercomputers of this era, which ran so hot that simply turning up the air conditioning didn’t help. The hotter they got, the slower they performed, if they didn’t break down altogether. Cray’s answer was liquid cooling: the Cray-2’s circuit boards were immersed in an inert cooling fluid, reportedly earning the machine the nickname “Bubbles.”

5. M-13

via computerhistory.org

This Soviet supercomputer was designed starting in 1978 by engineer Mikhail Alexandrovich Kartsev at Moscow’s Scientific Research Institute of Computer Complexes and launched in 1984. The M-13 enjoyed a few brief months as the world’s fastest supercomputer. Using a vector multiprocessor design, it was reportedly the first computer to cross the gigaflop barrier, ultimately reaching 2.4 gigaflops. Sadly, Kartsev did not live to see his creation break the processing speed record; he passed away in 1983. Other Soviet attempts at supercomputing, via the BESM and Elbrus projects (pictured above), did not enjoy the same success, and trouble manufacturing the high-tech components these huge machines required meant that few were completed and delivered to the institutions that ordered them.

4. TMC CM-5

Photo by Tom Trower via MIT

The CM-5 supercomputer topped the first TOP500 list in June 1993, with 59.7 gigaflops of processing power, still way short of the Samsung Galaxy S5’s 142 gigaflops. Massachusetts-based Thinking Machines Corporation (TMC) was the market leader in parallel supercomputers for a few short years in the late 1980s, selling many of its machines to the government, particularly DARPA. When that source of income dried up in the early 1990s, TMC filed for bankruptcy in 1994; its hardware business was eventually sold off to Sun Microsystems, while Oracle bought TMC’s remaining data-mining software arm in 1999. And in a weird case of “what goes around comes around,” Oracle went on to acquire Sun Microsystems itself in 2010, reuniting much of TMC’s intellectual property under one roof.

3. Fujitsu Numerical Wind Tunnel

via National Aerospace Laboratory of Japan

It’s easy for today’s smartphones to beat out early supercomputers in performance and memory, but with the 1990s-era designs, the gap narrows. First up is Fujitsu’s Numerical Wind Tunnel, which in 1993 eclipsed NEC’s SX-3/44R supercomputer with a benchmark of 124.2 gigaflops. The machine (which wasn’t actually a wind tunnel, but what a cool name) was a joint project of Japan’s National Aerospace Laboratory and Fujitsu, and featured 140 vector processors, each with 256 megabytes of memory on its board, making it a heck of a number cruncher. Here, in fact, the supercomputer still holds its own: Geekbench measured the iPhone 7’s A10 Fusion processor at 6.85 gigaflops, well short of the Wind Tunnel’s peak.

2. Intel Paragon XP/S 140

via NOAA

The Samsung Galaxy S5, released over two years ago, is on par with 1994’s hottest supercomputer, Intel’s Paragon XP/S 140. The supercomputer posted a benchmark speed of 143.4 gigaflops, while the Galaxy S5 comes in just under that at about 142 gigaflops. The Paragon was a massively parallel supercomputer, meaning it harnessed a large number of processors to work on a set of computations simultaneously. After the Paragon XP/S 140, supercomputers really began to bear out Moore’s Law and pile on the speed: the Hitachi SR2201 overtook the Intel Paragon and the Numerical Wind Tunnel in 1996 with 220.4 gigaflops, and by the early 2000s most supercomputers were easily clearing the teraflop barrier.

1. Deep Blue

via Business Insider

While Deep Blue wasn’t the fastest supercomputer in the world in 1997 (its 11.38 gigaflops ranked it just 259th on the TOP500 list), it was probably the most famous. Even though its processing power doesn’t come close to the Galaxy S5 smartphone’s 142 gigaflops, Deep Blue marshalled its resources to beat the best chess player in the world, Garry Kasparov, winning their six-game match 3.5 to 2.5, with two wins, one loss and three draws. But Deep Blue also showed that gigaflops alone don’t make the computer. Kasparov resigned the second game in a position later shown to be drawable, rattled in part by a baffling move Deep Blue had played at the end of game one, a move some experts attribute to a bug in Deep Blue’s program rather than to brilliant strategy. Raw speed, in other words, isn’t everything, which may explain why, despite their amazing processing speed, smartphones still don’t have the power to control our lives.

Or do they?

Sources: Northeastern.edu, TheVerge.com, Britannica.com, Computerweekly.com, NPR.com
