Mainframe computers are often seen as ancient machines—practically dinosaurs. But mainframes, which are purpose-built to process enormous amounts of data, are still extremely relevant today. If they’re dinosaurs, they’re T-Rexes, and desktops and server computers are puny mammals to be trodden underfoot.
It’s estimated that there are 10,000 mainframes in use today. They’re used almost exclusively by the largest companies in the world, including two-thirds of Fortune 500 companies, 45 of the world’s top 50 banks, eight of the top 10 insurers, seven of the top 10 global retailers, and eight of the top 10 telecommunications companies. And most of those mainframes come from IBM.
In this explainer, we’ll look at the IBM mainframe computer—what it is, how it works, and why it’s still going strong after over 50 years.
Setting the stage
Mainframes descended directly from the technology of the first computers in the 1950s. Instead of being streamlined into low-cost desktop or server use, though, they evolved to handle massive data workloads, like bulk data processing and high-volume financial transactions.
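To make "bulk data processing" less abstract, here's a minimal sketch of the shape of a classic batch job: read an entire day's transaction records in a single sequential pass and post them to an account ledger. It's written in Python purely for illustration (real mainframe batch work typically means COBOL or PL/I jobs scheduled with JCL), and the record format and names here are hypothetical.

```python
# Illustrative sketch only: the flavor of a nightly batch job that applies
# a day's transaction records to account balances in one sequential pass.
# Real mainframe batch jobs are typically COBOL or PL/I under JCL; the
# field names and record format here are hypothetical.
import csv
import io
from collections import defaultdict
from decimal import Decimal

def run_batch(transaction_file, balances):
    """Post every DEBIT/CREDIT record in the day's file to the ledger."""
    for record in csv.DictReader(transaction_file):
        amount = Decimal(record["amount"])
        if record["type"] == "DEBIT":
            balances[record["account"]] -= amount
        else:
            balances[record["account"]] += amount
    return balances

# Stand-in for a day's transaction file (a real run might process millions
# of records from tape or disk datasets).
days_records = io.StringIO(
    "account,type,amount\n"
    "0001,CREDIT,250.00\n"
    "0001,DEBIT,75.50\n"
    "0002,CREDIT,1200.00\n"
)
balances = run_batch(days_records, defaultdict(lambda: Decimal("0")))
for account, balance in sorted(balances.items()):
    print(account, balance)
```

The point of the sketch is the workload shape, not the code: one job, one pass, enormous input volume, where throughput and I/O bandwidth matter far more than interactive latency. That's the niche mainframes were built for.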
Vacuum tubes, magnetic core memory, magnetic drum storage, tape drives, and punched cards were the foundation of the IBM 701 in 1952, the IBM 704 in 1954, and the IBM 1401 in 1959. Primitive by today's standards, these machines performed scientific calculations and data processing that would otherwise have been done by hand or with mechanical calculators. There was a ready market for them, and IBM sold them as fast as it could make them.
In the early years of computing, IBM had many competitors, including Sperry Rand (maker of the UNIVAC line), GE, RCA, Honeywell, Burroughs, and CDC, later joined by Amdahl, Fujitsu, Hitachi, NEC, and Unisys. Combined, all of these other companies accounted for only about 20 percent of the mainframe market; IBM claimed the rest. Today, IBM is the only mainframe manufacturer doing business at any meaningful scale. Its de facto competitors are now the cloud and commodity clusters, but as we'll see, it's not always cost-effective to switch to those platforms, and they can't match the reliability of the mainframe.
The z is a beast and a real monument to brilliant engineering. In the bipolar-transistor era, nothing else came close in terms of power. The move from bipolar transistors packed into thermal conduction modules (TCMs) to CMOS microprocessors really clobbered performance at first, but it made the system a realistic choice, given the astronomical cost of keeping all those transistors cool. The architecture endured, and it's now implemented in speedy microprocessors of IBM's own design.
However, the main reason these bad boys survive is that they are already there. The OS and software stack are opaque, difficult to use, and incredibly expensive. The processors, although state-of-the-art for their decades-old architecture, don't come close to the processing power and I/O scalability of clustered Intel-architecture boxes, and if the cluster is modern and redundant, the z systems aren't any more reliable. If you need a giant single-system-image database with an ungodly memory and CPU count, a big UNIX box (you can get good ones from IBM) will do the trick for less money.
Many customers (generally older ones) love these boxes, but many others see them as expensive boat anchors. Replacing core transaction-processing software that has been in use for decades and works, particularly at financial institutions, is a very risky proposition, and most want to avoid it. As business needs grow, those customers need bigger, faster boxes, and IBM is more than willing to develop and sell them: not necessarily because the hardware itself means tons of money, but because costly mainframe software licenses are the lifeblood of IBM's software business.