"I went to see Professor Douglas Hartree, who had built the first differential analyzers in England and had more experience in using these very specialized computers than anyone else. He told me that, in his opinion, all the calculations that would ever be needed in this country could be done on the three digital computers which were then being built — one in Cambridge, one in Teddington, and one in Manchester. No one else, he said, would ever need machines of their own, or would be able to afford to buy them." -- Professor Douglas Hartree, Cambridge mathematician, 1951In 1943, design and construction of the Electronic Numerical Integrator And Computer, or ENIAC for short, began at the United States Army's Ballistic Research Laboratory. When the ENIAC was completed in 1946 it weighed over 27 tons and took up 680 square feet. The logic components of the computer consisted of 17468 vacuum tubes, 72000 diodes, 1500 relays, 70000 resistors, 10000 capacitors, and 5 million hand soldered joints. Input into the computer was done with switches and cards, some of the more complex programs requiring over a million cards. Each card would have to be input by hand into the machine, a process that could take weeks depending on the complexity of the program. The ENIAC was not the first computer built but it was one of the first to utilize a programmable, electronic architecture.
"Where a calculator on the ENIAC is equipped with 18 000 vacuum tubes and weighs 30 tons, computers of the future may have only 1 000 vacuum tubes and perhaps weigh 1½ tons." -- Popular Mechanics, March 1949.The next huge leap in computer technology came in the late 50s. Bell Labs began work on using transistors to replace vacuum tubes as the main logic controller for the computer. Transistors are semiconductors which use electrical impulses to amplify and redirect the current. Compared to vacuum tubes, these transistors were incredibly lightweight, small, and inexpensive to produce. Each generation of transistor would see the size decrease. Eventually, the transistors were able to get small enough that instead of being stored in racks next to the computer, they were able to be placed on circuits and integrated into the hardware of the computer. The transistor count on these chips was directly related to the number of operations it could perform at any given time.
Eventually, thousands of transistors could be placed on a single circuit, leading to the development of the microprocessor in 1971. From there, the number of transistors on microprocessors exploded. Following a trend first noticed by Intel co-founder Gordon Moore, the number of transistors that could be inexpensively placed on a chip doubles roughly every two years, a trend known as Moore's Law. The prediction has held remarkably well over the past few decades, with the transistor counts of today's microprocessors numbering in the billions.
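The doubling claim is easy to sanity-check with a back-of-envelope projection. The sketch below assumes the commonly cited starting point of the Intel 4004, the 1971 microprocessor with roughly 2,300 transistors, and naively applies one doubling every two years; it is an illustration of the exponential trend, not a model of any real product line.

```python
def projected_transistors(year, base_year=1971, base_count=2300):
    """Naive Moore's Law projection: one doubling every two years,
    starting from the Intel 4004's roughly 2,300 transistors."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Print a projection for each decade since the first microprocessor.
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Fifty years is twenty-five doublings, so the projection lands in the tens of billions by 2021 — in line with the transistor counts of modern chips, even though real hardware has tracked the curve only approximately.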