Wednesday, January 26, 2011

Amazing Pace: Processing Power

"I went to see Professor Douglas Hartree, who had built the first differential analyzers in England and had more experience in using these very specialized computers than anyone else. He told me that, in his opinion, all the calculations that would ever be needed in this country could be done on the three digital computers which were then being built — one in Cambridge, one in Teddington, and one in Manchester. No one else, he said, would ever need machines of their own, or would be able to afford to buy them." -- Professor Douglas Hartree, Cambridge mathematician, 1951
  In 1943, design and construction of the Electronic Numerical Integrator And Computer, or ENIAC for short, began at the United States Army's Ballistic Research Laboratory. When ENIAC was completed in 1946 it weighed roughly 30 tons and took up about 680 square feet. The logic components of the computer consisted of 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and around 5 million hand-soldered joints. Programs were set up by hand with switches and plugged cables, a process that could take weeks for a complex problem, while data moved in and out on punched cards; some of the larger computations ran through over a million cards. ENIAC was not the first computer ever built, but it was one of the first to use a programmable, general-purpose electronic architecture.

ENIAC
"Where a calculator on the ENIAC is equipped with 18 000 vacuum tubes and weighs 30 tons, computers of the future may have only 1 000 vacuum tubes and perhaps weigh 1½ tons." -- Popular Mechanics, March 1949.
   The next huge leap in computer technology came in the late 1950s, when the transistor, invented at Bell Labs in 1947, began to replace the vacuum tube as the computer's basic logic element. A transistor is a semiconductor device that uses a small electrical signal to switch or amplify a larger current. Compared to vacuum tubes, transistors were incredibly lightweight, small, reliable, and inexpensive to produce, and each generation shrank them further. Eventually transistors became small enough that, instead of being wired into racks beside the computer, many could be fabricated together on a single chip: the integrated circuit. The number of transistors on a chip directly determines how much logic it holds, and therefore how many operations it can perform at once.

  Eventually, thousands of transistors could be placed on a single circuit, leading to the development of the microprocessor in 1971. From there, the number of transistors on microprocessors exploded. Following a trend first noticed by Intel co-founder Gordon Moore, the number of transistors that can inexpensively be placed on a chip doubles roughly every two years, an observation now known as Moore's Law. The prediction has held remarkably well for decades: the first microprocessor, Intel's 4004, held about 2,300 transistors, while the chips of today carry billions.
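To get a feel for what doubling every two years actually means, here is a quick back-of-the-envelope sketch in Python. It takes the Intel 4004's roughly 2,300 transistors in 1971 as its starting point; the function name and the exact two-year doubling period are just this post's illustration of Moore's observation, not an official formula.

```python
# Rough Moore's Law projection: transistor count doubles every ~2 years.
# Starting point: Intel 4004 (1971), about 2,300 transistors.

BASE_YEAR = 1971
BASE_TRANSISTORS = 2300
DOUBLING_PERIOD_YEARS = 2  # Moore's revised (1975) doubling period

def projected_transistors(year):
    """Estimate transistor count for a given year under ideal doubling."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```

Run as-is, the projection lands at roughly 2.4 billion transistors for 2011, which is right in line with the billion-transistor chips shipping today.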

Moore's Law
  How long this trend can last is still an open question. Researchers don't expect it to slow over the next few years, but eventually the manufacturing process will become so refined that features on a chip shrink to the scale of individual atoms, and at that point there is simply no room left to pack in more transistors. Once that limit is reached, conventional microprocessors will be about as dense as physics allows, and another avenue of processing will need to be explored. Until then, the microprocessor should continue to improve exponentially in the coming years.
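When might that atomic wall arrive? Here is a rough estimate, not a prediction, under three stated assumptions: the roughly 32 nm manufacturing process of 2011 as a starting point, a 0.7x linear shrink every two-year generation (the historical pattern that doubles density), and the ~0.54 nm spacing of atoms in a silicon crystal as a hard floor.

```python
import math

# Very rough estimate of when feature sizes hit the atomic scale.
# Assumptions (this post's, not any manufacturer's roadmap):
#   - 2011 state of the art: ~32 nm process
#   - each ~2-year generation shrinks linear features by ~0.7x
#     (0.7^2 ~= 0.5 of the area, i.e. twice the density)
#   - silicon lattice spacing: ~0.54 nm, a physical floor

START_YEAR = 2011
START_FEATURE_NM = 32.0
SHRINK_PER_GENERATION = 0.7
YEARS_PER_GENERATION = 2
ATOMIC_FLOOR_NM = 0.54

# Solve START_FEATURE_NM * shrink^n = ATOMIC_FLOOR_NM for n generations.
generations = math.log(ATOMIC_FLOOR_NM / START_FEATURE_NM) / math.log(SHRINK_PER_GENERATION)
year_hit = START_YEAR + generations * YEARS_PER_GENERATION

print(f"~{generations:.1f} generations, around the year {year_hit:.0f}")
```

Under these assumptions the floor arrives a couple of decades out, around the mid-2030s; in practice, economics and quantum effects would likely intervene well before any transistor is literally one atom wide.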

1 comment:

  1. Lovely blog design, and a beautiful post. I especially love the quotations.

    I'd like to challenge you to make your writing style slightly less formal, and maybe a bit less academic, for these blog posts. Experiment with shorter posts (like the one at the top now) and connect to current events/studies as jumping off platforms.

    Your job is not to dump information into our heads, but to engage us.

    Good work.
