From the analytical engine to AI

July 16, 2020 | Ada Lovelace tells the story of digital data processing


Chips – or integrated circuits – are responsible for storing and processing data in our modern world. If we were to remove all chips overnight, we would have trouble from dawn to dusk. Even breakfast would be a problem, because many toasters and coffee makers have chips. We would struggle to get to work, because lots of cars wouldn’t start. And the older vehicles that do start wouldn’t get very far, as the traffic lights wouldn’t be working. And once we finally made it to work, we would find that scarcely anything is working there either. Unless of course you don’t use electronic technology in your workplace; but for how many people is that true today?

In short, chips have become part of our everyday lives. Yet hardly anyone knows what a chip actually is.

We want to take you on a journey through the history of data processing.

It all began with Ada Lovelace...

Ada's Story

Hello to you all!

My name is Ada. Pleased to make your acquaintance! I am now over 200 years old and have seen many interesting people and inventions in my time. Recently the people at Fraunhofer IIS named their Center for Analytics, Data and Applications after me: Ada Lovelace Center. It was truly an honor, and so it would be my pleasure to guide the people of the 21st century through the history of data processing.

I was born in London on the tenth of December in the year 1815. My mother raised me by herself. How she loved the natural sciences! It was only natural, then, that she wanted me to receive an education in them.

Ada Lovelace
© Source: Wikipedia
Ada at the age of four.
Ada Lovelace
© Source: Wikipedia
Ada Lovelace in 1836, in a painting by Margaret Sarah Carpenter (1793–1872)

Later in life, my husband – a scientist himself – supported me in my research, for example by copying out articles from the Royal Society’s library for me, because at that time, women did not have access to the collection.

As well as being a passionate scientist, I was also a loving mother. Every evening, I would play the harp to my three children.

I have followed the development of data processing, from my own beginnings in the field to the present day, and I would like to tell you this fascinating story.

It begins with a lecture by Charles Babbage, a mathematics professor who invented what became known as the Analytical Engine. Luigi Menabrea – who would later become Prime Minister of Italy – wrote a summary of the lecture in French. An acquaintance of Charles’s asked me to translate the text into English. Charles knew that, even though I was just 28 years old, I already had a good understanding of his machine, so he suggested that I append some of my own thoughts to the translation. In the end, my “notes” were twice as long as the article itself! I really put my heart and soul into them. The notes describe the first ever computer program – which is why I can lay claim to being the first computer scientist.

In addition, I devised a plan for how to calculate Bernoulli numbers using the machine. Although the Analytical Engine was never built, later scientists proved that it would have worked. In my day, the precision engineering was not advanced enough for the task. Moreover, the British parliament balked at the cost of building the machine.
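
For curious readers of the 21st century, here is a minimal sketch in Python – not my original scheme for the Analytical Engine, merely the standard recurrence for the Bernoulli numbers – so you can see what such a calculation looks like today:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions, using the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1 (with B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```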


 

The Analytical Engine – The first concept for a programmable computing machine

The design of the Analytical Engine was actually very impressive: if it had been built, it would have been 19 meters long and 3 meters high. Intended to be completely mechanical, with a steam engine to drive its gear wheels, it was designed to add, subtract, multiply and divide. An operator would have input commands using punch cards. A bell would have transmitted signals, and a punch card printer would have spat out the results. Remarkably, the machine was designed to store 1,000 numbers of 30 decimal digits each, which corresponds to a memory capacity of about 12 kilobytes. A typical smartphone today has around 5 million times as much storage.
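
A quick back-of-the-envelope check of that figure – assuming each decimal digit needs about log₂(10) ≈ 3.32 bits – could look like this:

```python
import math

numbers = 1_000
digits_per_number = 30

bits_per_number = digits_per_number * math.log2(10)  # ≈ 99.7 bits per number
total_bytes = numbers * bits_per_number / 8          # ≈ 12,457 bytes
print(f"about {total_bytes / 1000:.1f} kB")          # about 12.5 kB
```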

Ah well, it was a pity for Charles and me that the machine was never built, but we have not been forgotten. Our work had a major influence on many subsequent developers.

But let us return to our history...

Zuse Z3 – Germany’s electromechanical general-purpose computer

During World War II, both sides worked feverishly to develop machines that could carry out complex calculations.

In Germany, Konrad Zuse developed a prototype for a programmable general-purpose computer in 1941: the Zuse Z3. Inside the computer were some 2,000 electromechanical switches, known as relays. The Z3 was designed to carry out complicated calculations in the field of aviation for the war effort.

Now, I do not like war at all, but the invention itself was interesting. In terms of capacity, the Z3 actually fell short of the Analytical Engine: it could store only 64 floating-point numbers of 22 bits each. But crucially, the Z3 did not remain a concept: it was actually built. That said, it operated for just three years before it was destroyed in a bombing raid in 1944.

A year later, the Zuse Z4 was built. The Z4 survived the war and eventually wound up at ETH Zurich in Switzerland, where my fellow scientists improved it further.

Zuse Z23
© Fraunhofer IIS/Udo Rink
Zuse Z23

ENIAC – The first electronic general-purpose computer

ENIAC
© Source: Wikipedia
ENIAC in a U.S. Army photograph, with Betty Holberton in the foreground and Glen Beck in the background.

Beginning in 1942, the Americans developed the first electronic general-purpose computer, the Electronic Numerical Integrator and Computer – ENIAC for short. ENIAC, too, had to contribute to the war effort, calculating artillery firing tables.

ENIAC measured 10 meters by 17 meters and weighed 27 metric tons. It certainly would not have fit in my living room! I wouldn’t want to pay the electricity bill, either: its 17,468 electron tubes consumed around 175 kW of power. This is equivalent to the electricity consumed by about 90 stove burners turned on full blast.

ENIAC was able to add, subtract, multiply and divide, and to extract square roots.

Its electron tubes worked much faster than mechanical relays, allowing ENIAC to complete an impressive 5,000 additions per second. However, ENIAC was very awkward and time-consuming to program: you had to connect the parts individually with cables and use rotary switches to set what you wanted to compute.

During the war, six women programmed the computer: they became known as the ENIAC women. These mathematicians did pioneering work.

One of them was Betty Jennings Bartik, who went by the name of Jean. Having obtained a mathematics degree in Missouri, she was advised to become a teacher. But that was too boring for her – she craved adventure. And she got it, too: she played a major role in the beginnings of the modern software sector. As such, she had a hand in changing the world as we knew it.

The electron tube – A first step on the road to circuits

The simplest form of electron tube is the triode. It consists of an evacuated glass envelope containing three electrodes. Electrons released by the electrically heated, glowing cathode are attracted to the anode. The third electrode is the grid, which controls this flow of electrons: by applying a voltage to the grid, you can use the triode either as a switch or as an amplifier. Because nothing mechanical moves inside the tube, it works much faster than an electromechanical relay. However, these contraptions were very sensitive. Every now and again, the cathodes simply burned out.

Even if just one tube was defective, it was enough to render the calculations of the electronic brain incorrect. Moreover, the high power consumption required powerful cooling systems. It is no wonder that scientists soon looked for better solutions – and found them.

Vacuum tube
© Fraunhofer IIS/Udo Rink
Vacuum tube

The transistor – The better alternative

Transistor
© Fraunhofer IIS/Udo Rink
Transistor

Transistors had already been researched and patented back in 1925, but it took until 1950 before this semiconductor device was fit for everyday use. And what progress it brought: compared with electron tubes, transistors take up far less room, work faster, are relatively robust mechanically, and need no heating. As a consequence, they consume less power and break down less frequently.

The first transistors were made of germanium; later models were made from silicon. At first, bipolar transistors were used. They consist of three thin semiconductor layers called the collector, the base and the emitter. A small current flowing into the base controls a large current between collector and emitter. Depending on the external circuitry, the transistor can operate as an amplifier or – as is typical in computers – as a switch. Two cross-coupled transistors together with a few resistors then form the basis of a one-bit storage element: the so-called flip-flop.
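
To make the principle of such a bistable element tangible, here is a small Python sketch of a latch built from two cross-coupled NOR gates – a logical stand-in for the pair of cross-coupled transistors described above, not a circuit-level model:

```python
def nor(a, b):
    """NOR gate: output is 1 only if both inputs are 0."""
    return int(not (a or b))

def sr_latch(s, r, q=0):
    """Two cross-coupled NOR gates; iterate a few times until the outputs settle."""
    q_bar = nor(s, q)
    for _ in range(4):
        q = nor(r, q_bar)
        q_bar = nor(s, q)
    return q

q = sr_latch(s=1, r=0)       # set:   q becomes 1
q = sr_latch(s=0, r=0, q=q)  # hold:  q stays 1 -- the latch "remembers" one bit
q = sr_latch(s=0, r=1, q=q)  # reset: q becomes 0
print(q)                     # 0
```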

Von Neumann architecture – How is a computer constructed?

John von Neumann adopted his all-American first name after he had moved to the United States from Hungary. I must say I like Hungary, with its fields of sunflowers stretching to the horizon ... but I suppose there were better jobs in America.

John is another founding father of computer science. He was a child prodigy who took an early interest in proof theory, quantum mechanics and game theory. He became famous for his conception of what computer architecture should look like. Apparently he did not come up with the idea on his own, but in 1945, he was the one who gave it its classic formulation:

(Ahem.) A computer consists of a processing unit with fast memory, a control unit, external storage, and an input/output unit. The program for the control unit and the data reside in the same memory.

The fast memory was built from tubes and, later, from transistors. However, that was much too expensive for the bulk of the storage, so small ferrite rings threaded onto thin wires were used instead. The whole assembly looked like a matrix. The rings, also called magnetic cores, could be magnetized in two directions – and could therefore represent the two bit states, 0 and 1.
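
For anyone who would like to see the stored-program principle in action, here is a toy machine in Python; its five-instruction set and memory layout are invented purely for illustration:

```python
# A toy stored-program machine: instructions and data share one memory,
# and the control unit fetches, decodes and executes them in turn.
memory = [
    ("LOAD", 7),     # 0: accumulator <- memory[7]
    ("ADD", 8),      # 1: accumulator += memory[8]
    ("STORE", 9),    # 2: memory[9] <- accumulator
    ("PRINT", 9),    # 3: output memory[9]
    ("HALT", None),  # 4: stop
    None, None,      # 5-6: unused
    2, 3, 0,         # 7-9: data lives in the same memory as the program
]

acc, pc = 0, 0                  # accumulator and program counter
while True:
    op, addr = memory[pc]       # fetch
    pc += 1
    if op == "LOAD":            # decode and execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "PRINT":
        print(memory[addr])     # prints 5
    elif op == "HALT":
        break
```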

Siemens 2002

One of the first transistorized computers in the world was the Siemens 2002, which was manufactured starting in the late 1950s. Huge cabinets contained many modules crammed with individual transistors and resistors. In its full configuration, the computer could hold 10,000 words of 12 decimal digits each in its internal core memory. And where ENIAC managed 5,000 additions per second, this impressive machine could handle around 30,000 instructions per second.

Programming languages also continued to develop. Things began with assembly languages that were closely tied to the computer’s hardware. Later, languages such as ALGOL and FORTRAN came along, which were much easier for people to read and write.

The first integrated circuit

Our story continues in Erlangen, Germany.

Werner Jacobi was working for Siemens in Erlangen, where in 1949 he filed a patent application for a “semiconductor amplifier.” The semiconductor amplifier was a circuit of five transistors on a single substrate. Effectively, it was the first integrated circuit. However, the semiconductor amplifier remained largely unknown back then and was not put to commercial use.

The Nobel Prize Committee was of the opinion that Jack Kilby invented the first integrated circuit, and so awarded him the Nobel Prize for Physics. That is why he is called the Father of the Microchip – a title that would make anyone envious...

Jack began working at Texas Instruments in 1958. New hires were not given any time off in the summer, and given the scorching temperatures down in Texas, he was probably not delighted about this. On the other hand, at least the labs were empty. “As a new employee, I had no vacation time coming and was left alone,” he said. So he started to experiment. He turned his mind to the “tyranny of numbers”: the problem faced by computer engineers that, as the number of components in computers continued to grow, it was becoming increasingly difficult to wire them all together. Finally, he came to the conclusion that the problem could be solved by integrating various components in a single semiconductor.

On July 24, 1958, he wrote down the crucial idea in his lab notebook: “The following circuit elements could be made on a single slice: resistors, capacitor, distributed capacitor, transistor.” Two months later, the big day arrived: Jack unveiled the first example of a working integrated circuit. There was not much to see – just a piece of germanium with a few wires attached on a piece of glass no larger than a paper clip. Jack pressed the switch, and those present saw a sine wave appear on an oscilloscope. Clearly, it was not much of a spectacle, which may explain why the integrated circuit did not enjoy much success at first.

Jack then applied his invention to the building of pocket calculators. Previous calculators were the size of a typewriter and took much, much longer to carry out calculations.

At the same time as Jack, Robert Noyce was developing his own integrated circuit. Jack and Robert themselves never fought over this, although their companies Texas Instruments and Fairchild Semiconductor certainly did. When Jack received the Nobel Prize in the year 2000, Robert was unfortunately already deceased and therefore ineligible. Nevertheless, Robert held the patent for the first “monolithic” integrated circuit – that is, an IC manufactured from a single monocrystalline substrate.

In 1960, the integrated circuit went into production at Texas Instruments and Fairchild Semiconductor. Back then, the circuits possessed just a few dozen transistors. As the years went on, their complexity would skyrocket.

The mainframe era

The 1960s and ’70s were the era of the mainframe. Most big companies had an IBM System/360. The system formed a complete product family with interchangeable peripherals and software, and the largest models had up to 4 MB of main memory. But IBM was not always fair in its dealings: in response to considerably higher-performance models from competitors – especially the Control Data Corporation (CDC) – IBM repeatedly announced imminent new models of its own. They were never built, but customers waited for them instead of buying a CDC machine. This practice later cost IBM dearly in antitrust proceedings.

 

Microprocessors – The road to compact computers

The first microprocessors went into production in 1970. They were compact and combined the control unit and the arithmetic logic unit on a single chip. However, they could not compete with the mainframes in terms of performance; they were intended for use in desktop calculators.

Intel made its breakthrough with the 4004 microprocessor. Originally, this chip was a custom design for a Japanese manufacturer of desktop calculators. The outcome was the world’s first commercially available microprocessor. Its registers were only 4 bits wide, and the addressable external memory was a meager 5,120 bits. Nonetheless, it accommodated a then legendary 2,300 transistors on a single chip.

Shortly thereafter, the Japanese client ran into financial trouble and offered to let Intel buy back the rights to the Intel 4004. Intel seized the opportunity and started to market and further develop this processor line itself – and very successfully, too. One man’s sorrow is another man’s joy, as my mother used to say.

The CRAY-1 supercomputer

Mainframe manufacturers strove to build ever faster, ever more expensive machines, and in 1976 Seymour Cray completed work on his first supercomputer. The Cray-1 was a vector processor and was supposed to compute weather forecasts. Alas, it ended up being used for nuclear weapons research, although we can hardly blame the poor computer for that. The Cray-1 could process large amounts of data in a single step and execute an amazing 133 million floating-point operations per second. It was not as imposing as my Analytical Engine, being rather compact in comparison. It was, however, very expensive: buyers had to hand over at least 5 million dollars for the 5.5-ton machine. Its power consumption was also greater than ENIAC’s: a whopping 200 kW, if you factor in the power needed for cooling. Even so, the days of these leviathans were numbered, as the development of microprocessors was progressing swiftly.
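
To give a feel for what processing data “in a single step” means, here is a small sketch in Python with NumPy, which merely stands in for the vector hardware of that era:

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar style: one element at a time, as a conventional processor would.
c_scalar = [x + y for x, y in zip(a, b)]

# Vector style: the whole addition is expressed as one operation over the
# entire array -- the programming model a vector machine like the Cray-1 exploits.
c_vector = a + b

assert np.allclose(c_scalar, c_vector)
```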

Moore’s Law – The complexity of integrated circuits doubles regularly

By the mid-1970s, engineers could already fit a few thousand transistors on a chip. The word size of the microprocessors doubled to 8 bits, and the clock rate increased by a factor of ten. With up to 64 kilobytes of external memory, there was much greater scope for complex programs and data. Because the memory components still had so little capacity, however, the whole memory assembly was the size of a baking tray and cost a small fortune.

The Intel 8080 and the compatible, but more powerful, Zilog Z80 became the heart of many home computers. As it didn’t enter the personal computer market until 1981, IBM was very late to the party. Other companies saw the gap and ran with the concept. The result was cheap computers that even ordinary individuals could afford.

The number of transistors on a chip – both in memory and in processors – grew ceaselessly. Since 1970, the number of transistors per chip has doubled approximately every two years, and has already passed 20 billion. Gordon Moore, co-founder of Intel, formulated this relationship back in 1965. It came to be known as Moore’s Law – and against all expectations, it continues to hold true today.
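
What doubling every two years adds up to can be illustrated with a little Python; the starting point of 2,300 transistors in 1971 is taken from the Intel 4004 mentioned above:

```python
def projected_transistors(year, base_year=1971, base_count=2_300, doubling_period=2):
    """Moore's-law style projection: the count doubles every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
# 1971:          2,300
# 2021: ~77,000,000,000 -- the right order of magnitude for the largest chips of that time
```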

Embedded systems and the Internet of Things

Naturally, processors kept growing in performance. In many cases, however, we do not need all that computing power. A parallel development, therefore, was the creation of small, cheap and extremely energy-efficient computers on a single chip that can connect with one another. Technology that talks among itself – how wonderful! With their sensors, these smart mini-computers capture environmental parameters on site and then control, regulate or report accordingly. The front door talks to the lights, and the washing machine talks to the television. The goal is to automatically capture important information from the environment, connect the data together and make it available in a network. Seen in this light, the challenge of the future will not be to generate ever more computing power, but to find the best way to handle data.
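
As a tiny illustration of that capture–decide–report pattern, here is a Python sketch with a simulated temperature sensor; a real embedded node would of course read actual hardware and report over a network:

```python
import random
import time

def read_temperature():
    """Stand-in for a real sensor driver: returns a simulated reading in degrees Celsius."""
    return 20.0 + random.uniform(-5.0, 5.0)

def control_heating(temperature, setpoint=21.0):
    """Decide locally and report: switch the heating on below the setpoint."""
    heating_on = temperature < setpoint
    print(f"measured {temperature:.1f} °C -> heating {'on' if heating_on else 'off'}")
    return heating_on

for _ in range(3):  # in a real device this loop would run indefinitely
    control_heating(read_temperature())
    time.sleep(1)
```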

ADA Lovelace Center – The Competence Center for Analytics, Data and Applications in Erlangen/Nuremberg and Munich

© Fotolia / dragonstock
The ADA Lovelace Center combines AI research with AI applications.

Now, back to the ADA Lovelace Center, of which – as the center’s namesake – I am of course especially proud.

The challenge in terms of handling data is currently being addressed at the ADA Lovelace Center, where researchers are studying, developing and testing a wide range of solutions. And this work is not purely theoretical, as is so often the case, but rather has a clear link to practical applications. In particular, it is important for companies to develop new, data-driven business models. After all, data is now considered the raw material of the digital world. Imagine that! In my day, the most important raw materials were still coal, ore and steel…

The analysis, evaluation, mastery and intelligent use of data are therefore the recipe for success for companies in the 21st century. In Bavaria, the ADA Lovelace Center has made it its mission to support small and medium-sized businesses in particular. Artificial intelligence is one of the key areas of research. But I am especially pleased to see the support being given to a new generation of scientific talent, because it is vital that we get young people excited about this subject.

All of this has taken place over the last 200 years. I, for one, can’t wait to see what the future will bring and where research at the ADA Lovelace Center will lead.

 

This article was written by Isabel Pogner.

 
