Mathematics and the Real World

Recognition of the potential embodied in the fantastic speed of calculation was slow. The assessment that the world would require at most five computers is attributed to Thomas Watson, president of IBM, although there is no direct evidence that he ever said it. Even if he did not say it, the fact remains that there was a widespread lack of recognition of the new opportunities. The first electronic computer in Israel was built at the Weizmann Institute of Science; it was named Weizac and can be seen in the Ziskind Building where it was built, which now houses the Faculty of Mathematics and Computer Science (see the photograph).

The construction of Weizac was completed in 1954, and it was in use until 1964. Albert Einstein and John von Neumann were members of the steering committee for the building of the computer, and a large share of the institute's total budget in those days was devoted to that project. Einstein, so it is reported, expressed doubt about it and asked why Israel needed its own computer. Von Neumann, who as stated above realized earlier than others the enormous potential of the electronic computer, answered that Chaim Pekeris, the founder and leader of mathematical activity in the Weizmann Institute and a leading light in the construction of the computer, could himself keep the whole computer occupied full time. During part of this period the president of the institute was Abba Eban. I have a copy of a letter Eban wrote to Pekeris, saying more or less: “Dear Chaim, Would you agree to explain to the members of the Institute's Board of Directors what an electronic computer can be used for? I would suggest,” wrote Eban, “that you explain that it could help in carrying out surveys in Africa.” (Eban was interested in foreign policy and later became Israel's minister for foreign affairs.) Pekeris himself realized the possibilities that the computer could offer, and his desire to build a computer was motivated by scientific considerations. One of his objectives was a worldwide map of the ebb and flow of the tides. With the help of the computer he succeeded in identifying a location in the Atlantic Ocean at which there is no tidal movement, namely, a fixed point for the tides. Eventually, the British Royal Navy carried out measurements that confirmed Pekeris's computed result.

As stated, the new element in the electronic computer is its high speed. We recall that one of the reasons for the rejection of the model in which the Earth revolves around the Sun was the difficulty of imagining such a high speed for the Earth's movement. The intuition that at such a speed people would fly off the face of the Earth prevailed over the mathematical model. Similarly, although arithmetic operations are inherent in the human brain, and mathematical calculations can be easily accepted and understood, it is not easy to grasp intuitively the speed at which the electronic computer operates. That speed cannot be grasped by the human brain, and I am not sure that even today the possibilities offered by the computer are fully realized.

53. THE MATHEMATICS OF COMPUTATIONS

The development of the electronic computer necessitated a change in the attitude toward mathematical calculations, and this resulted in the development of a new mathematical subject called computability theory, or computational complexity. To understand the source of the change, let us recall the method of logarithms discussed in section 51. The method was developed to simplify calculations: it converted multiplication into addition together with reference to two tables. This conversion constitutes a great improvement for the human brain. What eases things for the human brain, however, is irrelevant for an electronic computer. For a fast computer, multiplication is itself built from elementary operations, and it is therefore simpler and more efficient to carry out the multiplication directly than to look up the logarithms, add them, and convert the result back. The fact that every computation is built from elementary operations means that the crucial factor in determining the efficiency of a computation is the number of elementary operations required. To compare two methods of calculation on an electronic computer, we need to compare the number of elementary operations that each method must perform to produce the result. But what is an elementary operation? And how can we compare two methods, or two programs, developed for different purposes and for different computers? It was the British mathematician Alan Turing who provided the appropriate framework for discussing these questions.
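The contrast can be made concrete with a short sketch in Python (not from the book; the function names are mine). Both routines compute the same product, but the first mimics the logarithm method, which pays for two "table lookups" and a conversion back, while the second uses the single multiplication a computer performs directly.

```python
import math

def multiply_via_logs(a, b):
    # The logarithm method: take log a and log b (two "table lookups"),
    # add them, then convert back with the antilog (exponential).
    return math.exp(math.log(a) + math.log(b))

def multiply_directly(a, b):
    # What a computer does: a single elementary multiplication.
    return a * b

print(round(multiply_via_logs(123, 456)))  # 56088, up to rounding error
print(multiply_directly(123, 456))         # 56088
```

For a human with printed tables the first route was a genuine shortcut; for a computer it replaces one elementary operation with several, which is exactly the point made above.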

Alan Mathison Turing was born in London in 1912. His talent for and leaning toward mathematics were revealed when he was very young. He completed his first degree at King's College, Cambridge, stayed there as a fellow, and carried out research on the foundations of mathematics. He presented an alternative approach to Kurt Gödel's result regarding the theoretical limits of calculation, an approach based on a system that would later be called the Turing machine. In 1938 he received his doctorate from Princeton University, with Alonzo Church (1903–1995) as his doctoral advisor. Church was a well-known mathematician whose research in mathematical logic dealt with similar topics but took a different approach. After Turing returned to England, with the outbreak of World War II he joined a team of scientists working for the Allies to decipher the coded messages transmitted by the Germans. The team met at a place called Bletchley Park, which still today serves as a meeting place and as a museum of the history of computing. Within a short time Turing held a central position in the deciphering efforts. With the help of an electromechanical computer available to the team, the Bletchley Park group succeeded in deciphering the German Enigma code, an achievement that played a significant part in the Allied victory. Turing was regarded as a war hero by the wartime prime minister, Winston Churchill.

After the war Turing continued his research at Manchester University. He made an important contribution, which we will discuss later on, to the subject now called artificial intelligence, attempted to build an electronic computer, and carried out research in several areas of biology. His work was cut short when he was prosecuted for a homosexual relationship with a young man, then a criminal offence in Britain. Turing admitted to the charges and agreed to undergo chemical treatment to suppress his sexual urges, offered as an alternative to a jail sentence. However, the social pressure, together with the physical effects of the treatment, was apparently too much for him, and in 1954, two weeks before his forty-second birthday, he ended his life by cyanide poisoning. Many years later, in 2009, the British prime minister Gordon Brown apologized on behalf of the government for “the appalling way he [Turing] had been treated,” and in December 2013 Queen Elizabeth II granted him a rare royal pardon under the prerogative of mercy. In 1966 the Association for Computing Machinery (ACM) instituted a prize in honor of Alan Turing, the prestigious Turing Award.

Before we present the main points of the theory of computability, we should state clearly that it does not presume to assess the efficiency of specific mathematical computing programs and thus to help the potential user select a calculation method or program better than others. To do that, the user needs to try the alternatives and compare the practical results. Computability theory is a mathematical theory that tries to find the limits of what different algorithms can achieve. There is no reason to be alarmed by the terminology. An algorithm is simply a list of instructions to be carried out. Thus, detailed directions on how to reach a destination B from a point A constitute an algorithm. A cookbook is a collection of algorithms whose objective is to prepare tasty dishes from the ingredients by following preset recipes. Operating manuals for different kinds of equipment are sometimes in the form of algorithms. How to add or multiply two numbers is generally presented as an algorithm.

Lists of instructions for performing mathematical calculations have always existed, of course. The sieve of Eratosthenes (mentioned in section 14), a method of finding the prime numbers, is an algorithm. The algorithm states that to find all the prime numbers from, say, 2 to 1,000, first delete all multiples of 2, that is, all even numbers greater than 2. From the remaining numbers delete the multiples of 3, that is, 9, 15, 21, and so on. Then delete all multiples of 5, and so on, until all the non-prime numbers up to 1,000 have been deleted. At the end of the process, that is, when the algorithm has been implemented, only the prime numbers between 2 and 1,000 will remain. Thus, we have described an algorithm that finds the prime numbers between 2 and 1,000.
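The steps just described translate almost word for word into code. The following is a minimal Python sketch of the sieve; the function name is mine, and the only refinement over the verbal description is the standard observation that deletion can begin at each prime's square, since smaller multiples were already removed.

```python
def sieve_of_eratosthenes(limit):
    # Start with all numbers from 2 to limit as candidates.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Delete every multiple of n, starting at n*n: the smaller
            # multiples were already deleted as multiples of smaller primes.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    # Whatever survives the deletions is prime.
    return [n for n in range(2, limit + 1) if is_prime[n]]

primes = sieve_of_eratosthenes(1000)
print(primes[:5])   # [2, 3, 5, 7, 11]
print(len(primes))  # 168 primes between 2 and 1,000
```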

The word algorithm is a distortion of the name of the Arab mathematician Abu Ja’far ibn Musa Al-Khwarizmi, who lived in Baghdad from approximately 780 to 850 CE. He made many contributions to mathematics. He determined the digits for the numbers from 1 to 9 and was the first to designate 0 as a legitimate digit in the notation of numbers. He developed methods for solving algebraic equations. The word algebra is derived from the title of his book hisāb al-ğabr wa’l-muqābala (On Calculation by Completion and Balancing), in which he included methods for solving such equations. He made no less a contribution by translating, and in the spirit of the time correcting and improving, the writings of the Greek mathematicians, and he thereby deserves much of the credit for the preservation of those important texts.

We have already said that the efficiency of different algorithms is assessed by comparing the number of elementary operations that they perform. Yet the questions remain: What is an elementary operation, and how can algorithms with completely different purposes be compared? It was for this purpose that the Turing machine was devised. It is not a machine in the usual sense of the word, using electricity or other fuel, producing smoke, and requiring maintenance from time to time. The Turing machine is a virtual machine, that is, a mathematical framework (of which there are several versions). The machine receives as input a finite ordered sequence of symbols from an alphabet, written on a tape. The tape is not limited in length. The machine can be in one of several predetermined states. The machine has a “reader” that can identify what is written in one of the cells on the tape. As a result of what it reads, the “program” of the machine causes a change in the machine's state according to a predetermined rule, a change of the letter in that cell to another letter, also in a predetermined manner, and a move to read the letter in an adjacent cell, again according to a predetermined rule. Such a program creates a series of moves from cell to cell, possible changes to the letters written in the cells, and possible changes in the state the machine is in. These operations are the elementary operations that enable algorithms to be compared. The algorithm ends when the reader reaches a cell with a prescribed symbol, and the result of the algorithm is defined by the state the machine is in when it stops and by what is written on the tape at that time.
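All the ingredients just listed, tape, states, a reading head, and predetermined transition rules, fit in a few lines of Python. The sketch below is one of several possible conventions (here the machine halts when no rule applies to the current state and symbol); the example rules and names are mine, chosen only to illustrate the mechanism.

```python
def run_turing_machine(tape, rules, state="start"):
    # tape: string of symbols; rules: (state, symbol) -> (state, symbol, move).
    tape = list(tape)
    pos = 0
    while True:
        symbol = tape[pos] if pos < len(tape) else "_"   # "_" marks a blank cell
        if (state, symbol) not in rules:
            break                                        # halt: no rule applies
        state, new_symbol, move = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = new_symbol                       # rewrite the current cell
        else:
            tape.append(new_symbol)
        pos += 1 if move == "R" else -1                  # step to an adjacent cell
    # The result is the final state together with the tape contents.
    return state, "".join(tape)

# A tiny machine that inverts a string of bits: in state "start",
# replace 0 by 1 and 1 by 0, move right, and halt on the blank.
invert_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine("10110", invert_rules))  # ('start', '01001')
```

Each pass through the loop is one elementary operation in the sense of the text: read a cell, change the state, rewrite the letter, move to a neighboring cell.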

What is surprising is that every calculation imagined until now, whether a numerical calculation, finding the minimal distance for a journey between two towns, or finding the meaning of a word in a dictionary, can be translated into the mathematical framework of this hypothetical, virtual machine. In other words, a Turing machine can be programmed, through its letters and its rules for moving from cell to cell, to perform the calculation. The emphasis is on “every imaginable calculation”: the claim that any such calculation can be translated into a calculation on a Turing machine is a “thesis,” known as the Church-Turing thesis, and not a mathematical theorem. So far no counterexample to the thesis has been found.

We have stated that the efficiency of algorithms is compared via the number of elementary operations they perform on a Turing machine to achieve a desired outcome. We repeat our warning, however, that the comparison that interests computability theorists is not between two methods of finding, say, the prime numbers between 2 and 1,000. For such a definite objective, today's high-speed computers will produce the answer quickly enough to satisfy us, whichever method is used. The comparison of main concern is between the performance of two algorithms when, in our example, the upper limit is not a fixed number such as 1,000 but a general number N, where N becomes larger and larger. Obviously, the larger N becomes, the larger the number of steps the algorithm must perform to find the prime numbers. The comparison between the two algorithms is then between the rate of increase in the number of steps of each algorithm as N increases.
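This way of comparing algorithms can be illustrated by instrumenting two prime-finding methods with step counters and watching the counts grow with N. The sketch below is my own illustration, not from the book, and the choice of what counts as one "elementary operation" (one deletion, one divisibility test) is a simplifying assumption.

```python
def sieve_step_count(limit):
    # Count one elementary operation per deletion of a multiple.
    steps = 0
    is_prime = [True] * (limit + 1)
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
                steps += 1
    return steps

def trial_division_step_count(limit):
    # A naive alternative: test each number for divisibility by 2, 3, 4, ...
    # counting one elementary operation per divisibility test.
    steps = 0
    for n in range(2, limit + 1):
        d = 2
        while d * d <= n:
            steps += 1
            if n % d == 0:
                break
            d += 1
    return steps

# The interesting quantity is not any single count but how the
# counts grow as the upper limit N becomes larger and larger.
for N in (1_000, 10_000, 100_000):
    print(N, sieve_step_count(N), trial_division_step_count(N))
```

Running this shows the trial-division count pulling further and further ahead of the sieve count as N grows, which is exactly the kind of comparison, rate of growth rather than a single timing, that the theory is concerned with.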

We will now explain what is meant by the term rate of increase as N increases. If the number of steps required to solve the problem is given by an expression of the following form,
