I never paid much attention to that course, and consequently I passed with a C. However, that sentence had stayed at the back of my mind for a few years, and I decided to finally get to the bottom of it. So I read the textbook from that course cover to cover, and I believe it has clarified quite a few of my ideas about the theory of computers. Cohen introduces the concepts discussed in the book with an introduction to what we are getting into: the historical mathematical models of computation that preceded the actual invention of the computer.

Automata Theory

The book starts with recursive definitions and regular expressions, which form the basis of recognizing patterns (strings of characters) and validating whether they are accepted (matched) by a particular regular expression.
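The idea of a regular expression "accepting" a string can be illustrated with an ordinary programming-language regex engine. A minimal sketch in Python; the pattern a+b+ (one or more a's followed by one or more b's) is my own illustrative choice, not an example from the book:

```python
import re

# The regular expression a+b+ describes the language of strings made of
# one or more 'a's followed by one or more 'b's.
pattern = re.compile(r"a+b+")

def accepted(word: str) -> bool:
    # fullmatch insists the WHOLE string matches, mirroring the
    # automata-theory notion of a string belonging to a language.
    return pattern.fullmatch(word) is not None

for word in ["ab", "aaabb", "ba", "abab"]:
    print(word, "->", "accepted" if accepted(word) else "rejected")
```

Note that `fullmatch` (rather than `search`) is what corresponds to the textbook notion: the entire string must be generated by the expression, not merely contain a matching substring.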
Published (Last): 16 February 2010
The twentieth century has been filled with the most incredible shocks and surprises: the theory of relativity, the rise and fall of communism, psychoanalysis, nuclear war, television, moon walks, genetic engineering, and so on. As astounding as any of these is the advent of the computer and its development from a mere calculating device into what seems like a "thinking machine."
Its inception was certainly impelled if not provoked by war and its development was facilitated by the evolution of psycho-linguistics, and it has interacted symbiotically with all the aforementioned upheavals.
The history of the computer is a fascinating story; however, it is not the subject of this course. We are concerned instead with the theory of computers, which means that we shall form several mathematical models that will describe with varying degrees of accuracy parts of computers, types of computers, and similar machines.
The concept of a "mathematical model" is itself a very modern construct. It is, in the broadest sense, a game that describes some important real-world behavior. Unlike games that are simulations used for practice or simply for fun, mathematical models abstract and simplify. We may assert that chess is a mathematical model for war, but it is a very poor model, because wars are not really won by the simple assassination of the leader of the opposing country.
The adjective "mathematical" in this phrase does not necessarily mean that classical mathematical tools such as Euclidean geometry or calculus will be employed. Indeed, these areas are completely absent from the present volume. What is mathematical about the models we shall be creating and analyzing is that the only conclusions that we shall be allowed to draw are claims that can be supported by pure deductive reasoning; in other words, we are obliged to prove the truth about whatever we discover.
Most professions, even the sciences, are concerned with how things are done. While most of the world is correctly preoccupied by the question of how best to do something, we shall be completely absorbed with the question of whether certain tasks can be done at all. Our main conclusions will be of the form "this can be done" or "this can never be done." The nature of our discussion will be the frontiers of capability in an absolute and timeless sense. This is the excitement of mathematics. The fact that the mathematical models that we create serve a practical purpose through their application to computer science, both in the development of structures and techniques necessary and useful to computer programming and in the engineering of computer architecture, means that we are privileged to be playing a game that is both fun and important to civilization at the same time.
The term computer is practically never encountered in this book - we do not even define the term until the final pages. The way we shall study computers is to build mathematical models, which we shall call machines, and then to study their limitations by analyzing the types of inputs on which they operate successfully.
The collection of these successful inputs we shall call the language of the machine, by analogy to humans who can understand instructions given to them in one language but not another.
Every time we introduce a new machine we will learn its language, and every time we develop a new language we shall try to find a machine that corresponds to it. This interplay between languages and machines will be our way of investigating problems and their potential solution by automatic procedures, often called algorithms, which we shall describe in a little more detail shortly.
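The machine-language correspondence can be made concrete with a tiny sketch in Python of a finite automaton, one of the machine types introduced later in the book. The particular machine below, which accepts exactly the strings over {a, b} containing an even number of a's, is my own illustrative choice:

```python
# A minimal finite automaton: a transition table, a start state, and a
# set of accepting states. This machine tracks the parity of 'a's seen.
TRANSITIONS = {
    ("even", "a"): "odd",
    ("even", "b"): "even",
    ("odd", "a"): "even",
    ("odd", "b"): "odd",
}

def accepts(word: str) -> bool:
    state = "even"                       # start state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"               # 'even' is the accepting state

# The "language of the machine" is the collection of accepted inputs.
sample = ["", "a", "ab", "aa", "aba"]
language = [w for w in sample if accepts(w)]
```

Running the filter over the sample yields `["", "aa", "aba"]` - precisely the inputs on which this machine "operates successfully," which is what the book means by its language.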
The history of the subject of computer theory is interesting. It was formed by fortunate coincidences, involving several seemingly unrelated branches of intellectual endeavor. A small series of contemporaneous discoveries, by very dissimilar people, separately motivated, flowed together to become our subject. Until we have established more of a foundation, we can only describe in general terms the different schools of thought that have melded into this field. The most fundamental component of computer theory is the theory of mathematical logic.
As the twentieth century started, mathematics was facing a dilemma. Georg Cantor had recently invented the theory of sets (unions, intersections, inclusion, cardinality, etc.). But at the same time he had discovered some very uncomfortable paradoxes - he created things that looked like contradictions in what seemed to be rigorously proven mathematical theorems.
Some of his unusual findings could be tolerated (such as the idea that infinity comes in different sizes), but some could not (such as the notion that some set is bigger than the universal set). This left a cloud over mathematics that needed to be resolved. To some, the obvious solution was to ignore the existence of set theory. Others thought that set theory had a disease that needed to be cured, but they were not quite sure where the trouble was. The naive notion of a general "set" seemed quite reasonable and innocent.
When Cantor provided sets with a mathematical notation, they should have become mathematical objects capable of having theorems about them proven. All the theorems that dealt with finite sets appeared to be unchallengeable, yet there were definite problems with the acceptability of infinite sets. In other branches of mathematics the leap from the finite to the infinite can be made without violating intuitive notions.
Calculus is full of infinite sums that act much the way finite sums do; for example, if we have an infinite sum of infinitesimals that add up to 3, when we double each term, the total will be 6. The Euclidean notion that the whole is the sum of its parts seems to carry over to infinite sets as well; for example, when the even integers are united with the odd integers, the result is the set of all integers.
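The doubling claim can be checked with exact rational arithmetic. A quick sketch, using the geometric series 3/2 + 3/4 + 3/8 + ... (my own choice of a series summing to 3):

```python
from fractions import Fraction

def partial_sum(scale: int, terms: int) -> Fraction:
    """Exact partial sum of scale * (3/2 + 3/4 + 3/8 + ...)."""
    return sum(Fraction(scale * 3, 2 ** k) for k in range(1, terms + 1))

single = partial_sum(1, 50)   # approaches 3 as terms grows
double = partial_sum(2, 50)   # every term doubled: approaches 6
assert double == 2 * single   # doubling each term doubles the whole sum
```

With 50 terms the first sum already sits within 2^-48 of 3, and the term-by-term doubled series tracks it at exactly twice the value, mirroring the finite-sum behavior the text describes.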
In the year 1900, David Hilbert, as the greatest living mathematician, was invited to address an international congress to predict what problems would be important in the century to come. Either due to his influence alone, or as a result of his keen analysis, or as a tribute to his gift for prophecy, for the most part he was completely correct. The 23 areas he indicated in that speech have turned out to be the major thrust of mathematics for the twentieth century.
Although the invention of the computer itself was not one of his predictions, several of his topics turn out to be of seminal importance to computer science. First of all, he wanted the confusion in set theory resolved. He wanted a precise axiomatic system built for set theory that would parallel the one that Euclid had laid down for geometry. Hilbert thought that such an axiom system and set of rules of inference could be developed to avoid the paradoxes Cantor and others had found in set theory.
Second, Hilbert was not merely satisfied that every provable result should be true; he also presumed that every true result was provable. Even more significantly, he wanted a methodology that would show mathematicians how to find this proof.
He had in his mind a specific model of what he wanted. In the nineteenth century, mathematicians had completely resolved the question of solving systems of linear equations. Given any algebraic problem having a specified number of linear equations, in a specified set of unknowns, with specified coefficients, a system had been developed, called linear algebra, that would guarantee one could decide whether the equations had any simultaneous solution at all, and find the solutions if they did exist.
This would have been an even more satisfactory situation than existed in Euclidean geometry at the time. If we are presented with a correct Euclidean proposition relating line segments and angles in a certain diagram, we have no guidance as to how to proceed to produce a mathematically rigorous proof of its truth. We have to be creative - we may make false starts, we may get completely lost, frustrated, or angry. We may never find the proof. Linear algebra guarantees that none of this will ever happen with equations.
As long as we are tireless and precise in following the rules, we must prevail, no matter how little imagination we ourselves possess. Notice how well this describes the nature of a computer. When we input the problem, the machine generates the proof. Mathematicians are usually in the business of creating the proofs themselves, not the proof-generating techniques.
What had to be invented was a whole field of mathematics that dealt with algorithms or procedures or programs (we use these words interchangeably). From this we see that even before the first computer was ever built, some people were asking the question of what programs can be written. It was necessary to codify the universal language in which algorithms could be stated. Addition and circumscribing circles were certainly allowable steps in an algorithm, but such activities as guessing and trying infinitely many possibilities at once were definitely prohibited.
The language of algorithms that Hilbert required evolved in a natural way into the language of computer programs. The road to studying algorithms was not a smooth one.
The first bump occurred in 1931, when Kurt Gödel proved that there was no algorithm to provide proofs for all the true statements in mathematics.
In fact, what he proved was even worse. He showed that either there were some true statements in mathematics that had no proofs, in which case there were certainly no algorithms that could provide these proofs, or else there were some false statements that did have proofs of their correctness, in which case the algorithm would be disastrous. Mathematicians then had to retreat to the question of which statements do have proofs and how we can generate those proofs. Several people worked on this problem, Alan Turing among them.
They each fashioned various but similar versions of a universal model for all algorithms - what, from our perspective, we would call a universal algorithm machine. Turing then went one step farther. He proved that there were mathematically definable fundamental questions about the machine itself that the machine could not answer. If some human could figure out an algorithm to solve a particular class of mathematical problem, then the machine could be told to follow the steps in the program and execute this exact sequence of instructions on any inserted set of data tirelessly and with complete precision.
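Executing an exact sequence of instructions "tirelessly and with complete precision" is precisely what a Turing machine's instruction table captures. A toy sketch in Python; the dictionary encoding and the little bit-flipping machine are my own illustration, not Turing's notation:

```python
# A table of instructions (state, symbol) -> (new state, write, move)
# executed mechanically on a tape that extends as far as needed.
def run(program, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# A machine that flips every bit of its input, then halts at the blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Given the tape "0110", this machine marches right, rewriting each cell, and halts with "1001" on the tape; the table, not any intelligence in the executor, does all the work.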
The electronic discoveries that were needed for the implementation of such a device included vacuum tubes, which just coincidentally had been developed recently for engineering purposes completely unrelated to the possibility of building a calculating machine. This was another fortuitous phenomenon of this period of history. All that was required was the impetus for someone with a vast source of money to be motivated to invest in this highly speculative project.
It is practically sacrilegious to maintain that World War II had a serendipitous impact on civilization, no matter how unintentional, yet it was exactly in this way that the first computer was born - sponsored by the Allied military to break the German secret code, with Turing himself taking part in the construction of the machine.
What started out as a mathematical theorem about mathematical theorems - an abstraction about an abstraction - became the single most practically applied invention since the wheel and axle.
Not only was this an ironic twist of fate, but it all happened within the remarkable span of 10 years. It was as incredible as if a mathematical proof of the existence of intelligent creatures in outer space were to provoke them to land immediately on Earth. Independently of all the work being done in mathematical logic, other fields of science and social science were beginning to develop mathematical models to describe and analyze difficult problems of their own.
As we have noted before, there is a natural correspondence between the study of models of computation and the study of linguistics in an abstract and mathematical sense.
It is also natural to assume that the study of thinking and learning (branches of psychology and neurology) plays an important part in understanding and facilitating computer theory. What is again of singular novelty is the historical fact that, rather than turning their attention to mathematical models to computerize their own applications, these fields' initial development of mathematical models for aspects of their own science directly aided the evolution of the computer itself.
It seems that half the intellectual forces in the world were leading to the invention of the computer, while the other half were producing applications that were desperate for its arrival.
Two neurophysiologists, Warren McCulloch and Walter Pitts, constructed a mathematical model for the way in which sensory receptor organs in animals behave. The model they constructed for a "neural net" was a theoretical machine of the same nature as the one Turing invented, but with certain limitations. Modern linguists, some influenced by the prevalent trends in mathematical logic and some by the emerging theories of developmental psychology, had been investigating a very similar subject: What is language in general?
How could primitive humans have developed language? How do people understand it? How do they learn it as children? How do people construct sentences from the ideas in their minds? Noam Chomsky created the subject of mathematical models for the description of languages to answer these questions. His theory grew to the point where it began to shed light on the study of computer languages. The languages humans invented to communicate with one another and the languages necessary for humans to communicate with machines shared many basic properties.
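A flavor of Chomsky's grammars: a sketch in Python of the classic grammar S -> aSb | ab, which generates the language { a^n b^n : n >= 1 }, a standard textbook example of a language that no regular expression can describe. The encoding below is my own minimal one:

```python
# Productions of a tiny context-free grammar: S -> 'a' S 'b' | 'a' 'b'.
GRAMMAR = {"S": [["a", "S", "b"], ["a", "b"]]}

def derive(symbols, depth):
    """Expand nonterminals left to right, taking the recursive
    production while depth > 0 and the terminating one after that."""
    out = []
    for s in symbols:
        if s in GRAMMAR:
            rule = GRAMMAR[s][0] if depth > 0 else GRAMMAR[s][1]
            out.append(derive(rule, depth - 1))
        else:
            out.append(s)
    return "".join(out)
```

Calling `derive(["S"], 0)` yields "ab", and each extra unit of depth wraps another matched a...b pair around it, so the derivations trace out exactly the balanced strings the grammar describes.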
Introduction to Computer Theory
Daniel I. A. Cohen