Mathematicians have long been fascinated by the study of collections of items, which led to the development and evolution of Set Theory. Attempts have been made to represent the entirety of math using sets. But is this even possible? A related question, the Entscheidungsproblem, asks whether an algorithm can decide if a given statement is provable from the axioms using the rules of logic.
It seems possible. Consider the empty set {}. This represents zero. Now consider the set containing zero, {{}}. This represents one. The set for two contains zero and one: {{}, {{}}}. And so on (this is the von Neumann construction of the natural numbers). So we can represent numbers.
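As a concrete sketch, here is how the construction might look in Python (the function name von_neumann is my own illustrative choice):

# Von Neumann construction: 0 = {}, and successor(k) = k U {k}.
def von_neumann(n):
    """Return the set representing n, i.e. the set of all smaller numbers."""
    num = frozenset()              # 0 is the empty set {}
    for _ in range(n):
        num = num | {num}          # successor(k) = k U {k}
    return num

print(von_neumann(0))   # frozenset(), i.e. {}
print(von_neumann(2))   # {{}, {{}}}, rendered as nested frozensets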
Now consider a general mathematical problem. It can be viewed as a function that maps a set of inputs to a set of outputs. One way to represent this function is as a set of ordered (input, output) pairs. A more powerful way is the form { x | p(x) }, where p(x) is a predicate: any element x that satisfies the predicate is a member of the set.
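In code, { x | p(x) } amounts to defining membership by a predicate rather than by enumeration. A minimal sketch, with "x is even" as an arbitrary example predicate:

# { x | p(x) }: membership is determined by a predicate.
def p(x):
    return x % 2 == 0                      # illustrative predicate: x is even

evens_up_to_10 = {x for x in range(10) if p(x)}   # a finite slice of the set
print(evens_up_to_10)                      # {0, 2, 4, 6, 8}

# Over an infinite domain, the set is best represented by its membership test:
is_member = p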
So far, so good. But can we really use sets for everything?
Here are some questions that make us squirm in discomfort (take some time to reflect on each of these):
i) Can a set contain itself?
ii) Does the set containing all the sets in the universe also contain its own power set? (Cantor's paradox)
iii) Consider all the sets that do not contain themselves. Does the set containing all these sets contain itself? (Russell's paradox)
Here's something related, but different. A model is something that can represent a problem. It is specified using a "language", which is a set of strings over an "alphabet".
A language is regular if it can be recognized by a finite state machine. Such machines can express an infinite number of problems, but not all.
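As an illustration (the language and state names below are my own choices, not from the original), here is a finite state machine that recognizes the regular language of binary strings containing an even number of 1s:

# DFA over the alphabet {0, 1}; the start state 'even' is also accepting.
TRANSITIONS = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd',  '0'): 'odd',  ('odd',  '1'): 'even',
}

def accepts(s):
    state = 'even'
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state == 'even'

print(accepts('1001'))   # True  (two 1s)
print(accepts('1011'))   # False (three 1s)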
A context-free grammar is a set of rules that describes the possible strings of a language. It can express everything a regular language can, and more. But again, not all.
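A classic example of the gap is the language of strings a^n b^n, generated by the grammar S -> a S b | ε. It is context-free but not regular, because a finite state machine cannot count the a's unboundedly. A recognizer sketch:

# Recognizer for the grammar S -> 'a' S 'b' | '' (the language a^n b^n).
def derives(s):
    if s == '':
        return True                        # the empty production
    return s.startswith('a') and s.endswith('b') and derives(s[1:-1])

print(derives('aaabbb'))   # True
print(derives('aabbb'))    # False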
A Turing machine is a model of a hypothetical machine that manipulates symbols on a strip of tape. A transition function maps the current state and symbol to a new state, a symbol to write, and a direction to move. A subset of its states are halting states, which specify whether the input is accepted or rejected.
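A minimal simulator sketch in Python, with an illustrative machine (of my own construction) that accepts inputs containing an even number of 1s by scanning right until it hits a blank:

# delta maps (state, symbol) to (new_state, symbol_to_write, direction).
# '_' is the blank symbol; 'accept' and 'reject' are the halting states.
def run(delta, start, tape_str):
    tape = dict(enumerate(tape_str))       # a sparse, unbounded tape
    state, head = start, 0
    while state not in ('accept', 'reject'):
        symbol = tape.get(head, '_')
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == 'R' else -1
    return state == 'accept'

delta = {
    ('even', '0'): ('even', '0', 'R'), ('even', '1'): ('odd',  '1', 'R'),
    ('odd',  '0'): ('odd',  '0', 'R'), ('odd',  '1'): ('even', '1', 'R'),
    ('even', '_'): ('accept', '_', 'R'), ('odd', '_'): ('reject', '_', 'R'),
}
print(run(delta, 'even', '1001'))   # True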
As expected, Turing machines are much more expressive (this ordering of models is the Chomsky hierarchy), and are in fact the theoretical basis of modern computing systems. A Universal Turing Machine can read the description of another Turing machine and emulate its behavior. In other words, it executes a stored program.
But are we done yet? Can we express every problem? Not really.
Returning to set theory, let's compare the sizes of sets. A set is said to be countably infinite if its elements can be put in one-to-one correspondence with the natural numbers; its elements can be "indexed". Let's represent a subset of such a set by a vector (also countably infinite) of 1s and 0s, where a 1 specifies that the element at that index is present in the subset.
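A short sketch of this representation (the subset {0, 2, 3} is an arbitrary example):

# Characteristic vector: bit i is 1 iff element i belongs to the subset.
def characteristic(subset, prefix_len):
    return [1 if i in subset else 0 for i in range(prefix_len)]

print(characteristic({0, 2, 3}, 6))   # [1, 0, 1, 1, 0, 0]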
Here's an experiment:
Arrange all the vectors as the rows of a matrix, in any arbitrary order. Consider another vector formed by taking the diagonal entries of this matrix and inverting each bit. Does this newly formed vector exist as a row in our matrix? A little reasoning tells us that it cannot: it differs from the nth row at the nth position (Cantor's diagonal argument). The takeaway is that two sets may both be infinite and yet have different cardinalities. It looks like "some infinities are greater than others". Funny as it sounds, we have a hierarchy of infinities, denoted by cardinal numbers. The smallest infinite cardinal is called Aleph null.
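The argument can be sketched on any finite prefix of such a matrix (the rows below are arbitrary):

# Flip the diagonal: the resulting vector differs from row n at position n,
# so it cannot equal any row, no matter how the rows were enumerated.
rows = [
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
]
diagonal_flip = [1 - rows[n][n] for n in range(len(rows))]
print(diagonal_flip)                       # [0, 1, 0, 0]
for n, row in enumerate(rows):
    assert diagonal_flip[n] != row[n]      # differs from every row at index n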
So how many problems can a Turing machine express? The answer is Aleph null: every Turing machine can be encoded as a finite string over a finite alphabet, so there are only countably many of them, while the diagonal argument shows that the set of all problems is strictly larger. The best known mathematical models, our most advanced computing systems, all sit at the lowest level of the hierarchy of cardinal numbers. The possibilities are limitless (I was careful not to say infinite).
When faced with the task of "finding" something that we "don't know", only one thing is clear: "We don't know what we don't know". This is enough to motivate us to study the field of Open World Computing.