Sunday, September 18, 2011

Information through the Lens of Computation: Algorithmic Information Theory



Up to now, our discussions about information and computation have been rather informal -- I have neglected to give detailed mathematical introductions to concepts like Shannon entropy, Turing universality, codes etc., for the simple reason that a qualitative understanding has been fully sufficient for our purposes.
However, if we want to make any quantitative statements about the notions of information and computation, it will benefit us to take another look at the subject in a slightly more formal context.
To this end, the most efficient framework seems to me to be that of algorithmic information theory (AIT), introduced and developed mainly by the mathematicians Ray Solomonoff, Andrey Kolmogorov, and Gregory Chaitin in the 1960s. According to Chaitin, AIT is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously"; as such, it will help us reach a better, more quantitative understanding of both subjects.
In a way, AIT is concerned with a concrete realisation, within the realm of computation, of the somewhat abstract-seeming notion of information or information content. Its central concept is the so-called Kolmogorov complexity, which, loosely speaking, measures the information content of an object -- say, a string of binary digits -- by the length of the shortest program, on a given computer, that outputs this object. This might at first seem like a rather arbitrary definition: different computers will generally have different programs, of different lengths, that generate the string, leading to different complexities for one and the same object. But it turns out that, despite this ambiguity, Kolmogorov complexity is useful independently of the concrete computational implementation it is based on.
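To make this a little more concrete, here is the standard formal definition, in a quick sketch of my own (the notation is mine, not taken from the texts mentioned above). Fix a universal computer U; the Kolmogorov complexity of a binary string x relative to U is then the length of the shortest program p that makes U output x:

K_U(x) = \min \{\, |p| : U(p) = x \,\}.

The ambiguity just mentioned is tamed by the invariance theorem: for any two universal machines U and V, there is a constant c_{U,V}, depending only on the two machines (roughly, the length of a program for U that simulates V), but not on x, such that

K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all strings } x.

So switching the underlying computer shifts the complexity of every string by at most a fixed additive constant, which is why one usually drops the subscript and simply writes K(x).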

Saturday, September 3, 2011

The Universal Universe, Part IV: Computational Complexity



Let me start this post on a personal note and apologize for the unannounced break -- I was on vacation in Norway and, due to poor planning on my part, had an exam right afterwards, which kept me from attending to the blog as I had intended.
It all turned out for the best in the end, though, since during my stay in Norway, I stumbled across this great essay by Scott Aaronson (discussed on his blog here), which alerted me to something I hadn't paid much attention to, but should have -- the field of computational complexity, and especially its implications for philosophy. Having now had some time to digest the contents, I think there are some important issues for me to think, and write, about.
Of course, I was aware of the field of computational complexity and its basic notions prior to reading that essay, but I thought of it as concerning 'merely' problems of practicality -- making the classic philosopher's error (not that I'm a philosopher, but this apparently doesn't keep me from committing their errors) of being content with showing something possible 'in principle', while dismissing the issues involved in making it possible 'in practice' as mere technicalities.
Aaronson has now taken up the challenge of breaking this unfortunate habit, and he does so with great clarity, showing that often enough, thinking merely about what is possible in principle -- or, in more appropriate terms, what is computable -- without thinking about the difficulty involved in actually carrying out that computation misses most of the good stuff. One particularly interesting argument is his resolution of the so-called 'waterfall problem', which essentially poses that any physical process can be interpreted as implementing any computation, thus making the proposal that sentience can come about merely through computation somewhat suspect -- apparently forcing us to conclude that if there is a computation that gives rise to sentience, then every (or nearly every) physical system can be viewed as implementing that computation, and hence as giving rise to sentience.