Supplement to Information

Open Problems in the Study of Information and Computation

There is no consensus on a ‘standard’ set of open problems in the philosophy of information. The following are some typical problems of a philosophical nature in the study of information and computation, ordered roughly by estimated hardness:

The unification of various theories of information:
In a mathematical sense, information is associated with measuring extensive properties of classes of systems with finite but unlimited dimensions (systems of particles, texts, codes, networks, graphs, games, etc.). This suggests that a uniform treatment of various theories of information is possible. In the Handbook of Philosophy of Information three different forms of information are distinguished (Adriaans and van Benthem 2008b):
  • Information-A, Knowledge, logic, what is conveyed in informative answers
  • Information-B, Probabilistic, information-theoretic, measured quantitatively
  • Information-C, Algorithmic, code compression, measured quantitatively
Because of recent developments, the connections between Information-B (Shannon) and Information-C (Kolmogorov) are reasonably well understood (Cover and Thomas 2006). The historical material presented in this article suggests that reflection on Information-A (logic, knowledge) is historically much more interwoven with the quantitative traditions than has generally been recognized. The research program of logical positivism can with hindsight be characterized as the attempt to marry a possible-worlds interpretation of logic with probabilistic reasoning (Carnap 1945, 1950; Popper 1934). To my knowledge it has not been revitalized in the light of the huge developments in information theory of the past decades. Modern attempts to design a Bayesian epistemology (Bovens and Hartmann 2003) do not seem to be aware of the work done in the first half of the twentieth century. An attempt to unify Information-A and Information-B nevertheless seems a viable exercise. The connection between thermodynamics and information theory has also become much closer in the past decade, due, among other things, to the work of Gell-Mann and Lloyd (2003) (see also Bais and Farmer 2008). Verlinde (2010) even presented a reduction of gravity to information. All these developments suggest that information is an even deeper concept than was previously recognized, and that the ambition to formulate a unified theory of information is anything but a lost cause.
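The sense in which Information-B and Information-C coincide can be made precise. A standard result, sketched here informally (cf. Cover and Thomas 2006; Grünwald and Vitányi 2008), says that for a computable probability distribution P, the expected Kolmogorov complexity of a string drawn from P equals the Shannon entropy of P up to an additive constant c_P that depends only on P:

    \[
    0 \;\le\; \sum_{x} P(x)\,K(x) \;-\; H(P) \;\le\; c_P,
    \qquad H(P) \;=\; -\sum_{x} P(x)\,\log_2 P(x).
    \]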
What is useful/meaningful information?
All well-known quantitative information measures (specifically Shannon information and Kolmogorov complexity) assign the highest information content to data sets with the highest entropy. In this sense a television broadcast containing only white noise would contain the most meaningful information. This is counterintuitive (the code sketch at the end of this subsection illustrates the point). In the past decades there have been a number of proposals to define a formal unit of measurement of the amount of structural (or model) information in a data set:
  • Aesthetic measure (Birkhoff 1950)
  • Sophistication (Koppel 1987; Antunes et al. 2006; Antunes & Fortnow 2003)
  • Logical Depth (Bennett 1988)
  • Effective complexity (Gell-Mann & Lloyd 2003)
  • Meaningful Information (Vitányi 2006)
  • Self-dissimilarity (Wolpert & Macready 2007)
  • Computational Depth (Antunes et al. 2006)
  • Facticity (Adriaans 2008)
Three intuitions dominate the research. A string is ‘interesting’ when …
  • a certain amount of computation is involved in its creation (Sophistication, Computational Depth);
  • there is a balance between the model-code and the data-code under two-part code optimization (effective complexity, facticity);
  • it has internal phase transitions (self-dissimilarity).
Such models penalize both maximal entropy and low information content, but the exact relationship between these intuitions is unclear. Since these proposals have a lot in common, it is not inconceivable that a unified theory of meaningful information will be developed in the coming years. Phenomena that might be related to a theory of structural information and that are currently ill understood include: phase transitions in the hardness of satisfiability problems related to their complexity (Simon & Dubois 1989; Crawford & Auton 1993), and phase transitions in the expressiveness of Turing machines related to their complexity (Crutchfield & Young 1989, 1990; Langton 1990; Dufort & Lumsden 1994).
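The counterintuitive status of high-entropy data can be made concrete with a small sketch. The following Python fragment is illustrative only: zlib-compressed length is merely a crude, machine-dependent upper bound on Kolmogorov complexity, and the sample strings are arbitrary choices. It shows that both an empirical entropy estimate and a compression-based estimate rank pure noise above highly structured text, while saying nothing about meaningfulness:

    import math, os, zlib
    from collections import Counter

    def entropy_per_byte(data: bytes) -> float:
        """Empirical Shannon entropy in bits per byte (Information-B)."""
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    def compressed_size(data: bytes) -> int:
        """zlib-compressed length: a crude upper bound on Kolmogorov
        complexity (Information-C)."""
        return len(zlib.compress(data, 9))

    noise = os.urandom(10_000)                        # 'white noise'
    structured = b"the cat sat on the mat. " * 417    # repetitive text

    for name, data in [("noise", noise), ("structured", structured)]:
        print(name, round(entropy_per_byte(data), 2), "bits/byte,",
              compressed_size(data), "bytes compressed")

    # The noise scores ~8 bits/byte and is incompressible; the text scores
    # far lower on both counts. Neither measure certifies the noise as more
    # *meaningful* -- this is the gap the proposals above try to close.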
What is an adequate logic of information?
What is a good logical system (or set of systems) that formalizes our intuitions about the relations between concepts like ‘knowing’, ‘believing’ and ‘being informed of’? Proposals have been made by Dretske (1981), van Benthem (2006; van Benthem & van Rooij 2003), Floridi (2003, 2011) and others. In principle these concepts can probably be mapped onto our current landscape of known logics (structural, modal), and the discrepancies between the proposed systems can presumably be analyzed within the ‘normal science’ framework of existing logics.
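As a minimal illustration of how ‘being informed’ can be mapped onto a modal logic, the following sketch evaluates the box modality in a toy Kripke model. The model, the world names, and the reading of the modality are assumptions chosen for exposition; this is standard possible-worlds semantics, not any one of the cited proposals:

    # A toy Kripke model: 'the agent is informed that p' holds at a world
    # when p is true in every world the agent considers possible there.
    worlds = {"w1", "w2", "w3"}
    R = {"w1": {"w1", "w2"},      # accessibility relation
         "w2": {"w1", "w2"},
         "w3": {"w3"}}
    valuation = {"p": {"w1", "w2"}}   # p holds in w1 and w2

    def informed(world, prop):
        """Box modality: prop holds in every accessible world."""
        return all(v in valuation[prop] for v in R[world])

    print(informed("w1", "p"))   # True: p holds in both accessible worlds
    print(informed("w3", "p"))   # False: p fails in w3 itself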
Continuous versus discrete models of nature
This problem has hampered the development of a unified theory of information and entropy for the last 150 years or so. The central issue is this: the most elegant models of physical systems are based on functions in continuous spaces. In such models almost all points in space carry an infinite amount of information. Yet the cornerstone of thermodynamics is that a finite amount of space has finite entropy. What is the right way to reconcile these two views? This problem is related to questions studied in the philosophy of mathematics (an intuitionistic versus a more Platonic view). It is central in some of the more philosophical discussions on the nature of computation and information (Putnam 1988; Searle 1990). The problem is also related to the notion of phase transitions in the description of nature (e.g., thermodynamics versus statistical mechanics) and to the idea of levels of abstraction (Floridi 2002).
Computation versus thermodynamics:
There is a reasonable understanding of the relationship between Kolmogorov complexity and Shannon information (Li & Vitányi 2008; Grünwald and Vitányi 2008; Cover & Thomas 2006), but the unification of the notion of entropy in thermodynamics with Shannon–Kolmogorov information is very incomplete, apart from some rather ad hoc insights (Harremoës and Topsøe 2008; Bais and Farmer 2008). What is a computational process from a thermodynamical point of view? What is a thermodynamical process from a computational point of view? Can a thermodynamic theory of computing serve as a theory of non-equilibrium dynamics? This problem seems to be hard because 150 years of research in thermodynamics still leaves us with many conceptual unclarities at the heart of the theory itself. It is not yet clear how a theory of computation would resolve these, although bringing in concepts from the theory of computation seems promising. Possible theoretical models could with high probability be corroborated by feasible experiments (e.g., Joule's adiabatic expansion; see Adriaans 2008).
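One well-established bridge between the two perspectives, mentioned here only as an illustration (it is not discussed in the references above), is Landauer's principle: erasing one bit of information dissipates at least kT ln 2 joules of heat. The bound is easily computed:

    import math

    # Landauer's principle: erasing one bit dissipates at least
    # k_B * T * ln(2) joules of heat (illustrative arithmetic only).
    k_B = 1.380649e-23    # Boltzmann constant, J/K (exact in SI since 2019)
    T = 300.0             # room temperature, K

    print(f"{k_B * T * math.log(2):.3e} J per erased bit")   # ~2.87e-21 J

At roughly 3 × 10⁻²¹ J per bit, the bound lies many orders of magnitude below the switching energy of present-day hardware, which is one reason the computation–thermodynamics boundary is so hard to probe experimentally.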
Classical information versus quantum information
Classical information is measured in bits. Implementing bits in nature involves macroscopic physical systems with at least two different stable states and a low-energy reversible transition process (e.g., switches, relays, transistors). The most fundamental way to store information in nature at the atomic level involves qubits. A qubit is described by a state vector in a two-level quantum-mechanical system, which is formally equivalent to a two-dimensional vector space over the complex numbers (Von Neumann 1955; Nielsen & Chuang 2000). Quantum algorithms have, in some cases, a fundamentally lower complexity (e.g., Shor's algorithm for the factorization of integers). The exact relation between classical and quantum information is unclear. Part of the difficulty has to do with our limited understanding of quantum mechanics and with the question of whether nature is essentially deterministic or not.
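A minimal numerical sketch of the formalism just described may help (standard textbook material, cf. Nielsen & Chuang 2000; the concrete gate and amplitudes are chosen for illustration):

    import numpy as np

    # A qubit is a normalized vector in C^2 (a two-level quantum system).
    ket0 = np.array([1, 0], dtype=complex)

    # The Hadamard gate sends |0> to an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = H @ ket0

    # Born rule: measurement probabilities are squared amplitudes.
    p0, p1 = abs(psi[0])**2, abs(psi[1])**2
    print(psi, p0, p1)    # amplitudes 1/sqrt(2) each; p0 = p1 = 0.5

    # A classical bit is either 0 or 1; the qubit's pre-measurement state
    # is the full complex vector psi. Describing n qubits classically
    # requires 2**n complex amplitudes, which hints at why the relation
    # between classical and quantum information is subtle.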
Information and the theory of everything:
In the past decades information seems to have become a vital concept in physics. Seth Lloyd and others (Zuse 1969; Wheeler 1990; Schmidhuber 1997b; Wolfram 2002; Hutter 2010) have analyzed computational models of various physical systems. The notion of information seems to play a major role in the analysis of black holes (Lloyd & Ng 2004; Bekenstein 1994). Erik Verlinde (2010) has proposed a theory in which gravity is analyzed in terms of information. For the moment these models seem to be purely descriptive, without any possibility of empirical verification.
The Church-Turing Hypothesis.
We know that many formal systems are Turing equivalent (Turing machines, recursive functions, lambda calculus, combinatory logic, cellular automata, to name a few). The question is: does this equivalence define the notion of computation? Dershowitz and Gurevich (2008) claim to have vindicated the hypothesis, but this result is not generally accepted (see the discussion on “Computability – What would it mean to disprove the Church-Turing thesis”, in the Other Internet Resources section). A lot of conceptual clarification seems necessary, but for now it is unclear how one could ever verify the thesis definitively. The discovery of a system in nature that could actually compute more than a Turing machine would imply an immediate falsification, but no such device has been found to date.
P versus NP?
Can every problem for which a solution can be checked efficiently also be solved efficiently by a computer (Garey & Johnson 1979)? See Cook 2000 (Other Internet Resources) for a good introduction. The problem, which appears to be very hard, has been a rich source of research in computer science and mathematics, although relatively little has been published on its philosophical relevance. That a solution might have profound philosophical impact is illustrated by a quote from Scott Aaronson:
If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss…. (See the post “Reasons to Believe” on Scott Aaronson's blog Shtetl-Optimized listed in Other Internet Resources. This is cited in the Wikipedia entry on the P versus NP problem (also in Other Internet Resources), as of September 10, 2012.)
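The asymmetry at stake can be shown in a few lines of code. In the sketch below (the CNF encoding and all names are illustrative choices), checking a candidate truth assignment against a Boolean formula takes time polynomial in the formula's size, whereas the only known general solving strategies, like the brute-force search here, take time exponential in the number of variables:

    from itertools import product

    # Formulas in CNF: a list of clauses, each a list of signed variable
    # indices (3 means x3, -3 means not-x3).
    def check(cnf, assignment):
        """Polynomial-time verification of a candidate solution."""
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in cnf)

    def solve(cnf, n_vars):
        """Brute-force search over all 2**n_vars assignments."""
        for bits in product([False, True], repeat=n_vars):
            assignment = dict(enumerate(bits, start=1))
            if check(cnf, assignment):
                return assignment
        return None

    cnf = [[1, -2], [-1, 2], [2, 3]]   # (x1 v ~x2) & (~x1 v x2) & (x2 v x3)
    print(solve(cnf, 3))               # first hit: x1=F, x2=F, x3=T

P = NP would mean that, for problems of this kind, the exponential search can always be replaced by something as cheap as the check.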

