Computational Limits and Implications

Building on our exploration of undecidability, this module delves into the fundamental limits of computation itself. We examine the boundaries of what is computable, even with unlimited resources, and investigate the profound implications of these limits across mathematics, computer science, and physics.

First, we explore theoretical boundaries from foundational mathematics, including Gödel's Incompleteness Theorems and Tarski's undefinability of truth. We then investigate non-computable functions like the Busy Beaver function, which grow faster than any computable function.

Finally, we examine practical implications of these limits in compiler optimization, formal methods, and artificial intelligence, before considering speculative models of hypercomputation that attempt to transcend these fundamental barriers, and closing with the connection to algorithmic information theory.

Theoretical Boundaries

Gödel's Incompleteness Theorems and Computability

In 1931, Kurt Gödel published two groundbreaking results that fundamentally changed our understanding of formal systems and established inherent limitations on what can be proven within consistent mathematical frameworks.

Definition: Formal System

A formal system F consists of:

  1. A finite set of symbols (the alphabet)
  2. A grammar that determines which strings of symbols are well-formed formulas (WFFs)
  3. A set of axioms, which are specific WFFs
  4. A set of inference rules for deriving new WFFs from existing ones

A formal system is consistent if it is not possible to derive a contradiction (both a statement and its negation). It is complete if every statement in the language can either be proven or disproven within the system. It is effectively axiomatized if an algorithm can recognize whether a given string is an axiom and whether a given step is a valid application of an inference rule.

Theorem: Gödel's First Incompleteness Theorem

Any consistent, effectively axiomatized formal system F that is powerful enough to express elementary arithmetic contains statements that can neither be proven nor disproven within F.

Gödel's proof relied on a technique now known as "Gödelization" - encoding statements about a formal system as numbers within that same system. This allows self-reference, enabling the construction of a statement that essentially says "this statement is not provable".

Proof: Proof Sketch of the First Incompleteness Theorem

The key steps of the proof are:

  1. Define a mapping between formulas and natural numbers ("Gödel numbering").
  2. Show that fundamental syntactic properties and relations (like "x is a proof of formula y") can be expressed as arithmetical relations on these Gödel numbers.
  3. Construct a formula G that effectively states "this formula is not provable in F".
  4. Show that if F is consistent, then G is neither provable nor refutable in F.

The construction relies on a "diagonalization" technique similar to Cantor's proof that the real numbers are uncountable and Turing's solution to the Halting Problem.
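
As a small illustration of step 1 of the sketch above (a minimal Python sketch: the symbol table and prime-power encoding below are one common choice, not Gödel's exact scheme), a formula viewed as a sequence of symbols can be packed into a single natural number:

  # Minimal sketch of Gödel numbering: encode a formula (a sequence of
  # symbols) as one natural number via prime-power encoding.
  SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}

  def primes():
      """Yield 2, 3, 5, 7, ... by trial division (fine for short formulas)."""
      n = 2
      while True:
          if all(n % p for p in range(2, int(n ** 0.5) + 1)):
              yield n
          n += 1

  def goedel_number(formula):
      """Encode the i-th symbol of the formula as the exponent of the i-th prime."""
      g = 1
      for p, ch in zip(primes(), formula):
          g *= p ** SYMBOLS[ch]
      return g

  # "S0+S0=SS0" (i.e. 1 + 1 = 2) becomes an ordinary integer, so statements
  # *about* formulas and proofs become statements about numbers.
  print(goedel_number("S0+S0=SS0"))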

Theorem: Gödel's Second Incompleteness Theorem

If a consistent, effectively axiomatized formal system F can express elementary arithmetic, then F cannot prove its own consistency.

The Second Incompleteness Theorem follows from the First by formalizing the notion of consistency within the system itself. It dealt a significant blow to Hilbert's program, which sought to establish the consistency of mathematics using only "finitary" methods.

These theorems have profound computational implications:

  • They establish fundamental limits on automated theorem proving and formal verification.
  • They connect directly to the Halting Problem through their reliance on self-reference.
  • They demonstrate that no algorithm can decide the truth of all mathematical statements.
  • They imply the existence of true statements that cannot be computed or verified algorithmically.

Tarski's Undefinability of Truth

In 1933, Alfred Tarski established another fundamental limitation relating to formal systems, focusing on the concept of truth itself.

Definition: Truth Predicate

A truth predicate for a language L is a predicate Tr such that for any sentence φ in L, Tr(⌜φ⌝) ↔ φ, where ⌜φ⌝ is a name or code for the sentence φ.

Theorem: Tarski's Undefinability Theorem

For any formal language L that contains basic arithmetic, there is no formula Tr(x) in L that defines the set of Gödel numbers of true sentences of L.

Proof: Proof Sketch of Tarski's Undefinability Theorem

The proof proceeds by contradiction. Suppose there exists a truth predicate Tr(x) in L.

We can construct a self-referential sentence λ that asserts its own falsehood: λ ↔ ¬Tr(⌜λ⌝)

If λ is true, then Tr(⌜λ⌝) holds, which means ¬Tr(⌜λ⌝) holds (by definition of λ), a contradiction.

If λ is false, then ¬Tr(⌜λ⌝) holds, which means λ is true (by definition of λ), another contradiction.

Therefore, no such truth predicate can exist in L.

Tarski's theorem represents a limitation on self-reference in formal systems and has several important computational implications:

  • It establishes that programming languages cannot completely represent their own semantics.
  • It implies fundamental limitations on self-verifying systems and meta-programming.
  • It shows that full semantic reflection is impossible in formal systems.
  • It provides insight into the distinction between syntax and semantics in computation.

Tarski's solution to the paradox was the introduction of a hierarchy of languages, where the truth of statements in one language can only be defined in a metalanguage, whose truth in turn is defined in a meta-metalanguage, and so forth.

Busy Beaver Function and Non-computable Growth Rates

The Busy Beaver function, introduced by Tibor Radó in 1962, provides a concrete example of a well-defined function that grows faster than any computable function. It exemplifies the notion of non-computability in a vivid way.

Definition: Busy Beaver Function

Let Σ(n) be the maximum number of 1s that can be written by an n-state, 2-symbol Turing machine that starts with a blank tape and eventually halts.

Similarly, let S(n) be the maximum number of steps that can be executed by an n-state, 2-symbol Turing machine that starts with a blank tape and eventually halts.

The values of this function grow extremely quickly:

  • Σ(1) = 1 (trivial case)
  • Σ(2) = 4
  • Σ(3) = 6
  • Σ(4) = 13
  • Σ(5) = 4098
  • Σ(6) is not known exactly, but its known lower bounds already exceed 10↑↑15 (a power tower of fifteen 10s, in Knuth's up-arrow notation)
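
To make these values concrete, the following sketch (an illustrative Python simulator, not part of Radó's formalism) runs the known 2-state, 2-symbol champion machine, which attains Σ(2) = 4 and S(2) = 6:

  # Transition table of the 2-state busy beaver champion:
  # (state, scanned symbol) -> (symbol to write, head move, next state)
  CHAMPION = {
      ('A', 0): (1, +1, 'B'),
      ('A', 1): (1, -1, 'B'),
      ('B', 0): (1, -1, 'A'),
      ('B', 1): (1, +1, 'HALT'),
  }

  def run(table, start='A'):
      """Simulate a 2-symbol Turing machine on a blank two-way-infinite tape."""
      tape, head, state, steps = {}, 0, start, 0
      while state != 'HALT':
          write, move, state = table[(state, tape.get(head, 0))]
          tape[head] = write
          head += move
          steps += 1
      return sum(tape.values()), steps

  print(run(CHAMPION))   # (4, 6): four 1s written, six steps executed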

Theorem: Non-computability of the Busy Beaver Function

The function Σ(n) is not computable. That is, there exists no algorithm that, given n as input, outputs Σ(n) for all n.

Proof: Proof of Non-computability

We proceed by contradiction. Suppose Σ(n) is computable by some algorithm A.

Then we could solve the Halting Problem as follows:

  1. Given a Turing machine M and an input w, construct a machine M' that:
    • Simulates M on w, starting from a blank tape with w hard-coded into M'
    • Writes a 1 onto a fresh tape cell for every step of M that it simulates
    • Halts exactly when M halts on w
  2. Let k be the number of states in M'.
  3. Compute Σ(k) using the assumed algorithm A.
  4. Note that if M halts on w after t steps, then M' halts having written at least t ones, so t ≤ Σ(k).
  5. Simulate M on w for Σ(k) steps. If M halts within Σ(k) steps, then M halts on w; otherwise, M never halts on w.

This would give us an algorithm to solve the Halting Problem, which is known to be undecidable.

Therefore, our assumption that Σ(n) is computable must be false.
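
The same reduction, written as an illustrative sketch (build_tracer, num_states, halts_within, and the oracle sigma are all hypothetical helpers, since no such oracle can exist):

  def decide_halting(M, w, sigma):
      """Hypothetical halting test built from an assumed busy beaver oracle.
      build_tracer(M, w) stands for the machine M' of the proof: it writes a
      fresh 1 for every step of M on w and halts exactly when M does."""
      M_prime = build_tracer(M, w)      # step 1 (hypothetical constructor)
      k = num_states(M_prime)           # step 2 (hypothetical helper)
      bound = sigma(k)                  # step 3: the assumed-computable Σ(k)
      # Steps 4-5: if M halts at all, it halts within `bound` steps, since
      # M' would otherwise have written more than Σ(k) ones before halting.
      return halts_within(M, w, bound)  # a finite, hence decidable, simulation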

Theorem: Growth Rate of the Busy Beaver Function

For any computable function f: ℕ → ℕ, there exists an n₀ such that Σ(n) > f(n) for all n ≥ n₀.

The Busy Beaver function provides key insights into computational limits:

  • It provides a concrete example of a non-computable function that is mathematically well-defined.
  • It demonstrates growth rates beyond what any algorithm can compute.
  • It highlights the distinction between definition and computability.
  • It connects to the Halting Problem in a deep way, showing how undecidability manifests as non-computability.

Practical Undecidability

Compiler Optimization Limitations

The theoretical limits discussed above have direct implications for what compilers can and cannot do, especially regarding optimizations.

Definition: Program Optimization

A program transformation T: P → P' is an optimization if:

  1. The semantics of P and P' are equivalent (for all inputs, both programs produce the same output)
  2. Program P' is "better" than P according to some metric (e.g., execution time, memory usage, code size)

Theorem: Rice's Theorem Applied to Optimization

Let Q be any non-trivial semantic property of programs (i.e., a property that depends only on the function computed by a program, not on its specific implementation). Then the problem of determining whether a given program has property Q is undecidable.

This theorem has profound implications for compiler optimization. Here are several undecidable problems that compilers face:

  • Dead code elimination: Determining whether a piece of code will ever be executed is undecidable in general.
  • Constant propagation: Determining whether a variable will always hold the same value is undecidable.
  • Loop optimization: Determining if two loops compute the same result is undecidable.
  • Function inlining: Deciding if inlining a function will improve performance is undecidable in general.
  • Alias analysis: Precisely determining whether two pointers can point to the same memory location is undecidable.
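
To see why the first of these problems is already undecidable, consider a sketch like the following (the Collatz-style loop is only an illustration): deciding whether the final statement is dead code is exactly deciding whether the loop terminates for every input, which no general algorithm can do.

  def mystery(n: int) -> str:
      # Whether the `return` below is dead code depends on whether this loop
      # terminates for every n -- here a Collatz-style iteration whose
      # termination for all n is a famous open problem, and in general an
      # instance of the Halting Problem.
      while n != 1:
          n = n // 2 if n % 2 == 0 else 3 * n + 1
      return "reachable only if the loop always terminates"

A compiler confronted with such code must either leave the statement in place or prove termination for this particular loop; no general procedure can settle every case.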

Despite these theoretical limitations, compiler writers have developed various approaches to build practical optimizers:

  1. Conservative approximation: When in doubt about a property, assume the less beneficial case. This ensures correctness at the expense of missing some optimization opportunities.
  2. Restricting the domain: Creating programming languages with restricted features that make optimization problems decidable.
  3. Heuristics: Using heuristic approaches that work well for common cases, even if they aren't optimal for all programs.
  4. Profiling and feedback: Using runtime data to guide optimization decisions rather than trying to prove properties statically.

A notable achievement in this area is the CompCert project, which provides a formally verified C compiler. CompCert's optimizations come with mathematical proofs that the optimized code preserves the semantics of the original program. This represents a balance between theoretical limitations and practical compiler design.

Verification and Formal Methods Boundaries

Formal verification aims to mathematically prove that a system satisfies its specification. However, computational limits place significant constraints on what can be verified.

Definition: Formal Verification

Formal verification is the process of checking whether a system S satisfies a formal specification φ, typically written as S ⊨ φ.

The fundamental verification challenges stem from undecidability results:

  • Program equivalence: Determining if two programs compute the same function is undecidable.
  • Program termination: The Halting Problem shows that verifying a program always terminates is undecidable.
  • Reachability analysis: Determining if a program can reach a specified "bad" state is undecidable in general.
  • Liveness properties: Verifying that something good eventually happens (e.g., a request is eventually processed) is undecidable.

However, several approaches have been developed to make verification practical within these theoretical boundaries:

  1. Bounded verification: Checking properties up to a limited number of steps or inputs, rather than for all possible executions.
  2. Abstract interpretation: Working with abstract representations of program states that enable sound approximations.
  3. Interactive theorem proving: Using human guidance to help automated systems navigate undecidable problems.
  4. Domain-specific languages: Designing restricted languages where more properties become decidable.
  5. Model checking: Exhaustively checking all states in a finite-state model of a system.
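
As a minimal illustration of bounded verification and explicit-state model checking (the transition system here is a made-up toy, not a standard benchmark), the sketch below explores every state reachable within k steps and reports any that violates a safety property:

  from collections import deque

  def bounded_check(initial, successors, is_bad, k):
      """Breadth-first search of all states reachable in at most k steps.
      Returns a violating state if one is found within the bound, else None.
      Silence within the bound proves nothing about longer executions."""
      seen, frontier = {initial}, deque([(initial, 0)])
      while frontier:
          state, depth = frontier.popleft()
          if is_bad(state):
              return state
          if depth == k:
              continue
          for nxt in successors(state):
              if nxt not in seen:
                  seen.add(nxt)
                  frontier.append((nxt, depth + 1))
      return None

  # Toy system: a counter that may increment (mod 256) or reset to zero.
  counterexample = bounded_check(
      initial=0,
      successors=lambda s: [(s + 1) % 256, 0],
      is_bad=lambda s: s == 255,        # "bad" means the counter saturates
      k=300,
  )
  print(counterexample)                 # 255 is reachable within the bound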

Theorem: Semi-decidability of Proof Search

While general program verification is undecidable, provability in a fixed, effectively axiomatized proof system is semi-decidable: if a proof that P satisfies φ exists in the system, a search that enumerates candidate proofs and checks each one will eventually find it, and checking any individual candidate is fully algorithmic. What cannot be decided algorithmically, in general, is whether such a proof exists at all.

This asymmetry is the basis for many practical verification systems, which rely on human intuition to guide proof search, combined with automated tools that verify proof correctness.
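
A minimal sketch of that enumerate-and-check loop (candidate_proofs and check_proof stand for a hypothetical proof enumerator and the decidable proof checker of some fixed formal system):

  def search_for_proof(statement, candidate_proofs, check_proof):
      """Semi-decision procedure for provability: candidate_proofs() is assumed
      to enumerate every finite proof object of the system, and check_proof is
      the system's decidable proof checker.  If a proof of `statement` exists,
      this loop eventually returns it; if none exists, it runs forever -- which
      is exactly why provability is semi-decidable rather than decidable."""
      for proof in candidate_proofs():
          if check_proof(statement, proof):
              return proof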

Type Checking and Static Analysis

Type systems and static analysis provide powerful tools for ensuring program correctness, but they too face fundamental computational limits.

Definition: Type System

A type system is a formal system of rules that assigns types to program constructs and uses these types to determine whether a program is well-formed according to a set of typing rules.

Type systems and static analysis face several undecidability challenges:

  • Type inference for dependent types: Determining the most general type for expressions in languages with dependent types is undecidable.
  • Subtyping with recursive types: Deciding subtyping relationships in languages with both polymorphic and recursive types is undecidable.
  • Program termination analysis: Determining if all functions in a program terminate is undecidable, which affects total functional languages.
  • Precise flow analysis: Determining the exact set of values that can reach a program point is generally undecidable.

Theorem: Type-Checking vs. Type Inference

While type-checking (verifying that a program with explicit type annotations is well-typed) is decidable for many type systems, type inference (automatically determining types without annotations) becomes undecidable for sufficiently powerful type systems, including dependent types.

Programming language designers navigate these limitations through various approaches:

  1. Carefully designed type systems: Creating type systems that balance expressiveness with decidability.
  2. Type annotations: Requiring programmers to provide type information to make type checking decidable.
  3. Conservative approximations: Rejecting some valid programs to ensure decidability.
  4. Gradual typing: Combining static and dynamic type checking to get the benefits of both.

The Hindley-Milner type system, used in languages like ML and Haskell, represents a sweet spot that allows powerful polymorphic type inference while remaining decidable. More expressive systems, like dependent types in languages such as Coq and Agda, require more programmer guidance but can express richer properties.
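
At the heart of Hindley-Milner inference is first-order unification of type terms. The toy sketch below (type variables as strings, constructors as tuples; an illustrative representation, not ML's or Haskell's actual implementation, with the occurs check omitted for brevity) shows the core idea:

  def resolve(t, subst):
      """Follow variable bindings in the substitution until a non-variable
      (or an unbound variable) is reached."""
      while isinstance(t, str) and t in subst:
          t = subst[t]
      return t

  def unify(t1, t2, subst=None):
      """Return a substitution (dict) that makes the two type terms equal,
      or None if they clash.  A type is either a string (a type variable
      such as 'a') or a tuple such as ('int',) or ('->', arg, result)."""
      subst = {} if subst is None else subst
      t1, t2 = resolve(t1, subst), resolve(t2, subst)
      if t1 == t2:
          return subst
      if isinstance(t1, str):                      # bind a type variable
          subst[t1] = t2
          return subst
      if isinstance(t2, str):
          subst[t2] = t1
          return subst
      if t1[0] != t2[0] or len(t1) != len(t2):     # constructor clash
          return None
      for a, b in zip(t1[1:], t2[1:]):
          subst = unify(a, b, subst)
          if subst is None:
              return None
      return subst

  # Applying a function of type a -> a to an int forces a = int:
  print(unify(('->', 'a', 'a'), ('->', ('int',), 'b')))
  # {'a': ('int',), 'b': ('int',)}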

AI Capabilities and Limitations from a Computability Perspective

Computational limits also shape what artificial intelligence systems can and cannot achieve, regardless of advances in hardware or algorithms.

Definition: Artificial Intelligence from a Computational Perspective

Artificial Intelligence systems can be viewed as implementations of computable functions f: I → O mapping inputs from a domain I to outputs in a range O, where the function f is constrained by the limits of computability theory.

Key implications of computational limits for AI include:

  • No algorithm can solve all instances of undecidable problems: AI systems cannot provide general solutions to problems like the Halting Problem, regardless of how advanced they become.
  • Perfect general-purpose verification is impossible: No AI can perfectly verify that arbitrary programs meet their specifications.
  • Some patterns are inherently unpredictable: Sequence prediction faces fundamental limits when sequences encode outputs of uncomputable functions.
  • Resource bounds limit approximations: Even when problems are theoretically computable, resource constraints may make them practically infeasible.

Theorem: No Free Lunch Theorem for Machine Learning

All machine learning algorithms that search for an optimal mapping from inputs to outputs have identical expected performance when averaged over all possible problems. In other words, no machine learning algorithm can outperform random guessing when averaged across all possible target functions.
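
A tiny brute-force check conveys the flavor of this result (a minimal sketch on a three-point domain, not Wolpert's general setting): averaged uniformly over every possible boolean target function, any fixed prediction rule labels the unseen point correctly exactly half the time.

  from itertools import product

  DOMAIN = [0, 1, 2]          # three possible inputs
  TRAIN, TEST = [0, 1], 2     # two points observed, one held out

  def average_accuracy(learner):
      """Average the learner's accuracy on the held-out point over all
      2**3 possible boolean target functions on the domain."""
      total = 0
      for labels in product([0, 1], repeat=len(DOMAIN)):
          target = dict(zip(DOMAIN, labels))
          training_data = [(x, target[x]) for x in TRAIN]
          total += (learner(training_data, TEST) == target[TEST])
      return total / 2 ** len(DOMAIN)

  # Two very different "learning algorithms"...
  majority_vote = lambda data, x: round(sum(y for _, y in data) / len(data))
  always_zero   = lambda data, x: 0

  # ...have identical average off-training-set accuracy: 0.5 for both.
  print(average_accuracy(majority_vote), average_accuracy(always_zero))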

Despite these fundamental limits, AI research has made substantial progress by:

  1. Domain adaptation: Specializing algorithms for specific problem domains where useful approximations can be made.
  2. Heuristic approaches: Developing methods that work well in practice for common cases, even if they don't have theoretical guarantees.
  3. Interactive systems: Combining AI algorithms with human guidance to overcome computational limitations.
  4. Probabilistic methods: Using statistical approaches to provide useful answers even when perfect solutions are uncomputable.

These computational limits also inform discussions about AI capabilities and risks. While AI systems face fundamental constraints on what they can compute, they can still reach or exceed human capabilities in many domains because humans face these same computational limits.

Beyond Classical Computation

Hypercomputation Models and Physical Realizability

Hypercomputation refers to theoretical models of computation that can compute functions beyond those computable by a Turing machine. While mathematically intriguing, the physical realizability of these models remains highly controversial.

Definition: Hypercomputation

A hypercomputer is a theoretical computational model that can compute functions not computable by a Turing machine, such as solving the Halting Problem or computing Busy Beaver values.

Several hypercomputation models have been proposed:

  • Oracle machines: Turing machines with access to "black boxes" that can answer specific undecidable questions.
  • Infinite-time Turing machines: Machines whose computations are allowed to run for transfinite ordinal time.
  • Analog computers: Machines operating on continuous values with infinite precision.
  • Relativistic computers: Devices exploiting relativistic time dilation to perform infinite computations.
  • Quantum computers with postulated abilities: Speculative quantum models beyond the standard quantum circuit model.

Theorem: Physical Realizability Constraints

Most hypercomputation models require at least one physically problematic feature: infinite precision, infinite energy, infinite space, or processes completed in zero time. Under currently accepted physical theories, such features appear to be physically impossible.

The common critique of hypercomputation revolves around several issues:

  1. Infinite precision: Physical measurement has fundamental quantum limits on precision.
  2. Infinite energy: Performing an infinite number of operations would require infinite energy, violating conservation laws.
  3. Infinite memory: No physical system can store an infinite amount of information in finite space.
  4. Completion of supertasks: Completing infinitely many steps in finite time appears to violate physical laws.
  5. Determinism: Some models require perfect predictability in systems where quantum effects introduce fundamental uncertainty.

Despite these challenges, hypercomputation models remain valuable for:

  • Understanding the structure of uncomputability
  • Exploring the boundaries between mathematics and physics
  • Developing hierarchies of uncomputability (e.g., arithmetic hierarchy)
  • Inspiring new approaches in theoretical computer science

Infinite-time Turing Machines

Infinite-time Turing machines (ITTMs) extend the classical Turing machine model by allowing computations to continue for transfinite ordinal time.

Definition: Infinite-time Turing Machine

An infinite-time Turing machine operates like a standard Turing machine but:

  1. At limit ordinal steps (after infinitely many previous steps), the machine enters a special limit state.
  2. Each cell on the tape is set to the lim sup of its previous values (1 if it was 1 cofinally often, 0 otherwise).
  3. The head is reset to the leftmost cell.
  4. Computation continues according to the transition function for the limit state.

Infinite-time Turing machines can solve many problems that are uncomputable for standard Turing machines:

  • They can decide the Halting Problem for standard Turing machines.
  • They can compute the Busy Beaver function values.
  • They can decide membership in any arithmetical set.
  • They can compute truth values for first-order arithmetic statements.

Theorem: Limits of Infinite-time Turing Machines

Despite their increased power, infinite-time Turing machines still face undecidable problems. Specifically, the halting problem for infinite-time Turing machines themselves is undecidable by infinite-time Turing machines.

Infinite-time computation leads to a rich hierarchy of computational power:

  1. Standard computable functions: Computable by Turing machines in finite time.
  2. Δ¹₁ functions: Computable by ITTMs in fewer than ω² steps.
  3. Σ¹₁ functions: Approximable by ITTMs with increasingly accurate outputs.
  4. Higher levels: A complex hierarchy of functions requiring different ordinal bounds of computation time.

The theory of infinite-time computation provides a precise mathematical framework for understanding computability beyond the Turing barrier, even if the physical implementation of such machines remains speculative at best.

Blum-Shub-Smale Machines and Real Computation

Blum-Shub-Smale (BSS) machines provide a theoretical model for computation over the real numbers, offering a different perspective on the boundaries of computation.

Definition: Blum-Shub-Smale Machine

A BSS machine is a theoretical model of computation that:

  1. Operates on real numbers as basic units of computation
  2. Can perform exact arithmetic operations (+, -, ×, ÷) on real numbers in single steps
  3. Can test exact equality and order relationships between real numbers
  4. Uses branching based on these comparisons

BSS machines differ from Turing machines in several important ways:

  • They can directly process uncountably many different values (all real numbers).
  • They operate on real numbers as atomic entities, not as digit sequences.
  • They can make exact comparisons between any two real numbers.
  • Their complexity classes differ from those of classical complexity theory.

Theorem: Computational Power of BSS Machines

BSS machines can decide some problems over the reals that no Turing machine working on digit representations (the bit model of computable analysis) can, such as:

  • Deciding exact equality of two real numbers, and evaluating discontinuous functions such as the sign function, in a single step
  • Deciding whether a given real number is an integer (sketched below)
  • Solving some classically undecidable problems when their answers are encoded into real-valued machine constants

However, they cannot compute everything. In particular, sets such as the rational numbers and the Mandelbrot set are not BSS-decidable, and BSS machines have their own halting problem that is undecidable within the BSS model.
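
To convey the flavor of such a computation, the sketch below decides the integer test from the list above using only exact arithmetic, exact comparisons, and branching. Python's Fraction is used here as a stand-in for exact real arithmetic; it can of course represent only rational inputs, whereas a genuine BSS machine manipulates arbitrary reals as atomic values.

  from fractions import Fraction

  def is_integer(x: Fraction) -> bool:
      """BSS-style decision procedure: repeatedly subtract 1, then test exact
      equality with 0.  Every arithmetic step and comparison is exact,
      mirroring the BSS model's primitive operations; the loop terminates
      after finitely many iterations for any input."""
      x = abs(x)
      while x >= 1:
          x -= 1
      return x == 0

  print(is_integer(Fraction(7)), is_integer(Fraction(22, 7)))   # True False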

The physical realizability of BSS machines is highly questionable because:

  1. Infinite precision: Physical systems appear to be limited to finite precision by quantum effects.
  2. Infinite information: Representing arbitrary real numbers would require infinite information storage.
  3. Exact equality testing: Physical measuring devices have inherent error margins.
  4. Energy requirements: Manipulating infinite-precision values would require unbounded energy.

Nevertheless, BSS machines provide valuable insights into:

  • Mathematical models for numerical algorithms
  • Computational complexity of problems in scientific computing
  • Theoretical foundations for analog computation
  • Alternative formulations of computability theory

The BSS model highlights how different computational paradigms lead to different notions of computability, and reminds us that our standard model of computation is just one possible formalization among many.

Information Complexity

Kolmogorov Complexity Fundamentals

Algorithmic information theory, centered around Kolmogorov complexity, provides a deep connection between information, randomness, and computability.

Definition: Kolmogorov Complexity

The Kolmogorov complexity K(x) of a string x is the length of the shortest program that produces x when run on a universal Turing machine U:

K(x) = min{|p| : U(p) = x}

Where |p| denotes the length of program p.

Key properties of Kolmogorov complexity include:

  • Machine invariance: Complexity defined using different universal Turing machines differs by at most a constant.
  • Incompressibility: Most strings are incompressible, meaning their Kolmogorov complexity is close to their length.
  • Non-computability: The function K(x) is not computable; no algorithm can calculate it for all inputs.
  • Subadditivity: The complexity of a concatenation is at most roughly the sum of the individual complexities (K(xy) ≤ K(x) + K(y) up to a small additive term), and it can be far smaller, as when y = x.
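
Although K(x) itself cannot be computed, any real compressor yields a computable upper bound on it, which is how the idea is used in practice (the choice of zlib here is just illustrative):

  import os
  import zlib

  def complexity_upper_bound(x: bytes) -> int:
      """Length in bits of a zlib-compressed description of x.  This is only
      an upper bound on K(x): the true complexity never exceeds a fixed
      compressor's output by more than a constant, but it may be far smaller,
      and no algorithm can tell us by how much."""
      return 8 * len(zlib.compress(x, 9))

  structured = b"ab" * 5000        # highly regular: compresses dramatically
  random_ish = os.urandom(10000)   # incompressible with overwhelming probability
  print(complexity_upper_bound(structured) < complexity_upper_bound(random_ish))  # True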

Theorem: Non-computability of Kolmogorov Complexity

There is no algorithm that, given a string x, computes K(x) for all possible inputs x.

Proof: Proof of Non-computability

Assume, for contradiction, that K(x) is computable by some algorithm A.

Consider the following program P:

  1. Input a number n
  2. Enumerate all strings in lexicographic order
  3. For each string s, use algorithm A to compute K(s)
  4. Find the first string s such that K(s) ≥ n
  5. Output s

Let c be the length of program P. For any n, P(n) outputs a string s with K(s) ≥ n.

But note that P(n) is itself a program of length c + log(n) (where log(n) is the number of bits needed to encode n) that produces s. Therefore:

K(s) ≤ c + log(n)

For large enough n, we have c + log(n) < n, which means:

K(s) ≤ c + log(n) < n ≤ K(s)

This is a contradiction. Therefore, K(x) cannot be computable.
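
The program P of this proof can be written out directly, with the assumed algorithm A passed in as a (necessarily hypothetical) parameter:

  from itertools import count, product

  def program_P(n, K):
      """The program P from the proof.  K stands for the assumed algorithm A
      that computes Kolmogorov complexity; the argument shows no such
      algorithm can exist.  Strings are enumerated in length-then-
      lexicographic order, so the first string with K(s) >= n is returned --
      yet this short search, plus the input n, describes s in roughly
      c + log2(n) bits, contradicting K(s) >= n for large n."""
      for length in count(0):
          for bits in product("01", repeat=length):
              s = "".join(bits)
              if K(s) >= n:
                  return s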

Despite its non-computability, Kolmogorov complexity has profound applications:

  • Providing a foundation for the mathematical definition of randomness
  • Establishing minimum bounds on data compression
  • Quantifying information content independent of any probability distribution
  • Offering a principled approach to inductive inference

Algorithmic Information Theory

Algorithmic Information Theory (AIT) extends Kolmogorov complexity to develop a comprehensive theory of information, induction, and probability based on computation.

Definition: Universal Probability

The universal probability of a string x, denoted m(x), is defined as:

m(x) = Σ_{p : U(p) = x} 2^(−|p|)

Where the sum is over all programs p that produce x when run on a universal prefix-free Turing machine U.

Key principles and results from Algorithmic Information Theory include:

  • Solomonoff Induction: A formal solution to the problem of induction using universal probability.
  • Coding Theorem: The negative logarithm of universal probability is approximately equal to Kolmogorov complexity.
  • Algorithmic Mutual Information: A measure of shared information between strings defined via Kolmogorov complexity.
  • Minimum Description Length: A principle for model selection based on finding the shortest description of data.

Theorem: The Coding Theorem

−log₂ m(x) = K(x) + O(1)

This theory has profound implications for understanding the limits of knowledge and prediction:

  1. Limits of prediction: Some sequences are fundamentally unpredictable because their generating mechanisms are too complex.
  2. Non-computability of optimal prediction: The universal prior that provides optimal predictions is itself non-computable.
  3. Resource-bounded approximations: Practical approximations must balance optimality with computational feasibility.
  4. Incompleteness in induction: No mechanical inductive system can capture all patterns that humans might recognize.

Despite these limitations, AIT provides the theoretically optimal framework for prediction and induction, serving as an idealized standard against which practical methods can be evaluated.

Incompressibility and Randomness

Kolmogorov complexity provides a precise mathematical definition of randomness based on incompressibility, connecting computability theory to the foundations of probability.

Definition: Algorithmic Randomness

A finite string x is c-incompressible if K(x) ≥ |x| - c.

An infinite sequence ω is Martin-Löf random if there exists a constant c such that every length-n prefix ω↾n satisfies:

K(ω↾n) ≥ n − c

Key insights about algorithmic randomness include:

  • Abundance of random strings: Almost all strings of length n are incompressible.
  • Multiple equivalent definitions: Martin-Löf randomness, Kolmogorov randomness, and other approaches define the same class of random sequences.
  • Unpredictability: Random sequences cannot be predicted better than chance by any algorithm.
  • Statistical typicality: Random sequences pass every computable statistical test, and therefore satisfy statistical laws such as the law of large numbers.

Theorem: Incompressibility Theorem

For any length n and any constant c, the probability that a randomly selected string x of length n has K(x) < n − c is less than 2^(1−c).

The incompressibility method, based on these principles, has become a powerful mathematical proof technique:

  1. Average-case analysis: Using incompressible inputs to establish typical behavior of algorithms.
  2. Lower bounds: Proving that algorithms must use at least a certain amount of resources in the worst case.
  3. Randomized constructions: Showing that certain structures must exist by demonstrating that random examples have desired properties.
  4. Communication complexity: Establishing limits on how efficiently distributed systems can communicate.

Interestingly, despite being central to the definition of randomness, Kolmogorov complexity itself is non-computable. This leads to a profound insight: we can define randomness precisely, but we cannot reliably detect it algorithmically. This represents another manifestation of computational limits, connecting questions of information, randomness, and computability.

Summary

  • Theoretical Boundaries: Gödel's Incompleteness Theorems, Tarski's undefinability of truth, and the Busy Beaver function demonstrate fundamental limits to what formal systems and algorithms can achieve, establishing precise boundaries to computation.
  • Practical Undecidability: These theoretical limits manifest in practical domains like compiler optimization, program verification, type checking, and artificial intelligence, necessitating approximations, heuristics, and human guidance in real-world systems.
  • Beyond Classical Computation: Hypercomputation models like infinite-time Turing machines and Blum-Shub-Smale machines explore what might lie beyond standard computation, but face serious challenges to physical realizability.
  • Information Complexity: Kolmogorov complexity provides a precise, computation-based definition of information content and randomness, establishing deep connections between computability, complexity, and information theory.
  • Meta-limitations: Many of these theories themselves demonstrate a profound meta-limitation: we can precisely define concepts like Kolmogorov complexity and optimal induction, but cannot compute them, highlighting the gap between definition and computability.

Suggested Reading

  • Boolos, G., Burgess, J.P., Jeffrey, R. Computability and Logic – Comprehensive treatment of Gödel's theorems and related results
  • Li, M., Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications – Definitive treatment of algorithmic information theory
  • Hamkins, J.D., Lewis, A. Infinite time Turing machines – Journal of Symbolic Logic, 65(2) – Key paper on infinite time computation
  • Blum, L., Cucker, F., Shub, M., Smale, S. Complexity and Real Computation – Foundational work on BSS machines
  • Leroy, X. Formal Verification of a Realistic Compiler – Communications of the ACM, 52(7) – On CompCert and formal compiler verification
  • Copeland, B.J. Hypercomputation – Minds and Machines, 12(4) – Survey of hypercomputation models and their limitations
  • Davis, M. The Myth of Hypercomputation – Alan Turing: Life and Legacy of a Great Thinker – Critique of hypercomputation
  • Calude, C.S. Information and Randomness: An Algorithmic Perspective – In-depth treatment of algorithmic randomness