Truth-conditional semantics has been successful in explaining how the meaning of a sentence can be decomposed into the meanings of its parts, and how this allows people to understand new sentences. In this talk, I will show how a truth-conditional model can be learnt in practice on large-scale datasets of various kinds (textual, visual, ontological), and how it is empirically useful compared with non-truth-conditional models. I will then take stock of the bigger picture, and argue that it is (unfortunately) computationally intractable to reduce all kinds of language understanding to truth conditions. To enable a more complete account, I will sketch a new approach to probabilistic modelling, which maintains tractability by relaxing the strict demands of Bayesian inference. This has the potential to explain how patterns of language use arise as a result of computationally constrained minds interacting with a computationally demanding world.