A core tenet of linguistic theory is the principle of compositionality, which holds that the meaning of a multi-word utterance derives directly from the meanings of its individual words and the rules by which they are combined. Attempts at unifying formal and computational models of lexical and compositional semantics have proven challenging and often yield complex frameworks in which word- and utterance-level meanings are patched together into a whole without fully integrating their semantic contributions. In this talk, I revisit the principle of compositionality from the perspective of the neurocognition of language, which reveals that the human comprehension system harnesses distinct models for lexical and compositional meaning, and that these models are critically intertwined in a cyclic architecture for language comprehension. I present the outlines of a neurocomputational model that implements this notion of compositional integration, and discuss the implications of this integrative approach for modeling the meaning of words and utterances.