Latent Semantic Analysis is a machine learning technique that constructs a high-dimensional semantic space from a large corpus of texts. This semantic space represents the knowledge that LSA has acquired from these texts. (For further information on LSA, see http://lsa.colorado.edu.) My current research is concerned with formulating a psychological semantics based on LSA. LSA is a very powerful tool: the meanings of words, as well as of sentences or whole texts, can be represented as vectors in the semantic space. Hence these meanings can be manipulated and readily compared with one another, as well as with human semantic intuitions. However, LSA is only the basis for a semantic theory; it is not the complete theory by itself. LSA accounts for the associative structure of knowledge; what needs to be filled in are the psychological processes that make use of this structure in various ways, thereby creating the rich and complex world of human semantics.

I have been particularly interested in the problem of context: words can change their meaning or sense when used in different contexts. In LSA, each word is represented as a single vector, irrespective of how many meanings and senses it has. The Predication Model explores how context-sensitive word meanings and senses emerge when words are used in context.
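To make the vector-space idea above concrete, the following is a minimal sketch of an LSA-style pipeline: a word-by-document count matrix is reduced by a truncated singular value decomposition, and words or whole texts are then compared by the cosine of their vectors. The toy corpus, the number of dimensions, and the helper names (cosine, text_vector) are illustrative assumptions, not the actual system at lsa.colorado.edu, which is trained on much larger corpora, typically with weighting of the raw counts and a few hundred latent dimensions.

    import numpy as np

    # Toy word-by-document co-occurrence counts (rows = words, columns = documents).
    vocab = ["bridge", "collapsed", "stock", "market", "river"]
    counts = np.array([
        [2, 0, 0, 1],   # bridge
        [1, 0, 1, 0],   # collapsed
        [0, 3, 1, 0],   # stock
        [0, 2, 2, 0],   # market
        [1, 0, 0, 2],   # river
    ], dtype=float)

    # LSA step: reduce the matrix with a truncated SVD to obtain the semantic space.
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    k = 2                                # number of latent dimensions (toy value)
    word_vectors = U[:, :k] * s[:k]      # each row is a word's vector in the space

    def cosine(a, b):
        """Similarity of two vectors in the semantic space."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def text_vector(words):
        """A sentence or text represented as the sum (centroid) of its word vectors."""
        idx = [vocab.index(w) for w in words]
        return word_vectors[idx].sum(axis=0)

    # Words and whole texts can now be compared directly.
    print(cosine(word_vectors[vocab.index("stock")],
                 word_vectors[vocab.index("market")]))
    print(cosine(text_vector(["bridge", "collapsed"]),
                 text_vector(["stock", "market", "collapsed"])))

In a sketch like this, "collapsed" has one and the same vector whether it occurs with "bridge" or with "market"; a context-sensitive account such as the Predication Model would instead modify that vector according to the argument it is combined with.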