Based on research in cognitive linguistics, Louw and Nida (LN) provide a functional classification of the lexis of the New Testament that attempts to partition Greek words into broad domains and subdomains of meaning. The LN lexicon therefore allows the interpreter of the New Testament to explore general semantic patterns that stretch across the text without being tied to a single word or cognate, and so provides an additional way of investigating lexical patterning that extends beyond the simple word counts common in traditional methods.1 This kind of analysis can be performed by plotting semantic domain clusters in the respective chapters/sections of a given book through a variety of mapping techniques.2 Some semantic domains will not be all that interesting, owing to their frequent occurrence in the language or to the size of the domain. Domain 33 (Communication), for example, will often not be very significant because of its frequency across registers.3
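The mapping procedure just described can be sketched computationally. The following is a minimal illustration, not the project's actual tooling: it assumes a hypothetical annotated text in which each token carries its chapter number and an LN domain number, and tallies domain occurrences per chapter so that clusters (or the expected background noise of a domain like 33) become visible.

```python
from collections import Counter

# Hypothetical annotation: (lemma, chapter, LN domain number).
# The lemmas and domain assignments below are illustrative only;
# domain 33 is "Communication" and domain 25 "Attitudes and Emotions" in LN.
annotated = [
    ("logos", 1, 33), ("lego", 1, 33), ("agape", 1, 25),
    ("lego", 2, 33), ("phileo", 2, 25), ("elpis", 2, 25),
]

def domain_profile(tokens):
    """Count LN domain occurrences per chapter of the annotated text."""
    profile = {}
    for lemma, chapter, domain in tokens:
        profile.setdefault(chapter, Counter())[domain] += 1
    return profile

profile = domain_profile(annotated)
# profile[1][33] gives the count of domain 33 (Communication) in chapter 1
```

A profile of this kind could then feed any of the mapping techniques mentioned above, e.g. a per-chapter bar chart or a domain-by-chapter heat map.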

Theoretical support for this approach may be sought in recent developments in cognitive linguistics and psycholinguistics dealing with text processing strategies. Graesser et al. summarize these studies as follows:

The human mind actively constructs various types of cognitive representations (that is, codes, features, meanings, structured sets of elements) that interpret linguistic input. These cognitive representations may incorporate words, syntax, sentential semantics, speech acts, dialogue patterns, rhetorical structures, pragmatics, real and imaginary worlds.... Each cognitive level is important during the processes of comprehending text and talk.4

An aspect of this research which is especially relevant to the study of semantic domains involves the concept of schema or schemata. “A schema is a data structure for representing the generic concepts of stored memory. Although schemata represent knowledge at many levels of abstraction, research in text comprehension has focused on schemata dealing with relatively high-level aspects of text structure.”5 One way in which texts can be structured out of the cognitive organization of the author’s schemata is through the use of lexis. This idea also seems intuitively compelling: if we were asked to make a list of all the words we know, it would be far easier and more natural to list them by category (e.g. physical objects, animate objects, etc.) than by alphabetical order, apart from any functional distinction.6 As Umberto Eco has shown, our knowledge seems to be organized encyclopedically, according to topic and integrative relations, not alphabetically.7

Jeffrey Reed has done pioneering work in this area, developing a semantic domain theory in terms of Halliday and Hasan’s theory of organic and componential ties. In contrast to LN, he restricted his research to a single letter, which resulted in an investigation of semantic chains throughout a discourse rather than semantic domains across a corpus.8 But LN have their own restrictions insofar as their classification is confined to words from the NT.

This exclusive focus upon the NT brings up a significant limitation of the LN project as it currently stands: it is limited to the semantic analysis of the New Testament. Members of the project are currently working toward an expansion of the LN lexicon that will better accommodate its use in investigating the language and literature of the Hellenistic period in general and of the New Testament in particular. One of the major proposed additions is the inclusion of a representative sample of Hellenistic Greek texts outside the corpus of the New Testament. Not only will this provide a broader base for our understanding of the Greek of the New Testament, it will also allow for more detailed annotation of Hellenistic texts outside of the New Testament.

Of course, investigation along these lines raises crucial methodological issues, since many words have more than one semantic domain within their range of meaning. A process of semantic domain disambiguation must therefore be undertaken before the analysis is performed, at least if one is using annotated texts; otherwise the disambiguation could be performed manually as each word is located in the lexicon. Many domains are obvious from context, but some do involve a certain amount of interpretation in the disambiguation process.9 This is another area currently under development by the research partners of the project. Specifically, we are working toward a range of criteria for domain disambiguation, involving contextual factors such as collocations, register, and co-text. While these criteria will not circumvent interpretation entirely, they will allow the contextual frame in which the lexical item occurs to be more determinative in the disambiguation process. So while the project does incorporate the LN materials as the basis for the domain selections in its functional displays, it recognizes that the theory is still under development and in need of expansion and revision.
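One of the contextual criteria named above, collocation, can be illustrated with a simple sketch. This is not the project's disambiguation procedure, only a toy model under stated assumptions: the domain numbers and collocate sets are invented for illustration, and the rule is simply to choose whichever candidate domain shares the most collocates with the word's co-text.

```python
# Illustrative only: candidate LN domain numbers mapped to typical
# collocates. These collocate sets are hypothetical, not drawn from LN.
DOMAIN_COLLOCATES = {
    15: {"go", "come", "road"},    # 15 = Linear Movement in LN
    33: {"say", "word", "speak"},  # 33 = Communication in LN
}

def disambiguate(candidates, cotext):
    """Pick the candidate domain whose collocates best overlap the co-text."""
    scores = {
        d: len(DOMAIN_COLLOCATES.get(d, set()) & set(cotext))
        for d in candidates
    }
    return max(scores, key=scores.get)

# A word ambiguous between domains 15 and 33, occurring in a speech context:
best = disambiguate([15, 33], ["he", "say", "word", "them"])
```

A fuller implementation would weight register and wider co-text as well, but the sketch shows how the contextual frame, rather than the interpreter alone, can drive the selection.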

A final point of theoretical importance in the incorporation of LN within the project is a difference in emphasis. The application of semantic domain theory which the project promotes is quite different from what LN seem to have originally intended for the lexicon. The project classifies the data in a way that will be useful for pragmatic analysis, while the arrangement by LN suggests a more semantic orientation. The purpose of the LN lexicon is to catalogue and classify patterns of meaning in the NT; the project wants to go beyond this and ask what implications the distribution and density of these patterns might have for discourse analysis and other linguistically based approaches to interpreting Greek discourse. Though the applications which have emerged from the model have incorporated the lexicon in an innovative way, they have proven immensely helpful in illuminating the text, and we are excited about the prospects of further studies along these lines.