Semantic Domain Theory: An Introduction to the Use of the Louw-Nida Lexicon in the OpenText.org Project
by Andrew W. Pitts (02/16/2006)
Based on research in cognitive linguistics, Louw and Nida (LN) provide a functional classification of the lexis of the New Testament that attempts to partition Greek words into broad domains and subdomains of meaning. The LN lexicon therefore allows the interpreter of the New Testament to explore general semantic patterns that stretch across the text without necessarily being tied to one word or cognate. It thus provides an additional way of investigating lexical patterning that extends beyond the simple word counts common in traditional methods.1 This kind of analysis can be performed by plotting semantic domain clusters in the respective chapters/sections of a given book through a variety of mapping techniques.2 Some semantic domains will not be all that interesting, whether because of their frequency in the language or because of the size of the domain. Domain 33 (Communication), for example, will often not be very significant because of its frequency across registers.3
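The kind of mapping described above can be sketched in a few lines. The following is a minimal illustration only, assuming a hypothetical annotation of tokens with chapter numbers and LN top-level domain numbers; the token data is invented, and OpenText.org's actual annotation format is not represented here.

```python
from collections import Counter, defaultdict

# Hypothetical annotated tokens: (chapter, transliterated word, LN domain number).
# Domain numbers follow the Louw-Nida top-level scheme (e.g. 33 = Communication,
# 25 = Attitudes and Emotions); the data itself is invented for illustration.
tokens = [
    (1, "logos", 33), (1, "agape", 25), (1, "lego", 33),
    (2, "lego", 33), (2, "chara", 25), (2, "elpis", 25),
]

def domain_profile(tokens):
    """Tally how often each semantic domain occurs in each chapter."""
    profile = defaultdict(Counter)
    for chapter, _word, domain in tokens:
        profile[chapter][domain] += 1
    return profile

profile = domain_profile(tokens)
print(profile[1][33])  # 2: chapter 1 is weighted toward domain 33 (Communication)
print(profile[2][25])  # 2: chapter 2 toward domain 25 (Attitudes and Emotions)
```

A per-chapter profile of this kind is the raw material for the cluster plots mentioned above: one can then chart each domain's density across successive chapters and look for sections where a domain spikes.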
Theoretical support for this approach may be sought in recent developments in cognitive linguistics and psycholinguistics dealing with text-processing strategies. Graesser et al. summarize these studies as follows:
The human mind actively constructs various types of cognitive representation (that is, codes, features, meanings, structured sets of elements) that interpret linguistic input. These cognitive representations may incorporate words, syntax, sentential semantics, speech acts, dialogue patterns, rhetorical structures, pragmatics, real and imaginary worlds.... Each cognitive level is important during the processes of comprehending text and talk.4
An aspect of this research which is especially relevant to the study of semantic domains involves the concept of schema or schemata. “A schema is a data structure for representing the generic concepts of stored memory. Although schemata represent knowledge at many levels of abstraction, research in text comprehension has focused on schemata dealing with relatively high-level aspects of text structure.”5 One way in which texts can be structured out of the cognitive organization of the author’s schemata is through the use of lexis. This idea also seems intuitively compelling: if we were asked to make a list of all the words we know, it would be far easier and more natural to list them by category (e.g. physical objects, animate objects, etc.) than in alphabetical order, apart from any functional distinction.6 As Umberto Eco has shown, our knowledge seems to be organized encyclopedically, according to topic and integrative relations, not alphabetically.7
Jeffrey Reed has done pioneering work in this area by developing a semantic domain theory in terms of Halliday and Hasan’s theory of organic and componential ties. In contrast to LN, he restricted his research to a single letter, which resulted in an investigation of semantic chains throughout the discourse rather than of semantic domains across a corpus.8 But LN have their own restrictions insofar as their classification is confined to words from the NT.
This exclusive focus upon the NT points to a significant limitation of the LN project as it currently stands: it is limited to the semantic analysis of the New Testament. Members of the OpenText.org project are currently working toward an expansion of the LN lexicon that will better accommodate its use in the investigation of the language and literature of the Hellenistic period in general and of the New Testament in particular. One of the major additions that has been proposed is the inclusion of a representative sample of Hellenistic Greek texts outside the corpus of the New Testament. Not only will this provide a broader base for our understanding of the Greek of the New Testament, it will also allow for more detailed annotation of Hellenistic texts outside the New Testament.
Of course, investigation along these lines raises crucial methodological issues, since many words have more than one semantic domain within their range of meaning. A process of semantic domain disambiguation must therefore be undertaken before the analysis is performed, at least if one is using annotated texts; otherwise the disambiguation procedure could be performed manually as each word is located in the lexicon. Many domains are obvious from context, but some do involve a certain amount of interpretation in the disambiguation process.9 This is another area currently under development by the research partners of the OpenText.org project. Specifically, we are working toward a range of criteria for domain disambiguation, involving contextual factors such as collocation, register, and co-text. While these criteria will not circumvent interpretation entirely, they will allow the contextual frame in which the lexical item occurs to be more determinative in the disambiguation process. So while OpenText.org does incorporate the LN materials as the basis for the domain selections in its functional displays, it recognizes that the theory is still under development and in need of expansion and revision.
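How contextual criteria might make the frame "more determinative" can be illustrated with a toy rule-based sketch. Everything below is an assumption for illustration: the candidate domains, the cue words, and the scoring are invented, and this is not OpenText.org's actual disambiguation procedure, which remains under development.

```python
# Candidate LN domains per lexeme. In LN, for example, pneuma has senses under
# both domain 12 (Supernatural Beings) and domain 14 (Physical Events: wind);
# the cue lists below are invented for illustration.
CANDIDATES = {
    "pneuma": [12, 14],
}

# Co-text words (collocational cues) that favour a given domain for a lexeme.
CUES = {
    ("pneuma", 12): {"hagios", "theos"},     # "holy", "God" -> spirit sense
    ("pneuma", 14): {"thalassa", "ploion"},  # "sea", "boat" -> wind sense
}

def disambiguate(word, cotext):
    """Score each candidate domain by how many of its cue words appear in the
    co-text; fall back to the first-listed domain when no cue fires."""
    candidates = CANDIDATES.get(word, [])
    if not candidates:
        return None
    scored = [(len(CUES.get((word, d), set()) & set(cotext)), d)
              for d in candidates]
    best_score, best_domain = max(scored)
    return best_domain if best_score > 0 else candidates[0]

print(disambiguate("pneuma", ["hagios", "theos"]))  # 12 (spirit)
print(disambiguate("pneuma", ["thalassa"]))         # 14 (wind)
```

A realistic implementation would weight register and wider co-text alongside collocation, but even this sketch shows the design point made above: the rules narrow the choice, while genuinely ambiguous cases (no cue fires) still fall back to a default that an interpreter may need to overrule.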
A final point of theoretical importance in the incorporation of LN within the OpenText.org project is a difference in emphasis. The application of semantic domain theory that OpenText.org promotes is quite different from what LN seem to have originally intended for the lexicon. OpenText.org classifies the data in a way that will be useful for pragmatic analysis, while the arrangement by LN suggests a more semantic orientation. The purpose of the LN lexicon is to catalogue and classify patterns of meaning in the NT. OpenText.org wants to go beyond this and ask what implications the distribution and density of these patterns might have for discourse analysis and other linguistically based approaches to interpreting Greek discourse. Though the applications that have emerged from the OpenText.org model incorporate the lexicon in an innovative way, they have proven immensely helpful in illuminating the text, and we are excited about the prospects of further studies along these lines.
1For critique of traditional approaches to lexicography as well as support for a semantic domain based approach see J.P. Louw and E.A. Nida, Greek-English Lexicon of the New Testament Based on Semantic Domains (2 vols.; 2nd ed.; New York: United Bible Societies, 1989), xi-xx; they offer a response to criticisms, many based upon misunderstandings, in their Lexical Semantics of the Greek New Testament: A Supplement to the Greek-English Lexicon of the New Testament (Atlanta: Society of Biblical Literature, 1992); see also S.E. Porter, “Linguistics and New Testament Lexicography”, in Studies in the Greek New Testament: Theory and Practice (SBG; New York: Peter Lang, 1996), pp. 52-73; A.C. Thiselton, “Semantics and New Testament Interpretation”, in New Testament Interpretation: Essays on Principles and Methods (Grand Rapids: Eerdmans, 1977); M. Silva, Biblical Words and their Meaning: An Introduction to Lexical Semantics (Rev. ed.; Grand Rapids: Zondervan, 1994).
2For a variety of suggestions, see M.B. O’Donnell, “The Use of Annotated Corpora for New Testament Discourse Analysis: A Survey of Current Practice and Future Prospects”, in S.E. Porter and J.T. Reed (eds.), Discourse Analysis and the New Testament: Approaches and Results (JSNTSup 170; SNTG 4; Sheffield: Sheffield Academic Press, 1999), pp. 112-17.
3S.E. Porter and M.B. O’Donnell, “Semantics and Patterns of Argumentation in the Book of Romans: Definitions, Proposals, Data, and Experiments”, in S.E. Porter (ed.), Diglossia and Other Topics in New Testament Linguistics (JSNTSup; Sheffield: Sheffield Academic Press, 2000), p. 163.
4A.C. Graesser, M.A. Gernsbacher and S.R. Goldman, “Cognition”, in T.A. van Dijk (ed.), Discourse as Structure and Process, Discourse Studies 1: A Multidisciplinary Introduction (London: Sage Publications, 1997), p. 293.
5G.H. Bower and R.K. Cirilo, “Cognitive Psychology and Text Processing”, in Handbook of Discourse Analysis, Volume 1: Disciplines of Discourse (London: Academic Press, 1985), p. 71; cf. G. Brown and G. Yule, Discourse Analysis (Cambridge Textbooks in Linguistics; Cambridge: Cambridge University Press, 1983), pp. 238-56; T.A. van Dijk, Macrostructures: An Interdisciplinary Study of Global Structures in Discourse, Interaction, and Cognition (Hillsdale, NJ: Lawrence Erlbaum Associates, 1980), pp. 225-59; Graesser et al., “Cognition”. In New Testament studies see J.T. Reed, A Discourse Analysis of Philippians: Method and Rhetoric in the Debate over Literary Integrity (JSNTSup 136; Sheffield: Sheffield Academic Press, 1997), pp. 76-78.
6Reed, Discourse Analysis, pp. 77-78.
7U. Eco, The Role of the Reader: Explorations in the Semiotics of Texts (Bloomington: Indiana University, 1979), pp. 7-9.
8Reed, Discourse Analysis, pp. 296-338.
9On semantic domain disambiguation see O’Donnell, “The Use of Annotated Corpora”, pp. 86-88.