
‘Knowing true things by what their mockeries be’: Modelling in the Humanities

Willard McCarty

King's College London

Willard.McCarty@kcl.ac.uk | www.kcl.ac.uk/humanities/cch/wlm/

CHWP A.24, publ. September 2003. © Editors of CHWP 2003. [Jointly published with TEXT Technology, 12.1 (2003), McMaster University.]


[Abstract / Résumé]

KEYWORDS / MOTS-CLÉS: modelling, knowledge representation, humanities computing, simulation, philosophy of science, Ovid, Metamorphoses, personification / modeler, représentation des connaissances, informatique en sciences humaines, simulation, philosophie des sciences, Ovid, Métamorphoses, personnification


1. Introduction
2. Modelling
3. In humanities computing: an example
4. Experimental practice
5. Knowledge representation & the logicist programme
6. Notes
7. Works Consulted


Introduction

At the beginning of their important book, Understanding Computers and Cognition, Terry Winograd and Fernando Flores declare that

All new technologies develop within a background of tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately what it is to be human. We encounter deep questions of design when we recognize that in designing tools we are designing ways of being (1986: xi).

Because of computing, Brown and Duguid note in The Social Life of Information, “We are all, to some extent, designers now” (2000: 4). For us no qualification is necessary, especially so because of our commitment to a computing that is of as well as in the humanities. So, Winograd and Flores' question is ours: what ways of being do we have in mind? And since we are scholars, this question is also, perhaps primarily, what ways of knowing do we have in hand? What is the epistemology of our practice?

Three years after the book by Winograd and Flores, Ian Lancashire brought together at the first joint ACH/ALLC conference in Toronto two scholarly outsiders to comment on our work as it then was: the archaeological theoretician Jean-Claude Gardin and the literary critic Northrop Frye. Their agreement about the central aspect of computing for the humanities and implicit divergence over how best to apply it provide the starting point for this article & my response to the epistemological question just raised.

Both Gardin, in “On the way we think and write in the humanities”, and Frye, in “Literary and mechanical models”, roughly agree on two matters:

1. Quantitative gains in the amount of scholarly data available and accessible are with certain qualifications a Good Thing -- but not the essential thing. Gardin compares the building of large resources to the great encyclopedia projects, which he regards as intellectually unimportant. (Perhaps this is a blindness of the time: it is now quite clear that the epistemological effects of these large resources are profound, though they clearly do not suit his interests and agenda.) Frye is less dismissive. He notes that the mechanically cumulative or 'Wissenschaft' notion of scholarship now has a machine to do it better than we can. But like Gardin his focus is elsewhere, namely --

2. Qualitatively better work, proceeding from a principled basis within each of the disciplines. This is the central point of computing for both Gardin & Frye: work that is disciplined, i.e. distinguishable from the intelligent but unreliable opinion of the educated layperson. Gardin takes what he calls a “scientific” approach to scholarship, meaning the reduction of scholarly argument to a Turing-machine calculus and the use of simulation to test the strength of arguments. Frye's interest is in studying the “archetypes” or “recurring conventional units” of literature; he directs attention to computer modelling techniques as the way to pursue this study.

Thus both scholars propose computing as a means of working toward a firmer basis for humanities scholarship. Gardin refers to “simulation”, Frye to “modelling”. Both of these rather ill-defined terms share a common epistemological sense: use of a likeness to gain knowledge of its original. We can see immediately why computing should be spoken of in these terms, as it represents knowledge of things in manipulable form, and thus allows us to simulate or model these things. Beyond the commonsense understanding, however, we run into serious problems of definition and so must seek help.

My intention here is to summarize the available help, chiefly from the history and philosophy of the natural sciences, especially physics, where much of the relevant work is to be found.[1] I concentrate on the term “modelling” because that is the predominant term in scientific practice -- but it is also, perhaps even by nature, the term around which the meanings I find most useful tend to cluster. I will then give an extended example from humanities computing and discuss the epistemological implications. Finally I will return to Jean-Claude Gardin's very different agenda, what he calls “the logicist programme”, and to closely allied questions of knowledge representation in artificial intelligence.

Modelling

As noted, I turn to the natural sciences for wisdom on modelling because that's where the most useful form of the idea originates. Its usefulness to us lies, I think, in the kinship between the chiefly equipment-orientated practices of the natural sciences and those of humanities computing.

Before we get started let me make two points:

1. This is not an attempt to declare humanities computing a “science” or through computing to make the humanities “scientific” (in the usual Anglo-American honorific sense of this term);

2. The view of the sciences taken here is philosophical and historical, i.e. the turn is to neighbouring fields of the humanities for what they can tell us about other equipment-orientated fields of enquiry.

The first interesting observation to be made is that despite its prevalence and deep familiarity in the natural sciences, a consensus on modelling is difficult to achieve. Indeed, modelling is hard to conceptualize. There is “no model of a model”, the Dutch physicist H. J. Groenewold declares (1960: 98), and the American philosopher Peter Achinstein warns us away even from attempting a systematic theory (1968: 203). Historians and philosophers of science, including both of these, have tried their best to anatomize modelling on the basis of the only reliable source of evidence -- namely actual scientific practice. But this is precisely what makes the conceptual difficulty significant: modelling grows out of practice, not out of theory, and so is rooted in stubbornly tacit knowledge. The ensuing struggle to understand its variety yields some useful distinctions, as Achinstein says. The fact of the struggle itself points us in turn to larger questions in the epistemology of practice -- to which I will return.

The most basic distinction is, in Clifford Geertz's terms, between “an ‘of’ sense and a ‘for’ sense” of modelling (1973: 93). A model of something is an exploratory device, a more or less “poor substitute” for the real thing (Groenewold 1960: 98). We build such models-of because the object of study is inaccessible or intractable, like poetry or subatomic whatever-they-are. In contrast a model for something is a design, exemplary ideal, archetype or other guiding preconception. Thus we construct a model of an airplane in order to see how it works; we design a model for an airplane to guide its construction. A crucial point is that both kinds are imagined, the former out of a pre-existing reality, the latter into a world that doesn't yet exist, as a plan for its realization.

In both cases, as Russian sociologist Teodor Shanin has argued, the product is an ontological hybrid: models, that is, bridge subject and object, consciousness and existence, theory and empirical data (1972: 9). They comprise a practical means of playing out the consequences of an idea in the real world. Shanin goes on to argue that models-of allow the researcher to negotiate the gulf between a “limited and selective consciousness” on the one hand and “the unlimited complexity and ‘richness’ of the object” on the other (1972: 10). This negotiation happens, Shanin notes, “by purposeful simplification and by transformation of the object of study inside consciousness itself”. In other words, a model-of is made in a consciously simplifying act of interpretation. Although this kind of model is not necessarily a physical object, the goal of simplification is to make tractable or manipulable what the modeller regards as interesting about it.

A fundamental principle for modelling-of is the exact correspondence between model and object with respect to the webs of relationships among the selected elements in each. Nevertheless, such isomorphism (as it is called) may be violated deliberately in order to study the consequences. In addition to distortions, a model-of may also require ‘properties of convenience’, such as the mechanism by which a model airplane is suspended in a wind-tunnel. Thus a model-of is fictional not only by being a representation, and so not the thing itself, but also by selective omission and perhaps by distortion and inclusion as well.

Taxonomies of modelling differ, as noted. Among philosophers and historians of science there seems rough agreement on a distinction between theoretical and physical kinds, which are expressed in language and material form respectively (Black 1962: 229). Computational modelling, like thought-experiment, falls somewhere between these two, since it uses language but is more or less constrained by characteristics of the medium of its original.

From physical modelling we can usefully borrow the notion of the ‘study’ or ‘tinker-toy’ model -- a crude device knowingly applied out of convenience or necessity (Achinstein 1968: 209; Redhead 1980: 153). Indeed, in the humanities modelling seems as a matter of principle to be crude, “a stone adze in the hands of a cabinetmaker”, as Vannevar Bush said (Bush 1965: 92). This is not to argue against progress, which is real enough for technology, but rather to insist that deferring the hard questions as solutions inevitably to be realized -- a rhetorical move typical in computing circles -- is simply irresponsible.

Theoretical modelling, constrained only by language, is apt to slip from a consciously makeshift, heuristic approximation to hypothesized reality. Black notes that in such “existential use of modelling” the researcher works “through and by means of” a model to produce a formulation of the world as it actually is (1962: 228f). In other words, a theoretical model can blur into a theory. But our thinking will be muddled unless we keep “theory” and “model” distinct as concepts. Shanin notes that modelling may be useful, appropriate, stimulating and significant -- but by definition never true (1972: 11). It is, again, a pragmatic strategy for coming to know. How it contrasts with theory depends, however, on your philosophical position. There are two major ones.

To the realist theories are true. As we all know, however, theories are overturned. To the realist, when this happens -- when in a moment of what Thomas Kuhn called “extraordinary science” a new theory reveals an old one to be a rough approximation of the truth (as happened to Newtonian mechanics about 100 years ago) -- the old theory becomes a model and so continues to be useful.

To the anti-realist, such as the historian of physics Nancy Cartwright, the distinction collapses. As she says elegantly in her “simulacrum account” of physical reality, How the Laws of Physics Lie, “the model is the theory of the phenomenon” (1983: 159). Since we in the humanities are anti-realists with respect to our theories (however committed we may be to them politically), her position is especially useful: it collapses the distinction between theory and theoretical model, leaving us to deal only with varieties of modelling. This, it should be noted, also forges a link, in our terms, between humanities computing and the theorizing activities in the various disciplines.

Since modelling is pragmatic, the worth of a model must be judged by its fruitfulness. The principle of isomorphism means, however, that for a model-of, this fruitfulness is meaningful in proportion to the “goodness of the fit” between model and original, as Black points out (1962: 238). But at the same time, more than a purely instrumental value obtains. A model-of is not constructed directly from its object; rather, as a bridge between theory and empirical data, the model participates in both, as Shanin says. In consequence a good model can be fruitful in two ways: either by fulfilling our expectations, and so strengthening its theoretical basis, or by violating them, and so bringing that basis into question. I argue that from the research perspective of the model, in the context of the humanities, failure to give us what we expect is by far the more important result, however unwelcome surprises may be to granting agencies. This is so because, as the philosopher of history R. G. Collingwood has said, “Science in general... does not consist in collecting what we already know and arranging it in this or that kind of pattern. It consists in fastening upon something we do not know, and trying to discover it.... That is why all science begins from the knowledge of our own ignorance: not our ignorance of everything, but our ignorance of some definite thing....” (1994: 9). When a good model fails to give us what we expect, it does precisely this: it points to “our ignorance of some definite thing”.

The rhetoric of modelling begins, Achinstein suggests, with analogy -- in Dr Johnson's words, “resemblance of things with regard to some circumstances or effects”. But as Black has pointed out, the existential tendency in some uses of modelling pushes it from the weak statement of likeness in simile toward the strong assertion of identity in metaphor. We know that metaphor paradoxically asserts identity by declaring difference: “Joseph is a fruitful bough” was Frye's favourite example. Metaphor is then characteristic not of theory to the realist, for whom the paradox is meaningless, but of the theoretical model to the anti-realist, who in a simulacrum-account of the world will tend to think paradoxically. This is, of course, a slippery slope. 

But it is also a difficult thought, so let me try again. Driven as we are by the epistemological imperative, to know; constrained (as in Plato's cave or before St Paul's enigmatic mirror) to know only poor simulacra of an unreachable reality -- but aware somehow that they are shadows of something -- our faith is that as the shadowing (or call it modelling) gets better, it approaches the metaphorical discourse of the poets. If we are on the right track, as Max Black says at the end of his essay, “some interesting consequences follow for the relations between the sciences and the humanities”, namely their convergence (1962: 242f). Black continues in words quoted by Shanin: “When the understanding of scientific models and archetypes comes to be regarded as a reputable part of scientific culture, the gap between the sciences and the humanities will have been partly filled. For exercise of the imagination, with all its promises and its dangers, provides a common ground”. I would argue that the humanities really must meet the sciences half-way, on this commons, and that we have in humanities computing a means to do so.

In humanities computing: an example[2]

Let me suggest how this might happen by giving an example of modelling from my own work on the Roman poet Ovid's profoundly influential epic, the Metamorphoses. My project over the past several years has been to encode all devices of language used to indicate persons so that various indexes might be generated from the tags -- approximately 60,000 of them for the 12,000 lines of Latin hexameter. With these indexes, as I can now demonstrate, it becomes possible to make significant headway with the single most difficult literary-critical problem of the Met, namely its poetic unity, by showing how persons and the ways in which they are named interconnect the stories in which they are involved.

To illustrate the role of modelling in such work, I will restrict myself here to one specific problem within the project. This problem is personification, or “the change of things to persons” by rhetorical means, as Dr Johnson said. In the Metamorphoses personification is central to Ovid's relentless subversion of ontology -- not so much through the fully personified characters, such as Envy (in book 2), Hunger (in book 8) and Rumour (in book 12), as through the momentary, often incomplete stirrings to or toward the human state contained within single phrases or a few lines of the poem. Many if not most of the approximately 500 instances of these tend to go unnoticed. But however subtle, their effects are, I would argue, profound.

Unfortunately little of the scholarship written since classical times, including James Paxson's study of 1994, helps at the minute level at which these operate. In 1963, however, the medievalist Morton Bloomfield indicated an empirical way forward by calling for a “grammatical” approach to the problem. He made the simple but profoundly consequential observation that nothing is personified independently of its context, only when something ontologically unusual is predicated of it. Thus Dr Johnson's example, “Confusion heard his voice”. A few other critics responded to the call, but by the early 1980s the trail seems to have gone cold. It did so, I think, because taking it seriously turned out to involve a forbidding amount of Sitzfleisch. Not to put too fine a point on it, the right technology was not then ready to hand.

It is now, of course: metalinguistic encoding furnishes a practical means for the scholar to record, assemble and organize a large number of suitably minute observations, to which all the usual scholarly criteria apply. That much is obvious. But seeing the methodological forest requires us to step back from the practical trees. When we do that, markup comes into focus as a kind of epistemological modelling -- not merely the industrial matter of preparing a text for scholarship, or whatever, but itself a new form of scholarship, as Michael Sperberg-McQueen argued more than 10 years ago (1991: 34). 

Let us notice where we are. Before the encoding begins (at least logically before) we have a theoretical model of personification -- let us call it T -- suitable to the poem. T assumes a conventional scala naturae, or what Lovejoy taught us to call “the great chain of being”, in small steps or links from inanimate matter to humanity. In response to the demands of the Metamorphoses personification is defined within T as any shift up the chain to or toward the human state. Thus T focuses, unusually, on ontological change per se, not achievement of a recognizable human form. T incorporates the Bloomfield hypothesis but says nothing more specific about how any such shift is marked. With T in mind, then, we build a model by tagging personifications according to the linguistic factors that affect their poetic ontology.

But what are these factors? On the grammatical level the most obvious, perhaps, is the verb that predicates a human action of a non-human entity, as in Dr Johnson's example. But that is not all, nor is it a simple matter: different kinds of entities have different potential for ontological disturbance (abstract nouns are the most sensitive); verbs are only one way of predicating action, and action only one kind of personifying agent; the degree to which an originally human quality in a word is active or fossilized varies; and finally, the local and global contexts of various kinds affect all of these in problematic ways. It is, one is tempted to say, all a matter of context, but as Jonathan Culler points out, “one cannot oppose text to context, as if context were something other than more text, for context is itself just as complex and in need of interpretation” (1988: 93f).

Heuristic modelling rationalizes the ill-defined notion of context into a set of provisional but exactly specified factors through a recursive cycle.  It goes like this. Entity X seems to be personified; we identify factors A, B and C provisionally; we then encounter entity Y, which seems not to qualify even though it has A, B and C; we return to X to find previously overlooked factor D; elsewhere entity Z is personified but has only factors B and D, so A and C are provisionally downgraded or set aside; and so on. The process thus gradually converges on a more or less stable grammar or phenomenology of personification. This grammar is a model according to the classical criteria: it is representational, fictional, tractable and pragmatic. It is a computational model because the encoding that comprises it obeys two fundamental rules: total explicitness and absolute consistency. Thus everything to be modelled must be explicitly represented, and it must be represented in exactly the same way every time.
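The recursive cycle just described can be sketched in Python. Everything in the sketch is illustrative: the entities, the factor labels and the set arithmetic stand in for acts of scholarly judgement that in practice are made by hand in tagging the text, not computed.

```python
def refine_factors(observations):
    """Converge on a provisional grammar of personification.

    observations: list of (entity, factors, personified) triples, where
    factors is a set of provisional factor labels such as 'A', 'B', 'C'.
    Returns the factors shared by every personified instance, plus any
    'leaks': non-personified instances that nevertheless fit the grammar,
    signalling that a further, overlooked factor must be sought.
    """
    positives = [f for _, f, p in observations if p]
    negatives = [f for _, f, p in observations if not p]
    # Provisional grammar: factors present in every personified instance.
    shared = set.intersection(*positives) if positives else set()
    # A leak is a negative case that the provisional grammar also covers.
    leaks = [f for f in negatives if shared <= f]
    return shared, leaks

# The cycle from the text: entity X is personified and, on re-examination,
# shows factors A, B, C and the overlooked D; entity Y has A, B and C but
# is not personified; entity Z is personified with only B and D.
obs = [
    ("X", {"A", "B", "C", "D"}, True),
    ("Y", {"A", "B", "C"}, False),
    ("Z", {"B", "D"}, True),
]
shared, leaks = refine_factors(obs)
# shared == {"B", "D"}: A and C are downgraded; B and D remain.
# leaks == []: no non-personified instance fits the provisional grammar.
```

The intersection here is of course far too blunt for real philology; it merely makes visible the logic of convergence, in which each counter-example either prunes the factor set or forces the search for a new factor.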

The imaginative language of poetry doesn't survive well under such a regime. But this is only what we expect from modelling, during which (as Teodor Shanin said) the “limited and selective consciousness” of the modeller comes up against “the unlimited complexity and ‘richness’ of the object”. In the case of poetry the result can only be a model of the “tinkertoy” variety, Vannevar Bush's “stone adze in the hands of a cabinetmaker”. Nevertheless, with care, scholarly furniture may be made with this adze. In return for “suspension of ontological unbelief”, as Black said about models generally (1962: 228), modelling gives us manipulatory power over the data of personification. With such a model we can then engage in the second-order modelling of these data by adjusting the factors and their weightings, producing different results and raising new questions. The model can be exported to other texts, tried out on them in a new round of recursive modelling, with the aim of producing a more inclusive model, or better questions about personification from which a better model may be constructed. This is really the normal course of modelling in the sciences as well: the working model begins to converge on the theoretical model.

At the same time, as Edward Sapir famously remarked, “All grammars leak” (1921: 47). The failures of the model -- the anomalous cases only special pleading would admit -- are the leaks that reflect questioningly back on the theoretical model of the Metamorphoses and so challenge fundamental research. They point, again as Collingwood said, to our ignorance of a particular thing.

This is, I think, what can now come of what Northrop Frye had in mind when he suggested in 1989 that, were he writing the Anatomy of Criticism afresh, he'd be paying a good deal of attention to computational modelling.

Experimental practice

As I have defined and illustrated it, modelling implies the larger environment of experimentation and so raises the question of what this is and what role it might have in the humanities. I have argued elsewhere that our heuristic use of equipment, in modelling, stands to benefit considerably from the intellectual kinship this use has with the experimental sciences (McCarty 2002). Since Paul Feyerabend's attack on the concept of a unitary scientific method in Against Method (1975) and Ian Hacking's foundational philosophy of experiment in Representing and Intervening (1983) two powerful ideas have been ours for the thinking: (1) experiment is an epistemological practice of its own, related to but not dependent on theory; and (2) experiment is not simply heuristic but, in the words the literary critic Jerome McGann borrowed famously from Lisa Samuels, is directed to “imagining what we don't know” (2001: 105ff), i.e. to making new knowledge. This is the face that modelling turns to the future, that which I have called the “model-for”, but which must be understood in Hacking's interventionist sense.

In the context of the physical sciences, Hacking argues that we make hypothetical things real by learning how to manipulate them; thus we model-for them existentially in Black's sense. But is this a productive way to think about what happens with computers in the humanities? If it is, then in what sense are the hypothetical things of the humanities realized? -- for example, the “span” in corpus linguistics; the authorial patterns John Burrows and others (e.g. in Burrows 2003; Burrows and Craig 2001) demonstrate through statistical analysis of the commonest words; Ian Lancashire's repetends (e.g. Lancashire 2003 forthcoming); my own evolving grammar of personification. Ontologically where do such entities fit between the reality of the object on the one hand and the fiction of the theoretical model on the other? They are not theoretical models. What are they?

Models-for do not have to be such conscious things. They can be the serendipitous outcome of play or of accident. What counts in these cases, Hacking notes, is not observation but being observant, attentive not simply to anomalies but to the fact that something is fundamentally, significantly anomalous -- a bit of a more inclusive reality intruding into business as usual.

Knowledge representation & the logicist programme

The conception of modelling I have developed here on the basis of practice in the physical sciences gives us a crude but useful machine with its crude but useful tools and a robust if theoretically unattached epistemology. It assumes a transcendent, imperfectly accessible reality for the artefacts we study, recognizes the central role of tacit knowledge in humanistic ways of knowing them and, while giving us unprecedented means for systematizing these ways, is pragmatically anti-realist about them. Its fruits are manipulatory control of the modelled data and, by reducing the reducible to mechanical form, identification of new horizons for research.

Jean-Claude Gardin's logicist programme, argued in his 1989 lecture and widely elsewhere, likewise seeks to reduce the reducible -- in his case, through field-related logics, to reduce humanistic discourse to a Turing-machine calculus so that the strength of particular arguments may be tested against expert systems that embed these structures. His programme aims to explore, as he says, “where the frontier lies between that part of our interpretative constructs which follows the principles of scientific reasoning and another part which ignores or rejects them” (1990: 26). This latter part, which does not fit logicist criteria, is to be relegated to the realm of literature, i.e. dismissed from scholarship, or what he calls “the science of professional hermeneuticians” (1990: 28). It is the opposite of modelling in the sense that it regards conformity to logical formalizations as the goal rather than a useful but intellectually trivial byproduct of research. Whereas modelling treats the ill-fitting residue of formalization as meaningfully problematic and problematizing, logicism exiles it -- and that, I think, is the crucial point.

The same point emerges from the subfield of AI known as “knowledge representation”, whose products instantiate the expertise at the heart of expert systems. In the words of John Sowa, KR (as it is known in the trade) plays “the role of midwife in bringing knowledge forth and making it explicit”, or more precisely, it displays “the implicit knowledge about a subject in a form that programmers can encode in algorithms and data structures” (2000: xi). The claim of KR is very strong, namely to encode all knowledge in logical form. “Perhaps”, Sowa remarks in his book on the subject, “there are some kinds of knowledge that cannot be expressed in logic”. Perhaps, indeed. “But if such knowledge exists”, he continues, “it cannot be represented or manipulated on any digital computer in any other notation”  (2000: 12).

He is of course correct to the extent that KR comprises a far more rigorous and complete statement of the computational imperatives I mentioned earlier. We must therefore take account of it. But again the problematic and problematizing residue is given short shrift. It is either dismissed offhandedly in a “perhaps” or omitted by design: the first principle of KR defines a representation as a “surrogate”, with no recognition of the discrepancies between stand-in and original. This serves the goals of aptly named “knowledge engineering” but not those of science, in both the common and etymological senses. The assumptions of KR are profoundly consequential because, as the philosopher Michael Williams has pointed out, “‘know’ is a success-term like ‘win’ or ‘pass’ (a test). Knowledge is not just a factual state or condition but a particular normative status”. “Knowledge” is therefore a term of judgement; projects, like KR, which demarcate it “amount to proposals for a map of culture: a guide to what forms of discourse are ‘serious’ and what are not” (2001: 11-12). So again the point made by Winograd and Flores: “that in designing tools we are designing ways of being” and knowing.

I have used the term “relegate” repeatedly: I am thinking of the poet Ovid, whose legal sentence of relegatio, pronounced by Augustus Caesar, exiled him to a place so far from Rome that he would never again hear his beloved tongue spoken.  I think that what is involved in our admittedly less harsh time is just as serious.

But dangerous to us only if we miss the lesson of modelling and mistake the artificial for the real. “There is a pernicious tendency in the human mind”, Frye remarked in his 1989 lecture, “to externalize its own inventions, pervert them into symbols of objective mastery over us by alien forces. The wheel, for example, was perverted into a symbolic wheel of fate or fortune, a remorseless cycle carrying us helplessly around with it” (1991: x). Sowa, who has a keen historical sense, describes in Knowledge Representation the 13th century Spanish philosopher Raymond Lull's mechanical device for automated reasoning, a set of concentric wheels with symbols imprinted on them. We must beware that we do not pervert our wheels of logic into “symbols of objective mastery” over ourselves but use them to identify what they cannot compute. Their failures and every other well-crafted error we make are exactly the point, so that (now to quote the Bard precisely) we indeed are “Minding true things by what their mock'ries be” (Henry V iv.53).

Notes

[1] For work on modelling in other fields, see Shanin 1972; more recently, Magnani and Nersessian 2002 and Franck 2002, reviewed in McCarty (forthcoming) with references to other recent work.

[2] Subsequent to writing this paper, I developed a preliminary computational model for personification in the Metamorphoses; it is described and discussed in McCarty 2003, “Depth, Markup and Modelling”, this volume.

Works Consulted

Items are prefixed with a subject tag: ML (Modelling), LT (Literary Studies), EP (Epistemology), AI (Artificial Intelligence) and XS (Experimental Science).

ML Achinstein, Peter. 1968. “Analogies and Models”. In Concepts of Science: A Philosophical Analysis. Baltimore MD: The Johns Hopkins Press. 203-225.
ML Black, Max. 1962. Models and Metaphors: Studies in Language and Philosophy. Ithaca NY: Cornell University Press.
LT Bloomfield, Morton W. 1963. “A Grammatical Approach to Personification Allegory”. Modern Philology 60: 161-71.
LT Burrows, John and Hugh Craig. 2001. “Lucy Hutchinson and the Authorship of Two Seventeenth-Century Poems: A Computational Approach”. The Seventeenth Century 16.2 (2001): 259-82.
LT Burrows, John. 2003. “Questions of Authorship: Attribution and Beyond. A Lecture Delivered on the Occasion of the Roberto Busa Award ACH-ALLC 2001, New York”. Computers and the Humanities 37.1: 5-32.
ML Bush, Vannevar. 1965. “Memex Revisited”. In Science is not Enough. New York: William Morrow and Co.
ML Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Clarendon Press.
XS Collingwood, R. G. 1946; 1994. The Idea of History. Rev. edn. Ed. Jan van der Dussen. Oxford: Oxford University Press.
LT Culler, Jonathan. 1988. Framing the Sign: Criticism and its Institutions. Oxford: Basil Blackwell.
XS Feyerabend, Paul. 1975; 1993. Against Method. 3rd edn. London: Verso.
AI Franchi, Stefano and Güven Güzeldere, eds. 1995. Constructions of the Mind: Artificial Intelligence and the Humanities. Special Issue of Stanford Humanities Review 4.2. www.stanford.edu/group/SHR/4-2/text/toc.html.
EP Franck, Robert, ed. 2002. The Explanatory Power of Models. Methodos Series, vol. 1. Boston: Kluwer Academic.
ML Frye, Northrop. 1991. “Literary and Mechanical Models”. In Research in Humanities Computing 1. Papers from the 1989 ACH-ALLC Conference. Ed. Ian Lancashire. Oxford: Oxford University Press. 1-12.
AI Gardin, Jean-Claude. 1990. “L'interprétation dans les humanités: réflexions sur la troisième voie” / “Interpretation in the humanities: some thoughts on a third way”. In Interpretation in the Humanities: Perspectives from Artificial Intelligence. Ed. Richard Ennals and Jean-Claude Gardin. Library and Information Research Report 71. Boston Spa UK: British Library Board. 22-59.
AI ---. 1991. “On the Way We Think and Write in the Humanities: A Computational Perspective”. In Research in Humanities Computing 1. Papers from the 1989 ACH-ALLC Conference. Ed. Ian Lancashire. Oxford: Oxford University Press. 337-45.
ML Geertz, Clifford. 1973; 1993. The Interpretation of Cultures. London: HarperCollins.
ML Groenewold, H. J. 1960. “The Model in Physics”. In The Concept and the Role of the Model in Mathematics and Natural and Social Sciences. Synthese Library. Ed. B. H. Kazemier and D. Vuysje. Dordrecht-Holland: D Reidel. 98-103.
XS Hacking, Ian. 1983. Representing and Intervening: Introductory topics in the philosophy of natural science. Cambridge: Cambridge University Press.
ML Hesse, Mary. 1974. The Structure of Scientific Inference. London: Macmillan.
XS Lancashire, Ian. 2003 (forthcoming). “Cognitive Stylistics and the Literary Imagination”. In Companion to Digital Humanities. Ed. Susan Schreibman, Ray Siemens and John Unsworth. Oxford: Blackwell.
XS Magnani, Lorenzo and Nancy J. Nersessian, eds. 2002. Model-Based Reasoning: Science, Technology, Values. New York: Kluwer Academic / Plenum.
XS McCarty, Willard. 2002. “Humanities computing: essential problems, experimental practice”. Literary and Linguistic Computing (in press).
XS ---. (forthcoming). “Manipulating epistemic tinkertoys”. Review of The Explanatory Power of Models, ed. Robert Franck, Methodos Series, vol. 1. Boston: Kluwer Academic, 2002. Literary and Linguistic Computing.
LT McGann, Jerome. 2001. Radiant Textuality: Literature after the World Wide Web. London: Palgrave.
LT Paxson, James J. 1994. The Poetics of Personification. Literature, Culture, Theory 6. Cambridge: Cambridge University Press.
ML Redhead, Michael. 1980. “Models in Physics”. British Journal for the Philosophy of Science 31: 145-163.
ML Sapir, Edward. 1921. Language: An Introduction to the Study of Speech. London: Harcourt Brace and Company.
ML Shanin, Teodor. 1972. “Models and Thought”. In The Rules of the Game: Cross-Disciplinary Essays on Models in Scholarly Thought. Ed. Teodor Shanin. London: Tavistock. 1-22.
AI Sowa, John F. 2000. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove CA: Brooks/Cole.
LT Sperberg-McQueen, C. M. 1991. “Text in the Electronic Age: Textual Study and Text Encoding, with Examples from Medieval Texts”. Literary and Linguistic Computing 6: 34-46.
EP Williams, Michael. 2001. Problems of knowledge: A critical introduction to epistemology. Oxford: Oxford University Press.
AI Winograd, Terry and Fernando Flores. 1986. Understanding Computers and Cognition: A New Foundation for Design. Boston: Addison-Wesley.
AI Winograd, Terry. 1991. “Thinking Machines: Can There Be? Are we?”. In The Boundaries of Humanity: Humans, Animals, Machines. Ed. James J. Sheehan and Morton Sosna. Berkeley: University of California Press. 198-223.