brent.nelson@usask.ca
http://www.usask.ca/english/people/faculty.php?FacultyID=45
CHWP A.41, publ. July 2008. © Editors of CHWP 2008.
Sections
1. Background: the production and remediation of the reading artifact
2. The symposium on “Reassembling the Disassembled Book”
3. The essays
Notes
Works Cited
1. Background: the production and remediation of the reading artifact

Perhaps the most significant contribution of literary studies to the digital age has been the insistence that electronic publication be undertaken with deliberate awareness of the material and social history of the book, together with the promise that the use of digital tools for remediation and analysis can “lift one’s general level of attention to a higher order” (McGann, 2001: 55).1 Early users of text technology could not take the constructedness of the reading artifact for granted. They were continually confronted with this fact in a way that the modern reader who pulls a book off a bookstore shelf is not. Readers of medieval manuscripts were not far removed from the hours of expert human labour that went into producing what was perhaps their most valuable portable possession. The age of printing further complicated the process of assembling a book. First, individual pieces of type had to be put together to form words, sentences, paragraphs, and pages that were then printed onto sheets. At this point, the early printed book explicitly required assembly. The sheets were folded and gathered into quires, then sewn together. Book-buyers often elected to purchase them in this bare form and take them to a bindery to be custom-fitted with a cover of their choosing. Disassembling was also embedded in the production of the early printed book: once a sheet was printed, the type was removed from the chase and the individual pieces distributed into their storage cases for easy retrieval in composing the next forme for printing.
Early textual practices also challenge modern assumptions about the organic unity of the book as delivered across the bookstore counter (or, increasingly, through cyberspace via a courier or postal service). Literary work has long involved assembling, disassembling, and reassembling texts. These functions of composition informed the standard notion of literary (i.e. rhetorical) invention that held sway from antiquity through the Renaissance, as expressed in Petrarch’s paraphrase of Seneca: “We should write in the same way as bees make honey, not preserving flowers, but turning them into honeycombs, so that out of many and varied resources a single product should emerge, and that one both different and better” (Moss, 1996: 51). The pre-modern book was thus a composite affair. Books were not imagined to be seamless unities issuing from a solitary genius. Rather, writing involved extracting the choice bits from other texts, recombining and reformulating them to produce something new. The practice of commonplace collection that developed through the medieval and Renaissance periods served this mode of composition by extracting notable passages from whole texts to be stored and organized for ready deployment. Moreover, literary production was frequently collaborative (as in the case of early modern theatre) and communal (as in the coteries of Elizabethan poetry).2 A taste for the collection for its own sake developed into a publishing phenomenon in the Renaissance, producing countless compendia, anthologies, and miscellanies for a growing reading public, while assembling collections of manuscript poetry remained a popular form of networking among the gentry.
Compendia and anthologies were part of an important development in text technology through this period: as texts proliferated and circulated more efficiently, users and consumers of books sought ways to manage and navigate a growing corpus of text.3 Preachers, lawyers, and scholars represented an emerging class of professional readers with special needs and interests in their use of texts. The resulting technologies involved disassembling and reconfiguring texts to enable new means and methods of processing them. Lawyers digested laws and statutes into compendia for convenient reference. Philologists compiled lexicons consisting of words and their usage extracted from a corpus of texts to represent the total vocabulary of a literary language. Preachers and theologians reassembled the sacred text into concordances, polyglot and parallel texts, and distilled a whole tradition of commentary into compendia which were in turn harvested to compose sermons and to systematize theology. In the Renaissance we can recognize many roots of the discipline of literary studies in the recovering, editing, and contextualizing of ancient texts, reassembled into a growing library that fueled a new era of intellectual discovery.
For professional readers, the core intellectual functions of reading have always been analysis (disassembly) and synthesis (reassembly). Indeed, this remains true of all reading, which at a basic level involves parsing the grammar of a text and synthesizing the parts to construct meaning. Literary scholars make a discipline of this process, pulling a text apart to expose its inner workings, how it is constructed and culturally situated, and then putting it back together, in a sense, to produce an interpretation. The development of digital tools and methods has, again, forced a new awareness of these basic processes of professional practice. In what were still the nascent years of digital literacy, Rosanne Potter (1988) addressed a problem that has continued to dog literary computing: the perception that computing processes cannot move much beyond basic, quantitative analysis. Nearly two decades later, Julia Flanders finds digital humanists still working to “heal the breach between the conventional English department and its digital counterpart, and erase the stigma of reductionism and pedantry that still clings to computer-aided research” (Flanders, 2005: 63). The answer, Flanders asserts, lies in recent developments in textual interfaces that reassemble textual details in ways that are probative and provocative, and that are themselves interpretations. In other words, the processes of analysis and synthesis are equally foundational to both “lower criticism” (the establishment and representation of textual and bibliographic data) and “higher criticism” (the production and theorizing of interpretations): each digital production and manipulation of a text is itself a critical act of synthesis.4
Digital technologies have added a new layer of complexity, both theoretical and practical, to this process of reassembling and disassembling, enabling new possibilities for remediating text. At the most basic technological level, every act of digitization involves disassembling a material source artifact and reassembling it into an array of bits and bytes. In their most sophisticated applications, digital tools enable new trajectories in the intellectual tradition of analysis and synthesis. Early digital text-tools such as TACT and WordCruncher extracted words from whole texts to produce complete word lists and concordances of word instances, but still in an “essentially...dismembered state” (Flanders, 2005: 54; cf. Bradley, 2005: 506). Recent developments have paid equal attention to re-membering texts in ways that store and restore, produce and reproduce meaning. For editors attending to original materials, editing often involves some form of recovery of an obscured text (Kiernan, 2006: 262). Imaging technology has enabled the restoration of damaged manuscripts, digitally reassembling obscured, faded, or otherwise lost visual information into reconstituted pages.5 It has also enabled the re-collection of material that was at one time dispersed; materials that never existed in physical proximity, but by some rationale belong together, can now inhabit a common digital site or be linked through digital space.6 Technologies for encoding text (both database and XML-based) have enabled scholars both to explore and represent the embedded structures of textual content and to reassemble text into new structures.
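The basic operation of these early concordancers can be suggested with a minimal sketch. The following Python function builds a simple keyword-in-context (KWIC) concordance; it is an illustrative sketch only, not a reconstruction of TACT or WordCruncher, and the sample text and window size are arbitrary assumptions.

```python
import re
from collections import defaultdict

def kwic_concordance(text, window=4):
    """Map every word to its occurrences, each shown in a window of
    surrounding words: the text disassembled into an index of contexts."""
    words = re.findall(r"[a-z]+", text.lower())
    concordance = defaultdict(list)
    for i, word in enumerate(words):
        left = " ".join(words[max(0, i - window):i])
        right = " ".join(words[i + 1:i + 1 + window])
        concordance[word].append(f"{left} [{word}] {right}")
    return concordance

# Arbitrary sample; any plain-text corpus could be substituted.
sample = ("We should write in the same way as bees make honey, "
          "not preserving flowers, but turning them into honeycombs.")
for line in kwic_concordance(sample)["honey"]:
    print(line)  # -> way as bees make [honey] not preserving flowers but
```

The word list and every context are extracted from the whole, which is precisely the “dismembered state” Flanders describes: the re-membering is left to the reader.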
2. The symposium on “Reassembling the Disassembled Book”

The idea for this collection of essays began with a symposium on “Reassembling the Disassembled Book,” held on 29 May at the 2007 Congress of the Humanities and Social Sciences at the University of Saskatchewan in Saskatoon, Canada. This symposium was planned and hosted by the Society for Digital Humanities/Société pour l'étude des médias interactifs (SDH/SEMI) and the Canadian Association for the Study of Book Culture/l'Association canadienne pour l'étude de l'histoire du livre (CASBC/ACÉHL)7 in association with the Canadian Society of Medievalists/Société canadienne des médiévistes (CSM/SCM). It was sponsored by Electronic Text Research at the University of Saskatchewan (ETRUS) and Classical, Medieval and Renaissance Studies (CMRS), also at the University of Saskatchewan, with financial aid from the Canadian Federation for the Humanities and Social Sciences (CFHSS). Two essays in this collection derive directly from presentations made at this symposium by Peter Stoicheff (the keynote address) and Richard Cunningham. A third essay originated as a panel of three papers by Carlos Fiorentino, Piotr Michura, Milena Radzikowska, and Stan Ruecker in the program of SDH/SEMI, also at Congress 2007. The remaining two essays, one by Paul Dyck and another by Yin Liu and Jeff Smith, were solicited by the editor to round out the collection. Together these essays represent a wide historical range covering the medieval, early modern, and modern periods, and an equally diverse range of material and digital technologies.
3. The essays

The digital technologies represented in this collection enable disassembly and reassembly in ways that mimic, reproduce, reverse, and expose the original processes that produced the subject texts, thereby extending the scholarly and critical possibilities of analysis and synthesis. From these essays we can postulate three categories of remedial reassembly. The first is reconstitution, where the principal objective is to put a text together in digital form so as to present it, as closely as possible, as it once was. This is the mode of the facsimile (sometimes digitally restored or enhanced), the diplomatic transcription, and to some degree the text of the critical edition (excluding the critical apparatus and commentary): facsimile and diplomatic editions aim in their own ways to simulate the physical appearance of a particular material embodiment of a text, while the critical edition aims to recover or postulate a text that is no longer (or perhaps never was) extant in a single material form. One step away from textual reconstitution is re-presentation: presenting the text in a new way that can transcend the limitations of the material form of the source text.8 This function takes deliberate advantage of the affordances of digital remediation, such as hyperlinking, image manipulation, text encoding, animation, and dynamic content generation and display. It might involve putting the text together in a way that facilitates navigation, or it might expose and make explicit aspects of a text’s content and structure (as TEI encoding does), or its constructedness. The final function is defamiliarization: using digital media deliberately to strip a text of its print and manuscript conventions and to represent it in ways that enable, and even force, new ways of accessing and entering into it.
Peter Stoicheff’s essay, “Putting Humpty Together Again: Otto Ege’s Scattered Leaves,” based on his keynote address, presents a classic and controversial case of the disassembled book. The public interest (and ire) aroused by this notorious book collector, who cut apart a portion of his manuscript collection and divided the pages into forty boxes that he sold to libraries and collectors throughout the world, exposes a cultural bias favouring the whole over the fragment or, more precisely, the page. Stoicheff’s essay explores the tension involved in an enterprise that at once reasserts the importance of the page as a unit of information and contemplates what would otherwise have been impossible in the pre-digital age: putting Ege’s dispersed leaves (or a representative portion of them) back together again. The process of reconstituting the Beauvais Missal both acknowledges the insights gained by being confronted with the independence of the page (each one currently existing in geographical isolation) and anticipates new insights about the book as a whole that might be gained by reassembling them.
As Yin Liu and Jeff Smith point out in their essay, “A Relational Database Model for Text Encoding,” analytic reading often requires reconfiguring texts to allow reading in nonlinear ways. In taking the grapheme as the most basic unit of analysis, this project disassembles the text of the Middle English poem Sir Perceval of Galles to the most atomic level in order to retrace what Peter Shillingsburg calls the “script acts” (in this case, rather literally, the scribal hands) that produced the material source text.9 In this relational database model, which treats the text as a network of constituent parts, structure is encoded in the relationships between the elements of a text rather than in the text itself (a variation of “standoff” markup), allowing for the representation of multiple overlapping and even conflicting structures. As a database, the text is reassembled in a form that is foreign to the original artifact and enables virtually unlimited representations of, and approaches to, the text.
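The general shape of such a model can be illustrated in miniature. The sketch below, written in Python with the built-in sqlite3 module, stores each grapheme as a row and records structure only as relationships in a separate table, so that overlapping or conflicting groupings can coexist; the schema and sample data are my own illustrative assumptions, not those of the Sir Perceval of Galles project.

```python
import sqlite3

# Graphemes are atomic rows; structures (verse lines, scribal hands,
# grammatical units) are encoded purely as relationships between rows.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE grapheme (id INTEGER PRIMARY KEY, glyph TEXT, pos INTEGER);
CREATE TABLE grouping (id INTEGER PRIMARY KEY, kind TEXT, label TEXT);
CREATE TABLE member   (grouping_id INTEGER, grapheme_id INTEGER);
""")

text = "Lef, lythes to me"  # illustrative sample text only
db.executemany("INSERT INTO grapheme (glyph, pos) VALUES (?, ?)",
               [(ch, i) for i, ch in enumerate(text)])

# Two overlapping structures over the same graphemes: a verse line,
# and a (hypothetical) scribal hand covering only the first four glyphs.
db.execute("INSERT INTO grouping VALUES (1, 'line', 'line 1')")
db.execute("INSERT INTO grouping VALUES (2, 'hand', 'scribe A')")
db.executemany("INSERT INTO member VALUES (?, ?)",
               [(1, i + 1) for i in range(len(text))] +
               [(2, i + 1) for i in range(4)])

# Reassemble any grouping back into a reading text.
rows = db.execute("""SELECT g.glyph FROM grapheme g
                     JOIN member m ON m.grapheme_id = g.id
                     WHERE m.grouping_id = ? ORDER BY g.pos""", (2,))
print("".join(glyph for (glyph,) in rows))  # -> Lef,
```

Because no structure is embedded in the stored text itself, adding a third, conflicting segmentation requires only new rows in the grouping and member tables, not a re-encoding of the text.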
Paul Dyck and Stuart Williams’s essay, “Toward an Electronic Edition of an Early Modern Assembled Book,” attests to “the transportability of the text or image fragment, and its redeployment in new compositions” in the Gospel concordances of the seventeenth-century devotional community of Little Gidding. The Ferrar family’s cutting apart of Bibles and illustrated books to re-compose harmonized Gospel texts was an early and constructive example of the practice of pillaging books for choice illustrations (a tradition of “book tearers,” as Stoicheff calls it) that began in earnest in the eighteenth century and was practiced (and publicly defended) by Otto Ege into the early and mid-twentieth century. In addition to reconstituting one of these unique seventeenth-century artifacts (a gift presented to King Charles I) in digital form, and thereby making a cultural treasure with very restricted access available to the public, this project is using XML encoding in an innovative way to uncover and render the complex structure of the book, powerfully re-presenting it to enable scholarly research into its source materials and the methods and cultural practices that informed its composition. Dyck and Williams have also developed a method for populating this carefully reconstructed structure dynamically with the basic original content (the authorized 1611 translation of the Gospel texts), which is readily available on the Web: a method they call “Gospel Grab.” This is a remarkable case of a new technology that is very much in the spirit of the 400-year-old cultural practice that produced the original artifact.
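The idea of dynamic population can be sketched generically. In the hypothetical miniature below, an encoded concordance structure holds only passage references, and the verse text is pulled in from a lookup table standing in for a Web-accessible copy of the 1611 translation; the element names, reference format, and function are all my own assumptions, and the sketch makes no claim to reproduce the actual “Gospel Grab” mechanism.

```python
import xml.etree.ElementTree as ET

# A miniature harmonized-Gospel structure: each slot names a passage;
# the verse text itself lives outside the encoded structure.
structure = [("Matthew 3:13", "col-a"), ("Mark 1:9", "col-b")]

# Stand-in for a Web-accessible KJV source; a real pipeline would
# fetch and parse the full text rather than hard-code two verses.
kjv = {
    "Matthew 3:13": "Then cometh Jesus from Galilee to Jordan...",
    "Mark 1:9": "And it came to pass in those days...",
}

def populate(structure, verses):
    """Build an XML harmony, filling each slot with its verse text."""
    root = ET.Element("harmony")
    for ref, column in structure:
        slot = ET.SubElement(root, "passage", ref=ref, col=column)
        slot.text = verses[ref]  # the dynamic population step
    return root

print(ET.tostring(populate(structure, kjv), encoding="unicode"))
```

The design point is the separation of concerns: the carefully reconstructed structure is the scholarly artifact, while the bulk content, being standard and freely available, can be supplied on demand.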
In the case of Richard Cunningham’s essay on “dis-Covering the Early Modern Book: An Experiment in Humanities Computing,” the disassembling of the book was initiated by a curious group of twenty-first-century scholars who wanted to explore the possibilities of remediating an early modern codex. The initiating act of “bibliosection” ensured an end result that would not only reconstitute but also re-present the artifact in a way that would be defamiliarizing to the “digital born” generation that this essay anticipates, anatomizing the book to lay bare its inner structure. The digital re-presentations of the codex discussed here are at once familiar and unfamiliar: facsimile pages that must be arranged into an arcane sequence to reconstruct the original printed sheet, or reassembled into a quire whose pages can be turned by clicking and dragging a mouse.
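The arcane sequence in question follows the arithmetic of imposition, which can be sketched for the simple case of a single folded quire; this generic sketch is an assumption for illustration, not the specific imposition scheme used in the essay's experiment.

```python
def quire_imposition(n_pages):
    """For one folded quire of n_pages (a multiple of 4), return the
    page numbers carried by each sheet as ((outer left, outer right),
    (inner left, inner right)); folding and nesting the sheets in
    order yields pages 1..n in reading sequence."""
    assert n_pages % 4 == 0
    return [((n_pages - 2 * s, 2 * s + 1), (2 * s + 2, n_pages - 2 * s - 1))
            for s in range(n_pages // 4)]

for outer, inner in quire_imposition(8):
    print("outer:", outer, "inner:", inner)
# outer: (8, 1) inner: (2, 7)
# outer: (6, 3) inner: (4, 5)
```

Reversing this mapping is exactly the defamiliarizing exercise the essay describes: the reader must recover the printed sheet from pages that the codex presents in quite another order.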
The projects represented in “Visualizing Repetition in Text,” developed by Stan Ruecker, Milena Radzikowska, Piotr Michura, Carlos Fiorentino, and Tanya Clement, exemplify perfectly the defamiliarizing function of digital remediation. While their proposed prototypes all enable a new kind of access to the source text (Gertrude Stein’s The Making of Americans), they do so in a way that makes the text strange, dislocated from its familiar codex form.10 There is an element of the deconstructive in these projects in forcing the user to recognize the constructedness of a text and its dependence on (and subservience to) form. These prototypes are concerned not so much with the material origination of the text as with the processes of scholarly analysis that are enabled by an artificially forced way of viewing a text. This double function of defamiliarizing and navigating large bodies of text characterizes what D. Small described in 1996 as “the new generation of text analysis tools” in which “the interface itself constitutes an experiment: an experiment in visualization of the text, as well as a way of selecting items for visualisation” (Small, 1996: 64). The result is a series of newly constituted texts that, like the illuminated manuscript or King Charles’s richly decorated Gospel concordance, are aesthetic objects in their own right: objects for enjoyment and delight, as well as analysis and interpretation.
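A crude sense of the analytical substrate of these prototypes (finding and foregrounding repetition) can be given in a few lines. The sketch below counts repeated word bigrams and prints a rudimentary textual bar chart; it is an assumption about the kind of analysis such interfaces surface, not a description of the prototypes themselves, and the Stein-like sample sentence is my own.

```python
import re
from collections import Counter

def repeated_bigrams(text, top=5):
    """Count adjacent word pairs; repetition is the signal of interest."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(zip(words, words[1:])).most_common(top)

sample = ("always then they were living and they were being living "
          "and always then they were living")
for (w1, w2), n in repeated_bigrams(sample):
    print(f"{w1} {w2:<8} {'#' * n}")
```

Even so small a view dislocates the text from its codex form: the reading surface becomes a distribution of recurrences rather than a sequence of sentences.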
Notes

1 Conversely, writes McGann, “When we use books to study books, or hard copy texts to analyze other hard copy texts, the scale of the tools seriously limits the possible results” (2001: 55). For a recent (and brief) survey of the contributions of digital textual studies see Shillingsburg’s introduction to and first half of Chapter 1 in From Gutenberg to Google (2006). Building on the work of Jerome McGann, D. F. McKenzie, and others on the implications of “bibliographical code” in editorial practice, Shillingsburg’s book asserts the need to represent the “script act complex, consisting of the written text and all its parts that go (or at one time went) without saying for its author and producers,” including the circumstances and vicissitudes of production and reception (79). Here and throughout this introduction I use “remediation” in Bolter and Grusin’s sense of a translation of content from an old medium into a new medium that both increases immediacy and draws attention to the act of mediation. In a sense, the literary tradition summarized here is a history of pre-digital remediation.
2 On collaboration in early modern theatre see Heather Anne Hirschfeld, 2004; and on the literary coterie see Arthur Marotti, 1986.
3 On the development of text technologies in the Renaissance and their analogy to emerging digital technologies see Neil Rhodes and Jonathan Sawday’s collection of essays, The Renaissance Computer (2000).
4 Ray Siemens adopts these terms from Tim William Machan in an article that introduces a collection of essays on the topic of computer-assisted literary criticism (2002). He and the volume’s contributors similarly defend the full participation of digital processes, in both lower and higher criticism.
5 See Kiernan’s pioneering work in image-based electronic editions and digital tools for producing them, which in some cases begin with reconstructing a lost or damaged artifact (2006). For examples see Meg Twycross (2003) on the use of high-resolution scans of the 1415 York Ordo paginarum to restore text that was scraped out and overwritten; and Randall McLeod (2005) on the digital restoration of a manuscript of Donne’s elegy commonly known as “To his mistress going to bed” (Rosenbach MS 239/22) using infrared reflectography.
6 Morris Eaves illustrates the limitations of the print archive with the case of William Blake, a writer who embraced new technologies to produce media-rich texts but whose “ambitious innovations produced a daunting overload of information and trail of reader resistance and resentment” (2006: 211). While developments in print production allowed this material to be brought together, enabling “serious study and reflection on a scale previously unthinkable,” “dispersal, dismemberment, and translation seriously distorted the true picture of Blake's achievement” (ibid.). The William Blake Archive is bringing to bear tools that reassemble this material with unprecedented richness.
7 I would like to thank Leslie Howsam, president of CASBC/ACÉHL, for initiating this symposium, for allowing SDH/SEMI to become part of her plans, and for releasing me to put together this collection of essays.
8 Shillingsburg refers to “(re)presentation” with the qualification that every act of mediation is itself “a new creative act” (7). Shillingsburg does not, however, clearly articulate the sorts of differences in modes of mediation that I am aiming at here.
9 Jean Guy Meunier, Ismail Biskri, and Dominic Forest posit that “The first components of the analytical model are the units of information; one has to identify the basic, if not the atomic, components of the processes. Traditional approaches usually take the word as the basic unit; this is a highly laden theoretical choice.” In this case, Liu and Smith are positing a model not only for expert reading (identifying “the constituents of a reading and analysis process as practised by experts”) but for reconstructing the expert production of the original manuscript (Meunier et al., 2005: 128).
10 Jerome McGann and Lisa J. Samuels’s similar idea of “deformance” (1999) informs McGann’s objectives for digital remediation of texts (2001: 105-35, 144).
Works Cited

BOLTER, Jay David and Richard GRUSIN. (2000). Remediation: Understanding New Media, Cambridge: MIT Press.
BRADLEY, John. (2005). “Text Tools” in A Companion to Digital Humanities, (eds. Susan Schreibman, Ray Siemens, and John Unsworth), Malden, MA: Blackwell Pub.
EAVES, Morris. (2006). “Multimedia Body Plans: A Self-Assessment” in Electronic Textual Editing, (eds. Lou Burnard, Katherine O'Brien O'Keeffe, and John Unsworth), New York: Modern Language Association of America, 210-223.
FLANDERS, Julia. (2005). “Detailism, Digital Texts, and the Problem of Pedantry”, Text Technology 14.2: 41-70.
HIRSCHFELD, Heather Anne. (2004). Joint Enterprises: Collaborative Drama and the Institutionalization of the English Renaissance Theater, Amherst, MA: University of Massachusetts Press.
KIERNAN, Kevin. (2006). “Digital Facsimiles in Editing” in Electronic Textual Editing, (eds. Lou Burnard, Katherine O'Brien O'Keeffe, and John Unsworth), New York: Modern Language Association of America, 262-268.
MAROTTI, Arthur. (1986). John Donne: Coterie Poet, Madison: University of Wisconsin Press.
McGANN, Jerome. (2001). Radiant Textuality: Literature After the World Wide Web, New York: Palgrave.
McGANN, Jerome and Lisa J. SAMUELS. (1999). “Deformance and Interpretation,” New Literary History 30.1: 25-56.
McKENZIE, D. F. (1986). Bibliography and the Sociology of Texts, London: British Library.
McLEOD, Randall. (2005). “Obliterature: Reading a Censored Text of Donne’s ‘To his mistress going to bed’”, English Manuscript Studies 1100-1700 12: 83-138.
MEUNIER, Jean Guy, Ismail BISKRI, and Dominic FOREST. (2005). “A Model for Computer Analysis and Reading of Text (CARAT): The SATIM Approach”, Text Technology 14.2: 123-151.
MOSS, Ann. (1996). Printed Commonplace-books and the Structuring of Renaissance Thought, Oxford: Clarendon Press.
POTTER, Rosanne. (1988). “Literary Criticism and Literary Computing: The Difficulties of a Synthesis”, Computers and the Humanities 22.2: 91-97.
RHODES, Neil and Jonathan SAWDAY, eds. (2000). The Renaissance Computer: Knowledge Technology in the First Age of Print, London: Routledge.
SHILLINGSBURG, Peter L. (2006). From Gutenberg to Google: Electronic Representations of Literary Texts, Cambridge: Cambridge University Press.
SIEMENS, Ray G. (2002). “A New Computer-assisted Literary Criticism?”, Computers and the Humanities 36: 259-267.
SMALL, D. (1996). “Navigating Large Bodies of Text”, IBM Systems Journal 35.3&4 (http://www.research.ibm.com/journal/sj/353/sectiond/small.html).
TWYCROSS, Meg. (2003). “Forget the 4.30 a.m. Start: Recovering a Palimpsest in the York Ordo paginarum”, Medieval English Theatre 25: 98-152.
SDH/SEMI http://www.sdh-semi.org/
CASBC/ACÉHL http://casbc-acehl.dal.ca/
ETRUS http://etrus.usask.ca/