Current research: social histories of programming languages and cultures of software development
I orient my research at the intersection of literary and media studies, digital humanities, and archival work in order to gather practices and theoretical approaches that allow me to describe how the human role in art is entangled with the nonhuman materiality of media, the environment, and language. As a literary scholar, I specialize in Anglophone modernist literature and identify the period as a flashpoint that catalyzed interrelated changes in aesthetics, conceptions of the human, and technical media into the twenty-first century. I take two related approaches to these questions: the first describes the material media strategies authors use to manage and present information, and the second investigates theoretical questions of how language functions as information in literary texts. I also have experience working with established digital humanities initiatives, as the former Project Manager of the Modernist Journals Project, and in conceiving and executing an independent digital project that transforms Ezra Pound’s A Draft of XXX Cantos into a database using the Text Encoding Initiative’s (TEI) XML markup language. This digital work bridges the theoretical and material approaches of my scholarship by using digital tools to animate the informational effects I argue are at work in literary texts. Presently, I am seeking a publisher for my first book, Avant-Garde Information Media, and beginning work on my second, Language is Code for Literature. Through these paired monographs, and the digital projects that accompany them, I am establishing a conceptual vantage point from which to look forward from the modernist period into contemporary culture.

My first book project, Avant-Garde Information Media, situates the innovative aesthetics and media effects of James Joyce’s Ulysses, Jean Toomer’s Cane, H.D.’s Palimpsest, and Ezra Pound’s A Draft of XXX Cantos within the larger shifts in early twentieth-century information organization methods.
During this period, archives, public and private bureaucracies, industry, and the burgeoning information technology fields all reacted to the global information overload by devising new ways of organizing and accessing cultural records. These material and conceptual innovations were unified by a transition from a linear, temporal model of organization to one that relied on associated characteristics between informational elements to organize and retrieve records according to dynamic queries. This shift resulted in media objects such as card catalogs, various sortable filing systems, and Vannevar Bush’s Memex, all of which, by mid-century, developed into the digital database. The first chapter argues that the extensive note-taking system Joyce developed to manage the information he integrated into Ulysses operated precisely like a database. He based this system on Homer’s Odyssey, using the epic as an associational framework for his novel, a decision that would provide a model for other authors seeking new ways to unify their works. My second chapter examines Toomer’s attempts to devise a model of racial identity based on intersecting differences, which he achieves in Cane by unifying a catalog of poems and stories through associations between different linguistic elements, images, texts, and identities. My third chapter builds on the first two by linking H.D.’s practice of assembling ancient textual fragments with her search for an aesthetics that culminated in Palimpsest, a text that capitalizes on the layering effect evoked by its title to create a database of feminist stories connected by the repetition of textual and narrative elements. In his XXX Cantos, my fourth chapter argues, Pound selects precise ancient and modern textual elements from the archive and arranges them onto the pages of his poem according to aesthetic, thematic, and informational patterns.
These tactics represent both an experimental poetics and a database-like solution for managing a literary and historical archive that had expanded to intractable proportions. The material and conceptual information management strategies these texts develop parallel contemporaneous innovations in the information technology industries and reveal the artistic and technical roots of what has, in the digital age, become the database. My work with associational logic has led me to questions about the agency of media and information in the creation and functioning of literary texts.

My next book project, Language is Code for Literature, investigates the procedures, protocols, and executable functions of language in twentieth- and twenty-first-century literary texts. These characteristics all resemble what we now refer to in the digital age as code, programming, or software. The argument begins by following poet and critic John Cayley in defining code rhetorically, in order to identify the persuasive strategies, implied audience, and executable functions that animate code, but also to establish a definition applicable to pre-digital print media. My motivation for expanding notions of code beyond the digital media with which they are associated stems from my conviction that modernist authors reached conclusions about language that prefigure digital instantiations of coding languages. To establish this conceptual framework, I draw on contemporary theorizations of the relationship between humans, language, and media, in particular Richard Grusin’s description of the “nonhuman turn” in the introduction to his edited collection of the same name. I historicize this turn alongside the rise of electronic media in the early twentieth century and trace its effects on avant-garde experimentation with language.
The introduction to this book traces the widespread efforts by modernist authors—Gertrude Stein, Ezra Pound, William Carlos Williams, James Joyce, and Samuel Beckett—to deanthropomorphize print media and the language they use in their literary works. My tentative chapter breakdown begins with Stein’s The Making of Americans, in which she experiments with abstract, pronominal language in ways that resemble the bit packing of binary code in order to automate human information labor using the machinic potential inherent in language. The second chapter follows a trajectory from Joyce’s use of coded procedures for generating Ulysses into Finnegans Wake, arguing that he developed a set of rules for collecting informational elements to include in his texts which forecast the functions search algorithms use to retrieve information based on user-entered queries. I follow these early chapters on modernism with proposed investigations of Susan Howe’s Spontaneous Particulars: The Telepathy of Archives and Debths, Anne Carson’s Float, and Nathaniel Mackey’s Late Arcade.

My comparative approach to literary and media studies figures into my digital humanities work as well. In order to more fully express the comparison between the XXX Cantos and graph databases, I have developed what began as an experimental chapter of my dissertation and is now expanding into an independent digital humanities project. The project is a TEI-encoded version of the XXX Cantos that uses structural and semantic tags to identify and relate the separate elements appearing in the poem. I created this project using the skills I acquired on the staff of the Modernist Journals Project, and I currently plan to use it as the centerpiece of an introductory digital humanities course.
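As a sketch of the kind of markup involved (the identifiers and attribute values here are hypothetical illustrations, not the project’s actual tag set), the opening lines of Canto I might be encoded and cross-linked along these lines:

```xml
<!-- Hypothetical sketch: the @xml:id and @ana values are illustrative only. -->
<lg type="canto" n="I">
  <l xml:id="c1.l1">And then went down to the ship,</l>
  <l xml:id="c1.l2" ana="#odyssey">Set keel to breakers, forth on the godly sea, and</l>
</lg>

<!-- A separate linking layer can then relate encoded elements across cantos,
     approximating the edges of a graph database: -->
<linkGrp type="association">
  <link target="#c1.l2 #c2.l14"/>
</linkGrp>
```

The `<linkGrp>` layer is what makes the comparison to graph databases concrete: each `<link>` behaves like an edge between two encoded nodes, so queries can traverse associations rather than read linearly.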
As I extend the project into the ninety remaining cantos, I will collaborate with students, providing them with instruction in text encoding as well as practice translating literary interpretation skills into digital methods and experience working on collaborative learning projects. The technical skills and approaches that inform this project will also serve as a model for future projects on similarly complex, archivally oriented texts. For instance, I envision an encoding of Walter Benjamin’s Arcades Project, a text that, like The Cantos, comprises a set of intertextual references brought together in the codex format. Once the elements in these texts have been defined and encoded, an additional tagging structure I have developed allows related elements to be connected to one another, providing an agile tool for tracing patterns within these massive and highly complex miniature archives. In addition to potential digital editions, the overarching goal of projects of this kind is to provide students and scholars with tools to manipulate the metadata associated with each text, generating dynamic maps, text analysis, semantically encoded searches, and various visualizations.

My wife and I like to garden, camp, travel, and cook. I am a devoted Chicago Cubs fan.
This text is a contribution to a tradition of similar texts within computer science and technology, written by computer scientists, programmers, and others, in which we explain our beliefs about the tools – hardware, software, networks, and programming languages – that we use, as well as their socio-political and philosophical implications.
The following is offered as a contribution to the field of computer science education: a reflection on five months of learning the functional programming language Haskell, out of which has emerged for us the lesson that programming languages are ‘just other programs’. This lesson, we argue, is never felt more strongly than in a functional language like Haskell. Its principal benefit is to bring down the barriers between the creators and users of programming languages, i.e. “programmers” – both are the same; a psychological and sociological fact not without revolutionary characteristics.
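The claim that a language is ‘just another program’ can be made concrete in a few lines of Haskell. The sketch below is our own minimal illustration (not drawn from any particular course): a tiny arithmetic language whose syntax is an ordinary data type and whose implementation is an ordinary function, so that defining a language and writing a program become the same activity.

```haskell
module Main where

-- The abstract syntax of our tiny language is an ordinary Haskell data type.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- The "implementation" of the language is an ordinary Haskell function:
-- an interpreter that reduces an expression to a value.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 2) (Mul (Lit 3) (Lit 4))))  -- prints 14
```

Anyone who can write `eval` has, in the relevant sense, created a programming language; extending `Expr` with a new constructor and `eval` with a new equation is language design carried out as everyday programming.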
Picture This: Advanced Visualization for the Humanities is a Level II proposal to develop software tools that will open up the potential of high-resolution displays to researchers in the humanities. These tools will provide humanities users with simplified access to advanced visualization resources through the popular open-source programming environment Processing. The short-term result of this start-up project will be open-source software that enables Processing to work with high-resolution tiled displays.