I had the pleasure of attending my first Association of Digital Humanities Organizations conference last week in Montreal. The conference began with two days of workshops, and I attended the Advancing Linked Open Data in the Humanities session on Monday. Overall, the session offered reassurance that we are not alone in the trials and tribulations of adopting Linked Open Data (LOD) models. The first breakout session dedicated its discussion to the challenge of making LOD user interfaces that are effective for users without obscuring the complexity of the data structures behind them. I learned of the Social Networks and Archival Context Cooperative (SNAC) project, hosted by the U.S. National Archives and Records Administration (NARA) and funded by IMLS, NEH, and the Mellon Foundation. SNAC seeks to “enable archivists, librarians, and scholars to jointly maintain information about the people documented in archival collections.” Alison Hedley, of The Yellow Nineties Online, gave a great presentation on their efforts to create a prosopography in LOD that was especially relevant to our work on the Printers’ File. She went over “best practices” that I found incredibly instructive: they were concise, yet sophisticated in addressing how practitioners must “look at the information structure the historical data is embedded in” and “document contingencies.” This presentation, as well as those by Constance Crompton and others, fueled a breakout session that centered on one of the most valuable conversations I took part in and heard at DH: the relationship between data as it exists and the representation, both historical and present, we look for it to capture.
In her presentation on “Cultural (Re-)formations: Structuring a Linked Data Ontology for Intersectional Identities”, Susan Brown, of the Orlando Project (and many others), perhaps summed it up best when she reflected on the “need to talk to data without endorsing an impoverished representation of gender.” Similar points were made about the ways in which data models oversimplify race, and the ways in which we can’t ignore these models (“we need to talk” to them, as Susan said), but we also want to consider how LOD might document more complex and nuanced understandings of these social constructions. In a similar vein, I saw a great panel on “Accessing Alternative Histories and Futures: Afro-Latin American Models for the Digital Humanities” in which Eduard Arriaga examined the ways in which our current understanding of diversity can be an “intellectual pitfall.” In an effort to avoid oversimplification, he called for “more powerful destruction and enablement.” I hope that I am able to carry his reflections into AAS’s continued work with Black Bibliography as well as our potential involvement in Northeastern’s Design for Diversity forum.
Another theme that emerged from the sessions I attended was the need for documentation for all we make. Through these presentations, most notably Livingstone Online: Illuminating Imperial Exploration, I began to understand the potential for documentation not just as a prevention against institutional amnesia in what we at AAS often refer to as the “hit by a bus” scenario, but also as a form of reflective project work. Documentation can be an opportunity to situate a project in time and place:
- What resources are available and what do we wish were available?
- What data must we rely on even though we see its limitations (see paragraph above)?
- With the benefit of hindsight, how might we do things differently next time?
As Megan Elizabeth Ward and Adrian S. Wisnicki reflected on Livingstone Online, such documentation helps us to understand access as a matter of repair (a la Eve Sedgwick’s “reparative reading”) and transparency. This presentation was also incredibly instructive in thinking through how spectral imaging complicates our understanding of the original material object and its digital surrogate. The spectral imaging in this GIF of Livingstone’s 1870 Field Diary page that apparently doubled as a coaster reveals a pre-textual moment for this object, a moment that the human eye could not recapture in the way spectral imaging makes possible.
Questions around digital publishing, both in terms of scholarly editions and scholarly monographs, percolated throughout much of the conference. Transcription tools were showcased, including Transkribus, which the Omohundro Institute is using for its Georgian Papers Programme, and TextLab, which John Bryant is using for scholarly editions of Herman Melville’s work in the Melville Electronic Library. The Digital Scholarly Editions Initial Training Network (DiXiT) panel discussed how digital scholarly editions are often hard to identify in library catalogs, as well as how important it is to include the underlying XML for digital editions. Speaking of XML, James Cummings gave a really helpful talk on “Myths and Misconceptions about the Text Encoding Initiative (TEI)” that addressed the expectations and preconceived notions with which people come to digital markup. Conversation abounded about new directions in scholarly publishing, including a panel of reports from the Mellon-funded Monograph Publishing in the Digital Age initiative, among them the new Greenhouse Studios at the University of Connecticut.
In the conversations that related most to libraries, there was a lot of talk about the International Image Interoperability Framework (IIIF) and how it is changing image sharing among cultural heritage organizations. I saw some really computationally complex uses of bibliographic data in DH projects, including Ben Schmidt’s analysis of HathiTrust data as well as David-Antoine Williams’s efforts to tag (or to have students tag) all of the entries in the Oxford English Dictionary. Both of these efforts avoided the confusion that often arises when scholars work with MARC data and encounter the 650 (subject) and 655 (genre/form) fields; instead, they used the Library of Congress Classification shelf marks embedded in the data for a much more direct understanding of the content of their corpora. I’m not sure this necessarily solves the questions of genre trouble, but to me it was a new approach.

I presented a paper on the panel Beyond Access: Critical Catalog Constructions entitled “‘The Technology of Shared Cataloging’: A Retrospective,” in which I looked at the creation and re-creation of two rare book union catalogs: the English Short Title Catalogue (ESTC) and the North American Imprints Program (NAIP). In 1981, at the Bibliographical Society of America symposium from which my paper took its title, William Todd wrote, “Perhaps we do not yet fully appreciate the situation, now rapidly materializing, whereby computers converse with each other in any mode, while the rest of us, mere mortals, stand mute before them.” Remarks like this, which abound in the excitement and trepidation expressed during the emergence of these rare book union catalogs, echo a similar exuberance and hesitancy around the transition from MARC to linked data models.
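To make the shelf-mark approach above a bit more concrete: mapping a Library of Congress Classification call number to a broad subject is essentially a prefix lookup on its leading letters. The sketch below is my own minimal illustration of that idea, not code from any of the projects named, and the class table is a tiny subset of the full LCC schedule that I have supplied for demonstration.

```python
import re

# A small, illustrative subset of the LCC schedule (not the full classification).
LCC_CLASSES = {
    "PS": "American literature",
    "PR": "English literature",
    "QA": "Mathematics",
    "Z": "Bibliography. Library science",
    "E": "History: America",
}

def lcc_class(call_number: str) -> str:
    """Return a broad subject for an LCC call number by matching its
    leading letters against the class table, longest prefix first."""
    m = re.match(r"[A-Z]+", call_number.strip().upper())
    if not m:
        return "unknown"
    letters = m.group(0)
    for prefix in sorted(LCC_CLASSES, key=len, reverse=True):
        if letters.startswith(prefix):
            return LCC_CLASSES[prefix]
    return "unknown"

print(lcc_class("PS3523.I58"))  # American literature
print(lcc_class("Z1007 .B58"))  # Bibliography. Library science
```

Compared with parsing 650 and 655 fields, this trades granularity for directness: every classed record yields exactly one broad subject, which is what makes it attractive for corpus-level analysis.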
I argued that consideration of the rare book catalog as a digital humanities project invites reassessment of legacy information architecture as well as the many hands that built the bibliographic structures on which so much of the work of the digital humanities rests. This gave me a chance to conclude with a few brief remarks about the Printers’ File and our work with the Advanced Research Consortium (ARC).
I thoroughly enjoyed the DH conference for much the same reason that I enjoy Society for the History of Authorship, Reading and Publishing (SHARP) conferences: their focus on methodology brings people of different scholarly and professional backgrounds and perspectives together to share frustrations, ideas, and encouragement. Conversations about how we do what we do lead easily into conversations about why we do what we do, and such exchanges, whether one is taking part in them or listening to them, are the most inspiring.