[EYCON] Workshop 2: ‘Using Visual AI applied to Digital Archives’, Imperial War Museum (London) – 8-9 June 2023

Posted on May 24, 2023

We are pleased to invite you to the second EyCon workshop, “Using Visual AI Applied to Digital Archives”, organised in partnership with the Imperial War Museums (IWM) Institute, IWM’s research and knowledge exchange hub. The EyCon project (Early Conflict Photography 1890-1918 and Visual AI) is co-funded by the AHRC and the Labex Past in the Present, and is led by Dr Lise Jaillant (Loughborough University) and Dr Julien Schuh (Paris Nanterre University), in partnership with the University of Paris Cité and Prof Daniel Foliard. The project aims to harness AI-reliant tools to analyse a large corpus of colonial photographs. EyCon’s database will include thousands of historical photographs documenting armed violence, and the project is partnering with a network of archival institutions in France and the UK.

The workshop will provide a forum to address the significant challenges that academics, archivists and other professionals face when dealing with very large amounts of visual data, particularly sensitive data. It will combine presentations (Day 1) with interactive sessions (Day 2), allowing close engagement with French and UK cultural institutions affected by the challenges of sensitive digitised archives.


Programme

Day 1

Thursday, 8 June (10:00 to 16:15 BST)

10:00 – 10:30 EyCon Team and Imperial War Museums Team: Welcoming remarks and general workshop introduction / Tea and coffee

SESSION 1 – Using Computational Methods to Explore Illustrations in Digital Visual Collections

10:30 – 11:00 Professor Julia Thomas (Cardiff University) and Irene Testini (Cardiff University) – Getting the Picture: Developing Computational Methods to Interrogate Large Datasets of Historical Book Illustrations

The talk will introduce the methods of computational analysis that we have used to interrogate a large dataset of around a million illustrations from books spanning the seventeenth to the twentieth centuries and covering diverse subject matter, including literature, history, geography, and philosophy. This work has been developed in the context of two major AHRC-funded projects: the first led to the creation of the largest online searchable resource dedicated to book illustrations, The Illustration Archive (https://illustrationarchive.org.uk); the second uses computer vision tools to analyse the images in this dataset in terms of their specific representation of places and the people associated with them. Particular attention will be given to the ways in which we have made these images searchable, using a variety of methods from crowdsourcing to caption capture and computer vision. These methods have revealed culturally sensitive images, especially those relating to the historical representation of people and racial and ethnic groups. The focus of our latest project, ‘Finding a Place’, is to use computational methods to actively explore these problematic illustrations, to identify how these images proliferate and circulate, and to examine how the ‘stereotype’ (which is itself, of course, a printing term) is constructed.

11:00 – 11:30 Dr Thomas Smits (Universiteit Antwerpen) – Using multimodal machine learning to study the visual representation of war

Most research in Digital Humanities is monomodal, meaning that the object of analysis is either textual or visual. Recently developed multimodal deep learning models offer new possibilities to explore, analyze, and enrich multimodal historical collections at scale. Trained to predict which images and texts belong together, these models can be applied to a wide variety of text-to-image, image-to-image, and image-to-text prediction tasks without the need to train them for a specific task or on specific data. I’ll showcase the possibilities of multimodal machine learning by applying it to a collection of 26,000 digitized photographs of the Dutch Army Information Service, which were made during the Indonesian War of Independence (1945-1949). Previous research suggests that this collection presents a highly sanitized and ‘bloodless’ image of this war. Can we use multimodal machine learning to uncover patterns in the visual representation of violence in this collection? On a more general level, I’ll argue that multimodal machine learning enables us to move past the hyper-focus on a small number of ‘iconic’ war photographs and study the visual representation of armed conflict at scale.
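
To make the idea concrete, here is a minimal sketch of the kind of task-agnostic text-to-image retrieval such models support, using the openly released CLIP checkpoint via the Hugging Face transformers library. This is illustrative rather than the speaker’s own tooling; the folder path and query string are placeholder assumptions.

```python
# Illustrative sketch (not the speaker's tooling): rank a folder of photographs
# against a free-text query with CLIP, a model trained to predict which images
# and texts belong together. "photos/" and the query are placeholder assumptions.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image_paths = sorted(Path("photos").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in image_paths]
query = "soldiers marching through a village"  # no task-specific training needed

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.squeeze(1)  # one score per image

# Show the five photographs most similar to the query.
for score, path in sorted(zip(scores.tolist(), image_paths), reverse=True)[:5]:
    print(f"{score:.2f}  {path.name}")
```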

11:30 – 11:45 Q & A

11:45 – 12:00 Break

SESSION 2 – New Models of Accessing Digital Archives using Visual AI

12:00 – 12:30 Eléonore Plantard and Véronique Pontillon-Valedon (ECPAD) – The New ECPAD Digital Platform: When Workflows Question the Communication of Archives

The digital platform project of the Establishment of Communication and Audiovisual Production of Defence (ECPAD), launched in the mid-2010s, stems primarily from the need to accelerate the availability of the institution’s archives to the public. Combining a new archiving system, improved archival processing and dissemination on the Internet, the ImagesDéfense project is an opportunity for the ECPAD to completely redesign and rethink its workflow around the new tools that digital resources offer. A new conceptual data model and a set of interfaces support each processing step: from transfer via an archive-collection module, through preservation, transcoding and even the marketing of these archives, to dissemination on the website imagesdefense.gouv.fr. These new, partially automated workflows distinguish analog images from digital archives and guarantee that archiving complies with time requirements, but they also require human control and raise new questions about the communication and dissemination of archives. In this matter, the institution must take into consideration the French heritage code, the intellectual property code and the defence code. The various interfaces and software for accessing images, in the media library in situ or on the Internet, must make it possible to differentiate the rules governing the communicability of images on the one hand, and their classification levels on the other, and finally allow images to be editorialised according to their sensitivity.

12:30 – 13:00 Giacomo Alliata (EPFL-CDH-DHI-eM+) – Augmenting the Archival Experience of Embodied Knowledge Through Visual AI: a Computational Framework  

With the burgeoning use of digital technologies in safeguarding intangible and living heritage, memory institutions have produced a significant body of material that has yet to be made available to the public. The quest for new ways to unlock these massive collections has intensified, especially for the knowledge embedded in complex data formats such as audiovisual and motion capture recordings. Aiming to bridge the gap between the living heritage archive and the experience of embodied knowledge, this work examines a computational pipeline incorporating machine intelligence, archival science, and digital museology. Reflecting on how embodied knowledge is sustained and potentially valorised, we devise approaches based primarily on vision-based extraction and estimation, movement analysis, aesthetic visual reconstruction and machine learning. The goal is to augment the archival experience with new modes of exploration, representation, and embodiment. This presentation will report on the procedures and computational tools examined through two use cases. In the first, datafication enhancement is employed to allow archival exploration via embodied cues in the Hong Kong Martial Arts Living Archive, a multimodal archive that leverages state-of-the-art motion capture tools to document the living practices of martial arts. In the second, an experiment with the Prix de Lausanne archive, a collection of 50 years of video recordings of dance performances from the internationally renowned competition, we augment the visual representation of the dancers’ movements through a range of aesthetic expressions and visual metaphors. Though serving different purposes, both projects operate on the proposed framework and extract archive-specific features to create meaningful representations, revealing the versatility of these computational capacities. The practices also represent a model of interdisciplinary collaboration in which archivists, computer scientists, artists and knowledge holders join hands to renew strategies for archival exploration and heritage interpretation.

13:00 – 13:15 Q & A

13:15 – 14:15 Lunch Break

SESSION 3 – Visual AI, Digital Archives and GLAM institutions: Challenges and Future Outputs

14:15 – 14:45 Geoffrey Browell (King’s College London) – Back to basics: why technology alone is not enough to unlock the potential of visual archives  

Drawing on practical examples from King’s College London Archives’ holdings in nineteenth- and twentieth-century military and medical history, this talk will explore some of the challenges in acquiring, handling, describing, and facilitating access to often sensitive legacy visual collections and their digital surrogates. These include mountainous cataloguing backlogs, a pressing shortage of professional conservators, the absence of the specialist cultural or historical knowledge needed to identify and contextualise collections, and institutional caution and inertia when confronted with collections identified as controversial or deemed politically ‘difficult’. It will argue that technology is not enough: for high-tech approaches, including AI, to make a meaningful contribution to problem-solving, they need to work with the grain of professional practice, audience expectations, funding constraints and employers’ appetite for risk. Ideally, they should provide short cuts for hard-pressed archivists, teachers, and interpreters of visual history, rather than another complex layer of technology that is difficult to understand, implement and use. The talk will explore the ways in which standalone digital humanities projects can be built into sustainable and durable services that find new uses for visual archives, and how the best examples facilitate cultural change, breaking down professional silos and encouraging meaningful co-working by archivists, scholars, and technologists.

14:45 – 15:15 Dr Christiane Sibille (ETH-Bibliothek, Zurich) and Nicole Graf (ETH-Bibliothek, Zurich) – Collections as Data in the Context of Visual AI

High-quality metadata has always been essential for the use of collections in GLAMs. Machine learning methods open up numerous new possibilities in this context, but also pose new challenges for cultural heritage institutions. The Image Archive of the ETH Library in Zurich addresses this situation by generating and curating not one but multiple metadata layers. The basis is formed by classic manual archive and library cataloguing. In addition, a group of external volunteers assists the Image Archive (https://ba.e-pics.ethz.ch) in enriching sparse or missing metadata (approx. 20,000 items per year), and 35,000 images per year are georeferenced on a three-dimensional globe using the crowdsourcing platform sMapshot (https://smapshot.heig-vd.ch/owner/ethz). In recent months, the Image Archive has started to include machine learning tools in its portfolio to broaden its experience in this field. First, key metadata fields were converted into English with the help of automatic translation. In addition, commercial software was used to recognize and label the nearly one million digitized images; the resulting keywords were integrated into the search interface as a public pilot, so far without manual post-processing. In our paper, we take a closer look at the work with these different layers, analyze the resulting challenges and provide an outlook on future activities. In doing so, we would like to contribute to the question of how archives can implement the idea of collections as data in the context of visual AI. We are particularly interested in how GLAM institutions can overcome data silos and create a data cycle that is characterized by FAIR principles and ideally generates added value for all stakeholders involved.
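
The abstract mentions commercial labelling software; as an open-source stand-in, a zero-shot approach with CLIP can assign candidate keywords from a controlled vocabulary. The sketch below is illustrative only: the vocabulary, probability threshold and file path are invented, and this is not the Image Archive’s actual pipeline.

```python
# Illustrative sketch only: zero-shot keyword labelling with the open CLIP model
# as a stand-in for the commercial tagging software mentioned in the abstract.
# The vocabulary, threshold and file path are invented for this example.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

vocabulary = ["mountain", "railway", "portrait", "aerial photograph", "bridge", "glacier"]
image = Image.open("example.jpg").convert("RGB")  # placeholder path

inputs = processor(text=[f"a photo of a {kw}" for kw in vocabulary],
                   images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1).squeeze(0)

# Keep keywords above a tunable probability threshold as candidate labels,
# which a curator could then review before they enter the search interface.
print([kw for kw, p in zip(vocabulary, probs.tolist()) if p > 0.2])
```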

15:15 – 15:30 Q & A

15:30 – 15:45 Break

15:45 – 16:15 Gallery Tour – Second World War Galleries, led by IWM colleagues

Day 2

Friday, 9 June (09:00 to 13:45 BST)

09:00 – 09:30 Welcome to Day 2/Tea and coffee

09:30 – 10:30 Alan Wakefield (Imperial War Museums) and Helen Mavin (Imperial War Museums) – Developing the Photograph Archive at Imperial War Museums

Alan will lead a discussion around the formation and development of the photograph collection following the establishment of the Imperial War Museum in 1917, with a focus on imagery documenting conflict and British military activity in India and across Africa during the period 1900-1929. Examples from the collection will be used to highlight sensitivities that are commonly encountered and to show how his team are working strategically with the collection to develop a deeper and more nuanced understanding of the histories it represents. Helen will present on projects recently undertaken at IWM with external collaborators to review and reconsider how collections, particularly photographs and film, are accessed and described, and by whom. She will share findings from recent AHRC-funded projects.

10:30 – 10:45 Break

10:45 – 12:45 Practical session on AI tools (2 hours), led by Dr Julien Schuh (EyCon French Co-I) and team – Visual AI for the EyCon Project, Sensitive Images, and the EyCon Database

The aim of this session is to present some of the tools developed in the context of the EyCon project:

– Image segmentation: In order to analyse the circulation of images in the press and in albums of the period, we must first be able to extract these illustrations. We will present the protocols developed to produce a corpus of press illustrations (a minimal sketch of one possible extraction heuristic appears after this list).

– Image processing and similarity detection: The quality of the photographs differs greatly between the original negatives and prints and the press reproductions. To apply object recognition or similarity algorithms to these images, we re-train models on images modified to mimic the transformations produced by photomechanical reproduction. This improves similarity recognition, both for identifying duplicate images and for aggregating images with similar content (see the degradation-and-similarity sketch after this list).

– Sensitive picture and text recognition: We will discuss the solutions envisaged to identify sensitive content, both in the images themselves and in captions and period descriptions.

– Pose estimation: The automatic recognition of people’s poses in photographs allows us to trace changes over time in the staging of bodies in the photography of violence (see the pose-estimation sketch after this list).

– Database: A presentation of the EyCon database will allow participants to get to grips with the site and to explore these corpora.
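
To give a flavour of the image segmentation step, here is a minimal sketch of one possible extraction heuristic using OpenCV, not the protocols the EyCon team will present: binarise the scanned page, merge nearby ink with a morphological close, then keep large, dense regions as candidate illustrations. The file names, kernel size and thresholds are illustrative assumptions.

```python
# Minimal sketch of one possible extraction heuristic (not the EyCon protocol):
# binarise a scanned press page, merge nearby ink with a morphological close,
# and keep large, dense regions as candidate illustrations. The file name,
# kernel size and thresholds are illustrative assumptions.
import cv2

page = cv2.imread("press_page.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder scan
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Close small gaps so halftone dots and dense blocks merge into solid blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 25))
blobs = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)
    # Text columns also survive the closing step, so filter candidate regions
    # on both size (>2% of the page) and ink density (>50% dark pixels).
    if w * h > 0.02 * page.size and cv2.mean(binary[y:y + h, x:x + w])[0] > 127:
        cv2.imwrite(f"illustration_{i}.png", page[y:y + h, x:x + w])
```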
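
For the similarity-detection step, the sketch below illustrates the underlying idea under stated assumptions rather than the project’s exact recipe: degrade a print-quality photograph to crudely mimic photomechanical reproduction (blur, reduced contrast, 1-bit dithering), then compare embeddings from a stock pretrained ResNet-50. The EyCon team re-train their models on such degraded images; this sketch shows only the degradation and the similarity comparison, with the file path and parameters invented for the example.

```python
# Sketch of the idea under stated assumptions (not the project's exact recipe):
# degrade a print-quality photograph to crudely mimic photomechanical
# reproduction, then compare embeddings from an off-the-shelf pretrained CNN.
import torch
from PIL import Image, ImageEnhance, ImageFilter
from torchvision.models import ResNet50_Weights, resnet50

def mimic_reproduction(img: Image.Image) -> Image.Image:
    """Crude stand-in for halftone printing: blur, flatten contrast, dither."""
    img = img.filter(ImageFilter.GaussianBlur(radius=2))
    img = ImageEnhance.Contrast(img).enhance(0.6)
    return img.convert("1").convert("RGB")  # 1-bit dithering, back to RGB

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
model.fc = torch.nn.Identity()  # use the 2048-d pooled features as an embedding
preprocess = weights.transforms()

original = Image.open("negative_print.jpg").convert("RGB")  # placeholder path
degraded = mimic_reproduction(original)

with torch.no_grad():
    emb = [model(preprocess(im).unsqueeze(0)).squeeze(0) for im in (original, degraded)]

# A model re-trained on such degraded pairs should score them as near-duplicates.
print("cosine similarity:", torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0).item())
```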
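
Finally, a hedged sketch of pose estimation with an off-the-shelf detector, torchvision’s Keypoint R-CNN, rather than the EyCon tooling itself: it extracts 17 COCO body keypoints per detected person, the kind of output that can feed an analysis of how bodies are staged. The image path and confidence threshold are placeholders.

```python
# Hedged sketch with an off-the-shelf detector (torchvision's Keypoint R-CNN),
# not the EyCon tooling: extract 17 COCO body keypoints per detected person,
# the kind of output that can feed an analysis of how bodies are staged.
import torch
from PIL import Image
from torchvision.models.detection import (KeypointRCNN_ResNet50_FPN_Weights,
                                          keypointrcnn_resnet50_fpn)
from torchvision.transforms.functional import to_tensor

weights = KeypointRCNN_ResNet50_FPN_Weights.DEFAULT
model = keypointrcnn_resnet50_fpn(weights=weights).eval()

image = Image.open("photograph.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    detections = model([to_tensor(image)])[0]

for score, keypoints in zip(detections["scores"], detections["keypoints"]):
    if score > 0.9:  # keep confident detections only (threshold is an assumption)
        # keypoints is a (17, 3) tensor of (x, y, visibility) per COCO joint.
        print(keypoints[:, :2].round().int().tolist())
```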