Purpose and aim
What research question or objective is being addressed?
- Investigates whether scholars and users of digital archives are ready to adopt AI and computational research methods.
- Explores barriers and sources of resistance: skills gaps, disciplinary traditions, infrastructural limits and academic reward systems.
- Proposes recommendations for making digital archives more usable, inclusive and computationally effective.
Methodology
Describe the research design, methods and sample size.
- Mixed methods: an open-call survey (22 respondents) and semi-structured interviews (33 professionals: archivists, librarians, digital humanists, literary scholars, historians and computer scientists).
- Analytical frame: compares traditional research practices (close reading and archival methods) with computational and AI-based methods.
- Primarily exploratory: maps attitudes, skills and systemic challenges.
Key findings and arguments
- Training gap: many humanities researchers lack computational
literacy; training is ad hoc, optional or shallow.
- Data readiness: digital archives are rarely ‘AI-ready’;
pre-processing and metadata are weak points.
- Bias and representation: digitisation can reproduce colonial and
social biases; transparency and documentation are critical.
- Academic structures: humanities publishing favours the solo
researcher, whereas computational projects require teamwork and
infrastructure.
- AI opportunities: machine learning can filter, classify and open
up access to large collections; hybrid close- and distant-reading
practices are emerging.
- Structural change is essential: training, infrastructure and interdisciplinarity are needed to realise AI’s promise for archives.
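The hybrid close- and distant-reading practices mentioned above can be illustrated with a minimal distant-reading sketch. This is purely illustrative (not from the paper): the toy corpus, the `distant_read` helper and the tiny stopword list are all invented here, and a real project would use a proper NLP stack.

```python
import re
from collections import Counter

# A deliberately tiny, illustrative stopword list.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is"}

def distant_read(texts, top_n=3):
    """Rank the most frequent content words across a corpus --
    the kind of scalable first pass that can direct close reading."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(top_n)

# A toy 'collection': three short catalogue descriptions (invented).
corpus = [
    "Letters of the colonial administration in the archive.",
    "The archive holds digitised letters and ledgers.",
    "Ledgers and letters await metadata in the archive.",
]
```

A pass like `distant_read(corpus, 2)` surfaces the dominant vocabulary of a collection at scale, after which a researcher can return to close reading of the items those terms flag.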
Relevance
How does it link to the research questions or framework?
- Connects directly to the project’s concern with epistemic infrastructures of knowledge.
- Highlights a key tension in re-evaluating DDR models: the field often lacks computational readiness but cannot ignore AI-driven change.
- Resonates with an institutional-logics perspective (cf. Mortati’s fifth order): archives and research cultures shape what knowledge counts.
- Reinforces the methodological spine: combining archival analysis with computational tools requires cultural and structural shifts, not just technical fixes.
Project integration
Why it helps the project (evidence-linked)
- Quantifies user pain points (availability 86%, discoverability 68%, computational and online access 64%, search 59%).
- Justifies a skills programme (a self-taught majority; only 32% confident working at scale; 64% requested training).
- Codifies reproducibility and transparency norms (log tools and versions; document selection, OCR and metadata).
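The reproducibility norm (log tools and versions; document selection, OCR and metadata decisions) could be operationalised with something as simple as a machine-readable provenance record. The sketch below is an assumption of ours, not the paper's proposal; the field names and example values (corpus name, OCR engine string) are invented for illustration.

```python
import json
import platform
import sys
from datetime import datetime, timezone

def provenance_record(corpus_name, selection_criteria, ocr_engine, notes=""):
    """Capture the minimum context needed to rerun or audit a
    computational pass over an archival corpus."""
    return {
        "corpus": corpus_name,
        "selection_criteria": selection_criteria,  # why these items, not others
        "ocr_engine": ocr_engine,                  # tool name + version string
        "python": sys.version.split()[0],          # interpreter version
        "platform": platform.system(),
        "created": datetime.now(timezone.utc).isoformat(),
        "notes": notes,                            # known biases, gaps, cleaning steps
    }

# Illustrative values only -- every string here is hypothetical.
record = provenance_record(
    corpus_name="sample-digitised-letters",
    selection_criteria="all items flagged 'public domain' in catalogue export",
    ocr_engine="tesseract 5.3.0",
    notes="OCR uncorrected; known gaps in 1890s volumes",
)
print(json.dumps(record, indent=2))
```

Committing such a record alongside results is one low-cost way to meet the paper's call for documented selection, OCR and metadata decisions.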
Hooks into the project
- Workstreams: access and rights triage; metadata and OCR
remediation; researcher dashboard and search; training sprints;
reproducibility checklist; GLAM collaboration MOUs.
- Deliverables and decisions: archive prioritisation; remote-access
requirements; text-mining stack; documentation standards for
provenance.
- Stakeholders: GLAM partners; principal investigators and co-investigators; data stewards; postgraduate researchers.
Use across the methods spine
Critical evaluation
Strengths
- Grounded in empirical data (survey and interviews), not only
conceptual speculation.
- Balanced tone: acknowledges opportunities without technological
determinism.
- Identifies structural barriers (career incentives and infrastructure) rather than blaming individuals.
Weaknesses and limitations
- Small sample (22 survey respondents; 33 interviewees) limits
generalisability.
- Respondents skew towards those already engaged in DH and archives, and so may under-represent resistant or marginal voices.
- Limited theorisation: a descriptive mapping of obstacles, with little engagement with broader epistemological questions.
Author’s credibility
- Jaillant is a recognised voice in digital humanities and archives,
with prior work on accessibility; credible and well placed.
- Published in a respected ACM journal with a DH and archives readership.
Contextual validity
- Findings map well onto UK and European academic contexts (AEOLIAN
network).
- Applicability elsewhere (for example, the Global South or underfunded archival contexts) is less clear.
Comparisons
- Complements Mortati (2022): while Mortati theorises an epistemic
‘fifth order’, Jaillant and Aske ground obstacles in practical
archival infrastructures.
- Aligns with Cordell (2017) on ‘dirty OCR’ and critical engagement
with imperfect data.
- Echoes FAIR principles (Wilkinson et al., 2016), highlighting gaps
in practice.
- Counterbalances overly optimistic DH narratives.
Interpretation
Your own insights
- Computational adoption is not primarily a technical problem but a
cultural and institutional one.
- Empirically supports the claim that knowledge infrastructures
constrain method: like archivists, designers cannot adopt AI without
systemic support.
- Backs a review strategy that critiques conditions of possibility
for computational methods rather than listing tools.
- Suggests DDR methods must consider not only methodological validity but also infrastructural readiness and institutional culture.
Evidence to quote or paraphrase
- ‘Computational methods are often not embedded in the training of
early-career researchers, leaving them to rely on self-teaching or ad
hoc workshops.’ (page X)
- ‘Born-digital and digitised collections are rarely AI-ready; they
require pre-processing, cleaning and metadata creation that most
archives are ill-equipped to provide.’ (page X)
- ‘The solo-researcher model of the humanities is at odds with the collaborative demands of computational projects.’ (page X)
Related works
- Cordell (2017) — OCR and data quality in DH.
- Wilkinson et al. (2016) — FAIR principles.
- Jaillant (2022) — accessibility of digital archives.
- Mortati (2022) — fifth order of design.
Questions for further research
- How can computational literacy be embedded structurally into
postgraduate design and humanities training?
- What governance models can ensure critical, transparent, debiased
digitisation?
- Can design research provide models of collaborative knowledge
production that overcome the solo-researcher barrier?
- How can close-reading traditions be balanced with scalable computational practice without reducing either?