1. Introduction

Welcome to the SpokenWeb metadata cataloguing project! Broadly speaking, a metadata scheme is a rationalized way of describing a collection: a set of rules according to which a controlled “language” operates. In this context, “language” means an artificial construct defined for the purpose of describing our collections in a systematic and consistent way.

The following SpokenWeb Metadata Scheme consists of two documents: 1) an Excel spreadsheet, which is WHAT you use to enter information about a collection, and 2) the cataloguing rules (authority control and cataloguing procedures), which explain HOW to enter information into the Excel file.

1.1. How to Engage with a Collection

Collections can differ from each other significantly. Some were born via analogue media technologies (tape recorders, for example), some were “born digital” (recorded as digital files directly onto flash or hard drives), and some may be a mixture of the two. Some have already been processed to some extent using a different kind of cataloguing or metadata scheme, and others may never have been organized before. The nature and state of a collection will have an impact on the way, and the ease, with which you are able to catalogue the artifacts and objects in it. In most cases, in order to properly capture information about the collection you will be describing with the SpokenWeb Metadata Scheme, you will engage in the following activities:

  1. You will access and handle the collection and its artifacts through your local library or research center. You will also require a computer while processing the collection so that you can input your metadata (data about the audio data) into the SpokenWeb Metadata Scheme form. During the testing phase of our processing, this form will be a Google Sheet located here (https://goo.gl/2o6Pqq); we will then develop an online system with fields that you will use to input your descriptive information.
  2. Where possible, you will photograph the analogue assets (tapes, reels) and their containers. It is important to capture and retain images of the collection as a secondary source of information, and these images may also be integrated into digital presentations of the collection down the road. More information about how to do this appears below in the “Photographing Archival Assets” section.
  3. You will describe each artifact in the collection using the fields that comprise the SpokenWeb Metadata Scheme. This entails following a specific style of entering information into the relevant field categories, and may also entail seeking out the information you need, whether by examining the artifact, listening to the audible content of the sound recording, or consulting external sources (i.e., doing research!).
  4. A note on listening: Given that some of the information in our descriptions will come from listening to the contents of the recordings themselves, a few initial points about how to listen, and what to listen for, are in order. In listening to an archival recording for the purpose of cataloguing, the goal, in the first instance, is to capture a sense of the nature and overall contents of the recording in question. If the digital file you are listening to was originally an analogue tape, for instance, that tape may have been used to capture a single event (say, a poetry reading that lasted an hour), or it may have been used to capture multiple events recorded at different times.

     During the first listening, use transcription software of your choice (we recommend Transcriva) to produce a time-stamped account of the contents of the audio artifact, noting what we call discrete signature moments during the recorded audio event. By signature moments we mean moments that mark important discrete sections of the audio. For example, in a recording of a poetry reading event one may hear, first, a speaker deliver an introduction to the event, followed by the poet stepping up to the podium to read a series of poems from a book. In this example, the introduction may provide you with information that will be useful to your description of the file: the introducer may state his/her name, the date, the venue, the name of the invited poet who will soon read, and other points of information that may be required for our metadata fields. The introduction as a whole would comprise one discrete section of the audio, marked at its start with a timestamp and a tag that identifies the nature of the speech (i.e., Introduction) and, if you have it, the name of the speaker. When the poet steps up to the podium, you would mark that transition with a timestamp, identifying the name of the poet and the nature of the next section of speech (i.e., introductory remarks). When the poet begins to read, you should then identify each poem read as a discrete section of the audio, thus describing the contents of the audio as it progresses with timestamps and poem titles. If the recording consists of multiple events that took place at different times, those events should also be reflected in the timestamped description.

     The result of this approach to listening is a “Table of Contents” (TOC) of sorts for an audio recording. Beyond the production of this audio TOC, you may also find important clues that will help you fill in your other metadata fields. While such information often appears at the start of a recording (as in the discussion of the Introduction above), clues about who is speaking, where the event took place, and so on are sometimes found at unpredictable points throughout a recording. Listen for moments when names, dates, place names, and poem and book titles are spoken, as they can help you describe the nature and contents of your audio object with as much accuracy as possible.
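     To make this concrete, here is a sketch of what a few TOC entries might look like for the hypothetical poetry reading described above. The timestamps and bracketed placeholders are invented for illustration only, and your transcription software may format its output differently:

         00:00:00  Introduction ([introducer's name, if stated]; states the date, the venue, and the name of the invited poet)
         00:03:45  Introductory remarks (poet: [name as stated on the recording])
         00:06:10  Poem: [title of first poem read]
         00:10:25  Poem: [title of second poem read]

     Each entry pairs a timestamp marking the start of a signature moment with a tag identifying the nature of that section and, where known, the names and titles involved.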