Viewer using normaliser, helpers and CanvasPanel

Intro | Version 1 | Version 2 | Viewer 1 | Collaboration


This viewer is a bit clunky. The pseudo-Canvas-Panel implementation doesn't react to resizing very well. But gloss over that for now - I'm trying to keep the code as simple as possible.

Here are my requirements for a "book-friendly" viewer, for text-heavy resources. As a JavaScript developer armed with some libraries and some components, how do I go about meeting these requirements quickly?

How would Canvas Panel help me build this quickly, without dictating the visual appearance of the viewer?

How should my code, and the libraries, and the components collaborate to make my job as a developer easy?

Requirements

  1. Allow navigation by thumbnails
  2. Display text transcriptions alongside the canvas
  3. Render any hyperlinking annotations and allow my code to intercept them
  4. Highlight the text on the canvas when I select it in the text panel
  5. Display text transcriptions overlaying the canvas, as an option
  6. Show other content of the canvas, besides transcriptions (e.g., comments)
  7. Support multiple image annotations on the canvas
  8. Support oa:Choice

It feels like the first two can be tackled quite easily in my bespoke viewer code, without leaning on Canvas Panel. But the other requirements are going to need some interaction, some collaboration between Canvas Panel and my code, so I'll leave them for a while.

How does this viewer work? It starts to demonstrate the roles of two essential libraries, and also the separation of concerns across those two libraries.

I need to be able to normalise any IIIF resources to spec-perfect Presentation 3, because the CP developers don't want to deal with the vagaries of IIIF in the wild. CP only accepts Perfect P3. I want to be forgiving of different versions of the specs, not to mention sloppy implementation of any of the versions. But I want a library to fix everything up for me. This makes it easier for me to code against, as P3 is much friendlier to JavaScript developers. Its data structures are consistent and feel like regular JSON, pretty much. Whenever I get IIIF, I normalise it. Then I need only refer to the latest P3 spec to understand what I'm dealing with.
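To make that concrete, here's an illustrative sketch of the kind of fix-ups normalisation performs. This is not the real normaliser (which handles far more, across whole resource trees); it just shows two of the best-known Presentation 2 quirks being smoothed away:

```javascript
// Illustrative only: a real normaliser handles far more than this.
// Two representative fix-ups: JSON-LD keyword aliases become plain
// keys, and plain-string labels become P3 language maps.
function normaliseStub(resource) {
  const out = { ...resource };
  // P2 uses @id/@type; P3 uses id/type
  if (out['@id']) { out.id = out['@id']; delete out['@id']; }
  if (out['@type']) { out.type = out['@type'].replace('sc:', ''); delete out['@type']; }
  // P2 allows a bare string label; P3 requires a language map
  if (typeof out.label === 'string') {
    out.label = { none: [out.label] };
  }
  return out;
}
```

After this, code downstream can rely on `manifest.id`, `manifest.type` and a language-map `label`, whatever shape the input arrived in.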

I also need a battery of helpers and utilities. The IIIF representation is a big data structure; it doesn't have functions I can call to extract useful things. For example, given a IIIF resource, I want to call getThumbnail(..) with some options to generate a suitable thumbnail for use in my UI. These functions live in a helper library.

If I always normalise to Perfect P3 before I do anything else, then my helper library only needs to support Perfect P3. And I only need to understand one spec.

Design principle: normalisation to P3 is one library, helpers is another. They aren't the same library.

The Manifesto library does both of these tasks. But I want to split them up. Sometimes, I might not want to load any helpers because I don't need to do anything particularly complex (as in the Viewer1 example). Sometimes, I might be in a Perfect P3 environment already, and I don't need to normalise.

This particular viewer needs both. So I have stub implementations of both to bring in:

<script type="module">        
    // This JUST gets the model to a consistent P3 state    
    import { normalise } from "./normaliser.js"     
    // And this provides all the helpers, decorators     
    // and utility functions on top of the model.    
    import * as helpers from './helpers.js';    
    // ... 
</script>

Here's the basic layout of the viewer:

<div id="viewer">
    <div id="thumbs"></div>    
    <div id="main">        
        <div is="canvas-panel-2" id="cp" class="canvaspanel"></div>        
        <div id="textPanel" class="text">Text goes here.</div>    
    </div>
</div>

We have a panel to render thumbnails in, a canvas panel, and a panel to render text in. Of course, both the thumbnail panel and the text panel may be web components in their own right, as the UV's thumbnail panel already is. Once they start acquiring more features, we'll want them as components. But for now I only want to think about the collaboration between CP and the rest of my application; once that is working, the collaboration between the parts of my application that are themselves components may become clearer.

The first job of the code is to load the manifest (here the value of an input box), and straight away, normalise it to Presentation 3:

    // fetch the manifest as JSON         
    let response = await fetch($('mf').value);
    let raw_manifest = await response.json();

    // normalise it
    manifest = normalise(raw_manifest, options);

Then generate the thumbnails. There are all sorts of shortcuts I'm taking here, and things that could be optimised. For example, I am not checking whether the manifest has alternative ranges that would give me a different ordering of the canvases (most won't). I should really lazy-load the thumbnail images, so that only those currently on-screen trigger HTTP requests.


    let s = "";
    manifest.items.forEach(function(cv, i){
        // some more nice helper methods:
        let thumb = helpers.getThumbnail(cv);
        let label = helpers.getString(cv.label);
        s += `<div class="tc">${label}<br/>`;
        s += `  <img id="im${i}" data-index="${i}" src="${thumb}" />`;
        s += `</div>`;         
    });
    $('thumbs').innerHTML = s;

Armed with my now-normalised IIIF manifest, I can call my helpers. Again, these are massively simplified versions of what real helpers would do. I've written elsewhere about thumbnail selection utilities. This one doesn't let me specify any parameters, and it just returns the URI of the thumbnail image without any additional useful information. Likewise, the getString(..) function is the beginnings of a helper that extracts appropriate display strings from IIIF's JSON-LD Language Maps; you might expect it to take a preferred language parameter, and fallbacks. The two helper calls here are merely to show that this kind of utility is provided by a P3-compliant helper library, rather than being the business logic of my own application. These are useful helpers! Or stubs for what can be useful helpers later.
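For a sense of what the two stubs might look like inside, here's a sketch, assuming already-normalised P3 shapes. These bodies are illustrative, not the actual helper library; real versions would take options, honour image services, and handle language preferences:

```javascript
// Illustrative stub: a real getThumbnail would take size options,
// use image services, and return richer information than a bare URI.
function getThumbnail(canvas) {
  // prefer a declared thumbnail; fall back to the first painted image
  if (canvas.thumbnail?.length) return canvas.thumbnail[0].id;
  return canvas.items?.[0]?.items?.[0]?.body?.id;
}

// Illustrative stub: a real getString would take a preferred
// language parameter and fallbacks.
function getString(languageMap) {
  if (!languageMap) return '';
  const values = languageMap.en || languageMap.none || Object.values(languageMap)[0];
  return values ? values[0] : '';
}
```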

When someone clicks a thumbnail:


    function clickThumbnail(){ 
        canvas = manifest.items[this.getAttribute('data-index')];
        $('cp').canvas = canvas; 
        displayText();
    };

The canvas index is stored on the thumbnail. So we just pass the canvas object at that index to the canvas panel component. This seems straightforward, but is it OK to do this? Pass an object to a web component?
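For reference, the pattern this relies on is the property setter: attributes can only carry strings, but element properties can hold whole objects. A hypothetical sketch of that pattern (DOM base class omitted so it runs anywhere; this is not Canvas Panel's actual implementation, where the class would extend HTMLElement and render for real):

```javascript
// Sketch of the property-setter pattern a component like Canvas
// Panel might use: assigning to .canvas hands over a whole object
// and triggers a re-render.
class CanvasPanelSketch {
  #canvas = null;

  set canvas(value) {
    this.#canvas = value;
    this.render(); // react to the new object immediately
  }

  get canvas() {
    return this.#canvas;
  }

  render() {
    // a real component would paint the canvas content here
    this.rendered = this.#canvas ? this.#canvas.id : null;
  }
}
```

With this in place, `$('cp').canvas = canvas;` is just an ordinary property assignment that the component reacts to.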

The code then calls displayText(..) - which introduces a problem.


    let textAsHtml = await helpers.getTextForCanvasAsHtml(canvas);

This helper method seems straightforward enough. Get me an HTML representation of the text on the canvas. But there are several challenges with it as a library, to help me build a viewer.
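Under the hood, such a helper might fetch a linked annotation page and then transform it to HTML. The transformation half could look something like this sketch, which assumes a W3C AnnotationPage whose annotations carry TextualBody values; real text content is often messier (HTML bodies, word-level rather than line-level annotations, reading order):

```javascript
// Sketch: turn an already-fetched W3C AnnotationPage into simple
// HTML, one paragraph per textual annotation body. Assumes the
// page's items have TextualBody-style bodies with a value property.
function annotationPageToHtml(page) {
  const lines = (page.items || [])
    .map(anno => anno.body?.value)
    .filter(Boolean);
  return lines.map(line => `<p>${line}</p>`).join('\n');
}
```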

What would start to give me more control over which linked annotation list I should load? Here's one less-than-satisfactory way of doing it: have the viewer evaluate the label of the linked annotation collection and make its own decision about whether it contains the text of the page:


    let textAsHtml = "";
    let annotationCollections = helpers.getAnnotationCollections(canvas);
    for (let annocoll of annotationCollections) {
        if (feelsLikeTextContent(annocoll.label)) {
            textAsHtml = await helpers.getTextualAnnotationsAsHtml(annocoll);
            break;
        }
    }
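That feelsLikeTextContent(..) function is pure guesswork, which is exactly the problem. A sketch of such a heuristic, assuming a P3 language-map label (the keyword list is my own invention, to show how fragile this is):

```javascript
// A deliberately fragile heuristic: sniff the label for words that
// suggest the list carries the page text. Nothing in the manifest
// actually guarantees this.
function feelsLikeTextContent(label) {
  const keywords = ['text', 'transcription', 'ocr'];
  const values = Object.values(label || {}).flat();
  return values.some(v =>
    keywords.some(k => v.toLowerCase().includes(k)));
}
```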

A more elegant approach would be to get the annotation collections...


    let annotationCollections = helpers.getAnnotationCollections(canvas);

...and then bind them to a UI navigation component, to allow the user to choose which one gets loaded into the text panel.

Alternatively, show all of the linked annotation lists for the canvas as tabs in the UI zone I was previously calling the text component. Accept that you don't necessarily know what's in each list. As each tab (or drop-down, or whatever) is selected, load the external annotation list and render its content. The default behaviour when there is only one linked list would be the same as here (don't show any selection UI if there's only one thing that could be selected).
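The data-shaping part of that idea could be as simple as this sketch, which turns the canvas's annotation collections into tab descriptors for whatever UI renders them, and returns null when there's nothing to choose between (the "no selection UI for a single list" rule). The label helper is passed in to keep the sketch self-contained:

```javascript
// Sketch: shape linked annotation collections into tab data.
// getStringHelper is assumed to be a language-map helper like the
// getString stub used elsewhere in this viewer.
function buildAnnotationTabs(annotationCollections, getStringHelper) {
  if (!annotationCollections || annotationCollections.length < 2) {
    return null; // no chooser needed; just load the one list
  }
  return annotationCollections.map((ac, i) => ({
    id: ac.id,
    label: getStringHelper(ac.label) || `Annotations ${i + 1}`
  }));
}
```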

Already this discussion has led into a simple specification for another component - an annotation rendering component for the non-painting annotations. This could start to get cleverer; for example, it could interrogate the motivation property as it loads each list (lazily, in response to user selection, or perhaps out-of-band behind the scenes). If it finds supplementing annotations in one list and commenting annotations in another, and it recognises those annotation motivations from the IIIF and W3C specifications, it could provide more UI clues as to what they are for, and maybe make different decisions about rendering.
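The motivation-interrogation step might look like this sketch. The motivation values are the real ones from the W3C and IIIF specifications; the grouping function itself is illustrative:

```javascript
// Sketch: once a list is loaded, group its annotations by
// motivation so the UI can treat supplementing (transcription)
// and commenting annotations differently.
function groupByMotivation(annotationPage) {
  const groups = {};
  for (const anno of annotationPage.items || []) {
    // motivation may be a single string or an array of strings
    const motivations = [].concat(anno.motivation || 'unknown');
    for (const m of motivations) {
      groups[m] = groups[m] || [];
      groups[m].push(anno);
    }
  }
  return groups;
}
```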

The important thing is, this is collaboration off-canvas, between our viewer and the helper library. And later perhaps, between our viewer and a "textual content of the canvas" component, or rather a "non-painting annotation component". Canvas Panel isn't involved in this off-to-the-side rendering, even though we're dealing with canvas content.

This flow suggests another principle, but I'm not confident enough of it as a full design principle yet. We could use our Canvas Panel component as a general gateway to canvas content, both as a UI element and as a provider of data through an API: instantiate a Canvas Panel, give it a Canvas, then call on it not just for visual rendering but for access to other information and content of the canvas. This seems attractive, but we didn't need it for our textual content component. It felt more natural to do that in application code with helpers, rather than expect CP to do it just because it understands canvases.

For my deliberately constrained first two requirements I have a CP component, normaliser and helpers that let me do what I need. But I haven't started pushing it yet. As soon as I have more complex demands, the collaboration behaviour gets more complex.

On to More complex collaboration.

