Monthly Archives: May 2015

New Professional Reading Group venture⤴

from @

Last September I started out on a course called Developing as a Leader (DAAL). It is a pilot course being run by Scottish Borders Council, East Lothian Council and Moray House. It is exploring Teacher Leadership within schools and how we can begin to enact change within our own department and schools. The course has […]

Why education needs more fuzzy thinking⤴

from @ Ewan McIntosh | Digital Media & Education


It’s been a decade since I first heard the education conference cliché that we are preparing our kids for a future we don’t even understand. I argue that since then we've done little about it, in this week's Editorial in the Times Educational Supplement.

Ten years ago, that wasn’t really true. In fact, the immediate future was pretty predictable between 2005 and 2010: the internet remained slow and some kids didn’t have it at home, most didn’t use Facebook, smartphones were still far too expensive and the iPad wasn’t launched until January 2010. Even terrorism was mostly still “over there”, and wars likewise, rather than recruiting from comprehensives in the Home Counties.

Since then the world has learned what “exponential” really means. The normal trajectory post-school is no longer a linear certainty but a struggle with what a new breed of thinker and doer embraces as “fuzzy goals”. This emergent group of young people, activists and senior industry leaders spends most of its days grappling with unknown unknowns – technologies, jobs, ways of thinking and, yes, even terrorist groups that we didn’t even know we didn’t know about.

At the same time, our understanding of “what matters” in education hasn’t budged beyond a few pockets of relative daring. We still operate within our hierarchy of subjects, overcharged curricula and an expectation that teachers will stand and deliver it. There is little room for fuzziness here.

In this week's Times Educational Supplement, I expand on how fuzzy problem-finding and problem-solving are some of the core skills that we've been ignoring for too long in the search for standardisation and content-heavy cramming. What do you think?

Pic by HB2

Day of Digital Ideas 2015⤴

from @ Open World

Digital humanities is an area that I’ve been interested in for a long time but which I haven’t had much opportunity to engage with, so earlier this week I was really excited to be able to go along to the Digital Scholarship Day of Digital Ideas at the University of Edinburgh.  In the absence of my EDINA colleague Nicola Osborne and her fabulous live blogging skills, I live tweeted the event and archived tweets, links and references in a storify here: Digital Day of Ideas 2015.  I also created a TAGS archive of tweets using Martin Hawksey’s clever Twitter Archiving Google Spreadsheet.

The event featured three highly engaging keynotes from Ben Schmidt, Anouk Lang, and Ruth Ahnert, and six parallel workshops covering historical map applications and OpenLayers, corpus analysis with AntConc, data visualisations with D3, Drupal for beginners, JavaScript basics and Python for humanities research.

Humanities Data Analysis

~ Ben Schmidt, Northeastern University

Ben explored the role of data analysis in the humanities and the methodological and social challenges it presents.  He began by suggesting that in many quarters data analysis for humanities is regarded as being on a par with “poetry for physics”.  Humanities data analysis can raise deep objections from some scholars and seem inimical to the meaning of research.  However, there are many humanistic ways of thinking about data that are intrinsic to the tradition of the humanities. Serendipity is important to humanities research and there is a fear that digital research negates this; however, it is not difficult to engineer serendipity into cultural data analysis.

But what if borrowing techniques from other disciplines isn’t enough? Digital humanities needs its own approaches; it needs to use data natively and humanistically, as a source of criticism rather than to “prove” things. Humanities data analysis starts with the evidence, not with the hypothesis.  The data needs to tell stories about structures, rather than individual people.   Johanna Drucker argues that what we call “data” should really be called “capta”:

Capta is “taken” actively while data is assumed to be a “given” able to be recorded and observed. From this distinction, a world of differences arises. Humanistic inquiry acknowledges the situated, partial, and constitutive character of knowledge production, the recognition that knowledge is constructed, taken, not simply given as a natural representation of pre-existing fact.

Johanna Drucker on data vs. capta

Ben went on to illustrate these assertions with a number of examples of exploratory humanities data analyses, including using ngrams to trace Google Books collections, building visualisations of ship movements from digitised whaling logbooks, the Hathi Trust bookworm, and exposing gendered language in teacher reviews on Rate My Teacher.  (I’ve worked with ships’ musters and logbooks for a number of years as part of our Indefatigable 1797 project, and I’ve long been a fan of Ben’s whaling log visualisations, which are as beautiful as they are fascinating.)

Ships tracks in black, show the outlines of the continents and the predominant tracks on the trade winds. © Ben Schmidt

Ben concluded by introducing the analogy of Borges’ The Garden of Forking Paths and urged us to create data gardens and labyrinths for exploration and contemplation, and to provide tools that help us to interpret the world rather than to change it.

Gaps, Cracks, Keys: Digital Methods for Modernist Studies

~ Anouk Lang, University of Edinburgh

Manifesto of Modernist Digital Humanities

Manifesto of Modernist Digital Humanities

Anouk explored the difficulties and opportunities facing scholars of twentieth-century literature and culture that result from the impact of copyright restrictions on the digitisation of texts and artefacts. Due to these restrictions many modern and contemporary texts are out of digital reach.  The LitLong project highlights gaps in modernist sources caused by copyright law.  However there are cracks  in the record where digital humanities can open up chinks in the data to let in light, and we can use this data as the key to open up interesting analytic possibilities.

During her presentation Anouk referenced the Manifesto of Modernist Digital Humanities, situating it in reference to the Blast Manifesto, Nathan Hensley’s Big Data is Coming for Your Books, and Underwood, Long and So’s Cents and Sensibility.

By way of example, Anouk demonstrated how network analysis can be used to explore biographical texts. Biographies are curated accounts of people’s lives constructed by human and social forces and aesthetic categories. There is no such thing as raw data in digital text analysis: all the choices about data are subjective. Redrawing network maps multiple times can highlight what is durable. For example network analysis of biographical texts can reveal the gendered marginality of writers’ wives.

In conclusion, Anouk argued that digital deconstruction can be regarded as a form of close reading, and questioned how we read graphical forms such as maps and network illustrations. How do network maps challenge established forms of knowledge? They force us to stand back and question what our data is and can help us to avoid the linearity of narrative.

Closing the Net: Letter Collections & Quantitative Network Analysis

~ Ruth Ahnert, Queen Mary University of London

Ruth’s closing keynote explored the nature of complex networks and the use of mathematical models to explore their underlying characteristics.  She also provided two fascinating examples of how social network analysis techniques can be used to analyse collections of early modern letters, a set of Protestant letters (1553 – 1558) and Tudor correspondence in State Papers Online,  to reconstruct the movement of people, objects, and ideas.   She also rather chillingly compared the Tudor court’s monitoring of conspiracies and interception of letters with the contemporary surveillance activities of the NSA.

Ruth Ahnert. Picture by Kathy Simpson, @kilmunbooks.

Ruth introduced the concept of betweenness* – the connectors that are central to sustaining a network.  Networks are temporal: they change and evolve over time as they are put under pressure.  Mary I took out identifiable hubs in the Protestant network by executing imprisoned leaders; however, despite removing these hubs, the network survived because its sustainers, the people with high betweenness, survived.  In order to fragment a network it is necessary to remove not the hubs or edges, but the nodes with high betweenness.
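That fragility argument is easy to see in code. The sketch below is a brute-force illustration in Python (the six-person network and its labels are invented, and real analyses of letter collections use far more efficient algorithms such as Brandes'): for every pair of people, count what fraction of the shortest paths between them passes through each third party. The two connectors joining the clusters score highest, and removing them splits the network in two.

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Enumerate every shortest path from s to t (breadth-first)."""
    paths, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining candidate paths are longer
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nbr in graph[node]:
            if nbr not in path:
                queue.append(path + [nbr])
    return paths

def betweenness(graph):
    """Score each node by the shortest paths that pass through it."""
    scores = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        for v in scores:
            if paths and v not in (s, t):
                scores[v] += sum(v in p for p in paths) / len(paths)
    return scores

# Two tight clusters of three, joined only through C and D.
network = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C", "E", "F"}, "E": {"D", "F"}, "F": {"D", "E"},
}
print(betweenness(network))  # C and D score highest; everyone else scores 0
```

Deleting A or B leaves everyone else reachable; deleting C or D, the nodes with high betweenness, cuts the network in half.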

Ruth went on to introduce eigenvector centrality, which can be used to measure the quality of people’s connections in a network, and she explored the curious betweenness centrality of Edward Courtenay, 1st Earl of Devon (1527 – 1556). Courtenay’s social capital is quantifiable; he was typical of a character with high eigenvector centrality, who cut across social groups and aligned himself with powerful nodes.
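The intuition that the quality of connections matters more than their number can also be sketched in a few lines of Python. The standard computation is power iteration: repeatedly replace each node's score with the sum of its neighbours' scores and rescale. The toy "court" below is entirely invented for illustration: W and O each have exactly one tie, but W's single tie to the best-connected figure earns the higher score.

```python
def eigenvector_centrality(graph, iterations=100):
    """Power iteration: each node's score becomes the sum of its
    neighbours' scores, rescaled so the maximum score is 1.0."""
    scores = {v: 1.0 for v in graph}
    for _ in range(iterations):
        scores = {v: sum(scores[n] for n in graph[v]) for v in graph}
        top = max(scores.values())
        scores = {v: s / top for v, s in scores.items()}
    return scores

# An invented court: M is the best-connected figure; W and O each
# have a single tie, but W's tie is to M while O's is to Z.
court = {
    "M": {"X", "Y", "Z", "W"},
    "X": {"M", "Y"}, "Y": {"M", "X"},
    "Z": {"M", "O"}, "W": {"M"}, "O": {"Z"},
}
ranks = eigenvector_centrality(court)
print(ranks["W"] > ranks["O"])  # True: one tie to M beats one tie to Z
```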

In conclusion, Ruth suggested that network analysis can be used to open up archives: it doesn’t presume what you’re looking for, rather it can inspire close reading by revealing patterns previously unseen by traditional humanities research.

I was certainly hugely inspired by Ruth’s presentation.  I have some passing familiarity with the concepts of network analysis and betweenness centrality from the work of Martin Hawksey and Tony Hirst, however this is the first time I have seen these techniques applied to historical data and the possibilities are endlessly inspiring.  One of the main aims of our Indefatigable 1797 research project is to reveal the social networks that bound together a small group of men who served on the frigate HMS Indefatigable during the French Revolutionary War.  Using traditional techniques we have pieced together these connections through an analysis of ships’ musters, Admiralty archives, contemporary press reports, personal letters and birth, marriage and death certificates.  We have already built up a picture of a complex and long-lived social network, but I now can’t help wondering whether a more nuanced picture of that network might emerge through the application of social network analysis techniques.  Definitely something to think more about in the future!

Many thanks to Anouk Lang and the Digital Scholarship team for organising such a thought provoking, fun and engaging event.

* For an excellent explanation of the concept of betweenness, I can highly recommend reading Betweenness centrality – explained via twitter, featuring Tony Hirst and my former Cetis colleagues Sheila MacNeill, Wilbert Kraan, and Martin Hawksey.  It’s all about the genetically modified zombies you see…

Radical improvement to Scottish education demands radical changes to its structures⤴

from @ I've Been Thinking

Originally posted on This Little Earth:
So the results are in: data from the Scottish Survey of Literacy and Numeracy (SSLN) shows a decline in literacy standards amongst pupils in Scotland. Cue the usual hand-wringing, vague promises and political point scoring that we should all have come to expect by now. Nevermind the fact that a…

#ipaded – App Smashing in the classroom⤴

from @ teachitgeek

App smashing is the process of using more than one app to create a project or product. It is highly engaging, asks students to be creative in their approach to their learning and use of technology and challenges them to take their learning to a higher level.

– Mark Anderson @ictevangelist

Image courtesy of @ipadwells

Appsmashing is a term first coined by Greg Kulowiec. Greg has written a lot on the subject and can be found on Twitter @gregkulowiec

Appsmashing lends itself beautifully to the iPad in education. Often, the questions ‘What is the best app to use in the classroom?’ or ‘Is there a good app for literacy/numeracy?’ are asked during CPD sessions. There is not one killer app or feature that makes the iPad a compelling choice of device, but rather the combination of apps and features that allow pupils to express their understanding of a key concept or skill. That is what makes it a go-to device for so many of our schools.

The purpose of technology in class is to enhance learning. Appsmashing can be lots of fun, but it can also focus too much on the technology if it becomes overly complicated. We know, as teachers, that pupils are motivated and purposefully engaged in the learning process when concepts and skills are underpinned with technology and sound pedagogy. This post will highlight a simple appsmashing activity that will motivate pupils and allow them to take ownership of their learning by giving them a realistic expectation and allowing them to be particularly creative.

For this task we will only need one app: Tellagami. This is a free app (the best kind) that allows pupils to create short animated videos. You can add your own background and record your voice for an animated character.

Stock iOS apps and features can sometimes be overlooked in terms of classroom benefit. Siri, while great for telling you a joke or the current weather conditions, can also be used to give definitions of words, solve equations or show maps of famous landmarks. By using the command ‘Show me the Eiffel Tower’, pupils are able to view the famous landmark in 3D glory, screenshot the image and use it in another app. Spelling does not have to be a barrier, and pupils can instead focus on the clarity of their speech. This can be a huge plus for pupils who are not confident in their spelling and/or are developing their language skills.


Once we have the image in our photos app, we can launch Tellagami and start to record ourselves. The video below details the process.

LRMI / validation⤴

from @ Sharing and learning


We are currently preparing some examples of LRMI metadata. While these are intended to be informative only, we know that they will affect implementations more than any normative text we could put into a spec–I mean what developer reads the spec when you can just copy an example?  So it’s important that the examples are valid, and that set me to pulling together a list of tools & services useful for validating LRMI and, by extension, schema.org.

Common things to test for:

  • simple syntax errors produced by typos, not closing tags and so on.
  • that the data extracted is valid schema.org / LRMI
  • using properties that don’t belong to the stated resource type, e.g. educationalRole should be a property of EducationalAudience not of CreativeWork.
  • loose or strict interpretation of expected value types, e.g. the author property should have a Person or Organization as its value, and dates and times should be in ISO 8601 format.
  • is the data provided for properties from the value space they should be? i.e. does the data provider use the controlled vocabulary you want?
  • check that values are provided for properties you especially require

[Hint, if it is the last two that you are interested in then you’re out of luck for now, but do skip to the “want more” section at the end.]
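None of the off-the-shelf tools covers those last two checks, but for a single known application profile they are straightforward to hand-roll. The sketch below is illustrative only: the property names come from schema.org/LRMI, while the required-property set and the controlled vocabulary are invented stand-ins for whatever your profile actually specifies.

```python
REQUIRED = {"license", "learningResourceType"}      # hypothetical profile
LRT_VOCAB = {"exercise", "lesson", "assessment"}    # hypothetical vocabulary

def check_lrmi(item):
    """Return a list of problems with one JSON-LD item (parsed to a dict)."""
    problems = []
    # Check that values are provided for the properties the profile requires.
    for prop in sorted(REQUIRED):
        if prop not in item:
            problems.append(f"missing required property: {prop}")
    # Check that the value comes from the expected controlled vocabulary.
    lrt = item.get("learningResourceType")
    if lrt is not None and lrt not in LRT_VOCAB:
        problems.append(f"learningResourceType {lrt!r} not in controlled vocabulary")
    return problems

example = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "learningResourceType": "lesson",
}
print(check_lrmi(example))  # ['missing required property: license']
```

A real validator would of course work over the extracted RDF rather than raw JSON, which is where the data-shapes work mentioned at the end comes in.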

See also Structured Data Markup Visualization, Validation and Testing Tools by Jarno van Driel and Aaron Bradley.

Schema.org testing tools

Google structured data testing tool 

If Google is your target this is as close to definitive as you can get.  You can validate code on a server via a URL or by copying and pasting it into a text window, in return you get a formatted view of the data Google would extract.

Validates: HTML + microdata, HTML + RDFa, JSON-LD

Downsides: it used to be possible to pass the URL of the code to be validated as a query parameter appended to the testing tool URL and thus create a “validate this page” link, but that no longer seems to be the case.

Also, the testing tool reproduces Google’s loose interpretation of the spec, and will try to make the best sense it can of data that isn’t strictly compliant. So where the author of a creative work is supposed to be a Person or Organization, if you supply text the validator will silently interpret that text as the name of a Person entity. Dates not in ISO 8601 format also get corrected (October 4 2012 becomes 2012-10-4). That’s great if your target is as forgiving as Google, but otherwise might cause problems.

But the biggest problem seems to be that pretty much any syntactically valid JSON-LD will validate.

Yandex structured data validator

Similar to the Google testing tool, but with slightly larger scope (validates OpenGraph and microformats as well as schema). Not quite as forgiving as Google, a date in format October 4 2012 is flagged as an error, and while text is accepted as a value for author it is not explicitly mapped to the author’s name.

Validates: HTML + microdata, HTML + RDFa, JSON-LD

Downsides: because the tool is designed to validate raw RDF / JSON-LD etc., just because something validates does not mean that it is valid schema.org markup. For example, this JSON-LD validates:

{
    "@context": {
        "@vocab": "http://Schema.org/"
    },
    "@type": "CreativeWork",
    "nonsense": "Validates"
}

Unlike the Google testing tool you do get an appropriate error message if you correct the @vocab URI to have a lower-case S, making this the best JSON-LD validator I found.

Bing  markup validator

“Verify the markup that you have added to your pages with Markup Validator. Get an on-demand report that shows the markup we’ve discovered, including HTML Microdata, Microformats, RDFa, schema.org, and OpenGraph. To get started simply sign in or sign up for Bing Webmaster Tools.”

Downsides: requires registration and signing in, so I didn’t try it.

Schema.org highlighter

A useful feature of the validators listed above is that they produce something that is human readable. If you would like this in the context of the webpage, Paul Libbrecht has made a schema.org highlighter, a little bookmarklet that transforms the markup into visible paragraphs one can visually proofread.

Translators and other parsers

Not validators as such, but the following will attempt to read microdata, RDFa or JSON-LD and so will complain if there are errors. Additionally they may provide human readable translations that make it easier to spot errors.

RDF Translator

“RDF Translator is a multi-format conversion tool for structured markup. It provides translations between data formats ranging from RDF/XML to RDFa or Microdata. The service allows for conversions triggered either by URI or by direct text input. Furthermore it comes with a straightforward REST API for developers.” …and of course if your data isn’t valid it won’t translate.

Validates pretty much any RDF / microdata format you care to name, either by entering text in a field or by reference via a URI.

Downsides: again purely syntactic checking, doesn’t check whether the code is valid markup.

Structured data linter

Produces a nicely formatted, human readable representation of structured data.

Validates: HTML + microdata, HTML + RDFa either by URL, file upload or direct input.

Downsides:  another that is purely syntactic.

JSON-LD Playground

A really useful tool for automatically simplifying or complexifying JSON-LD, but again only checks for syntactic validity.

Nav-North LR data

“A Tool to help import the content of the Learning Registry into a data store of your choice” I haven’t tried this but it does attempt to parse JSON-LD so you would expect it to complain if the code doesn’t parse.

Want more?

The common shortcoming (for this use case anyway; all the tools are good at what they set out to do) seems to be validating whether the data extracted is actually valid schema.org / LRMI. If you want to validate against some application profile, say insisting that licence information must be provided, or that values for learningResourceType come from some specified controlled vocabulary, then you are in territory that none of the above tools even tries to cover. This is, however, in the scope of the W3C RDF Data Shapes Working Group: “Mission: produce a W3C Recommendation for describing structural constraints and validate RDF instance data against those.”

A colleague at Heriot-Watt has had students working (with input from Eric Prud’hommeaux) on Validata, “an intuitive, standalone web-based tool to help building valid RDF documents by validating against preset schemas written in the Shape Expressions (ShEx) language.”  It is currently set up to validate linked data against some pre-set application profiles used in the pharmaceutical industry. With all the necessary caveats about it being student work, no longer supported, and using an approach that is preliminary to the W3C working group, it illustrates how instance validation against a description of an application profile would work.


Using Thinglink to extend model making activities⤴

from @

I first came across Thinglink when it was introduced by a colleague who teaches MFL (@ProfeScammell). She said it would be excellent for extending the model making activities we do in Geography, and she was correct! I began by having a play around with Thinglink, signing up and creating a couple of Thinglink pictures. […]

Using OneNote with National 5 and Higher Classes⤴

from @ Glow Gallery

I set up a notebook for my National 5 classes and my Higher classes. I particularly used the Higher notebook in the six weeks leading up to the exam. In addition to the sections for each of the two Higher units I also created an additional section for Exam Preparation. In this section I pinpointed topics I thought would be most likely to come up. I created a page (within the section) for each topic and included on the page the following items:

  • Screen clipping of course content from course support notes
  • Screen clipping of a question related to the topic from specimen/exemplar
  • My notes relating to the topic
  • Any relevant hyperlinks
  • Videos (I had created before using screen cast software)


I also used the collaborative space to build revision resources that could be used by everyone. The screen print below shows the area of contemporary developments in Higher Computing Science.


I intend to add to this Class Notebook over the next couple of weeks (for next year’s class) by recording short audio files (from this year’s pupils) explaining specific topics and answers to questions.

In addition to adding my own pupils to this notebook I also added 5 Higher pupils from another school and their teacher. When I looked at the statistics after the final exam on 6th May, there had been 849 views of this notebook. Given that only approximately 30 people had access to it, I consider it to have been a worthwhile resource to set up.