Author Archives: Phil Barker

#OER18 Open to all⤴

from @ Sharing and learning

I spent the last couple of days in Bristol, a city I know well: I went to university there (undergrad, PhD and postdoc in physics and materials science), and my wife’s parents live there. I’ll be honest, meeting my friends from the OER community in a city of which I am very fond was part of what attracted me to this conference. The theme of the conference, “open to all,” with discussions about OER in the context of colonialism, was less attractive to me. Look at the rest of this blog and you’ll see I am much more comfortable talking about technical specifications, APIs and infrastructure to support the creation and dissemination of OER.

Two sides of a quadrangle of small, 17th-century, pink terraced cottages.
Merchant Venturers’ alms houses, Bristol, photo by Eirian Evans, via Wikimedia Commons, Licence CC:BY-SA

Bristol has a dark history. Like many towns and cities in Britain, it was built on the slave trade; Bristol more directly than others. I stayed at the Merchant Venturers’ Alms houses, built with the money of Edward Colston, a Bristolian “philanthropist and slave trader” [wikipedia]. There has been a lot of debate in Bristol about whether Colston’s name should still be commemorated in cultural venues and schools. I would recommend the Almshouses to anyone who wanted to stay in an apartment in a lively part of town as an alternative to run-of-the-mill corporate hotels.

At the conference, I did get my hoped-for catch-up with old friends and the chance to meet new ones, and I got to talk with people about technical platforms, interoperability of eTextBooks, and infrastructure for disseminating OER. That much was expected. I share some of Lorna Campbell‘s background, and I think that she encapsulated the UK OER (#UKOER?) movement superbly in her opening keynote.

The unexpected pleasure was how much I enjoyed and learned from the contributions of Momodou Sallah (keynote), Nick Baker (paper), Taskeen Adam (contribution to closing plenary), and Maha Bali and Catherine Cronin (& many others) in a discussion session. These people are all great communicators, talking about issues (colonialism, politics of OER in the global south, ideas of openness and availability of education from non-western cultures) that are not part of my background. I could have been out of my comfort zone, but they made me feel comfortable. I wish that many involved in science communication would learn from this.

We need to talk about the role of right-wing libertarian wingnuts in open

A plaque on a tree reading “Caucasian Wingnut, Pterocarya fraxinifolia”
Caucasian wingnut sign, by Ian Poellet, via Wikimedia Commons.
I mean that photo of Eric S. Raymond, keyboard in one hand, gun in the other, shown by David Wiley during his keynote. Look at what Wikipedia says about Raymond’s political views. If you haven’t followed any links so far (I see the WordPress logs, I know you don’t), follow that one and come back.

If you just read the opening sentence of that section

“Raymond is a member of the Libertarian Party. He is a gun rights advocate…”

go back and read the rest. Read the bit about

“Raymond accused the Ada Initiative and other women in tech groups of attempting to entrap male open source leaders and accuse them of rape…”,

and the bit about

Raymond is also known for claiming that “Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence…”

and so on.

I have read The Cathedral and the Bazaar, and I do know Raymond’s contribution to open source software. Even coming from a background in materials science, I do understand concepts like the genetic fallacy and the wrongness of ad hominem attacks. And I do not think we should be recommending this person’s work to the OER community.

The post #OER18 Open to all appeared first on Sharing and learning.

PressBooks and ePub as an OER format.⤴

PressBooks does a reasonable job of importing ePub, so that ePub can be used as a portable format for open text books. But, of course, there are limits.

I have been really impressed with PressBooks, the extension to WordPress for authoring eBooks. Like WordPress, it is available as a hosted service (pressbooks.com) or to download and host yourself; I have been using the latter for a few months. It looks like a great way of authoring, hosting, using, and distributing open books. Reports like this one from Steel Wagstaff about Publishing Open Textbooks at UW-Madison really show the possibilities for education that open up if you do that. There you can read about the work Steel and others have been doing around PressBooks for authoring open textbooks, with interaction and annotation (using h5p and hypothes.is), connections to their VLE (LTI), and responsible learning analytics (xAPI).

PressBooks also supports replication of content from one PressBooks install to another, which is great, but what is even greater is support for import from other content creation systems. We don’t want a monoculture here.

Open text books are, of course, a type of Open Educational Resource, and so when thinking about PressBooks as a platform for open text books you’re also thinking about PressBooks and OER. So what aspects of text-books-as-OER does PressBooks support? What aspects should it support?

OER: DERPable, 5Rs & ALMS

Frameworks for thinking about requirements for openness in educational resources go back to the very start of the OER movement. Back in the early 2000s, when JISC was thinking about repositories and Learning Objects as ways of sharing educational resources, Charles Duncan used to talk about the need for resources to be DERPable: Discoverable, Editable, Repurposable and Portable. At about the same time in the US, David Wiley was defining Open Content in terms of four, later five, Rs and ALMS. The five Rs are well known: the permissions to Retain, Reuse, Revise, Remix and Redistribute. ALMS is a less memorable, more tortured acronym, relating to technical choices that affect openness in practice. The choices relate to: Access to editing tools, the Level of expertise required to use those tools, the content being Meaningfully editable, and being Self-sourced (i.e. there not being separate source and distribution files).

Portability of ePub and editing in PressBooks

I tend to approach these terms back to front: I am interested in portable formats for disseminating resources, and in systems that allow these to be edited. For eBooks / open textbooks my format of choice for portability is currently ePub, which is essentially HTML and other assets (images, stylesheets, etc.) with metadata, in a zip archive. Being HTML-based, ePub is largely self-sourced, and can be edited with suitable tools (though there may be caveats around some of the other assets such as images and diagrams). Furthermore, WordPress in general and PressBooks specifically make editing, repurposing and distributing easy without requiring knowledge of HTML. It’s a good platform for remixing, revising, reusing and retaining content. And the key to this whole ramble of a blog post is the ‘import from ePub’ feature.
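To make the “HTML plus metadata in a zip archive” point concrete, here is a minimal sketch in Python that builds a toy ePub from scratch. The file names follow the ePub container conventions; the title and content are invented for illustration:

```python
import zipfile

def make_minimal_epub(path, title, xhtml):
    """Build a bare-bones ePub: a zip with a declared mimetype,
    a container.xml locating the package document, and one XHTML chapter."""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry comes first and is stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # container.xml points at the package document.
        z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
        # The package document carries the metadata (here just a title).
        z.writestr("OEBPS/content.opf", f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="id">example-id</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="ch1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>""")
        # The content itself is plain (X)HTML.
        z.writestr("OEBPS/ch1.xhtml", xhtml)

make_minimal_epub("book.epub", "Test Book",
                  "<html><body><h1>Hello</h1></body></html>")
print(zipfile.ZipFile("book.epub").namelist())
```

Because the source really is just the HTML in the archive, any tool that can unzip and edit HTML can revise the book, which is the self-sourced property in action.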

So how does the combination of ePub and PressBooks work in practice? Unfortunately I can’t go to OpenStax and download one of their text books as ePub: as far as I can see the best-known open textbook project doesn’t make ePub available (Apple’s iBooks format is similar, but I don’t do iBooks so couldn’t download one of those). So I went to Siyavula and downloaded one of their CC:BY textbooks as an ePub. I chose that download for import into PressBooks and got a screen that lets me choose which parts of the ePub to import and what type of content to import them as.

List of sections of the ePub with tick box for whether to import in PressBooks, and radio button options for what type of book part to import as

After choosing which parts to import and hitting the import button at the bottom of the page, the content is there to edit and republish in PressBooks.

From here you can edit or add content (including by import from other sources), rearrange the content, and set options for publishing it. There is other work to be done. You will need to choose a decent theme to display your book with style. You will also need to make sure internal links work as your PressBooks permalink URL scheme might not match the URLs embedded in the content. How easy this is will vary depending on choices made when the book was created and your own knowledge of some of the WordPress tools that can be used to make bulk edits.

I am not really interested in distributing maths text books, so I won’t link to the end result of this specific example. I did once write a book in a book sprint with some colleagues, and that was published as an ePub. So here is an imported & republished version of Into The Wild (PressBook edition). I didn’t do much polishing of this: it uses a stock theme, and I haven’t fixed internal links, e.g. footnotes.


Of course there are limits to this approach. I do not expect that much (if any) of the really interesting interactive content would survive a trip through ePub. Also, much of Steel’s work that I described at the top is specific to the PressBooks platform. So that’s where cloning from PressBooks to PressBooks becomes useful. But ePub remains a viable way of getting textbook content into the PressBooks platform.

Also, while WordPress in general, and hence PressBooks, is a great way of distributing content, I haven’t looked much at whether metadata from the ePub is imported. On first sight none of it is, so there is work to do here in order to make the imported books discoverable. That applies to the package level metadata in ePubs, which is a separate file from the content. However, what also really interests me is the possibility of embedding education-specific metadata into the HTML content in such a way that it becomes transportable (easy, I think) and editable on import (harder).
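One way such embedding could work (this fragment is purely illustrative; the property values and the RDFa approach are my assumptions, not something PressBooks does today) is to carry LRMI/schema.org terms inside the HTML content itself, so the metadata survives the round trip through the ePub zip alongside the text:

```html
<!-- Hypothetical chapter fragment: educational metadata embedded as RDFa
     in the XHTML content, rather than in the separate package file. -->
<section vocab="http://schema.org/" typeof="Chapter">
  <h1 property="name">Forces and motion</h1>
  <meta property="learningResourceType" content="textbook chapter" />
  <link property="license" href="https://creativecommons.org/licenses/by/4.0/" />
  <p>...chapter content...</p>
</section>
```

Transporting this is easy, as it is just more HTML; the harder part, as I say, is an import routine that recognises and keeps such markup editable.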

The post PressBooks and ePub as an OER format. appeared first on Sharing and learning.

Using wikidata for linked data WordPress indexes⤴

A while back I wrote about getting data from wikidata into a WordPress custom taxonomy. Shortly thereafter Alex Stinson said some nice things about it, and as a result that post got a little attention.

Well, I have now a working prototype plugin which is somewhat more general purpose than my first attempt.

1. Custom Taxonomy Term Metadata from Wikidata

Here’s a video showing how you can create a custom taxonomy term with just a name and the wikidata Q identifier, and the plugin will pull down relevant wikidata for that type of entity:

[similar video on YouTube]

2. Linked data index of posts

Once this taxonomy term is used to tag a post, you can view the term’s archive page, and if you have a linked data sniffer you will see that the metadata from Wikidata is embedded in machine readable form using schema.org. Here’s a screenshot of what the OpenLink structured data sniffer sees:

Or you can view the Google structured data testing tool output for that page.


  • You can create terms for custom taxonomies with just a term name (which is used as the slug for the term) and the Wikidata Q number identifier. The relevant name, description and metadata are pulled down from Wikidata.
  • Alternatively you can create a new term when you tag a post and later edit the term to add the wikidata Q number and hence the metadata.
  • The metadata retrieved from Wikidata varies to be suitable for the class of item represented by the term, e.g. birth and death details for people, date and location for events.
  • Term archive pages include the metadata from Wikidata as machine readable structured data using schema.org. This includes links back to the Wikidata record and other authority files (e.g. ISNI and VIAF). A system harvesting the archive page for linked data could use these to find more metadata. (These onward links put the linked in linked data and the web in semantic web.)
  • The type of relationship between the term and the posts tagged with it is recorded in the structured data on the term archive page. Each custom taxonomy is for a specific type of relationship (currently about and mentions, but it would be simple to add others).
  • Short codes allow each post to list the entries from a custom taxonomy that are relevant for it using a simple text widget.
  • This is a self-contained plugin. The plugin includes default term archive page templates without the need for a custom theme. The archive page is pretty basic (based on twentysixteen theme) so you would get better results if you did use it as the basis for an addition to a custom theme.
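To give a flavour of what a linked data sniffer sees on a term archive page, the RDFa might look something like the following (a hypothetical sketch: the URLs, markup structure and example person are illustrative, not the plugin’s exact output):

```html
<!-- The term is a schema.org Person with sameAs links to authority files;
     each tagged post records its relationship to the term (here, mentions). -->
<div vocab="http://schema.org/" typeof="Person" resource="#douglas-adams">
  <span property="name">Douglas Adams</span>
  <link property="sameAs" href="http://www.wikidata.org/entity/Q42" />
  <div typeof="BlogPosting">
    <a property="url" href="https://example.org/omniana/some-post/">A post</a>
    <link property="mentions" href="#douglas-adams" />
  </div>
</div>
```

The sameAs link back to Wikidata is what lets a harvester walk onward to more metadata, as described above.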

How’s it work / where is it

It’s on github. Do not use it on a production WordPress site. It’s definitely pre-alpha, and undocumented, and I make no claims for the code to be adequate or safe. It currently lacks error trapping / exception handling, and more seriously it doesn’t sanitize some things that should be sanitized. That said, if you fancy giving it a try do let me know what doesn’t work.

It’s based around two classes: one which sets up a custom taxonomy and provides some methods for outputting terms and term metadata in HTML with suitable RDFa markup; the other handles getting the wikidata via SPARQL queries and storing this data as term metadata. Getting the wikidata via SPARQL is much improved on the way it was done in the original post I mentioned above. Other files create taxonomy instances, provide some shortcode functions for displaying taxonomy terms and provide default term archive templates.
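As a sketch of the kind of retrieval involved (the plugin’s actual queries differ, and this example is in Python rather than PHP for brevity), a SPARQL query for a person’s label and date of birth can be sent to the Wikidata Query Service, and the standard SPARQL JSON results format parsed into term metadata. A canned response is used here so the example is self-contained:

```python
import json

def person_query(qid):
    """A SPARQL query of the sort used to fetch term metadata from Wikidata:
    label, description and date of birth (P569) for an entity. This would be
    sent to https://query.wikidata.org/sparql asking for JSON results."""
    return f"""
SELECT ?itemLabel ?itemDescription ?dob WHERE {{
  BIND(wd:{qid} AS ?item)
  OPTIONAL {{ ?item wdt:P569 ?dob . }}
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}"""

def bindings_to_term_meta(results):
    """Flatten the first binding of a SPARQL JSON result into a dict
    suitable for storing as WordPress term metadata."""
    binding = results["results"]["bindings"][0]
    return {key: value["value"] for key, value in binding.items()}

# A canned response in the standard SPARQL results format, for illustration:
sample = json.loads("""{
  "results": {"bindings": [{
    "itemLabel": {"type": "literal", "value": "Douglas Adams"},
    "dob": {"type": "literal", "value": "1952-03-11T00:00:00Z"}
  }]}
}""")

print(person_query("Q42"))
print(bindings_to_term_meta(sample))
```

The real endpoint returns exactly this results shape, so the parsing step carries over directly however the query is issued.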

Where’s it going

It’s not finished. I’ll see to some of the deficiencies in the coding, but also I want to get some more elegant output, e.g. single indexes / archives of terms from all taxonomies, no matter what the relationship between the post and the item that the term relates to.

There’s no reason why the source of the metadata need be Wikidata. The same approach could be used with any source of metadata, or by creating the term metadata in WordPress. As such this is part of my exploration of WordPress as a semantic platform. Using taxonomies related to educational properties would be useful for any instance of WordPress being used as a repository of open educational resources, or to disseminate information about courses, or to provide metadata for PressBooks being used for open textbooks.

I also want to use it to index PressBooks such as my copy of Omniana. I think the graphs generated may be interesting ways of visualizing and processing the contents of a book for researchers.

Licenses: Wikidata is CC:0, the wikidata logo used in the featured image for this post is sourced from wikimedia and is also CC:0 but is a registered trademark of the wikimedia foundation used with permission. The plugin, as a derivative of WordPress, will be licensed as GPLv2 (the bit about NO WARRANTY is especially relevant).

The post Using wikidata for linked data WordPress indexes appeared first on Sharing and learning.

Not quite certifiable⤴

After a slight delay, last week I received the result of my CMALT (Certified Membership of the Association for Learning Technology) submission. While most of it was fine, the area which I had thought weakest, Core area 3: The wider context, was rated as inadequate. It has been lovely to see so many people celebrating gaining their CMALT over the last few months, and many of them have said how useful they found it to have access to examples of successful portfolios, which has also been my experience. In the hope that it is also useful to see examples that fall short, and in the hope that some of you might be able to provide feedback on improving it, I thought I would share my unsuccessful portfolio here.

The whole portfolio as submitted is available on Google docs, and the feedback from the assessors is here, but to focus on the area which needs attention, here is a copy of section 3 on Google docs,  to which I have added the assessors comments. The overall comments from the assessors are also worth noting:

Overall an articulate and insightful portfolio accompanied by appropriate evidence and contextualised reflection in most areas. However, in order to award a pass some minor amendments are required in Section 3a – Understanding and engaging with legislation, policies and standards: in particular, Area 1- student needs, and Area 2 – copyright, licensing and other IPR. Both of these areas require a greater depth and breadth of reflection. The details of this requirement are noted in the comments panel for each area.
These amendments would demonstrate to the assessors that the candidate has engaged with an appropriate level of reflection required in respect to the subjects chosen, which can have significant impact and influence on pedagogic practices in the use of educational technologies.

I have added the more specific comments from the marking table to the copy of section 3, and have added as suggestions my initial thoughts on how I might address them (those thoughts might be difficult to follow; think of them as scribbled aide-mémoires rather than a draft). If anyone would like to add their own comments or suggestions that would be hugely appreciated. I really would like to think more deeply about these issues, and it would help to know what I am missing.

When I was writing my portfolio I wrote that “I think I have learnt more by writing this than through any other thing I’ve done in the last five years.” I think much of the value in a CMALT is in the learning opportunity it presents, but it is also good to know that the assessment and feedback are robust, at least to the extent that the assessors succeeded in recognising those areas which I thought were weakest.

The post Not quite certifiable appeared first on Sharing and learning.

Quick update on W3C Community Group on Educational and Occupational Credentials⤴

The work with the W3C Community Group on educational and occupational credentials in schema.org is going well. There was a Credential Engine working group call last week where I summarised progress so far. The group has 24 members. We have around 30 outline use cases, and have some idea of the relative importance of these. The use cases fall under four categories: search, refinements of searches, secondary searches (having found a credential, wanting to find some other thing associated with it), and non-search use cases. From each use case we have derived one or two requirements for describing credentials in schema.org. We made a good start at working through these requirements.

I think the higher-level issues for the group are as follows. First, how do we model educational and occupational credentials in schema.org? Where do they fit into the hierarchy, and how do they relate to other work around verifying a claim to hold a credential? Second, there is the relationship between a vocabulary like schema.org, which aims for wide uptake by many disconnected providers of data, and vocabularies that serve a specialist domain or a partnership who are working closely together and can build a single tightly defined understanding of what they are describing. Third, and somewhat related to the previous point, what balance do we strike between pragmatism and semantic purity? We need to be pragmatic in order to build something that is acceptable to the rest of the community: not adding too many terms, not being too complex (one of the key factors in schema.org’s success has been the tendency to favour approaches which make it easier to provide data).

The post Quick update on W3C Community Group on Educational and Occupational Credentials appeared first on Sharing and learning.

Getting data from wikidata into WordPress custom taxonomy⤴

I created a custom taxonomy to use as an index of people mentioned. I wanted it to work nicely as linked data, and so wanted each term in it to refer to the wikidata identifier for the person mentioned. Then I thought, why not get the data for the terms from wikidata?

Brief details

There are lots of tutorials on how to set up a custom taxonomy with custom metadata fields. I worked from this one from Smashing Magazine to get a taxonomy called people, with a custom field for the Wikidata ID.

Once the Wikidata ID is entered, this code will fetch & parse the data (it’s a work in progress as I add more fields):

function omni_get_wikidata($wd_id) {
    print('getting wikidata&lt;br /&gt;');
    if ( '' !== trim( $wd_id ) ) {
        // Special:EntityData serves the full entity record as JSON
        $wd_api_uri = 'https://www.wikidata.org/wiki/Special:EntityData/'.$wd_id.'.json';
        $json = file_get_contents( $wd_api_uri );
        $obj = json_decode( $json );
        return $obj;
    } else {
        return false;
    }
}

function get_wikidata_value($claim, $datatype) {
    if ( isset( $claim->mainsnak->datavalue->value->$datatype ) ) {
        return $claim->mainsnak->datavalue->value->$datatype;
    } else {
        return false;
    }
}

function omni_get_people_wikidata($term) {
    $term_id = $term->term_id;
    $wd_id = get_term_meta( $term_id, 'wd_id', true );
    $args = array();
    $wikidata = omni_get_wikidata( $wd_id );
    if ( $wikidata ) {
        $wd_name = $wikidata->entities->$wd_id->labels->en->value;
        $wd_description = $wikidata->entities->$wd_id->descriptions->en->value;
        $claims = $wikidata->entities->$wd_id->claims;
        $type = get_wikidata_value( $claims->P31[0], 'id' ); // P31: instance of
        if ( 'Q5' === $type ) { // Q5: human
            if ( isset( $claims->P569[0] ) ) { // P569: date of birth
                $wd_birth_date = get_wikidata_value( $claims->P569[0], 'time' );
                print( $wd_birth_date.'<br/>' );
            }
        } else {
            echo ' Warning: that wikidata is not for a human, check the ID. <br />';
        }
        $args['description'] = $wd_description;
        $args['name'] = $wd_name;
        print_r( $args ); print( '<br />' );
        update_term_meta( $term_id, 'wd_name', $wd_name );
        update_term_meta( $term_id, 'wd_description', $wd_description );
        wp_update_term( $term_id, 'people', $args );
    } else {
        echo ' Warning: no wikidata for you, check the Wikidata ID. ';
    }
}
add_action( 'people_pre_edit_form', 'omni_get_people_wikidata' );

(Note: don’t add this to the edited_people hook unless you want a long wait while wp_update_term causes the function to trigger itself every time it runs…)

That on its own wasn’t enough. While the name and description of the term were being updated, the values for them displayed in the edit form weren’t updated until the page was refreshed. (Figuring out that it was mostly working took a while.) A bit of JavaScript inserted into the edit form fixed this:

function omni_taxonomies_edit_fields( $term, $taxonomy ) {
    $wd_id = get_term_meta( $term->term_id, 'wd_id', true );
    $wd_name = get_term_meta( $term->term_id, 'wd_name', true );
    $wd_description = get_term_meta( $term->term_id, 'wd_description', true );
    // JavaScript required so that name and description fields are updated
    ?>
    <script type="text/javascript">
      var f = document.getElementById("edittag");
      var n = document.getElementById("name");
      var d = document.getElementById("description");
      function updateFields() {
        n.value = "<?php echo( $wd_name ) ?>";
        d.innerHTML = "<?php echo( $wd_description ) ?>";
      }
      updateFields();
    </script>
    <tr class="form-field term-group-wrap">
        <th scope="row">
            <label for="wd_id"><?php _e( 'Wikidata ID', 'omniana' ); ?></label>
        </th>
        <td>
            <input type="text" id="wd_id" name="wd_id" value="<?php echo $wd_id; ?>" />
        </td>
    </tr>
    <?php
}
add_action( 'people_edit_form_fields', 'omni_taxonomies_edit_fields', 10, 2 );


The post Getting data from wikidata into WordPress custom taxonomy appeared first on Sharing and learning.

Educational and occupational credentials in schema.org⤴

Since the summer I have been working with the Credential Engine, which is based at Southern Illinois University, Carbondale, on a project to facilitate the description of educational and occupational credentials in schema.org. We have just reached the milestone of setting up a W3C Community Group to carry out that work. If you would like to contribute to the work of the group (or even just lurk and follow what we do) please join it.

Educational and occupational credentials

By educational and occupational credentials I mean diplomas, academic degrees, certifications, qualifications, badges, etc., that a person can obtain by passing some test or examination of their abilities. (See also the Connecting Credentials project’s glossary of credentialing terms.) These are already alluded to in some properties pending addition to schema.org, for example an Occupation or JobPosting’s qualification, or a Course’s educationalCredentialAwarded. These illustrate how educational and occupational credentials are useful for linking career aspirations with discovery of educational opportunities. The other entity type to which educational and occupational credentials link is competence, i.e. the skills, knowledge and abilities that the credential attests. We have been discussing some work on how to describe competences with schema.org in recent LRMI meetings; more on that later.
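As a purely illustrative sketch of what that linking looks like in data, a job posting could point at the credential it requires via the pending qualification property. The credential type named here is hypothetical: defining an agreed type for such things is exactly the work the group is to do, so treat this as a sketch rather than settled schema.org vocabulary:

```json
{
  "@context": "http://schema.org/",
  "@type": "JobPosting",
  "title": "Data analyst",
  "qualification": {
    "@type": "EducationalOccupationalCredential",
    "name": "BSc in Statistics or equivalent"
  }
}
```

A Course’s educationalCredentialAwarded would point at the same sort of entity from the other direction, which is what connects career aspirations to educational opportunities.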

Not surprisingly, there is already a large amount of relevant work done in the area of educational and occupational credentials. The Credential Engine has developed the Credential Transparency Description Language (CTDL), which has a lot of detail, albeit with a US focus and far more detail than would be appropriate for schema.org. The Badge Alliance has a model for open badges metadata that is applicable more generally. There is a W3C Verifiable Claims working group which is looking at credentials more widely, and at claims to hold them. Also, there are many frameworks which describe credentials in terms of the level and extent of the knowledge and competencies they attest: in the US Connecting Credentials covers this domain, while in the EU there are many national qualification frameworks and a Framework for Qualifications of the European Higher Education Area.

Potential issues

One potential issue is collision with existing work. We’ll have to make sure that we know where the work of the educational and occupational credential working group ends, i.e. what work would best be left to those other initiatives, and how we can link to the products of their work. Related to that is scope creep. I don’t want to get involved in describing credentials more widely, e.g. issues of identification, authentication, authorization; hence the rather verbose formula of ‘educational and occupational credential’. That formula also encapsulates another issue, a tension I sense between the educational world and the workplace: does a degree certificate qualify someone to do anything, or does it just relate to knowledge? Is an exam certificate a qualification?

The planned approach

I plan to approach this work in the same way that the schema course extension community group worked. We’ll use brief outline use cases to define the scope, and from these define a set of requirements, i.e. what we need to describe in order to facilitate the discovery of educational and occupational credentials. We’ll work through these to define how to encode the information with existing terms, or if necessary, propose new terms. While doing this we’ll use a set of examples to provide evidence that the information required is actually available from existing credentialling organizations.

Get involved

If you want to help with this, please join the community group. You’ll need a W3C account, and you’ll need to sign an assurance that you are not contributing any intellectual property that cannot be openly and freely licensed.

The post Educational and occupational credentials in schema.org appeared first on Sharing and learning.

Partnership with Cetis LLP⤴

I have worked with Cetis in one way or another for about 15 years, but am very happy to announce that at the end of last week I became a partner of Cetis LLP.

Cetis, a co-operative consultancy

For many years CETIS was a university-based innovation support centre funded by JISC. A few years ago the Jisc funding stopped, and most of my colleagues lost their university posts. They decided to keep offering the same range of services as a limited liability partnership, and so Cetis LLP was born as a cooperative consultancy for innovation in educational technology. I was lucky, and did not lose my position at Heriot-Watt at that time, as Cetis was only a part of my role there and I was able to fill the gap with other work. I did remain as an Associate of Cetis, i.e. someone with whom they work regularly, and we did several joint projects on that basis.

One of the first decisions I made when I left Heriot-Watt was that I wanted to be a full member of Cetis LLP. They are a great team, they do great work, and with them I will be able to continue to work on their many interesting projects, while also contributing what is necessary to keep the partnership going. I have already been working with them testing out the TrunkDB project (a cloud-based relational database for researchers) which is in private beta, and we are starting a new project on data wrangling orchestration. And I know we need to sort out the Cetis website so that it properly reflects all the work that the partnership has done over the last two or three years.

Going forward, I hope most of my work will be through Cetis. I’ll keep PJJK Limited for any work that doesn’t fit in with their interests, and there is a chance I’ll do some work through other channels, but the benefits of working with such a brilliant group of partners far outweigh any benefits of independent work.

The post Partnership with Cetis LLP appeared first on Sharing and learning.

TIL: getting Skype for Linux working⤴

Microsoft’s Skype for Linux is a pain (well, on Ubuntu at least). It stopped working for me: no one could hear me.

Apparently it needs PulseAudio to work properly, but as others have found, “most problems with the sound in Linux can be solved by removing PulseAudio”. The answer, as outlined in this post, is apulse, “PulseAudio emulation for ALSA”.

The post TIL: getting Skype for Linux working appeared first on Sharing and learning.