Author Archives: Phil Barker

Using wikidata for linked data WordPress indexes⤴

from @ Sharing and learning

A while back I wrote about getting data from wikidata into a WordPress custom taxonomy. Shortly thereafter Alex Stinson said some nice things about it, and as a result that post got a little attention.

Well, I now have a working prototype plugin which is somewhat more general purpose than my first attempt.

1. Custom Taxonomy Term Metadata from Wikidata

Here’s a video showing how you can create a custom taxonomy term with just a name and the wikidata Q identifier, and the plugin will pull down relevant wikidata for that type of entity:

[similar video on YouTube]

2. Linked data index of posts

Once this taxonomy term is used to tag a post, you can view the term’s archive page, and if you have a linked data sniffer, you will see that the metadata from Wikidata is embedded in machine-readable form using schema.org. Here’s a screenshot of what the OpenLink structured data sniffer sees:

Or you can view the Google structured data testing tool output for that page.
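The post doesn’t reproduce the markup itself, but to give a flavour of the sort of thing a linked data sniffer would pick up, here is a rough sketch (mine, not the plugin’s actual template code) of rendering a person term’s metadata as schema.org RDFa; the term meta keys and the function name are assumptions:

<?php
// Hypothetical sketch (not taken from the plugin) of how a term's Wikidata-derived
// metadata might be rendered as schema.org RDFa on a term archive page.
// Assumes term meta keys such as 'wd_id', 'wd_description' and 'wd_birth_date'
// have already been populated from Wikidata.
function omni_term_rdfa( $term ) {
	$wd_id          = get_term_meta( $term->term_id, 'wd_id', true );
	$wd_description = get_term_meta( $term->term_id, 'wd_description', true );
	$wd_birth_date  = get_term_meta( $term->term_id, 'wd_birth_date', true );
	?>
	<div vocab="https://schema.org/" typeof="Person" resource="#<?php echo esc_attr( $term->slug ); ?>">
		<span property="name"><?php echo esc_html( $term->name ); ?></span>
		<span property="description"><?php echo esc_html( $wd_description ); ?></span>
		<?php if ( $wd_birth_date ) : ?>
			<meta property="birthDate" content="<?php echo esc_attr( $wd_birth_date ); ?>" />
		<?php endif; ?>
		<!-- link back to the Wikidata record, which a harvester can follow for more data -->
		<link property="sameAs" href="<?php echo esc_url( 'https://www.wikidata.org/entity/' . $wd_id ); ?>" />
	</div>
	<?php
}
?>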

Features

  • You can create terms for custom taxonomies with just a term name (which is used as the slug for the term) and the Wikidata Q number identifier. The relevant name, description and metadata is pulled down from Wikidata.
  • Alternatively you can create a new term when you tag a post and later edit the term to add the wikidata Q number and hence the metadata.
  • The metadata retrieved from Wikidata varies according to the class of item represented by the term, e.g. birth and death details for people, date and location for events.
  • Term archive pages include the metadata from wikidata as machine readable structured data using schema.org. This includes links back to the wikidata record and other authority files (e.g. ISNI and VIAF). A system harvesting the archive page for linked data could use these to find more metadata. (These onward links put the linked in linked data and the web in semantic web.)
  • The type of relationship between the term and posts tagged with it is recorded in the schema.org structured data on the term archive page. Each custom taxonomy is for a specific type of relationship (currently about and mentions, but it would be simple to add others).
  • Shortcodes allow each post to list the terms from a custom taxonomy that are relevant to it, e.g. in a simple text widget (see the sketch after this list).
  • This is a self-contained plugin: it includes default term archive page templates, so there is no need for a custom theme. The default archive page is pretty basic (based on the twentysixteen theme), so you would get better results if you used it as the basis for a template in a custom theme.
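The post doesn’t show how these shortcodes are implemented, but a minimal sketch of the general idea might look something like this (the shortcode name, the taxonomy slug ‘mentions’ and the output format are my assumptions, not the plugin’s actual code):

<?php
// Minimal sketch of a shortcode that lists the 'mentions' taxonomy terms attached
// to the current post. The shortcode name and taxonomy slug are assumptions.
function omni_mentions_shortcode() {
	$terms = get_the_terms( get_the_ID(), 'mentions' );
	if ( empty( $terms ) || is_wp_error( $terms ) ) {
		return '';
	}
	$links = array();
	foreach ( $terms as $term ) {
		// Link each term to its archive page, where the schema.org metadata lives.
		$links[] = '<a href="' . esc_url( get_term_link( $term ) ) . '">' . esc_html( $term->name ) . '</a>';
	}
	return 'Mentions: ' . implode( ', ', $links );
}
add_shortcode( 'mentions', 'omni_mentions_shortcode' );
?>

A post (or a text widget that processes shortcodes) could then include [mentions] in its content to display the list.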

How’s it work / where is it

It’s on github. Do not use it on a production WordPress site. It’s definitely pre-alpha, and undocumented, and I make no claims for the code to be adequate or safe. It currently lacks error trapping / exception handling, and more seriously it doesn’t sanitize some things that should be sanitized. That said, if you fancy giving it a try do let me know what doesn’t work.

It’s based around two classes: one which sets up a custom taxonomy and provides some methods for outputting terms and term metadata in HTML with suitable schema.org RDFa markup; the other handles getting the wikidata via SPARQL queries and storing this data as term metadata. Getting the wikidata via SPARQL is much improved on the way it was done in the original post I mentioned above. Other files create taxonomy instances, provide some shortcode functions for displaying taxonomy terms and provide default term archive templates.
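As a rough illustration of that approach (this is a sketch under my own assumptions about the query and function name, not the plugin’s actual code), the Wikidata could be fetched from the Wikidata Query Service with WordPress’s HTTP API along these lines:

<?php
// Sketch of fetching selected properties for a Wikidata item from the
// Wikidata Query Service via SPARQL, using WordPress's HTTP API.
function omni_sparql_get_person( $wd_id ) {
	$query = "SELECT ?itemLabel ?itemDescription ?birthDate WHERE {
		BIND( wd:{$wd_id} AS ?item )
		OPTIONAL { ?item wdt:P569 ?birthDate . }
		SERVICE wikibase:label { bd:serviceParam wikibase:language 'en' . }
	}";
	$url      = 'https://query.wikidata.org/sparql?format=json&query=' . urlencode( $query );
	$response = wp_remote_get( $url );
	if ( is_wp_error( $response ) ) {
		return false;
	}
	$data = json_decode( wp_remote_retrieve_body( $response ) );
	// Return the first result binding, or false if nothing came back.
	return empty( $data->results->bindings ) ? false : $data->results->bindings[0];
}
?>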

Where’s it going

It’s not finished. I’ll see to some of the deficiencies in the coding, but also I want to get some more elegant output, e.g. single indexes / archives of terms from all taxonomies, no matter what the relationship between the post and the item that the term relates to.

There’s no reason why the source of the metadata need be Wikidata. The same approach could be used with any source of metadata, or by creating the term metadata in WordPress. As such this is part of my exploration of WordPress as a semantic platform. Using taxonomies related to educational properties would be useful for any instance of WordPress being used as a repository of open educational resources, or to disseminate information about courses, or to provide metadata for PressBooks being used for open textbooks.

I also want to use it to index PressBooks such as my copy of Omniana. I think the graphs generated may be interesting ways of visualizing and processing the contents of a book for researchers.

Licenses: Wikidata is CC0; the Wikidata logo used in the featured image for this post is sourced from Wikimedia and is also CC0, but is a registered trademark of the Wikimedia Foundation, used with permission. The plugin, as a derivative of WordPress, will be licensed as GPLv2 (the bit about NO WARRANTY is especially relevant).

The post Using wikidata for linked data WordPress indexes appeared first on Sharing and learning.

Not quite certifiable⤴

from @ Sharing and learning

After a slight delay, last week I received the result of my CMALT (Certified Membership of the Association for Learning Technology) submission. While most of it was fine, the area which I had thought weakest, Core area 3: The wider context, was rated as inadequate. It has been lovely to see so many people celebrating gaining their CMALT over the last few months, and many of them have said how useful they found it to have access to examples of successful portfolios, which has also been my experience. So, in the hope that it is also useful to see examples that fall short, and in the hope that some of you might be able to provide feedback on improving it, I thought I would share my unsuccessful portfolio here.

The whole portfolio as submitted is available on Google docs, and the feedback from the assessors is here, but to focus on the area which needs attention, here is a copy of section 3 on Google docs, to which I have added the assessors’ comments. The overall comments from the assessors are also worth noting:

Overall an articulate and insightful portfolio accompanied by appropriate evidence and contextualised reflection in most areas. However, in order to award a pass some minor amendments are required in Section 3a – Understanding and engaging with legislation, policies and standards: in particular, Area 1- student needs, and Area 2 – copyright, licensing and other IPR. Both of these areas require a greater depth and breadth of reflection. The details of this requirement are noted in the comments panel for each area.
These amendments would demonstrate to the assessors that the candidate has engaged with an appropriate level of reflection required in respect to the subjects chosen, which can have significant impact and influence on pedagogic practices in the use of educational technologies.

I have added the more specific comments from the marking table to the copy of section 3, and have added, as suggestions, my initial thoughts on how I might address them (those thoughts might be difficult to follow; think of them as scribbled aide-memoires rather than a draft). If anyone would like to add their own comments or suggestions, that would be hugely appreciated. I really would like to think more deeply about these issues, and it would help to know what I am missing.

When I was writing my portfolio I wrote that “I think I have learnt more by writing this than through any other thing I’ve done in the last five years.” I think much of the value in a CMALT is in the learning opportunity it presents, but it is also good to know that the assessment and feedback are robust, at least to the extent that the assessors succeeded in recognising those areas which I thought were weakest.

The post Not quite certifiable appeared first on Sharing and learning.

Quick update on W3C Community Group on Educational and Occupational Credentials⤴

from @ Sharing and learning

The work with the W3C Community Group on educational and occupational credentials in schema.org is going well. There was a Credential Engine working group call last week where I summarised progress so far. The group has 24 members. We have around 30 outline use cases, and have some idea of the relative importance of these. The use cases fall under four categories: search, refinements of searches, secondary searches (having found a credential, wanting to find some other thing associated with it), and non-search use cases. From each use case we have derived one or two requirements for describing credentials in schema.org. We made a good start at working through these requirements.

I think the higher-level issues for the group are as follows. First, how do we model educational and occupational credentials? Where do they fit into the schema.org hierarchy, and how do they relate to other work around verifying a claim to hold a credential? Second, what is the relationship between a vocabulary like schema.org, which aims for wide uptake by many disconnected providers of data, and vocabularies built for a specialist domain or a partnership working closely together, who can build a single, tightly defined understanding of what they are describing? Third, and somewhat related to the previous point, what balance do we strike between pragmatism and semantic purity? We need to be pragmatic in order to build something that is acceptable to the rest of the schema.org community: not adding too many terms, and not being too complex (one of the key factors in schema.org’s success has been the tendency to favour approaches which make it easier to provide data).

The post Quick update on W3C Community Group on Educational and Occupational Credentials appeared first on Sharing and learning.

Getting data from wikidata into WordPress custom taxonomy⤴

from @ Sharing and learning

I created a custom taxonomy to use as an index of people mentioned. I wanted it to work nicely as linked data, and so wanted each term in it to refer to the wikidata identifier for the person mentioned. Then I thought, why not get the data for the terms from wikidata?

Brief details

There are lots of tutorials on how to set up a custom taxonomy with custom metadata fields. I worked from this one from Smashing Magazine to get a taxonomy called people, with a custom field for the Wikidata ID.
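For context, the registration behind that tutorial-style setup looks roughly like the sketch below; the labels, arguments and hook choices here are my assumptions rather than the code I actually used:

<?php
// Rough sketch (an assumption, based on the kind of tutorial mentioned above)
// of registering the 'people' taxonomy and saving a Wikidata ID field.
function omni_register_people_taxonomy() {
	register_taxonomy( 'people', 'post', array(
		'labels'            => array( 'name' => 'People', 'singular_name' => 'Person' ),
		'hierarchical'      => false,
		'show_admin_column' => true,
	) );
}
add_action( 'init', 'omni_register_people_taxonomy' );

// Save the Wikidata ID entered in the term form as term meta under the key 'wd_id'.
function omni_save_people_wd_id( $term_id ) {
	if ( isset( $_POST['wd_id'] ) ) {
		update_term_meta( $term_id, 'wd_id', sanitize_text_field( $_POST['wd_id'] ) );
	}
}
add_action( 'created_people', 'omni_save_people_wd_id' );
add_action( 'edited_people', 'omni_save_people_wd_id' );
?>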

Once the Wikidata ID is entered, this code will fetch and parse the data (it’s a work in progress as I add more fields):

<?php
// Fetch the JSON description of an entity from Wikidata, given its Q number.
// Returns the decoded object, or false if no ID was supplied.
function omni_get_wikidata( $wd_id ) {
	print( 'getting wikidata<br />' ); // debug output, work in progress
	if ( '' !== trim( $wd_id ) ) {
		$wd_api_uri = 'https://wikidata.org/entity/' . $wd_id . '.json';
		$json = file_get_contents( $wd_api_uri );
		$obj  = json_decode( $json );
		return $obj;
	} else {
		return false;
	}
}

// Pull a single value of the given datatype out of a Wikidata claim, if present.
function get_wikidata_value( $claim, $datatype ) {
	if ( isset( $claim->mainsnak->datavalue->value->$datatype ) ) {
		return $claim->mainsnak->datavalue->value->$datatype;
	} else {
		return false;
	}
}

// For a term in the 'people' taxonomy, look up its stored Wikidata ID, fetch the
// entity data, and copy the name, description and other details into term meta.
function omni_get_people_wikidata( $term ) {
	$term_id  = $term->term_id;
	$wd_id    = get_term_meta( $term_id, 'wd_id', true );
	$args     = array();
	$wikidata = omni_get_wikidata( $wd_id );
	if ( $wikidata ) {
		$wd_name        = $wikidata->entities->$wd_id->labels->en->value;
		$wd_description = $wikidata->entities->$wd_id->descriptions->en->value;
		$claims         = $wikidata->entities->$wd_id->claims;
		// P31 is "instance of"; Q5 is "human".
		$type = isset( $claims->P31[0] ) ? get_wikidata_value( $claims->P31[0], 'id' ) : false;
		if ( 'Q5' === $type ) {
			// P569 is "date of birth".
			if ( isset( $claims->P569[0] ) ) {
				$wd_birth_date = get_wikidata_value( $claims->P569[0], 'time' );
				print( $wd_birth_date . '<br/>' );
			}
		} else {
			echo( ' Warning: that wikidata is not for a human, check the ID. ' );
			echo( ' <br /> ' );
		}
		$args['description'] = $wd_description;
		$args['name']        = $wd_name;
		print_r( $args ); print( '<br />' ); // debug output
		update_term_meta( $term_id, 'wd_name', $wd_name );
		update_term_meta( $term_id, 'wd_description', $wd_description );
		wp_update_term( $term_id, 'people', $args );
	} else {
		echo( ' Warning: no wikidata for you, check the Wikidata ID. ' );
	}
}
add_action( 'people_pre_edit_form', 'omni_get_people_wikidata' );
?>

(Note: don’t add this to the edited_people hook unless you want a long wait while it causes itself to be called every time it is called…)
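If you did want to hook the update itself, one way to avoid that recursion (a sketch of mine, not something from this post) is to detach the callback before the term update fires the hook again, then re-attach it:

<?php
// Hypothetical sketch: hooking edited_people safely by removing the callback
// before the term update (which would otherwise re-fire the hook), then re-adding it.
function omni_update_people_wikidata( $term_id ) {
	remove_action( 'edited_people', 'omni_update_people_wikidata' );
	$term = get_term( $term_id, 'people' );
	omni_get_people_wikidata( $term ); // this calls wp_update_term(), which fires edited_people
	add_action( 'edited_people', 'omni_update_people_wikidata' );
}
add_action( 'edited_people', 'omni_update_people_wikidata' );
?>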

The Wikidata-fetching code on its own wasn’t enough. While the name and description of the term were being updated, the values displayed for them in the edit form weren’t updated until the page was refreshed. (Figuring out that it was mostly working took a while.) A bit of JavaScript inserted into the edit form fixed this:

// Add a Wikidata ID field to the term edit form, and refresh the name and
// description inputs with the values fetched from Wikidata, so they show
// up-to-date values without a page reload.
function omni_taxonomies_edit_fields( $term, $taxonomy ) {
	$wd_id          = get_term_meta( $term->term_id, 'wd_id', true );
	$wd_name        = get_term_meta( $term->term_id, 'wd_name', true );
	$wd_description = get_term_meta( $term->term_id, 'wd_description', true );
	// JavaScript required so that the name and description fields are updated
	?>
	<script>
	  var n = document.getElementById("name");
	  var d = document.getElementById("description");
	  function updateFields() {
		n.value = "<?php echo esc_js( $wd_name ); ?>";
		d.innerHTML = "<?php echo esc_js( $wd_description ); ?>";
	  }
	  // Populate the fields as soon as the edit form loads.
	  updateFields();
	</script>
	<tr class="form-field term-group-wrap">
		<th scope="row">
			<label for="wd_id"><?php _e( 'Wikidata ID', 'omniana' ); ?></label>
		</th>
		<td>
			<input type="text" id="wd_id" name="wd_id" value="<?php echo esc_attr( $wd_id ); ?>" />
		</td>
	</tr>
	<?php
}
add_action( 'people_edit_form_fields', 'omni_taxonomies_edit_fields', 10, 2 );

 

The post Getting data from wikidata into WordPress custom taxonomy appeared first on Sharing and learning.

Educational and occupational credentials in schema.org⤴

from @ Sharing and learning

Since the summer I have been working with the Credential Engine, which is based at Southern Illinois University, Carbondale, on a project to facilitate the description of educational and occupational credentials in schema.org. We have just reached the milestone of setting up a W3C Community Group to carry out that work.  If you would like to contribute to the work of the group (or even just lurk and follow what we do) please join it.

Educational and occupational credentials

By educational and occupational credentials I mean diplomas, academic degrees, certifications, qualifications, badges, etc., that a person can obtain by passing some test or examination of their abilities. (See also the Connecting Credentials project’s glossary of credentialing terms.) These are already alluded to in some schema.org properties that are pending, for example an Occupation or JobPosting’s qualification or a Course’s educationalCredentialAwarded. These illustrate how educational and occupational credentials are useful for linking career aspirations with discovery of educational opportunities. The other entity type to which educational and occupational credentials link is Competence, i.e. the skills, knowledge and abilities that the credential attests. We have been discussing how to describe competences with schema.org in recent LRMI meetings; more on that later.

Not surprisingly, there is already a large amount of relevant work done in the area of educational and occupational credentials. The Credential Engine has developed the Credential Transparency Description Language (CTDL), which has a lot of detail, albeit with a US focus and far more detail than would be appropriate for schema.org. The Badge Alliance has a model for open badges metadata that is applicable more generally. There is a W3C Verifiable Claims working group which is looking at credentials more widely, and at the claim to hold one. Also, there are many frameworks which describe credentials in terms of the level and extent of the knowledge and competencies they attest: in the US, Connecting Credentials covers this domain, while in the EU there are many national qualification frameworks and a Framework for Qualifications of the European Higher Education Area.

Potential issues

One potential issue is collision with existing work. We’ll have to make sure that we know where the work of the educational and occupational credential working group ends, i.e. what work would best be left to those other initiatives, and how we can link to the products of their work. Related to that is scope creep. I don’t want to get involved in describing credentials more widely, e.g. issues of identification, authentication and authorization; hence the rather verbose formula of ‘educational and occupational credential’. That formula also encapsulates another issue, a tension I sense between the educational world and the workplace: does a degree certificate qualify someone to do anything, or does it just relate to knowledge? Is an exam certificate a qualification?

The planned approach

I plan to approach this work in the same way that the schema course extension community group worked. We’ll use brief outline use cases to define the scope, and from these define a set of requirements, i.e. what we need to describe in order to facilitate the discovery of educational and occupational credentials. We’ll work through these to define how to encode the information with existing schema.org terms, or if necessary, propose new terms. While doing this we’ll use a set of examples to provide evidence that the information required is actually available from existing credentialling organizations.

Get involved

If you want to help with this, please join the community group. You’ll need a W3C account, and you’ll need to sign an assurance that you are not contributing any intellectual property that cannot be openly and freely licensed.

The post Educational and occupational credentials in schema.org appeared first on Sharing and learning.

Partnership with Cetis LLP⤴

from @ Sharing and learning

I have worked with Cetis in one way or another for about 15 years, but am very happy to announce that at the end of last week I became a partner of Cetis LLP.

Cetis, a co-operative consultancy

For many years CETIS was a university-based innovation support centre funded by JISC. A few years ago the Jisc funding stopped, and most of my colleagues lost their university posts. They decided to keep offering the same range of services as a limited liability partnership, and so Cetis LLP was born as a cooperative consultancy for innovation in educational technology. I was lucky, and did not lose my position at Heriot-Watt at that time, as Cetis was only a part of my role there and I was able to fill the gap with other work. I did remain an Associate of Cetis, i.e. someone with whom they work regularly, and we did several joint projects on that basis.

One of the first decisions I made when I left Heriot-Watt was that I wanted to be a full member of Cetis LLP. They are a great team, they do great work, and with them I will be able to continue to work on their many interesting projects, while also contributing what is necessary to keep the partnership going. I have already been working with them testing out the TrunkDB project (a cloud-based relational database for researchers), which is in private beta, and we are starting a new project on data wrangling orchestration. And I know we need to sort out the Cetis website so that it properly reflects all the work that the partnership has done over the last two or three years.

Going forward, I hope most of my work will be through Cetis. I’ll keep PJJK Limited for any work that doesn’t fit in with their interests, and there is a chance I’ll do some work through other channels, but the benefits of working with such a brilliant group of partners far outweigh any benefits of independent work.

The post Partnership with Cetis LLP appeared first on Sharing and learning.

TIL: getting Skype for Linux working⤴

from @ Sharing and learning

Microsoft’s Skype for Linux is a pain (well, on Ubuntu at least). It stopped working for me: no one could hear me.

Apparently it needs PulseAudio to work properly, but as others have found, “most problems with the sound in Linux can be solved by removing PulseAudio”. The answer, as outlined in this post, is apulse, a “PulseAudio emulation for ALSA”.

The post TIL: getting Skype for Linux working appeared first on Sharing and learning.

The end of Open Educational Practices in Scotland⤴

from @ Sharing and learning

On Monday I was at Our Dynamic Earth, by the Holyrood Parliament in Edinburgh, for a day meeting on the Promise of Open Education. This was the final event of the Open Educational Practices in Scotland project (OEPS), which (according to the evaluation report):

involved five universities in leading a project based in the Open University in Scotland. Its aims were to facilitate best practice in open education in Scotland, and to enhance capacity for developing publicly available online materials across the tertiary education sector in Scotland. The project particularly focused on fostering the use of open educational practices to build capacity and promote widening participation.

 

There have always been questions about this project, notably the funnelling of money to the OU without any sign of an open bidding process, but at least it was there. With the OEPS finishing, two things caught my attention: how do we get political support for open education, and what open educational practice is current in Scotland. To paraphrase Orwell: if there is hope, it lies in the grass roots [hmm, that didn’t end well for Winston].

Open Education in Policy

Good places to start looking for current practice at both policy and operational levels are the ALT-Scotland SIG and the Scottish Open Education Declaration. There are strong links between the two: key members of ALT-Scotland (notably Lorna M Campbell and Joe Wilson) are involved in developing and promoting the Scottish Open Education Declaration, and OEPS also supported some of this work. The Scottish Open Education Declaration and ALT-Scotland have been successful in supporting policy around open education in Scottish HE and beyond, but it would be nice if this success were recognised and supported from outside of the Open Education community.

It seems you only get recognised at a political level if you claim to be able to solve big problems: local and global inequalities, widening educational participation. Anyone who says Open Education will solve these inequalities is a charlatan, and anyone who believes them is gullible. As Pete Cannell of OEPS said, open licensing of content is not the whole answer to widening participation, but it is an important part of the answer.

Open Education in Practice

More hopefully, there is a lot happening at grass roots level that is easy to overlook. Edinburgh University are leading the way,  with central support and a vision. As I saw, they are producing some fine OERs created by student interns.

A similar model for production is being used in my old workplace of Computer Science at Heriot-Watt University, but with less by way of strategic support. A small team of content interns, working under Lisa Scott, have been using open tools (WordPress, H5P, Lumen5) to create learning resources for the new Graduate Level Apprenticeship programme in Software Development. The actual course is closed, delivered in BlackBoard, but the resources are openly licensed and available to all (this not only allows the team to use CC:SA resources in their creation but saves the hassle of setting up access management to the collection).

Like Edinburgh, Glasgow Caledonian University has a policy for OER and a repository replete with resources, but the examples I found seemed locked for local use only. That’s not a criticism (and I may just have been unlucky in what I tried to view) because the important thing is that here is an example of open supporting the work of one of our Universities.

In Dundee, Natalie Lafferty runs a student selected component of the medical course on The Doctor as Digital Teacher for which students create a learning resource. Here’s an example of an iBook created by one student using original and openly licensed resources, and an account of its creation.


There are probably other examples from Scottish F&HE that I don’t know or have forgotten (sorry about that – but do use the comment box below to remedy this), but one of the key messages from the Promise of Open Education meeting was that Open Educational Practice is not just about Universities giving access to resources they create, valuable as that is. There were great examples presented at the conference of OEPS working with Dyslexia UK and Education Scotland, and with Parkinson’s UK. And in the final discussion Lorna Campbell did a great job of highlighting the variety of open educational practice in Scotland, from Scotland’s three Wikimedians in residence to networks such as Girl Geek Scotland. And that really is just the tip of the iceberg.

The end?

So, in conclusion, this was not the end of open educational practices in Scotland. The future lies not just in continuing the legacy of one project, but in the ongoing efforts of a great diversity of effort. But you know what, it would be really nice if those efforts got the recognition and support from national policy makers.

[Acknowledgement: the feature image for this post, which you may see in Tweets etc,  is the conference pack for OEPS Promise of Open Education. Courtesy of OEPS project.]

The post The end of Open Educational Practices in Scotland appeared first on Sharing and learning.