Tag Archives: WordPress

WordPress for Weans 2018 #pressedconf18

from @ wwwd – John's World Wide Wall Display

This is a summary of my presentation for PressED – a WordPress and Education, Pedagogy and Research Conference held on Twitter. I’ve pasted the text of the tweets below, without the conference hashtags.

I am @johnjohnston, a primary school teacher in Scotland. I acted as ‘Product Owner’ for Glow Blogs from 2014 to 2016 and continue the role on a part-time basis.

Glow is a service available to all schools and education establishments across Scotland.

Glow gives access to a number of different web services.

One of these services is Glow Blogs which runs on WordPress.

  • Glow Blogs consists of 33 multisites
  • Total number of blogs: 219,834
  • Total number of views in February 2018: 1,600,074
  • Number of blog users logging on in February 2018: 243,199

All teachers and pupils in Scotland can have access to #GlowBlogs via single sign-on through RM Unify (Shibboleth).

 

Development

#GlowBlogs is developed and maintained by the Scottish Government, with a considerable amount of work going into development, testing, security and data protection. This differs from many educational #WordPress set-ups in that changes are developed relatively slowly.

Major customisations include the Shibboleth sign-on, user roles and privacy. Teachers and pupils have slightly different permissions.
Blogs can be public, private or “Glow Only”.
There is also an e-Portfolio facility added via a plugin.

 

How the Blogs are used

Glow Blogs are currently used for School Websites, Class Blogs, Project Blogs, Trips, Libraries, e-Portfolios, Blogs By Learners, Blogs for Learners (resources, revision etc.), collaborations and aggregations.

 

e-Portfolios

ePortfolios are supported by a plugin using a custom taxonomy. ‘Profiles’ can be printed or exported to PDF. Pupil portfolio blogs can have sparkly unicorn or black vampire styles, but the profiles that come out look clean and neat.

Pupils

Pupils can learn to be on the web, but with under-13s we have a duty of care.
Pupils can create blogs but cannot make them public.

A member of staff can make a pupil’s blog public. Pupils can be members of a public blog and post publicly.

 

Examples


A collaboration https://blogs.glowscotland.org.uk/glowblogs/worldmustbecomingtoanend
Bees https://blogs.glowscotland.org.uk/nl/buzzingaboutbees/
A Blacksmith https://blogs.glowscotland.org.uk/st/scottishblacksmith
An aggregation https://blogs.glowscotland.org.uk/glowblogs/uodedushare
pupil projects: https://blogs.glowscotland.org.uk/ab/endeavour
more https://blogs.glowscotland.org.uk/glowblogs/glowingposts

Possibilities

We have only scratched the surface of the potential of #WordPress. The tools are in place, and Scottish teachers and learners are exploring the possibilities, but it is early days. We are tooled up for the future.

 

 

 

PressBooks and ePub as an OER format.

from @ Sharing and learning

PressBooks does a reasonable job of importing ePub, so that ePub can be used as a portable format for open text books. But, of course, there are limits.

I have been really impressed with PressBooks, the extension to WordPress for authoring eBooks. Like WordPress it is available as a hosted service from PressBooks.com and to host yourself from PressBooks.org. I have been using the latter for a few months. It looks like a great way of authoring, hosting, using, and distributing open books. Reports like this from Steel Wagstaff about Publishing Open Textbooks at UW-Madison really show the possibilities for education that open up if you do that. There you can read what work Steel and others have been doing around PressBooks for authoring open textbooks, with interaction (using hypothe.is, and h5p), connections to their VLE (LTI), and responsible learning analytics (xAPI).

PressBooks also supports replication of content from one PressBook install to another, which is great, but what is even greater is support of import from other content creation systems. We’re not wanting monoculture here.

Open text books are, of course, a type of Open Educational Resource, and so when thinking about PressBooks as a platform for open text books you’re also thinking about PressBooks and OER. So what aspects of text-books-as-OER does PressBooks support? What aspects should it support?

OER: DERPable, 5Rs & ALMS

Frameworks for thinking about requirements for openness in educational resources go back to the very start of the OER movement. Back in the early 2000s, when JISC was thinking about repositories and Learning Objects as ways of sharing educational resources, Charles Duncan used to talk about the need for resources to be DERPable: Discoverable, Editable, Repurposable and Portable. At about the same time in the US, David Wiley was defining Open Content in terms of four, later five Rs and ALMS. The five Rs are well known: the permissions to Retain, Reuse, Revise, Remix and Redistribute. ALMS is a less memorable, more tortured acronym, relating to technical choices that affect openness in practice. The choices relate to: Access to editing tools, the Level of expertise required to use these tools, the content being Meaningfully editable, and being Self-sourced (i.e. there not being separate source and distribution files).

Portability of ePub and editing in PressBooks

I tend to approach these terms back to front: I am interested in portable formats for disseminating resources, and systems that allow these to be edited. For eBooks / open textbooks my format of choice for portability is currently ePub, which is essentially HTML and other assets (images, stylesheets, etc.) with metadata, in a zip archive. Being HTML-based, ePub is largely self-sourced, and can be edited with suitable tools (though there may be caveats around some of the other assets such as images and diagrams). Furthermore, WordPress in general and PressBooks specifically makes editing, repurposing and distributing easy without requiring knowledge of HTML. It’s a good platform for remixing, revising, reusing, retaining content. And the key to this whole ramble of a blog post is the ‘import from ePub‘ feature.
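Since an ePub is essentially HTML plus assets and metadata in a zip archive, the ‘self-sourced’ point can be seen with nothing more than a zip library. A minimal sketch (in Python purely for illustration; the file names are illustrative and this is not a complete, valid ePub):

```python
# An ePub is just a zip archive of HTML, assets and metadata.
# This builds a toy one in memory and reads the 'source' back out.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epub:
    # Every ePub starts with a 'mimetype' file declaring what it is.
    epub.writestr("mimetype", "application/epub+zip")
    # Package metadata (title, author, licence...) lives in an OPF file.
    epub.writestr("OEBPS/content.opf", "<package>...</package>")
    # The book content itself is ordinary HTML, editable with ordinary tools.
    epub.writestr("OEBPS/chapter1.xhtml",
                  "<html><body><h1>Chapter 1</h1></body></html>")

with zipfile.ZipFile(buf) as epub:
    names = epub.namelist()
    chapter_html = epub.read("OEBPS/chapter1.xhtml").decode("utf-8")

print(names)         # the 'source files' are right there in the archive
print(chapter_html)  # plain HTML, ready for an editor or an importer
```

Anything that can unzip and parse HTML can therefore get at the editable source, which is exactly what makes ePub a good dissemination format.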

So how does the combination of ePub and PressBooks work in practice? I thought I could go to OpenStax and download one of their textbooks as ePub, but as far as I can see the best-known open textbook project doesn’t seem to make ePub available (Apple’s iBook format is similar, but I don’t do iBooks so couldn’t download one). So I went to Siyavula and downloaded one of their CC:BY textbooks as an ePub. I chose that download for import into PressBooks and got a screen that lets me choose which parts of the ePub to import and what type of content to import them as.

List of sections of the ePub with tick box for whether to import in PressBooks, and radio button options for what type of book part to import as

After choosing which parts to import and hitting the import button at the bottom of the page, the content is there to edit and republish in PressBooks.

From here you can edit or add content (including by import from other sources), rearrange the content, and set options for publishing it. There is other work to be done. You will need to choose a decent theme to display your book with style. You will also need to make sure internal links work as your PressBooks permalink URL scheme might not match the URLs embedded in the content. How easy this is will vary depending on choices made when the book was created and your own knowledge of some of the WordPress tools that can be used to make bulk edits.
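For the internal-links problem, one common approach is a bulk search-and-replace over the imported content, mapping the old URL scheme onto the new permalink scheme. A hypothetical sketch, where both base URLs are made-up examples:

```python
# Hypothetical sketch: rewriting internal links after an ePub import,
# when URLs embedded in the content don't match the new PressBooks
# permalink scheme. Both base URLs below are invented examples.
import re

old_base = "https://www.siyavula.com/read/maths/chapter-"
new_base = "https://example.pressbooks.site/maths/chapter/chapter-"

html = '<p>See <a href="https://www.siyavula.com/read/maths/chapter-3">chapter 3</a>.</p>'

# Keep the chapter number, swap the base.
fixed = re.sub(re.escape(old_base) + r"(\d+)", new_base + r"\1", html)
print(fixed)
```

In WordPress itself this kind of rewrite would more likely be done with a search-and-replace plugin or WP-CLI, but the mapping logic is the same.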

I am not really interested in distributing maths text books, so I won’t link to the end result of this specific example. I did once write a book in a book sprint with some colleagues, and that was published as an ePub. So here is an imported and republished version of Into The Wild (PressBook edition). I didn’t do much polishing of this: it uses a stock theme, and I haven’t fixed internal links, e.g. footnotes.

Limitations

Of course there are limits to this approach. I do not expect that much (if any) of the really interesting interactive content would survive a trip through ePub. Also much of Steel’s work that I described up at the top is PressBook platform specific. So that’s where cloning from PressBooks to PressBooks becomes useful. But ePub remains a viable way of getting textbook content into the PressBooks platform.

Also, while WordPress in general, and hence PressBooks, is a great way of distributing content, I haven’t looked much at whether metadata from the ePub is imported. On first sight none of it is, so there is work to do here in order to make the imported books discoverable. That applies to the package level metadata in ePubs, which is a separate file from the content. However, what also really interests me is the possibility of embedding education-specific schema.org metadata into the HTML content in such a way that it becomes transportable (easy, I think) and editable on import (harder).
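That package-level metadata is plain XML using Dublin Core elements, so pulling it out of the OPF file is straightforward. A sketch with a hand-made OPF snippet (not taken from a real book):

```python
# The package-level metadata in an ePub lives in a separate OPF file,
# as Dublin Core elements. The snippet below is a hand-made example.
import xml.etree.ElementTree as ET

opf = """<package xmlns="http://www.idpf.org/2007/opf"
                  xmlns:dc="http://purl.org/dc/elements/1.1/" version="3.0">
  <metadata>
    <dc:title>Into The Wild</dc:title>
    <dc:creator>Some Authors</dc:creator>
    <dc:rights>CC BY 4.0</dc:rights>
  </metadata>
</package>"""

ns = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(opf)
title = root.find(".//dc:title", ns).text
rights = root.find(".//dc:rights", ns).text
print(title, "/", rights)
```

So the import routine could, in principle, read this file and populate the PressBooks book-information fields; at the moment it appears not to.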

The post PressBooks and ePub as an OER format. appeared first on Sharing and learning.

Using wikidata for linked data WordPress indexes

from @ Sharing and learning

A while back I wrote about getting data from wikidata into a WordPress custom taxonomy. Shortly thereafter Alex Stinson said some nice things about it:


and as a result that post got a little attention.

Well, I have now a working prototype plugin which is somewhat more general purpose than my first attempt.

1. Custom Taxonomy Term Metadata from Wikidata

Here’s a video showing how you can create a custom taxonomy term with just a name and the wikidata Q identifier, and the plugin will pull down relevant wikidata for that type of entity:

[similar video on YouTube]

2. Linked data index of posts

Once this taxonomy term is used to tag a post, you can view the term’s archive page, and if you have a linked data sniffer, you will see that the metadata from Wikidata is embedded in machine-readable form using schema.org. Here’s a screenshot of what the OpenLink structured data sniffer sees:

Or you can view the Google structured data testing tool output for that page.

Features

  • You can create terms for custom taxonomies with just a term name (which is used as the slug for the term) and the Wikidata Q number identifier. The relevant name, description and metadata is pulled down from Wikidata.
  • Alternatively you can create a new term when you tag a post and later edit the term to add the wikidata Q number and hence the metadata.
  • The metadata retrieved from Wikidata varies to be suitable for the class of item represented by the term, e.g. birth and death details for people, date and location for events.
  • Term archive pages include the metadata from wikidata as machine readable structured data using schema.org. This includes links back to the wikidata record and other authority files (e.g. ISNI and VIAF). A system harvesting the archive page for linked data could use these to find more metadata. (These onward links put the linked in linked data and the web in semantic web.)
  • The type of relationship between the term and posts tagged with it is recorded in the schema.org structured data on the term archive page. Each custom taxonomy is for a specific type of relationship (currently about and mentions, but it would be simple to add others).
  • Short codes allow each post to list the entries from a custom taxonomy that are relevant for it using a simple text widget.
  • This is a self-contained plugin. The plugin includes default term archive page templates without the need for a custom theme. The archive page is pretty basic (based on the twentysixteen theme), so you would get better results if you used it as the basis for an addition to a custom theme.
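As an illustration of the machine-readable structure such a term archive page carries: the plugin itself emits RDFa, but the same shape is easier to show inline as JSON-LD. Everything below is an invented example except the Wikidata URI for Q42 (Douglas Adams):

```python
# Illustration of the structured-data shape a term archive page carries.
# The plugin emits RDFa; this JSON-LD rendering of the same idea is
# shown only for readability. Values are examples, not plugin output.
import json

term_archive = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Douglas Adams",
    "sameAs": [
        # link back to Wikidata; ISNI/VIAF authority links go in the same list
        "https://www.wikidata.org/entity/Q42",
    ],
    # posts tagged with the term, related by a typed schema.org property
    "subjectOf": [
        {"@type": "BlogPosting", "url": "https://example.blog/?p=1"},
    ],
}

print(json.dumps(term_archive, indent=2))
```

A harvester hitting the archive page can follow the sameAs links to find more metadata, which is what puts the “linked” in linked data.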

How’s it work / where is it

It’s on github. Do not use it on a production WordPress site. It’s definitely pre-alpha, and undocumented, and I make no claims for the code to be adequate or safe. It currently lacks error trapping / exception handling, and more seriously it doesn’t sanitize some things that should be sanitized. That said, if you fancy giving it a try do let me know what doesn’t work.

It’s based around two classes: one which sets up a custom taxonomy and provides some methods for outputting terms and term metadata in HTML with suitable schema.org RDFa markup; the other handles getting the wikidata via SPARQL queries and storing this data as term metadata. Getting the wikidata via SPARQL is much improved on the way it was done in the original post I mentioned above. Other files create taxonomy instances, provide some shortcode functions for displaying taxonomy terms and provide default term archive templates.
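For a flavour of what ‘getting the wikidata via SPARQL’ involves: the plugin is PHP, but the moving parts can be sketched in a few lines of Python. The properties queried here (label, description, date of birth) are examples, not necessarily the ones the plugin requests:

```python
# Sketch of the kind of request made to the Wikidata Query Service.
# The wd:, wdt:, rdfs: and schema: prefixes are predefined by the
# service; P569 is Wikidata's 'date of birth' property.
from urllib.parse import urlencode

def wikidata_sparql_url(q_id):
    query = f"""
    SELECT ?label ?description ?birth WHERE {{
      wd:{q_id} rdfs:label ?label .
      wd:{q_id} schema:description ?description .
      OPTIONAL {{ wd:{q_id} wdt:P569 ?birth . }}
      FILTER(LANG(?label) = "en")
      FILTER(LANG(?description) = "en")
    }}
    """
    return "https://query.wikidata.org/sparql?" + urlencode(
        {"query": query, "format": "json"}
    )

url = wikidata_sparql_url("Q42")
print(url[:60], "...")
```

Fetching that URL returns JSON results which can then be stored as term metadata.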

Where’s it going

It’s not finished. I’ll see to some of the deficiencies in the coding, but also I want to get some more elegant output, e.g. single indexes / archives of terms from all taxonomies, no matter what the relationship between the post and the item that the term relates to.

There’s no reason why the source of the metadata need be Wikidata. The same approach could be used with any source of metadata, or by creating the term metadata in WordPress. As such this is part of my exploration of WordPress as a semantic platform. Using taxonomies related to educational properties would be useful for any instance of WordPress being used as a repository of open educational resources, or to disseminate information about courses, or to provide metadata for PressBooks being used for open textbooks.

I also want to use it to index PressBooks such as my copy of Omniana. I think the graphs generated may be interesting ways of visualizing and processing the contents of a book for researchers.

Licenses: Wikidata is CC:0, the wikidata logo used in the featured image for this post is sourced from wikimedia and is also CC:0 but is a registered trademark of the wikimedia foundation used with permission. The plugin, as a derivative of WordPress, will be licensed as GPLv2 (the bit about NO WARRANTY is especially relevant).


Gutenberg on iPad

from @ wwwd – John's World Wide Wall Display

I’ve been testing the new Gutenberg editor for WordPress a little. I just sent the URL of this short video in the feedback form.

I am finding using the editor a little tricky on iOS. It is a lot better in portrait mode. I can see that many folk will like Gutenberg and it has some interesting features.

I really hope that the experience on iOS can get better before we get this in Glow Blogs. Just from a selfish point of view, my class use iPads to post to their e-portfolios. Having said that, I always get them to write in the Notes app first. Pasting multi-line text into Gutenberg seems to be handled nicely; the double/treble returns my pupils like to type get stripped out sensibly.

Getting data from wikidata into WordPress custom taxonomy

from @ Sharing and learning

I created a custom taxonomy to use as an index of people mentioned. I wanted it to work nicely as linked data, and so wanted each term in it to refer to the wikidata identifier for the person mentioned. Then I thought, why not get the data for the terms from wikidata?

Brief details

There are lots of tutorials on how to set up a custom taxonomy with custom metadata fields. I worked from this one from Smashing Magazine to get a taxonomy called people, with a custom field for the wikidata id.

Once the wikidata id is entered, this code will fetch and parse the data (it’s a work in progress as I add more fields):

<?php
// Fetch the JSON for a Wikidata entity, e.g. Q42.
function omni_get_wikidata( $wd_id ) {
    print( 'getting wikidata<br />' );
    if ( '' !== trim( $wd_id ) ) {
        $wd_api_uri = 'https://wikidata.org/entity/' . $wd_id . '.json';
        $json = file_get_contents( $wd_api_uri );
        if ( false === $json ) {
            return false; // request failed
        }
        return json_decode( $json );
    } else {
        return false;
    }
}

// Safely pull a value of the given datatype out of a Wikidata claim.
function get_wikidata_value( $claim, $datatype ) {
    if ( isset( $claim->mainsnak->datavalue->value->$datatype ) ) {
        return $claim->mainsnak->datavalue->value->$datatype;
    } else {
        return false;
    }
}

// Update a 'people' term's name, description and metadata from Wikidata.
function omni_get_people_wikidata( $term ) {
    $term_id = $term->term_id;
    $wd_id = get_term_meta( $term_id, 'wd_id', true );
    $args = array();
    $wikidata = omni_get_wikidata( $wd_id );
    if ( $wikidata ) {
        $wd_name = $wikidata->entities->$wd_id->labels->en->value;
        $wd_description = $wikidata->entities->$wd_id->descriptions->en->value;
        $claims = $wikidata->entities->$wd_id->claims;
        $type = get_wikidata_value( $claims->P31[0], 'id' ); // P31: instance of
        if ( 'Q5' === $type ) { // Q5: human
            if ( isset( $claims->P569[0] ) ) { // P569: date of birth
                $wd_birth_date = get_wikidata_value( $claims->P569[0], 'time' );
                print( $wd_birth_date . '<br/>' );
            }
        } else {
            echo ' Warning: that wikidata is not for a human, check the ID. <br /> ';
        }
        $args['description'] = $wd_description;
        $args['name'] = $wd_name;
        print_r( $args ); print( '<br />' );
        update_term_meta( $term_id, 'wd_name', $wd_name );
        update_term_meta( $term_id, 'wd_description', $wd_description );
        wp_update_term( $term_id, 'people', $args );
    } else {
        echo ' Warning: no wikidata for you, check the Wikidata ID. ';
    }
}
add_action( 'people_pre_edit_form', 'omni_get_people_wikidata' );
?>

(Note: don’t add this to the edited_people hook unless you want a long wait while the function causes itself to be called every time it is called…)

That on its own wasn’t enough. While the name and description of the term were being updated, the values displayed in the edit form weren’t updated until the page was refreshed. (Figuring out that it was mostly working took a while.) A bit of JavaScript inserted into the edit form fixed this:

function omni_taxonomies_edit_fields( $term, $taxonomy ) {
    $wd_id = get_term_meta( $term->term_id, 'wd_id', true );
    $wd_name = get_term_meta( $term->term_id, 'wd_name', true );
    $wd_description = get_term_meta( $term->term_id, 'wd_description', true );
    // JavaScript required so that the name and description fields show
    // the values fetched from Wikidata without a page refresh.
    ?>
    <script>
      var f = document.getElementById("edittag");
      var n = document.getElementById("name");
      var d = document.getElementById("description");
      function updateFields() {
        n.value = "<?php echo esc_js( $wd_name ); ?>";
        d.innerHTML = "<?php echo esc_js( $wd_description ); ?>";
      }
      // Update the displayed values now, and again on submit.
      updateFields();
      f.onsubmit = updateFields;
    </script>
    <tr class="form-field term-group-wrap">
        <th scope="row">
            <label for="wd_id"><?php _e( 'Wikidata ID', 'omniana' ); ?></label>
        </th>
        <td>
            <input type="text" id="wd_id" name="wd_id" value="<?php echo esc_attr( $wd_id ); ?>" />
        </td>
    </tr>
    <?php
}
add_action( 'people_edit_form_fields', 'omni_taxonomies_edit_fields', 10, 2 );

 


WordCamp Edinburgh, thoughts #wcedin

from @ wwwd – John's World Wide Wall Display

I just spent Saturday and half of Sunday at WordCamp Edinburgh 2017. This is only my third WordCamp, but I thought it might be worth typing up a few impressions.

The camp was very nicely organised, ran to time, had good food, the venue was great. Minimal friction for attendees.

The vibe was quite like a TeachMeet, although most of the presentations were an hour long and a bit more formal. I guess WordCamp, like TeachMeet, has its roots in BarCamp? Compared to a TeachMeet the sponsors were more visible and more part of the community. This felt fine, as I guess most of the attendees were professionals working alongside the sponsors. (I am not a fan of the over-sponsorship of TeachMeets.)

The talks were very varied, some technical, some business related. All the ones I went to were informative and enjoyable. There seemed to be a strong strand about using WordPress for the good, democracy and social change.

Social Good

Two of the keynotes were to do with this idea of social good. The opening one on day one was by Leah Lockhart, who talked about helping community groups and local politicians to communicate. I felt there was a lot in common with education. Schools have embraced online communication in the same sort of way, veering towards Twitter (probably less Facebook than community groups) as an easy way to get messages out. In the same way they lose control of their information and its organisation. Leah spoke of the way WordPress could give you a better long-term result.

Leah also explained that it is hard for community groups to design how their information gets out. I think we are at the point where WordPress is easy enough to use; the difficulty comes in using it in a strategic way that maximises its potential. I’ve got a fair bit of experience in helping schools use WordPress in a practical sense, and there is plenty of online help for that. There is a gap to be filled in the preparation and planning. If this is solved for community groups it might be easy to repurpose the information and processes for education.

Bridget Hamilton spoke of Using WordPress to create social change. Her story of her site Verbal Remedy was inspirational: a blog providing effective communication without much in the way of backing.

Technical

I went to a few of the more technical talks.

Mark Wilkinson spoke of ‘a deep understanding of actions and filters’. Since I mess around with code in WordPress at a very basic level this was a really useful talk for me, pitched at just the right level. I’ve used actions and filters with only a basic understanding; I think Mark got me to the point where I could begin to understand things a lot better the next time I dip in. Mark’s Slides

Tom Nowell spoke about the WordPress REST API for beginners (he meant beginners with the API, not generally). I held on by the skin of my teeth. Luckily I follow Tom Woodward and had played with the API in a much simpler way than either Tom documented. Yesterday I added a wee bit to my homepage to pull in the last status from my blog! Tom’s Slides

Twitter vs Blogs

Franz Vitulli talked about aspects of the pull between social media and blogging; it was good to hear another view of an area I’ve been reading and thinking about from an indieweb point of view.

Progressive Enhancement

Ben Usher Smith gave this talk. At first I thought it was a bit out of my wheelhouse, but it became apparent that the process of progressive enhancement can be applied to any sort of enterprise. I hope to be more aware of this when planning for my class next session. See Ben’s post Progressive enhancement — More than just works without JavaScript on Medium.

Even More…

I went to a few other talks, all of which I enjoyed. Even the ones I thought I was choosing almost at random had something interesting to them. Often it was in thinking about how the ideas or principles fitted into my world.

I took notes during the talks using Little Outliner 2, this meant I could publish as I went along: Notes from #wcedin. I am really liking using an outliner for this process, although I don’t think an iPad was as good as a laptop would have been. There are a few different links and thoughts there.

After I got back I fed the Twitter hashtag into TAGS, Martin Hawksey’s tool. This gives me TAGSExplorer: an interactive archive of twitter conversations from a Google Spreadsheet for #wcedin.

I probably missed a few opportunities to talk to folk; I found myself feeling a bit less social than I do in my TeachMeet comfort zone. But the atmosphere was very relaxed and inclusive. I’d recommend educators with an interest in blogging join in if there is a WordCamp near them.

WordPress menus fixed, fault mine & mine alone

from @ wwwd – John's World Wide Wall Display

I just updated my blog to the latest version of WordPress.

All seemed fine until I had a look at the site. Some of my menus had gone a little weird.

Quite a few of the titles had changed to ellipses! I went into the dashboard and changed them back, but my changes didn’t stick. I presumed that it must have been the upgrade. I had no idea where to start, so shot off a quick tweet for help.

After dinner I calmed down a little and thought about the changes to my blog I’d made recently for micro blogging. One of those was to give titles to posts without one, using the wp_insert_post_data filter. It uses ellipses! This was meant for posts arriving from the micro.blog app. They would get titles set when they arrived on my blog, to prevent ugliness in the dashboard.


// Give an untitled post a title built from the start of its content.
// Hooked on the wp_insert_post_data filter.
function modify_post_title($data)
{
    if ($data[ 'post_title' ] == ''  ) {
        //wp_filter_nohtml_kses strips html and then I replace &nbsp;
        $title = str_replace( "&nbsp;" , " " , substr( wp_filter_nohtml_kses( $data[ 'post_content' ] ) , 0, 60 ) ) . "..." ;
        $data['post_title'] =  $title ;
    }
    return $data; // Returns the modified data.
}
add_filter( 'wp_insert_post_data', 'modify_post_title' );

so I’ve changed the if to:
if ($data[ 'post_title' ] == '' && $data[ 'post_type' ] == 'post' )

Which seems to have solved the problem and taught me a lesson.

Adventures in micro blogging part 1

from @ wwwd – John's World Wide Wall Display

I signed up for the Kickstarter of micro.blog; it went live earlier this week.

Micro.blog is a new social network for independent microblogs.
Start a microblog today. Easy to publish, own your content, great cross-posting.

Micro.blog

The service is very new and so far has changed and developed every day.

The idea is that you publish short posts, and these are mirrored on micro.blog/yourusername via RSS. The posts can come from any RSS feed. You can get a micro.blog-hosted blog at yourusername.micro.blog or use your own hosting.

The micro.blog iOS app will post to your micro.blog blog or your own WordPress blog. Or you can use your own system. There is a microblog bot that will post your posts on to Twitter too.

The difference between the hosted blog and your micro.blog/username stream is a mite confusing at the moment. I wonder if a different domain name might have helped.

Both the hosted blog and the twitter bot are paid-for options. The docs make it clear that you can host your own and point to IFTTT as an alternative to the bot.

The system follows the indieweb principle of controlling your own content and sending it on to other spaces.

Replies on micro.blog to your posts are sent as webmentions to your own blog and show up as comments if you have the webmention plugin installed. I had that already to get twitter replies as comments.

My setup

I’ve added a new category here: micro. I’ve edited the blog so that posts in this category don’t show on the home page; they show on micro instead.

I set the micro.blog app to create posts with the status format in the micro category.

I turned off the jetpack social posting to Twitter function. I’ll manually post normal posts. I’ve set up a micro.blog bot to post to Twitter.

The service is very much a work in progress, and I’ve not really read the docs but I’ve noticed a few interesting things.

titleless

One is that the posts on micro.blog consist of descriptions with no titles. When you post from the app, you get a post on your blog without a title. A post with a title on your blog is posted as a link to micro.blog. With a post without a title, the description becomes the content of the micro.blog post.

That means you get lots of posts listed in your dashboard as ‘no title’. Since I didn’t like this, I tried to auto-add titles to posts without them, with a little Google-fu and some WordPress coding.

This worked out fine, except that the posts on micro.blog then consist of a title and a link, and the tweet posted by the twitter bot is the same.

I am now looking to create a custom RSS feed without title. More googling ahead.
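For what it’s worth, the feed shape micro.blog treats as a status post is just an RSS item with a description and no title element. A sketch of one such item (link and text made up), in Python for brevity:

```python
# Sketch of a title-less RSS item, the shape micro.blog treats as a
# status post rather than a linked, titled article. Values invented.
import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "link").text = "https://example.blog/?p=123"
ET.SubElement(item, "description").text = "A short status post, no title."
ET.SubElement(item, "pubDate").text = "Mon, 01 May 2017 12:00:00 +0000"

xml = ET.tostring(item, encoding="unicode")
print(xml)
```

A custom WordPress feed template would need to emit items of this shape, omitting the title element for posts in the micro category.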

Alternatively I could use the code from Tweaks for micro.blog that adds dates as titles; micro.blog ignores these.

Or just learn to live with ‘no title’ posts in the dashboard.

Me on Micro.blog

Preparing for the microblog is a lot more coherent than this post if you are looking for setup advice.

I’ll post the code I’ve mentioned above at some point, it is pretty simple stuff.

Reflections on a little bit of open education (TL;DR: it works).

from @ Sharing and learning

We are setting up a new honours degree programme which will involve use of online resources for work-based blended learning. I was asked to demonstrate some of the resources and approaches that might be useful. This is one of the quick examples that I was able to knock up(*), and some reflections on how Open Education helped me. By the way, I especially like the last bit about “open educational practice”. So if the rest bores you, just skip to the end.

(*Disclaimer: this really is a quickly-made example, it’s in no way representative of the depth of content we will aim for in the resources we use.)

Making the resource

I had decided that I wanted to show some resources that would be useful for our first year, first semester Praxis course. This course aims to introduce students to some of the skills they will need to study computer science, ranging from appreciating the range of topics they will study to being able to use our Linux systems, from applying study skills to understanding some requirements of academic writing. I was thinking that much of this would be fairly generic and must be covered by a hundred and one existing resources when  I saw this tweet:

That seemed to be in roughly the right area, so I took a look at the University of Nottingham’s HELM Open site and found an Introduction to Referencing. Bingo. The content seemed appropriate, but I wasn’t keen on a couple of things. First, breaking up the video into 20-second chunks would, I fear, mean the students spend more time ‘interacting’ with the Next-> button than thinking about the content. Second, it seems a little too delivery-oriented; I would like the student to be a little more actively engaged.

I noticed there is a little download arrow on each page which let me download the video. So I downloaded them all and used OpenShot to string them together into one file. I exported this and used the h5p WordPress plugin to show how it could be combined with some interactive elements and hosted on a WordPress site with the hypothes.is annotation plugin, to get this:

The remixed resource: on the top left is the video, below that some questions to prompt the students to pay attention to the most significant points, and on the right the hypothes.is pop-out for discussion.

How openness helps

So that was easy enough, a demo of the type of resource we might produce, created in less than an afternoon. How did “openness” help make it easy.

Open licensing and the 5Rs

David Wiley’s famous 5Rs define open licences as those that let you Reuse, Revise, Remix, Retain and Redistribute learning resources. The original resource was licensed as CC:BY-NC and so permitted all of these actions. How did they help?

Reuse: I couldn’t have produced the video from scratch without learning some new skills or having sizeable budget, and having much more time.

Revise: I wasn’t happy with the short video / many page turns approach, but was able to revise the video to make it play all the way through in one go.

Remix: The video was then added to some formative exercises, and discussion facility added.

Retain: in order for us to rely on these resources when teaching we need to be sure that the resource remains available. That means taking responsibility for keeping it available. Hence we’ll be hosting it on a site we control.

Redistribute: we will make our version available to others. This isn’t just about “paying forward”, it’s about the benefits that working in an open network brings, see the discussion about nebulous open education below.

One point to make here: the licence has a Non-Commercial restriction. I understand why some people favour this, but imagine if I were an independent consultant brought in to do this work, and charged for it. Would I then be able to use the HELM material? The recent case about a commercial company charging to duplicate CC-licensed material for schools, which a US judge ruled was within the terms of the licence, might apply, but photocopying seems different to remixing. To my mind, the NC clause just complicates things too much.

Open standards, and open source

I hadn’t heard much about David Wiley’s ALMS framework for technical choices to facilitate openness (same page as before, just scroll a bit further), but it deals directly with issues I am very familiar with. Anyone who thinks about it will realise that a copy-protected PDF is not open, no matter what the licence on it says. The ALMS framework breaks the reasoning for this down to four aspects: Access to editing tools, Level of expertise required, Meaningfully editable, Self-sourced. Hmmm. Maybe sometimes it’s clearer not to force category names into acronyms? Anyway, here’s how these helped.

Self-sourced, meaning the distribution format is the source code. This is especially relevant as the reason HELM sent the tweet that alerted me to their materials was that they are re-authoring material from Flash to HTML5. Aside from modern browser support, one big advantage of them doing this is that instead of having an impenetrable SWF package I had access to the assets that made the resource, notably the video clips.

Meaningfully editable: that access to the assets meant that I could edit the content, stringing the videos together, copying and pasting text from the transcript to use as questions.

Level of expertise required: I have found all the tools and services used (OpenShot, H5P, hypothes.is, WordPress) relatively easy to use; however, some experience is required, for example being familiar with the various plugins available for WordPress and how to install them. Video editing in particular takes some expertise. It’s probably something that most people don’t do very often (I don’t). Maybe the general level of digital literacy we should now aim for is one where people are familiar with photo and video editing tools as well as text-oriented word processing and presentation tools. However, I’m inclined to think that the details of using the H.264 video codec and AAC audio codec, packaged in an MPEG-4 Part 14 container (compare and contrast with VP9 and Ogg Vorbis packaged in a profile of Matroska), should remain hidden from most people. Fortunately, standardisation means that the number of options is smaller than it would otherwise be, and it was possible to find many pages on the web with guidance on the browser compatibility of these options (MP4 and WebM respectively).
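In practice, the browser-compatibility question is usually handled by offering both formats and letting the browser pick whichever it supports. A minimal sketch of how that looks in HTML (the file names here are hypothetical, not the actual resource files):

```html
<!-- The browser plays the first <source> whose type it supports;
     WebM (VP9/Vorbis) is listed first, with MP4 (H.264/AAC) as fallback. -->
<video controls width="640">
  <source src="helm-remix.webm" type="video/webm">
  <source src="helm-remix.mp4" type="video/mp4">
  Sorry, your browser does not support embedded video.
</video>
```

Tools like H5P and WordPress generate markup along these lines for you, which is part of why the codec details can stay hidden from most authors.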

Access to editing tools, where access starts with low cost. All the tools used were free, most were open source, and all ran on Ubuntu (most can also run on other platforms).

It’s notable that all of these ultimately involve open source software and open standards, and they work especially well when the “open” of open standards includes being free to implement. That complicated bit around the MP4 and WebM video formats comes about because of royalty requirements for those implementing MP4.

Open educational practice: nebulous but important

Open education includes, but is more than, open educational resources, open content, open licensing and open standards. It also means talking about what we do. It means that I found out about HELM because they were openly tweeting about their resources; I learnt about nearly all the tools discussed here in a similar manner. Yes, “pimping your stuff” is importantly open. Open education also means asking questions and writing how-to articles that let non-experts like me deal with complexities like video encoding.

There’s a deeper open education at play here as well. See that resource from HELM that I started with? It started life in the RLO CETL, i.e. in a publicly funded initiative, now long gone. And the reason I and others in UKHE know about Creative Commons and David Wiley’s analysis of open content largely comes down to #UKOER, again a publicly funded initiative. UKOER and the work on open standards and open source were supported by Jisc, publicly funded. Alumni from these initiatives are to be found all over UKHE, and through them these initiatives continue to build our capability and capacity to support learners in new and innovative settings.
The post Reflections on a little bit of open education (TL;DR: it works). appeared first on Sharing and learning.