Tag Archives: Other

PressBooks and ePub as an OER format.

PressBooks does a reasonable job of importing ePub, so that ePub can be used as a portable format for open text books. But, of course, there are limits.

I have been really impressed with PressBooks, the extension to WordPress for authoring eBooks. Like WordPress, it is available as a hosted service from PressBooks.com or to host yourself from PressBooks.org. I have been using the latter for a few months. It looks like a great way of authoring, hosting, using, and distributing open books. Reports like this one from Steel Wagstaff about Publishing Open Textbooks at UW-Madison really show the possibilities for education that open up if you do that. There you can read about the work Steel and others have been doing around PressBooks for authoring open textbooks, with interaction (using hypothes.is and h5p), connections to their VLE (LTI), and responsible learning analytics (xAPI).

PressBooks also supports replication of content from one PressBooks install to another, which is great, but what is even greater is support for importing from other content creation systems. We’re not wanting monoculture here.

Open text books are, of course, a type of Open Educational Resource, and so when thinking about PressBooks as a platform for open text books you’re also thinking about PressBooks and OER. So what aspects of text-books-as-OER does PressBooks support? What aspects should it support?

OER: DERPable, 5Rs & ALMS

Frameworks for thinking about requirements for openness in educational resources go back to the very start of the OER movement. Back in the early 2000s, when JISC was thinking about repositories and Learning Objects as ways of sharing educational resources, Charles Duncan used to talk about the need for resources to be DERPable: Discoverable, Editable, Repurposable and Portable. At about the same time in the US, David Wiley was defining Open Content in terms of four, later five Rs and ALMS. The five Rs are well known: the permissions to Retain, Reuse, Revise, Remix and Redistribute. ALMS is a less memorable, more tortured acronym, relating to technical choices that affect openness in practice. The choices relate to: Access to editing tools, the Level of expertise required to use these tools, the content being Meaningfully editable, and being Self-sourced (i.e. there not being separate source and distribution files).

Portability of ePub and editing in PressBooks

I tend to approach these terms back to front: I am interested in portable formats for disseminating resources, and systems that allow these to be edited. For eBooks / open textbooks my format of choice for portability is currently ePub, which is essentially HTML and other assets (images, stylesheets, etc.) with metadata, in a zip archive. Being HTML-based, ePub is largely self-sourced, and can be edited with suitable tools (though there may be caveats around some of the other assets such as images and diagrams). Furthermore, WordPress in general and PressBooks specifically make editing, repurposing and distributing easy without requiring knowledge of HTML. It’s a good platform for remixing, revising, reusing and retaining content. And the key to this whole ramble of a blog post is the ‘import from ePub’ feature.

So how does the combination of ePub and PressBooks work in practice? I thought I could go to OpenStax and download one of their textbooks as ePub, but as far as I can see the best-known open textbook project doesn’t make ePub available (Apple’s iBooks format is similar, but I don’t do iBooks so couldn’t download one). So I went to Siyavula and downloaded one of their CC:BY textbooks as an ePub, chose that download for import into PressBooks, and got a screen that lets me choose which parts of the ePub to import and what type of content to import them as.

[Screenshot: list of sections of the ePub, with a tick box for whether to import each into PressBooks, and radio button options for what type of book part to import it as.]

After choosing which parts to import and hitting the import button at the bottom of the page, the content is there to edit and republish in PressBooks.

From here you can edit or add content (including by import from other sources), rearrange the content, and set options for publishing it. There is other work to be done. You will need to choose a decent theme to display your book with style. You will also need to make sure internal links work as your PressBooks permalink URL scheme might not match the URLs embedded in the content. How easy this is will vary depending on choices made when the book was created and your own knowledge of some of the WordPress tools that can be used to make bulk edits.

I am not really interested in distributing maths text books, so I won’t link to the end result of this specific example. I did once write a book in a book sprint with some colleagues, and that was published as an ePub. So here is an imported & republished version of Into The Wild (PressBooks edition). I didn’t do much polishing of this: it uses a stock theme, and I haven’t fixed internal links, e.g. footnotes.

Limitations

Of course there are limits to this approach. I do not expect that much (if any) of the really interesting interactive content would survive a trip through ePub. Also, much of Steel’s work that I described at the top is specific to the PressBooks platform. So that’s where cloning from PressBooks to PressBooks becomes useful. But ePub remains a viable way of getting textbook content into the PressBooks platform.

Also, while WordPress in general, and hence PressBooks, is a great way of distributing content, I haven’t looked much at whether metadata from the ePub is imported. On first sight none of it is, so there is work to do here in order to make the imported books discoverable. That applies to the package-level metadata in an ePub, which is in a separate file from the content. However, what also really interests me is the possibility of embedding education-specific schema.org metadata into the HTML content in such a way that it becomes transportable (easy, I think) and editable on import (harder).
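To make that last idea a bit more concrete, here is a rough sketch (my own illustration, not something PressBooks or the ePub specification prescribes) of education-specific schema.org/LRMI properties embedded as microdata in the HTML of a chapter; the chapter title and values are made up:

<!-- hypothetical chapter fragment: LRMI/schema.org properties carried in the content itself -->
<section itemscope itemtype="https://schema.org/Chapter">
  <h1 itemprop="name">Vectors in two dimensions</h1>
  <meta itemprop="learningResourceType" content="textbook chapter" />
  <meta itemprop="educationalLevel" content="secondary school" />
  <link itemprop="license" href="https://creativecommons.org/licenses/by/4.0/" />
  <p>Chapter text goes here…</p>
</section>

Markup like this travels inside the ePub’s HTML, so it should survive the import; surfacing it for editing in PressBooks afterwards is the harder part.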


Quick update on W3C Community Group on Educational and Occupational Credentials

The work with the W3C Community Group on educational and occupational credentials in schema.org is going well. There was a Credential Engine working group call last week where I summarised progress so far. The group has 24 members. We have around 30 outline use cases, and have some idea of the relative importance of these. The use cases fall under four categories: search, refinements of searches, secondary searches (having found a credential, wanting to find something else associated with it), and non-search use cases. From each use case we have derived one or two requirements for describing credentials in schema.org. We made a good start at working through these requirements.

I think the higher-level issues for the group are as follows. First, how do we model an educational and occupational credential? Where does it fit into the schema.org hierarchy, and how does it relate to other work around verifying a claim to hold a credential? Second, there is the relationship between a vocabulary like schema.org, which aims for wide uptake by many disconnected providers of data, and vocabularies that serve a specialist domain or a partnership who are working closely together and can build a single, tightly defined understanding of what they are describing. Thirdly, and somewhat related to the previous point, what balance do we strike between pragmatism and semantic purity? We need to be pragmatic in order to build something that is acceptable to the rest of the schema.org community: not adding too many terms, not being too complex (one of the key factors in schema.org’s success has been the tendency to favour approaches which make it easier to provide data).


Getting data from wikidata into WordPress custom taxonomy

I created a custom taxonomy to use as an index of people mentioned. I wanted it to work nicely as linked data, and so wanted each term in it to refer to the wikidata identifier for the person mentioned. Then I thought, why not get the data for the terms from wikidata?

Brief details

There are lots of tutorials on how to set up a custom taxonomy with custom metadata fields. I worked from this one from Smashing Magazine to get a taxonomy called people, with a custom field for the wikidata ID.
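For anyone following along, the basic setup looks roughly like this. This is a minimal sketch only: the function names are mine, and the add-form and save callbacks follow the pattern in that tutorial rather than reproducing my exact code (the edit-form field for wd_id appears further down, in omni_taxonomies_edit_fields).

<?php
// Sketch: register the 'people' taxonomy and a field for the Wikidata ID on the "add term" form.
function omni_register_people_taxonomy() {
    register_taxonomy( 'people', 'post', array(
        'labels'            => array( 'name' => __( 'People', 'omniana' ) ),
        'hierarchical'      => false,
        'public'            => true,
        'show_admin_column' => true,
    ) );
}
add_action( 'init', 'omni_register_people_taxonomy' );

function omni_people_add_form_field() {
    ?>
    <div class="form-field term-group">
        <label for="wd_id"><?php _e( 'Wikidata ID', 'omniana' ); ?></label>
        <input type="text" id="wd_id" name="wd_id" value="" />
    </div>
    <?php
}
add_action( 'people_add_form_fields', 'omni_people_add_form_field' );

// Save the ID as term meta when the term is created.
function omni_people_save_wd_id( $term_id ) {
    if ( isset( $_POST['wd_id'] ) && '' !== $_POST['wd_id'] ) {
        add_term_meta( $term_id, 'wd_id', sanitize_text_field( $_POST['wd_id'] ), true );
    }
}
add_action( 'created_people', 'omni_people_save_wd_id' );
?>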

Once the wikidata ID is entered, this code will fetch & parse the data (it’s a work in progress as I add more fields):

<?php
function omni_get_wikidata( $wd_id ) {
    print( 'getting wikidata<br />' );
    if ( '' !== trim( $wd_id ) ) {
        $wd_api_uri = 'https://wikidata.org/entity/' . $wd_id . '.json';
        $json = file_get_contents( $wd_api_uri );
        $obj = json_decode( $json );
        return $obj;
    } else {
        return false;
    }
}

function get_wikidata_value( $claim, $datatype ) {
    if ( isset( $claim->mainsnak->datavalue->value->$datatype ) ) {
        return $claim->mainsnak->datavalue->value->$datatype;
    } else {
        return false;
    }
}

function omni_get_people_wikidata( $term ) {
    $term_id = $term->term_id;
    $wd_id = get_term_meta( $term_id, 'wd_id', true );
    $args = array();
    $wikidata = omni_get_wikidata( $wd_id );
    if ( $wikidata ) {
        $wd_name = $wikidata->entities->$wd_id->labels->en->value;
        $wd_description = $wikidata->entities->$wd_id->descriptions->en->value;
        $claims = $wikidata->entities->$wd_id->claims;
        $type = get_wikidata_value( $claims->P31[0], 'id' );
        if ( 'Q5' === $type ) {
            if ( isset( $claims->P569[0] ) ) {
                $wd_birth_date = get_wikidata_value( $claims->P569[0], 'time' );
                print( $wd_birth_date . '<br/>' );
            }
        } else {
            echo( ' Warning: that wikidata is not for a human, check the ID. ' );
            echo( ' <br /> ' );
        }
        $args['description'] = $wd_description;
        $args['name'] = $wd_name;
        print_r( $args );
        print( '<br />' );
        update_term_meta( $term_id, 'wd_name', $wd_name );
        update_term_meta( $term_id, 'wd_description', $wd_description );
        wp_update_term( $term_id, 'people', $args );
    } else {
        echo( ' Warning: no wikidata for you, check the Wikidata ID. ' );
    }
}
add_action( 'people_pre_edit_form', 'omni_get_people_wikidata' );
?>

(Note: don’t attach this to the edited_people hook unless you want a long wait while it triggers itself every time it updates the term…)

That on its own wasn’t enough. While the name and description of the term were being updated, the values for them displayed in the edit form weren’t updated until the page was refreshed. (Figuring out that it was mostly working took a while.) A bit of javascript inserted into the edit form fixed this:

function omni_taxonomies_edit_fields( $term, $taxonomy ) {
    $wd_id = get_term_meta( $term->term_id, 'wd_id', true );
    $wd_name = get_term_meta( $term->term_id, 'wd_name', true );
    $wd_description = get_term_meta( $term->term_id, 'wd_description', true );
    // JavaScript required so that the name and description fields show the
    // values just fetched from wikidata, without waiting for a page refresh.
    ?>
    <script>
      var n = document.getElementById("name");
      var d = document.getElementById("description");
      function updateFields() {
          n.value = "<?php echo( $wd_name ) ?>";
          d.innerHTML = "<?php echo( $wd_description ) ?>";
      }
      updateFields(); // run immediately, as soon as the form is rendered
    </script>
    <tr class="form-field term-group-wrap">
        <th scope="row">
            <label for="wd_id"><?php _e( 'Wikidata ID', 'omniana' ); ?></label>
        </th>
        <td>
            <input type="text" id="wd_id" name="wd_id" value="<?php echo $wd_id; ?>" />
        </td>
    </tr>
    <?php
}
add_action( 'people_edit_form_fields', 'omni_taxonomies_edit_fields', 10, 2 );

 


TIL: getting Skype for Linux working

Microsoft’s Skype for Linux is a pain (well, on Ubuntu at least). It stopped working for me: no one could hear me.

Apparently it needs PulseAudio to work properly, but as others have found, "most problems with the sound in Linux can be solved by removing PulseAudio". The answer, as outlined in this post, is apulse, "PulseAudio emulation for ALSA".


Wikidata driven timeline

I have been to a couple of wikidata workshops recently, both involving Ewan McAndrew; between which I read Christine de Pizan’s Book of the City of Ladies (*). Christine de Pizan is described as one of the first women in Europe to earn her living as a writer, which made me wonder what other female writers were around at that time (e.g. Julian of Norwich and, err…). So, at the second of these workshops, I took advantage of Ewan’s expertise, and the additional bonus of Navino Evans, cofounder of Histropedia, also being there, to create a timeline of medieval European female writers. (By the way, it’s interesting to compare this to Asian female writers; I was interested in Christine de Pizan and wanted to see how she fitted in with others who might have influenced her or attitudes to her, and so didn’t think that Chinese and Japanese writers fitted into the same timeline.)

[Image: Histropedia timeline of medieval female authors; the original links to the interactive version.]

This was generated from a SPARQL query:

#Timeline of medieval european female writers
#defaultView:Timeline
SELECT ?person ?personLabel ?birth_date ?death_date ?country (SAMPLE(?image) AS ?image) WHERE {
  ?person wdt:P106 wd:Q36180; # find everything that is a writer
          wdt:P21 wd:Q6581072. # ...and a human female
  OPTIONAL{?person wdt:P2031 ?birth_date} # use floruit dates if present for birth/death,
  OPTIONAL{?person wdt:P2032 ?death_date} # as some very imprecise dates give odd results
  ?person wdt:P570 ?death_date. # get their date of death
  OPTIONAL{?person wdt:P569 ?birth_date} # get their birth date if it is there
  ?person wdt:P27 ?country.   # get their country
  ?country wdt:P30  wd:Q46.   # we want country to be part of Europe
  FILTER (year(?death_date) < 1500) FILTER (year(?death_date) > 600)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  OPTIONAL { ?person wdt:P18 ?image. }
}
GROUP BY ?person ?personLabel ?birth_date ?death_date ?country
Limit 100

[run it on wikidata query service]

Reflections

I’m still trying to get my head around SPARQL. Ewan and Nav helped a lot, but I wouldn’t want to pass this off as exemplary SPARQL. In particular, I have no idea how to optimise SPARQL queries, and the way I get birth_date and death_date to be the start and end of when the writer flourished, if that data is there, seems a bit fragile (there’s a sketch of a possibly more robust approach below).

It was necessary to use floruit dates because some of the imprecise birth & death dates lead to very odd timeline displays: born C12th, died C13th showed as being alive for 200 years.
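If I were reworking it, a possibly more robust way to express that fallback (a sketch only, not tested against the full query above) would be to bind each date to its own variable and then pick one with COALESCE, rather than reusing ?birth_date and ?death_date across several OPTIONAL patterns:

# sketch: prefer floruit (work period) dates where present, otherwise birth/death
OPTIONAL { ?person wdt:P569 ?dob. }           # date of birth
OPTIONAL { ?person wdt:P570 ?dod. }           # date of death
OPTIONAL { ?person wdt:P2031 ?floruitStart. } # work period start
OPTIONAL { ?person wdt:P2032 ?floruitEnd. }   # work period end
BIND( COALESCE(?floruitStart, ?dob) AS ?birth_date )
BIND( COALESCE(?floruitEnd, ?dod) AS ?death_date )
FILTER( BOUND(?death_date) && year(?death_date) < 1500 && year(?death_date) > 600 )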

There were other oddities in the wikidata. When I first tried, Julian of Norwich didn’t appear because she was a citizen of the Kingdom of England, which wasn’t listed as a country in Europe. Occitania, on the other hand, was. That was fixed. More difficult was a writer from Basra who was showing up because Basra was in the Umayyad Caliphate, which included Spain and so was classed as a European country. Deciding what we mean by European has never been easy.

Given the complexities of the data being represented, it’s no surprise that the Wikidata data model isn’t simple. In particular I found that dealing with qualifiers for properties was mind-bending (especially with another query I tried to write).
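For anyone who hasn’t met them: qualifiers hang off the statement node rather than the item itself, so you have to swap the wdt: shortcut for the p:/ps:/pq: prefixes. A minimal illustration (my own sketch, not part of the timeline query) that finds where writers were educated and, where recorded, when that education started:

SELECT ?person ?personLabel ?school ?schoolLabel ?start WHERE {
  ?person wdt:P106 wd:Q36180.              # writers
  ?person p:P69 ?statement.                # 'educated at' statement node
  ?statement ps:P69 ?school.               # the value of that statement
  OPTIONAL { ?statement pq:P580 ?start. }  # 'start time' qualifier, if present
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10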

Combining my novice level of SPARQL and the complexity of the Wikidata data model, I could definitely see the need for SPARQL tutorials that go beyond the simple "here's how you find a triple that matches a pattern" level.

Finally: histropedia is pretty cool.

Footnote:

The Book of the City of Ladies is a kind of Women in Red for Medieval Europe. Rosalind Brown-Grant’s translation for Penguin Classics is very readable.


An ending, and a beginning

On 30 June 2017 I will be leaving my current employment at Heriot-Watt University. I aim to continue to support the use of technology to enhance learning as an independent consultant.

I first joined Heriot-Watt’s Institute for Computer Based Learning in 1996 on a six-month secondment. I was impressed that ICBL was part of a large, well-supported Learning Technology Centre, which was acknowledged at that time as one of the leading centres for the use of technology in teaching and learning. You can get a sense of the scope of the LTC by looking at the staff list from around that time. Working with, and learning from, colleagues with this common interest was hugely appealing to me; so when I had the opportunity I re-joined ICBL in 1997, and this time I stayed.

In my time at Heriot-Watt I have been fortunate beyond belief to collaborate with people in ICBL and through work such as the Engineering Subject Centre (and other subject centres of the LTSN and then the HE Academy), EEVL, Cetis and many Jisc projects. But things change. The LTC was dismantled. A reduced ICBL moved into the computing side of the School of Mathematics and Computer Sciences (MACS). Funding became difficult, and while I greatly appreciate the huge effort made by several individuals which kept me in continuous employment, like many in similar roles I frequently felt my position was precarious. I really enjoyed teaching Computer Science and Information Systems students, and I worked with some great people in MACS, but the work became more internally focused, isolated from current developments…not what I had joined ICBL for.

What next?

When Heriot-Watt announced that it planned to offer staff voluntary severance terms, I applied and was happy to be accepted. My professional interests remain the same: supporting the selection and use of appropriate learning resources; supporting the management and dissemination of learning resources; open education; sharing and learning. I do this through work on resource description, course description and OER platforms, using specific technologies like schema.org, LRMI and WordPress. I intend to continue working in these areas, as an independent consultant and with colleagues in Cetis LLP. Contact me if you think I can help you.

[Image: photograph of a person walking up a flight of stairs into the open. "Exit" by flickr user Allen (https://www.flickr.com/photos/78139009@N03/), licence CC:BY.]


CMALT – Another open portfolio

I’ve finally made a start on drafting my CMALT Portfolio (and so has Lorna,* we’re writing buddies), and in the interests of open practice I’m going to attempt to write the whole thing as an open Google doc before moving it here on my blog. I have a shared folder on Google Drive, Phil’s CMALT, where I’ll be building up my portfolio over the coming weeks. I’ve made a start drafting the first two Core Areas, Operational Issues and Learning, Teaching and Assessment; I’ll be adding more sections shortly, I hope. I’d love to have some feedback on my portfolio, so if you’ve got any thoughts, comments or guidance I’d be very grateful indeed. I’d also be very interested to know if anyone else has created their portfolio as an exercise in open practice, and if so, how they found the experience.

Wish me luck!

[Image: an open door and the word Openness, in ALT branding. CC BY @BryanMMathers for ALT.]

*Credit: This post is totally copied from Lorna’s post, with slight adaptation, under the terms of the Creative Commons Attribution 3.0 Unported License she used.

More importantly, she let me copy her idea as well. That’s open practice which can’t be formally licensed, though I know that she is cool with it.


Flying cars, digital literacy and the zone of possibility

Where’s my flying car? I was promised one in countless SF films from Metropolis through to The Fifth Element. Well, they exist. Thirty seconds on the search engine of your choice will find you a dozen or so working prototypes (here’s a YouTube video with five).

[Image: a fine and upright gentleman flying in a small helicopter-like vehicle. Jess Dixon’s flying automobile, c. 1940. Public Domain, held by State Library and Archives of Florida, via Flickr.]

They have existed for some time. Come to think of it, the driving around on the road bit isn’t really the point. I mean, why would you drive when you could fly? I guess a small helicopter and somewhere to park it would do.

So it’s not lack of technology that’s stopping me from flying to work. What’s more of an issue (apart from cost and environmental damage) is that flying is difficult. The slightest problem, like an engine stall or a bump with another vehicle, tends to be fatal. So the reason I don’t fly to work is largely down to me not having learnt how to fly.

The zone of possibility

In 2010 Kathryn Dirkin studied how three professors taught using the same online learning environment, and found that their approaches were very different. Not something that will surprise many people, but the paper (which unfortunately is still behind a paywall) is worth a read for the details of the analysis. What I liked from her conclusions was that how someone teaches online depends on the intersection of their knowledge of the content, their beliefs about how it should be taught, and their understanding of technology. She calls this intersection the zone of possibility. As with the flying car, the online learning experience we want may already be technologically possible; we just need to learn how to fly it (and consider the cost and effect on the environment).

I have been thinking about Dirkin’s zone of possibility over the last few weeks. How can it be increased? Should it be increased? On the latter, let’s just say that if technology can enhance education, then yes it should (but let’s also be mindful about the costs and impact on the environment).

So how, as a learning technologist, to increase this intersection of content knowledge, pedagogy and understanding of technology? Teachers’ content knowledge I guess is a given; there is nothing a learning technologist can do to change that. Also, I have come to the conclusion that pedagogy is off limits. No technology-as-a-Trojan-horse for improving pedagogy, please, that just doesn’t work. It’s not that pedagogic approaches can’t or don’t need to be improved, but conflating that with technology seems counterproductive. So that’s left me thinking about teachers’ (and learners’) understanding of technology. Certainly, the other week when I was playing with audio & video codecs and packaging formats that would work with HTML5 (keep repeating: H264 and AAC in MPEG-4) I was aware of this. There seem to be three viable approaches: increase digital literacy, provide tools that simplify the technology, and use learning technologists as intermediaries between teachers and technology. I leave it at that because it is not a choice of which, but of how much of each can be applied.

Does technology or pedagogy lead?

In terms of defining the "zone of possibility" I think that it is pretty clear that technology leads. Content knowledge and pedagogy change slowly compared to technology. I think that rate of change is reflected in most teachers’ understanding of those three factors. I would go as far as to say that it is counterfactual to suggest that our use of technology in HE has been led by anything other than technology. Innovation in educational technology usually involves exploration of new possibilities opened up by technological advances, not other factors. But having acknowledged this, it should also be clear that having explored the possibilities, a sensible choice of what to use when teaching will be based on pedagogy (as well as cost and the effect on the environment).


Shared WordPress archive for different post types

In a WordPress plugin I have custom post types for different types of publication: books, chapters, papers, presentations, reports. I want one single archive of all of these publications.
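For context, each of those publication types is registered as a custom post type. A minimal sketch (my naming and argument choices, not the actual plugin code) of how one of them might be registered, including the archive slug that matters below:

<?php
// Sketch only: one custom post type per publication type; 'presentation' is shown here.
function pubs_register_presentation_cpt() {
    register_post_type( 'presentation', array(
        'labels'      => array( 'name' => __( 'Presentations' ) ),
        'public'      => true,
        'has_archive' => 'presentations',                 // gives the /presentations/ archive
        'rewrite'     => array( 'slug' => 'presentations' ),
        'supports'    => array( 'title', 'editor', 'custom-fields' ),
    ) );
}
add_action( 'init', 'pubs_register_presentation_cpt' );
?>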

I know that the theme template hierarchy allows templates with the pattern archive-$posttype.php, so I tried setting the slug for all the custom post types to ‘presentations’. WordPress doesn’t like that. So what I did was set the slug for one of the publication custom post types to ‘presentations’, which gives me a /presentations/ archive for that custom post type(1). I then edited the archive.php file to use a different template part for custom post types(2):

<?php
$cpargs = array(
    '_builtin'            => False,
    'exclude_from_search' => False,
);
$custom_post_types = get_post_types( $cpargs, 'names', 'and' );
if ( is_post_type_archive( $custom_post_types ) ) {
    get_template_part( 'archive-publication' );
} else {
    get_template_part( 'archive-default' );
}
?>

See anything wrong with this approach? Any comments on how better to do this would be welcome.

Notes:
  1. I could edit the .htaccess file to redirect the /books/, /chapters/ …etc archives to /publications/, which would be neater in some ways but would make setting up the theme a bit of a faff.
  2. Yes, the code gives all the custom post types with an archive the same archive. That’s fixable if you build the array of post types that should share an archive manually, as in the sketch below.
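That manual version would look something like this (the post type names are illustrative guesses at what the plugin registers):

<?php
// Sketch: only the post types listed here share the publication archive template.
$shared_archive_types = array( 'book', 'chapter', 'paper', 'presentation', 'report' );
if ( is_post_type_archive( $shared_archive_types ) ) {
    get_template_part( 'archive-publication' );
} else {
    get_template_part( 'archive-default' );
}
?>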
