Tag Archives: WordPress

WordPress LTI Testing: Part 4⤴

from @ education

This follows on from a series of previous posts documenting some thinking about integrating WordPress with a VLE via LTI: Laying it all out to begin; WordPress LTI Testing: Part 1; Part 2; Part 3. Doing the thinking above … Continue reading WordPress LTI Testing: Part 4

A feed for my microcast⤴

from @ wwwd – John's World Wide Wall Display

As part of my summer holiday fun with WordPress I thought I might create a ‘proper’ RSS feed for my microcast.

There are quite a few podcast plugins that would do the job, but I thought it might be interesting to try a bit of DIY.

Back when I started a class podcast at Radio Sandaig I used to create the RSS feed by hand with a text editor and a fair bit of copy and paste. Over at Edutalk we use feedburner to massage the feed for iTunes.

I used information from How to Roll Your Own Simple WordPress Podcast Plugin | CSS-Tricks to get me started with the template.

I copied the feed-rss2.php file from the wp-includes folder to my child theme folder, renaming it feed-microcast.php:

wp-content/themes/sempress-child/feed-microcast.php

I adjusted the query to get the posts from my microcast category. I also hard coded the title, link, image and a few other things to simplify the process a little.

I then used the template from CSS-Tricks as a guide to adding the various podcast tags to my template.

This ended up with a pretty broken feed, mostly due to my lack of care, but I fixed it up later, once I got it linked up.

I didn’t want to use the custom post type approach used in the article because that would involve editing all the old posts or converting them to the new type somehow.

My first idea was to create a feed template and switch to that when the RSS feed for my microcast category was called for.

After failing to get the template to switch for the standard category feed, /category/microcast/feed, I ended up with a custom feed at /feed/microcast.

and I added

add_action('init', 'customRSS');
function customRSS(){
        // Register a custom feed at /feed/microcast
        add_feed('microcast', 'customRSSFunc');
}

function customRSSFunc(){
        // Render it using the feed-microcast.php template in the child theme
        get_template_part('feed', 'microcast');
}

to my functions.php file.

I then spent a bit of time using the W3C feed validation service until I fixed the feed up to validate.

I’ve still got to get a link to the feed into the microcast category page head tag and I hope to do that as soon as I’ve done a bit of research. For now I’ve a link in the sidebar.

Here is the template: WordPress RSS feed template for my microcast

Using the WordPress REST API to post a book from WikiSource to PressBooks with python⤴

from @ Sharing and learning

I am using Pressbooks to build an online edition of Southey and Coleridge’s Omniana. I transcribed the text for Volume I on wikisource. This post is about how I got that text into pressbooks; copy and paste didn’t appeal, so I thought I would try using the WordPress REST API. You could probably write a PHP plugin that would do this, but I find python a bit easier for exploratory work, so I used that.

Getting the data from Wikisource is reasonably trivial. On wikisource I have transcluded the page transcriptions into a single HTML file of the whole book. This file is relatively easy to parse into the individual articles for posting to Pressbooks, especially as I added <hr /> tags before each article (even the first) and added a stop marker at the end.
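To give a flavour of that parsing, something along these lines splits the saved wikisource HTML at the <hr /> tags using python and BeautifulSoup. This is only a sketch of the approach, not my actual script: the file name, the heading tags and the handling of the final stop marker are all assumptions.

# Sketch: split the transcluded wikisource HTML into articles at the <hr /> tags.
from bs4 import BeautifulSoup

with open("omniana-vol1.html", encoding="utf-8") as f:   # hypothetical file name
    html = f.read()

# Everything between successive <hr /> tags is one article; drop the preamble
# before the first tag. The final chunk may need trimming at the stop marker.
chunks = html.split("<hr />")[1:]
articles = []
for chunk in chunks:
    soup = BeautifulSoup(chunk, "html.parser")
    heading = soup.find(["h2", "h3"])      # assumes each article starts with a heading
    title = heading.get_text(strip=True) if heading else "Untitled"
    articles.append({"title": title, "content": chunk})

print(len(articles), "articles found")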

In the longer term I want to start indexing the PressBook Omniana using wikidata for linked data. This will let me look at the semantic graph of what Southey and Coleridge were interested in.

First steps with the WordPress API

I’ve not used the WordPress API before, but it is well documented and there is a useful series of articles on envatoTuts+: Introducing the WP REST API.

Put /wp-json onto the end of a WordPress blog URL and you can see the routes and endpoints (e.g. this blog, my Pressbooks/Omniana). (I use the JSON viewer chrome plugin to make these easier to read.) I found wp-api-python very useful in helping make requests against these in python. It’s available via pip as wordpress-api and I found it required the python libraries requests, beautifulsoup4, requests-oauthlib and six. It authenticates via OAuth, so on WordPress you need the WordPress REST API – Oauth1.0a plugin or similar; there’s more than you need to know about how OAuth works on envatotuts+.

I installed the Oauth1.0a plugin for the network on my WordPress multisite and Pressbooks test servers. Network activation seemed to generate errors on Pressbooks and plain multisite WordPress, so I activated it only for the individual blog/book. Then in the Users tab on the admin screen I was able to view and set up applications:

Add Application screen from the OAuth1.0a plugin

Filling out the details and clicking on save consumer gave me a client key and client secret.

Back in python I used these to poke around the various API endpoints of my test multisite installation of WordPress, e.g.

from wordpress import API
base_url = "http://wordpress.home.local/test"
api_path = "/wp-json/wp/v2/"
wpapi = API(
    url=base_url,
    consumer_key="thisismykey",
    consumer_secret="thisismysecret",
    api="wp-json",
    version="wp/v2",
    wp_user="phil",
    wp_pass="thisismypassword",
    oauth1a_3leg=True,
    creds_store="~/.wc-api-creds.json",
    callback="http://wordpress.home.local/test/api-test"
)
print("listing posts")
resource = "posts"
try:
    response = wpapi.get(base_url+api_path+resource)
    for post in response.json():
        # 'title' comes back as an object; the readable text is in its 'rendered' field
        print(post['id'], post['title']['rendered'])
except Exception as e:
    print("couldn't get posts")
    print(e)

wpapi uses requests methods, documented here. Other useful properties and methods of the response object are:

  • r.ok: boolean, True if the HTTP status code is < 400
  • r.content: response content, in bytes
  • r.text: response content, as text
  • r.headers: response headers
  • r.iter_lines(): response content, a line at a time
  • r.json(): response content as a JSON object

Posting to WordPress

Following the envatoTuts+ Creating, Updating, and Deleting Data article and translating to python:

from wordpress import API
base_url = "http://wordpress.home.local/test"
api_path = "/wp-json/wp/v2/"
wpapi = API(
    url=base_url,
    consumer_key="thisismykey",
    consumer_secret="thisismysecret",
    api="wp-json",
    version="wp/v2",
    wp_user="phil",
    wp_pass="thisismypassword",
    oauth1a_3leg=True,
    creds_store="~/.wc-api-creds.json",
    callback="http://wordpress.home.local/test/api-test"
)

print("creating new post")
resource = "posts"
title = "86. Glover's Leonidas."
content = """Glover's Leonidas was unduly praised at its first appearance, and more unduly ...
..."""
excerpt = """Glover's Leonidas was unduly praised at its ..."""
data = {
    "content": content,
    "title": title,
    "excerpt": excerpt,
    "status": "draft",
    "categories": [190]
}
try:
    response = wpapi.post(base_url+api_path+resource, data)
    print(response.json())
except Exception as e:
    print("couldn't post")
    print(e)

The posts resource collection allows creation and retrieval (POST and GET methods); a specific posts/(?P<id>[\d]+) resource allows update and delete (PUT, PATCH and DELETE methods).

The keys for the data dict are the same as the schema for the WordPress API method, which are also shown in the arguments listed in the JSON returned by wp-json for each endpoint under each route.
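For example, here is a minimal sketch of updating the draft post created above through its posts/<id> resource, reusing the wpapi object already configured. The id and the revised fields are illustrative, and I'm assuming wp-api-python wraps PUT in the same way it wraps GET and POST:

# Update an existing post via its posts/<id> resource.
post_id = 123    # whatever id the earlier POST returned in response.json()['id']
data = {
    "excerpt": "Glover's Leonidas was unduly praised at its first appearance...",
    "status": "draft"
}
try:
    response = wpapi.put(base_url + api_path + "posts/" + str(post_id), data)
    print("updated post", response.json()['id'])
except Exception as e:
    print("couldn't update post")
    print(e)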

Posting to Pressbooks

Pressbooks has its own extended set of API routes and endpoints: there are no ‘posts’ resources, but there are front-matter, back-matter, parts and chapters, all under the /pressbooks/v2/ path.

There is some documentation on the Pressbooks site. I’m posting articles as chapters into a Pressbooks site that already has some organised content, so I don’t have to worry about setting them up. Adapting from the above, changing the URL and credentials to those for my local test instance of Pressbooks, and changing the api-path, version, and resource name, this posts a test chapter to the content part of my book, as a “numberless” chapter-type:

from pprint import pprint
from wordpress import API

base_url = "http://books.home.local/omniana"
api_path = "/wp-json/pressbooks/v2/"
wpapi = API(
    url=base_url,
    consumer_key="thisismykey",
    consumer_secret="thisismysecret",
    api="wp-json",
    version="pressbooks/v2",
    wp_user="phil",
    wp_pass="thisismypassword",
    oauth1a_3leg=True,
    creds_store="~/.wc-api-creds3.json",
    callback="http://books.home.local/omniana/api-test"
)
print("creating new chapter")
resource = "chapters"
data = {
    "content": "test",
    "title": "test",
    "status": "publish",
    "chapter-type": 48,
    "part": 27
}
try:
    response = wpapi.post(base_url+api_path+resource, data)
    pprint(response.json())
except Exception as e:
    print("couldn't post")
    print(e)

Finding the ids for chapter-type and part needs a little detective work. You can, of course, use an API call to GET the parts and list their names and ids, in a similar way to listing the posts in the first example above; or you can just edit the part or chapter-type in the Pressbooks admin interface and inspect the url. It’s also worth noting that you need a different creds_store for each OAUTH provider you connect to.
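Here is a minimal sketch of that first option, reusing the wpapi, base_url and api_path values from the Pressbooks snippet above. I'm assuming parts come back with a title object in the same shape as ordinary posts:

# List the book's parts with their ids, for use in the 'part' field when posting chapters.
resource = "parts"
try:
    response = wpapi.get(base_url + api_path + resource)
    for part in response.json():
        print(part["id"], part["title"]["rendered"])
except Exception as e:
    print("couldn't list parts")
    print(e)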

Next Steps

As I said, reading through and parsing the transcluded page transcriptions wasn’t too hard (I put some markers in the transclusion to help). I made some changes to the content before posting it: perhaps the most interesting issue was changing the wiki style footnotes to Pressbooks style.
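I won’t reproduce the whole conversion, but as a rough illustration of the footnote step only, something like the function below does the job. It assumes MediaWiki’s usual cite_ref / cite_note markup on the wikisource side and Pressbooks’ [footnote] shortcode on the other; both assumptions would need checking against the actual content.

from bs4 import BeautifulSoup

def wiki_footnotes_to_pressbooks(html):
    """Rough sketch: replace MediaWiki-style footnote markers with [footnote] shortcodes."""
    soup = BeautifulSoup(html, "html.parser")
    # Collect the footnote texts from the references list (li id="cite_note-...").
    notes = {}
    for li in soup.select('li[id^="cite_note"]'):
        notes[li["id"]] = li.get_text(" ", strip=True)
        li.decompose()
    # Replace each in-text marker (sup.reference linking to #cite_note-...) with the shortcode.
    for sup in soup.select("sup.reference"):
        a = sup.find("a")
        if a and a.get("href", "").startswith("#cite_note"):
            note = notes.get(a["href"][1:], "")
            sup.replace_with("[footnote]" + note + "[/footnote]")
    return str(soup)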

At the time of writing, I have started posting to the live/public instance of Omniana on Pressbooks but still have to sort some formatting issues: removing line breaks, making sure that the CSS selectors are appropriate for WordPress; that shouldn’t take long to fix.

Then I want to start indexing the articles using wikidata for linked data.


Update on Gutenberg — WordPress⤴

from @ wwwd – John's World Wide Wall Display

Bookmarked Update on Gutenberg (WordPress News)
Progress on the Gutenberg project, the new content creating experience coming to WordPress, has come a long way. Since the start of the project, there have been 30 releases and 12 of those happened…

WordPress 5.0 could be as soon as August with hundreds of thousands of sites using Gutenberg before release.

Source: Update on Gutenberg — WordPress

Although GlowBlogs will not be getting this until later in the year, and only after much testing, I am still watching and occasionally testing Gutenberg.

From a selfish POV (my class uses iPads) I am still seeing some of the same issues on iPad as I mentioned before: Gutenberg on iPad. It is a lot better now, but the active text still goes behind the keyboard on occasion. I hope to do a bit more testing over the summer break.

A couple of quick blog tweaks⤴

from @ wwwd – John's World Wide Wall Display

Firstly; I’ve removed most of the post formats, leaving the 2 I actually use here. Standard goes to the front page, status to the status page. I organise kinds with the Post Kinds plugin. My Format box now looks like this:

add_action( 'after_setup_theme', 'childtheme_formats', 11 );
function childtheme_formats(){
    add_theme_support( 'post-formats', array( 'status' ) );
}

I added the above to my child theme’s functions.php.

Based on the ‘Formats in a Child Theme’ section of Post Formats in the WordPress Codex. Standard Format is formatless, so you just add the ones you want in addition.

Secondly; I’ve moved the quote and content generated from the Post Kinds plugin to below the post. This is in the Post Kinds settings so was simple. Having them above my remarks meant that the quote was going to micro.blog and twitter rather than my comment.

I hope to have a bit more time over the summer holidays to rethink and rewire the blog. Some of the decisions I’ve made were perhaps not the best.

Most of the functions that have to do with micro.blog and microblogging that live in my child theme’s functions.php are in a gist.

Downloading Media from WordPress using AppleScript⤴

from @ wwwd – John's World Wide Wall Display

I got a request from a teacher who wanted to download a year’s worth of images from a Glow Blog (for an end of year slideshow).

Although there are plugins that can do this, these are not available on Glow Blogs. I was stumped, apart from going through the site and downloading them 1 by 1. But after a wee bit of thinking I thought I’d try using the REST API via AppleScript.

The REST API will list the media in JSON format:

http://johnjohnston.info/blog/wp-json/wp/v2/media/

Look at that in Firefox for a pretty view.

JSON Helper is

an agent (or scriptable background application) which allows you to do useful things with JSON directly from AppleScript.

So I can grab the list of media from a site in JSON format and use AppleScript to download all the files.

The script I wrote is not great: you can’t download from a particular year, but a quick look at the JSON will help in working out how many files to download.

I am sure there are more efficient ways to do this and I’ve only tested on a couple of sites, but it seems to do the trick and might be useful again sometime.
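I won’t reproduce the AppleScript here; purely for illustration, here is roughly the same idea in python (not what I used): fetch the media list from wp-json a page at a time and save each source_url to a folder. The folder name and page size are arbitrary choices.

# Fetch the media list from the WordPress REST API and download each file.
import os
import requests

base = "http://johnjohnston.info/blog/wp-json/wp/v2/media/"
os.makedirs("media", exist_ok=True)

page = 1
while True:
    r = requests.get(base, params={"per_page": 100, "page": page})
    if r.status_code != 200 or not r.json():
        break                                  # past the last page of results
    for item in r.json():
        url = item["source_url"]
        filename = os.path.join("media", url.split("/")[-1])
        with open(filename, "wb") as f:
            f.write(requests.get(url).content)
        print("saved", filename)
    page += 1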

 

Digital Education and WordPress: an historical romp for #pressedconf18⤴

from

Today myself and Jen Ross took part in the PressEd Twitter conference, brilliantly organised by Pat Lockley and Natalie Lafferty. They had the genius idea of re-mixing the Public Archaeology Twitter conference format and with much heroic cajoling succeeded in pulling in over 40 presentations from … Continue reading Digital Education and WordPress: an historical romp for #pressedconf18

Word Press for Weans 2018 #pressedconf18⤴

from @ wwwd – John's World Wide Wall Display

This is a summary of my presentation for PressED – A WordPress and Education, Pedagogy and Research Conference on Twitter. I’ve pasted the text from the tweets, without the conference hashtags, below.

I am @johnjohnston a primary school teacher in Scotland. I acted as ‘Product Owner’ for Glow Blogs from 2014 to 2016 & continue the role on a part time basis.

Glow is a service for all schools & education establishments across Scotland.

Glow gives access to a number of different web services.

One of these services is Glow Blogs which runs on WordPress.

  • Glow Blogs consists of 33 multisites
  • Total number of blogs: 219,834
  • Total number of views in February 2018: 1,600,074
  • Number of blog users logging on in February 2018: 243,199

All teachers and pupils in Scotland can have access to #GlowBlogs via single sign-on through RM Unify (Shibboleth).

 

Development

#GlowBlogs is developed & maintained by the Scottish Government, with a considerable amount of work going into dev, testing, security and data protection. This differs from many edu #WordPress set ups as changes are developed relatively slowly.

Major customisations include shibboleth signon, user roles & privacy. Teachers/Pupils have slightly different permissions.
Blogs can be public, private or “Glow Only”
There is also an e-Portfolio facility added via a plugin.

 

How the Blogs are used

Glow Blogs are currently used for School Websites, Class Blogs, Project Blogs, Trips, Libraries, eportfolios, Blogs By Learners, Blogs for Learners (Resources, revision etc.), collaborations, aggregations.

 

e-Portfolios

ePortfolios are supported by a plugin and a custom taxonomy. ‘Profiles’ print or export to PDF. Pupil portfolio blogs can have sparkly unicorns or black vampire styles but the profiles that come out look clean and neat.

Pupils

Pupils can learn to be on the web, but with under-13s we have a duty of care.
Pupils can create blogs but cannot make blogs public.

A member of staff can make a pupil’s blogs public. Pupils can be members of a public blog and post publicly.

 

Examples


A collaboration https://blogs.glowscotland.org.uk/glowblogs/worldmustbecomingtoanend
Bees https://blogs.glowscotland.org.uk/nl/buzzingaboutbees/
A Blacksmith https://blogs.glowscotland.org.uk/st/scottishblacksmith
An aggregation https://blogs.glowscotland.org.uk/glowblogs/uodedushare
pupil projects: https://blogs.glowscotland.org.uk/ab/endeavour
more https://blogs.glowscotland.org.uk/glowblogs/glowingposts

Possibilities

Only scratched the surface of the potential of #WordPress. The tools are in place and Scottish teachers and learners are exploring the possibilities, but it is early days. We are tooled up for the future.

 

 

 

PressBooks and ePub as an OER format.⤴

from @ Sharing and learning

PressBooks does a reasonable job of importing ePub, so that ePub can be used as a portable format for open text books. But, of course, there are limits.

I have been really impressed with PressBooks, the extension to WordPress for authoring eBooks. Like WordPress it is available as a hosted service from PressBooks.com and, to host yourself, from PressBooks.org. I have been using the latter for a few months. It looks like a great way of authoring, hosting, using, and distributing open books. Reports like this from Steel Wagstaff about Publishing Open Textbooks at UW-Madison really show the possibilities for education that open up if you do that. There you can read about the work Steel and others have been doing around PressBooks for authoring open textbooks, with interaction (using hypothes.is and h5p), connections to their VLE (LTI), and responsible learning analytics (xAPI).

PressBooks also supports replication of content from one PressBooks install to another, which is great, but what is even greater is support for import from other content creation systems. We’re not wanting monoculture here.

Open text books are, of course, a type of Open Educational Resource, and so when thinking about PressBooks as a platform for open text books you’re also thinking about PressBooks and OER. So what aspects of text-books-as-OER does PressBooks support? What aspects should it support?

OER: DERPable, 5Rs & ALMS

Frameworks for thinking about requirements for openness in educational resources go back to the very start of the OER movement. Back in the early 2000s, when JISC was thinking about repositories and Learning Objects as ways of sharing educational resources, Charles Duncan used to talk about the need for resources to be DERPable: Discoverable, Editable, Repurposable and Portable. At about the same time in the US, David Wiley was defining Open Content in terms of four, later five Rs and ALMS. The five Rs are well known: the permissions to Retain, Reuse, Revise, Remix and Redistribute. ALMS is a less memorable, more tortured acronym, relating to technical choices that affect openness in practice. The choices relate to: Access to editing tools, the Level of expertise required to use these tools, the content being Meaningfully editable, and being Self-sourced (i.e. there not being separate source and distribution files).

Portability of ePub and editing in PressBooks

I tend to approach these terms back to front: I am interested in portable formats for disseminating resources, and systems that allow these to be edited. For eBooks / open textbooks my format of choice for portability is currently ePub, which is essentially HTML and other assets (images, stylesheets, etc.) with metadata, in a zip archive. Being HTML-based, ePub is largely self-sourced, and can be edited with suitable tools (though there may be caveats around some of the other assets such as images and diagrams). Furthermore, WordPress in general and PressBooks specifically makes editing, repurposing and distributing easy without requiring knowledge of HTML. It’s a good platform for remixing, revising, reusing, retaining content. And the key to this whole ramble of a blog post is the ‘import from ePub‘ feature.

So how does the combination of ePub and PressBooks work in practice? Can I go to OpenStax and download one of their text books as ePub? As far as I can see the best-known open textbook project doesn’t seem to make ePub available (Apple’s iBooks format is similar, but I don’t do iBooks so couldn’t download one). So I went to Siyavula and downloaded one of their CC:BY textbooks as an ePub. I chose that download for import into PressBooks and got a screen that lets me choose which parts of the ePub to import and what type of content to import it as.

List of sections of the ePub with tick box for whether to import in PressBooks, and radio button options for what type of book part to import as

After choosing which parts to import and hitting the import button at the bottom of the page, the content is there to edit and republish in PressBooks.

From here you can edit or add content (including by import from other sources), rearrange the content, and set options for publishing it. There is other work to be done. You will need to choose a decent theme to display your book with style. You will also need to make sure internal links work as your PressBooks permalink URL scheme might not match the URLs embedded in the content. How easy this is will vary depending on choices made when the book was created and your own knowledge of some of the WordPress tools that can be used to make bulk edits.

I am not really interested in distributing maths text books, so I won’t link to the end result of this specific example. I did once write a book in a book sprint with some colleagues, and that was published as an ePub. So here is an imported & republished version of Into The Wild (PressBooks edition). I didn’t do much polishing of this: it uses a stock theme, and I haven’t fixed internal links, e.g. footnotes.

Limitations

Of course there are limits to this approach. I do not expect that much (if any) of the really interesting interactive content would survive a trip through ePub. Also much of Steel’s work that I described up at the top is PressBooks platform specific. So that’s where cloning from PressBooks to PressBooks becomes useful. But ePub remains a viable way of getting textbook content into the PressBooks platform.

Also, while WordPress in general, and hence PressBooks, is a great way of distributing content, I haven’t looked much at whether metadata from the ePub is imported. On first sight none of it is, so there is work to do here in order to make the imported books discoverable. That applies to the package level metadata in ePubs, which is a separate file from the content. However, what also really interests me is the possibility of embedding education-specific schema.org metadata into the HTML content in such a way that it becomes transportable (easy, I think) and editable on import (harder).


Using wikidata for linked data WordPress indexes⤴

from @ Sharing and learning

A while back I wrote about getting data from wikidata into a WordPress custom taxonomy. Shortly thereafter Alex Stinson said some nice things about it:


and as a result that post got a little attention.

Well, I have now a working prototype plugin which is somewhat more general purpose than my first attempt.

1. Custom Taxonomy Term Metadata from Wikidata

Here’s a video showing how you can create a custom taxonomy term with just a name and the wikidata Q identifier, and the plugin will pull down relevant wikidata for that type of entity:

[similar video on YouTube]

2. Linked data index of posts

Once this taxonomy term is used to tag a post, you can view the term’s archive page, and if you have a linked data sniffer, you will see that the metadata from WikiData is embedded in machine readable form using schema.org. Here’s a screenshot of what the OpenLink structured data sniffer sees:

Or you can view the Google structured data testing tool output for that page.

Features

  • You can create terms for custom taxonomies with just a term name (which is used as the slug for the term) and the Wikidata Q number identifier. The relevant name, description and metadata is pulled down from Wikidata.
  • Alternatively you can create a new term when you tag a post and later edit the term to add the wikidata Q number and hence the metadata.
  • The metadata retrieved from Wikidata varies to be suitable for the class of item represented by the term, e.g. birth and death details for people, date and location for events.
  • Term archive pages include the metadata from wikidata as machine readable structured data using schema.org. This includes links back to the wikidata record and other authority files (e.g. ISNI and VIAF). A system harvesting the archive page for linked data could use these to find more metadata. (These onward links put the linked in linked data and the web in semantic web.)
  • The type of relationship between the term and posts tagged with it is recorded in the schema.org structured data on the term archive page. Each custom taxonomy is for a specific type of relationship (currently about and mentions, but it would be simple to add others).
  • Short codes allow each post to list the entries from a custom taxonomy that are relevant for it using a simple text widget.
  • This is a self-contained plugin. The plugin includes default term archive page templates without the need for a custom theme. The archive page is pretty basic (based on twentysixteen theme) so you would get better results if you did use it as the basis for an addition to a custom theme.

How’s it work / where is it

It’s on github. Do not use it on a production WordPress site. It’s definitely pre-alpha, and undocumented, and I make no claims for the code to be adequate or safe. It currently lacks error trapping / exception handling, and more seriously it doesn’t sanitize some things that should be sanitized. That said, if you fancy giving it a try do let me know what doesn’t work.

It’s based around two classes: one which sets up a custom taxonomy and provides some methods for outputting terms and term metadata in HTML with suitable schema.org RDFa markup; the other handles getting the wikidata via SPARQL queries and storing this data as term metadata. Getting the wikidata via SPARQL is much improved on the way it was done in the original post I mentioned above. Other files create taxonomy instances, provide some shortcode functions for displaying taxonomy terms and provide default term archive templates.
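The plugin’s queries are in PHP, but to give an idea of the kind of request involved, here is a minimal python sketch against the Wikidata query service, fetching just the English label and description for an item. The Q number and the fields selected are illustrative, not what the plugin actually asks for:

import requests

qid = "Q42"    # any Wikidata item id, e.g. one stored against a taxonomy term

# The label service fills in ?itemLabel and ?itemDescription for the chosen language.
query = """
SELECT ?itemLabel ?itemDescription WHERE {
  BIND(wd:%s AS ?item)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
""" % qid

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-term-metadata-demo/0.1"},
)
for row in r.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], "-", row.get("itemDescription", {}).get("value", ""))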

Where’s it going

It’s not finished. I’ll see to some of the deficiencies in the coding, but also I want to get some more elegant output, e.g. single indexes / archives of terms from all taxonomies, no matter what the relationship between the post and the item that the term relates to.

There’s no reason why the source of the metadata need be Wikidata. The same approach could be used with any source of metadata, or by creating the term metadata in WordPress. As such this is part of my exploration of WordPress as a semantic platform. Using taxonomies related to educational properties would be useful for any instance of WordPress being used as a repository of open educational resources, or to disseminate information about courses, or to provide metadata for PressBooks being used for open textbooks.

I also want to use it to index PressBooks such as my copy of Omniana. I think the graphs generated may be interesting ways of visualizing and processing the contents of a book for researchers.

Licenses: Wikidata is CC:0, the wikidata logo used in the featured image for this post is sourced from wikimedia and is also CC:0 but is a registered trademark of the wikimedia foundation used with permission. The plugin, as a derivative of WordPress, will be licensed as GPLv2 (the bit about NO WARRANTY is especially relevant).
