I have a new publication: “Analysing and Improving Embedded Markup of Learning Resources on the Web,” which Stefan Dietze and Davide Taibi presented at the 2017 International World Wide Web Conference in Perth, Australia. I played a minor role in the “analysing” part of this work; the heavy lifting was done by my co-authors. They analysed data from the Common Crawl to identify sites using LRMI terms in their schema.org markup. The analysis provides answers to important questions such as: who is using LRMI metadata, and which terms are they using? How many resources have been marked up with LRMI metadata? Are the numbers of users growing? What mistakes are being made in implementing LRMI?
We also see how, once a term is in schema.org, it is interpreted in ways that may not have been anticipated by those who created it, with any implicit assumptions held within a community of practice being ignored. Thus terms that have a specific meaning within the learning, education and training field are construed in their more generic sense. The result is that some LRMI terms are used for resources that we in LRMI did not have in mind when creating them. Consequently the presence of LRMI metadata on a web resource may not be a good indicator that the resource is intended for education (this is true of some properties more than others). If you see this as a problem, one way to avoid it when making additions to schema.org is to include the domain to which a term applies in the term name.
A second observation that seems important to me is the strong inverse relationship between sophisticated data structures and amount of usage. Yes, I’m talking about the AlignmentObject: potentially very expressive, but either it solves a problem no one has (which I don’t think is the case) or it is so complex that few people understand it well enough to use it. In general, properties with simple text/literal values get much more use than entity-valued properties.
The official reference is: Stefan Dietze, Davide Taibi, Ran Yu, Phil Barker, and Mathieu d’Aquin. 2017. Analysing and Improving Embedded Markup of Learning Resources on the Web. In Proceedings of the 26th International Conference on World Wide Web Companion (WWW ’17 Companion). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 283-292. DOI: 10.1145/3041021.3054160
Web-scale reuse and interoperability of learning resources have been major concerns for the technology-enhanced learning community. While work in this area traditionally focused on learning resource metadata, provided through learning resource repositories, the recent emergence of structured entity markup on the Web through standards such as RDFa and Microdata and initiatives such as schema.org has provided new forms of entity-centric knowledge, which have so far been under-investigated and hardly exploited. The Learning Resource Metadata Initiative (LRMI) provides a vocabulary for annotating learning resources through schema.org terms. Although recent studies have shown markup adoption by approximately 30% of all Web pages, understanding of the scope, distribution and quality of learning resources markup is limited. We provide the first public corpus of LRMI extracted from a representative Web crawl, together with an analysis of LRMI adoption on the Web, with the goal of informing data consumers as well as future vocabulary refinements through a thorough understanding of the use as well as misuse of LRMI vocabulary terms. While errors and schema misuse are frequent, we also discuss a set of simple heuristics which significantly improve the accuracy of markup, a prerequisite for reusing learning resource metadata sourced from markup.
I was asked to put forward my thoughts on how the use of technology to enhance teaching and learning should be supported where I work. I work in a UK University that has campuses overseas, and which is organised into Schools (Computer Science is in a School with Maths, forming one of the smaller Schools). This was my first-round brain dump on the matter. It looks like something might come of it, so I’m posting it here asking for comments.
Does any of this look wrong?
Do you, or have you, worked in a similar or dissimilar unit? If so, do you have any observations on how well that arrangement worked?
What would be the details that need more careful thought?
Get in touch directly by email or use the form below (if the latter, let me know if you don’t want your reply published).
Why support Technology Enhanced Learning (TEL)?
Why would you not? This isn’t about learning technology for its own sake, it’s about enhancing learning and teaching with technology. Unless you deny that technology can in any way enhance teaching and learning, the remaining questions centre on how technology can help and how much that help is worth. Advances in technology, and in our understanding of how to use it in teaching and learning, create a “zone of possibility,” whose extent, and the success with which it is exploited, depend on the intersection of teachers’ understanding of the technologies on offer and the pedagogies suitable for their subject (Dirkin & Mishra, 2010 [paywalled]).
Current examples of potential enhancement which is largely unsupported (or supported only by ad hoc provision) include:
Online exams in computer science
Formative assessment and other formative exercises across the school
Providing resources for students learning off-campus
Supporting the delivery of course material when students won’t attend lectures
Providing course information to students
Location of support: in School, by campus, or central services?
There are clearly some services that apply institution-wide (the VLE), or need to be supported at each campus (computer labs); however, there are dangers to centralising too much. Centralisation creates a division between the support and the people who need it, a division which is reinforced by the separation of funding and management lines for the service and the academic provision. This division makes it difficult for those who understand the technology and those who understand the pedagogy of the subject being taught to engage around the problems to be solved. Instead they interact but stay within the remits laid down by their management structures.
There should of course be strong links between the support in my School and others, central support and campus specific support, but an arrangement where these links are prioritised over the link between support for TEL in maths and computing and the provision of teaching and learning in maths and computer science seems wrong.
This is something of a brain dump based on current activity, in no particular order.
Seminar series and other regular meetings to gather and spread new ideas.
Developing resources for off-campus learning (currently in CS we need to provide support materials, based on existing courses, for a specific programme); these and similar materials could also be used to support students on conventional courses who don’t attend lectures.
Managing tools and systems for formative assessment and other formative experiences, e.g. mathematical and programming practice.
Developing resources and systems for working with partner institutions who deliver courses we accredit, some of which may be applicable to mainstream teaching.
Student course information website: maintenance and updating information, liaison with central student portal.
Online exams, advice on question design and managing workflow from question authoring to test delivery.
Evaluation of innovative teaching (where innovative is defined as something for which we are unsure enough of the benefits for it to be worth evaluating).[*]
Maintain links with development organisations in Learning Technology, e.g. ALT and Jisc, and with scholarship in areas such as digital pedagogy and open education which underpin technology enhanced learning.
Liaise with central & campus services, e.g. VLE management group
Advise staff in school on use of central facilities e.g. BlackBoard
Liaise with other schools. There is potential to provide some of these services to other schools (or vice versa), assuming financial recompense can be arranged.
[*Note: this raises the question of whether the support should be limited to technology to enhance learning, or should address other innovations too.]
This needs to be provided by a core of people with substantial knowledge of learning technology, who might also contribute to other activities in the school. We have a group of three or four people who can do this. It is a little biased to Computer Science and to one campus so there should be thought given to how to bring in other subjects and locations.
We would involve project students and interns, provided this was done in such a way as to contribute sustainable enhancement of a service or creation of new resources. For example, we would use tools such as git so that each student left work that could be picked up by others. As well as supervising project students within the group, we could co-supervise with academic staff who had their own ideas for learning-related student projects. This would help keep close contact with day-to-day teaching.
Funding and management
This support needs an allocated budget and well controlled project management. Funding for core staff should be long term on a par with commitment to teaching within the School. Management and reporting should be through the Director of Learning and Teaching and the Learning and Teaching Committee with information and discussion at the subject Boards of Studies as appropriate.
Dirkin, K., & Mishra, P. (2010). Values, Beliefs, and Perspectives: Teaching Online within the Zone of Possibility Created by Technology. Retrieved from https://www.learntechlib.org/p/33974/
Where’s my flying car? I was promised one in countless SF films, from Metropolis through to The Fifth Element. Well, they exist. Thirty seconds on the search engine of your choice will find you a dozen or so working prototypes (here’s a YouTube video with five).
They have existed for some time. Come to think of it, the driving around on the road bit isn’t really the point. I mean, why would you drive when you could fly? I guess a small helicopter and somewhere to park would do.
So it’s not lack of technology that’s stopping me from flying to work. What’s more of an issue (apart from cost and environmental damage) is that flying is difficult. The slightest problem like an engine stall or bump with another vehicle tends to be fatal. So the reason I don’t fly to work is largely down to me not having learnt how to fly.
The zone of possibility
In 2010 Kathryn Dirkin studied how three professors taught using the same online learning environment, and found that their approaches were very different. Not something that will surprise many people, but the paper (which unfortunately is still behind a paywall) is worth a read for the details of the analysis. What I liked from her conclusions was that how someone teaches online depends on the intersection of their knowledge of the content, their beliefs about how it should be taught, and their understanding of the technology. She calls this intersection the zone of possibility. As with the flying car, the online learning experience we want may already be technologically possible; we just need to learn how to fly it (and consider the cost and effect on the environment).
I have been thinking about Dirkin’s zone of possibility over the last few weeks. How can it be increased? Should it be increased? On the latter, let’s just say that if technology can enhance education, then yes it should (but let’s also be mindful about the costs and impact on the environment).
So how, as a learning technologist, to increase this intersection of content knowledge, pedagogy and understanding of technology? Teachers’ content knowledge I guess is a given; there is nothing a learning technologist can do to change that. Also, I have come to the conclusion that pedagogy is off limits. No technology-as-a-Trojan-horse for improving pedagogy, please, that just doesn’t work. It’s not that pedagogic approaches can’t or don’t need to be improved, but conflating that with technology seems counterproductive. So that left me thinking about teachers’ (and learners’) understanding of technology. Certainly, the other week when I was playing with audio & video codecs and packaging formats that would work with HTML5 (keep repeating: H264 and AAC in MPEG-4) I was aware of this. There seem to be three viable approaches: increasing digital literacy, providing tools that simplify the technology, and using learning technologists as intermediaries between teachers and technology. I leave it at that because it is not a choice of which, but of how much of each can be applied.
Does technology or pedagogy lead?
In terms of defining the “zone of possibility” I think it is pretty clear that technology leads. Content knowledge and pedagogy change slowly compared to technology, and I think that difference in rate of change is reflected in most teachers’ understanding of those three factors. I would go as far as to say that it is counterfactual to suggest that our use of technology in HE has been led by anything other than technology. Innovation in educational technology usually involves exploration of new possibilities opened up by technological advances, not other factors. But having acknowledged this, it should also be clear that, having explored the possibilities, a sensible choice of what to use when teaching will be based on pedagogy (as well as cost and the effect on the environment).
We are setting up a new honours degree programme which will involve the use of online resources for work-based blended learning. I was asked to demonstrate some of the resources and approaches that might be useful. This is one of the quick examples that I was able to knock up(*), with some reflections on how Open Education helped me. By the way, I especially like the last bit about “open educational practice”. So if the rest bores you, just skip to the end.
(*Disclaimer: this really is a quickly-made example, it’s in no way representative of the depth of content we will aim for in the resources we use.)
Making the resource
I had decided that I wanted to show some resources that would be useful for our first year, first semester Praxis course. This course aims to introduce students to some of the skills they will need to study computer science, ranging from appreciating the range of topics they will study to being able to use our Linux systems, from applying study skills to understanding some requirements of academic writing. I was thinking that much of this would be fairly generic and must be covered by a hundred and one existing resources when I saw this tweet:
That seemed to be in roughly the right area, so I took a look at the University of Nottingham’s HELM Open site and found an Introduction to Referencing. Bingo. The content seemed appropriate, but I wasn’t keen on a couple of things. First, I fear that breaking the video up into 20-second chunks would mean students spending more time ‘interacting’ with the Next-> button than thinking about the content. Second, it seemed a little too delivery-oriented; I would like the student to be a little more actively engaged.
I noticed there is a little download arrow on each page which let me download the video. So I downloaded them all and used OpenShot to string them together into one file. I exported this and used the H5P WordPress plugin to show how it could be combined with some interactive elements and hosted on a WordPress site with the hypothes.is annotation plugin, to get this:
How openness helps
So that was easy enough: a demo of the type of resource we might produce, created in less than an afternoon. How did “openness” help make it easy?
Open licensing and the 5Rs
David Wiley’s famous 5Rs define open licences as those that let you Reuse, Revise, Remix, Retain and Redistribute learning resources. The original resource was licensed CC BY-NC and so permitted all of these actions. How did they help?
Reuse: I couldn’t have produced the video from scratch without learning some new skills, having a sizeable budget, and having much more time.
Revise: I wasn’t happy with the short video / many page turns approach, but was able to revise the video to make it play all the way through in one go.
Remix: the video was then combined with some formative exercises, and a discussion facility was added.
Retain: in order for us to rely on these resources when teaching, we need to be sure that each resource remains available. That means taking responsibility for keeping it available. Hence we’ll be hosting it on a site we control.
Redistribute: we will make our version available to others. This isn’t just about “paying it forward”; it’s about the benefits that working in an open network brings, see the discussion about nebulous open education below.
One point to make here: the licence has a Non-Commercial restriction. I understand why some people favour this, but imagine if I were an independent consultant brought in to do this work, and charged for it. Would I then be able to use the HELM material? The recent case about a commercial company charging to duplicate CC-licensed material for schools, which a US judge ruled to be within the terms of the licence, might apply, but photocopying seems different to remixing. To my mind, the NC clause just complicates things too much.
Open standards, and open source
I hadn’t heard much about David Wiley’s ALMS framework for technical choices to facilitate openness (same page as before, just scroll a bit further) but it deals directly with issues I am very familiar with. Anyone who thinks about it will realise that a copy-protected PDF is not open no matter what the licence on it says. The ALMS framework breaks the reasoning for this down into four aspects: Access to editing tools, Level of expertise required, Meaningfully editable, Self-sourced. Hmmm. Maybe sometimes it’s clearer not to force category names into acronyms? Anyway, here’s how these helped.
Self-sourced, meaning the distribution format is the source code. This is especially relevant because the reason HELM sent the tweet that alerted me to their materials was that they are re-authoring material from Flash to HTML5. Aside from modern browser support, one big advantage of their doing this is that instead of an impenetrable SWF package I had access to the assets that made up the resource, notably the video clips.
Meaningfully editable: that access to the assets meant that I could edit the content, stringing the videos together, copying and pasting text from the transcript to use as questions.
Level of expertise required: I have found all the tools and services used (OpenShot, H5P, hypothes.is, WordPress) relatively easy to use; however, some experience is required, for example to be familiar with the various plugins available for WordPress and how to install them. Video editing in particular takes some expertise. It’s probably something that most people don’t do very often (I don’t). Maybe the general level of digital literacy we should now aim for is one where people are familiar with photo and video editing tools as well as text-oriented word processing and presentation tools. However, I’m inclined to think that the details of using the H264 video codec and AAC audio codec, packaged in an MPEG-4 Part 14 container (compare and contrast with VP9 and Ogg Vorbis packaged in a profile of Matroska) should remain hidden from most people. Fortunately, standardisation means that the number of options is smaller than it would otherwise be, and it was possible to find many pages on the web with guidance on the browser compatibility of these options (MP4 and WebM respectively).
Access to editing tools, where access starts with low cost. All the tools used were free, most were open source, and all ran on Ubuntu (most can also run on other platforms).
It’s notable that all these ultimately involve open source software and open standards, and work especially well when the “open” of open standards includes free to implement. That complicated bit around the MP4 & WebM video formats comes about because of the royalty requirements on those implementing MP4.
Open educational practice: nebulous but important.
Open education includes, but is more than, open educational resources, open content, open licensing and open standards. It also means talking about what we do. It means that I found out about HELM because they were openly tweeting about their resources; I think I learnt about nearly all the tools discussed here in a similar manner. Yes, “pimping your stuff” is an important part of being open. Open education also means asking questions and writing how-to articles that let non-experts like me deal with complexities like video encoding.
There’s a deeper open education at play here as well. See that resource from HELM that I started with? It started life in the RLO CETL, i.e. in a publicly funded initiative, now long gone. And the reason I and others in the UKHE know about Creative Commons and David Wiley’s analysis of open content, that largely comes down to #UKOER, again a publicly funded initiative. UKOER and the stuff about open standards and open source was supported by Jisc, publicly funded. Alumni from these initiatives are to be found all over UKHE, through which these initiatives continue to be crucially important in building our capability and capacity to support learners in new and innovative settings.
In a WordPress plugin I have custom post types for different types of publication: books, chapters, papers, presentations, reports. I want one single archive of all of these publications.
I know that the theme template hierarchy allows templates with the pattern archive-$posttype.php, so I tried setting the slug for all the custom post types to ‘presentations’. WordPress doesn’t like that. So what I did was set the slug for one of the publication custom post types to ‘presentations’, which gives me a /presentations/ archive for that custom post type(1). I then edited the archive.php file to use different template parts for the custom post types(2):
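As a minimal sketch of that kind of archive.php dispatch (the template part slugs here are illustrative placeholders, not my exact files):

```php
<?php
// archive.php — pick a template part based on the queried post type.
// 'template-parts/content-book', 'content-paper' etc. are hypothetical
// slugs; substitute whatever your theme actually provides.
get_header();

if ( have_posts() ) {
    while ( have_posts() ) {
        the_post();
        // get_post_type() returns e.g. 'book', 'chapter', 'paper'…
        get_template_part( 'template-parts/content', get_post_type() );
    }
    the_posts_navigation();
}

get_footer();
```

get_template_part() falls back to the generic part (here template-parts/content.php) if no type-specific file exists, which keeps the template safe for any post type.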
Some time back we started looking for an online exam system for some of our computer science exams. Part of the process was to list a set of “acceptance criteria,” i.e. conditions that any system we looked at had to meet. One of my aims in writing these was to avoid chasing after some mythical ‘perfect’ system, and focus on finding one that would meet our needs. Although the headings below differ, as a system for high stakes assessment the overarching requirements were security, reliability, scalability, which are reflected below.
Having these criteria was useful in reaching a consensus decision when there was no ‘perfect’ system.
Only authorised staff (+ external examiners) to have access before exam time.
Only authorised staff and students to have access during exams.
Only authorised staff (+ external examiners) to have access to results.
Authorised staff and external examiners to have only the level of access they need, no more.
Software must be kept up-to-date and patched in a timely fashion
Must track and report all access attempts
Must not rely on security by obscurity.
Secure access must not depend on location.
Provide suitable access to internal checkers and external examiners.
Logging of changes to questions and exams would be desirable.
It must be possible to set a point after which exams cannot be changed (e.g. once they are passed by checkers)
Must be able to check marking (either exam setter or other individual), i.e. provide clear reports on how each question was answered by each candidate.
Must be possible to adjust marking/remark if an error is found after the exam (e.g. if a mistake was made in setting the correct option for mcq, or if question was found to be ambiguous or too hard)
Should be possible to reproduce content of previous CS electronic exams in a similar or better format [this one turned out not to be important]
Must be able to decide how many points to assign to each question
Desirable to have provision for alternate answers or insignificant difference in answers (e.g. y=a*b, y=b*a)
Desirable to reproduce style of standard HW CS exam papers, i.e. four potentially multipart questions, with student able to choose which 3 to answer
Desirable to be possible to provide access to past papers on formative basis
Desirable to support formative assessment with feedback to students
Must be able to remove access to past papers if necessary.
Students should be able to practice with same (or very similar) system prior to exam
Desirable to be able to open up access to a controlled list of websites and tools (c.f. open book exams)
Should be able to use mathematical symbols in questions and answers, including student entered text answers.
Desirable to have programmatic transfer of staff information to assessment system (i.e. to know who has what role for each exam)
Must be able to transfer student information from student information system to assessment system (who sits which exam and at which campus).
Desirable to be able to transfer study requirements from student information system to assessment system (e.g. who gets extra time in exams)
Programmatic transfer of student results from the assessment system to student record systems or VLE (one of these is required)
Desirable to support import/export of tests via QTI.
Integration with VLE for access to past papers, mock exams, formative assessment in general (e.g. IMS LTI)
Hardware & software requirements for test taking must be compatible with PCs we have (at all campuses and distance learning partners).
Set up requirements for labs in which assessments are taken must be within capabilities of available technical staff at relevant centre (at all campuses and distance learning partners).
Lab infrastructure* and servers must be able to operate under load of full class logging in simultaneously (* at all campuses and distance learning partners)
Must have adequate paper back up at all stages, at all locations
Must be provision for study support requirements (e.g. extra time for some students)
Need to know whether there is secure API access to responses.
API documentation must be open and response formats open and flexible.
Require support helpline / forum / community.
Timing of release of encryption key
Costs. Clarify how many students would be involved, what this would cost.
When developing WordPress for use as a CMS, one approach I have used is to create a custom post type for each type of resource and custom metadata boxes for the relevant properties of those types. I’ve used that approach when exploring the possibility of using WordPress as a semantic web platform to edit schema.org metadata, when building course information pages for students, and am doing so again in updating some work I did on WordPress as a lightweight repository. Registering a custom post type is pretty straightforward (follow the example in the Codex page); I found handling custom metadata boxes a little more difficult. Here are three resources that helped.
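For the record, registering one of those publication post types follows the Codex pattern, roughly like this (the slug, labels and supports here are placeholders, not my actual code):

```php
<?php
// Sketch: register a 'book' custom post type, following the Codex example.
add_action( 'init', function () {
    register_post_type( 'book', array(
        'labels'      => array(
            'name'          => 'Books',
            'singular_name' => 'Book',
        ),
        'public'      => true,
        'has_archive' => true,                     // enables an archive page
        'rewrite'     => array( 'slug' => 'books' ), // URL slug for the archive
        'supports'    => array( 'title', 'editor', 'custom-fields' ),
    ) );
} );
```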
Doing it long hand
It’s a few years old, but I found Justin Tadlock’s Smashing Magazine article How To Create Custom Post Meta Boxes In WordPress really useful as a clear and informative tutorial. It was invaluable in understanding how metaboxes work. If I had only wanted one or two simple text custom metadata fields then coding them myself would be an option, but I found a couple of problems. Firstly, I was repeating the same code too many times. Secondly, when I thought about wanting to store dates or URLs or links to other posts, with suitable user interface elements and data validation, I could see the amount of code needed was only going to increase. So I looked to see whether any better programmers than I had created anything I could use.
Using a helper plugin
I found two plugins that promised to provide a framework to simplify the creation of metaboxes. These are not plugins that provide anything the end user can see directly; rather they provide functions that can be used in theme and plugin development. They both reduce the work of creating a metabox down to creating an array with the properties you want the metabox to have. They both introduce a dependency on code I cannot maintain, which is something I am always cautious about when using third-party plugins, but it’s much more viable than the alternative of creating such code from scratch and maintaining it myself.
CMB2 is “a metabox, custom fields, and forms library for WordPress that will blow your mind.” It is free and open source, with development hosted on GitHub. It seems quite mature (version 1.0 was in Nov 2013), with a large installation base and a decent amount of current activity on GitHub.
Meta Box is “a powerful, professional developer toolkit to create custom meta boxes and custom fields for WordPress.” It too is free and released under a GPL2 licence, but there are paid-for extensions (also GPL2 licensed) and I don’t see any open source development (though I may not have looked in the right place). Meta Box has been around for a couple of years, is regularly updated and has a very large user base. The paid-for extensions give me some hope that the developers have a sustainable business model, but also a worry that maybe ‘free’ doesn’t include the one function that at some time I will really need. Well, developers cannot live on magic beans, so I wouldn’t mind paying.
In the end both plugins worked well, but Meta Box allows the creation of custom fields for a link from one post to another, which I didn’t see in CMB2. That’s what I need for a metadata field to say that the author of the book described in one post is a person described in another.
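Going from the Meta Box documentation, that post-to-post link is a field of type ‘post’; a sketch of the sort of declaration involved (the IDs, titles and post type names are my placeholders) looks like:

```php
<?php
// Sketch: a Meta Box declaration linking a 'book' post to a 'person' post.
// Field ids, titles and post types are placeholders; check the Meta Box
// docs for the current options.
add_filter( 'rwmb_meta_boxes', function ( $meta_boxes ) {
    $meta_boxes[] = array(
        'title'      => 'Book metadata',
        'post_types' => 'book',          // where the metabox appears
        'fields'     => array(
            array(
                'id'        => 'book_author',
                'name'      => 'Author',
                'type'      => 'post',   // link to another post…
                'post_type' => 'person', // …of this type
            ),
        ),
    );
    return $meta_boxes;
} );
```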
I do just enough theme and plugin development on WordPress to need an alternative to using a live WordPress site for development and testing, but at the same time I want to be testing on a site as similar to the live site as possible. So I set up clones of WordPress sites, either on my local machine or on a server, for development and testing. (Normally I have clones on the localhost server of a couple of machines I use for development, and another clone on a web-accessible testing or staging server for other people to look at.) I don’t do this very often, but each time I do I spend as much time trying to remember what I need to do as it actually takes to do it. So here, as much as an aide-memoire for myself as anything else, I’ve gathered it all in one place. What I do is largely based on the Moving WordPress information in the Codex, but there are a couple of things that doesn’t cover and a couple of things I find it easier to do differently.
Assuming that the prerequisites for WordPress are in place (i.e. MySQL, a web server, PHP), there are three stages to creating a clone: A. copy the WordPress files to the development site; B. clone the database; C. fix the links between WordPress and the database for the new site. A and B are basically creating backup copies of your site, but you will want to make sure that whatever routine backups you use are up to date and ready to restore in case something goes wrong. Also, this assumes that you are not trying to clone just one site from a WordPress Multisite installation.
Copying the WordPress files
Simply copy all the files from the folder you have WordPress installed in, and all the sub-folders to where you want the new site to be. This will mean that all the themes, plugins and uploaded media will be the same on both sites. Depending on whether the development site is on the same server as the main site I do this either with file manager or by making a compressed archive and ftp. Make sure the web server can read the files on the dev site (and write to the relevant folders if that is how you upload media, plugins and themes).
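The archive-and-unpack route can be sketched like this, with throwaway paths under /tmp standing in for the real web roots:

```shell
# Demo of the copy step using throwaway paths (real paths will be web roots).
SRC=/tmp/wp-live
DEST=/tmp/wp-dev

# Stand-in for a live site — on a real server this tree already exists.
mkdir -p "$SRC/wp-content/uploads"
echo "<?php // config" > "$SRC/wp-config.php"

# Archive the whole tree, then unpack it at the new location.
tar -C "$(dirname "$SRC")" -czf /tmp/wp-clone.tar.gz "$(basename "$SRC")"
mkdir -p "$DEST"
tar -C "$DEST" --strip-components=1 -xzf /tmp/wp-clone.tar.gz
```

The --strip-components=1 drops the top-level folder name from the archive so the files land directly in the destination, whatever it is called.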
Cloning the database
First I create a new, blank database for the new site, either from the command line or using something like the MySQL Database Wizard which my hosting provider has on cPanel. I create a new user with full access to that database; the username and password for this user will be needed to configure WordPress with access to this database. If you have complete control over the database name and username then use the same database name, username and password as in the wp-config.php file of the site you are cloning. Otherwise you can change these later.
Second, I use phpMyAdmin to export the database from the original site and import it into the new, empty database on the clone.
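If you prefer the command line to phpMyAdmin, mysqldump does the same job (database names and usernames below are placeholders):

```shell
# On the original site: export the database to a file
mysqldump -u wp_user -p wp_live > wp_live.sql

# On the clone: import that file into the empty database
mysql -u wp_dev_user -p wp_dev < wp_live.sql
```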
Fix all the bits that break
All that remains is to reconnect the PHP files to the database and fix a few other things that break. This is where it gets fiddly. Also, from now on be really careful about which site you are working on: they look the same, and you really don’t want to set up your public site as a development server. Make all these changes on the new development site.
In wp-config.php (it’s at the top of the WordPress folder hierarchy) find the database settings (the DB_NAME, DB_USER, DB_PASSWORD and DB_HOST lines) and change the values to those for your new development server and database.
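With the placeholder names used in this post, those lines would look something like this (values are placeholders, not real credentials):

```php
/* Database settings in wp-config.php */
define( 'DB_NAME', 'wp_dev' );
define( 'DB_USER', 'wp_dev_user' );
define( 'DB_PASSWORD', 'choose-a-password' );
define( 'DB_HOST', 'localhost' );
```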
Next, the URLs that WordPress stores for the site in its database need updating. From the MySQL command line (or via phpMyAdmin) you can check the current values and change them to the new site’s URL; the database name and URL below are placeholders for whatever you created for the clone:

mysql -u root -p

USE wp_dev;
SELECT * FROM wp_options WHERE option_name = 'home';
UPDATE wp_options SET option_value='http://example.org/blog' WHERE option_name = 'home';
SELECT * FROM wp_options WHERE option_name = 'siteurl';
UPDATE wp_options SET option_value='http://example.org/blog' WHERE option_name = 'siteurl';
You should now have access to the new cloned site, though some things will still be misbehaving.
You will probably still have the old site’s URL in various posts and GUIDs. I use the Better Search Replace plugin to fix these.
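If you have WP-CLI available on the server, its search-replace command is an alternative to the plugin; unlike a naive SQL UPDATE it also handles serialized data correctly (both URLs below are placeholders):

```shell
# Preview the replacements first
wp search-replace 'http://example.com/blog' 'http://dev.example.org/blog' --dry-run

# Run it for real once the preview looks right
wp search-replace 'http://example.com/blog' 'http://dev.example.org/blog'
```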
If you do any fancy redirects with .htaccess, make sure that these are written in such a way that they work for the new URL.
If you are using Jetpack you will need to use it in safe mode if the development server is connected to the web or development mode if running on localhost. (This is a bit of a pain if you want to test Jetpack settings.)
On a development site you’ll probably want to add this to wp-config.php:
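The snippet isn’t reproduced here, but the usual candidates on a development copy are WordPress’s debugging constants, something like:

```php
/* Debug settings for a development copy of wp-config.php */
define( 'WP_DEBUG', true );          // turn on debug mode
define( 'WP_DEBUG_LOG', true );      // log notices to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // keep them out of rendered pages
```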
If you are running a development or testing server on a web-accessible site you probably want to restrict who has access to it. I use the My Private Site plugin so that only site admins have access.
Keeping in sync
While it’s not entirely necessary that a development or testing site be kept completely in sync with the main one, it is worth keeping them close so that you don’t get unexpected issues on the main site. You can manually update the plugins and themes, and use the WordPress export / import plugins to transfer new content from the live site to the clone. Every now and again you might want to re-clone the site afresh. Something I find useful for development and testing of new plugins and themes is to have the plugin or theme directory that I am developing in set up as a git repository linked to GitHub, and to keep the files in sync with git push and git pull.
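That git workflow, sketched out for a hypothetical plugin called my-plugin with a matching GitHub repository (both names are placeholders):

```shell
# One-off setup: turn the plugin directory into a git repository
cd wp-content/plugins/my-plugin
git init
git remote add origin git@github.com:example/my-plugin.git

# On whichever clone you made changes:
git add -A
git commit -m "Describe the change"
git push origin main

# On the other clone(s), pull the changes in:
git pull origin main
```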
I think that is it. If I have forgotten anything or if you have tips on making any of this easier please leave a comment.
These are three resources that look like they might be useful in understanding and avoiding gender bias. They caught my attention because I cover some cognitive biases in the Critical Thinking course I teach. I also cover the advantages of having diverse teams working on problems (the latter based on discussion of How Diversity Makes Us Smarter in SciAm). Finally, like any responsible teacher in information systems & computer science I am keen to see more women in my classes.
Iris Bohnet on the BBC Radio 4 Today programme, 3 January. If you have access via a UK education institution with an ERA licence you can listen to the clip via the BUFVC Box of Broadcasts. Otherwise, here’s a quick summary. Bohnet stresses that much gender bias is unconscious: individuals may not be aware that they act in biased ways. Awareness of the issue and diversity training are not enough on their own to ensure fairness. She stresses that organisational practices and procedures are the easiest effective way to remove bias. One example she quotes is that to recruit more male teachers, job adverts should not “use adjectives that in our minds stereotypically are associated with women such as compassionate, warm, supportive, caring.” This is not because teachers should not have these attributes, or because men cannot have them, but because research shows[*] that these attributes are associated with women and may subconsciously deter male applicants.
[*I don’t like my critical thinking students saying broad and vague things like ‘research shows that…’. It’s OK for a 3-minute slot on a breakfast news show, but I’ll have to do better. I hope the details are somewhere in Iris Bohnet, (2016). What Works: Gender Equality by Design.]
This raised a couple of questions in my mind. If gender bias is unconscious, how do you know you do it? And, what can you do about it? That reminded me of two other things I had seen on bias over the last year.
A gender bias calculator for recommendation letters based on the words that might be associated with stereotypically male or female attributes. I came across this via Athene Donald’s blog post Do You Want to be Described as Hard Working? which describes the issue of subconscious bias in letters of reference. I guess this is the flip side of the job advert example given by Bohnet. There is lots of other useful and actionable advice in that blog post, so if you haven’t read it yet do so now.
There are contributions by people I know and look up to in the OER world, and some equally good chapters by folk I had not come across before. It seems to live up to its billing of offering an international perspective by not being US-centric (though it would be nice to see more from S America, Asia and Africa), and it provides a wide view of Open Education, not limited to Open Education Resources. There is a foreword by David Wiley, a chapter on a human rights theory for open education by the editors, one on whether emancipation through open education is theory or rhetoric by Andy Lane. Other people from the Open University’s Open Education team (Martin Weller, Beatriz de los Arcos, Rob Farrow, Rebecca Pitt and Patrick McAndrew) have written about identifying categories of OER users. There are chapters on aspects such as open science, open text books, open assessment and credentials for open learning; and several case studies and reflections on open education in practice.
The chapter that Lorna and I wrote is an overview drawing on our experiences through the UKOER programme and our work on LRMI looking at managing the dissemination and discovery of open education resources. Here’s the abstract in full, and a link to the final submitted version of our chapter.
This chapter addresses issues around the discovery and use of Open Educational Resources (OER) by presenting a state of the art overview of technology strategies for the description and dissemination of content as OER. These technology strategies include institutional repositories and websites, subject specific repositories, sites for sharing specific types of content (such as video, images, ebooks) and general global repositories. There are also services that aggregate content from a range of collections, these may specialize by subject, region or resource type. A number of examples of these services are analyzed in terms of their scope, how they present resources, the technologies they use and how they promote and support a community of users. The variety of strategies for resource description taken by these platforms is also discussed. These range from formal machine-readable metadata to human readable text. It is argued that resource description should not be seen as a purely technical activity. Library and information professionals have much to contribute, however academics could also make a valuable contribution to open educational resource (OER) description if the established good practice of identifying the provenance and aims of scholarly works is applied to learning resources. The current rate of change among repositories is quite startling with several repositories and applications having either shut down or having changed radically in the year or so that the work on which this contribution is based took. With this in mind, the chapter concludes with a few words on sustainability.