Author Archives: Phil Barker

Cloning WordPress sites for development⤴

from @ Sharing and learning

I do just enough theme and plugin development on WordPress to need an alternative to using a live WordPress site for development and testing, but at the same time I want to be testing on a site as similar to the live site as possible. So I set up clones of WordPress sites, either on my local machine or on a server, for development and testing. (Normally I have clones on the localhost server of a couple of machines I use for development, and another clone on a web-accessible testing or staging server for other people to look at.) I don’t do this very often, but each time I do I spend as much time trying to remember what I need to do as it actually takes to do it. So here, as much as an aide-memoire for myself as anything else, I’ve gathered it all in one place. What I do is largely based on the Moving WordPress information in the codex, but there are a couple of things that doesn’t cover and a couple of things I find it easier to do differently.

Assuming that the pre-requisites for WordPress are in place (i.e. MySQL, a web server, and PHP), there are three stages to creating a clone: A, copy the WordPress files to the development site; B, clone the database; C, fix the links between WordPress and the database for the new site. A and B are basically creating backup copies of your site, but you will want to make sure that whatever routine backups you use are up to date and ready to restore in case something goes wrong. Also, this assumes that you do not want to clone just one site on a WordPress Multisite installation.

Copying the WordPress files

Simply copy all the files from the folder you have WordPress installed in, and all its sub-folders, to where you want the new site to be. This will mean that all the themes, plugins and uploaded media will be the same on both sites. Depending on whether the development site is on the same server as the main site, I do this either with a file manager or by making a compressed archive and transferring it by FTP. Make sure the web server can read the files on the dev site (and write to the relevant folders if that is how you upload media, plugins and themes).
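A minimal sketch of the archive-and-unpack route, for the case where both sites are on the same machine. All paths here are examples standing in for your real folders (the demo creates a stand-in source tree so the commands can be tried safely):

```shell
SRC=$(mktemp -d)    # stands in for the live site's folder, e.g. /var/www/live
DEST=$(mktemp -d)   # stands in for the new site's folder, e.g. /var/www/dev

# demo only: create a stand-in WordPress tree to copy
mkdir -p "$SRC/wp-content/uploads" "$SRC/wp-content/plugins"
echo "<?php // configuration" > "$SRC/wp-config.php"

# make a compressed archive of everything, sub-folders included,
# then unpack it at the new location (transfer by FTP if on another server)
tar -czf /tmp/wp-files.tar.gz -C "$SRC" .
tar -xzf /tmp/wp-files.tar.gz -C "$DEST"

# make sure the web server can read (and where needed write) the files
chmod -R u+rwX,go+rX "$DEST"
ls "$DEST"
```

The `-C` flag keeps the archive paths relative, so it unpacks cleanly wherever the new site lives.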

Cloning the database

First I create a new, blank database for the new site, either from the command line or using something like the MySQL Database Wizard which my hosting provider has on cPanel. I create a new user with full access to that database; the username and password for this user will be needed to configure WordPress with access to this database. If you have complete control over the database name and username then use the same database name, username and password as in the wp-config.php file of the site you are cloning. Otherwise you can change these later.
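For the command-line route, the SQL looks something like this (the database name, username and password are placeholders; run it as a MySQL admin user, and match the values in the wp-config.php of the site you are cloning if you can):

```sql
CREATE DATABASE databaseName;
CREATE USER 'databaseUserName'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON databaseName.* TO 'databaseUserName'@'localhost';
FLUSH PRIVILEGES;
```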

Second, I use phpMyAdmin to export the database from the original site and import it into the new database for the clone.
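If you prefer the command line to phpMyAdmin, the export and import are one command each. This is a sketch with placeholder names (liveUser, liveDatabaseName, devUser, devDatabaseName are not real values); it is guarded so nothing runs on a machine without the MySQL client tools:

```shell
if command -v mysqldump >/dev/null 2>&1; then
  # on the original site's server:
  mysqldump -u liveUser -p liveDatabaseName > site-backup.sql
  # transfer site-backup.sql, then on the development server:
  mysql -u devUser -p devDatabaseName < site-backup.sql
else
  echo "MySQL client tools not found; commands shown for reference"
fi
```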

[Screenshot: the phpMyAdmin Export screen]

Fix all the bits that break

All that remains is to reconnect the PHP files to the database and fix a few other things that break. This is where it gets fiddly. Also, from now on be really careful about which site you are working on: they look the same, and you really don’t want to turn your public site into a development server by mistake. Make all these changes on the new development site.

In wp-config.php (it’s at the top of the WordPress folder hierarchy) find the following lines and change the values to those for your new development server and database.

define( 'WP_CONTENT_URL', 'http://example.org/blog' );
define( 'WP_CONTENT_DIR', 'path/to/wp-content' );

/** The name of the database for WordPress */
define('DB_NAME', 'databaseName');

/** MySQL database username */
define('DB_USER', 'databaseUserName');

/** MySQL database password */
define('DB_PASSWORD', 'password');

You might also need to change the value of DB_HOST.

Then you need to change the options that WordPress stores in the database. Normally you would do this through the WordPress admin interface, but that is not yet available on your new site. There are various ways to do this: I change the URL directly in the database with phpMyAdmin, either by direct editing as described in the codex page or from the command line as described here.

mysql -u root -p

USE databaseName;
SELECT * FROM wp_options WHERE option_name = 'home';
UPDATE wp_options SET option_value = 'http://example.org/blog' WHERE option_name = 'home';
SELECT * FROM wp_options WHERE option_name = 'siteurl';
UPDATE wp_options SET option_value = 'http://example.org/blog' WHERE option_name = 'siteurl';

You should now have access to the new cloned site, though some things will still be misbehaving.

You will probably still have the old site’s URL in various posts and GUIDs. I use the Better Search Replace plugin to fix these.

If you do any fancy redirects with .htaccess, make sure that these are written in such a way that they work for the new URL.

If you are using Jetpack you will need to run it in safe mode if the development server is connected to the web, or in development mode if it is running on localhost. (This is a bit of a pain if you want to test Jetpack settings.)

On a development site you’ll probably want to add this to wp-config.php:

define('WP_DEBUG', true);
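You may also find the companion debug constants useful (both are standard WordPress settings; whether you want them is a matter of preference):

```php
define( 'WP_DEBUG_LOG', true );      // also write notices to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // keep notices out of the rendered pages
```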

If you are running a development or testing server on a web-accessible site you probably want to restrict who has access to it. I use the My Private Site plugin so that only site admins have access.

Keeping in sync

While it’s not entirely necessary for a development or testing site to be kept completely in sync with the main one, it is worth keeping them close so that you don’t get unexpected issues on the main site. You can manually update the plugins and themes, and use the WordPress export/import plugins to transfer new content from the live site to the clone. Every now and again you might want to re-clone the site afresh. Something I find useful when developing and testing new plugins and themes is to have the plugin or theme directory that I am developing in set up as a git repository linked to GitHub, and to keep the files in sync with git push and git pull.
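A sketch of that git set-up for a plugin directory (the plugin name and the GitHub remote are made-up examples; the demo uses a temporary directory standing in for wp-content/plugins/my-plugin so it can be tried anywhere):

```shell
PLUGIN_DIR="$(mktemp -d)/my-plugin"   # stands in for wp-content/plugins/my-plugin
mkdir -p "$PLUGIN_DIR"
cd "$PLUGIN_DIR"
git init -q
git config user.email "dev@example.org"   # demo-only identity for the commit
git config user.name "Dev"
echo "<?php // plugin code" > my-plugin.php
git add .
git commit -qm "work in progress"
# link to GitHub and push; on the other clone, git pull picks up the changes:
# git remote add origin git@github.com:example/my-plugin.git
# git push -u origin master
git log --oneline
```

Because only the plugin or theme directory is a repository, the rest of the cloned site is untouched by the syncing.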

Anything else?

I think that is it. If I have forgotten anything or if you have tips on making any of this easier please leave a comment.

The post Cloning WordPress sites for development appeared first on Sharing and learning.

Three resources about gender bias⤴

from @ Sharing and learning

These are three resources that look like they might be useful in understanding and avoiding gender bias. They caught my attention because I cover some cognitive biases in the Critical Thinking course I teach. I also cover the advantages of having diverse teams working on problems (the latter based on a discussion of How Diversity Makes Us Smarter in SciAm). Finally, like any responsible teacher in information systems and computer science, I am keen to see more women in my classes.

Iris Bohnet on the BBC Radio 4 Today programme, 3 January. If you have access via a UK education institution with an ERA licence you can listen to the clip via the BUFVC Box of Broadcasts. Otherwise, here’s a quick summary. Bohnet stresses that much gender bias is unconscious: individuals may not be aware that they act in biased ways. Awareness of the issue and diversity training are not enough on their own to ensure fairness. She stresses that organisational practice and procedures are the easiest effective way to remove bias. One example she quotes is that to recruit more male teachers, job adverts should not “use adjectives that in our minds stereotypically are associated with women such as compassionate, warm, supportive, caring.” This is not because teachers should not have these attributes or because men cannot have them, but because research shows[*] that these attributes are associated with women and may subconsciously deter male applicants.

[* I don’t like my critical thinking students saying broad and vague things like ‘research shows that…’. It’s OK for a 3-minute slot on a breakfast news show, but I’ll have to do better. I hope the details are somewhere in Iris Bohnet (2016), What Works: Gender Equality by Design.]

This raised a couple of questions in my mind. If gender bias is unconscious, how do you know you do it? And, what can you do about it? That reminded me of two other things I had seen on bias over the last year.

An Implicit Association Test (IAT) on Gender-Career associations, which I took a while back. It’s a clever little test based on how quickly you can classify names and career attributes. You can read more about these tests on the Project Implicit website, or try the same test that I did (after a few disclaimers and some other information gathering; it’s currently the first one on their list).

A gender bias calculator for recommendation letters based on the words that might be associated with stereotypically male or female attributes. I came across this via Athene Donald’s blog post Do You Want to be Described as Hard Working? which describes the issue of subconscious bias in letters of reference. I guess this is the flip side of the job advert example given by Bohnet. There is lots of other useful and actionable advice in that blog post, so if you haven’t read it yet do so now.

The post Three resources about gender bias appeared first on Sharing and learning.

Book chapter: Technology Strategies for Open Educational Resource Dissemination⤴

from @ Sharing and learning

A book with a chapter by Lorna M Campbell and me has just been published. The book is Open Education: International Perspectives in Higher Education, edited by Patrick Blessinger and TJ Bliss, published by Open Book Publishers.

There are contributions by people I know and look up to in the OER world, and some equally good chapters by folk I had not come across before. It seems to live up to its billing of offering an international perspective by not being US-centric (though it would be nice to see more from South America, Asia and Africa), and it provides a wide view of open education, not limited to Open Educational Resources. There is a foreword by David Wiley, a chapter on a human rights theory for open education by the editors, and one on whether emancipation through open education is theory or rhetoric by Andy Lane. Other people from the Open University’s open education team (Martin Weller, Beatriz de los Arcos, Rob Farrow, Rebecca Pitt and Patrick McAndrew) have written about identifying categories of OER users. There are chapters on aspects such as open science, open textbooks, open assessment and credentials for open learning, and several case studies and reflections on open education in practice.

Open Education: International Perspectives in Higher Education is available under a CC BY licence as a free PDF, as a very cheap mobi or ePub, or in reasonably priced softback and hardback editions. You should get a copy from the publishers.

Technology Strategies for OER

The chapter that Lorna and I wrote is an overview drawing on our experiences through the UKOER programme and our work on LRMI looking at managing the dissemination and discovery of open education resources. Here’s the abstract in full, and a link to the final submitted version of our chapter.

This chapter addresses issues around the discovery and use of Open Educational Resources (OER) by presenting a state of the art overview of technology strategies for the description and dissemination of content as OER. These technology strategies include institutional repositories and websites, subject specific repositories, sites for sharing specific types of content (such as video, images, ebooks) and general global repositories. There are also services that aggregate content from a range of collections, these may specialize by subject, region or resource type. A number of examples of these services are analyzed in terms of their scope, how they present resources, the technologies they use and how they promote and support a community of users. The variety of strategies for resource description taken by these platforms is also discussed. These range from formal machine-readable metadata to human readable text. It is argued that resource description should not be seen as a purely technical activity. Library and information professionals have much to contribute, however academics could also make a valuable contribution to open educational resource (OER) description if the established good practice of identifying the provenance and aims of scholarly works is applied to learning resources. The current rate of change among repositories is quite startling, with several repositories and applications having either shut down or changed radically in the year or so that the work on which this contribution is based took place. With this in mind, the chapter concludes with a few words on sustainability.

Preprint of full chapter (MS Word)

The post Book chapter: Technology Strategies for Open Educational Resource Dissemination appeared first on Sharing and learning.

Reflective learning logs in computer science⤴

from @ Sharing and learning

Do you have any comments, advice or other pointers on how to guide students to maintaining high quality reflective learning logs?

Context: I teach part of a first year computer science / information systems course on Interactive Systems. We have some assessed labs where we set the students fixed tasks to work on, and there is coursework. For the coursework the students have to create an app of their own devising. They start with something simple (think of it as a minimum viable product) but then extend it to involve interaction with the environment (using their device’s sensors), with other people, or with other systems. Among the objectives of the course are that students learn to take responsibility for their own learning, appreciate their own strengths and weaknesses, and understand what is possible within time constraints. We also want students to gain experience in conceiving, designing and implementing an interactive app, and we want them to reflect on and provide evidence about the effectiveness of the approach they took.

Part of the assessment for this course is by way of the students keeping reflective learning logs, which I am now marking.  I am trying to think how I could better guide the students to write substantial, analytic posts (including how to encourage engagement from those students who don’t see the point to keeping a log).

Guidance and marking criteria

Based on those snippets of feedback that I found myself repeating over and over, here’s what I am thinking to provide as guidance to next year’s students:

  • The learning log should be filled in whenever you work on your app, which should be more than just during the lab sessions.
  • For set labs, entries with the following structure will help bring out the analytic elements:
    • What was I asked to do?
    • What did I anticipate would be difficult?
    • What did I find to be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students?
    • What would I do differently if I had to do this again?
  • For coursework entries the structure can be amended to:
    • What did I do?
    • What did I find to be difficult? How did this compare to what I anticipated would be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students on my work so far?
    • What would I do differently if I had to do this again?
    • What do I plan to do next?
    • What do I anticipate to be difficult?
    • How do I plan to overcome outstanding issues and expected difficulties?

These reflective learning logs are marked out of 5 in the middle of the course and again at the end (so they represent 10% of the total course mark), according to the following criteria:

  1. contributions: No entries, or very brief (i.e. one or two sentences) entries only: no marks. Regular entries, more than once per week, with substantial content: 2 marks.
  2. analysis: Brief account of events only, or verbatim repetition of notes: no marks. Entries which include meaningful plans with reflection on whether they worked; analysis of problems and how they were solved; and evidence of re-evaluation of plans as a result of what was learnt during implementation and/or feedback from others: 3 marks.
  3. note: there are other ways of doing really well or really badly than are covered above.

Questions

Am I missing anything from the guidance and marking criteria?

How can I encourage students who don’t see the point of keeping a reflective learning log? I guess some examples of where such logs matter in professional practice in computing would help.

These are marked twice, using rubrics in Blackboard, in the middle of the semester and at the end. Is there any way of attaching two grading rubrics to the same assessed log in Blackboard? Or a workaround to set the same blog as two graded assignments?

Answers on a postcard… Or the comments section below. Or email.

The post Reflective learning logs in computer science appeared first on Sharing and learning.

XKCD or OER for critical thinking⤴

from @ Sharing and learning

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read a research paper, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by XKCD, and that XKCD is CC licensed. Open Educational Resources can be fun.

how scientists think

[XKCD comic]

hypothesis testing

[XKCD comic. Title text: "Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that."]

Blind trials

[XKCD comic]

Interpreting statistics

[XKCD comic]

p hacking

[XKCD comic]

Confounding variables

[XKCD comic. Title text: "There are also a lot of global versions of this map showing traffic to English-language websites which are indistinguishable from maps of the location of internet users who are native English speakers"]

Extrapolation

[Two XKCD comics]

Confirmation bias in information seeking

[Two XKCD comics]

undistributed middle

[XKCD comic]

post hoc ergo propter hoc

Or correlation =/= causation.

[Two XKCD comics. Title text: "He holds the laptop like that on purpose, to make you cringe."]

Bandwagon Fallacy…

…and fallacy fallacy

[XKCD comic]

Diversity and inclusion

[XKCD comic]

LRMI at #DCMI16 Metadata Summit, Copenhagen⤴

from @ Sharing and learning

I was in Copenhagen last week, at the Dublin Core Metadata Initiative 2016 conference, where I ran a workshop entitled “Building on Schema.org to describe learning resources” (as one of my colleagues pointed out, thinking of the snappy title never quite happened). Here’s a quick overview of it.

There were three broad parts to the workshop: presentations on the background organisations and technology; presentations on how LRMI is being used; and an interactive session where attendees got to think about what could be next for LRMI.

Fundamentals of Schema.org and LRMI

An introduction to Schema.org (Richard Wallis)

A brief history of Schema.org, fast becoming a de facto vocabulary for structured web data, shared with search engines and others to understand, interpret and load into their knowledge graphs. Whilst addressing the issue of simple structured markup across the web, its extension capabilities are also facilitating the development of sector-specific enhancements that will be widely understood.

An Introduction to LRMI (Phil Barker)

A short introduction to the Learning Resource Metadata Initiative, originally a project which developed a common metadata framework for describing learning resources on the web. LRMI metadata terms have been added to Schema.org. The task group currently works to support those terms as a part of Schema.org and as a DCMI community specification.

Use of LRMI

Overview of LRMI in the wild  (Phil Barker)

The results of a series of case studies looking at initial implementations are summarised, showing that LRMI metadata is used in various ways, not all of which are visible to the outside world. Estimates of how many organisations are using LRMI properties in publicly available websites and pages are given, and some examples are shown.

The Learning Registry and LRMI (Steve Midgley)

The Learning Registry is a new approach to capturing, connecting and sharing data about learning resources available online, with the goal of making it easier for educators and students to access the rich content available in our ever-expanding digital universe. This presentation explains what the Learning Registry is, how it is used and how it uses LRMI / Schema.org metadata, including what has been learned about structuring, validating and sharing LRMI resources, expressing alignments to learning standards, and validation with JSON-LD and JSON Schema.

[On the day we failed to connect to Steve via skype, but here are his slides that we missed]

What next for LRMI?

I presented an overview of nine ideas that LRMI could prioritise for future work. These ideas were the basis for a balloon debate, which I will summarise in more detail in my next post.

Schema course extension update⤴

from @ Sharing and learning

This progress update on the work to extend schema.org to support the discovery of any type of educational course is cross-posted from the Schema Course Extension W3C Community Group. If you are interested in this work please head over there.

What aspects of a course can we now describe?
As a result of work so far addressing the use cases that we outlined, we now have answers to a good number of questions about how to describe courses using schema.org.

As with anything in schema.org, many of the answers proposed are not the final word on all the detail required in every case, but they form a solid basis that I think will be adequate in many instances.

What new properties are we proposing?
In short, remarkably few. Many of the aspects of a course can be described in the same way as other creative works or events. However, we did find that we needed to create two new types, Course and CourseInstance, to distinguish between a course that may be offered at various times and a specific offering or section of that course. We also found the need for three new properties for Course: courseCode, coursePrerequisites and hasCourseInstance; and two new properties for CourseInstance: courseMode and instructor.

There are others under discussion, but I highlight these as proposed because they are being put forward for inclusion in the next release of the schema.org core vocabulary.
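As a rough illustration (the course details below are invented, not taken from any real catalogue), the proposed types and properties combine with existing schema.org properties in JSON-LD like this:

```json
{
  "@context": "http://schema.org",
  "@type": "Course",
  "name": "Interactive Systems",
  "courseCode": "IS101",
  "description": "Designing and building interactive apps.",
  "coursePrerequisites": "Introductory programming",
  "hasCourseInstance": {
    "@type": "CourseInstance",
    "courseMode": "part-time",
    "instructor": {
      "@type": "Person",
      "name": "A. N. Example"
    }
  }
}
```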

[Image: how Google will display information about courses in a search gallery]

More good news: the Google search gallery documentation for developers already includes information on how to provide the most basic information about Courses. This is where we are going.

Sustainability and Open Education⤴

from @ Sharing and learning

Last week I was on a panel at Edinburgh University’s Repository Fringe event discussing sustainability and OER. As part of this I was asked to talk for ten minutes on some aspect of the subject. I don’t think I said anything of startling originality, but I must start posting to this blog again, so here are the notes I spoke from. The idea that I wanted to get over is that projects should be careful about which services they set up: the services should be suitable and sustainable, and in fact it might be best to do only the minimum necessary (which might mean not setting up a repository).

Between 2009 and 2012 Jisc and the HE Academy ran the UK Open Educational Resources programme (UKOER), spending approximately £15M of HEFCE funding in three phases. There were 65 projects: some with personal, institutional or discipline scope releasing resources openly, and some with a remit of promoting dissemination or discoverability. There were also related activities and services providing technical, legal and policy support, and there was Jorum: OERs released through the programme were mandated to be deposited in the Jorum repository. This was a time when open education was booming. As well as UKOER, funding from foundations in the US, notably Hewlett and Gates, was quite well established, and EU funding was beginning. UKOER also, of course, built on previous Jisc programmes such as X4L, ReProduce, and the Repositories & Preservation programme.

In many ways UKOER was a great success: a great number of resources were created or released, and it established open education as something that people in UK HE talked about. It showed how to remove some of the blockers to the reuse and sharing of content for teaching and learning in HE, especially through the use of standard CC licences with global scope rather than the vague, restrictive and expensive custom variations on “available to other UK HEIs” of previous programmes. Helped by UKOER, many UK HEIs were well placed to explore the possibilities of MOOCs, and in general it showed the potential to change how HEIs engage with the wider world and to help make best use of online learning. But it’s not just about opening exciting but vague possibilities: being a means to avoid problems such as restrictive licensing, and being in a position to explore new possibilities, means avoiding unnecessary costs in the future, and that helps make OER financially attractive (which is important to sustainability). As evidence of this success: even though UKOER was largely based on HEFCE funding, there are direct connections from UKOER to the University of Edinburgh’s Open Ed initiative and (less directly) to their engagement with MOOCs.

But I am here to talk about sustainability. You probably know that Jorum, the repository into which UKOER projects were required to deposit their OERs, is closing. Also, many of the discipline-based and discovery projects were based at HE Academy subject centres, which are now gone. At the recent OER16 conference here, Pat Lockley suggested that OER are no longer being created, based on what he sees coming into the Solvonauts aggregator that he develops and runs. Martin Poulter showed the graph: there is a fairly dramatic drop in the number of new deposits it sees. That suggests something is not being sustained.

But what?

Let’s distinguish between sustainability and persistence. Sustainability suggests to me a manageable ongoing effort. The content as released may be persistent, that is, still available as released (though without some sort of sustained effort of editing, updating and preservation it may not be much use). What else needs sustained effort? I would suggest: 1, the release of new content; 2, interest and community; 3, the services around the content (and that includes repositories). I would say that UKOER did create a community interested in OER which is still pretty active. It could be larger, and less inward-looking at times, but for an academic community it is doing quite well. New content is being released. But the services created by UKOER (and other OER initiatives) are dying. That, I think, is why Pat Lockley isn’t seeing new resources being published.

What is the lesson we should learn? Don’t create services to manage and disseminate your OERs that that require “project” level funding. Create the right services, don’t assume that what works for research outputs will work for educational resources, make sure that there is that “edit” button (or at least a make-your-own-editable-copy button).  Make the best use of what is available. Use everything that is available. Use wikimedia services, but also use flickr, wordpress, youtube, itunes, vimeo,—and you may well want to create your own service to act as a “junction” between all the different places you’re putting your OERs, linking with them via their APIs for deposit and discovery. This is the basic idea behind POSSE: Publish (on your) Own Site, Syndicate Elsewhere.