Author Archives: Phil Barker

Reflective learning logs in computer science⤴

from @ Sharing and learning

Do you have any comments, advice or other pointers on how to guide students to maintaining high quality reflective learning logs?

Context: I teach part of a first year computer science / information systems course on Interactive Systems. We have some assessed labs where we set the students fixed tasks to work on, and there is coursework. For the coursework the students have to create an app of their own devising. They start with something simple (think of it as a minimum viable product) but then extend it to involve interaction with the environment (using their device’s sensors), other people, or other systems. Among the objectives of the course are that students learn to take responsibility for their own learning, appreciate their own strengths and weaknesses, and understand what is possible within time constraints. We also want students to gain experience in conceiving, designing and implementing an interactive app, and we want them to reflect on and provide evidence about the effectiveness of the approach they took.

Part of the assessment for this course is by way of the students keeping reflective learning logs, which I am now marking. I am trying to think how I could better guide the students to write substantial, analytic posts (including how to encourage engagement from those students who don’t see the point of keeping a log).

Guidance and marking criteria

Based on the snippets of feedback that I found myself repeating over and over, here’s what I am thinking of providing as guidance to next year’s students:

  • The learning log should be filled in whenever you work on your app, which should be more than just during the lab sessions.
  • For set labs, entries with the following structure will help bring out the analytic elements:
    • What was I asked to do?
    • What did I anticipate would be difficult?
    • What did I find to be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students?
    • What would I do differently if I had to do this again?
  • For coursework entries, the structure can be amended to:
    • What did I do?
    • What did I find to be difficult? How did this compare to what I anticipated would be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students on my work so far?
    • What would I do differently if I had to do this again?
    • What do I plan to do next?
    • What do I anticipate to be difficult?
    • How do I plan to overcome outstanding issues and expected difficulties?

These reflective learning logs are marked out of 5 in the middle of the course and again at the end (so they represent 10% of the total course mark), according to the following criteria:

  1. contributions: No entries, or very brief (i.e. one or two sentences) entries only: no marks. Regular entries, more than once per week, with substantial content: 2 marks.
  2. analysis: Brief account of events only or verbatim repetition of notes: no marks. Entries which include meaningful plans with reflection on whether they worked; analysis of problems and how they were solved; and evidence of re-evaluation plans as a result of what was learnt during the implementation and/or as a result of feedback from others: 3 marks.
  3. note: there are other ways of doing really well or really badly than those covered above.

Questions

Am I missing anything from the guidance and marking criteria?

How can I encourage students who don’t see the point of keeping a reflective learning log? I guess some examples of where such logs are important in professional computing practice would help.

These are marked twice, using rubrics in Blackboard, in the middle of the semester and at the end. Is there any way of attaching two grading rubrics to the same assessed log in Blackboard? Or a workaround to set the same blog as two graded assignments?

Answers on a postcard… Or the comments section below. Or email.

The post Reflective learning logs in computer science appeared first on Sharing and learning.

XKCD or OER for critical thinking⤴

from @ Sharing and learning

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read a research paper, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by XKCD, and that XKCD is CC licensed. Open Education Resources can be fun.

how scientists think

hypothesis testing

(XKCD alt text: “Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that.”)

Blind trials

Interpreting statistics

p hacking

Confounding variables

(XKCD alt text: “There are also a lot of global versions of this map showing traffic to English-language websites which are indistinguishable from maps of the location of internet users who are native English speakers.”)

Extrapolation

Confirmation bias in information seeking

undistributed middle

post hoc ergo propter hoc

Or correlation =/= causation.

(XKCD alt text: “He holds the laptop like that on purpose, to make you cringe.”)

Bandwagon Fallacy…

…and fallacy fallacy

Diversity and inclusion

LRMI at #DCMI16 Metadata Summit, Copenhagen⤴

from @ Sharing and learning

I was in Copenhagen last week, at the Dublin Core Metadata Initiative 2016 conference, where I ran a workshop entitled “Building on Schema.org to describe learning resources” (as one of my colleagues pointed out, thinking of the snappy title never quite happened). Here’s a quick overview of it.

There were three broad parts to the workshop: presentations on the background organisations and technology; presentations on how LRMI is being used; and a workshop where attendees got to think about what could be next for LRMI.

Fundamentals of Schema.org and LRMI

An introduction to Schema.org (Richard Wallis)

A brief history of Schema.org, which is fast becoming a de facto vocabulary for structured data on the web, shared so that search engines and others can understand, interpret and load it into their knowledge graphs. Whilst addressing the issue of simple structured markup across the web, it is also, through its extension capabilities, facilitating the development of sector-specific enhancements that will be widely understood.

An Introduction to LRMI (Phil Barker)

A short introduction to the Learning Resource Metadata Initiative, originally a project which developed a common metadata framework for describing learning resources on the web. The LRMI metadata terms have been added to Schema.org. The task group currently works to support those terms as part of Schema.org and as a DCMI community specification.

Use of LRMI

Overview of LRMI in the wild  (Phil Barker)

The results of a series of case studies looking at initial implementations are summarised, showing that LRMI metadata is used in various ways, not all of which are visible to the outside world. Estimates of how many organisations are using LRMI properties in publicly available websites and pages are given, and some examples are shown.

The Learning Registry and LRMI (Steve Midgley)

The Learning Registry is a new approach to capturing, connecting and sharing data about learning resources available online, with the goal of making it easier for educators and students to access the rich content available in our ever-expanding digital universe. This presentation explains what the Learning Registry is, how it is used and how it uses LRMI / Schema.org metadata, including what has been learned about structuring, validating and sharing LRMI resources: expressing alignments to learning standards, and validation of JSON-LD and JSON Schema.

[On the day we failed to connect to Steve via Skype, but here are his slides that we missed.]

What next for LRMI?

I presented an overview of nine ideas that LRMI could prioritise for future work. These ideas were the basis for a balloon debate, which I will summarise in more detail in my next post.

Schema course extension update⤴

from @ Sharing and learning

This progress update on the work to extend schema.org to support the discovery of any type of educational course is cross-posted from the Schema Course Extension W3C Community Group. If you are interested in this work please head over there.

What aspects of a course can we now describe?
As a result of work so far addressing the use cases that we outlined, we now have answers to many of the questions about how to describe courses using schema.org.

As with anything in schema.org, many of the answers proposed are not the final word on all the detail required in every case, but they form a solid basis that I think will be adequate in many instances.

What new properties are we proposing?
In short, remarkably few. Many of the aspects of a course can be described in the same way as for other creative works or events. However, we did find that we needed to create two new types, Course and CourseInstance, to identify whether a description relates to a course that could be offered at various times or to a specific offering or section of that course. We also found the need for three new properties for Course: courseCode, coursePrerequisites and hasCourseInstance; and two new properties for CourseInstance: courseMode and instructor.

There are others under discussion, but I highlight these as proposed because they are being put forward for inclusion in the next release of the schema.org core vocabulary.
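As a sketch of how these proposed terms fit together, here is a minimal Course description in JSON-LD, built in Python. The course name, code, prerequisites and instance details are invented for illustration; the type and property names are those proposed for schema.org.

```python
import json

# Minimal sketch of the proposed Course / CourseInstance markup.
# All values (name, courseCode, etc.) are invented for illustration;
# the types and properties are those proposed for schema.org.
course = {
    "@context": "http://schema.org",
    "@type": "Course",
    "name": "Introduction to Interactive Systems",  # hypothetical course
    "courseCode": "IS101",                          # hypothetical code
    "coursePrerequisites": "Basic programming",
    "hasCourseInstance": {
        "@type": "CourseInstance",
        "courseMode": "part-time",
        "instructor": {"@type": "Person", "name": "A. Lecturer"},
    },
}

# Serialise as it would appear embedded in a course web page
print(json.dumps(course, indent=2))
```

A description like this would normally be embedded in a course page inside a `<script type="application/ld+json">` element so that search engines can pick it up.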

(Image: how Google will display information about courses in a search gallery.)

More good news: the Google search gallery documentation for developers already includes information on how to provide the most basic info about Courses. This is where we are going.

Sustainability and Open Education⤴

from @ Sharing and learning

Last week I was on a panel at Edinburgh University’s Repository Fringe event discussing sustainability and OER. As part of this I was asked to talk for ten minutes on some aspect of the subject. I don’t think I said anything of startling originality, but I must start posting to this blog again, so here are the notes I spoke from. The idea that I wanted to get over is that projects should be careful about what services they try to set up: the services should be suitable and sustainable, and in fact it might be best if they did the minimum necessary (which might mean not setting up a repository).

Between 2009 and 2012 Jisc and the HE Academy ran the UK Open Education Resources programme (UKOER), spending approximately £15M of HEFCE funding in three phases. There were 65 projects: some with personal, institutional or discipline scope releasing resources openly; some with a remit of promoting dissemination or discoverability; and some related activities and services providing technical, legal and policy support. There was also Jorum: a mandate required that OERs released through the programme be deposited in the Jorum repository. This was a time when open education was booming: as well as UKOER, funding from foundations in the US, notably Hewlett and Gates, was quite well established, and EU funding was beginning. UKOER also, of course, built on previous Jisc programmes such as X4L, ReProduce, and the Repositories & Preservation programme.

In many ways UKOER was a great success: a great number of resources were created or released, but it also established open education as a thing that people in UK HE talked about. It showed how to remove some of the blockers to the reuse and sharing of content for teaching and learning in HE (especially by using standard CC licences with global scope rather than the vague, restrictive and expensive custom variations on “available to other UK HEIs” of previous programmes). Helped by UKOER, many UK HEIs were well placed to explore the possibilities of MOOCs. In general it showed the potential to change how HEIs engage with the wider world and to help make best use of online learning. But it’s not just about opening exciting but vague possibilities: being a means to avoid problems such as restrictive licensing, and being in a position to explore new possibilities, means avoiding unnecessary costs in the future and helps to make OER financially attractive (and that’s important to sustainability). Evidence of this success: even though UKOER was largely based on HEFCE funding, there are direct connections from UKOER to the University of Edinburgh’s Open Ed initiative and (less directly) to their engagement with MOOCs.

But I am here to talk sustainability. You probably know that Jorum, the repository into which UKOER projects were required to deposit their OERs, is closing. Also, many of the discipline-based and discovery projects were based at HE Academy subject centres, which are now gone. At the recent OER16 here, Pat Lockley suggested that OERs were no longer being created. He based this on what he sees coming into the Solvonauts aggregator that he develops and runs. Martin Poulter showed the graph: there is a fairly dramatic drop in the number of new deposits he sees. That suggests something is not being sustained.

But what?

Let’s distinguish between sustainability and persistence: sustainability suggests to me a manageable ongoing effort. The content as released may be persistent, i.e. still available as released (though without some sort of sustained effort of editing, updating and preservation it may not be much use). What else needs sustained effort? I would suggest: 1, the release of new content; 2, interest and community; 3, the services around the content (and that includes repositories). I would say that UKOER did create a community interested in OER which is still pretty active. It could be larger, and less inward-looking at times, but for an academic community it is doing quite well. New content is being released. But the services created by UKOER (and other OER initiatives) are dying. That, I think, is why Pat Lockley isn’t seeing new resources being published.

What is the lesson we should learn? Don’t create services to manage and disseminate your OERs that require “project” level funding. Create the right services; don’t assume that what works for research outputs will work for educational resources; make sure that there is that “edit” button (or at least a make-your-own-editable-copy button). Make the best use of everything that is available. Use Wikimedia services, but also use Flickr, WordPress, YouTube, iTunes, Vimeo, and you may well want to create your own service to act as a “junction” between all the different places you’re putting your OERs, linking with them via their APIs for deposit and discovery. This is the basic idea behind POSSE: Publish (on your) Own Site, Syndicate Elsewhere.

Schema course extension progress update⤴

from @ Sharing and learning

I am chair of the Schema Course Extension W3C Community Group, which aims to develop an extension for schema.org concerning the discovery of any type of educational course. This progress update is cross-posted from there.

If the forming-storming-norming-performing model of group development still has any currency, then I am pretty sure that February was the “storming” phase. There was a lot of discussion, much of it around the modelling of the basic entities for describing courses and how they relate to core types in schema.org (the Modelling Course and CourseOffering & Course, a new dawn? threads). I am pleased to say that the discussion did its job, and we achieved some sort of consensus (norming) around modelling courses in two parts:

Course, a subtype of CreativeWork: A description of an educational course which may be offered as distinct instances at different times and places, or through different media or modes of study. An educational course is a sequence of one or more educational events and/or creative works which aims to build knowledge, competence or ability of learners.

CourseInstance, a subtype of Event: An instance of a Course offered at a specific time and place or through specific media or mode of study or to a specific section of students.

hasCourseInstance, a property of Course with expected range CourseInstance: An offering of the course at a specific time and place or through specific media or mode of study or to a specific section of students.

(see Modelling Course and CourseInstance on the group wiki)

This modelling, especially the subtyping from existing schema.org types allows us to meet many of the requirements arising from the use cases quite simply. For example, the cost of a course instance can be provided using the offers property of schema.org/Event.
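To illustrate that point with a sketch (all values invented): because CourseInstance is a subtype of Event it inherits the offers property, so a course fee needs no new vocabulary at all.

```python
import json

# Sketch: CourseInstance inherits Event's "offers" property, so the
# cost of an offering can be expressed without any new vocabulary.
# All values are invented for illustration.
instance = {
    "@context": "http://schema.org",
    "@type": "CourseInstance",
    "courseMode": "distance learning",
    "startDate": "2016-09-19",  # hypothetical start date
    "offers": {
        "@type": "Offer",
        "price": "500.00",
        "priceCurrency": "GBP",
    },
}

print(json.dumps(instance, indent=2))
```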

The wiki is working to a reasonable extent as a place to record the outcomes of the discussion. Working from the outline use cases page you can see which requirements have pages, and those pages that exist point to the relevant discussion threads in the mail list and, where we have got this far, describe the current solution.  The wiki is also the place to find examples for testing whether the proposed solution can be used to mark up real course information.

As well as the wiki, we have the proposal on github, which can be used to build working test instances on appspot showing the proposed changes to the schema.org site.

The next phase of the work should see us performing: working through the requirements from the use cases and showing how they can be met. I think we should focus first on those that look easy to do with existing properties of schema.org/Event and schema.org/CreativeWork.

Why is there no LearningResource type in schema.org?⤴

from @ Sharing and learning

A couple of times in the last month or so I have been asked why there isn’t a LearningResource type in schema.org as a subtype of CreativeWork. In case it comes up again, here’s my answer.

We took a deliberate decision way back at the start of LRMI not to define a LearningResource as a subtype of CreativeWork. Essentially the problem comes when you try to define what is a Learning Resource. Everyone who has tried so far has come up with something like “a resource which is used in learning, education or training”. That doesn’t rule out anything. Whether a magazine like Germany’s Spiegel is a learning resource depends on whether you are a German speaker or an American studying German. In presentations I have compared this problem to that of defining “what is a seat”. You can get seats in all shapes and forms with many different characteristics: chairs, sofas, saddles, stools; so in the end you just have to say a seat is something you sit on. Rather than rehash the problem of deciding what is and isn’t a learning resource, we took the approach of providing a way by which people can describe the educational properties of any Creative Work.

We recognised that there are some “types” of resource that are specific to learning. You can sensibly talk about textbooks and instructional videos as being qualitatively different to novels and the movies people watch in the cinema, without denying that novels and movies are useful in education. That’s why we have the learningResourceType property. You can think of this as describing the educational genre of the resource.

In practice there are two choices for searching for learning resources. You can search those sites that are curated collections of what someone has decided are educational resources. Or you can search for the educational properties you want. So in our attempt at creating a Google Custom Search Engine we looked for the AlignmentObject. Looking for the presence of a learningResourceType would be another way. The educationalUse property should likewise be a good indicator.
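As a sketch of that approach (the resource itself is invented), the educational characteristics of an ordinary CreativeWork can be described without any LearningResource type:

```python
import json

# Sketch: LRMI describes the educational characteristics of any
# CreativeWork rather than defining a LearningResource type.
# The resource details are invented for illustration.
resource = {
    "@context": "http://schema.org",
    "@type": "VideoObject",                         # an ordinary creative work type
    "name": "Hypothesis testing explained",         # hypothetical resource
    "learningResourceType": "instructional video",  # the educational genre
    "educationalUse": "homework",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "educationalSubject",
        "targetName": "statistics",
    },
}

print(json.dumps(resource, indent=2))
```

A search tool of the kind described above could then select resources by the presence of learningResourceType, educationalUse or an AlignmentObject, rather than by first deciding what counts as a learning resource.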

HECoS, a new subject coding system for Higher Education⤴

from @ Sharing and learning

You may have missed that just before Christmas HECoS (the Higher Education Classification of Subjects) was announced. I worked a little on the project that led up to this, along with colleagues in Cetis (who led the project), Alan Paull Services and Gill Ferrell, so I am especially pleased to see it come to fruition. I believe that as a flexible classification scheme built on semantic web / linked data principles it is a significant contribution to how we share data in HE.

HECoS was commissioned as part of the Higher Education Data & Information Improvement Programme (HEDIIP) in order to find a replacement for JACS, the subject coding scheme currently used in UK HE when information from different institutions needs to be classified by subject. When I was first approached by Gill Ferrell, while she was working on a preliminary study to determine whether JACS needed changing, my initial response was that something much more in tune with semantic web principles would be very welcome (see the second part of this post that I wrote back in 2013). HECoS has been designed from the outset to be semantic web friendly. Also, one of the issues identified by the initial study was that the aggregation of subjects is politically sensitive. For starters, the level of funding can depend on whether a subject is, for example, a STEM subject or not; but there are also factors of how universities as institutions are organised into departments/faculties/schools and how academics identify with disciplines. These lead to unnecessary difficulties in the subject classification of courses: it is easy enough to decide whether a course is about ‘actuarial science’, but deciding whether ‘actuarial science’ should be grouped under ‘business studies’ or ‘mathematics’ is strongly context dependent. One of the decisions taken in designing HECoS was to separate the politics of how to aggregate subjects from the descriptions of those subjects and their more general relationships to each other. This is in marked contrast to JACS, where the aggregation was baked into the very identifiers used. That is not to say that aggregation hierarchies aren’t important or won’t exist: they are, and they will; indeed there is already one for the purpose of displaying subjects for navigation, but hierarchies will be created through a governance process that can consider the politics involved separately from describing the subjects.
This should make the subject classification terms more widely usable, allowing institutions and agencies who use it to build hierarchies for presentation and analysis that meet their own needs if these are different from those represented by the process responsible for the standard hierarchy. A more widely used classification scheme will have benefits for the information improvement envisaged by HEDIIP.

The next phase of HECoS will be about implementation and adoption, for example the creation of the governance processes detailed in the reports, moving HECoS up to proper 5-star linked data, help with migration from JACS to HECoS and so on. There’s a useful summary report on the HEDIIP site, and a spreadsheet of the coding system itself. There’s also still the development version Cetis used for consultation, which better represents its semantic webbiness but is non-definitive and temporary.