Tag Archives: random musings

Strings to things in context

from @ Sharing and learning

As part of work to convert plain JSON records to proper RDF in JSON-LD I often want to convert a string value to a URI that identifies a thing (real world concrete thing or a concept).

Simple string to URI mapping

Given a fragment of a schedule in JSON

{"day": "Tuesday"}

As well as converting "day" to a property in an RDF vocabulary I might want to use a concept term for “Tuesday” drawn from that vocabulary. JSON-LD’s @context lets you do this: the @vocab keyword says what RDF vocabulary you are using for properties; the @base keyword says what base URL you are using for values that are URIs; the @id keyword maps a JSON key to an RDF property; and the @type keyword (when used in the @context object) says what type of value a property should be; the value of @type that says you’re using a URI is "@id" (confused by @id doing double duty? it gets worse). So:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "@base": "http://schema.org/",
    "day": {
       "@id": "dayOfWeek",
       "@type": "@id"
    }
  },
  "day": "Tuesday"
}

Pop this into the JSON-LD playground to convert it into N-QUADS and you get:

_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .

Cool.

What type of thing is this?

The other place where you want to use URI identifiers is to say what type/class of thing you are talking about. Expanding our example a bit, we might have

{
  "type": "Schedule",
  "day": "Tuesday"
}

Trying the same approach as above, in the @context block we can use the @id keyword to map the JSON key "type" to the special value "@type"; and use the @type keyword with the special value "@id" to say that the type of value expected is a URI, as we did to turn the string “Tuesday” into a schema.org URI. (I did warn you it got more confusing.) So:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "@base": "http://schema.org/",
    "type": {
       "@id": "@type",
       "@type": "@id"    
    },
    "day": {
       "@id": "dayOfWeek",
       "@type": "@id"
    }
  },
  "type": "Schedule",
  "day": "Tuesday"
}

Pop this into the JSON-LD playground and convert to N-QUADS and you get

_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Schedule> .

As we want.

Mixing it up a bit

So far we’ve had just the one RDF vocabulary; say we want to use terms from a variety of vocabularies. For the sake of argument, let’s say that no one vocabulary is more important than another, so we don’t want to use @vocab and @base to set global defaults. Adding another term from a custom vocab into our example:

{ 
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "Phil" 
}

In the context we can set prefixes to use instead of full-length URIs, but the most powerful feature is that we can use a separate @context block for each term definition to set a different @base URI for each. That looks like:

{
  "@context": {
    "schema": "http://schema.org/",
    "ex" : "http://my.example.org/",
    "type": {
       "@id": "@type",
       "@type": "@id",
       "@context": {
         "@base": "http://schema.org/"        
      }
    },
    "day": {
      "@id": "schema:dayOfWeek",
      "@type": "@id",
      "@context": {
         "@base": "http://schema.org/"        
      }
    },
   "onDuty": {
     "@id": "ex:onDuty",
       "@type": "@id",
       "@context": {
         "@base": "https://people.pjjk.org/"
      }
    }
  },
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "phil"
}

Translated by the JSON-LD playground, that gives:

_:b0 <http://my.example.org/onDuty> <https://people.pjjk.org/phil> .
_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://json-ld.org/playground/Schedule> .

Hmmm. The first two lines look good. The JSON keys have been translated to URIs for properties from two different RDF vocabularies, and their string values have been translated to URIs for things with different bases, so far so good. But, that last line: the @base for the type isn’t being used, and instead JSON-LD playground is using its own default. That won’t do.

The fix for this seems to be not to give the @id keyword for type the special value of "@type", but rather treat it as any other term from an RDF vocabulary:

{
  "@context": {
    "schema": "http://schema.org/",
    "ex" : "http://my.example.org/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "type": {
       "@id": "rdf:type",
       "@type": "@id",
       "@context": {
         "@base": "http://schema.org/"        
      }
    },
    "day": {
      "@id": "schema:dayOfWeek",
      "@type": "@id",
      "@context": {
         "@base": "http://schema.org/"        
      }
    },
   "onDuty": {
     "@id": "ex:onDuty",
       "@type": "@id",
       "@context": {
         "@base": "https://people.pjjk.org/"
      }
    }
  },
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "phil"
}

Which gives:

_:b0 <http://my.example.org/onDuty> <https://people.pjjk.org/phil> .
_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Schedule> .

That’s better, though I do worry that the lack of a JSON-LD @type key might bother some.

Extensions and Limitations

The nested context for a JSON key works even if the value is an object: it can be used to specify the @vocab and @base and any namespace prefixes used in the keys and values of that object. That’s useful if "title" in one object is dc:title and "title" in another needs to be schema:title, as in the sketch below.
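For example, here is a minimal sketch of that idea (the keys "book" and "job", the objects they contain, and the choice of schema:about and schema:mainEntity are made up for illustration, and I’ve used http://purl.org/dc/terms/ for the dc prefix); the scoped contexts map the same key "title" to different RDF properties:

{
  "@context": {
    "schema": "http://schema.org/",
    "dc": "http://purl.org/dc/terms/",
    "book": {
      "@id": "schema:about",
      "@context": { "title": "dc:title" }
    },
    "job": {
      "@id": "schema:mainEntity",
      "@context": { "title": "schema:title" }
    }
  },
  "book": { "title": "Of Mice and Men" },
  "job": { "title": "Learning Technologist" }
}

Run that through the JSON-LD playground and the two "title" values come out as dc:title and schema:title statements respectively.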

Converting string values to URIs for things like this is fine if the string happens to match the end of the URI that you want. So, while I can change a JSON key "author" into the property URI <https://www.wikidata.org/prop/direct/P50>, I cannot change the value string "Douglas Adams" into <https://www.wikidata.org/entity/Q42>. For that I think you need to use something a bit more flexible, like RML, but please comment if you know of a solution to that!

Also, let me know if you think the lack of a JSON-LD @type keyword, or anything else shown above seems problematic.


Reading one of 25 years of EdTech

from @ Sharing and learning

I enjoyed Martin Weller‘s blog post series on his 25 years of Ed Tech, and the book that followed, so when Lorna said that she had agreed to read the chapter on e-Learning Standards, and would I like to join her and make it a double act I thought… well, honestly I thought about how much I don’t enjoy reading stuff out loud for other people. But, I enjoy working with Lorna, and don’t get as many chances to do that as I would like, and so it happened.

I think the reading went well. You decide. Reading the definitions of the Dublin Core metadata element set  I learnt one thing: I don’t want to be the narrator for audiobook versions of tech standards.

And then there’s the “between the chapters” podcast interview, which Lorna and I have just finished recording with Laura Pasquini, which was fun. We covered a lot of the things that Lorna and I wanted to: that we think Martin was hard on Dublin Core Metadata, I think his view of it was tarnished by the IEEE LOM; but that we agree with the general thrust of what Martin wrote. Many EdTech Standards were not a success; certainly the experience that many in EdTech had with standards was not a good one. But we all learnt from the experience and did better when it came to dealing with OER (Lorna expands on this in her excellent post reflecting on this chapter). Also, many technical standards relevant to education were a success, and we use them every day without (as Martin says) knowing much about them. And there’s the thing: Martin probably should never have been in the position of knowing about Dublin Core, IEEE LOM and UK LOM Core; they should just have been there behind the systems that he used, making things work. But I guess we have to remember that back then there weren’t many Learning Technologists to go round and so it wasn’t so easy to find the right people to get involved.

We did forget to cover a few things in the chat with Laura.

We forgot how many elephants were involved in UK LOM Core.

We forgot “that would be an implementation issue”.

But my main regret is that we didn’t get to talk about #EduProg, which came about a few years later (the genesis story is on Lorna’s blog) as an analysis of a trend in Ed Tech that contrasted with the do-it-yourself-and-learn approach of EduPunk. EduProg was exemplified in many of the standards which were either “long winded and self-indulgent” or “virtuoso boundary pushing redefining forms and developing new techniques”, depending on your point of view. But there was talent there — many of the people behind EduProg were classically trained computer scientists. And it could be exciting. I for one will never forget Scott plunging a dagger into the keyboard to hold down the shift key while he ran arpeggios along the angle brackets. I hear it’s still big in Germany.

Thank you to Martin, Laura, Clint, Lorna and everyone who made the reading & podcast possible.

Added 5 Jan: here’s Lorna’s reflections on this recording.

[Feature image for this post, X-Ray Specs by @visualthinkery, is licenced under CC-BY-SA]


Mapping learning resources to curricula in RDF

from @ Sharing and learning

Some personal reflections on relating educational content to curriculum frameworks prompted by some conversation about the Oak National Academy (a broad curriculum of online material available to schools, based on the English national curriculum), and OEH-Linked-Frameworks (an RDF tool for visualizing German educational frameworks). It draws heavily on the BBC curriculum ontology (by Zoe Rose, I think). I’m thinking about these with respect to work I have been involved in such as K12-OCX and LRMI.

If you want to know why you would do this, you might want to skip ahead and read the “so what?” section first. But in brief: representing curriculum frameworks in a standard, machine-readable way, and mapping curriculum materials to that, would help when sharing learning resources.

Curriculum?

But first: curriculum. What does it mean to say “a broad curriculum of online material available to schools, based on the English national curriculum”? The word curriculum is used in several different ways (there are 71 definitions in the IGI Global dictionary), ranging from “the comprehensive multitude of learning experiences provided by school to its students” (source) to “the set of standards, objectives, and concepts required to be taught and learned in a given course or school year” (source). So curriculum in one sense is the teaching, in the other all that should be learnt. Those are different: the Oak National Academy provides teaching materials and activities (for learning experiences); the English National Curriculum specifies what should be learnt. Because very few people are interested in one but not the other, these two meanings often get conflated, which is normally fine, but here I want to treat them separately and show how they relate to each other. Let’s call them Curriculum Content and Materials, and Curriculum Frameworks respectively, think about how to represent the framework, and then how to relate the content and materials to that framework.

Curriculum Frameworks

This is where the BBC curriculum ontology comes in. It has a nice three-dimensional structure, creating the framework on the axes of Field of Study, Level and Topic.

The three dimensions of the BBC Curriculum Ontology model. From https://www.bbc.co.uk/ontologies/curriculum

The levels are those that are defined by the national curriculum for progression through English schools (KS = Key Stage, children aged 5 to 7 are normally at Key Stage 1; GCSE is the exam typically taken at 16, so represents the end of compulsory education, though students may stay on to study A-levels or similar after that). The levels used in curriculum frameworks tend to be very contextual, normally relating to the grade levels and examinations used in the school system for which the framework is written. It may be useful to relate them to more neutral (or at least, less heavily contextualised) schemes such as the levels of the EQF, or the levels of the Connecting Credentials framework.

The field of study may be called the “educational subject” (though I don’t like writing RDF statements with Subject as the object) or, especially in HE, “discipline”. Topics are the subjects studied within a field or discipline. I don’t much like the examples given here because the topics do just look like mini fields of study. I would wonder where to put “biology”: is it a topic within science or a field of study in its own right? A couple of points about field of study and one about topic may help clarify.

In higher education a field of study is often called a discipline, which highlights that it is not just the thing being studied, but a community with a common interest and agreed norms on the tools and techniques used to study the subject. Most HE disciplines have an adjectival form that relates to people (I am a Physicist, she is a Humanist). In schools, fields of study are sometimes artifacts of the curriculum design process with no real equivalent outside of school. These artifacts often seem to have names that are initialisms that you won’t come across outside of specific school settings, for example RMPS (Religious, Moral and Philosophical Studies), PE (Physical Education), PSHE (personal, social, health and economic education), ESL (English as a Second Language) / ESOL (English for Speakers of Other Languages), ICT (Information and Computer Technology), DT (Design and Technology) — but very often the fields of study will have the same names as the top levels of a topic taxonomy (math/s, english, science). Most fields of study will have someone in a school who is a teacher of that field or leader of its teaching for the school.

Topics are more neutral of context, less personal, more like the subjects of the Dewey Decimal System (at least more like they are supposed to be). It’s important to note that the same topic may be covered in different fields of study / disciplines in different ways. For example, statistics may be a discipline in itself (part of maths), with a very theoretical approach taken to studying its topics, but those topics may also be studied in biology, physics and economics. Crucially, when it comes to facilitating discovery of suitable content materials for the curriculum, the approach taken and examples used will probably mean a resource aimed at teaching a statistics topic for economics is not very useful for teaching the same topic as part of physics or mathematics.

Onto these axes get mapped what are variously called learning objectives, intended learning outcomes, learning standards, and so on: the competences you want the students to achieve. They exist in the framework as statements of what knowledge, skills and abilities a student is expected to be able to demonstrate. Let’s call them competences because that is a term that has wide currency beyond education; for example, a competence can link educational outcomes to job requirements. There is a lot written about competences. There’s lots about how to write competence statements, including the form the descriptions should take (“you will be able to …”); how to form them as objectives (specific, measurable, …); how they relate to context (“able to … under supervision”); how they relate to each other (“you must learn to walk before you learn to run”); what tools should be used (“able to use a calculator to …”). And, of course, there are the specifications, standards and RDF vocabularies for representing all these aspects of competences, e.g. ASN, IMS CASE, ESCO. Let’s not go into that except to say that a curriculum framework will describe these competences as learning objectives and map them to the Field of Study, Topic and Level schemes used by the framework. The same terms described below for mapping content to frameworks can be useful in doing this.

Mapping Curriculum Content to Curriculum Frameworks

So we have some curriculum content material; how do we map it to the curriculum framework?

It may help to model the content material in the way K12-OCX did, following oerschema, as a hierarchy of course, module, unit, lesson, activity, with associated materials and assessments:

The content model used by K12-OCX, based on oerschema.org: a hierarchy of course, module, unit, lesson and activity, with associated materials and assessments.

(Aside: any given course may lack modules, or units, or both.)

Breaking curriculum materials down from monolithic courses to their constituent parts (while keeping the logical and pedagogical relationships between those parts) creates finer grained resources more easily accommodated into existing contexts.

At the Course level, oerschema.org gives us the property syllabus, which can be used to relate the course to the framework as a whole, called by oerschema a CourseSyllabus (“syllabus” is another word used in various ways, so let’s not worry about any difference between a syllabus and a curriculum framework). This may also be useful at finer-grained levels, e.g. Module and Unit.

@prefix oer: <http://oerschema.org/> .
@prefix sdo: <http://schema.org/> .
@base <http://example.org/> .
<myCurriculumFramework> a oer:CourseSyllabus .
<myCourse> a oer:Course, sdo:Course ;
    oer:syllabus <myCurriculumFramework> .

[example code in Turtle; there’s a JSON-LD version of it all below]

We can use the schema.org educationalLevel property to relate the resource to the educational level of the framework:

<myCourse> sdo:educationalLevel <myCurriculumFramework/Levels/KS4> .

Let’s say our course deals with Mathematics and has a Unit on Statistics (no modules). We can use the schema.org AlignmentObject to say that there is an educationalAlignment from my Course and my Unit to the field of study (that is, in the language of the alignment object, the educational subject). We can use the schema.org about property to say what the topic is:

<myCourse> sdo:hasPart <myUnit> ;
    sdo:educationalAlignment [
        a sdo:AlignmentObject ;
        sdo:alignmentType "educationalSubject";
        sdo:targetUrl <myCurriculumFramework/FieldsOfStudy/Mathematics>
    ] .

<myUnit> a oer:Unit, sdo:LearningResource ;
    sdo:educationalAlignment [
        a sdo:AlignmentObject ;
        sdo:alignmentType "educationalSubject";
        sdo:targetUrl <myCurriculumFramework/FieldsOfStudy/Mathematics>
    ] ;
    sdo:about <myCurriculumFramework/Topic/Statistics> .

For lessons, and especially for activities, we can relate to competences as individual learning objectives. The schema.org teaches property is designed for this:

<myUnit> sdo:hasPart <myLesson> .
<myLesson> a oer:Lesson, sdo:LearningResource ;
    sdo:hasPart <myActivity> .

<myActivity> a oer:Activity, sdo:LearningResource ;
   sdo:teaches <myCurriculumFramework/Objective/Competence0123> .

Whether you repeat about and educationalAlignment statements linking to “Field of Study” and “Topic” in the descriptions of Lessons and Activities depends on how much you want to rely on inferencing that something which is part of a Course has the same Field of Study, something which is part of a Unit has the same Topic, and so on. If your parts might get scattered, or used by systems that don’t do RDF inferencing, then you’ll want to repeat them (they will, you should). I haven’t done so here just to avoid repetition.
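For illustration, here is a minimal sketch of what that repetition might look like for the Lesson, in the JSON-LD flavour shown at the end of this post (the values are simply copied down from the Course and Unit above; nothing new is asserted):

{
  "@context": {
    "oer": "http://oerschema.org/",
    "sdo": "http://schema.org/"
  },
  "@id": "http://example.org/myLesson",
  "@type": ["oer:Lesson", "sdo:LearningResource"],
  "sdo:educationalAlignment": {
    "@type": "sdo:AlignmentObject",
    "sdo:alignmentType": "educationalSubject",
    "sdo:targetUrl": { "@id": "http://example.org/myCurriculumFramework/FieldsOfStudy/Mathematics" }
  },
  "sdo:about": { "@id": "http://example.org/myCurriculumFramework/Topic/Statistics" },
  "sdo:hasPart": { "@id": "http://example.org/myActivity" }
}

The Activity could be given the same about and educationalAlignment statements alongside its teaches statement.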

Finally, let’s link the competence statement to the framework (the framework here represented in a fairly crude way, not wanting to get into the intricacies of competence frameworks):

<myCurriculumFramework> a oer:CourseSyllabus, sdo:DefinedTermSet ;
    sdo:hasDefinedTerm <myCurriculumFramework/Objective/Competence0123> .

<myCurriculumFramework/Objective/Competence0123> a sdo:DefinedTerm,  
                                                   sdo:LearningResource ;
    sdo:educationalAlignment [ 
        a sdo:AlignmentObject ; 
        sdo:alignmentType "educationalSubject"; 
        sdo:targetUrl <myCurriculumFramework/FieldsOfStudy/Mathematics> 
    ] ;
    sdo:about <myCurriculumFramework/Topic/Statistics> ;
    sdo:educationalLevel <myCurriculumFramework/Levels/KS4> ;
    sdo:description "You will be able to use a calculator to find the mean..." ;
    sdo:name "Calculate the arithmetic mean" .

(Aside: Modelling a learning objective / competence as a defined term and a LearningResource is probably the most  controversial thing here, but I think it works for illustration.)

So What?

Well this shows several things I think would be useful:

  • Having metadata for a curriculum (whatever it is) will help others find it and use it, if suitable tools for using the metadata exist.
  • Tools are more likely to exist if the metadata is nicely machine readable (RDF, not PDF) and standardised (widely used vocabularies like schema.org).
  • A common model for curriculum frameworks will make mapping from one to another easier. For example, it’s easier to map from UK to US educational levels if they are clearly and separately defined.
  • Breaking curriculum materials down from monolithic courses to their constituent parts (while keeping the logical and pedagogical relationships between those parts) creates finer grained resources more easily accomodated into existing contexts.
  • Mapping curriculum materials to learning objectives in a given framework makes it easier to find resources for that curriculum, which is great, but the world is bigger than one curriculum.
  • Mapping both learning objectives and curriculum materials to the axes of the curriculum framework model makes it easier to find appropriate resources across different curricula.

Finally, if you prefer your RDF as JSON-LD:

{
  "@context": {
    "oer": "http://oerschema.org/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "schema": "http://schema.org/",
    "sdo": "http://schema.org/",
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "@graph": [
    {
      "@id": "http://example.org/myCurriculumFramework",
      "@type": [
        "oer:CourseSyllabus",
        "schema:DefinedTermSet"
      ],
      "schema:hasDefinedTerm": {
        "@id": "http://example.org/myCurriculumFramework/Objective/Competence0123"
      }
    },
    {
      "@id": "http://example.org/myActivity",
      "@type": [
        "oer:Activity",
        "schema:LearningResource"
      ],
      "schema:teaches": {
        "@id": "http://example.org/myCurriculumFramework/Objective/Competence0123"
      }
    },
    {
      "@id": "http://example.org/myCourse",
      "@type": [
        "schema:Course",
        "oer:Course"
      ],
      "oer:syllabus": {
        "@id": "http://example.org/myCurriculumFramework"
      },
      "schema:educationalAlignment": {
        "@id": "_:ub132bL12C30"
      },
      "schema:educationalLevel": {
        "@id": "http://example.org/myCurriculumFramework/Levels/KS4"
      },
      "schema:hasPart": {
        "@id": "http://example.org/myUnit"
      }
    },
    {
      "@id": "http://example.org/myLesson",
      "@type": [
        "schema:LearningResource",
        "oer:Lesson"
      ],
      "schema:hasPart": {
        "@id": "http://example.org/myActivity"
      }
    },
    {
      "@id": "http://example.org/myUnit",
      "@type": [ 
        "oer:Unit",
        "schema:LearningResource"
       ],
       "schema:about": {
        "@id": "http://example.org/myCurriculumFramework/Topic/Statistics"
      },
      "schema:educationalAlignment": {
        "@id": "_:ub132bL21C30"
      },
      "schema:hasPart": {
        "@id": "http://example.org/myLesson"
      }
    },
    {
      "@id": "_:ub132bL12C30",
      "@type": "schema:AlignmentObject",
      "schema:alignmentType": "educationalSubject",
      "schema:targetUrl": {
        "@id": "http://example.org/myCurriculumFramework/FieldsOfStudy/Mathematics"
      }
    },
    {
      "@id": "_:ub132bL40C30",
      "@type": "schema:AlignmentObject",
      "schema:alignmentType": "educationalSubject",
      "schema:targetUrl": {
        "@id": "http://example.org/myCurriculumFramework/FieldsOfStudy/Mathematics"
      }
    },
    {
      "@id": "http://example.org/myCurriculumFramework/Objective/Competence0123",
      "@type": [
        "schema:LearningResource",
        "schema:DefinedTerm"
      ],
      "schema:about": {
        "@id": "http://example.org/myCurriculumFramework/Topic/Statistics"
      },
      "schema:description": "You will be able to use a calculator to find the mean ...",
      "schema:educationalAlignment": {
        "@id": "_:ub132bL40C30"
      },
      "schema:educationalLevel": {
        "@id": "http://example.org/myCurriculumFramework/Levels/KS4"
      },
      "schema:name": "Calculate the arithmetic mean"
    },
    {
      "@id": "_:ub132bL21C30",
      "@type": "schema:AlignmentObject",
      "schema:alignmentType": "educationalSubject",
      "schema:targetUrl": {
        "@id": "http://example.org/myCurriculumFramework/FieldsOfStudy/Mathematics"
      }
    }
  ]
}

 


The confusing concepts of credentials and competences

from @ Sharing and learning

Back in July and August the Talent Marketplace Signaling W3C Community Group made good progress on how to relate JobPostings to Educational and Occupational Credentials (qualifications, if you prefer) and Competences. These seem to me to be central concepts for linking between the domain of training, education and learning and the domain of talent sourcing, employment and career progression; a common understanding of them would be key to people from one domain understanding signals from the other. I posted a sketch of how I saw these working, and that provoked a lot of discussion, some of which led me to evaluate what leads to misunderstandings when trying to discuss such concepts.

This post is my attempt to describe the source of those misunderstandings and suggest that we try to avoid them. Finding clarity in talking about competences and credentials is certainly not “all my own work”. Jim Goodell, Alex Jackl and Stuart Sutton and many others in the Talent Signal group and beyond have all been instrumental in navigating us through to what I hope will be a common understanding, documentation of which is currently being edited by Alex. This is, however, my own take on some of the factors feeding into that discussion; I wouldn’t want to burden anyone else with any blame for what’s described below. Also, I have simplified some of the issues raised in the discussion, and for that reason do not want to suggest that they represent the views of specific people. If you want to look at who said what, it’s best you read it in their own words on the email list.

So what are the factors that make talking about competences and credentials difficult?

abstractness

We might think that we know what a credential/qualification is: the University of Bristol offers a BSc in Physics; I have a BSc in physics from the University of Bristol–I could show you the website describing that qualification and a photo of my certificate. But they are not the same thing: there’s a difference between the abstract credential being offered and the specific certificate that I have, just as the story “Of Mice and Men” is not the same as the thing with the ISBN 978-0582461468, and the thing identified by that ISBN is not the physical copy of the book that I have on my shelf. We’re all familiar with distinguishing between abstract classes (think of Platonic archetypes) and specific instances and we’re pretty good at it, but let’s just acknowledge that it’s difficult. It is difficult to know what level of generality to give to the abstraction (or abstractions, it’s often not as simple as instance and class); it’s difficult to know the words that you can use to make clear which you’re talking about, and it’s easy to talk at cross-purposes by incorrectly assuming that we have made that clear.

metonymy

Naming things is hard, especially abstract things. One way that we try to deal with this is to refer to things by relation to something more concrete (in our experience): thus we call programmes of study after the credential they lead to (“Jamie is doing an MA in Film Studies”), or we call parts of a course a skill (“Phil has completed 20 skills in Duolingo Greek”), or we refer to people after the credential they hold (“Google hires lots of PhDs”). This may help in narrow contexts: if all you ever talk about is programmes, courses and their components it doesn’t matter if you call them after credentials and skills; but when you start doing that while talking to someone who usually only deals with credentials and skills then you’ll cause confusion.

jargon

Another way to deal with naming abstraction is to coin terms that have specific meanings in context. The only problem with this is that in the Talent Signal work the point is that we are trying to talk across contexts, and the odds are that the jargon used is either meaningless out of context or, worse, means something different. (This latter is especially likely if it is a metonym. Seriously, don’t do metonyms.)

different approaches

As I wrote while I was thinking about this, different technical modelling approaches talk about different things. Specifically, in RDF we refer to things in the outside world: a description of a person is about that person, and the identifier used to say what it is about is the identifier of an instance who is an actual person. When building data objects we might create a class of Person and have instances of that class to describe individual people. So for RDF the instance of Person is out there in the world; for the data object modellers the instance of Person only exists in an information system. This matters if you want to get a copy of the instance. Of course the reality is that what is in the information system is a description of a person, and we have just hit metonymy again.

fallacy of the beard

Bearing all that in mind we can come up with a set of definitions for Competences and Credentials (the abstract things), Competence and Credential Definitions (descriptions of generic competences and credentials that may exist in information systems or elsewhere), and Competence Assertions and Credential Awards (associating instances of competences and credentials with other things).

One use of a credential award is to assert that an individual has achieved a certain competence, so is a credential just a competence assertion? Here we hit some of the issues raised in discussion: it was important to some people that institutions should be able to “offer competences” as distinct from “issuing credentials”. I would rephrase that as making Competence Assertions without there being a Credential Award. This seemed tied to the idea that a credential was related to a complete course or programme whereas a competence was related to a part of the course.

In RDF we make assertions, so is the statement <Phil> <hasAbility> <ShoelaceTying> a competence assertion? If so, what more (if anything) do you need to have a Credential Award? We seem agreed that a Credential is somewhat more substantial and more formal, but the problem with that is that it is a judgement based on gradations along a continuum, not a clear distinction. That’s not to say that credentials and competence assertions are not distinct. I am clean shaven. If I don’t shave tomorrow I won’t have a beard; if I don’t shave for one more day or another after that I still won’t have one; so at what point would I have a beard? How many days I have to go unshaven before my stubble becomes a beard is impossible to define, but that does not mean that beards don’t exist as a distinct category.

conclusion

If we acknowledge and identify the right abstract concepts, recognise that people in different contexts will understand jargon differently and take different approaches to what they consider as important parts of a model, and avoid metonyms, then I think we can make ourselves better understood.


#OER18 Open to all

from @ Sharing and learning

I spent the last couple of days in Bristol, a city I know well: I went to University there (undergrad, PhD and post doc in physics and materials science), my wife’s parents live there. I’ll be honest, meeting my friends from the OER community in a city of which I am very fond was part of what attracted me to this conference. The theme of the conference, “open to all,” with discussions about OER in the context of colonialism, was less attractive to me. Look at the rest of this blog, you’ll see I am much more comfortable talking about technical specifications, APIs and infrastructure to support the creation and dissemination of OER.

Two sides of a quadrangle of small, 17th Cent., pink terraced cottages.
Merchant Venturer’s alms houses, Bristol, photo by Eirian Evans, via MediaWiki, Licence CC:BY-SA

Bristol has a dark history. Like many towns and cities in Britain, it was built on the slave trade; Bristol more directly than others. I stayed at the Merchant Venturers’ Alms houses, built with the money of Edward Colston, a Bristolian “philanthropist and slavetrader” [wikipedia]. There has been a lot of debate in Bristol about whether Colston’s name should still be commemorated in cultural venues and schools. I would recommend the Almshouses to anyone who wanted to stay in an apartment in a lively part of town as an alternative to run-of-the-mill corporate hotels.

At the conference, I did get my hoped-for catch-up with old friends and the chance to meet new ones, and I got to talk with people about technical platforms, interoperability of eTextBooks, and infrastructure for disseminating OER. That much was expected. I share some of Lorna Campbell‘s background, and I think that she encapsulated the UK OER (#UKOER?) movement superbly in her opening keynote.

The unexpected pleasure was how much I enjoyed and learned from the contributions of Momodou Sallah (keynote), Nick Baker (paper) and Taskeen Adam (contribution to closing plenary), and Maha Bali and Catherine Cronin (& many others in a discussion session). These people are all great communicators, talking about issues (colonialism, politics of OER in the global south, ideas of openness and availability of education from non-western cultures) that are not part of my background. I could have been out of my comfort zone, but they made me feel comfortable. I wish that many involved in science communication would learn from this.

We need to talk about the role of right-wing libertarian wingnuts in open

A plaque on tree reading Caucasian Wingnut [latin name] pterocarya fraxinifolia
Caucasian wingnut sign, by Ian Poellet via wikimedia commons.
I mean that photo of Eric S. Raymond, keyboard in one hand, gun in the other, shown by David Wiley during his keynote. Look at what wikipedia says about Raymond’s political views. If you haven’t followed any links so far (I see the WordPress logs, I know you don’t) follow that one and come back.

If you just read the opening sentence of that section

“Raymond is a member of the Libertarian Party. He is a gun rights advocate…”

go back and read the rest. Read the bit about

Raymond accused the Ada Initiative and other women in tech groups of attempting to entrap male open source leaders and accuse them of rape…”,

and the bit about

Raymond is also known for claiming that “Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence…”

and so on.

I have read the Cathedral and the Bazaar, I do know Raymond’s contribution to open source software. Even coming from a background in materials science, I do understand concepts like the genetic fallacy and wrongness of ad hominem attacks. And I do not think we should be recommending this person’s work to the OER community.


Wikidata driven timeline

from @ Sharing and learning

I have been to a couple of wikidata workshops recently, both involving Ewan McAndrew, between which I read Christine de Pizan‘s Book of the City of Ladies(*). Christine de Pizan is described as one of the first women in Europe to earn her living as a writer, which made me wonder what other female writers were around at that time (e.g. Julian of Norwich and, err…). So, at the second of these workshops, I took advantage of Ewan’s expertise, and the additional bonus of Navino Evans, cofounder of Histropedia, also being there, to create a timeline of medieval European female writers. (By the way, it’s interesting to compare this to Asian female writers–I was interested in Christine de Pizan and wanted to see how she fitted in with others who might have influenced her or attitudes to her, and so didn’t think that Chinese and Japanese writers fitted into the same timeline.)

Histropedia timeline of medieval female authors (click on image to go to interactive version)

This was generated from a SPARQL query:

#Timeline of medieval european female writers
#defaultView:Timeline
SELECT ?person ?personLabel ?birth_date ?death_date ?country (SAMPLE(?image) AS ?image) WHERE {
  ?person wdt:P106 wd:Q36180; # find everything that is a writer
          wdt:P21 wd:Q6581072. # ...and a human female
  OPTIONAL{?person wdt:P2031 ?birth_date} # use floruit dates if present for birth/death dates
  OPTIONAL{?person wdt:P2032 ?death_date} # as some very imprecise dates give odd results
  ?person wdt:P570 ?death_date. # get their date of death
  OPTIONAL{?person wdt:P569 ?birth_date} # get their birth date if it is there
  ?person wdt:P27 ?country.   # get their country
  ?country wdt:P30  wd:Q46.   # we want country to be part of Europe
  FILTER (year(?death_date) < 1500) FILTER (year(?death_date) > 600)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  OPTIONAL { ?person wdt:P18 ?image. }
}
GROUP BY ?person ?personLabel ?birth_date ?death_date ?country
Limit 100

[run it on wikidata query service]

Reflections

I’m still trying to get my head around SPARQL, Ewan and Nav helped a lot, but I wouldn’t want to pass this off as exemplary SPARQL. In particular, I have no idea how to optimise SPARQL queries, and the way I get birth_date and death_date to be the start and end of when the writer flourished, if that data is there, seems a bit fragile.

It was necessary to use floruit dates because some of the imprecise birth & death dates lead to very odd timeline displays: born C12th, died C13th showed as being alive for 200 years.

There were other oddities in the wikidata. When I first tried, Julian of Norwich didn’t appear because she was a citizen of the Kingdom of England, which wasn’t listed as a country in Europe. Occitania, on the other hand, was. That was fixed. More difficult was a writer from Basra who was showing up because Basra was in the Umayyad Caliphate, which included Spain and so was classed as a European country. Deciding what we mean by European has never been easy.

Given the complexities of the data being represented, it’s no surprise that the Wikidata data model isn’t simple. In particular I found that dealing with qualifiers for properties was mind bending (especially with another query I tried to write).

Combining my novice level of SPARQL and the complexity of the Wikidata data model, I could definitely see the need for SPARQL tutorials that go beyond the simple “here’s how you find a triple that matches a pattern” level.

Finally: histropedia is pretty cool.

Footnote:

The Book of the City of Ladies is a kind of Women in Red for Medieval Europe. Rosalind Brown-Grant’s translation for Penguin Classics is very readable.


Thoughts on Support for Technology Enhanced Learning in HE

from @ Sharing and learning

I was asked to put forward my thoughts on how I thought the use of technology to enhance teaching and learning should be supported where I work. I work in a UK University that has campuses overseas, and which is organised into Schools (Computer Science is in a School with Maths, to form one of the smaller schools). This was my first round brain dump on the matter. It looks like something might come of it, so I’m posting it here asking for comments.

Does any of this look wrong?

Do you/ have you worked in a similar or dissimilar unit and have any suggestions for how well that worked?

What would be the details that need more careful thought?

Get in touch directly by email or use the form below (if the latter let me know if you don’t want your reply publishing).

Why support Technology Enhanced Learning (TEL)?

Why would you not? This isn’t about learning technology for its own sake, it’s about enhancing learning and teaching with technology. Unless you deny that technology can in any way enhance teaching and learning, the remaining questions centre on how technology can help and how much that is worth. Advances in technology and in our understanding of how to use it in teaching and learning create a “zone of possibility,” the extent of which, and the success with which it is exploited, depend on the intersection of teachers’ understanding of the technologies being offered and the pedagogies suitable for their subject (Dirkin & Mishra, 2010 [paywalled?]).

Current examples of potential enhancement which are largely unsupported (or supported only by ad hoc provision) include

  • Online exams in computer science
  • Formative assessment and other formative exercises across the school
  • Providing resources for students learning off-campus
  • Supporting the delivery of course material when students won’t attend lectures
  • Providing course information to students

Location of support: in School, by campus, or central services?

There are clearly some services that apply institution wide (VLE) or need to be supported at each campus (computer labs); however, there are dangers to centralising too much. Centralisation creates a division between the support and the people who need it, a division which is reinforced by separation of funding and management lines for the service and the academic provision. This division makes it difficult for those who understand the technology and those who understand the pedagogy of the subject being taught to engage around the problems to be solved. Instead they interact but stay within the remits laid down by their management structures.

There should of course be strong links between the support in my School and others, central support and campus specific support, but an arrangement where these links are prioritised over the link between support for TEL in maths and computing and the provision of teaching and learning in maths and computer science seems wrong.

What support?

This is something of a brain dump based on current activity, in no particular order.

  • Seminar series and other regular meetings to gather and spread new ideas.
  • Developing resources for off-campus learning (currently in CS we need to provide support materials based on existing courses for a specific programme); these and similar materials could also be used to support students on conventional courses who don’t attend lectures.
  • Managing tools and systems for formative assessment and other formative experiences, e.g. mathematical and programming practice.
  • Developing resources and systems for working with partner institutions who deliver courses we accredit, some of which may be applicable to mainstream teaching.
  • Student course information website: maintenance and updating information, liaison with central student portal.
  • Online exams, advice on question design and managing workflow from question authoring to test delivery.
  • Evaluation of innovative teaching (where innovative is defined as something for which we are unsure enough of the benefits for it to be worth evaluating).[*]
  • Maintain links with development organisations in Learning Technology, e.g. ALT and Jisc and scholarship in areas such as digital pedagogy and open education which underpin technology enhanced learning.
  • Liaise with central & campus services, e.g. VLE management group
  • Advise staff in school on use of central facilities e.g. BlackBoard
  • Liaise with other schools. There is potential to provide some of these services to other schools (or vice versa), assuming financial recompense can be arranged.

[*Note: this raises the question of whether the support should be limited to technology to enhance learning, or should address other innovations too.]

Who?

This needs to be provided by a core of people with substantial knowledge of learning technology, who might also contribute to other activities in the school. We have a group of three or four people who can do this. It is a little biased towards Computer Science and to one campus, so thought should be given to how to bring in other subjects and locations.

We would involve project students and interns provided this was done in such a way as to contribute sustainable enhancement of a service or the creation of new resources. For example, we would use tools such as git so that each student left work that could be picked up by others. As well as supervising project students within the group we could co-supervise with academic staff who had their own ideas for learning-related student projects. This would help keep tight contact with day-to-day teaching.

Funding and management

This support needs an allocated budget and well controlled project management. Funding for core staff should be long term on a par with commitment to teaching within the School. Management and reporting should be through the Director of Learning and Teaching and the Learning and Teaching Committee with information and discussion at the subject Boards of Studies as appropriate.

Reference

Dirkin, K., & Mishra, P. (2010). Values, Beliefs, and Perspectives: Teaching Online within the Zone of Possibility Created by Technology Retrieved from https://www.learntechlib.org/p/33974/

 

 

Comments Please


Flying cars, digital literacy and the zone of possibility

from @ Sharing and learning

Where’s my flying car? I was promised one in countless SF films from Metropolis through to Fifth Element. Well, they exist. Thirty seconds on the search engine of your choice will find you a dozen or so working prototypes (here’s a YouTube video with five).

A fine and upright gentleman flying in a small helicopter-like vehicle.
Jess Dixon’s flying automobile c. 1940. Public Domain, held by State Library and Archives of Florida, via Flickr.

They have existed for some time. Come to think of it, the driving around on the road bit isn’t really the point. I mean, why would you drive when you could fly? I guess a small helicopter and somewhere to park would do.

So it’s not lack of technology that’s stopping me from flying to work. What’s more of an issue (apart from cost and environmental damage) is that flying is difficult. The slightest problem like an engine stall or bump with another vehicle tends to be fatal. So the reason I don’t fly to work is largely down to me not having learnt how to fly.

The zone of possibility

In 2010 Kathryn Dirkin studied how three professors taught using the same online learning environment, and found that their approaches were very different. Not something that will surprise many people, but the paper (which unfortunately is still behind a paywall) is worth a read for the details of the analysis. What I liked from her conclusions was that how someone teaches online depends on the intersection of their knowledge of the content, their beliefs about how it should be taught, and their understanding of the technology. She calls this intersection the zone of possibility. As with the flying car, the online learning experience we want may already be technologically possible; we just need to learn how to fly it (and consider the cost and effect on the environment).

I have been thinking about Dirkin’s zone of possibility over the last few weeks. How can it be increased? Should it be increased? On the latter, let’s just say that if technology can enhance education, then yes it should (but let’s also be mindful about the costs and impact on the environment).

So how, as a learning technologist, to increase this intersection of content knowledge, pedagogy and understanding of technology? Teachers’ content knowledge I guess is a given: nothing that a learning technologist can do to change that. Also, I have come to the conclusion that pedagogy is off limits. No technology-as-a-Trojan-horse for improving pedagogy, please, that just doesn’t work. It’s not that pedagogic approaches can’t or don’t need to be improved, but conflating that with technology seems counterproductive. So that’s left me thinking about teachers’ (and learners’) understanding of technology. Certainly, the other week when I was playing with audio & video codecs and packaging formats that would work with HTML5 (keep repeating H264 and AAC in MPEG-4) I was aware of this. There seem to be three viable approaches: increasing digital literacy, tools to simplify the technology, and using learning technologists as intermediaries between teachers and technology. I leave it at that because it is not a choice of which, but of how much of each can be applied.

Does technology or pedagogy lead?

In terms of defining the “zone of possibility” I think that it is pretty clear that technology leads. Content knowledge and pedagogy change slowly compared to technology. I think that rate of change is reflected in most teachers’ understanding of those three factors. I would go as far as to say that it is counterfactual to suggest that our use of technology in HE has been led by anything other than technology. Innovation in educational technology usually involves exploration of new possibilities opened up by technological advances, not other factors. But having acknowledged this, it should also be clear that having explored the possibilities, a sensible choice of what to use when teaching will be based on pedagogy (as well as cost and the effect on the environment).


Three resources about gender bias

from @ Sharing and learning

These are three resources that look like they might be useful in understanding and avoiding gender bias. They caught my attention because I cover some cognitive biases in the Critical Thinking course I teach. I also cover the advantages of having diverse teams working on problems (the latter based on discussion of How Diversity Makes Us Smarter in SciAm). Finally, like any responsible  teacher in information systems & computer science I am keen to see more women in my classes.

Iris Bohnet on BBC Radio 4 Today programme 3 January.  If you have access via a UK education institution with an ERA licence you can listen to the clip via the BUFVC Box of Broadcasts.  Otherwise here’s a quick summary. Bohnet stresses that much gender bias is unconscious, individuals may not be aware that they act in biased ways. Awareness of the issue and diversity training is not enough on its own to ensure fairness. She stresses that organisational practise and procedures are the easiest effective way to remove bias. One example she quotes is that to recruit more male teachers job adverts should not “use adjectives that in our minds stereotypically are associated with women such as compassionate, warm, supportive, caring.” This is not because teachers should not have these attributes or that men cannot be any of these, but because research shows[*] that these attributes are associated with women and may subconsciously deter male applicants.

[*I don’t like my critical thinking students saying broad and vague things like ‘research shows that…’. It’s ok for a 3 minute slot on a breakfast news show but I’ll have to do better. I hope the details are somewhere in Iris Bohnet (2016), What Works: Gender Equality by Design]

This raised a couple of questions in my mind. If gender bias is unconscious, how do you know you do it? And, what can you do about it? That reminded me of two other things I had seen on bias over the last year.

An Implicit Association Test (IAT) on Gender-Career associations, which  I took a while back. It’s a clever little test based on how quickly you can classify names and career attributes. You can read more information about them on the Project Implicit website  or try the same test that I did (after a few disclaimers and some other information gathering, it’s currently the first one on their list).

A gender bias calculator for recommendation letters based on the words that might be associated with stereotypically male or female attributes. I came across this via Athene Donald’s blog post Do You Want to be Described as Hard Working? which describes the issue of subconscious bias in letters of reference. I guess this is the flip side of the job advert example given by Bohnet. There is lots of other useful and actionable advice in that blog post, so if you haven’t read it yet do so now.


XKCD or OER for critical thinking

from @ Sharing and learning

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read research papers, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by XKCD, and that XKCD is CC licensed. Open Education Resources can be fun.

how scientists think

[xkcd comic]

hypothesis testing

[xkcd comic; title text: "Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that."]

Blind trials

[xkcd comic]

Interpreting statistics

[xkcd comic]

p hacking

[xkcd comic]

Confounding variables

[xkcd comic; title text: "There are also a lot of global versions of this map showing traffic to English-language websites which are indistinguishable from maps of the location of internet users who are native English speakers"]

Extrapolation

[two xkcd comics]

Confirmation bias in information seeking

[two xkcd comics]

undistributed middle

[xkcd comic]

post hoc ergo propter hoc

Or correlation =/= causation.

[two xkcd comics; title text of one: "He holds the laptop like that on purpose, to make you cringe."]

Bandwagon Fallacy…

…and fallacy fallacy

[xkcd comic]

Diversity and inclusion

[xkcd comic]