Tag Archives: schema.org

Educational and occupational credentials in schema.org⤴

from @ Sharing and learning

Since the summer I have been working with the Credential Engine, which is based at Southern Illinois University, Carbondale, on a project to facilitate the description of educational and occupational credentials in schema.org. We have just reached the milestone of setting up a W3C Community Group to carry out that work.  If you would like to contribute to the work of the group (or even just lurk and follow what we do) please join it.

Educational and occupational credentials

By educational and occupational credentials I mean diplomas, academic degrees, certifications, qualifications, badges, etc., that a person can obtain by passing some test or examination of their abilities. (See also the Connecting Credentials project’s glossary of credentialing terms.)  These are already alluded to in some schema.org properties that are pending, for example an Occupation or JobPosting’s qualification or a Course’s educationalCredentialAwarded. These illustrate how educational and occupational credentials are useful for linking career aspirations with discovery of educational opportunities. The other entity type to which educational and occupational credentials link is Competence, i.e. the skills, knowledge and abilities that the credential attests. We have been discussing some work on how to describe competences with schema.org in recent LRMI meetings, more on that later.
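
As a concrete (and entirely hypothetical) illustration of that linking, the sketch below shows a course description that uses the pending educationalCredentialAwarded property. At this stage the credential is just a text string; defining a richer way to describe it is precisely what the new community group is for. The course name and credential are invented, and the markup is built as a Python dictionary mirroring schema.org's JSON-LD syntax.

import json

# A minimal sketch, not an agreed model: a course that awards a credential,
# described with the pending schema.org property educationalCredentialAwarded.
# The course name and credential below are invented for illustration.
course = {
    "@context": "http://schema.org/",
    "@type": "Course",
    "name": "Introduction to Data Management",
    "educationalCredentialAwarded": "Certificate in Data Management"
}

print(json.dumps(course, indent=2))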

Not surprisingly, a large amount of relevant work has already been done in the area of educational and occupational credentials. The Credential Engine has developed the Credential Transparency Description Language (CTDL), which has a lot of detail, albeit with a US focus and far more detail than would be appropriate for schema.org. The Badge Alliance has a model for open badges metadata that is applicable more generally. There is a W3C Verifiable Claims working group which is looking at credentials more widely, and at claims to hold them. Also, there are many frameworks which describe credentials in terms of the level and extent of the knowledge and competencies they attest: in the US Connecting Credentials covers this domain, while in the EU there are many national qualification frameworks and a Framework for Qualifications of the European Higher Education Area.

Potential issues

One potential issue is collision with existing work. We’ll have to make sure that we know where the work of the educational and occupational credential working group ends, i.e. what work would best be left to those other initiatives, and how we can link to the products of their work. Related to that is scope creep. I don’t want to get involved in describing credentials more widely, e.g. issues of identification, authentication and authorization; hence the rather verbose formula of ‘educational and occupational credential’. That formula also encapsulates another issue, a tension I sense between the educational world and the workplace: does a degree certificate qualify someone to do anything, or does it just relate to knowledge? Is an exam certificate a qualification?

The planned approach

I plan to approach this work in the same way that the schema course extension community group worked. We’ll use brief outline use cases to define the scope, and from these define a set of requirements, i.e. what we need to describe in order to facilitate the discovery of educational and occupational credentials. We’ll work through these to define how to encode the information with existing schema.org terms, or if necessary, propose new terms. While doing this we’ll use a set of examples to provide evidence that the information required is actually available from existing credentialling organizations.

Get involved

If you want to help with this, please join the community group. You’ll need a W3C account, and you’ll need to sign an assurance that you are not contributing any intellectual property that cannot be openly and freely licensed.

The post Educational and occupational credentials in schema.org appeared first on Sharing and learning.

Schema course extension update⤴

from @ Sharing and learning

This progress update on the work to extend schema.org to support the discovery of any type of educational course is cross-posted from the Schema Course Extension W3C Community Group. If you are interested in this work please head over there.

What aspects of a course can we now describe?
As a result of work so far addressing the use cases that we outlined, we now have answers to the following questions about how to describe courses using schema.org:

As with anything in schema.org, many of the answers proposed are not the final word on all the detail required in every case, but they form a solid basis that I think will be adequate in many instances.

What new properties are we proposing?
In short, remarkably few. Many of the aspects of a course can be described in the same way as for other creative works or events. However, we did find that we needed to create two new types, Course and CourseInstance, to distinguish between a course that could be offered at various times and a specific offering or section of that course. We also found the need for three new properties for Course: courseCode, coursePrerequisites and hasCourseInstance; and two new properties for CourseInstance: courseMode and instructor.

There are others under discussion, but I highlight these as proposed because they are being put forward for inclusion in the next release of the schema.org core vocabulary.
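
To make that concrete, here is a minimal sketch of what the proposed markup might look like, expressed as a Python dictionary in schema.org's JSON-LD style. The property names are the ones proposed above, but the course name, code, prerequisite and instructor values are invented.

import json

# A sketch using the proposed Course and CourseInstance terms.
# All values are hypothetical.
course = {
    "@context": "http://schema.org/",
    "@type": "Course",
    "name": "Introduction to Metadata",
    "courseCode": "META101",
    "coursePrerequisites": "Basic familiarity with XML",
    "hasCourseInstance": {
        "@type": "CourseInstance",
        "courseMode": "online",
        "instructor": {"@type": "Person", "name": "A. N. Example"}
    }
}

print(json.dumps(course, indent=2))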

Image: how Google will display information about courses in a search gallery.

More good news: the Google search gallery documentation for developers already includes information on how to provide the most basic info about Courses. This is where we are going.

Schema course extension progress update⤴

from @ Sharing and learning

I am chair of the Schema Course Extension W3C Community Group, which aims to develop an extension for schema.org concerning the discovery of any type of educational course. This progress update is cross-posted from there.

If the forming-storming-norming-performing model of group development still has any currency, then I am pretty sure that February was the “storming” phase. There was a lot of discussion, much of it around the modelling of the basic entities for describing courses and how they relate to core types in schema.org (the Modelling Course and CourseOffering & Course, a new dawn? threads). Pleased to say that the discussion did its job, and we achieved some sort of consensus (norming) around modelling courses in two parts:

Course, a subtype of CreativeWork: A description of an educational course which may be offered as distinct instances at different times and places, or through different media or modes of study. An educational course is a sequence of one or more educational events and/or creative works which aims to build knowledge, competence or ability of learners.

CourseInstance, a subtype of Event: An instance of a Course offered at a specific time and place or through specific media or mode of study or to a specific section of students.

hasCourseInstance, a property of Course with expected range CourseInstance: An offering of the course at a specific time and place or through specific media or mode of study or to a specific section of students.

(see Modelling Course and CourseInstance on the group wiki)

This modelling, especially the subtyping from existing schema.org types, allows us to meet many of the requirements arising from the use cases quite simply. For example, the cost of a course instance can be provided using the offers property of schema.org/Event.
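
Here is a sketch of how that cost requirement might be met without any new terms, on the assumption that CourseInstance, as a subtype of Event, inherits the offers property. The dates, price and currency below are made up.

import json

# A sketch, assuming the proposed CourseInstance type: cost expressed through
# the offers property inherited from Event. All values are invented.
course_instance = {
    "@context": "http://schema.org/",
    "@type": "CourseInstance",
    "startDate": "2016-09-01",
    "offers": {
        "@type": "Offer",
        "price": "200.00",
        "priceCurrency": "GBP"
    }
}

print(json.dumps(course_instance, indent=2))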

The wiki is working to a reasonable extent as a place to record the outcomes of the discussion. Working from the outline use cases page you can see which requirements have pages, and those pages that exist point to the relevant discussion threads in the mail list and, where we have got this far, describe the current solution.  The wiki is also the place to find examples for testing whether the proposed solution can be used to mark up real course information.

As well as the wiki, we have the proposal on github, which can be used to build working test instances on appspot showing the proposed changes to the schema.org site.

The next phase of the work should see us performing, working through the requirements from the use cases and showing how they can be met. I think we should focus first on those that look easy to do with existing properties of schema.org/Event and schema.org/CreativeWork.

Is there a Library shaped black hole in the web? Event summary.⤴

from @ Open World

Is there a Library shaped black hole in the web? was the question posed by an OCLC event at the Royal College of Surgeons last week that focused on exploring the potential benefits of using linked data to make library data available to users through the web. For a comprehensive overview of the event, I’ve put together a Storify of tweets here: https://storify.com/LornaMCampbell/oclc-linked-data

Following a truly dreadful pun from Laura J Wilkinson…

Owen Stephens kicked off the event with an overview of linked data and its potential to be a lingua franca for publishing library data. Some of the benefits that linked data can afford to libraries include improving the search, discovery and display of library catalogue record information, improved data quality and data correction, and the ability to work with experts across the globe to harness their expertise. Owen also introduced the Open World Assumption which, despite the coincidental title of this blog, was a new concept to me. The Open World Assumption states that

“there may exist additional data, somewhere in the world to complement the data one has at hand”.

This contrasts with the Closed World Assumption which assumes that

“data sources are well-known and tightly controlled, as in a closed, stand-alone data silo.”

Learning Linked Data
http://lld.ischool.uw.edu/wp/glossary/

Traditional library catalogues worked on the basis of the closed world assumption, whereas linked data takes an open world approach and recognises that other people will know things you don’t. Owen quoted Karen Coyle: “the catalogue should be an information source, not just an inventory”, and noted that while data on the web is messy, linked data provides the option to select sources we can trust.

Cathy Dolbear of Oxford University Press gave a very interesting talk from the perspective of a publisher providing data to libraries and other search and discovery services. OUP provides data to library discovery services, search engines, Wikidata, and other publishers. Most OUP products tend to be discovered via search engines; only a small number of referrals, 0.7%, come from library discovery services. OUP have two OAI-PMH APIs, but they are not widely used and OUP are very keen to learn why. The publisher’s requirements are primarily driven by search engines, but they would like to hear more from library discovery services.

Neil Jeffries of the Bodleian Digital Library was not able to be present on the day, but he overcame the inevitable technical hitches to present remotely.  He began by arguing that digital libraries should not be seen as archives or museums; digital libraries create knowledge and artefacts of intellectual discourse rather than just holding information. In order to enable this knowledge creation, libraries need to collaborate, connect and break down barriers between disciplines.  Neil went on to highlight a wide range of projects and initiatives, including VIVO, LD4L, CAMELOT, that use linked data and the semantic web to facilitate these connections. He concluded by encouraging libraries to be proactive and to understand the potential of both data and linked data in their own domain.

Ken Chad posed a question that often comes up in discussions about linked data and the semantic web; why bother?  What’s the value proposition for linked data?  Gartner currently places linked data in the trough of disillusionment, so how do we cross the chasm to reach the plateau of productivity?  This prompted my colleague Phil Barker to comment:

Ken recommended using the Jobs-to-be-Done framework to cross the chasm: concentrate on users, but rather than just asking them what they want, focus on asking them what they are trying to do and identify their motivating factors – e.g. how will linked data help to boost my research profile?

For those willing to take the leap of faith across the chasm, Gill Hamilton of the National Library of Scotland presented a fantastic series of Top Tips! for linked data adoption which can be summarised as follows:

  • Strings to things, aka people smart, machines stupid – library databases are full of strings; people are really smart at interpreting strings, but machines are really stupid. Turn those strings into things identified by URIs so machines can make sense of them (see the sketch after this list).
  • Never, ever, ever dumb down your data.
  • Open up your metadata – license your metadata CC0 and put a representation of it into the Open Metadata Registry.  Open metadata is an advert for your collections and enables others to work with you.
  • Concentrate on what is unique in your collections – one of the unique items from the National Library of Scotland that Gill highlighted was the order for the Massacre of Glencoe.  Ahem. Moving swiftly on…
  • Use open vocabularies.
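
As a rough illustration of the “strings to things” tip, the sketch below contrasts an author given only as a text string with the same author given as an entity identified by a URI, expressed as Python dictionaries in schema.org's JSON-LD style. The book, name and identifier are placeholders, not real authority data.

import json

# "Strings to things": the same creator first as a plain string, then as an
# entity identified by a URI. The identifier is a placeholder, not a real one.
as_string = {
    "@context": "http://schema.org/",
    "@type": "Book",
    "name": "An Example Title",
    "author": "Walter Scott"
}

as_thing = {
    "@context": "http://schema.org/",
    "@type": "Book",
    "name": "An Example Title",
    "author": {
        "@type": "Person",
        "@id": "http://example.org/authority/person/walter-scott",
        "name": "Walter Scott"
    }
}

print(json.dumps(as_thing, indent=2))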

Simples! Linked Data is still risky though; services go down, URIs get deleted and there’s still more playing around than actual doing, however it’s still worth the risk to help us link up all our knowledge.

Richard J Wallis brought the day to a close by asking how libraries can exploit the web of data to liberate their data. The web of data is becoming a web of related entities, and it’s the relationships that add value. Google recognised this early on when they based their search algorithm on the links between resources. The web now deals with entities and relationships, not static records.

One way to encode these entities and relationships is using Schema.org. Schema.org aims to help search engines interpret information on web pages so that it can be used to improve the display of search results. Schema.org has two components: an ontology for naming the types and characteristics of resources, their relationships with each other, and constraints on how to describe these characteristics and relationships; and the expression of this information in machine-readable formats such as microdata, RDFa Lite and JSON-LD. Richard noted that Schema.org is a form of linked data, but “it doesn’t advertise the fact”, and added that libraries need to “give the web what it wants, and what it wants is Schema.org.”
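
To show what that machine-readable side might look like in practice, here is a small sketch of a bibliographic description expressed as schema.org JSON-LD and wrapped in the script element a web page would carry. The record itself is invented, and the snippet is generated with Python simply to keep the example self-contained.

import json

# A sketch, with invented values: a simple bibliographic description as
# schema.org JSON-LD, wrapped in the <script> element a web page would embed.
record = {
    "@context": "http://schema.org/",
    "@type": "Book",
    "name": "An Example Catalogue Record",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "datePublished": "1900"
}

html_snippet = ('<script type="application/ld+json">\n'
                + json.dumps(record, indent=2)
                + '\n</script>')
print(html_snippet)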

If you’re interested in finding out more about Schema.org, Phil Barker and I wrote a short Cetis Briefing Paper on the specification which is available here: What is Schema.org?  Richard Wallis will also be presenting a Dublin Core Metadata Initiative webinar on Schema.org and its applicability to the bibliographic domain on the 18th of November, registration here: http://dublincore.org/resources/training/#2015wallis.

ETA: Phil Barker has also written a comprehensive summary of this event over at his own blog, Sharing and Learning, here: A library shaped black hole in the web?


Presentation: LRMI – using schema.org to facilitate educational resource discovery on the web and beyond⤴

from @ Sharing and learning

Today I am in London for the ISKO Knowledge Organisation in Learning and Teaching meeting, where I am presenting on LRMI and schema.org to facilitate educational resource discovery on the web and beyond. My slides are here; mostly they cover similar ground to presentations I’ve given before which have been captured on video or which I have written up in more detail. So here I’ll just point to my slides for today and summarise the new stuff.

LRMI uptake

People always want to know how much LRMI exists in the wild, and now schema.org reports this information. Go to the schema.org page for any class or property and at the top it says in how many domains they find markup for it. Obviously this misses the fact that not all domains are equal in extent or importance: finding LRMI on pjjk.net should not count as equal to finding it on bbc.co.uk, but as a broad indicator it’s OK: finding a property on 10 domains or 10,000 domains is a valid comparison. LRMI properties are mostly reported as found on 100-1000 domains (e.g. learning resource type) or 10-100 domains (e.g. educational alignment). A couple of LRMI properties have greater usage, e.g. typical age range and is based on URL (10-50,000 and 1-10,000 domains respectively), but I guess that reflects their generic usefulness beyond learning resources. We know that in some cases LRMI is used for internal systems but not exposed on web pages, but still the level of usage is not as high as we would like.

I also often get asked about support for creating LRMI metadata. This time I’m including a mention of how it is possible to write WordPress plugins and themes with schema/LRMI support, and of the Drupal schema.org plugin. I’m also aware of “tagging tools” associated with various repositories, e.g. the Learning Registry and the Illinois Shared Learning Environment. I think it’s always going to be difficult to answer this one, as the best support will always come from customising whatever CMS an organisation uses to manage their content or metadata, tailored to their workflow and the types of resources and educational contexts they work in.

As far as implementation for search goes, I still cover Google Custom Search, as in the previous presentations.

Current LRMI activities

The DCMI LRMI task group is active, and one of our priorities is to improve the support for people who want to use LRMI. Two activities are nearing fruition: firstly, we are hoping to provide examples for relevant properties and types on the schema.org web site. Secondly, we want to provide better support for the vocabularies used for properties such as alignment type (in the Alignment Object), learning resource type, etc., by way of clear definitions and machine-readable vocabulary encodings (using SKOS). We are asking for public review and comment on the LRMI vocabularies, so please take a look and get in touch.
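
For a sense of what a SKOS encoding of one such vocabulary term might look like, here is a rough sketch as JSON-LD built with Python. The concept URI, labels and definition are placeholders, not the actual LRMI vocabulary entries that are under review.

import json

# A rough sketch of a single SKOS concept, such as one learning resource type.
# The URI, labels and definition are placeholders, not real LRMI terms.
concept = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "http://example.org/lrmi/learningResourceType/lesson-plan",
    "@type": "skos:Concept",
    "skos:prefLabel": {"@value": "Lesson Plan", "@language": "en"},
    "skos:definition": {"@value": "A plan outlining the structure of a lesson.",
                        "@language": "en"},
    "skos:inScheme": {"@id": "http://example.org/lrmi/learningResourceType"}
}

print(json.dumps(concept, indent=2))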

Other work in progress is around schema for courses and extending some of the vocabularies mentioned above. We have monthly calls, if you would like to lend a hand please do get in touch.

Checking schema.org data with the Yandex structured data validator API⤴

from @ Sharing and learning

I have been writing examples of LRMI metadata for schema.org. Of course I want these to be valid, so I have been hitting various online validators quite frequently. This was getting tedious. Fortunately, the Yandex structured data validator has an API, so I could write a python script to automate the testing.

Here it is

#!/usr/bin/python
import httplib, urllib, json, sys 
from html2text import html2text
from sys import argv

noerror = False

def errhunt(key, responses):                 # a key and a dictionary,  
    print "Checking %s object" % key         # we're going on an err hunt
    if (responses[key]):
        for s in responses[key]:             
            for object_key in s.keys(): 
                if (object_key == "@error"):              
                    print "Errors in ", key
                    for error in s['@error']:
                        print "\tError code:    ", error['error_code'][0]
                        print "\tError message: ", html2text(error['message'][0]).replace('\n',' ')
                        noerror = False
                elif (s[object_key] != ''):
                    errhunt(object_key, s)
                else:
                    print "No errors in %s object" % key
    else:
        print "No %s objects" % key

try:
    script, file_name = argv 
except:
    print "\tError: Missing argument, name of file to check.\n\tUsage: yandexvalidator.py filename"
    sys.exit(0)

try:
    file = open( file_name, 'r' )
except:
    print "\tError: Could not open file ", file_name, " to read"
    sys.exit(0)

content = file.read()

try:
    validator_url = "validator-api.semweb.yandex.ru"
    key = "12345-1234-1234-1234-123456789abc"
    params = urllib.urlencode({'apikey': key, 
                               'lang': 'en', 
                               'pretty': 'true', 
                               'only_errors': 'true' 
                             })
    validator_path = "/v1.0/document_parser?"+params
    headers = {"Content-type": "application/x-www-form-urlencoded",
               "Accept": "*/*"}
    validator_connection = httplib.HTTPSConnection( validator_url )
except:
    print "\tError: something went wrong connecting to the Yandex validator."

try:
    validator_connection.request("POST", validator_path, content, headers)
    response = validator_connection.getresponse()
    if (response.status == 204):
        noerror= True
        response_data = response.read()   # to clear for next connection
    else:
        response_data = json.load(response)
    validator_connection.close()
except:
    print "\tError: something went wrong getting data from the Yandex validator by API."
    print "\tcontent:\n", content
    print "\tresponse: ", response.read()
    print "\tstatus: ", response.status
    print "\tmessage: ", response.msg
    print "\treason: ", response.reason
    print "\n"

    raise
    sys.exit(0)

if noerror :
    print "No errors found."
else:
    for k in response_data.keys():
        errhunt(k, response_data)

Usage:

$ ./yandexvalidator.py test.html
No errors found.
$ ./yandexvalidator.py test2.html
Checking json-ld object
No json-ld objects
Checking rdfa object
No rdfa objects
Checking id object
No id objects
Checking microformat object
No microformat objects
Checking microdata object
Checking http://schema.org/audience object
Checking http://schema.org/educationalAlignment object
Checking http://schema.org/video object
Errors in  http://schema.org/video
	Error code:     missing_empty
	Error message:  WARNING: Не выполнено обязательное условие для передачи данных в Яндекс.Видео: **isFamilyFriendly** field missing or empty  
	Error code:     missing_empty
	Error message:  WARNING: Не выполнено обязательное условие для передачи данных в Яндекс.Видео: **thumbnail** field missing or empty  
$ 

Points to note:

  • I’m no software engineer. I’ve tested this against valid and invalid files. You’re welcome to use this, but it might not work for you. (You’ll need your own API key). If you see something needs fixing, drop me a line.
  • Line 51: the connection to the validator has to be an HTTPS connection.
  • Line 58: because we ask for errors only (at line 46), an HTTP 204 response with no content means no errors were found, so no news is good news.
  • The function errhunt does most of the work, recursively.

The response from the API is a JSON object (converted into a Python dictionary object by line 62), the keys of which are the “id” you sent and each of the structured data formats that are checked. For each of these there is an array/list of objects, and these objects are either simple key-value pairs or the value may be an array/list of objects. If there is an error in any of the objects, the value for the key “@error” gives the details, as a list of objects with error_code and message key-value pairs. errhunt iterates and recurses through these lists of objects with embedded lists of objects.
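
To make that structure easier to picture, here is a hand-written sketch of the sort of dictionary the script ends up walking. It is not real validator output, just an illustration of the nesting that errhunt recurses through, with invented values.

# A hand-written sketch (not real validator output) of the response structure:
# top-level keys for the id and each format checked, lists of objects below,
# and an "@error" key wherever a problem was found.
example_response = {
    "id": [],
    "json-ld": [],
    "microdata": [
        {
            "http://schema.org/video": [
                {
                    "@error": [
                        {
                            "error_code": ["missing_empty"],
                            "message": ["<b>thumbnail</b> field missing or empty"]
                        }
                    ]
                }
            ]
        }
    ]
}

# With the script's errhunt() defined, the top-level loop would walk it:
# for k in example_response.keys():
#     errhunt(k, example_response)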