Tag Archives: schema.org

New work with the Credential Engine⤴

from @ Sharing and learning

I am delighted to be starting a new consulting project through Cetis LLP with the Credential Engine, helping them make credentials more transparent in order to empower everyone to make more informed decisions about credentials and their value. The problem that the Credential Engine sets out to solve is that there are (at the last count) over 730,000 different credentials on offer in the US alone. [Aside: let me translate ‘credential’ before going any further; in this context we mean what in Europe we call an educational qualification, from school certificates through to degrees, including trade and vocational qualifications and microcredentials.] For many of these credentials it is difficult to know their value in terms of who recognises them, the competences that they certify, and the occupations they are relevant for. This problem is especially acute in the relatively deregulated US, but it is also an issue wherever we have learner and worker mobility and need to recognise credentials from all over the world.

The Credential Engine sets out to alleviate this problem by making the credentials more transparent through a Credential Registry. The registry holds linked data descriptions of credentials being offered, using the Credential Transparency Description Language, CTDL, which is based largely on schema.org. (Note that neither the registry nor CTDL deals with information relating to whether an individual holds any credential.) These descriptions include links to Competence Frameworks described in the Credential Engine’s profile of the Achievement Standards Network vocabulary, CTDL-ASN. The registry powers a customizable Credential Finder service as well as providing a linked data platform and an API for partners to develop their own services–there are presentations about some example third-party apps on the Credential Engine website.
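To make the registry descriptions mentioned above concrete, here is a sketch of that kind of linked data expressed with the schema.org terms that CTDL builds on. The credential, organisation and competency framework named here are invented for illustration; an actual registry record uses CTDL's own richer vocabulary.

```python
import json

# An illustrative credential description using schema.org terms.
# All names and URLs below are invented for the example.
credential = {
    "@context": "https://schema.org",
    "@type": "EducationalOccupationalCredential",
    "name": "Example Certificate in Data Analysis",      # hypothetical
    "credentialCategory": "Certificate",
    "educationalLevel": "Intermediate",
    "recognizedBy": {
        "@type": "Organization",
        "name": "Example Accrediting Body"               # hypothetical
    },
    # Link to a competency in a published competence framework,
    # the kind of thing CTDL-ASN describes in full detail.
    "competencyRequired": {
        "@type": "DefinedTerm",
        "name": "Descriptive statistics",                # hypothetical
        "inDefinedTermSet": "https://example.org/competency-framework",
    },
}

jsonld = json.dumps(credential, indent=2)
```

Embedded in a web page as JSON-LD, a description like this is what makes a credential visible to search engines and third-party services alike.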

I have been involved with the Credential Engine since the end of 2015, when it was the Credential Transparency Initiative, and have since worked with them to strengthen the links between the CTDL and schema.org by leading a W3C Community Group to add EducationalOccupationalCredentials to schema.org. I have also helped represent them at a UNESCO World Reference Level expert group meeting, and helped partners interested in using data from the registry at an appathon in Indianapolis. I have come to appreciate that there is a great team behind the Credential Engine, and I am really looking forward to continuing to work with them. I hope to post regular updates here on the new work as we progress.

There are strong linkages between this work and the other main project I have on talent marketplace signalling, and with talent pipeline management in general; and also with other areas of interest such as course description, and with the work of the rest of Cetis in curriculum analytics and competency data standards. This new project isn’t exclusive, so I will continue to work in those areas. Please get in touch if you would like to know more about partnering with the Credential Engine or are interested in the wider work.

The post New work with the Credential Engine appeared first on Sharing and learning.

One year of Talent Marketplace Signaling⤴

from @ Sharing and learning

I chair the Talent Marketplace Signaling W3C Community Group; this progress report is cross-posted from its blog.

It is one year since the initial call for participation in the Talent Marketplace Signaling W3C Community Group. That seems like a good excuse to reflect on what we have done so far, where we are, and what’s ahead.

I’m biased, but I think progress has been good. We have 35 participants in the group, and we have had some expansive discussions to outline the scope and aims of the group, the detail of which we filled in with issues and use cases. We also had some illuminating discussions about how we conceptualize the domain we are addressing (see most of August in the mailing list archive). Most importantly, I think that we have made good on the aim, arising from our initial kick-off meeting, to identify issues arising from use cases and fix them individually with discrete enhancements to schema.org. Here’s a list of the fixes we have suggested that have been accepted by schema.org, drawn from the schema.org release log:

Translating those back to our use cases / issues we can now:

Looking forward…

First I want to note that many of those contributions have been accepted into what schema.org calls its pending section, which it defines as “a staging area for work-in-progress terms which have yet to be accepted into the core vocabulary”. While there are caveats about terms in pending being subject to change and that they should be used with caution, their acceptance into the core of the schema.org vocabulary relies on them being shown to be useful. So we have a task remaining of promoting and highlighting the use of these terms and showing how they are used. Importantly, “use” here means not just publishing data, but the existence of services built on that data.

Looking at the remaining issues that we identified from our use cases and examples, it seems that we have come to the end of those that can be picked off individually and dealt with without consequences elsewhere. Several are issues of choice, along the lines of “there’s more than one way to do X, can we clarify which is best?” Best practice is difficult to define and identify, and there will be winners and losers whatever option is picked. The choice will depend on analysis of whatever existing practice currently is, as well as trade-offs such as simplicity versus expressiveness. Another example where existing practice is important comes with issues that will affect how Google services such as Job Search work. Specifically, Google recommends values for employmentType that don’t seem to match all requirements, and these values are just textual tokens whereas we might want to suggest the more flexible and powerful DefinedTerm. However, we don’t want to recommend practice that conflicts with getting job postings listed properly by Google. While some Google search products leverage schema.org terms, the requirements that they specify for value spaces like the different employmentTypes are not defined in schema.org; and while schema.org development is open, other channels are required to make suggestions that affect Google products. The final category of open issue that I see is where a new corner of our domain needs to be mapped, rather than just one or two new terms being provided. This is the case for providing information about assessments, and for where we touch on providing information about the skills etc. that a person has.
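To make the employmentType trade-off concrete, here are the two shapes side by side as JSON-LD-style structures. The posting and the term-set URL are invented; whether consumers beyond schema.org-aware tools accept the DefinedTerm form is exactly the open question discussed above, so treat this as a sketch rather than a recommendation.

```python
# Option 1: the bare textual token, as Google's job posting
# documentation recommends for employmentType.
posting_with_token = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Example Data Librarian",   # hypothetical posting
    "employmentType": "FULL_TIME",
}

# Option 2: the richer, more flexible DefinedTerm alternative,
# pointing into a concept scheme (URL invented for the example).
posting_with_defined_term = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Example Data Librarian",
    "employmentType": {
        "@type": "DefinedTerm",
        "name": "Full time",
        "termCode": "FULL_TIME",
        "inDefinedTermSet": "https://example.org/employment-types",
    },
}
```

The second form carries the same token in termCode, so in principle a consumer could fall back to it, but that behaviour is not something the group can assume.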

So, there is more work to be done. I think starting with some further work on examples and best practice is a good idea. This will involve looking at existing usage, and mapping relevant parts of schema.org to other specifications (that latter task is happening in other fora, so probably something to report on here rather than start as a separate task). As ever, more people in the group and engagement from key players is key to success, so we should continue to try to grow the membership of the group.

Thank you all for your attention and contributions over the last year; I’m looking forward to more in the coming months.

Acknowledgement / disclosure

I (Phil Barker) remain grateful for the continued support of the US Chamber of Commerce Foundation, who fund my involvement in this group.

The post One year of Talent Marketplace Signaling appeared first on Sharing and learning.

Indiana Appathon Credential Data Learn and Build⤴

from @ Sharing and learning

This week I took part in the Credential Engine’s Indiana Appathon in Indianapolis. The Credential Engine is a registry of information about educational and occupational credentials (qualifications, if you prefer; or not, if you don’t) that can be earned, along with further information such as what they are useful for, what competencies a person would need in order to earn one and what opportunities exist to learn those competencies. Indiana is one state that is working with the Credential Engine to ensure that the credentials offered by all the state’s public higher education institutions are represented in the registry. About 70 people gathered in Indianapolis (a roughly equal split between Hoosiers and the rest of the US, plus a couple of Canadians and me) with the stated intentions of Learn and Build: learn about the data the Credential Engine has, how to add more and how to access what is there, and build ideas for apps that use that data, showing what data was valuable and potentially highlighting gaps.

I was there as a consequence of my project work (supported by the Credential Engine) to represent Educational and Occupational Credentials in schema.org, with the aim of helping people understand the benefits of putting credentials on to the open web. Cue my chance to reuse here my pictures of how schema.org can act as a cross-domain unifying schema for linked data and how different domains link together from Education to HR.

I’ve been involved in a few events where the idea is to try to get people together to learn/discuss/make, and I know it is really difficult to get the right balance between structure and flexibility. Too much pre-planned activity and delegates don’t get to do what they want; too little and they are left wondering what they should be doing. So I want to emphasize how hugely impressed I was with the event organization and facilitation: Laura Faulkner and colleagues at Credential Engine and Sonya Lopes and team at Learning Tapestry did a great job. Very cleverly they gave the event a head start with webinars in advance to learn about the aims and technology of the Credential Engine, and then in Indianapolis we had a series of activities. On day one these were: cycling through quick, informal presentations in small groups to find out about the available expertise; demos of existing apps that use data from the Credential Engine; small group discussions of personas to generate use cases; and generating ideas for apps based on these. On day two we split into some ‘developer’ groups who worked to flesh out some of these ideas (the ‘publisher’ group did something else, learnt more about publishing data into the Credential Engine I think; I wasn’t in that group), before the developer groups presented their ideas to people from the publisher group in a round-robin “speed-dating” session, and then finally to the whole group.

I went wanting to learn more about what data and connections would be of value in a bigger ecosystem around credentials and for the more focussed needs of individual apps. This I did. The one app that I was involved with most surfaced a need for the Credential Engine to be able to provide data about old, possibly discontinued credentials so that this information could be accessed by those wanting more details about a credential held by an individual. I think this is an important thing to learn for a project that has largely focussed on use cases relating to people wanting to develop their careers and look forward to what credentials are currently on offer that are relevant to their aspirations. I (and others I spoke to) also noticed how many of the use cases and apps required information about the competencies entailed in the credentials, quite often in detail that related to the component courses of longer programs of study. This, and other requirements for fine detail about credentials, is of concern to the institutions publishing information into the Credential Engine’s registry. Often they do not have all of that information accessible in a centrally managed location, and when they do it is maybe not at the level of detail, or in language, suitable for externally-facing applications.

This problem relating to institutions supplying data about courses was somewhat familiar to me. It seems entirely analogous to the experiences of the Jisc XCRI related programs (for example, projects on making the most of course data and later projects on managing course-related data). I would love to say that from those programs we now know how to provide this sort of data at scale and here’s the product that will do it… but of course it’s not that simple. What we do have are briefings and advice explaining the problem, some of the approaches that have been taken and what the benefits of these were: for example, Ruth Drysdale’s overview Managing and sharing your course information and the more comprehensive guide Managing course information. My understanding from those projects is that they found benefits to institutions from a more coherent approach to their internal management of course data, and I hope that those supplying data to the Credential Engine might be encouraged by this. I also hope that the Credential Engine (or those around it who provide funding) might think about how we could create apps and services that help institutions manage their course data better, in such a way that benefits their own staff and incidentally provides the data the Credential Engine needs.

Finally, it was great to spend a couple of days in sunny Indianapolis, catching up with old friends, meeting in-person with some colleagues who I have only previously met online and doing some sightseeing. Many thanks to the Credential Engine for their financial assistance in getting me there.

Photo of brick build building next to corporate towers
Aspects of Indianapolis reminded me of SimCity 2000
White column with statues
Monument in centre of town
plan of a city grid for Indianapolis
The original city design plan (plat)
Photo of indoor market
One of the indoor markets
Fountain and war memorial
The sign said no loitering, but I loitered. That, in the background, is one of the biggest war memorials I have seen


The post Indiana Appathon Credential Data Learn and Build appeared first on Sharing and learning.

K12 Open Content Exchange⤴

from @ Sharing and learning

I’ve been intending to write about the K12 Open Content Exchange project and its metadata, but for various reasons haven’t got round to it. As I have just submitted a use case to the DCMI Application Profile Interest Group based on the project’s requirements, I’ll post that here.

The project is being run by Learning Tapestry in the US; I am contracted to build the metadata specification in an iterative fashion based on experiences of content exchange between the partners (currently we are at iteration 1). I want to stress that this isn’t an authoritative description of all the project’s intents and purposes, or even all the metadata requirements; it’s a use case based on them, but I hope it gives a flavour of the work. The requirements section is more relevant if you’re interested in how to specify application profiles than for the OCX spec per se.

Problem statement

We wish to share relatively large amounts of content and curriculum materials relating to educational courses, for example: all the activities, student learning resources, teacher and parent curriculum guides, formative assessments, etc relating to one school-year of study in a subject such as mathematics.

We wish for the metadata to facilitate the editing, adaptation and reuse of the resources. For example, teachers should be able to substitute the resources/activities provided with other resources addressing the same objective as they see fit, or course designers should be able to pull out all the resources of a particular type (say activity plans), about a given topic, or addressing a specific learning outcome, and use them in a new course. In some cases alternative equivalent content may be provided, for example online vs offline content, or to provide accessible provision for people with disabilities.

We wish for these to be discoverable on the open web at both broad and fine levels of granularity, i.e. whole courses and the individual parts should be discoverable via search engines.

We have decided to use the following vocabularies to describe the structure and content of the material: schema.org + LRMI, OERSchema and local extensions where necessary.

Stakeholders

  • Original content providers who create and publish curriculum and course materials using existing systems and already have (local) terminology for many aspects of resource description.
  • Content processors take the original content and offer it through their own systems. They also have (different) terminology for many aspects of resource description.
  • Content purchasers wish to discover and buy content that meets their requirements.
  • Course designers and Teachers use the content that has been purchased and wish to adapt it for their students.
  • Parents wish to understand what and how their students are being taught.
  • Learners want a seamless experience using high quality materials and ideas, in which all the work above is invisible.

Requirements

The application profile must reference evolving specifications: schema.org, OERSchema, various controlled vocabularies/concept schemes/encoding schemes and local extensions. If a local extension is adopted by one of the more widely recognised specifications it should be easy to modify the application profile to reflect this.

The application profile must include a content model that meets the following requirements for describing the structure of content:

  • identify the distinct types of entity involved (Course, Unit, Lesson, Supporting Material, Audience, Learning Objectives, Assessments etc)
  • show the hierarchy of the content (i.e. the Course comprises Units, the Units comprise Lessons, the Lessons comprise Activities)
  • show content that is not part of the hierarchy but is related to it (e.g. information about Units for teachers and parents, content to be used in the Activities)
  • show the ordering of sequential content
  • show options for alternative equivalent content
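As a sketch of those structural requirements, plain schema.org already gets some of the way there, with hasPart for the Course → Unit → Lesson hierarchy and position for ordering. The course, unit and lesson names below are invented; the OCX profile layers its own types and extensions on top of structures like this.

```python
# A Course → Unit → Lesson hierarchy using schema.org's hasPart,
# with position recording the ordering of sequential content.
# All names are invented for the example.
course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Grade 6 Mathematics (example)",
    "hasPart": [
        {
            "@type": "CreativeWork",
            "name": "Unit 1: Ratios",
            "position": 1,
            "hasPart": [
                {"@type": "CreativeWork", "name": "Lesson 1.1", "position": 1},
                {"@type": "CreativeWork", "name": "Lesson 1.2", "position": 2},
            ],
        },
        {"@type": "CreativeWork", "name": "Unit 2: Fractions", "position": 2},
    ],
}

def lessons_in_order(unit):
    """Return the names of a unit's parts, sorted by declared position."""
    parts = unit.get("hasPart", [])
    return [p["name"] for p in sorted(parts, key=lambda p: p["position"])]
```

A consumer wanting to reorder or substitute a lesson only needs to edit the relevant node, which is the kind of adaptation the profile aims to support.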

The application profile must specify optional (may), desired (should) and required (must) descriptive metadata as appropriate for each type of entity. For example:

  • an Activity must have a learning time
  • an Activity should have a name
  • any resource may be associated with an Audience

The application profile must specify cardinality, for example:

  • Courses must have one and only one title
  • Units must have one or more Lesson
  • Units may have one or more Assessment

There may be alternative ways of satisfying a requirement:

  • Courses must be associated with either one or more Lesson or one or more Unit
  • If there is more than one Lesson in a Unit or Course, the Lessons must be ordered

The application profile must specify the expected type or value space for any property:

  • this may be more restrictive than the base specification
  • this may extend the base specification
  • this may include alternatives (e.g. free text or reference concept from controlled scheme)
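Requirements like those above are, in effect, machine-checkable constraints. A real profile would express them in something like SHACL or ShEx; the toy checker below just illustrates the idea, encoding two of the example rules (the rules and course data are drawn from the examples above, the function itself is invented).

```python
def check_course(course):
    """Return a list of violation messages for a course description."""
    problems = []
    # "Courses must have one and only one title"
    title = course.get("name")
    if title is None:
        problems.append("MUST: course has no title")
    elif isinstance(title, list) and len(title) != 1:
        problems.append("MUST: course has more than one title")
    # "Courses must be associated with either one or more Lesson
    #  or one or more Unit"
    if not course.get("hasPart"):
        problems.append("MUST: course has neither Units nor Lessons")
    return problems

ok = check_course({"name": "Example course", "hasPart": [{"name": "Unit 1"}]})
bad = check_course({})
```

Expressing the profile in a formal constraint language would let publishers run exactly this kind of validation before exchanging content.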


The post K12 Open Content Exchange appeared first on Sharing and learning.

Inclusion of Educational and Occupational Credentials in schema.org⤴

from @ Sharing and learning

The new terms developed by the EOCred community group that I chaired were added to the pending area in the April 2019 release of schema.org. This marks a natural endpoint for this round of the community group’s work. You can see most of the outcome under EducationalOccupationalCredential. As it says, these terms are now “proposed for full integration into Schema.org, pending implementation feedback and adoption from applications and websites”. I am pretty pleased with this outcome.

Please use these terms widely where you wish to meet the use cases outlined in the previous post, and feel free to use the EOCred group to discuss any issues that arise from implementation and adoption.

My own attention is moving on to the Talent Marketplace Signalling community group, which is just kicking off (as well as continuing with LRMI and some ongoing discussions around Courses). One early outcome for me from this is a picture of how I see Talent Signalling requiring all these linked together:

Outline sketch of the Talent Signaling domain, with many items omitted for clarity. Mostly but not entirely based on things already in schema.org


The post Inclusion of Educational and Occupational Credentials in schema.org appeared first on Sharing and learning.

Talent marketplace signalling and schema.org JobPostings⤴

from @ Sharing and learning

For some time now I have been involved in the Data Working Group of the Jobs Data Exchange (JDX) project. That project aims to help employers and technology partners better describe their job positions and hiring requirements in a machine readable format. This will allow employers to send clearer signals to individuals, recruitment, educational and training organizations about the skills and qualifications that are in demand.  The data model behind JDX, which has been developed largely by Stuart Sutton working with representatives from the HR Open Standards body, leverages schema.org terms where possible. Through the development of this data model, as well as from other input, we have many ideas for guidance on, and improvements to the schema.org JobPosting schema. In order to advance those ideas through a broader community and feed them back to schema.org, we have now created the Talent Marketplace Signaling W3C Community Group.

In the long term I hope that the better expression of job requirements in the same framework as can be used to describe qualifications and educational courses will lead to better understanding and analysis of what is required and provided where, and to improvements in educational and occupational prospects for individuals.


About the Talent Marketplace Signaling Community Group

Currently, workforce signaling sits at the intersection of a number of existing schema.org types: Course, JobPosting, Occupation, Organization, Person and the proposed EducationalOccupationalCredential. The TalentSignal Community Group will focus initially on the JobPosting Schema and related types. I think the TalentSignal CG can help by:

  • providing guidance on how to use existing schema.org terms to describe JobPostings;
  • proposing refinements (e.g. improved definitions) to existing schema.org types serving the talent pipeline; and
  • suggesting new types and properties where improved signaling cannot otherwise be achieved.

I hope that the outcomes of this work will be discrete improvements to the JobPosting schema, e.g. small changes to definitions, changes to how things like competences are represented and linked to JobPostings, and guidance, probably on the schema.org wiki, about using the JobPosting schema to mark up job adverts. Of course, whatever the Community Group suggests, it is up to the schema.org steering group to decide whether those suggestions are adopted into schema.org, and then it is up to the search engines and other data consumers as to whether they make any use of the markup.
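As an illustration of the sort of improvement in view, a JobPosting can already link the competences sought and the credentials required as structured objects rather than free text. The skills and qualifications properties and the EducationalOccupationalCredential type are real schema.org terms, but the posting, organisation and framework URL below are invented.

```python
# A JobPosting sketch linking a competence (skills) and a credential
# requirement (qualifications) as structured data. Values and URLs
# are invented for illustration.
job_posting = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Junior Data Steward (example)",
    "hiringOrganization": {"@type": "Organization", "name": "Example Corp"},
    "skills": {
        "@type": "DefinedTerm",
        "name": "Metadata management",
        "inDefinedTermSet": "https://example.org/skills-framework",
    },
    "qualifications": {
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "Bachelor's degree",
    },
}
```

Because the credential and skill are nodes rather than strings, the same descriptions can be matched against course and credential data published in the same framework, which is the cross-linking the group is after.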

The thinking behind having a wider remit than the currently envisaged work is to avoid setting up a whole series of new groups every time we have a new idea [a lesson learnt from moving from LRMI to Course description to educational and occupational credentials].

Call for participation

If you’ve read this far you must be somewhat interested in this area of work, so why not join the TMS Community Group to show your support for the JDX and, more broadly, the need for and importance of improved workforce signaling in the talent marketplace? You can join via the pink/tan button on the Talent Signal CG web page. You will need to have a W3C account and to be signed in in order to join (see the top right of the page to sign in or join). The only restriction on joining is that you must give some assurances about the openness of the IPR of any contributions that you make. The outcomes of this work will feed into a specification that anyone can use, so there must be no hidden IPR restrictions in there.

The group is open to all stakeholders, so please feel free to share this information with your colleagues and network.

Disclosure

I’m being paid by the US Chamber of Commerce Foundation to carry out this work. Thank you US CCF!

The post Talent marketplace signalling and schema.org JobPostings appeared first on Sharing and learning.

Progress report for Educational and Occupational Credentials in schema.org⤴

from @ Sharing and learning

[This is cross-posted from the Educational and Occupational Credentials in schema.org W3C community group; if you are interested, please direct your comments there.]

Over the past few months we have been working systematically through the 30-or-so outline use cases for describing Educational and Occupational Credentials in schema.org, suggesting how they can be met with existing schema.org terms, or failing that working on proposals for new terms to add. Here I want to summarize the progress against these use cases, inviting review of our solutions and closure of any outstanding issues.

Use cases enabled

The list below summarizes information from the community group wiki for those use cases that we have addressed, with links to the outline use case description, the wiki page showing how we met the requirements arising from that use case, and proposed new terms on a test instance of schema.org (may be slow to load). I tried to be inclusive / exhaustive in what I have called out as an issue.

1.1 Identify subtypes of credential

1.2 Name search for credential

1.3 Identify the educational level of a credential

1.4 Desired/required competencies

1.6 Name search for credentialing organization

1.8 Labor market value

1.11 Recognize current competencies

1.13 Language of Credential

2.1 Coverage

2.2 Quality assurance

2.5 Renewal/maintenance requirements

2.6 Cost

3.1 Find related courses, assessments or learning materials

3.3 Relate credentials to competencies

3.4 Find credentialing organization

4.2 Compare credentials

  • Credentials can be compared in terms of any of the factors above, notably cost, competencies required, recognition and validity.

4.3 Build directories

1.5 Industry and occupation analysis

1.7 Career and education goal

1.10 Job vacancy

3.2 Job seeking

Use cases that have been ‘parked’

The following use cases have not been addressed; either they were identified as low priority or there was insufficient consensus as to how to enable them:

1.9 Assessment (see issue 5, no way to represent assessments in schema.org)

1.12 Transfer value: recognizing current credentials (a complex issue, relating to “stackable” credentials, recognition, and learning pathways)

2.3 Onward transfer value (as previous)

2.4 Eligibility requirements (discussed, but no consensus)

3.5 Find a service to verify a credential (not discussed, low priority)

4.1 Awarding a Credential to a Person  (not discussed, solution may be related to personal self-promotion)

4.4 Personal Self-promotion (pending discussion)

4.5 Replace and retire credentials (not discussed, low priority)

Summary of issues

As well as the unaddressed use cases above, there are some caveats about the way other use cases have been addressed. I have tried to be inclusive / exhaustive in what I have called out as an issue; I hope many of them can be acknowledged and left for future contributions to schema.org, we just need to clarify that they have been.

  • Issue 1: whether EducationalOccupationalCredential is a subtype of CreativeWork or Intangible.
  • Issue 2: competenceRequired only addresses the simplest case of individual required competencies.
  • Issue 3: whether accreditation is a form of recognition.
  • Issue 4: the actual renewal / maintenance requirements aren’t specified.
  • Issue 5: there is no way to represent Assessments in schema.org.
  • Issue 6: there is no explicit guidance on how to show required learning materials for a Course in schema.org.

There is an issues page on the wiki for tracking progress in disposing of these issues.

Summary of proposed changes to schema.org

Many of the use cases were addressed using terms that already exist in schema.org. The changes we currently propose are:

Addition of a new type EducationalOccupationalCredential

Addition of four properties with domain EducationalOccupationalCredential:

Addition of EducationalOccupationalCredential to the domain of two existing properties (with changes to their definition to reflect this):

Addition of EducationalOccupationalCredential to the range of three existing properties:

The post Progress report for Educational and Occupational Credentials in schema.org appeared first on Sharing and learning.

Using wikidata for linked data WordPress indexes⤴

from @ Sharing and learning

A while back I wrote about getting data from wikidata into a WordPress custom taxonomy. Shortly thereafter Alex Stinson said some nice things about it, and as a result that post got a little attention.

Well, I now have a working prototype plugin which is somewhat more general-purpose than my first attempt.

1. Custom Taxonomy Term Metadata from Wikidata

Here’s a video showing how you can create a custom taxonomy term with just a name and the wikidata Q identifier, and the plugin will pull down relevant wikidata for that type of entity:

[similar video on YouTube]

2. Linked data index of posts

Once this taxonomy term is used to tag a post, you can view the term’s archive page, and if you have a linked data sniffer you will see that the metadata from Wikidata is embedded in machine readable form using schema.org. Here’s a screenshot of what the OpenLink structured data sniffer sees:

Or you can view the Google structured data testing tool output for that page.

Features

  • You can create terms for custom taxonomies with just a term name (which is used as the slug for the term) and the Wikidata Q number identifier. The relevant name, description and metadata is pulled down from Wikidata.
  • Alternatively you can create a new term when you tag a post and later edit the term to add the wikidata Q number and hence the metadata.
  • The metadata retrieved from Wikidata varies to be suitable for the class of item represented by the term, e.g. birth and death details for people, date and location for events.
  • Term archive pages include the metadata from wikidata as machine readable structured data using schema.org. This includes links back to the wikidata record and other authority files (e.g. ISNI and VIAF). A system harvesting the archive page for linked data could use these to find more metadata. (These onward links put the linked in linked data and the web in semantic web.)
  • The type of relationship between the term and posts tagged with it is recorded in the schema.org structure data on the term archive page. Each custom taxonomy is for a specific type of relationship (currently about and mentions, but it would be simple to add others).
  • Short codes allow each post to list the entries from a custom taxonomy that are relevant for it using a simple text widget.
  • This is a self-contained plugin. The plugin includes default term archive page templates without the need for a custom theme. The archive page is pretty basic (based on twentysixteen theme) so you would get better results if you did use it as the basis for an addition to a custom theme.

How’s it work / where is it

It’s on github. Do not use it on a production WordPress site. It’s definitely pre-alpha and undocumented, and I make no claims for the code being adequate or safe. It currently lacks error trapping / exception handling, and more seriously it doesn’t sanitize some things that should be sanitized. That said, if you fancy giving it a try, do let me know what doesn’t work.

It’s based around two classes: one which sets up a custom taxonomy and provides some methods for outputting terms and term metadata in HTML with suitable schema.org RDFa markup; the other handles getting the wikidata via SPARQL queries and storing this data as term metadata. Getting the wikidata via SPARQL is much improved on the way it was done in the original post I mentioned above. Other files create taxonomy instances, provide some shortcode functions for displaying taxonomy terms and provide default term archive templates.
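The SPARQL side can be sketched like this: given a Q number, build a query for the item's label and description, of the kind one would send to the Wikidata Query Service endpoint. The real plugin varies its queries by the class of item; this minimal version, with invented function names, is illustrative only. (The wd:, rdfs: and schema: prefixes are predeclared by the Wikidata endpoint.)

```python
WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def build_label_query(qid, lang="en"):
    """Build a minimal SPARQL query for a Wikidata item's label and
    description in the given language. Illustrative sketch only."""
    if not (qid.startswith("Q") and qid[1:].isdigit()):
        raise ValueError(f"not a Wikidata Q identifier: {qid!r}")
    return f"""
SELECT ?label ?description WHERE {{
  wd:{qid} rdfs:label ?label ;
           schema:description ?description .
  FILTER(LANG(?label) = "{lang}")
  FILTER(LANG(?description) = "{lang}")
}}
"""

# Q42 is Douglas Adams; the query would be POSTed to WDQS_ENDPOINT
# with an Accept: application/sparql-results+json header.
query = build_label_query("Q42")
```

Storing the returned bindings as term metadata is then a matter of calling WordPress's update_term_meta with the values from the JSON result.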

Where’s it going

It’s not finished. I’ll see to some of the deficiencies in the coding, but also I want to get some more elegant output, e.g. single indexes / archives of terms from all taxonomies, no matter what the relationship between the post and the item that the term relates to.

There’s no reason why the source of the metadata need be Wikidata. The same approach could be taken with any source of metadata, or by creating the term metadata in WordPress. As such this is part of my exploration of WordPress as a semantic platform. Using taxonomies related to educational properties would be useful for any instance of WordPress being used as a repository of open educational resources, or to disseminate information about courses, or to provide metadata for PressBooks being used for open textbooks.

I also want to use it to index PressBooks such as my copy of Omniana. I think the graphs generated may be interesting ways of visualizing and processing the contents of a book for researchers.

Licenses: Wikidata is CC0; the wikidata logo used in the featured image for this post is sourced from wikimedia and is also CC0, but is a registered trademark of the wikimedia foundation, used with permission. The plugin, as a derivative of WordPress, will be licensed as GPLv2 (the bit about NO WARRANTY is especially relevant).

The post Using wikidata for linked data WordPress indexes appeared first on Sharing and learning.