Tag Archives: Talent Signal

JDX: a schema for Job Data Exchange

[This rather long blog post describes a project that I have been involved with through consultancy with the U.S. Chamber of Commerce Foundation.  Writing this post was funded through that consultancy.]

The U.S. Chamber of Commerce Foundation has recently proposed a modernized schema for job postings based on the work of HR Open and Schema.org: the Job Data Exchange (JDX) JobSchema+. It is hoped JDX JobSchema+ will not just facilitate the exchange of data relevant to jobs, but will do so in a way that helps bridge the various other standards used by relevant systems. The aim of JDX is to improve the usefulness of job data, including signalling around jobs, by addressing such questions as: what jobs are available in which geographic areas? What are the requirements for working in these jobs? What are the rewards? What are the career paths? This information needs to be communicated not just between employers and their recruitment partners and to potential job applicants, but also to education and training providers, so that they can create learning opportunities that provide their students with skills that are valuable in their future careers. Job seekers empowered with a greater quantity and quality of job data through job postings may secure better-fitting, longer-lasting employment faster, thanks to improved matching. Preventing wasted time and hardship may be particularly impactful for populations whose job searches are less well-resourced, and for those for whom limited flexibility increases dependence on job details that are often missing, such as schedule, exact location, and security clearance requirements. These are among the properties that JDX gives employers the opportunity to include, so that they can be identified quickly and easily by all. In short, the data should be available to anyone involved in the talent pipeline. This broad scope poses a problem that JDX also seeks to address: different systems within the talent pipeline data ecosystem use different data standards, so how can we ensure that the signalling is intelligible across the whole ecosystem?

The starting point for JDX was two of the most widely used data standards relevant to describing jobs: the HR Open Standards Recruiting standard, part of the foremost suite of standards covering all aspects of the HR sector, and the schema.org JobPosting schema, which is used to make data on web pages accessible to search engines, notably Google's Job Search. These standards, together with an analysis of the information required around jobs, job descriptions and job postings and their relationships to other entities such as organizations, competencies, credentials and experience, were modelled in RDF to create a vocabulary of classes, properties, and concept schemes that can be used to create data. The full data model, which can be accessed on GitHub, is quite extensive: the description of jobs that JDX enables goes well beyond what is required for a job posting advertising a vacancy. A subset of the full model comprising those terms useful for job postings was selected for pilot testing; this is available in a more accessible form on the Chamber Foundation's website and is documented on the Job Data Exchange website. The results of the data analysis, modelling and piloting were then fed back into the HR Open and schema.org standards that were used as a starting point.

This is where things start to get a little complicated, as it means JDX has contributed to three related efforts.

JobPostings in schema.org

The modelling and piloting highlighted and addressed some issues that were within schema.org's scope of enabling the provision of structured data about job postings on the web. These were discussed through a W3C Community Group on Talent Marketplace Signalling, and the solutions were reconciled with schema.org's wider model and scope as a web-wide vocabulary that covers many other types of things apart from jobs. As a result, schema.org/JobPosting has several new properties (or modifications to how existing properties are used) allowing for such things as: a job posting with more than one vacancy; a job posting with a specified start date; a job posting with requirements other than competencies (i.e. physical, sensory and security clearance requirements); and more specific information about contact details and about where the advertised job sits within the company structure.
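To make this concrete, here is a rough Turtle sketch of a posting using some of these properties. This is illustrative only, not an example from the JDX pilots: the identifiers and values are invented, and at the time of writing several of these properties sit in schema.org's pending area.

    @prefix schema: <https://schema.org/> .

    # Invented identifiers and values, for illustration only.
    <https://example.org/postings/123> a schema:JobPosting ;
        schema:title "Warehouse Operative" ;
        schema:datePosted "2020-02-01" ;
        schema:totalJobOpenings 3 ;                 # more than one vacancy
        schema:jobStartDate "2020-03-01" ;          # specified start date
        schema:physicalRequirement "Able to lift 25kg" ;
        schema:securityClearanceRequirement "None required" ;
        schema:employmentUnit [                     # where the job sits in the company
            a schema:Organization ;
            schema:name "Midwest Distribution Division"
        ] ;
        schema:applicationContact [                 # more specific contact details
            a schema:ContactPoint ;
            schema:email "hiring@example.org"
        ] .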

Because schema.org and JDX are both modelled in RDF as sets of terms that can be used to make independent statements about entities (rather than as a record-based model such as XML documents), it was relatively easy to add terms to schema.org that were based on those in JDX. The only reason that the terms added to schema.org are not exactly the same as the terms in JDX JobSchema+ is that it was sometimes necessary to take into account properties already existing in schema.org, and schema.org's wider purpose and different audience.

JDX in HR Open

As with schema.org, JDX highlighted some issues that are within the scope of the HR Open Standards Recruiting standard, and the aim is to incorporate the lessons learnt from JDX into that standard. However, the Recruiting standard is part of the inter-linked suite of specifications that HR Open maintains across all aspects of the HR domain, and these standards are in plain JSON, a record-based format specified through JSON Schema files rather than RDF Schema. This makes integrating new terms and modelling approaches from JDX into HR Open more complicated than was the case with schema.org. As a first step, the property definitions have been translated into JSON Schema and partially integrated into the suite of HR Open standards; however, some of the structures, for example those for describing Organizations, were significantly different from how other HR Open standards treat the same types of entity, and so these were kept separate. The plan for the next phase is to further integrate JDX into the existing standards, enhance the use cases and documentation, and include RDF, JSON Schema, and XML XSD.

JDX JobSchema+ RDF Schema

Finally, of course, JDX still exists as an RDF schema, currently on GitHub. The work on integration with HR Open surfaced some errors and other issues, which have been addressed. Likewise, feeding back into schema.org JobPosting means that there are new relationships between terms in JDX and schema.org that can be encoded in the JDX schema. There is also potential for other changes and remodelling as a result of findings from the JDX pilot of job postings. But given the progress made with integrating lessons learnt into schema.org and the HR Open Recruiting standard, what is the role of the RDF schema compared to these other two?

Standard Strengths and Interoperability

Each of the three standards has strengths in its own niche. Schema.org provides a widely scoped vocabulary, mostly used for disseminating information on the open web. The most obvious consumers of data that use terms from schema.org are search engines trying to make sense of text in web pages, so that they can signal the key aspects of job postings with less ambiguity than can easily be achieved by processing natural text. Of course such data is also useful for any system that tries to extract data from web pages. Schema.org is also widely used as a source of RDF terms for other vocabularies; after all, it doesn't make much sense for every standard to define its own version of a property for the name of the thing being described, or for a textual description of it (more on this below in the discussion of harmonization).

HR Open Standards are designed for system-to-system interoperability within the HR domain. If organization A and organization B (not to mention organizations C through Z) have systems that do the same sort of thing with the same sort of data, then using an agreed standard for the data they care about clearly brings efficiencies, by allowing systems to be designed to a common specification and organizations to share data where appropriate. This is the well-understood driving force for interoperability specifications.


But what about when two organizations are using the same sort of data for different things? For example, they might be part of different verticals which interact with each other but differ significantly outside where they overlap; or one organization might provide a horizontal service, such as web search, across several verticals. This is where it is useful to have a common set of "terms" from which data providers can pick and choose what is appropriate for communicating different aspects of what they care about to those who provide services that intersect or overlap with their own concerns. For example, a fully worked specification for learning outcomes in education would include much that is not relevant to the HR domain and much that overlaps; furthermore, HR and education providers use different systems for other aspects of their work: HR will care about integration with payroll systems, education about integration with course management systems. There is no realistic prospect that the same data standards can be used to the extent that the record formats will be the same; however, with the RDF approach of entity-focused description rather than a single defined record structure, there is no reason why some of the terms used to describe the HR view of competency shouldn't also be used to describe the education view of learning outcomes. Schema.org provides a broad horizontal layer of RDF terms that can be used across many domains; JDX provides a deeper dive into the more specific vocabulary used in jobs data.

Data Harmonization

This approach to allowing mutual intelligibility between data standards in different domains, to the extent that the data they care about overlaps (or, for that matter, between competing data standards in the same domain), is known as data harmonization. RDF is very much suited to harmonization, for these reasons:

  • its entity-based modelling approach does not pre-impose the notion of data requirements or inter-relationships between data elements in the way that a record-based modelling approach does;
  • in the RDF data community it is assumed that different vocabularies of terms (classes and properties for describing aspects of a resource) and concepts (providing the means to classify resources) will be developed in such a way that someone can mix and match terms from relevant vocabularies to describe all the entities that they care about; and
  • as it is assumed that there will be more than one relevant vocabulary, it has been accepted that there will be related terms in separate vocabularies, and so the RDF schema that describe these vocabularies should also describe these relationships.

JDX was designed in the knowledge that it overlaps with schema.org. For example, JDX deals with providing descriptions of organizations (which offer jobs) and of things that have names; so does schema.org. It is not necessary for JDX to define its own Organization class or name property; it simply uses the class and property defined by schema.org. That means that any data conforming to the JDX RDF schema automatically includes some data that conforms to schema.org. There is no need to extract and transform RDF data before loading it when the modelling approach and the vocabularies used are the same in the first place.
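Here is a minimal Turtle sketch of what that reuse looks like. The jdx: namespace URI and the industryCode property are hypothetical, invented for this illustration; the point is that the schema.org terms are used as-is.

    @prefix schema: <https://schema.org/> .
    @prefix jdx:    <https://example.org/jdx/terms/> .   # hypothetical namespace URI

    # Because JDX reuses schema.org's Organization class and name property,
    # this description is already, in part, schema.org data.
    <https://example.org/org/acme> a schema:Organization ;
        schema:name "ACME Corp." ;
        jdx:industryCode "23821" .                       # hypothetical JDX-specific property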

Sometimes the match in terminology isn't so neat. At some point in the future we might, for example, be prepared to say that everything JDX calls a JobPosting is something that schema.org calls a JobPosting, and vice versa. In this case we could add to the JDX schema a declaration that these are equivalent classes. In other cases we might say that some class of things in JDX forms a subset of what schema.org has grouped as a class, in which case we could add to the JDX schema a declaration that the JDX class is a subclass of the schema.org class. Similar declarations can be made about properties.
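In RDF Schema and OWL such declarations are one-liners; a sketch (again with a hypothetical jdx: namespace and, in the last line, a hypothetical property):

    @prefix schema: <https://schema.org/> .
    @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:    <http://www.w3.org/2002/07/owl#> .
    @prefix jdx:    <https://example.org/jdx/terms/> .   # hypothetical namespace URI

    # If the two classes match exactly:
    jdx:JobPosting owl:equivalentClass schema:JobPosting .

    # Or, if the JDX class describes a subset of the schema.org class:
    jdx:JobPosting rdfs:subClassOf schema:JobPosting .

    # Similar declarations can be made about properties:
    jdx:jobTitle rdfs:subPropertyOf schema:title .       # hypothetical property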


The reason why this is useful is that RDF schema are written in RDF, and RDF data includes links to the definitions of the terms used, so data about jobs, organizations and all the other entities described with JDX can be in a data store linked to the definitions of the terms used to describe them. These definitions can link to other definitions of related terms, all accessible for querying. This is linked data at the schema level. For a long time we referred to this network of data and definitions, seen as sprawling across the internet, as the Semantic Web; more recently it has been found useful for data stores to be more focused, and the combination of data about a domain with the schema for those data is now commonly known as a knowledge graph. What matters is the consequence: by querying the data provided about things, along with information about relationships between the data terms used, we can achieve interoperability across data provided in different data standards. If a query system knows that some data relates to what JDX calls a JobPosting (because the data links to the JDX schema), and that everything JDX calls a JobPosting schema.org also calls a JobPosting (let's say this is declared in the schema), then when asked about schema.org JobPostings the query system knows it can return information about JDX JobPostings. RDF data management systems do this routinely and, for the end user, transparently.
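Spelled out in Turtle (jdx: namespace hypothetical, as before), the inference looks like this:

    @prefix schema: <https://schema.org/> .
    @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix jdx:    <https://example.org/jdx/terms/> .

    # Schema-level link, declared once in the JDX schema:
    jdx:JobPosting rdfs:subClassOf schema:JobPosting .

    # Instance data published using JDX:
    <https://example.org/postings/123> a jdx:JobPosting .

    # Under RDFS entailment, a query for instances of schema:JobPosting
    # (in SPARQL, the pattern { ?posting a schema:JobPosting }) also
    # matches the JDX posting above.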

That’s lovely if your data is in RDF; what if it is not? Most system-to-system interoperability standards don’t use RDF. This is the problem taken on by the  Data Ecosystem Schema Mapper (DESM) Tool. The approach it takes is to create local RDF schema describing the classes, properties and classifications used in these standards. The local RDF schema can assert equivalences between the RDF terms corresponding to each standard, or from each standard to an appropriate formal RDF vocabulary such as JDX.  Data can then be extracted from the record formats used and expressed as RDF using technologies such as the RDF Mapping Language (RML). This would allow us to build knowledge graphs that draw on data provided in existing systems, and query them without knowing what format or standard the data was originally in. For example, an employer could publish data in JSON using HR Open Standards’ Recruiting Standard. This data could be translated to the RDF representation of the standard created with the DESM Tool. Relationships expressed in the schema for the RDF representation would allow mapping of some or all of the data to JDX JobSchema+, schema.org JobPosting and other relevant standards. (The other standards may cover only part of the data, for example mapping skills requirements to standards used for competencies as learning objectives in the education domain.) This provides a route to translating data between standards that cover the same ground, and also provides data that can link to other domains.

Acknowledgements

Stuart Sutton, of Sutton & Associates, led the creation of the JDX JobSchema+ and originated many of the ideas described in this blog post.

Many thanks to the people who commented on drafts of this post, including Stuart Sutton, Danielle Saunders, Jeanne Kitchens, Joshua Westfall and Kim Bartkus. Any errors remaining are my fault.

Writing this post was part of work funded by the U.S. Chamber of Commerce Foundation.


New work with the Credential Engine

I am delighted to be starting a new consulting project through Cetis LLP with the Credential Engine, helping them make credentials more transparent in order to empower everyone to make more informed decisions about credentials and their value. The problem that the Credential Engine sets out to solve is that there are (at the last count) over 730,000 different credentials on offer in the US alone. [Aside: let me translate 'credential' before going any further; in this context we mean what in Europe we call an educational qualification, from school certificates through to degrees, including trade and vocational qualifications and microcredentials.] For many of these credentials it is difficult to know their value in terms of who recognises them, the competences that they certify, and the occupations they are relevant for. This problem is especially acute in the relatively deregulated US, but it is also an issue when we have learner and worker mobility and need to recognise credentials from all over the world.

The Credential Engine sets out to alleviate this problem by making the credentials more transparent through a Credential Registry. The registry holds linked data descriptions of credentials being offered, using the Credential Transparency Description Language, CTDL, which is based largely on schema.org. (Note that neither the registry nor CTDL deals with information relating to whether an individual holds any credential.) These descriptions include links to Competence Frameworks described in the Credential Engine's profile of the Achievement Standards Network vocabulary, CTDL-ASN. The registry powers a customizable Credential Finder service as well as providing a linked data platform and an API for partners to develop their own services; there are presentations about some example third-party apps on the Credential Engine website.

I have been involved with the Credential Engine since the end of 2015, when it was the Credential Transparency Initiative, and have since worked with them to strengthen the links between the CTDL and schema.org by leading a W3C Community Group to add EducationalOccupationalCredentials to schema.org. I've also helped represent them at a UNESCO World Reference Level expert group meeting, and helped partners interested in using data from the registry at an appathon in Indianapolis. I have come to appreciate that there is a great team behind the Credential Engine, and I am really looking forward to continuing to work with them. I hope to post regular updates here on the new work as we progress.

There are strong linkages between this work and the other main project I have on talent marketplace signalling, and with talent pipeline management in general; and also with other areas of interest such as course description, and with the work of the rest of Cetis in curriculum analytics and competency data standards. This new project isn't exclusive, so I will continue to work in those areas. Please get in touch if you would like to know more about partnering with the Credential Engine or are interested in the wider work.


One year of Talent Marketplace Signaling

I chair the Talent Marketplace Signaling W3C Community Group; this progress report is cross-posted from its blog.

It is one year since the initial call for participation in the Talent Marketplace Signaling W3C Community Group. That seems like a good excuse to reflect on what we have done so far, where we are, and what’s ahead.

I’m biassed, but I think progress has been good. We have 35 participants in the group, we have had some expansive discussions to outline the scope and aims of the group, the detail of which we filled in with issues and use cases. We also had some illuminating discussions about how we conceptualize the domain we are addressing (see most of August in the mail list archive). Most importantly, I think that we have made good on the aim arising from our initial kick-off meeting to identify issues arising from use cases and fix them individually with discrete enhancements to schema.org. Here’s a list of the fixes we have suggested that have been accepted by schema.org, drawn from the schema.org release log:

Translating those back to our use cases and issues, we can now:

Looking forward…

First I want to note that many of those contributions have been accepted into what schema.org calls its pending section, which it defines as "a staging area for work-in-progress terms which have yet to be accepted into the core vocabulary". While there are caveats that terms in pending are subject to change and should be used with caution, their acceptance into the core of the schema.org vocabulary relies on their being shown to be useful. So we have a remaining task of promoting and highlighting the use of these terms and showing how they are used. Importantly, "use" here means not just publishing data, but the existence of services built on that data.

Looking at the remaining issues that we identified from our use cases and examples, it seems that we have come to the end of those that can be picked off individually and dealt with without consequences elsewhere. Several are issues of choice, along the lines of "there's more than one way to do X, can we clarify which is best?" Best practice is difficult to define and identify, and there will be winners and losers whatever option is picked. The choice will depend on analysis of what existing practice currently is, as well as on trade-offs such as simplicity versus expressiveness. Another example where existing practice is important comes with issues that will affect how Google services such as Job Search work. Specifically, Google recommends values for employmentType that don't seem to match all requirements, and these values are just textual tokens, whereas we might want to suggest the more flexible and powerful DefinedTerm (see the sketch below). However, we don't want to recommend practice that conflicts with getting job postings listed properly by Google. While some Google search products leverage schema.org terms, the requirements that they specify for value spaces like the different employmentTypes are not defined in schema.org; and while schema.org development is open, other channels are required to make suggestions that affect Google products. The final category of open issue that I see is where a new corner of our domain needs to be mapped, rather than just one or two new terms provided. This is the case for providing information about assessments, and for where we touch on providing information about the skills etc. that a person has.
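To illustrate that employmentType tension, here is a Turtle sketch contrasting the two approaches. The FULL_TIME token is one of the values Google's documentation recommends; the DefinedTerm alternative is the idea under discussion, not current recommended practice, and the posting and term-set URIs are invented.

    @prefix schema: <https://schema.org/> .

    # Current Google-friendly practice: a bare textual token.
    <https://example.org/postings/123>
        schema:employmentType "FULL_TIME" .

    # The more expressive alternative discussed above (illustrative only):
    <https://example.org/postings/123>
        schema:employmentType [
            a schema:DefinedTerm ;
            schema:name "Full time" ;
            schema:termCode "FULL_TIME" ;
            schema:inDefinedTermSet <https://example.org/terms/employmentTypes>
        ] .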

So, there is more work to be done. I think starting with some further work on examples and best practice is a good idea. This will involve looking at existing usage, and mapping relevant parts of schema.org to other specifications (that latter task is happening in other fora, so it is probably something to report on here rather than to start as a separate task). As ever, more people in the group and engagement from key players are key to success, so we should continue to try to grow the membership of the group.

Thank you all for your attention and contributions over the last year; I’m looking forward to more in the coming months.

Acknowledgement / disclosure

I (Phil Barker) remain grateful for the continued support of the US Chamber of Commerce Foundation, who fund my involvement in this group.


On Talent Pipeline Management

I’m prompted by a #femEdTech tweet to write about some of the work I’m involved in regarding linking education to employment:

This is going to be a tricky topic to write about: if I get it wrong one way or another, I will either offend people with whom I enjoy working or seem to give the opposite message to the one I intend.

The work in question is on Talent Signalling for the Job Data Exchange, but what I have in mind in particular is some of the wider context for that work, which goes under the banner of Talent Pipeline Management. Now, there is a lot that I don't like about the rhetoric and metaphors here; I won't dwell on them, because if you're likely to get it you won't need it explaining. Once I got past that, what impressed me was the idea brought in from supply chain management, explained to me by Bob Sheets, that if you want to go beyond a low-quality, commodity-like approach (by analogy, cheap components sourced with price as the only criterion) you need to "go deep". That is, you need to build a deep relationship to create understanding (it's all social constructivism now) between all those involved in education, training and learning, those involved in recruitment, and those involved in strategic planning for the local economy.

The approach seems much deeper than anything I have seen in the UK, for example in industry liaison committees at universities, because it involves getting education providers of all levels and contexts together to work with industry and business on things like curricula and training opportunities. This is described in detail through the TPM Academy. Again, anyone from an education background will flinch at the industry-focused, utilitarian view of education shown in how it is presented, but the underlying idea seems valuable.

So my current thoughts and questions are: how does this look from the learner/worker/job-seeker point of view? [Quick note to self: check whether they are included in the conversations defining curricula.] I think that is key to keeping this work on the right side of education being just about satisfying the need for cheap labour. A secondary question: is my glibly stated opinion that this goes deeper than approaches I've seen in the UK just an admission of ignorance? [Answers in the comments, please!]

Going forward, my work will continue to look at the data that can be communicated through things like job adverts and course and qualification descriptions, trying to build the underlying infrastructure that allows "faster clearer signals" and stronger linkages between employment and education / training, to help build these deeper relationships. I'm also getting involved in how individuals' achievements can be represented semantically, so that will bring in a whole raft of questions about who controls the creation and dissemination of this data.


The confusing concepts of credentials and competences

Back in July and August the Talent Marketplace Signaling W3C Community Group made good progress on how to relate JobPostings to Educational and Occupational Credentials (qualifications, if you prefer) and Competences. These seem to me to be central concepts for linking between the domain of training, education and learning and the domain of talent sourcing, employment and career progression; a common understanding of them would be key to people from one domain understanding signals from the other. I posted a sketch of how I saw these working, and that provoked a lot of discussion, some of which led me to evaluate what leads to misunderstandings when trying to discuss such concepts.

This post is my attempt to describe the source of those misunderstandings and to suggest that we try to avoid them. Finding clarity in talking about competences and credentials is certainly not "all my own work". Jim Goodell, Alex Jackl, Stuart Sutton and many others in the Talent Signal group and beyond have all been instrumental in navigating us through to what I hope will be a common understanding, documentation of which is currently being edited by Alex. This is, however, my own take on some of the factors feeding into that discussion, and I wouldn't want to burden anyone else with any blame for what's described below. Also, I have simplified some of the issues raised in the discussion, and for that reason do not want to suggest that they represent the views of specific people. If you want to look at who said what, it's best you read it in their own words on the email list.

So what are the factors that make talking about competences and credentials difficult?

abstractness

We might think that we know what a credential/qualification is: the University of Bristol offers a BSc in Physics; I have a BSc in Physics from the University of Bristol. I could show you the website describing that qualification and a photo of my certificate. But they are not the same thing. There's a difference between the abstract credential being offered and the specific certificate that I have, just as the story "Of Mice and Men" is not the same as the thing with the ISBN 978-0582461468, and the thing identified by that ISBN is not the physical copy of the book that I have on my shelf. We're all familiar with distinguishing between abstract classes (think of Platonic archetypes) and specific instances, and we're pretty good at it, but let's just acknowledge that it's difficult. It is difficult to know what level of generality to give to the abstraction (or abstractions: it's often not as simple as instance and class); it's difficult to know the words that you can use to make clear which you're talking about; and it's easy to talk at cross-purposes by incorrectly assuming that we have made that clear.

metonymy

Naming things is hard, especially abstract things. One way that we try to deal with this is to refer to things by relation to something more concrete (in our experience): thus we call programmes of study after the credential they lead to ("Jamie is doing an MA in Film Studies"), or we call parts of a course a skill ("Phil has completed 20 skills in Duolingo Greek"), or we refer to people after the credential they hold ("Google hires lots of PhDs"). This may help in narrow contexts: if all you ever talk about is programmes, courses and their components, it doesn't matter if you call them after credentials and skills; but when you do that while talking to someone who usually deals only with credentials and skills, then you'll cause confusion.

jargon

Another way to deal with naming abstractions is to coin terms that have specific meanings in context. The only problem with this is that the whole point of the Talent Signal work is to talk across contexts, and the odds are that the jargon used is either meaningless out of context or, worse, means something different. (The latter is especially likely if it is a metonym. Seriously, don't do metonyms.)

different approaches

As I wrote while I was thinking about this, different technical modelling approaches talk about different things. Specifically, in RDF we refer to things in the outside world: a description of a person is about that person, and the identifier used to say what it is about is the identifier of an instance who is an actual person. When building data objects, by contrast, we might create a class of Person and have instances of that class to describe individual people. So for RDF the instance of Person is out there in the world; for the data object modellers the instance of Person only exists in an information system. This matters if you want to get a copy of the instance. Of course the reality is that what is in the information system is a description of a person, and we have just hit metonymy again.

fallacy of the beard

Bearing all that in mind, we can come up with a set of definitions for Competences and Credentials (the abstract things), Competence and Credential Definitions (descriptions of generic competences and credentials that may exist in information systems or elsewhere), and Competence Assertions and Credential Awards (associating instances of competences and credentials with other things).

One use of a credential award is to assert that an individual has achieved a certain competence, so is a credential just a competence assertion? Here we hit some of the issues raised in discussion: it was important to some people that institutions should be able to "offer competences" as distinct from "issuing credentials". I would rephrase that as making Competence Assertions without there being a Credential Award. This seemed tied to the idea that a credential related to a complete course or programme whereas a competence related to a part of the course.

In RDF we make assertions, so is the statement <Phil> <hasAbility> <ShoelaceTying> a competency assertion? If so, what more (if anything) do you need to have a Credential Award? We seem agreed that a Credential is somewhat more substantial and more formal, but the problem with that is that it is a judgement based on gradations along a continuum, not a clear distinction. That's not to say that credentials and competence assertions are not distinct. I am clean-shaven. If I don't shave tomorrow I won't have a beard; if I don't shave for one more day, or another after that, I still won't have one; so at what point would I have a beard? How many days I have to go unshaven before my stubble becomes a beard is impossible to define, but that does not mean that beards don't exist as a distinct category.
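A Turtle sketch of that distinction, using an entirely hypothetical ex: vocabulary and identifiers, might look like this: the bare assertion is a single triple, while the award is a thing in its own right that carries an issuer, a date and the credential awarded.

    @prefix ex: <https://example.org/terms/> .   # hypothetical vocabulary and identifiers

    # A bare competence assertion: one statement linking a person to a competence.
    ex:Phil ex:hasAbility ex:ShoelaceTying .

    # A credential award: something more substantial and formal.
    ex:award42 a ex:CredentialAward ;
        ex:credentialAwardedTo ex:Phil ;
        ex:credentialAwarded   ex:BScPhysics ;
        ex:awardedBy           ex:UniversityOfBristol ;
        ex:dateAwarded         "1990-07-01" .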

conclusion

If we acknowledge and identify the right abstract concepts, recognise that people in different contexts will understand jargon differently and take different approaches to what they consider the important parts of a model, and avoid metonyms, then I think we can make ourselves better understood.


Indiana Appathon Credential Data Learn and Build

This week I took part in the Credential Engine’s Indiana Appathon in Indianapolis. The Credential Engine is a registry of information about educational and occupational credentials (qualifications, if you prefer; or not, if you don’t) that can be earned, along with further information such as what they are useful for, what competencies a person would need in order to earn one and what opportunities exist to learn those competencies. Indiana is one state that is working with the Credential Engine to ensure that the credentials offered by all the state’s public higher education institutions are represented in the registry. About 70 people gathered in Indianapolis (a roughly equal split between Hoosiers and the rest of the US, plus a couple of Canadians and me) with the stated intentions of Learn and Build: learn about the data the Credential Engine has, how to add more and how to access what is there, and build ideas for apps that use that  data, showing what data was valuable and potentially highlighting gaps.

I was there as a consequence of my project work (supported by the Credential Engine) to represent Educational and Occupational Credentials in schema.org, with the aim of helping people understand the benefits of putting credentials on to the open web. Cue my chance to reuse here my pictures of how schema.org can act as a cross-domain unifying schema for linked data and how different domains link together from Education to HR.

I’ve been involved in a few events where the idea is to try to get people together to learn/discuss/make, and I know it is really difficult to get the right balance between structure and flexibility. Too much pre-planned activity and delegates don’t get to do what they want, too little and they are left wondering what they should be doing. So I want to emphasize how hugely impressed I was with the event organization and facilitation: Laura Faulkner and colleagues at Credential Engine and Sonya Lopes and team at Learning Tapestry did a great job. Very cleverly they gave the event a headstart with webinars in advance to learn about the aims and technology of the credential engine, and then in Indianapolis we had a series of  activities. On day one these were: cycling through quick, informal presentations in small groups to find out about the available expertise; demos of existing apps that use data from the Credential Engine; small group discussions of personas to generate use cases; generating ideas for apps based on these. On day two we split into some ‘developer’ groups who worked to flesh out some of these ideas (while the ‘publisher’ group did something else, learnt more about publishing data into the Credential Engine, I think,–I wasn’t in that group), before the developer groups presented their ideas to people from the publisher group in a round-robin “speed-dating” session, and then finally to the whole group.

I went wanting to learn more about what data and connections would be of value in a bigger ecosystem around credentials and for the more focussed needs of individual apps. This I did. The one app that I was most involved with surfaced a need for the Credential Engine to be able to provide data about old, possibly discontinued credentials, so that this information could be accessed by those wanting more details about a credential held by an individual. I think this is an important thing to learn for a project that has largely focussed on use cases relating to people wanting to develop their careers, who look forward to what credentials currently on offer are relevant to their aspirations. I (and others I spoke to) also noticed how many of the use cases and apps required information about the competencies entailed in the credentials, quite often in detail that related to the component courses of longer programs of study. This, and other requirements for fine detail about credentials, is of concern to the institutions publishing information into the Credential Engine's registry. Often they do not have all of that information accessible in a centrally managed location, and when they do it is maybe not at the level of detail, or in language, suitable for externally facing applications.

This problem of institutions supplying data about courses was somewhat familiar to me. It seems entirely analogous to the experiences of the Jisc XCRI-related programmes (for example, projects on making the most of course data and later projects on managing course-related data). I would love to say that from those programmes we now know how to provide this sort of data at scale and here's the product that will do it, but of course it's not that simple. What we do have are briefings and advice explaining the problem, some of the approaches that have been taken and the benefits that came from them: for example, Ruth Drysdale's overview Managing and sharing your course information and the more comprehensive guide Managing course information. My understanding from those projects is that they found benefits to institutions from a more coherent approach to their internal management of course data, and I hope that those supplying data to the Credential Engine might be encouraged by this. I also hope that the Credential Engine (or those around it who provide funding) might think about how we could create apps and services that help institutions manage their course data better, in such a way that benefits their own staff and incidentally provides the data the Credential Engine needs.

Finally, it was great to spend a couple of days in sunny Indianapolis, catching up with old friends, meeting in person some colleagues whom I had previously met only online, and doing some sightseeing. Many thanks to the Credential Engine for their financial assistance in getting me there.

[Photos: a brick building next to corporate towers (aspects of Indianapolis reminded me of SimCity 2000); the monument in the centre of town, a white column with statues; the original city design plan (plat) for Indianapolis; one of the indoor markets; and a fountain and war memorial. The sign said no loitering, but I loitered; that, in the background, is one of the biggest war memorials I have seen.]


Inclusion of Educational and Occupational Credentials in schema.org

The new terms developed by the EOCred community group that I chaired were added to the pending area in the April 2019 release of schema.org. This marks a natural endpoint for this round of the community group’s work. You can see most of the outcome  under EducationalOccupationalCredential. As it says, these terms are now “proposed for full integration into Schema.org, pending implementation feedback and adoption from applications and websites”. I am pretty pleased with this outcome.

Please use these terms widely where you wish to meet the use cases outlined in the previous post, and feel free to use the EOCred group to discuss any issues that arise from implementation and adoption.
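For anyone wanting a starting point, here is a minimal Turtle sketch of a credential description using some of the new terms; the identifier and values are invented for illustration.

    @prefix schema: <https://schema.org/> .

    # Invented identifier and values, for illustration only.
    <https://example.org/credentials/bsc-physics>
        a schema:EducationalOccupationalCredential ;
        schema:name "BSc (Hons) Physics" ;
        schema:credentialCategory "degree" ;
        schema:educationalLevel "Bachelor" ;
        schema:recognizedBy <https://example.org/org/accreditor> ;
        schema:competencyRequired "Apply Newtonian mechanics to simple systems" .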

My own attention is moving on to the Talent Marketplace Signaling community group, which is just kicking off (as well as continuing with LRMI and some discussions around Courses that I am having). One early outcome for me from this is a picture of how I see talent signalling requiring all of these linked together:

[Image: outline sketch of the Talent Signaling domain, with many items omitted for clarity; mostly but not entirely based on things already in schema.org.]


Talent marketplace signalling and schema.org JobPostings

For some time now I have been involved in the Data Working Group of the Jobs Data Exchange (JDX) project. That project aims to help employers and technology partners better describe their job positions and hiring requirements in a machine readable format. This will allow employers to send clearer signals to individuals, recruitment, educational and training organizations about the skills and qualifications that are in demand.  The data model behind JDX, which has been developed largely by Stuart Sutton working with representatives from the HR Open Standards body, leverages schema.org terms where possible. Through the development of this data model, as well as from other input, we have many ideas for guidance on, and improvements to the schema.org JobPosting schema. In order to advance those ideas through a broader community and feed them back to schema.org, we have now created the Talent Marketplace Signaling W3C Community Group.

In the long term I hope that the better expression of job requirements, in the same framework as can be used to describe qualifications and educational courses, will lead to better understanding and analysis of what is required and provided where, and to improvements in educational and occupational prospects for individuals.

About the Talent Marketplace Signaling Community Group

Currently, workforce signaling sits at the intersection of a number of existing schema.org types: Course, JobPosting, Occupation, Organization, Person and the proposed EducationalOccupationalCredential. The TalentSignal Community Group will focus initially on the JobPosting Schema and related types. I think the TalentSignal CG can help by:

  • providing guidance on how to use existing schema.org terms to describe JobPostings;
  • proposing refinements (e.g. improved definitions) to existing schema.org types serving the talent pipeline; and
  • suggesting new types and properties where improved signaling cannot otherwise be achieved.

I hope that the outcomes of this work will be discrete improvements to the JobPosting schema, e.g. small changes to definitions, changes to how things like competences are represented and linked to JobPostings, and guidance, probably on the schema.org wiki, about using the JobPosting schema to mark up job adverts. Of course, whatever the Community Group suggests, it's up to the schema.org steering group to decide whether they are adopted into schema.org, and then it's up to the search engines and other data consumers whether they make any use of the markup.

The thinking behind having a wider remit than the currently envisaged work is to avoid setting up a whole series of new groups every time we have a new idea (a lesson learnt from moving from LRMI, to course description, to educational and occupational credentials).

Call for participation

If you’ve read this far you must be somewhat interested  in this area of work, so why not join the TMS Community Group to show your support for the JDX and more broadly the need and importance for improved workforce signaling in the talent marketplace? You can join via pink/tan button on the Talent Signal CG web page. You will need to have a W3C account and to be signed in order to join (see the top right of the page to sign-in or join). The only restriction on joining is that you must give some assurances about the openness of the IPR of any contributions that you make. The outcomes of this work will feed into a specification that anyone can use, so there must be no hidden IPR restrictions in there.

The group is open to all stakeholders, so please feel free to share this information with your colleagues and network.

Disclosure

I’m being paid by the US Chambers of Commerce Federation to carry out this work. Thank you US CCF!
