Author Archives: Phil Barker

Thoughts on IEEE ILR⤴

from @ Sharing and learning

I was invited to present as part of a panel for a meeting of the  IEEE P 1484.2 Integrated Learner Records (ILR) working group discussing issues around the “payload” of an ILR, i.e. the description of what someone has achieved. For context I followed Kerri Lemoie who presented on the work happening in the W3C VC-Ed Task Force on Modeling Educational Verifiable Credentials, which is currently the preferred approach. Here’s what I said:

My interest in ILR comes through working with the Credential Engine on the Credential Transparency Description Language (CTDL) family of RDF vocabularies for describing educational credentials (qualifications) and many of the things like competences, assessments, learning opportunities, and education organizations, that are related to them. My relevant areas of expertise are interoperability, RDF data models and education data standards.

The areas I would like us to consider as we go forward are related to issues of scope, issues of maturity and technical issues. 

Issues of scope: we know the W3C VC approach (i.e. cryptographically signed assertions in JSON-LD) has certain desirable characteristics, but in previous calls we have heard of extensive, mature practice regarding learner records that does not make use of such an approach. Are these in scope? What is the use case for the VC approach that they might need?

Issues of maturity and velocity: Verifiable Credentials is a relatively new W3C Recommendation; the work of the W3C VC-Edu group looking at how it can be applied to education is ongoing (it’s progressing well, being very well led by Kim Hamilton Duffy and Anthony Camilleri, but I hope I do not offend anyone if I say it is not near a conclusion). There are few implementations of VCs in the education domain, and these are ongoing projects (progressing well but not conclusive). It is hard to recommend anything as best practice until you have evaluated the consequences of several options. We need to be especially careful when we say what “should” be done that we do not, by implication, say that the mature, tried, tested and working approaches (that I alluded to above) are somehow less good, or are somehow doing something that shouldn’t be done.

Technical issues: cryptographically signed assertions in JSON-LD. JSON-LD is a form of Linked Data: the @context block provides a mapping to an RDF model, so there has to be an RDF model.

You have to anticipate people taking the JSON-LD and using it as RDF, i.e. as a series of independent statements: triples in the form subject-predicate-object. This has implications because some things that work when you treat properties as “elements” embedded in an XML tree or as “fields” in a record do not work for predicates that link a subject to an object that is an independent entity: the data about the object should not assume any “context” from the subject that is linked to it. 

There is also the question of how far the RDF model goes into describing the payload: W3C VC provides the mechanism for verifiable credentials in JSON-LD, but we are talking about educational credentials as being the payload — this is confusing to many people. Not every credential issuing organization will be adopting VC as their credentialing approach — other approaches are still valid (PESC Transcripts, IMS CLR, secured PDFs with or without XMP metadata) — so we have to allow for credentials that are “opaque to RDF” (though not to verification) in the “payload”. I’m looking for a balance that exploits the advantages of RDF approaches to describing credentials (like CTDL) while still being capable of providing value to those who don’t use RDF.


Strings to things in context⤴

from @ Sharing and learning

As part of work to convert plain JSON records to proper RDF in JSON-LD I often want to convert a string value to a URI that identifies a thing (real world concrete thing or a concept).

Simple string to URI mapping

Given a fragment of a schedule in JSON

{"day": "Tuesday"}

As well as converting "day" to a property in an RDF vocabulary I might want to use a concept term for “Tuesday” drawn from that vocabulary. JSON-LD’s @context lets you do this: the @vocab keyword says what RDF vocabulary you are using for properties; the @base keyword says what base URL you are using for values that are URIs; the @id keyword maps a JSON key to an RDF property; and, the @type keyword (when used in the @context object) says what type of value a property should be; the value of @type that says you’re using a URI is "@id" (confused by @id doing double duty? it gets worse). So:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "@base": "http://schema.org/",
    "day": {
       "@id": "dayOfWeek",
       "@type": "@id"
    }
  },
  "day": "Tuesday"
}

Pop this into the JSON-LD playground to convert it into N-QUADS and you get:

_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .

Cool.
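
(If you’d rather script that than paste into the playground, here’s a minimal sketch using the PyLD library that should give the same N-Quads; the schedule.json filename is just for illustration.)

import json
from pyld import jsonld   # pip install PyLD

# load the JSON-LD example above (saved, say, as schedule.json)
with open('schedule.json') as f:
    doc = json.load(f)

# convert to N-Quads; the @context does the string-to-URI mapping
print(jsonld.normalize(
    doc, {'algorithm': 'URDNA2015', 'format': 'application/n-quads'}))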

What type of thing is this?

The other place where you want to use URI identifiers is to say what type/class of thing you are talking about. Expanding our example a bit, we might have

{
  "type": "Schedule",
  "day": "Tuesday"
}

Trying the same approach as above, in the @context block we can use the @id keyword to map the string value "type" to the special value "@type"; and, use the @type keyword with special value "@id" to say that the type of value expected is a URI, as we did to turn the string “Tuesday” into a schema.org URI. (I did warn you it got more confusing). So:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "@base": "http://schema.org/",
    "type": {
       "@id": "@type",
       "@type": "@id"    
    },
    "day": {
       "@id": "dayOfWeek",
       "@type": "@id"
    }
  },
  "type": "Schedule",
  "day": "Tuesday"
}

Pop this into the JSON-LD playground and convert to N-QUADS and you get

_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Schedule> .

As we want.

Mixing it up a bit

So far we’ve had just the one RDF vocabulary; say we want to use terms from a variety of vocabularies. For the sake of argument, let’s say that no one vocabulary is more important than another, so we don’t want to use @vocab and @base to set global defaults. Adding another term from a custom vocab into our example:

{ 
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "Phil" 
}

In the context we can set prefixes to use instead of full length URIs, but the most powerful feature is that we can use different @context blocks for each term definition to set different @base URI fragments. That looks like:

{
  "@context": {
    "schema": "http://schema.org/",
    "ex" : "http://my.example.org/",
    "type": {
       "@id": "@type",
       "@type": "@id",
       "@context": {
         "@base": "http://schema.org/"        
      }
    },
    "day": {
      "@id": "schema:dayOfWeek",
      "@type": "@id",
      "@context": {
         "@base": "http://schema.org/"        
      }
    },
   "onDuty": {
     "@id": "ex:onDuty",
       "@type": "@id",
       "@context": {
         "@base": "https://people.pjjk.org/"
      }
    }
  },
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "phil"
}

Translated by the JSON-LD playground, that gives:

_:b0 <http://my.example.org/onDuty> <https://people.pjjk.org/phil> .
_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://json-ld.org/playground/Schedule> .

Hmmm. The first two lines look good. The JSON keys have been translated to URIs for properties from two different RDF vocabularies, and their string values have been translated to URIs for things with different bases, so far so good. But, that last line: the @base for the type isn’t being used, and instead JSON-LD playground is using its own default. That won’t do.

The fix for this seems to be not to give the @id keyword for type the special value of "@type", but rather treat it as any other term from an RDF vocabulary:

{
  "@context": {
    "schema": "http://schema.org/",
    "ex" : "http://my.example.org/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "type": {
       "@id": "rdf:type",
       "@type": "@id",
       "@context": {
         "@base": "http://schema.org/"        
      }
    },
    "day": {
      "@id": "schema:dayOfWeek",
      "@type": "@id",
      "@context": {
         "@base": "http://schema.org/"        
      }
    },
   "onDuty": {
     "@id": "ex:onDuty",
       "@type": "@id",
       "@context": {
         "@base": "https://people.pjjk.org/"
      }
    }
  },
  "type": "Schedule",
  "day": "Tuesday",
  "onDuty": "phil"
}

Which gives:

_:b0 <http://my.example.org/onDuty> <https://people.pjjk.org/phil> .
_:b0 <http://schema.org/dayOfWeek> <http://schema.org/Tuesday> .
_:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Schedule> .

That’s better, though I do worry that the lack of a JSON-LD @type key might bother some.

Extensions and Limitations

The nested context for a JSON key works even if the value is an object: it can be used to specify the @vocab and @base and any namespace prefixes used in the keys and values of the value object. That’s useful if title in one object is dc:title and title in another needs to be schema:title.
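
A minimal sketch of that (the linking properties schema:mainEntityOfPage and schema:mainEntity are chosen purely for illustration): the same key, title, is mapped to dcterms:title in one object and to schema:title in the other.

{
  "@context": {
    "dcterms": "http://purl.org/dc/terms/",
    "schema": "http://schema.org/",
    "record": {
      "@id": "schema:mainEntityOfPage",
      "@context": { "title": "dcterms:title" }
    },
    "resource": {
      "@id": "schema:mainEntity",
      "@context": { "title": "schema:title" }
    }
  },
  "record": { "title": "mapped to dcterms:title" },
  "resource": { "title": "mapped to schema:title" }
}

Run that through the JSON-LD playground and the two title keys come out as two different predicates.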

Converting string values to URIs for things like this is fine if the string happens to match the end of the URI that you want. So, while I can change a JSON key "author" into the property URI <https://www.wikidata.org/prop/direct/P50>, I cannot change the value string "Douglas Adams" into <https://www.wikidata.org/entity/Q42>. For that I think you need to use something a bit more flexible, like RML, but please comment if you know of a solution to that!

Also, let me know if you think the lack of a JSON-LD @type keyword, or anything else shown above seems problematic.


JDX: a schema for Job Data Exchange⤴

from @ Sharing and learning

[This rather long blog post describes a project that I have been involved with through consultancy with the U.S. Chamber of Commerce Foundation.  Writing this post was funded through that consultancy.]

The U.S. Chamber of Commerce Foundation has recently proposed a modernized schema for job postings based on the work of HR Open and Schema.org, the Job Data Exchange (JDX) JobSchema+. It is hoped JDX JobSchema+ will not just facilitate the exchange of data relevant to jobs, but will do so in a way that helps bridge the various other standards used by relevant systems. The aim of JDX is to improve the usefulness of job data, including the signalling around jobs, by addressing such questions as: what jobs are available in which geographic areas? What are the requirements for working in these jobs? What are the rewards? What are the career paths? This information needs to be communicated not just between employers and their recruitment partners and to potential job applicants, but also to education and training providers, so that they can create learning opportunities that provide their students with skills that are valuable in their future careers. Job seekers empowered with greater quantity and quality of job data through job postings may secure better-fitting employment faster and for longer duration due to improved matching. Preventing wasted time and hardship may be particularly impactful for populations whose job searches are less well-resourced and those for whom limited flexibility increases their dependence on job details which are often missing, such as schedule, exact location, and security clearance requirements. These are among the properties that JDX gives employers the opportunity to include, so that they can be identified quickly and easily by all. In short, the data should be available to anyone involved in the talent pipeline. This broad scope poses a problem that JDX also seeks to address: different systems within the talent pipeline data ecosystem use different data standards, so how can we ensure that the signalling is intelligible across the whole ecosystem?

The starting point for JDX was two of the most widely used data standards relevant to describing jobs: the HR Open Standards Recruiting standard, part of the foremost suite of standards covering all aspects of the HR sector, and the schema.org JobPosting schema, which is used to make data on web pages accessible to search engines, notably Google’s Job Search. These, and an analysis of the information required around jobs, job descriptions and job postings, their relationships to other entities such as organizations, competencies, credentials, experience and so on, were modelled in RDF to create a vocabulary of classes, properties, and concept schemes that can be used to create data. The full data model, which can be accessed on GitHub, is quite extensive: the description of jobs that JDX enables goes well beyond what is required for a job posting advertising a vacancy. A subset of the full model comprising those terms useful for job postings was selected for pilot testing, and this is available in a more accessible form on the Chamber Foundation’s website and is documented on the Job Data Exchange website. The results of the data analysis, modelling and piloting were then fed back into the HR Open and schema.org standards that were used as a starting point.

This is where things start to get a little complicated, as it means JDX has contributed to three related efforts.

JobPostings in schema.org

The modelling and piloting highlighted and addressed some issues that were within schema.org’s scope of enabling the provision of structured data about job postings on the web. These were discussed through a W3C Community Group on Talent Marketplace Signalling, and the solutions were reconciled with schema.org’s wider model and scope as a web-wide vocabulary that covers many other types of things apart from Jobs. The outcomes include that schema.org/JobPosting has several new properties (or modifications to how existing properties are used) allowing for such things as: a job posting with more than one vacancy, a job posting with a specified start date, a job posting with requirements other than competencies — i.e. physical, sensory and security clearance requirements, and more specific information about contact details and location within the company structure for the job being advertised.

Because schema.org and JDX are both modelled in RDF as sets of terms that can be used to make independent statements about entities (rather than a record-based model such as XML documents) it was relatively easy to add terms to schema.org that were based on those in JDX. The only reason that the terms added to schema.org are not exactly the same as the terms in JDX JobSchema+ is that it was sometimes necessary to take into account already existing properties in schema.org, and the wider purpose and different audience of schema.org.

JDX in HROpen

As with schema.org, JDX highlighted some issues that are within the scope of the HROpen Standards Recruiting standard, and the aim is to incorporate the lessons learnt from JDX into that standard. However, the Recruiting standard is part of the inter-linked suite of specifications that HROpen maintains across all aspects of the HR domain, and these standards are in plain JSON, a record-based format specified through JSON-Schema files rather than RDF Schema. This makes integration of new terms and modelling approaches from JDX into HROpen more complicated than was the case with schema.org. As a first step the property definitions have been translated into JSON-Schema, and partially integrated into the suite of HROpen standards; however, some of the structures, for example for describing Organizations, were significantly different to how other HROpen standards treat the same types of entity, and so these were kept separate. The plan for the next phase is to further integrate JDX into the existing standards, enhance the use cases and documentation and include RDF, JSON Schema, and XML XSD.

JDX JobPosting+ RDF Schema

Finally, of course, JDX still exists as an RDF Schema, currently on github. The work on integration with HROpen surfaced some errors and other issues, which have been addressed. Likewise, feeding back into schema.org JobPosting means that there are new relationships between terms in JDX and schema.org that can be encoded in the JDX schema. There is also potential for other changes and remodelling as a result of findings from the JDX pilot of job postings. But given the progress made with integrating lessons learnt into schema.org and the HROpen Recruiting standard, what is the role of the RDF Schema compared to these other two?

Standard Strengths and Interoperability

Each of the three standards has strengths in its own niche. Schema.org provides a widely scoped vocabulary, mostly used for disseminating information on the open web. The most obvious consumers of data that use terms from schema.org are search engines trying to make sense of text in web pages, so that they can signal the key aspects of job postings with less ambiguity than can easily be done by processing natural text. Of course such data is also useful for any system that tries to extract data from webpages. Schema.org is also widely used as a source of RDF terms for other vocabularies, after all it doesn’t make much sense for every standard to define its own version of a property for the name of the thing being described, or a textual description of it—more on this below in the discussion of harmonization.

HROpen Standards are designed for system-to-system interoperability within the HR domain. If organization A and organization B (not to mention organizations C through to Z) have systems that do the same sort of thing with the same sort of data, then using an agreed standard for the data they care about clearly brings efficiencies by allowing for systems to be designed to a common specification and for organizations to share data where appropriate. This is the well understood driving force for interoperability specifications.

it is useful to have a common set of “terms” from which data providers can pick and choose what is appropriate for communicating different aspects of what they care about

But what about when two organizations are using the same sort of data for different things? For example, it might be that they are part of different verticals which interact with each other but have significant differences aside from where they overlap; or it might be that one organization provides a horizontal service, such as web search, across several verticals. This is where it is useful to have a common set of “terms” from which data providers can pick and choose what is appropriate for communicating different aspects of what they care about to those who provide services that intersect or overlap with their own concern. For example a fully worked specification for learning outcomes in education would include much that is not relevant to the HR domain and much that overlaps; furthermore HR and education providers use different systems for other aspects of their work: HR will care about integration with payroll systems, education about integration with course management systems. There is no realistic prospect that the same data standards can be used to the extent that the record formats will be the same; however with the RDF approach of entity-focused description rather than defining a single record structure, there is no reason why some of the terms that are used to describe the HR view of competency shouldn’t also be used to describe the education view of learning outcomes. Schema.org provides a broad horizontal layer of RDF terms that can be used across many domains; JDX provides a deeper dive into the more specific vocabulary used in jobs data.

Data Harmonization

This approach to allowing mutual intelligibility between data standards in different domains to the extent that the data they care about overlaps (or, for that matter, competing data standards in the same domain) is known as data harmonization. RDF is very much suited to harmonization for these reasons:

  • its entity-based modelling approach does not pre-impose the notion of data requirements or inter-relationships between data elements in the way that a record-based modelling approach does;
  • in the RDF data community it is assumed that different vocabularies of terms (classes and properties for describing aspects of a resource) and concepts (providing the means to classify resources) will be developed in such a way that someone can mix and match terms from relevant vocabularies to describe all the entities that they care about; and
  • as it is assumed that there will be more than one relevant vocabulary it has been accepted that there will be related terms in separate vocabularies, and so the RDF schema that describe these vocabularies should also describe these relationships.

JDX was designed in the knowledge that it overlaps with schema.org. For example, JDX deals with providing descriptions of organizations (who offer jobs) and with things that have names, and so does schema.org. It is not necessary for JDX to define its own class of Organizations or property of name; it simply uses the class and property defined by schema.org. That means that any data that conforms to the JDX RDF schema automatically has some data that conforms with schema.org. No need to extract and transform RDF data before loading it when the modelling approach and vocabularies used are the same in the first place.

Sometimes the match in terminology isn’t so good. At some point in the future we might, for example, be prepared to say that everything JDX calls a JobPosting is something that schema.org calls a JobPosting and vice versa. In this case we could add to the JDX schema a declaration that these are equivalent classes. In other cases we might say that some class of things in JDX form a subset of what schema.org has grouped as a class, in which case we could add to the JDX schema a declaration that the JDX class is a subclass of the schema.org class. Similar declarations can be made about properties.
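
As a sketch of what such declarations might look like (in Turtle; the jdx: namespace and the terms mapped here are placeholders for illustration, not the actual JDX schema):

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sdo:  <http://schema.org/> .
@prefix jdx:  <http://example.org/jdx/terms/> .  # placeholder namespace

# hypothetical: the two classes are declared equivalent
jdx:JobPosting owl:equivalentClass sdo:JobPosting .

# hypothetical: a narrower JDX class declared as a subclass
jdx:JobPostingTemplate rdfs:subClassOf sdo:JobPosting .

# similar declarations can be made about properties
jdx:jobTitle owl:equivalentProperty sdo:title .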

by querying the data provided about things along with information about relationships between the data terms used we can achieve interoperability across data provided in different data standards

The reason why this is useful is that RDF schema are written in RDF and RDF data includes links to the definitions of the terms in the schema, so data about jobs and organizations and all the other entities described with JDX can be in a data store linked to the definitions of the terms used to describe them. These definitions can link to other definitions of related terms all accessible for querying.  This is linked data at the schema level. For a long time we have referred to this network of data along with definitions, which were seen as sprawling across the internet, as the Semantic Web, but more recently it has been found to be useful for datastores to be more focused, and the result of data about a domain along with the schema for those data is now commonly known as a knowledge graph. What matters is the consequence that by querying the data provided about things along with information about relationships between the data terms used we can achieve interoperability across data provided in different data standards. If a query system knows that some data relates to what JDX calls a JobPosting (because the data links to the JDX schema), and that everything JDX calls a JobPosting schema.org also calls a JobPosting (let’s say this is declared in the schema) then when asked about schema.org  JobPostings the query system knows it can return information about JDX JobPostings. RDF data management systems do this routinely and, for the end user, transparently.
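
To make that concrete, a query along these lines (a sketch using only schema.org terms) would, with equivalence declarations like those sketched above loaded and inference enabled, also return postings described with the JDX terms:

PREFIX sdo: <http://schema.org/>

SELECT ?posting ?title WHERE {
  ?posting a sdo:JobPosting ;
           sdo:title ?title .
}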

That’s lovely if your data is in RDF; what if it is not? Most system-to-system interoperability standards don’t use RDF. This is the problem taken on by the  Data Ecosystem Schema Mapper (DESM) Tool. The approach it takes is to create local RDF schema describing the classes, properties and classifications used in these standards. The local RDF schema can assert equivalences between the RDF terms corresponding to each standard, or from each standard to an appropriate formal RDF vocabulary such as JDX.  Data can then be extracted from the record formats used and expressed as RDF using technologies such as the RDF Mapping Language (RML). This would allow us to build knowledge graphs that draw on data provided in existing systems, and query them without knowing what format or standard the data was originally in. For example, an employer could publish data in JSON using HR Open Standards’ Recruiting Standard. This data could be translated to the RDF representation of the standard created with the DESM Tool. Relationships expressed in the schema for the RDF representation would allow mapping of some or all of the data to JDX JobSchema+, schema.org JobPosting and other relevant standards. (The other standards may cover only part of the data, for example mapping skills requirements to standards used for competencies as learning objectives in the education domain.) This provides a route to translating data between standards that cover the same ground, and also provides data that can link to other domains.

Acknowledgements

Stuart Sutton, of Sutton & Associates, led the creation of the JDX JobSchema+ and originated many of the ideas described in this blog post.

Many thanks to people who commented on drafts of this post, including Stuart Sutton, Danielle Saunders, Jeanne Kitchens, Joshua Westfall, Kim Bartkus. Any errors remaining are my fault.

Writing this post was part of work funded by the U.S. Chamber of Commerce Foundation.


Harmonizing Learning Resource Metadata (even LOM)⤴

from @ Sharing and learning

The best interoperability is interoperability between standards. I mean it’s one thing for you and I to agree to use the same standard in the same way to do the same thing, but what if we are doing slightly different things? What if Dublin Core is right for you, but schema.org is right for me–does that mean we can’t exchange data? That would be a shame, as one standard to rule them all isn’t a viable solution. This has been exercising me through a couple of projects that I have worked on recently, and what I’ll show here is a demo based on the ideas from one of these (The T3 Data Ecosystem Mapping Project) applied to another, where learning resource metadata is available in many formats and desired in others. In this post I focus on metadata available as IEEE Learning Object Metadata (LOM) but wanted in either schema.org or DCAT.

The Problem

Interoperability in spite of diverse standards being used seems an acute problem when dealing with metadata about learning resources. It makes sense to use existing (diverse) schema for describing books, videos, audio, software etc., supplemented with just enough metadata about learning to describe those things when they are learning resources (textbooks, instructional videos etc.). This is the approach taken by LRMI. Add to this the neighbouring domains with which learning resource metadata needs to connect, e.g. research outputs, course management, learner records, curriculum and competency frameworks, job adverts…, all of which have their own standards ecosystems, and perhaps you see why interoperability across standards is desirable.

(Aside: it often also makes sense to use a large all-encompassing standard like schema.org, as well as more specialized standards, which is why LRMI terms are in schema.org.)

This problem of interoperability in an ecosystem of many standards was addressed by Mikael Nilsson in his PhD thesis “From Interoperability to Harmonization in Metadata Standardization”, where he argued that syntax wasn’t too important: what mattered more was the abstract model. Specifically he argued that interoperability (or harmonization) was possible between specs that used the RDF entity-based metamodel but less easy between specs that used a record-like metamodel. IEEE Learning Object Metadata is largely record-like: a whole range of different things are described in one record, and the meanings of many elements depend on the context of the element in the record, and sometimes the values of other elements in the same record. Fortunately it is possible to identify LOM elements that are independent characteristics of an identified entity, which means some LOM metadata can be represented in RDF. Then it is possible to map that RDF representation to terms from other vocabularies.

Step 1: RML to Map LOM XML to a local RDF vocabulary

RML is the RDF Mapping Language, “a language for expressing customized mappings from heterogeneous data structures and serializations to the RDF data model … and to RDF datasets”. It does so through a set of RDF statements in turtle syntax that describe the mapping from (in my case, here) XML fragments specified as XPath strings to subjects, predicates and objects. There is a parser called RMLMapper that will then execute the mapping to transform the data.

My LOM data came from the Lifewatch training catalogue which has a nice set of APIs allowing access to sets of the metadata. Unfortunately the LOM XML provided deviates from the LOM XML Schema in many ways, such  as element names with underscore separations rather than camel case (so element_name, not elementName) and some nesting errors, so the RML I produced won’t work on other LOM XML instances.

Here’s a fragment of the RML, to give a flavour. The whole file is on github, along with other files mentioned here:

 
<#Mapping> a rr:TriplesMap ;
  rml:logicalSource [
    rml:source "lifewatch-lom.xml" ;
    rml:referenceFormulation ql:XPath ;
    rml:iterator "/lom/*"
  ];
  rr:subjectMap [
    rr:template "http://pjjk.local/resources/{external_id}" ;
    rr:class lom:LearningObject
  ] ;
  rr:predicateObjectMap [
    rr:predicate lom:title;
    rr:objectMap [
      rml:reference "general/title/langstring";
      rr:termType rr:Literal;
    ]
  ] ;
 #etc...
 .

I have skipped the prefix declarations and jumped straight to the part of the mapping that specifies the source file for the data, and the XPath of the element to iterate over in creating new entities. The subjectMap generates an entity identifier using a non-standard element in the LOM record appended to a local URI, and assigns this a class. After that a series of predicateObjectMaps specify predicates and where in the XML to find the values to use as objects. Running this through the mapper generates RDF descriptions, such as:

<http://pjjk.local/resources/DSEdW5uVa2> a lom:LearningObject;
  lom:title "Research Game";
#etc...

Again I have omitted the namespaces; the full file, all statements for all resources, is on github.

Step 2: Describe the mappings in RDF

You’ll notice that lom: namespace in the mapping and generated instance data. That’s not a standard RDF representation of the IEEE LOM; it’s a local schema that defines the relationships of some of the terms mapped from IEEE LOM to more standard schema. The full file is on github, but again, here’s a snippet:

lom:LearningObject a rdfs:Class ;
  owl:equivalentClass sdo:LearningResource , lrmi:LearningResource ;
  rdfs:subClassOf dcat:Resource .

lom:title a rdfs:Property ;
  rdfs:subPropertyOf sdo:name ;
  owl:equivalentProperty dcterms:title .

This is where the magic happens. This is the information that later allows us to use the metadata extracted from LOM records as if it is schema.org or LRMI, Dublin Core or DCAT. Because this schema is used locally only I haven’t bothered to put in much information about the terms other than their mapping to other more recognized terms. The idea here isn’t to be able to work with LOM in RDF; the idea is to take the data from LOM records and work with it as if it were from well-defined RDF metadata schema. I also haven’t worried too much about follow-on consequences that may derive from the mappings that I have made, i.e. implied statements about relationships between terms in other schema, such as the implication that if lom:title is equivalent to dcterms:title, and also a subProperty of schema.org/name, then I am saying that dcterms:title is a subProperty of schema.org/name. This mapping is for local use: I’ll assert what is locally useful; if you disagree that’s fine because you won’t be affected by my local assertions.

Just to complete the schema picture, I also have RDF schema definitions files for Dublin Core Terms, LRMI, DCAT and schema.org.

(Aside: I also created some SKOS Concept Schemes for controlled vocabularies used in the LOM records, but they’re not properly working yet.)

Step 3: Build a Knowledge Graph

(Actually I just put all the schema definitions and the RDF representation of the LOM metadata into a triple store, but calling it a knowledge graph gets people’s attention.) I use a local install of Ontotext GraphDB (free version). It’s important when initializing the repository to choose a ruleset that allows lots of inferencing: I use the OWL-MAX option. Also, it’s important when querying the data to select the option to include inferred results.

[Image: SPARQL interface for GraphDB showing the option to include inferred data]

Step 4: Results!

The data can now be queried with SPARQL. For example, a simple query to check what’s there:

PREFIX lom: <http://ns.pjjk.local/lom/terms/>

SELECT ?r ?t { 
    ?r a lom:LearningObject ;
    lom:title ?t .  
}

This produces a list of URIs & titles for the resources:

r,t
http://pjjk.local/resources/DSEdW5uVa2,Research Game
http://pjjk.local/resources/FdW84TkcrZ,Alien and Invasive Species showcase
http://pjjk.local/resources/RcwrBMYavY,EcoLogicaCup
http://pjjk.local/resources/SOFHCa8sIf,ENVRI gaming
http://pjjk.local/resources/Ytb7016Ijs,INTERNATIONAL SUMMER SCHOOL Data FAIRness in Environmental & Earth Science Infrastructures: theory and practice
http://pjjk.local/resources/_OhX8O6YwP,MEDCIS game
http://pjjk.local/resources/kHhx9jiEZn,PHYTO VRE guidelines
http://pjjk.local/resources/wABVJnQQy4,Save the eel
http://pjjk.local/resources/xBFS53Iesg,ECOPOTENTIAL 4SCHOOLS 

Nothing here other than what I put in that was converted from the LOM XML records.

More interestingly, this produces the same:

PREFIX sdo: <http://schema.org/> 

Select ?r ?n { 
    ?r a sdo:LearningResource ;
    sdo:name ?n ;
}

This is more interesting because it’s showing query using schema.org terms yielding results from metadata that came from LOM records.

If you prefer your metadata in DCAT, with a little added LRMI to describe the educational characteristics, this:

PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX lrmi: <http://purl.org/dcx/lrmi-terms/>

CONSTRUCT {
  ?r a dcat:Resource ;
  dcterms:title ?t ;
  dcat:keywords ?k ;
  lrmi:educationalLevel ?l .
} WHERE {
  ?r a dcat:Resource ;
  dcterms:title ?t ;
  dcat:keywords ?k ;
  lrmi:educationalLevel ?l .
}

will return a graph such as this:

@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix lrmi: <http://purl.org/dcx/lrmi-terms/> .

<http://pjjk.local/resources/DSEdW5uVa2> a dcat:Resource ;
	dcterms:title "Research Game" ;
	dcat:keywords "competition" ;
	lrmi:educationalLevel <lomCon:/difficulty/Medium> ;
	dcat:keywords "european" , "gaming" , "research game" , "schools" .

<http://pjjk.local/resources/FdW84TkcrZ> a dcat:Resource ;
	dcterms:title "Alien and Invasive Species showcase" ;
	dcat:keywords "EUNIS habitat" ;
	lrmi:educationalLevel <lomCon:/difficulty/Medium> ;
	dcat:keywords "alien species" , "invasive species" .

<http://pjjk.local/resources/RcwrBMYavY> a dcat:Resource ;
	dcterms:title "EcoLogicaCup" ;
	dcat:keywords "game" ;
	lrmi:educationalLevel <lomCon:/difficulty/Medium> ;
	dcat:keywords "Italian" , "competition " , "ecology" , "school" .

That’s what I call interoperability in spite of multiple standards. Harmonization of metadata so that, even though the data started off as LOM XML records, we can create a database that can be queried and exported as if the metadata were schema.org, Dublin Core, LRMI, DCAT…

Acknowledgements:

This brings together work from two projects that I have been involved in. The harmonization of metadata is from the DESM project, funded by the US Chamber of Commerce Foundation, and the ideas coming from Stuart Sutton. The application to LOM, schema.org, DCAT+LRMI came about from a small piece of work I did for DCC at the University of Edinburgh as input to the FAIRsFAIR project.


Reading one of 25 years of EdTech⤴

from @ Sharing and learning

I enjoyed Martin Weller‘s blog post series on his 25 years of Ed Tech, and the book that followed, so when Lorna said that she had agreed to read the chapter on e-Learning Standards, and would I like to join her and make it a double act I thought… well, honestly I thought about how much I don’t enjoy reading stuff out loud for other people. But, I enjoy working with Lorna, and don’t get as many chances to do that as I would like, and so it happened.

I think the reading went well. You decide. Reading the definitions of the Dublin Core metadata element set, I learnt one thing: I don’t want to be the narrator for audiobook versions of tech standards.

And then there’s the “between the chapters” podcast interview, which Lorna and I have just finished recording with Laura Pasquini, which was fun. We covered a lot of the things that Lorna and I wanted to: that we think Martin was hard on Dublin Core Metadata, I think his view of it was tarnished by the IEEE LOM; but that we agree with the general thrust of what Martin wrote. Many EdTech Standards were not a success, certainly the experience that many in EdTech had with standards was not a good one. But we all learnt from the experience and did better when it came to dealing with OER (Lorna expands on this in her excellent post reflecting on this chapter). Also, many technical standards relevant to education were a success, and we use them every day without (as Martin says) knowing much about them. And there’s the thing: Martin probably should never have been in the position of knowing about Dublin Core, IEEE LOM and UK LOM Core; they should have just been there behind the systems that he used, making things work. But I guess we have to remember that back then there weren’t many Learning Technologists to go round and so it wasn’t so easy to find the right people to get involved.

We did forget to cover a few things in the chat with Laura.

We forgot how many elephants were involved in UK LOM Core.

We forgot “that would be an implementation issue”.

But my main regret is that we didn’t get to talk about #EduProg, which came about a few years later (the genesis story is on Lorna’s blog) as an analysis of a trend in Ed Tech that contrasted with the do-it-yourself-and-learn approach of EduPunk. EduProg was exemplified in many of the standards which were either “long winded and self-indulgent” or “virtuoso boundary pushing redefining forms and developing new techniques”, depending on your point of view. But there was talent there — many of the people behind EduProg were classically trained computer scientists. And it could be exciting. I for one will never forget Scott plunging a dagger into the keyboard to hold down the shift key while he ran arpeggios along the angle brackets. I hear it’s still big in Germany.

Thank you to Martin, Laura, Clint, Lorna and everyone who made the reading & podcast possible.

Added 5 Jan: here’s Lorna’s reflections on this recording.

[Feature image for this post, X-Ray Specs by @visualthinkery, is licenced under CC-BY-SA]


JSON Schema for JSON-LD⤴

from @ Sharing and learning

I’ve been working recently on defining RDF application profiles, defining specifications in JSON-Schema, and converting specifications from a JSON Schema to an RDF representation. This has led to me thinking about, and having conversations with people about, whether JSON Schema can be used to define and validate JSON-LD. I think the answer is a qualified “yes”. Here’s a proof of concept; do me a favour and let me know if you think it is wrong.

Terminology might get confusing: I’m discussing JSON, RDF as JSON-LD, JSON Schema, RDF Schema and schema.org, which are all different things (go and look them up if you’re not sure of the differences).

Why JSON-LD + JSON Schema + schema.org?

To my mind one of the factors in the big increase in visibility of linked data over the last few years has been the acceptability of JSON-LD to programmers familiar with JSON. Along with schema.org, this means that many people are now producing RDF based linked data often without knowing or caring that that is what they are doing. One of the things that seems to make their life easier is JSON Schema (once they figure it out). Take a look at the replies to this question from @apiEvangelist for some hints at why and how.

Also, one specification organization I am working with publishes its specs as JSON Schema. We’re working with them on curating a specification that was created as RDF and is defined in RDF Schema, and often serialized in JSON-LD. Hence the thinking about what happens when you convert a specification from RDF Schema to JSON Schema —  can you still have instances that are linked data? can you mandate instances that are linked data? if so, what’s the cost in terms of flexibility against the original schema and against what RDF allows you to do?

Another piece of work that I’m involved in is the DCMI Application Profile Interest Group, which is looking at a simple way of defining application profiles — i.e. selecting which terms from RDF vocabularies are to be used, and defining any additional constraints, to meet the requirements of some application. There already exist some not-so-simple ways of doing this, geared to validating instance data, and native to the W3C Semantic Web family of specifications: ShEx and SHACL. Through this work I also got wondering about JSON Schema. Sure, wanting to define an RDF application profile in JSON Schema may seem odd to anyone well versed in RDF and W3C Semantic Web recommendations, but I think it might be useful to developers who are familiar with JSON but not Linked Data.

Can JSON Schema define valid JSON-LD?

I’ve heard some organizations have struggled with this, but it seems to me (until someone points out what I’ve missed) that the answer is a qualified “yes”. Qualifications first:

  • JSON Schema doesn’t define the semantics of RDF terms. RDF Schema defines RDF terms, and the JSON-LD context can map keys in JSON instances to these RDF terms, and hence to their definitions.
  • Given definitions of RDF terms, it is possible to create a JSON Schema such that any JSON instance that validates against it is a valid JSON-LD instance conforming to the RDF specification.
  • Not all valid JSON-LD representations of the RDF will validate against the JSON Schema. In other words the JSON Schema will describe one possible serialization of the RDF in JSON-LD, not all possible serializations. In particular, links between entities in an @graph array are difficult to validate.
  • If you don’t have an RDF model for your data to start with, it’s going to be more difficult to get to RDF.
  • If the spec you want to model is very flexible, you’ll have difficulty making sure instances don’t flex it beyond breaking point.

But, given the limited ambition of the exercise, that is “can I create a JSON Schema so that any data it passes as valid is valid RDF in JSON-LD?”, those qualifications don’t put me off.

Proof of concept examples

My first hint that this seems possible came when I was looking for a tool to use when working with JSON Schema and found this online JSON Schema Validator.  If you look at the “select schema” drop down and scroll a long way, you’ll find a group of JSON Schema for schema.org. After trying a few examples of my own, I have a JSON Schema that will (I think) only validate JSON instances that are valid JSON-LD based on notional requirements for describing a book (switch branches in github for other examples).

Here are the rules I made up and how they are instantiated in JSON Schema.

First, the “@context” sets the default vocabulary to schema.org and allows nothing else:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "context.json",
  "name": "JSON Schema for @context using schema.org as base",
  "description": "schema.org is @base namespace, but others are allowed",
  "type": "object",
  "additionalProperties": false,
  "required": [ "@vocab" ],
  "properties": {
    "@vocab": {
      "type": "string",
      "format": "regex",
      "pattern": "http://schema.org/",
      "description": "required: schema.org is base ns"
    }
  }
}

This is super-strict: it allows no variations on "@context": {"@vocab": "http://schema.org/"}, which obviously precludes doing a lot of things that RDF is good at, notably using more than one namespace. It’s not difficult to create looser rules, for example mandate schema.org as the default vocabulary but allow some or any others. Eventually you create enough slack to allow invalid linked data (e.g. using namespaces that don’t exist; using terms from the wrong namespace) and I promised you only valid linked data would be allowed. In real life, there would be a balance between permissiveness and reliability.
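
For example, a minimal sketch of a looser rule, still requiring schema.org as the default vocabulary but allowing any other key in the @context to be a prefix mapped to a URI (the $id and name here are just illustrative):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "looser_context.json",
  "name": "JSON Schema for a looser @context",
  "description": "@vocab must be schema.org; other keys may map prefixes to URIs",
  "type": "object",
  "required": [ "@vocab" ],
  "properties": {
    "@vocab": {
      "type": "string",
      "format": "regex",
      "pattern": "^http://schema.org/$"
    }
  },
  "additionalProperties": {
    "type": "string",
    "format": "uri"
  }
}

Even this much slack allows prefixes that point at namespaces that don’t exist, which is the trade-off between permissiveness and reliability mentioned above.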

Rule 2: the book ids must come from wikidata:

{
 "$schema": "http://json-schema.org/draft-07/schema#",
 "$id": "wd_uri_schema.json",
 "name": "Wikidata URIs",
 "description": "regexp for Wikidata URIs, useful for @id of entities",
 "type": "string",
 "format": "regex",
 "pattern": "^https://www.wikidata.org/entity/Q[0-9]+" 
}

Again, this could be less strict, e.g. to allow ids to be any http or https URI.

Rule 3: the resource described is a schema.org/Book, for which the following fragment serves:

    "@type": {
      "name": "The resource type",
      "description": "required and must be Book",
      "type": "string",
      "format": "regex",
      "pattern": "^Book$"
    }

You could allow other options, and you could allow multiple types, maybe with one type mandatory (I have an example schema for Learning Resources which requires an array of type that must include LearningResource).
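
A sketch of how such a @type rule might look (my illustration, not necessarily how that example schema does it), using contains and const to insist that the array includes LearningResource:

    "@type": {
      "name": "The resource types",
      "description": "an array of types, one of which must be LearningResource",
      "type": "array",
      "minItems": 1,
      "items": { "type": "string" },
      "contains": { "const": "LearningResource" }
    }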

Rules 4 & 5: the book’s name and description are strings:

    "name": {
      "name": "title of the book",
      "type": "string"
    },
    "description": {
      "name": "description of the book",
      "type": "string"
    },

Rule 6, the URL for the book (i.e. a link to a webpage for the book) must be an http[s] URI:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "http_uri_schema.json",
  "name": "URI @ids",
  "description": "required: @id or url is a http or https URI",
  "type": "string",
  "format": "regex",
  "pattern": "^http[s]?://.+"
}

Rule 7, for the author we describe a schema.org/Person, with a wikidata id, a familyName and a givenName (which are strings), and optionally with a name and description, and with no other properties allowed:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "person_schema.json",
  "name": "Person Schema",
  "description": "required and allowed properties for a Person",
  "type": "object",
  "additionalProperties": false,
  "required": ["@id", "@type", "familyName", "givenName"],
  "properties": {
    "@id": {
      "description": "required: @id is a wikidata entity URI",
      "$ref": "wd_uri_schema.json"
    },
    "@type": {
      "description": "required: @type is Person",
      "type": "string",
      "format": "regex",
      "pattern": "Person"
    },
    "familyName": {
      "type": "string"
    },
    "givenName": {
      "type": "string"
    },
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    }
  }
}

The restriction on other properties is, again, simply to make sure no one puts in any properties that don’t exist or aren’t appropriate for a Person.

The subject of the book (the about property) must be provided as wikidata URIs, with optional @type, name, description and url; there may be more than one subject for the book, so this is an array:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "about_thing_schema.json",
  "name": "About Thing Schema",
  "description": "Required and allowed properties for a Thing being used to say what something is about.",
  "type": "array",
  "minItems": 1,
  "items": {
    "type": "object",
    "additionalProperties": false,
    "required": ["@id"],
    "properties": {
      "@id": {
        "description": "required: @id is a wikidata entity URI",
        "$ref": "wd_uri_schema.json"
      },
      "@type": {
        "description": "required: @type is from top two tiers in schema.org type hierarchy",
        "type": "array",
        "minItems": 1,
        "items": {
          "type": "string",
          "uniqueItems": true,
          "enum": [
            "Thing",
            "Person",
            "Event",
            "Intangible",
            "CreativeWork",
            "Organization",
            "Product",
            "Place"
          ]
        }
      },
      "name": {
        "type": "string"
      },
      "description": {
        "type": "string"
      },
      "url": {
        "$ref": "http_uri_schema.json"
      }
    }
  }
}

Finally, bring all the rules together, making the @context, @id, @type, name and author properties mandatory; about, description and url are optional; no others are allowed.

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "book_schema.json",
  "name": "JSON Schema for schema.org Book",
  "description": "Attempt at a JSON Schema to create valid JSON-LD descriptions. Limited to using a few schema.org properties.",
  "type": "object",
  "required": [
    "@context",
    "@id",
    "@type",
    "name",
    "author"
  ],
  "additionalProperties": false,
  "properties": {
    "@context": {
      "name": "JSON Schema for @context using schema.org as base",
      "$ref": "./context.json"
    },
    "@id": {
      "name": "wikidata URIs",
      "description": "required: @id is from wikidata",
      "$ref": "./wd_uri_schema.json"
    },
    "@type": {
      "name": "The resource type",
      "description": "required and must be Book",
      "type": "string",
      "format": "regex",
      "pattern": "^Book$"
    },
    "name": {
      "name": "title of the book",
      "type": "string"
    },
    "description": {
      "name": "description of the book",
      "type": "string"
    },
    "url": {
      "name":"The URL for information about the book",
      "$ref": "./http_uri_schema.json"
    },
    "about": {
      "name":"The subject or topic of the book",
      "oneOf": [
        {"$ref": "./about_thing_schema.json"},
        {"$ref": "./wd_uri_schema.json"}
      ]
    },
    "author": {
      "name":"The author of the book",
      "$ref": "./person_schema.json"
    }
  }
}

I’ve allowed the subject (about) to be given as an array of wikidata entity link/descriptions (as described above) or a single link to a wikidata entity, which hints at how similar flexibility could be built in for other properties.
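
For example, a sketch of how the author property could be given similar flexibility, accepting a single Person object, a bare wikidata URI, or an array of Person objects (this is illustrative, not part of the schema on github):

    "author": {
      "name": "The author of the book",
      "oneOf": [
        {"$ref": "./person_schema.json"},
        {"$ref": "./wd_uri_schema.json"},
        {
          "type": "array",
          "minItems": 1,
          "items": {"$ref": "./person_schema.json"}
        }
      ]
    }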

Testing the schema

I wrote a python script (running in a Jupyter Notebook) to test that this works:

from jsonschema import validate, ValidationError, SchemaError, RefResolver
import json
from os.path import abspath
schema_fn = "book_schema.json"
valid_json_fn = "book_valid.json"
invalid_json_fn = "book_invalid.json"
base_uri = 'file://' + abspath('') + '/'
with open(schema_fn, 'r') as schema_f:
    schema = json.loads(schema_f.read())
with open(valid_json_fn, 'r') as valid_json_f:
    valid_json = json.loads(valid_json_f.read())
resolver = RefResolver(referrer=schema, base_uri=base_uri)
try :
    validate(valid_json,  schema, resolver=resolver)
except SchemaError as e :
    print("there was a schema error")
    print(e.message)
except ValidationError as e :
    print("there was a validation error")
    print(e.message)
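
(The invalid_json_fn defined above isn’t actually used in that snippet; a quick follow-on check that the invalid example really is rejected might look like this:)

# follow-on check: the invalid example should fail validation
with open(invalid_json_fn, 'r') as invalid_json_f:
    invalid_json = json.loads(invalid_json_f.read())
try:
    validate(invalid_json, schema, resolver=resolver)
    print("unexpected: the invalid example passed validation")
except ValidationError as e:
    print("as expected, there was a validation error")
    print(e.message)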

Or more conveniently for the web (and sometimes with better messages about what failed), there’s the JSON Schema Validator I mentioned above. Put this in the schema box on the left to pull in the JSON Schema for Books from my github:

{
  "$ref": "https://raw.githubusercontent.com/philbarker/lr_schema/book/book_schema.json"
}

And here’s a valid instance:

{
  "@context": {
    "@vocab": "http://schema.org/"
  },
  "@id": "https://www.wikidata.org/entity/Q3107329",
  "@type": "Book",
  "name": "Hitchhikers Guide to the Galaxy",
  "url": "http://example.org/hhgttg",
  "author": {
    "@type": "Person",
    "@id": "https://www.wikidata.org/entity/Q42",
    "familyName": "Adams",
    "givenName": "Douglas"
  },
  "description": "...",
  "about": [
    {"@id": "https://www.wikidata.org/entity/Q3"},
    {"@id": "https://www.wikidata.org/entity/Q1"},
    {"@id": "https://www.wikidata.org/entity/Q2165236"}
  ]
}

Have a play, see what you can break; let me know if you can get anything that isn’t valid JSON-LD to validate.


LRMI Metadata in use⤴

from @ Sharing and learning

There was no Dublin Core conference this year, but there was the DCMI Virtual Event over the second half of September, and for the last session of that I hosted a panel session on LRMI Metadata in use. The recordings for many of the sessions are now available, including our LRMI panel.

Further Details

We had four presentations, each fifteen minutes long, and a discussion at the end. Here’s an index for the recording and links to further resources mentioned. The four presentations were:

The discussion follows.

I have linked the title of each presentation to its starting point in the YouTube video.

About LRMI

You can find many of the LRMI properties in schema.org under the LearningResource type, but you can also find LRMI Specifications on the DCMI website, including the version 1.1 of the specification as it was handed to DCMI, the terms in RDF (currently being updated), and concept schemes to provide controlled vocabularies for the values for some of those terms.

The DCMI LRMI Task Group curates LRMI and liaises with other standards bodies that use the properties (notably schema.org). We meet the first Tuesday of every month. To join us, simply join the task group Mailing list.

If you’re interested in LRMI, join the google group for occasional updates about the spec, or to ask questions of the wider community.

Reflections

I’ve not had any formal feedback from attendees, but I felt it went well and kind people are telling me similar (or not saying anything). A colleague on LRMI made a useful comment, that we should have had a few questions for the audience at the start that would have helped us understand their background. That would help us gauge whether we had got the technical level of the content right.

I’m quite happy with the structure of the session, starting with design principles for LRMI as a whole, looking at how it has been applied to the description of learning resources, and finally how it has been applied to a search solution. I hope there is enough of a logical flow there to give some coherence to what is a fairly long session. I was also happy with the discussion session afterwards, and the overall balance of how we used the time. Hopefully the recording and slides will be useful resources themselves, and in our latest LRMI call we agreed that it would be useful to repeat this type of presentation for other audiences (though I hope not as a manel). Many thanks to the presenters and others who took part.

Finally, I had a small role in organizing the event as a whole, enough to recognise how much effort Paul Walk and the other people on the organizing committee put in to making this a successful event–thank you.


HeyPressto, a conference on Twitter⤴

from @ Sharing and learning

Pat Lockley (of the pedagogical and technical outfitters Pgogy Webstuff) and I did a thing last week: HeyPressto, a WordPress and ClassicPress conference that happened only on Twitter. That’s right, a conference on Twitter: presentations were a series of 15 tweets, one per minute with the conference hashtag, in a scheduled time slot. Adding images, gifs or links to the tweets allows presenters to go into a bit more depth than Twitter’s character limit would suggest. Replies to tweets, and other forms of engagement, allow discussion to develop around the issues raised. It was also semi-synchronous–or asynchronous after the event if you like: the tweets persist, they can be revisited, and engagement can continue. One way in which the tweets persist is that Pat turned all the presentations into Twitter Moments, so the first thing you should do if you missed the event is go to the schedule page and look at some of the presentations linked from it.

Ethos

“We want to be the best we can” was a phrase Pat used in describing our efforts.

We wanted to be inclusive. Having the conference on Twitter facilitates this by removing financial, geographic and logistical obstacles to participation. However, we know that Twitter’s not a great place for people from many groups, and so we felt that a code of conduct was important, even though it is debatable who has the authority to set and enforce such a code for an open-participation conference. Our code came originally from the Open Code of Conduct from the TODO Group, and has been through a few other communities (such as #FemEdTech and Open Scotland), so thank you to them for providing a broader basis than we could manage ourselves. We also set ourselves goals for accessibility and privacy that I hope we lived up to. I was pleased that these were noticed and commented on more than once; I think they set a standard if nothing else.

We wanted participation from all around the world, and not just in English. It was clear that we wouldn’t deserve this if the call was only in English, so it had to be translated. It wasn’t easy to choose which languages to translate into: we looked at which are the most widely spoken languages, but also tried to take into account which languages were spoken by the people furthest from justice. We were half successful: we had presentations from India, Africa, Europe and North America, but not in the proportions we would have liked, and all in English. It’s hard to get out of your bubble; to do this properly we would have to start with more diversity on the organizing side.

We wanted to value people’s work and the environment. We paid for what we used (translations, for example). Pat found the susty theme, which is incredibly lightweight and so easier on the carbon footprint, with the bonus that it is lightning fast (a cache actually slowed it down). We’ll be planting some trees to offset the carbon that we did use.

Organization

This isn’t the first conference Pat has done on Twitter: he and Natalie Lafferty have run the very successful PressEd conference on WordPress in Education for three years, and indeed it was my helping out a little on last year’s PressEd that got Pat and me talking about HeyPressto. As well as experience of all the things that need doing, Pat has a WordPress plugin for running the conference that manages much of the submission process, communication with authors, scheduling, and the creation of pages for each presentation. I found myself trying to be as useful as I could be around the core activities that Pat had already sussed.

Starting with initial discussions in April, we chose a name (a few hours of discussion, and then Pat’s partner coming up with the right name in an instant), set up social media accounts (Twitter mostly, obvs), and set up a domain (thank you Reclaim), an email address, and a Ko-fi account. Pat’s artistic skills gave us great visuals and our mascot, Hopful Bunny. We drafted the call for proposals and spent quite a while working out which languages to translate it into. The call went out at the end of July.

It’s hard getting a Twitter thing going from zero to conference in a couple of months. Our networks helped–thank you friends, it wouldn’t have happened without your support, amplification and participation–but one of the things that we wanted to do was to reach beyond our own social, cultural and geographic bubbles. We got picked up by a couple of podcast channels, so you can hear us talking about HeyPressto on Radio EduTalk and the Sentree blog. Thank you, John and Michelle. (I think these were the first podcast or internet radio things I have been on; I’ve been pretty good at refusing them until now, and they were nothing like as stressful as I had feared.) We picked up followers, and some who supported us quite actively. I want to give a special shout out to @getClassicPress, because the engagement from the ClassicPress community strongly contrasted with the blanking from the big noises in the WordPress community.

Perhaps because we were going outside our own community, there were quite a few folk who didn’t seem to get what we were doing. We listened and redoubled our efforts to explain, provide examples, and respond to feedback on what was confusing. I’m proud of our efforts to fix things that weren’t clear.

The rest was smooth running. Proposals came in steadily. Presentations were scheduled. Advice given to presenters who were unfamiliar with the format. We continued to promote and made new friends who boosted our message. Time zones were a problem. Introductory Tweets were scheduled. Presentations were presented (mostly at the right time), and I think the day went really quite well.

Personal Highlights

I’ve mentioned some of my highlights in the process described above. I’d also like to call out a few of the presenters that I personally appreciated, without prejudice to the others who also did a great job:

Jan Koch, our opening presenter, did a fine presentation but backed it up with a video version of great professionalism. Superb effort, Jan.

Frances Bell and Lorna M Campbell gave a great presentation on #FemEdTech that absolutely hit the spot on a number of problematic issues such as being equitable, accessible & inclusive in a place that can be “driven by algorithms & plagued by bots”.

Chris Aldrich’s presentation gave a name and coherence to something that I have long wanted to try, and provided enough hints on how to do it that I might yet get there.

Many presenters (too many to list, it turns out, without just reproducing the schedule page) gave really useful hints on approaches they use, or stories of their experience in implementing them, and inspiration for what I want to try next.

Speaking of what I want to try next, the ClassicPress presentations (roughly covering “why” & “how”) deserve special mention for their engagement with the baffling thing that is a conference on Twitter.

Finally, I think the most important presentation of the day was from Ronald Huereca on Mental Illness in Tech.
