Tag Archives: ethics

The Human Element in Moderation: A Journey from Process to Educational Ethic⤴


Personal Prelude 


When I first started working in education, the term for ensuring consistent assessment was 'verification.' It felt precise, almost clinical. “Internal Verification” seemed positively unpleasant. Later, the terminology shifted to 'moderation’, a word that, for me, always carried a hint of something more nuanced, perhaps even something human. Yet, as I delved into the practicalities of the process, particularly in vocational education assessment, I often found myself looking for that 'human element' amidst the checklists and procedures. After all, nobody gets into education simply to fill in paperwork; we want to help and bring on the next generation.


However, despite being more comfortable with the name of the process, a vital question lingered for me: is moderation, however robust, just a process? All too often, moderation—despite its inherent good intentions—can get bogged down in a bureaucratic morass. When this happens, the paperwork and procedures (the means) inadvertently become the end themselves, losing sight of the genuine educational benefits. My argument, however, is that by actively focusing on the broader "educational good" that moderation serves, the very nature of this "box-ticking" process can itself be transformed, becoming an essential vehicle for positive change. It was this search, this personal reflection on the nature of our collective work, that led me to consider how moderation mirrors ideas of reflective practice – not just individual reflection, but a powerful form of group reflection – and, from there, broader philosophical concepts via Aristotle’s Nicomachean Ethics, and the idea that moderation, particularly through its feedback loops, can truly be a route to 'the good' – becoming an educational ethic rather than merely an education management function. This active potential of moderation is crucial; when it degrades into a mere administrative exercise, it fails to grasp its own transformative power.

Introduction

This is not a ‘how to’ post. There are hundreds of those just a Google away. Instead, I want to begin by considering how moderation, both diachronically and synchronically, can improve standards. In a sense, I am considering what moderation does (or could do) before progressing to consider what moderation is, and why, in order to capture its transformative power, moderation has to break away from being a closed managerial function.


Moderation: The 3-Way Pivot for Continuous Improvement

Moderation, particularly of an unsuccessful individual candidate's performance, extends far beyond that individual. It serves as a three-way pivot for continuous improvement, directing feedback and insights across three crucial types of standards: educational, performance and assessment. But beyond this three-way pivot within the standards domain, moderation also critically pivots to the foundational educational principles of validity, reliability and fairness, underscoring its role as a truly remarkable and vitally important procedure.

When moderation highlights common areas of weakness across multiple components of an assessment, or specific deficiencies in an apprentice’s performance, this feedback is invaluable for the college. It can inform improvements in teaching methodologies, the curriculum’s emphasis on certain topics, and the overall instructional design. This continuous feedback loop is essential for maintaining and elevating educational standards over time. It is feedback now for improved performance tomorrow, ensuring that the curriculum remains current, that teaching methods are effective, and that the emphasis placed on certain features of the course genuinely leads to the required levels of competence.

The direct feedback given to the apprentice about their unsuccessful performance, clearly articulated, becomes a vital part of their preparation for a resit or retake. It pinpoints exactly where they went wrong and what specific skills or knowledge need further development. But the feedback that moderation generates to improve educational standards also pays dividends when colleges offer remedial training for candidates before they retake the assessment. Effective moderation thus establishes what might be called a two-track feedback model, which helps improve the performance standards associated with the assessment.

This notion of moderation providing different routes to improvement can also be found in the way that moderation can be utilised to improve assessment standards. Moderation inherently scrutinizes the assessment itself. If the moderation process reveals ambiguities in the assessment task, inconsistencies in marking guides, or issues with the clarity of instructions, this feedback directly improves the quality and fairness of future assessments. This ensures that the assessment accurately measures the intended learning outcomes.

Crucially, the internal verifier’s role in monitoring patterns of mistakes across different apprentices in the same assessment is key to prompting reflective action on educational and assessment standards. By reviewing these patterns, perhaps on a quarterly basis and reporting them at pre-arranged, standardised meetings or consortia, assessors and college leaders can determine whether an apprentice’s mistakes are truly individual or, more significantly, evidence of underlying weaknesses in the training provided or the assessment design itself. There will always be a requirement for full-scale reviews of the assessment and the assessment process, but moderation provides continuous professional monitoring of the standards. This systematic analysis transforms individual assessment outcomes into valuable data for continuous improvement across the entire vocational programme. To borrow an electrical metaphor, moderation is the equivalent of ongoing maintenance, in contrast to periodic inspections.
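To make concrete how individual outcomes can become cohort-level data, here is a minimal, purely illustrative sketch in Python. Everything in it (the record fields, the component names, the cohort size and the 25% threshold) is a hypothetical assumption rather than part of any awarding body's actual process; the point is simply that counting where errors cluster is what separates an individual slip from a candidate systemic weakness.

```python
from collections import Counter

# Hypothetical moderation log: one entry per error flagged at verification,
# recording which apprentice made it and in which assessment component.
error_log = [
    {"apprentice": "A01", "component": "safe isolation"},
    {"apprentice": "A04", "component": "safe isolation"},
    {"apprentice": "A07", "component": "cable selection"},
    {"apprentice": "A09", "component": "safe isolation"},
]

COHORT_SIZE = 12   # apprentices sampled this quarter (assumed)
THRESHOLD = 0.25   # error rate above which a pattern looks systemic (assumed)

def flag_systemic_weaknesses(log, cohort_size, threshold):
    """Return components whose cohort-wide error rate suggests a weakness in
    training or assessment design rather than an individual slip."""
    counts = Counter(entry["component"] for entry in log)
    return {
        component: count / cohort_size
        for component, count in counts.items()
        if count / cohort_size >= threshold
    }

if __name__ == "__main__":
    # 3 of 12 apprentices tripping up on 'safe isolation' (25%) would be worth
    # raising at the quarterly standardisation meeting; one-off errors would not.
    print(flag_systemic_weaknesses(error_log, COHORT_SIZE, THRESHOLD))
```

In practice, of course, the judgement about what counts as systemic is made by assessors and internal verifiers in discussion rather than by a numeric threshold; the sketch is only meant to show the kind of pattern the quarterly review is looking for.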


Beyond Apprentice Performance: Moderating Assessor Performance

While the initial verification process focuses on the apprentice’s performance, a truly comprehensive moderation strategy must also consider assessor performance. This requires a sensitive and supportive approach, particularly given industrial relations considerations.
The best way to address assessor performance within moderation is through a collaborative decentralisation of the moderation process.

Assessors should regularly engage in peer review of each other’s judgments. This involves assessors critically examining each other’s marking and feedback against agreed standards. This collaborative approach fosters a shared understanding of criteria and helps to identify unconscious biases or inconsistencies in application without being punitive. It also, because no one is perfect, instils a degree of professional humility. Indeed, moderation is the antidote to perfectionism.


This, in turn, provides opportunities for the professional development of assessors. When patterns in assessor marking are identified, these become opportunities for targeted training, workshops, or one-on-one support, rather than criticism or disciplinary action. The goal is to enhance their understanding of assessment practices and standards.

Education Scotland emphasizes that "engaging in the moderation process with colleagues will assist you in arriving at valid and reliable decisions on learners' progress" and promotes a "shared understanding of standards and expectations" among practitioners across all sectors. This aligns with the collaborative and transparent principles of moderation.

Moderation meetings, especially those involving the internal verifier, should include calibrated discussions where assessors can collectively review samples of work and discuss their rationale for marks and feedback. This open dialogue helps to align individual interpretations with the shared understanding of standards.

Ultimately, what emerges from these best practices is that moderation is effectively professional group reflection. It's a structured and collaborative process where assessors, internal verifiers, and indeed the entire vocational education institution engage in a cycle of learning and improvement. Much like theories of reflective practice, moderation moves beyond simply "doing" assessment to actively "reflecting on" and "reflecting in" the practice of assessment itself. It allows educators to collectively scrutinize their judgements, challenge assumptions, identify systemic issues in training or assessment design, and continually refine their approach to ensure that every apprentice receives fair, consistent, and high-quality assessment.


Moderation as Ethic

At the beginning of this post, I mentioned how the change from verification to moderation suggested a clear Aristotelian ethic. Aristotle saw "the good" as the aim of all human activity (our telos), achieved through virtuous practice and the pursuit of excellence. In this light, moderation in vocational education can be seen not merely as an administrative process, but as a route to "the good" in education itself.

This pursuit of 'the good' finds a tangible parallel in industry standards. BS 7671 states that "good workmanship shall be used" (Regulation 134.1.1). But it's worth considering not just what the Wiring Regulations state, but where they state it. The positioning in the very first part of the book underscores that "good" is a pervasive standard – a responsibility woven into the very fabric of electrical installation work. It is there because it is only by aiming at the good from the very beginning that we might ultimately hope to arrive there.

The Regulation’s simple yet powerful directive applies not just to the apprentice or qualified electrician, but extends as a guiding principle to the assessor and the organisation that the assessor belongs to. It underscores that the pursuit of "the good" is fundamentally what connects the various components of vocational education. Just as in electrical installation, so too is the pursuit of the good a responsibility woven into the very fabric of training, assessment, and educational professional practice.

It's almost possible to regard moderation as the single most important process in the whole field of learning and assessment. It is the process that ensures that teaching, learning and assessing operate at their most effective. When moderation is practised with integrity and a focus on continuous improvement, it cultivates:
 
  • A Good Educational Standard: Ensuring that what is taught at college truly equips apprentices with the necessary knowledge and skills.
  • A Good Assessment: Guaranteeing that assessments accurately and fairly measure competence, providing a clear pathway for learners to demonstrate their abilities.
  • A Good Performance: Supporting apprentices to develop the practical skills and theoretical understanding required to excel in their chosen trade.

These elements all combine to produce "good electricians" who are not only technically proficient but also ethically grounded in their practice. In this sense, moderation, particularly through its two-track feedback mechanisms (direct feedback from the assessor and systemic insights via the college), elevates itself beyond a management function to become a profound educational ethic – a commitment to excellence and the holistic development of competent, skilled, and responsible individuals in the workforce.

Sources & Further Reading:


Aristotle, tr. D. Ross (2009) Nicomachean Ethics, Oxford World's Classics, Oxford: Oxford University Press

BS 7671:2018+A2:2022 (2022) Requirements for Electrical Installations, IET Wiring Regulations, Eighteenth Edition, London: IET

Image Credit


Aristotle, photographer Nick Thompson, Flickr, Uploaded on March 31, 2012, https://www.flickr.com/photos/pelegrino/6884873348, CC-BY-NC-SA 2.0


People and trust first, technology second⤴

from @ education

After a productive early morning call with my excellent OSI colleagues and a satisfying burst of administrivia deck-clearing, I made a second cup of coffee and settled down to read this morning's HESA blog post from Alex Usher. Today he was summarising his thoughts from … Continue reading People and trust first, technology second

ALT Winter Summit on Ethics and Artificial Intelligence⤴


Last week I joined the ALT Winter Summit on Ethics and Artificial Intelligence. Earlier in the year I was following developments at the interface between ethics, AI and the commons, which resulted in this blog post: Generative AI: Ethics all the way down.  Since then, I’ve been tied up with other things, so I appreciated the opportunity to turn my attention back to these thorny issues.  Chaired by Natalie Lafferty, University of Dundee, and Sharon Flynn, Technological Higher Education Association, both of whom have been instrumental in developing ALT’s influential Framework for Ethical Learning Technology, the online summit presented a wide range of perspectives on ethics and AI, both practical and philosophical, from scholars, learning technologists and students.

Whose Ethics? Whose AI? A relational approach to the challenge of ethical AI – Helen Beetham

Helen Beetham opened the summit with an inspiring and thought-provoking keynote that presented the case for relational ethics. Positionality is important in relational ethics; ethics must come from a position, from somewhere. We need to understand how our ethics are interwoven with relationships and technologies. The ethics of AI companies come from nowhere. Questions of positionality and power engender the question “whose artificial intelligence”?  There is no definition of AI that does not define what intelligence is. Every definition is an abstraction made from an engineering perspective, while neglecting other aspects of human intelligence.  Some kinds of intelligence are rendered as important, as mattering, others are not. AI has always been about global power and categorising people in certain ways.  What are the implications of AI for those that fall into the wrong categories?

Helen pointed out that DARPA have funded AI intensively since the 1960s, reminding me of many learning technology standards that have their roots in the defence and aeronautical industries.

A huge amount of human refinement is required to produce training data for these models; this is the black box of human labour, mostly involving labourers in the global south.  Many students are also working inside the data engine in the data labelling industry. We don’t want to think about these people because it affects the magic of AI.

At the same time, tools are being offered to students to enable them to bypass AI detection, to "humanise" the output of AI tools.  The "sell" is productivity, that this will save students’ time, but who benefits from this productivity?

Helen noted that the terms “generative”, “intelligence”, and “artificial” are all very problematic and said she preferred the term “synthetic media”.  She argued that it’s unhelpful to talk about the skills humans need to work alongside AI, as these tools have no agency, they are not co-workers. These approaches create new divisions of labour among people, and new divisions about whose intelligence matters. We need a better critique of AI literacy and to think about how we can ask questions alongside our students. 

Helen called for universities to share their research and experience of AI openly, rather than building their own walled gardens, as this is just another source of inequity.  As educators we hold a key ethical space.  We have the ingenuity to build better relationships with this new technology, to create ecosystems of agency and care, and empower and support each other as colleagues.

Helen ended by calling for spaces of principled refusal within education. In the learning of any discipline there may need to be spaces of principled refusal; this is a privilege that education institutions can offer.

Developing resilience in an ever-changing AI landscape ~ Mary Jacob, Aberystwyth University

Mary explored the idea of resilience and why we need it. In the age of AI we need to be flexible and adaptable, we need an agile response to emerging situations, critical thinking, emotional regulation, and we need to support and care for ourselves and others. AI is already embedded everywhere, we have little control over it, so it’s crucial we keep the human element to the forefront.  Mary urged us to notice our emotions and think critically, bring kindness and compassion into play, and be our real, authentic selves.  We must acknowledge we are all different, but can find common ground for kindness and compassion.  We need tolerance for uncertainty and imperfection and a place of resilience and strength.

Mary introduced Aberystwyth’s AI Guidance for staff and students and also provided a useful summary of what constitutes AI literacy at this point in time.

Mary Jacob's AI Literacy

Achieving Inclusive education using AI – Olatunde Duruwoju, Liverpool Business School

Tunde asked us how we address gaps in equity and inclusion.  Time and workload are often cited as barriers that prevent these issues from being addressed; however, AI can help reduce these burdens by improving workflows and capacity, which in turn should help enable us to achieve inclusion.

When developing AI strategy, it’s important to understand and respond to your context. That means gathering intersectional demographic data that goes beyond protected characteristics.  The key is to identify and address individual students’ issues, rather than just treating everyone the same. Try to understand the experience of students with different characteristics.  Know where your students are coming from and understand their challenges and risks; this is fundamental to addressing inclusion.

AI can be used in the curriculum to achieve inclusion.  For example, AI can be helpful for international students who may not be familiar with specific forms of assessment. Exams trigger anxiety, so how do we use AI to move away from exams?

Olatunde Duruwoju - Think intersectionality

AI Integration & Ethical Reflection in Teaching – Tarsem Singh Cooner

Tarsem presented a fascinating case study on developing a classroom exercise for social work students on using AI in practice.  The exercise drew on the Ethics Guidelines on Reliable AI from the European Group on Ethics, Science and New Technologies and mapped this against the Global Social Work Ethical Principles.

Tarsem Singh Cooner - comparison of Principles on Reliable AI  and Global Social Work Ethical Principles

The assignment was prompted by the fact that practitioners are using AI to uncritically write social work assessments and reports. Should algorithms be used to predict risk and harm, given they encode race and class bias? The data going into the machine is not benign and students need to be aware of this.

GenAI and the student experience – Sue Beckingham, Louise Drum, Peter Hartley & students

Louise highlighted the lack of student participation in discussions around AI. Napier University set up an anonymous padlet to allow students to tell them what they thought. Most students are enthusiastic about AI. They use it as a dialogue partner to get rapid feedback. It’s also helpful for disabled and neurodivergent students, and those who speak English as a second language, who use AI as an assistive technology.  However students also said that using AI is unfair and feels like cheating.  Some added that they like the process of writing and don’t want to lose that, which prompted Louise to ask whether we’re outsourcing the process of critical thinking.  Louise encouraged us to share our practice through networks, adding that collaboration and cooperation is key and can lead to all kinds of serendipity.

The students provided a range of different perspectives:

Some reported conflicting feelings and messages from staff about whether and how AI can be used, or whether it’s cheating.  Students said they felt they are not being taught how to use AI effectively.

GCSEs and the school system just don’t work for many students, not just neurotypical ones; it’s all about memorising things.  We need more skills-based learning rather than outcome-based learning.

Use of AI tools echoes previous concerns about the use of the internet in education. There was a time when there was considerable debate about whether the internet should be used for teaching & learning.

AI can be used to support new learning. It provides on hand personal assistance that’s there 24/7.  Students create fictional classmates and partners who they can debate with.  A lot of it is garbage but some of it is useful. Even when it doesn’t make sense, it makes you think about other things that do make sense.

A few thoughts…

As is often the case with any new technology, many of the problematic issues that AI has thrown up relate less to the technology itself, and more to the nature of our educational institutions and systems.  This is particularly true in the case of issues relating to equity, diversity and inclusion: whose knowledge and experiences are valued, and whose are marginalised?

It’s notable that several speakers mentioned the use of AI in recruitment. Sue Beckingham noted that AI can be helpful for interview practice, though Helen highlighted research suggesting that applicants who use ChatGPT’s paid functionality perform much better in recruitment than those who don’t.  This suggests that we need to be thinking about authentic recruitment practices in much the same way we think about authentic assessment.  Can we create recruitment processes that mitigate or bypass the impact of these systems?

I particularly liked Helen’s characterisation of AI as synthetic media, which helps to defuse some of the hype and sensationalism around these technologies.

The key to addressing many of the issues relating to the use of AI in education is to share our practice and experience openly and to engage our colleagues and students in conversations that are underpinned by contextual ethical frameworks such as ALT’s Framework for Ethical Learning Technology.  Peter Hartley noted that universities that have already invested in student engagement and co-creation are at an advantage when it comes to engaging with AI tools.

I’m strongly in favour of Helen’s call for spaces of principled refusal; however, at the same time we need to be aware that the genie is out of the bottle.  These tools are out in the world now, they are in our education institutions, and they are being used by students in increasingly diverse and creative ways, often to mitigate the impact of systemic inequities. While it’s important to acknowledge the exploitative nature and very real harms perpetrated by the AI industry, the issues and potential raised by these tools also give us an opportunity to question and address systemic inequities within the academy. AI tools provide a valuable starting point for opening up difficult ethical questions about knowledge, understanding and what it means to learn and be human.

Generative AI – Ethics all the way down⤴


How to respond to the affordances and challenges of generative AI is a pressing issue that many learning technologists and open education practitioners are grappling with right now and I’ve been wanting to write a blog post about the interface between AI, large language models and the Commons for some time. This isn’t that post.  I’ve been so caught up with other work that I’ve barely scratched the surface of the articles on my rapidly expanding reading list.  Instead, these are some short, sketchy notes about the different ethical layers that we need to consider when engaging with AI.  This post is partly inspired by technology ethics educator Casey Fiesler, who has warned education institutions of the risk of what she refers to as ethical debt. 

“What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.”
~ Casey Fiesler, The Conversation

Apologies for glossing over the complexity of these issues; I just wanted to get something down in writing while it’s fresh in my mind.

Ethics of large language models and Common Crawl data sets

Most generative AI tools use data sets scraped from the web and made available for research and commercial development.  Some of the organisations creating these data sets are non-profits, others are commercial companies, and the relationship between the two is not always transparent. Most of these data sets scrape content directly from the web regardless of ownership, copyright, licensing and consent, which has led to legitimate concerns about all kinds of rights violations. While some companies claim to employ these data sets under the terms of fair use, questions have been raised about using such data for explicitly commercial purposes. Some open advocates have said that while they have no objection to these data sets being used for research purposes, they are very concerned about commercial use. Content creators have also raised objections to their creative works being used to train commercial applications without their knowledge or consent.  As a result, a number of copyright violation lawsuits have been raised by artists, creators, cultural heritage organisations and copyright holders.

There are more specific issues relating to these data sets and Creative Commons licensed content.  All CC licenses include an attribution clause, and in order to use a CC licensed work you must attribute the creator. LLMs and other large data sets are unable to fulfil this crucial attribution requirement so they ride roughshod over one of the foundational principles of Creative Commons. 

LLMs and common crawl data sets are out there in the world now.  The genie is very much out of the bottle and there’s not a great deal we can do to put it back, even if we wanted to. It’s also debatable what, if anything, content creators, organisations and archives can do to prevent their works being swept up by web scraping in the future. 

Ethics of content moderation and data filtering

Because these data sets are scraped wholesale from the web, they inevitably include all kinds of offensive, degrading and discriminatory content. In order to ensure that this content does not influence the outputs of generative AI tools and damage their commercial potential, these data sets must be filtered and moderated.  Because AI tools are not smart enough to filter out this content automatically, the majority of content moderation is done by humans, often from the global majority, working under exploitative and extractive conditions. In May, content moderators in Africa who provide services for Meta, Open AI and others voted to establish the first African Content Moderators Union, to challenge low pay and exploitative working conditions in the industry. 

Most UK universities have a commitment to ending modern slavery and uphold the terms of the Modern Slavery Act. For example the University of Edinburgh’s Modern Slavery Statement says that it is “committed to protecting and respecting human rights and have a zero-tolerance approach to slavery and human trafficking in all its forms.” It is unclear how commitments such as these relate to content workers who often work under conditions that are exploitative and degrading at best, and a form of modern slavery at worst. 

Ethics of anthropomorphising AI 

The language used to describe generative AI tools often humanises and anthropomorphises them, either deliberately or subconsciously. They are ascribed human characteristics and abilities, such as intelligence and the ability to dream. One of the most striking examples is the use of the word "hallucinating".  When ChatGPT makes up non-existent references to back up erroneous "facts", this is often described as "hallucinating".  This propensity has led to confusion among some users when they have attempted to find these fictional references. Many commenters have pointed out that these tools are incapable of hallucinating, they’re just getting shit wrong, and that the use of such humanising language purposefully disguises and obfuscates the limitations of these systems.

“Hallucinate is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong.”
~ Naomi Klein, The Guardian

Ethics of algorithmic bias

Algorithmic bias is a well known and well documented phenomenon (cf. Safiya U. Noble‘s Algorithms of Oppression) and generative AI tools are far from immune to bias. Valid arguments have been made about the bias of the "intelligence" these tools claim to generate.  Because the majority of AI applications are produced in the global north, they invariably replicate a particularly white, male, Western world view, with all the inherent biases that entails. Diverse they are not. Wayne Holmes has noted that AI ignores minority opinions and marginalised perspectives, perpetuating a Silicon Valley perspective and world outlook. Clearly there are considerable ethical issues about education institutions that have a mission to be diverse and inclusive using tools that engender harmful biases and replicate real world inequalities.

“I don’t want to say I’m sure. I’m sure it will lift up the standard of living for everybody, and, honestly, if the choice is lift up the standard of living for everybody but keep inequality, I would still take that.”
~ Sam Altman, OpenAI CEO. 

Ethics of catastrophising
 

Much has been written about the dangers of AI, often by the very individuals who are responsible for creating these tools. Some claim that generative AI will end education as we know it, while others prophesy that AI will end humanity altogether. There is no doubt that this catastrophising helps to feed the hype cycle and drive traffic to these tools and applications; however, Timnit Gebru and others have pointed out that by focusing attention on some nebulous future catastrophe, the founding fathers of AI are purposefully distracting us from current real world harms caused by the industry they have created, including reproducing systems of oppression, worker exploitation, and massive data theft.

“The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”
~ Statement from the listed authors of Stochastic Parrots on the “AI pause” letter

Nirit Weiss-Blatt’s (@DrTechlash) “Taxonomy of AI Panic Facilitators” is a visualisation of leading AI Doomers (X-risk open letters, media interviews & OpEds). Some AI experts enable them, while others oppose them. The gender dynamics are fucked up. It says a lot about the panic itself.

Not really a conclusion

Clearly there are many ethical issues that education institutions must take into consideration if they are to use generative AI tools in ways that are not harmful.  However this doesn’t mean that there is no place for AI in education, far from it.  Many AI tools are already being used in education, often with beneficial results; captioning systems are just one example that springs to mind.  I also think that generative AI can potentially be used as an exemplar to teach complex and nuanced issues relating to the creation and consumption of information, knowledge equity, the nature of creativity, and post-humanism.  Whether this potential outweighs the ethical issues remains to be seen.

A few references 

AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’ ~ Casey Fiesler, The Conversation 

Statement from the listed authors of Stochastic Parrots on the “AI pause” letter ~ Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), Margaret Mitchell (Hugging Face)

Open letter to News Media and Policy Makers re: Tech Experts from the Global Majority ~ @safiyanoble (Algorithms of Oppression), @timnitGebru (ex Ethical Artificial Intelligence Team), @dalitdiva, @nighatdad, @arzugeybulla, @Nanjala1, @joana_varon

150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting ~ Time

AI machines aren’t ‘hallucinating’. But their makers are ~ Naomi Klein, The Guardian  

Just Because ChatBots Can’t Think Doesn’t Mean They Can’t Lie ~ Maria Bustillos, The Nation 

Artificial Intelligence and Open Education: A Critical Studies Approach ~ Dr Wayne Holmes, UCL 

‘What should the limits be?’ The father of ChatGPT on whether AI will save humanity – or destroy it ~ Sam Altman interview, The Guardian 

5 Things You Need to Know Before You Buy Edtech #OTESSA23⤴

from @ education

I did an invited talk yesterday for the kick off day of the OTESSA conference, part of the larger Canadian Congress conference. It was a lovely experience, not least because I was introduced by Connie Blomgren from AU, and Jon Dron was also in the … Continue reading 5 Things You Need to Know Before You Buy Edtech #OTESSA23

Procurement aka the crack in everything that lets the bullshit in⤴

from @ education

I've been thinking and writing and talking a lot about edtech procurement recently. In fact I've been thinking and writing and talking about it for years but it feels like in this moment it might just be getting a little more traction. It might of … Continue reading Procurement aka the crack in everything that lets the bullshit in

Edtech is killing us: Random notes on a Neil Selwyn talk about edtech and climate crisis⤴

from @ education

Thanks to a blog post yesterday by George Veletsianos I was alerted to a recent talk Neil Selwyn gave to my friends at the Centre for Research in Digital Education at Edinburgh, titled "Studying digital education in a time of climate crisis: what can we … Continue reading Edtech is killing us: Random notes on a Neil Selwyn talk about edtech and climate crisis

The Penguin and the Piper: Telling a different story⤴


This post was originally posted on the Open.Ed blog

Today is World Penguin Day so I took the opportunity to share some of my all-time favourite open licensed images from the University’s collections on twitter: the famous pictures of the piper and the penguin.

These images were taken during the Scottish National Antarctic Expedition led by William Speirs Bruce in 1902–1904, and they are preserved among the papers of Speirs Bruce in the University’s collections.  One of these images had its own 15 minutes of fame several years ago after it was added to the Wikipedia article about the expedition and a twitter user noted dryly….

I also happen to use one of these images in my slides when I talk about the University OER Service as it’s a great way to promote the University’s open licensed image collections and a fun way to illustrate that the OER Service is composed of about one and a half people!

However these pictures also tell a different story. When I tweeted them, Malcolm Brown from the University’s Digital Imaging Unit contacted me to point out that the penguin is actually tethered to the spot with a rope tied around its legs.  The rope can’t be seen in most of the low resolution images shared on the web under open licence but it is visible in the high resolution scans.  Suddenly these fun images start to look rather cruel.

Image of piper and penguin in the Antarctic with rope visible. Image of penguin with rope visible around legs

This raises an important question about the ethics of sharing and reusing open licensed historical content.  As more museums, galleries and institutions open up their collections, all kinds of images that we might now regard as questionable at best or offensive at worst are released into the public domain or shared under open licence. While I believe that it’s vitally important that public heritage collections are freely and openly available to the public, it’s equally important that we view these collections through a critical lens and that we consider the ethical implications of the historical images we share and reuse.

Obviously we don’t know what happened to the penguin in these pictures.  I hope it was released and was none the worse for its unexpected moment in the spotlight.  Learning more about this famous image has certainly made me think about how I use this picture myself, and whether I want to continue using it.  Perhaps now would be a good time to have another look through our image collections to see if I can find a different open licensed image to use on my slides.

ALTC 2021 – Ethics, joy and no gobackery⤴


This year’s annual ALT Conference was a bit of a different experience for me as it’s the first time in years that I wasn’t speaking or doing social media coverage, I was “just” a delegate among over 300 others, and honestly it was a welcome experience just to be part of that community and to listen to and learn from colleagues across the UK and beyond.  This was the first ALT Annual Conference to take place entirely online, and although I was only able to dip in and out over the course of the three days, I got a real sense of the buzz around the event.  It really did feel like a broad and diverse community coming together. 

With the launch of the ALT Framework for Ethical Learning Technology, ethics was a central theme that ran throughout the conference.  The Framework is an important and timely initiative co-created by members of the community and led by ALT Trustees Bella Abrams, Sharon Flynn and Natalie Lafferty. The aim of the Framework is to provide scaffolding to help learning technologists, institutions, and industry to make decisions around technology in an informed and ethical manner. This work is very much a starting point and over the next year, ALT will be gathering case studies and example policies from across the sector.  At Edinburgh, we’ve already submitted our open licensed Lecture Recording Policy as an example.  Speaking in a panel as part of the launch, Javiera Atenas suggested that we use the Framework as a starting point and urged us to go further than the principle of Do No Harm when it comes to gathering and using data.

Sonia Livingstone’s keynote also focused on the ethics of data use and the quantification and instrumentalisation of learning.  We operate in a system where learning is rendered invisible if it cannot be quantified, and increasingly we’re moving from the quantification of learning to the datafication of everything.   Sonia asked a lot of hard questions, including what does “good” look like when it comes to children’s data rights, and how do we ensure children’s agency and participation in the collection and use of their data?

Chris Rowell and Matthew Acevedo presented an excellent session on academic integrity and critical digital pedagogy from the forthcoming book Critical Digital Pedagogy – Broadening Horizons, Bridging Theory and Practice edited by Suzan Koseoglu, George Veletsianos, Chris Rowell. Matthew’s talk explored virtual proctoring, the Panoptic gaze, and the discourse of academic integrity. It was another thought-provoking session and I’ll look forward to reading the book once it’s published. 

Mutale Nkonde’s keynote explored the intersection of ethics and technology in her revealing dissection of the racist nature of the TikTok algorithm and the impact this can have on real lived experience.  We know that algorithms and technologies reproduce racial biases that exist in society, but we lack the literacy to be able to view and talk honestly about ourselves as victims and perpetrators of white supremacy. Mutale introduced the Framework for Racial Literacy in Technology which provides us with the means to talk about racism and algorithmic bias through cognitive, emotional and active lenses.  Mutale challenged us to ask ourselves, when we are creating algorithms, how can we optimise them for fairness and justice?  How can we make the lives of marginalised peoples better rather than promoting those who are already privileged?

As the parent of a teen who is a frequent TikTok user, Mutale’s talk left me with a lot to think about and discuss with my daughter.  At 15 she is well aware that TikTok is a massively racist platform and she knows that the way the algorithm pushes content to users can be extremely harmful.  In particular she highlighted the prevalence of content relating to self-harm and trauma dumping.  On the one hand I’m glad that she has sufficient digital literacy to recognise that the content she views is being manipulated by the platform, but at the same time it’s deeply concerning that harmful content is being pushed to users at such a young age.

Lou Mycroft’s keynote was one of the highlights of the conference for me.  I’ve been familiar with Lou’s work for a long time and have often seen the #JoyFE hashtag passing on my timeline, but this is the first time I’ve heard her talking about the philosophies and ethics that underpin this amazing collective.  In an inspiring and expansive talk, Lou explored an ethics of joy as characterised by Spinoza’s concept of potentia; by practising joy, we enact our power in the form of potentia.  Lou challenged us to use our potentia to drive change, and resist the fatal pull of “gobackery”, the gravitational pull of the old.  While Lou acknowledged that the ethics of accountability and KPIs will not be changing any time soon, she argued that we can have parallel values based on an ethics of joy, and urged us to put our core values into strategic planning, asking: What might assessment look like as a process of hope? What might induction look like as a practice of compassion? Or timetabling as a practice of equity?

One of the points Lou made in her talk, which stopped me in my tracks, was that right now Higher Education carries a burden of pain.  It’s true, we all know that, we all feel that pain every day, but to hear it stated so plainly was transformative.  However there is still a place for hope and joy.  In a sector that currently appears to be exercising all its considerable power to pull us back to old entrenched ways of living, working, being, learning, we need to use our own hope and joy to keep driving change forwards.  To do that we need educator-led communities with explicit shared values and affirmative ethics.  I believe ALT is one of those communities, with its shared values of openness, independence, participation and collaboration.

As is inevitable with such a packed programme, and juggling the conference around existing work commitments, I missed so many sessions that looked really interesting, including several by colleagues here at the University of Edinburgh and one on Using OER to empower communities of undergraduate scholars by Carlos Goller.   I’ll look forward to catching up with the recordings of these sessions in the coming weeks. 

Right at the beginning of the conference, co-chairs Farzana Latif, Roger Emery, and Mat Lingard asked us when we first attended an ALT Conference or whether we were new to the event.  My first ALT Conference was in Manchester in 2000, where I presented a paper with Allison Littlejohn and Charles Duncan called Share and share alike: encouraging the reuse of academic resources through the Scottish electronic Staff Development Library.  It’s amazing to see how the ALT community has grown and developed over the last 20 years.  I look forward to seeing where the next 20 will lead us.

Enormous thanks once again to everyone who made this year’s ALT Conference such an inspiring and joy-full event, particularly the co-chairs, the keynotes, and of course Maren Deepwell and all the ALT team. 

ALT Annual Conference by Gloria Corra, winner of the #altc student competition at London College of Communication

 

 

Mind the Ethics Gap⤴

from @ education

I've Tweeted a few things this week relating to a colleague in another Canadian university, Ian Linkletter, who now finds himself on the sticky end of a lawsuit from Proctorio, an automated proctoring service, for sharing some videos that they regarded to be confidential. These … Continue reading Mind the Ethics Gap