A couple of days ago I saw a “guess the cubomania” challenge from Theo. I’ve had an interest in Cubomania in the past and played around with the idea a bit. After a chat with D., who suggested a few engravers, I googled a bit and guessed, wrongly, Goya.
Next I thought to ask ChatGPT. It suggested it could solve the puzzle using image-matching techniques, gave me a fairly obviously wrong first row, and then ran out of credit.
I then thought to ask Claude to make me an interactive page where I could drag things around. It made a couple of not very good attempts.
I was thinking about a better prompt, when I remembered and asked:
Could we use the whole image for each piece but ‘crop’ it with css?
Claude replied:
Brilliant idea! Yes, we can absolutely use CSS to create a “window” effect where each piece shows only its portion of the full image. This is much more elegant than trying to extract individual pieces.
I was flattered, and when Claude came up with another fail I decided to abandon AI and DIY. This turned out a lot better. I started by remembering background-position and finding interact.js. The last time I did any drag and drop, I dimly recall some sort of jQuery and a shim for mobile/tablets. interact.js did a grand job for my simple needs. It was probably overkill, as it seems to do a lot more.
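The CSS trick is simple: every piece gets the whole image as its background, scaled to the full board, and a negative background-position shifts it so only that piece’s cell shows through. Here is a minimal sketch of the arithmetic (the function names and style shape are my own, not taken from the actual page):

```javascript
// Each piece is a div sized to one grid cell. The shared full image is
// shifted up and left so only this cell's region is visible.
// pieceW/pieceH are the cell dimensions in pixels.
function backgroundPositionFor(row, col, pieceW, pieceH) {
  // Negative offsets pull the full image across the piece's "window".
  return `${-col * pieceW}px ${-row * pieceH}px`;
}

// Build the inline style for one piece (hypothetical helper).
// cols/rows are the board dimensions, used to scale the background
// image to cover the whole board before shifting it.
function pieceStyle(row, col, imageUrl, pieceW, pieceH, cols, rows) {
  return {
    width: `${pieceW}px`,
    height: `${pieceH}px`,
    backgroundImage: `url(${imageUrl})`,
    backgroundSize: `${cols * pieceW}px ${rows * pieceH}px`,
    backgroundPosition: backgroundPositionFor(row, col, pieceW, pieceH),
  };
}
```

Each piece is then just a fixed-size div with these styles applied, and something like interact.js’s draggable() can move the divs around without the image ever being sliced.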
It is pretty simple stuff, but potentially a lot of fun: different images, making cubomania puzzles, who knows. I did extend it a bit, learning about localStorage (to save any progress) and the dialog element. All without AI, but a few visits to the HTML reference – HTML | MDN – and the odd search.
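Saving progress with localStorage boils down to serialising each piece’s current position to JSON under a single key, and reading it back on load. A rough sketch, with the storage object passed in so it isn’t tied to the browser (the key name and data shape are my own assumptions, not the actual page’s):

```javascript
// `storage` is anything with getItem/setItem, e.g. window.localStorage
// in the browser. Key name is an assumption for illustration.
const KEY = 'cubomania-progress';

// positions: e.g. { "piece-0": { x: 12, y: 40 }, ... }
function saveProgress(storage, positions) {
  storage.setItem(KEY, JSON.stringify(positions));
}

// Returns the saved positions, or an empty object when nothing
// has been saved yet (getItem returns null for a missing key).
function loadProgress(storage) {
  const raw = storage.getItem(KEY);
  return raw ? JSON.parse(raw) : {};
}
```

In the browser you would call saveProgress(window.localStorage, positions) at the end of each drag, and loadProgress(window.localStorage) once on page load to restore the pieces.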
I had a lot of fun with this, more than if I had just managed to get either of the AIs to do the whole thing. What it did make me think is that AI chat was useful for working out what I wanted to do and how to do it. I could probably have done that bit all by myself too. Usually I just start messing about and see what happens. This points to a bit of planning: maybe typing some notes/pseudocode/outline might work for me when I am playing.
The Featured Image of this post was generated by ChatGPT in response to “I want an image of a chatbot character chatting with a person, friendly, helpful & futuristic.” It has been run through Cubomania Gif!
The 16th annual Open Education Conference (OER25) is taking place in London next week, and the theme “Speaking truth to power: open education and AI in the age of populism” could not be more urgent or important. Chaired by Sheila MacNeil and Dr Louise Drumm, both of whom have a long-standing commitment to critical engagement with ed tech, the conference features keynotes by Helen Beetham and Joe Wilson.
Helen’s keynote, “When speaking truth is not enough: repurpose, rebuild, refuse”, will explore the links between the AI industry and the politics of populism. Helen’s thoughtful, contextual approach to education technology and AI in particular has already made me step back and question the foundational concepts of artificial intelligence. I’m still thinking about her keynote at the 2023 ALT Winter Conference “Whose Ethics? Whose AI? A relational approach to the challenge of ethical AI.”
Joe Wilson has been my Open Scotland partner in crime for over a decade now and I’m continually inspired by his optimism and his commitment to openness. Joe’s keynote, “Shaping Open Education”, will focus on the challenges of closing the attainment gap, promoting social mobility, using AI ethically, and keeping open education at the heart of change.
I’m also really pleased to see that Natalie Lafferty and Sharon Flynn will be leading a workshop on reviewing ALT’s Framework for Ethical Learning Technology, which is more critically important now than ever. The workshop will inform an updated version of the framework, which is due to be launched at the end of the year.
I’ve been hugely privileged to attend all fifteen OER Conferences, going right back to OER10 in Cambridge, but unfortunately I won’t be able to go to London this year. I’ve had to step back from all work commitments as I was diagnosed with stage two throat cancer earlier in the year. I’ve already completed six weeks of radiotherapy treatment and am now (hopefully!) on the slow and convoluted road to recovery. (The jury is still out as to whether and how this relates to the autoimmune disease I was diagnosed with last year.) Over the last six months I’ve been deeply moved by how immensely kind people have been; I really can’t express my gratitude enough.
I haven’t had much energy to focus on anything other than recovery for the last six months, but during occasional bright spots I’ve found myself turning more and more to independent writing and journalism in an attempt to find some respite from endless doomscrolling. Shout out to Audrey Watters’ Second Breakfast, Rebecca Solnit’s Meditations in an Emergency, Carole Cadwalladr’s How to survive the Broligarchy, and Helen Beetham’s imperfect offerings for keeping me sane, more or less. All inspiring women with fearless voices speaking truth to power.
I’ve also been enthralled by the Manchester Mill’s tenacious investigative journalism that led to the suspension of two members of the University of Greater Manchester’s senior leadership team, including the vice chancellor, and the subsequent police enquiry into “allegations of financial irregularity”. As a former (brief) employee of the University of Greater Manchester, when it was better known as the University of Bolton, I’ll be watching with interest to see how this investigation develops.
I’ve been making a rather half-hearted attempt at following the progress of the government’s questionable Data (Use and Access) Bill, particularly as it relates to AI and copyright, but I haven’t got the brain or will power to write about that right now.
In the meantime, I’ll hopefully be able to follow some of the OER25 Conference online and I’ll be with everyone in spirit, if not in person, this year.
Forgive me, dear reader, it has been too long since I last published anything here. I seem to have fallen out of the habit of writing and, more importantly, publishing here regularly.
At the start of the year, I was hopeful about finding my voice here again, about being more reflective, sparked by what LLMs spit out about me. It’s April now, and I just haven’t managed to do it. I have quite a few unfinished posts in draft form, all with some potential, each relevant to something at that particular moment, but the moments have all gone.
Writing is hard. It should be hard, it should be a struggle, it should fill you with that mixture of pleasure and pride when you hit that publish button or see something in a print edition. Sure, we all need help to learn the craft of writing, but we do need to remember it is a craft, not something that can be turned into an activity that just needs to be passable.
I don’t need help with the actual writing. Well, that is probably debatable, but this is my blog, and the thing I’ve always appreciated about blogging is that I’m in charge. It’s not a peer-reviewed journal, it’s my voice, typos and all, and I have always found that liberating. I don’t want or need any GenAI tool (yes, looking at you Copilot, I am not going to “turn you on”) to help, suggest, or misappropriate what I am trying to say.
There is no easy answer to what I am experiencing. It’s a mix of malaise, not being quite sure what I want to say, and in all honesty not being sure who I am saying anything to. Maybe I just have rose-tinted glasses about the “good old days”, when blogging was still new, when Twitter was actually a supportive network, when you felt you could reach “your people” . . . The takeover of social networks by advertising and algorithms has fragmented “my people”. We have dispersed to other spaces . . . there are still strong and valued connections, but it’s not the same . . . or have we just given up and got the “cba” (can’t be a*sed) syndrome? What system should I be on?
Finding out that some of my publications have been scraped for use in LLM training data sets, without publishers seeking consent, was not unexpected, but seeing it in black and white, so to speak . . . Well, that was, how can I put it? I felt violated. As one of my friends said when he checked the site, “it’s a bit like seeing your Stasi file.”
But there is, I feel, something more insidious happening just now: the wider political narratives around AI adoption; the myths of increased productivity; the debasing of human interactions into simplified, automated, lowest-common-denominator processes; the claim that the AI revolution will save the economy (it might kill the planet in the process, but hey, that’s OK, because one day we will have the ultimate algorithm that will solve everything . . . will that be 42?).
In terms of education, I keep hearing/reading about systems that will “save time, increase personalisation, allow teachers to do the things that matter” (because obvs they’re not doing that just now!). So plug in, create your curriculum, your lesson plans, your activities, your assessments. Let’s all work towards the myth of personalisation, which in reality will actually be mass homogenisation. But hey, your AI assistant will know your name, and will have made lots of assumptions about you based on all that demographic, consumer, health and social media data it has on you, your family, friends and neighbours. Let’s all just help build the pedagogies of oppression that “the system” will use against us.
I feel that the systems are silencing me just now. I’m worried about what to say and where to say it. What systems aren’t totally f*cked up by right-wing politics and neoliberal tech bro ownership . . . Is that what “they” want – new cultures of silence, where we are all displaced and can’t collectively and effectively resist their narratives?
A couple of things have really hit home over the past week to give me hope and make me find a little bit of my voice again. The first was reading There are Rivers in the Sky by Elif Shafak. It’s a wonderful book, and at its heart is the struggle around history: who controls it, who classifies it, who owns it, who is displaced/forgotten (mainly women), and how we (humankind) so easily destroy the environments we rely on to survive. So many resonances with the debates around AI and knowledge/information. Not so much the white saviour as the tech bro saviour syndrome – but of course most of the “big” tech bros are white. The current LLMs don’t “know” everything; not everything is datafiable (not sure if that is a word). What about stories and oral traditions? Not in the database, so they don’t count, or are at best a digital footnote.
Also this TED talk from Carole Cadwalladr, This is what a digital coup looks like – watch, listen and, like me, start to think more about how we can fight back.
Irresponsible AI companies are already imposing huge loads on Wikimedia infrastructure, which is costly both from a pure bandwidth perspective, but also because it requires dedicated engineers to maintain and improve systems to handle the massive automated traffic. And AI companies that do not attribute their responses or otherwise provide any pointers back to Wikipedia prevent users from knowing where that material came from, and do not encourage those users to go visit Wikipedia, where they might then sign up as an editor, or donate after seeing a request for support. (This is most AI companies, by the way. Many AI “visionaries” seem perfectly content to promise that artificial superintelligence is just around the corner, but claim that attribution is somehow a permanently unsolvable problem.)
A good post to read or listen to at the beginning of Scottish AI in Schools week. The article does not want the stable door closed.
Education Scotland are running a week, #ScotAI25: Scottish AI in Schools 2025, with live lessons for pupils and some CPD for staff. I might try to make some of those.
This week I’ve used ChatGPT to: make up some questions about a passage of text for an individual in my class; write an example text about levers; create a formula for a number spreadsheet; and create a regular expression.
Claude to make a fractions matching game and a trivia quiz.
I am occasionally using lovable.dev to play around making an alternative way of posting to WordPress.
I might have used ChatGPT a couple more times in school. Although it is accessible, the login options didn’t seem to be, so I’ve no history to check.
Quite a few teachers I know use it in some of these ways in a fairly casual way, like me. This is a lot easier than thinking about any ethical and moral implications.
(This post was previously published on the Open.Ed Blog.)
With many image and media applications now integrating AI tools, it’s easier than ever to generate all kinds of eye-catching graphical content for your presentations, blog posts, teaching materials, and publications. Want a picture of a cartoon mouse to liven up your slides? No problem! Stable Diffusion, Midjourney, DALL-E, or Media Magic can create one for you. And if your AI generated rodent happens to bear a striking resemblance to another well known cartoon mouse, well that’s just a coincidence, no?
Copyright and AI
The relationship between ownership, copyright and AI is still highly contested, both in terms of the works ingested by the data models driving these tools and the content they generate. Many of these data models ingest content that has been scraped from the web, with scant regard for intellectual property, copyright and ownership. Whether this constitutes legal use of protected works is a moot point. Creative Commons’ position is that “training generative AI constitutes fair use under current U.S. law”. Not everyone agrees; several artists and media organisations are attempting to sue various AI companies that they claim have used their creative works without their consent. Creative Commons believe that preference signals could offer a way to enable creators to indicate how their works can be used above and beyond the terms of the licence, and are exploring the practicalities of this approach (Preference signals for AI training.) It remains to be seen whether this is likely to be an effective solution to an intractable problem.
The European Union have taken a slightly different approach to copyright and AI with their EU Artificial Intelligence Act. Broadly speaking, the Act permits GenAI providers to use copyright content to train data models under the terms of the text and data mining exceptions of the existing Directive on Copyright in the Digital Single Market (DSM Directive). However, rights holders are able to reserve their rights to prevent their content from being used for text and data mining and training GenAI. Furthermore, providers must keep detailed records and provide a public summary of the content used to train their data models. In short, it’s a compromise: GenAI models can scrape the web, but they must keep a public record of all the content they use, and they must allow copyright holders to opt out. How this will work in practice remains to be seen.
Then there’s the issue of who owns the copyright of AI generated content. One common assumption is that AI generated images are not subject to copyright because they are not creative works produced by humans. Creative Commons’ perspective is that “creative works produced with the assistance of generative AI tools should only be eligible for protection where they contain a significant enough degree of human creative input to justify protection.” (This is not a bicycle: Human creativity and generative AI.) The problems start when AI tools generate images that are almost indistinguishable from the content they have ingested. Take that AI generated cartoon mouse, for example. The reason it’s so similar to Disney’s famous, and famously copyrighted, mouse is that the AI data models are likely to have scraped millions of images of Mickey Mouse from the web, with little regard for Disney’s intellectual property. Rights holders may be able to argue that an AI generated image infringes their copyright on the basis of substantial similarity (The complex world of style, copyright, and generative AI.) This represents a risk which AI application developers are keen to shift on to their users. It’s not uncommon for AI applications to explicitly make no copyright claim over the images generated by their tools. For example, with regards to the copyright of AI generated images, Canva states:
“The treatment of AI-generated images and other works under copyright law is an open question and the answer may vary depending on what country you live in.
For now, Canva does not make any copyright claim over the images you create with our free AI image generator app. As between you and Canva, you own the images you create with Text to Image (subject to you following our terms), and you give us the right to host them on our platform and to use them for marketing our products.”
So if Disney does happen to spot your AI generated cartoon mouse and decides to sue, it’s you, or your employer, that’s going to be liable, not the tool you used to generate the image.
OER Service Guidance
The University of Edinburgh’s OER Service currently provides the following advice and guidance on using AI generated images:
A more ethical, and environmentally friendly, alternative to using AI generated images is to use public domain images, of which there are millions, with more entering the commons every year. Public domain works are creative works that are no longer under copyright protection, either because copyright has expired and they have entered the public domain, or because they have been dedicated to the public domain by creators who choose to give up their copyright. This means that they can be used free of charge, by anyone, for any purpose, without any restrictions whatsoever. You don’t even have to provide attribution to the creator, though we always recommend that you do.
There are many fabulous sources of easily discoverable public domain images on the web, including:
Public Domain Day is celebrated on the 1st of January each year. In many countries, this is the day that copyright expires on creative works, and they become part of the public domain. This year, on Public Domain Day, the Public Domain Review launched a new interface to their Image Archive to enable users to search and explore their collections.
And if you do happen to be looking for a cartoon mouse to use in your slides you’ll find one in the public domain that you can use with no restrictions or risk of copyright infringement, either for you or your employer. The original version of Mickey Mouse from the 1928 cartoon Steamboat Willie entered the public domain in 2024.
Mickey Mouse by Walt Disney, public domain image from the 1928 cartoon Steamboat Willie.
After a productive early morning call with my excellent OSI colleagues and a satisfying burst of administrivia deck-clearing, I made a second cup of coffee and settled down to read this morning’s HESA blog post from Alex Usher. Today he was summarising his thoughts from … Continue reading “People and trust first, technology second”
After quite a hectic end to last year, I’ve been enjoying a slower start to 2025, taking some time to be with family, reading, pottering around and catching up on “stuff”. The world is still batshit crazy, and today’s inauguration is going to dial up the crazy another couple of notches past 11.
I still don’t quite understand how one man who lies so blatantly can be re-elected to arguably the most powerful office in the world, but even just dipping in and out of news coverage here, I can see and hear how much more organised those around him are this time. This morning I heard one of his legal team talk about how the new administration are going to “bring back science, real science”. I didn’t realise that it had gone away. Guess I am just too woke to notice . . .
So whilst trying to avoid news from across the pond, I have been catching up on tv, podcasts, readings etc. Last week on a long train journey I managed to get a decent enough wifi signal to watch tv without it dropping out every 2 minutes. Dear reader, I have to confess I succumbed to “The Traitors“, and I am now “all in”.
Binge-watching the first three episodes did really highlight the power of groupthink (this is quite a good article in the Guardian about that very subject), and how lying is a key factor in the programme’s success. Oh the treachery, the deceit! We love it from our comfy, edited viewpoint. I can’t help but have a niggle about that. Despite the fact that “Linda, the vicar” had a word with him upstairs before she went on the show, and got the OK, I feel the resonance of Trump and his ilk devaluing truth, offering alternative facts, or now in 2025 “real science”. Lying is good, there are no consequences for lying, it’s a means to an end, the strong can influence the weak, smart people are “a threat or a danger”. I was quite shocked how everyone turned on the doctor in the group – he was too nice, too clever . . .
The power of groupthink, as the Guardian article highlights, is nothing new – and hey, The Traitors is only TV, and I am aware I am being hypocritical by watching it. Still, there is something that resonates with wider trends about how we can, and are, all being manipulated by powerful businesses and the men (sadly, yes, it does seem to be mainly white men) heading up all the big tech companies. Hello the AI “turbocharge” to, well, everything.
It seems there really is a pot of gold for anything labelled AI, but nothing but cuts to services to us little ol’ people. Tho’ I have to say that using AI to help with our pothole problems is something I do have sympathy for (you may have heard a mini rant from me about this dear reader). If we have to use AI then let it be for something like that, not for writing or god forbid art. On that subject Katharina Grosse has a great explanation why AI can’t replace painting on this recent episode of The Great Women Artists.
There just seems to be an acceptance that we have to get on with AI, let it into everything, because . . . well, actually, I don’t really know why. Again, probably just being a bit woke, but if the UK government could get even a quarter of the investment they are talking about into the NHS, well, people might get well, might get healthier, we might be able to do something about the obesity crisis. And where is the debate about the cost to our climate that these new data centres will bring? Washed away in floods, or hidden in fires and droughts. . . .
I was so grateful to listen to the discussion between Helen Beetham and Dan MacQuillan last week. If you are curious as to why we don’t need AI, just have a listen. I’ve also been grateful for the weekly newsletters from Audrey Watters, who is back on the edtech/AI case. This month she has written so eloquently about reading, writing, the amnesia around tech failures, and today about AI literacy.
Group think (well highly funded tech group think) seems to be winning out again. I hope that voices like Helen’s, Audrey’s, Dan’s and so many more will help all of us resist, engage, have empathy. And for those of us in education ensure that our students can develop agency, criticality and humility. All the things that Elif Shafak so eloquently describes here.
Who knows what will happen today, but I am grateful for the time all these writers take to share and provide a counter narrative to the hype. Later this week maybe the faithful will find the traitors, but until then if you have some time please, follow some of the links I have shared.
I’ve been thinking a lot about slowness and refusal; in technology, in practice, in life more generally.
Slowness and refusal was the focus of an Edinburgh Futures Institute Contested Computing event earlier this month on Imagining Feminist Technofutures, with Sharon Webb, Usha Raman, Mar Hicks, and Aisha Sobey. In a wide ranging discussion that questioned the dominance of techno-solutionism, the biases and inequalities that are encoded in technology, and the role of education in countering these historical structures of dominance, the panel touched on feminist refusal and the importance of “slowing down” development cycles in order to hold tech companies to account and give corrective measures and ways of refusal a chance to thrive. Slowing down can be seen as a form of progressive innovation, a way to offer resistance, and academia is a space where this can be brought to life.
(I couldn’t help thinking about my own domain of open education where there has always been a tendency to privilege techno-solutionism as the height of innovation. Going right back to the early days of learning objects, there has been a tension between those who take a programmatic, content-centric view of open education, and those who focus more on the affordances of open practice. Proselytising about the transformative potential of generative AI education is just the latest incarnation of this dichotomy.)
Recognising the value of refusal brought to mind a point Helen Beetham made in her ALT Winter Summit keynote last December, which I’m still thinking about, slowly.
Helen called for universities to share their research and experience of AI openly, rather than building their own walled gardens, as this is just another source of inequity. As educators we hold a key ethical space. We have the ingenuity to build better relationships with this new technology, to create ecosystems of agency and care, and empower and support each other as colleagues.
Helen ended by calling for spaces of principled refusal within education. In the learning of any discipline there may need to be spaces of principled refusal, this is a privilege that education institutions can offer.
During the Technofutures event, Sharon Webb asked “where is the feminist joy we can take from these things? How can we share our feminist practice and make community accessible?”
This is a question that Frances Bell, Giulia Forsythe, Lou Mycroft, Anne-Marie Scott and I tried to address in the chapter we contributed to Laura Czerniewicz and Catherine Cronin’s generative book Higher Education for Good. “HE4Good assemblages: FemEdTech Quilt of Care and Justice in Open Education” explores the creation of the FemEdTech quilt assemblage through a “slow ontology of feminist praxis”. Quilting, and other forms of communal making, have always provided a space for women to share their skill, labour and practice on their own terms, outwith the strictures of capitalist society and institutions that seek to exploit and appropriate their labour. These are also spaces that necessarily invite us to slow down. Contributors to the FemEdTech quilt were
“compelled by the process to decelerate, helping them to curate, to stitch, to draw, to write, and to think. We acknowledge the pressures of the time: being creative in neoliberal times is itself a form of resistance.
…
Resistance requires radical rest (rest for health, rest for hope). The slow ontology of the assemblage required waves and pauses which allowed space to think. This may be the most crucial resistance of all in an industrialised HE which fills every potential pause with compliance activity. Feminists create, feminists resist, and feminists celebrate difference.”
This is how we can share our feminist joy; by decelerating, by sharing our feminist practices and making our communities accessible, through networks like FemEdTech.
Of course it’s difficult to disentangle the process of sharing practice and building community from the technology, and particularly the social media, that mediates so much of our lives. The exodus of users from X to Bluesky at the end of the year prompted some interesting conversations on Mastodon about the role of different social media platforms. I particularly appreciated this conversation with Robin de Rosa and Kate Bowles about the ability of Mastodon to provide a space for “big thinking” and slowing down.
I’ve been forced to embrace slowness on a more personal level this year as a result of serious ongoing health issues. It’s been a salutary reminder that although our practice is mediated by technology, it is still embodied, and that ultimately it’s that embodiment that governs our ability to work, create, and contribute to our communities. I’m still trying to figure out what all this means on both a personal and professional level; how to make slowing down and refusal a conscious progressive act, and to find the joy in embracing radical rest for health and hope. Like the FemEdTech quilt and network, it’s a slow process of becoming.