
D-SAIL 2025 Workshop at AIED

@dsail.hci.social.ap.brid.gy

D-SAIL: Transformative Curriculum Design: Digitalisation, Sustainability, and AI Literacy for 21st Century Learning. Abstract submission deadline: 19 May 2025. Workshop to […] 🌉 bridged from ⁂ https://hci.social/@dsail, follow @ap.brid.gy to interact

7 Followers  |  5 Following  |  25 Posts  |  Joined: 30.04.2025

Latest posts by dsail.hci.social.ap.brid.gy on Bluesky

Original post on hci.social

The problem with AI in schools is not just cheating. This is only a symptom. Education, one of the most conservative systems in society, needs to find its approach towards the most disruptive change in history […]

07.11.2025 06:11 — 👍 0    🔁 0    💬 0    📌 0
Original post on masto.bg

“We don’t need super intelligence to save us, because we’re already a superintelligent species. We just need to move from singularity to plurality.”

The plurality Tang speaks of is the cooperation between opposites: “Instead of treating conflict as a volcanic eruption that must be extinguished […]

03.11.2025 04:54 — 👍 1    🔁 2    💬 0    📌 0
Preview
Data Sciences Speaker Series - DSI The Data Sciences Speaker Series is a collaboration of data science programs at U of T. Seminars are held on the third Monday of each month.

Ok y'all I'm throwing out a hot take on LLMs in Toronto in January:

"We’re Talking About the Wrong Error: Why Variance Matters More than Bias in AI"

Enough of the bias talk. LLMs are a completely different beast and our old frameworks are no longer useful.

datasciences.utoronto.ca/dsi-home/dat...

29.10.2025 19:25 — 👍 33    🔁 7    💬 1    📌 0
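To make the bias-variance distinction in the talk title above concrete, here is a toy sketch with invented numbers (not material from the talk): for a stochastic model queried repeatedly on the same question, expected squared error splits into bias² plus variance, so a consistent-but-biased model and an unbiased-but-erratic one can rank very differently.

```python
# Toy illustration (not from the talk): repeated answers from a stochastic
# model have error MSE ≈ bias^2 + variance. A low-bias, high-variance model
# can end up worse than a slightly biased but consistent one.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
n_runs = 10_000

# Model A: unbiased on average, but answers vary a lot between runs.
answers_a = rng.normal(loc=10.0, scale=4.0, size=n_runs)
# Model B: systematically off by 1, but very consistent.
answers_b = rng.normal(loc=11.0, scale=0.5, size=n_runs)

for name, answers in [("A (low bias, high variance)", answers_a),
                      ("B (biased, low variance)", answers_b)]:
    bias = answers.mean() - true_value
    variance = answers.var()
    mse = np.mean((answers - true_value) ** 2)  # ≈ bias^2 + variance
    print(f"Model {name}: bias={bias:+.2f}, variance={variance:.2f}, MSE={mse:.2f}")
```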

Spending the morning on this is rewarding.

The irony: when I arrived at the class, the teacher was working on poetic texts with the students, who were reading poems they had written themselves. In one case, she stressed to one girl that her poem was not correct: it rhymed, but the conjugation of […]

Original post on masto.pt

30.10.2025 02:09 — 👍 0    🔁 0    💬 0    📌 0
Original post on hci.social

The proceedings from our workshop are out!

D-SAIL Workshop on Transformative Curriculum Design - Digitalisation, Sustainability, and AI Literacy for 21st Century Learning

https://ceur-ws.org/Vol-4051

Thank you to everyone who was part of this. We are looking forward to meeting again at the […]

05.10.2025 17:43 — 👍 2    🔁 4    💬 0    📌 0
Original post on sciences.social

Criticism about AI in schools and colleges is disproportionately focused on how it allows students to get away with something (by using OpenAI to write their essays). But shouldn't we be talking more about its detrimental effects on teaching and learning than on how it impedes our ability to […]

26.09.2025 15:55 — 👍 3    🔁 4    💬 0    📌 0

The irony...
https://arstechnica.com/ai/2025/09/education-report-calling-for-ethical-ai-use-contains-over-15-fake-sources/

13.09.2025 20:13 — 👍 1    🔁 4    💬 0    📌 0
Preview
From Libraries to Schools: Why Organizations Should Install Privacy Badger In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on their computers.

A needed institutional response to commercial abuse online: why organisations with public access need to install protective software

https://www.eff.org/deeplinks/2025/09/libraries-schools-why-organizations-should-install-privacy-badger

05.09.2025 05:06 — 👍 0    🔁 0    💬 0    📌 0
Preview
There's Something Bizarre About When GPT-5 Writes in a Literary Style Amid its many letdowns, OpenAI's latest large language model (LLM), the overhyped and under-delivering GPT-5, appears to be spitting out flowery, mysterious gibberish — and it may not be meant for human eyes.

It is very likely that GPT-5 is optimised against GenAI-based evaluation, and as a consequence human and automated evaluations are starting to diverge. This is where AI slop becomes central to the process.
https://futurism.com/gpt-5-literary-outputs

02.09.2025 04:45 — 👍 1    🔁 3    💬 0    📌 0
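The divergence claimed above is essentially a Goodhart's-law effect: optimise hard against a proxy judge and the proxy parts ways with human judgment. The following is a purely invented toy simulation of that dynamic, not an account of how GPT-5 was actually trained or evaluated.

```python
# Toy Goodhart's-law simulation (invented numbers, not a claim about GPT-5):
# a candidate output has two latent qualities, "ornateness" and "clarity".
# The automated judge rewards ornateness; the simulated human rewards clarity.
# Hill-climbing on the judge's score alone makes the two scores diverge.
import numpy as np

rng = np.random.default_rng(1)

def judge_score(ornateness, clarity):
    return 0.9 * ornateness + 0.1 * clarity      # proxy metric

def human_score(ornateness, clarity):
    # humans value clarity and penalise excessive ornateness ("slop")
    return 0.2 * ornateness + 0.8 * clarity - 0.3 * max(ornateness - 1.0, 0.0)

ornateness, clarity = 0.5, 0.5
for step in range(200):
    # propose a random tweak and keep it only if the judge likes it more
    cand = (ornateness + rng.normal(0, 0.05), clarity + rng.normal(0, 0.05))
    if judge_score(*cand) > judge_score(ornateness, clarity):
        ornateness, clarity = cand
    if step % 50 == 0:
        print(f"step {step:3d}: judge={judge_score(ornateness, clarity):.2f} "
              f"human={human_score(ornateness, clarity):.2f}")
```

In this sketch the judge's score climbs steadily while the simulated human score stalls and then falls once ornateness passes the point humans tolerate.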

One, published by Italiano LinguaDue, describes the design and development of our digital archive, and analyses how it can be used for researching second language awareness and for training teachers.
riviste.unimi.it/.../promoita...

21.06.2025 09:48 — 👍 3    🔁 3    💬 1    📌 0
Original post on qoto.org

This is a very good article regardless, but it makes a point widely ignored in the age of GenAI:

"While our results point to real growth in students’ intellectual abilities and dispositions, they do not capture everything philosophers mean by “intellectual virtue.” Intellectual virtue is not […]

22.08.2025 06:03 — 👍 1    🔁 3    💬 0    📌 0
Preview
UZH: Lecturer Teaching The Art History Department (Program Directorate), in collaboration with the departments of Film Studies, Archaeology, Media and Communication, and Educational Sciences of the University of Zurich, is seek...

JOB: Lecturer in Digital Methods for the Study of Visual Cultural Data

Start: Jan 2026 | Permanent | 100% 👀
Focus: digital curation, computational analysis (CV, ML, network analysis, data ethics) of artworks, photographs, films…

#DigitalHumanities #VisualCulture #DigitalArtHistory

14.08.2025 10:16 — 👍 22    🔁 21    💬 0    📌 0
Preview
Instructors Will Now See AI Throughout a Widely Used Course Software New features integrated into Canvas include a grading assistant, a discussion-post summarizer, and even a way to pair assignments with generative AI tools.

This article will rightly freak people out, but not for the reason in the headline. Integrating AI in course management as such—to summarize student posts or whatever—is not a huge deal. What’s scary about the article is that it reveals the next step: integrating AI in instruction itself.

24.07.2025 03:37 — 👍 42    🔁 11    💬 3    📌 2
Preview
AI Literacy Framework for Primary & Secondary Education Empower learners for the age of AI with the AILit Framework—a joint EC & OECD initiative, supported by Code.org and international experts.

At #AIED2025 we're being told that the current #publicDiscussion at https://ailiteracyframework.org is going to shape the new #PISA test on #AIliteracy education

23.07.2025 13:40 — 👍 0    🔁 1    💬 0    📌 0

Our workshop is taking place in #Palermo now. We’ve already heard about studies of the perceptions of students, teachers, and other staff. Now we are moving on to various technical implementations of AI in the curriculum. We will close with one theoretical and one policy presentation. #DSAIL2025 #AIED2025

22.07.2025 13:58 — 👍 0    🔁 1    💬 0    📌 0

@mapto the previous post is about the #DigitalEducationHub, as that is the officially recommended hashtag

22.07.2025 10:59 — 👍 0    🔁 0    💬 0    📌 0

At #AIED2025 we are getting a preview of the report of the #EuropeanDigitalEducationHub, providing practical examples of how XAI-Ed could be used https://knowledgeinnovation.eu/kic-publication/explainable-ai-in-education-fostering-human-oversight-and-shared-responsibility/

22.07.2025 10:56 — 👍 2    🔁 2    💬 1    📌 0
Preview
'I'm being paid to fix issues caused by AI' Businesses that rush to use AI to write content or computer code, often have to pay humans to fix it.

AI-fixing jobs are well paid, but they have a huge AI-education element in them.

The BBC writes about several stories of AI firefighting.

https://www.bbc.com/news/articles/cyvm1dyp9v2o

08.07.2025 06:01 — 👍 2    🔁 4    💬 0    📌 0
Original post on mastodon.social

Fellow AI skeptics, please slow your roll on the "your brain on chatgpt" study and please think about base rate fallacy a little bit here.

If you, like me, do not know anything about what typical EEGs look like, then citing a bunch of EEGs that appear to prove our point is not science, it's […]

17.06.2025 06:42 — 👍 1    🔁 14    💬 2    📌 0
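For readers unfamiliar with the base rate fallacy invoked above, here is a worked Bayes calculation with invented numbers (not figures from the study or the post): a pattern that appears in most "disengaged" readings can still be weak evidence if it is also fairly common in typical EEGs and disengagement itself is rare.

```python
# Worked base-rate example with invented numbers (not from any EEG study):
# even if some EEG pattern shows up in 80% of "disengaged" readings,
# it is weak evidence when the pattern is also common in typical EEGs
# and disengagement is rare in the data you are looking at.
p_disengaged = 0.10                      # prior: how common the condition is
p_pattern_if_disengaged = 0.80           # how often the pattern appears if disengaged
p_pattern_if_typical = 0.30              # how often typical EEGs show it anyway

p_pattern = (p_pattern_if_disengaged * p_disengaged
             + p_pattern_if_typical * (1 - p_disengaged))
posterior = p_pattern_if_disengaged * p_disengaged / p_pattern

print(f"P(disengaged | pattern) = {posterior:.2f}")   # ≈ 0.23, not 0.80
```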

The verdicts are out. We have received 14 submissions and accepted 9. We have asked authors to revise their works according to the reviewer feedback they received.

We are looking forward to having a fruitful discussion in July in Palermo.

#DSAIL2025 #AIED2025

23.06.2025 14:34 — 👍 0    🔁 0    💬 0    📌 0
Preview
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

An interesting study comparing cognitive load while using an LLM, a search engine, and no tools. With their small sample they find that LLM use is the most disengaging in terms of cognitive effort. The validity of the method could be debated, but it is certainly an important effort.
https://arxiv.org/abs/2506.08872

11.06.2025 20:24 — 👍 0    🔁 0    💬 0    📌 0
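The study's actual pipeline analyses EEG connectivity across alpha and beta bands; as a rough flavour of what band-level EEG analysis involves, here is a minimal band-power sketch on synthetic signals using Welch's method. It is not the authors' code and makes no claim about their results.

```python
# Minimal flavour of band-level EEG analysis (synthetic data only; the paper
# analyses connectivity between regions, not raw power): estimate alpha
# (8-12 Hz) and beta (13-30 Hz) power with Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # 60 s of signal
rng = np.random.default_rng(2)

def band_power(x, fs, lo, hi):
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask])

# Two synthetic "recordings": one with a strong 10 Hz (alpha) component,
# one with the same component much weaker relative to broadband noise.
sig_a = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
sig_b = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

for name, sig in [("strong 10 Hz component", sig_a), ("weak 10 Hz component", sig_b)]:
    alpha = band_power(sig, fs, 8, 12)
    beta = band_power(sig, fs, 13, 30)
    print(f"{name}: alpha power={alpha:.2f}, beta power={beta:.2f}")
```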
Original post on hci.social

AI tools suspended during national exams in China. Is this a sustainable solution? Clearly not, for multiple reasons, be it business continuity or offline models.

https://www.theguardian.com/world/2025/jun/09/chinese-tech-firms-freeze-ai-tools-exam-cheats-universities-gaokao

This and other […]

10.06.2025 05:04 — 👍 0    🔁 1    💬 0    📌 0
An announcement about XAI-Ed and MAIEd workshops at AIED still open for submissions

Are you still planning to attend AIED, and do you have research to submit that might be of interest? Two workshops are still accepting submissions.

07.06.2025 05:57 — 👍 0    🔁 1    💬 0    📌 0
A post on LinkedIn explaining a creative use of AI in education

Rethinking #education with #AI is happening. We just need to talk about it more. This is what we are going to do in Palermo

29.05.2025 11:44 — 👍 0    🔁 0    💬 0    📌 0
Original post on hci.social

By the deadline we received 9 complete and formatted submissions. Those that did not make it are still welcome to finalise their work, but the delay would have consequences for the quality of review feedback and, ultimately, for the chances of being accepted to the workshop. Thank you for the […]

28.05.2025 05:13 — 👍 0    🔁 0    💬 0    📌 0
It’s Breathtaking How Fast AI Is Screwing Up the Education System

The AI industry has promised to “disrupt” large parts of society, and you need look no further than the U.S. educational system to see how effectively it’s done that. Education has been “disrupted,” all right. In fact, the disruption is so broad and so shattering that it’s not clear we’re ever going to have a functional society again.

Probably the most unfortunate and pathetic snapshot of the current chaos being unfurled on higher education is a recent story by New York magazine that revealed the depths to which AI has already intellectually addled an entire generation of college students. The story, which involves interviews with a host of current undergraduates, is full of anecdotes like the one that involves Chungin “Roy” Lee, a transfer to Columbia University who used ChatGPT to write the personal essay that got him through the door:

> When he started at Columbia as a sophomore this past September, he didn’t worry much about academics or his GPA. “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

The cynical view of America’s educational system—that it is merely a means by which privileged co-eds can make the right connections, build “social capital,” and get laid—is obviously on full display here. If education isn’t actually about learning anything, and is merely a game for the well-to-do, why not rig that game as quickly, efficiently, and cynically as possible? AI capitalizes on this cynical worldview, exploiting the view-holder and making them stupider while also profiting from them.

When you think about the current assault on the educational system, it’s easy to forget how quickly this has all happened. A more recent story from 404 Media shows that the American educational system was largely caught unawares by the deluge of cheating that the AI industry would inspire. After accumulating thousands of pages of school district documents via FOIA requests from around the country, 404’s Jason Koebler found that ChatGPT has “become one of the biggest struggles in American education.”

Koebler’s reporting notes that, in the early days of the AI deluge, school districts were courted by “pro-AI consultants” who were known to give presentations that “largely encouraged teachers to use generative AI in their classrooms.” For instance, Koebler writes that the Louisiana Department of Education sent him…

> …a presentation it said it consulted called “ChatGPT and AI in Education,” made by Holly Clark, the author of __The AI Infused Classroom__, Ken Shelton, the author of __The Promises and Perils of AI in Education__, and Matt Miller, the author of __AI for Educators__.

The presentation includes slides that say AI “is like giving a computer a brain so it can learn and make decisions on its own,” note that “it’s time to rethink ‘plagiarism’ and ‘cheating,’” alongside a graph of how students can use AI to help them write essays, “20 ways to use ChatGPT in the classroom,” and “**Warning:** Going back to writing essays—only in class—can hurt struggling learners and doesn’t get our kids ready for their future.”

In other words, AI acolytes seemed to anticipate that the technology would effectively ruin essay-writing and test-taking, and wanted to spin it to present the ruination as mere “transformation”—a new way of doing things—instead of a destructive force that would devastate education.

This new way of doing things appears to be corrosive not just to students but also to teachers. Koebler’s investigation shows that the AI lobbyists courted schools by making appeals to instructors, showing them that the likes of ChatGPT would make curriculum-building and assignment-giving that much easier. Now, teachers, too, seem to be taking the easy way out, as a recent New York Times story shows that college professors have been using chatbots to create their lesson plans, just as their students are using them to complete said lesson.

The result of all of this is so obvious that it doesn’t really bear repeating, but I guess will anyways: Everybody who uses AI is going to get exponentially stupider, and the stupider they get, the more they’ll need to use AI to be able to do stuff that they were previously able to do with their minds. The tech industry’s subscriber-based, “as-a-service” model is obviously on full display here, except that the subscription will be to intellectual capacity. The more you subscribe, the less “organic” capacity you’ll have. Eventually, companies will be able to pipe AI directly into your brain with the kind of neuro-implants being hawked by Neuralink and Apple. By then, of course, there will be no need for school, as we’ll all just be part of the Borg collective.

AIED is making waves around the world, but for the wrong reasons.

https://gizmodo.com/its-breathtaking-how-fast-ai-is-screwing-up-the-education-system-2000603100

We can change this...

25.05.2025 18:31 — 👍 0    🔁 0    💬 0    📌 0
Original post on hci.social

We have received 12 abstract submissions for the #DSAIL2025 workshop at #AIED2025. One more hour remaining if you also wish to join. In the remaining week until the full-paper submission deadline, please ensure that your work aligns well with the workshop guidelines.

#CfP #AIED […]

20.05.2025 07:57 — 👍 1    🔁 3    💬 0    📌 0
Original post on hci.social

"Don't Forget the Teachers" - an insightful study presented at #chi2025

“#Edtech providers are spending a lot of time on reducing the chance of #LLM #hallucinations,... Our findings suggest they could also design tools so that #educators can intervene when hallucinations happen to correct […]

17.05.2025 20:29 — 👍 1    🔁 2    💬 0    📌 0
