Stay up to date on our progress in low-resource AI -
sign up for the #lorAI newsletter today!
@kinitsk.bsky.social
KInIT is an independent, non-profit institute dedicated to intelligent technology research. We bring together experts in different areas of computer science.
How well do vision-language models really understand language?
Our upcoming DisAI Amplified paper "Hidden in Top K" takes a new angle: instead of human-made "hard" examples, we test what the model itself finds challenging.
Preprint coming soon!
#disaiamplified
A5. As part of the #aicodeproject, we definitely strive for the former. On the other hand, platforms already use a lot of AI to moderate content under the hood, but transparency and means of redress are lagging behind.
30.10.2025 12:22
A4. In the survey, we noted the current fragmentation of credibility assessment research and highlighted the need for more multilingual and multi-category datasets. Also, the potential of LLMs for assessing credibility signals still remains largely untapped.
30.10.2025 12:14
A4. AI is only part of the answer. Ideally, it should provide enough information for people to make their own judgement. Together with @vera-ai.bsky.social, in the #aicodeproject we specifically surveyed the role of AI and LLMs in credibility assessment: doi.org/10.1145/3770...
30.10.2025 12:12
A3. Both are needed and also connected to some extent. In this regard, it is important that we see an increasing number of open-weight models and do not need to rely solely on models behind APIs. What we need are more truly open-source models, including training data transparency. #aicodeproject
30.10.2025 12:03
A2. In this context, a study by @ebu.ch showed that AI assistants misrepresent news content 45% of the time: www.ebu.ch/news/2025/10... So let's focus on making it better and also on educating the public (incl. media professionals) on its limitations. That's also one of the goals of #aicodeproject
30.10.2025 12:01
A2. We think building better (that is, more transparent, fair and accurate) AI is still more challenging. In fact, it seems that at least part of the public may already be overrelying on AI despite its current problems. #aicodeproject
30.10.2025 11:58
A1. For example, as part of the #aicodeproject, we created a dataset of generated and human-written social media texts: aclanthology.org/2025.acl-lon... We also examined the prevalence of such content in online disinformation and on social media: arxiv.org/abs/2503.23242
30.10.2025 11:48
A1. It is also challenging due to persisting issues with access to social media data, which should be addressed by the DSA. Despite this, we try to stay ahead in the #aicodeproject, e.g., by researching the state of play in machine text generation and detection.
30.10.2025 11:47
Our colleague and researcher Róbert Móro will join the interactive #AICODEPROJECT Bluesky Chat tomorrow via our account.
He'll be answering the most pressing questions about artificial intelligence.
Make sure to join the conversation too!
How can we keep AI safe and under control as it scales?
Join the Czecho-Slovak edition of BETTER_AI_MEETUP x prg.ai in Prague.
Nov 13, 18:00
Stream online: kntn.ly/63f54fb3
W/ support from the Slovak Diaspora Project, Slovaks.ai & @loraiproject.bsky.social
#betteraimeetup #ai #prgai
This Thursday, Martin Tamajka, Technology Lead here at KInIT, will lead a workshop on neural networks - from the basics to real-world applications.
October 30
The Spot, Bratislava
Register here: lnkd.in/d_Sn9PBr
Join us!
#hopero #hoperoproject #workshop #ai
This Thursday, 2 pm, online.
Discover how digital humanism is transforming theory into real-world action - learn from experts who are putting human values at the heart of digital innovation.
Register for free: eudhit.eu/event/sympos...
#eudhit #eudhitproject #digitalhumanism #freeregistration
A key part of our lorAI project is strong collaboration among partners: @kinitsk.bsky.social, @adaptcentre.bsky.social, @dfki.bsky.social and Centre for Research & Technology Hellas (CERTH).
During the lorAI kickoff event, we discussed why projects like this are important.
The lorAI Project aims to transform @kinitsk.bsky.social into a leading center for research and innovation in the field of low-resource AI.
Hear what @adaptcentre.bsky.social, @dfki.bsky.social and Centre for Research & Technology Hellas (CERTH) wish for the future of the project!
Stay tuned for the latest achievements and updates from the lorAI Project! Follow the project's Bluesky profile.
#lorai
The #DisAIAMPLIFIED project is making great strides:
- Validated LLMs in fact-check retrieval
- Released first dataset
- Probing multimodal models
- Papers under review
More to come, stay tuned!
Team photo of the veraAI group
Last physical team meeting - and last group photo of a fantastic team!
One month to go!
NEW BLOG: Final part of our #DisAIAMPLIFIED series!
How can images boost fact-checking?
- Visuals = key evidence
- Text + multimodal + LLMs
- OCR when text hides in images
kinit.sk/multimodal-f...
#FactChecking #Misinformation #TrustworthyAI #NLP
@kinitsk.bsky.social researchers created MultiSocial, a new dataset to test how well tools can spot #AI-generated text on social media.
Fine-tuned detectors adapt best, and platform & language choice affects performance.
bit.ly/4nuZoRc
#VIGILANTProject #DigitalSecurity #Disinformation
Almost time for #RecSys2025!
Our CEO Mária Bieliková & lead researcher Michal Kompan are co-chairs, and PhD student Santiago de Leon Martinez will run a workshop.
Big thanks to the organizers for a great program. See you in Prague!
recsys.acm.org/recsys25/reg...
Join us in two weeks in Prague at CEDMO Café. Our Lead and Researcher Jakub Šimko is moderating the "Fact-checking and AI Tools under Pressure" panel.
See you on 23 September 2025.
Register via the link in comments.
Great resource from @ebu.ch: a handbook on C2PA (Coalition for Content Provenance and Authenticity) and its role in building trust and countering disinformation.
Also, don't forget to check possible interlinking with @vera-ai.bsky.social tools.
Spotting #AI text on social media?
@kinitsk.bsky.social's MultiSocial dataset (470k posts, 22 languages, 5 platforms) tests how well tools detect AI vs human content.
Results show fine-tuned models work best, even small ones!
Check it out: bit.ly/3VbrWTE
#Disinformation
Fact-checking isn't just about text. Visual data can make a big difference, even across languages. Our #DisAI paper "Multimodal and Multilingual Fact-Checked Article Retrieval" is out now.
Read here: dl.acm.org/doi/abs/10.1...
#ResearchPaper #MultimodalAI #FactChecking
4. That's why AI ethics is understood as the ethics of developing and deploying AI systems. It's about the values guiding the humans who create them, and how those values can be translated into concrete technological solutions.
28.08.2025 10:50
3. And, most crucially, many ethical challenges arise already during development, not just when AI is deployed.
28.08.2025 10:50
2. Why do we speak about "AI ethics" as a separate field? Because AI systems:
- impact large numbers of people at once,
- often operate in ways we can't fully see or explain (the "black box" problem),
1. While this is an important aspect, it is equally crucial to focus on the people who design and implement these systems and to support them in making the right decisions.
28.08.2025 10:50