Tinghui Duan (段庭辉)'s Avatar

Tinghui Duan (段庭辉)

@t-duan.bsky.social

12 Followers  |  40 Following  |  3 Posts  |  Joined: 13.08.2024

Latest posts by t-duan.bsky.social on Bluesky

Video thumbnail

The journey to DH2026 in Daejeon has begun! 🇰🇷

Here is a first glimpse of our host city and the conference theme of 'Engagement'.

For all official information, including future calls for proposals and registration, our website is your main hub.

🌐 Visit us:
dh2026.adho.org

#DH2026 #Daejeon

22.07.2025 12:23 — 👍 14    🔁 8    💬 0    📌 0
Logo of the COMUTE project.

Right now at #DH2025: Sandra Balck (@traed.bsky.social) and Sascha Heße are presenting the COMUTE project (Collation of Multilingual Text), run jointly by colleagues at @unihalle.bsky.social, @unipotsdam.bsky.social, and @freieuniversitaet.bsky.social.

www.comute-project.de/en/index.html

17.07.2025 08:17 — 👍 18    🔁 6    💬 1    📌 1
Preview
Collective Biographies of Women Find the 1271 English-language books that collect chapter-length biographies of women of all types, famous and obscure, from queens to travelers, from writers to activists. CBW studies versions of wom...

"Fabulation and Care: What AI, Wikidata, and an XML Schema Can Recognize in Women's Biographies" -- @alisonbooth.bsky.social promises to be as cheekily adversarial to contemporary AI as possible. Interrogating (caringly) metaphors of text "mining," "distant" reading, &c.

#DH2025

16.07.2025 15:45 — 👍 14    🔁 4    💬 1    📌 0

Gephi Lite is great: it allows a direct import of character network data from #DraCor and is included in our "Tools" tab. Try the German "Hamlet" translation (Schlegel/Tieck edition, 1843/44): dracor.org/id/gersh0000...
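
(If you'd rather pull the same network data programmatically, here is a minimal Python sketch. The API base URL, endpoint path, and play identifier below are assumptions for illustration, so check the DraCor API documentation and the full play ID rather than taking them verbatim.)

import requests

BASE = "https://dracor.org/api/v1"        # assumed API base URL
corpus, play = "gersh", "gersh-hamlet"    # hypothetical corpus/play identifiers

# Assumed endpoint: weighted character co-occurrence network as CSV
resp = requests.get(f"{BASE}/corpora/{corpus}/plays/{play}/networkdata/csv")
resp.raise_for_status()

# Each CSV row should describe one co-occurrence edge between two characters,
# ready to be loaded into Gephi Lite or networkx.
for line in resp.text.splitlines()[:5]:
    print(line)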

(Especially love the "Guess settings" option for the ForceAtlas2 layout!)

#DH2025

17.07.2025 15:32 — 👍 19    🔁 9    💬 1    📌 0
First slide of our presentation showing the title, authors and their affiliation.

Slides from our #DH2025 presentation:

"Wikipedia as an Echo Chamber of Canonicity: ›1001 Books You Must Read Before You Die‹"

bit.ly/1001echo

#DigitalHumanities @viktor.im @temporal-communities.de @freieuniversitaet.bsky.social

16.07.2025 08:31 — 👍 46    🔁 17    💬 2    📌 1
Slide showing the geographic distribution of canonical literary works retrieved from Wikidata.

As expected: we need more diverse sources to study canonicity and world literature.
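
(For anyone wanting to reproduce a distribution like the one on the slide: a minimal Python sketch against the public Wikidata SPARQL endpoint, counting items typed as literary work (Q7725634) by country of origin (P495). The query shape and the User-Agent string are illustrative assumptions; a real study would need a more carefully scoped query.)

import requests

QUERY = """
SELECT ?countryLabel (COUNT(?work) AS ?n) WHERE {
  ?work wdt:P31 wd:Q7725634 ;   # instance of: literary work
        wdt:P495 ?country .     # country of origin
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?countryLabel
ORDER BY DESC(?n)
LIMIT 20
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "canonicity-demo/0.1 (example)"},  # placeholder UA
)
resp.raise_for_status()

# Print the top countries of origin and their work counts
for row in resp.json()["results"]["bindings"]:
    print(row["countryLabel"]["value"], row["n"]["value"])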

Great work!

#dh2025

16.07.2025 08:47 — 👍 10    🔁 6    💬 0    📌 0
Screenshot of the dracor.org frontpage from 14 July 2025.

💫 We just rolled out a new #DraCor release featuring significant improvements to the API, Frontend, and Schema.

Full update here:
weltliteratur.net/dracor-platf...

#DigitalHumanities #DH2025

14.07.2025 09:50 — 👍 23    🔁 12    💬 0    📌 1
Post image Post image

I picked up a book and found myself some brain rot.

02.12.2024 07:08 — 👍 1    🔁 0    💬 0    📌 0
Post image

Konrad, a wanderer from Thuringia, has a mystical encounter: on a stormy autumn day, he meets the mermaid Selina on the Baltic Sea coast.

You can read the AI-generated short fairy tale here:
derdigitaledichter.d...

#literatur #lesen #LitWiss #KI-Literatur

10.09.2024 08:00 — 👍 1    🔁 1    💬 0    📌 0
Img credit N. Muennighoff, taken from the linked paper. 

This image has two sections:

Performance vs. Cost Graph:

The y-axis shows performance (% MMLU) and the x-axis shows cost (billion active parameters).
Blue circles represent Dense LMs, and dark blue diamonds represent Mixture-of-Experts (MoE) models.
OLMoE-1B-7B (marked by a pink star) stands out in the top-left for having a good performance/cost ratio.

Table: "How open are open MoEs?":

The table compares models based on their openness in categories like "Model, Data, Code, Logs," and the number of checkpoints.
OLMoE-1B-7B is highlighted and is open in all categories.


Allen AI has released a fully open (weights, data, and code) mixture-of-experts model. It's super small (1B active parameters out of 7B total) but has good performance for its size. Paper: arxiv.org/abs/2409.02060 #machinelearning #AI 🤖 🧪
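
(For readers puzzled by "1B active out of 7B total": in a mixture-of-experts layer, a router sends each token to only a few experts, so most of the model's parameters sit idle for any given token. Below is a toy PyTorch sketch of top-k routing; sizes and module names are illustrative, not OLMoE's actual architecture.)

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)
        weights, idx = gate.topk(self.top_k, dim=-1)        # route each token to k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

x = torch.randn(4, 64)
print(TinyMoE()(x).shape)  # torch.Size([4, 64]); only 2 of 8 experts ran per token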

04.09.2024 03:22 — 👍 33    🔁 8    💬 2    📌 0

Heads-up that I'll be speaking in this series on Oct 23, with the title "Why AI Needs the Humanities as a Partner."

30.08.2024 15:13 — 👍 12    🔁 4    💬 1    📌 1
Preview
Large Language Models and Literature | American Comparative Literature Association

What can literature and literary studies bring to LLMs and vice versa?
Our seminar on LLMs and Literature is up on the ACLA 2025 website! @arhanlon.bsky.social and I are excited to read your paper proposals. Please submit them via the portal between September 13 and October 14.
www.acla.org/large-langua...

15.08.2024 17:41 — 👍 20    🔁 14    💬 0    📌 1

How about using AI to recreate a lost romantic world?

15.08.2024 16:55 — 👍 2    🔁 0    💬 0    📌 0
Created by Flux Dev.

"Il meglio è l'inimico del bene".

15.08.2024 16:42 — 👍 1    🔁 0    💬 0    📌 0
