Let's make it speak Ithkuil.
12.10.2025 00:17
@malper.bsky.social
PhD student researching multimodal learning (language, vision, ...). Also a linguistics enthusiast. morrisalp.github.io
I'm fascinated by the idea of a game like No Man's Sky where infinite new civilizations could be generated dynamically with their own languages as the player explores the universe.
11.10.2025 23:15
That's definitely a valid concern, and I believe we need to rethink how LLMs and other generative models are deployed and used in practice. We do explicitly discuss some of these considerations.
11.10.2025 20:06
As a conlanger myself, I was mainly curious to explore whether LLMs could be used as a creative assistant for humans, as well as for procedural generation in games with unbounded worlds. I hope this gets more people interested in conlanging and experimenting themselves.
11.10.2025 18:56
Thanks for the heads-up, fixing this.
11.10.2025 17:38
Try out some of the newest languages on our project page:
conlangcrafter.github.io
ConlangCrafter could potentially be used in pedagogy, typological and NLP work, and many entertainment applications. Imagine a video game where aliens can speak countless new procedurally-generated languages.
11.10.2025 05:35
To enhance consistency and diversity, our pipeline incorporates randomness injection and self-refinement mechanisms. These properties are measured by our novel evaluation framework, providing rigorous evaluation for the new task of computational conlanging.
11.10.2025 05:35
The ConlangCrafter pipeline harnesses an LLM to generate a description of a constructed language and self-refines it in the process. We decompose language creation into phonology, grammar, and lexicon, and then translate sentences while constructing new grammar points as needed.
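The staged decomposition with self-refinement described above can be sketched roughly as follows. This is a minimal illustration, not the actual ConlangCrafter implementation; `call_llm`, `refine`, and `generate_conlang` are hypothetical names, and `call_llm` is a stub standing in for a real LLM API.

```python
# Hypothetical sketch of a staged conlang-generation pipeline with
# self-refinement. `call_llm` is a stand-in for a real LLM API call.
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would query an LLM here.
    return f"[LLM response to: {prompt[:40]}...]"

def refine(stage: str, draft: str, rounds: int = 2) -> str:
    """Self-refinement: ask the model to critique and revise its own draft."""
    for _ in range(rounds):
        critique = call_llm(f"Critique this {stage} description:\n{draft}")
        draft = call_llm(f"Revise the {stage} given this critique:\n{critique}\n{draft}")
    return draft

def generate_conlang(concept: str) -> dict:
    spec = {}
    # Decompose language creation into sequential stages, each
    # conditioned on the concept and all previously generated components.
    for stage in ("phonology", "grammar", "lexicon"):
        context = "\n".join(f"{k}: {v}" for k, v in spec.items())
        draft = call_llm(f"Concept: {concept}\n{context}\nGenerate the {stage}.")
        spec[stage] = refine(stage, draft)
    return spec

conlang = generate_conlang("an alien cephalopod color-based language")
print(sorted(conlang))  # ['grammar', 'lexicon', 'phonology']
```

Conditioning each stage on all earlier ones is what lets, e.g., the lexicon respect the phonology generated before it.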
11.10.2025 05:35
Conlangs (constructed languages), from Tolkien's Elvish to Esperanto, have long been created for artistic, philosophical, or practical purposes.
As generative AI proves its creative power, we ask:
Can it also take on the laborious art of conlang creation?
The number of languages in the world just got a lot higher! At least constructed ones.
Meet ConlangCrafter - a pipeline for creating novel languages with LLMs.
A Japanese-Esperanto creole? An alien cephalopod color-based language?
Enter your idea and see a conlang emerge. 🧵👇
Now accepted to #NeurIPS2025!
18.09.2025 15:54
Check out our project page and paper for more info:
Project page: wildcat3d.github.io
Paper: arxiv.org/abs/2506.13030
(5/5)
At inference time, we inject the appearance of the observed view to get consistent novel views. This also enables cool applications like appearance-conditioned novel view synthesis (NVS)! (4/5)
17.06.2025 16:16
To learn from this data, we use a novel multi-view diffusion architecture adapted from CAT3D, modeling appearance variations with a bottleneck encoder applied to VAE latents and disambiguating scene scale via warping. (3/5)
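The bottleneck-encoder idea can be sketched in a toy form: squeezing a view's latent through a very low-dimensional code forces the code to carry only global appearance, which can then be injected when generating other views. This is an illustrative sketch, not the paper's architecture; the dimensions, the names `appearance_code` and `condition`, and the random projections standing in for learned weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
LATENT_DIM = 64      # flattened VAE latent per view
APPEARANCE_DIM = 4   # tight bottleneck: only global appearance fits through

# Random projections stand in for learned encoder/decoder weights.
W_enc = rng.normal(size=(LATENT_DIM, APPEARANCE_DIM))
W_dec = rng.normal(size=(APPEARANCE_DIM, LATENT_DIM))

def appearance_code(vae_latent: np.ndarray) -> np.ndarray:
    """Squeeze a view's latent through the bottleneck: the low
    dimensionality can only preserve global appearance information
    (lighting, time of day), not per-pixel scene content."""
    return vae_latent @ W_enc

def condition(denoiser_input: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Additively inject the appearance code into the generator's
    input, so generated novel views share that appearance."""
    return denoiser_input + code @ W_dec

observed = rng.normal(size=LATENT_DIM)
code = appearance_code(observed)               # e.g. "sunny afternoon"
novel_view_latent = rng.normal(size=LATENT_DIM)
conditioned = condition(novel_view_latent, code)
print(code.shape, conditioned.shape)  # (4,) (64,)
```

The same mechanism is what enables the appearance-conditioned generation mentioned in the next post: swap in a different view's code and the generated views adopt its appearance.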
17.06.2025 16:16
Photos like the ones below differ in global appearance (day vs. night, lighting), aspect ratio, and even weather. But they give clues to how scenes are built in 3D. (2/5)
17.06.2025 16:16
🔥 New preprint! WildCAT3D uses tourist photos in-the-wild as supervision to learn to generate novel, consistent views of scenes like the one shown below. h/t Tom Monnier and all collaborators (1/5)
17.06.2025 16:16
Disappointing that arXiv doesn't allow XeLaTeX/LuaLaTeX submissions, which have the least broken multilingual support among LaTeX compilers. The web shouldn't be limited to English in 2025!
13.06.2025 23:53
More coverage of our work on AI for ancient cuneiform! news.cornell.edu/stories/2025...
31.03.2025 15:31
See our paper, project page, and GitHub for more details and a full implementation!
arXiv: arxiv.org/abs/2502.00129
Project page: tau-vailab.github.io/ProtoSnap/
GitHub: github.com/TAU-VAILab/P...
Finally, we show that ProtoSnap-aligned skeletons can be used as conditions for a ControlNet model to generate synthetic OCR training data. By controlling the shapes of signs in training, we can achieve SOTA on cuneiform sign recognition. (Bottom: synthetic generated sign images)
04.02.2025 18:24
Our results show that ProtoSnap effectively aligns wedge-based skeletons to scans of real cuneiform signs, with global and local refinement steps. We provide a new expert-annotated test set to quantify these results.
04.02.2025 18:24
ProtoSnap uses features from a fine-tuned diffusion model to optimize for the correct alignment between a skeleton matched with a prototype font image and a scanned sign. It is perhaps surprising that image generation models can be applied to this sort of discriminative task!
04.02.2025 18:24
We tackle this by directly measuring the internal configuration of characters. Our approach ProtoSnap "snaps" a prototype (font)-based skeleton onto a scanned cuneiform sign using a multi-stage pipeline with SOTA methods from computer vision and generative AI.
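The "snapping" idea, at its core, is an alignment search: transform the prototype skeleton so its deep features best match the scanned sign's features. A toy sketch of the global (coarse) step, assuming synthetic 2D feature maps in place of real diffusion features and a simple translation-only grid search, neither of which is the paper's actual method:

```python
import numpy as np

# Toy stand-ins for deep features: in the real method these would come
# from a fine-tuned diffusion model; here they are synthetic binary maps.
H = W = 32
scan_feat = np.zeros((H, W))
scan_feat[10:20, 12:22] = 1.0           # "sign" region in the scan

proto_feat = np.zeros((H, W))
proto_feat[8:18, 8:18] = 1.0            # prototype skeleton region

def similarity(shift):
    """Cosine similarity between the scan features and the
    prototype features translated by (dy, dx)."""
    shifted = np.roll(proto_feat, shift, axis=(0, 1))
    a, b = shifted.ravel(), scan_feat.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

# Global step: coarse grid search over translations of the whole skeleton.
shifts = [(dy, dx) for dy in range(-6, 7) for dx in range(-6, 7)]
best = max(shifts, key=similarity)
print(best)  # (2, 4): moves the prototype onto the scanned sign
```

A local refinement step would then perturb individual wedges of the skeleton under the same feature-similarity objective, mirroring the global-then-local structure described in the posts.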
04.02.2025 18:24
Some prior work has tried to classify scans of signs categorically, but signs' shapes differ drastically across time periods and regions, making this less effective. E.g. both signs below are AN, from different eras. (Top: font prototype; bottom: scan of the sign on a real tablet)
04.02.2025 18:24
Cuneiform is arguably the most ancient writing system in the world (in use since ~3300 BCE). Inscriptions in ancient languages (e.g. Sumerian, Akkadian) are numerous but hard to read due to the complex writing system, wide variation in sign shapes, and their physical nature as imprints in clay.
04.02.2025 18:24
Cuneiform at #ICLR2025! ProtoSnap finds the configuration of wedges in scanned cuneiform signs for downstream applications like OCR. A new tool for understanding the ancient world!
tau-vailab.github.io/ProtoSnap/
h/t Rachel Mikulinsky @ShGordin @ElorHadar and all collaborators.
🧵👇