How TildeOpen LLM is built
A closer look at key moments, breakthroughs, and what's next.
Notes from the Lab
Behind-the-scenes commentary, insights, and updates from our research team on TildeOpen development.
16.07.2025.
TildeOpen is nearing the end of its development! After training on 2 trillion tokens, the foundational model is complete, and we're now moving on to fine-tuning and evaluation. Once ready, the fine-tuned models will be published on Hugging Face.
09.06.2025.
We're proud to be among the first companies to test JUPITER, Europe's first exascale supercomputer! With 1.2 million GPU hours granted to us, we'll adapt TildeOpen for real-world use, including multilingual enterprise search, context-aware assistants, and other secure AI tools.
Great news! We've secured an additional 140,000 GPU hours on LUMI through EuroHPC JU. These resources will be used to instruction-tune the model as part of the FFplus-funded project, focusing on key multilingual tasks such as translation, summarisation, and question answering.
We have finally started the long-awaited TildeOpen pretraining. Borrowing from Mark Twain: “Quitting smoking is the easiest thing in the world; I’ve done it thousands of times.” Let’s hope this run is not a false start and delivers the results we’ve been working towards for so long!
See how LLMs really perform
Created by our researchers, TildeBench is a public leaderboard tracking how various LLMs handle tasks like machine translation, in-context question answering, and grammar-sensitive text generation, all in languages that are often overlooked. It'll be updated with new tasks and models over time.
Build AI that speaks your language
TildeOpen gives you the foundation to create secure and sovereign AI. Explore the model now, or talk to us about tailoring it to your needs.