How TildeOpen LLM is built
Notes from the Lab 📋
Behind-the-scenes commentary, insights, and updates from our research team on TildeOpen development.
04.09.2025.
16.07.2025.
TildeOpen is nearing the end of its development! After training on 2 trillion tokens, the foundational model is complete, and we’re now moving on to fine-tuning and evaluation. Once ready, the fine-tuned models will be published on Hugging Face.
09.06.2025.
We’re proud to be among the first companies to test JUPITER, Europe’s first exascale supercomputer! With 1.2 million GPU hours granted to us, we’ll adapt TildeOpen for real-world use – including multilingual enterprise search, context-aware assistants, and other secure AI tools.
Great news! We’ve secured an additional 140,000 GPU hours on LUMI through EuroHPC JU. These resources will be used to instruction-tune the model as part of the FFplus-funded project, focusing on key multilingual tasks such as translation, summarisation, and question answering.
We have finally started the long-awaited TildeOpen pretraining. Borrowing from Mark Twain: “Quitting smoking is the easiest thing in the world; I’ve done it thousands of times.” Let’s hope this run is not a false start and delivers the results we’ve been working towards for so long!

See how LLMs really perform
Created by our researchers, TildeBench is a public leaderboard tracking how various LLMs handle tasks like machine translation, in-context question answering, and grammar-sensitive text generation – all in languages that are often overlooked. It’ll be updated with new tasks and models over time.
Build AI that speaks your language
TildeOpen gives you the foundation to create secure and sovereign AI. Explore the model now, or talk to us about tailoring it to your needs.