Human-level AI: What scientific breakthroughs are happening?

Yann LeCun, one of the pioneers of artificial intelligence (AI) and the chief AI scientist at Meta, said of human-level AI that "such a level of AI is not just around the corner. It will take a long time and will require new scientific breakthroughs that we do not yet know about." However, there […]
Going beyond simple Retrieval-Augmented Generation

What is Retrieval-Augmented Generation (RAG)? RAG has become a powerful method for grounding large language models (LLMs) in content tailored to specific domains. In essence, RAG systems allow users to index a body of documents and ask questions about the content of those documents in natural language. The system responds by first retrieving […]
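The index-then-retrieve-then-answer loop described above can be sketched in a few lines. This is a minimal illustration, not the system discussed in the article: it uses plain term-frequency cosine similarity in place of a real embedding model, and the `build_prompt` helper is a hypothetical stand-in for however the retrieved passages are fed to an LLM.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return cleaned.split()

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, documents, k=1):
    # Rank the indexed documents by similarity to the query; return the top k.
    qvec = Counter(tokenize(query))
    ranked = sorted(documents,
                    key=lambda d: cosine(qvec, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Hypothetical helper: ground the model's answer in retrieved passages.
    context = "\n".join(retrieve(query, documents, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is open Monday to Friday.",
    "Shipping takes five business days.",
]
print(build_prompt("refund policy", docs))
```

A production RAG system would replace the term-overlap scorer with dense embeddings and a vector index, but the control flow, retrieve first, then generate from the retrieved context, is the same.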
The AI race: How are smaller languages keeping up?

Since the emergence of statistical language models in the 1980s, the field has undergone a remarkable transformation. Over the decades, methodologies for building these AI models have evolved, the volume of processed data has increased, and the models themselves have grown in complexity. The number of parameters (the mathematical coefficients in the models' formulas) has […]
The LLM data dilemma: Ocean of dirt or drop of gold?

By Dr. Toms Bergmanis, AI Researcher at Tilde. Building AI systems capable of understanding and generating human language requires vast amounts of language data. This data is the foundation for an LLM's ability to comprehend and produce human-like language. However, the cliché that not all data is created equal holds true here, and this distinction […]