Yann LeCun, one of the pioneers of artificial intelligence (AI) and the chief AI scientist at Meta, has said of human-level AI that “such a level of AI is not just around the corner. It will take a long time and will require new scientific breakthroughs that we do not yet know about.” Yet there is no shortage of breakthroughs – they are simply happening quietly. Impressive AI results receive most of the attention in the public arena, but the discussion rarely turns to the scientific discoveries that may follow from them. So what is new in AI science today, and are we getting closer to creating human-level AI?
Artificial intelligence learns to reason
One of the latest breakthroughs is the so-called “chain of thought” mechanism. Its essence is that the model does not produce a response immediately, as earlier AI systems did, but solves the task sequentially, step by step. How does this look in practice?
Imagine: a person weighs X kilograms and wants to know what dose of medicine to take. The instructions indicate that X milligrams of medicine are required for every kilogram of body weight, and that the calculated daily dose should be divided into three equal parts, taken in the morning, at lunchtime and in the evening. What would a human do? First calculate the total daily dose, then divide it by three. Now AI does the same thing: instead of producing a pre-learnt answer, it solves the task step by step. This is already reasoning.
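To make this concrete, here is a minimal Python sketch of the difference between direct prompting and chain-of-thought prompting. The `ask_model` function is a hypothetical stand-in for any large language model call, and the 70 kg and 15 mg-per-kg figures are illustrative assumptions, not values from any real instruction leaflet.

```python
# A minimal sketch of chain-of-thought prompting.
# `ask_model` is a hypothetical stand-in for a real LLM API call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM client here")

QUESTION = (
    "A person weighs 70 kg. The instructions say 15 mg of medicine is "
    "needed per kg of body weight per day, divided into three equal "
    "doses. How many mg is one dose?"
)

def direct_answer() -> str:
    # Direct prompting: the model is pushed to answer in one jump.
    return ask_model(QUESTION + "\nAnswer with a single number.")

def chain_of_thought_answer() -> str:
    # Chain-of-thought prompting: the model reasons first, typically
    # 70 * 15 = 1050 mg per day, then 1050 / 3 = 350 mg per dose.
    return ask_model(QUESTION + "\nLet's think step by step.")
```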
This principle is being developed further. The so-called “tree of thought” technique takes it a step beyond: one thought can give rise to several alternatives, and the AI evaluates which alternative is most likely to lead logically to the goal. In other words, AI is learning not only to provide an answer but also to reason – much as in school mathematics lessons, where it was important not only to get the result but also to demonstrate the correct method for arriving at it.
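In code, the idea can be sketched as a simple search over branching thoughts. The `propose` and `score` functions below are hypothetical stand-ins for model calls, and the greedy beam strategy shown is just one of several ways such a tree can be explored.

```python
# A rough sketch of tree-of-thought search (a greedy beam variant).
# `propose` and `score` are hypothetical stand-ins for model calls.

def propose(thought: str, k: int = 3) -> list[str]:
    raise NotImplementedError  # ask the model for k alternative next steps

def score(thought: str) -> float:
    raise NotImplementedError  # ask the model how promising this path looks

def tree_of_thought(task: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [task]
    for _ in range(depth):
        # One thought gives rise to several alternatives...
        candidates = [c for t in frontier for c in propose(t)]
        # ...and only the most promising branches are explored further.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]  # the path judged most likely to reach the goal
```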

Language-neutral AI is being developed: hope for small languages
Speakers of small languages today cannot fully utilise the potential of AI, but active work is being done in this direction – the aim is to make models language-neutral.
One study took a large language model that had been tested only in English; when presented with Danish texts, it adapted successfully. When the structure of the model was analysed, it turned out that the parameters that changed the most were those related to input and output, i.e., language processing. This means that the knowledge in the model is neither English nor Danish – it is language-neutral. It is reminiscent of a person’s ability to think independently of language: although I acquire information in English, I can later interpret it and convey it in Lithuanian. Science has not yet fully worked out how to create a universal, language-independent model, but it is clear that we are getting closer to it.
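This observation suggests a practical recipe, sketched below in PyTorch: when adapting a model to a new language, freeze the middle layers – where the language-neutral knowledge appears to live – and update only the input and output parameters. The layer-name patterns used here (“embed”, “lm_head”) are assumptions; real checkpoints use their own naming schemes.

```python
# A sketch of language adaptation that trains only input/output
# parameters, mirroring the observation that these change the most.
# Name patterns ("embed", "lm_head") are illustrative assumptions.
import torch

def unfreeze_language_layers(model: torch.nn.Module) -> None:
    for name, param in model.named_parameters():
        # Keep the input embeddings and the output head trainable;
        # freeze the middle layers, where the knowledge itself lives.
        param.requires_grad = "embed" in name or "lm_head" in name
```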
What does this mean for us as representatives of a small language? If this idea is developed further, there will be no need to accumulate huge text collections, test models and constantly trail in the wake of big languages. AI models will already contain a great deal of language-neutral knowledge. All that remains is to supplement them with specialised Lithuanian content – content that is simply not available in other languages – and we will have AI that works with Lithuanian as effectively as with English.
From distributed thinking to collaborative systems
Another significant direction of progress in AI is model architecture. Instead of one huge, all-knowing model, the “mixture of experts” principle is increasingly being applied – an idea from machine-learning research of the early 1990s, recently brought to prominence by models such as DeepSeek. Today it is used in the latest versions of advanced language models, such as Grok or Llama. The idea is simple but effective: separate “experts” operate inside the model, each of which specialises in its own field. When a user asks a question, a gating algorithm selects the most suitable experts and activates them for that specific task. This not only saves computing resources; the principle also has obvious similarities with how the human mind works. Different areas of our brain are responsible for different functions: one interprets language, another processes mathematical content, and yet another handles emotional or creative content. The development of AI only reminds us once again that intelligence is not a single all-controlling force but a multitude of different abilities that gain meaning only when acting together.
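In code, the routing principle looks roughly like the minimal PyTorch sketch below. The toy linear “experts” and the sizes are illustrative assumptions; real systems use far larger expert networks and more elaborate gating, but the selection logic is the same.

```python
# A minimal mixture-of-experts sketch: a gating network scores the
# experts, and only the top-k of them are run for a given input.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim: int = 16, n_experts: int = 4, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)  # the "selector"
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (dim,)
        weights = self.gate(x).softmax(dim=-1)  # score every expert
        top_w, top_i = weights.topk(self.k)     # keep only the best k
        out = torch.zeros_like(x)
        for w, i in zip(top_w, top_i):
            # Only the selected experts do any work for this input.
            out = out + w * self.experts[int(i)](x)
        return out

# Usage: TinyMoE()(torch.randn(16)) activates 2 of the 4 experts.
```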
A similar logic of decentralisation is used in another direction of AI development: agent systems. Here it is not a single omniscient model that is created but a network of specialised agents capable of acting independently: analysing, planning, pursuing a goal, evaluating decisions and, when necessary, cooperating. Imagine a team of human experts: each knows their own field and works individually or coordinates actions with others to achieve a common goal. Agent-based AI adopts the same principle.
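A toy sketch of such a loop, with `planner`, `executor` and `evaluator` as hypothetical stand-ins for the specialised agents:

```python
# A toy agent loop: specialised roles cooperate toward a common goal.
# Each function is a hypothetical stand-in for an autonomous agent.

def planner(goal: str) -> list[str]:
    raise NotImplementedError  # break the goal into concrete steps

def executor(step: str) -> str:
    raise NotImplementedError  # carry out one step, e.g. using tools

def evaluator(goal: str, results: list[str]) -> bool:
    raise NotImplementedError  # judge whether the goal has been met

def run_agents(goal: str, max_rounds: int = 3) -> list[str]:
    results: list[str] = []
    for _ in range(max_rounds):
        for step in planner(goal):          # analyse and plan
            results.append(executor(step))  # act independently
        if evaluator(goal, results):        # evaluate the decisions
            break                           # goal reached
    return results
```

The result is a distributed, autonomous intelligence that is able not only to act but also to decide for us. And we, humans – are we ready for this?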

Not a race, but a mirror: what does human-level AI really mean?
The question “When will AI overtake humans?” has been raised since 1950, when Alan Turing first publicly questioned whether machines could think. Since then, technological progress has often been portrayed as a race: when will the day come when AI surpasses humans? But the reality is more complex. It is not a race, but a constant attempt to catch up with our own expectations, which change faster than technology. Yesterday, we were satisfied with AI being able to respond to an email message. Today we expect it to plan a project, complete tasks and evaluate its decisions. Tomorrow, we may want it to understand not only our language but also our emotions. And eventually, we may want it to read our thoughts, to intuitively sense even what we ourselves have not yet fully realised.
Therefore, the human-level AI that Yann LeCun talks about may not be “just around the corner”, but the breakthroughs “that we do not yet know about” are already taking place. Each of them is not just a step of technological progress but a reminder that we are approaching an undefined future, in which humanity itself is still searching for answers: what do we really want from technology, and from ourselves?
The commentary was prepared by Jurgita Kapočiūtė-Dzikienė, Senior Researcher in Language Technology at the language technology company Tilde and academic at Vytautas Magnus University.