IA‑Speak is a pioneering project designed to improve the quality of life of people living with dysarthria. Led by a consortium of companies, research centres and partner organisations, it has received support under the Government of Navarra’s strategic R&D funding scheme.
Dysarthria
Acquired brain injury (ABI) affects around 435,400 people in Spain, with approximately 104,000 new cases every year. It can cause motor impairments that affect speech, producing dysarthria and, in many cases, serious difficulties in communication. The impact extends to families and frequently limits day‑to‑day autonomy.
Rehabilitation for these disorders must include specific work on communication and the use of assistive devices, tailored to each individual’s needs. Despite technological advances, there are still too few aids adapted to patients’ real circumstances and capable of accompanying them throughout rehabilitation to facilitate effective communication.
Rapid progress in artificial intelligence and in speech technologies opens the door to accessible, affordable solutions for people with dysarthria. Automatic speech recognition and high‑quality voice synthesis are already transforming accessibility in other fields, but there are still very few clinical datasets for dysarthria on which to train robust systems.
IA‑Speak focuses on designing, developing and validating a device and digital platform capable of restoring expressive capacity for people with dysarthria—both in everyday communication and in therapy—thus improving social participation and quality of life.
The project advances along two complementary lines. First, it creates a personalised synthetic voice for each person, trained where possible on pre‑injury recordings or, alternatively, on samples from close relatives so that the generated voice feels familiar and natural. Second, it develops an application that converts impaired articulation into a clear, intelligible voice in real time.
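As a rough illustration of the second line, real-time conversion can be thought of as recognition followed by personalised synthesis. The sketch below chains off-the-shelf models for both stages; the model names, the placeholder speaker embedding and the overall wiring are assumptions for illustration, not IA-Speak's actual implementation.

```python
# Hypothetical two-stage pipeline: impaired speech -> text -> clear,
# personalised synthetic speech. Model choices are illustrative only.
import torch
import soundfile as sf
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
tts = pipeline("text-to-speech", model="microsoft/speecht5_tts")

# In the real system this embedding would be derived from pre-injury
# recordings or a relative's voice; here it is a random placeholder.
speaker_embedding = torch.randn(1, 512)

def restore_voice(audio_path: str, out_path: str = "clear_speech.wav") -> str:
    """Recognise impaired speech, then re-synthesise it as clear speech."""
    text = asr(audio_path)["text"]
    speech = tts(text, forward_params={"speaker_embeddings": speaker_embedding})
    sf.write(out_path, speech["audio"], speech["sampling_rate"])
    return text
```

In the real system, the recognition stage would be a model adapted to non-standard speech, as discussed later in this article.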
For daily communication, the device will be portable, affordable and robust, with a battery life of at least five hours and a product lifetime of more than twenty years, features that ensure reliability in real-world conditions.
For rehabilitation, the platform will objectively record acoustic and speech markers, enabling clinicians to track progress and to tailor exercises and therapeutic recommendations to each person’s evolution using measurable criteria.
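As an illustration of what such objective markers might look like, the sketch below computes a few simple acoustic measures from a recording; this feature set is an assumption chosen for clarity, not the platform's actual marker definition.

```python
# A hedged sketch of simple objective acoustic markers; the feature set
# here is illustrative, not the platform's actual marker definition.
import librosa
import numpy as np

def acoustic_markers(path: str) -> dict:
    """Compute a few session-level speech markers from a recording."""
    y, sr = librosa.load(path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                          # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    return {
        "duration_s": len(y) / sr,
        "mean_f0_hz": float(np.mean(f0)) if f0.size else None,  # pitch level
        "f0_std_hz": float(np.std(f0)) if f0.size else None,    # pitch stability
        "voiced_ratio": float(np.mean(voiced_flag)),            # phonation time
        "mean_rms": float(np.mean(rms)),                        # loudness proxy
    }
```

Tracked across sessions, trends in markers of this kind could support the measurable criteria mentioned above.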

Data gathered about each person, both during rehabilitation and in day-to-day communication, will provide objective information about their condition. This will form the basis of a fully personalised system, and therefore one more effective than anything currently available.
The solution will continuously receive new data with which to train its speech-processing algorithms, enabling the system to learn automatically.
IA-Speak is highly innovative at an international level: it is the first system focused both on rehabilitation and on effective communication for people with speech disorders following an acquired brain injury (ABI).
Participating companies and their roles
The IA-Speak project is led by Copysan, a company specialising in Information and Communication Technologies. It coordinates all the technological developments carried out by the other participants, from requirements definition through to final integration and validation. It also develops the rehabilitation component of the solution, which includes an intelligent, personalised AI-based system for recommending exercises.
The Navarra Artificial Intelligence Research Centre (NAIR Center) works on obtaining and analysing voice patterns to develop a real-time translation system. Its researchers work, for example, in computational neuroscience and in the application of AI to biomedical signal analysis. Within the project, it develops AI models for automatic recognition of non-standard speech, enabling users to communicate effectively and fluently; these models are a key part of the communication device. NAIR Center's work will also be used to extract and analyse speech characteristics in order to track the user's progress and condition through the rehabilitation platform.
Falcón Electrónica, a leading company in the electronics sector, designs and develops the electronics of the communication device, in coordination with the hardware design and under the same requirements of usability, circularity, sustainability and functionality. Its goal is to minimise the final system's power consumption, size and cost.
For its part, BigD, a company specialising in Industrial and Digital Design, is responsible for the design and development of the integrated solution (the device), focusing on user experience and on criteria of circularity and sustainability. It is also designing the intelligent rehabilitation platform.
Veridas, a technology company specialising in biometrics, is responsible for developing a biometric engine adapted to non-standard speech in order to verify the identity of the user of both the rehabilitation platform and the device, guaranteeing the accessibility and personalisation of the solution.
Adacen, an agent of SINAI with expert profiles in brain injury and speech and language therapy, participates from the definition of IA-Speak's requirements through to its validation with chronic users, ensuring a solution adapted to them. Its aim is a system that helps these users maintain their speech characteristics, delaying deterioration as much as possible. Adacen has designed the platform's personalised exercises and is recording the voices of chronic patients for the project's databases; it will also validate the resulting solution.
Finally, the Miguel Servet Foundation, also specialising in brain injury and speech and language therapy, collaborates in defining requirements and in validating the system with patients in the subacute phase, seeking a significant improvement in their speech.
High degree of innovation
Thanks both to the individual capabilities of the consortium members and to their complementarity and collaboration, the project will bring together important innovations in several areas:
Rehabilitation methodologies: dysarthria rehabilitation currently relies on an exercise plan established for each patient from the speech therapist's subjective perception. IA-Speak will provide objective evaluation criteria, facilitate remote completion of exercises in order to increase their frequency, and offer exercises designed specifically around each patient's progress and needs.
Algorithms for obtaining and interpreting voice patterns, and voice biometrics: there are no proven voice-processing solutions for people with speech disorders, owing to the scarcity and variability of data from these users. IA-Speak will create models and techniques capable of extracting speech characteristics that belong to the person rather than to their pathology (see the verification sketch after this list).
Intelligent recommendation platform: existing rehabilitation platforms for people with acquired brain injury allow speech and language therapy professionals to view data, but they neither analyse how the user performs the exercises nor make recommendations on that basis. IA-Speak will be the first platform to characterise user types, use facial and voice recognition, and design an intelligent recommendation system built on multimodal models.
Translation device: technologically advanced devices exist for language translation, but at the algorithmic level they are not prepared to recognise people with speech disorders, and at the hardware level they are not adapted to these users' needs. Only one recent application is known to translate this type of speech; it is designed for mobile devices and tablets and is therefore not adapted to users' motor needs either. IA-Speak proposes a device fully adapted in both hardware and functionality, promoting effective communication while being matched to users' motor and cognitive needs.
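As promised above, here is a minimal sketch of voice-biometric verification along these lines, assuming a pretrained speaker-embedding model and an illustrative decision threshold; a system adapted to non-standard speech would be trained or fine-tuned on dysarthric data.

```python
# A minimal sketch of speaker verification on voice embeddings. The
# pretrained model and the threshold are illustrative assumptions.
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb")

def embed(path: str) -> torch.Tensor:
    """Map a recording to a fixed-size speaker embedding."""
    signal, _ = torchaudio.load(path)
    return encoder.encode_batch(signal).squeeze()

def same_speaker(enrolled_wav: str, probe_wav: str,
                 threshold: float = 0.6) -> bool:
    """Cosine similarity between embeddings decides the verification."""
    score = torch.nn.functional.cosine_similarity(
        embed(enrolled_wav), embed(probe_wav), dim=0)
    return score.item() >= threshold
```

In practice, the threshold would need calibrating per user, since atypical speech can shift embedding similarity scores.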
The role of Artificial Intelligence
Artificial intelligence techniques are key to solving one of the project’s greatest technological challenges: developing systems capable of understanding and translating impaired speech, something for which current commercial models are completely ineffective. Thanks to machine learning, it will be possible to create models trained specifically for this type of non-standard speech, enabling users to communicate effectively and fluently.
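A minimal sketch of what such specific training could look like, assuming a publicly available base model and supervised pairs of recordings and transcripts; none of this is confirmed as the project's actual training recipe.

```python
# A hedged sketch of adapting a general ASR model to non-standard speech
# by fine-tuning on recordings from users with dysarthria. The base model
# and training details are illustrative assumptions.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(waveform, transcript: str) -> float:
    """One CTC fine-tuning step on a (recording, transcript) pair."""
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    labels = processor(text=transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The scarcity of dysarthric speech data noted earlier is precisely what makes collecting recordings, as Adacen is doing, a prerequisite for this kind of adaptation.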
IA-Speak also proposes a rehabilitation platform capable of evaluating each user's progress objectively and continuously, automatically adapting exercises to their specific needs. This personalisation is only possible through AI models that analyse the voice, the face and each patient's individual progress in real time.
In addition, artificial intelligence gives the system a capacity for continuous learning. As more people use the solution, their data will feed the models, which can be fine-tuned, improve their predictions and adapt to new clinical situations. This evolutionary quality makes IA-Speak a living tool that improves constantly, something that would be impossible without AI.
Finally, the system's own architecture, which integrates voice data with image and text data to provide a coherent, useful response, requires advanced multimodal AI techniques. This intelligent integration makes it possible to understand not only what a person says but also how they say it, yielding a user-centred solution.