
Voice in AI

Publication type: Article Summary
Original title: Speech as biomarker for multidisease screening
Article publication date: November 2024
Source: Repositório Institucional do Instituto Superior Técnico (Scholar)
Author: Catarina Botelho
Supervisors: Isabel Trancoso, Alberto Abad & Tanja Schultz

What is the goal, target audience, and areas of digital health it addresses?
     This study aims to explore and validate the use of speech as a non-invasive and low-cost digital biomarker for remote screening of multiple diseases, particularly those affecting the respiratory, nervous, and muscular systems. It is directed at the medical community, as well as researchers and professionals in the fields of Artificial Intelligence (AI) and signal processing. Within the field of digital health, the study contributes to key areas such as remote telemonitoring and surveillance, digital biomarkers, AI-powered diagnostics, predictive and personalized health, and silent computational paralinguistics.

What is the context?
     Speech is a biomarker that sensitively reflects the integrated functioning of several physiological systems, namely the nervous, respiratory, and muscular systems. This complexity makes it a promising resource for detecting changes associated with health status. Although speech is not in itself a digital biomarker, it acquires this status when it is captured, digitized, and analyzed by computational methods, particularly those supported by AI. Under these circumstances, it becomes possible to extract vocal patterns relevant to screening, early diagnosis, and monitoring of different clinical conditions.
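
     As an illustration of what the first step of such a pipeline might look like, the sketch below extracts a few generic acoustic features from a recording using the open-source librosa library. The feature set (mean pitch, pitch variability, voiced fraction, MFCC averages) is an assumption made for the example, not the descriptor set used in the original thesis.

```python
# Illustrative sketch: simple acoustic features from a voice recording.
# The features below are generic examples, not the study's descriptors.
import librosa
import numpy as np

def extract_basic_features(wav_path: str) -> dict:
    # Load the recording and resample to a common rate.
    y, sr = librosa.load(wav_path, sr=16000)
    # Fundamental frequency (pitch) track via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    # MFCCs summarize the spectral envelope of the voice.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),
        "voiced_fraction": float(np.mean(voiced_flag)),
        "mfcc_means": mfcc.mean(axis=1).tolist(),
    }
```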

     From the formulation of communicative intention in the cortical areas of the brain — including Broca’s area (associated with motor control of speech) and Wernicke’s area (related to language comprehension) — to the final sound emission, the speech process requires precise motor control, integrated cognitive function, and continuous regulation by auditory and proprioceptive feedback mechanisms. Hearing regulates characteristics such as intonation, volume, and articulation, while proprioception ensures the muscle coordination necessary to produce clear and fluent speech. Any dysfunction along this circuit — resulting from neurodegenerative disease, respiratory disorder, psychiatric condition, or changes associated with aging — can lead to anomalous and detectable acoustic patterns.

     Conditions such as obstructive sleep apnea — in which recurrent obstruction of the upper airways compromises vocal quality — Alzheimer’s disease — which affects language coherence, resulting in shorter and less precise sentences, reduced vocabulary, and more frequent pauses — Parkinson’s disease — which affects motor control and causes weak, monotonous speech and imprecise articulation — and psychiatric disorders such as depression are associated with characteristic vocal profiles, showing changes in intensity, pronunciation, rhythm, articulation, or linguistic content. Aging, although not a disease, can also induce changes in speech, such as reduced control of tone and vocal strength, which can mimic changes associated with certain diseases, making differential diagnosis difficult.

What are the current approaches?
     Traditional medical procedures used for the diagnosis of these diseases are often poorly scalable and accessible, especially for early or large-scale screenings. For example, the go-to method for diagnosing obstructive sleep apnea is polysomnography — an overnight sleep study conducted in a clinic that monitors breathing, heart rate, brain activity, and body movements. However, this method is expensive, time-consuming, and uncomfortable for patients. In the case of Alzheimer’s disease and depression, the diagnosis remains largely subjective: Alzheimer’s symptoms are often mistaken for aging, and depression depends on self-reports and clinical judgment, which leads to great variability and sometimes delays in recognizing the disease.

     Speech-based detection therefore represents a promising, non-invasive, and potentially more affordable alternative. However, most current AI models face several limitations: many are designed to detect only one condition at a time, and they rely on complex “black-box” algorithms — systems, often trained on small, homogeneous datasets, whose decision-making process is difficult to interpret — which limits their clinical adoption. Other obstacles include the scarcity of diverse speech datasets, ethical and legal concerns related to privacy, the difficulty of generalizing models to different languages, speech tasks, or acoustic environments, and the tendency to pick up irrelevant patterns, such as background noise, rather than signals associated with actual symptoms. These challenges point to the need for more robust, interpretable, and reliable AI approaches with greater applicability in real-world clinical settings.

     At the same time, other approaches explore the analysis of non-verbal signals produced during speech, such as the muscle activity of the face and neck. This field, known as silent computational paralinguistics, focuses on aspects such as pauses, facial expressions, and related physiological cues. Muscle activity is usually measured using surface electromyography (EMG), in which small sensors placed on the face and neck record the electrical signals of the muscles. These signals can be used to reconstruct speech in individuals who are unable to speak. However, EMG remains an intrusive, expensive technique limited to laboratory settings, which restricts its large-scale applicability.

What does innovation consist of? How is the impact of this study assessed?
     This study explored new non-invasive approaches to detecting diseases through speech. The central innovation was the development of a system that, from a voice recording made with a microphone, generates artificial signals replicating muscle activity during speech production. For this, parallel voice recordings and EMG signals from eight individuals were used. Initially, acoustic features — such as pitch and rhythm — were extracted from the recordings, and hourglass-shaped neural networks were used to recreate simplified muscle signals, which were then compared with real EMG signals. In a second step, the simplified signals were processed by convolutional neural networks and bidirectional long short-term memory networks to generate artificial EMG signals, which were also compared with real EMG recordings to assess accuracy.
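
     As a rough sketch of this second stage, the PyTorch model below pairs a one-dimensional convolutional front-end with a bidirectional LSTM that maps a sequence of simplified muscle signals to artificial EMG channels. The layer sizes, channel counts, and frame rate are assumptions made for the example, not the configuration reported in the thesis.

```python
# Hypothetical CNN + BiLSTM stage: simplified signals -> artificial EMG.
import torch
import torch.nn as nn

class EMGGenerator(nn.Module):
    def __init__(self, in_channels=8, hidden=128, emg_channels=8):
        super().__init__()
        self.conv = nn.Sequential(            # local temporal patterns
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True,
                              bidirectional=True)  # long-range context
        self.head = nn.Linear(2 * hidden, emg_channels)

    def forward(self, x):                     # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(h)
        return self.head(h)                   # (batch, time, emg_channels)

# Example: a 2-second window at an assumed 100 frames/s, 8 channels.
fake_batch = torch.randn(4, 200, 8)
print(EMGGenerator()(fake_batch).shape)       # torch.Size([4, 200, 8])
```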

     In the analysis of obstructive sleep apnea, 40 YouTube videos were used, from which three modalities were extracted: voice recordings, facial images, and lip movements. These were processed by convolutional neural networks to identify patterns associated with the disease. The voice recordings were analyzed for pitch and harshness, using noise-filtering techniques to isolate distinctive vocal patterns. Facial images were evaluated based on shape and texture, while lip movements allowed articulation to be analyzed. The three modalities were integrated through two strategies: early fusion, with joint analysis from the beginning, and late fusion, with individual analysis followed by a combination of the predictions. The models were evaluated on their ability to distinguish patients with obstructive sleep apnea from healthy individuals.
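
     The contrast between the two strategies can be sketched in a few lines. The functions below assume each modality has already been reduced to a feature vector and a per-modality apnea probability; the equal fusion weights are an illustrative choice, not values from the study.

```python
import numpy as np

def early_fusion_features(voice_feats, face_feats, lip_feats):
    # Early fusion: concatenate per-modality features so that a single
    # downstream model is trained on the joint representation.
    return np.concatenate([voice_feats, face_feats, lip_feats])

def late_fusion(p_voice, p_face, p_lips, weights=(1/3, 1/3, 1/3)):
    # Late fusion: each modality produces its own apnea probability,
    # and only the predictions are combined (weighted average here).
    return float(np.dot(weights, [p_voice, p_face, p_lips]))

# Example: three per-modality probabilities for one video.
print(late_fusion(0.62, 0.71, 0.80))          # fused apnea probability
```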

     In the case of Alzheimer’s disease, two datasets were used: the Interdisciplinary Longitudinal Study on Adult Development and Aging – ILSE (long interviews in German) and Alzheimer’s Dementia Recognition through Spontaneous Speech – ADReSS (in English). Models such as Gaussian Mixture Models, Linear Discriminant Analysis, and Support Vector Machines were applied to analyze linguistic features (such as lexical richness, grammatical structure, and pauses) and acoustic features (such as pitch, rhythm, and voice quality). The study tested which features and models performed best when evaluated on the same dataset they were trained on, and how models trained on German data performed when applied to English data, and vice versa, in distinguishing Alzheimer’s patients from healthy individuals.
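
     A minimal sketch of this matched versus cross-corpus protocol, using scikit-learn with synthetic placeholder matrices standing in for the real linguistic and acoustic descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Synthetic placeholders, not the real ILSE/ADReSS feature matrices.
rng = np.random.default_rng(0)
X_de_train, y_de_train = rng.normal(size=(80, 20)), rng.integers(0, 2, 80)
X_de_test, y_de_test = rng.normal(size=(20, 20)), rng.integers(0, 2, 20)
X_en, y_en = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_de_train, y_de_train)

# Matched condition: train and test on the same (German) corpus.
print("same language:", accuracy_score(y_de_test, clf.predict(X_de_test)))
# Cross-lingual condition: the same model applied to the English corpus;
# in the study, this is where accuracy dropped toward chance.
print("cross language:", accuracy_score(y_en, clf.predict(X_en)))
```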

     Additionally, reference parameters for healthy speech were defined based on the Crowdsourced Language Assessment Corpus, composed of recordings of individuals describing images and producing vowel sounds. From these recordings, representative vocal features were extracted, such as tone, speech rate, and lexical diversity. Machine learning algorithms, including Support Vector Machines, Logistic Regression, and Neural Additive Models, were then trained to classify individuals as healthy or diseased based on deviations from these parameters. The models were evaluated on the ADReSS (Alzheimer’s) set and on the Spanish Parkinson’s dataset (PC-GITA), in which speakers produced sustained vowel sounds, to test their ability to detect speech alterations associated with these diseases.
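
     The deviation-based idea can be illustrated as follows: estimate per-feature statistics from healthy speakers, express each new speaker as z-scores against that reference, and classify on those deviations. The data here is synthetic, and the logistic regression stands in for any of the classifiers named above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Reference statistics estimated from healthy speakers only.
healthy_ref = rng.normal(size=(200, 5))       # 5 illustrative features
mu, sigma = healthy_ref.mean(axis=0), healthy_ref.std(axis=0)

def deviations(features):
    # z-scores: how far each feature lies from the healthy norm.
    return (features - mu) / sigma

# Synthetic mixed cohort (0 = healthy, 1 = patient) for illustration.
X = rng.normal(size=(60, 5))
y = rng.integers(0, 2, 60)
clf = LogisticRegression().fit(deviations(X), y)
print(clf.predict(deviations(rng.normal(size=(1, 5)))))
```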

     Finally, Large Language Models (LLMs), including GPT-4-Turbo, Llama-2-13B, Mistral-7B, and Mixtral-8x7B, were tested for their ability to identify Alzheimer’s disease through the analysis of textual transcripts of speech. Based on ADReSS, two approaches were explored: a direct one, asking the models about the speaker’s condition (with or without examples of patient speech), and another based on the prior evaluation of linguistic features such as textual coherence, lexical diversity, sentence length, and lexical retrieval difficulty. The LLMs’ predictions were combined with machine learning models such as Support Vector Machines, Linear Discriminant Analysis, One-Nearest Neighbour, Decision Trees, and Random Forests. Speech rate (syllables per second) and explicit annotations of pauses (short, medium, long) were also considered in the accuracy analysis. The models were evaluated on their ability to distinguish Alzheimer’s patients from healthy individuals, with the LLMs providing a step-by-step explanation, a YES/NO prediction, and a confidence level.
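
     The two prompting strategies might look roughly like the following; the wording is a hypothetical reconstruction, not the prompts used in the study.

```python
# Hypothetical reconstructions of the two prompting strategies.
TRANSCRIPT = "(transcript of the picture-description task goes here)"

# Direct approach: ask for a step-by-step explanation, a YES/NO
# prediction, and a confidence level.
direct_prompt = (
    "You will read a transcript of a picture description.\n"
    "Think step by step about whether the speaker shows signs of "
    "Alzheimer's disease, then answer YES or NO and give a confidence "
    "between 0 and 1.\n\n" + TRANSCRIPT
)

# Feature-based approach: score linguistic features first; the scores
# (plus speech rate and pause annotations) then feed a conventional
# classifier such as a Support Vector Machine.
feature_prompt = (
    "Rate the following transcript from 1 to 5 on each feature: "
    "textual coherence, lexical diversity, sentence length, and "
    "word-finding difficulty. Return one 'feature: score' line each.\n\n"
    + TRANSCRIPT
)
```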

What are the main results? What is the future of this approach?
     The results obtained demonstrated that the AI models developed can predict simplified muscle signals with an accuracy of approximately 75% when trained and tested with data from the same person and recording. In more challenging scenarios, such as generalization to different recording sessions or different individuals, the accuracies obtained were 57% and 46%, respectively. The artificial EMG signals achieved a reasonable match with the real EMG signals, with an average similarity of 66.3%.

     For the detection of obstructive sleep apnea, lip movements were the single modality with the best performance (80%), followed by facial images (77.5%) and voice recordings (67.5%). The late fusion of the three modalities — combining the predictions after the individual analyses — achieved the highest overall accuracy (82.5%), highlighting the benefit of multimodal integration, especially in contexts with noisy or incomplete data.

     Regarding Alzheimer’s disease, the Support Vector Machine models outperformed the other classifiers on the ADReSS and ILSE datasets. In ADReSS, linguistic features were more informative (77.1% accuracy) than acoustic features (66.7%). In ILSE, both feature types performed well, with 86% accuracy for acoustic features and 83.8% for linguistic features. However, when the models trained on German were tested on English, and vice versa, a marked loss of performance emerged, with accuracies dropping to values close to chance, underlining the difficulty of transferring AI models between different languages and recording conditions.

     The analysis of healthy speech established benchmarks for the detection of deviations, allowing the models to identify subtle changes in Parkinson’s and Alzheimer’s patients, such as altered pitch, slower speech, and reduced vocabulary. In the case of Parkinson’s disease, the Neural Additive Model correctly identified patients in 75% of cases during training and in 69% of cases with new data. For Alzheimer’s disease, performance was even better, with an accuracy of 84% in training and 75% in new data. Although the Neural Additive Model was slightly less accurate than Logistic Regression and Support Vector Machines in detecting Parkinson’s, it outperformed both in detecting Alzheimer’s and offered the added advantage of interpretability by showing how each speech feature contributed to its predictions.

     The study also revealed that GPT-4-Turbo was the best-performing LLM, with 77% accuracy in detecting Alzheimer’s disease on data not used during training. Approaches based on the classification of linguistic characteristics — such as textual coherence and lexical diversity — outperformed the strategy of asking the LLMs directly to predict the diagnosis. The inclusion of speech rate slightly improved detection, while pause annotations did not bring significant gains. Among the classifiers, the Support Vector Machines achieved the highest accuracy, at 81.3%.

     Overall, the study demonstrated the potential of speech as a remote and scalable biomarker for multidisease screening. Integrating multimodal data — such as facial images, lip movements, and artificial EMG signals — with interpretable machine learning models strengthens the ability to detect changes related to neurological, respiratory, and psychiatric diseases. The results highlighted the benefits of late fusion in handling noisy real-world data and revealed practical challenges, such as variations in language, context, and recording devices (e.g., differences between recordings made with a home phone versus clinical microphones). These findings underlined the importance of establishing normative parameters of healthy speech to accurately detect subtle deviations indicative of pathology.

     Future work should focus on developing large-scale, diverse datasets to improve the generalization of models across diseases, languages, contexts (including uncontrolled environments), and recording conditions. A priority will be to integrate additional non-invasive biosignals – such as cough – and to adopt multimodal approaches that capture the overlapping effects of comorbidities. Advances in machine learning could also increase interpretability and reliability. Collaboration with clinicians and speech therapists will enable multicenter studies, ensuring relevance and clinical applicability. User-centric mobile prototypes, compliant with the General Data Protection Regulation and medical regulations, will facilitate integration into clinical workflows. The goal is to bring this technology to mobile and telehealth platforms, enabling continuous and passive monitoring for preventive and personalized healthcare while safeguarding ethics and privacy.

