Publications


An exploratory characterization of speech- and fine-motor coordination in verbal children with Autism spectrum disorder

Summary

Autism spectrum disorder (ASD) is a neurodevelopmental disorder often associated with difficulties in speech production and fine-motor tasks. Thus, there is a need to develop objective measures to assess and understand speech production and other fine-motor challenges in individuals with ASD. In addition, recent research suggests that difficulties with speech production and fine-motor tasks may contribute to language difficulties in ASD. In this paper, we explore the utility of an off-body recording platform, with which we administer a speech- and fine-motor protocol to verbal children with ASD and neurotypical controls. We utilize a correlation-based analysis technique to develop proxy measures of motor coordination from signals derived from recordings of speech- and fine-motor behaviors. Eigenvalues of the resulting correlation matrix are inputs to Gaussian Mixture Models to discriminate between highly verbal children with ASD and neurotypical controls. These eigenvalues also characterize the complexity (underlying dimensionality) of representative signals of speech- and fine-motor movement dynamics, and form the feature basis for estimating scores on an expressive vocabulary measure. Based on a pilot dataset (15 ASD and 15 controls), features derived from an oral story reading task discriminate between the two groups with AUCs > 0.80 and highlight lower complexity of coordination in children with ASD. Features derived from handwriting and maze tracing tasks lead to AUCs of 0.86 and 0.91; however, features derived from ocular tasks do not aid in discrimination between the ASD and neurotypical groups. In addition, features derived from free speech and sustained vowel tasks are strongly correlated with expressive vocabulary scores. These results indicate the promise of a correlation-based analysis in elucidating motor differences between individuals with ASD and neurotypical controls.
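To make the correlation-based analysis above concrete, the following is a minimal sketch in Python with NumPy and scikit-learn of the general idea: stack time-delayed copies of the recorded speech- and fine-motor channels, take the eigenvalue spectrum of their correlation matrix as a coordination/complexity feature, and score subjects with per-group Gaussian Mixture Models. The delay set, the number of mixture components, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def correlation_eigenspectrum(signals, delays=range(0, 50, 5)):
    """Proxy features for motor coordination (illustrative sketch).

    signals : (n_channels, n_samples) array of speech- or fine-motor time series.
    Stacks time-delayed copies of each channel, forms their correlation matrix,
    and returns the sorted, normalized eigenvalue spectrum; how quickly the
    spectrum decays is one way to read coordination "complexity".
    """
    shifted = np.vstack([np.roll(sig, d) for sig in signals for d in delays])
    eigvals = np.linalg.eigvalsh(np.corrcoef(shifted))[::-1]   # largest first
    return eigvals / eigvals.sum()

def gmm_scores(train_asd, train_nt, test, n_components=2, seed=0):
    """Fit one GMM per group on eigenvalue features and return log-likelihood
    ratios for held-out subjects (positive favors the ASD model); these scores
    can then be swept to produce ROC curves and AUCs."""
    gmm_asd = GaussianMixture(n_components, covariance_type="diag",
                              random_state=seed).fit(train_asd)
    gmm_nt = GaussianMixture(n_components, covariance_type="diag",
                             random_state=seed).fit(train_nt)
    return gmm_asd.score_samples(test) - gmm_nt.score_samples(test)
```

With only 15 subjects per group in the pilot dataset, diagonal covariances and a small number of components are used here simply to keep the toy models fittable; the paper's actual modeling choices may differ.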

Quantifying speech production coordination from non- and minimally-speaking individuals

Published in:
J. Autism Dev. Disord., 13 April 2024.

Summary

Purpose: Non-verbal utterances are an important communication tool for individuals who are non- or minimally-speaking. While these utterances are typically understood by caregivers, they can be challenging for the larger community to interpret. To date, little work has been done to detect and characterize the vocalizations produced by non- or minimally-speaking individuals. This paper aims to characterize five categories of utterances across a set of 7 non- or minimally-speaking individuals. Methods: The characterization is accomplished using a correlation structure methodology, acting as a proxy measurement for motor coordination, to localize similarities and differences to specific speech production systems. Results: We specifically find that frustrated and dysregulated utterances show similar correlation structure outputs, especially when compared to self-talk, request, and delighted utterances. We additionally observe higher complexity of coordination between the articulatory and respiratory subsystems, and lower complexity of coordination between the laryngeal and respiratory subsystems, in frustration and dysregulation as compared to self-talk, request, and delight. Finally, we observe lower complexity of coordination across all three speech subsystems in request utterances as compared to self-talk and delight. Conclusion: The insights from this work aid in understanding the modifications made by non- or minimally-speaking individuals to accomplish specific goals in non-verbal communication.
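As a rough illustration of how the complexity of coordination between two speech subsystems might be quantified from such correlation structures, the sketch below computes a normalized eigenvalue entropy over the joint, time-delay-embedded correlation matrix of two groups of subsystem signals; higher entropy means variance is spread over more dimensions, i.e., higher complexity. The grouping of channels into subsystems, the delay set, and the entropy measure are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def coordination_complexity(subsys_a, subsys_b, delays=range(0, 40, 4)):
    """Illustrative complexity score for coordination between two subsystems.

    subsys_a, subsys_b : (channels, samples) arrays, e.g. articulatory vs.
    respiratory time series. Returns a normalized eigenvalue entropy in [0, 1],
    where values near 1 indicate a flat spectrum (high-dimensional, complex
    coordination) and values near 0 indicate a few dominant dimensions.
    """
    stacked = np.vstack([np.roll(sig, d)
                         for sig in np.vstack([subsys_a, subsys_b])
                         for d in delays])
    eigvals = np.linalg.eigvalsh(np.corrcoef(stacked))
    p = np.clip(eigvals, 1e-12, None)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))
```

Comparing a score of this kind across utterance categories (e.g., frustration vs. self-talk) for the articulatory-respiratory and laryngeal-respiratory pairings is the type of contrast the results above describe.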

Dissociating COVID-19 from other respiratory infections based on acoustic, motor coordination, and phonemic patterns

Published in:
Sci. Rep., Vol. 13, No. 1, January 2023, 1567.

Summary

In the face of the global pandemic caused by COVID-19, researchers have increasingly turned to simple measures to detect and monitor the presence of the disease in individuals at home. We sought to determine whether measures of neuromotor coordination derived from acoustic time series, as well as phoneme-based and standard acoustic features extracted from recordings of simple speech tasks, could aid in detecting the presence of COVID-19. We further hypothesized that these features would aid in characterizing the effect of COVID-19 on speech production systems. A protocol consisting of a variety of speech tasks was administered to 12 individuals with COVID-19 and 15 individuals with other viral infections at University Hospital Galway. From these recordings, we extracted a set of acoustic time series representative of speech production subsystems, as well as their univariate statistics. The time series were further utilized to derive correlation-based features, a proxy for speech production motor coordination. We additionally extracted phoneme-based features. These features were used to create machine learning models to distinguish between the COVID-19 positive and other viral infection groups, with respiratory- and laryngeal-based features resulting in the highest performance. Coordination-based features derived from harmonic-to-noise ratio time series from read speech discriminated between the two groups with an area under the ROC curve (AUC) of 0.94. A longitudinal case study of two subjects, one from each group, revealed differences in laryngeal-based acoustic features, consistent with observed physiological differences between the two groups. The results from this analysis highlight the promise of using nonintrusive sensing through simple speech recordings for early warning and tracking of COVID-19.
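For readers who want to reproduce the style of evaluation reported here (discriminating the two groups by area under the ROC curve computed from per-recording coordination features), a minimal scikit-learn sketch follows. It assumes the harmonic-to-noise-ratio-derived feature vectors have already been computed; the logistic-regression classifier and leave-one-out cross-validation are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_auc(features, labels):
    """Leave-one-out AUC for a small cohort (here, 12 vs. 15 subjects).

    features : (n_subjects, n_features) coordination features, e.g. eigenvalues
               of a correlation matrix built from HNR time series.
    labels   : 1 for COVID-19 positive, 0 for other viral infection.
    """
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    probs = cross_val_predict(model, features, labels,
                              cv=LeaveOneOut(), method="predict_proba")[:, 1]
    return roc_auc_score(labels, probs)
```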

An emotion-driven vocal biomarker-based PTSD screening tool

Summary

This paper introduces an automated post-traumatic stress disorder (PTSD) screening tool that could potentially be used as a self-assessment or inserted into routine medical visits for PTSD diagnosis and treatment. Methods: With an emotion estimation algorithm providing arousal (excited to calm) and valence (pleasure to displeasure) levels through discourse, we select the regions of the acoustic signal that are most salient for PTSD detection. Our algorithm was tested on a subset of data from the DVBIC-TBICoE TBI Study, which contains PTSD Checklist - Civilian Version (PCL-C) assessment scores. Results: Speech from low-arousal, positive-valence regions provides the best discrimination for PTSD. Our model achieved an AUC (area under the curve) of 0.80 in detecting PCL-C ratings, outperforming models with no emotion filtering (AUC = 0.68). Conclusions: This result suggests that emotion drives the selection of the most salient temporal regions of an audio recording for PTSD detection.
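The key idea, letting an emotion estimate gate which portions of the recording feed the PTSD detector, can be sketched as follows. The arousal/valence trajectories are assumed to come from an external emotion estimation model (not shown), and the thresholds and pooling are placeholders rather than the paper's tuned values.

```python
import numpy as np

def select_salient_frames(arousal, valence, arousal_max=0.0, valence_min=0.0):
    """Keep frames in the low-arousal, positive-valence region (illustrative).

    arousal, valence : frame-level trajectories from an emotion estimator,
    assumed here to be centered so that 0 separates low/high arousal and
    negative/positive valence. Returns a boolean mask over frames.
    """
    return (arousal <= arousal_max) & (valence >= valence_min)

def pooled_features(frame_features, mask):
    """Mean-pool frame-level acoustic features over the selected frames only;
    the pooled vector would then be scored by a downstream PTSD classifier."""
    if not mask.any():               # fall back to all frames if none qualify
        mask = np.ones_like(mask, dtype=bool)
    return frame_features[mask].mean(axis=0)
```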

Artificial intelligence for detecting COVID-19 with the aid of human cough, breathing and speech signals: scoping review

Summary

Background: Official tests for COVID-19 are time consuming, costly, can produce high false-negative rates, use up vital chemicals, and may violate social distancing laws. Therefore, a fast and reliable additional solution using recordings of cough, breathing, and speech data for preliminary screening may help alleviate these issues. Objective: This scoping review explores how artificial intelligence (AI) technology aims to detect COVID-19 by using cough, breathing, and speech recordings, as reported in the literature. Here, we describe and summarize attributes of the identified AI techniques and the datasets used for their implementation. Methods: A scoping review was conducted following the guidelines of PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews). Electronic databases (Google Scholar, Science Direct, and IEEE Xplore) were searched between 1 April 2020 and 15 August 2021. Terms were selected based on the target intervention (i.e., AI), the target disease (i.e., COVID-19), and acoustic correlates of the disease (i.e., speech, breathing, and cough). A narrative approach was used to summarize the extracted data. Results: 24 studies and 8 apps out of the 86 retrieved studies met the inclusion criteria. Half of the publications and apps were from the USA. The most prominent AI architecture used was the convolutional neural network, followed by the recurrent neural network. AI models were mainly trained, tested, and run on websites and personal computers rather than on phone apps. More than half of the included studies reported area-under-the-curve performance greater than 0.90 on symptomatic and negative datasets, while one study achieved 100% sensitivity in predicting asymptomatic COVID-19 from cough-, breathing-, or speech-based acoustic features. Conclusions: The included studies show that AI has the potential to help detect COVID-19 using cough, breathing, and speech samples. However, the proposed methods will require further development and appropriate clinical testing before they can be considered effective in detecting diseases related to respiratory and neurophysiological changes in the human body.

Human balance models optimized using a large-scale, parallel architecture with applications to mild traumatic brain injury

Published in:
2020 IEEE High Performance Extreme Computing Conf., HPEC, 22-24 September 2020.

Summary

Static and dynamic balance are frequently disrupted by brain injuries. The impairment can be complex, and for mild traumatic brain injury (mTBI) it can be undetectable by standard clinical tests. Therefore, neurologically relevant modeling approaches are needed for detection and for inference of mechanisms of injury. The current work presents models of static and dynamic balance that have a high degree of correspondence; emphasizing structural similarity between the two domains facilitates development of both. Furthermore, particular attention is paid to components of sensory feedback and sensory integration to ground mechanisms in neurobiology. The models are adapted to fit experimentally collected data from 10 healthy control volunteers and 11 volunteers with mild traumatic brain injury. Through an analysis-by-synthesis approach, whose implementation was made possible by a state-of-the-art high performance computing system, we derived an interpretable, model-based feature set that could classify mTBI and control subjects in a static balance task with an ROC AUC of 0.72.
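The abstract describes neurologically grounded balance models tuned to data by analysis-by-synthesis on a high performance computing system. As a loose, hypothetical stand-in for that workflow, the sketch below fits a toy single-link inverted-pendulum sway model with delayed proportional-derivative feedback by brute-force search over a small parameter grid; the model structure, parameter ranges, and fitting criterion are simplifications for illustration, not the models used in the paper.

```python
import numpy as np
from itertools import product

G, LENGTH, MASS = 9.81, 1.0, 70.0        # toy anthropometrics (assumed)
INERTIA = MASS * LENGTH ** 2             # point-mass approximation

def simulate_sway(kp, kd, delay_s, duration=30.0, dt=0.01, noise=0.5, seed=0):
    """Euler simulation of a single-link inverted pendulum stabilized by
    delayed proportional-derivative ankle torque; returns the sway angle."""
    rng = np.random.default_rng(seed)
    n, lag = int(duration / dt), int(delay_s / dt)
    theta, omega = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        d = max(t - 1 - lag, 0)          # delayed sensory feedback sample
        torque = -kp * theta[d] - kd * omega[d] + noise * rng.standard_normal()
        domega = (MASS * G * LENGTH * np.sin(theta[t - 1]) + torque) / INERTIA
        omega[t] = omega[t - 1] + dt * domega
        theta[t] = theta[t - 1] + dt * omega[t]
    return theta

def fit_sway_model(measured_theta, dt=0.01):
    """Brute-force analysis-by-synthesis: keep the (kp, kd, delay) whose
    simulated sway variance and mean sway speed best match the measured
    trajectory. The fitted parameters then act as interpretable, model-based
    features for downstream classification."""
    summarize = lambda x: np.array([x.var(), np.abs(np.diff(x)).mean() / dt])
    target, best, best_err = summarize(measured_theta), None, np.inf
    grid = product(np.linspace(700, 1200, 6),   # kp, N*m/rad
                   np.linspace(100, 400, 4),    # kd, N*m*s/rad
                   np.linspace(0.08, 0.20, 4))  # feedback delay, s
    for kp, kd, delay in grid:
        sim = simulate_sway(kp, kd, delay,
                            duration=len(measured_theta) * dt, dt=dt)
        err = np.linalg.norm(summarize(sim) - target)
        if err < best_err:
            best, best_err = (kp, kd, delay), err
    return best
```

Each grid point is independent, which is what makes this style of fitting embarrassingly parallel and a natural match for the large-scale, parallel architecture named in the title.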

Sensorimotor conflict tests in an immersive virtual environment reveal subclinical impairments in mild traumatic brain injury

Summary

Current clinical tests lack the sensitivity needed to detect the subtle balance impairments associated with mild traumatic brain injury (mTBI). Patient-reported symptoms can be significant and have a substantial impact on daily life, yet impairments may remain undetected or poorly quantified by clinical measures. Our central hypothesis was that provocative sensorimotor perturbations, delivered in a highly instrumented, immersive virtual environment, would challenge the sensory subsystems recruited for balance with conflicting multi-sensory evidence and thereby reveal subsystems that are not performing optimally. The results show that, compared to standard clinical tests, the provocative perturbations illuminate balance impairments in subjects who have had a mild traumatic brain injury. Perturbations delivered while subjects were walking provided greater discriminability (average accuracy ≈ 0.90) between mTBI subjects and healthy controls than perturbations delivered while subjects were standing (average accuracy ≈ 0.65). Of the categories of features extracted to characterize balance, the lower-limb accelerometry-based metrics proved most informative. Further, in response to perturbations, subjects with an mTBI relied on hip strategies more than ankle strategies to prevent loss of balance and also showed less variability in gait patterns. We have shown that sensorimotor conflicts illuminate otherwise-hidden balance impairments, which can be used to increase the sensitivity of current clinical procedures. This augmentation is vital for robustly detecting the presence of balance impairments after mTBI and potentially defining a phenotype of balance dysfunction that increases the risk of injury.
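As a concrete, if simplified, picture of the accelerometry-based feature category highlighted above, the sketch below derives a few summary metrics from a single lower-limb accelerometer trace; the specific metrics, the crude step detector, and the sampling rate are assumptions for illustration rather than the study's actual feature set.

```python
import numpy as np

def lower_limb_features(accel, fs=100.0):
    """Illustrative balance metrics from one (n_samples, 3) lower-limb
    accelerometer trace recorded during a perturbation trial.

    Returns RMS acceleration magnitude, mean jerk, and step-interval
    variability (coefficient of variation).
    """
    mag = np.linalg.norm(accel, axis=1)
    mag = mag - mag.mean()
    rms = np.sqrt(np.mean(mag ** 2))
    jerk = np.abs(np.diff(mag)).mean() * fs
    # Crude step detection: local maxima exceeding one standard deviation.
    peaks = np.flatnonzero((mag[1:-1] > mag[:-2]) &
                           (mag[1:-1] > mag[2:]) &
                           (mag[1:-1] > mag.std())) + 1
    intervals = np.diff(peaks) / fs
    step_cv = intervals.std() / intervals.mean() if intervals.size > 1 else np.nan
    return np.array([rms, jerk, step_cv])
```

Feature vectors of this kind, computed per perturbation trial, could then feed the same style of classifier and accuracy comparison (walking vs. standing perturbations) described in the summary.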
