New York: A new tool created by researchers could diagnose a stroke based on abnormalities in a patient's speech and facial muscle movements, with accuracy comparable to that of emergency room physicians, all within minutes of an interaction with a smartphone.

According to a study, the researchers have developed a machine learning model to aid in, and potentially speed up, the diagnostic process carried out by physicians in a clinical setting.

"Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan," said study author James Wang from Penn State University in the US.

"We are trying to simulate or emulate this process by using our machine learning approach," Wang added.

The team's novel approach screened actual emergency room patients with suspected stroke, using computational facial motion analysis and natural language processing to identify abnormalities in a patient's face or voice, such as a drooping cheek or slurred speech.
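The study does not publish its code, but the general idea can be sketched in a few lines: derive a facial-asymmetry measure from face landmarks, derive a speech-irregularity measure from a transcript, and feed both into a classifier. In the illustrative Python sketch below, the landmark indices, the feature definitions, and the choice of logistic regression are all assumptions for illustration, not the researchers' actual pipeline.

    # Illustrative sketch only -- not the Penn State team's actual model.
    # Assumes facial landmarks and a word-level transcript with confidence
    # scores have already been extracted by upstream tools.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def facial_asymmetry(landmarks, left_mouth=61, right_mouth=291):
        """Vertical offset between the mouth corners, a crude proxy for a
        drooping cheek. `landmarks` is an (N, 2) array of (x, y) points;
        the two indices are hypothetical and depend on the landmark model."""
        return abs(landmarks[left_mouth, 1] - landmarks[right_mouth, 1])

    def speech_irregularity(words):
        """Fraction of words a speech recognizer transcribed with low
        confidence, used here as a stand-in for slurred speech."""
        if not words:
            return 1.0
        return sum(1 for w in words if w["confidence"] < 0.6) / len(words)

    def build_features(cases):
        """Turn a list of {landmarks, words} records into a feature matrix."""
        return np.array(
            [[facial_asymmetry(c["landmarks"]), speech_irregularity(c["words"])]
             for c in cases]
        )

    def train(cases, labels):
        # labels: 1 = stroke, 0 = no stroke
        clf = LogisticRegression()
        clf.fit(build_features(cases), labels)
        return clf

    def assess(clf, case):
        # Probability of stroke for a single new patient record.
        return clf.predict_proba(build_features([case]))[0, 1]

In practice the facial and speech features would come from far richer models than these two hand-crafted numbers; the sketch only shows how the two signal streams can be combined into a single stroke-risk score.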

To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas.

Each patient was asked to perform a speech test assessing their speech and cognitive communication while being recorded on an Apple iPhone.

"The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment," said Huang.

Testing the model on the Houston Methodist dataset, the researchers found that it achieved 79 per cent accuracy, comparable to clinical diagnosis by emergency room doctors, who use additional tests such as CT scans.

However, the model could help save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.