Signal processing algorithms for telehealth signal validation and interpretation

Copyright: Xie, Yang
Abstract
Telecare is rapidly becoming a preferred solution to the challenge of remotely monitoring subjects suffering from chronic disease. In home telecare, physiological measurements are often routinely taken in the home environment and returned via a communications link to a central database, where they can be accessed and analysed by the treating physician. However, this frequent upload of data increases the workload of the monitoring clinician, contributing to information overload. Thus, a more automated approach is required to assist the clinician in efficiently interpreting these telehealth data. The concept of an intelligent software system, often termed a decision support system (DSS), is envisaged to meet these needs through the combined use of automated subcomponents such as trend detection, threshold analysis, deterioration identification and alert generation. With a carefully constructed DSS, much of this workload can be removed from clinicians. In telecare, physiological signals, such as the electrocardiogram (ECG) and pulse oximetry plethysmogram, are often recorded by the subject in their own home. Many signal processing approaches have been used to extract features from these physiological signals under the assumption that the data being interpreted are entirely trustworthy. This is normally the case when physiological signals are recorded in supervised environments, but it is rarely so in an unsupervised recording environment. For example, in most remotely acquired pulse oximetry, relative motion between the fingertip and the probe is a common source of error, while in automated remote blood pressure measurement, unwanted interference, such as movement and background noise, may frequently corrupt the audio recording.
Since the features extracted from the physiological signals, such as the heart rate (HR) extracted from the ECG and the systolic pressure derived from the blood pressure measurement, are ultimately destined for the DSS, it is crucial to ensure that the quality of the signal recording is within a tolerance that does not undermine the DSS outcome. In this thesis, one such physiological signal of interest is the ECG, along with a common ECG derivative: the heart rate. Reliable heart rate estimation ideally requires an extended, uninterrupted epoch of ECG. During unsupervised acquisition of ECG, however, line noise, movement artifact and muscle tremor are extremely common and are very likely to corrupt the recording. The first part of this thesis proposes an approach to determine the quality of single-lead ECG recordings obtained from telehealth patients. This method comprises three algorithms: one to identify gross movement artifact; a second to detect QRS complexes; and a third to estimate the ECG signal quality using a supervised statistical classifier model. The quality classifier model uses a number of time-domain and frequency-domain features extracted from the ECG waveform to classify the signal into one of three quality classes: 'Good', 'Average' or 'Bad'. The quality classifier achieves a classification accuracy of 79.7% when using automated annotations (the artifact sections and QRS complexes returned by the first two algorithms). The second part of this thesis examines how detrimental an effect poor signal quality has on DSS subsystems. One such subsystem process is trend detection. While simple threshold-based alert techniques provide some utility in notifying clinicians of extreme out-of-range parameter values, more incipient changes in a subject's condition may be recognised sooner by identifying trends in the longitudinal parameter data.
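The thesis's three-algorithm pipeline (artifact detection, QRS detection, quality classification) is specific to this work, but the core step of deriving heart rate from detected QRS complexes can be illustrated with a minimal sketch. The following is a hypothetical, heavily simplified threshold-based R-peak detector applied to a synthetic spike train, not the thesis's actual algorithms:

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_frac=0.5, refractory_s=0.25):
    """Very simplified R-peak detector: square the first difference to
    emphasise the steep QRS slopes, then accept local maxima above a
    fraction of the global maximum, enforcing a refractory period."""
    energy = np.diff(ecg) ** 2
    thresh = threshold_frac * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        if (energy[i] > thresh
                and energy[i] >= energy[i - 1]
                and energy[i] >= energy[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)

def heart_rate_bpm(peaks, fs):
    rr = np.diff(peaks) / fs           # R-R intervals in seconds
    return 60.0 / rr.mean()

# Synthetic 'ECG': 10 s of narrow spikes at ~72 BPM on a noisy baseline.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
ecg = 0.05 * rng.standard_normal(t.size)
beat_period = int(fs * 60 / 72)
ecg[::beat_period] += 1.0              # simulated R peaks

peaks = detect_r_peaks(ecg, fs)
print(f"{heart_rate_bpm(peaks, fs):.1f} BPM")  # close to the simulated 72 BPM
```

A detector this naive fails precisely in the conditions the thesis targets (movement artifact, muscle tremor), which is why a separate signal quality classifier is needed before trusting the derived heart rate.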
The first half of this study combines previous work on artifact detection in ECG signals with piecewise-linear trend detection in longitudinal heart rate records, to investigate the influence of applying artifact detection before trend detection. The results show that applying artifact detection significantly improves the trend fit, compared to the case without artifact detection, reducing the mean RMSE of the heart rate trend fit from 2.90 BPM to 1.16 BPM. The second half of this study incorporates the ECG quality score, derived using the signal quality measures developed in the first part, into the trend detection to make the trend fitting more robust.
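Piecewise-linear trend fitting and the RMSE figure of merit quoted above can be sketched as follows. This is a hypothetical single-breakpoint fit chosen by exhaustive search over a synthetic heart rate record; the thesis's actual trend-detection method may differ:

```python
import numpy as np

def fit_two_segment_trend(t, y, min_pts=3):
    """Fit a two-segment piecewise-linear trend by trying every
    admissible breakpoint and keeping the split whose combined
    least-squares fit has the lowest RMSE."""
    best = None
    for b in range(min_pts, len(t) - min_pts):
        fit = np.concatenate([
            np.polyval(np.polyfit(t[:b], y[:b], 1), t[:b]),
            np.polyval(np.polyfit(t[b:], y[b:], 1), t[b:]),
        ])
        rmse = np.sqrt(np.mean((y - fit) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, b, fit)
    return best  # (rmse, breakpoint index, fitted values)

# Synthetic daily heart-rate record: stable baseline, then an upward drift
# starting at day 15 -- the kind of incipient change a threshold alert misses.
rng = np.random.default_rng(1)
t = np.arange(30.0)
y = np.where(t < 15, 70.0, 70.0 + 1.5 * (t - 15)) + rng.normal(0, 1.0, t.size)

rmse, bp, fit = fit_two_segment_trend(t, y)
print(f"breakpoint at day {t[bp]:.0f}, RMSE {rmse:.2f} BPM")
```

The RMSE of such a fit is what artifact-contaminated heart rate values inflate: a single corrupted reading pulls the fitted segments away from the true trend, which is why removing artifact beforehand (2.90 BPM down to 1.16 BPM in the study) matters.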
Author(s)
Xie, Yang
Supervisor(s)
Lovell, Nigel
Redmond, Stephen
Publication Year
2011
Resource Type
Thesis
Degree Type
Masters Thesis
UNSW Faculty