Project 1

Compression Algorithms for Biomedical Signals

 

Staff:

  • Garry Higgins

Multichannel electroencephalography (EEG) is a tool for measuring the electrical activity of the brain, and its use in diagnosing neurological conditions such as epilepsy [1] and Alzheimer’s disease [2] has long been established. However, diagnosis often requires prolonged EEG recording and monitoring, with patients spending long periods in medical facilities, tying up resources and clinicians.

Due to the nature of EEG, even short capture periods generate large amounts of data, which poses a problem for transmitting and storing these recordings. Minimising the size of this data therefore provides a measurable advantage. Compared with other methods of measuring electrical activity in the human body for physiological diagnosis, such as the electrocardiogram (ECG), relatively little research has been carried out on the compression of EEG signals.

My work currently examines near-lossless and lossy compression methods based on the wavelet transform. The method under investigation is an adaptation of the JPEG2000 Part 1 image compression algorithm, whose core combines modern techniques such as the Discrete Wavelet Transform and Adaptive Binary Arithmetic Coding. A thresholding step has been added to provide a trade-off between achieving a higher compression ratio and maintaining signal integrity. Figure 1 provides an overview of the compression algorithm.

Figure 1: Core components of compression algorithm
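As a minimal sketch of the thresholding idea, the toy example below applies a one-level Haar discrete wavelet transform and zeroes small detail coefficients. This is an illustration of the principle only, not the actual JPEG2000-based implementation; the signal, threshold fraction, and single decomposition level are all illustrative assumptions.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (orthonormal form)."""
    evens, odds = signal[0::2], signal[1::2]
    approx = (evens + odds) / np.sqrt(2)   # low-pass (approximation) coefficients
    detail = (evens - odds) / np.sqrt(2)   # high-pass (detail) coefficients
    return approx, detail

def threshold(coeffs, frac):
    """Zero out coefficients smaller than a fraction of the largest magnitude."""
    cutoff = frac * np.abs(coeffs).max()
    return np.where(np.abs(coeffs) < cutoff, 0.0, coeffs)

# Toy EEG-like segment: a slow oscillation plus a little noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * rng.standard_normal(256)

approx, detail = haar_dwt(x)
detail_t = threshold(detail, frac=0.2)     # many detail coefficients become zero
sparsity = np.mean(detail_t == 0)
print(f"fraction of detail coefficients zeroed: {sparsity:.2f}")
```

The zeroed coefficients are what the subsequent entropy-coding stage can exploit: long runs of zeros compress far better than raw samples, at the cost of some reconstruction error.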

 

The Freiburg database [3] is being used as the source of EEG data due to its abundance of both seizure and non-seizure recordings. To evaluate the algorithm we need to measure both how well the EEG signals can be compressed and how much the decompressed signal differs from the original. Two performance measures are typically used to aid evaluation:

  • Compression Ratio (CR)
  • Percentage RMS Difference (PRD)

Compression ratio is a measure of the reduction in size of the data compared to the original signal and is given by:

CR = (S × R) / b

where S is the length of the original signal segment (number of samples), R is its bit resolution, and b is the bit length of the compressed signal.
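A quick worked example of the CR formula, using hypothetical numbers (the segment length, sample rate, bit depth, and compressed size below are illustrative, not measured values from the project):

```python
# Hypothetical figures: a 10-second segment at 256 Hz with 16-bit samples,
# compressed down to 8192 bits.
S = 10 * 256        # number of samples in the original segment
R = 16              # bit resolution of each sample
b = 8192            # bit length of the compressed segment

CR = (S * R) / b
print(CR)           # 5.0, i.e. a 5:1 reduction in size
```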

Percentage root-mean square difference (PRD) is often used to evaluate the integrity of a reconstructed signal [4] and is given by the formula:

PRD = (‖z − y‖ / ‖z‖) × 100

where z and y represent the original and reconstructed signals respectively and ‖·‖ denotes the Euclidean norm.
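The PRD formula translates directly into code; the short vectors below are a made-up example to show the calculation, not project data:

```python
import numpy as np

def prd(z, y):
    """Percentage RMS difference between original z and reconstruction y."""
    return 100.0 * np.linalg.norm(z - y) / np.linalg.norm(z)

z = np.array([1.0, 2.0, 3.0, 4.0])   # original signal
y = np.array([1.0, 2.0, 3.0, 4.4])   # reconstruction with one distorted sample
print(f"PRD = {prd(z, y):.2f}%")
```

A PRD of zero indicates perfect (lossless) reconstruction; larger values indicate greater distortion relative to the energy of the original signal.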

 

Figure 2: Sample plots of PRD vs CR for a number of signals

 

The algorithm parameters can be varied to provide an optimal trade-off between CR and PRD, depending on the application and clinical need. Figure 2 illustrates measured values of PRD and CR for different algorithm settings.
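The shape of this trade-off can be demonstrated with a toy experiment: retain progressively fewer of the largest-magnitude detail coefficients of a one-level Haar decomposition and watch the PRD grow as fewer coefficients are stored. The random-walk test signal and the single-level transform are stand-in assumptions; the real algorithm's parameters and behaviour are those shown in Figure 2.

```python
import numpy as np

def haar(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    """Inverse of the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def prd(z, y):
    return 100.0 * np.linalg.norm(z - y) / np.linalg.norm(z)

rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(512))   # random-walk stand-in for an EEG trace

a, d = haar(z)
prds = []
for keep in (1.0, 0.5, 0.25):             # fraction of detail coefficients retained
    k = int(keep * len(d))
    d_t = np.zeros_like(d)
    idx = np.argsort(np.abs(d))[-k:]      # keep the k largest-magnitude details
    d_t[idx] = d[idx]
    y = ihaar(a, d_t)
    prds.append(prd(z, y))
    print(f"stored {len(a) + k:4d}/{len(z)} coefficients, PRD = {prds[-1]:.3f}%")
```

Retaining all coefficients reconstructs the signal exactly (PRD ≈ 0), and the PRD rises as more coefficients are discarded, which is the same trade-off the algorithm parameters control.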

Evaluating the compression algorithms in terms of complexity is also a key component, as increasing CR can lead to a more complex implementation, with extra power consumption and processing time required. As the algorithm is to be implemented on a portable device, the need for efficient, low-power compression is paramount to its design.

References

[1]    D. Hill, “Value of the EEG in Diagnosis of Epilepsy,” British Medical Journal, Mar. 1958.

[2]    R. Polikar, F. Keinert, and M.H. Greer, “Wavelet analysis of event related potentials for early diagnosis of Alzheimer’s disease,” Wavelets in Signals and Image Analysis. From Theory to Practice, pp. 453–478.

[3]    “EEG Database — Seizure Prediction in Freiburg, Germany.” https://epilepsy.uni-freiburg.de/freiburg-seizure-prediction-project/eeg-database

[4]    M. Blanco-Velasco, F. Cruz-Roldán, E. Moreno-Martínez, J. Godino-Llorente, and K.E. Barner, “Embedded filter bank-based algorithm for ECG compression,” Signal Processing,  vol. 88, Jun. 2008, pp. 1402-1412.
