Project 1

Compression and Compressive Sampling for Biomedical Applications


  • Brian McGinley

The aim of this research is to develop efficient sampling and compression techniques for the acquisition and storage of biomedical signals.

Background and Motivation:

Ambulatory monitoring of patients on a continuous basis generates a large amount of data. For example, multichannel ECG or EEG systems sample at over 100 Hz and employ up to 16-bit quantization. The trend towards ambulatory monitoring, in place of data collection in a clinical environment, means that trials now last many hours or even days. Such trials produce a huge amount of data that must be stored or transmitted. This work sets out to compress the data, in order to reduce the storage and transmission required while still allowing the information to be recovered at a later stage. Two approaches to this compression goal, transform-based compression and Compressive Sampling (CS), are investigated in this research.

1. Transform-based compression

Transform-based compression involves conventional sampling, followed by the application of a Fourier or wavelet transformation to the sampled data. Depending on the method employed, this new (transformed) representation of the data is more efficiently expressed than in its original form. A classic example of this technique is a sine wave in the time domain, which the Fourier transform represents with a single entry in the frequency domain. This research investigates the wavelet transform for efficient representation of biomedical signals. Specifically, the means by which the output coefficients of the wavelet transform are encoded for transmission is of primary interest. The SPIHT (Set Partitioning In Hierarchical Trees) [1] and EZW (Embedded Zerotree Wavelet) [2] encoders are recognised as state-of-the-art wavelet coefficient encoders and have been implemented as part of this research. Other mechanisms suitable for low-power implementation of wavelet encoding are currently under investigation. Figure 1 illustrates the process of acquiring, digitising, transforming, encoding and transmitting the data on an ambulatory device, with the decoding, signal reconstruction and subsequent analysis and classification being performed on a server computer.
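As a rough illustration of the idea (not the SPIHT/EZW encoders themselves), the Python/NumPy sketch below applies one level of the Haar wavelet transform to a test signal and zeroes the small detail coefficients; the signal is then recovered almost exactly from far fewer significant coefficients:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform (orthonormal form).
    Returns approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: perfectly reconstructs the input."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# A piecewise-smooth test signal: most detail coefficients are near
# zero, so the wavelet representation is sparse.
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 4 * t) + 0.5 * (t > 0.5)

a, d = haar_dwt(signal)
# Discard small detail coefficients (naive thresholding, not SPIHT/EZW).
d_compressed = np.where(np.abs(d) > 0.05, d, 0.0)
recon = haar_idwt(a, d_compressed)

print("nonzero details kept:", np.count_nonzero(d_compressed), "of", len(d))
print("max reconstruction error:", np.max(np.abs(recon - signal)))
```

SPIHT and EZW go much further than this thresholding sketch, exploiting the tree structure of coefficients across decomposition levels to encode significance information efficiently.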


Figure 1: Role of Wavelet Compression and Embedded coding in Ambulatory Clinical Monitoring

2. Compressive Sampling

The second element of this work is the investigation of Compressive Sampling (CS) [3]. CS is an emerging technique that simultaneously samples and compresses signals. This methodology is related to conventional sampling and transformation, in that information is eventually sparsely expressed. A key difference is that with compressive sampling, the information is directly acquired in a sparse basis. This differs from the traditional Nyquist/Shannon approach, which performs complete sampling prior to compression.

2.1 Traditional Sampling and Compression

The Shannon/Nyquist theorem states that the Nyquist rate (the minimum sampling rate required to avoid aliasing) is equal to twice the highest frequency contained within the signal [4]. An anti-aliasing filter is often employed prior to sampling, to remove spurious high-frequency components from the signal. Once the signal is quantized and stored, transform-based (Fourier/DCT/wavelet) compression is performed on the N collected data points. From the transform’s output, only K relevant coefficients are typically retained. This approach is wasteful when N >> K: Shannon/Nyquist sampling may be considered inefficient, as much of the information acquired is discarded immediately after transformation.
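This sample-everything-then-discard pattern can be made concrete with a short sketch (Python/NumPy, illustrative values only): all N samples are acquired and transformed, yet only K coefficients survive:

```python
import numpy as np

# Nyquist-style acquisition followed by transform compression:
# sample N points, transform, keep only the K largest coefficients.
N, K = 512, 10
t = np.arange(N) / N
# A signal that is sparse in the frequency domain (three tones).
x = np.sin(2*np.pi*5*t) + 0.4*np.sin(2*np.pi*12*t) + 0.2*np.sin(2*np.pi*40*t)

X = np.fft.rfft(x)                       # transform all N samples
idx = np.argsort(np.abs(X))[::-1][:K]    # indices of the K largest coefficients
X_k = np.zeros_like(X)
X_k[idx] = X[idx]                        # discard the remaining coefficients
x_hat = np.fft.irfft(X_k, n=N)

err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"kept {K} of {len(X)} coefficients, relative error {err:.2e}")
```

Here all 512 samples must be acquired, stored and transformed even though a handful of coefficients carry essentially all of the information.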

2.2 Compressive Sampling  

CS relies on two principal concepts, sparsity and incoherence, to recover signals from far fewer samples than traditional Nyquist systems require [5]. Sparsity means that the information contained within a signal may be far smaller than its bandwidth suggests; this is often the case when the signal is expressed in a suitable basis, and many real-world signals have a sparse representation in some particular basis. With CS, the degree of a signal’s sparsity determines the number of measurements needed to perfectly reconstruct it. Incoherence requires that signals which are sparse in the representation basis be spread out in the domain in which they are acquired; for example, a Dirac impulse in time is spread out across the frequency domain.
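Candes and Wakin [5] quantify this with the coherence mu(Phi, Psi) = sqrt(N) * max |&lt;phi_k, psi_j&gt;| between the sensing and representation bases, which ranges from 1 (maximally incoherent) to sqrt(N). A small Python/NumPy sketch for the classic spike/Fourier pair:

```python
import numpy as np

# Coherence mu(Phi, Psi) = sqrt(N) * max_{k,j} |<phi_k, psi_j>| between two
# orthonormal bases; it lies in [1, sqrt(N)], with 1 meaning maximal incoherence.
N = 64
spikes = np.eye(N)                            # canonical (Dirac/spike) basis
fourier = np.fft.fft(np.eye(N)) / np.sqrt(N)  # orthonormal DFT basis

# Inner products of every spike with every Fourier column.
gram = spikes.conj().T @ fourier
mu = np.sqrt(N) * np.max(np.abs(gram))
print("spike/Fourier coherence:", mu)         # ~1: maximally incoherent pair
```

Every DFT entry has magnitude 1/sqrt(N), so a spike in time is spread perfectly evenly across frequency, which is exactly the incoherence property CS exploits.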

A useful property exploited by CS is that random waveforms are highly incoherent with almost any fixed sparsity basis. For example, i.i.d. Gaussian/Bernoulli ensembles [5] and Noiselets [6] have been proposed as sensing bases.
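A minimal sketch of such non-adaptive acquisition (Python/NumPy; the names Phi and y follow the usual CS notation): the same i.i.d. Gaussian matrix compresses any signal at measurement time, with no knowledge of where its significant coefficients lie:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 256, 64, 8                    # signal length, measurements, sparsity

# An S-sparse signal (sparse in the canonical basis for simplicity).
x = np.zeros(N)
x[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)

# Non-adaptive sensing: the same i.i.d. Gaussian matrix would be used
# regardless of the signal being acquired.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                             # M << N measurements, taken directly

print(f"acquired {M} measurements of a length-{N} signal")
```

Only M numbers are ever stored or transmitted, rather than the N samples a Nyquist system would produce.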

It has been demonstrated that CS boasts several advantages, including its generic/non-adaptive nature. This means that the same hardware and random projections can be used for any compressible signal. The principal advantage, however, is reduced storage and transmission costs.

Following a signal’s measurement and transmission, signal reconstruction relies on numerical optimisation to recover the original signal from the small number of measurements. This reconstruction process needs far more computational power than that required to measure the signal in the first instance. This asymmetry, however, may be advantageous to the goals of the EEDSP cluster, whereby increased complexity and power consumption on a server workstation may be accepted in exchange for added power efficiency on an ambulatory device. The reduced sampling frequency associated with CS, and the consequent reduction in storage and transmission, offer much potential for reducing power consumption in ambulatory data acquisition devices.

Reconstruction involves selecting the sparsest signal that is consistent with the sampled measurements. Several heuristic techniques have been proposed to implement this, including Matching Pursuit [7], [8] and Iterative Thresholding [9]. These algorithms, and the development of novel reconstruction algorithms, are the subject of current investigation. Regarding measurement guidelines, it has been observed in practice that 4S incoherent samples suffice for signal recovery (where S is the signal’s sparsity level) [5].
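To give the flavour of such algorithms, the sketch below implements a simplified iterative hard thresholding loop in the spirit of [9] (Python/NumPy; it is not the project's implementation, and the measurement count is set above the ~4S guideline because plain IHT typically needs a larger margin than l1-based recovery):

```python
import numpy as np

def iht(y, Phi, S, iters=300):
    """Simplified iterative hard thresholding: a gradient step on
    ||y - Phi x||^2 followed by keeping the S largest-magnitude entries."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + Phi.T @ (y - Phi @ x)       # gradient (Landweber) step
        small = np.argsort(np.abs(x))[:-S]  # all but the S largest entries
        x[small] = 0.0                      # hard threshold to S-sparse
    return x

rng = np.random.default_rng(7)
N, S, M = 256, 8, 64                        # generous margin over 4S
x_true = np.zeros(N)
x_true[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)

Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, 2)               # scale so a unit step size is stable
y = Phi @ x_true                            # M incoherent measurements
x_rec = iht(y, Phi, S)

print("relative error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The loop is cheap per iteration but must run many times on the full-length signal, illustrating why reconstruction is far costlier than acquisition.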


Compression and Compressive Sampling can both be viewed as exercises in trade-offs: for example, the effort to compress versus the effort to sample, store and transmit the raw data. Effort in this context can mean several things, including power consumption, hardware requirements, size and cost. The optimisation of these trade-offs is a key objective of the EEDSP cluster. Another investigative theme in CS research is the quantification of the computational effort required for signal reconstruction. An advantage of CS, however, is that this reconstruction can be performed offline on a resource-unconstrained platform. Finally, the development of additional optimisation algorithms for recovering signals from measured CS data is of interest in this research.


[1] A. Said and W. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, 1996, pp. 243–250.

[2] J. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Transactions on Signal Processing, vol. 41, 1993, pp. 3445–3462.

[3] E. Candes, “Compressive sampling,” Proceedings of the International Congress of Mathematicians, Madrid, 2006, pp. 1433–1452.

[4] C. Shannon, “Communication in the presence of noise,” Proceedings of the IEEE, vol. 72, 1984, pp. 1192–1201.

[5] E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Magazine, vol. 25, 2008, pp. 21–30.

[6] R. Coifman, F. Geshwind, and Y. Meyer, “Noiselets,” Applied and Computational Harmonic Analysis, vol. 10, 2001, pp. 27–44.

[7] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, 2007, pp. 4655–4666.

[8] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Applied and Computational Harmonic Analysis, vol. 26, 2009, pp. 301–321.

[9] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, 2009, pp. 265–274.