
National Chiao Tung University

Department of Communication Engineering

Master's Thesis

Electrocardiogram Signal Coding via Adaptive Sampling

and 2-D Transform Domain Methods

Student: Wan-Yu Wang

Advisor: Prof. Yu T. Su


Electrocardiogram Signal Coding via Adaptive Sampling

and 2-D Transform Domain Methods

Student: Wan-Yu Wang

Advisor: Dr. Yu T. Su

National Chiao Tung University

Department of Communication Engineering

Master's Thesis

A Thesis

Submitted to the Institute of Communication Engineering, College of Electrical Engineering and Computer Science,

National Chiao Tung University, in Partial Fulfillment of the Requirements

for the Degree of Master of Science

in

Communication Engineering

August 2007


Electrocardiogram Signal Coding via Adaptive Sampling and 2-D Transform Domain Methods

Student: Wan-Yu Wang    Advisor: Dr. Yu T. Su

Department of Communication Engineering, National Chiao Tung University

Chinese Abstract

This thesis proposes two new ECG signal compression techniques. The first adopts an adaptive sampling method, while the second works through the two-dimensional discrete cosine transform (DCT). The latter provides a better compression ratio, but the former has lower complexity.

Since an ECG waveform does not change rapidly everywhere, we determine its sampling rate according to the waveform's rate of variation. Our algorithm adopts two sampling frequencies: when the waveform changes quickly, the higher sampling frequency is used; otherwise, the lower one is used. The adaptive sampling method not only reduces the memory requirement but also greatly lowers the computational complexity.

In addition, we exploit the correlation between successive heartbeats of the ECG to increase the compression ratio. The one-dimensional ECG is first converted into a two-dimensional matrix; applying the two-dimensional DCT then yields a sparse matrix with very few nonzero entries, so only the nonzero part needs to be quantized, and the compression ratio is roughly twice that of the one-dimensional method.

Since the most important part of the ECG waveform is the QRS region, we devised a scheme that emphasizes the accuracy of this part so that physicians can detect a patient's heart problems more precisely. The ECG is first divided into two parts: for the QRS part, more bits are used to quantize its spectral values, which greatly reduces the distortion there; for the less important part, fewer bits are used. With this method, our compression ratio exceeds 22 while the percent root-mean-square difference stays below 3%.


Electrocardiogram signal coding via adaptive

sampling and 2-D transform domain methods

Student : Wan-Yu Wang Advisor : Yu T. Su

Department of Communication Engineering National Chiao Tung University

Abstract

The electrocardiogram (ECG) is an important biomedical signal for the diagnosis of heart diseases. Efficient ECG waveform coding for long-term recording and effec-tive real-time transmission have received constant intensive attention in modern clinical applications. In this thesis, we propose two new ECG waveform coding methods that require reduced memory and provide high compression ratio (CR) performance. The first approach uses a variable sampling rate analog-to-digital converter that adapts to the waveform variation rate. It reduces not only the computational complexity but also the memory requirement for subsequent discrete cosine transform as well. The second approach is based on the observation that ECG signals often exhibit a near-periodic behavior. We first convert the one-dimensional ECG waveform into a two dimensional (2D) array by the Average Magnitude Difference Function (AMDF) method then apply a two dimension time/frequency transform to increase CR as much as possible. The resulting CR is about 21.87 and the percent-root-mean-square (PRD) is 3.25% which is much better than that of the 1-D approach based on adaptive sampling. In order to further improve the performance of the 2-D approach and capture the important part of ECG signals (QRS wave), we employ a sample-dependent multi-rate quatization approach which gives an improved CR of 22.05 and PRD of 2.88%.


Acknowledgments

First, I would like to thank my advisor, Dr. Yu T. Su, for his earnest guidance over these two years, which not only helped this thesis come to completion but also deepened my understanding of the communications field, and for pointing me in the right direction in life. I also thank the members of my oral defense committee, Prof. Szu-Lin Su, Prof. Hsiao-Feng Lu, and Prof. Ta-Sung Lee, for their valuable comments, which remedied the shortcomings of this thesis. In addition, I thank the senior students, classmates, and junior students of our laboratory for their help and encouragement; I benefited greatly from the learning process, and they added much joy to these two years. Finally, I am most grateful to my family, who have always cared about and encouraged me; without their support I could not have completed this thesis so smoothly. I dedicate this thesis to them with my deepest respect.


Contents

English Abstract i

Contents ii

List of Figures iv

1 Introduction 1

2 Electrocardiogram Data Compression Methods 3

2.1 Background . . . 3

2.2 Electrocardiogram Signals . . . 5

2.3 Direct ECG Data Compression Schemes . . . 10

2.4 Transformation Domain ECG Data Compression Schemes . . . 12

2.5 Transform Domain Representations . . . 13

3 Adaptive Sampling Approach for ECG Compression 16

3.1 Variation-Dependent Non-uniform Sampling . . . 17

3.2 Non-uniform Quantization . . . 18

3.3 Dual-rate adaptive sampling and quantization . . . 20

3.4 Further Improvement Techniques . . . 22

3.4.1 DCT block length . . . 22

3.4.2 Redundancy . . . 22


3.6 Adaptive Sampling . . . 24

4 A 2D Transform Domain Approach 29

4.1 Converting 1D ECG Sequence . . . 29

4.2 Complexity Reduction Techniques . . . 32

4.3 Multi-Rate Quantization Based on Signal Importance . . . 33

4.4 Simulation Result . . . 35


List of Figures

2.1 Schematic representation of normal ECG trace (sinus rhythm), with waves,

segments, and intervals labeled . . . 7

2.2 input signal . . . 14

2.3 ECG Signals Spectrum after Discrete Cosine Transform. . . 15

3.1 A dual sampling rate (S1 and S2) solution based on the signal's short-term

variation rate. . . 17

3.2 A typical ECG signal and its sampling positions when the proposed

dual-rate adaptive sampling approach is applied. High sampling rate S2 is in

place for the QRS wave. . . 19

3.3 ECG signal spectrum obtained by using the 1D DCT. . . 20

3.4 Block diagram of an ECG compression/de-compression system that

incorporates the concepts of adaptive sampling and nonuniform quantization. 20

3.5 Performance of Method 1: A typical ECG waveform, the reconstructed

and compression error sequences. . . 21

3.6 Complex frequency B-spline wavelet . . . 24

3.7 QRS detection result. . . 25

3.8 Block diagram of ECG compression for Method 2. Non-uniform sampling to reduce the DCT transform complexity. . . 26

3.9 It’s the time domain range of QRS waves . . . 26


3.11 Block diagram of the (Method 3-based) adaptive sampling ECG compression system. . . 27

3.12 Performance of Method 3 for a typical ECG waveform. . . 28

4.1 1D-to-2D conversion on an ECG signal. . . 30

4.2 2D transform domain representation of a typical ECG signal. . . 31

4.3 Block diagram of the proposed 2D ECG compression system (Method 4). 32

4.4 ECG data compression, reconstruction and error performance of Method 4 33

4.5 Block diagram of 2D transform domain ECG compression with adaptive sampling and multirate quantization. . . 34

4.6 Original, reconstructed ECG signal based on Method 5 and the corresponding error sequence. . . 35

4.7 A 2D time domain ECG signal representation with adaptive sampling. . 36

4.8 2D ECG representation with adaptive sampling and 2D DCT. . . 37

4.9 Block diagram of ECG compression for Method 6. . . 38

4.10 Components of the less important part . . . 38

4.11 Components of the important part . . . 39

4.12 Method 6 simulation result . . . 39

4.13 Block diagram of ECG compression for Method 7. . . 40

4.14 Method 7 simulation result. . . 40

4.15 PRD comparison of all methods for different numbers of A/D digitization bits . . 41


Chapter 1

Introduction

Depending on the sampling rate, quantization precision and number of sensors, the amount of electrocardiogram (ECG) data that has to be stored and transmitted grows at a rate of 7.5-540 kB per minute per patient. Compression of ECG signals is thus needed not only for archival purposes but also for real-time transmission and remote interpretation. A variety of techniques for ECG data compression have been proposed during the last four decades, and the subject continues to attract research attention. These techniques have been pivotal in reducing the digital ECG data volume and are essential to a wide spectrum of applications ranging from diagnostic to ambulatory ECGs.

ECG compression techniques can in general be categorized into lossless, lossy and hybrid classes. The lossless approaches, such as differencing, run-length coding, diatomic coding, null suppression, pattern substitution, Huffman coding, and the LZ family, do not offer a satisfactory compression ratio (CR). As long as all clinically significant features, including the P wave, QRS complex and T wave, are retained, ECG waveform coding allows a certain degree of distortion; therefore lossy methods such as polynomial predictors and interpolators, orthogonal transforms, AZTEC, CORTES, TP, subband and wavelet coding, probabilistic neural networks and adaptive Fourier coefficient techniques are often preferred. There are also proposals that combine the lossy and lossless techniques (ALZ77).

In this thesis, we propose two new ECG waveform coding methods that require reduced memory and provide high CR performance. The first approach uses a variable sampling rate analog-to-digital converter that adapts to the waveform variation rate. This is motivated by the fact that an ECG signal exhibits significant variations only during the QRS part, so it is wasteful to apply a constant sampling rate over the entire observation interval.

The second approach is based on the observation that ECG signals, though non-stationary in nature, often exhibit quasi-periodic behavior. We first convert the one-dimensional ECG waveform into a two-dimensional (2D) array by the Average Magnitude Difference Function (AMDF) method and then apply a two-dimensional time/frequency transform to obtain a sparse matrix representation.
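As a sketch of this 1D-to-2D step, the following Python fragment estimates the beat period with the AMDF and folds the sequence into a matrix with one period per row. It is an illustrative stand-in for the thesis's procedure, assuming a simple exhaustive lag search and a drop-the-tail folding rule; the function names and the `tau_min`/`tau_max` parameters are our assumptions, not values from the thesis.

```python
import numpy as np

def amdf_period(x, tau_min, tau_max):
    """Average Magnitude Difference Function period estimate: the lag
    minimizing the mean |x[n] - x[n + tau]| over the search range is
    taken as the dominant (heartbeat) period."""
    best_tau, best_d = tau_min, np.inf
    for tau in range(tau_min, tau_max + 1):
        d = np.mean(np.abs(x[:-tau] - x[tau:]))
        if d < best_d:
            best_tau, best_d = tau, d
    return best_tau

def to_2d(x, period):
    """Fold the 1D sequence into rows of one period each (any partial
    trailing period is dropped in this sketch)."""
    rows = len(x) // period
    return x[:rows * period].reshape(rows, period)
```

A subsequent 2D transform of the folded matrix can then concentrate the energy of the aligned beats into a few coefficients.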

In order to capture the important part of the ECG signal (the QRS wave), we suggest a sample-dependent multi-rate quantization approach which gives improved performance. Because the 2D method's computational complexity and memory requirements are higher than those of the 1D method, we incorporate the adaptive sampling concept for complexity reduction.

The rest of this thesis is organized as follows. In Chapter 2, we review some of the popular ECG compression methods, including direct and transformation domain schemes and discuss the basic concept of ECG signals. The adaptive sampling and quantization approach is presented in Chapter 3. In Chapter 4, we propose a method based on 2D discrete cosine transform (DCT) and multirate sampling. We also compare the performance, memory requirements and computational complexities of all proposed methods with those of some existing algorithms.


Chapter 2

Electrocardiogram Data

Compression Methods

2.1

Background

The main target of compression methods is to achieve maximum data volume reduction while preserving the significant signal morphology features upon reconstruction. A broad spectrum of techniques for electrocardiogram (ECG) data compression has been proposed during the last four decades. Such techniques have been vital in reducing the digital ECG data volume for storage and transmission, and are essential to a wide variety of applications ranging from diagnostic to ambulatory ECGs. Due to the diverse procedures that have been employed, comparison of ECG compression methods is a major problem; present evaluation methods preclude any direct comparison among existing ECG compression techniques. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are: ECG differential pulse code modulation and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods briefly presented include the Fourier, Walsh, and K-L transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, differential pulse code modulation (DPCM),


and entropy coding methods.

Existing ECG data compression techniques have been developed and evaluated under different conditions and constraints. Independent databases, with ECGs sampled and digitized at different sampling frequencies (100-1000 Hz) and precisions (8-12 bits), have mainly been employed. The reported CRs have been strictly based on comparing the number of samples in the original data with the resulting compression parameters, without taking into account factors such as bandwidth, sampling frequency, precision of the original data, wordlength of compression parameters, reconstruction error threshold, database size, lead selection, and noise level.

Each compression scheme is presented in accordance with the following five issues:

a) A brief description of the structure and the methodology behind each ECG compression scheme is given, along with any reported unique advantages and disadvantages.

b) The issue of processing time requirements has been excluded: in light of current technology, all ECG compression techniques can be implemented in real-time environments due to the relatively slowly varying nature of ECG signals.

c) The sampling rate and precision of the ECG signals originally employed in evaluating each compression scheme are presented along with the reported compression ratio.

d) Since most of the databases utilized in evaluating ECG compression schemes are nonstandard, database comparison has been excluded; we believe such information does not provide additional clarity and at times may be misleading. However, every effort has been made to include comments on how well each compression scheme has performed, the intent being to give a feeling for the relative value of each technique.

e) Finally, the fidelity of the reconstructed signal compared to the original ECG has been judged primarily by visual inspection.

To quantify the reconstruction fidelity, we adopt the percent root-mean-square difference (PRD), defined below, as a performance measure:

$$\mathrm{PRD} = \sqrt{\frac{\sum_{i=1}^{n}\left[x_{org}(i)-x_{rec}(i)\right]^{2}}{\sum_{i=1}^{n}x_{org}^{2}(i)}} \times 100 \qquad (2.1)$$

where $\{x_{org}\}$ and $\{x_{rec}\}$ are samples of the original and reconstructed data sequences.
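The measure in (2.1) maps directly to code; a minimal NumPy version (the function name is ours):

```python
import numpy as np

def prd(x_org, x_rec):
    """Percent root-mean-square difference (Eq. 2.1) between an
    original and a reconstructed ECG sample sequence."""
    x_org = np.asarray(x_org, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return 100.0 * np.sqrt(np.sum((x_org - x_rec) ** 2) / np.sum(x_org ** 2))
```

A perfect reconstruction gives a PRD of 0; the schemes discussed later report PRD values of a few percent.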

2.2

Electrocardiogram Signals

An electrocardiogram (ECG or EKG, abbreviated from the German Elektrokardiogramm) is a graphic representation produced by an electrocardiograph, which records the electrical voltage in the heart in the form of a continuous strip graph. It is the prime tool in cardiac electrophysiology, and has a prime function in the screening and diagnosis of cardiovascular diseases. An ECG is constructed by measuring the electrical potential between various points of the body using a galvanometer. Leads I, II and III are measured over the limbs: I is from the right to the left arm, II is from the right arm to the left leg and III is from the left arm to the left leg. From these, the imaginary point V is constructed, which is located centrally in the chest above the heart. The other nine leads are derived from the potential between this point and the three limb leads (aVR, aVL and aVF) and the six precordial leads (V1-V6).

Therefore, there are twelve leads in total. Each, by its nature, records information from a particular part of the heart:

• The inferior leads (leads II, III and aVF) look at electrical activity from the vantage

point of the inferior region (wall) of the heart. This is the apex of the left ventricle.

• The lateral leads (I, aVL, V5 and V6) look at the electrical activity from the vantage

point of the lateral wall of the heart, which is the lateral wall of the left ventricle.

• The anterior leads (V1 through V6) represent the anterior wall of the heart.

• aVR is rarely used for diagnostic information, but indicates if the ECG leads were

placed correctly on the patient.

Understanding the usual and abnormal directions, or vectors, of depolarization and repolarization yields important diagnostic information. The right ventricle has very little muscle mass. It leaves only a small imprint on the ECG, making it more difficult to diagnose than changes in the left ventricle.

The leads measure the average electrical activity generated by the summation of the action potentials of the heart at a particular moment in time. For instance, during normal atrial systole, the summation of the electrical activity produces an electrical vector that is directed from the SA node towards the AV node, and spreads from the right atrium to the left atrium (since the SA node resides in the right atrium). This turns into the P wave on the EKG, which is upright in II, III, and aVF (since the general electrical activity is going towards those leads), and inverted in aVR (since it is going away from that lead).

A typical ECG tracing of a normal heartbeat, consisting of a P wave, a QRS complex and a T wave, is shown in Fig. 2.1. A small U wave is not normally visible.

1. Axis

The axis is the general direction of the electrical impulse through the heart. It is

usually directed to the bottom left (normal axis: −30° to +90°), although it can

deviate to the right in very tall people and to the left in obesity.

• Extreme deviation is abnormal and indicates a bundle branch block, ventricular hypertrophy or (if to the right) pulmonary embolism.

• It can also indicate dextrocardia, a reversal of the direction in which the heart faces, but this condition is very rare and has often already been diagnosed by something else (such as a chest X-ray).


Figure 2.1: Schematic representation of normal ECG trace (sinus rhythm), with waves, segments, and intervals labeled

2. P wave

The P wave is the electrical signature of the current that causes atrial contraction. Both the left and right atria contract simultaneously. Its relationship to QRS complexes determines the presence of a heart block.

• Irregular or absent P waves may indicate arrhythmia.

• The shape of the P waves may indicate atrial problems.

3. QRS

The QRS complex corresponds to the current that causes contraction of the left and right ventricles, which is much more forceful than that of the atria and involves


more muscle mass, thus resulting in a greater ECG deflection. The duration of the QRS complex is normally less than or equal to 0.10 second. The Q wave, when present, represents the small horizontal (left to right) current as the action potential travels through the interventricular septum.

• Very wide and deep Q waves do not have a septal origin, but indicate myocardial infarction that involves the full depth of the myocardium and has left a scar.

The R and S waves indicate contraction of the myocardium itself.

• Abnormalities in the QRS complex may indicate bundle branch block (when wide), ventricular origin of tachycardia, ventricular hypertrophy or other ventricular abnormalities.

• The complexes are often small in pericarditis or pericardial effusion.

4. T wave

The T wave represents the repolarization of the ventricles. The QRS complex usually obscures the atrial repolarization wave so that it is not usually seen. Elec-trically, the cardiac muscle cells are like loaded springs. A small impulse sets them off, they depolarize and contract. Setting the spring up again is repolarization (more at action potential). In most leads, the T wave is positive.

• Inverted (also described as negative) T waves can be a sign of disease, although an inverted T wave is normal in V1 (and V2-V3 in African-Americans/Afro-Caribbeans).

• T wave abnormalities may indicate electrolyte disturbance, such as hyperkalemia or hypokalemia.


• This segment ordinarily lasts about 0.08 seconds and is usually level with the PR segment. Upward or downward displacement may indicate damage to the cardiac muscle or strain on the ventricles: it can be depressed in ischemia, elevated in myocardial infarction, and upsloping with digoxin use.

5. U Wave

The U wave is not always seen. It is quite small, and follows the T wave by definition. It is thought to represent repolarization of the papillary muscles or Purkinje fibers. Prominent U waves are most often seen in hypokalemia, but may be present in hypercalcemia, thyrotoxicosis, or exposure to digitalis, epinephrine, and Class 1A and 3 anti-arrhythmics, as well as in congenital long QT syndrome and in the setting of intracranial hemorrhage. An inverted U wave may represent myocardial ischemia or left ventricular volume overload.

6. QT interval

The QT interval is measured from the beginning of the QRS complex to the end of the T wave. A normal QT interval is usually about 0.40 seconds. The QT interval, as well as the corrected QT interval, is important in the diagnosis of long QT syndrome and short QT syndrome. The QT interval varies with the heart rate, and various correction factors have been developed to correct the QT interval for the heart rate. The most commonly used method for correcting the QT interval for rate is the one formulated by Bazett and published in 1920. Bazett's formula is $QT_c = QT/\sqrt{RR}$, where $QT_c$ is the QT interval corrected for rate and $RR$ is the interval from the onset of one QRS complex to the onset of the next QRS complex, measured in seconds. However, this formula tends to be inaccurate: it over-corrects at high heart rates and under-corrects at low heart rates.

7. PR interval

The PR interval is measured from the beginning of the P wave to the beginning of the QRS complex. It is usually 0.12 to 0.20 seconds. A prolonged PR interval indicates a first-degree heart block, while a shortened one may indicate an accessory bundle that depolarizes the ventricle early, as seen in Wolff-Parkinson-White syndrome.

2.3

Direct ECG Data Compression Schemes

This section presents the direct data compression schemes developed specifically for ECG data compression, namely, the AZTEC, Fan/SAPA, TP, and CORTES ECG compression schemes.

1. The AZTEC Technique: The amplitude zone-time epoch coding (AZTEC) algorithm was originally developed by Cox et al. [1] for preprocessing real-time ECGs for rhythm analysis. It has become a popular data reduction algorithm for ECG monitors and databases, with an achieved compression ratio of 10:1 (500 Hz sampled ECG with 12-bit resolution). However, the reconstructed signal demonstrates significant discontinuities and distortion. In particular, most of the signal distortion occurs in the reconstruction of the P and T waves due to their slowly varying slopes. The AZTEC algorithm converts raw ECG sample points into plateaus and slopes. The AZTEC plateaus (horizontal lines) are produced by utilizing zero-order interpolation (ZOI). The stored values for each plateau are the amplitude value of the line and its length (the number of samples with which the line can be interpolated within the aperture). The production of an AZTEC slope starts when the number of samples needed to form a plateau is less than three. The slope is saved whenever a plateau of three samples or more can be formed. The stored values for the slope are the duration (number of samples of the slope) and the final elevation (amplitude of the last sample point). Signal reconstruction is achieved by expanding the AZTEC plateaus and slopes into a discrete sequence of data points.

A known drawback of AZTEC is the discontinuity (step-like quantization) that occurs in the reconstructed ECG waveform. A significant reduction of such discontinuities is usually achieved by utilizing a smoothing parabolic filter. The disadvantage of the smoothing process is the introduction of amplitude distortion to the ECG waveform.
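The plateau-generation (ZOI) step can be sketched as follows. This is a simplified illustration that omits AZTEC's slope production; the greedy band-growing rule, the function names and the `aperture` parameter are our assumptions, not the exact published procedure.

```python
def zoi_plateaus(samples, aperture):
    """Greedy zero-order interpolation: emit (value, length) plateaus
    such that every sample in a plateau stays within `aperture` of the
    plateau's running min/max band (full AZTEC's slopes are omitted)."""
    plateaus = []
    i, n = 0, len(samples)
    while i < n:
        lo = hi = samples[i]
        j = i + 1
        while j < n:
            lo2, hi2 = min(lo, samples[j]), max(hi, samples[j])
            if hi2 - lo2 > aperture:   # sample would leave the band
                break
            lo, hi = lo2, hi2
            j += 1
        plateaus.append(((lo + hi) / 2.0, j - i))  # store value + length
        i = j
    return plateaus

def zoi_reconstruct(plateaus):
    """Expand stored plateaus back into a discrete sample sequence."""
    out = []
    for value, length in plateaus:
        out.extend([value] * length)
    return out
```

Each stored plateau costs only two values (amplitude and length), so long isoelectric stretches compress well, while the reconstruction exhibits the step-like discontinuities noted in the text.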

2. The Turning Point Technique: The turning point (TP) data reduction algorithm [2] was developed for the purpose of reducing the sampling frequency of an ECG signal from 200 to 100 Hz without diminishing the elevation of large-amplitude QRS's. The algorithm processes three data points at a time: a reference point (X0) and two consecutive data points (X1 and X2). Either X1 or X2 is retained, depending on which point preserves the slope of the original three points. The TP algorithm produces a fixed compression ratio of 2:1, whereby the reconstructed signal resembles the original signal with some distortion. A disadvantage of the TP method is that the saved points do not represent equally spaced time intervals.
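The three-point retention rule can be sketched as below; we assume "preserving the slope" means keeping X1 when the slope changes sign there (a turning point) and keeping X2 otherwise, and retaining the first sample as the initial reference is our implementation choice.

```python
def turning_point(samples):
    """Turning-point 2:1 reduction: for each pair (x1, x2) after the
    reference x0, keep x1 if the slope changes sign at x1 (a turning
    point), otherwise keep x2; the kept sample becomes the next
    reference."""
    if not samples:
        return []
    out = [samples[0]]
    x0 = samples[0]
    i = 1
    while i + 1 < len(samples):
        x1, x2 = samples[i], samples[i + 1]
        s1 = (x1 > x0) - (x1 < x0)   # sign of the first slope
        s2 = (x2 > x1) - (x2 < x1)   # sign of the second slope
        kept = x1 if s1 * s2 < 0 else x2
        out.append(kept)
        x0 = kept
        i += 2
    return out
```

On a monotone run the rule keeps every second sample, while local peaks and troughs (such as QRS extremes) are always retained, which is how the fixed 2:1 ratio avoids flattening large deflections.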

3. The CORTES Scheme: The coordinate reduction time encoding system (CORTES) algorithm [3] is a hybrid of the AZTEC and TP algorithms. CORTES applies the TP algorithm to the high-frequency regions (QRS complexes), whereas it applies the AZTEC algorithm to the isoelectric regions of the ECG signal. The AZTEC and TP algorithms are applied in parallel to the incoming sampled ECG data. Whenever an AZTEC line is produced, a decision based on the length of the line is used to determine whether the AZTEC data or the TP data is to be saved. If the line is longer than an empirically determined threshold, the AZTEC line is saved; otherwise the TP data are saved. Only AZTEC plateaus (lines) are generated; no slopes are produced. CORTES signal reconstruction is achieved by expanding the AZTEC plateaus into discrete data points and interpolating between each pair of the TP data. Parabolic smoothing is applied to the AZTEC portions of the reconstructed CORTES signal to reduce distortion. A detailed description of the CORTES implementation and reconstruction procedures is given in Tompkins and Webster [4].

4. Fan and SAPA Techniques: The Fan and scan-along polygonal approximation (SAPA) algorithms, developed for ECG data compression, are based on the first-order interpolation with two degrees of freedom (FOI-2DF) technique. A recent report [5] claimed that the SAPA-2 algorithm is equivalent to an older algorithm, the Fan.

2.4

Transformation Domain ECG Data

Compression Schemes

Unlike direct data compression, most transformation domain compression techniques have been employed in ECG or multilead ECG compression and require ECG wave detection. In general, transformation domain techniques involve preprocessing the input signal by means of a linear orthogonal transformation, properly encoding the transformed output (expansion coefficients), and reducing the amount of data needed to adequately represent the original signal. Upon signal reconstruction, an inverse transformation is performed and the original signal is recovered with a certain degree of error. The rationale is to efficiently represent a given data sequence by a set of transformation coefficients utilizing a series expansion (transform) technique. Many discrete orthogonal transforms have been employed in digital signal representation, such as the Karhunen-Loeve transform (KLT) [6], Fourier (FT), Cosine (CT), Walsh (WT), and Haar (HT) transforms. The optimal transform is the KLT (also known as the principal components transform or the eigenvector transform) in the sense that the least number of orthonormal functions is needed to represent the input signal for a given rms error. Moreover, the KLT results in decorrelated transform coefficients (a diagonal covariance matrix). However, the computational time needed to calculate the KLT basis vectors (functions) is very intensive. This is because the KLT basis vectors are obtained from the eigenvalues and corresponding eigenvectors of the covariance matrix of the original data, which can be a large symmetric matrix. The lengthy processing requirement of the KLT has led to the use of suboptimum transforms with fast algorithms (i.e., FT, WT, CT, HT, etc.). Unlike the KLT, the basis vectors of these suboptimum transforms are input-independent (predetermined). For instance, the basis vectors in the FT are simply sines and cosines (the fundamental frequency and multiples thereof), whereas the WT basis vectors are square waves of different sequences. It should be pointed out that the performance of these suboptimal transforms is usually upper-bounded by that of the KLT.

Orthogonal transforms provide alternate signal representations that can be useful for ECG data compression. The goal is to select as small a subset of the transform coefficients as possible which contains the most information about the signal, without introducing objectionable error after reconstruction. A more adaptive method is to calculate the upper bound in the spectrum and keep the coefficients that contain a predetermined fraction of this power. This method is attractive because it can adapt to store more or fewer coefficients as necessary. This is a particularly useful feature for ambulatory systems, where good compression under a variety of ECG rhythms is desirable.

2.5

Transform Domain Representations

Fig. 2.2 shows an input signal with a sampling rate of 977 points/second, and Fig. 2.3 shows the DCT of an ECG signal; it is clear from the latter that the majority of the power in the transform is generally contained within the first 100 of the 1024 coefficients. Although a compression ratio of 10:1 with very little distortion might thus seem possible, this is not the case: the compression algorithm must also select a threshold to decide how many coefficients are to be stored at their original accuracy. There are two ways to do this. The first and simpler one is to fix the number of data points kept, e.g., retain the first 20% of the coefficients at original accuracy; the vast majority of the power in the transform will be contained in these coefficients. A more adaptive method, which we propose, is to determine the upper bound in the spectrum and use it as the threshold for retaining the coefficients that contain the vast majority of the power.
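The simpler fixed-count rule can be sketched with an explicit orthonormal DCT-II matrix; the 20% default and the matrix-based transform are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k samples cos(pi*(2i+1)*k/(2n)),
    scaled so that m @ m.T is the identity."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def compress_dct(signal, keep_fraction=0.2):
    """Keep only the first fraction of DCT coefficients (the rest are
    zeroed) and reconstruct with the inverse (transposed) transform."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    m = dct_matrix(n)
    coeffs = m @ signal
    coeffs[max(1, int(n * keep_fraction)):] = 0.0
    return m.T @ coeffs
```

A signal whose energy lies entirely in the retained low-order coefficients is reconstructed exactly; a real ECG segment incurs a small residual error, measured by the PRD of (2.1).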

Figure 2.2: Input signal (amplitude vs. time in samples).

Figure 2.3: ECG signal spectrum after the discrete cosine transform (coefficient amplitude vs. frequency).


Chapter 3

Adaptive Sampling Approach for

ECG Compression

The importance of body sensor networks [7] used to monitor patients over a prolonged period of time has grown in home health care applications. Sensor nodes need to operate with very low power consumption and under the constraint of limited memory capacity.

The low power consumption and limited memory space requirements bring to our attention the issue of sampling rate. In general ECG data compression methods, signals are usually sampled in the time domain at a constant rate. Shannon's sampling theorem [8] says: "If a function f(t) contains no frequencies higher than W, it is completely determined by giving its ordinates at a series of points spaced 1/(2W) apart." Later in his article, however, Shannon also stated: "Any function limited to the bandwidth W and the time interval T can be specified by giving 2TW numbers. The 2TW numbers need not be equally spaced." Thus, Shannon's results do not require that a function be sampled at a regular rate. Considering the fact that ECG waveforms contain some segments of rapid change and some of slow change, it becomes obvious that a constant sampling rate is not an efficient approach in terms of implementation complexity and power consumption.


3.1

Variation-Dependent Non-uniform Sampling

Large variations are encountered at the QRS spikes, whereas the P wave and the period between pulses show only little variation in signal amplitude and contribute the lower frequency components of the ECG spectrum. It is therefore undesirable and wasteful to operate the converter at a constant high sampling rate during the periods of slow signal variation. Ideally, one would like to have an adaptive sampling rate that is consistent with the instantaneous frequency of the signal to be converted. However, the information about the time-varying sampling rate would have to be recorded in order to reconstruct the signal from the digitized sample sequence. An optimal tradeoff for our application, which reduces the average sampling rate (and thus the memory requirement) while requiring the minimum number of time stamps associated with rate-changing positions, is a simple dual-rate analog-to-digital (A/D) converter; see Fig. 3.1. Let S1 denote the


Figure 3.1: A dual sampling rate (S1 and S2) solution based on the signal's short-term variation rate.

sampling rate used in the slowly varying part and S2 be that used in the fast-varying part. Then our algorithm is described as follows.

The sampler uses rate S1 if the short-term (total) variation

[|x(2) − x(1)| + |x(3) − x(2)| + ... + |x(B) − x(B − 1)|] < Vth

and it uses rate S2 when

[|x(2) − x(1)| + |x(3) − x(2)| + ... + |x(B) − x(B − 1)|] > Vth

where B is the local observation interval (LOI) used for determining the short-term variation; we choose B = 4 in our algorithm. Vth is the threshold for deciding the sampling rate. If the short-term variation in a LOI is smaller than the threshold Vth, the A/D converter samples at rate S1; if the variation within the interval exceeds Vth, rate S2 is used instead. Fig. 3.2 depicts a simulated sample waveform, where the high rate S2 is used for sampling the QRS waves and the lower rate S1 is used in the other segments.
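The per-interval rate decision above can be sketched as follows; this is a minimal illustration, with the function name chosen here and B = 4 and the threshold values only as examples.

```python
def choose_rate(block, v_th):
    """Pick the sampling rate for one local observation interval (LOI).

    block: the B samples x(1), ..., x(B) observed in the interval.
    Returns "S1" (low rate) when the short-term total variation
    |x(2)-x(1)| + ... + |x(B)-x(B-1)| is below the threshold v_th,
    and "S2" (high rate) otherwise.
    """
    variation = sum(abs(block[i] - block[i - 1]) for i in range(1, len(block)))
    return "S2" if variation > v_th else "S1"
```

A flat segment between beats then keeps the low rate, while a QRS-like jump within the same interval length triggers the high rate.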

In order to inform the receiver of the sampling rate associated with a LOI, we need to send one redundant bit per interval. We call this redundant bit the rate classification bit (RCB); 1 and 0 denote rates S1 and S2, respectively. These RCBs are placed in the header part of a transmission frame (or packet).

3.2 Non-uniform Quantization

The discrete cosine transform (DCT) based compression algorithms have become industry standards (JPEG, MPEG) for still-image and video compression systems. The DCT can also be used in ECG compression, as evidenced by Fig. 3.3, which is obtained by applying the one-dimensional DCT to the sampled ECG sequence. The spectrum shows that a very high percentage of the signal power lies in the low-frequency band.




Figure 3.2: A typical ECG signal and its sampling positions when the proposed dual-rate adaptive sampling approach is applied. The high sampling rate S2 is in place for the QRS wave.

Region 2 consists of Subcarrier 11 to Subcarrier 40.
Region 3 consists of Subcarrier 41 to Subcarrier 100.

The majority of the signal power lies in the first two regions, especially Region 1, hence the spectral components are represented by longer words, i.e., more bits are used to represent the quantized spectral coefficients. On the other hand, the spectral coefficients in the third region are very close to zero so fewer bits are used for quantization.
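As an illustration, a uniform quantizer with a region-dependent word length can be sketched as below. The region boundaries follow the subcarrier ranges above; the bit counts per region and the full-scale amplitude `max_abs` are assumptions for the example, not values from the thesis.

```python
def quantize_regions(coeffs, region_bits, max_abs):
    """Quantize DCT coefficients with a different word length per region.

    coeffs      : list of DCT coefficients of one block.
    region_bits : list of (start, end, bits) triples, end exclusive,
                  e.g. [(0, 10, 10), (10, 40, 8), (40, 100, 4)].
    max_abs     : assumed full-scale amplitude of the quantizer.
    Returns the de-quantized coefficients, to expose the distortion.
    """
    out = list(coeffs)
    for start, end, bits in region_bits:
        levels = 2 ** bits
        step = 2.0 * max_abs / levels        # quantization step for this region
        for i in range(start, min(end, len(coeffs))):
            q = round(coeffs[i] / step)      # nearest reconstruction level
            q = max(-(levels // 2), min(levels // 2 - 1, q))
            out[i] = q * step
    return out
```

With many bits the low-frequency coefficients are reproduced almost exactly, while near-zero high-frequency coefficients collapse to zero under the coarse quantizer, which is exactly the intended behavior.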

We have discussed the main ingredients of our algorithm, namely adaptive sampling and adaptive (nonuniform) quantization. The complete algorithm is presented in the following section.



Figure 3.3: ECG signal spectrum obtained by using the 1D DCT.

3.3 Dual-Rate Adaptive Sampling and Quantization

The algorithm based on the concepts of adaptive sampling and quantization is referred to as Method 1; the corresponding compression and de-compression systems are shown in Fig. 3.4. Because there are two quantization precisions for the DCT


Figure 3.4: Block diagram of an ECG compression/de-compression system incorporating the concepts of adaptive sampling and nonuniform quantization.


Domain Block and Coefficient Synchronization. After taking the inverse DCT, we check the time stamps in each time-domain segment to separate the high-rate and low-rate blocks. For a rate-S1 block, we can either perform up-sampling or use linear interpolation to recover the signal:

d(i) ∈ S1:  x̂(t + j) = d(i) + [(d(i + 1) − d(i))/B] · j,  0 ≤ j ≤ B − 1

where x̂(t) represents the recovered signal.
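The S1-block recovery formula above can be sketched as follows; the helper name is chosen here for illustration.

```python
def upsample_s1(d, B):
    """Recover a slow-rate (S1) block by linear interpolation:
    x_hat(t + j) = d(i) + (d(i + 1) - d(i)) / B * j,  0 <= j <= B - 1.

    d : decimated S1 samples d(i); B : decimation factor.
    """
    out = []
    for i in range(len(d) - 1):
        for j in range(B):
            # linear segment between consecutive decimated samples
            out.append(d[i] + (d[i + 1] - d[i]) / B * j)
    out.append(d[-1])  # close the block with the last decimated sample
    return out
```

For example, `upsample_s1([0.0, 1.0], 4)` fills in three intermediate samples along the straight line between the two decimated values.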

For a rate-S2 block, we simply pass the samples

d(i) ∈ S2:  x̂(t) = d(i)

to the digital-to-analog (D/A) converter. Fig. 3.5 depicts a typical performance example, with the original waveform, the reconstructed waveform and the corresponding error signal shown in the top, middle and bottom parts of the figure. Method 1 achieves a compression ratio of 10.04 and PRD = 3.71%.


Figure 3.5: Performance of Method 1: A typical ECG waveform, the reconstructed and compression error sequences.


For comparison, the AZTEC algorithm provides a compression ratio of 6.8 and PRD = 10%, the Fan algorithm provides a compression ratio of 7.4 and PRD = 8.7%, while the symmetric wavelet transform method has a compression ratio of 8 and PRD = 3.9%.

3.4 Further Improvement Techniques

There are several techniques that can be used to improve the performance of Method 1.

3.4.1 DCT block length

The DCT block length is an important issue that affects the system complexity. One can choose the block length to be equal to one or several beat periods. The minimum block length must cover at least one normal beat period (about 790 to 840 samples) if a constant sampling rate is used. As a DCT needs O(T log T) multiplication operations for a block of T samples, the block length should be as small as possible. Method 1 uses a non-uniform sampling approach, so T is reduced to about 360 to 380 samples; the DCT complexity is reduced while the PRD remains acceptable.

3.4.2 Redundancy

The disadvantage of Method 1 is that many redundant bits are needed to carry the time stamps, which decreases the CR performance. Increasing the block size B reduces the number of redundant bits, but the PRD increases: if B is large, the samples within a block cannot be recovered accurately and the distortion becomes higher. This motivates finding another method to locate the QRS waves.

3.5 QRS Waves Detection Method

The approach of the last section requires many redundant bits; this section introduces the method of [9] to detect the QRS wave locations.


The complex continuous wavelet transform is calculated as the inner product, i.e., the convolution of the analyzed signal f(t) with the complex-conjugate version of the wavelet function ψ(t):

CWT_f(a, b) = ⟨f(t), ψ_{a,b}(t)⟩ = ∫_{−∞}^{∞} f(t) ψ*_{a,b}(t) dt.  (3.1)

After a comparative analysis of several types of complex-valued wavelets, the frequency B-spline wavelet was chosen as the function prototype for QRS detection because of its good temporal localization properties. The complex frequency B-spline wavelet is defined as

ψ(t) = √fb · (sinc(fb t / m))^m · e^{2iπ fc t},  (3.2)

depending on three parameters:

fb is the bandwidth parameter (fb = 1 is chosen);
fc is the wavelet central frequency (fc = 1.5 is chosen);
m is an integer order parameter (m ≥ 1).

To reduce computation, the wavelet used for the analysis is of the first order (m = 1); frequency B-spline wavelets of higher order do not improve the detection ratio. By varying the parameters fb and fc, the values optimal for QRS detection are determined. The real and imaginary parts of the analytic wavelet can be seen in Fig. 3.6.

The result of QRS wave detection is shown in Fig. 3.7.

Fig. 3.7 shows that the locations of the QRS waves can be detected exactly. Based on this detector we propose another method, Method 2, shown in Fig. 3.8.

If the location of a QRS wave is found at time μ, how do we determine whether a small block is of the S1 or S2 type? For μ − duration ≤ t ≤ μ + duration, we choose the S2 type to send the fast-changing data; otherwise, we


Figure 3.6: Complex frequency B-spline wavelet

choose the S1 type to send this small block's data into the DCT block. The performance of Method 2 is shown in Fig. 3.10.

The compression ratio of Method 2 is 14.76, higher than that of Method 1. The reason is that only 10 bits are transmitted to represent the location of the QRS waves; from those 10 bits, the receiver knows the timing ranges of the S1 and S2 types, so the many redundant bits of Method 1 are not needed. Method 2 reduces the DCT complexity, but it needs some extra computation to find the location of the QRS waves.

3.6 Adaptive Sampling

In the previous sections, we used the sampled data collected from the hospital to reduce the DCT complexity, and showed that the performance is acceptable when the number of slowly-changing samples is decreased. In this section, we put this concept into the actual sampling circuit.

There are two sampling rates for sampling the ECG data. S1 means that the circuit samples the slowly-changing ECG at rate R1 = 330 samples/sec, while S2



Figure 3.7: QRS detection result.

means that the circuit samples the fast-changing ECG at rate R2 = 990 samples/sec, with R2 = B ∗ R1. In our algorithm, B = 3 and Vth is the threshold for choosing the S1 or S2 type. Initially, the circuit chooses the S1 sampling type. The detailed process is as follows:

Case 1. When the current state is S1 and the A/D converter detects an instantaneous slope greater than Vth, the A/D circuit switches to state S2 and remains so for the next 2 × duration + 1 samples; otherwise, the circuit remains in the same state.

Case 2. When the current state is S2 and the A/D converter finds that the instantaneous slope is below the threshold Vth, the circuit switches back to state S1; otherwise the circuit stays in the same state.

Case 3. If the ECG wave stays in either state for a prolonged period, an alarm will be activated, indicating that the patient may have some heart problem.
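Cases 1 to 3 can be sketched as a small state machine; the function name and the `alarm_len` default are assumptions for the sketch, and `duration` is the hold parameter from the text.

```python
def dual_rate_states(samples, v_th, duration, alarm_len=1000):
    """Trace the A/D state per input sample following Cases 1-3.

    Case 1: in S1, a slope above v_th switches to S2 and holds it
            for the next 2*duration + 1 samples.
    Case 2: in S2 (after the hold), a slope below v_th switches back.
    Case 3: staying in one state longer than alarm_len raises a flag.
    Returns (states, alarm), one state label per sample after the first.
    """
    state, hold, run, alarm = "S1", 0, 0, False
    states = []
    for i in range(1, len(samples)):
        slope = abs(samples[i] - samples[i - 1])
        if state == "S1":
            if slope > v_th:                      # Case 1
                state, hold, run = "S2", 2 * duration + 1, 0
        else:
            if hold > 0:                          # still inside the hold
                hold -= 1
            elif slope < v_th:                    # Case 2
                state, run = "S1", 0
        run += 1
        if run > alarm_len:                       # Case 3
            alarm = True
        states.append(state)
    return states, alarm
```

On a flat sequence with a single jump, the trace stays in S1, switches to S2 at the jump, holds for 2 × duration + 1 samples, and then returns to S1.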



Figure 3.8: Block diagram of ECG compression for Method 2; non-uniform sampling reduces the DCT complexity.


Figure 3.9: It’s the time domain range of QRS waves

Only the quantization bits in the DCT domain and 10 redundant bits need to be transmitted. The redundant bits represent the location μ at which the fast-changing ECG data starts. On receiving μ, the receiver knows that the S2 type is used from μ to μ + 2 × duration. The redundant bits in Method 3 are fewer than those in Method 1. For the S1 type, the interpolation method is used as before. The performance of Method 3 is shown in Fig. 3.12.

As in Method 1, we reduce the DCT complexity: the block size T is reduced to T/D, where D is the sampling rate reduction ratio, about 2.2 in our algorithm, and the memory size is reduced to T/D as well. The compression ratio of Method 3 is 12.58 and the PRD is 3.59%. Method 3 also decreases the sampling rate, so its advantage is that the hardware does not need to sample the data as fast.



Figure 3.10: Method 2 simulation result (CR = 14.76, PRD = 3.49%).


Figure 3.11: Block diagram of the (Method 3-based) adaptive sampling ECG compression system.


Figure 3.12: Method 3 simulation result (CR = 12.58, PRD = 3.60%): original ECG, reconstructed ECG, and error.


Chapter 4

A 2D Transform Domain Approach

The discrete cosine transform (DCT) [10] has been widely used in many audio and video coding applications, well-known examples being the family of ISO/IEC MPEG coding standards [11]. The classical one-dimensional approach does not exploit the beat-to-beat correlation, which results in a lower compression ratio. In our algorithm, we employ a 2D time/frequency transform to take advantage of these correlations.

The 2D DCT y(p, q) of the input 2D signal x(m, n), 0 ≤ m ≤ M − 1, 0 ≤ n ≤ N − 1, is defined by

y(p, q) = α_p α_q Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} x(m, n) a_{mp} b_{nq}  (4.1)

where

a_{mp} = cos(π(2m + 1)p / 2M)  (4.2)
b_{nq} = cos(π(2n + 1)q / 2N)  (4.3)

p = 0, ..., M − 1 and q = 0, ..., N − 1, and α_p, α_q are the normalization factors. It can easily be seen that the 2D DCT is equivalent to a 1D DCT performed along one dimension followed by another 1D DCT along the other dimension.
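The separability property can be checked numerically with a pure-Python sketch; here the orthonormal normalization α_0 = √(1/M), α_p = √(2/M) for p > 0 is assumed (and likewise along the other dimension).

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of the list v."""
    M = len(v)
    out = []
    for p in range(M):
        a = math.sqrt(1.0 / M) if p == 0 else math.sqrt(2.0 / M)
        out.append(a * sum(v[m] * math.cos(math.pi * (2 * m + 1) * p / (2 * M))
                           for m in range(M)))
    return out

def dct_2d(x):
    """2-D DCT via separability: a 1-D DCT on every row, then a
    1-D DCT on every column of the intermediate result."""
    rows = [dct_1d(r) for r in x]
    cols = [dct_1d(list(c)) for c in zip(*rows)]   # transpose, DCT columns
    return [list(r) for r in zip(*cols)]           # transpose back
```

The row-then-column result agrees term by term with the direct double sum of eq. (4.1), which is what makes the fast separable implementation possible.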

4.1 Converting 1D ECG Sequence

To apply the 2D DCT on an ECG signal we have to transform the original 1D waveform into a 2D array. This can be done by arranging the array such that each


period is placed in a row. ECG waveforms are not periodic in general, hence we have to determine an average period first. We use the Average Magnitude Difference Function (AMDF) algorithm to estimate the ECG signal period:

AMDF(j) = (1/T) Σ_{i=1}^{T} |x_n(i) − x_n(i + j)|  (4.4)

where minperiod < j < maxperiod, and

period = arg min_j AMDF(j)  (4.5)

where x(n) is the input ECG sample sequence and the range of the ECG period is [minperiod, maxperiod]. Given the fundamental period T of the sample sequence, determined by the AMDF

method, the signal is rearranged as an N × T matrix x_m, N being the number of periods included in one frame; see Fig. 4.1. Since the ECG signal is only quasi-periodic, we must fill


Figure 4.1: 1D-to-2D conversion on an ECG signal.


the tail parts of those rows in the array whose periods are less than T. A simple approach is zero-padding, i.e., adding zeros to the tail parts. Another approach is to copy the beginning part of the next period or of the current period. The first approach tends to produce extra high-frequency components, while the second one does not have such an undesired effect.
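The AMDF period estimate of eqs. (4.4)-(4.5) and the row-wise arrangement with either padding choice can be sketched as below; the function names and the averaging window `T` passed to the estimator are illustrative.

```python
import math

def amdf_period(x, min_period, max_period, T):
    """Eqs. (4.4)-(4.5): return the lag j in [min_period, max_period]
    minimizing AMDF(j) = (1/T) * sum_{i=1..T} |x(i) - x(i + j)|."""
    best_j, best_v = None, float("inf")
    for j in range(min_period, max_period + 1):
        v = sum(abs(x[i] - x[i + j]) for i in range(T)) / T
        if v < best_v:
            best_j, best_v = j, v
    return best_j

def to_2d(x, period, pad="copy"):
    """Cut the 1-D sequence into rows of `period` samples; a short tail
    row is padded with zeros ("zero") or by copying the beginning of
    the same row ("copy")."""
    rows = []
    for start in range(0, len(x), period):
        row = list(x[start:start + period])
        k = 0
        while len(row) < period:
            row.append(0.0 if pad == "zero" else row[k])
            k += 1
        rows.append(row)
    return rows
```

On a sinusoid of period 7 samples, the AMDF minimum is attained at lag 7, and the converter produces equal-length rows regardless of the padding choice.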

With the quasi-periodicity of ECG signals, the transformed array based on the above arrangement contains similar rows; hence the 2D DCT array has only a small number of nonzero coefficients. The resulting sparse matrix leads to efficient quantization; see Fig. 4.2 for a typical 2D ECG frequency-domain representation. The resulting algorithm is referred to as Method 4 and is described in Fig. 4.3. Fig. 4.4 gives a performance example.


Figure 4.2: 2D transform domain representation of a typical ECG signal.

This method achieves a compression ratio of 21.87 and PRD = 3.25% with a computational complexity of order O(L log2 L), L = N × T. For one period of the ECG signal, compared with



Figure 4.3: Block diagram of the proposed 2D ECG compression system (Method 4).

the general 1D method of complexity O(T log2 T), the increased amount O(T log2 N) is acceptable. The memory usage of the 2D method, N × T, is higher than that of the 1D method, T. Nevertheless, the compression ratio of the 2D method is about twice that of the 1D method, and its PRD is lower.

4.2 Complexity Reduction Techniques

The complexity and memory of the 2D approach are higher than those of the 1D method. There are, however, a few techniques that can be used to reduce the complexity and memory requirements. In particular, the adaptive sampling and nonuniform quantization methods can also be applied in the 2D scenario. The computational complexity of the 2D DCT decreases from O(NT log2(NT)) to O((NT/D) log2(NT/D)), while the memory size is reduced from N × T to NT/D.

Combining the complexity-reduction techniques with the proposed 2D approach, we obtain the algorithm shown in Fig. 4.5, which we refer to as Method 5. A typical performance example of Method 5 is shown in Fig. 4.6.

The CR of Method 5 is 19.48, slightly lower than that of Method 4, while its PRD of 3.62% is slightly higher. This is because the sampling rate



Figure 4.4: ECG data compression, reconstruction and error performance of Method 4

is reduced at the cost of an increased number of high-frequency components. It also results in a larger period variation within an array, which increases the “overhead”: the sub-fundamental periods used to compensate for their tail parts; see Fig. 4.7.

Fig. 4.8 illustrates this effect. The strength of Method 5 is its reduction of the memory size and the computational complexity.

4.3 Multi-Rate Quantization Based on Signal Importance

The QRS wave is the most critical part of an ECG signal, for it reveals important cardiological information. Therefore, the QRS wave needs to be reconstructed as precisely as possible. We thus divide each ECG period into two parts: the first (important) part includes the QRS wave, and the remaining samples form the second (and less important) part of the period. Different quantization precisions are specified for these two parts: more bits are used to quantize the important QRS part. Fig. 4.9 depicts the concept of multi-rate quantization. By the AMDF, every period of the ECG signal is obtained,



Figure 4.5: Block diagram of 2D transform domain ECG compression with adaptive sampling and multirate quantization.

and one period is denoted {x(i + 1), x(i + 2), ..., x(i + period)}. The procedure to divide this period into two sub-blocks is as follows:

map = arg max_{1≤j≤period} |x(i + j)|  (4.6)

where map is the approximate location of the QRS wave within the period.

{x(i + map − duration), x(i + map − duration + 1), ..., x(i + map + duration)} is chosen as the important part, and the rest of the period forms the less important part. In the less important part, the signal is adjusted as

{x(i + 1), ..., x(i + map − duration − 1), p(1), p(2), ..., p(2·duration + 1), x(i + map + duration + 1), ..., x(i + period)}

where

p(j) = x(i + map − duration − 1) + [x(i + map + duration + 1) − x(i + map − duration − 1)] / (2·duration + 1) · j,  1 ≤ j ≤ 2·duration + 1.
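The intra-period segmentation of eq. (4.6) together with the linear bridge p(j) can be sketched as below (0-based indexing); the sketch assumes the QRS peak lies at least duration + 1 samples away from both ends of the period.

```python
def split_period(seg, duration):
    """Split one ECG period into the important (QRS) sub-block and the
    less important sequence in which the QRS region is replaced by the
    linear bridge p(j).

    seg : the samples x(i+1), ..., x(i+period) of one period.
    Returns (important, less_important); the latter has len(seg) samples.
    """
    period = len(seg)
    # eq. (4.6): the largest-magnitude sample approximates the QRS location
    map_ = max(range(period), key=lambda j: abs(seg[j]))
    lo, hi = map_ - duration, map_ + duration
    important = seg[lo:hi + 1]
    a, b = seg[lo - 1], seg[hi + 1]    # samples bordering the QRS region
    n = 2 * duration + 1
    less = list(seg)
    for j in range(1, n + 1):          # p(j) = a + (b - a)/n * j
        less[lo - 1 + j] = a + (b - a) / n * j
    return important, less
```

The bridge replaces the sharp QRS excursion by a straight line, so the less important sub-signal stays smooth and contributes few high-frequency DCT components.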

Adding p(j) prevents the fast-changing amplitude that would otherwise increase the high-frequency components. The adjusted less-important part of the 2D ECG signal is shown in Fig. 4.10 and the important part in Fig. 4.11. We use more bits to quantize the important part and fewer bits to quantize the less important part. The performance of Method 6 is shown in Fig. 4.12: its CR is 22.05 and PRD = 2.88%. The advantage of Method 6 is that the significant part of the ECG signal (the QRS wave)



Figure 4.6: Original, reconstructed ECG signal based on Method 5 and the corresponding error sequence.

has less distortion than with the previously proposed algorithms, so doctors can precisely determine whether the patient has a heart problem.

Finally, Method 7, shown in Fig. 4.13, combines the three ideas (adaptive sampling, 2D DCT and multi-rate quantization). It gives a CR of 20.5 and a PRD of 3.285%, as shown in Fig. 4.14. The advantage of Method 7 is that the important part is precise, the compression ratio is high, and the computational complexity is reduced.

4.4 Simulation Result

We want to implement our algorithm with a small memory size and reduced computational complexity. For the A/D converter, we want to digitize the ECG data with as few bits as possible without causing a high PRD, so we search for the proper number of bits per sample. Fig. 4.15 compares the PRD, and Fig. 4.16 the CR, of all the methods for different numbers of digitized bits. It is seen that the PRD hardly improves when more than 12 bits



Figure 4.7: A 2D time domain ECG signal representation with adaptive sampling.

are used to digitize each sample, so we choose 12 bits per sample. Table 4.1 lists the CR and PRD values for several other compression methods. The 2D methods exploit the beat-to-beat correlation, and their CR of about 20 is roughly twice that of the 1D methods; their PRD is also lower. Method 6 is the most precise, having the least distortion in the important part (the QRS wave); its PRD is lower than that of the other methods presented.
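For reference, the two figures of merit can be computed as below; the standard PRD definition, 100 · √(Σ(x − x̂)² / Σx²), is assumed here since the thesis does not restate it in this section, and the CR is taken as the bit-count ratio before and after compression.

```python
import math

def prd_percent(x, x_hat):
    """Percent root-mean-square difference between the original signal x
    and the reconstruction x_hat (standard definition assumed)."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """CR = number of bits before compression / bits after compression."""
    return original_bits / compressed_bits
```

A perfect reconstruction gives PRD = 0%, and reconstructing everything as zero gives PRD = 100%, which bounds the useful operating range of the methods compared in Table 4.1.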

Complexity: Table 4.1 also shows the complexity of all the methods, where D is the sampling rate reduction ratio and S is the duration of the QRS part. The adaptive sampling methods reduce the computational complexity, whereas the 2D methods require much higher complexity; we therefore combine the two ideas.

Memory: For the general 1D method, the memory size is LT, where L is the number of digitized bits per sample. The memory size of Method 3 is reduced to LT/D, and that of Method



Figure 4.8: 2D ECG representation with adaptive sampling and 2D DCT.

5, which combines the 2D and adaptive sampling methods, is reduced to NLT/D. Besides, the adaptive sampling method reduces the hardware complexity, since the A/D converter does not have to sample as fast.



Figure 4.9: Block diagram of ECG compression for Method 6.

Figure 4.10: Components of the less important part.



Figure 4.11: Components of the important part

Figure 4.12: Method 6 simulation result (CR = 22.05, PRD = 2.88%): original, reconstructed and error signals.



Figure 4.13: Block diagram of ECG compression for Method 7.

Figure 4.14: Method 7 simulation result (CR = 20.5, PRD = 3.285%): original ECG, reconstructed ECG, and error.



Figure 4.15: PRD comparison of all the methods for different numbers of digitized bits in the A/D.

Figure 4.16: CR comparison of all the methods for different numbers of digitized bits in the A/D.


Compression method, followed by CR, PRD(%), memory and complexity (per period):

AZTEC: CR = 6.8, PRD = 10.0%, memory LT, complexity O(T)
TP: CR = 2.0, PRD = 5.3%, memory LT, complexity O(T)
CORTES: CR = 4.8, PRD = 7.0%, memory LT, complexity O(T)
LPC: CR = 11.6, PRD = 5.3%, memory LT, complexity O(T^2)
Method 1 (1-D, dual-rate sampling judged by the changing rate): CR = 10.03, PRD = 3.71%, memory LT, complexity O(T/D log(T/D))
Method 2 (1-D, dual-rate sampling judged by QRS detection): CR = 14.76, PRD = 3.49%, memory LT, complexity O(T/D log(T/D))
Method 3 (1-D, adaptive sampling): CR = 12.58, PRD = 3.59%, memory LT/D, complexity O(T/D log(T/D))
Method 4 (2-D): CR = 21.87, PRD = 3.25%, memory NLT, complexity O(T log(NT))
Method 5 (2-D, adaptive sampling): CR = 19.48, PRD = 3.61%, memory NLT/D, complexity O(T/D log(NT/D))
Method 6 (2-D, added importance): CR = 22.05, PRD = 2.88%, memory NLT + NLS, complexity O(T log(NT) + S log(NS))
Method 7 (2-D, adaptive sampling, added importance): CR = 20.5, PRD = 3.28%, memory [NLT + NLS]/D, complexity O(T/D log(NT/D) + S/D log(NS/D))

Table 4.1: Compression performance on the ECG signal database for different compression methods.


Bibliography

[1] J. R. Cox, F. M. Nolle, H. A. Fozzard, and G. C. Oliver, "AZTEC, a preprocessing program for real-time ECG rhythm analysis," IEEE Trans. Biomed. Eng., vol. BME-15, pp. 128-129, Apr. 1968.

[2] W. C. Mueller, "Arrhythmia detection program for an ambulatory ECG monitor," Biomed. Sci. Instrum., vol. 14, pp. 81-85, 1978.

[3] J. P. Abenstein and W. J. Tompkins, "New data-reduction algorithm for real-time ECG analysis," IEEE Trans. Biomed. Eng., vol. BME-29, pp. 43-48, Jan. 1982.

[4] W. J. Tompkins and J. G. Webster, Design of Microcomputer-Based Medical Instrumentation, Englewood Cliffs, NJ: Prentice-Hall, 1981.

[5] R. C. Barr, S. M. Blanchard, and D. A. Dipersio, "SAPA-2 is the Fan," IEEE Trans. Biomed. Eng., vol. BME-32, p. 337, May 1985.

[6] Jyh-Jong Wei, "ECG data compression using truncated singular value decomposition," IEEE Trans. Biomed., vol. 5, no. 4, Dec. 2001.

[7] Robert Rieger and Shinyu Chen, "A signal based clocking scheme for A/D converters in body sensor networks," 2006.

[8] C. E. Shannon, "Communication in the presence of noise," Proc. IRE, vol. 37, pp. 10-21, Jan. 1949.

[9] Vladimir Johneff, "Complex valued wavelet analysis for QRS detection in ECG signals," Technical University - Sofia, 2001.

[10] H. S. Malvar, Signal Processing with Lapped Transforms, Norwood, MA: Artech House, 1992.

[11] Information Technology - Generic Coding of Moving Pictures and Associated Audio Information, Part 7: Advanced Audio Coding (AAC), ISO/IEC MPEG International Standard 13818-7, 1997.
