IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 54, NO. 7, JULY 2007 1349

Communications

EEG-Based Assessment of Driver Cognitive Responses in a Dynamic Virtual-Reality Driving Environment

Chin-Teng Lin*, I-Fang Chung, Li-Wei Ko, Yu-Chieh Chen, Sheng-Fu Liang, and Jeng-Ren Duann

Abstract—Accidents caused by errors and failures in human performance account for a large share of traffic fatalities and have become an important issue in public safety. They are mainly caused by drivers failing to perceive changes in the traffic lights or unexpected events occurring on the road. In this paper, we devised a quantitative analysis for assessing drivers' cognitive responses by investigating the neurobiological information underlying electroencephalographic (EEG) brain dynamics in traffic-light experiments in a virtual-reality (VR) dynamic driving environment. The VR technique allows subjects to interact directly with a moving virtual environment instead of receiving monotonic auditory and visual stimuli, thereby providing interactive and realistic tasks without the risk of operating an actual vehicle. Independent component analysis (ICA) is used to separate and extract noise-free ERP signals from the multi-channel EEG signals. A temporal filter is used to solve the time-alignment problem of ERP features, and principal component analysis (PCA) is used to reduce the feature dimensions. The dimension-reduced features are then input to a self-constructing neural fuzzy inference network (SONFIN) to recognize the different brain potentials elicited by red/green/yellow traffic events; the accuracy reaches 87% on average over eight subjects in this visual-stimulus ERP experiment. This demonstrates the feasibility of detecting and analyzing multiple streams of ERP signals that represent operators' cognitive states and responses to task events.

Index Terms—Cognitive state, event-related potential, fuzzy neural network, independent component analysis, principal component analysis, temporal filter, virtual reality.

I. INTRODUCTION

In recent years, driving safety has received increasing attention due to the growing number of traffic fatalities. Among these fatalities, the most frequent causes are drunk driving, speeding, and red-light running. Preventing such accidents is thus a

Manuscript received March 21, 2005; revised October 14, 2006. This work was supported in part by the National Science Council, Taiwan, under Grant NSC 94-2218-E-009-031- and in part by the "Aiming for the Top University Plan" of the National Chiao Tung University and the Ministry of Education, Taiwan, under Grant 95W803E of the MOE ATU Program.

*C.-T. Lin is with the Departments of Electrical and Control Engineering/Computer Science and the Brain Research Center, National Chiao-Tung University, Hsinchu 300, Taiwan (e-mail: ctlin@mail.nctu.edu.tw).

I-F. Chung is with the Institute of Bioinformatics, National Yang-Ming University, Taipei 112, Taiwan (e-mail: ifchung@ym.edu.tw).

L.-W. Ko and Y.-C. Chen are with the Department of Electrical and Control Engineering and the Brain Research Center, National Chiao Tung University, Hsinchu 300, Taiwan (e-mail: lwko@mail.nctu.edu.tw).

S.-F. Liang is with the Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan, and also with the Brain Research Center, National Chiao Tung University, Hsinchu 300, Taiwan (e-mail: sfliang@mail2000.com.tw).

J.-R. Duann is with the Department of Computer Science and the Brain Research Center, National Chiao Tung University, Hsinchu 300, Taiwan, and also with the Institute for Neural Computation, University of California, San Diego, CA 92037 USA (e-mail: duann@sccn.ucsd.edu).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TBME.2007.891164

Fig. 1. Physiological signal measurement system with kinesthetic/visual/auditory stimuli in the 3-D dynamic VR-based traffic-light motion simulation experiments.

major focus of efforts in the field of active safety research for vehicle safety-driving systems. In recent studies [1]–[3], many researchers have proposed quantitative techniques for the ongoing assessment of cognitive effort, engagement, and workload by investigating the neurobiological mechanisms underlying electroencephalographic (EEG) brain dynamics. In these studies, brain event-related potential (ERP) signals were used to determine the relationship between different stimuli and the human cognitive responses corresponding to correct/incorrect motor reactions. Some ERP features (such as the P300) were computed by on-line averaging for use in biofeedback, for example, to move a cursor on a computer screen [4]–[9]. Bayliss et al. [10], [11] designed an experiment to recognize the presence of P300 ERP epochs at red stoplights and the absence of this signal at yellow stoplights in a virtual driving environment. They showed that building a BCI using the P300 ERP would be feasible. The main purpose of this paper is to analyze recorded single-trial EEG, extract and combine the multidimensional information obtained from the scalp EEG, and model the dynamics of the underlying brain networks in a dynamic VR environment.

II. METHODOLOGY

A driving environment based on a virtual-reality (VR) scene and a six-degree-of-freedom (DOF) motion platform was constructed to study drivers' responses to different traffic-light events. The VR technique allows subjects to interact directly with the virtual environment instead of perceiving monotonic auditory and visual stimuli, which makes it an excellent strategy for brain research, providing interactive and realistic tasks without the risk of operating an actual machine. The overall dynamic VR-based experimental setup is shown in Fig. 1. The subject was asked to decelerate/stop the car when he/she saw a red light, to accelerate the car when he/she saw a yellow light, and to do nothing (keep a constant speed) when he/she saw a green light. The 31-channel scalp EEG and 4-channel EOG were simultaneously recorded at a 1 kHz sampling rate and then re-sampled down to 500 Hz to simplify data processing.



Fig. 2. Flowchart of EEG data analysis in our traffic-light driving experiments.

The EEG data were then preprocessed using a simple low-pass filter with a cutoff frequency of 50 Hz to remove the line noise and other high-frequency noise for further analysis.
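For concreteness, the following is a minimal sketch of this preprocessing step, assuming Python with NumPy/SciPy; the array `raw_eeg` and its contents are placeholders for illustration, not the authors' code or data.

```python
# Minimal preprocessing sketch (not the authors' code): down-sample the
# 1 kHz recording to 500 Hz and apply a 50 Hz low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

fs_orig, fs_new = 1000, 500                     # original and target sampling rates (Hz)
raw_eeg = np.random.randn(31, 60 * fs_orig)     # placeholder: 31 channels, 60 s of data

# Down-sample by a factor of 2 (1 kHz -> 500 Hz); resample_poly applies
# its own anti-aliasing filter internally.
eeg_500 = resample_poly(raw_eeg, up=1, down=fs_orig // fs_new, axis=1)

# Zero-phase low-pass filter with a 50 Hz cutoff to suppress line noise
# and other high-frequency noise.
b, a = butter(N=4, Wn=50, btype="low", fs=fs_new)
eeg_clean = filtfilt(b, a, eeg_500, axis=1)
```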

A total of eight subjects (seven aged 21–25 years and one aged 40 years) participated in the driving experiments. Each subject completed five or six 10-min sessions (with a 10–15 min break between sessions) in one driving simulation experiment. Each session contained 150 events. The event allotment ratios were 30%, 60%, and 10% for red, green, and yellow traffic lights, respectively. Each stimulus appeared at a random interval of 1.7, 2.1, or 2.3 s and lasted for 300 ms.

III. ANALYSIS OF EEG SIGNALS

Fig. 2 shows the system flowchart for processing the ERP signals. The continuous EEG signals were segmented into epochs/trials, where each epoch or trial contained the sampled EEG data from 200 ms before to 1000 ms after the onset of the traffic light. The recorded EEG signals were cleaned using Infomax ICA as implemented in EEGLAB [12] to remove a variety of noise sources related to eye activities and others [13], [14]. We cut the cleaned EEG signals to obtain the ERPs for further analysis. Then, a temporal matching filter was devised to solve the time-alignment problem, and PCA was applied to the filtered ERP data to reduce the dimensionality and select the representative components. Finally, we developed a self-constructing neural fuzzy inference network (SONFIN) to classify the recorded ERP signals.
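As an illustration of the epoching step, the sketch below cuts a continuous 500 Hz recording into trials spanning 200 ms before to 1000 ms after each stimulus onset; the variables `eeg_clean` and `onset_samples` are hypothetical placeholders, not part of the original work.

```python
# Epoching sketch (an illustration, not the authors' code): segment the
# continuous recording into single trials around each traffic-light onset.
import numpy as np

fs = 500                                       # sampling rate after down-sampling (Hz)
pre, post = int(0.2 * fs), int(1.0 * fs)       # 200 ms before, 1000 ms after onset
eeg_clean = np.random.randn(31, 60 * fs)       # placeholder: filtered 31-channel data

def extract_epochs(eeg, onsets, pre, post):
    """Return an (n_trials, n_channels, pre+post) array of single-trial epochs."""
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > eeg.shape[1]:
            continue                           # skip events too close to the record edges
        epochs.append(eeg[:, onset - pre:onset + post])
    return np.stack(epochs)

# onset_samples would come from the event log of the VR simulator (placeholder here).
onset_samples = np.array([1200, 2750, 4300])
epochs = extract_epochs(eeg_clean, onset_samples, pre, post)
```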

A. Matching Filter

Due to the time-varying and nonstationary nature of the P300 in single trials of the same stimulus, a frequently encountered classification problem is time alignment caused by the varying latency of the P300. Single-trial ERP signals elicited by the same stimulus could be projected onto different principal components in PCA because of this misalignment. To solve this problem, after collecting high-fidelity ERP signals, the temporal matching filter [15] was built by averaging the first N single trials as the standard P300 pattern for each subject. Then, we calculated the cross-correlation between the matching filter and each subsequent single trial and found the maximum magnitude of the cross-correlation function. Finally, the original single-trial sequence was shifted to a new time sequence according to the lag of the maximum cross-correlation value.
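The following sketch shows one possible implementation of this alignment procedure; it is an approximation of the described method under the assumptions noted in the comments, not the authors' code.

```python
# Temporal matching filter sketch: average the first N trials into a P300
# template, then shift each later trial by the lag that maximizes its
# cross-correlation with the template.
import numpy as np

def build_template(trials, n_template=20):
    """Average the first n_template single trials of one component/channel."""
    return trials[:n_template].mean(axis=0)

def align_trial(trial, template, max_shift=50):
    """Shift one single-trial waveform to best match the template.

    Only lags within +/- max_shift samples are considered; np.roll wraps
    samples around the epoch edges, which a full implementation might
    replace with zero-padding.
    """
    lags = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(trial, lag), template) for lag in lags]
    best_lag = lags[int(np.argmax(np.abs(scores)))]
    return np.roll(trial, best_lag)

# trials: (n_trials, n_samples) array of the selected ERP component (placeholder).
trials = np.random.randn(100, 600)
template = build_template(trials)
aligned = np.array([align_trial(t, template) for t in trials])
```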

B. Self-Constructing Neural Fuzzy Inference Network

The SONFIN is a general connectionist model of a fuzzy logic system [16]. It can construct an optimal structure and tune its parameters automatically. Both the structure and the parameter identification schemes are carried out simultaneously during on-line learning, without any fuzzy rules being assigned in advance. The rules are created dynamically as learning proceeds upon receiving on-line incoming training data, by performing the following learning processes simultaneously: 1) input/output space partitioning; 2) construction of fuzzy rules; 3) optimal consequent structure identification; and 4) parameter identification.

IV. RESULTS AND DISCUSSION

A. Component Selection of ICA

The measured ERP signals were first analyzed using the ICA algorithm trained on single trials. After training, we obtained 31 ICA components from the 31-channel EEG data. Fig. 3 shows the flowchart of the ICA algorithm applied to the EEG data analysis. Fig. 3(a) shows the averaged ERP signals for the 31 channels, where each line represents the averaged ERP signal of one channel. The amplitude of the artifacts (EOG, etc.) is larger than that of the ERP (P300), and their scalp locations are clearly visible. Their detailed influence can be further observed in the single trials of the Pz channel shown in Fig. 3(b), where the horizontal axis is the time scale from 100 ms before to 800 ms after stimulus onset, the vertical axis is the trial index, and the amplitude of each single trial is indicated by the color bar. The artifact can be observed in almost every single trial. The topographic maps of the 31 ICA components obtained after training are shown in Fig. 3(c). Most artifacts and the representative visual ERP signal (P300) are effectively separated into ICA components 1 and 4, respectively, as shown in Fig. 3(d) and (e). The independent components returned by ICA are ranked according to their variances, so the order of this visual ERP source can differ from subject to subject. Comparing Fig. 3(b) with Fig. 3(e), we can also see that the ERP signal obtained by applying ICA to the single-trial data is clearer and more noise-free than the original one. The component containing the visual ERP (component 4 in this example) is selected for further analysis for two reasons: 1) the selected component is centered at Pz, consistent with the topography of the P300 [17]; and 2) the P300 waveform of the selected component is more prominent than those of the other components, as shown in Fig. 3(f). In Fig. 3(f), the P300 amplitude of component 4 is roughly 3–5 times larger than those of components 1 and 12. Combining two or more ICA components (with P300 ERP) to form the feature for P300 recognition was also tested in our simulations. The recognition rates increase slightly, by about 1–2%, when 2 or 3 ICA components (with P300 ERP) are used as feature components, whereas when 4 components (with P300 ERP) are used, the recognition rates become about 2% lower on average than using the single major component (centered at Pz). Thus, one major source (i.e., ICA component) containing the visual ERP is selected for further analysis.
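A rough illustration of the separation-and-selection idea is sketched below. Note that the paper uses Infomax ICA as implemented in EEGLAB, whereas this sketch substitutes scikit-learn's FastICA (a different ICA algorithm); the Pz channel index and all array contents are placeholders.

```python
# ICA component-selection sketch (FastICA stands in for the Infomax ICA of EEGLAB).
import numpy as np
from sklearn.decomposition import FastICA

fs = 500
n_channels, n_trials, n_samples = 31, 100, 600
epochs = np.random.randn(n_trials, n_channels, n_samples)   # placeholder epochs

# Concatenate trials in time: ICA expects (n_observations, n_channels).
X = epochs.transpose(0, 2, 1).reshape(-1, n_channels)

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(X)                 # (n_observations, n_components)
mixing = ica.mixing_                           # (n_channels, n_components) scalp projections

# Reshape the sources back into single-trial component activations.
src_epochs = sources.reshape(n_trials, n_samples, n_channels).transpose(0, 2, 1)

# Selection heuristics from the paper: (1) scalp projection centered at Pz and
# (2) the largest average P300 deflection around 300-500 ms after onset.
pz_idx = 12                                    # assumed index of the Pz channel
p300_win = slice(int(0.5 * fs), int(0.7 * fs)) # 300-500 ms post-onset (epoch starts at -200 ms)
pz_weight = np.abs(mixing[pz_idx, :])
p300_amp = np.abs(src_epochs.mean(axis=0)[:, p300_win]).max(axis=1)
best_component = int(np.argmax(pz_weight * p300_amp))
```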

B. Performance Comparison

After passing the data through the temporal matching filter and using PCA to reduce the feature dimensions, the selected PCA components (ERP data ranging from 0 ms to 800 ms) were then used to train different classification models for comparison.

Leave-one-out cross-validation was first applied to the classification procedure, with the ERP data of one subject used as the testing data and the remaining seven subjects' ERP data used as the training data. The testing results of linear discriminant analysis (LDA) ranged from 0.32 to 0.57 (0.439 on average). Since the EEG activities varied between subjects, we did not construct a global classifier but instead built an individual classifier for each subject, trained only on the features extracted from that subject's training-pattern ERPs. In our study, projections (PCA components) of the ERP data onto the subspace formed by the eigenvectors corresponding to the largest n eigenvalues were then used as inputs to train the individual classifiers for each subject. The total explained variance and the sensitivity/specificity of the eight subjects were evaluated by LDA to find an appropriate number of PCA components. The relationship between the number of PCA components and the averages over the eight subjects of the total variance and sensitivity/specificity is given in Fig. 4. The variance explained by one PCA component is 40%, whereas the variance explained by 50 PCA components is nearly 95%. Thus, the number of PCA components was set to 50 for further analysis.
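As a hedged illustration of the per-subject classification stage, the sketch below projects single-trial ERPs onto 50 principal components and scores a linear classifier with within-subject cross-validation; LDA from scikit-learn stands in for the comparison classifier, the SONFIN model itself is not reproduced, and all data arrays are placeholders.

```python
# Per-subject PCA + LDA classification sketch (not the authors' code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# aligned: (n_trials, n_samples) single-trial ERPs of one subject (0-800 ms at 500 Hz);
# labels: 0 = red, 1 = green, 2 = yellow traffic light. Both are placeholders.
aligned = np.random.randn(150, 400)
labels = np.random.randint(0, 3, size=150)

clf = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, aligned, labels, cv=5)     # 5-fold within-subject CV
print(f"mean accuracy: {scores.mean():.2f}")
```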



Fig. 3. Flowchart of artifact removal using the ICA algorithm. (a) Averaged EEG signal for 31 channels. (b) Single-trial EEG signal in the Pz channel. (c) Scalp topographic maps of the ICA components. (d) Separated artifacts in ICA component 1. (e) Separated noise-free ERP in ICA component 4. (f) Average ERP of components 1, 4, and 12.

Table I shows the sensitivity/specificity of the eight subjects for three classifiers: LDA (a linear classifier), a back-propagation model (BP, a nonlinear classifier), and SONFIN. Table I clearly shows that the recognition rate increases by up to 5% with the temporal matching filter for both LDA and SONFIN. The recognition results (expressed as sensitivity) for the three lights (red/green/yellow) were 78%–87% on average over the eight subjects with the different classifiers (both linear and nonlinear). In the previous VR-based stop-light experiment without the dynamic platform [10], the recognition results for two lights (red/yellow) were 67%–83% on average over five subjects with different algorithms. In our study, combining ICA, PCA, and the temporal matching filter, the SONFIN classifier showed excellent recognition ability on the traffic-light visual ERP, with a sensitivity of 79%–95%.



Fig. 4. Traffic-light-stimulated ERP classification results of LDA, comparing the effects of the number of PCA components and the matching filter.

TABLE I

SENSITIVITY/SPECIFICITY OF THREE CLASSIFIERS WITH/WITHOUT TEMPORAL MATCHING FILTERS FOR EIGHT SUBJECTS IN THE VR-BASED TRAFFIC-LIGHT MOTION SIMULATION EXPERIMENTS

V. CONCLUSION

In this paper, we developed a quantitative analysis technique for the ongoing assessment of drivers' cognitive responses by investigating the neurobiological information underlying EEG brain dynamics in traffic-light motion simulation experiments. The system consists of a VR motion-simulation driving platform and an EEG signal detection and analysis system. The dynamic VR technology used in the current study not only provides dynamic motion stimuli in addition to conventional audio/visual ones, but also extends possible safety-driving prototypes to the general population (possibly including locked-in patients) by allowing subjects to interact directly with virtual objects. We proposed a detailed experimental design and data-processing procedure for measuring and analyzing ERP signals. The experimental results show that the proposed signal processing procedure can correctly analyze ERP signals in single trials without using the traditional time-domain overlap-and-add method. After applying the ICA algorithm, we obtained clear, noise-free ERP signals in single trials. We also designed a new temporal matching filter to solve the time-alignment problem, which increases the recognition rate by up to 5%. After using PCA to reduce the feature dimensions and save computation cost, we classified the ERP features using the LDA, BP, or SONFIN classifiers. The classification results show that the proposed SONFIN achieves a high recognition rate of about 85% on average.

ACKNOWLEDGMENT

The authors would like to thank T.-P. Jung, T.-Y. Huang, K.-C. Huang, S.-C. Guo, and Y.-J. Chen for their great help with developing and operating the experiments.

REFERENCES

[1] A. Kemeny and F. Panerai, "Evaluating perception in driving simulation experiments," Trends Cogn. Sci., vol. 7, no. 1, pp. 31–37, Jan. 2003.

[2] Z. Duric, W. D. Gray, R. Heishman, L. Fayin, A. Rosenfeld, M. J. Schoelles, C. Schunn, and H. Wechsler, "Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction," Proc. IEEE, vol. 90, no. 7, pp. 1272–1289, Jul. 2002.

[3] D. D. Schmorrow and A. A. Kruse, "DARPA's augmented cognition program—tomorrow's human computer interaction from vision to reality: Building cognitively aware computational systems," in Proc. 2002 IEEE 7th Conf. Human Factors and Power Plants, Sep. 2002, pp. 7/1–7/4.

[4] P. Sykacek, S. J. Roberts, and M. Stokes, "Adaptive BCI based on variational Bayesian Kalman filtering: An empirical evaluation," IEEE Trans. Biomed. Eng., vol. 51, no. 5, pp. 719–727, May 2004.

[5] D. J. McFarland, G. W. Neat, R. F. Read, and J. R. Wolpaw, “An EEG-based method for graded cursor control,” Psychobiology, vol. 21, no. 1, pp. 77–81, 1993.

[6] G. Pfurtscheller, D. Flotzinger, M. Pregenzer, J. Wolpaw, and D. McFarland, "EEG-based Brain Computer Interface (BCI)," Med. Prog. Technol., vol. 21, pp. 111–121, 1996.

[7] C. E. Davila and R. Srebro, "Subspace averaging of steady-state visual evoked potentials," IEEE Trans. Biomed. Eng., vol. 47, no. 6, pp. 720–728, Jun. 2000.

[8] T. J. Dasey and E. M. Tzanakou, “Detection of multiple sclerosis with visual evoked potentials—An unsupervised computational intelligence system,” IEEE Trans. Inf. Technol. Biomed., vol. 4, pp. 216–224, Sep. 2000.

[9] X. R. Gao, D. F. Xu, M. Cheng, and S. K. Gao, "A BCI-based environmental controller for the motion-disabled," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 137–140, Jun. 2003.

[10] J. D. Bayliss and D. H. Ballard, “Single trial P3 recognition in a virtual environment,” Signal Process., vol. 36, pp. 287–314, 1994.

[11] ——, "Recognizing evoked potentials in a virtual environment," in Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2000, vol. 12, pp. 3–9.

[12] A. Delorme and S. Makeig, "EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," J. Neurosci. Meth., 2004.

[13] T. P. Jung, S. Makeig, C. Humphries, T. W. Lee, M. J. McKeown, V. Iragui, and T. J. Sejnowski, “Removing electroencephalographic artifacts by blind source separation,” Psychophysiology, vol. 37, pp. 163–178, 2000.

[14] T. P. Jung, S. Makeig, W. Westerfield, J. Townsend, E. Courchesne, and T. J. Sejnowski, "Analysis and visualization of single-trial event-related potentials," Hum. Brain Mapp., vol. 14, pp. 166–185, 2001.

[15] S. Theodoridis and K. Koutroumbas, Pattern Recognition. New York: Academic Press, 1999.

[16] C. F. Juang and C. T. Lin, “An on-line self-constructing neural fuzzy inference network and its applications,” IEEE Trans. Fuzzy Syst., vol. 6, no. 1, pp. 12–32, Feb. 1998.

[17] D. F. Salisbury, M. E. Shenton, and R. W. McCarley, “P300 topography differs in schizophrenia and manic psychosis,” Biol. Psychiatry, vol. 45, pp. 98–106, 1999.

