ESE498

eyeTalk

SSVEP-Based BCI Using Emotiv Headset

By

Jake Lefkowitz and Jenny Liu

    

 

 

Supervisor

Dr. Robert Morley

 

 

 

 

 

 

Submitted in Partial Fulfillment of the Requirements for the BSEE Degree, Electrical and Systems Engineering Department, School of Engineering and Applied Science,

Washington University in St. Louis

 

May 2013

Front Matter

Student Statement

            We, Jake Lefkowitz and Jenny Liu, abided by the Washington University code of ethics in this design project and its write-up, and followed engineering design ethics to the best of our knowledge.

Abstract

Approximately 7 out of every 100,000 Americans have motor neuron diseases (MNDs) such as multiple sclerosis and Lou Gehrig’s disease. People with advanced MNDs often have trouble speaking, which can decrease their quality of life. The eyeTalk is a brain computer interface that allows subjects to communicate ‘yes’ and ‘no’ using affordable, commercially available products.

Concentrating on a blinking light induces steady-state visually evoked potential (SSVEP) signals in the brain’s occipital lobe at the same frequency as the stimulus. The SSVEP signal is used as a control signal to play ‘yes’ and ‘no’ on a speaker. EEG signals from the occipital lobe are acquired with the Emotiv EPOC™ EEG headset while the subject concentrates on an LED blinking at 12.8 Hz or 16 Hz. Features of the EEG signals in the frequency domain are used to determine whether the subject is concentrating on a blinking light.

Our algorithm detected 16Hz and 12.8Hz stimuli with 99% and 80% accuracy, respectively. Our signal processing procedure requires 4.375 seconds of data, which results in an information transfer rate of approximately 9.1 bits/minute.

Acknowledgement

We would like to thank Professors Ed Richter and Robert Morley for facilitating the ESE 498 capstone project and the BCI student group for allowing us to use their headsets. We would like to thank physical therapist Arlene Goldberg for providing guidance on applications of our design. This work is supported by the McKelvey Undergraduate Research Fellowship, the Washington University Electrical and Systems Engineering Department, and the Washington University Engineering Project Review Board. We would like to thank Emotiv Systems for donating a headset in 2011 and Jasmine Kwasa for purchasing a headset with her McKelvey fellowship in 2013.

 

 

 


 

Table of Contents

Front Matter

Student Statement

Abstract

Acknowledgement

List of Figures and Tables

Problem Formulation

Problem Statement

Problem Formulation

Project Specifications

Concept Synthesis

Literature Review

Concept Synthesis

Concept Generation

Concept Reduction

Signal Acquisition

Improving Signal Quality

Feature Extraction

Stimulus

Subject

Detailed Engineering Analysis and Design Presentation

1: Interface

2: Signal Acquisition

3: Signal Processing

4: Detection

5: Positive Feedback

Cost Analysis

Bill of Materials

Hazards and Failure Analysis

Conclusions

Bibliography

Appendix A

Top

1: EmoTop

1.1: Chunk of N = 80

1.2: Line Up

1.3: Signal Averager

1.4: A&B

2: Stim

3: Analog Write

Appendix B

 

List of Figures and Tables

Figure 1 - Map of electrode locations on a human head. SSVEP signals primarily occur at the O1 and O2 electrode positions.

Figure 2 - Sample SSVEP signals from a review paper [7]. The raw SSVEP signal in response to a 10 Hz stimulus in the time domain (A1) and frequency domain (A2) shows a 10 Hz peak. After 20 trial averages, the frequency domain (B2) shows a more distinct peak at 10 Hz, demonstrating the benefit of signal averaging.

Figure 3 - Concept Generation Chart.

Figure 4 - (Top) Raw EEG signal collected over six seconds has large contributions from noise in the time domain (A) and frequency domain (B and C). For the raw EEG, in the frequency range of interest (C), there is no distinguishable peak at the stimulus frequency.

Figure 5 - (Left) Digitizations of an analog signal at the same sampling rate are slightly offset due to different phases.

Figure 6 - Headset frequency response. A 38 uVpp sinusoid with a 20 uV DC offset and varying frequency from the HP 3312A function generator in Bryan 306 was wired to the O1 electrode and the two headset ground electrodes. The FFT magnitude at the input frequency was plotted. There is a drop-off between 25 and 30 Hz.

Figure 7 – (Top) ‘Sliding window’ vs. traditional signal averaging. ‘Sliding window’ fits more averages into the same sample length.

Figure 9 - Linear discriminant analysis shows good separation between the 12.8 Hz and 16 Hz conditions, as well as between 16 Hz and Neutral, and mediocre separation between Neutral and 13 Hz. The blue line separates 12.8 Hz and 16 Hz. Projecting the data onto the axis perpendicular to this line would give the maximal separation between the 12.8 Hz and 16 Hz stimuli. The axes are the FFT magnitudes at 13 Hz and 16 Hz.

Figure 8 - Features extracted from the frequency domain. We found that the FFT magnitude at the stimulus frequency and its ratio to neighboring frequencies worked.

Figure 10 - FFT of EEG signals with 8 Hz stimulus, driven by the DAQ Write VI (left) and the Function Generator VI (right). Because the peak in the spectrum was closer to 8 Hz when the DAQ Write VI was used (8.007 compared to 8.018), we concluded that the frequency of the stimulus was closer to 8 Hz with the DAQ Write VI.

Figure 11 - FFT of EEG signals from an 8 Hz (left) and a 19 Hz (right) stimulus. It appears that in this range of frequencies, a strong SSVEP signal can be produced.

Figure 12 - FFT of EEG signals with 32 Hz stimulus.

Figure 13 - ROC curves for 8, 12.8 and 16 Hz stimuli. Detection with an 8 Hz stimulus is less sensitive and more prone to errors than the other stimulus frequencies.

Figure 14 - FFT of the subject’s EEG signals with a 10 Hz stimulus.

Figure 15 - FFT of EEG signals with 8 Hz stimulus when the lights in the room are on (left) and off (right).

Figure 16 - Data flow diagram for the eyeTalk. EEG signals recorded with the Emotiv EPOC™ are processed using LabView. The control signal plays “yes” or “no” on the speaker. The user receives positive feedback when the red LED next to the stimulus LED turns on.

Figure 17 – (A) Circuit board with LEDs for stimulus and positive feedback. The stimulus LED is blue (B), and the positive feedback LED is red (C).

Figure 18 - Dataflow diagram for signal processing. The EEG signal from the headset is arranged into packets of 80 and aligned in a buffer. The data in the buffer is averaged, and the averaged signal is then used by the detection algorithm. The optional upsampling step allows the use of additional stimulus frequencies.

Figure 19 - Implementation of the dataflow diagram described in Figure 18 in LabView. (A): without the upsampling block, and (B): with the upsampling block.

Figure 20 - The subVI Chunks of N = 80 outputs packets of raw or simulated EEG data of length N = 80 rows by 22 columns.

Figure 21 – LabView code for Data Chunk, responsible for updating the Temp and Output Buffers. When the Temp Buffer has accumulated more than 80 rows of data, the first 80 rows are written to the Output Buffer. Any leftover rows are written to the beginning of the Temp Buffer. This subVI has a pointer, s, that tracks the number of rows of data in the Temp Buffer. In the next iteration of the while loop, incoming data are appended at s.

Figure 22 – UpSampler subVI. To upsample by a factor of M, zeros are appended to the spectrum of the incoming signal. The inverse Fourier transform yields an interpolated version of the original incoming signal.

Figure 23 – (A): Representation of data in the time domain. (B): Filling the Line Buffer.

Figure 24 – The Stop Filling subVI in the LineUp subVI. The LabView code implements the steady-state scenario described graphically in Figure 23. The Boolean “Enough for Averaging” is used to control the next step in the signal-processing dataflow. The Line Buffer is output to the signal-averaging block.

Figure 25 - ROC curves for various numbers of signal averages. We chose 6 signal averages as a good compromise between dwell time and accuracy. These ROC curves were constructed from 122 trials collected on Jake Lefkowitz.

Figure 26 - Peak detection as described in (1) and (2) above.

Figure 27 - Detection tree. 16 Hz is detected first because it has higher accuracies.

Figure 28 - Screenshot of the EmoTop front panel detecting a 12.8 Hz sinusoid in uniform noise, with -21.6 dB SNR. There is a peak at 13 Hz (red rectangle added).

Figure 29 - ROC curves with positive feedback. Data came from 28 trials collected on Jake Lefkowitz.

Figure 30 - Top. Controls adjust the detection thresholds for 16 Hz and 12.8 Hz. First-level subVIs are 1) EmoTop, which handles the signal processing; 2) Stim, which controls the LEDs used for stimulation; 3) Analog Write, which controls the LEDs used for positive feedback; and 4) Music Out, which controls the speaker output.

Figure 31 – EmoTop performs signal processing and detection. (A): without the upsampling block, and (B): with the upsampling block. EmoTop can run with a simulated noisy signal or raw EEG signal from the Emotiv headset.

Figure 32 - Chunk of N = 80 outputs 80x22 packets of simulated or raw EEG data. SubVIs: 1) Emotiv Front Panel interfaces with the Emotiv headset (given to us by Sam Fok); 2) Data Stim packages the simulated noisy sinusoid in small chunks, just like receiving data from the headset; 3) Which Data to Use? toggles between real EEG data and the simulated data; 4) Data Chunk packages received data in packets of 80x22.

Figure 33 - Data Stim packages the simulated noisy sinusoid into small packets, simulating data received from the headset.

Figure 34 - Which Data to Use toggles between simulated and real data.

Figure 35 – Data Chunk has three states. 1) (Bottom Right) There is not enough data to output a packet, so the incoming data is written to the Temp Buffer. 2) (Bottom Left) There is exactly enough data to output a packet; an 80x22 packet is output. 3) (Top) There is more than enough data to output a packet; an 80x22 packet is output, and any data not output is moved to the beginning of the Temp Buffer.

Figure 36 – LineUp prepares the packets of data for ‘sliding window’ signal averaging. SubVIs: 1) Stop Filling is the steady-state scenario; the subVI outputs the Line Matrix when there is enough data for signal averaging. 2) Keep Filling is the initialization scenario.

Figure 37 - Stop Filling outputs the Line Matrix for signal averaging. Only the columns holding complete windows are used for signal averaging. The Line Up Matrix is then updated almost like a FIFO buffer: the 2nd through last columns shift forward by one position, and the incoming packet finishes filling the last partially filled column before beginning the next.

Figure 38 - Keep Filling initializes the Line Up Matrix by progressively filling in the columns until the last column is partially filled.

Figure 39 - Signal Averager computes the average if there is enough data to average. Otherwise, it outputs an array filled with a constant (useful for troubleshooting).

Figure 40 - A&B detects whether there is a peak in the Fourier transform at 8 Hz, 12.8 Hz, or 16 Hz. Due to low accuracy, we skipped 8 Hz by increasing its thresholds to 100. The subVI Peak Detect determines if there is a peak.

Figure 41 - Peak Detect compares the magnitude at the frequency of interest, G(f), and its ratio to neighboring frequencies against threshold and ratio constants. A peak is detected if the value and ratio both exceed their respective constants.

Figure 42 - Stim uses digital pattern writing to produce 16 Hz and 12.8 Hz signals from digital output lines 0 and 2, respectively, on the NI Elvis. Lines 1 and 3 have controls for adding more stimulus frequencies.

Figure 43 - Analog Write turns on and off the voltage coming from the analog out pins of the NI Elvis.

Figure 44 - Music Out says ‘yes’ and ‘no’. This is a modification of an example on the NI website called “Play Wave.”

Figure 45 - Simulation for 16 Hz input frequency with -21.6 dB SNR. 16 Hz is detected.

Figure 46 - Simulation for 12.8 Hz input frequency with -21.6 dB SNR. 12.8 Hz is detected.

Figure 47 - Simulation for 0 Hz input frequency with -21.6 dB SNR. “Is Control” (neither stimulus) is detected.

 

 

 


 


Problem Formulation

Problem Statement

In the US, motor neuron diseases (MNDs) such as multiple sclerosis (MS) and amyotrophic lateral sclerosis (ALS) affect 7 out of every 100,000 people [1]. MNDs impair voluntary muscle control, affecting the ability to speak, walk, breathe, and swallow [2]. Many individuals with advanced-stage MNDs cannot use existing assistive-living technologies that require coordinated muscle control, such as interfaces that use voice recognition [3], puff and sip [4], or a lever manipulated with the tongue or chin [5]. As a result, individuals with advanced-stage MNDs experience reduced quality of life due to a decreased ability to interact with their environments and communicate with their caregivers. While current brain-computer interfaces based on fMRI, MEG, and EEG require minimal voluntary motion and could help individuals with advanced MNDs communicate, these devices are expensive, bulky, and primarily dedicated to research use. Our goal is to build an accessible, easy-to-use interface, made from commercial components, that lets patients with advanced MNDs communicate basic needs.

 

Problem Formulation

The increasing development of commercially available brain-computer interfaces (BCIs) may provide an avenue for people with advanced MNDs to communicate and interact with their environment. Steady-state visually-evoked potential (SSVEP) signals have a relatively high signal-to-noise ratio, are resistant to artifacts and are well-characterized for use in BCIs [6]. SSVEP-based brain computer interfaces only require the user to concentrate on a blinker in the visual field, so this technology is uniquely suited for patients who are physically unable to operate other assistive-living devices [7]. We propose an electroencephalography (EEG) BCI that uses the SSVEP signal to allow individuals with advanced MNDs to communicate “yes” and “no.” We will assess our ability to harness the SSVEP signal using a commercial EEG system by testing the device on a healthy test subject, evaluating the detection accuracy and calculating the information transfer rate (ITR). The accuracy and ITR of our system will be compared against the existing interfaces listed in [7]. We hope to provide a reasonably easy-to-use, inexpensive, and accurate interface to allow individuals with advanced MNDs to communicate at least basic desires to the world.


 

Project Specifications

Table 1 - Table of specifications for the eyeTalk.

Interface & Usability

Easy-to-use

System setup takes less than 10 minutes

System should be usable for at least a one-hour session

Functional

Enables the user to communicate “yes” and “no”

Interface requires few fine-motor skills

Accessible

Components used should be available commercially to the public in the USA

Cost

Total device cost should be less than $1000 (optimal) and $2000 (satisfactory)

Compact

Entire device should fit in a 12” x 5” x 6” box

Stimuli should fit in a 2” x 1” x 0.5” box

Signal Acquisition

Frequency range

Can detect at least two signals between 5-50Hz

Location

Has electrodes over the occipital lobe of the brain

Signal Processing and Detection

Accuracy

Detects the stimuli frequencies with at least an accuracy of 70%, evaluated as the area under the ROC curve

Detection duration

Subject concentration on stimulus should be detected within 10 seconds (optimal) and within 30 seconds (satisfactory)

Cost

Affordable

Total device cost should be less than $1000 (optimal) and $2000 (satisfactory)

Concept Synthesis

Literature Review

            Steady-state visually-evoked potential (SSVEP) signals are an increasingly popular choice for brain computer interfaces (BCIs) because of their relatively high signal-to-noise ratio (SNR). Because multiple stimuli can be detected with high accuracy, SSVEP-based systems can achieve high information transfer rates (ITRs) [8]. SSVEP signals are produced when a user focuses on a visual stimulus blinking at a constant frequency. Brain waves at the same frequency as the stimulus and its harmonics can be detected above the occipital lobe, at the O1 and O2 international electrode positions [7]. Figure 1 is a map of the Emotiv electrode positions; the SSVEP signals primarily occur at the O1 and O2 electrodes.

Figure 1 - Map of electrode locations on a human head.  SSVEP signals primarily occur at the O1 and O2 electrode positions.

Previous studies have reported that users are able to control SSVEP signal strength and phase [9]. Moreover, users can choose to selectively focus on one blinking stimulus out of multiple in the visual field, which has been tested with binocular rivalry studies and with intermingled stimuli of different colors [7]. However, individual variation in both SSVEP signal strength and duration required for detecting an SSVEP signal has also been reported, requiring a way to adjust threshold and duration to each user [9]. Lastly, SSVEPs are easily accessible for use in BCIs because people can generate SSVEPs without training [10], although two studies have shown that training and positive feedback can improve users’ ability to toggle between concentrating on one of two flashing stimuli [9] [11]. These features have made SSVEP-based BCIs increasingly popular.  Figure 2 below shows an example SSVEP response in the time and frequency domains (A1 and A2).  After signal averaging (B1 and B2), the SSVEP peak grows in comparison to the surrounding frequencies.  

 

Figure 2 - Sample SSVEP signals from a review paper [7]. Raw SSVEP signal in response to a 10Hz stimulus in the time domain (A1) and frequency domain (A2) shows a 10Hz peak. After 20 trial averages, the frequency domain (B2) shows a more distinct peak at 10 Hz, demonstrating the benefit of signal averaging. 

            Interfaces for patients with motor neuron diseases (MNDs) and quadriplegia use a variety of control signals derived from EOG [12], voice recognition [3], sip-and-puff [4], or tongue and chin position [5]. SSVEP-based brain computer interfaces only require the user to concentrate on a blinker in the visual field, so this technology is uniquely suited for patients who are physically unable to operate the alternative devices mentioned above. SSVEP-based brain computer interfaces have been previously implemented in several contexts. US Patent 8155736 describes a system for detecting stimulus-induced sensory-evoked potentials from a broad variety of sensory inputs [13]. However, its method of signal analysis is constrained to multiplying the recorded signal with a prototype signal [13]. US Patent 7269456 uses frequency-domain analysis, but focuses on the sum of magnitudes of frequencies passing through a narrow-band filter [14]. Patent application 13/415,169 describes a BCI in which a first stimulus determines that the user would like to interface with one of several devices, and a second stimulus is generated to determine operation of the device. None of these proposed devices is available commercially, so they cannot serve as assistive-living devices for individuals with advanced-stage MNDs. Moreover, these patents do not address the problem of using a portable and inexpensive commercial EEG headset, which often requires sacrifices in sampling rate and signal quality.

An implementation of an SSVEP-based BCI using the Emotiv EPOC™ EEG headset was described in 2012 [8]. The researchers emphasized the accessibility and low cost of the Emotiv headset. However, the design required BCI2000 (an open-source software package for BCI research), Matlab, and LCD-based stimuli controlled by DirectX [8]. These choices for stimuli and signal acquisition restrict the device to a computer or laptop with sufficient screen space to separate the stimuli spatially. It may be difficult to mount a screen and set up a computer in a hospital setting, especially when the interface may share space with medical equipment. We focus on an SSVEP-based BCI implemented using hardware that can be easily mounted.

Our understanding of the literature was aided by the following courses: Digital Signal Processing, Signals and Systems, and Signals and Systems Lab. Jenny Liu was also involved in the Brain Computer Interface undergraduate research group and worked on a sister project, the eyeReader, implemented on a laptop with an LCD stimulus.

 

Concept Synthesis

Concept Generation

Throughout our engineering design, we had to choose the best option when confronted with multiple methods of solving a particular problem. Figure 3 shows the options that we considered for the different components of our design. We ultimately implemented the green boxes rather than the red boxes because they proved to work better. The yellow boxes indicate solutions that improved our design but were not among the options generated during brainstorming.

                                               

Figure 3 - Concept Generation Chart

Concept Reduction

Signal Acquisition

As shown in Figure 3, we had several choices for collecting the EEG data. We ruled out building a custom EEG system because we wanted the device to be easy for the average American to acquire. Using a commercial device also avoids the regulatory pathways a novel assistive device would face before entering the market. A research-grade EEG recording system such as the g.tec EEG cap is extremely expensive ($23,000), bulky, and can take about an hour to set up. There are only two multi-electrode headsets on the market. The Muse has electrodes only on the forehead; it is also back-ordered until next year, and one of its co-inventors said that it could not be modified to collect SSVEP signals from the back of the head. This leaves the Emotiv EPOC EEG headset, which has two electrodes over the occipital lobe: O1 and O2. However, the Emotiv headset has been criticized on the BCI2000 forums for poor signal-to-noise ratio and low sampling rate. Some labs have made hardware modifications, but because we are borrowing headsets from the Brain Computer Interface student group, we are restricted to signal processing procedures after sample acquisition.

Improving Signal Quality

            The raw EEG data collected had a low signal-to-noise ratio (SNR), making it difficult or impossible to detect the SSVEP signal. The raw EEG signal shown in Figure 4 has an estimated[1] SNR of -25 dB; after six signal averages, the SNR increased to -21.4 dB. The SNR improved by a factor of 2.24, which is close to the expected improvement of √6 ≈ 2.45. We chose ‘sliding window’ signal averaging after trying out two other methods for lining up samples: cross-correlation and least common multiple (LCM).

Figure 4 - (Top) Raw EEG signal collected over six seconds has large contributions from noise in the time domain (A) and frequency domain (B and C). For the raw EEG, in the frequency range of interest (C), there is no distinguishable peak at the stimulus frequency.

(Bottom) – After ‘sliding window’ signal-averaging, there is a much more distinct peak at 16 Hz (B and C).

Cross-Correlation

            We initially tried to line up segments of signal using cross-correlation. In Matlab, the function xcorr takes two signals as input and outputs their sliding inner product: the sum of products of the two signals at each relative time delay. Cross-correlation did not work because, at our low SNR, the contribution of noise to the product terms drowned out the contribution of the signal.
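The alignment idea can be sketched in a few lines of Python (an illustrative sketch; our actual processing used Matlab and LabView). The helper below recovers a known lag between a template and a delayed copy. With real EEG at roughly -25 dB SNR, the cross-correlation peak is dominated by noise and the recovered lag becomes unreliable, which is why we abandoned this approach.

```python
import numpy as np

def best_lag(signal, template):
    """Lag (in samples) at which `template` best aligns inside `signal`."""
    xc = np.correlate(signal, template, mode="full")
    return int(np.argmax(xc)) - (len(template) - 1)

rng = np.random.default_rng(0)
template = rng.normal(size=64)                      # stand-in for one EEG segment
delayed = np.concatenate([np.zeros(3), template])   # same segment, delayed 3 samples

print(best_lag(delayed, template))  # recovers the 3-sample delay
```

With a clean, broadband segment the argmax of the correlation sits at the true delay; adding noise at our SNR buries that peak.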

Least Common Multiple (LCM)

            Lining up the signals using LCM was our first attempt to use mathematics to account for how the 128Hz samples might fall on the analog EEG waveform. As illustrated in Figure 5, the sampled points from two different chunks of data might fall on different positions of the analog waveform. Attempts to add up several slightly offset samples may lead to unintended destructive interference. We could have reduced the degree of destructive interference by limiting the number of signal averages and shifting samples with offsets close to half the period so that they were closer in phase to the first sample in the signal averaging sequence. The latter could be accomplished by occasionally throwing out a sample after too much phase offset had accumulated, and the steadily increasing error between the time when the sample was collected and the time when it should have been collected exceeded a set threshold.

 

Figure 5 - (Left) Digitizations of an analog signal at the same sampling rate are slightly offset due to different phases,

(Right) Attempted average of slightly offset samples does not line up with the original analog signal.

            Instead, we re-examined our design goals and realized that we only needed a limited number of frequencies for the algorithm to detect. We could choose our signal averaging procedure carefully so that the digital sample would always occur on identical locations of the waveform for the different samples used in signal averaging.

            Our first guess was to use the least common multiple of the Emotiv sampling period and the stimuli’s periods. However, 128 Hz digitization corresponds to a period of 78,125 × 100 ns. The LCM of 78,125 and the period of an arbitrary 12 Hz stimulus, 833,333 × 100 ns, is 65,104,140,625 × 100 ns, which is about 6510 seconds. In less extreme cases, the LCM tended to be on the order of several seconds. Several seconds’ worth of data would have to be signal averaged with another several seconds of data, which is problematic because signal averaging only reduces noise by the square root of the number of averages taken. The user would be required to concentrate on the stimulus for a very long time, decreasing the usability of the interface. Moreover, it is inherently difficult to maintain focus for so long, so during signal acquisition an unpredictable phase shift may be inadvertently introduced, ruining the alignment and causing destructive interference.
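The arithmetic can be checked with a short Python snippet (illustrative; periods are expressed in units of 100 ns, as in the text):

```python
import math

SAMPLE_PERIOD = 78125    # 1/128 s, in units of 100 ns
STIM_PERIOD = 833333     # ~1/12 s (12 Hz stimulus), same units

lcm = math.lcm(SAMPLE_PERIOD, STIM_PERIOD)
print(lcm)               # 65104140625
print(lcm * 100e-9)      # about 6510 s before the two sampling grids realign
```

Because 78,125 = 5^7 and 833,333 shares no factor with it, the LCM is simply their product, which is why the realignment time is so extreme.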

            As a result, we realized we must carefully choose the stimuli frequencies as well as the signal averaging procedure.

‘Sliding Window’ Signal Averaging

            Instead of applying the LCM to the sampling period and the set of possible frequencies that the digital pattern writer could generate, we worked backwards and picked stimuli frequencies f_k whose periods are integer multiples k of the sampling period T_s = 1/128 s, i.e., f_k = 128 Hz / k.

            For Jake, SSVEP signals were detectable for stimuli of 8, 12.8, and 16 Hz, corresponding to k = 16, 10, and 8, respectively. We hypothesize that 25.6 and 32 Hz did not work for two reasons. First, SSVEPs generated from blue LEDs have their strongest response around 13 Hz and fall off at other frequencies [15]. Second, as shown in Figure 6, the Emotiv headset shows frequency-dependent attenuation, probably an effect of the 50 Hz notch filter [16].
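As a quick check of this rule (a Python sketch using the constants from the text):

```python
FS = 128  # Emotiv EPOC sampling rate, Hz

# Stimulus frequencies whose periods are exactly k samples long: every
# period covers the same sample positions, so segments that start a whole
# number of periods apart are phase-aligned and can be averaged safely.
freqs = {k: FS / k for k in (8, 10, 16)}
print(freqs)  # {8: 16.0, 10: 12.8, 16: 8.0}
```

These are exactly the 16, 12.8, and 8 Hz stimuli used in the design.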

Figure 6 - Headset frequency response. A 38 uVpp sinusoid with a 20 uV DC offset and varying frequency from the HP 3312A function generator in Bryan 306 was wired to the O1 electrode and the two headset ground electrodes. The FFT magnitude at the input frequency was plotted. There is a drop-off between 25 and 30 Hz.

            Instead of traditional signal averaging, which uses non-overlapping samples, we found that ‘sliding window’ signal averaging increases the number of signal averages that fit in the same length of time while maintaining N = 128 samples for a 1 Hz frequency resolution. Figure 7 compares traditional, non-overlapping signal averaging with ‘sliding window’ signal averaging. Since white noise is interval-independent, the ‘sliding average’ still averages uncorrelated segments of noise: for example, the orange segment appears in two adjacent rows, but the samples summed vertically never place the two orange segments on top of each other. Figure 7 also shows that ‘sliding window’ signal averaging improves the signal-to-noise ratio by approximately √M after M averages, as expected for traditional signal averaging. However, ‘sliding window’ signal averaging does introduce a frequency-domain peak at the shift rate, f_shift = 128 Hz / (shift in samples).

We can ignore f_shift if the shift is chosen so that f_shift is sufficiently far away from the stimuli frequencies. With these parameters in mind, we chose a shift of 80 samples: f_shift = 128/80 = 1.6 Hz is far from the 8-16 Hz stimulus range, and 80 is a common multiple of the stimulus periods in samples (16, 10, and 8).

Figure 7 – (Top) 'Sliding window' vs traditional signal averaging. ‘Sliding window’ fits more averages into the same sample length.

 (Bottom) The SNR for ‘sliding window’ signal averaging improves approximately as √M for M averages, which is expected for signal averaging. The error can be attributed to reasons discussed in Footnote 1.
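A minimal Python sketch of the ‘sliding window’ averager (illustrative only; the real implementation is the LabView code in Appendix A). Because the 80-sample shift is a whole number of periods of each stimulus, every window sees the stimulus at the same phase and the windows add coherently:

```python
import numpy as np

FS, N, SHIFT, M = 128, 128, 80, 6   # sample rate, window length, shift, averages

def sliding_window_average(x):
    """Average M overlapping N-sample windows spaced SHIFT samples apart."""
    windows = [x[i * SHIFT : i * SHIFT + N] for i in range(M)]
    return np.mean(windows, axis=0)

t = np.arange((M - 1) * SHIFT + N) / FS
for f in (8.0, 12.8, 16.0):
    s = np.sin(2 * np.pi * f * t)
    # 80 samples is a whole number of periods of every stimulus, so each
    # window is identical and averaging causes no destructive interference.
    assert np.allclose(sliding_window_average(s), s[:N])
```

The same shift applied to white noise averages largely uncorrelated segments, which is where the √M SNR gain comes from; the only side effect is the 128/80 = 1.6 Hz component, safely below the stimulus band.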

Upsampling Allows More Stimuli Frequencies

The UpSampler subVI increases the number of allowable stimuli frequencies by interpolating between existing samples, mimicking a higher sample rate. After upsampling by an integer factor M, frequencies

f_k = (M × 128 Hz) / k

can be used as stimuli frequencies.
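Zero-padding the spectrum implements this interpolation. The sketch below (Python, illustrative; the UpSampler subVI in Figure 22 does the equivalent in LabView) pads the FFT, splitting the Nyquist bin to keep the output real, and reproduces the original samples at every M-th point:

```python
import numpy as np

def upsample_fft(x, m):
    """Interpolate x by an integer factor m via FFT zero-padding."""
    n = len(x)                       # assumes n is even
    k = n // 2
    X = np.fft.fft(x)
    Xp = np.zeros(n * m, dtype=complex)
    Xp[:k] = X[:k]                   # positive-frequency bins
    Xp[k] = X[k] / 2                 # split the Nyquist bin...
    Xp[n * m - k] = X[k] / 2         # ...to preserve conjugate symmetry
    Xp[n * m - k + 1:] = X[k + 1:]   # negative-frequency bins
    return np.fft.ifft(Xp).real * m  # rescale for the longer transform

x = np.cos(2 * np.pi * 16 * np.arange(128) / 128)  # 16 Hz tone at 128 Hz
y = upsample_fft(x, 2)                             # mimics a 256 Hz sample rate
assert np.allclose(y[::2], x)                      # original samples preserved
```

With M = 2 the effective rate becomes 256 Hz, so additional frequencies of the form 256/k Hz become usable as stimuli.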

 

Feature Extraction

            As discussed later in the Signal Processing section of the Detailed Engineering Analysis, the ROC curve for the 8 Hz stimulus showed low detection accuracy, so we used the 12.8 and 16 Hz stimuli. We first tried to use the FFT magnitude at the stimulus frequency to detect peaks at these frequencies. The results of the classification are shown graphically in Figure 8. There is good separation between the 16 Hz and 12.8 Hz stimuli, as well as between 16 Hz and no stimulus. However, there is some overlap at lower magnitudes between 13 Hz and no stimulus.
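The detection rule built on these features (magnitude at the stimulus frequency plus its ratio to neighboring bins, as in the Peak Detect subVI of Figure 41) can be sketched as follows. This is an illustrative Python version of the LabView logic; the threshold values are placeholders to be tuned per user:

```python
import numpy as np

FS, N = 128, 128   # sampling rate (Hz) and FFT length: 1 Hz bin spacing

def has_peak(x, f0, mag_thresh, ratio_thresh, guard=2):
    """Declare a peak at f0 Hz if the FFT magnitude there exceeds mag_thresh
    AND exceeds the mean of the neighboring bins by ratio_thresh (both
    thresholds are hypothetical and tuned per user in practice)."""
    G = np.abs(np.fft.rfft(x, N))
    k = int(round(f0 * N / FS))            # e.g. 12.8 Hz lands in the 13 Hz bin
    neighbors = np.r_[G[k - guard:k], G[k + 1:k + guard + 1]]
    ratio = G[k] / (np.mean(neighbors) + 1e-12)
    return bool(G[k] > mag_thresh and ratio > ratio_thresh)

rng = np.random.default_rng(1)
t = np.arange(N) / FS
ssvep = np.sin(2 * np.pi * 16 * t) + 0.1 * rng.normal(size=N)
rest = 0.1 * rng.normal(size=N)
print(has_peak(ssvep, 16, 30, 3))  # strong 16 Hz component -> detected
print(has_peak(rest, 16, 30, 3))   # noise alone stays under threshold
```

Requiring both an absolute magnitude and a ratio to neighbors is what suppresses the broadband-noise overlap seen at lower magnitudes.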