<!DOCTYPE html><html><head>
<title>web-MCI-model02</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.css">
<style>
code[class*=language-],pre[class*=language-]{color:#333;background:0 0;font-family:Consolas,"Liberation Mono",Menlo,Courier,monospace;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.4;-moz-tab-size:8;-o-tab-size:8;tab-size:8;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}pre[class*=language-]{padding:.8em;overflow:auto;border-radius:3px;background:#f5f5f5}:not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal;background:#f5f5f5}.token.blockquote,.token.comment{color:#969896}.token.cdata{color:#183691}.token.doctype,.token.macro.property,.token.punctuation,.token.variable{color:#333}.token.builtin,.token.important,.token.keyword,.token.operator,.token.rule{color:#a71d5d}.token.attr-value,.token.regex,.token.string,.token.url{color:#183691}.token.atrule,.token.boolean,.token.code,.token.command,.token.constant,.token.entity,.token.number,.token.property,.token.symbol{color:#0086b3}.token.prolog,.token.selector,.token.tag{color:#63a35c}.token.attr-name,.token.class,.token.class-name,.token.function,.token.id,.token.namespace,.token.pseudo-class,.token.pseudo-element,.token.url-reference .token.variable{color:#795da3}.token.entity{cursor:help}.token.title,.token.title .token.punctuation{font-weight:700;color:#1d3e81}.token.list{color:#ed6a43}.token.inserted{background-color:#eaffea;color:#55a532}.token.deleted{background-color:#ffecec;color:#bd2c00}.token.bold{font-weight:700}.token.italic{font-style:italic}.language-json .token.property{color:#183691}.language-markup .token.tag .token.punctuation{color:#333}.language-css .token.function,code.language-css{color:#0086b3}.language-yaml .token.atrule{color:#63a35c}code.language-yaml{color:#183691}.language-ruby .token.function{color:#333}.language-markdown .token.url{color:#795da3}.language-makefile .token.symbol{color:#795da3}.language-makefile .token.variable{color:#183691}.language-makefile 
.token.builtin{color:#0086b3}.language-bash .token.keyword{color:#0086b3}pre[data-line]{position:relative;padding:1em 0 1em 3em}pre[data-line] .line-highlight-wrapper{position:absolute;top:0;left:0;background-color:transparent;display:block;width:100%}pre[data-line] .line-highlight{position:absolute;left:0;right:0;padding:inherit 0;margin-top:1em;background:hsla(24,20%,50%,.08);background:linear-gradient(to right,hsla(24,20%,50%,.1) 70%,hsla(24,20%,50%,0));pointer-events:none;line-height:inherit;white-space:pre}pre[data-line] .line-highlight:before,pre[data-line] .line-highlight[data-end]:after{content:attr(data-start);position:absolute;top:.4em;left:.6em;min-width:1em;padding:0 .5em;background-color:hsla(24,20%,50%,.4);color:#f4f1ef;font:bold 65%/1.5 sans-serif;text-align:center;vertical-align:.3em;border-radius:999px;text-shadow:none;box-shadow:0 1px #fff}pre[data-line] .line-highlight[data-end]:after{content:attr(data-end);top:auto;bottom:.4em}html body{font-family:'Helvetica Neue',Helvetica,'Segoe UI',Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body 
a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ol,html body>ul{margin-bottom:16px}html body ol,html body ul{padding-left:2em}html body ol.no-list,html body ul.no-list{padding:0;list-style-type:none}html body ol ol,html body ol ul,html body ul ol,html body ul ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;background-color:#f0f0f0;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:700;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:700}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::after,html body code::before{letter-spacing:-.2em;content:'\00a0'}html body pre>code{padding:0;margin:0;word-break:normal;white-space:pre;background:0 0;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body pre{padding:1em;overflow:auto;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre 
tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:after,html body pre code:before,html body pre tt:after,html body pre tt:before{content:normal}html body blockquote,html body dl,html body ol,html body p,html body pre,html body ul{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body code,html body pre{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview ul{list-style:disc}.markdown-preview ul ul{list-style:circle}.markdown-preview ul ul ul{list-style:square}.markdown-preview ol{list-style:decimal}.markdown-preview ol ol,.markdown-preview ul ol{list-style-type:lower-roman}.markdown-preview ol ol ol,.markdown-preview ol ul ol,.markdown-preview ul ol ol,.markdown-preview ul ul ol{list-style-type:lower-alpha}.markdown-preview .newpage,.markdown-preview .pagebreak{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview 
pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps .MathJax_Display{text-align:center!important}.markdown-preview:not([data-for=preview]) .code-chunk .code-chunk-btn-group{display:none}.markdown-preview:not([data-for=preview]) .code-chunk .status{display:none}.markdown-preview:not([data-for=preview]) .code-chunk .output-div{margin-bottom:16px}.markdown-preview .md-toc{padding:0}.markdown-preview .md-toc .md-toc-link-wrapper .md-toc-link{display:inline;padding:.25rem 0}.markdown-preview .md-toc .md-toc-link-wrapper .md-toc-link div,.markdown-preview .md-toc .md-toc-link-wrapper .md-toc-link p{display:inline}.markdown-preview .md-toc .md-toc-link-wrapper.highlighted .md-toc-link{font-weight:800}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,.66);border:4px solid rgba(150,150,150,.66);background-clip:content-box}html body[for=html-export]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for=html-export]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0;min-height:100vh}@media screen and (min-width:914px){html body[for=html-export]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for=html-export]:not([data-presentation-mode]) .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=html-export]:not([data-presentation-mode]) .markdown-preview{font-size:14px!important;padding:1em}}@media print{html body[for=html-export]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for=html-export]:not([data-presentation-mode]) 
#sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,.33);box-sizing:border-box;overflow:auto;background-color:inherit}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,.66);border:4px solid rgba(150,150,150,.66);background-clip:content-box}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc .md-toc{padding:0 16px}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc .md-toc .md-toc-link-wrapper .md-toc-link{display:inline;padding:.25rem 0}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc .md-toc .md-toc-link-wrapper .md-toc-link div,html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc .md-toc .md-toc-link-wrapper .md-toc-link p{display:inline}html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc .md-toc .md-toc-link-wrapper.highlighted .md-toc-link{font-weight:800}html 
body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% - 300px);padding:2em calc(50% - 457px - 300px / 2);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=html-export]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for=html-export]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for=html-export]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}
/* Please visit the URL below for more information: */
/* https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */
</style>
<!-- The content below will be included at the end of the <head> element. --><script type="text/javascript">
document.addEventListener("DOMContentLoaded", function () {
// your code here
});
</script></head><body for="html-export">
<div class="crossnote markdown-preview ">
<div class="home-btn">
<a href="index.html">
<img src="images/home.png" alt="home" style="width: 50px; height: 50px;">
</a>
<h3 id="prediction-of-mild-cognitive-impairment-severity-using-eeg-data-and-neuropsychological-scores-in-source-space">Prediction of Mild Cognitive Impairment Severity Using EEG Data and Neuropsychological Scores in Source Space </h3>
<h3 id="introduction">Introduction </h3>
<p>This report delves into the predictive analysis of Mild Cognitive Impairment (MCI) severity, leveraging high-density EEG data combined with nuanced neuropsychological scoring. It extends previous research by incorporating advanced feature engineering techniques, including source analysis from the Human Connectome Project (HCP) combined atlas, and exploring the efficacy of machine learning models like Logistic Regression, Random Forest, SVMs, and XGBoost in distinguishing between MCI severity levels.</p>
<p>Groups are based on the FRSSD total score, a functional scale indicating symptom severity. The Functional Rating Scale for Symptoms of Dementia (FRSSD)[2] evaluates daily living skills affected by MCI across 14 dimensions, including eating, dressing, managing incontinence, communicating, sleeping, recognizing faces, maintaining personal hygiene, remembering names, recalling events, staying alert, understanding complex situations, knowing one's location, managing emotions, and interacting socially. The FRSSD assigns scores from 0 (no impairment) to 3 (severe impairment) for each activity, reflecting the individual's level of functional ability. Note that this assessment is completed by caregivers, not the patients themselves, so the scores may reflect the caregivers' perceptions and the impact of caregiving on their well-being. The scale uses a threshold score of 5 (the authors' recommendation) to distinguish between individuals who are likely healthy and those who may have dementia, making it a valuable tool for the analysis of cognitive decline.</p>
<p>In this analysis, the cutoff score of 5 was not used; instead, participants were divided into two groups based on the FRSSD total score: a lower-score group (0-3) and a higher-score group (4 or higher).</p>
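<p>The grouping rule above can be sketched as follows; the column name <code>FRSSD_total</code> and the example values are assumptions for illustration, not taken from the dataset:</p>

```python
import pandas as pd

# Hypothetical FRSSD totals; NaN marks a participant with no recorded score.
scores = pd.Series([0, 2, 3, 4, 7, None], name="FRSSD_total")

# Lower-score group (0-3) -> 0, higher-score group (4 or higher) -> 1;
# participants without a total score are excluded from the split.
labels = scores.dropna().le(3).map({True: 0, False: 1})
```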
<a href="images/MCI_genlin/FRSSD.png">
<img src="images/MCI_genlin/FRSSD.png" alt="Segments">
</a>
<p>Fig.1 Functional Rating Scale for Symptoms of Dementia (FRSSD) and the distribution of scores across subjects in the dataset. The scores are divided into two groups: a lower-score group (0-3) and a higher-score group (4 or higher).
<strong>No FRSSD total score: 17; FRSSD total score 0-3: 32; FRSSD total score &gt;3: 28.</strong></p>
<h3 id="methodology">Methodology </h3>
<p>The methodology involved preprocessing EEG data, source space analysis, extracting features, and linking these features with neuropsychological scores to create a comprehensive dataset. The dataset was then used to train and evaluate several machine learning models, including Logistic Regression, Support Vector Machines (SVM), and XGBoost, to classify subjects into two groups. The detailed steps for preprocessing the EEG data can be found in the <a href="web-MCI-preproc02.html">previous report preproc02</a> and <a href="web-MCI-model01.html">previous report model01</a>.</p>
<p>The study utilized preprocessed EEG segments, with source analysis conducted for each multivariate time-series segment to extract bandpower features from 42 ROIs using the HCP_combined atlas. Stratified sampling based on the groups and subject segments was ensured to avoid data leakage.</p>
<p>A custom function was used to compute source time courses (STCs), employing sLORETA (<code>ico4</code> source space, <code>snr 0.5</code>) with the <code>fsaverage</code> brain template for inverse modeling, focusing on meaningful regions within the <code>HCPMMP1_combined</code> atlas via the <code>MNE-Python</code> library[3]. The relative band powers and their ratios were calculated using the <code>yasa</code> library[6]. The features were standardized using the <code>StandardScaler</code> from the <code>sklearn</code> library. The dataset was split into training and testing sets, ensuring stratified sampling based on the groups and subject segments defined by FRSSD total scores. The models were trained and evaluated using accuracy and ROC AUC scores, with a focus on understanding each model's performance in distinguishing between the unbalanced groups.</p>
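<p>The inverse-modeling step could be sketched with standard <code>MNE-Python</code> calls as below. This is a minimal illustration under the settings stated above (sLORETA, <code>ico4</code>, <code>snr 0.5</code>, <code>fsaverage</code>), not the authors' exact code; the function name, arguments, and the ad-hoc noise covariance are assumptions:</p>

```python
def segment_label_timecourses(raw_segment, subjects_dir):
    """Sketch: sLORETA inverse on an ico4 fsaverage source space with
    snr = 0.5, then mean-flip time courses per HCPMMP1_combined label."""
    import mne
    from mne.minimum_norm import make_inverse_operator, apply_inverse_raw

    src = mne.setup_source_space("fsaverage", spacing="ico4",
                                 subjects_dir=subjects_dir, add_dist=False)
    bem = mne.make_bem_solution(
        mne.make_bem_model("fsaverage", ico=4,
                           conductivity=(0.3, 0.006, 0.3),
                           subjects_dir=subjects_dir))
    fwd = mne.make_forward_solution(raw_segment.info, trans="fsaverage",
                                    src=src, bem=bem, eeg=True, meg=False)
    # Ad-hoc noise covariance (assumption); snr 0.5 -> lambda2 = 1/snr**2 = 4
    cov = mne.make_ad_hoc_cov(raw_segment.info)
    inv = make_inverse_operator(raw_segment.info, fwd, cov)
    stc = apply_inverse_raw(raw_segment, inv, lambda2=1.0 / 0.5 ** 2,
                            method="sLORETA")
    labels = mne.read_labels_from_annot("fsaverage",
                                        parc="HCPMMP1_combined",
                                        subjects_dir=subjects_dir)
    return mne.extract_label_time_course(stc, labels, src, mode="mean_flip")
```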
<h4 id="rois">ROIs: </h4>
<ul>
<li>
<p><strong>Default Mode Network (DMN):</strong></p>
<ul>
<li>PCCL-lh: Posterior Cingulate Cortex Left Hemisphere</li>
<li>PCCR-rh: Posterior Cingulate Cortex Right Hemisphere</li>
<li>ACAMPCR-rh: Anterior Cingulate And Medial Prefrontal Cortex Right Hemisphere</li>
<li>ACAMPCL-lh: Anterior Cingulate And Medial Prefrontal Cortex Left Hemisphere</li>
<li>POCL-lh: Parieto-Occipital Cortex Left Hemisphere</li>
<li>POCR-rh: Parieto-Occipital Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Salience Network (SN):</strong></p>
<ul>
<li>AACL-lh: Anterior Insula And Adjacent Frontal Cortex Left Hemisphere</li>
<li>AACR-rh: Anterior Insula And Adjacent Frontal Cortex Right Hemisphere</li>
<li>IAFOCL-lh: Inferior Anterior Frontal Cortex Left Hemisphere</li>
<li>IAFOCR-rh: Inferior Anterior Frontal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Central Executive Network (CEN):</strong></p>
<ul>
<li>DPCL-lh: Dorsolateral Prefrontal Cortex Left Hemisphere</li>
<li>DPCR-rh: Dorsolateral Prefrontal Cortex Right Hemisphere</li>
<li>LTCL-lh: Lateral Temporal Cortex Left Hemisphere</li>
<li>LTCR-rh: Lateral Temporal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Sensorimotor Network:</strong></p>
<ul>
<li>EVCL-lh: Primary Somatosensory And Motor Cortex Left Hemisphere</li>
<li>EVCR-rh: Primary Somatosensory And Motor Cortex Right Hemisphere</li>
<li>SPCL-lh: Superior Parietal Cortex Left Hemisphere</li>
<li>SPCR-rh: Superior Parietal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Visual Processing Network:</strong></p>
<ul>
<li>VSVCL-lh: Ventral Stream Visual Cortex Left Hemisphere</li>
<li>VSVCR-rh: Ventral Stream Visual Cortex Right Hemisphere</li>
<li>PVCL-lh: Primary Visual Cortex (Left Hemisphere)</li>
<li>PVCR-rh: Primary Visual Cortex (Right Hemisphere)</li>
</ul>
</li>
<li>
<p><strong>Auditory and Language Network:</strong></p>
<ul>
<li>TPOJL-lh: Temporo-Parieto-Occipital Junction Left Hemisphere</li>
<li>TPOJR-rh: Temporo-Parieto-Occipital Junction Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Attention and Frontoparietal Network:</strong></p>
<ul>
<li>IFCL-lh: Inferior Frontal Cortex Left Hemisphere</li>
<li>IFCR-rh: Inferior Frontal Cortex Right Hemisphere</li>
<li>SAMCL-lh: Supplementary Motor Area And Cingulate Motor Areas Left Hemisphere</li>
<li>SAMCR-rh: Supplementary Motor Area And Cingulate Motor Areas Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Memory and Temporal Lobe Function:</strong></p>
<ul>
<li>MTCL-lh: Medial Temporal Cortex Left Hemisphere</li>
<li>MTCR-rh: Medial Temporal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Emotion and Affective Processing:</strong></p>
<ul>
<li>OAPFCL-lh: Orbito-Affective Prefrontal Cortex Left Hemisphere</li>
<li>OAPFCR-rh: Orbito-Affective Prefrontal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Executive Function and Decision Making:</strong></p>
<ul>
<li>DSVCL-lh: Dorsal Stream Visual Cortex Left Hemisphere</li>
<li>DSVCR-rh: Dorsal Stream Visual Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Parietal Lobe Function:</strong></p>
<ul>
<li>PCL-lh: Parietal Cortex Left Hemisphere</li>
<li>PCR-rh: Parietal Cortex Right Hemisphere</li>
</ul>
</li>
<li>
<p><strong>Limbic System:</strong></p>
<ul>
<li>EACL-lh: Entorhinal Area Cortex Left Hemisphere</li>
<li>EACR-rh: Entorhinal Area Cortex Right Hemisphere</li>
<li>PLAMCCL-lh: Paralimbic Cortex Left Hemisphere</li>
<li>PLAMCCR-rh: Paralimbic Cortex Right Hemisphere</li>
</ul>
</li>
</ul>
<p>Plots of ROIs are presented below:<br>
<a href="images/MCI_genlin/ROIs.png"><br>
<img src="images/MCI_genlin/ROIs.png" alt="Segments"><br>
</a></p>
<a href="images/MCI_genlin/ROIs_flat.png">
<img src="images/MCI_genlin/ROIs_flat.png" alt="Segments">
</a>
<p>Fig.2 42 Regions of Interest (ROIs) used in the analysis. The ROIs are based on the <code>HCPMMP1_combined</code> atlas.</p>
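<p>The atlas labels shown in Fig.2 ship with <code>MNE-Python</code>; a sketch of loading them (the filtering step is an assumption, since the exact 42-ROI selection is defined in the report's own code):</p>

```python
def load_combined_atlas_labels(subjects_dir):
    """Sketch: fetch the HCPMMP1 parcellations for fsaverage and read the
    combined-atlas labels used as ROIs in this report."""
    import mne
    mne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir,
                                            accept=True)
    labels = mne.read_labels_from_annot("fsaverage",
                                        parc="HCPMMP1_combined",
                                        subjects_dir=subjects_dir)
    # Drop the unlabeled "???" vertices before feature extraction (assumed)
    return [lab for lab in labels if not lab.name.startswith("???")]
```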
<div style="display: flex; justify-content: center; align-items: center;">
<a href="images/MCI_genlin/movie_ROIs_small.gif" style="max-width: 100%; height: auto;">
<img src="images/MCI_genlin/movie_ROIs_small.gif" alt="Animated 3D brain visualization" style="max-width: 100%; height: auto;">
</a>
</div>
<p>The animated GIF shows a dynamic 3D visualization of brain activity over time, rendered on the "fsaverage" template brain. It is presented from multiple perspectives: lateral, medial, dorsal, and ventral views, laid out horizontally for a comprehensive examination of the entire brain surface. The visualization is based on source-localized EEG data, capturing patterns of neural dynamics across different brain regions. The color intensity on the brain's surface corresponds to the magnitude of activity, highlighting variations in neural engagement. The animation spans 0.1 to 1.1 seconds, played at a slowed-down pace (time dilation factor of 100) for detailed inspection. Traveling waves, on the order of 30 to 90 milliseconds, can be observed moving across the brain's surface.</p>
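<p>A rendering like this can be produced with <code>MNE-Python</code>'s <code>Brain.save_movie</code>, which exposes the time-dilation factor directly. A minimal sketch, assuming an existing source estimate <code>stc</code> (the function name, views, and frame rate are illustrative choices, not the report's settings):</p>

```python
def save_roi_movie(stc, subjects_dir, out="movie_ROIs.gif"):
    """Sketch: render a source estimate on fsaverage from several views
    and save the 0.1-1.1 s window as a slowed-down animation."""
    brain = stc.plot(subject="fsaverage", subjects_dir=subjects_dir,
                     hemi="both",
                     views=["lateral", "medial", "dorsal", "ventral"])
    brain.save_movie(out, tmin=0.1, tmax=1.1,
                     time_dilation=100, framerate=24)
```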
<h4 id="feature-extraction">Feature Extraction </h4>
<p>The feature extraction process involved the following steps:</p>
<ul>
<li>Relative band powers were extracted from the EEG data using the <code>yasa</code> library[6]. Average power was calculated for the delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), sigma (12-16 Hz), beta (16-30 Hz), and gamma (30-45 Hz) frequency bands.</li>
</ul>
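<p><code>yasa.bandpower</code> computes these values from a Welch power spectral density; a minimal, library-free equivalent for a single ROI time course, using the band edges listed above (the function name and <code>nperseg</code> choice are assumptions):</p>

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30), "gamma": (30, 45)}

def relative_bandpower(x, sf):
    """Relative power per band for one ROI time course: integrate the
    Welch PSD over each band and divide by broadband (1-45 Hz) power."""
    freqs, psd = welch(x, sf, nperseg=min(len(x), 4 * int(sf)))
    broad = (freqs >= 1) & (freqs <= 45)
    total = simpson(psd[broad], x=freqs[broad])
    return {name: simpson(psd[(freqs >= lo) & (freqs <= hi)],
                          x=freqs[(freqs >= lo) & (freqs <= hi)]) / total
            for name, (lo, hi) in BANDS.items()}
```

<p>For example, a pure 10 Hz oscillation yields a relative power concentrated in the alpha band.</p>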
<p>A correlation matrix of the relative band powers for the ROIs was calculated to identify relationships between regions and frequency bands, and was visualized as a heatmap to provide a comprehensive overview of the interplay between ROIs and frequency bands.</p>
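<p>The matrix itself is a standard feature-table correlation; a sketch with a synthetic stand-in for the real feature table (column names and sizes are illustrative):</p>

```python
import numpy as np
import pandas as pd

# Synthetic stand-in: rows are segments, columns are ROI x band powers.
rng = np.random.default_rng(42)
feats = pd.DataFrame(
    rng.normal(size=(60, 6)),
    columns=[f"{roi}_{band}" for roi in ("PCCL-lh", "PCCR-rh")
             for band in ("delta", "theta", "alpha")])
corr = feats.corr()
# e.g. seaborn.heatmap(corr, cmap="coolwarm") renders a Fig.3-style overview
```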
<p>Feature vector shape: (<em>n</em> segments, 42 ROIs, 6 bands)</p>
<div style="display: flex; justify-content: center; align-items: center;">
<a href="images/MCI_genlin/corrmat.png" style="max-width: 100%; height: auto;">
<img src="images/MCI_genlin/corrmat.png" alt="corr" style="max-width: 100%; height: auto;">
</a>
</div>
<p>Fig.3 Correlation matrix of the relative band powers for the ROIs.</p>
<h4 id="data-organization">Data Organization </h4>
<ul>
<li>
<p><strong>Data Structure</strong>: The dataset (<code>eeg_data_df</code>) was organized by unique identifiers for each participant and segment (<code>EEGCode</code>, <code>Segment_id</code>) and by their respective group classifications, indicating different levels of cognitive impairment.</p>
</li>
<li>
<p><strong>Stratified Grouping</strong>:</p>
<ul>
<li>Data was transformed into a format suitable for stratified sampling, with each entry corresponding to a participant and their classification group.</li>
<li>Using the <code>GroupKFold</code> class from the <code>sklearn</code> library, the dataset was split so that all of a participant's segments (<code>Segment_id</code>) remained in a single fold, grouped by <code>EEGCode</code>, with the classification groups defined by FRSSD total scores.</li>
<li>Specific datasets for training (<code>train_df</code>) and testing (<code>test_df</code>) were constructed.</li>
</ul>
</li>
<li>
<p><strong>Training and Testing Sets</strong>:</p>
<ul>
<li>The training set included a wide range of EEG data and corresponding classifications for model training and validation.</li>
<li>The testing set was used for the final evaluation to provide an unbiased assessment of predictive capabilities.</li>
</ul>
</li>
</ul>
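<p>As a sketch of the grouping step described above, the following uses a small, made-up frame in place of <code>eeg_data_df</code> (the subject codes and group labels are illustrative, not from the study) to show how <code>GroupKFold</code> keeps all of a subject's segments on one side of the split:</p>
<pre class="language-python"><code class="language-python">import pandas as pd
from sklearn.model_selection import GroupKFold

# Hypothetical frame mirroring eeg_data_df: one row per EEG segment.
eeg_data_df = pd.DataFrame({
    "EEGCode":    ["S01"] * 4 + ["S02"] * 4 + ["S03"] * 4 + ["S04"] * 4,
    "Segment_id": list(range(4)) * 4,
    "Group":      ["MCI-low"] * 8 + ["MCI-high"] * 8,
})

# GroupKFold keeps every segment of a given subject in one fold,
# preventing subject-level leakage between train and test.
gkf = GroupKFold(n_splits=4)
train_idx, test_idx = next(gkf.split(eeg_data_df, eeg_data_df["Group"],
                                     groups=eeg_data_df["EEGCode"]))
train_df, test_df = eeg_data_df.iloc[train_idx], eeg_data_df.iloc[test_idx]

# No subject appears on both sides of the split.
assert set(train_df["EEGCode"]).isdisjoint(set(test_df["EEGCode"]))
</code></pre>
<p>With four subjects and four splits, each test fold holds exactly one subject's segments.</p>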
<p>Following dataset organization and stratification:</p>
<ul>
<li>
<p><strong>Data Preparation for Modeling</strong>:</p>
<ul>
<li>Feature sets (<code>X_train</code> and <code>X_test</code>) were curated by removing non-predictive columns, focusing on variables crucial for predictive accuracy.</li>
<li>Target variables (<code>y_train</code> and <code>y_test</code>) were determined by the group classifications, representing cognitive impairment levels.</li>
</ul>
</li>
<li>
<p><strong>Feature Scaling</strong>:</p>
<ul>
<li>Feature scaling was applied using <code>StandardScaler</code> from the <code>sklearn</code> library to ensure all features contributed equally to model decision-making, eliminating bias from varied feature scales. This step is crucial for linear models and also benefits other algorithms by improving accuracy and convergence speed.</li>
</ul>
</li>
</ul>
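<p>The scaling step can be sketched as follows, with random arrays standing in for the real <code>X_train</code> / <code>X_test</code> feature matrices (in the report these are the band-power columns left after dropping identifier columns). The key point is that the scaler is fitted on training data only:</p>
<pre class="language-python"><code class="language-python">import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrices standing in for X_train / X_test.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(100, 6))
X_test = rng.normal(loc=5.0, scale=2.0, size=(20, 6))

# Fit on the training set only, then apply to both sets, so that
# no information from the test set leaks into the scaling.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Training features now have zero mean and unit variance per column.
</code></pre>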
<h4 id="model-training-and-evaluation-extension">Model Training and Evaluation Extension </h4>
<p>Following the structured preparation of the dataset, several machine learning models were employed to discern patterns indicative of cognitive impairments:</p>
<ul>
<li>
<p><strong>Logistic Regression without Class Weights CV5</strong>: A logistic regression model was initialized and trained, applying scaled features from the training set. Predictions were then made on the scaled test set, with accuracy serving as the primary evaluation metric. This model served as a baseline for comparison.</p>
</li>
<li>
<p><strong>Logistic Regression with Class Weights CV5</strong>: To address potential class imbalance, another logistic regression model was introduced, this time incorporating balanced class weights. After training, predictions were made on the test set, and the model's performance was evaluated based on accuracy and detailed through a classification report. An ROC AUC curve was generated to visualize the model's ability to differentiate between classes.</p>
</li>
<li>
<p><strong>Support Vector Machine (SVM) CV5</strong>: SVM classifiers with RBF and linear kernels and balanced class weights were trained, leveraging the scaled feature sets. Each model's predictive accuracy was assessed, and its performance was further detailed in a classification report. An ROC AUC curve was plotted.</p>
</li>
<li>
<p><strong>XGBoost Classifier CV5</strong>: The XGBoost model was employed next, using label-encoded class targets for training. This model's accuracy was evaluated, and its performance was summarized in a classification report.</p>
</li>
<li>
<p><strong>Random Forest Classifier CV5</strong>: A Random Forest model was trained and evaluated, leveraging the scaled feature sets.</p>
</li>
</ul>
<p><code>sklearn</code>'s <code>LogisticRegression</code>, <code>RandomForestClassifier</code>, and <code>SVC</code> were used to train the models, while the XGBoost model was trained with the <code>xgboost</code> library. Cross-validation and ROC AUC scoring were performed with <code>GroupKFold</code> and <code>cross_val_score</code>, detailed classification reports were generated with <code>classification_report</code>, and ROC curves were plotted with <code>roc_curve</code> and <code>roc_auc_score</code>, all from <code>sklearn</code>.</p>
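<p>A minimal, self-contained sketch of this cross-validation setup, using synthetic data from <code>make_classification</code> in place of the EEG features (the group structure here is invented to mimic <code>EEGCode</code>, and the fold count matches the CV5 setup):</p>
<pre class="language-python"><code class="language-python">import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the EEG feature matrix and labels.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
groups = np.repeat(np.arange(20), 10)  # 20 "subjects", 10 segments each

# Class-weighted logistic regression, scored by ROC AUC across
# subject-wise folds.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(class_weight="balanced", max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups,
                         cv=GroupKFold(n_splits=5), scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.2f} ± {scores.std():.2f}")
</code></pre>
<p>Putting the scaler inside the pipeline ensures it is refitted on each fold's training portion, mirroring the leakage-free scaling described earlier.</p>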
<h4 id="interpretation-of-model-performance">Interpretation of Model Performance </h4>
<p>The performance of each model was assessed based on their ability to distinguish between two classes: 'MCI-high' and 'MCI-low'. The results indicate varying degrees of success across different metrics, including accuracy, precision, recall, f1-score, and ROC AUC scores. A summary table is provided below to encapsulate the performance metrics for each model evaluated:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Accuracy</th>
<th>Precision (MCI-high)</th>
<th>Recall (MCI-high)</th>
<th>F1-Score (MCI-high)</th>
<th>Precision (MCI-low)</th>
<th>Recall (MCI-low)</th>
<th>F1-Score (MCI-low)</th>
<th>ROC AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Logistic Regression (Unweighted)</td>
<td>0.69</td>
<td>0.28</td>
<td>0.007</td>
<td>0.012</td>
<td>0.70</td>
<td>0.97</td>
<td>0.81</td>
<td>0.47 ± 0.13</td>
</tr>
<tr>
<td>Logistic Regression (Weighted)</td>
<td>0.48</td>
<td>0.26</td>
<td>0.47</td>
<td>0.33</td>
<td>0.68</td>
<td>0.48</td>
<td>0.56</td>
<td>0.47 ± 0.13</td>
</tr>
<tr>
<td>SVM (rbf)</td>
<td>0.68</td>
<td>0.19</td>
<td>0.02</td>
<td>0.04</td>
<td>0.70</td>
<td>0.95</td>
<td>0.81</td>
<td>0.45 ± 0.06</td>
</tr>
<tr>
<td>SVM (lin)</td>
<td>0.71</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.71</td>
<td>1.0</td>
<td>0.82</td>
<td>0.51 ± 0.10</td>
</tr>
<tr>
<td>XGBoost</td>
<td>0.63</td>
<td>0.21</td>
<td>0.09</td>
<td>0.13</td>
<td>0.69</td>
<td>0.86</td>
<td>0.77</td>
<td>0.43 ± 0.06</td>
</tr>
<tr>
<td>Random Forest</td>
<td>0.64</td>
<td>0.21</td>
<td>0.08</td>
<td>0.12</td>
<td>0.69</td>
<td>0.87</td>
<td>0.77</td>
<td>0.44 ± 0.07</td>
</tr>
</tbody>
</table>
<p><strong>Column Descriptions:</strong></p>
<ul>
<li><strong>Model</strong>: The machine learning model used.</li>
<li><strong>Accuracy</strong>: The proportion of true results (both true positives and true negatives) among the total number of cases examined.</li>
<li><strong>Precision (MCI-high)</strong>: The ratio of correctly predicted positive observations to the total predicted positives for the 'MCI-high' class.</li>
<li><strong>Recall (MCI-high)</strong>: The ratio of correctly predicted positive observations to all observations in the actual 'MCI-high' class.</li>
<li><strong>F1-Score (MCI-high)</strong>: The harmonic mean of Precision and Recall for the 'MCI-high' class. It takes both false positives and false negatives into account.</li>
<li><strong>Precision (MCI-low)</strong>: The ratio of correctly predicted positive observations to the total predicted positives for the 'MCI-low' class.</li>
<li><strong>Recall (MCI-low)</strong>: The ratio of correctly predicted positive observations to all observations in the actual 'MCI-low' class.</li>
<li><strong>F1-Score (MCI-low)</strong>: The harmonic mean of Precision and Recall for the 'MCI-low' class. It takes both false positives and false negatives into account.</li>
<li><strong>ROC AUC</strong>: The area under the ROC curve, a graphical representation of a model's diagnostic ability.</li>
</ul>
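<p>The relationship between these metrics can be checked directly with <code>sklearn.metrics</code>; the labels below are toy values for the two classes, not the study's actual predictions:</p>
<pre class="language-python"><code class="language-python">import math
from sklearn.metrics import (classification_report, f1_score,
                             precision_score, recall_score)

# Toy labels for the two classes used in the report.
y_true = ["MCI-high", "MCI-high", "MCI-low", "MCI-low", "MCI-low", "MCI-high"]
y_pred = ["MCI-high", "MCI-low",  "MCI-low", "MCI-low", "MCI-high", "MCI-high"]

p = precision_score(y_true, y_pred, pos_label="MCI-high")
r = recall_score(y_true, y_pred, pos_label="MCI-high")
f1 = f1_score(y_true, y_pred, pos_label="MCI-high")

# The F1-score is the harmonic mean of precision and recall.
assert math.isclose(f1, 2 * p * r / (p + r))

# classification_report prints all of these per class at once.
print(classification_report(y_true, y_pred))
</code></pre>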
<div style="display: flex; justify-content: center; align-items: center;">
<a href="images/MCI_genlin/LR_weight_cv_model02.png" style="max-width: 100%; height: auto;">
<img src="images/MCI_genlin/LR_weight_cv_model02.png" alt="LRw" style="max-width: 100%; height: auto;">
</a>
</div>
<p>Fig.4 ROC AUC curve for the Logistic Regression model with class weights. The curve illustrates the model's ability to distinguish between 'MCI-high' and 'MCI-low' classes, with an average ROC AUC score of 0.47 ± 0.13.</p>
<div style="display: flex; justify-content: center; align-items: center;">
<a href="images/MCI_genlin/SVC_rbf_cv_model02.png" style="max-width: 100%; height: auto;">
<img src="images/MCI_genlin/SVC_rbf_cv_model02.png" alt="SVC rbf" style="max-width: 100%; height: auto;">
</a>
</div>
<p>Fig.5 ROC AUC curve for the SVM model with an RBF kernel. The curve illustrates the model's ability to distinguish between 'MCI-high' and 'MCI-low' classes, with an average ROC AUC score of 0.45 ± 0.06.</p>
<div style="display: flex; justify-content: center; align-items: center;">
<a href="images/MCI_genlin/XGB_cv_model02.png" style="max-width: 100%; height: auto;">
<img src="images/MCI_genlin/XGB_cv_model02.png" alt="XGB" style="max-width: 100%; height: auto;">
</a>
</div>
<p>Fig.6 ROC AUC curve for the XGBoost model. The curve illustrates the model's ability to distinguish between 'MCI-high' and 'MCI-low' classes, with an average ROC AUC score of 0.43 ± 0.06.</p>
<h3 id="key-insights">Key Insights </h3>
<p>Analysis has revealed significant insights into the classification of Mild Cognitive Impairment (MCI) severity using EEG data and neuropsychological scores. Key findings include:</p>
<ul>
<li>A pronounced imbalance in the dataset, with 'MCI-low' cases significantly outnumbering 'MCI-high' cases (30912 to 12600). This imbalance presents a substantial challenge for machine learning models, potentially biasing them towards the majority class.</li>
<li>Machine learning models, including Logistic Regression, SVM, and XGBoost, demonstrated variable effectiveness across different metrics. Notably, models tended to perform better in identifying 'MCI-low' cases compared to 'MCI-high' cases.</li>
<li>The variability in model performance underscores the complexity of accurately classifying MCI severity levels, highlighting the need for advanced feature engineering and model optimization strategies.</li>
</ul>
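<p>The class-weighting remedy used for the weighted models can be illustrated with <code>compute_class_weight</code> from <code>sklearn</code>, applied to the segment counts reported here; <code>class_weight='balanced'</code> applies the same formula inside the classifiers:</p>
<pre class="language-python"><code class="language-python">import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Class counts from the report: 30912 'MCI-low' vs 12600 'MCI-high'.
y = np.array(["MCI-low"] * 30912 + ["MCI-high"] * 12600)

# 'balanced' weights each class by n_samples / (n_classes * class_count),
# so the minority class receives the larger weight.
classes = np.array(["MCI-high", "MCI-low"])
weights = compute_class_weight("balanced", classes=classes, y=y)
for c, w in zip(classes, weights):
    print(f"{c}: {w:.3f}")
</code></pre>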
<h3 id="discussion">Discussion </h3>
<p>The endeavor to predict MCI severity using advanced feature engineering and machine learning models has encountered notable challenges, chief among them the dataset's imbalance. The predominance of 'MCI-low' cases over 'MCI-high' cases (30912 vs. 12600) underscores the necessity for careful model training and evaluation to mitigate potential biases. The influence of segment length and of the number of segments per subject was also observed.</p>
<p>The variability in model performance, particularly the models' propensity to better identify 'MCI-low' cases, suggests a need for a nuanced approach in model selection and optimization. While models like XGBoost and SVM showed potential, their effectiveness was inconsistent across classifications, necessitating a critical evaluation of both the modeling strategies employed and the inherent dataset characteristics.</p>
<p>Moreover, the challenge in accurately identifying 'MCI-high' cases calls for further investigation into feature engineering techniques and the incorporation of more sophisticated modeling approaches. This might include exploring multivariate analysis methods, which could offer more robust and balanced classification frameworks without the need for deep learning models at this stage.</p>
<h3 id="conclusion">Conclusion </h3>
<p>The analysis presented in this report highlights the nuanced potential of utilizing machine learning models to understand cognitive impairment through EEG data and neuropsychological assessments. The findings suggest a reevaluation of model selection and optimization strategies, particularly in addressing dataset imbalances and enhancing model sensitivity across varied impairment levels.</p>
<p>The challenges observed in distinguishing between 'MCI-high' and 'MCI-low' cases emphasize the need for more sophisticated approaches to feature engineering, model tuning, and the exploration of advanced analytical techniques. Moving forward, there is a clear avenue for further research and model refinement, with the ultimate goal of developing a balanced and effective classification framework capable of contributing to early detection and intervention strategies for cognitive impairments.</p>
<hr>
<h4 id="references">References </h4>
<ol>
<li>Ioulietta Lazarou, Kostas Georgiadis, Spiros Nikolopoulos, Vangelis Oikonomou and Ioannis Kompatsiaris, "Resting-State High-Density EEG using EGI GES 300 with 256 Channels of Healthy Elders, People with Subjective and Mild Cognitive Impairment and Alzheimer's Disease", Brain Sciences MDPI, vol. 10, no. 6. Zenodo, p. 392, Dec. 23, 2020. doi: 10.5281/zenodo.4316608.</li>
<li>Kounti, F., Tsolaki, M. & Kiosseoglou, G. Functional cognitive assessment scale (FUCAS): a new scale to assess executive cognitive function in daily life activities in patients with dementia and mild cognitive impairment. <a href="https://www.academia.edu/17238224/Functional_cognitive_assessment_scale_FUCAS_a_new_scale_to_assess_executive_cognitive_function_in_daily_life_activities_in_patients_with_dementia_and_mild_cognitive_impairment">https://www.academia.edu/17238224/Functional_cognitive_assessment_scale_FUCAS_a_new_scale_to_assess_executive_cognitive_function_in_daily_life_activities_in_patients_with_dementia_and_mild_cognitive_impairment</a> (2006).</li>
<li>Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A. Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, and Matti S. Hämäläinen. MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 7(267):1–13, 2013. doi:10.3389/fnins.2013.00267.</li>
<li><a href="https://mne.tools/mne-icalabel/stable/index.html#">https://mne.tools/mne-icalabel/stable/index.html#</a></li>
<li>Vallat, Raphael, and Matthew P. Walker. "An open-source, high-performance tool for automated sleep staging." Elife 10 (2021). doi: <a href="https://doi.org/10.7554/eLife.70092">https://doi.org/10.7554/eLife.70092</a></li>
<li><a href="https://github.com/raphaelvallat/yasa">https://github.com/raphaelvallat/yasa</a></li>
<li>Thomas A Caswell, "matplotlib/matplotlib: REL: v3.7.4". Zenodo, Nov. 18, 2023. doi: 10.5281/zenodo.10152802.</li>
<li>Waskom, M. L., (2021). seaborn: statistical data visualization. Journal of Open Source Software, 6(60), 3021, <a href="https://doi.org/10.21105/joss.03021">https://doi.org/10.21105/joss.03021</a>.</li>
<li>The pandas development team, "pandas-dev/pandas: Pandas". Zenodo, Jan. 20, 2024. doi: 10.5281/zenodo.10537285.</li>
</ol>
<hr>
<h6 id="author-łukasz-furmancracernetgmailcom">Author: <a href="mailto:[email protected]">Łukasz Furman</a> </h6>
</div>
</body></html>