<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Ruiqiang Xiao's Personal Webpage</title>
<link>https://keeplearning-again.github.io/</link>
<atom:link href="https://keeplearning-again.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>Ruiqiang Xiao's Personal Webpage</description>
<generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Mon, 24 Oct 2022 00:00:00 +0000</lastBuildDate>
<image>
<url>https://keeplearning-again.github.io/media/icon_hud72a807ca50d008ad851764b48a793ed_213136_512x512_fill_lanczos_center_3.png</url>
<title>Ruiqiang Xiao's Personal Webpage</title>
<link>https://keeplearning-again.github.io/</link>
</image>
<item>
<title>Example Talk</title>
<link>https://keeplearning-again.github.io/talk/example-talk/</link>
<pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/talk/example-talk/</guid>
<description><div class="alert alert-note">
<div>
Click on the <strong>Slides</strong> button above to view the built-in slides feature.
</div>
</div>
<p>Slides can be added in a few ways:</p>
<ul>
<li><strong>Create</strong> slides using Wowchemy&rsquo;s <a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener"><em>Slides</em></a> feature and link using <code>slides</code> parameter in the front matter of the talk file</li>
<li><strong>Upload</strong> an existing slide deck to <code>static/</code> and link using <code>url_slides</code> parameter in the front matter of the talk file</li>
<li><strong>Embed</strong> your slides (e.g. Google Slides) or presentation video on this page using <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes</a>.</li>
</ul>
<p>Further event details, including <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">page elements</a> such as image galleries, can be added to the body of this page.</p>
</description>
</item>
<item>
<title>Basic concepts of differential privacy in reinforcement learning</title>
<link>https://keeplearning-again.github.io/post/20230326_differencial_privacy/</link>
<pubDate>Sat, 25 Mar 2023 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/20230326_differencial_privacy/</guid>
<description><p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202303261817421.png" alt="DP_Page1" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202303261817062.png" alt="DP_Page2" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
</description>
</item>
<item>
<title>Some interesting findings of attention mechanism history</title>
<link>https://keeplearning-again.github.io/post/attention%E6%9C%BA%E5%88%B6%E6%BA%AF%E6%BA%90/</link>
<pubDate>Fri, 24 Mar 2023 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/attention%E6%9C%BA%E5%88%B6%E6%BA%AF%E6%BA%90/</guid>
<description><h2 id="明确chatgpt能解决问题的边界">明确ChatGPT能解决问题的边界</h2>
</description>
</item>
<item>
<title>Typora + Markdown Quick Start</title>
<link>https://keeplearning-again.github.io/post/markdown+typora/</link>
<pubDate>Fri, 23 Dec 2022 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/markdown+typora/</guid>
<description><h2 id="typoramarkdown快速入门">Typora+Markdown快速入门</h2>
<hr>
<h2 id="why-markdown">Why Markdown?</h2>
<ul>
<li>支持$\LaTeX$语法、代码高亮、公式编辑块</li>
<li>自定义主题设定</li>
<li>自动排版、大纲面板可视化</li>
<li>实时预览</li>
<li>“打字模式+专注模式” yyds!</li>
</ul>
<hr>
<h2 id="各种快捷用法">各种快捷用法</h2>
<h3 id="标题分级">标题分级</h3>
<hr>
<h3 id="列表">列表</h3>
<ul>
<li>无序列表 (<code>'-'</code>+ <code>'space'</code>)</li>
</ul>
<ol>
<li>有序列表 ( <code>Number</code> + <code>.</code>+ <code>'space'</code>)</li>
</ol>
<ul>
<li><input disabled="" type="checkbox"> 任务列表 (已经设置好的快捷键 ctrl + shift + x)</li>
</ul>
<hr>
<h3 id="公式">公式</h3>
<ol>
<li>行内公式 $$</li>
<li>行间公式 (先输入$$ 然后enter)</li>
</ol>
<p>$$
F = ma
$$</p>
<hr>
<h3 id="代码块">代码块</h3>
<p><code>``` + enter</code></p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">5</span><span class="p">):</span>
</span></span><span class="line"><span class="cl"> <span class="k">pass</span>
</span></span></code></pre></div><hr>
<h3 id="文本格式">文本格式</h3>
<ul>
<li><code>**加粗**</code> &ndash;&gt; <strong>加粗</strong></li>
<li><code>*斜体*</code> &ndash;&gt; <em>斜体</em></li>
<li><u>下划线</u> &ndash;&gt; 快捷键 Ctrl + U</li>
<li><code>==高亮==</code> &ndash;&gt; ==高亮==</li>
</ul>
<hr>
<h3 id="生成思维导图">生成思维导图</h3>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011638203.png" alt="image-20230201163841129" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<hr>
<h3 id="页内跳转">页内跳转</h3>
<p><code>[try](#公式) </code> 公式必须是前面有的模块</p>
<p><a href="#%e5%85%ac%e5%bc%8f">try</a></p>
<hr>
<h3 id="引用">引用</h3>
<blockquote>
<p><code>&gt;</code></p>
</blockquote>
<hr>
<h3 id="插入视频">插入视频</h3>
<iframe src="//player.bilibili.com/player.html?aid=204834300&bvid=BV1gh411D753&cid=314330519&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
<p>Copy the embed code from below the video and paste it in directly.</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011715496.png" alt="image-20230201171538234" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<hr>
<h3 id="隐藏">隐藏</h3>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">details</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">summary</span><span class="p">&gt;</span>点击查看详细内容<span class="p">&lt;/</span><span class="nt">summary</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">展开的内容
</span></span><span class="line"><span class="cl"><span class="p">&lt;/</span><span class="nt">details</span><span class="p">&gt;</span>
</span></span></code></pre></div><details>
<summary>Click to view details</summary>
Expanded content
</details>
</description>
</item>
<item>
<title>Computer vision basic tasks</title>
<link>https://keeplearning-again.github.io/post/%E8%AE%A1%E7%AE%97%E6%9C%BA%E8%A7%86%E8%A7%89%E4%BB%BB%E5%8A%A1/</link>
<pubDate>Tue, 13 Dec 2022 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/%E8%AE%A1%E7%AE%97%E6%9C%BA%E8%A7%86%E8%A7%89%E4%BB%BB%E5%8A%A1/</guid>
<description><h2 id="计算机视觉任务">计算机视觉任务</h2>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021825208.png" alt="image-20230202182528109" style="zoom:67%;" />
<p>The main categories are:</p>
<ul>
<li>Input a full image &ndash;&gt; single label (single object) &ndash;&gt; ==<strong>classification</strong>==</li>
<li>Input a full image &ndash;&gt; single object + anchor &ndash;&gt; ==<strong>classification + localization</strong>==</li>
<li>Input a full image &ndash;&gt; multiple object anchors &ndash;&gt; ==<strong>object detection</strong>==</li>
<li>Input a full image &ndash;&gt; pixel-level labels for each object &ndash;&gt; ==<strong>segmentation</strong>==
<ul>
<li>
<ol>
<li>semantic segmentation &ndash;&gt; give every pixel a label</li>
<li>instance segmentation &ndash;&gt; distinguish each individual object</li>
</ol>
</li>
</ul>
</li>
</ul>
<h2 id="机器学习和神经网络简介">机器学习和神经网络简介</h2>
<h3 id="机器学习的典型范式">机器学习的典型范式</h3>
<ul>
<li>监督学习</li>
<li>无监督学习</li>
<li>自监督学习</li>
<li>强化学习</li>
</ul>
<h3 id="交叉熵损失函数----极大似然估计">交叉熵损失函数 &ndash; 极大似然估计</h3>
<p>对于预测的类别概率$P \in [0, 1]^K$和类别真值$y \in \left[1, \cdots, K \right]$,定义交叉熵损失为:
$$
L(P, y) = - logP_y
$$
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302022341081.png" alt="image-20230202234122934" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
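<p>A minimal numerical sketch of this loss (assuming NumPy and a made-up predicted distribution):</p>
<pre><code class="language-python">import numpy as np

# Hypothetical predicted class probabilities P (one softmax output)
# and a ground-truth class index y.
P = np.array([0.1, 0.7, 0.2])
y = 1
loss = -np.log(P[y])   # L(P, y) = -log P_y
print(loss)            # about 0.357
</code></pre>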
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302022343225.png" alt="image-20230202234321173" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="反向传播算法">反向传播算法</h3>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302022344490.png" alt="image-20230202234418393" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h2 id="推荐热门ai研究方向">推荐热门AI研究方向</h2>
<ol>
<li>
<p>人工智能的可解释性分析、显著性分析</p>
</li>
<li>
<p>图机器学习、图神经网络(AlphaFold2)、知识图谱</p>
</li>
<li>
<p>人工智能+VR/AR/数字人/元宇宙</p>
</li>
<li>
<p>轻量化压缩部署:Web前端、智能手机、服务器、嵌入式硬件</p>
</li>
<li>
<p>Al4Science:天文、物理、蛋白质预测、药物设计、数学证明</p>
</li>
<li>
<p>做各行各业垂直细分领域的人工智能应用</p>
</li>
<li>
<p>神经辐射场(NERF)</p>
</li>
<li>
<p>扩散生成模型(Diffusion)、AIGC、跨模态预训练大模型</p>
</li>
<li>
<p>隐私计算、联邦学习、可信计算</p>
</li>
<li>
<p>AI基础设施平台(数据、算力、教学、开源、算法工具包)</p>
</li>
<li>
<p>认知科学+类脑计算+计算神经科学</p>
</li>
</ol>
</description>
</item>
<item>
<title>Some ChatGPT findings</title>
<link>https://keeplearning-again.github.io/post/chatgpt%E6%80%BB%E7%BB%93/</link>
<pubDate>Mon, 15 Aug 2022 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/chatgpt%E6%80%BB%E7%BB%93/</guid>
<description><h2 id="明确chatgpt能解决问题的边界">明确ChatGPT能解决问题的边界</h2>
<div class="mermaid">graph TD;
世界上有很多问题其中只有一小部分是数学问题 --> 在数学问题中只有一小部分是有解的 --> 在有解的问题中只有一部分是理想状态的图灵机通过有限步骤可以解决的 --> 在后一类的问题中又只有一部分是今天实际的计算机可以解决的 --> 而人工智能可以解决的问题又只是计算机可以解决问题的一部分
</div>
<blockquote>
<p>So we know we cannot ask ChatGPT to directly predict whether the Earth will be destroyed tomorrow; even humans cannot give that answer</p>
</blockquote>
<h2 id="基本原理">Basic principles</h2>
<p>In essence it is a large-scale language model that outputs results according to probabilities</p>
<p>Its output is derived from the data it was trained on and from past labeled data</p>
<h2 id="能做什么呢">What can it do?</h2>
<h3 id="选择一个主题">Choosing a topic</h3>
<p>I can help you choose a topic that is both interesting and relevant to your research field.</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211610876.png" alt="image-20230221161003710" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211610456.png" alt="image-20230221161028376" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="制定提纲">制定提纲</h3>
<p>我可以帮助你创建一个大纲,明确界定你的论文结构,包括导言、正文和结论。</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211612172.png" alt="image-20230221161217119" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="进行研究">进行研究</h3>
<p>我可以提供进行研究的提示和资源,并寻找可靠的来源来支持你的论点。</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211610537.png" alt="image-20230221161046474" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211611738.png" alt="image-20230221161103655" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211611833.png" alt="image-20230221161131752" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211611949.png" alt="image-20230221161154883" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="撰写论文">撰写论文</h3>
<p>我可以就如何写出清晰、简明、有条理的句子和段落,有效地传达你的观点提供建议。</p>
<h3 id="修改和编辑">修改和编辑</h3>
<p>我可以就如何修改和编辑你的论文提供反馈,以确保它没有错误并符合学术标准。</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211613104.png" alt="image-20230221161329025" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="引用资料">引用资料</h3>
<p>我可以指导你如何在论文中正确引用资料,包括使用MLA帮你写作这用方式。</p>
<p><em><strong>参考文献全是假的。</strong></em></p>
<h3 id="代码实现">代码实现</h3>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211614495.png" alt="image-20230221161354493" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302211614480.png" alt="image-20230221161431442" loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h2 id="注意事项那个">注意事项那个</h2>
<ol>
<li>尽量用英文提问</li>
<li>提高归纳概括的能力</li>
<li>AI知识助手</li>
<li>谨慎乐观的使用</li>
<li>数据有泄漏风险</li>
</ol>
</description>
</item>
<item>
<title>Self-attention mechanism and Transformers</title>
<link>https://keeplearning-again.github.io/post/transformer/</link>
<pubDate>Fri, 12 Aug 2022 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/transformer/</guid>
<description><h2 id="1-明确输入输出">1. 明确输入输出</h2>
<h3 id="将输入转化为vector">将输入转化为vector</h3>
<ol>
<li><strong>文本翻译</strong> &ndash; 将词汇转化成same length vector(word2vector,one-hot coding)</li>
<li><strong>信号处理</strong> &ndash; 生成一段时长的window 将其中内容转化为一个frame vector 然后跳一个gap向前</li>
<li><strong>图(graph)</strong> &ndash; 每一个node的信息汇总成vector vertices单独拎出来和后面attention matrix结合</li>
<li><strong>图像</strong> &ndash; 每一个pixel有三通道的向量 <strong>DETR</strong></li>
</ol>
<h3 id="明确输出类型">明确输出类型</h3>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011214326.png" alt="image-20230201121424237" style="zoom:50%;" />
<p>Examples: part-of-speech (POS) tagging / sentiment analysis of a sentence / letting the model choose the output length itself</p>
<hr>
<h2 id="第一类问题sequence-labeling">第一类问题(Sequence Labeling)</h2>
<h3 id="1-面对的问题">1. 面对的问题</h3>
<h4 id="1给一个sequence-如何生成corresponding-sequence--labels">(1)给一个sequence 如何生成corresponding sequence / labels</h4>
<p>如果使用单一的fully-connected layers,那么给定的sequence长度随着时间变化,则无法做到泛化</p>
<p>因此我们需要一个flexible的方法 不用限定sequence的长度</p>
<h4 id="2如何体现sequence的上下文信息对当前vector的output的影响">(2)如何体现sequence的上下文信息对当前vector的output的影响</h4>
<p>双向RNN能够体现上下文 用一个window将需要考虑的相邻向量容纳进去 但是无法平行计算(parallel &ndash; speedup)</p>
<h3 id="2-self-attention">2. Self-attention</h3>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011215795.png" alt="image-20230201121536700" style="zoom:50%;" />
<h4 id="1-解决的方法">(1) 解决的方法</h4>
<h5 id="1-attention-score-alpha----体现上下文信息">1. Attention Score $\alpha$ &ndash; 体现上下文信息</h5>
<p>每个input向量之间会计算一个attention score 用来表示相似性 (计算方法有很多例如dot-product / addictive)</p>
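<p>A toy dot-product score between two made-up input vectors, just to make the idea concrete:</p>
<pre><code class="language-python">import numpy as np

# two toy input vectors (made-up values)
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.5, 1.0, 1.0])
alpha = float(a1 @ a2)   # dot-product attention score
print(alpha)             # 2.5
</code></pre>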
<h5 id="2-引入positional-encoding">2. 引入positional encoding</h5>
<p>针对每一个attention layer 无论sequence的顺序如何,他都不影响结果输出,这表示<strong>no position information in self-attention</strong> &ndash; 天涯若比邻 每个向量之间的距离都是“一样的”</p>
<p><em><strong>引入unique positional vector $e^i$</strong></em></p>
<p><a href="https://arxiv.org/abs/2003.09229" target="_blank" rel="noopener">Positional encoding article &ndash; Learning to Encode Position for Transformer with Continuous Dynamical Model</a></p>
<h4 id="2-流程">(2) 流程</h4>
<h5 id="1-生成每一个向量的query-key-value">1. 生成每一个向量的Query Key Value</h5>
<p>每一层attention layer共享一个Query Key Value matrix $$W^q \ W_k \ W_v$$</p>
<p>引入三个变量是为了引入更多可学习的参数,但是共享参数是为了减少训练量</p>
<h5 id="2-q-k-组合生成-alpha--soft-max-后生成-alphaprime">2. Q K 组合生成 $\alpha$ Soft-max 后生成 $\alpha^{\prime}$</h5>
<p>下图为第一个向量与sequence中所有向量的attention score计算示意图</p>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011215171.png" alt="image-20230131095641605" style="zoom:50%;" />
<h5 id="3-extract-information-based-on-attention-scores">3. Extract information based on attention scores</h5>
<p>$$
\boldsymbol{b}^{\mathbf{1}}=\sum_{i} \alpha_{1, i}^{\prime} \boldsymbol{v}^{i}
$$</p>
<h4 id="3-matrix-representation">(3) Matrix representation</h4>
<div align=center><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011215100.png" alt="image-20230131100815558" style="zoom:50%;" />
</div>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011216960.png" alt="image-20230131101707040" style="zoom:50%;" />
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011216302.png" alt="image-20230131101738724" style="zoom:50%;" />
$$
\begin{aligned}
Q &= w^q · I \\
K &= w^k · I \\
V &= w^v · I \\
A^{\prime} = softmax(A) &= softmax(K^T Q) \\
O &= V A^{\prime}
\end{aligned}
$$
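<p>A small NumPy sketch of the matrix equations above, keeping the column-vector convention ($I$ holds the inputs as columns; sizes and weights are made up):</p>
<pre><code class="language-python">import numpy as np

def softmax(A, axis=0):
    e = np.exp(A - A.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d, n = 4, 3                        # toy feature dimension and sequence length
rng = np.random.default_rng(0)
I = rng.normal(size=(d, n))        # input vectors stacked as columns
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))

Q, K, V = Wq @ I, Wk @ I, Wv @ I   # Q = W^q I, K = W^k I, V = W^v I
A = K.T @ Q                        # attention scores, A = K^T Q
A_prime = softmax(A, axis=0)       # A' = softmax(A), column-wise
O = V @ A_prime                    # O = V A'
print(O.shape)                     # (4, 3)
</code></pre>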
<h3 id="3-multi-head-self-attention">3. Multi-head Self-attention</h3>
<p>different types of relevance</p>
<h3 id="4-self-attention-vs-cnn">4. Self-attention VS CNN</h3>
<p>CNN: self-attention that can only attend within a receptive field</p>
<p>self-attention: CNN with learnable receptive field (complex version of CNN)</p>
<p><a href="https://arxiv.org/abs/1911.03584" target="_blank" rel="noopener">On the Relationship between Self-Attention and Convolutional Layers</a></p>
<p>On small datasets, a CNN is smaller and more efficient; on large datasets, self-attention can usually find more connections</p>
<h3 id="5-self-attention-vs-rnn">5. Self-attention VS RNN</h3>
<p>An RNN&rsquo;s step-by-step recurrence makes long-range dependencies hard to take into account, and since it is not a parallel architecture it cannot be sped up</p>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302011216327.png" alt="image-20230131111746795" style="zoom:50%;" />
<p><a href="https://arxiv.org/abs/2006.16236" target="_blank" rel="noopener">Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention</a></p>
<h3 id="6-综述文章">6. 综述文章</h3>
<p><a href="https://arxiv.org/abs/2009.06732" target="_blank" rel="noopener">Efficient Transformers: A Survey</a></p>
<p><a href="https://arxiv.org/abs/2011.04006" target="_blank" rel="noopener">Long Range Arena: A Benchmark for Efficient Transformers</a></p>
<hr>
<h2 id="transformers">Transformers</h2>
<hr>
<h2 id="sequence-to-sequenceseq2seq">Sequence-to-sequence(seq2seq)</h2>
<blockquote>
<p>Input a sequence, output a sequence.</p>
<p>The output length is determined by model.</p>
</blockquote>
<p><a href="https://arxiv.org/abs/2005.12872" target="_blank" rel="noopener">seq2seq for object detection &ndash; End-to-End Object Detection with Transformers</a></p>
<div align=center><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302020934811.png" alt="image-20230202093420708" style="zoom:33%;" />
</div>
<h2 id="transformers-encoder----self-attention">Transformer&rsquo;s Encoder &ndash; Self-attention</h2>
<blockquote>
<p>The encoder&rsquo;s core task is to turn a sequence of vectors into a sequence of vectors of the same length</p>
</blockquote>
<div align=center>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302020943672.png" alt="image-20230202094355618" style="zoom:50%;" />
</div>
<h3 id="每一个block的处理顺序">每一个block的处理顺序</h3>
<ol>
<li>
<p>self-attention</p>
</li>
<li>
<p>residual connection (to prevent vanishing gradients)</p>
</li>
</ol>
<blockquote>
<p>Why does a residual block prevent vanishing gradients?</p>
<div align=center>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021804498.png" alt="image-20230202180404456" style="zoom:80%;" />
</div>
<p>By the chain rule of backpropagation,</p>
</blockquote>
<p>$$
\frac{\partial L}{\partial X_{\text{Aout}}} = \frac{\partial L}{\partial X_{\text{Din}}} \cdot \frac{\partial X_{\text{Din}}}{\partial X_{\text{Aout}}}
$$
$$
\because X_{\text{Din}} = X_{\text{Aout}} + C(B(X_{\text{Aout}}))
$$
$$
\frac{\partial L}{\partial X_{\text{Aout}}} = \frac{\partial L}{\partial X_{\text{Din}}} \cdot \left[ 1 + \frac{\partial X_{\text{Din}}}{\partial X_{\text{Cout}}} \frac{\partial X_{\text{Cout}}}{\partial X_{\text{Bout}}} \frac{\partial X_{\text{Bout}}}{\partial X_{\text{Aout}}}\right]
$$
$$
= \frac{\partial L}{\partial X_{\text{Din}}} + \frac{\partial L}{\partial X_{\text{Din}}}\frac{\partial X_{\text{Din}}}{\partial X_{\text{Cout}}} \frac{\partial X_{\text{Cout}}}{\partial X_{\text{Bout}}} \frac{\partial X_{\text{Bout}}}{\partial X_{\text{Aout}}}
$$
$$
= \frac{\partial L}{\partial X_{\text{Din}}} + \text {original loss gradient}
$$</p>
<blockquote>
<p>Each of these derivatives is usually less than 1, so the original gradient tends to shrink as it is multiplied backwards and can vanish before it reaches the front layers, leaving them unable to update</p>
</blockquote>
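<p>A tiny sketch of the identity path that carries this extra gradient term (plain NumPy; the sublayer is just a stand-in for self-attention or the feed-forward block):</p>
<pre><code class="language-python">import numpy as np

def residual_block(x, sublayer):
    # output = x + sublayer(x); the identity path "+ x" is what adds the
    # extra "1" to the gradient in the derivation above, so the gradient
    # reaching earlier layers never has to pass only through small factors
    return x + sublayer(x)

x = np.ones(4)
print(residual_block(x, lambda v: 0.1 * v))   # [1.1 1.1 1.1 1.1]
</code></pre>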
<ol start="3">
<li>norm (layer normalization): compute the mean and standard deviation over the different dimensions of the same vector</li>
</ol>
<p>$$
\begin{bmatrix}
x_1 &amp; x_2 &amp; \cdots &amp; x_K
\end{bmatrix}^{T}
\rightarrow
\begin{bmatrix}
x_1^{\prime} &amp; x_2^{\prime} &amp; \cdots &amp; x_K^{\prime}
\end{bmatrix}^{T}
$$</p>
<blockquote>
<p>batch normalization instead computes the mean and variance of the same dimension across different samples (the $\mu$ of $x_i$ across different vectors)</p>
</blockquote>
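<p>A minimal layer-normalization sketch over one vector (the learnable gain and bias used in practice are omitted):</p>
<pre><code class="language-python">import numpy as np

def layer_norm(x, eps=1e-5):
    # mean and standard deviation over the dimensions of a single vector
    mu, sigma = x.mean(), x.std()
    return (x - mu) / (sigma + eps)

x = np.array([1.0, 2.0, 4.0, 8.0])
print(layer_norm(x))   # roughly zero mean, unit variance across the 4 dims
</code></pre>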
<p>[A detailed illustration of the encoder follows](#Transformer&rsquo;s Encoder &ndash; Self-attention)</p>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302020956666.png" alt="image-20230202095606545" style="zoom:50%;" />
<h3 id="encoder架构的变形">Encoder架构的变形</h3>
<ul>
<li>[<a href="https://arxiv.org/abs/2002.04745" target="_blank" rel="noopener">2002.04745] On Layer Normalization in the Transformer Architecture (arxiv.org)</a></li>
<li>[<a href="https://arxiv.org/abs/2003.07845" target="_blank" rel="noopener">2003.07845] PowerNorm: Rethinking Batch Normalization in Transformers (arxiv.org)</a></li>
</ul>
<h2 id="transformers-decoder">Transformer&rsquo;s decoder</h2>
<h3 id="输出方式----autoregressive-和-non-autoregressive">输出方式 &ndash; Autoregressive 和 non-autoregressive</h3>
<ul>
<li>将decoder的sequence vectors输入decoder</li>
<li>autoregressive &ndash; 决定了decoder的输出方式(AT)</li>
</ul>
<blockquote>
<ol>
<li>First, vectors are fed into the decoder</li>
<li>A begin-of-sentence token is input (usually a one-hot coded vector)</li>
<li>The first output is a vector whose length equals the size of the output vocabulary; it is passed through a softmax to get a distribution, and the maximum is taken</li>
<li>That first output is then used as a new input (converted to one-hot coding), so the decoder&rsquo;s inputs are now the BOS token plus the previous output</li>
<li>A new output is produced</li>
<li>And so on, as in the sketch below</li>
</ol>
</blockquote>
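<p>A minimal sketch of this greedy autoregressive loop; <code>decoder_step</code> is a hypothetical stand-in for the trained decoder:</p>
<pre><code class="language-python">import numpy as np

def greedy_decode(decoder_step, bos_id, eos_id, max_len=20):
    # decoder_step(tokens) is assumed to return a probability
    # distribution over the vocabulary for the next token.
    tokens = [bos_id]
    for _ in range(max_len):
        probs = decoder_step(tokens)
        next_id = int(np.argmax(probs))   # take the most likely token
        tokens.append(next_id)
        if next_id == eos_id:             # stop at end-of-sentence
            break
    return tokens
</code></pre>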
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021015182.png" alt="image-20230202101556039" style="zoom:50%;" />
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021020250.png" alt="image-20230202102046116" style="zoom:50%;" />
<h3 id="decoder-本身架构">Decoder 本身架构</h3>
<ul>
<li>抛去中间的block 可以看到就是与decoder 相似的主体结构 唯一不同的是masked self-attention,很好理解不能剧透后面的部分。(类似于RNN)</li>
</ul>
<div align=center><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021639103.png" alt="image-20230202163936914" style="zoom:50%;" />
</div>
<blockquote>
<p>e.g.</p>
<div align=center><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021641078.png" alt="image-20230202164134997" style="zoom: 33%;" />
</div>
</blockquote>
<h3 id="encoder和decoder如何交互">Encoder和decoder如何交互?</h3>
<div align=center><img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021655592.png" alt="image-20230202165550501" style="zoom:45%;" />
</div>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021700358.png" alt="image-20230202170044216" style="zoom: 67%;" />
<ul>
<li>The cross-attention layer can take different forms</li>
</ul>
<blockquote>
<p>[<a href="https://arxiv.org/abs/2005.08081" target="_blank" rel="noopener">2005.08081] Rethinking and Improving Natural Language Generation with Layer-Wise Multi-View Decoding (arxiv.org)</a></p>
</blockquote>
<h2 id="training">Training</h2>
<p><em><strong>Loss: the cross-entropy between each token&rsquo;s ground truth and the decoder&rsquo;s inferred result</strong></em></p>
<h3 id="teacher-forcing">Teacher Forcing</h3>
<p>To train the decoder better and keep errors from earlier steps from propagating, the correct (ground-truth) tokens are fed into the decoder&rsquo;s input to see whether it can then produce the correct outputs</p>
<img src="https://raw.githubusercontent.com/keeplearning-again/Typora_blog_images/main/blog/202302021749300.png" alt="image-20230202174931088" style="zoom: 50%;" />
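<p>A minimal sketch of the teacher-forcing loss: the ground-truth prefix is assumed to have been fed to the decoder, and the cross-entropy is averaged over the steps (function and argument names are made up):</p>
<pre><code class="language-python">import numpy as np

def teacher_forcing_loss(step_distributions, target_ids):
    # step_distributions[t]: the decoder's predicted distribution at step t,
    # computed with the ground-truth tokens target_ids[:t] fed as inputs.
    # The loss averages the cross-entropy against each ground-truth token.
    return -float(np.mean([np.log(step_distributions[t][y])
                           for t, y in enumerate(target_ids)]))
</code></pre>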
<h3 id="scheduled-sampling">Scheduled Sampling</h3>
<p>To avoid poor generalization, some deliberate mistakes are placed in the decoder&rsquo;s inputs during training (the idea is somewhat similar to dropout in neural networks)</p>
<blockquote>
<p>[<a href="https://arxiv.org/abs/1906.07651" target="_blank" rel="noopener">1906.07651] Scheduled Sampling for Transformers (arxiv.org)</a></p>
<p>[<a href="https://arxiv.org/abs/1906.04331" target="_blank" rel="noopener">1906.04331] Parallel Scheduled Sampling (arxiv.org)</a></p>
</blockquote>
<h2 id="tips">Tips</h2>
<ol>
<li>
<p>copy mechanism</p>
</li>
<li>
<p>guided attention?</p>
</li>
<li>
<p>Beam Search?</p>
</li>
<li>
<p>Optimizing evaluation metrics? &ndash; BLEU score</p>
<ol>
<li>It can only be used for evaluation at test time, not as the training loss, because it is non-differentiable and cannot be back-propagated</li>
<li>when you don&rsquo;t know how to optimize, just use reinforcement learning!!!</li>
</ol>
<p>Treat the BLEU score as the reward and the decoder as the agent</p>
<blockquote>
<p>[<a href="https://arxiv.org/abs/1511.06732" target="_blank" rel="noopener">1511.06732] Sequence Level Training with Recurrent Neural Networks (arxiv.org)</a></p>
</blockquote>
</li>
</ol>
<hr>
<p>Image source:</p>
<p><a href="https://www.bilibili.com/video/BV1v3411r78R?p=1&amp;vd_source=0da602efaef9a75c3e62c481d182f95c" target="_blank" rel="noopener">10.【李宏毅机器学习2021】自注意力机制 (Self-attention) (上)_哔哩哔哩_bilibili</a></p>
</description>
</item>
<item>
<title>Writing technical content in Markdown</title>
<link>https://keeplearning-again.github.io/post/writing-technical-content/</link>
<pubDate>Tue, 12 Jul 2022 00:00:00 +0000</pubDate>
<guid>https://keeplearning-again.github.io/post/writing-technical-content/</guid>
<description><p>Wowchemy is designed to give technical content creators a seamless experience. You can focus on the content and Wowchemy handles the rest.</p>
<p><strong>Highlight your code snippets, take notes on math classes, and draw diagrams from textual representation.</strong></p>
<p>On this page, you&rsquo;ll find some examples of the types of technical content that can be rendered with Wowchemy.</p>
<h2 id="examples">Examples</h2>
<h3 id="code">Code</h3>
<p>Wowchemy supports a Markdown extension for highlighting code syntax. You can customize the styles under the <code>syntax_highlighter</code> option in your <code>config/_default/params.yaml</code> file.</p>
<pre><code>```python
import pandas as pd
data = pd.read_csv(&quot;data.csv&quot;)
data.head()
```
</code></pre>
<p>renders as</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="nn">pd</span>
</span></span><span class="line"><span class="cl"><span class="n">data</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s2">&#34;data.csv&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="n">data</span><span class="o">.</span><span class="n">head</span><span class="p">()</span>
</span></span></code></pre></div><h3 id="mindmaps">Mindmaps</h3>
<p>Wowchemy supports a Markdown extension for mindmaps.</p>
<p>Simply insert a Markdown <code>markmap</code> code block and optionally set the height of the mindmap as shown in the example below.</p>
<p>A simple mindmap defined as a Markdown list:</p>
<div class="highlight">
<pre class="chroma">
<code>
```markmap {height="200px"}
- Hugo Modules
- wowchemy
- wowchemy-plugins-netlify
- wowchemy-plugins-netlify-cms
- wowchemy-plugins-reveal
```
</code>
</pre>
</div>
<p>renders as</p>
<div class="markmap" style="height: 200px;">
<pre>- Hugo Modules
- wowchemy
- wowchemy-plugins-netlify
- wowchemy-plugins-netlify-cms
- wowchemy-plugins-reveal</pre>
</div>
<p>A more advanced mindmap with formatting, code blocks, and math:</p>
<div class="highlight">
<pre class="chroma">
<code>
```markmap
- Mindmaps
- Links
- [Wowchemy Docs](https://wowchemy.com/docs/)
- [Discord Community](https://discord.gg/z8wNYzb)
- [GitHub](https://github.com/wowchemy/wowchemy-hugo-themes)
- Features
- Markdown formatting
- **inline** ~~text~~ *styles*
- multiline
text
- `inline code`
-
```js
console.log('hello');
console.log('code block');
```
- Math: $x = {-b \pm \sqrt{b^2-4ac} \over 2a}$
```
</code>
</pre>
</div>
<p>renders as</p>
<div class="markmap" style="height: 500px;">
<pre>- Mindmaps
- Links
- [Wowchemy Docs](https://wowchemy.com/docs/)
- [Discord Community](https://discord.gg/z8wNYzb)
- [GitHub](https://github.com/wowchemy/wowchemy-hugo-themes)
- Features
- Markdown formatting
- **inline** ~~text~~ *styles*
- multiline
text
- `inline code`
-
```js
console.log('hello');
console.log('code block');
```
- Math: $x = {-b \pm \sqrt{b^2-4ac} \over 2a}$</pre>
</div>
<h3 id="charts">Charts</h3>
<p>Wowchemy supports the popular <a href="https://plot.ly/" target="_blank" rel="noopener">Plotly</a> format for interactive charts.</p>
<p>Save your Plotly JSON in your page folder, for example <code>line-chart.json</code>, and then add the <code>{{&lt; chart data=&quot;line-chart&quot; &gt;}}</code> shortcode where you would like the chart to appear.</p>
<p>Demo:</p>
<div id="chart-716423958" class="chart"></div>
<script>
(function() {
let a = setInterval( function() {
if ( typeof window.Plotly === 'undefined' ) {
return;
}
clearInterval( a );
Plotly.d3.json("./line-chart.json", function(chart) {
Plotly.plot('chart-716423958', chart.data, chart.layout, {responsive: true});
});
}, 500 );
})();
</script>
<p>You might also find the <a href="http://plotly-json-editor.getforge.io/" target="_blank" rel="noopener">Plotly JSON Editor</a> useful.</p>
<h3 id="math">Math</h3>
<p>Wowchemy supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the <code>math</code> option in your <code>config/_default/params.yaml</code> file.</p>
<p>To render <em>inline</em> or <em>block</em> math, wrap your LaTeX math with <code>{{&lt; math &gt;}}$...${{&lt; /math &gt;}}</code> or <code>{{&lt; math &gt;}}$$...$${{&lt; /math &gt;}}</code>, respectively. (We wrap the LaTeX math in the Wowchemy <em>math</em> shortcode to prevent Hugo rendering our math as Markdown. The <em>math</em> shortcode is new in v5.5-dev.)</p>
<p>Example <strong>math block</strong>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-latex" data-lang="latex"><span class="line"><span class="cl"><span class="nb">{{</span>&lt; math &gt;<span class="nb">}}</span>
</span></span><span class="line"><span class="cl"><span class="sb">$$</span><span class="nb">
</span></span></span><span class="line"><span class="cl"><span class="nb"></span><span class="nv">\gamma</span><span class="nb">_{n} </span><span class="o">=</span><span class="nb"> </span><span class="nv">\frac</span><span class="nb">{ </span><span class="nv">\left</span><span class="nb"> | </span><span class="nv">\left</span><span class="nb"> </span><span class="o">(</span><span class="nv">\mathbf</span><span class="nb"> x_{n} </span><span class="o">-</span><span class="nb"> </span><span class="nv">\mathbf</span><span class="nb"> x_{n</span><span class="o">-</span><span class="m">1</span><span class="nb">} </span><span class="nv">\right</span><span class="nb"> </span><span class="o">)</span><span class="nb">^T </span><span class="nv">\left</span><span class="nb"> </span><span class="o">[</span><span class="nv">\nabla</span><span class="nb"> F </span><span class="o">(</span><span class="nv">\mathbf</span><span class="nb"> x_{n}</span><span class="o">)</span><span class="nb"> </span><span class="o">-</span><span class="nb"> </span><span class="nv">\nabla</span><span class="nb"> F </span><span class="o">(</span><span class="nv">\mathbf</span><span class="nb"> x_{n</span><span class="o">-</span><span class="m">1</span><span class="nb">}</span><span class="o">)</span><span class="nb"> </span><span class="nv">\right</span><span class="nb"> </span><span class="o">]</span><span class="nb"> </span><span class="nv">\right</span><span class="nb"> |}{</span><span class="nv">\left</span><span class="nb"> </span><span class="nv">\|\nabla</span><span class="nb"> F</span><span class="o">(</span><span class="nv">\mathbf</span><span class="nb">{x}_{n}</span><span class="o">)</span><span class="nb"> </span><span class="o">-</span><span class="nb"> </span><span class="nv">\nabla</span><span class="nb"> F</span><span class="o">(</span><span class="nv">\mathbf</span><span class="nb">{x}_{n</span><span class="o">-</span><span class="m">1</span><span class="nb">}</span><span class="o">)</span><span class="nb"> </span><span class="nv">\right</span><span class="nb"> </span><span class="nv">\|</span><span class="nb">^</span><span class="m">2</span><span class="nb">}
</span></span></span><span class="line"><span class="cl"><span class="nb"></span><span class="s">$$</span>
</span></span><span class="line"><span class="cl"><span class="nb">{{</span>&lt; /math &gt;<span class="nb">}}</span>
</span></span></code></pre></div><p>renders as</p>
$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$
<p>Example <strong>inline math</strong> <code>{{&lt; math &gt;}}$\nabla F(\mathbf{x}_{n})${{&lt; /math &gt;}}</code> renders as
$\nabla F(\mathbf{x}_{n})$.</p>
<p>Example <strong>multi-line math</strong> using the math linebreak (<code>\\</code>):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-latex" data-lang="latex"><span class="line"><span class="cl"><span class="nb">{{</span>&lt; math &gt;<span class="nb">}}</span>
</span></span><span class="line"><span class="cl"><span class="sb">$$</span><span class="nb">f</span><span class="o">(</span><span class="nb">k;p_{</span><span class="m">0</span><span class="nb">}^{</span><span class="o">*</span><span class="nb">}</span><span class="o">)</span><span class="nb"> </span><span class="o">=</span><span class="nb"> </span><span class="nv">\begin</span><span class="nb">{cases}p_{</span><span class="m">0</span><span class="nb">}^{</span><span class="o">*</span><span class="nb">} &amp; </span><span class="nv">\text</span><span class="nb">{if }k</span><span class="o">=</span><span class="m">1</span><span class="nb">, </span><span class="nv">\\</span><span class="nb">
</span></span></span><span class="line"><span class="cl"><span class="nb"></span><span class="m">1</span><span class="o">-</span><span class="nb">p_{</span><span class="m">0</span><span class="nb">}^{</span><span class="o">*</span><span class="nb">} &amp; </span><span class="nv">\text</span><span class="nb">{if }k</span><span class="o">=</span><span class="m">0</span><span class="nb">.</span><span class="nv">\end</span><span class="nb">{cases}</span><span class="s">$$</span>
</span></span><span class="line"><span class="cl"><span class="nb">{{</span>&lt; /math &gt;<span class="nb">}}</span>
</span></span></code></pre></div><p>renders as</p>
$$
f(k;p_{0}^{*}) = \begin{cases}p_{0}^{*} & \text{if }k=1, \\
1-p_{0}^{*} & \text{if }k=0.\end{cases}
$$
<h3 id="diagrams">Diagrams</h3>
<p>Wowchemy supports a Markdown extension for diagrams. You can enable this feature by toggling the <code>diagram</code> option in your <code>config/_default/params.toml</code> file or by adding <code>diagram: true</code> to your page front matter.</p>
<p>An example <strong>flowchart</strong>:</p>
<pre><code>```mermaid
graph TD
A[Hard] --&gt;|Text| B(Round)
B --&gt; C{Decision}
C --&gt;|One| D[Result 1]
C --&gt;|Two| E[Result 2]
```
</code></pre>
<p>renders as</p>
<div class="mermaid">graph TD
A[Hard] -->|Text| B(Round)
B --> C{Decision}
C -->|One| D[Result 1]