diff --git a/Gemfile b/Gemfile
index c3cda4dd..8ef95e2d 100755
--- a/Gemfile
+++ b/Gemfile
@@ -5,4 +5,5 @@ gem "jekyll-include-cache", group: :jekyll_plugins
 #gem "minimal-mistakes-jekyll"
 #gem 'webrick', '>= 1.7'
-#gem 'wdm', '>= 0.1.0' if Gem.win_platform?
\ No newline at end of file
+#gem 'wdm', '>= 0.1.0' if Gem.win_platform?
+gem "webrick", "~> 1.8"

[Binary file changes omitted: updated .DS_Store files, new slide and figure images under Lectures/S0-L17/, Lectures/S0-L18/images/, and Lectures/S0-L20/images/Reasoning/, and a new empty _contents/.Rhistory.]

diff --git a/_contents/S0-L17.md b/_contents/S0-L17.md
index 5c396cf1..5f3a71d7 100755
--- a/_contents/S0-L17.md
+++ b/_contents/S0-L17.md
@@ -43,3 +43,392 @@ Comments: EMNLP 2023. Updated with new experiments

https://arxiv.org/abs/2304.03545
Alessandro Achille, Michael Kearns, Carson Klingenberg, Stefano Soatto

Responsible use of data is an indispensable part of any machine learning (ML) implementation. ML developers must carefully collect and curate their datasets, and document their provenance. They must also make sure to respect intellectual property rights, preserve individual privacy, and use data in an ethical way. Over the past few years, ML models have significantly increased in size and complexity.
These models require a very large amount of data and compute capacity to train, to the extent that any defects in the training corpus cannot be trivially remedied by retraining the model from scratch. Despite sophisticated controls on training data and a significant amount of effort dedicated to ensuring that training corpora are properly composed, the sheer volume of data required for the models makes it challenging to manually inspect each datum comprising a training corpus. One potential fix for training corpus data defects is model disgorgement -- the elimination of not just the improperly used data, but also the effects of improperly used data on any component of an ML model. Model disgorgement techniques can be used to address a wide range of issues, such as reducing bias or toxicity, increasing fidelity, and ensuring responsible usage of intellectual property. In this paper, we introduce a taxonomy of possible disgorgement methods that are applicable to modern ML systems. In particular, we investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.

### Outline

The presenters discussed three primary topics:
1. Editing Large Language Models
2. Tuning Language Models by Proxy
3. A Survey of Machine Unlearning

## Paper 1: Editing Large Language Models

### Context

As the graph shows, LLMs have seen a meteoric rise in recent years. The graph plots the number of parameters in released models against time, by year since 2020; it also marks which models are open access and uses larger circles for models with more parameters.

### Unwanted Knowledge

LLMs can easily learn unwanted knowledge: given poor input data, they can produce biased responses. The authors ask whether there is an efficient way for large language models to update their knowledge.

Editing LLMs is necessary because the world changes after they are released.
Labels shift, and the ground truth for their answers can shift as well.

The authors discuss three primary ways of updating a model:
1. Fine-tuning: drawbacks include its computational requirements and how easily it overfits.
2. Retrieval augmentation: can scale poorly and suffer from retrieval noise.
3. Model editing: gives precise control, but can be difficult and ineffective.

On this slide the presenters formally describe the task at hand. The goal is to modify a model's behavior for one particular edit descriptor while leaving other behaviors unchanged. The edit scope is formally defined as *S*, and behaviors are either in-scope or out-of-scope.

For evaluation, the authors primarily use the metrics of reliability, generalization, and locality.

#### Current Methods

This slide shows how current methods can be used to modify an edit descriptor in a model. The upper section shows methods that modify the behavior while preserving the model's parameters; the lower section shows methods that modify the model's parameters.

The authors present this table to compare the current methods and to specify additional attributes of each approach.

The authors then experiment with the different approaches. Their experiments are based on factual knowledge: information that can be verified as true or false based on empirical evidence or authoritative sources.

The authors use the CounterFact dataset to measure the efficacy of significant changes. This slide also shows the composition of that dataset.

#### Experimental Results

This slide shows the results of existing methods on three metrics of the dataset: reliability, generalization, and locality.

In terms of scaling, the authors note that the ROME and MEMIT approaches perform well on the GPT-NeoX-20B model but fail on OPT-13B.
They note that the large amounts of matrix computation involved, as well as the model's in-context learning ability, could limit the efficacy of certain approaches.

Batch editing is required to modify a model with multiple pieces of knowledge simultaneously, and only some methods support it. Figure 3 shows batch-editing performance versus batch number; MEMIT appears to be one of the best approaches in this regard.

#### Preliminary Experiments

Sequential Editing
- The ability to carry out successive edits is a vital feature for model editing.
- Methods that freeze the model's parameters, like SERAC and T-Patcher, generally show stable performance in sequential editing.
- Methods that alter the model's parameters, e.g., ROME and MEND, struggle.

#### Comprehensive Study

The presenters propose more comprehensive evaluations covering portability, locality, and efficiency.

Portability -- Robust Generalization
- It is crucial to verify whether these methods can handle the implications of an edit in realistic applications.
- Definition: gauge the effectiveness of model editing in transferring knowledge to related content, termed robust generalization.
- Three aspects:
1. Subject replace: replacing the subject in the question with an alias or synonym.
2. Reversed relation: if the target of a subject and relation is edited, the attribute of the target entity also changes.
3. One-hop: modified knowledge should be usable by the edited language model for downstream tasks.

Locality -- Side Effects of Model Editing
- Evaluate the potential side effects of model editing.
- Other relations: other attributes of the subject that were not updated should remain unchanged after editing.
- Distract neighborhood: if edited cases are concatenated with, or presented before, unrelated input, the model tends to be "swayed" or influenced by those edited cases.
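The reliability, generalization, and locality metrics used throughout this evaluation can be sketched as simple accuracy checks over an edit's scope. The helper below is a hypothetical illustration (none of these names come from the paper's code), with a toy dictionary-backed "edited model":

```python
# Hypothetical sketch of the reliability / generalization / locality metrics
# for one model edit. `model` is any callable mapping a prompt to an answer.

def edit_metrics(model, edit, paraphrases, unrelated):
    """edit: (prompt, target) pair the model was edited with.
    paraphrases: in-scope rephrasings that should yield the new target.
    unrelated: (prompt, original_answer) pairs that must stay unchanged."""
    prompt, target = edit
    reliability = float(model(prompt) == target)
    generalization = sum(model(p) == target for p in paraphrases) / len(paraphrases)
    locality = sum(model(p) == a for p, a in unrelated) / len(unrelated)
    return {"reliability": reliability,
            "generalization": generalization,
            "locality": locality}

# Toy "edited model": answers the edited fact (and its paraphrase) with the
# new target, and unrelated facts with its original knowledge.
edited = {"capital of France?": "Marseille",
          "France's capital city?": "Marseille",
          "capital of Italy?": "Rome"}.get

print(edit_metrics(edited,
                   ("capital of France?", "Marseille"),
                   ["France's capital city?"],
                   [("capital of Italy?", "Rome")]))
```

A perfect edit scores 1.0 on all three; a method that overwrites unrelated facts would score low on locality even with perfect reliability.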
#### Limitations
- Model scale: computational complexity; different architectures, such as Llama, need to be explored.
- Editing scope: the application of model editing goes beyond mere factual contexts -- elements such as personality, emotions, opinions, and beliefs also fall within its scope.
- Editing setting: multi-edit evaluation; Zhong et al. (2023) proposed a multi-hop reasoning setting that explored current editing methods' generalization performance for multiple simultaneous edits.
- Editing black-box LLMs: utilize in-context learning or prompt-based methods to modify these LLMs.

### Paper II: Tuning Language Models by Proxy

#### Model Fine-tuning

#### Idea of Proxy-Tuning

#### What is proxy-tuning?

A decoding-time algorithm that adapts LLMs without accessing their internal weights; it uses only the base model's output predictions.

#### How does it work?

#### Performance Evaluation

#### Example of Proxy-tuning

#### Generated response from Proxy-tuning

#### Computational Complexity

#### General Results

Different models are tested on the GSM and AlpacaFarm datasets. The results show that while both the Base and 70B-Base models struggle, the proxy-tuned 70B-Base model improves drastically in performance and generates less toxic responses.

#### TruthfulQA Detailed Results

The models are also tested on the TruthfulQA dataset, which measures two aspects: truthfulness and informativeness. Truthfulness measures whether the answer to a question asserts no false statement (i.e., gives no factually incorrect answer), while informativeness measures whether the answer provides information that reduces the uncertainty raised by the question.

The proxy-tuned models are more truthful, though slightly less informative, which implies that decoding-time algorithms may preserve knowledge better than direct finetuning.
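The logit arithmetic at the heart of proxy-tuning -- shifting the large base model's next-token logits by the difference between a small tuned expert and its untuned anti-expert -- can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the `alpha` contrast weight defaults to 1 (plain proxy-tuning):

```python
import numpy as np

def proxy_tuned_probs(base_logits, expert_logits, anti_logits, alpha=1.0):
    """Proxy-tuning sketch: shift the base model's next-token logits by the
    (expert - anti-expert) difference, then renormalize with a softmax.
    alpha scales the contrast between expert and anti-expert."""
    logits = base_logits + alpha * (expert_logits - anti_logits)
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy vocabulary of 4 tokens: the expert raises token 2 relative to the
# anti-expert, so the base model's top choice shifts from token 0 to token 2.
base   = np.array([2.0, 1.0, 1.5, 0.5])
expert = np.array([0.5, 0.5, 3.0, 0.5])
anti   = np.array([0.5, 0.5, 0.5, 0.5])

print(proxy_tuned_probs(base, expert, anti).argmax())  # token 2
```

Because only output distributions are combined, the same steering works even when the large model's weights are inaccessible, which is what makes the method applicable to proprietary models.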
#### Code Adaptation Experiments

The authors also test proxy-tuning on code adaptation. They used CodeLlama-7B-Python as the base model and compared the results of proxy-tuning against direct tuning. The evaluation datasets are CodexEval and DS-1000.

The results show that the proxy-tuned model does not outperform the directly tuned model on code adaptation. The authors conjectured that this may be because the base model itself is already tuned for a specific task, and that proxy-tuning needs more work for code generation applications.

#### Task Finetuning Experiments

LMs usually do not perform ideally on tasks out of the box. The authors test proxy-tuning on two tasks that require some sort of tuning. The datasets are TriviaQA and GSM; one is a question-answering task and the other is a math task. A LLAMA2-7B model is finetuned on the training set to obtain a task expert; the anti-expert is another (untuned) LLAMA2-7B model.

The results show that the proxy-tuned model does not outperform the directly tuned model on either dataset.

#### Analysis of proxy tuning at the token level

To understand what kinds of tokens are influenced more by proxy-tuning, the authors recorded the next-token probability distribution at each time step and took the difference between the probabilities that the proxy-tuned and base models assign to the top token x_t chosen by the proxy-tuned model. The analysis is based on 12B-Base and its proxy-tuned counterpart.

For GSM, the left-hand sides and right-hand sides of all intermediate equations are compared to references with a single correct answer. The probability difference is 0.130 on average for LHS tokens and 0.056 for RHS tokens, a difference that is statistically significant with p < 0.0001 under a t-test.

This suggests that proxy tuning contributes more to formulating reasoning steps than to generating factual statements.

For TruthfulQA, the authors recorded the tokens most influenced by proxy tuning.
It shows that instruction tuning mainly influences reasoning and style rather than increasing the model's knowledge, as can be seen in the two examples, where the changes are largely stylistic in nature.

To study whether hyperparameters can provide more control over proxy tuning, especially over the trade-off between informativeness and truthfulness, the authors used the TruthfulQA dataset and varied the hyperparameter α between 0.2 and 2; the larger α is, the more contrast there is between the expert and the anti-expert.

They found that informativeness decreases as α increases, while truthfulness increases; some optimal value exists for a given dataset.

#### Conclusion

The authors concluded that proxy-tuning is a promising decoding-time method that adapts a model by modifying its output logits, an efficient alternative to direct finetuning and a viable way to "fine-tune" proprietary models.

As full finetuning might lead to forgetting old information, proxy tuning might open a new path to continual learning, since it is more efficient.

### A Survey of Machine Unlearning

#### "The Right to be Forgotten"

It can be argued that everyone should have "the right to have private information about a person be removed from Internet searches and other directories under some circumstances". Individuals change and develop over time, and events from the past can still cause stigma and consequences many years later, when the person has changed or the information is no longer relevant or true.

#### Machine Unlearning

This concept should also apply to machine learning models. As models tend to be trained on past data, the information that should be unlearned lives both in the dataset and in the model's parameters. This poses the question of how to unlearn the data from the model.
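The core difficulty — the removed information also lives in the trained parameters — can be illustrated with a toy "model". Everything here is a hypothetical stand-in for a real learner: "training" is just computing a mean.

```python
def train(data):
    # Toy "model": the learned parameter is just the mean of the data.
    return sum(data) / len(data)

data = [1.0, 2.0, 3.0, 10.0]
model = train(data)            # influenced by the outlier 10.0

# Deleting the point from the dataset alone does NOT change the model:
data.remove(10.0)
assert model == train([1.0, 2.0, 3.0, 10.0])  # parameters still encode it

# Unlearning must also update the parameters, e.g. by retraining from
# scratch on the remaining data -- the "exact unlearning" gold standard.
retrained = train(data)
assert retrained != model
```

Retraining from scratch is exact but expensive, which is precisely why the cheaper unlearning algorithms surveyed below exist.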
#### Reasons for Machine Unlearning

There are several reasons why machine unlearning can be beneficial: (1) improving the security of the model; (2) improving user privacy; (3) improving the usability of the system; and (4) reducing bias in the model.

#### Machine Unlearning Challenges

There are also several challenges in machine unlearning: (1) since a model is trained on mini-batches, it is hard to find all the batches that contain the data to be unlearned; (2) a model is trained incrementally, so a data point to be unlearned also influenced the updates made for later data points; (3) a model that has unlearned data tends to perform noticeably worse than the original model.

#### Machine Unlearning Definition (Exact/Perfect)

Mathematically, unlearning can be defined by requiring that after the unlearning process, the model distribution Pr(U(D, Df, A(D))) is identical to Pr(A(D \ Df)), the distribution of a model trained on the dataset without the forget set Df. This is Exact Unlearning.

#### Unlearning Definition (Approximate)

Approximate unlearning, however, loosens this constraint. It requires only that the unlearned model's distribution be approximately equal to the distribution of a model trained from scratch on the dataset without the forget set. More specifically, this is defined as a ratio between the two distributions, and the ratio should be smaller than a predefined threshold.

#### Differential Privacy and Approximate Unlearning

There is also a close relationship between differential privacy and approximate unlearning: differential privacy implies approximate unlearning; however, the reverse is not true.

#### Understanding Differential Privacy and Its Role in Unlearning

Differential privacy is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset.
Essentially, it provides a guarantee that the removal or addition of a single data point will not significantly affect the outcome of any analysis, thus ensuring the privacy of individuals' data. Slide 58 lays out a formal definition, encapsulating this guarantee in a mathematical inequality: the probability of a specific outcome should be roughly the same, whether or not any individual data point is included in the dataset. Slide 58 also illustrates that differential privacy inherently supports a form of approximate unlearning. This is because if a model is differentially private, it is also resilient to small changes in its dataset, which includes the removal of data points. However, this does not necessarily mean that a model capable of unlearning is differentially private, since differential privacy requires a strict mathematical condition to be fulfilled that may not be addressed by all unlearning methods.

#### The Variants of Unlearning

Unlearning scenarios are the specific cases in which a machine learning model is required to "forget" data. Slide 59 introduces three scenarios:

Zero-glance Unlearning: Here, the model unlearns without revisiting the forgotten data set. It relies on a subset of the remaining data and does not access the full data it is supposed to forget.

Zero-shot Unlearning: This approach aims to unlearn by approximation, without any access to the forget set — the exact data to be forgotten. It is akin to removing a memory without being allowed to know what the memory is.

Few-shot Unlearning: In contrast to zero-shot, few-shot unlearning has partial access to the forget set. It uses a subset of the forget set along with the original data to recalibrate the model.

Slide 60 provides a more tangible perspective on these scenarios by visualizing how a model might be trained on certain data (represented by images) and how it would approach unlearning if one of those images must be forgotten.
It compares how close the unlearned model is to a gold standard: a model trained without the forget set from the start.

#### The Framework of Unlearning

Slide 61 outlines the flow of the unlearning framework, which starts with the current data being processed by a learning algorithm (like SGD or decision trees). When an unlearning request is made, the framework utilizes an unlearning algorithm, which can be model-agnostic, model-intrinsic, or data-driven. The unlearned model is then produced, and verification processes like feature injection tests or membership inference attacks ensure the unlearning process is successful. If verification fails, the process might need to be repeated until the model effectively forgets the data without significantly impacting its accuracy.

#### The Mechanics of Unlearning Requests

Unlearning requests can come in several forms:

Item Removal: This is a request to remove specific data points or samples, such as personal photos, from the training data of a model.

Feature Removal: Sometimes, a request is made to remove a sensitive attribute or feature from the model, like gender or race information in a job application screening system.

Task Removal: Here, the request is to have the model forget how to perform a specific task entirely. For example, if a robot is trained on multiple tasks, it might be asked to forget one of those tasks completely.

Stream Removal: In dynamic systems where data streams continuously (like online learning scenarios), users might ask for certain data to be forgotten over time, such as topics in a personalized news feed.

#### Design Requirements for Effective Unlearning

The design requirements for a robust unlearning system include:

Completeness: The unlearned model should behave as if the data it's unlearning was never part of the training set.

Timeliness: The unlearning process must be significantly quicker than retraining a model from scratch.
Accuracy: The accuracy of the model on the remaining data should not be significantly compromised by the unlearning process.

Verifiability: There must be a verification mechanism to confirm the data has been successfully unlearned.

Model-Agnostic: The framework should be versatile enough to be applied across different model architectures and algorithms, ensuring broad applicability.

#### Unlearning Verification

The fundamental objective of unlearning verification is to provide assurance that the unlearned model is indistinguishable from a model that was retrained from scratch without the data intended to be forgotten. Verification serves as a form of certification, validating that the unlearning process has been successful and the data has effectively been 'forgotten' by the model.

Two primary methods are described for verifying unlearning:

Feature Injection Test: This involves adding a distinctive feature to the data set to be forgotten and observing if the model's parameters adjust accordingly. If the parameters remain unchanged, the unlearning process may not have been effective.

Information Leakage and Forgetting Measurement: Here, the focus is on comparing the model's output distribution before and after unlearning to check for any information leakage. Furthermore, the success rate of privacy attacks, such as membership inference attacks, is used to measure how forgetful the model has been towards the removed data. A successful unlearning process should ideally show no increased success rate in such attacks.

#### Unlearning Algorithms

Unlearning algorithms can be categorized into three primary types:

Model-Agnostic approaches: These treat the model as a black box, applying general techniques that are not specific to the model's architecture, such as differential privacy or statistical query learning.

Model-Intrinsic approaches: These methods utilize properties specific to certain model types.
For example, linear models may unlearn by directly adjusting their weights, while deep neural networks might selectively unlearn certain neurons or layers.

Data-Driven approaches: Instead of modifying the model directly, this approach manipulates the training data. Techniques such as data partitioning allow for efficient retraining by only affecting the part of the model trained on the data to be forgotten.

#### Detail Data-Driven Approach

The data-driven approach involves strategies like:

Data Partitioning: Dividing the training data into smaller subsets and retraining separate sub-models for each. When unlearning is requested, only the relevant sub-models are retrained.

Data Augmentation: This involves adding noise or variations to the data to dilute the influence of individual data points, making the model less sensitive to specific instances.

Data Influence: Evaluating the influence of each data point on the model's predictions and then adjusting the training data to mitigate the impact of the points to be unlearned.

#### Evaluation Metrics

Various metrics are proposed to evaluate the effectiveness of an unlearning process, including:

Accuracy: The predictive performance of the model after unlearning.

Completeness: The indistinguishability between the outputs of the retrained and the unlearned model.

Unlearn and Relearn Time: The efficiency of the unlearning process and the time required to retrain the model.

Layer-wise and Activation Distance: Measures of difference in the model's parameters and activation outputs.

JS-Divergence and Membership Inference Attack: Metrics for evaluating the success rate of privacy attacks post-unlearning, which reflect the model's forgetfulness.

#### Unified Design Requirements

Slide 74 presents a comparison of unlearning methods against various design requirements and unlearning requests.
It highlights that different approaches may be better suited for different unlearning scenarios, emphasizing the need for a unified design that accommodates various methods. For instance, model-agnostic approaches may support feature and item removal well but may not be the best for task removal. On the other hand, data-driven approaches can be more flexible across different unlearning requests.

Mechanistic interpretability takes a bottom-up approach to understanding ML models.

#### Language models can explain neurons in language models

https://openai.com/research/language-models-can-explain-neurons-in-language-models

Language models have become more capable and more widely deployed, but we do not understand how they work. Recent work has made progress on understanding a small number of circuits and narrow behaviors,[1][2] but to fully understand a language model, we'll need to analyze millions of neurons. This paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. Our hope is that building on this approach of automating interpretability [3][4][5] will enable us to comprehensively audit the safety of models before deployment.
# Session Blog
## Rethinking Interpretability in the Era of Large Language Models
Section based on the paper [Rethinking Interpretability in the Era of Large Language Models](https://arxiv.org/abs/2402.01761)
+ In traditional ML interpretability:
  + Building inherently interpretable models,
    such as sparse linear models and decision trees
  + Post-hoc interpretability techniques,
    such as Grad-CAM, which relies on saliency maps
+ A new opportunity in LLM interpretability:
  + Explanation Generation
  + "Can you explain your logic?" "Why didn't you answer with (A)?"

Interpretability Definition:
Extraction of relevant knowledge concerning relationships contained in data or learned by the model.
The definition applies to both:
1. Interpreting an LLM, and
2. Using an LLM to generate explanations

Breakdown of LLM interpretability: Uses and Themes

Description example

### Local Explanation
Explain a Single Generation by Token-level Attributions
+ Providing feature attributions for input tokens
  + perturbation-based methods
  + gradient-based methods
  + linear approximations
+ Attention mechanisms for visualizing token contributions to a generation
+ LLMs can generate post-hoc feature attributions via prompting

Post-hoc feature attributions by prompting an LLM

Explain a Single Generation Directly in Natural Language

Challenges: Hallucination
Mitigation:
+ Generate the explanation within the answer:
  + Chain-of-thought prompting
  + Tree-of-thoughts
+ Retrieval Augmented Generation

### Global Explanation
#### Probing
Analyze the model's representation by decoding its embedded information.
Probing can apply to
+ Attention heads
+ Embeddings
+ Different controllable representations

Probing as it applies to text embeddings:

More Granular Level Representation
+ categorizing or decoding concepts from individual neurons
+ explaining the function of attention heads in natural language

How groups of neurons combine to
perform specific tasks
+ finding a circuit for indirect object identification
+ entity binding

#### GPT-4 Probing Example

### Dataset Explanation
Dataset explanation occurs along a spectrum of low- to high-level techniques:

Text Data
Using LLMs to build interpretable linear models / decision trees — essentially, using LLMs to summarize the details of less interpretable models.

Partially interpretable models via chain-of-prompts techniques:

### Future Directions
Explanation reliability: prevent hallucinations from leaking into explanations, ensure that explanations reflect the actual process of the model when asking it to explain itself, and implement some kind of verification technique.
Dataset explanation for knowledge discovery: better use of models to summarize, create and display statistics, and extract knowledge from datasets.
Interactive explanations: make the process more dynamic and accessible.

## Claude Model 3 Family: Opus, Sonnet, Haiku
Based on the Claude product release paper, found [here](https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf)
### Introduction

+ The Claude 3 family of models encompasses Opus, Sonnet, and Haiku variants, each excelling in reasoning, mathematics, coding, multi-lingual understanding, and vision quality.
  + A key enhancement across the family is the inclusion of multimodal input capabilities with text output.
+ Claude 3 Opus delivers strong performance in reasoning, mathematics, and coding.
+ Claude 3 Sonnet demonstrates increased proficiency in nuanced content creation, analysis, forecasting, accurate summarization, and handling scientific queries.
+ Claude 3 Haiku stands out as the fastest and most affordable option in its intelligence category, while also featuring vision capabilities.

### Model Setup

+ Training Data:
  + A proprietary blend of publicly accessible data sourced from the Internet as of August 2023.
  + Non-public information obtained from third-party sources.
  + Data acquired through data labeling services and paid contractors.
  + Internally generated data.
+ Training Details:
  + Implementation of Constitutional AI to align Claude's learning process with human values during reinforcement learning.
  + Constitutional AI Enhancement:
    + Claude's constitution has been augmented to promote respect for disability rights.
    + This addition stems from research on Collective Constitutional AI, aimed at aligning Claude with human values during reinforcement learning.

### Security Measures:
+ Protected by two-party controls.
+ All users require an authorized account for access.
+ Continuous 24/7 monitoring of systems.
+ Immediate alert response.
+ Implementation of endpoint hardening measures.
+ Stringent controls on data storage and sharing.
+ Thorough personnel vetting procedures.
+ Enhancement of physical security measures.

### Social Responsibility Focus:

+ Implementation of Constitutional AI to ensure alignment with human values.
+ Commitment to labor standards and fair treatment of workers.
+ Dedication to sustainability practices and minimizing environmental impact.

### Evaluation Criteria:

+ Reasoning: Assessing the model's ability to logically infer and deduce information.
+ Multilingual: Evaluating proficiency in understanding and generating content in multiple languages.
+ Long Context: Gauging comprehension and coherence in handling lengthy passages or conversations.
+ Honesty: Examining the model's commitment to truthfulness and accuracy in its responses.
+ Multimodal: Assessing capabilities to process and generate content across multiple modalities such as text, images, and audio.

### Evaluation

+ Law School Admission Test (LSAT): Evaluates critical thinking, analytical reasoning, and reading comprehension skills for admission to law schools.
+ Multistate Bar Exam (MBE): Assesses knowledge of common law principles and legal reasoning skills for bar admission.
+ American Mathematics Competition (AMC): Tests mathematical problem-solving abilities and reasoning skills among high school students.
+ Graduate Record Exam (GRE): Measures verbal reasoning, quantitative reasoning, analytical writing, and critical thinking skills for graduate school admission.

+ Visual capabilities

### Evaluation - Behavior Design:

+ Refusals: Assessment of the chatbot's ability to appropriately refuse or decline user requests or commands.
+ WildChat Dataset: Examination of toxic user inputs and chatbot responses to ensure appropriate handling of such interactions.
+ XSTest Evaluation: Evaluation of the chatbot using the XSTest suite, which is designed to identify exaggerated safety behaviors — cases where a model refuses prompts that are actually safe.

### Evaluation - Multilingual:

+ Multilingual Reasoning and Knowledge: Assessment of the chatbot's ability to reason and apply knowledge across multiple languages.
+ Multilingual Math: Evaluation of the chatbot's proficiency in solving mathematical problems and providing explanations in different languages.
+ Multilingual MMLU (Massive Multitask Language Understanding): Measurement of the model's knowledge and reasoning across a broad range of subjects, evaluated in multiple languages.

### Evaluation - Factual Accuracy:

Assessment of the chatbot's ability to provide accurate and reliable information across a wide range of topics and domains, ensuring that responses are factually correct and supported by credible sources when applicable.
### Evaluation - Long Context Performance

Quality benchmark: multiple-choice question-answering dataset, averaging around 5,000 tokens.

### Evaluation - Long Context Performance: Needle In A Haystack

+ Needle In A Haystack: Test scenario where a target sentence (the "needle") is inserted into a corpus of documents (the "haystack"). A question is then asked to retrieve the fact contained in the needle. For example:
  + Needle: "The best thing to do in San Francisco is to eat a sandwich and sit in Dolores Park on a sunny day."
  + Question: "What is the best thing to do in San Francisco?"
+ This evaluation assesses the chatbot's ability to accurately retrieve relevant information from a longer context or passage.

## Knowledge Conflicts for LLMs: A Survey
Based on the paper of the same name, found [here](https://arxiv.org/abs/2403.08319)

Knowledge conflicts can be broadly divided into 3 categories:
+ Context-memory conflict: stems from a discrepancy between the context and parametric knowledge.
+ Inter-context conflict: when external documents provide conflicting information.
+ Intra-memory conflict: discrepancies in a language model's knowledge stem from training data inconsistencies.

Terminology Note:
+ context = contextual knowledge = knowledge in retrieved documents
+ memory = parametric knowledge = knowledge in pretraining data

Overview Diagram:

**Methodology:** Cause of conflict => Analyzing LLM behavior under conflict => Solutions

### Context-memory conflict

This stems from a discrepancy between the context and parametric knowledge and is the most extensively investigated among the three types of conflicts.

+ Causes:
  + Temporal Misalignment: Models trained on past data may not accurately represent current or future realities. (The up-to-date contextual information is considered accurate; pre-training data information is out-of-date.)
  + Misinformation Pollution: Introducing false or misleading information into a model's data can spread misinformation if the model doesn't critically assess these inputs. (The contextual information contains misinformation and is therefore considered incorrect; web information is polluted.)

+ Analysis of Model Behaviors:
  + Open-domain question answering (ODQA) setup:
    (1) In ODQA research, QA models sometimes depend too much on what they've already learned, ignoring conflicting external context.
    (2) Recent studies: bigger models like ChatGPT often blend what they know with similar outside information, even if it doesn't fully match.
  + General setups: LLMs might take in new information that contradicts their knowledge, yet they usually prefer matching information, struggle with conflicts, and favor logic over factual accuracy.

+ Solutions:
  + Faithful to Context:
    + Align with contextual knowledge, focusing on context prioritization.
  + Discriminating Misinformation (Faithful to Memory):
    + Favor learned knowledge over questionable context, with skepticism.
  + Disentangling Sources:
    + Separate context and knowledge to give clear, distinct answers.
  + Improving Factuality:
    + Strive for a response that combines context and learned knowledge for a truer solution.

### Inter-context conflict: when external documents provide conflicting information.

+ Causes:
  + Misinformation
    + RAG poses the risk of including documents containing misinformation.
  + Outdated Information
    + The context may contain up-to-date and outdated information from the web simultaneously.

+ Analysis:
  + Performance Impact

Language models are vulnerable to misinformation:
+ These models prioritize information that is directly relevant to the query and consistent with their built-in parametric knowledge.
+ There is a noticeable bias in LLMs towards evidence that matches their inherent parametric memory.
+ LLMs tend to focus on information related to more popular entities and on answers supported by a larger body of documents within the context.
+ As the number of conflicting pieces of information increases, LLMs face greater difficulties in logical reasoning.

+ Detection Ability
  + Conversational Contradictions
  + Contradictory Documents
  + Document Credibility
  + Truth vs. Misinformation

+ Solution:
  + Eliminating Conflict
  + General Models for Fact-Checking
  + Improving Robustness

### Intra-memory conflict: discrepancies in a language model's knowledge stem from training data inconsistencies.

Causes of Intra-Memory (IM) Conflict:
+ Bias in Training Corpora
  + Pre-training corpora scraped from websites may contain misinformation.
  + LLMs tend to encode superficial associations prevalent within their training data.
+ Decoding Strategy
  + Decoding strategies are either deterministic or stochastic sampling methods. Stochastic sampling is inherently uncertain, causing LLMs to produce entirely different content even when provided with the same context.
+ Knowledge Editing
  + Editing methods generally modify only a small scope of the knowledge encoded in LLMs, which can result in LLMs producing inconsistent responses when dealing with the same piece of knowledge in different situations.

Self Inconsistency
+ Knowledge Consistency Assessment:
  + Elazar et al. (2021) developed a method to assess the knowledge consistency of language models and showed poor consistency across these models, with accuracy rates hovering between 50% and 60%.
  + Hase et al. (2023) expanded on this with a more diverse dataset and confirmed that models like RoBERTa-base and BART-base exhibit significant inconsistencies, especially in paraphrase contexts.
+ Inconsistency in Question Answering:
  + Inconsistencies appear across multiple open-source LLMs in various contexts.
  + LLMs may initially provide an answer to a question but then deny it upon further inquiry.
In Closed-Book Question Answering tasks, Alpaca-30B was consistent in only 50% of cases.

**Layered Knowledge Representation:** Studies show that LLMs store basic information in early layers and semantic information in deeper layers. Later research found factual knowledge is concentrated in specific transformer layers, leading to inconsistencies across layers.

**Discrepancy in Knowledge Expression:** Li et al. (2023c) revealed that correct knowledge stored in an LLM's parameters may not be accurately expressed during generation. Their experiments showed a 40% gap between knowledge-probe accuracy and generation accuracy.

**Cross-lingual Inconsistency:** LLMs exhibit cross-lingual inconsistencies, with distinct knowledge sets for different languages, leading to discrepancies in the information provided across languages.

+ Improving Consistency
  + Fine-tuning — e.g., using a loss that combines a consistency loss with the standard MLM loss.
  + Plug-in — utilizing word-definition pairs from dictionaries to retrain language models and improve their comprehension of symbolic meanings.
  + Output Ensemble
+ Improving Factuality — focus on improving knowledge across layers. Examples:
  + DoLa
  + ITI

**Key Challenges for IM Conflicts:**
+ Knowledge Conflicts in the Wild — knowledge conflicts often arise in RALMs (Retrieval-Augmented Language Models) when the models retrieve conflicting information directly from the Web.
  + Traditionally, knowledge conflicts have been studied through artificially generated incorrect or misleading information, which may not fully represent real-world scenarios.
  + There is a noted gap in current experimental setups for studying knowledge conflicts, raising concerns about the applicability of findings from such studies to practical situations.
++ Solution at a Finer Resolution ++ Evaluation on Downstream Tasks ++ Interplay among the Conflicts - From investigating conflicts of a singular type to multi-type ++ Explainability - more microscopic examinations to better comprehend how models decide when encounter conflicts ++ Multilinguality + + By examining LLMs to address knowledge conflicts in non-English prompts + + Cross-language knowledge conflicts. Solutions could include employing translation systems ++ Multimodality - For instance,textual documents might clash with visual data, or the tone of an audio clip might contradict the con tent of an accompanying caption. multimodal knowledge conflicts could focus on crafting advanced LLMs skilled in cross-modal rea- soning and conflict resolution across diverse data types. + + + + + + + + + + + + + + + + + + + + diff --git a/_contents/S0-L20.md b/_contents/S0-L20.md index dff757e9..3de65e6a 100755 --- a/_contents/S0-L20.md +++ b/_contents/S0-L20.md @@ -36,7 +36,7 @@ In this session, our readings cover: # Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review ### Introduction -Models that are built on Large Language Model (LLM) as the backbone are capable of extracting meaningful information that can assist medical diagnosis or creating engaging contents. These models are also referred to as Artificial Intelligence-Generated Content (AIGC). Once the AIGC model is trained, by changing the way we compose the prompts as input to the model, the quality of the models' output can change. In this paper, we focus on techniques of engineering the prompts to achieve higher quality model output from the same AIGC model. +Models that are built on Large Language Model (LLM) as the backbone are capable of extracting meaningful information that can assist medical diagnosis or creating engaging contents. These models are also referred to as Artificial Intelligence-Generated Content (AIGC). 
Once the AIGC model is trained, by changing the way we compose the prompts as input to the model, the quality of the model's output can change. In this paper, we focus on techniques of engineering the prompts to achieve higher quality model output from the same AIGC model. ### Basic of Prompt Engineering @@ -46,15 +46,15 @@ One basic technique to improve the model output is to **be clear and precise** i -**Few Shot prompting** is also a common prompt engineering technique, where the model is given a few examples with answers in addition to the original question. This relies on the few shot learning ability that is an emergent in large language models, which is can be understood as a form of meta learning. +**Few Shot prompting** is also a common prompt engineering technique, where the model is given a few examples with answers in addition to the original question. This relies on the few-shot learning ability that is emergent in large language models, which can be understood as a form of meta learning. -Authors of the paper also note that **adjusting the temperature and top-p** is essential for the prompt engineering. For code generation where standard pattern is valued, a smaller temperature and top-p is preferred, whereas in creative writing, a larger temperature and top-p may help the model produce original responses. +Authors of the paper also note that **adjusting the temperature and top-p** is essential for prompt engineering. For code generation, where a standard pattern is valued, a smaller temperature and top-p is preferred, whereas in creative writing, a larger temperature and top-p may help the model produce original responses. ### Advanced Prompt Engineering -Chain of Thought prompting induce the model to respond with step by step reasoning, which not only improves the quality of the output, but also shows correct intermediate steps for high stake applications such as medical reasoning.
**Zero-shot chain of thought** is a simple yet effective technique, where we only need to include the phrase "Let's think step by step" to the input. **Golden chain of thought** is a technique that utilizes few-shot prompting for chain of thought prompting, by providing ground truth chain of thoughts solutions as examples to the input of the model. Golden chain of thoughts can boost the solve rate from 38% to 83% in the case of GPT-4, but the method is limited by the requirement of ground truth chain of thoughts examples. +Chain of Thought prompting induces the model to respond with step-by-step reasoning, which not only improves the quality of the output, but also shows correct intermediate steps for high-stakes applications such as medical reasoning. **Zero-shot chain of thought** is a simple yet effective technique, where we only need to add the phrase "Let's think step by step" to the input. **Golden chain of thought** is a technique that utilizes few-shot prompting for chain of thought prompting, by providing ground-truth chain-of-thought solutions as examples in the input of the model. Golden chain of thought can boost the solve rate from 38% to 83% in the case of GPT-4, but the method is limited by the requirement of ground-truth chain-of-thought examples. **Self-Consistency** is an extension of chain of thought prompting. After chain of thought prompting, Self-Consistency samples multiple responses from the language model decoder and chooses the most consistent one, achieving better performance in rigorous reasoning tasks such as doing proofs. @@ -62,13 +62,13 @@ Chain of Thought prompting induce the model to respond with step by step reasoni -**Knowledge Generation** break down the content generation into two step generations: in the first step generation, the model is only prompted to output pertinent information (knowledge) of the original query, then the knowledge is included as prompt in the second step generation.
+**Knowledge Generation** breaks down content generation into a two-step generation: in the first step, the model is prompted only to output pertinent information (knowledge) about the original query; this knowledge is then included in the prompt for the second generation step. -**Least-to-most prompting** also take a multi-step generation approach similar to knowledge generation. A given problem is decomposed into numerous sub-problems, and the model will output responses for each sub-problem. These responses will be included in the prompt to help the model answer the original problem. +**Least-to-most prompting** also takes a multi-step generation approach similar to knowledge generation. A given problem is decomposed into numerous sub-problems, and the model will output responses for each sub-problem. These responses will be included in the prompt to help the model answer the original problem. -**Tree of Thoughts reasoning** construct the steps of reasoning in a tree structure. This is particularly helpful when we need to break down a problem into steps, and further break down of each steps into more steps. **Graph of Thoughts** is a generalization of tree of thought structure, where each each contains the relation between each node. Graph of thoughts may be helpful for problems requiring intricate multifaceted resolutions. +**Tree of Thoughts reasoning** constructs the steps of reasoning in a tree structure. This is particularly helpful when we need to break down a problem into steps, and further break down each step into more steps. **Graph of Thoughts** is a generalization of the tree of thought structure, where each edge encodes the relation between the nodes it connects. Graph of thoughts may be helpful for problems requiring intricate multifaceted resolutions.
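The tree-search loop behind Tree of Thoughts can be sketched as follows. This is a minimal illustration rather than the paper's implementation: `generate_thoughts` and `score_thought` are hypothetical stand-ins for LLM calls that propose the next reasoning steps and rate a partial solution.

```python
from typing import Callable, List

def tree_of_thoughts_bfs(
    problem: str,
    generate_thoughts: Callable[[str, str], List[str]],  # (problem, partial solution) -> candidate next thoughts
    score_thought: Callable[[str, str], float],          # (problem, partial solution) -> evaluator score
    beam_width: int = 3,
    depth: int = 3,
) -> str:
    """Breadth-first search over partial solutions, keeping the best
    `beam_width` partial solutions at each level of the tree."""
    frontier = [""]  # start from an empty partial solution (the root)
    for _ in range(depth):
        # each node in the frontier branches into several new partial solutions
        candidates = [
            partial + thought
            for partial in frontier
            for thought in generate_thoughts(problem, partial)
        ]
        # prune: keep only the highest-scoring partial solutions
        candidates.sort(key=lambda p: score_thought(problem, p), reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0]  # best solution found
```

In a real setting, `generate_thoughts` would prompt the LLM for possible next steps and `score_thought` would ask an LLM (or a human) to rate each partial solution; replacing the level-by-level loop with recursive exploration and backtracking gives the DFS variant mentioned above.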
@@ -76,7 +76,7 @@ Chain of Thought prompting induce the model to respond with step by step reasoni **Chain of Verification** corrects a response that may contain false information, by prompting the LLM to ask verification questions about the response. The LLM may correct the false information by answering the verification questions; these answers help the LLM generate a more accurate response for the original query. -In addition to the specific techniques mentioned above, there also exists **Plug-ins** of ChatGPT such as Prompt Enhancer that automatically enhance the prompt for the user. +In addition to the specific techniques mentioned above, there also exist **Plug-ins** for ChatGPT, such as Prompt Enhancer, that automatically enhance the prompt for the user. @@ -84,19 +84,11 @@ In addition to the specific techniques mentioned above, there also exists **Plug Benchmarking the prompt methods requires evaluating the quality of the response from the LLM, which can be performed by humans or by automatic metrics. -**Subjective evaluations** requires human evaluators, which has the following pros and cons -Pros: Fluency, Accuracy, Novelty, and Relevance -Cons: Inconsistency Problem, Expensive, Time Consuming +**Subjective evaluations** require human evaluators; their advantage is that humans can judge fluency, accuracy, novelty, and relevance, while their disadvantages include inconsistency between evaluators and being expensive and time-consuming. -**Objective evaluations** relies on metrics to evaluate the response. Some examples includes - - BLEU: BiLingual Evaluation Understudy - - ROUGE: Recall-Oriented Understudy for Gisting Evaluation - - METEOR: Metric for Evaluation of Translation with Explicit ORdering - - BERTScore: BERT Model used for metric +**Objective evaluations** rely on metrics to evaluate the response. Examples include BLEU (BiLingual Evaluation Understudy), ROUGE, METEOR, and BERTScore, which uses a BERT model to compute the metric.
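To make the metric idea concrete, here is a minimal sketch of the modified n-gram precision at the core of BLEU (real BLEU additionally combines several n-gram orders and applies a brevity penalty); the function name is ours, not from any library.

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Fraction of candidate n-grams that also appear in the reference,
    with each n-gram's count clipped by its count in the reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    # clipping prevents a candidate from gaming the score by repeating one word
    clipped = sum(min(count, ref[ng]) for ng, count in cand.items())
    return clipped / sum(cand.values())
```

For example, `modified_ngram_precision("the the the".split(), "the cat sat".split())` gives 1/3: the reference contains "the" only once, so only one of the candidate's three occurrences is credited.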
-Objective evaluations has the following pros and cons -Pros: Automatic Evaluation, Cheap, Quick -Cons: Alignment Problem +Objective evaluations have the advantages of being automatic, cheap, and quick, with the alignment problem as their main drawback. Evaluation results from InstructEval show that in few-shot settings, once the examples are specified, providing an additional prompt harms performance, while in zero-shot settings, an expert-written prompt improves performance. @@ -116,7 +108,7 @@ Prompt engineering can help **Assessment in teaching and learning**, where tailo - + ### Long context prompting for Claude 2.1 + https://www.anthropic.com/news/claude-2-1-prompting @@ -176,7 +168,7 @@ The authors use parallel point expanding to achieve speed-up than normal decodin For the evaluation, we can assess it from various perspectives. -- **Evaluation Process:** +- **Evaluation Process:** - Present a question and a pair of answers to an LLM judge. @@ -256,7 +248,7 @@ In summary, some strong models have very high-quality answers that are hard to b - Ask the RoBERTa to **classify** if the SoT is suitable for the desired answer. -## SoT-R – Evaluation +## SoT-R – Evaluation Based on the provided figures, we can understand: @@ -294,3 +286,310 @@ Having thoroughly reviewed the paper, we've gained significant insights into the - **Eliciting or improving LLMs’ ability:** - Graph-of-Thoughts + + + +# Topologies of Reasoning: Demystifying Chains, Trees, and Graphs of Thoughts +## Evolving into Chains of Thought +In the exploration of reasoning and cognitive processes, the paper delves into the intricacies of how thoughts are structured, leading to the conceptualization of reasoning topologies. These topologies provide a framework for understanding the organization and flow of thoughts as individuals tackle various tasks. +
+ + +This figure presents an evolution of reasoning topologies in large language model (LLM) prompting methodologies, showing increasing complexity in how LLMs process and generate output based on a given input. + +- **Input-Output (IO) prompting**: This is the most basic method, where an LLM provides a final reply immediately after receiving the initial prompt from the user, with no intermediate steps in the reasoning process. +- **Chain of Thought (CoT)**: Introduced by Wei et al., this method improves upon IO by incorporating explicit intermediate steps of reasoning, known as "chains of thought," which lead to the final output. +- **Chain-of-Thought with Self-Consistency (CoT-SC)**: Improving upon CoT, CoT-SC introduces several independent reasoning chains originating from the same initial input. The model then selects the best outcome from these final thoughts based on a predefined scoring function. The idea is to utilize the randomness within the LLM to generate multiple possible outcomes. +- **Tree of Thoughts (ToT)**: This method further advances CoT by allowing branches at any point within the chain of thoughts. This branching allows for the exploration of different paths and options during the reasoning process. Each node in the tree represents a partial solution, and based on any given node, the thought generator can create a number of new nodes. Scores are then assigned to these new nodes either by an LLM or by human evaluation. The way the tree is extended is determined by the search algorithm used, such as Breadth-First Search (BFS) or Depth-First Search (DFS). +- **Graph of Thoughts (GoT)**: GoT enables complex reasoning dependencies between generated thoughts, allowing any thought to generate multiple child thoughts and also have multiple parent thoughts, forming an aggregation operation.
This method incorporates both branching (where thoughts can generate multiple outcomes) and aggregation (where multiple thoughts can contribute to a single new thought). + +The progression of these topologies indicates a move from linear, single-step reasoning to complex, multi-step, and multi-path reasoning structures, improving the depth and robustness of the reasoning process within LLMs. + +### Thoughts and Reasoning Topologies + +**What is a Thought?** + +- In CoT, a thought refers to **a statement within a paragraph** that contains a **part of the reasoning process** aimed at **solving the input task**. +- In ToT, in some tasks, such as Game of 24, a thought means **an intermediate or a final solution** to the **initial question**. +- In GoT, a thought contains a **solution of the input task (or of its subtask)**. + +Therefore, the paper proposes that a thought is a "semantic unit of task resolution, i.e., a step in the process of solving a given task." + +**What is a Reasoning Topology?** + +The authors model thoughts as nodes; edges between nodes correspond to dependencies between these thoughts, and a topology can be defined as G = (V, E). + +### Taxonomy of Reasoning Schemes + +**Topology Class** + + + +- This section presents three different classes of topological structures used to represent reasoning steps: Chain, Tree, and Graph. +- **Chain:** Depicted as a linear sequence of nodes connected vertically from an "Input" node at the top to an "Output" node at the bottom, suggesting a step-by-step, sequential reasoning process. +- **Tree:** Shown as a branching structure that starts with a single "Input" node which then divides into multiple pathways, eventually leading to one "Output" node. This illustrates a decision-making process that considers various paths or options before concluding. +- **Graph:** Illustrated as a network of interconnected nodes with one "Input" node and one "Output" node.
Unlike the chain or tree, the graph shows multiple connections between the nodes, indicating a complex reasoning process with interdependencies and possible loops. + + + +**Topology Scope**: "Can the topology extend beyond a single prompt?" + +- **Single-prompt** + + - Describes a structure contained within a single prompt/reply interaction. + + - The visual represents a tree topology where all reasoning nodes are part of one complete exchange, suggesting a condensed reasoning process that occurs in one step. + +- **Multi-prompt** + + - Indicates that one prompt/reply can contain multiple reasoning nodes. + + - The visual here expands the tree topology to show that individual prompts or replies may encompass multiple nodes, which implies a more extensive reasoning process involving several interactions. + +**Topology Representation** + + + +- The question is, "How is the topology structure represented?" indicating a focus on the manner in which the reasoning processes are visually and conceptually depicted. +- **Tree Diagram** + - A tree diagram is shown with a root node labeled "0" at the top, branching out to nodes "1," "2," and "3," which further branch out to nodes "4" through "9". This diagram is a representation of the reasoning structure, likely meant to illustrate the hierarchical and branching nature of thought processes. + +- **Implicit vs. Explicit Representation** + + - On the left, under the heading "Implicit," there is a statement suggesting a less direct method of describing the reasoning process: "The first preliminary solution should be enhanced three times. Each of these three enhanced solutions should be further augmented in two attempts." + + - On the right, under the heading "Explicit," there is a more direct and detailed explanation of the connections between the nodes: "