From dac33177c9de97bc5bc9eeab4a1de8c016c0c89a Mon Sep 17 00:00:00 2001 From: Ning Ren Date: Fri, 23 Feb 2024 23:46:03 -0800 Subject: [PATCH] Merge 0223 (#8) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Remove hardcode flash-attn disable setting (#2342) * Document turning off proxy_buffering when api is streaming (#2337) * Simplify huggingface api example (#2355) * Update sponsor logos (#2367) * if LOGDIR is empty, then don't try output log to local file (#2357) Signed-off-by: Lei Wen Co-authored-by: Lei Wen * add best_of and use_beam_search for completions interface (#2348) Signed-off-by: Lei Wen Co-authored-by: Lei Wen * Extract upvote/downvote from log files (#2369) * Revert "add best_of and use_beam_search for completions interface" (#2370) * Improve doc (#2371) * add best_of and use_beam_search for completions interface (#2372) Signed-off-by: Lei Wen Co-authored-by: Lei Wen * update monkey patch for llama2 (#2379) * Make E5 adapter more restrict to reduce mismatch (#2381) * Update UI and sponsers (#2387) * Use fsdp api for save save (#2390) * Release v0.2.27 * Spicyboros + airoboros 2.2 template update. (#2392) Co-authored-by: Jon Durbin * bugfix of openai_api_server for fastchat.serve.vllm_worker (#2398) Co-authored-by: wuyongyu * Revert "bugfix of openai_api_server for fastchat.serve.vllm_worker" (#2400) * Revert "add best_of and use_beam_search for completions interface" (#2401) * Release a v0.2.28 with bug fixes and more test cases * Fix model_worker error (#2404) * Added google/flan models and fixed AutoModelForSeq2SeqLM when loading T5 compression model (#2402) * Rename twitter to X (#2406) * Update huggingface_api.py (#2409) * Add support for baichuan2 models (#2408) * Fixed character overlap issue when api streaming output (#2431) * Support custom conversation template in multi_model_worker (#2434) * Add Ascend NPU support (#2422) * Add raw conversation template (#2417) (#2418) * Improve docs & UI (#2436) * Fix Salesforce xgen inference (#2350) * Add support for Phind-CodeLlama models (#2415) (#2416) Co-authored-by: Lianmin Zheng * Add falcon 180B chat conversation template (#2384) * Improve docs (#2438) * add dtype and seed (#2430) * Data cleaning scripts for dataset release (#2440) * merge google/flan based adapters: T5Adapter, CodeT5pAdapter, FlanAdapter (#2411) * Fix docs * Update UI (#2446) * Add Optional SSL Support to controller.py (#2448) * Format & Improve docs * Release v0.2.29 (#2450) * Show terms of use as an JS alert (#2461) * vllm worker awq quantization update (#2463) Co-authored-by: 董晓龙 * Fix falcon chat template (#2464) * Fix chunk handling when partial chunks are returned (#2485) * Update openai_api_server.py to add an SSL option (#2484) * Update vllm_worker.py (#2482) * fix typo quantization (#2469) * fix vllm quanziation args * Update README.md (#2492) * Huggingface api worker (#2456) * Update links to lmsys-chat-1m (#2497) * Update train code to support the new tokenizer (#2498) * Third Party UI Example (#2499) * Add metharme (pygmalion) conversation template (#2500) * Optimize for proper flash attn causal handling (#2503) * Add Mistral AI instruction template (#2483) * Update monitor & plots (#2506) * Release v0.2.30 (#2507) * Fix for single turn dataset (#2509) * replace os.getenv with os.path.expanduser because the first one doesn… (#2515) Co-authored-by: khalil * Fix arena (#2522) * Update Dockerfile (#2524) * add Llama2ChangAdapter (#2510) * Add ExllamaV2 Inference Framework Support. 
(#2455) * Improve docs (#2534) * Fix warnings for new gradio versions (#2538) * revert the gradio change; now works for 3.40 * Improve chat templates (#2539) * Add Zephyr 7B Alpha (#2535) * Improve Support for Mistral-Instruct (#2547) * correct max_tokens by context_length instead of raise exception (#2544) * Revert "Improve Support for Mistral-Instruct" (#2552) * Fix Mistral template (#2529) * Add additional Informations from the vllm worker (#2550) * Make FastChat work with LMSYS-Chat-1M Code (#2551) * Create `tags` attribute to fix `MarkupError` in rich CLI (#2553) * move BaseModelWorker outside serve.model_worker to make it independent (#2531) * Misc style and bug fixes (#2559) * Fix README.md (#2561) * release v0.2.31 (#2563) * resolves #2542 modify dockerfile to upgrade cuda to 12.2.0 and pydantic 1.10.13 (#2565) * Add airoboros_v3 chat template (llama-2 format) (#2564) * Add Xwin-LM V0.1, V0.2 support (#2566) * Fixed model_worker generate_gate may blocked main thread (#2540) (#2562) * feat: add claude-v2 (#2571) * Update vigogne template (#2580) * Fix issue #2568: --device mps led to TypeError: forward() got an unexpected keyword argument 'padding_mask'. (#2579) * Add Mistral-7B-OpenOrca conversation_temmplate (#2585) * docs: bit misspell comments model adapter default template name conversation (#2594) * Update Mistral template (#2581) * Fix in mistral template * Update README.md (vicuna-v1.3 -> vicuna-1.5) (#2592) * Update README.md to highlight chatbot arena (#2596) * Add Lemur model (#2584) Co-authored-by: Roberto Ugolotti * add trust_remote_code=True in BaseModelAdapter (#2583) * Openai interface add use beam search and best of 2 (#2442) Signed-off-by: Lei Wen Co-authored-by: Lei Wen * Update qwen and add pygmalion (#2607) * feat: Support model AquilaChat2 (#2616) * Added settings vllm (#2599) Co-authored-by: bodza Co-authored-by: bodza * [Logprobs] Support logprobs=1 (#2612) * release v0.2.32 * fix: Fix for OpenOrcaAdapter to return correct conversation template (#2613) * Make fastchat.serve.model_worker to take debug argument (#2628) Co-authored-by: hi-jin * openchat 3.5 model support (#2638) * xFastTransformer framework support (#2615) * feat: support custom models vllm serving (#2635) * kill only fastchat process (#2641) * Update server_arch.png * Use conv.update_last_message api in mt-bench answer generation (#2647) * Improve Azure OpenAI interface (#2651) * Add required_temp support in jsonl format to support flexible temperature setting for gen_api_answer (#2653) * Pin openai version < 1 (#2658) * Remove exclude_unset parameter (#2654) * Revert "Remove exclude_unset parameter" (#2666) * added support for CodeGeex(2) (#2645) * add chatglm3 conv template support in conversation.py (#2622) * UI and model change (#2672) Co-authored-by: Lianmin Zheng * train_flant5: fix typo (#2673) * Fix gpt template (#2674) * Update README.md (#2679) * feat: support template's stop_str as list (#2678) * Update exllama_v2.md (#2680) * save model under deepspeed (#2689) * Adding SSL support for model workers and huggingface worker (#2687) * Check the max_new_tokens <= 0 in openai api server (#2688) * Add Microsoft/Orca-2-7b and update model support docs (#2714) * fix tokenizer of chatglm2 (#2711) * Template for using Deepseek code models (#2705) * add support for Chinese-LLaMA-Alpaca (#2700) * Make --load-8bit flag work with weights in safetensors format (#2698) * Format code and minor bug fix (#2716) * Bump version to v0.2.33 (#2717) * fix tokenizer.pad_token attribute error (#2710) * 
support stable-vicuna model (#2696) * Exllama cache 8bit (#2719) * Add Yi support (#2723) * Add Hermes 2.5 [fixed] (#2725) * Fix Hermes2Adapter (#2727) * Fix YiAdapter (#2730) * add trust_remote_code argument (#2715) * Add revision arg to MT Bench answer generation (#2728) * Fix MPS backend 'index out of range' error (#2737) * add starling support (#2738) * Add deepseek chat (#2760) * a convenient script for spinning up the API with Model Workers (#2790) * Prevent returning partial stop string in vllm worker (#2780) * Update UI and new models (#2762) * Support MetaMath (#2748) * Use common logging code in the OpenAI API server (#2758) Co-authored-by: Warren Francis * Show how to turn on experiment tracking for fine-tuning (#2742) Co-authored-by: Morgan McGuire * Support xDAN-L1-Chat Model (#2732) * Format code * Update the version to 0.2.34 (#2793) * add dolphin (#2794) * Fix tiny typo (#2805) * Add instructions for evaluating on MT bench using vLLM (#2770) * Update README.md * Add SOLAR-10.7b Instruct Model (#2826) * Update README.md (#2852) * fix: 'compeletion' typo (#2847) * Add Tunnelmole as an open source alternative to ngrok and include usage instructions (#2846) * update readme * update mt-bench readme * Add support for CatPPT (#2840) * Add functionality to ping AI2 InferD endpoints for tulu 2 (#2832) Co-authored-by: Sam Skjonsberg * add download models from www.modelscope.cn (#2830) Co-authored-by: mulin.lyh * Fix conv_template of chinese alpaca 2 (#2812) * add bagel model adapter (#2814) * add root_path argument to gradio web server. (#2807) Co-authored-by: bertls * Import `accelerate` locally to avoid it as a strong dependency (#2820) * Replace dict merge with unpacking for compatibility of 3.8 in vLLM worker (#2824) Signed-off-by: rudeigerc * Format code (#2854) * Openai API migrate (#2765) * fix openai api server docs * Add a16z as a sponser * Add new models (Perplexity, gemini) & Separate GPT versions (#2856) Co-authored-by: Wei-Lin Chiang * Clean error messages (#2857) * Update docs (#2858) * Modify doc description (#2859) * Fix the problem of not using the decoding method corresponding to the base model in peft mode (#2865) * update a new sota model on MT-Bench which touch an 8.8 scores. 
(#2864) * NPU needs to be initialized when starting a new process (#2843) * Fix the problem with "vllm + chatglm3" (#2845) (#2876) Co-authored-by: 姚峰 * Update token spacing for mistral conversation.py (#2872) * check if hm in models before deleting to avoid errors (#2870) Co-authored-by: Your Name * Add TinyLlama (#2889) * Fix bug that model doesn't automatically switch peft adapter (#2884) * Update web server commands (#2869) * fix the tokenize process and prompt template of chatglm3 (#2883) Co-authored-by: 章焕锭 * Add `Notus` support (#2813) Co-authored-by: alvarobartt * feat: support anthropic api with api_dict (#2879) * Update model_adapter.py (#2895) * leaderboard code update (#2867) * fix: change order of SEQUENCE_LENGTH_KEYS (#2925) * fix baichuan:apply_prompt_template call args error (#2921) Co-authored-by: Zheng Hao * Fix a typo in openai_api_server.py (#2905) * feat: use variables OPENAI_MODEL_LIST (#2907) * Add TenyxChat-7B-v1 model (#2901) Co-authored-by: sarath@L3 <[omitted]> * add support for iei yuan2.0 (https://huggingface.co/IEITYuan) (#2919) * nous-hermes-2-mixtral-dpo (#2922) * Bump the version to 0.2.35 (#2927) * fix specify local path issue use model from www.modelscope.cn (#2934) Co-authored-by: mulin.lyh * support openai embedding for topic clustering (#2729) * Remove duplicate API endpoint (#2949) * Update Hermes Mixtral (#2938) * Enablement of REST API Usage within Google Colab Free Tier (#2940) * Create a new worker implementation for Apple MLX (#2937) * feat: support Model Yuan2.0, a new generation Fundamental Large Language Model developed by IEIT System (#2936) * Fix the pooling method of BGE embedding model (#2926) * format code * SGLang Worker (#2928) * Fix sglang worker (#2953) * Update mlx_worker to be async (#2958) * Integrate LightLLM into serve worker (#2888) * Copy button (#2963) * feat: train with template (#2951) * fix content maybe a str (#2968) * Adding download folder information in README (#2972) * use cl100k_base as the default tiktoken encoding (#2974) Signed-off-by: bjwswang * Update README.md (#2975) * Fix tokenizer for vllm worker (#2984) * update yuan2.0 generation (#2989) * fix: tokenization mismatch when training with different templates (#2996) * fix: inconsistent tokenization by llama tokenizer (#3006) * Fix type hint for play_a_match_single (#3008) * code update (#2997) * Update model_support.md (#3016) * Update lightllm_integration.md (#3014) * Upgrade gradio to 4.17 (#3027) * Update MLX integration to use new generate_step function signature (#3021) * Update readme (#3028) * Update gradio version in `pyproject.toml` and fix a bug (#3029) * Update gradio demo and API model providers (#3030) * Gradio Web Server for Multimodal Models (#2960) Co-authored-by: Lianmin Zheng * Migrate the gradio server to openai v1 (#3032) * Update version to 0.2.36 (#3033) Co-authored-by: Wei-Lin Chiang * Add llava 34b template (#3034) * Update model support (#3040) * Add psutil to pyproject.toml dependencies (#3039) * Fix SGLang worker (#3045) * Random VQA Sample button for VLM direct chat (#3041) * Update arena.md to fix link (#3051) * multi inference --------- Signed-off-by: Lei Wen Signed-off-by: rudeigerc Signed-off-by: bjwswang Co-authored-by: Trangle Co-authored-by: Nathan Stitt Co-authored-by: Lianmin Zheng Co-authored-by: leiwen83 Co-authored-by: Lei Wen Co-authored-by: Jon Durbin Co-authored-by: Jon Durbin Co-authored-by: Rayrtfr <2384172887@qq.com> Co-authored-by: wuyongyu Co-authored-by: wangxiyuan Co-authored-by: Jeff (Zhen) Wang Co-authored-by: 
karshPrime <94996251+karshPrime@users.noreply.github.com> Co-authored-by: obitolyz Co-authored-by: Shangwei Chen <109785802+Somezak1@users.noreply.github.com> Co-authored-by: HyungJin Ahn Co-authored-by: zhangsibo1129 <134488188+zhangsibo1129@users.noreply.github.com> Co-authored-by: Tobias Birchler Co-authored-by: Jae-Won Chung Co-authored-by: Mingdao Liu Co-authored-by: Ying Sheng Co-authored-by: Brandon Biggs Co-authored-by: dongxiaolong <774848421@qq.com> Co-authored-by: 董晓龙 Co-authored-by: Siddartha Naidu Co-authored-by: shuishu <990941859@qq.com> Co-authored-by: Andrew Aikawa Co-authored-by: Liangsheng Yin Co-authored-by: enochlev <47466848+enochlev@users.noreply.github.com> Co-authored-by: AlpinDale <52078762+AlpinDale@users.noreply.github.com> Co-authored-by: Lé Co-authored-by: Toshiki Kataoka Co-authored-by: khalil <90086758+khalil-Hennara@users.noreply.github.com> Co-authored-by: khalil Co-authored-by: dubaoquan404 <87166864@qq.com> Co-authored-by: Chang W. Lee Co-authored-by: theScotchGame <36061851+leonxia1018@users.noreply.github.com> Co-authored-by: lewtun Co-authored-by: Stephen Horvath Co-authored-by: liunux4odoo <41217877+liunux4odoo@users.noreply.github.com> Co-authored-by: Norman Mu Co-authored-by: Sebastian Bodza <66752172+SebastianBodza@users.noreply.github.com> Co-authored-by: Tianle (Tim) Li <67527391+CodingWithTim@users.noreply.github.com> Co-authored-by: Wei-Lin Chiang Co-authored-by: Alex Co-authored-by: Jingcheng Hu <67776176+REIGN12@users.noreply.github.com> Co-authored-by: lvxuan <3645933+lvxuan263@users.noreply.github.com> Co-authored-by: cOng Co-authored-by: bofeng huang Co-authored-by: Phil-U-U Co-authored-by: Wayne Spangenberg Co-authored-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com> Co-authored-by: Rohan Gupta <63547845+Gk-rohan@users.noreply.github.com> Co-authored-by: ugolotti <96428459+ugolotti@users.noreply.github.com> Co-authored-by: Roberto Ugolotti Co-authored-by: edisonwd <2388100489@qq.com> Co-authored-by: FangYin Cheng Co-authored-by: bodza Co-authored-by: bodza Co-authored-by: Cody Yu Co-authored-by: Srinath Janakiraman Co-authored-by: Jaeheon Jeong Co-authored-by: One Co-authored-by: sheng.gui@intel.com Co-authored-by: David Co-authored-by: Witold Wasiczko Co-authored-by: Peter Willemsen Co-authored-by: ZeyuTeng96 <96521059+ZeyuTeng96@users.noreply.github.com> Co-authored-by: Forceless <72636351+Force1ess@users.noreply.github.com> Co-authored-by: Jeff <122586668+jm23jeffmorgan@users.noreply.github.com> Co-authored-by: MrZhengXin <34998703+MrZhengXin@users.noreply.github.com> Co-authored-by: Long Nguyen Co-authored-by: Elsa Granger Co-authored-by: Christopher Chou <49086305+BabyChouSr@users.noreply.github.com> Co-authored-by: wangshuai09 <391746016@qq.com> Co-authored-by: amaleshvemula Co-authored-by: Zollty Tsou Co-authored-by: xuguodong1999 Co-authored-by: Michael J Kaye <1014467+mjkaye@users.noreply.github.com> Co-authored-by: 152334H <54623771+152334H@users.noreply.github.com> Co-authored-by: Jingsong-Yan <75230787+Jingsong-Yan@users.noreply.github.com> Co-authored-by: Siyuan (Ryans) Zhuang Co-authored-by: Chris Kerwell Gresla <80501101+ckgresla@users.noreply.github.com> Co-authored-by: pandada8 Co-authored-by: Isaac Ong Co-authored-by: Warren Francis Co-authored-by: Warren Francis Co-authored-by: Morgan McGuire Co-authored-by: Morgan McGuire Co-authored-by: xDAN-AI <128944251+xiechengmude@users.noreply.github.com> Co-authored-by: Ikko Eltociear Ashimine Co-authored-by: Robbie Co-authored-by: Rishiraj Acharya 
<44090649+rishiraj@users.noreply.github.com> Co-authored-by: Nathan Lambert Co-authored-by: Sam Skjonsberg Co-authored-by: liuyhwangyh Co-authored-by: mulin.lyh Co-authored-by: stephanbertl Co-authored-by: bertls Co-authored-by: Chirag Jain Co-authored-by: Yuchen Cheng Co-authored-by: Shuo Yang <73746844+andy-yang-1@users.noreply.github.com> Co-authored-by: Wei-Lin Chiang Co-authored-by: JQ <460494839@qq.com> Co-authored-by: yaofeng Co-authored-by: 姚峰 Co-authored-by: Michael <67104840+thavens@users.noreply.github.com> Co-authored-by: Josh NE Co-authored-by: Your Name Co-authored-by: WHDY <38045789+WHDY@users.noreply.github.com> Co-authored-by: 章焕锭 Co-authored-by: Gabriel Martín Blázquez Co-authored-by: alvarobartt Co-authored-by: Zheng Hao Co-authored-by: Ren Xuancheng Co-authored-by: Sarath Shekkizhar <137322432+sarath-shekkizhar@users.noreply.github.com> Co-authored-by: wangpengfei1013 <155146149+wangpengfei1013@users.noreply.github.com> Co-authored-by: Alexandre Strube Co-authored-by: Teknium <127238744+teknium1@users.noreply.github.com> Co-authored-by: Cristian Gutiérrez <57730982+ggcr@users.noreply.github.com> Co-authored-by: ali asaria Co-authored-by: wulixuan Co-authored-by: staoxiao <2906698981@qq.com> Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com> Co-authored-by: dheeraj-326 Co-authored-by: bjwswang <30621793+bjwswang@users.noreply.github.com> Co-authored-by: Zhanghao Wu Co-authored-by: Ted Li Co-authored-by: Shukant Pal Co-authored-by: Lisa Dunlap Co-authored-by: Logan Kilpatrick <23kilpatrick23@gmail.com> --- README.md | 4 + docs/arena.md | 13 +- docs/commands/webserver.md | 5 +- docs/lightllm_integration.md | 18 + docs/mlx_integration.md | 23 + docs/model_support.md | 93 +++- docs/openai_api.md | 15 +- docs/third_party_ui.md | 24 + docs/training.md | 2 +- fastchat/__init__.py | 2 +- fastchat/constants.py | 1 + fastchat/conversation.py | 329 ++++++++++- fastchat/llm_judge/README.md | 24 +- fastchat/llm_judge/common.py | 35 +- fastchat/llm_judge/gen_api_answer.py | 16 +- fastchat/llm_judge/qa_browser.py | 6 +- fastchat/model/model_adapter.py | 366 ++++++++++++- fastchat/model/model_chatglm.py | 37 +- fastchat/model/model_registry.py | 460 +++++++++++++--- fastchat/model/model_yuan2.py | 139 +++++ fastchat/protocol/openai_api_protocol.py | 6 +- fastchat/serve/api_provider.py | 398 ++++++++++++-- fastchat/serve/base_model_worker.py | 21 +- fastchat/serve/call_monitor.py | 219 ++++++++ fastchat/serve/controller.py | 47 +- fastchat/serve/example_images/distracted.jpg | Bin 0 -> 94338 bytes fastchat/serve/example_images/fridge.jpg | Bin 0 -> 127987 bytes fastchat/serve/gradio_block_arena_anony.py | 335 +++++++++--- fastchat/serve/gradio_block_arena_named.py | 89 +-- fastchat/serve/gradio_block_arena_vision.py | 222 ++++++++ fastchat/serve/gradio_web_server.py | 468 +++++++++------- fastchat/serve/gradio_web_server_multi.py | 188 ++++--- fastchat/serve/huggingface_api.py | 2 +- fastchat/serve/huggingface_api_worker.py | 16 +- fastchat/serve/lightllm_worker.py | 512 ++++++++++++++++++ fastchat/serve/mlx_worker.py | 288 ++++++++++ fastchat/serve/model_worker.py | 59 +- fastchat/serve/monitor/basic_stats.py | 72 +-- fastchat/serve/monitor/clean_battle_data.py | 135 +++-- fastchat/serve/monitor/clean_chat_data.py | 2 +- fastchat/serve/monitor/elo_analysis.py | 97 +++- fastchat/serve/monitor/monitor.py | 307 ++++++++--- fastchat/serve/monitor/summarize_cluster.py | 21 +- fastchat/serve/monitor/topic_clustering.py | 45 +- fastchat/serve/openai_api_server.py | 56 +- 
fastchat/serve/register_worker.py | 2 + fastchat/serve/sglang_worker.py | 313 +++++++++++ fastchat/serve/vllm_worker.py | 36 +- fastchat/train/train_baichuan.py | 2 +- fastchat/train/train_with_template.py | 400 ++++++++++++++ fastchat/train/train_yuan2.py | 482 +++++++++++++++++ fastchat/utils.py | 49 +- multigpu_inference.sh | 2 +- playground/FastChat_API_GoogleColab.ipynb | 347 ++++++++++++ .../test_sentence_similarity.py | 2 +- pyproject.toml | 10 +- scripts/build-api.sh | 60 ++ tests/launch_openai_api_test_server.py | 36 +- tests/test_openai_api.py | 34 +- tests/test_openai_vision_api.py | 162 ++++++ 60 files changed, 6308 insertions(+), 846 deletions(-) create mode 100644 docs/lightllm_integration.md create mode 100644 docs/mlx_integration.md create mode 100644 docs/third_party_ui.md create mode 100644 fastchat/model/model_yuan2.py create mode 100644 fastchat/serve/call_monitor.py create mode 100644 fastchat/serve/example_images/distracted.jpg create mode 100644 fastchat/serve/example_images/fridge.jpg create mode 100644 fastchat/serve/gradio_block_arena_vision.py create mode 100644 fastchat/serve/lightllm_worker.py create mode 100644 fastchat/serve/mlx_worker.py create mode 100644 fastchat/serve/sglang_worker.py create mode 100644 fastchat/train/train_with_template.py create mode 100644 fastchat/train/train_yuan2.py create mode 100644 playground/FastChat_API_GoogleColab.ipynb create mode 100644 scripts/build-api.sh create mode 100644 tests/test_openai_vision_api.py diff --git a/README.md b/README.md index 8e611922e..9687fbbde 100644 --- a/README.md +++ b/README.md @@ -16,6 +16,10 @@ We are focused to support Llama2 at scale now. If you want any other models, ple ## Dev Log +### 2024-02 + +Sync upstream changes + ### 2023-09 Sync upstream changes diff --git a/docs/arena.md b/docs/arena.md index 979f41db5..2d79b2acf 100644 --- a/docs/arena.md +++ b/docs/arena.md @@ -5,10 +5,11 @@ We invite the entire community to join this benchmarking effort by contributing ## How to add a new model If you want to see a specific model in the arena, you can follow the methods below. -- Method 1: Hosted by LMSYS. - 1. Contribute the code to support this model in FastChat by submitting a pull request. See [instructions](model_support.md#how-to-support-a-new-model). - 2. After the model is supported, we will try to schedule some compute resources to host the model in the arena. However, due to the limited resources we have, we may not be able to serve every model. We will select the models based on popularity, quality, diversity, and other factors. +### Method 1: Hosted by 3rd party API providers or yourself +If you have a model hosted by a 3rd party API provider or yourself, please give us access to an API endpoint. + - We prefer OpenAI-compatible APIs, so we can reuse our [code](https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/api_provider.py) for calling OpenAI models. + - If you have your own API protocol, please follow the [instructions](model_support.md) to add it. Contribute your code by sending a pull request. -- Method 2: Hosted by 3rd party API providers or yourself. - 1. If you have a model hosted by a 3rd party API provider or yourself, please give us an API endpoint. We prefer OpenAI-compatible APIs, so we can reuse our [code](https://github.com/lm-sys/FastChat/blob/33dca5cf12ee602455bfa9b5f4790a07829a2db7/fastchat/serve/gradio_web_server.py#L333-L358) for calling OpenAI models. - 2.
You can use FastChat's OpenAI API [server](openai_api.md) to serve your model with OpenAI-compatible APIs and provide us with the endpoint. +### Method 2: Hosted by LMSYS +1. Contribute the code to support this model in FastChat by submitting a pull request. See [instructions](model_support.md). +2. After the model is supported, we will try to schedule some compute resources to host the model in the arena. However, due to the limited resources we have, we may not be able to serve every model. We will select the models based on popularity, quality, diversity, and other factors. diff --git a/docs/commands/webserver.md b/docs/commands/webserver.md index 179d3dfe7..df96cf8d2 100644 --- a/docs/commands/webserver.md +++ b/docs/commands/webserver.md @@ -24,10 +24,13 @@ python3 -m fastchat.serve.test_message --model vicuna-13b --controller http://lo cd fastchat_logs/server0 +python3 -m fastchat.serve.huggingface_api_worker --model-info-file ~/elo_results/register_hf_api_models.json + export OPENAI_API_KEY= export ANTHROPIC_API_KEY= +export GCP_PROJECT_ID= -python3 -m fastchat.serve.gradio_web_server_multi --controller http://localhost:21001 --concurrency 10 --add-chatgpt --add-claude --add-palm --anony-only --elo ~/elo_results/elo_results.pkl --leaderboard-table-file ~/elo_results/leaderboard_table.csv --register ~/elo_results/register_oai_models.json --show-terms +python3 -m fastchat.serve.gradio_web_server_multi --controller http://localhost:21001 --concurrency 50 --add-chatgpt --add-claude --add-palm --elo ~/elo_results/elo_results.pkl --leaderboard-table-file ~/elo_results/leaderboard_table.csv --register ~/elo_results/register_oai_models.json --show-terms python3 backup_logs.py ``` diff --git a/docs/lightllm_integration.md b/docs/lightllm_integration.md new file mode 100644 index 000000000..b271a826a --- /dev/null +++ b/docs/lightllm_integration.md @@ -0,0 +1,18 @@ +# LightLLM Integration +You can use [LightLLM](https://github.com/ModelTC/lightllm) as an optimized worker implementation in FastChat. +It offers advanced continuous batching and a much higher (~10x) throughput. +See the supported models [here](https://github.com/ModelTC/lightllm?tab=readme-ov-file#supported-model-list). + +## Instructions +1. Please refer to the [Get started](https://github.com/ModelTC/lightllm?tab=readme-ov-file#get-started) guide to install LightLLM, or use the [pre-built image](https://github.com/ModelTC/lightllm?tab=readme-ov-file#container). + +2. When you launch a model worker, replace the normal worker (`fastchat.serve.model_worker`) with the LightLLM worker (`fastchat.serve.lightllm_worker`). All other commands such as controller, gradio web server, and OpenAI API server are kept the same. Refer to [--max_total_token_num](https://github.com/ModelTC/lightllm/blob/4a9824b6b248f4561584b8a48ae126a0c8f5b000/docs/ApiServerArgs.md?plain=1#L23) to understand how to calculate the `--max_total_token_num` argument.
+ ``` + python3 -m fastchat.serve.lightllm_worker --model-path lmsys/vicuna-7b-v1.5 --tokenizer_mode "auto" --max_total_token_num 154000 + ``` + + If you want to use quantized weights and kv cache for inference, try + + ``` + python3 -m fastchat.serve.lightllm_worker --model-path lmsys/vicuna-7b-v1.5 --tokenizer_mode "auto" --max_total_token_num 154000 --mode triton_int8weight triton_int8kv + ``` diff --git a/docs/mlx_integration.md b/docs/mlx_integration.md new file mode 100644 index 000000000..21642d948 --- /dev/null +++ b/docs/mlx_integration.md @@ -0,0 +1,23 @@ +# Apple MLX Integration + +You can use [Apple MLX](https://github.com/ml-explore/mlx) as an optimized worker implementation in FastChat. + +It runs models efficiently on Apple Silicon. + +See the supported models [here](https://github.com/ml-explore/mlx-examples/tree/main/llms#supported-models). + +Note that for Apple Silicon Macs with less memory, smaller models (or quantized models) are recommended. + +## Instructions + +1. Install MLX. + + ``` + pip install "mlx-lm>=0.0.6" + ``` + +2. When you launch a model worker, replace the normal worker (`fastchat.serve.model_worker`) with the MLX worker (`fastchat.serve.mlx_worker`). Remember to launch a model worker after you have launched the controller ([instructions](../README.md)). + + ``` + python3 -m fastchat.serve.mlx_worker --model-path TinyLlama/TinyLlama-1.1B-Chat-v1.0 + ``` diff --git a/docs/model_support.md b/docs/model_support.md index fa0739128..ba5f5b79b 100644 --- a/docs/model_support.md +++ b/docs/model_support.md @@ -1,15 +1,48 @@ # Model Support +This document describes how to support a new model in FastChat. -## Supported models +## Contents +- [Local Models](#local-models) +- [API-Based Models](#api-based-models) + +## Local Models +To support a new local model in FastChat, you need to correctly handle its prompt template and model loading. +The goal is to make the following command run with the correct prompts. + +``` +python3 -m fastchat.serve.cli --model [YOUR_MODEL_PATH] +``` + +You can run this example command to learn the code logic. + +``` +python3 -m fastchat.serve.cli --model lmsys/vicuna-7b-v1.5 +``` + +You can add `--debug` to see the actual prompt sent to the model. + +### Steps + +FastChat uses the `Conversation` class to handle prompt templates and the `BaseModelAdapter` class to handle model loading. + +1. Implement a conversation template for the new model at [fastchat/conversation.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py). You can follow existing examples and use `register_conv_template` to add a new one. Please also add a link to the official reference code if possible. +2. Implement a model adapter for the new model at [fastchat/model/model_adapter.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/model_adapter.py). You can follow existing examples and use `register_model_adapter` to add a new one. +3. (Optional) Add the model name to the "Supported models" [section](#supported-models) above and add more information in [fastchat/model/model_registry.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/model_registry.py). + +After these steps, the new model should be compatible with most FastChat features, such as CLI, web UI, model worker, and OpenAI-compatible API server. Please do some testing with these features as well.
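+
+For reference, below is a minimal sketch of what steps 1 and 2 can look like. It is modeled on the existing entries in those two files; the `mymodel` name, the chat tokens, and the path-matching rule are purely illustrative placeholders, not a real adapter.
+
+```python
+# In fastchat/conversation.py (register_conv_template, Conversation, and
+# SeparatorStyle are defined in this module):
+register_conv_template(
+    Conversation(
+        name="mymodel",  # hypothetical template name
+        system_template="<|im_start|>system\n{system_message}",
+        roles=("<|im_start|>user", "<|im_start|>assistant"),
+        sep_style=SeparatorStyle.CHATML,
+        sep="<|im_end|>",
+        stop_str="<|im_end|>",
+    )
+)
+
+# In fastchat/model/model_adapter.py (BaseModelAdapter, get_conv_template,
+# and register_model_adapter live in this module):
+class MyModelAdapter(BaseModelAdapter):
+    """Adapter for the hypothetical `mymodel` checkpoints."""
+
+    def match(self, model_path: str):
+        # FastChat probes registered adapters with the --model-path value;
+        # return True when this adapter should handle it.
+        return "mymodel" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str):
+        return get_conv_template("mymodel")
+
+register_model_adapter(MyModelAdapter)
+```
+
+Weight and tokenizer loading falls back to the generic `BaseModelAdapter.load_model`; override it only if your model needs custom loading logic.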
+ +### Supported models - [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) - example: `python3 -m fastchat.serve.cli --model-path meta-llama/Llama-2-7b-chat-hf` - Vicuna, Alpaca, LLaMA, Koala - example: `python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5` +- [allenai/tulu-2-dpo-7b](https://huggingface.co/allenai/tulu-2-dpo-7b) - [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B) - [BAAI/AquilaChat2-7B](https://huggingface.co/BAAI/AquilaChat2-7B) - [BAAI/AquilaChat2-34B](https://huggingface.co/BAAI/AquilaChat2-34B) - [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en#using-huggingface-transformers) +- [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1) - [baichuan-inc/baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) - [BlinkDL/RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven) - example: `python3 -m fastchat.serve.cli --model-path ~/model_weights/RWKV-4-Raven-7B-v11x-Eng99%-Other1%-20230429-ctx8192.pth` @@ -18,13 +51,20 @@ - [camel-ai/CAMEL-13B-Combined-Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data) - [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) - [databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) +- [deepseek-ai/deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat) +- [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) - [FlagAlpha/Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat) - [FreedomIntelligence/phoenix-inst-chat-7b](https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b) - [FreedomIntelligence/ReaLM-7b-v1](https://huggingface.co/FreedomIntelligence/Realm-7b) - [h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b) +- [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta) +- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) - [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b) +- [IEITYuan/Yuan2-2B/51B/102B-hf](https://huggingface.co/IEITYuan) - [lcw99/polyglot-ko-12.8b-chang-instruct-chat](https://huggingface.co/lcw99/polyglot-ko-12.8b-chang-instruct-chat) - [lmsys/fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5) +- [meta-math/MetaMath-7B-V1.0](https://huggingface.co/meta-math/MetaMath-7B-V1.0) +- [Microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b) - [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) - example: `python3 -m fastchat.serve.cli --model-path mosaicml/mpt-7b-chat` - [Neutralzz/BiLLa-7B-SFT](https://huggingface.co/Neutralzz/BiLLa-7B-SFT) @@ -34,26 +74,25 @@ - [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) - [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) - [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) -- [VMware/open-llama-7b-v2-open-instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct) +- [OpenLemur/lemur-70b-chat-v1](https://huggingface.co/OpenLemur/lemur-70b-chat-v1) - [Phind/Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) - [project-baize/baize-v2-7b](https://huggingface.co/project-baize/baize-v2-7b) - [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) +- 
[rishiraj/CatPPT](https://huggingface.co/rishiraj/CatPPT) - [Salesforce/codet5p-6b](https://huggingface.co/Salesforce/codet5p-6b) - [StabilityAI/stablelm-tuned-alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) +- [tenyx/TenyxChat-7B-v1](https://huggingface.co/tenyx/TenyxChat-7B-v1) +- [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) - [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b) - [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) - [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) - [tiiuae/falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat) - [timdettmers/guanaco-33b-merged](https://huggingface.co/timdettmers/guanaco-33b-merged) - [togethercomputer/RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat) +- [VMware/open-llama-7b-v2-open-instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct) - [WizardLM/WizardLM-13B-V1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0) - [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) -- [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta) -- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) - [Xwin-LM/Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) -- [OpenLemur/lemur-70b-chat-v1](https://huggingface.co/OpenLemur/lemur-70b-chat-v1) -- [allenai/tulu-2-dpo-7b](https://huggingface.co/allenai/tulu-2-dpo-7b) -- [Microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b) - Any [EleutherAI](https://huggingface.co/EleutherAI) pythia model such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) - Any [Peft](https://github.com/huggingface/peft) adapter trained on top of a model above. To activate, must have `peft` in the model path. Note: If @@ -61,29 +100,31 @@ setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker. -## How to support a new model -To support a new model in FastChat, you need to correctly handle its prompt template and model loading. -The goal is to make the following command run with the correct prompts. +## API-Based Models +To support an API-based model, consider learning from the existing OpenAI example. +If the model is compatible with OpenAI APIs, then a configuration file is all that's needed without any additional code. +For custom protocols, implementation of a streaming generator in [fastchat/serve/api_provider.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/api_provider.py) is required, following the provided examples. Currently, FastChat is compatible with OpenAI, Anthropic, Google Vertex AI, Mistral, and Nvidia NGC. +### Steps to Launch a WebUI with an API Model +1. Specify the endpoint information in a JSON configuration file. For instance, create a file named `api_endpoints.json`: +```json +{ + "gpt-3.5-turbo": { + "model_name": "gpt-3.5-turbo", + "api_type": "openai", + "api_base": "https://api.openai.com/v1", + "api_key": "sk-******", + "anony_only": false + } +} ``` -python3 -m fastchat.serve.cli --model [YOUR_MODEL_PATH] -``` - -You can run this example command to learn the code logic. + - "api_type" can be one of the following: openai, anthropic, gemini, or mistral. For custom APIs, add a new type and implement it accordingly. + - "anony_only" indicates whether to display this model in anonymous mode only. +2. 
Launch the Gradio web server with the argument `--register api_endpoints.json`: ``` -python3 -m fastchat.serve.cli --model lmsys/vicuna-7b-v1.5 +python3 -m fastchat.serve.gradio_web_server --controller "" --share --register api_endpoints.json ``` -You can add `--debug` to see the actual prompt sent to the model. - -### Steps - -FastChat uses the `Conversation` class to handle prompt templates and `BaseModelAdapter` class to handle model loading. - -1. Implement a conversation template for the new model at [fastchat/conversation.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py). You can follow existing examples and use `register_conv_template` to add a new one. Please also add a link to the official reference code if possible. -2. Implement a model adapter for the new model at [fastchat/model/model_adapter.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/model_adapter.py). You can follow existing examples and use `register_model_adapter` to add a new one. -3. (Optional) add the model name to the "Supported models" [section](#supported-models) above and add more information in [fastchat/model/model_registry.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/model_registry.py). - -After these steps, the new model should be compatible with most FastChat features, such as CLI, web UI, model worker, and OpenAI-compatible API server. Please do some testing with these features as well. +Now, you can open a browser and interact with the model. diff --git a/docs/openai_api.md b/docs/openai_api.md index f3c0fba93..089b500ff 100644 --- a/docs/openai_api.md +++ b/docs/openai_api.md @@ -8,6 +8,8 @@ The following OpenAI APIs are supported: - Completions. (Reference: https://platform.openai.com/docs/api-reference/completions) - Embeddings. (Reference: https://platform.openai.com/docs/api-reference/embeddings) +The REST API can be seamlessly operated from Google Colab, as demonstrated in the [FastChat_API_GoogleColab.ipynb](https://github.com/lm-sys/FastChat/blob/main/playground/FastChat_API_GoogleColab.ipynb) notebook, available in our repository. This notebook provides a practical example of how to utilize the API effectively within the Google Colab environment. + ## RESTful API Server First, launch the controller @@ -32,29 +34,28 @@ Now, let us test the API server. ### OpenAI Official SDK The goal of `openai_api_server.py` is to implement a fully OpenAI-compatible API server, so the models can be used directly with [openai-python](https://github.com/openai/openai-python) library. -First, install openai-python: +First, install OpenAI python package >= 1.0: ```bash pip install --upgrade openai ``` -Then, interact with model vicuna: +Then, interact with the Vicuna model: ```python import openai -# to get proper authentication, make sure to use a valid key that's listed in -# the --api-keys flag. if no flag value is provided, the `api_key` will be ignored. + openai.api_key = "EMPTY" -openai.api_base = "http://localhost:8000/v1" +openai.base_url = "http://localhost:8000/v1/" model = "vicuna-7b-v1.5" prompt = "Once upon a time" # create a completion -completion = openai.Completion.create(model=model, prompt=prompt, max_tokens=64) +completion = openai.completions.create(model=model, prompt=prompt, max_tokens=64) # print the completion print(prompt + completion.choices[0].text) # create a chat completion -completion = openai.ChatCompletion.create( +completion = openai.chat.completions.create( model=model, messages=[{"role": "user", "content": "Hello! 
What is your name?"}] ) diff --git a/docs/third_party_ui.md b/docs/third_party_ui.md new file mode 100644 index 000000000..c0b230150 --- /dev/null +++ b/docs/third_party_ui.md @@ -0,0 +1,24 @@ +# Third Party UI +If you want to use FastChat with your own UI or a third-party UI, you can launch the [OpenAI compatible server](openai_api.md), expose it with a tunnelling service such as Tunnelmole or ngrok, and then enter the endpoint and credentials in that UI. + +You can find suitable UIs from third party repos: +- [WongSaang's ChatGPT UI](https://github.com/WongSaang/chatgpt-ui) +- [McKayWrigley's Chatbot UI](https://github.com/mckaywrigley/chatbot-ui) + +- Please note that some third-party providers only offer the standard `gpt-3.5-turbo`, `gpt-4`, etc., so you will have to add your own custom model inside the code. [Here is an example of how to create a UI with any custom model name](https://github.com/ztjhz/BetterChatGPT/pull/461). + +##### Using Tunnelmole +Tunnelmole is an open source tunnelling tool. You can find its source code on [Github](https://github.com/robbie-cahill/tunnelmole-client). Here's how you can use Tunnelmole: +1. Install Tunnelmole with `curl -O https://install.tunnelmole.com/9Wtxu/install && sudo bash install`. (On Windows, download [tmole.exe](https://tunnelmole.com/downloads/tmole.exe)). Head over to the [README](https://github.com/robbie-cahill/tunnelmole-client) for other methods such as `npm` or building from source. +2. Run `tmole 7860` (replace `7860` with your listening port if it is different from 7860). The output will display two URLs: one HTTP and one HTTPS. It's best to use the HTTPS URL for better privacy and security. +``` +➜ ~ tmole 7860 +http://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860 +https://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860 +``` + +##### Using ngrok +ngrok is a popular closed source tunnelling tool. First download and install it from [ngrok.com](https://ngrok.com/downloads). Here's how to use it to expose port 7860. +``` +ngrok http 7860 +``` diff --git a/docs/training.md b/docs/training.md index 077221824..87b87312f 100644 --- a/docs/training.md +++ b/docs/training.md @@ -90,7 +90,7 @@ deepspeed fastchat/train/train_lora_t5.py \ ### Fine-tuning Vicuna-7B with Local NPUs -You can use the following command to train Vicuna-7B with 8 x 910B (60GB). Use `--nproc_per_node` to specify the number of NPUs. +You can use the following command to train Vicuna-7B with 8 x NPUs. Use `--nproc_per_node` to specify the number of NPUs. ```bash torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/train.py \ --model_name_or_path ~/vicuna-7b-v1.5-16k \ diff --git a/fastchat/__init__.py b/fastchat/__init__.py index c4feccf55..c971add65 100644 --- a/fastchat/__init__.py +++ b/fastchat/__init__.py @@ -1 +1 @@ -__version__ = "0.2.33" +__version__ = "0.2.36" diff --git a/fastchat/constants.py b/fastchat/constants.py index 53ed55c1c..24e1783af 100644 --- a/fastchat/constants.py +++ b/fastchat/constants.py @@ -15,6 +15,7 @@ CONVERSATION_LIMIT_MSG = "YOU HAVE REACHED THE CONVERSATION LENGTH LIMIT. PLEASE CLEAR HISTORY AND START A NEW CONVERSATION." INACTIVE_MSG = "THIS SESSION HAS BEEN INACTIVE FOR TOO LONG. PLEASE REFRESH THIS PAGE." SLOW_MODEL_MSG = "⚠️ Both models will show the responses all at once. Please stay patient as it may take over 30 seconds." +RATE_LIMIT_MSG = "**RATE LIMIT OF THIS MODEL IS REACHED.
PLEASE COME BACK LATER OR TRY OTHER MODELS.**" # Maximum input length INPUT_CHAR_LEN_LIMIT = int(os.getenv("FASTCHAT_INPUT_CHAR_LEN_LIMIT", 12000)) # Maximum conversation turns diff --git a/fastchat/conversation.py b/fastchat/conversation.py index 9c8b57e13..95576536c 100644 --- a/fastchat/conversation.py +++ b/fastchat/conversation.py @@ -5,8 +5,10 @@ If you have any changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates. """ +import base64 import dataclasses from enum import auto, IntEnum +from io import BytesIO from typing import List, Any, Dict, Union, Tuple @@ -29,6 +31,12 @@ class SeparatorStyle(IntEnum): ROBIN = auto() FALCON_CHAT = auto() CHATGLM3 = auto() + DEEPSEEK_CHAT = auto() + METAMATH = auto() + YUAN2 = auto() + + +IMAGE_PLACEHOLDER_STR = "$$$$" @dataclasses.dataclass @@ -44,6 +52,7 @@ class Conversation: # The names of two roles roles: Tuple[str] = ("USER", "ASSISTANT") # All messages. Each item is (role, message). + # Each message is either a string or a tuple of (string, List[image_url]). messages: List[List[str]] = () # The number of few shot examples offset: int = 0 @@ -72,6 +81,9 @@ def get_prompt(self) -> str: ret = system_prompt + seps[0] for i, (role, message) in enumerate(self.messages): if message: + if type(message) is tuple: + message, images = message + message = IMAGE_PLACEHOLDER_STR * len(images) + message ret += role + ": " + message + seps[i % 2] else: ret += role + ":" @@ -160,6 +172,9 @@ def get_prompt(self) -> str: ret = "" if system_prompt == "" else system_prompt + self.sep + "\n" for role, message in self.messages: if message: + if type(message) is tuple: + message, images = message + message = IMAGE_PLACEHOLDER_STR * len(images) + message ret += role + "\n" + message + self.sep + "\n" else: ret += role + "\n" @@ -170,7 +185,7 @@ def get_prompt(self) -> str: ret += system_prompt for role, message in self.messages: if message: - ret += role + "\n" + " " + message + ret += role + "\n" + message else: ret += role return ret @@ -222,11 +237,52 @@ def get_prompt(self) -> str: ret += role + ": " + message + self.sep else: ret += role + ":" - + return ret + elif self.sep_style == SeparatorStyle.METAMATH: + ret = "" if system_prompt == "" else system_prompt + self.sep + for i, (role, message) in enumerate(self.messages): + # For MetaMath, sep2 is used to prefix the message. 
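+ # Even-indexed (user) turns end with sep; odd-indexed (assistant) turns are prefixed with ": " + sep2 and get no trailing separator.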
+ starting_sep = ":\n" if i % 2 == 0 else ": " + self.sep2 + ending_sep = self.sep if i % 2 == 0 else "" + if message: + ret += role + starting_sep + message + ending_sep + else: + ret += role + starting_sep + return ret + elif self.sep_style == SeparatorStyle.DEEPSEEK_CHAT: + seps = [self.sep, self.sep2] + ret = system_prompt + for i, (role, message) in enumerate(self.messages): + if message: + ret += role + ": " + message + seps[i % 2] + else: + ret += role + ":" + return ret + elif self.sep_style == SeparatorStyle.YUAN2: + seps = [self.sep, self.sep2] + ret = "" + if self.system_message: + ret += system_prompt + seps[1] + for _, message in self.messages: + if message: + ret += message + "" + else: + ret += "" + ret = ret.rstrip("") + seps[0] return ret else: raise ValueError(f"Invalid style: {self.sep_style}") + def get_images(self): + images = [] + for i, (role, msg) in enumerate(self.messages[self.offset :]): + if i % 2 == 0: + if type(msg) is tuple: + for image in msg[1]: + images.append(image) + + return images + def set_system_message(self, system_message: str): """Set the system message.""" self.system_message = system_message @@ -243,11 +299,52 @@ def update_last_message(self, message: str): """ self.messages[-1][1] = message + def convert_image_to_base64(self, image): + """Given an image, return the base64 encoded image string.""" + from PIL import Image + import requests + + # Load image if it has not been loaded in yet + if type(image) == str: + if image.startswith("http://") or image.startswith("https://"): + response = requests.get(image) + image = Image.open(BytesIO(response.content)).convert("RGB") + elif "base64" in image: + # OpenAI format is: data:image/jpeg;base64,{base64_encoded_image_str} + return image.split(",")[1] + else: + image = Image.open(image).convert("RGB") + + max_hw, min_hw = max(image.size), min(image.size) + aspect_ratio = max_hw / min_hw + max_len, min_len = 2048, 2048 + shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw)) + longest_edge = int(shortest_edge * aspect_ratio) + W, H = image.size + if longest_edge != max(image.size): + if H > W: + H, W = longest_edge, shortest_edge + else: + H, W = shortest_edge, longest_edge + image = image.resize((W, H)) + + buffered = BytesIO() + image.save(buffered, format="PNG") + img_b64_str = base64.b64encode(buffered.getvalue()).decode() + + return img_b64_str + def to_gradio_chatbot(self): """Convert the conversation to gradio chatbot format.""" ret = [] for i, (role, msg) in enumerate(self.messages[self.offset :]): if i % 2 == 0: + if type(msg) is tuple: + msg, image = msg + img_b64_str = image[0] # Only one image on gradio at one time + img_str = f'user upload image' + msg = img_str + msg.replace("\n", "").strip() + ret.append([msg, None]) else: ret[-1][-1] = msg @@ -255,7 +352,10 @@ def to_gradio_chatbot(self): def to_openai_api_messages(self): """Convert the conversation to OpenAI chat completion format.""" - ret = [{"role": "system", "content": self.system_message}] + if self.system_message == "": + ret = [] + else: + ret = [{"role": "system", "content": self.system_message}] for i, (_, msg) in enumerate(self.messages[self.offset :]): if i % 2 == 0: @@ -265,6 +365,12 @@ def to_openai_api_messages(self): ret.append({"role": "assistant", "content": msg}) return ret + def extract_text_from_messages(self): + return [ + (role, message[0]) if type(message) is tuple else (role, message) + for role, message in self.messages + ] + def copy(self): return Conversation( name=self.name, @@ -285,7 +391,7 @@ 
def dict(self): "template_name": self.name, "system_message": self.system_message, "roles": self.roles, - "messages": self.messages, + "messages": self.extract_text_from_messages(), "offset": self.offset, } @@ -463,7 +569,7 @@ def get_conv_template(name: str) -> Conversation: register_conv_template( Conversation( name="chatglm3", - system_template="<|system|>\n {system_message}", + system_template="<|system|>\n{system_message}", roles=("<|user|>", "<|assistant|>"), sep_style=SeparatorStyle.CHATGLM3, stop_token_ids=[ @@ -527,10 +633,20 @@ def get_conv_template(name: str) -> Conversation: ) ) +# TenyxChat default template +register_conv_template( + Conversation( + name="tenyxchat", + roles=("User", "Assistant"), + sep_style=SeparatorStyle.FALCON_CHAT, + sep="<|end_of_turn|>", + ) +) + # Deepseek code default template register_conv_template( Conversation( - name="deepseek", + name="deepseek-coder", system_template="You are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.", roles=("### Instruction:", "### Response:"), sep="\n", @@ -658,6 +774,17 @@ def get_conv_template(name: str) -> Conversation: ) ) +# Perplexity AI template +register_conv_template( + Conversation( + name="pplxai", + system_message="Be precise and concise.", + roles=("user", "assistant"), + sep_style=None, + sep=None, + ) +) + # Claude default template register_conv_template( Conversation( @@ -668,6 +795,20 @@ def get_conv_template(name: str) -> Conversation: ) ) +# MetaMath default template +# reference: https://github.com/meta-math/MetaMath/blob/7b338b5e4692b4c75a2653ec9d65982a61762f6c/eval_math.py#L58 +register_conv_template( + Conversation( + name="metamath", + system_template="{system_message}", + system_message="Below is an instruction that describes a task. Write a response that appropriately completes the request.", + roles=("### Instruction", "### Response"), + sep_style=SeparatorStyle.METAMATH, + sep="\n\n", + sep2="Let's think step by step.", + ) +) + # MPT default template register_conv_template( Conversation( @@ -740,6 +881,15 @@ def get_conv_template(name: str) -> Conversation: ) ) +register_conv_template( + Conversation( + name="gemini", + roles=("user", "model"), + sep_style=None, + sep=None, + ) +) + # BiLLa default template register_conv_template( Conversation( @@ -933,7 +1083,7 @@ def get_conv_template(name: str) -> Conversation: register_conv_template( Conversation( name="mistral", - system_template="[INST]{system_message}\n", + system_template="[INST] {system_message}\n", roles=("[INST]", "[/INST]"), sep_style=SeparatorStyle.LLAMA2, sep=" ", @@ -955,6 +1105,18 @@ def get_conv_template(name: str) -> Conversation: ) ) +register_conv_template( + Conversation( + name="chinese-alpaca2", + system_template="[INST] <>\n{system_message}\n<>\n\n", + system_message="You are a helpful assistant. 
你是一个乐于助人的助手。请你提供专业、有逻辑、内容真实、有价值的详细回复。", + roles=("[INST]", "[/INST]"), + sep_style=SeparatorStyle.LLAMA2, + sep=" ", + sep2=" ", + ) +) + register_conv_template( Conversation( name="cutegpt", @@ -1003,6 +1165,21 @@ def get_conv_template(name: str) -> Conversation: ) +# ehartford/dolphin-2.2.1-mistral-7b template +# reference: https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b#training +register_conv_template( + Conversation( + name="dolphin-2.2.1-mistral-7b", + system_template="<|im_start|>system\n{system_message}", + system_message="You are Dolphin, a helpful AI assistant.", + roles=("<|im_start|>user", "<|im_start|>assistant"), + sep_style=SeparatorStyle.CHATML, + sep="<|im_end|>", + stop_token_ids=[32000, 32001], + ) +) + + # teknium/OpenHermes-2.5-Mistral-7B template # source: https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B # reference: https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B#prompt-template @@ -1019,6 +1196,21 @@ def get_conv_template(name: str) -> Conversation: ) +# NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO template +# source: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO +register_conv_template( + Conversation( + name="Nous-Hermes-2-Mixtral-8x7B-DPO", + system_template="<|im_start|>system\n{system_message}", + system_message='You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. 
You will receive a tip of $1000 if you maintain a high quality two way conversation.', + roles=("<|im_start|>user", "<|im_start|>assistant"), + sep_style=SeparatorStyle.CHATML, + sep="<|im_end|>", + stop_token_ids=[32000, 32001], + ) +) + + # Qwen-chat default template # source: https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/qwen_generation_utils.py#L130 register_conv_template( @@ -1236,6 +1428,18 @@ def get_conv_template(name: str) -> Conversation: stop_str="<|user|>", ) ) +# xDAN default template +# source: https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1 +register_conv_template( + Conversation( + name="xdan-v1", + system_message="You are a helpful and harmless assistant named xDAN and created by xDAN-AI.Please response and work on questions thinking step by step.", + roles=("### Human", "### Assistant"), + sep_style=SeparatorStyle.NO_COLON_SINGLE, + sep="\n", + stop_str="", + ) +) # Zephyr template # reference: https://huggingface.co/spaces/HuggingFaceH4/zephyr-playground/blob/main/dialogues.py @@ -1251,6 +1455,34 @@ def get_conv_template(name: str) -> Conversation: ) ) +# CatPPT template +# reference: https://huggingface.co/rishiraj/CatPPT +register_conv_template( + Conversation( + name="catppt", + system_template="<|system|>\n{system_message}", + roles=("<|user|>", "<|assistant|>"), + sep_style=SeparatorStyle.CHATML, + sep="", + stop_token_ids=[2], + stop_str="", + ) +) + +# TinyLlama template +# reference: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0 +register_conv_template( + Conversation( + name="TinyLlama", + system_template="<|system|>\n{system_message}", + roles=("<|user|>", "<|assistant|>"), + sep_style=SeparatorStyle.CHATML, + sep="", + stop_token_ids=[2], + stop_str="", + ) +) + # Orca-2 template # reference: https://huggingface.co/microsoft/Orca-2-7b register_conv_template( @@ -1265,6 +1497,89 @@ def get_conv_template(name: str) -> Conversation: ) ) +# Deepseek-chat template +# reference: https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat/blob/main/tokenizer_config.json +register_conv_template( + Conversation( + name="deepseek-chat", + system_message="<|begin▁of▁sentence|>", # must add a bos token before first message + roles=("User", "Assistant"), + sep_style=SeparatorStyle.DEEPSEEK_CHAT, + sep="\n\n", + sep2="<|end▁of▁sentence|>", + stop_str="<|end▁of▁sentence|>", + ) +) + +# Yuan2.0 chat template +# source: https://huggingface.co/IEITYuan/Yuan2-2B-Janus-hf/blob/main/tokenizer_config.json#L6 +register_conv_template( + Conversation( + name="yuan2", + roles=("user", "assistant"), + sep_style=SeparatorStyle.YUAN2, + sep="", + sep2="\n", + stop_token_ids=[ + 77185, + ], # "" + stop_str="", + ) +) + +# Solar-10.7B Chat Template +# Reference: https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/blob/main/tokenizer_config.json +register_conv_template( + Conversation( + name="solar", + system_message="", + roles=("### User", "### Assistant"), + sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE, + sep="\n\n", + stop_str="", + ) +) + +# nvidia/Llama2-70B-SteerLM-Chat +register_conv_template( + Conversation( + name="steerlm", + system_message="", + roles=("user", "assistant"), + sep_style=None, + sep=None, + ) +) + +# yuan 2.0 template +# reference:https://github.com/IEIT-Yuan/Yuan-2.0 +# reference:https://huggingface.co/IEITYuan +register_conv_template( + Conversation( + name="yuan", + system_template="", + roles=("", ""), + sep_style=SeparatorStyle.NO_COLON_SINGLE, + sep="", + stop_str="", + ) +) + +# Llava-chatml +# reference: 
https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/llava/conversation.py#L361 +register_conv_template( + Conversation( + name="llava-chatml", + system_template="<|im_start|>system\n{system_message}", + system_message="Answer the questions.", + roles=("<|im_start|>user", "<|im_start|>assistant"), + sep_style=SeparatorStyle.CHATML, + sep="<|im_end|>", + stop_str="<|im_end|>", + ) +) + + if __name__ == "__main__": from fastchat.conversation import get_conv_template diff --git a/fastchat/llm_judge/README.md b/fastchat/llm_judge/README.md index 1d2646b13..6737cf8ba 100644 --- a/fastchat/llm_judge/README.md +++ b/fastchat/llm_judge/README.md @@ -59,7 +59,7 @@ You can also specify `--num-gpus-per-model` for model parallelism (needed for la #### Step 2. Generate GPT-4 judgments There are several options to use GPT-4 as a judge, such as pairwise winrate and single-answer grading. -In MT-bench, we recommond single-answer grading as the default mode. +In MT-bench, we recommend single-answer grading as the default mode. This mode asks GPT-4 to grade and give a score to model's answer directly without pairwise comparison. For each turn, GPT-4 will give a score on a scale of 10. We then compute the average score on all turns. @@ -129,6 +129,27 @@ You can use this [colab notebook](https://colab.research.google.com/drive/15O3Y8 +### Other backends +We can also use vLLM for answer generation, which can be faster for the models supported by vLLM. + +1. Launch a vLLM worker +``` +python3 -m fastchat.serve.controller +python3 -m fastchat.serve.vllm_worker --model-path [MODEL-PATH] +python3 -m fastchat.serve.openai_api_server --host localhost --port 8000 +``` + - Arguments: + - `[MODEL-PATH]` is the path to the weights, which can be a local folder or a Hugging Face repo ID. + +2. Generate the answers +``` +python gen_api_answer.py --model [MODEL-NAME] --openai-api-base http://localhost:8000/v1 --parallel 50 +``` + - Arguments: + - `[MODEL-NAME]` is the name of the model from Step 1. + - `--parallel` is the number of concurrent API calls to the vLLM worker. + + ## Agreement Computation We released 3.3K human annotations for model responses generated by 6 models in response to 80 MT-bench questions. The dataset is available at [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). @@ -138,6 +159,7 @@ This Colab [notebook](https://colab.research.google.com/drive/1ctgygDRJhVGUJTQy8 - [Chatbot Arena Conversation Dataset](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations) - [MT-bench Human Annotation Dataset](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments) + ## Citation Please cite the following paper if you find the code or datasets helpful. 
``` diff --git a/fastchat/llm_judge/common.py b/fastchat/llm_judge/common.py index 4b598cefb..d2640d601 100644 --- a/fastchat/llm_judge/common.py +++ b/fastchat/llm_judge/common.py @@ -14,7 +14,11 @@ import openai import anthropic -from fastchat.model.model_adapter import get_conversation_template, ANTHROPIC_MODEL_LIST +from fastchat.model.model_adapter import ( + get_conversation_template, + ANTHROPIC_MODEL_LIST, + OPENAI_MODEL_LIST, +) # API setting constants API_MAX_RETRY = 16 @@ -159,10 +163,10 @@ def run_judge_single(question, answer, judge, ref_answer, multi_turn=False): conv.append_message(conv.roles[0], user_prompt) conv.append_message(conv.roles[1], None) - if model in ["gpt-3.5-turbo", "gpt-4"]: - judgment = chat_compeletion_openai(model, conv, temperature=0, max_tokens=2048) + if model in OPENAI_MODEL_LIST: + judgment = chat_completion_openai(model, conv, temperature=0, max_tokens=2048) elif model in ANTHROPIC_MODEL_LIST: - judgment = chat_compeletion_anthropic( + judgment = chat_completion_anthropic( model, conv, temperature=0, max_tokens=1024 ) else: @@ -185,7 +189,7 @@ def run_judge_single(question, answer, judge, ref_answer, multi_turn=False): return rating, user_prompt, judgment -def play_a_match_single(match: MatchPair, output_file: str): +def play_a_match_single(match: MatchSingle, output_file: str): question, model, answer, judge, ref_answer, multi_turn = ( match.question, match.model, @@ -262,14 +266,14 @@ def run_judge_pair(question, answer_a, answer_b, judge, ref_answer, multi_turn=F conv.append_message(conv.roles[0], user_prompt) conv.append_message(conv.roles[1], None) - if model in ["gpt-3.5-turbo", "gpt-4"]: + if model in OPENAI_MODEL_LIST: conv.set_system_message(system_prompt) - judgment = chat_compeletion_openai(model, conv, temperature=0, max_tokens=2048) + judgment = chat_completion_openai(model, conv, temperature=0, max_tokens=2048) elif model in ANTHROPIC_MODEL_LIST: if system_prompt != "You are a helpful assistant.": user_prompt = "[Instruction]\n" + system_prompt + "\n\n" + user_prompt conv.messages[0][1] = user_prompt - judgment = chat_compeletion_anthropic( + judgment = chat_completion_anthropic( model, conv, temperature=0, max_tokens=1024 ) else: @@ -400,7 +404,7 @@ def play_a_match_pair(match: MatchPair, output_file: str): return result -def chat_compeletion_openai(model, conv, temperature, max_tokens, api_dict=None): +def chat_completion_openai(model, conv, temperature, max_tokens, api_dict=None): if api_dict is not None: openai.api_base = api_dict["api_base"] openai.api_key = api_dict["api_key"] @@ -424,7 +428,7 @@ def chat_compeletion_openai(model, conv, temperature, max_tokens, api_dict=None) return output -def chat_compeletion_openai_azure(model, conv, temperature, max_tokens, api_dict=None): +def chat_completion_openai_azure(model, conv, temperature, max_tokens, api_dict=None): openai.api_type = "azure" openai.api_version = "2023-07-01-preview" if api_dict is not None: @@ -463,11 +467,16 @@ def chat_compeletion_openai_azure(model, conv, temperature, max_tokens, api_dict return output -def chat_compeletion_anthropic(model, conv, temperature, max_tokens): +def chat_completion_anthropic(model, conv, temperature, max_tokens, api_dict=None): + if api_dict is not None and "api_key" in api_dict: + api_key = api_dict["api_key"] + else: + api_key = os.environ["ANTHROPIC_API_KEY"] + output = API_ERROR_OUTPUT for _ in range(API_MAX_RETRY): try: - c = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"]) + c = anthropic.Anthropic(api_key=api_key) 
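+            # get_prompt() flattens the whole conversation into the single prompt
+            # string expected by Anthropic's (legacy) text completions endpoint.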
prompt = conv.get_prompt() response = c.completions.create( model=model, @@ -484,7 +493,7 @@ def chat_compeletion_anthropic(model, conv, temperature, max_tokens): return output.strip() -def chat_compeletion_palm(chat_state, model, conv, temperature, max_tokens): +def chat_completion_palm(chat_state, model, conv, temperature, max_tokens): from fastchat.serve.api_provider import init_palm_chat assert model == "palm-2-chat-bison-001" diff --git a/fastchat/llm_judge/gen_api_answer.py b/fastchat/llm_judge/gen_api_answer.py index b39618546..8f9c62624 100644 --- a/fastchat/llm_judge/gen_api_answer.py +++ b/fastchat/llm_judge/gen_api_answer.py @@ -1,7 +1,7 @@ """Generate answers with GPT-4 Usage: -python3 get_api_answer.py --model gpt-3.5-turbo +python3 gen_api_answer.py --model gpt-3.5-turbo """ import argparse import json @@ -16,9 +16,9 @@ from fastchat.llm_judge.common import ( load_questions, temperature_config, - chat_compeletion_openai, - chat_compeletion_anthropic, - chat_compeletion_palm, + chat_completion_openai, + chat_completion_anthropic, + chat_completion_palm, ) from fastchat.llm_judge.gen_model_answer import reorg_answer_file from fastchat.model.model_adapter import get_conversation_template, ANTHROPIC_MODEL_LIST @@ -50,15 +50,13 @@ def get_answer( conv.append_message(conv.roles[1], None) if model in ANTHROPIC_MODEL_LIST: - output = chat_compeletion_anthropic( - model, conv, temperature, max_tokens - ) + output = chat_completion_anthropic(model, conv, temperature, max_tokens) elif model == "palm-2-chat-bison-001": - chat_state, output = chat_compeletion_palm( + chat_state, output = chat_completion_palm( chat_state, model, conv, temperature, max_tokens ) else: - output = chat_compeletion_openai(model, conv, temperature, max_tokens) + output = chat_completion_openai(model, conv, temperature, max_tokens) conv.update_last_message(output) turns.append(output) diff --git a/fastchat/llm_judge/qa_browser.py b/fastchat/llm_judge/qa_browser.py index e449dee3a..1107756db 100644 --- a/fastchat/llm_judge/qa_browser.py +++ b/fastchat/llm_judge/qa_browser.py @@ -36,7 +36,7 @@ def display_question(category_selector, request: gr.Request): choices = category_selector_map[category_selector] - return gr.Dropdown.update( + return gr.Dropdown( value=choices[0], choices=choices, ) @@ -413,6 +413,8 @@ def build_demo(): ) = load_pairwise_model_judgments(pairwise_model_judgment_file) demo = build_demo() - demo.queue(concurrency_count=10, status_update_rate=10, api_open=False).launch( + demo.queue( + default_concurrency_limit=10, status_update_rate=10, api_open=False + ).launch( server_name=args.host, server_port=args.port, share=args.share, max_threads=200 ) diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py index e130de1cb..3b3b3c48b 100644 --- a/fastchat/model/model_adapter.py +++ b/fastchat/model/model_adapter.py @@ -12,7 +12,6 @@ else: from functools import lru_cache as cache -import accelerate import psutil import torch from transformers import ( @@ -33,6 +32,7 @@ from fastchat.model.model_chatglm import generate_stream_chatglm from fastchat.model.model_codet5p import generate_stream_codet5p from fastchat.model.model_falcon import generate_stream_falcon +from fastchat.model.model_yuan2 import generate_stream_yuan2 from fastchat.model.model_exllama import generate_stream_exllama from fastchat.model.model_xfastertransformer import generate_stream_xft from fastchat.model.monkey_patch_non_inplace import ( @@ -53,7 +53,24 @@ ANTHROPIC_MODEL_LIST = ( "claude-1", "claude-2", + 
"claude-2.0", + "claude-2.1", "claude-instant-1", + "claude-instant-1.2", +) + +OPENAI_MODEL_LIST = ( + "gpt-3.5-turbo", + "gpt-3.5-turbo-0301", + "gpt-3.5-turbo-0613", + "gpt-3.5-turbo-1106", + "gpt-3.5-turbo-0125", + "gpt-4", + "gpt-4-0314", + "gpt-4-0613", + "gpt-4-turbo", + "gpt-4-1106-preview", + "gpt-4-0125-preview", ) @@ -177,6 +194,8 @@ def load_model( debug: bool = False, ): """Load a model from Hugging Face.""" + import accelerate + # get model adapter adapter = get_model_adapter(model_path) @@ -317,6 +336,20 @@ def load_model( if dtype is not None: # Overwrite dtype if it is provided in the arguments. kwargs["torch_dtype"] = dtype + if os.environ.get("FASTCHAT_USE_MODELSCOPE", "False").lower() == "true": + # download model from ModelScope hub, + # lazy import so that modelscope is not required for normal use. + try: + from modelscope.hub.snapshot_download import snapshot_download + + if not os.path.exists(model_path): + model_path = snapshot_download(model_id=model_path, revision=revision) + except ImportError as e: + warnings.warn( + "Use model from www.modelscope.cn need pip install modelscope" + ) + raise e + # Load model model, tokenizer = adapter.load_model(model_path, kwargs) @@ -354,12 +387,13 @@ def get_generate_stream_function(model: torch.nn.Module, model_path: str): from fastchat.serve.inference import generate_stream model_type = str(type(model)).lower() + is_peft = "peft" in model_type is_chatglm = "chatglm" in model_type is_falcon = "rwforcausallm" in model_type is_codet5p = "codet5p" in model_type - is_peft = "peft" in model_type is_exllama = "exllama" in model_type is_xft = "xft" in model_type + is_yuan = "yuan" in model_type if is_chatglm: return generate_stream_chatglm @@ -371,6 +405,8 @@ def get_generate_stream_function(model: torch.nn.Module, model_path: str): return generate_stream_exllama elif is_xft: return generate_stream_xft + elif is_yuan: + return generate_stream_yuan2 elif peft_share_base_weights and is_peft: # Return a curried stream function that loads the right adapter @@ -387,7 +423,28 @@ def generate_stream_peft( judge_sent_end: bool = False, ): model.set_adapter(model_path) - for x in generate_stream( + base_model_type = str(type(model.base_model.model)) + is_chatglm = "chatglm" in base_model_type + is_falcon = "rwforcausallm" in base_model_type + is_codet5p = "codet5p" in base_model_type + is_exllama = "exllama" in base_model_type + is_xft = "xft" in base_model_type + is_yuan = "yuan" in base_model_type + + generate_stream_function = generate_stream + if is_chatglm: + generate_stream_function = generate_stream_chatglm + elif is_falcon: + generate_stream_function = generate_stream_falcon + elif is_codet5p: + generate_stream_function = generate_stream_codet5p + elif is_exllama: + generate_stream_function = generate_stream_exllama + elif is_xft: + generate_stream_function = generate_stream_xft + elif is_yuan: + generate_stream_function = generate_stream_yuan2 + for x in generate_stream_function( model, tokenizer, params, @@ -903,6 +960,16 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("openchat_3.5") +class TenyxChatAdapter(BaseModelAdapter): + """The model adapter for TenyxChat (e.g. 
tenyx/TenyxChat-7B-v1)""" + + def match(self, model_path: str): + return "tenyxchat" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("tenyxchat") + + class PythiaAdapter(BaseModelAdapter): """The model adapter for any EleutherAI/pythia model""" @@ -1040,12 +1107,7 @@ class ChatGPTAdapter(BaseModelAdapter): """The model adapter for ChatGPT""" def match(self, model_path: str): - return model_path in ( - "gpt-3.5-turbo", - "gpt-3.5-turbo-1106", - "gpt-4", - "gpt-4-turbo", - ) + return model_path in OPENAI_MODEL_LIST def load_model(self, model_path: str, from_pretrained_kwargs: dict): raise NotImplementedError() @@ -1067,6 +1129,22 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("chatgpt") +class PplxAIAdapter(BaseModelAdapter): + """The model adapter for Perplexity AI""" + + def match(self, model_path: str): + return model_path in ( + "pplx-7b-online", + "pplx-70b-online", + ) + + def load_model(self, model_path: str, from_pretrained_kwargs: dict): + raise NotImplementedError() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("pplxai") + + class ClaudeAdapter(BaseModelAdapter): """The model adapter for Claude""" @@ -1106,6 +1184,19 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("bard") +class GeminiAdapter(BaseModelAdapter): + """The model adapter for Gemini""" + + def match(self, model_path: str): + return "gemini" in model_path.lower() or "bard" in model_path.lower() + + def load_model(self, model_path: str, from_pretrained_kwargs: dict): + raise NotImplementedError() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("gemini") + + class BiLLaAdapter(BaseModelAdapter): """The model adapter for Neutralzz/BiLLa-7B-SFT""" @@ -1424,7 +1515,7 @@ class MistralAdapter(BaseModelAdapter): """The model adapter for Mistral AI models""" def match(self, model_path: str): - return "mistral" in model_path.lower() + return "mistral" in model_path.lower() or "mixtral" in model_path.lower() def load_model(self, model_path: str, from_pretrained_kwargs: dict): model, tokenizer = super().load_model(model_path, from_pretrained_kwargs) @@ -1508,6 +1599,16 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("open-orca") +class DolphinAdapter(OpenOrcaAdapter): + """Model adapter for ehartford/dolphin-2.2.1-mistral-7b""" + + def match(self, model_path: str): + return "dolphin" in model_path.lower() and "mistral" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("dolphin-2.2.1-mistral-7b") + + class Hermes2Adapter(BaseModelAdapter): """Model adapter for teknium/OpenHermes-2.5-Mistral-7B and teknium/OpenHermes-2-Mistral-7B models""" @@ -1535,6 +1636,22 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("OpenHermes-2.5-Mistral-7B") +class NousHermes2MixtralAdapter(BaseModelAdapter): + """Model adapter for NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO model""" + + def match(self, model_path: str): + return any( + model_str in model_path.lower() + for model_str in [ + "nous-hermes-2-mixtral-8x7b-dpo", + "nous-hermes-2-mixtral-8x7b-sft", + ] + ) + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("Nous-Hermes-2-Mixtral-8x7B-DPO") + + class 
WizardCoderAdapter(BaseModelAdapter): """The model adapter for WizardCoder (e.g., WizardLM/WizardCoder-Python-34B-V1.0)""" @@ -1646,6 +1763,8 @@ def load_model(self, model_path: str, from_pretrained_kwargs: dict): model.config.max_sequence_length = min( model.config.max_position_embeddings, tokenizer.model_max_length ) + model.use_cls_pooling = True + model.eval() return model, tokenizer def get_default_conv_template(self, model_path: str) -> Conversation: @@ -1768,7 +1887,7 @@ def load_model(self, model_path: str, from_pretrained_kwargs: dict): return model, tokenizer def get_default_conv_template(self, model_path: str) -> Conversation: - return get_conv_template("llama2-chinese") + return get_conv_template("chinese-alpaca2") class VigogneAdapter(BaseModelAdapter): @@ -1895,6 +2014,36 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("zephyr") +class NotusAdapter(BaseModelAdapter): + """The model adapter for Notus (e.g. argilla/notus-7b-v1)""" + + def match(self, model_path: str): + return "notus" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("zephyr") + + +class CatPPTAdapter(BaseModelAdapter): + """The model adapter for CatPPT (e.g. rishiraj/CatPPT)""" + + def match(self, model_path: str): + return "catppt" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("catppt") + + +class TinyLlamaAdapter(BaseModelAdapter): + """The model adapter for TinyLlama (e.g. TinyLlama/TinyLlama-1.1B-Chat-v1.0)""" + + def match(self, model_path: str): + return "tinyllama" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("TinyLlama") + + class XwinLMAdapter(BaseModelAdapter): """The model adapter for Xwin-LM V0.1 and V0.2 series of models(e.g., Xwin-LM/Xwin-LM-70B-V0.1)""" @@ -1933,6 +2082,16 @@ def get_default_conv_template(self, model_path: str) -> Conversation: return get_conv_template("metharme") +class XdanAdapter(BaseModelAdapter): + """The model adapter for xDAN-AI (e.g. xDAN-AI/xDAN-L1-Chat-RL-v1)""" + + def match(self, model_path: str): + return "xdan" in model_path.lower() and "v1" in model_path.lower() + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("xdan-v1") + + class MicrosoftOrcaAdapter(BaseModelAdapter): """The model adapter for Microsoft/Orca-2 series of models (e.g. 
Microsoft/Orca-2-7b, Microsoft/Orca-2-13b)"""
@@ -1955,6 +2114,171 @@ def get_default_conv_template(self, model_path: str) -> Conversation:
         return get_conv_template("Yi-34b-chat")
 
 
+class DeepseekCoderAdapter(BaseModelAdapter):
+    """The model adapter for deepseek-ai's coder models"""
+
+    def match(self, model_path: str):
+        return "deepseek-coder" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("deepseek-coder")
+
+
+class DeepseekChatAdapter(BaseModelAdapter):
+    """The model adapter for deepseek-ai's chat models"""
+
+    # Note: this model requires tokenizers version >= 0.13.3 because the tokenizer class is LlamaTokenizerFast
+
+    def match(self, model_path: str):
+        return "deepseek-llm" in model_path.lower() and "chat" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("deepseek-chat")
+
+
+class Yuan2Adapter(BaseModelAdapter):
+    """The model adapter for Yuan2.0"""
+
+    def match(self, model_path: str):
+        return "yuan2" in model_path.lower()
+
+    def load_model(self, model_path: str, from_pretrained_kwargs: dict):
+        revision = from_pretrained_kwargs.get("revision", "main")
+        # from_pretrained_kwargs["torch_dtype"] = torch.bfloat16
+        tokenizer = LlamaTokenizer.from_pretrained(
+            model_path,
+            add_eos_token=False,
+            add_bos_token=False,
+            eos_token="<eod>",
+            eod_token="<eod>",
+            sep_token="<sep>",
+            revision=revision,
+        )
+        tokenizer.add_tokens(
+            [
+                "<sep>",
+                "<pad>",
+                "<mask>",
+                "<predict>",
+                "<FIM_SUFFIX>",
+                "<FIM_PREFIX>",
+                "<FIM_MIDDLE>",
+                "<commit_before>",
+                "<commit_msg>",
+                "<commit_after>",
+                "<jupyter_start>",
+                "<jupyter_text>",
+                "<jupyter_code>",
+                "<jupyter_output>",
+                "<empty_output>",
+            ],
+            special_tokens=True,
+        )
+
+        model = AutoModelForCausalLM.from_pretrained(
+            model_path,
+            # device_map='auto',
+            trust_remote_code=True,
+            **from_pretrained_kwargs,
+        )
+        return model, tokenizer
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("yuan2")
+
+
+class MetaMathAdapter(BaseModelAdapter):
+    """The model adapter for MetaMath models"""
+
+    def match(self, model_path: str):
+        return "metamath" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("metamath")
+
+
+class BagelAdapter(BaseModelAdapter):
+    """Model adapter for jondurbin/bagel-* models"""
+
+    def match(self, model_path: str):
+        return "bagel" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("airoboros_v3")
+
+
+class SolarAdapter(BaseModelAdapter):
+    """The model adapter for upstage/SOLAR-10.7B-Instruct-v1.0"""
+
+    def match(self, model_path: str):
+        return "solar-" in model_path.lower() and "instruct" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("solar")
+
+
+class SteerLMAdapter(BaseModelAdapter):
+    """The model adapter for nvidia/Llama2-70B-SteerLM-Chat"""
+
+    def match(self, model_path: str):
+        return "steerlm-chat" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        return get_conv_template("steerlm")
+
+
+class LlavaAdapter(BaseModelAdapter):
+    """The model adapter for liuhaotian/llava-v1.5 series of models"""
+
+    def load_model(self, model_path: str, from_pretrained_kwargs: dict):
+        # TODO(chris): Implement huggingface-compatible load_model
+        pass
+
+    def match(self, model_path: str):
+        return "llava" in model_path.lower()
+
+    def get_default_conv_template(self, model_path: str) -> Conversation:
+        model_path = model_path.lower()
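+        # Only llava-v1.6-34b ships a ChatML-style prompt; the smaller LLaVA
+        # variants keep the vicuna_v1.1 template.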
+ if "34b" in model_path: + return get_conv_template("llava-chatml") + + return get_conv_template("vicuna_v1.1") + + +class YuanAdapter(BaseModelAdapter): + """The model adapter for Yuan""" + + def match(self, model_path: str): + return "yuan" in model_path.lower() + + def load_model(self, model_path: str, from_pretrained_kwargs: dict): + model, tokenizer = super().load_model(model_path, from_pretrained_kwargs) + tokenizer.add_tokens( + [ + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + "", + ], + special_tokens=True, + ) + return model, tokenizer + + def get_default_conv_template(self, model_path: str) -> Conversation: + return get_conv_template("yuan") + + # Note: the registration order matters. # The one registered earlier has a higher matching priority. register_model_adapter(PeftModelAdapter) @@ -1971,6 +2295,7 @@ def get_default_conv_template(self, model_path: str) -> Conversation: register_model_adapter(OasstPythiaAdapter) register_model_adapter(OasstLLaMAAdapter) register_model_adapter(OpenChat35Adapter) +register_model_adapter(TenyxChatAdapter) register_model_adapter(StableLMAdapter) register_model_adapter(BaizeAdapter) register_model_adapter(RwkvAdapter) @@ -1978,6 +2303,7 @@ def get_default_conv_template(self, model_path: str) -> Conversation: register_model_adapter(PhoenixAdapter) register_model_adapter(BardAdapter) register_model_adapter(PaLM2Adapter) +register_model_adapter(GeminiAdapter) register_model_adapter(ChatGPTAdapter) register_model_adapter(AzureOpenAIAdapter) register_model_adapter(ClaudeAdapter) @@ -1998,14 +2324,16 @@ def get_default_conv_template(self, model_path: str) -> Conversation: register_model_adapter(TigerBotAdapter) register_model_adapter(BaichuanAdapter) register_model_adapter(XGenAdapter) -register_model_adapter(NousHermesAdapter) register_model_adapter(PythiaAdapter) register_model_adapter(InternLMChatAdapter) register_model_adapter(StarChatAdapter) register_model_adapter(Llama2Adapter) register_model_adapter(CuteGPTAdapter) register_model_adapter(OpenOrcaAdapter) +register_model_adapter(DolphinAdapter) register_model_adapter(Hermes2Adapter) +register_model_adapter(NousHermes2MixtralAdapter) +register_model_adapter(NousHermesAdapter) register_model_adapter(MistralAdapter) register_model_adapter(WizardCoderAdapter) register_model_adapter(QwenChatAdapter) @@ -2021,11 +2349,25 @@ def get_default_conv_template(self, model_path: str) -> Conversation: register_model_adapter(CodeLlamaAdapter) register_model_adapter(Llama2ChangAdapter) register_model_adapter(ZephyrAdapter) +register_model_adapter(NotusAdapter) +register_model_adapter(CatPPTAdapter) +register_model_adapter(TinyLlamaAdapter) register_model_adapter(XwinLMAdapter) register_model_adapter(LemurAdapter) register_model_adapter(PygmalionAdapter) register_model_adapter(MicrosoftOrcaAdapter) +register_model_adapter(XdanAdapter) register_model_adapter(YiAdapter) +register_model_adapter(PplxAIAdapter) +register_model_adapter(DeepseekCoderAdapter) +register_model_adapter(DeepseekChatAdapter) +register_model_adapter(Yuan2Adapter) +register_model_adapter(MetaMathAdapter) +register_model_adapter(BagelAdapter) +register_model_adapter(SolarAdapter) +register_model_adapter(SteerLMAdapter) +register_model_adapter(LlavaAdapter) +register_model_adapter(YuanAdapter) # After all adapters, try the default base adapter. 
register_model_adapter(BaseModelAdapter) diff --git a/fastchat/model/model_chatglm.py b/fastchat/model/model_chatglm.py index 5d4db62bc..2cbac8bc5 100644 --- a/fastchat/model/model_chatglm.py +++ b/fastchat/model/model_chatglm.py @@ -37,6 +37,31 @@ def process_response(response): return response +def recover_message_list(prompt): + role_token_pattern = "|".join( + [re.escape(r) for r in ["<|system|>", "<|user|>", "<|assistant|>"]] + ) + role = None + last_end_idx = -1 + message_list = [] + for match in re.finditer(role_token_pattern, prompt): + if role: + messge = {} + if role == "<|system|>": + messge["role"] = "system" + elif role == "<|user|>": + messge["role"] = "user" + else: + messge["role"] = "assistant" + messge["content"] = prompt[last_end_idx + 1 : match.start()] + message_list.append(messge) + + role = prompt[match.start() : match.end()] + last_end_idx = match.end() + + return message_list + + @torch.inference_mode() def generate_stream_chatglm( model, @@ -54,7 +79,17 @@ def generate_stream_chatglm( max_new_tokens = int(params.get("max_new_tokens", 256)) echo = params.get("echo", True) - inputs = tokenizer([prompt], return_tensors="pt").to(model.device) + model_type = str(type(model)).lower() + if "peft" in model_type: + model_type = str(type(model.base_model.model)).lower() + + if "chatglm3" in model_type: + message_list = recover_message_list(prompt) + inputs = tokenizer.build_chat_input( + query=message_list[-1]["content"], history=message_list[:-1], role="user" + ).to(model.device) + else: + inputs = tokenizer([prompt], return_tensors="pt").to(model.device) input_echo_len = len(inputs["input_ids"][0]) gen_kwargs = { diff --git a/fastchat/model/model_registry.py b/fastchat/model/model_registry.py index 40aee1b4c..433449cdb 100644 --- a/fastchat/model/model_registry.py +++ b/fastchat/model/model_registry.py @@ -1,12 +1,12 @@ """Additional information of the models.""" -from collections import namedtuple +from collections import namedtuple, OrderedDict from typing import List ModelInfo = namedtuple("ModelInfo", ["simple_name", "link", "description"]) -model_info = {} +model_info = OrderedDict() def register_model_info( @@ -29,159 +29,356 @@ def get_model_info(name: str) -> ModelInfo: register_model_info( - ["gpt-3.5-turbo"], - "GPT-3.5", - "https://openai.com/blog/chatgpt", - "GPT-3.5 by OpenAI", + [ + "IEITYuan/Yuan2-2B-Janus-hf", + "IEITYuan/Yuan2-2B-hf", + "IEITYuan/Yuan2-51B-hf", + "IEITYuan/Yuan2-102B-hf", + ], + "IEIT-Yuan2", + "https://github.com/IEIT-Yuan/Yuan-2.0", + "Yuan2.0 is a new generation Fundamental Large Language Model developed by IEIT System.", ) + register_model_info( - ["gpt-3.5-turbo-1106"], - "GPT-3.5-Turbo-1106", - "https://platform.openai.com/docs/models/gpt-3-5", - "GPT-3.5-Turbo-1106 by OpenAI", + [ + "mixtral-8x7b-instruct-v0.1", + "mistral-medium", + "mistral-7b-instruct-v0.2", + "mistral-7b-instruct", + ], + "Mixtral of experts", + "https://mistral.ai/news/mixtral-of-experts/", + "A Mixture-of-Experts model by Mistral AI", +) + +register_model_info( + [ + "qwen1.5-72b-chat", + "qwen1.5-14b-chat", + "qwen1.5-7b-chat", + "qwen1.5-4b-chat", + "qwen1.5-1.8b-chat", + "qwen1.5-0.5b-chat", + "qwen-14b-chat", + ], + "Qwen 1.5", + "https://qwenlm.github.io/blog/qwen1.5/", + "A large language model by Alibaba Cloud", +) + +register_model_info( + ["qwen-14b-chat"], + "Qwen", + "https://huggingface.co/Qwen", + "A large language model by Alibaba Cloud", +) + +register_model_info( + ["bard-feb-2024", "bard-jan-24-gemini-pro"], + "Bard", + 
"https://bard.google.com/", + "Bard by Google", ) + register_model_info( - ["gpt-4"], "GPT-4", "https://openai.com/research/gpt-4", "ChatGPT-4 by OpenAI" + ["gemini-pro", "gemini-pro-dev-api"], + "Gemini", + "https://blog.google/technology/ai/google-gemini-pro-imagen-duet-ai-update/", + "Gemini by Google", ) + register_model_info( - ["gpt-4-turbo"], + ["deepseek-llm-67b-chat"], + "DeepSeek LLM", + "https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat", + "An advanced language model by DeepSeek", +) + +register_model_info( + ["stripedhyena-nous-7b"], + "StripedHyena-Nous", + "https://huggingface.co/togethercomputer/StripedHyena-Nous-7B", + "A chat model developed by Together Research and Nous Research.", +) + +register_model_info( + ["solar-10.7b-instruct-v1.0"], + "SOLAR-10.7B-Instruct", + "https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0", + "A model trained using depth up-scaling by Upstage AI", +) + +register_model_info( + ["gpt-4-turbo", "gpt-4-1106-preview", "gpt-4-0125-preview"], "GPT-4-Turbo", "https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo", "GPT-4-Turbo by OpenAI", ) + register_model_info( - ["claude-2"], + [ + "gpt-3.5-turbo", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106", + "gpt-3.5-turbo-0314", + "gpt-3.5-turbo-0613", + ], + "GPT-3.5", + "https://platform.openai.com/docs/models/gpt-3-5", + "GPT-3.5-Turbo by OpenAI", +) + +register_model_info( + ["gpt-4", "gpt-4-0314", "gpt-4-0613"], + "GPT-4", + "https://openai.com/research/gpt-4", + "GPT-4 by OpenAI", +) + +register_model_info( + ["claude-2.1", "claude-2.0"], "Claude", "https://www.anthropic.com/index/claude-2", "Claude 2 by Anthropic", ) + register_model_info( ["claude-1"], "Claude", "https://www.anthropic.com/index/introducing-claude", - "Claude by Anthropic", + "Claude 1 by Anthropic", ) + register_model_info( - ["claude-instant-1"], + ["claude-instant-1", "claude-instant-1.2"], "Claude Instant", "https://www.anthropic.com/index/introducing-claude", "Claude Instant by Anthropic", ) + register_model_info( - ["palm-2"], - "PaLM 2 Chat", - "https://cloud.google.com/vertex-ai/docs/release-notes#May_10_2023", - "PaLM 2 for Chat (chat-bison@001) by Google", + ["nous-hermes-2-mixtral-8x7b-dpo"], + "Nous-Hermes-2-Mixtral-8x7B-DPO", + "https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", + "Nous Hermes finetuned from Mixtral 8x7B", +) + +register_model_info( + ["openchat-3.5-0106", "openchat-3.5"], + "OpenChat 3.5", + "https://github.com/imoneoi/openchat", + "An open model fine-tuned on Mistral-7B using C-RLFT", +) + +register_model_info( + ["deepseek-llm-67b-chat"], + "DeepSeek LLM", + "https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat", + "An advanced language model by DeepSeek", +) + +register_model_info( + ["stripedhyena-nous-7b"], + "StripedHyena-Nous", + "https://huggingface.co/togethercomputer/StripedHyena-Nous-7B", + "A chat model developed by Together Research and Nous Research.", +) + +register_model_info( + ["llama2-70b-steerlm-chat"], + "Llama2-70B-SteerLM-Chat", + "https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat", + "A Llama fine-tuned with SteerLM method by NVIDIA", +) + +register_model_info( + ["pplx-70b-online", "pplx-7b-online"], + "pplx-online-llms", + "https://blog.perplexity.ai/blog/introducing-pplx-online-llms", + "Online LLM API by Perplexity AI", +) + +register_model_info( + ["openhermes-2.5-mistral-7b"], + "OpenHermes-2.5-Mistral-7B", + "https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B", + "A mistral-based model fine-tuned on 1M GPT-4 outputs", 
+) + +register_model_info( + ["starling-lm-7b-alpha"], + "Starling-LM-7B-alpha", + "https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha", + "An open model trained using RLAIF by Berkeley", +) + +register_model_info( + ["tulu-2-dpo-70b"], + "Tulu 2", + "https://huggingface.co/allenai/tulu-2-dpo-70b", + "An instruction and RLHF model by UW/AllenAI", +) + +register_model_info( + ["yi-34b-chat", "yi-6b-chat"], + "Yi-Chat", + "https://huggingface.co/01-ai/Yi-34B-Chat", + "A large language model by 01 AI", ) + +register_model_info( + ["llama-2-70b-chat", "llama-2-34b-chat", "llama-2-13b-chat", "llama-2-7b-chat"], + "Llama 2", + "https://ai.meta.com/llama/", + "Open foundation and fine-tuned chat models by Meta", +) + register_model_info( [ "vicuna-33b", "vicuna-33b-v1.3", "vicuna-13b", - "vicuna-13b-v1.3", + "vicuna-13b-v1.5", "vicuna-7b", - "vicuna-7b-v1.3", + "vicuna-7b-v1.5", ], "Vicuna", "https://lmsys.org/blog/2023-03-30-vicuna/", - "a chat assistant fine-tuned on user-shared conversations by LMSYS", + "A chat assistant fine-tuned on user-shared conversations by LMSYS", ) + register_model_info( - ["llama-2-70b-chat", "llama-2-34b-chat", "llama-2-13b-chat", "llama-2-7b-chat"], - "Llama 2", - "https://ai.meta.com/llama/", - "open foundation and fine-tuned chat models by Meta", + ["chatglm3-6b", "chatglm2-6b", "chatglm-6b"], + "ChatGLM", + "https://chatglm.cn/blog", + "An open bilingual dialogue language model by Tsinghua University", ) + register_model_info( - ["mistral-7b-instruct"], - "Mistral", - "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1", - "a large language model by Mistral AI team", + ["tenyxchat-7b-v1"], + "TenyxChat-7B", + "https://huggingface.co/tenyx/TenyxChat-7B-v1", + "An open model DPO trained on top of OpenChat-3.5 using Tenyx fine-tuning", ) + register_model_info( ["zephyr-7b-beta", "zephyr-7b-alpha"], "Zephyr", "https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha", - "a chatbot fine-tuned from Mistral by Hugging Face", + "A chatbot fine-tuned from Mistral by Hugging Face", ) + register_model_info( - ["qwen-14b-chat"], - "Qwen", - "https://huggingface.co/Qwen/Qwen-14B-Chat", - "a large language model by Alibaba Cloud", + ["notus-7b-v1"], + "Notus", + "https://huggingface.co/argilla/notus-7b-v1", + "A chatbot fine-tuned from Zephyr SFT by Argilla", ) + +register_model_info( + ["catppt"], + "CatPPT", + "https://huggingface.co/rishiraj/CatPPT", + "A chatbot fine-tuned from a SLERP merged model by Rishiraj Acharya", +) + register_model_info( - ["codellama-34b-instruct", "codellama-13b-instruct", "codellama-7b-instruct"], + ["TinyLlama"], + "TinyLlama", + "https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0", + "The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.", +) + +register_model_info( + [ + "codellama-70b-instruct", + "codellama-34b-instruct", + "codellama-13b-instruct", + "codellama-7b-instruct", + ], "Code Llama", "https://ai.meta.com/blog/code-llama-large-language-model-coding/", - "open foundation models for code by Meta", + "Open foundation models for code by Meta", ) + register_model_info( ["wizardlm-70b", "wizardlm-30b", "wizardlm-13b"], "WizardLM", "https://github.com/nlpxucan/WizardLM", - "an instruction-following LLM using evol-instruct by Microsoft", + "An instruction-following LLM using evol-instruct by Microsoft", ) + register_model_info( ["wizardcoder-15b-v1.0"], "WizardLM", "https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder", "Empowering Code Large Language Models with 
Evol-Instruct", ) + register_model_info( ["mpt-7b-chat", "mpt-30b-chat"], "MPT-Chat", "https://www.mosaicml.com/blog/mpt-30b", - "a chatbot fine-tuned from MPT by MosaicML", + "A chatbot fine-tuned from MPT by MosaicML", ) + register_model_info( ["guanaco-33b", "guanaco-65b"], "Guanaco", "https://github.com/artidoro/qlora", - "a model fine-tuned with QLoRA by UW", + "A model fine-tuned with QLoRA by UW", ) + register_model_info( ["gpt4all-13b-snoozy"], "GPT4All-Snoozy", "https://github.com/nomic-ai/gpt4all", - "a finetuned LLaMA model on assistant style data by Nomic AI", + "A finetuned LLaMA model on assistant style data by Nomic AI", ) + register_model_info( ["koala-13b"], "Koala", "https://bair.berkeley.edu/blog/2023/04/03/koala", - "a dialogue model for academic research by BAIR", + "A dialogue model for academic research by BAIR", ) + register_model_info( ["RWKV-4-Raven-14B"], "RWKV-4-Raven", "https://huggingface.co/BlinkDL/rwkv-4-raven", - "an RNN with transformer-level LLM performance", -) -register_model_info( - ["chatglm-6b", "chatglm2-6b"], - "ChatGLM", - "https://chatglm.cn/blog", - "an open bilingual dialogue language model by Tsinghua University", + "An RNN with transformer-level LLM performance", ) + register_model_info( ["alpaca-13b"], "Alpaca", "https://crfm.stanford.edu/2023/03/13/alpaca.html", - "a model fine-tuned from LLaMA on instruction-following demonstrations by Stanford", + "A model fine-tuned from LLaMA on instruction-following demonstrations by Stanford", ) + register_model_info( ["oasst-pythia-12b"], "OpenAssistant (oasst)", "https://open-assistant.io", - "an Open Assistant for everyone by LAION", + "An Open Assistant for everyone by LAION", ) + register_model_info( ["oasst-sft-7-llama-30b"], "OpenAssistant (oasst)", "https://open-assistant.io", - "an Open Assistant for everyone by LAION", + "An Open Assistant for everyone by LAION", ) + +register_model_info( + ["palm-2"], + "PaLM 2 Chat", + "https://cloud.google.com/vertex-ai/docs/release-notes#May_10_2023", + "PaLM 2 for Chat (chat-bison@001) by Google", +) + register_model_info( ["openchat-3.5"], "OpenChat 3.5", @@ -198,68 +395,79 @@ def get_model_info(name: str) -> ModelInfo: ["llama-7b", "llama-13b"], "LLaMA", "https://arxiv.org/abs/2302.13971", - "open and efficient foundation language models by Meta", + "Open and efficient foundation language models by Meta", ) + register_model_info( ["open-llama-7b-v2-open-instruct", "open-llama-7b-open-instruct"], "Open LLaMa (Open Instruct)", "https://medium.com/vmware-data-ml-blog/starter-llm-for-the-enterprise-instruction-tuning-openllama-7b-d05fc3bbaccc", "Open LLaMa fine-tuned on instruction-following data by VMware", ) + register_model_info( ["dolly-v2-12b"], "Dolly", "https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm", - "an instruction-tuned open large language model by Databricks", + "An instruction-tuned open large language model by Databricks", ) + register_model_info( ["stablelm-tuned-alpha-7b"], "StableLM", "https://github.com/stability-AI/stableLM", "Stability AI language models", ) + register_model_info( ["codet5p-6b"], "CodeT5p-6b", "https://huggingface.co/Salesforce/codet5p-6b", "Code completion model released by Salesforce", ) + register_model_info( ["fastchat-t5-3b", "fastchat-t5-3b-v1.0"], "FastChat-T5", "https://huggingface.co/lmsys/fastchat-t5-3b-v1.0", - "a chat assistant fine-tuned from FLAN-T5 by LMSYS", + "A chat assistant fine-tuned from FLAN-T5 by LMSYS", ) + register_model_info( 
["phoenix-inst-chat-7b"], "Phoenix-7B", "https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b", - "a multilingual chat assistant fine-tuned from Bloomz to democratize ChatGPT across languages by CUHK(SZ)", + "A multilingual chat assistant fine-tuned from Bloomz to democratize ChatGPT across languages by CUHK(SZ)", ) + register_model_info( ["realm-7b-v1"], "ReaLM", "https://github.com/FreedomIntelligence/ReaLM", "A chatbot fine-tuned from LLaMA2 with data generated via iterative calls to UserGPT and ChatGPT by CUHK(SZ) and SRIBD.", ) + register_model_info( ["billa-7b-sft"], "BiLLa-7B-SFT", "https://huggingface.co/Neutralzz/BiLLa-7B-SFT", - "an instruction-tuned bilingual LLaMA with enhanced reasoning ability by an independent researcher", + "An instruction-tuned bilingual LLaMA with enhanced reasoning ability by an independent researcher", ) + register_model_info( ["h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2"], "h2oGPT-GM-7b", "https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2", - "an instruction-tuned OpenLLaMA with enhanced conversational ability by H2O.ai", + "An instruction-tuned OpenLLaMA with enhanced conversational ability by H2O.ai", ) + register_model_info( ["baize-v2-7b", "baize-v2-13b"], "Baize v2", "https://github.com/project-baize/baize-chatbot#v2", "A chatbot fine-tuned from LLaMA with ChatGPT self-chat data and Self-Disillation with Feedback (SDF) by UCSD and SYSU.", ) + register_model_info( [ "airoboros-l2-7b-2.1", @@ -269,8 +477,20 @@ def get_model_info(name: str) -> ModelInfo: ], "airoboros", "https://huggingface.co/jondurbin/airoboros-l2-70b-2.1", - "an instruction-tuned LlaMa model tuned with 100% synthetic instruction-response pairs from GPT4", + "An instruction-tuned LlaMa model tuned with 100% synthetic instruction-response pairs from GPT4", ) + +register_model_info( + [ + "spicyboros-7b-2.2", + "spicyboros-13b-2.2", + "spicyboros-70b-2.2", + ], + "spicyboros", + "https://huggingface.co/jondurbin/spicyboros-70b-2.2", + "De-aligned versions of the airoboros models", +) + register_model_info( [ "spicyboros-7b-2.2", @@ -287,18 +507,21 @@ def get_model_info(name: str) -> ModelInfo: "https://huggingface.co/OptimalScale/robin-7b-v2-delta", "A chatbot fine-tuned from LLaMA-7b, achieving competitive performance on chitchat, commonsense reasoning and instruction-following tasks, by OptimalScale, HKUST.", ) + register_model_info( ["manticore-13b-chat"], "Manticore 13B Chat", "https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg", "A chatbot fine-tuned from LlaMa across several CoT and chat datasets.", ) + register_model_info( ["redpajama-incite-7b-chat"], "RedPajama-INCITE-7B-Chat", "https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat", "A chatbot fine-tuned from RedPajama-INCITE-7B-Base by Together", ) + register_model_info( [ "falcon-7b", @@ -312,30 +535,42 @@ def get_model_info(name: str) -> ModelInfo: "https://huggingface.co/tiiuae/falcon-180B", "TII's flagship series of large language models", ) + register_model_info( ["tigerbot-7b-sft"], "Tigerbot", "https://huggingface.co/TigerResearch/tigerbot-7b-sft", - "TigerBot is a large-scale language model (LLM) with multiple languages and tasks.", + "A large-scale language model (LLM) with multiple languages and tasks.", ) + register_model_info( ["internlm-chat-7b", "internlm-chat-7b-8k"], "InternLM", "https://huggingface.co/internlm/internlm-chat-7b", - "InternLM is a multi-language large-scale language model (LLM), developed by SHLAB.", + 
"A multi-language large-scale language model (LLM), developed by SHLAB.", ) + register_model_info( ["Qwen-7B-Chat"], "Qwen", "https://huggingface.co/Qwen/Qwen-7B-Chat", - "Qwen is a multi-language large-scale language model (LLM), developed by Damo Academy.", + "A multi-language large-scale language model (LLM), developed by Damo Academy.", ) + register_model_info( ["Llama2-Chinese-13b-Chat", "LLama2-Chinese-13B"], "Llama2-Chinese", "https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat", - "Llama2-Chinese is a multi-language large-scale language model (LLM), developed by FlagAlpha.", + "A multi-language large-scale language model (LLM), developed by FlagAlpha.", ) + +register_model_info( + ["Chinese-Alpaca-2-7B", "Chinese-Alpaca-2-13B"], + "Chinese-Alpaca", + "https://huggingface.co/hfl/chinese-alpaca-2-13b", + "New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.", +) + register_model_info( ["Chinese-Alpaca-2-7B", "Chinese-Alpaca-2-13B"], "Chinese-Alpaca", @@ -346,13 +581,108 @@ def get_model_info(name: str) -> ModelInfo: ["Vigogne-2-7B-Instruct", "Vigogne-2-13B-Instruct"], "Vigogne-Instruct", "https://huggingface.co/bofenghuang/vigogne-2-7b-instruct", - "Vigogne-Instruct is a French large language model (LLM) optimized for instruction-following, developed by Bofeng Huang", + "A French large language model (LLM) optimized for instruction-following, developed by Bofeng Huang", ) + register_model_info( ["Vigogne-2-7B-Chat", "Vigogne-2-13B-Chat"], "Vigogne-Chat", "https://huggingface.co/bofenghuang/vigogne-2-7b-chat", - "Vigogne-Chat is a French large language model (LLM) optimized for instruction-following and multi-turn dialogues, developed by Bofeng Huang", + "A French large language model (LLM) optimized for instruction-following and multi-turn dialogues, developed by Bofeng Huang", +) + +register_model_info( + ["stable-vicuna-13B-HF"], + "stable-vicuna", + "https://huggingface.co/TheBloke/stable-vicuna-13B-HF", + "A Vicuna model fine-tuned using RLHF via PPO on various conversational and instructional datasets.", +) + +register_model_info( + ["deluxe-chat-v1", "deluxe-chat-v1.1", "deluxe-chat-v1.2"], + "DeluxeChat", + "", + "Deluxe Chat", +) + +register_model_info( + [ + "Xwin-LM-7B-V0.1", + "Xwin-LM-13B-V0.1", + "Xwin-LM-70B-V0.1", + "Xwin-LM-7B-V0.2", + "Xwin-LM-13B-V0.2", + ], + "Xwin-LM", + "https://github.com/Xwin-LM/Xwin-LM", + "Chat models developed by Xwin-LM team", +) + +register_model_info( + ["lemur-70b-chat"], + "Lemur-Chat", + "https://huggingface.co/OpenLemur/lemur-70b-chat-v1", + "An openly accessible language model optimized for both natural language and coding capabilities ", +) + +register_model_info( + ["Mistral-7B-OpenOrca"], + "Open-Orca", + "https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca", + "A fine-tune of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) using [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca)", +) + +register_model_info( + ["dolphin-2.2.1-mistral-7b"], + "dolphin-mistral", + "https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b", + "An uncensored fine-tuned Mistral 7B", +) + +register_model_info( + [ + "AquilaChat-7B", + "AquilaChat2-7B", + "AquilaChat2-34B", + ], + "Aquila-Chat", + "https://huggingface.co/BAAI/AquilaChat2-34B", + "Chat models developed by BAAI team", +) + +register_model_info( + ["xDAN-L1-Chat-RL-v1"], + "xDAN-L1-Chat", + "https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1", + "A large language chat model created by xDAN-AI.", +) + 
+register_model_info(
+    ["MetaMath-70B-V1.0", "MetaMath-7B-V1.0"],
+    "MetaMath",
+    "https://huggingface.co/meta-math",
+    "A finetune of Llama2 on [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) that specializes in mathematical reasoning.",
+)
+
+register_model_info(
+    ["Yuan2-2B-hf", "Yuan2-51B-hf", "Yuan2-102B-hf"],
+    "IEIYuan",
+    "https://huggingface.co/IEITYuan",
+    "A base model developed by IEI.",
+)
+
+register_model_info(
+    [
+        "llava-v1.6-34b",
+        "llava-v1.6-vicuna-13b",
+        "llava-v1.6-vicuna-7b",
+        "llava-v1.6-mistral-7b",
+        "llava-v1.5-13b",
+        "llava-v1.5-7b",
+    ],
+    "LLaVA",
+    "https://github.com/haotian-liu/LLaVA",
+    "An open large language and vision assistant",
+)
 
 register_model_info(
     ["stable-vicuna-13B-HF"],
diff --git a/fastchat/model/model_yuan2.py b/fastchat/model/model_yuan2.py
new file mode 100644
index 000000000..25b3e13f8
--- /dev/null
+++ b/fastchat/model/model_yuan2.py
@@ -0,0 +1,139 @@
+import gc
+from threading import Thread
+from typing import Iterable
+
+import torch
+import transformers
+from transformers import TextIteratorStreamer, GenerationConfig
+
+from fastchat.utils import is_partial_stop
+
+
+@torch.inference_mode()
+def generate_stream_yuan2(
+    model,
+    tokenizer,
+    params,
+    device,
+    context_len=2048,
+    stream_interval=2,
+    judge_sent_end=False,
+):
+    prompt = params["prompt"]
+    len_prompt = len(prompt)
+    temperature = float(params.get("temperature", 1))
+    repetition_penalty = float(params.get("repetition_penalty", 1.0))
+    top_p = float(params.get("top_p", 0))
+    top_k = int(params.get("top_k", 1))  # -1 means disable
+    max_new_tokens = int(params.get("max_new_tokens", 512))
+    stop_str = params.get("stop", "<eod>")
+    echo = bool(params.get("echo", True))
+    stop_token_ids = params.get("stop_token_ids", None) or []
+    stop_token_ids.append(tokenizer("<eod>")["input_ids"][0])
+
+    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+    input_ids = inputs["input_ids"]
+    attention_mask = inputs["attention_mask"]
+
+    max_src_len = context_len - max_new_tokens - 8
+
+    # truncate from the left along the token dimension
+    input_ids = input_ids[:, -max_src_len:]
+    attention_mask = attention_mask[:, -max_src_len:]
+    input_echo_len = input_ids.shape[-1]
+
+    decode_config = dict(skip_special_tokens=True, clean_up_tokenization_spaces=True)
+    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, **decode_config)
+
+    generation_config = GenerationConfig(
+        max_new_tokens=max_new_tokens,
+        do_sample=temperature >= 1.2,
+        temperature=temperature,
+        repetition_penalty=repetition_penalty,
+        no_repeat_ngram_size=10,
+        top_p=top_p,
+        top_k=top_k,
+    )
+
+    generation_kwargs = dict(
+        inputs=input_ids,
+        attention_mask=attention_mask,
+        streamer=streamer,
+        generation_config=generation_config,
+    )
+
+    thread = Thread(target=model.generate, kwargs=generation_kwargs)
+    thread.start()
+
+    if echo:
+        # means keep the prompt
+        output = prompt
+    else:
+        output = ""
+
+    for i, new_text in enumerate(streamer):
+        output += new_text
+        if i % stream_interval == 0:
+            if echo:
+                rfind_start = len_prompt
+            else:
+                rfind_start = 0
+
+            partially_stopped = False
+            if stop_str:
+                if isinstance(stop_str, str):
+                    pos = output.rfind(stop_str, rfind_start)
+                    if pos != -1:
+                        output = output[:pos]
+                    else:
+                        partially_stopped = is_partial_stop(output, stop_str)
+                elif isinstance(stop_str, Iterable):
+                    for each_stop in stop_str:
+                        pos = output.rfind(each_stop, rfind_start)
+                        if pos != -1:
+                            output = output[:pos]
+                            break
+                        else:
+                            partially_stopped = is_partial_stop(output, each_stop)
+                            if 
partially_stopped: + break + else: + raise ValueError("Invalid stop field type.") + + # prevent yielding partial stop sequence + if not partially_stopped: + yield { + "text": output, + "usage": { + "prompt_tokens": input_echo_len, + "completion_tokens": i, + "total_tokens": input_echo_len + i, + }, + "finish_reason": None, + } + output = output.strip() + + # finish stream event, which contains finish reason + if i == max_new_tokens - 1: + finish_reason = "length" + elif partially_stopped: + finish_reason = None + else: + finish_reason = "stop" + + yield { + "text": output, + "usage": { + "prompt_tokens": input_echo_len, + "completion_tokens": i, + "total_tokens": input_echo_len + i, + }, + "finish_reason": finish_reason, + } + + # clean + gc.collect() + torch.cuda.empty_cache() + if device == "xpu": + torch.xpu.empty_cache() + if device == "npu": + torch.npu.empty_cache() diff --git a/fastchat/protocol/openai_api_protocol.py b/fastchat/protocol/openai_api_protocol.py index b2a4d25d4..99e93a40a 100644 --- a/fastchat/protocol/openai_api_protocol.py +++ b/fastchat/protocol/openai_api_protocol.py @@ -57,7 +57,11 @@ class LogProbs(BaseModel): class ChatCompletionRequest(BaseModel): model: str - messages: Union[str, List[Dict[str, str]]] + messages: Union[ + str, + List[Dict[str, str]], + List[Dict[str, Union[str, List[Dict[str, Union[str, Dict[str, str]]]]]]], + ] temperature: Optional[float] = 0.7 top_p: Optional[float] = 1.0 top_k: Optional[int] = -1 diff --git a/fastchat/serve/api_provider.py b/fastchat/serve/api_provider.py index 3dbb8a690..1e319f0a2 100644 --- a/fastchat/serve/api_provider.py +++ b/fastchat/serve/api_provider.py @@ -1,16 +1,93 @@ """Call API providers.""" +import json import os import random import time +import requests + from fastchat.utils import build_logger -from fastchat.constants import WORKER_API_TIMEOUT logger = build_logger("gradio_web_server", "gradio_web_server.log") +def get_api_provider_stream_iter( + conv, + model_name, + model_api_dict, + temperature, + top_p, + max_new_tokens, +): + if model_api_dict["api_type"] == "openai": + prompt = conv.to_openai_api_messages() + stream_iter = openai_api_stream_iter( + model_api_dict["model_name"], + prompt, + temperature, + top_p, + max_new_tokens, + api_base=model_api_dict["api_base"], + api_key=model_api_dict["api_key"], + ) + elif model_api_dict["api_type"] == "anthropic": + prompt = conv.get_prompt() + stream_iter = anthropic_api_stream_iter( + model_name, prompt, temperature, top_p, max_new_tokens + ) + elif model_api_dict["api_type"] == "gemini": + stream_iter = gemini_api_stream_iter( + model_api_dict["model_name"], + conv, + temperature, + top_p, + max_new_tokens, + api_key=model_api_dict["api_key"], + ) + elif model_api_dict["api_type"] == "bard": + prompt = conv.to_openai_api_messages() + stream_iter = bard_api_stream_iter( + model_api_dict["model_name"], + prompt, + temperature, + top_p, + api_key=model_api_dict["api_key"], + ) + elif model_api_dict["api_type"] == "mistral": + prompt = conv.to_openai_api_messages() + stream_iter = mistral_api_stream_iter( + model_name, prompt, temperature, top_p, max_new_tokens + ) + elif model_api_dict["api_type"] == "nvidia": + prompt = conv.to_openai_api_messages() + stream_iter = nvidia_api_stream_iter( + model_name, + prompt, + temperature, + top_p, + max_new_tokens, + model_api_dict["api_base"], + ) + elif model_api_dict["api_type"] == "ai2": + prompt = conv.to_openai_api_messages() + stream_iter = ai2_api_stream_iter( + model_name, + model_api_dict["model_name"], + 
prompt, + temperature, + top_p, + max_new_tokens, + api_base=model_api_dict["api_base"], + api_key=model_api_dict["api_key"], + ) + else: + raise NotImplementedError() + + return stream_iter + + def openai_api_stream_iter( model_name, messages, @@ -22,8 +99,19 @@ def openai_api_stream_iter( ): import openai - openai.api_base = api_base or "https://api.openai.com/v1" - openai.api_key = api_key or os.environ["OPENAI_API_KEY"] + api_key = api_key or os.environ["OPENAI_API_KEY"] + + if "azure" in model_name: + client = openai.AzureOpenAI( + api_version="2023-07-01-preview", + azure_endpoint=api_base or "https://api.openai.com/v1", + api_key=api_key, + ) + else: + client = openai.OpenAI( + base_url=api_base or "https://api.openai.com/v1", api_key=api_key + ) + if model_name == "gpt-4-turbo": model_name = "gpt-4-1106-preview" @@ -37,7 +125,7 @@ def openai_api_stream_iter( } logger.info(f"==== request ====\n{gen_params}") - res = openai.ChatCompletion.create( + res = client.chat.completions.create( model=model_name, messages=messages, temperature=temperature, @@ -46,12 +134,13 @@ def openai_api_stream_iter( ) text = "" for chunk in res: - text += chunk["choices"][0]["delta"].get("content", "") - data = { - "text": text, - "error_code": 0, - } - yield data + if len(chunk.choices) > 0: + text += chunk.choices[0].delta.content or "" + data = { + "text": text, + "error_code": 0, + } + yield data def anthropic_api_stream_iter(model_name, prompt, temperature, top_p, max_new_tokens): @@ -88,43 +177,278 @@ def anthropic_api_stream_iter(model_name, prompt, temperature, top_p, max_new_to yield data -def init_palm_chat(model_name): - import vertexai # pip3 install google-cloud-aiplatform - from vertexai.preview.language_models import ChatModel - - project_id = os.environ["GCP_PROJECT_ID"] - location = "us-central1" - vertexai.init(project=project_id, location=location) - - chat_model = ChatModel.from_pretrained(model_name) - chat = chat_model.start_chat(examples=[]) - return chat +def gemini_api_stream_iter( + model_name, conv, temperature, top_p, max_new_tokens, api_key=None +): + import google.generativeai as genai # pip install google-generativeai + if api_key is None: + api_key = os.environ["GEMINI_API_KEY"] + genai.configure(api_key=api_key) -def palm_api_stream_iter(chat, message, temperature, top_p, max_new_tokens): - parameters = { + generation_config = { "temperature": temperature, - "top_p": top_p, "max_output_tokens": max_new_tokens, + "top_p": top_p, } - gen_params = { - "model": "palm-2", - "prompt": message, + params = { + "model": model_name, + "prompt": conv, } - gen_params.update(parameters) - logger.info(f"==== request ====\n{gen_params}") + params.update(generation_config) + logger.info(f"==== request ====\n{params}") - response = chat.send_message(message, **parameters) - content = response.text + safety_settings = [ + {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"}, + {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"}, + {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"}, + {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"}, + ] + model = genai.GenerativeModel( + model_name=model_name, + generation_config=generation_config, + safety_settings=safety_settings, + ) + history = [] + for role, message in conv.messages[:-2]: + history.append({"role": role, "parts": message}) + convo = model.start_chat(history=history) + response = convo.send_message(conv.messages[-2][1], stream=True) + try: + text = "" + for 
chunk in response: + text += chunk.text + data = { + "text": text, + "error_code": 0, + } + yield data + except Exception as e: + logger.error(f"==== error ====\n{e}") + reason = chunk.candidates + yield { + "text": f"**API REQUEST ERROR** Reason: {reason}.", + "error_code": 1, + } + + +def bard_api_stream_iter(model_name, conv, temperature, top_p, api_key=None): + del top_p # not supported + del temperature # not supported + + if api_key is None: + api_key = os.environ["BARD_API_KEY"] + + # convert conv to conv_bard + conv_bard = [] + for turn in conv: + if turn["role"] == "user": + conv_bard.append({"author": "0", "content": turn["content"]}) + elif turn["role"] == "assistant": + conv_bard.append({"author": "1", "content": turn["content"]}) + else: + raise ValueError(f"Unsupported role: {turn['role']}") + + params = { + "model": model_name, + "prompt": conv_bard, + } + logger.info(f"==== request ====\n{params}") + + try: + res = requests.post( + f"https://generativelanguage.googleapis.com/v1beta2/models/{model_name}:generateMessage?key={api_key}", + json={ + "prompt": { + "messages": conv_bard, + }, + }, + timeout=30, + ) + except Exception as e: + logger.error(f"==== error ====\n{e}") + yield { + "text": f"**API REQUEST ERROR** Reason: {e}.", + "error_code": 1, + } + + if res.status_code != 200: + logger.error(f"==== error ==== ({res.status_code}): {res.text}") + yield { + "text": f"**API REQUEST ERROR** Reason: status code {res.status_code}.", + "error_code": 1, + } + + response_json = res.json() + if "candidates" not in response_json: + logger.error(f"==== error ==== response blocked: {response_json}") + reason = response_json["filters"][0]["reason"] + yield { + "text": f"**API REQUEST ERROR** Reason: {reason}.", + "error_code": 1, + } + + response = response_json["candidates"][0]["content"] pos = 0 - while pos < len(content): - # This is a fancy way to simulate token generation latency combined - # with a Poisson process. - pos += random.randint(10, 20) - time.sleep(random.expovariate(50)) + while pos < len(response): + # simulate token streaming + pos += random.randint(3, 6) + time.sleep(0.002) data = { - "text": content[:pos], + "text": response[:pos], "error_code": 0, } yield data + + +def ai2_api_stream_iter( + model_name, + model_id, + messages, + temperature, + top_p, + max_new_tokens, + api_key=None, + api_base=None, +): + # get keys and needed values + ai2_key = api_key or os.environ.get("AI2_API_KEY") + api_base = api_base or "https://inferd.allen.ai/api/v1/infer" + + # Make requests + gen_params = { + "model": model_name, + "prompt": messages, + "temperature": temperature, + "top_p": top_p, + "max_new_tokens": max_new_tokens, + } + logger.info(f"==== request ====\n{gen_params}") + + # AI2 uses vLLM, which requires that `top_p` be 1.0 for greedy sampling: + # https://github.com/vllm-project/vllm/blob/v0.1.7/vllm/sampling_params.py#L156-L157 + if temperature == 0.0 and top_p < 1.0: + raise ValueError("top_p must be 1 when temperature is 0.0") + + res = requests.post( + api_base, + stream=True, + headers={"Authorization": f"Bearer {ai2_key}"}, + json={ + "model_id": model_id, + # This input format is specific to the Tulu2 model. Other models + # may require different input formats. See the model's schema + # documentation on InferD for more information. 
+ "input": { + "messages": messages, + "opts": { + "max_tokens": max_new_tokens, + "temperature": temperature, + "top_p": top_p, + "logprobs": 1, # increase for more choices + }, + }, + }, + timeout=5, + ) + + if res.status_code != 200: + logger.error(f"unexpected response ({res.status_code}): {res.text}") + raise ValueError("unexpected response from InferD", res) + + text = "" + for line in res.iter_lines(): + if line: + part = json.loads(line) + if "result" in part and "output" in part["result"]: + for t in part["result"]["output"]["text"]: + text += t + else: + logger.error(f"unexpected part: {part}") + raise ValueError("empty result in InferD response") + + data = { + "text": text, + "error_code": 0, + } + yield data + + +def mistral_api_stream_iter(model_name, messages, temperature, top_p, max_new_tokens): + from mistralai.client import MistralClient + from mistralai.models.chat_completion import ChatMessage + + api_key = os.environ["MISTRAL_API_KEY"] + + client = MistralClient(api_key=api_key) + + # Make requests + gen_params = { + "model": model_name, + "prompt": messages, + "temperature": temperature, + "top_p": top_p, + "max_new_tokens": max_new_tokens, + } + logger.info(f"==== request ====\n{gen_params}") + + new_messages = [ + ChatMessage(role=message["role"], content=message["content"]) + for message in messages + ] + + res = client.chat_stream( + model=model_name, + temperature=temperature, + messages=new_messages, + max_tokens=max_new_tokens, + top_p=top_p, + ) + + text = "" + for chunk in res: + if chunk.choices[0].delta.content is not None: + text += chunk.choices[0].delta.content + data = { + "text": text, + "error_code": 0, + } + yield data + + +def nvidia_api_stream_iter(model_name, messages, temp, top_p, max_tokens, api_base): + assert model_name in ["llama2-70b-steerlm-chat", "yi-34b-chat"] + + api_key = os.environ["NVIDIA_API_KEY"] + headers = { + "Authorization": f"Bearer {api_key}", + "accept": "text/event-stream", + "content-type": "application/json", + } + # nvidia api does not accept 0 temperature + if temp == 0.0: + temp = 0.0001 + + payload = { + "messages": messages, + "temperature": temp, + "top_p": top_p, + "max_tokens": max_tokens, + "seed": 42, + "stream": True, + } + logger.info(f"==== request ====\n{payload}") + + response = requests.post( + api_base, headers=headers, json=payload, stream=True, timeout=1 + ) + text = "" + for line in response.iter_lines(): + if line: + data = line.decode("utf-8") + if data.endswith("[DONE]"): + break + data = json.loads(data[6:])["choices"][0]["delta"]["content"] + text += data + yield {"text": text, "error_code": 0} diff --git a/fastchat/serve/base_model_worker.py b/fastchat/serve/base_model_worker.py index 514cc8221..2fe322990 100644 --- a/fastchat/serve/base_model_worker.py +++ b/fastchat/serve/base_model_worker.py @@ -34,6 +34,7 @@ def __init__( model_names: List[str], limit_worker_concurrency: int, conv_template: str = None, + multimodal: bool = False, ): global logger, worker @@ -46,6 +47,7 @@ def __init__( self.limit_worker_concurrency = limit_worker_concurrency self.conv = self.make_conv_template(conv_template, model_path) self.conv.sep_style = int(self.conv.sep_style) + self.multimodal = multimodal self.tokenizer = None self.context_len = None self.call_ct = 0 @@ -92,6 +94,7 @@ def register_to_controller(self): "worker_name": self.worker_addr, "check_heart_beat": True, "worker_status": self.get_status(), + "multimodal": self.multimodal, } r = requests.post(url, json=data) assert r.status_code == 200 @@ -126,18 
+129,18 @@ def send_heart_beat(self): self.register_to_controller() def get_queue_length(self): - if ( - self.semaphore is None - or self.semaphore._value is None - or self.semaphore._waiters is None - ): + if self.semaphore is None: return 0 else: - return ( - self.limit_worker_concurrency - - self.semaphore._value - + len(self.semaphore._waiters) + sempahore_value = ( + self.semaphore._value + if self.semaphore._value is not None + else self.limit_worker_concurrency ) + waiter_count = ( + 0 if self.semaphore._waiters is None else len(self.semaphore._waiters) + ) + return self.limit_worker_concurrency - sempahore_value + waiter_count def get_status(self): return { diff --git a/fastchat/serve/call_monitor.py b/fastchat/serve/call_monitor.py new file mode 100644 index 000000000..eb8bf2aea --- /dev/null +++ b/fastchat/serve/call_monitor.py @@ -0,0 +1,219 @@ +import json +import os +import glob +import time + +from fastapi import FastAPI +import hashlib +import asyncio + +REFRESH_INTERVAL_SEC = 60 +LOG_DIR = "/home/vicuna/fastchat_logs/server0" +# LOG_DIR = "/home/vicuna/tmp/test_env" + + +class Monitor: + """Monitor the number of calls to each model.""" + + def __init__(self, log_dir: str): + self.log_dir = log_dir + self.model_call = {} + self.user_call = {} + self.model_call_limit_global = { + "gpt-4-1106-preview": 300, + "gpt-4-0125-preview": 300, + } + self.model_call_day_limit_per_user = {"gpt-4-1106-preview": 10} + + async def update_stats(self, num_file=1) -> None: + while True: + # find the latest num_file log under log_dir + json_files = glob.glob(os.path.join(self.log_dir, "*.json")) + json_files.sort(key=os.path.getctime, reverse=True) + json_files = json_files[:num_file] + + model_call = {} + user_call = {} + for json_file in json_files: + for line in open(json_file, "r", encoding="utf-8"): + obj = json.loads(line) + if obj["type"] != "chat": + continue + if obj["model"] not in model_call: + model_call[obj["model"]] = [] + model_call[obj["model"]].append( + {"tstamp": obj["tstamp"], "user_id": obj["ip"]} + ) + if obj["ip"] not in user_call: + user_call[obj["ip"]] = [] + user_call[obj["ip"]].append( + {"tstamp": obj["tstamp"], "model": obj["model"]} + ) + + self.model_call = model_call + self.model_call_stats_hour = self.get_model_call_stats(top_k=None) + self.model_call_stats_day = self.get_model_call_stats( + top_k=None, most_recent_min=24 * 60 + ) + + self.user_call = user_call + self.user_call_stats_hour = self.get_user_call_stats(top_k=None) + self.user_call_stats_day = self.get_user_call_stats( + top_k=None, most_recent_min=24 * 60 + ) + await asyncio.sleep(REFRESH_INTERVAL_SEC) + + def get_model_call_limit(self, model: str) -> int: + if model not in self.model_call_limit_global: + return -1 + return self.model_call_limit_global[model] + + def update_model_call_limit(self, model: str, limit: int) -> bool: + if model not in self.model_call_limit_global: + return False + self.model_call_limit_global[model] = limit + return True + + def is_model_limit_reached(self, model: str) -> bool: + if model not in self.model_call_limit_global: + return False + if model not in self.model_call_stats_hour: + return False + # check if the model call limit is reached + if self.model_call_stats_hour[model] >= self.model_call_limit_global[model]: + return True + return False + + def is_user_limit_reached(self, model: str, user_id: str) -> bool: + if model not in self.model_call_day_limit_per_user: + return False + if user_id not in self.user_call_stats_day: + return False + if model not in 
self.user_call_stats_day[user_id]["call_dict"]: + return False + # check if the user call limit is reached + if ( + self.user_call_stats_day[user_id]["call_dict"][model] + >= self.model_call_day_limit_per_user[model] + ): + return True + return False + + def get_model_call_stats( + self, target_model=None, most_recent_min: int = 60, top_k: int = 20 + ) -> dict: + model_call_stats = {} + for model, reqs in self.model_call.items(): + if target_model is not None and model != target_model: + continue + model_call = [] + for req in reqs: + if req["tstamp"] < time.time() - most_recent_min * 60: + continue + model_call.append(req["tstamp"]) + model_call_stats[model] = len(model_call) + if top_k is not None: + top_k_model = sorted( + model_call_stats, key=lambda x: model_call_stats[x], reverse=True + )[:top_k] + model_call_stats = {model: model_call_stats[model] for model in top_k_model} + return model_call_stats + + def get_user_call_stats( + self, target_model=None, most_recent_min: int = 60, top_k: int = 20 + ) -> dict: + user_call_stats = {} + for user_id, reqs in self.user_call.items(): + user_model_call = {"call_dict": {}} + for req in reqs: + if req["tstamp"] < time.time() - most_recent_min * 60: + continue + if target_model is not None and req["model"] != target_model: + continue + if req["model"] not in user_model_call["call_dict"]: + user_model_call["call_dict"][req["model"]] = 0 + user_model_call["call_dict"][req["model"]] += 1 + + user_model_call["total_calls"] = sum(user_model_call["call_dict"].values()) + if user_model_call["total_calls"] > 0: + user_call_stats[user_id] = user_model_call + if top_k is not None: + top_k_user = sorted( + user_call_stats, + key=lambda x: user_call_stats[x]["total_calls"], + reverse=True, + )[:top_k] + user_call_stats = { + user_id: user_call_stats[user_id] for user_id in top_k_user + } + return user_call_stats + + def get_num_users(self, most_recent_min: int = 60) -> int: + user_call_stats = self.get_user_call_stats( + most_recent_min=most_recent_min, top_k=None + ) + return len(user_call_stats) + + +monitor = Monitor(log_dir=LOG_DIR) +app = FastAPI() + + +@app.on_event("startup") +async def app_startup(): + asyncio.create_task(monitor.update_stats(2)) + + +@app.get("/get_model_call_limit/{model}") +async def get_model_call_limit(model: str): + return {"model_call_limit": {model: monitor.get_model_call_limit(model)}} + + +@app.get("/update_model_call_limit/{model}/{limit}") +async def update_model_call_limit(model: str, limit: int): + if not monitor.update_model_call_limit(model, limit): + return {"success": False} + return {"success": True} + + +@app.get("/is_limit_reached") +async def is_limit_reached(model: str, user_id: str): + if monitor.is_model_limit_reached(model): + return { + "is_limit_reached": True, + "reason": f"MODEL_HOURLY_LIMIT ({model}): {monitor.get_model_call_limit(model)}", + } + if monitor.is_user_limit_reached(model, user_id): + return { + "is_limit_reached": True, + "reason": f"USER_DAILY_LIMIT ({model}): {monitor.model_call_day_limit_per_user[model]}", + } + return {"is_limit_reached": False} + + +@app.get("/get_num_users_hr") +async def get_num_users(): + return {"num_users": len(monitor.user_call_stats_hour)} + + +@app.get("/get_num_users_day") +async def get_num_users_day(): + return {"num_users": len(monitor.user_call_stats_day)} + + +@app.get("/get_user_call_stats") +async def get_user_call_stats( + model: str = None, most_recent_min: int = 60, top_k: int = None +): + return { + "user_call_stats": 
monitor.get_user_call_stats(model, most_recent_min, top_k)
+    }
+
+
+@app.get("/get_model_call_stats")
+async def get_model_call_stats(
+    model: str = None, most_recent_min: int = 60, top_k: int = None
+):
+    return {
+        "model_call_stats": monitor.get_model_call_stats(model, most_recent_min, top_k)
+    }
diff --git a/fastchat/serve/controller.py b/fastchat/serve/controller.py
index a67da62c4..42d928403 100644
--- a/fastchat/serve/controller.py
+++ b/fastchat/serve/controller.py
@@ -52,6 +52,7 @@ class WorkerInfo:
     queue_length: int
     check_heart_beat: bool
     last_heart_beat: str
+    multimodal: bool


 def heart_beat_controller(controller):
@@ -72,7 +73,11 @@ def __init__(self, dispatch_method: str):
         self.heart_beat_thread.start()

     def register_worker(
-        self, worker_name: str, check_heart_beat: bool, worker_status: dict
+        self,
+        worker_name: str,
+        check_heart_beat: bool,
+        worker_status: dict,
+        multimodal: bool,
     ):
         if worker_name not in self.worker_info:
             logger.info(f"Register a new worker: {worker_name}")
@@ -90,6 +95,7 @@ def register_worker(
                 worker_status["queue_length"],
                 check_heart_beat,
                 time.time(),
+                multimodal,
             )

         logger.info(f"Register done: {worker_name}, {worker_status}")
@@ -116,7 +122,9 @@ def refresh_all_workers(self):
         self.worker_info = {}

         for w_name, w_info in old_info.items():
-            if not self.register_worker(w_name, w_info.check_heart_beat, None):
+            if not self.register_worker(
+                w_name, w_info.check_heart_beat, None, w_info.multimodal
+            ):
                 logger.info(f"Remove stale worker: {w_name}")

     def list_models(self):
@@ -127,6 +135,24 @@ def list_models(self):

         return list(model_names)

+    def list_multimodal_models(self):
+        model_names = set()
+
+        for w_name, w_info in self.worker_info.items():
+            if w_info.multimodal:
+                model_names.update(w_info.model_names)
+
+        return list(model_names)
+
+    def list_language_models(self):
+        model_names = set()
+
+        for w_name, w_info in self.worker_info.items():
+            if not w_info.multimodal:
+                model_names.update(w_info.model_names)
+
+        return list(model_names)
+
     def get_worker_address(self, model_name: str):
         if self.dispatch_method == DispatchMethod.LOTTERY:
             worker_names = []
@@ -263,7 +289,10 @@ def worker_api_generate_stream(self, params):
 async def register_worker(request: Request):
     data = await request.json()
     controller.register_worker(
-        data["worker_name"], data["check_heart_beat"], data.get("worker_status", None)
+        data["worker_name"],
+        data["check_heart_beat"],
+        data.get("worker_status", None),
+        data.get("multimodal", False),
     )


@@ -278,6 +307,18 @@ async def list_models():
     return {"models": models}


+@app.post("/list_multimodal_models")
+async def list_multimodal_models():
+    models = controller.list_multimodal_models()
+    return {"models": models}
+
+
+@app.post("/list_language_models")
+async def list_language_models():
+    models = controller.list_language_models()
+    return {"models": models}
+
+
 @app.post("/get_worker_address")
 async def get_worker_address(request: Request):
     data = await request.json()
diff --git a/fastchat/serve/example_images/distracted.jpg b/fastchat/serve/example_images/distracted.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..382c888a0305296d7307ce061d527e1c5e01aca3
GIT binary patch
literal 94338
[94338-byte base85-encoded JPEG payload omitted]
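
A few notes on the hunks above, each with a small Python sketch.

The openai_api_stream_iter hunk migrates from the module-level openai.ChatCompletion API to the client objects introduced in openai>=1.0, and guards against keep-alive chunks that carry an empty choices list. A minimal sketch of the same pattern, assuming openai>=1.0 is installed and OPENAI_API_KEY is set (the model name is illustrative):

import os
import openai

client = openai.OpenAI(
    base_url="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
res = client.chat.completions.create(
    model="gpt-4-1106-preview",  # illustrative
    messages=[{"role": "user", "content": "Say hello."}],
    temperature=0.7,
    max_tokens=64,
    stream=True,
)
text = ""
for chunk in res:
    # Some streams (notably Azure's) emit chunks with no choices,
    # hence the same guard the patched worker uses.
    if len(chunk.choices) > 0:
        text += chunk.choices[0].delta.content or ""
print(text)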
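
gemini_api_stream_iter replays every turn except the last two as chat history (the final two slots hold the pending user turn and the empty assistant slot), then streams the pending turn. A compressed sketch of that shape, assuming google-generativeai is installed and GEMINI_API_KEY is set; the model name and canned history are illustrative:

import os
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel(model_name="gemini-pro")  # illustrative

# Stand-in for conv.messages[:-2]; Gemini expects "user"/"model" roles.
history = [
    {"role": "user", "parts": "Hi."},
    {"role": "model", "parts": "Hello!"},
]
convo = model.start_chat(history=history)

text = ""
for chunk in convo.send_message("What did I just say?", stream=True):
    text += chunk.text  # accumulate partial text exactly as the worker does
print(text)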
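
nvidia_api_stream_iter parses server-sent events by hand: each payload line starts with the 6-character prefix "data: " (which is what the data[6:] slice drops) and a literal [DONE] sentinel ends the stream. A self-contained sketch of that loop over canned lines (the payloads are illustrative stand-ins for response.iter_lines()):

import json

lines = [
    b'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    b'data: {"choices": [{"delta": {"content": "lo"}}]}',
    b"data: [DONE]",
]

text = ""
for line in lines:
    if line:
        data = line.decode("utf-8")
        if data.endswith("[DONE]"):
            break
        # len("data: ") == 6, hence the [6:] slice in the worker code
        data = json.loads(data[6:])["choices"][0]["delta"]["content"]
        text += data
print(text)  # -> Hello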
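
The get_queue_length rewrite in base_model_worker.py keeps the same formula, limit - free_slots + waiters, but falls back per field instead of returning 0 whenever any private semaphore attribute is unset. A small sketch of the arithmetic, leaning on the same private asyncio.Semaphore attributes the worker already reads:

import asyncio

def queue_length(semaphore, limit: int) -> int:
    if semaphore is None:
        return 0
    # Unset _value means all slots are free; unset _waiters means no queue.
    value = semaphore._value if semaphore._value is not None else limit
    waiters = 0 if semaphore._waiters is None else len(semaphore._waiters)
    return limit - value + waiters

sem = asyncio.Semaphore(5)
print(queue_length(sem, 5))  # 0: five free slots, nobody queued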
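
The new call_monitor.py service tallies per-model and per-user calls from the chat logs and answers plain GET requests, so a serving frontend can consult it before dispatching. A usage sketch; the host and port are assumptions (the module only defines the FastAPI app, so serve it with uvicorn however you deploy), and the model name is illustrative:

import requests

MONITOR = "http://localhost:9090"  # assumed; e.g. uvicorn fastchat.serve.call_monitor:app

# Should this request be rejected for rate-limit reasons?
r = requests.get(
    f"{MONITOR}/is_limit_reached",
    params={"model": "gpt-4-1106-preview", "user_id": "203.0.113.7"},
    timeout=5,
)
print(r.json())  # {"is_limit_reached": False} or a limit reason

# Raise a model's hourly cap at runtime (only succeeds for models
# already present in model_call_limit_global).
r = requests.get(f"{MONITOR}/update_model_call_limit/gpt-4-1106-preview/500", timeout=5)
print(r.json())  # {"success": True}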
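
Finally, with the multimodal flag threaded through worker registration, the controller can partition its registry, and the two new POST routes make that split queryable. A sketch against a running controller (21001 is FastChat's customary controller port, but treat the address as an assumption):

import requests

CONTROLLER = "http://localhost:21001"  # assumed controller address

# Workers that registered with multimodal=True vs. everything else.
vision = requests.post(f"{CONTROLLER}/list_multimodal_models", timeout=5).json()
text_only = requests.post(f"{CONTROLLER}/list_language_models", timeout=5).json()
print("multimodal:", vision["models"])
print("language-only:", text_only["models"])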
z+b3C0K}%N%wFfHMVw&Kyg{U-CcSnXj$rj^NkmUKqP&uLGbyam{vRn>PXmS28!0l$C zOBG)93ob5kE5iH&zJAIykJ)-p;XLEy#q#g5S;ZkFC?|5JQ^s6ETwS^)szt?7P+KQn z=YaoICtT_)-leRrJk7GGgeuu?akC0 z1Z2nzPDDy=*HtX+%2`0Z4oRep9BqsPu}g2f(>Aa|=ZurnD%7*o!^jQ$T3WJ zM_@Yy2Ai@q3Fy4yJtN&4YWIw#N!^E(**@`QfgHluDKN-JGNefsR5HTSq+L@AvDpiD0YhSwq9VrvxGOx; zy#TGiWqGA_9o5+Q1(;ZYg|G~%NCwYWLfbu9IeJRg;<=F& zXp?mx7*Y=FQ-;Z9m^#?=I0(I-E@Z&<-2!v4Fr9dQpXoSKw`j}p%}UbfGP|cGNU-hk zhD0_*i10V_l}J`?P(aqYdg_7H{Va>bPCZV`mnf^NC>eX6YPYz=T=sx4 zHAvqN>(yD`8t$Uu#dP%A$!#PLU}zxe%YY>NkX1o{Ev2WJYiqt?qC`}1j@>Ep=Bto4D5Q{op{3u@ z6=qtj*GcViwmV&0nk*Fn5fiukM|y6CQum>We@Q4eRCHuL-}gwdljyOR5Ytr2l61uP zZhV4rJSTm0bU3rsk|`q3;gP^d8Vio?^a-1Yn^O?^XlJLER)EsT zf)8MkIFju}bb^-GZL&GFayv}21_Iy!B;dCjpHvf0F-pI=F?QP#`CQ71 z+&DdrYfNtF3rWUCeAKui!;U)uKe{y!58NYUFQ|v**z0UACFBA@lLG^(DVE)D;ug}i z^t3eau(+JON34wF#jiWyB_^mo8j0kTKWO zWnxgdzyNcO#d$4sy7f6C$4`u;ofx8ZbM3!Y0O-ZPTuO5PCuYY5hl3S&jf5 z`z1jRm9EE?4$|F-J9Q~G^>8u7><~61-5su&fw3Jil!J7X%_f}WH>#Etc-&K&#jTOf z>g~!}s4sJAa2hpLM}yqp=ldoa*&OEvvIj~*{{StZlHfaSy7BcCO{lf%;Rzyl%ok%^ z*zmCLf)}CiTy^!bwcDkCv(9+lR^BXMGJlr_ma)&Jw>Sp3{4-7mUZ=0B6L=+-y25tL zKtFP=2u*0Cq%YuFS(qu~+`{m1Mn>TR;s?L@Y>jur_(3u-U^Y?TwgU9gpW!W++v<-r z*Pp*Wf(!&l1Vj`5TYq4NSv`G*^UjnZ4XO@DMYwsgQ3S=%LbJuai+!=xw34U=`F@gbeM zvV1?HNUw@kUhcf+mdMGuTlUZZBV^6PQUg~!B=7R2*#7{S1yjUb=(QL8tNfRIHCDS$ z{{X4~0Dz@TvPXurlaEzqaJ5sNXr$L6ackshc5~l2`(;*jjB`Wujmm7{X-h4xVR!|T z=LF*=!~Xy!?(jGq6Takm6h7?W%4so-;T`PQ6>WT# z)j8EHht5ZTRLM--fPhfM>;sOcDVCj>!uhUe)t4jnlV{cNN2eAw=@f}=!05LUTGfME zHr;(8*zSb{`Q(F8{;7VlTn7YJrc7sawOudNp>ED@k-E&n`k5r`4ioH00A!^b1SgbT zjmgSvX*+Da?uQ@I(qV^wn}08J=Ds1D>+5wBy`0)#3t~ZV7};rjhX-}(8xZn zpnt{kUBLeU4kB$(!72l?j#WTQvQWpzx~(o*ZZBj)JrJt6bpx_C=3#0uhxH1)pJ2+i z>1s2%*=|*Dce04}5Y^H0+k=5RRegT90ND+;+M7{iWa(6~=KMwS zTAn{8`xb*OP_TU=V(NI}>4mzk;S?Fmz)E%Z_{KvVyd!g)?t18~I9Fl5l5?wPY!g)+ zZmPpZO09T)XVtVwbGwSWxK@p_Zbo-ZhACQ12dRsUoL3tgsJLCT;K<#IX{F&>I6y6d zx%0wnqoiwE-H}knfzfPW!unTZ6H$|{Zub`Bvc$$?K>ZTk>cYt)c+d{sO3J`O>)O*B zk>$nH8?kuhZjL46F+UF1I!N-?IP~q6U)FJ~&w=f3JI+?CFh3ZT3k+i3~U=ZIQo=b_!l}%@UaijK3RMscQhhU7L zj;cLLA)mQ8#@n)Gx>+3)wfEUDFyJ&HF~Di?)7=nbvgW0Dr0*O(%RUzbvSjuS)1HXL zF2V^;b_kmtfUwcHOqt_BN)@m1lzO6UFodI7HePo_BX{EDnM z7L>GoX+Ef)a8cfNLfF`EMg**0c3O+K zSiSfOc0DR)rugP-N1C(jQE6jOqO9%B=G5n|>e{ulL_rjJJU8ujrqnSf@*iuqTLYAE zRzTrC&z`hq_iRwAr}a~w{g7&>=O`p}R^Xb4?p53H?z^8r$jh==aHzS?a;#5=wW2z? 
zp(m~gR&*2T)Fu4LM~LJ4nn~@t+~{%8FIFr$Xjv~*w@C3v%jNRPjzZBg)D;xOqqURf*oh*E8*!hoNWhTv$NLq$E%Y@h zC%b#0c@pYWyA{nR4srcd&L-mpGl=Nf8sGl_U{;kE2XT|FGMV_7Lw|wR#_UTkPGQGD z0z-p@c(S0i)f`e^-v;+Ovb_#IBHYQ`NgIm6%RiM6+xen=Ge z>>01%yND2)-X+eOXj(xej0SnR;ZxjqN5l5ZeMCm~`l^QvHYnc1lb=kGo%p)bV4&3b z?=n36yl(L54kJMF)VXT&agKVa%`7FbHzG3QmmgA7o>s9zbxU-+P^dwZ0M2--&c2P$-F7HKOmL@u1Z3VAKxoTDzh2635mdu7DQPJwg)~BDMgtBq$nXHp-GUi3^RdnR zc`=>*We3odVB2X@x7?WIPDw88V6$@=Pci->wtYf+w_uPlvPiVIT{|3ll<4JX zb4CCouSM-N8d*a}iBxWDso%cmhu7*BO>S8}Ao^G2I#bs1-90@ZZMr8ll+d)ahGXa( z{aw)8#pjRBJPmIT#PjnE7$6V6&qp0{uVW*OR1XYMj25@E0P;a@cZ;N7CR;N}H3RZD z0|VQ2syng{LmbBn>@N?OzbxSw~UKoD4?dU7@~Ru#;^!$*S(uY?fvi>t^vfOX7k8O%aU9HaZtb?|HI6)t}A1EBZ@#OyirT+kfq9fhz&vy`4&u)LF z7txIBIsDxg**btfMegzM6nyU|+`-Z_VsX#f3H~Edx;(7I2d~5iollT{?X^3NXMp(em0+AIcjiY6eZk_PHamzlUxJIGV z7*VDq0;D~_5};1~j_Piub465Vd%XL)TTt`TlKT@p;M#>j5`7UAq0F4T@adp`UmLNMOYL;Y_3ak(a0V67_ zEWv@K8Fm8M40?*x3CYM)X-F>JQEu4dX{YcVk}dw2a^%~xv@|rw_DNUhL5@W1E6jz> z?9NQ$n6%ZMyObi$Pt2fu!{-pSWHZJ=g;{a_F8O4vr|n z7}*?RuE~Z4 z6sW5|fkO+mX$n*A?Q=G7WY1RE^MQq_nj0Ajn59xcMU!rLM z(qTVckxn?*pne+$zN3BEb_1|ewe8gogzOZOc3F#KT8MR%U>(w}M45R;>z+U< zURUN!E~`w{@R@Y$aeRg8UW`U^5E?{wA+lNZ%o4aSf=F3~hzl?jpxZGk!nR>zM^z^# zV7x0LH(0JD^8sOnpWS$bBw>Y)7Hh@9reeOr%M0MRn=v74b2Hg*!uL*8Q5{r6(S^B( z3t?i%P8U@L)S11fqTD>xX7yXzQ6?ft%mwIQFOiEfUoRKL;4UD=#{of=;3*GxR61f$ zM7c5e<6D~ffR(0sJS!dYv`7hX_d<3_^u@nKG1w`~p3*|JshgrUgO&gz89R7sqO6BKWRD-A8~4e%ea zKxI8vHp$F4I!A-jLHnu0E}XdE^&_&!uN33!7IsA(f=F0%*-Joda`P!7IsBm>Uc(f+ zppOfdJu{N`K3^}F$%4<9%jNRpiaX<7Y;-gha}f=yo}fjqI$R+yYlz6-ass2LuC9_f z2hYqwY0r(`NuT3)6Vp40qNKy0@=`$=z!=m{v)A2LM~L7Il>EGh!h3<+(QN4>2;q&@ zz5f7GmS-tV*BlUO=cuTtjgOK-2?tiQ&;;3CP{vy-HFP#Qwr8=?BZ>a1xVLdD9A=Qx zw+!DdcU*n4mv6jZT~$XMOw$ux1yopQIU-pGai3y~E?BM~=#GXtcOmh`{%W_uO&jr; zp=^YCY0eTsi!ETs)PG>HoJl5!J;awvGD?(LGpZ@!jk=!T16n(hRVODoT)!g1T`G|6 z!c;uT1lT%bm8H)9T@56Dq>s^I#MPpvYFm6K5ds=O>4x@3a4fW6Ck&dL4rOp3RG9G` zd3FgdrJAc~-e0JfyFEOWoudj%cUC9vmEOyOMNDEELfm66Xipp$PZc}pswA0>GEY0& z3C08wXZJ2hR|Y+-fMD+8Mmp@BtW<^v2h0w!N1Z3~z4~K5`4N=2Zc#~h3uPADFuL9O zb+j>xQtUbI$vDq%B@%0^Y9)>_32t0B&Q^CUI_U`hNJ;%l9?Y89gGWr9`zX~;P{oiA8M{Eq?MXQy8r}x5Jt!G3ODD_VQl$8*8O1sI89&I(<#s zZQCr0d;JAPH59IVjW4E%oUonr0D2)8i`4^aZB5ijGdQ0K+W;swK*=qX1Y-?|H-5V) zH+UkFzS0W-#E&B#_6qM*zMA8NR!TooT=xxE-zbF@Q4+bogzRO9$t7BD)E~rY94b6q z%-}Hp04YRnjA*yiHvqIVeO}|DR<*X7mk1pA^fkK4AZ!MLtL2=xOIWUXifi5eY~(1X6}2M^gTbk$WA6jM07Lx(WCKT#fg_FA}#7e_UsaNzby zK@4sU?D$$5K^}XIqI@*Qu++#{0hzi-`&dW!a?jUei_qqhwEmr!bTVsOr(Cb7jkS^F zIArd+zu|R?g6Z(a-9>PvY|vD`c^t=j$Z+6CL*!R3Rt=qz>{n#`FNLsh7Ywuxw1LOn z&Lpojph{UJnzc^PL*lY)9x82lt$sqBp98pT%IC*lTE>&J32U+UAp_zMhBkg5g5Oh1 zRZ%3uz8nVz;Iw3(gVA&{O5;Z{w(3{AANSb6`+@?wlS`+HZDUFPW>j@ri8P*BHRx23 z9P8f%9Q}z>NI1uJqL;4Ys3A>oBw%&!xAA?b^Z9m@AxVvh?cD=>dg>nIjkZQ=*0Hs>$ zrvgC9%JPlT?Bbh2Gf}AQR*Bqk)e5ANQzwjL-4duR&^H^Q>=IjxcL_0_)>arbNawg` zP}ry!0!)E`^(nH^Ps@#)c3y|Tu-vj_YlML7~ zs9WJ6g1nuc73q&2X^5YekY{1K5aX%;0CZlNxophO)Y62txF;!6;uuGwO};#c z-|U{B0m0cOUum3UlO9R9wvqx*%Fbuv*9TO6$_HR@ZD<~$8shk$9ZAocTe25?`&Xmq zP5%JY$m1{n0Ar>s9w?dY5nH4>;X8Ih_+^B}9(;mqt*6WCk9i7 z?GVqh6E6kFtLXie==xSI0)>xo**jk!(Hd0L{i84-H?yC-V0o@Acj^%5TrJ=6z zBaEo$8#LV~5&|>OB<7#QD78%HN01Il*dY#%NGEe77&%jtPDvK@iASk-48V3$ozX+5 z=oI2SDG0m-u5Mc7p((_rJ5{u4DK@3Jl>4GfW@vbGtzI9HwFt-v{7u;ex3|I*9Bse} zFO%zfcldFB#y~BAFNVv~Q6Y`l<`Wqpz6q?Kj_Eo(AXB|vDwcveMVP==3}i{idq z5%T$by)xm`&Oo1`#h00MWyP@>fX8S5`U-X`jjmOIibe%#q!g(&4vKc2;F)3tXL5Fs;3@fw;pW0jvWZ19H#; zrlr>^EH%=K-mb}Vr!>M^35Vim>CZrXlm7q>aBb+CE2Awk%q5+yh;5d(HW^7HIT;`* zPO;O$bGOGm_m>m(ekOG8a5gz1SaI;(pjO*kNyUs~O>JWw?qf9_4bTZ8!?RaSsO^0F z9Ua7at{&nlsY`!^RRZIaskr{?J%0w?DFwOG(!~yHJVnzpVT|v(bB4idl2Rw)!7erj zn8T0Nj-L@#+Qjy 
zxMXQ=eS$y36tYOiNLoO9-HeX9b_Hhmo?eU@cjeKe;kdf>7*%=}ZK^V2VA260HbWo) zHy$p@a)G3=x!8gGriylfrN_73c*L!q+l9F-oDIp&I;HBmmYqOn1HYOnRVgiYHXpiL zsc|HX?whc!!Sgw;4s(tJcPg^1xjL@lbHhWK6p_LXdw@y)sm}Ra+B#Rbk=3ne^Lvf} zRh_gKMON%@Ep&$`{XQS;wtOoWo#boT7+x`(%CRq&5v!{#q@NLu*{84b93s(FN+593 z&_jBA1QxOfi+s=OH8B4G^#s#R6CBZ~9-TeH^*Gb@Jj9yY+bdN~Qoi)Rx`Z zqW4%uzE+6G?{E5q8>7=ynysPOp@ z{wIOr>OfTWxBI4D z`NuEpTG4;+FfUAqF;?>Cb@2|Xwmu)Vtl+*G4e9(oBl?W49jtk4B_kWOuJ7RZK3j!& zb}jri+J3&VgZLqxSd1{@72zBNj{}t$>*G4y;GOuW5x6*C+N~D7zhEU)la}^&U^3f zpi&I8Az^l_6m(A0^~vFHnDZ-D<%SJBvd#Hn8)jT^PgAMKh6?35B){$hxK)aY-K6+f za53M^s9Nwk?4d?Z+o=R(4VQq=ntF`cGR@qk@BaV?(y(F2ro)R26-E(B8l(l~M-&mf zmpBD0P6+f~nVP}bBDX9q0rS$bFi1E{^>rdFg8u;3PPp7#9-*E5m3wfhsG3r9hUGgR zCpMR>11@OdjruELsz33S?HzRPRg*~oV_Si?5-$l+yHlii%bmF@&x$NJ7+%*yr;ZE} zg86?Z&U{BzEncR19}i@`5#&H&%{@V0%ex5fKcd>`^p?BNVzoFNbo5sWNRLc0F3JSV z(JX_>#S@97NH*@F);ANs)eMm_djniEm6RjIJMO)33)p_Xi;~!j+~__Chn2&$XCLM| zNB*u&r=k~QgXJu?xQM}RGc)c+PxVvXD?&_Nt2H93FK+-T#8K_oCpxNM!s*!w z;kbt;FJPPMV;rOGu|N4gGf(&l(#k@WM$+sOqlg_Vh6dfy%6LR=0}94@*Ohkx8=AlX zS+AGMy(5p8%jNRwM;|Yj%jG8?IAjo%FZ9tEWG-%YDZwCdoys~JbwrMK16Wcsr9s8k z{fZXv$ZbtQW&v(eMNQS1J|q_R#n4l-s#i)iJdn|zq^jU?0hc&(#L16@B|z~Ws@sC1N!uHwad7|um8!?|qtok5wW62QpHkwQ87ZBxQ_HqO zX`RteCvK}f8pw-M;iRQXpgF0Qq1zO*Qb#c#V)lps0CS1|08$NGhQS{P043@NY!QI{ zk{>75^tz9n^*)J*ji?xm|xUTR%obbA{KSmj_`ywtkFQ8U!OC2Pw2 zE3BTAEU$94<$IOH(USM7!z}b z;zzYmT#pI+Ly1E_Qtmm#d?9Ud*=Ubut%ssw;PLb1i7#|ER7O-bR4fz^ zle&ce0H}MZvL31nw~>R*!}_RPuaKG5To~M2DS!EGPRUP%*`nJ{KhP6m_4;L4k>RyY z^W^zw%jNRSEEk<}+Y2bCjuaY>`IiPnfbcVfyfUk=T2y z1Bt7y8b{4bOKD`1GzxlXnj~-=laZ7Aq^Dwgg)}ila9zyAhb}nZT!i&LL$a}wQkb{{Rmpw}(jSDdKq~ zx4Q;>*4U~ip4noGSseasmGStF#7z@iFb_{bgc+kYnkgDcEu(97dgQepJPt10yLJLu z8(S{tl=l7@V__3EQtn~H@z#{{Pc)A$naN<+SlA(-i0gwRbp=(`@ zv5i?mtF1g|0DF!8Jq;NExc>lls#u!nZsEIpq4T#d$f+wC#d1_!49=yCRY4k>*s^i9 z+5srt1v{xkOp-$*ng&i#S{rTBS=lXp&ztzJe=IoeUUQzwwag8dW_S)U_Fiwr^IL=y z=XQEe1;f1ZdQw5U5Z4B8w{zr~p=i=EzI){XqHfLG82!RwjA)Bfhpx*K9G-$)18d!k z^vYP8O*)=WvL#zNe69oOJ^H2Vok*vrkm=5WoM&=-1lYyONhQk_IVeT9!a{w@YPoKf z=-X%&kurBk{cI(J=CkUOufLS|6gL%2R7`5O64w8^=9yt*w`8m=&v!OVgO0{XB4 zYY7>HK4Cpk{{Y*k!E7wH!=O>ZTJ0H{I+&sEWPz z>V~s{AbW#B-WOwIo%c+XZsHbE$NvEKJ-#8!2YOxld6y}U9C;-bTz@G{%);yv*!tto zC258}J7u~M=Rij%C7{RHEqCRu;lZArs^5||mSX!&2$zGSbAXgMuu9NKmEDI#ChB1R zEk&LQbAf_PoX@ez@=q3fhZ|E_=mE4ewXwWpGh^dDFz$#g%I(HdT`0)1I}UM%86Pn5X4yviw=RY411gTyxObeeEq+u==S8D zPcv$)Rggc08%+7F^G~NT3^=*cjB4-EAaL{Kf;I^ObKM!1Q~tS*qxub%)kjIj4T=LT zwj#qx5IJixpvDLDDx#j*RYqo(N5c5zIAh~@csM?t5_O8SQ*E!Y@b8kwf!@0k-|z42 zZmiQRc#Pz^z+=Aa^d{dA%TUonjB=ojnEwE4KwqDh>Tj96TB*SQ06MY=@#Y;M3-mZ_ z0>#iT`5(cp;?K{8dp3Q6ka5u}aig@^BpLdpVE*Z`B=O6Lw$ryiRNv91X+?RL$&DxF zWle8(rko7!LRq}k{MBJ=4r7nP-(NHLM2%0%Qq(vcIFi=;l(5se$yuH=s{A}!Xuf>Y z{{TqX@GT$WW!1@lPCxd^EMVAUb-y3cVIDKpc-%89TJ zHcM3xbKHD_indQxAazi6TZ@Z`0SPcRYM9|?LA#Xd%7(!iK3~-m;ra)^GdvwnIV)$v z^v^CuHoUOos z6oKRrwY|7*eDJ2dm^*B=xucigVyC&pRxoSIA6!2h?25;L@I27k_Wm&TLn33t1WAuI zGSN=z%KBPyxtdlp^*B&vI8DC~ss3k*EyEi|Um^bhm87Vf>BlGRbQe}gFIIF!jBIg+ zj*S2hVX|DVspSH4nx|%3QZ@1ya@~W4myWSo2`2?u zbiYVT$Cc%#!pj8)Z^H8nW*3-F5oW9*wav>Mix7s^j|07zE>7&ufB1tIX=gr#P@E}& zcpm=%-Cv&(m;j4O64#^7(wYgXOAO zTN4YMb_;4l<7?Y(Hp+=hv8AO3Np!E1xJ~*Ru3UR)w$-wpYL>Oc2HI(zc z?lDL}%KDmJ9do*O%J!8~H+n?knJAI8WG(Q8*(RDSDy$pVczS z3tT>sp}vMPew0b16Sqd*9o~Psxah_sVyx^`Y-GC*>d&TW3~2UQUL*G-XTvxTQ*;V` zc0=ihL`|&7S!mjzWlutts$GdwbyG0XlPd{qg!-l~8D~jrJ}UlX8KRH?r<$g>x>bs+ zNWd%8Qs+4CtI2N9t7!7pD0~dRIC~ZM&60YcG00C+r8U6;rwo=B6$JNNn=X5y0zkJi zUsc1SAg;CdF+t|%y8D^B;Ow3r7rN9QV)vQqxa%is9&7DELgsp}d4+M-iJF(Xls;nj znd-RhiJF(%lulyznd-RWUZw7(6PfC=5-vKCD^kciqKG6OYau5^Hbo*>a`#$_9v8a( 
zS6Lkhm#JnXP<4+riI)qyf%<8Ld4Xgk2rH@x?zB3o-fC`=2kD%jD5Q*BIR@EZi}CuBC^aQwqj5S)+oLyqJeE&1lDfWbR%472)x3iDiN9v{Fn1#lEg-yZE_y zV|~VQxtEN1f~u}6Q2M7ivQk9n52SrP@UH3cOYZX1UHK7~Yl67D%M?`+z4Ds6$2rb@ zOZUM87-&6)QvJ4|w(pqZ6U&U?oHTotqOzgYbx#1|HUQDT4h?t@EnhjKo*xBbs2 z9)Kti%Ovh}js&rzE*%K;-}XUAM)>tu@Wj;e z@fe2heP_@VvJ%G?PPaY{1O} z87L*7JUcGi{h*WN5aekof7oPG{{UQ&uc%3()K=QZbls|%ZcifIF9L89Su`k zQ6U_%KC_Ze%NTEx(p#uqB`cc4M-bDJ)AZ+Tgy@BjZV-z6s2IAQyfxA-8!G!_D=M`$m5Z|Ngw&I3*y;2 z^>eNgqm z!WiuK3sb{zspsl{Y!&`}jDHl*(S^l5=eOnDZQ(lV)=L{%Q12B{X&WD|7dOAMvZyY% zKajJLRzUeU$I?4we`QZ_mByWwbcZkh0PzZc%jVaGVcTxBZM52ThxlT9K9l)mE7oaa zi>+FZ2b0d_PON{5<#5p}er`;bIAy-bH9hX<{20B}a{?oH%gF7O8XM&p#v+$*AV0ZF zG}o$mt+g?hJ-9P46N8T4YeOo;qxQ#-LZ&r4@5(LPb=JO~+Z=kv89dDgNZSN@dZ-mQ z%b2)QkoG;UibjmI0k-H}E!vILt`0A2#NeL`urPe58}Rk8zbl#gt>=@1-SaExt?s5TyJ<7um`0@X7oOO4|NA;iz$z zwy;Ry=@>5l%eva^g~uNSXq{Ka@1GtTIw@RFtb{OsRoEcmU}%*>Pt}^9CoZi*UXetV zU-9Gr00x3QXC?0a0150lReus|g)hsyJGK+2{mwWa(O>$J#+>D{if#TzZ_}1n9C24E zw3p^ncR9hRbWL!?Za&%WhKJYl7lqP;z>d(92f~DO_BQ&Qmxu~W^*ZF{{S{)jE>FjQ}AJy zY>|RnwpY|=jto=DC$v$8wiA1;l(c3TB+?IOJGoZ94FHBV40OU5!d7VMD0GJJtmo{W zBBmn?+Ud0M!HzMC;Q98iQ$0p$&QF$o1SO%kMPXyvj@eq-7BQ7EjP=G9sw#CeMA41$ z-?D<{^&Vjyb2S0x5W1L)cg8THf(3OW9YM;AUOtz3sZPse>9Jbt8>*e;b#CY%V3!P4 zobFYX@TMMR@}^9Gmq`Br)y@G{QxJb$z*hIdNdExD-Q@mQV{hu>f3jAg)+v5p`Gc6<((rIIQC8?Vh2|H)e-D9fT5- zLn9RVD8Gn=OG`U*)D=tcow3qg82o)DUtCRPnz%R&l)1y)J0%rcoaT2{@oVJ_QirIy z=<&(*K1+*-pe%+lj=?)}^BP;U;x;g0H8tS0Dz05l!|CY~ICa%j4`_KMj{TLF8^nzc z9nfY_F|GPmbk!|_M@08@=&In9u(YJ#OB?5!C(&{5#~3Q^*J>I{c9&@mbGgHPpvzX4B$byqsdImKLZG0lF#z>I}(zBzG%&AaT6TP#%64IH?`cI*+8O%sOg3REW5 z(b1Pprjy*_Z;YHE$*bbIQp8%~-7)~)zD0BVwu*)l4&M7O)BBxW$yB z&DY+}jeiQrr71?KqirbYjIp~Rl*6Jf!LV72lX-AJt8akpkZyh#IaqanNmcdSV7VGw zGUkcJYB9IPfJBEl%8stK8vTwx?@~Y3Nlq8lX{^IdfwSRP&8>nAY4o8 zwzgkXBuUeCxw4_6%yI?WFRII$hc-o+kSG|k1O%(zYJep5Uh_Ry1=dgI`medC%|Yfb zwfe3ntezxZ^Lj?8#yx&winZH$+F>yU%4r=$9hjTh^vJ!f(C#)bOTLhk{ zV3BMRdafs|C(K@R6ag=)_{zARvT2vqYIMG;8I#p(G9NX>Ig>Sm-C}f*qHFbE8GTm@ zl+n4sDDEY_3dG6kwK_#2CnQa!n3;gewCtxWX;lRq^H4q2wPkiAY9+zp{{U!bb4YZI z>=(}}Ja5PgCp%Jhb#1pI+EPoFtB=$ z5LR7XGs|kLE8>lTr2ZEvfu3KLs+YLjC9j5rV?!BD8*$u#JO2RDNT;Z5aKy%(2&)|7 z?C2a1{{XcDk>%0Az%iyM+J4OoN2#wW;UN^B^bd@KQ#b=8yaDBSx*FMa zeJmeQz;{w`u5KT&@%1eTbQN((=F?U38YE+z%$>}Er^epcJ9k~xR;_R9MwS=jCkH~0nQV(xt}9TYoLM?C(Kn%8|22$h7p+v$7viePx9=7@dK~awQ*F` z-5{xBsw8V3UCfYtxF-O(WDjK?;d8uO=RCJWA*Q#%Nl&C>K$(I^pI08`46-Knf0`Jh zvi|^Ni`BxavZ6=I^QfJG8v)n>+X*suiEa%T&n^x93y=K*#LrhbE_8<^^(q}B<(g{A za&Tn~2mPZ8CoZp2AsT#^UA`>Ruto+N0re$5Qa~S~Kt7-lYjrem+bNDgC4e8YWVg)- zF|;4b1{a0Y%j##y?)=R6nmD*%#H;vFdKkn|;EXu!(JS2~jxt@V82!^^u4Hu2d3Qnk zs@lF+R@AySlYnwisHz(#A#tVM(ZLNnTy<2qjCnD=e_o2Xs&DvaAN7SNiYrZD5I9FB zsNrdehq2St{;C}Bcgl}PwR%ksFQtYmw|l?L@_PK=TRhr(M`ig3ICZh}quX&;N^_4_ zH~Xs+-A>=(u;rlWt*l|&rX*FD57fPq%i$gwDV^6HDi}Yyv+1Ofw3h}pFbp?}j_3X$ z*T3quuu{n%lP@*6NG}G6$lC+DRyc-+><$POG7l|_!Z#gD$s2aUP`FgSIIfXAkHolX z`w^=O_FAM@#+|a!x5pDH=U`)Xx+r8M^bTx~iNo{(55H5f)U?iPMy6H8l3UrwPxn~a ztEH)@fbv0e=}CW!e1Q|s?n>Exs4cZtkr^(Hp~E|UBp%2xZ;oEgqp~KlN8W|IOI_YYOND(Ph{ z14KOC&uFAY$ws>)hg?G)xY8ZnUQ-daXFp!YHUA?_{GIc&iD zbw#AmA+3u;aY=Qmjkq`Z+Ng&zLUKKccs?vGmHZ zN8fVBu3cJ;ullR>JY47TGVsfV?bOh}RCH|-xQ>QT&H8j#FBkCxIC{%bR2(_){5p2| z$nO5l>bX8i-7DVbIJmfyN$N@JsrZh5m@G-}E#>fNcut$Da?d5Sc8|}guo6M@S!(ht zKyU$k=L3a|dJ8r=$|K*`C1zd_`k8}TNhqTmd<>xfZsqkjD2p*Z?fJcGo3K-&;`J zIsX7aBk6Bog;3K=eXR!v<)e50HxB;6TohDnh7lWEuozya!t`HCKUQnZ*51dC@sAMt z)A~5yk}r4a&DOXB8;s++Vq;F-5ZY)&WxIZ25!MV5ve~20w`|tY=v0Pa9Y3P;M)}{_ zY;;0a^&s{ms*p^WaBzF9Dqz&oLdluarQ^gq`Kc|#*}ruGAvU3oV~Esz!pM$fct4rA zQ|?gc$-;pXgPPw$f=FaolOQdX%_DF~SFggVQysGBP)GT-4PZWuZ~khQxJF~RN-?vr 
z;rCrR!dAm+;ro3%h9q-2l7G5Q{{UrIQ!S#sS!+ulJC~yW03`a>mo%!@^=ExqcH3oI z@-RC&2z1nqc6L-8Y|&DpnxeExM=nEgy0Pt&mNmI5MAr7_I9dfeu1P6jfxz7HiC(YPt92JCLL<&AP zrgQ=9m=H&rt~i3O=f+i#(b0z%2j%Y_0$ifzmF#nzVD>C79{89gnkso>bBo9Tjn&}< zy!e?WF{ou!d>gSK&=mKse*TAVhsWz1_L)kZDy_FNilnnkL z>D<6vat2lKMnP3y9MYCTXN+adImehool}cBG@5UO%DI#+u9dXLFhqw=Ft5%Rv&Czn ziWV>c)_?-9csAbJsYO(c-|~!fS42z^umHOpdZfLAoRp-R?2U`%^7&Z+`6qE*9CFZ3 zOku>drxY&c)=3N6;0PlLIg)YrC0^$-Pr{bUnIm&4!IxOow_VpnaP6K78aP_k00Tl= zwppTzvCl2e{vSnT!qCXt0PK~DDo=o?S8IwrbZv7BNEj=QJ}Xi-my72O7!QUwZTDT4 zxgX;e9mR}yoihjMN~_|rUqL?hYWOpMyiI>WTT_j~Gap1I0nrtV?64Onr|d?MtZcqGvPJea!Vx^{+Lqqq>RQ&HAjw zh<7s!AtLH0tR^J&Uvm@zD)*S8iR&eotV{P%d5gm$nj@^0tzcwPwX9tt>HY$m!ke3MsF03?1z*%00yQAK3rk{0HoIxb1wF441Ok2 zQ<|xV{{RwWrJ(yYzq;(#y|J;+bAwvYNhhEKsb7}g6mRe19x}gN0ds{-N#u^FmeHYr zf64-W73OmEU*w5#LR?i(d~-PLIgU6V{HG7@jYmQYn=MO>a2H{K4;+}=4Ikg4A7;19 zQK^iFoiTA7zsnzujnygg-=elSZiVt*s&NjMPnTwnJ%b&5h!DG>U@9WM@-@pNx3`r*YQ=+~lIA`OYPps!6Jt-cGq5AD*GP zZ~%RiLpVc;YU*j9dg6z|*(pw9g~Tzg4$c9~MsQN4;=^vW*T+0}SJg{QWI1-6H0dqX zkWTo@D8}aEpMJ~zfaH=*`T-{4OTA@dn%novh6cq95r?&mYd{TaLr5LfrEK9TEx#`3 z@no)!nV<%O;g>b0HhzERFMsH&y985J-6ECK2Np^tIo}69AJClrm5?eL%33(sG&IB{ z#&_`#59rVpwSl?CJiBuCLY`WSrR-sO;hQx@1xS+pAbXlylOqG2lZ~|m8QceFqM~b= z#P5hl9vsr$Yi21b2JN1kax(_B8l;a4P| zstmz9okCh0;b(+>ikU+nh&Vileuzc3cf*+81EYJ+cIbT5O*jtF;f}o*gO}egmV4|@ zp$}w&^%2X-)0~Wc=@W=%ZL-~6PzD^>4hSEa+YjAa5XbWj7-j|lN&IDIxk+TMG(58} zgof-kYqEZ=x+^%My5W2|B*f>Gx-z)^$_=m3fY(C)ErRT zj3-16Jfn@FKhz^N6#QB#99?Xslxl7PiWg^oo*XYr2kyD`JcNWxFhBahfs)!n@&5qn zy|4bt+J&Ew#ElXH1#~kGgTo|X1N$LX2478Eqkjfa$YrLeVRy@Q%ty)C?5$cM#7~;P zPX^nW&9zWZpOzUo0jF~9`l~qQ(>T4kUH!z(TxIYgN3*`Iu#&GMQ|?XxhPJ$KpE!cP zT^|$a2mZ@YX~7uzcK-nUf<>%t(~*8P934#~g-cxD@b1RJqdO9=WtwR%x%Cl^Q*Uw) z1-x>t?I*(dC4t1_V^HX&N7bGqF*Ebmtdesv|UCmPn-^qPEK2ou8EP5uU6Y z{{Te0V=S8IPXi9q^B@3_FePDN>*`g>O%;4mRnz|f*jO|+d05;>CNz_gf_%YARPL^$ zo*5bN?uoAxvqRlZ3Es#lx>qce4-LQ}U0Fm3-WVtLj zdY=)AVi@J9)F>`zE`17{j`*GYLv(j7#8SWuDchc-JG}b4`XpSR&~CDqIR|xyPB2fR z^Z6$%la!?QZ1woD!wxvYDwI_l3&?0amIhK5B>U{NOCVq_B?^tQB_v}s;iEZAGf6et zq|(4nfN?9w{{Xaqb*i$edfJbarOu*%5##cY`l}0vtdUYf7z`P}%H8nY1Cv>1w(O5J z;=U!&rH2%^{waT;jgH(X;A0xV0K#sCxFBP$%i3C9fCHi?X>MFg4Ro2xMn~0K29xv$*y!V=7;8WYyc~d)qj`X-U&RAP>!B`-IiZ1gNF7R7mh+B? zY}d&*z$l(yL`4p-sELDj1fhGnF^qPXrJC!-M1bJ9B%0cai5zt~=kAp`%0Wgfuq>Az zWT!9gtcWISm^WPWL0tN0gNFq3MZ}s~0dY|YIPOb|qR^UF#~YkaWypUNr)^IV#_)59 z@P5T!S3V_Zq>KT%DT50HIs+o~S+_~<+ z_b4U1JwA#{5>2{QxHzw}id^jVwzmO1Wue^b=QOO`Eb7IamU{Tuq*p=2yjd zxa&Sxle+a{LgYc_rSNtagcXXA8)o43?zJB)%}c26qDHmuwI3yTT)K#JHy5pLk!mh0 zQ=C+al+4{Cj`x1+(pH%c;SyV-rdhQaKVh=M%};L1;0}d;Gxa1C;|i2or>r`hBUmZR1PER8U0s0RCt^R znO#r7)cl)WFvm#3)9~2dR{at(d|rJzYW0x#zF#kwq&%5i{8Qo|`#-`p#gvlN&02m^ zIHDvti2>Ad&`85`znc8@zd=#+FO*y2veRf9p5k3G1GZhb`(bt0#%>_Lw);hEwb8{i zRMcW=UjB`eQsMl~=+^%0k@$$uEn{k~)dHesxA3Xkfg3mO0<(@S#+6%ONz*;y`Zrvu zL@~I~TZxUb%17c)s&)@@uM0JMEz-$uveNvuJ0x(DQUG)Go~|dNt2k>^k(#N={{Rih zlzB9K+gflHhx3Lj)!sNEbo{Eha0k*IoZ~)gM@fquT&0n&HeHIDtiTuH>Y22s+1Z)|!hBLFx%IV6m4gqM9S z5>v)W1&qtF=5|lXHPq+`Mbf?b3Ro*HGE&vZbZT=Y4FR|$=@7H*T`%mNs_Lp>mPpw! 
zWN?PW(lq75FdLox@Rh93WN}+!Xkgb@MAvHH#7pWA>w)?&tLxnNy!p73gR#n1IeF&W zVPsoHB!PFU&BfO`MzVI6HV2H5(sFvBHyX;BX(e!-nt&c1BP3^jgQ6XGkz0UftGmr+~NXAsh3}umdrF(ZGc^ZS`xelX)czN-c)81!<;GAn_Jo;c1i%Y^{ z+eIBzcc;uXyoPBbIX>yCuxW~C^Min6$OpE`Ue!$yrGO5u+pc@=k>Y7-^tmSTJ}EC& z%N`n~pHq$sJ?+0_7KWS%nTGDx{=L&p92~(5v9^#&^+97{b6f}V^+)XWEpr&x_*^!} zV~v-Ej_-5by^!y3hfGWw^Q&={D& zyKm|vl%%zBgKnj)xeXzX)0a1AuI;OV1Ix^ExB>_?^t<-wFSS2W@)VE%TjmeQI$yau6uu$>qZj>RlH1F|| zyo@8n(%&KGtovg=AqLO@_&j_*8tvo2bW;BS&?>sibGWkSZjjYT3ux7{H01mE;bGPA1o`>Bx_k5jE-XEZ+kyF{3894s{8eCxYIEx8&?&E1_?|Nye zviY7e8yPuZ2hB7(^ImdIbhvM?RJ~1ks$_Qm8I)+bgRcd%AxBP#gh-8lY zGe-zjalo2rT}MvgDRXw^=MqBp56#%&J~FG0UfAkgZHqF)8^JARoVkGi05YyPPW5t* zygylB5>pRWDx@;z0KZr4*sYmL=7cOZt9!bT;atalE zbn5GaeoC3F1~-Bj-mY%{0GrGz`sqK!7FNDKMLU|xhdG`I&Ue%^x__v;+#r zN(mSk)Yl#QCA;0i;M$#EHzPp-%^=|C39{4VcB>!g406YAcp2cAjn6c4DOk}M86rIR zXPKivW#ik#)7&wI)deJq_j{}*49a) z7#LjR-1!0KPe8dg^?aKBUsqo_A@fPIGqV^rYe%q59DT;8h^pUBO(7H%&mu0uR}GAW zdj4NUTbT$LUOQQ+SsKS5wmtn>>-;-N@W<-SdE~hCM#sElty%S2OZ5|zvc=MA2euUo z+|_&=T_9slN>jSrUiY?|pLt)lf0TKZy5Vvy1&tYTIB4@FPOw;{&&*6UNb{0P*6{rk z%{3`6D*omditE;%4;&w!OLyoO;oBIdV_ZgDcO;F;D#)g#(&p`rlxuAvC7=ud2Smo| zavIXc<-^PSuTuypq?cpGNOUR8-d7E)!K&Ozl$7AuoHJ ze9ClFl#*YWF~w`DTQ1AgGegCdG<->Kztq*mHA`!oEmVb$Y>$op2e)!Y15U%yc64rq z?gg3gjhsG`I}o^nqTfd=Bw?Mqf--VCuB-TYbf~;=eVS6{Ni95oD^O1PdkLveyODpNc|1o)bOu1dh@wll;307b2b zP#If*#>anU)ODV@nFk#fhvlPC2Vs>hq+_b_%xkh#27s?|bK3TJyg;j((>FTbOzh-_ z=|5#l&m5GKK6VXE(oU6UaHT<(-A+z?V4QpIs^^Wy{R?_|sLe0tR5h=c%jHbaJS!Y5 z*O)^@*@hM+t77?aCY>uCD}EL@QAZkdtaSP<_+DW}9B9(K(zjuKg~Xe5h42_zFPBmv z$903UwiV*zi0)}COQX+aFAKsDxRYS1AkSqD^qJ1c@n|bqU{TU_i;tlmN-O9F{uI(< z3rnINN+=}6*=@}~l?k5K!=DO#k|E@V%?RIR6p2lUGFt;l!&aY^a}#{g4wsn6Gnp!j{uy*c-4@+BQ)k8wD@2 zp%iR3La93xr(M2NO-^u9y z%TsBsjFrvtJZCBTOiLzR_*<5|UsUv-9VtcBrLq@(De!bdhv%9&hPZ~Xapbuaw+r+e z;)B@_4O|HWXkGmm=kW0oeAjEBp^;Z~U;T~LWY6LG6x)3{HvG}31+}4lrECMTqI+l~ zHU%lI%Fxk;Dl&#eyng4#G8P(0Q=5ljd}Bdlx=31AntCZ^nC!98y3Xk%XIR0EiFhy5`Qp%4o+;md}Pq?tbP6jZ4dpKFx-ddb+4+ zsi&x!;jzTR_a8u{E`9jP$8?lXixe^wBbe!G3r}|J)_%L6W$1BZc_yRVyY@WPrP7=I z3R{YJn&oo5UaAdCz^N@PI2jRvfC0ynq4{{0j$V<3s;&xc}0 zkO=*9s+#>%RS70}Cqe7iuVtkAV48|wkcDoWeTRM=bhFy)^@F9>`hwWna5DtVlP>e? 
zunPE|hN21OFg!7mwUV7UV@Y!ca!*xA@GRJ6BTQ{`XBQelt0#G3Yrz0+euAubR$gbb zT5Ali1kf-@PLbk`+DRh>_4Qqe)c1U#Car#8J|Ggj#}R9A+K8U~5J2UfT}PQXD!05$ z`7RTO$aH|w`VG@ysH(60CNc6#`i3-yf=R$9^w|Th;m#s5RZ49Y8eJoRbM&3kV6OI1>4wD`INkJ;=CX!dTa&;foFiBv(Ig`3pQ`53L zwG_jdGa4EtjjhSkp2Z(LFGX*tno7IHBZFl*Y+04ZzSv2{B`GwYV6u5qD|zT2Bkj!4 zb0tQ4c4*1|t6k~w!00KpG1&1q072hu5;Nt|8@b)+V!6K)OyXhD2xK4t(0o|SwkOR8 zsC6`kS!u+~j0Td~n$YP8IU%|UAFou$C|xVRhC))L-F~F24K$Brvd2>J*2KaFq5*WW zwV2rYm1NP?+(og@GwbJE>f{;UV(qshbcMs0R?|K>r(+9Eb6>5@07xNYp4O5!&Q9Ss zQOO&umQv|h5vS)jv#bafRa;8R^qyD%A-QQ?r56i@MZjoL>#oy77%&PQ^kS6<*0-{eR2opFk8P-fP+T^O=aPMmie=`SlYb2%YSAvfF8u)Ji07VPx_0K8I zJDvlYcQLuLj-TyThw*AA$v-XcIxA}!1`gSmR)2LwTSQSz{ZX{LlRI<)=M#dq8zT?Q z>5Mi{o|V|Z{PD5HE)$Z=1+@n?7YS_5_dXz@j=JBDX)oqRPLbuDZ|s(MhWl@AO(WdN zoJ$BHtsR~2rCp+prqd%_!d+K_b|=K#CB7u(NAh5n?&kn-GCCY=n<>iQ06|5iu=i%Y zQp<0qbK;vxga+tt!%H{!J7BEuH%ol9vC28a<)>r*oS!b4K)6D8ib&ZUptL$nj1GsG zd;6nTF%zjOu*Qv19Q4L<>XnVR9Etw`FjA#Bdm{}%f=1w|#!bq@=3{IzmY=Dhk76>T zhZne&^SrQdoTL1*YxF*gj%sj>mklTSlEbr%p|!P6)FF>|IPL!c-;$A0DW-(H<0=Xoo4q{t>ZvQ(T+rVOa3ek@4O@q2u2b~7XP+WmMQ(|{_E)e zD{;il($`SagWfuPxPRj%dc8bh1=AJ!9w$@vb5E*I-PzKvo;nDhG}1B{DUIV_$>~XjJNq>^JLQdRVzrZo~3zOx2MA3<*na+HYNcWe#=NgI6Q-wM^ z#Y9>E0M)U)IChc1{^81XTkVlD9DYXWXw*pqIp6M3r(ErB;fGFXn#Fe?FZuuPY z&9szW%D5!D^5=b4W{*6|H~#?sYT5WH#DY8B+QlnGEv#qF(zH82i5Sx(M#HOce#)JF z?$!o1zkl=v{Psgxj>2)M27uY^HQ7Q zLxjlhxr~FP9vAyerj;i#Yjth4xpOP;s{_%&(Wrjy_&xd{>K(JSBL&!ut!3JXoyo zydiPMq86ugTgY0-QAZ*Z$5jGWXDF+hqq;2iLqS*yNOP4H)YzKmaETt72@c zIHo-m(t9p^cZ#N0f2gj2fe}V!sfC#Oo(Jx`lYnT2EuNfX9ssz) z$xqL`P_r4#p?7}{Kq0q4G5G?~@aW%L_cQ+hYO-|U)8qBFR{sDP=jHI#!+-ciaFlMy zU9<7Al><^}-GEjFizbz`vVf|{<*Upo1T1j4 z?vqBIs~j!ms~j!|q|>LW#Iq zjI2mpLq>uYw+tx+Y_$-iM3EiAXpbD>?xeVERA|4>9_g^;&erXQe+8k#=M7@9sYoCy z&cy6Bs>q=^4O00&d8_-S@*aqEQgp0fZa~>h4A~7?QEUvEqyz%1U{gfw*rkfywJ2nS zxF(zCc!Djm7~GZB^lS`;fO#%%_+d}Xxz$QXKmzHAh8PzA0EWh~C+lqUJ~KPaqv((q ziSI1o`xVaQ>}Ci_j2`kk!ol|?a3%iXQE=iJC8_aD{z!Q{>Mte4D^H)K*=nGV1DjgS3P}G?-NbT`kfQO#&AK|dOQh38k1Ye{{Uop`T3;%kj2K% z(^+ru2A*4nGv+cj?5la=WYukHb{Zo&JO2Pcn>f0rrmDV>OOWFCfDSqa?snOIOT*?m zdMeuK#RVlnb45h0E_9n@2xAy8_u0Ow`5IDSL+wFc3aXNw&iQIx>8t4~3yVP^fS8;- zhIMB$}9zsW6JUcx1k z!+4$u-yG(Rj&zeq;xP9T-7Onns9Krf@!PeWK`1Cej#%vcQCgfa@|?Q8C9XNDAua|mTGob<Z*Ez_OP@7eM)UIG8$1Kb_4pKe(H(gULiLaE1)p8 z83rChPR`ER8R)FYF0ZSi5W56v1N2y4C6(sZWtVM~ekk<*6^c0Yc#?bW-_4JZRZ~ey z_c84_-M*?9@%SaGjjwK!1~KN5t+LA$?x!y}2QBt1ZStJc2dacv7)NyT&|(vpqZn_{qzMDuX70S@1CR~)D;aiw@F>3 zYN)^fe<%a{D~@k%%F&NIb~I6CmF~wrN%`BxnbuuBw-ZkY=rgz61HkiG(Q>!1`V-dHsKgPQV7U^Vif znpzhKS{&NgHW@j|O8iGxAc~33V91f+4fZ!j5OA>KOOfI_=s&>QWz$5*be5^TWIySTxB9)6C-LWlB;k*V zFS)@H6j!}Puzz-rpds%caMmD}46`%K@HOi5^ZId_`zvvTXdNpF>|4iiyGPc6WY z_ni!mNdpXk9f1zPBzmr?(9qVp z90p?`dU`C#TDZ%Yuye+wq_~o@46%d0*z6XLL$W*{QX*p$Z0Q^b>CkWFqf|=;ByV+8 zu|mcSm&)J(9(i#Hni6W{IxG<;N02%GibE=8i>U;8pZwAtZ^XU`w`*&!OsJ3?a;ci> zPwLU9_ezx?8SM>l`D+Dytnz;@FX~E)eu1b-_AeIQV~lvqG1ECx=_={vtAhVK3#due%^mr>SZh4vOp z0ah?bNN^HA6+q$J}2 zp`Vs5g#_g&rQMVd#9Gk#1v^gU?iExcAaqT4+J{^&brKVnhY%1+BHN<#S?BdK%3ZFJ z>@@h+EI8xsxd3)jTY?l~2r5KpAw)|l8YXT+1ytUtl1Ahpm2KFlk)x#L8~8dAClSyz zkIuWM*3A@=5A=n^oC?icPecCVyMBOTnV;=i-WSX&`^ zX$w8>dpuW-)8L$A#`s&K7+J~WC>eS&;hS60eCDj0Hd!bgRz}l>uq}nF3>C*w z?iZL@FL1b;HR)K8w&8d};uF zz-mDG4m=ib!k7tdOqlwl>+HG#gAEFs_*D@879WqP#H*QlT?}gsJ`v^Xy;1P&71Wyel1W`0Q=DY{yl^1{03GumRLb{`x;%*MIS!))x{gSvLR+bsMm4L!>L095<6*vL6F+0+Jh z0j>1&>Esj}bLvI9j*o@E5~19=>+W*5!yi_F3C~X6y%9-m)Lc1I=Se8KULMgWHba)i zTKf<1q2qFVI*e{4dw{Tv5y|2c0NG^ z;}&rhpi#Q(c{7-p%4RcLh&-vJZlVBa7#)#2ozm>uXJ)!gnEsf+jARbj$^F%5#+NfK zNvW&2Xw@vw*CX`+3C`Kyu6izbB^s2RQtpyEGf!lgrNw^|E>_BkDCuZn6Uhk^RL$cj 
zSjHFzHXX96D*Nr;>`AMxmPX(J%;{za`bGko#J3o3w>M1#e+oF;6CObY`hPJ8r>CL^ zq+{4D`R2(lFI(wbB)FpVF~T(ZS5xKpFw(NZ*^QUj2_c^}Zca0?Se?PyYy-kivc!V8 zgH{$gR?<4JFt{M^3RHZx?wDr{m^w4}8-D8RwvI(+f_yI;oA+Gv46di137e2(dt5%p zE1?@JWTK?3VaOOH{{W@SGg}&*CF-m6HoP+<`gtMf@AV7aV|^`U4NG=maAEuOP4A?J z?`M`i?qMH%Ae9Vl!qn!p<-vfr+$=bH@5^%|kl;IKdKFeLbj=;w=YVa3sni9KK0;)G z=y`C#$nKu+anFG82|LsjgGBc1(Ek;gl!CjT`8AXNe<@I9bJ#HOy!RG9<`7_9#sD4mggYcE`j* zL9COl&c?@Vg3_Kx5ztmZX)CDNRjUAP2+|_T5F}HzSfU1tk69tOX^dMp5Cakcyi0f;z~&VYK~@0b%n1EY;N$x z$3OG5cPO+?<1uW8S?v^a8mhXP$q+>Ci|{^^*=pf_ALANXt!54-6SP-LCZ22Eu!LZ4 z2;EPoi;XF*7FgSHW85jrh%Oj9o=)1@72JItYZW6=qvJatP;0d*BQ z#A9a0%x*K;$h)GV9mnD(iQM`oO-l(zau z7~^Xk*SXBo9+87C-kp?tggj#RYX&0b(ntvK1L8eLs!Vo__lF_Bx;*eA)S1}m4P7%J zd4Z1Qv6=t`c`I|^ZH`;l7EODVhvjVWxv^)VsGJs$yIYcd)ltNvx)xewt-+p3Fb)Vm zNx%ahSzVv-l7`(uz_y5~8UxO!RvzxDC&o|o9++4X$9mkYwY03v_P6qEBMABrro zxJh3USty zvC%(CjqEw~{{X6$6+7Ed*K;(1qd0Kf7mSYRIkq*xhD`SKJyB{)Yh%>!n0XD-Ts+7G z@6*XPO}M*A2-U5~dK~O(l)pa9Y;3?CRu75MXm2-HxW)>~H^?wSYCZ_$-bPM402Uw? zmkexu`Y)R2<2e}hUpda}q!H!piv7q(l`&IIM-vMe47TUB#5jUOWb*qa${&oJTW_<| z+Ty0AlB$im1!Vb?oX~LGv}42@)aP`u6pVBfI-$$7RL843V|NjQ?uM2^=egJo@*Pz- ziDY+l(|ml6lT@u*_xKG}@&5qC3%i~3JWV_;Kk(@o$Or1dpVXzgsAa5qlCGwDr}fo5 zhC`qEiBp&&Jx&Vw+_DdhCdU*b+WtW5eU$^73m6&$G3v4`8*X#gs+q|I4Kg(PXLW<& zc>KGMBvhATEQVMK!?k&JDZblJ3u>+S1pXPxKEW$7zAO=poD|JRGo!6$&BSpy?f3rx zqNk{7R^6EFcfaI!U&9i}cpHaW(ST{(8@RfO3xkT@Ql)Vv+TQM;(O#N+PqtNW(b zNuNTYZ#~u~p z$#}Tqdii|5Oa%FSzF#gl`Fy@#E;#vozF#gl`C`6ap(OhY)$;jC2aY%ds<)2rqMD9I zHQpx-1!Psr%ZbOxE_UM@=T_W#S@5%Rv-R0}F}mgKjXw;-^)n{c(cDFBW!WdW&Q`gj z*&1y)!ihX20_AJp@IwU7S*SBgH6 zM>qU;sSUK~&hC|?_guG3H8pfM$H~tkot|?A&lE1$EWLZYui*MO>|{DTCkp;)Nt<66 zqK5d(PDWAJFc70j4pjutUpz#5c`!gIpaIA+VU5U@JedP37|c$oGFWQ+3v)(7#I-?g zjd)m)wl9cWMH=wEUg3PWiaf%^g_vG0IMs#W3p_6fTnk3LF9=zXyv7#-(XR^v7VIoY zTth~Z76b*jfUqHO4H4WHk`$)}p}`Qgh_4$Jstu(3sSg_{(d_itcQZUeegwV;dsDNe zT2^thHdSSrrNz4|a?J@1YBkC8tzONKG(*A^_lz|{r5Zp|rFab~$!ugdYz1E68s%Kw z>Hsd?IbG_58tv6lbE`5)Kq|Xd$3@aQtKpCW%}QD}Ey-5Ws=gRN=GbV-{{Y#AYOhnJ zRgHWeJ6!Kmm;R=6Z^oK?G`HJ1thcs#T}I#?jr4~dy7{U);&^I{sNa`yU1od;&lGZuR!vlyQJ1)T39{m?W@E`t2s^a>VU26+=rHRgM zJbB-D4K91XK%Fg491w(IPF1>J>=lM{l$^F#vpKHu$CNb^m}x0$WFjXwF474&#>1+q zc;)qTUPCGbjO9ZfmOZ7o4sNV~dmQC{vrzE=01sIyxpX#a8aQJG*op=)40R_1s{Ewm z`{cY&#ym-Pdx`nF`CS+%q)j>)-?KmqJ5i~gSg5p;zQwF~;|FT*+(9hCUN{Wx(QP;d z+45SILdDteUX}0|Ub#sJy&48Pm5C?IWtRF|f!TOY<#ES{xmt?ZmF5>5YAVg)P9~#{ zh)`EdBP|+O=lw2kVV=Y)%#3$fC1u4Nv8VYoS;3xqdG$t}s%iH}Oj@g5NI7mgqAl>a z9D)|9;L|(ir}|uV5D#@m+IW)XX$_5%31IIF_XGW>vTcWtX;?utl?^0}hPk8<*9$&c zT(NZ&q`ggidi)btDRe(ZS?#ZTni){v2=h=Ye}|ZyOu7eL>c&3l2Jgpgk+HVh8{Lp~ zx}ks;pCB4eQWa%=Wx`E$GZ&tLy&64&IB91ux%98@22T*j53=Pi(H0ckX$7##+jCz$ zz!tW5xQ2pzj^_%sn3-JXJnD3gE^$5bbDz0bk6pmscn(fab$1>pVpSFL{Y4Wyy`)W&X;tgeIy{v!iLgFxxBXWQwAC`Uzgg6SlbZ_8uJV_Dpnq zZ3W7ek-@~YuY01#i3 zRE9eC;Jzog3=lj+1H?P4I*#LiV<-8RoCs;dh+uXwUQRUU^BgV|hOVxd>V~Rlj@z;1 zbm`qT(tJqcdMMo-d_g;y=C!0q)<0&vzZr1PQc7fXdUrUvV5;;^b7 z+mk4yAP`$5U81U_;;claXj=HV1)5!pLF|3ztw7WPUq5bPfqdoa|_LKX*drr9b9j{jS z>aCVJt0tRV5)wB!((oOpag1fB1S|gl;LYxiN{hXkmZ9@Cl<`&gj&zcFWPSEgxToN) z-t#;()ErS17QxHQCxE_o77aT&qzC=*tj-4bbztBsIij~RNor~XermE~s#!~V4<@t| zvqHtqQ`Jg%;?!wf?Y&X09I;v$VQ%Mb?ZQ`m$8)l-{w(P~4fw=4xgW#bx2~a&-B_+_ zt3ESJ$oOZ)Zq7A96LbFQn3;`UNPpCdbNeOm&(3zTSv(}*A0;V_5a$wZ2XM6HW!hc0 zT&%=yYs;E<*yOK6j=fCT68W_RfYL^Kl_r{M$fK%bNO98Lj&r|}2h~Jqo$d}sDSR}w zZzO8YGmP{Ae^l93E&HHXKLGM7GEA})@-Lv~I~b z@gGHmjv#E}u?Ghr>=v@^Lt~Bdq~MJ+p4xg`4MUw&4}1*)4lWKSG2rK--3*WUYIX{M z+|N_sBRTL5f`l~(Zv9T~@3K@KIefqJ!7}mw(Y*9|-K<#rVd^$1^e0z4g-jL_9yMh*UK>DNETFF~vIf zXKYf^vXYLLnS%|27Dfl78XNtVPLIh<(y=XA^tbSNhj#$@li_@$ 
zBBNkpzQcxJ3;6E>9F&i!?^$t0mC^7%894;RqO;}H^x#u|RdJmBK*jzeFfO;Y{_oT@ z0TDHan5uDnUGL^OAq_1Tx449wiIaQ?0ak?H#%qq>u(uWga8bQ3w z%$Wu9(>YuQ9CtHm9uys}A_#ulrY48GEq&L8nODqtwlH0*ZMT$16TGn(Mn81|dx-e_ z8`Q6g79hMl-F^k2mM}0m9wJoOOmU#m-*D8<2qpG*@-A&Mqbc* zdTsE0z~!I;I$$wcr{OC=UMDi^k!#Xm-AXvA1D4uF*miH02YeCy4r4<+IAr-_yz_nQ z6|iYPAn^)VVt57cWuOi>^htDa3NTLD+44#1s7+^CAR}Ba5{%(oR$gFup9d-Utu-Lt zhi7}26Hj*qeGq#E=#2{h%NC0N^w7AT7^(>)K(^jn^htXNCb(X3F@Oa#f$px?M=6A2 zKRvVyX)vj-_wXyeyaHIR-f}&cdnTVZQ(E5{?7)l=u006G!|f8|jzN!h0W-mWgi9<5 z%!D3Y)YcPI9SWuasvYKv>T+^w117dZf7U8%1h_JxO zR{&VVHIexS3qe=*K;tb9|KE# zdNh^~SpOSvlW!6o@wP7hnt~sJfbta}cQsVknWhDPOz`6=;@;uBG;_7>_{9@Ey#mzzF0c2oE5wHlrkO8)oHbjMsQ+QTZuxpb)sXwK9baPSDzExOqOKp^SzOz~Jin$sp%jF% z{^h4+@go{`ruZ0+cW$SQV>-X?O5QTBPhd0fT*mxCtUVs0|49S*7})iYfGYiI58*Pz zgE|I%OHTw66|(s}9PwzRFYV)(>%4x4)%)-ZRU&tf_41boH3KklpR4yUN>aj#$^;z_vN$(o9n^sf%R|=+bV;lsJX?qq*bqer$0O#K~in{^QCG> zxgiY&imu4I%>=0@BoSlS0m6ww_MYhD9*$36a-u&BO|VmnoYqB)U^sDy9(0&wgGwVz zJgdTfUaadmxPxWRaTy6fgEBB{VQx2~RyaJe97SPe)1dx++Yichto-s^ zu4`-+ntt9SJ}9iEt^p};u|GHVAF>*#N+@yEQ0>2*|LsbW^lwXuf%sd-9&mW(_;e*yqTQ&7YM%dXs_d);qnkEwYOcPUKG37v=vcz6ZZ)&E|6K1Y# z3!zX@(~i+h)Xw{JzSTJP#+(?xXTD`2IX^nw_Uz^&#oo8D4+@I@+=Aq=dWK*qdM# zteFU9@f`=~2)p*~#k@G+G=1JY)iv~$jrH~N#G31eGA`QT>Tf!s|y|4g5LWU^{wK)0*1P#Ko{YL z+bn2$SMEjm*xoGlUE>t&c;D09oBlREk5gM{daZQY3MfAu?(p#6(09*&ds$soOlJKIOn_7x`G43@}v0MN5 ze@ur64i4bkHw2t_NZ_6lf_s+O!4aG{9mAsbd16g-%H>p$8L?&_$C zUE2cy>*yHhAux8}UC+ZbAPSG%NgnnP`n2(U1mP(>EeU{je$&R-M>8)fm50!tHmb=s?D zljmM#3ZF#2RO?@Hz~0>##`<^Yn|4JC(WjpzOewrE^$Pmcb1qMbnwA@pa1RP}Y`c)L zApEGP62>}kE_0Q*Iwp)X^~IgfSyB5A^+K_SS25t8CJu<>6||xG=S-ezgT=AEXP71+ zVBTau<7i#moKWEJCz?0_#FNFRz@tgGFwP#AP=DKc=Znu!zTvsQvR6Q4VQ;Agp}^sv z0mr)*kI2H<0I-rA5c!mUk|UDb&>iV8V#U}#RRUp9pT%DDceddjZK=uQe42ZeXJH(Q z)ERyJmS$Kg!oJghB+i+FnJ?>o6|Tty5M7YXmq90Ub0XbK&6eL(+Ew&is&a$(l>Z78 z9{XjXjflkgwxnZic9X==t;FyYuKVO@=T%|4WK}cTdm@WfF0X)GhEl20GD9|LQp^yb z2x%k-@2MdF@S2Ohe5M>;gA$uB;eiDI^a&u~9uYB_f}d#IUOi}#TXC5LI&WIc@mHD| z|Bkq2B>lKmk%tv(80pDcXQ#5pwm-(g7?EEyb?PX!2U1OPmo$yX%iJyZi`R_%S8?sG z`NLlDVrq24&lxWd+5&B%8Qo-Ro6}|>fw9^cOfV;*ey$Gz%_%ba7SW~U)UQ8H<(pkt z6@dmc{$4UBSxS1)dVFa^vMHEf-D1YK0evfTs{6J8=>1V? zK~6yQ3P|c8rWpq29wo8A0uqW~D(RnG*rmG$=Z75S$AgN&m$tLU&p+bMK2=c|Cb6c%(Qzjio+ZM!Zx1Onck{sZU~W#gIGXlJ z9tJn4b`6RRg{#Jz&tMJhvpD)xFlGyj&K(<{MnV1NfX^e7dBtBuTJ@U_UXT zX`I?SMabpiobzX_$;=5*5_ zEP9R*>)io_(RU5XNe)_?ae(6MhK634iQ(qn8)W_VTNW)#{PZ}H&#d*0$1MHhhMo{d zq>10=(ctpS1;}8D1v`fD6`;!<@CuN|c?En(HHIowsV0+M+Sy|*iUErDB2Yfq=_X@Q!$6L{#!*Pu2ypq2)axvIfP@Yy@<$f_0z9i_%;T>F7)9ttr zeGVozhH|ne_!SK0i_i%1SfPMG+@W?sGj*5FTFZ}M(2WjjCl;gP>=!-LN8EDA<5AV! 
zw1{7LJD*B(dX(yS!TM@nlWEY$l85y4_4siax6nxs$;0M!Aq%{5B(G&9yiF)U=|@PK zhq@KoxL2AI>ZDq0-8HSl>8!dA-0jMF9(u%U#WYqM7OcksHAwt?b$zQ(-xm?pMk zR^|_?EC;D;`#<1ZcocEKG_|wc38d}eBZp@b4M*oLMmM{wpv#y89}zywKR&mOuK*?v z5lde)#y+US4uZ^@Hp^#0nbFLQjsdtR)+h3I%IK@b`Y(9bxx51y0(o127e~df05R=I zzgEj1(r1mpJ%Uh0OM5{waS_GJm9srO^Uzqnlo8uY{mV|HCrFas@jb4A6YAa0*E1Tw z&-86?`-6qn zkD0*?7d7!jHDh&2_h?Pn`7d5icNfba>ErgYh3&_~$M~CWO(mb9*)d+uTPHYcJWpz0 z*USBvyclbyJ(h3jCUHvA8)2BwIjmV79!k?v?DMC;f{qH5$Z?)?M~T40D!-v_Bw2g; zdP?*z@s9kAthZ`%LX9kPo;_{E7&;w^wZKXahX7tPhG;_Xd~05_1hSTei`Kr|Df+R= zR{(tmzu3oZ`8cIXe?1)u>Dy$L%7UHQfr{K;_gh=O3-cX?<+kJ1-F`GsNJB|3;^&mTaB9m9ZmQ9xpozg?maNKmjKQf`SM=xo(3+*!V)kghZ+q`YjKJ3Mkg>R<4 zneS)u-6LlVXq)bQi<3R6>lI)TYrAS@m<(EFLKS{_S14}(pqk_AduGuR8Z?#-&Ixcq zubRO^O_S4AgpMG<`21Jx%E^ zN9-Mm3_oavY2ev2%&IH#V-T&pg(Uc8E$FydkC~G)M67}!>B$DHPo|22Vp7mjw``Mk zMH3b|xVvzlcA4GHYtVZ_&3iOzaUdL_fp_2BahX0)cQ2byeUs`gIjh~uc&+n|i@D5h zM@dc?u0_M+D^8jpGGUg|k{4>`;F!^9Pgs;NeHs9Q>}>TCQztC$NY3wFP=g5R_WAVe zt{MfyAIXuZt>ZB}EvFn{l@bomCH^~-^~^3Czcml6Q3!j$9NY^SfN!?xw%%DBCCmJ( z@W_T&oqd>bhwyjB`vK|%ExN;h834?HW|_OyT)|bx&o}(!n?Fku5Q8vFzxOME%FC;ZMjA3puMcE^u%JA{M~K~gPxq0E=i@*Z; zw~B@%j%XvYQWqE=OF|k?iAqO+q>7rdhu&nkO3TN<>N zF%y#56_e^0)tt4acM3jTWgL5Qvb5L$}cM1E?44ARLR*kUhP^*sgO}wum^Mm#Ls>#=}wHb{rl_ zx;+5AHJfjH1+-daotJ!gkNv@_|0~Ap=E03S3t9F6(}*f#A2kxT0}l1B%TF>w~&` zpd_rZyy;pAKbq{93iZ#I1Als45@Ir}a6ot8heQ9^;<1lcI zH|cSswY9vjk_8=to_(Z##+~edn}NWTTP7WXB*9Ba)@uefBc(}z5#3TNOAYZ=%`W;}bBHoeh$tKr| zDCNeW2x^g60AWSNW#JSvA#=&y->-il3}l}suMQCeci;ap;h0o9`_x69%Xa!s2h}Z4 zQ~micVC`F`tRf-om$R5}Zfm(;loF8sgj_=I67k6YIFDSll_%@0ZLxfUIGNpkt! zepldgmK>7+oZ9oO(z?s6Tgh0Wy(fVnG+v)L2 z>i3oUu0if0hwf(k!<^2MSvbr`SOB7cP}jjW7*T5^%;1v(AO5; zdCVHGlj-EO7$$vqapri=`*Ryg;)ZYa3it-sF6}+wWN;S6Wiv56&!;iX@-fL6_#ou{ zC)1H*H`~s4{sFd93>?~nj=ch68(3Q5c;Bv;;bqQdvZNJ1qr|Ee$-{inFgO?*!`T70 zySGAJ%KO$HHgff4$XDSNK$DCc;ac+Z6%gd>x77FfIYVE5;0e1Dan?t8tb8X^b%n)M za(}7s6<`OxDh^HP!bDrvfx~GhlQ0)PgIM050JvWa9)dr*WT;##3u8f8S4I{lzfM~N zTdHD+FZKw$f zJ49Ngt>7J&SYVM@K(|RxzY-! z?fC4R@^po>!LX#q*Ym}N{>s|uv!nkAQ9~8rOLzJ6+#brUf25H`?x_l+hcgrVS4EKp z=O)dkJYV>q&Vq-i{r>T)y4fqBsqeX9e|ry@(L8fo<>D-PKbZXtk3sAXjP0vP9Js#+ z9RtuBKF=e8NHo7WdmSfP^&X4_c$v_D(I(!tyKYW)To(2U zAQIq)N)+unl{|cL_sM8(C}llrnkrFv_`)IjkZv?1He&;aHhP@>w>1(xpf*!sP5;ye zvesD?iJVhj0XFoCwYkDOG@(!hI0@wk7H2=@eyh7qh~jet(<^}bG==I`*SsB!ZAFE8 z80$RkK33q_xF`Sel9k~O?+2isCVmmVIDS6J)fbgd@Aj1)p&j4pKCmza z?K(78`>Kbq-^jH_I>gJu9Vj%_EbFo71Id1QZU^u3?2_n`2_7zG#D`InYgDqgQNTfZ zVfIA5OMMYIYP24(T{LhtoSFyiWqls!=7_yt^1G62J$P2gbsZ~AfOVHLR8mh}CKGD~ zNqQI|9>~8Z?obded0;kmGq4@2Tn`Bv{bE_LI ziZ)3VkkUYcZ$n7&ts;oJ(uCLpEjBWxlDEogJyQ};FydNys?qb@9MqT8|M}}k2@yp8 zgM^sq<8Xb)ieVM?ua0AZN^|6I1Ig|_45fQ=b8qfYo!;=;3)u^stH~8qFkhP&E>~{i z4k3&2g15lo;3**z&-vJ?`PeLu_}j7PL&L&yN|YqUFo_pY0Fd_Fmw{qRQVC~5gS+B$ z*Vn7KeBzq=32{bq}<=1=#SSEumOcCC_PbRHLSnX0jumUSzmq_HS^mQa;W5Y(Pz6~@> zzZ{pW;M#Atkg$lja)D%$_W(A-@DxYFRmgqYZ2?;kcS1(BW(8^vvJ5_EKO&^+N2U`I zCCfmI#CO`u_`pGhvD;WFyME`7g5xO4pfJldpdzF0ppA=JbWBuvOF|Mcx0_I(ZU8ehifIat45X+ z=jJ^(`RsnCP)9bTL24nwkk1EWYU2xQFd*g6!v@&$rZ(z@g8fLjg`eCeJunyBXxVf# zhHNB6ux#A>nEgVzd+78#-@A?xJ}4dLpA{dgiA&bTL7+#f*Fc{qhV(=4R`${KbHwC! zWJxa(jVJj>PehscakPVL7K~Kj*ds0?)v0o3fF~jAw9wASal(hZrFNj1ztY=S5=pm- z^r&;Sei3YY%|Ou!Ge7P3>X898^rOeeS72LwiewVI3;o+ps*T49eA9oJPW)S%(aeJS zXTQ}}8=tZY_y9(30^&YWVF?6Hp7I6W95o(@&;r z?RJ%f7Oj2H;QqVwC?*{2;3AdsxTky4MA@r9ny1VSRHY&j&)@JF%H_9t)XGZnxO+b` z5^t8x3%3c94+*fDkYiZpM;Uf0Z2qsPBklabI2m9in*IygMru_A8jzM8l8<9V9-PU7ditOoQJTfT`!H!=*w01?6v}&a3Z- zXlcC*@!)o)WVzO39c#9)`<~do`BP3ojLdEP=p1kNV>e@mkSc9QLp#4e3=4U34K{~! 
zgWPi#(P)(eTfh+F_bIK40k`SmY$&4N8}+u?gi4r~`A{>dIjk^Xj&icet5V7B#a_ye zZYk0cj{jFBBwoXDjFQx2Qz-Z&FE|TrN$B7Ap7XlMf0*;V#kJSfetwZLVki zdDK^c`KiI*{O^u`PJ8AIXo{W$R|P$P7W>`g7>G58`b8)#FfbOlo157AD*9vWg$|X# zAj!fHb?Pu;{KZZ+{909WX&sE}pPwVQdm!ukTJQQ5q~}^oo0L~AnK>Qf$hU{5H&?1t zTShX{O$VA8D+WVa`uen=1e(%H9Mg$7KP(cdz@o|f_Lx+;PxPWQWSV)?9+}29GwnY1 zFo~>T_*!<6J`n`mNcv9iYl1i^&(kiHi0$SvOM2uNzk)P?hY@R2RmJf$WdgGU=b?Sb zyky&3e`pvg+sSQIYNhQv-zt@SN3sgQW?6<|tIl0J|5$U^wHtu~Z$_E@zKQHmeg z^gYRE1)ssFwF$w9zfZsUc{?dh{jMmu0NwhvSDzI zdtrzf(HF4#)M*E$1IF4Bq)lxW4W6I;gnd1Oa02cCfG#<ZG5Vpz2TV{Kg1Yi>{LhxtljFPUz2IKh}^zrlup0XHGV!iU_7X4d>eP@r?Dv4^~HO| zEe=Bb>3Tf22m;RP>zXP6(`2%LxonaMR8){zfLtU`(N1%~fD`$tKTY_Poky-v$5o+w z18BED!KKU18gPd%?mTD+_6n#NI}Lg8>{={4Zomxmrn<6tv`MwnCa>xEJz3p+=Tbc7wZo)Y30hJWF;p zxVs~m>t3!M-qA$h=*l*~x^ho+fNlGe?Tg<29(g-+Bi_It?qrJ7JR3%ZFWj1_fHi@n z_UvnOm%oeF4VLaduCtJlPEvpHUto?IXB^uq26movht;T9+Hpkxo=fYa9L2vK%41Cd z2*u0U0vdX|H+%csSW-E*Cy|&;2Y|H0LX6A(xzj%~m#a>&-Y+r#x_lcq*2g3p3W-{7 zsN`jhL%a4!&C}praWt)0p43$7b8C>kYe^d)V4s8(M{i+^k{84&U#W|*vE5>BA8DTj zBu|%IlQ0zJ`;b|yb}|f0ll-=@)Rv|Gz1=bv8wYe;n5+F*V#n1>k=0;qA%4s@vYTdS zCUGu#Yavu!d*qA7E+ri3TU(Qg*2j@(k7<RL$z)}|^a}x#C+pqi zZl|V`6@w+Ax1m>60)J0YX5WD{8N2x2nfSzmC8J)f)`q&x(!Di!cim#EGeN7qE_-Qf zHT5;Vi-Xc}t_mwx!0_vD$KGcJ#(Ih8)>ILC|)S10rEY&!+h{#R3lNyk=uVH_zx=u z=OyK>?8*EG#8eTHXDvUzA!I$IUnr31WBRc|p~&+VFyIt>{cGrF+yX)o?^xVrQ7Xwi z-cZ%eLwyyeAw3+yQ&*w4H)Ud8mTW7S$*6`dN5Nv^z=LLK+DYu?$?d{1a z_3euw)YHE`^$`wbWqLu65t7~I+XnAA8k8>LK0Z}fzc6BR>bWZ3e(b8x9de2mQccd9 zWDX3vupvP88WTfZXm(v2pW2ip+JF3`a}k^43K_P4wyMCzRvaANWlEf=jA7$jiiXVKSTL&G{&Tq#KoD$M^LIO5*w)aPRn zot4X&498`u3I&47JLSIsLCh(q)A$CPIeJk$lH|_jcDf@tFO#}Xmljz!UacpvU52jS zzilkT?gZwyK`*r|>-dR?5q96IF!V2BaBl49h(M# z`BSVb6@mXWF{~`i4|WK#)7v9`8C@T*&Nb@L-TayuX=bdSR%<`rbI3c~wwyBbQUYDQ z)oR&PT7@*A{yxKEe?(WCsqRbbM%Oyu%8rZN(|wDR>c^J3=E-~x#`UMgm;exx3*yCj zPDeef_3v4q9W$WZ#~!N^N;m}OyE1$y#+PM7k(?@70Q%`(t1}5yjN`{i<(cdc@gS8J zhxO~KGwO|sB02vpiLmob5Zfq=3`BX8x#9~94KD~gX&%}y&7&;_VeiBeJ}~6$oT)oB z{4!2PP;Sa5e_Q$f3u<737%fUu3C28U^EhfaH_4NWhC6$VP$Snsd?frKBG*-jDdT!h`*lJp1jn0_XOi2U_yA|8~912H>cBbWj_S#{W zX_Wr^ZD;*nP7umMf+mQs_;FmCaiVAPD}&0dZeH$H9pOQ+=3>y%Fk$!jR|*xvy45pL za=z&Bs^si$1^F2$o=+0rjsC4pbZJ$*cTBqzs>&M^ypDLUz^mH9MfDGcbh09Xc-g9G z&k|n=`|&PU$bRb^&-fP}_dt5wjh9^nno57wyJyosQRFy#(Vs4Ny+t)6q>SM<`4f8R%5RNwm-)&j8I7_4-nyS}E8Nc8fe ziEe*9XsS>bduOKHLa5J~hYE0p2XhxnJt0~g34BhfhfBj6D5~qs?^d% zu9bi`u>bO$da4z3YpKHt3lq-vRIhG-z0Rz@q`n9tv@>=5|gmxrLaZiF|-TWB~S zL=5c0z%3qf6L}9E{or#^T5U6;>pGu!r0W7%_Pi~>AQ+aQ&MK8RcZq4+d=vX|& zOccmJMsd@OXl$&tsNI85>fUiD*02_g-+$)gTeQe?__Za8$;fGgSYTD)!Q$1#ET}BT z;^-8OQK3k`B;xBSoWG|A)QLG&9>K+RaGx3AKoq*&1pchtJ>Lv^S66AC*PMtH%1=9f$4@(Y|O z{v`%iNMOP#^>kB3YSjvYCa?1e*Zq!~%E6&`X84Hg@msMzK9rjV4e&zn-nCW(=(e-% z*1w8+dIGW9vcNaJXk3gWkpeQnelBe_=b*0`Cun8%V9u)YQ+FuYSo zoa}Q2&e#4pk=rt3vARItL_Snkm4#KhK=D!&QFFp7@o6 zA5Ib&dKAMwr+1czyQ$h9dU%3B`X0xBR!n0SPHBV}knkwFkS7xPrD@r{H&-(7!pSC0 zi|sbsnAk&MsCMUc(J17E3M@qOxi(prb+jiE^RWv;cdmdLf2{=v?uV70Rj*YI4{ev#StcJUO=2;N3&wE!2qVzw2>JQDb*(Q)kZ_EK5~bxzmC2pZ$;vm${lOcxKcy z`C}gvzP9Oc`%NNC-vLLXvx-$DHlD!dw2y1-tt6cJ@D%^{ibfUznd{iHSK4g_+=X(V z&jZaSpKOz;_WGI}>P2;0CspQB2%D(U^jVn?8j%lx#v52xJA5Uumi_K?Y6q=>^8|zu zrM91Us2Ymu4z_$ONm*CuO;eCBlAf&CNh^(iFF2i^8GheewGiz!aca9%?=O>BquMP= z-5O{ka>_!KW*BEW?_ssBy#?DY^mE_Q$z$TPt{$53M7BTS`*1Wscw@(HJ?$n{x{bka zxBz7(P^44g!93~&d%b;+MrNnFBQ5p9y2QV1ymgI7xJ3>N{*F~2<}+tVuLZr;iOB|U zj1etMh`ABdHe2Hr-`RZo7WE)K(4%Jg17sjE-Mnt+AeC~rYZ8^VQYe7CQhRhMEXD@x z&%}8u2tQ!Dk?+3$I_j;V`VZ@RO~5keak#Jkr?5CTv)zgEi>WF2z$jhpGIenCUk5c2 zKezm_$oZJg)eSTMRDFM)+9Ey4oL2#!*sVGaxWVOY*n(a_ce>_|TaE+lqqd&nScS2L 
zd%-W!?I~M)VtqgNt%gnc5d|&k;7nNiv*A5cUo8;dB^C+i@negv@LkCVn%ZbM*j?*d z^$JKn6^6rAU5@o0s9Mjd9x$+`P1j1F=DoY7nTH(r3vH-z-nMX?uJNVCE1+(6!R%#Z z)-N>JH94WqDR-Pd9kN~?3yt0*S=}DJrQLQ6+IQ^*PRE_uI@;o-Pa<|MoOhcflE#~(XFZ0sNSJTt&C}@M2uqIq>PhS zb(v|T$#$RW;3)YYHL61Udn81kzma=9LJb=L7ok4uIIlog)$ag%=Jl2Kus2e}DS=*( z^!-xSQPE7U0M!LuNqR)VLnU$sW3uG;sKtziB|=Q&_GLdPY!YSP^+OpJmZn4r*Jldw zJ&qNAKHZmw-SXbjvj!z9bqjE7ilZMB^(+e&Xo3O@qOy^-KKS_Vn4lkzqb%cdCy?JT zr3mk+21%d{_W^lPSXZJWGTNgYknW;O1#4Ct4?Ab24vP#CG7b6Ma_Gj~8NXTF2mNqb z)7>(OF`JtQ*Eq+ELIF|d^>!Fct07G_3Pk)rV@Nm8Tj=wGv#1YS=>MC83qr zYRp7uDgpHW=xuqI1@Q*3Z|Gr&rT#J=f_Ozd+l1qI0|;QxBvJCp8@l)D3XTOaPSMS( z`T}^=ZBU%;U_)uYHVlLFi>||}z7pQLlrMwvu{Ja)ACo4>8TI0_u?qIK zpjJ|B5t|(Oh9F9^k%sOYepuP$5*trQ;mll`ur*rbyapRC4c}!bX){piJM#Rno&Q`( zr0NCRtU!>hw#JC?)SU5wE@Wq$pOL56vcMgPmqhFk$UbbqQN?pUKBBUZMJ-*zaClI- zW0y8H#05ASiRw%|Di=)6(VikuFTB(GJMh3W+nS%Fp=Lxj-LtIxpnd)}=Q(m7`184a zO2-m#ACeW`G3n41bmzLe{;xw|57eDh4M#AS28lHxZ*_Fl6Qc{TNKnZ$F-WV5teleG z2pXGCWYuac_>w|9!B3R<XyxtV;!md$2Gi8&V|cu`(d7W>+}acDQe zQcvytkA}u`aO3Gu9PZTpB8Xu^rNUR4yf~tCiCT?{>{~X=iPOHbhd9@6u zb0wso4z;*U+h2Cgush+kc=Ik4sU^DnP*>PZwK+C@VR!#ANLu?bXT6YW=$YHcO5O3N zqKUK{mq$&P`(d3RS*Jv`xv zHl&Unb?Iyz?IR^wVqlSN_R|E5*8-7jL5tDIgvknUN~WGWiJ6P)Byo`p?9m*ayLCjl zBt;3Bo+*8x<%jdF;Bcx$uKx>}L5q}&q=Y$DHA{5hmOe0lz|kET7`I!WZb(PQis~xh zC90C4wAON8_QomC0^oR8h4dw=yK^at?gK@|1ub~hkcx8GHzSh~`Z5b6>RBb|xn0@<)M}3!i;<8LjH=9Y_V*|<=1hw5KQkg55J zC(zALFTUrbquiWcXflx4ApN(7dXB)cp7icU zGtCTQ0qM~C2<_fIWw@~tim}t1xuwq-S_*icXTVu=6n0dXv;LPS9A09_n!-F|xBtm1 z-4{9UNWIrB62=hR!x&Gx1$YI>_PzqVLXQ{;CiBu%S4sI?9pH9YK=c)$)5{M_3qAVY z7)fO*TZz3=^HRp}!g4j3WbZd@@X?L90>|#IgGn*qq@*Y?81y`)G<>Y_HSqlbsGkB=^FITh#Q^CTX zjCS=}e7fla(ImsuOz@hPwEqDR8g__R_EXSLnO+`;$fZslUdCGDM7IysE-(dbItBPj zm*Y|S!_FXcB{>jtz#+;9^%kn9ZAmkctAM+7-~BSG$RXM^(jr6*ijI0>8C-C;tGBOX ztzd8R&pWp=f!mLGcIFU;40VoClK1M?I~9beLVfZSNOn<)1_&lOUXt#R3fXRIb*agW z@VGg{ji|sfxkjl!6H{bsUn4eDeVfmg(dIeQF`+KQt>18>{Xqo^#!`nUYK5^eeA|`7 zPC!u(jIAXYO@te7z1nPeIiweWuN zENK?$X%XB<7Y{A6SGpuMX7B(DL3b_=CbFp<%~~+vZbA%u4BUzu+NL1ILGTfNdK#&H zfSQVIZWK|(?LPfY(pnEz+ED1t^{?MtkSP6#*MxaTLB9e4lt- zZ8Pw_0*drsvlwfM>Zdn-oHn5bE2r3e+P8$iVAsjH=y@?TOi-76~E|)fbb?dW;>%TG&M+cge$UT(Lf8kFvn`|_;8s9 zEhUQ3nA_SrC}js?wVVXyataO@s%1zeQ5NDKae5NTpKR!_DN38LFY&d>flzsZLP*saah)RkPDg;**<3NHaqEeaSi!(M_`_ zK9bvv?9)~$CysDVf7U#_zDKGNVNpJ4F4tfkpIe30qJIx=Jo~lJ;^aukV<|9gvtu54 zZ|KGa)LMRHQXTUrS;uHIQt$AR;LDQ9{yw_UU!m%Sh>PN-#27N(B0r`oUG}uxGJ*3J zNH8lY^G&UYx!>GIw^bD7qgYT0u_lI#Ir$4TSb7=Iy}dQF^iuDn4D`GeU+;fR>Y7l~ zZrs$=9;3D@txQP($NdyzVfjqqslZF|PJlqeL;Qhnpqx{B?naZugcE=2mZz{FiPzMM zvKMol<>O9;xvkE~YVE&zmN@ra-{hI99#xH_pQ)ID+2jzwsTQjtp*TL=qRO8)E1`*b;$PwBORfsBh&ynaYvPmAMvUm? 
z&1a^aGzL(#obq5SJ`Y)hS$?J0F3e;4%#j3DjU z;NInLiw)kFv0@V!dELykl0!yX_b~D(2di!t#69ClREqc9I^wAcu?d0=0$dtA+|<$5 z|G*kvx|JUFbt8z{8bivA)}G`md+78BPg1{nvf!m}5?Z|N9xW2RMw8ZQU0CkZqIvSJ z8C!izRZ5pmVMh$~-lNS)&^H#B5{T7g6NyDHPN_DkX|kQHS%E`c%o&6n1^TIWwKm&Z zQv;`?rLxFgAro3s=gz8UPYpCXIkZV>4L+?y;C`#pZQ+(!OADTxzD~j+EP@ZtjFxm# zP6f?91V)SfeC@;ImTt!Fo>Y6LEyu)NQdY7eEf#HQr9cX=Esx`!sG=Na+iU-GK|u?P znTjgv+_Znm^o3cIiJb@B8m#Ix3EB4v+6lg8PpJ^41Cl-rDi^%#>JNL!$;21nh~fEj z{hN+L=$>qeY<5&H>+O{c8szS4Q%D(E6jdg5$5W-tjP~#AdE#PSV;nU7z4#TuqcU=A z1)YVl-7U^7sVpqa8n&G6BwV@?F^F9Bd09RzY_647@65CFr>X==lCp}b;F(i8@f>UA zPu)G9kcK>}xCWov16f)3{J6%qsbRrlIt$b6nOfuugmX>p$;LB{K4)r+_a2fcOyd;-{)SgM6HOtnmNK-s*wA#GdSQn`-0GRpLMH14fy+0VnT2N&el z4sO-DkBnM*&UL#^3zSX#CsDE|4`}Uv)s0!Wv;*lJxD=5cd|I$ZK&*N$?xF1vhD z(aqYopgjf`GwKWduA+L{2%(Tk5o^QqfE~?`sdS+oP|8M&miRW5{ARUNQ@Wy7F^y>) z&X!>t8a!O%upgW&LGp&T#a7x;_XjTBu9ES9M{&qv&^rFMNbF9bhXzXd1OuBm0D4X* z+~1{Y8kR{lYnfFPLP$?yTg#4KUAcXu3(@p{XXDiE-Dyuw`)!kr)OyGC75o((?VfS+ zK8NYfB~)LW8%b57azG)k)ZC8v7*=Z>m%hl)QBP)cU5R9Zq>X1&k|If1DGp{33o&c*@h z>sjl2DT?r=3f$*BWnltyYe4(a?5;K^cfaWa& zb%|an$g8N1rkk>nlDs}A5=##;%YEHF%F^~mP+3*8^8WxPYhP#~b3HyDK7Ziu$o!@> zJhvKqTwZlFm2<~W8>49hneGiAIiTYtE~%hlj+zD@p`oYUviQrQtxTJUT3NBvxR+n8 z?aM=mAEskH%k2C)F*U;MN-y`(8eklkp1z!5Ba9$Ud^MoeNS8FXX`&oOws^z9KRmt8Je|?LT7M z-ze0=UnrX9e`~Q@8S!h++O-??NwM0;f74ybz$TI=P&{{Z(=EO}ZygRMSIFZO)9XMgmsRsJ3o+kR0% z{(|MkdXL|(Bl$Fe{*RY$`LR>}9u?dEQ6K9fV$;(fzY8Jn?j z%adAp@;dZX(H4h{4X_quMoO#D{Ub%{5`AG~c*}3YgBE~u@ zC{FuzRgBvQALLSqd=+sVcPgmPHziU>j^oqTs$wTSRg7(5t&HabrWHc~Z%(R~BcHKP zVc!Jqq#}Mu!i}!=;YtU@snLR5GJ1F^4`O=(=EBG=vQwH`NJt#!syzNrHtNF6gN|_O+kzJRo41BdXx6sAObi#DzxQ|HZl<#%B;c(X!f71sbwLxh z``B>y8Z-IU^#C`fI(IE;{-S#xU^MK#>EOM_R3jt13Z!5V8`R}`M)?Vx_idh`MS?rN zlpyep^WePY^{+uED09^ZauP5Qw*`BQ?~;cY>GUBCgO@(V2^mq7w^a$j?#cpL0ZSve zDTh5&yo{W5QkW$>DhFimg*Dz7Qr_<3ND9EGcsVB?#VO4sXTJ)2ozy8LjQFb{O7#^B zyVQBx7zz`(N03X#cOzxvTiz7HaGs!}An=SgKBb^F&HnGDw5OC7r_=g+ru%fZUhOl{ zTdAUwNh&53G))cVh!>HExSn0W85qV#KoOCH@LCgBX`YlQ`6U1+b!;W46bvXZ;!C% zWic)R&T|8>=Q*t;jB>Oz0kCld9*djI;kjt;_V_Jb!Mo+o9v<2)7fI)>thHNc=wDeN zG09fw*jOI+lI}@Jjr`5$c;*=oaBwRny=p4zU@7djiK=IL$b79Via%NJarJcdqKj!@ zK1bsp}boA}mXQ4g-b#YoF$_r8JH*m(|sW@q*s&99TP12g>X``!-N||M- z?OTUM8whhmcn@q~;{o98NLwn~q^j;UV(Y1^B&wuRbsC3;w5OgI+ z2&nIkv^MA1KCY>Z4t#Aj5l0(Z2@U;G&2)s28yF`o<^@}5@14TW1$0zbdQBsy7;Ur1 zER?his-~JZ6G|OT8GB?KW0may=e%KOT1Cguf=mZ8R=WYX1Q&I}#p-O-!$av?Z7i;1 zZIrgQLnFpRyDC<*KCthdp13(#K97n#kkikqpwe_E=zPm{lYOe#1DbA}E`SL_Sjzjrr33_5G+JbK8&)N zc{v_nzPPkxVBxQ)1yxTJZk9=%8%Jre+q=P0*3{Fm21_7d>$$?1%PA&_;gccP_u2?& z<d7^I z9BwU=QONzBzeD1(CYKQ)yTNyL&XJK-F^sH@fJoL7Sp03y!h5}4LbudXKbY;)bt0Bn zpD3k!_iM8o+II)kex-J8bv#h!z0GHLX^%zUnEI49{DE(zgtAav2wQiDgniAJkL{p+ z0cuIFHlXZ^=)LamTTg{`5m#8Ho(UTEnwK;=r)_`=C;CCLX;0}HN$Jip`&TV%Qq$YX zzI`M0t(8hSEzYk9}t>vLalDJ0J0 z3rXr3IcmxF{%R5r^c9ms+nrH%KS z+)`CKrfB)%p`?w$hE86Nd!BFs*m9CV*edNsueCk9hcRC*S6Z2_A`te?Q0#}hZM!z@ zzf2y0^j6L}aEkTBN7;22l9rj*Ycpl0lA3pNva*&(HRYV&g62KE!#f^lEVKF(->-Te zn#FZ|c9s}f9YfghB!Rng2IFj<@J~#yH+^r^Yuwa~9wOqj?}VS3>biB$Nugy*RTgHX z+}1aKUo3pQ%*M-E2CmX{mA4CQZKu7^GD?amK$0@q8R;>@qndJh_Z6i0x5JC~4C&+7 zoGmlKPi!JdVvYR9f)G6|oPPUGa08l7Ks)4aH}wq_J+_LTf|4;D6wx*q*gpl005p68 zhWlG#q@$XW_g!v|8EM{ThOVMG+V~0V-OL_Gpi9F^>6Zc2;dO4TV{=Iwf;Rmx0ckBhr%2{E9@T%+G}W9bI-<>#BVBh`Pe{BY=j8#bc)~zyq>tUp6!2ETDS%Qo`zY zHt|(IURsySlQ&YcrD<~INB&z%QTR)R^8Uub&~>m=>bhFmxkDkR63;9A7}gg!`~h1# zhT0Xkir9FYuWiuQYAc%=QAHI)IoWBXb3uw2*zZNM2AsLrCnp)sNz+zhb+-2*#wts@Da&Y(=lWeF~4@1UV6BK9m+9)K_p}am+BrUQ`qXQl$XgP z6xT_jrt^rJ;K-ztA-vfqnWfkVZMx*HUg9p1y=ksGT&}a*YE83JR>N|nuZ|AUj(LrZ zd@pdxX=B)M4mNNrd)@R+<5XPq@N^Gty5CbxTTwi{?T;?xXg%&Y9?bM!eO%wxuT}~; 
z((`uPj=d@7(Zr3>^*3w#`cULyDeg150cnu%2em`nf&ES5%pHuM8;iM~Pd+RjRt!xFycTvt< z=!dJ3;znb`m%(W5JzOrji&biX?8RAe6jY}__`NgCZ6orZ+B@DCdeY`MHqo4a+_U`t z`y9*LIays!l>Y$O{{Zd#Ze*ILSI?v(rntcqUg-?6l4NrWKxRo8@9tW2UU4S#uC;X4 z_RAE((ZUBige@@c%udR|f91hKlFc-L!&;gZc zz43z4qArzMV}*Kl2@NduEt>5-XP$8GX1TchK-U~M2XoEnotLuc)6;1O9iCqQ06(AE zo@(|Ib>#7@DQx&&zxQarc{b7Onl2*R?l&zzRZSZn=X2n+G!eGsddI{pqstFiOG&Kl z0=9P%By*p>66_FI3a#>G;-4AyHm8!?CAw>!riwUUacnGXo@oO}?2E_&&2Kbd40i&q zo>%(2hgZE>rfeKZut`ZzEkmWLyii0PR?23AUmFK=hk)!hx25*SDgOH zsP23*yXdRE4gUa5ULn*PUZKFU<4yq$1eP?tpq@s$M!l?d-28;{u=*;?73M1GreSz4 z61~3uVh{fSrP{aZFuA9RY|SY2&*`pwN_)o!XTq1ZP9tc_Cb+gqbsZP>NA5h~hvwr0 zU`Jl{_4c3j2JF5-@Muc@FX%`aEl4iZbRadj=eP3P`_Q%I4j}tN^DyA)zDnqZ+3{7> z`?~=2R8i!RTepg>fRJ;r@G8Zg#eFE|csM(CDBM{I5!LwaES zN+rQX$?bU?XYrKC`tx|TNv#Bdzfz{tVB;TxFp1=z=0C(E{Xv9y)&r<8r=n|cZ+4}olibkveTzN$L0;GSO6o8_0*2VnW+~c!)r$EMz-m0JxN6VEyFO3a z(@gKNYfF;o`+VP;J&29@qu7d|_z$l?*UqH>0G6&F!lYI%iuRsJ3o++I*%oefcs z595WAazV%!B69Y3fsU#~e>ixl_Yu3CH_v*L76XQH?JGJfBChfeno$qdIR5CX45SdR z_o0eP5&g;EzAC{gyG2SlvIziesojkFTkW!+D~5M&>XuWs*;Qz~cqGUGenO{q?sL1d z-O#As;vIrc$Dk@hyT?FEg2e98jTriR@mM^3a%*##(H|nlnaA5-KTP@V$N_?6jBU; zHqQRlR~x=6#TqLjz~~i2NcV+F3x`bQRmM+%*$R`JxKm=^8Tu3qZafZD<^#6sP%0xk z{01KM<33P2=5Gdf2EuD~5b2Rn+OpLCRbGP+jYQ9O5!Qt3@M{@UN;l_6M z73e+Xs8kR~P`w!=syluB7n^iKIU>mRdeHCDL-R*W_n|l)5Uvk=cTn%_UhYQtUJ`(W z_+>)iF_D!QK%j9w$OsNca)6W6D9P^VR0qKOkP?n^c2e9(IPU7BanC@d$Z?DgokFq% zmt?s>=LYSKluvLwV+vEeO0Yc6LC3K`PcOY3@H&o*!r*(59(Uw)UXlqIDh_OQUb|zm zcLdDfdltRn)iphCsp91=Wjs^EKB=dUNabL29^bi?qXnm8JxgE_^(+I0Pv+JfSs3Z7 z;UE71&ZY`GFlrrFY1H0In!5R~bk&boaRW$k68S?FZI+&X*YrB<2*j;?h0lAO_p&HK zEQztVl1Bm~BXn6fkz%Q-j*6z?ZKSEGnp%1&ZWk(eq^x{W_EItjhZaosNI4wVWP(~p zGhEjk;(BN+DW;^Vj7bedZiYr=faW+5LB{)o-MZGd-zJ_9YJ25}Qs4CLZEep?R||!v zq1PEA6tjt&Hbil9#_nkt*Ep6JJda7mRW}WhY^TXvbK-+)JmQ~9wuL@*Zv*|KRI(i+#e2ILnOzS;_? zWPVv+BS>q*oYy!$H?*=>vN;+98rChBqkNjUFT?0-;?eq2rrOJ`P_3`zx>Uc;Wd-G> z;qN12_v4f_j$?whojXGSXlQF%(AKmJ27oXJumA_SN={GS8B2o4sBP5OHdj#8wDxrj zYY7_`{pey$%`TJMmI5<{^n)mbY+E#}|BSJcVM#}Ic4%~0Z9l7Gmu5X)^?E^+m$Xqx88 z0o`0=jCJr-H4keXK^qRL$+Ahzlb!8rhokr{1Lq$Vme^zx#Uya!4Q5IAA2_z$^;3vTKc7r6lr! zf#3qobrCVzs^kvGp^u3HVHQr#XwG?eJptVNmP&gj%FB0zNS>Zp{Rv`l+aM9#tkqCR zURoMKJ;i4~b7|WjD_hInMgZMX%V~y3@^a*7y<5b=8hngiov#ov&iU-v9QSv87DGc( z6;vh~hV!xGmPWh1Py3S#v$o^Zyz>D}@a6-Pzf~HEO9m}?(~hO7bWCY{ZXEGD#lPg} zw?_l1%`T4YoH=n>F~_upg}JS$r(@w~Je231tUrW537+xO)&kkwK4mbob5XmR86+|= z`kVpxdle-ttOT%UFrVslxWV*~?@HG5Nsl;bH^;bn-nMY=MlJ!Lx87`@e zWA;H%-mlt&OlkvnrquOXdJ!!~l>4*j5pJcqR?}SSE1Av}Qpuw5XlKmyk8k zw73jl7XTKfu<-*^X+dP-H7=3i)h3_Shfz@Ji*2&RDXun0QPWgDBP{K9DjOp$;gt+) zV=VjIK_RPoUNt7DwrGkgj)u^-tBvx5LrH$M+Mt88K7RJch+^e%O?xK3fx7kAt2jG>*O!PcmnYiaAu)dW0qHIef<#pFB#&>T3^ z`X^g(?w{`03N4n0QHgBQP`ceMJP(R0hECVT9Ia%Xi{xx`4t9rg4@i;{IVU=IHOG`~ znj23Xv_7$>iHw@#S}k=l4V>{5_-=ehXHg*Az%Fo^#%(n<#qEkmTrTt6B<2_d$#HN5 z3?0A?$FMG$@{uO5h#V}Xk)_A(^+j2E{a}g5;sN)rdA!qAX}eIi%;rl`Nedx)jpvk? 
z)NVmL;}6fdbF68&-PUW2{_RBT-&RuT_UZGF*-Ayy&rw|ID}|}*ddmH4q+z$ochj!8 z>HCCim{8kk9_iubF}cqbB2dP)qqGvXGSbpAdYh%O#mZ?TW@yhSTcz&fpluZ#H<7h` zN3zgf>|K>_X-*pE^H;K=?q*^%tSopS9@Vlm4pIu(+T)E{Yf0ZUrkvC*^Ec{Cja>wj zx?m!pbTK{Q@{&Gl%Q;6)&woXrcyVgsD{mG`E2XQ5+Iv{(s+~2Z?9_MoDVuPy(>5th z8{!cY3#sZvbsli=T<1G1aBm|(&U~k6I6UDj_3o~i7{j643`Nb0j zTb)lH*$s`Q&gw`cG!uZR*<_KATRgU`7m2B3w&-BTGTjB@;~mFS&`?heEkJyft*44| z*HzB?cCvFD149I?X?bzNzfy5F{iyhdWRlCNDrM969a~o%F?u1A(6++LnByZE08H@w zXFS>VEH_+3!@E}%Ywmg*P-=S_eZJQ1Wfi|dM|q>QLme<}9J9+jW!Cw{FDP?Bej{sK z_}h+Uxn|*AvOPs<)%N{D_d!W`t-mq|>7(qY^Q(JlTHa=ya<#4P0chO27y(RPc_HB= zk1D95;TIRUt7?JrnrW?d*JvWf0EvUnple%@VyXD{ux+$N^p%xc8iiE5ss(5uf$}w-_j!R~H62ST8ra;{&(A0$n;g#F+3q7C{8nXh zq=x5p)6>>XFPOu2iaMr74qY3XNh8(+TnDVKj^kF1;8iyW?Xc6^tFCn&sM5AN==tGw zEasq0^H;Ml=ChnQy`{%Ipl(R_eLm;y)-@wiI@w2e<*o z9c2C;US_S+n$S{9_cC~kgmVr+;#(QZB>)VT`VJsF2W_#4xDR>Je0q}kp)_S>K9Yhu zX&|VEK_erZv&5XEn7jb?f^s^zo`fy;tFBrW-CJtnZNl3lZ+AGO)6`Y8*7+-^idkOn zScGf@555Ow#{h}a@_?w+71q0Dfz=i(HQuV7)e>Bzr=_Z@g~kWX3nXxKn>QTgyA89|bG826sI*47 zxoOQyY^k?gXzOH(qL!tlkBR)mjEWrk{wH+x-)~R4&LZwd*!~; zyf|l+Va{+TySRK8*NRP|V8?J@B7ByMrsFRRA~j6(?v(Nt7VnW8$=!W+qxq5V+!T99 zEmdj1{-ru{!k&EDj`1gI7VnR~06%ILx`Q3(Y%Pbxw|mTf}u#m3*Hxe7@vt zo>LmPAAudchuSG4{OZ@_9?sLs+lN5!ipsZqC~69-^6hw&%4b`Eer(ii_lh`F9zc*j zZ!4Z32LAwKp1MQq6crzwEoZ*wL+x1TJ&1##MNoVO*LnRM>QDJ<;ruE^V^_^~bFMWe zKl07P_*AOKtq^se$)*0!cVqtmvlU4x$x5!U*X|h z$>jmIgRD5l0qOZcU2;P>{@_3k9py-b?i^0RMRbz@;P{146r+@6t%OosmpkqoNkO?E zc_;TpW74}D9C$0$`k{}dp#bjKclVN3J0y)g`>cc#rXzl;l0pa{N}n@;-yu>+@ET9A zy;YK)87PK_Iqo}nsWLwfBd*FPhh&y9$JPc^veNQ>2Ml_OgCg8C><6uT+G9So3$idV zkD*I0?)nr0qn|hjJ1OwU1GXD?Dh|{G=@l`OfISkKK)E%;InEUL%IK6Y9#Wh-a_0P{iO$tb@a323fYh!|A6tC}WQVv%iX5jmmSh#1V4~G3$7c-$J7y~4pnDzRW(BaRNhL50Z8c$hj zI2h=D^96sT_?XerlRN^}@Op2k>FsaI-oZyrl0Krn&12O&f~ zBa_imP~0dkRPC#_)zj6$@CziTm9jO!^ucK%RboT<4>?%ix^E?}P9|=$zpDFo13VJv~dz=b%jQwp1kNu26&y z`2?wFuT>L?!2>@90r&h6k{1K(P(1f|4vW(6@)WnYk<|eUj{EgeAI;O7uZo!WU>4(Y zq`l+fr8a>r#N_y?Zphy`QIp;VRJS;9lCPg2Lhl}j)}bdJ2;y_GQps>VN5v2#0|exC zMx3t$yNVJALzLtZhaXGWwbuw_4%Pf!-#FGxZ^DKF_AQ18Cw;TFRsq6#AxEgVom6HL zebLoIsiPR>9fKQ=r$0BUhr=Y1c za!qqxDcl2|s@;<3skL&KIPRd0qXWOaT@^$VIkCp+9&px-m5tph%rHw?-ocO$;~j_C zuJ?J)W29lhdFMOy1Ezn^xnbIdwXdaQaX50zFQe;g1_AeyO0-_&lvccX*z~iLl1b`& z*5=Z0#kyYLV;RlvdwQ($*1c;?_);Z)q&|jC0vkFz|MJILmJRGC&3}cMt4{?EW2{=)qzV9 zoP(cY*p@CGM*3l2btYLF#+Ja$@(){&f4gsL#P4)wiswB)A!hk##x_`#)SeM%7ZsA< z6b&O_FjN&Z*h3cLXVt%*MW!e%Cl}c<~EpaQTrk z&hEqj$Im12?KwF2t#?x!TnmW$R>R@Q0>I8wk=sA(Mp(|6>{}$y;~(CwX({EBLUGis zl-vt+gZ0Pvs|u~tRc?@&s$vayaRl}Cs;)~IHl~vla(fnu)Q=u?@;jGFUGx>sz86Bu z%68{$mbq)ozmwjkuaL_X_SymNbaD@;eo9kwb9%?m0djJ`KIH-|Pz`G{sg@8snKbjT`Pv#}~TSuu?< zrg#HIukO0u%cN-_)OT6urmL92G>vSvUX<*PH#o%;=Efv3wWdNnB0(d zovERcJ9XOLX{uVv+J;ElSli9X9F2E8O<)-ugkx_~kxOl|MJr~XO2XG;1;Ua%;~g?L z?h7EOxoO!b*$XWag3K1Q(VR8d#zs1LZQZv;g{gNkW*n@VuGhd$4BTwAl$v$7+Iovs z%4?)I>MQkZSARULh9}J*}VajJ6sm z-95^^!#OT_CC?`$l3w5zJ$RtIR9qv|`a+Cpdn?RyT&}P&-f>XPgQ=;b2k%ZTYXhct zMvOEmI^y?s(b^8Gi-Worx~B6}%?voF1F?8+JmzB~ftdavB=b0eZ~nBhmh z8_&Ua`+QBRR`lm6rt;;&%2@T2;e0;Eh~h@5)SAhJfLn}yz1u)MmV<03v_Js`}Hp7$3X^`N+o!`*#Zel>KuD)~HR zzKi3bk=(Cq$AIYOIpF%*-vx@sooW#AYCn2!GG1B!HuN_7jcOMaJPJu@rm+{k_~*Y3 z<@xcscJg*teAvsLj5eMb8&U9z`PSL$}%FO>CM@<9x-#BGwc zCp^c@C5?_~4awxg_A3J5jhj?yU0GFmyy;uS5z^bGdqlJ{FvvsLOIwkG;?V355W2du zhR>-YWi3s4iHv_}*GAyi5LjE^(`RgYg1UOM?wo}kDL%$AW#Cq>xm~I?&a>h@9p78a zaI{s@*HT%$&f8B7z0(+}#W!mAks+z5e2+3`nd6iOv;%tMuLSi!5cqvIm9^Y7qw6kr z32qg$-YwNHY!evyW2mE&o<_@0JT47;9u7_jBm%FxTR~c_GehROifIEQ5RTEYJ1x#~ ze_h?p)B-@qRg-EBK4$Z8GrvBa@lOpyc^V!I<2?xG-Hrr@H>UYmo_vdYzx%&qoaFLH zyxX->+ zr|fW3Mds8N>KGYM4?Kb>VU2(_tp&#b0e~`A32fC@s@jNj4-aRCx+^?lmglLeU~!|Q 
[GIT binary patch payload omitted: base85-encoded binary data, not representable as text.]
z)<3CH$K{^$;+_pVsx?pJMs|M#1bHIkhM5sD#>VYW^_RpaoQy$Fw-^5kt=V~nQ7DiA z=PVGJR#G$vXT?ATFeuhyo@=jbAh1NN2VPawTSfrqe~68s{U(-8=^t3cY`?+#n7iSBB@_E|SO5_phGVvY=>0*q>3qWvrk)ZS~4?7~rm zIINon@$p@De5e0twxEl-wIYxVbO&Eb&YRrh+Z9g2qM1g?#HBu6x|I1`6Nr674w80( z7&7kjZ?uJ7T+lnJORc5Vv3JU`@S>2cr#(I_R|!+3%fQ{C-poH*V0k0->f1P={3327 z%j^Aks6J&Z`RJDaQ2-skUSl?d;zZ{JhYpM8#P@zvQkc;0tFRpnn>#a=eWOMV!Y=YM zL7k|-csJE7QJN;wnmma=a|K%qQ3?YzQhJuSxU2_wzNtsIpBJ=>6qC7bnOE>&xh-sj zWYvZOKC3esQ}uJ@*YbAB4eFX|!(8@RudGdx9C&c%EJ;dAy9xQkHr7xUymI!=d3huAVZi#y^t z8LFQfgaaPk?P!`G-iB%6 z&ux2QX~epGxN^J`y`L1UzPUr~;^Ld7>naHx>^ZR_&v^B!$G-%i!yVFOV3B?f<}k*X zk;foUIxH-dh@>Smanh|mNdFzl{8sG-Z$YepxLA#)w(b7FkgrZ)P5pVN*Go0C2i2o$ zbzsZ6#HJ#p)y6tk_qeQ!J_2hV}r$%k+L-1_!r;Jl=(?1)rg|w86 z)K*`shpfLNL2-vsX6q~M!8w?lheI1#Yxfv^BfV_YnjZCN`i<{%G-LFL#VTawK+2|s z9W*khaDq>>%Cw@OO^}DN=-D^>iH~<1=$(CdgAXl&RY;7(x{D&U-6(x*iD?_&@K0di zQftb6&byz@U>N3yRt4iG`8l&hBbaK0km&TahOehD`uNF6Tuk&@k+kfMWpQwfd!{QuWUrSwcUR>G{IlxkZ!hZ$8SM#9C+V_M#Yt&Pp*hRe-84lHvyt;u zvV8cjlze{Rc9)=z@Ok$u-6aV_N@Q2d!sXlIU;KiTB^-%o%R#A|!rBj|UC2<6GxIag zsTb`+l~@-FOSw(Ut0sk!JvZlESr4oP%^ujdZ}lM3@|`WVJR4+NIi)29o0oo6S(rzf zYQTXOt#}YkS!>R0pREBUc)Uh8gY!mqm_*CwZ1gO(!6~4}Pj`Gt&oVDf6g3RW_j=k4 zO`%aB1!78nDk(QKe-rz_>W8|gsY-(nh;$8f^3pKv-H|$*Uktxbpd}>8HM-vX&^{bN zmv{(cVx-xrVkdcc?Sxq=QX}*18qNgzIAq(wOj8~Gk+bV%vp^K;cU4-)02tm(d;9Kg^3&q{cw*i8w7ST08CfQ<7u=ha zLZRwi56zpGQ&NSjvS$^F{n(RF5MF^`rgqf~*$S;?s+a;+19fF0GoxuD_ok9w|cYF{_# zbd9#{U{OUaDxf35#pi1+os#UE%#Wf8i8`%$jX;l0+l~kw#oi@KhTbaJkMz>b&Fw&{ zFO%tgkASWRV><#jx=6*Ka$gYsmAiI6_$g}!ac4%ryu;R1!!Nq2PbVL1Dv*&lNCDfK zJMI(UVurC%#q&jvwuxtAo|m4HOI2IJ$B`lj7k)P!A&$Kgub&6jj=F^l$lei81RZ+S z#q8T|ZA*d2I={X%Xme0szs-3MvXYnCo8VfTUfBeG6Xb}#YUg!K9PzVRt0?j1%k|#v zKM@&aFm?UI+QEc06Jlr5oWfYrQp88d+G&*R_ebL2mHTA|bb6AtTY{l&tyw#c+b*fz zD;TIhHUx~z@TUn2qN)}264A}gK;`M@=lByn%8i@K;Uw>QcE~LI1dDzPw-EFVg_ip; zv%L%ed#>uK3+u@{ShoiQY8K1~nY(n-@uE_vzJI<%4w+zHMSxzf99O6r=6;tESCGMH-($av zK#-6U08@0@Ht5(2_9Syto^fNe`;yqHaKoDlP72{u8GouEoQ9^Mu=v6T0U8vmt#EpR zAyPz5c{W=miINVn+@feRW&Xe=`Ci)*uuazkBFf`5G_lr4#t(@9 zEopB{NXsdnmfokP)?*c2GvL%-G1rBWm*vEl>V>5*V7lM<_EyTp5r6)+3oPwr=7!W`snR}HJr19zK zwh$?dheipAZ^(E=sz^8`{pE{S|G8WJpFcsEmZo%66UC(Z26I$(|E5GJGI883bWG3o zd&18abM72$Il@|gVx15IAI~$d6{Q?PWK(C`OPyV8s@t;n`5WF^Z@m-zb?<+)#<7pG z7p-K(0$)Uo%d5&rUl&u5AU^{cm)aF78TLa*h#NF?8W=A!voa8$mvXx7#pthbSp=eB zYl*d4^mlLa<6RUnyD#6qJk=XU=bqf6W94>!Ut1mtsSn&lR8HnW^Gb!dn#UU6`t8)qTJ7o{Yn}=dhTY|a5Jr8q=i;OtWA4;)e~_jVe3gic&c04(Iep5WqC`?S>u%D zUZ1a54mSw{_(k^5lf{+TgoC+}lE`rqn6^9Fb>-MD;O73z2bYS4?>{CNi0G4l#Z7%+ z+=*{9(yELN513~2S$s}!4Tgl*j0+XBq>ir!;K^Pm2qyEN&29eb8*(Q|gt&;)U>8uK zCUm{GJt>f69U+mW^6uL__I37u{g7O0M&kK0uNnFD=bNeTJJJ~x0&IQ(*AeK+{x-#@90wV!U;boSJDT2%iIDb*@CmZ9#%uK;_0Z7nbTP-` zn5-2_daqni@~4?u*Kxlq!^Vi(-tO#fHBxh0%abXfr68x=H`<4_t4;IP{6uH>TSWm( zdwv%oEF1xKQH}$O9B_SD0+vH>+Z^O0?(8#Wmx=bRu=~6ihYU+MK{fbvyV{YqB43(D zgEnQqNYY8?%@=|2(hdm$KW6&7ghtmlL%VRn)K~tL50Z+elIZ3S9ATh-C* z*5ZrN3bt!PvjC0SUTG>5yO~;$gF5*atj2=^`oK+v}Yg){T5y)Ijw@uIRGjK*Tm+4*2%DUGq zT9)px8rfV-YfRK@7escmL1-pErAn>%cL<3zTKA~-&nkGEC<;KLeJvJYDtO0YaZ-tC zlDDEK`wCN&Bg(3u3TzKD$#dVuGV9^#UJ2wqnV?$3p`*j;tQmhH{-12||L1Gbe=7}d zRP*NP9;)BhK{<^00&!9_6Cy^x164xsS&lo|pBoH#od*hvwjEiT zi%3Q#9K(Gb&7^ycK6p#E%M(v<%id7u2b*~W7#DoEWt^Ge2SRXIU2-sAPPD#V4GMaJ z3gGuBC^O^4G_bFez`P;@rj%o)yTv6WL 0 else "", + interactive=True, + show_label=False, + container=False, + ) + + with gr.Accordion( + f"🔍 Expand to see the descriptions of {len(models)} models", open=False + ): + model_description_md = get_model_description_md(models) + gr.Markdown(model_description_md, elem_id="model_description_markdown") + + with gr.Row(): + with gr.Column(scale=3): + 
textbox = gr.Textbox( + show_label=False, + placeholder="👉 Enter your prompt and press ENTER", + container=False, + render=False, + elem_id="input_box", + ) + imagebox = gr.Image(type="pil", sources=["upload", "clipboard"]) + + cur_dir = os.path.dirname(os.path.abspath(__file__)) + + with gr.Accordion("Parameters", open=False) as parameter_row: + temperature = gr.Slider( + minimum=0.0, + maximum=1.0, + value=0.2, + step=0.1, + interactive=True, + label="Temperature", + ) + top_p = gr.Slider( + minimum=0.0, + maximum=1.0, + value=0.7, + step=0.1, + interactive=True, + label="Top P", + ) + max_output_tokens = gr.Slider( + minimum=0, + maximum=2048, + value=1024, + step=64, + interactive=True, + label="Max output tokens", + ) + + examples = gr.Examples( + examples=[ + [ + f"{cur_dir}/example_images/fridge.jpg", + "How can I prepare a delicious meal using these ingredients?", + ], + [ + f"{cur_dir}/example_images/distracted.jpg", + "What might the woman on the right be thinking about?", + ], + ], + inputs=[imagebox, textbox], + ) + + if random_questions: + global vqa_samples + with open(random_questions, "r") as f: + vqa_samples = json.load(f) + random_btn = gr.Button(value="🎲 Random Example", interactive=True) + + with gr.Column(scale=8): + chatbot = gr.Chatbot( + elem_id="chatbot", label="Scroll down and start chatting", height=550 + ) + + with gr.Row(): + with gr.Column(scale=8): + textbox.render() + with gr.Column(scale=1, min_width=50): + send_btn = gr.Button(value="Send", variant="primary") + + with gr.Row(elem_id="buttons"): + upvote_btn = gr.Button(value="👍 Upvote", interactive=False) + downvote_btn = gr.Button(value="👎 Downvote", interactive=False) + flag_btn = gr.Button(value="⚠️ Flag", interactive=False) + regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False) + clear_btn = gr.Button(value="🗑️ Clear", interactive=False) + + if add_promotion_links: + gr.Markdown(acknowledgment_md, elem_id="ack_markdown") + + # Register listeners + btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, clear_btn] + upvote_btn.click( + upvote_last_response, + [state, model_selector], + [textbox, upvote_btn, downvote_btn, flag_btn], + ) + downvote_btn.click( + downvote_last_response, + [state, model_selector], + [textbox, upvote_btn, downvote_btn, flag_btn], + ) + flag_btn.click( + flag_last_response, + [state, model_selector], + [textbox, upvote_btn, downvote_btn, flag_btn], + ) + regenerate_btn.click( + regenerate, state, [state, chatbot, textbox, imagebox] + btn_list + ).then( + bot_response, + [state, temperature, top_p, max_output_tokens], + [state, chatbot] + btn_list, + ) + clear_btn.click(clear_history, None, [state, chatbot, textbox, imagebox] + btn_list) + + model_selector.change( + clear_history, None, [state, chatbot, textbox, imagebox] + btn_list + ) + imagebox.upload(clear_history_example, None, [state, chatbot] + btn_list) + examples.dataset.click(clear_history_example, None, [state, chatbot] + btn_list) + + textbox.submit( + add_text, + [state, model_selector, textbox, imagebox], + [state, chatbot, textbox, imagebox] + btn_list, + ).then( + bot_response, + [state, temperature, top_p, max_output_tokens], + [state, chatbot] + btn_list, + ) + send_btn.click( + add_text, + [state, model_selector, textbox, imagebox], + [state, chatbot, textbox, imagebox] + btn_list, + ).then( + bot_response, + [state, temperature, top_p, max_output_tokens], + [state, chatbot] + btn_list, + ) + + if random_questions: + random_btn.click( + get_vqa_sample, # First, get the VQA sample + [], 
# Pass the path to the VQA samples + [textbox, imagebox], # Outputs are textbox and imagebox + ) + + return [state, model_selector] diff --git a/fastchat/serve/gradio_web_server.py b/fastchat/serve/gradio_web_server.py index ba7a4aa4c..843e2a5b1 100644 --- a/fastchat/serve/gradio_web_server.py +++ b/fastchat/serve/gradio_web_server.py @@ -5,6 +5,7 @@ import argparse from collections import defaultdict import datetime +import hashlib import json import os import random @@ -14,32 +15,30 @@ import gradio as gr import requests -from fastchat.conversation import SeparatorStyle from fastchat.constants import ( LOGDIR, WORKER_API_TIMEOUT, ErrorCode, MODERATION_MSG, CONVERSATION_LIMIT_MSG, + RATE_LIMIT_MSG, SERVER_ERROR_MSG, INPUT_CHAR_LEN_LIMIT, CONVERSATION_TURN_LIMIT, SESSION_EXPIRATION_TIME, ) -from fastchat.model.model_adapter import get_conversation_template -from fastchat.model.model_registry import get_model_info, model_info -from fastchat.serve.api_provider import ( - anthropic_api_stream_iter, - openai_api_stream_iter, - palm_api_stream_iter, - init_palm_chat, +from fastchat.model.model_adapter import ( + get_conversation_template, ) +from fastchat.model.model_registry import get_model_info, model_info +from fastchat.serve.api_provider import get_api_provider_stream_iter from fastchat.utils import ( build_logger, - moderation_filter, get_window_url_params_js, get_window_url_params_with_tos_js, + moderation_filter, parse_gradio_auth_creds, + load_image, ) @@ -47,37 +46,53 @@ headers = {"User-Agent": "FastChat Client"} -no_change_btn = gr.Button.update() -enable_btn = gr.Button.update(interactive=True, visible=True) -disable_btn = gr.Button.update(interactive=False) -invisible_btn = gr.Button.update(interactive=False, visible=False) +no_change_btn = gr.Button() +enable_btn = gr.Button(interactive=True, visible=True) +disable_btn = gr.Button(interactive=False) +invisible_btn = gr.Button(interactive=False, visible=False) controller_url = None enable_moderation = False acknowledgment_md = """ +### Terms of Service + +Users are required to agree to the following terms before using the service: + +The service is a research preview. It only provides limited safety measures and may generate offensive content. +It must not be used for any illegal, harmful, violent, racist, or sexual purposes. +Please do not upload any private information. +The service collects user dialogue data, including both text and images, and reserves the right to distribute it under a Creative Commons Attribution (CC-BY) or a similar license. +Additionally, Bard is offered on LMSys for research purposes only. To access the Bard product, please visit its [website](http://bard.google.com). + ### Acknowledgment -
-We thank Kaggle, MBZUAI, AnyScale, and HuggingFace for their sponsorship.
-
-[sponsor logo images]
+We thank [Kaggle](https://www.kaggle.com/), [MBZUAI](https://mbzuai.ac.ae/), [a16z](https://www.a16z.com/), [Together AI](https://www.together.ai/), [Anyscale](https://www.anyscale.com/), [HuggingFace](https://huggingface.co/) for their generous [sponsorship](https://lmsys.org/donations/).
+
+
 """

-ip_expiration_dict = defaultdict(lambda: 0)
-
-# Information about custom OpenAI compatible API models.
-# JSON file format:
+# JSON file format of API-based models:
 # {
-#     "vicuna-7b": {
-#         "model_name": "vicuna-7b-v1.5",
-#         "api_base": "http://8.8.8.55:5555/v1",
-#         "api_key": "password"
-#     },
+#     "gpt-3.5-turbo": {
+#         "model_name": "gpt-3.5-turbo",
+#         "api_type": "openai",
+#         "api_base": "https://api.openai.com/v1",
+#         "api_key": "sk-******",
+#         "anony_only": false
+#     }
 # }
-openai_compatible_models_info = {}
+#
+# - "api_type" can be one of the following: openai, anthropic, gemini, or mistral. For custom APIs, add a new type and implement it accordingly.
+# - "anony_only" indicates whether to display this model in anonymous mode only.
+
+api_endpoint_info = {}
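For illustration, a registration file in the schema documented above could be written and consumed as follows. This is a sketch only: the endpoint URL, key, and file name are placeholders, not working credentials.

```python
# Illustrative sketch: a minimal api_endpoints.json in the documented schema.
# The api_base URL and api_key below are placeholders.
import json

sample_endpoints = {
    "gpt-3.5-turbo": {
        "model_name": "gpt-3.5-turbo",
        "api_type": "openai",
        "api_base": "https://api.openai.com/v1",
        "api_key": "sk-******",
        "anony_only": False,
    }
}

with open("api_endpoints.json", "w") as f:
    json.dump(sample_endpoints, f, indent=2)

# The web server consumes this file via the flag added in this patch:
#   python3 -m fastchat.serve.gradio_web_server \
#       --register-api-endpoint-file api_endpoints.json
loaded = json.load(open("api_endpoints.json"))
assert loaded["gpt-3.5-turbo"]["api_type"] == "openai"
```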


 class State:
@@ -87,11 +102,6 @@ def __init__(self, model_name):
         self.skip_next = False
         self.model_name = model_name

-        if model_name == "palm-2":
-            # According to release note, "chat-bison@001" is PaLM 2 for chat.
-            # https://cloud.google.com/vertex-ai/docs/release-notes#May_10_2023
-            self.palm_chat = init_palm_chat("chat-bison@001")
-
     def to_gradio_chatbot(self):
         return self.conv.to_gradio_chatbot()

@@ -118,42 +128,50 @@ def get_conv_log_filename():
     return name


-def get_model_list(
-    controller_url, register_openai_compatible_models, add_chatgpt, add_claude, add_palm
-):
+def get_model_list(controller_url, register_api_endpoint_file, multimodal):
+    global api_endpoint_info
+
+    # Add models from the controller
     if controller_url:
         ret = requests.post(controller_url + "/refresh_all_workers")
         assert ret.status_code == 200
-        ret = requests.post(controller_url + "/list_models")
-        models = ret.json()["models"]
+
+        if multimodal:
+            ret = requests.post(controller_url + "/list_multimodal_models")
+            models = ret.json()["models"]
+        else:
+            ret = requests.post(controller_url + "/list_language_models")
+            models = ret.json()["models"]
     else:
         models = []

-    # Add API providers
-    if register_openai_compatible_models:
-        global openai_compatible_models_info
-        openai_compatible_models_info = json.load(
-            open(register_openai_compatible_models)
-        )
-        models += list(openai_compatible_models_info.keys())
-
-    if add_chatgpt:
-        models += ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-3.5-turbo-1106"]
-    if add_claude:
-        models += ["claude-2", "claude-instant-1"]
-    if add_palm:
-        models += ["palm-2"]
+    # Add models from the API providers
+    if register_api_endpoint_file:
+        api_endpoint_info = json.load(open(register_api_endpoint_file))
+        for mdl, mdl_dict in api_endpoint_info.items():
+            mdl_multimodal = mdl_dict.get("multimodal", False)
+            if multimodal and mdl_multimodal:
+                models += [mdl]
+            elif not multimodal and not mdl_multimodal:
+                models += [mdl]
+
+    # Remove anonymous models
     models = list(set(models))
+    visible_models = models.copy()
+    for mdl in visible_models:
+        if mdl not in api_endpoint_info:
+            continue
+        mdl_dict = api_endpoint_info[mdl]
+        if mdl_dict["anony_only"]:
+            visible_models.remove(mdl)

-    if "deluxe-chat-v1" in models:
-        del models[models.index("deluxe-chat-v1")]
-    if "deluxe-chat-v1.1" in models:
-        del models[models.index("deluxe-chat-v1.1")]
-
-    priority = {k: f"___{i:02d}" for i, k in
enumerate(model_info)} + # Sort models and add descriptions + priority = {k: f"___{i:03d}" for i, k in enumerate(model_info)} models.sort(key=lambda x: priority.get(x, x)) - logger.info(f"Models: {models}") - return models + visible_models.sort(key=lambda x: priority.get(x, x)) + logger.info(f"All models: {models}") + logger.info(f"Visible models: {visible_models}") + return visible_models, models def load_demo_single(models, url_params): @@ -163,10 +181,7 @@ def load_demo_single(models, url_params): if model in models: selected_model = model - dropdown_update = gr.Dropdown.update( - choices=models, value=selected_model, visible=True - ) - + dropdown_update = gr.Dropdown(choices=models, value=selected_model, visible=True) state = None return state, dropdown_update @@ -176,15 +191,10 @@ def load_demo(url_params, request: gr.Request): ip = get_ip(request) logger.info(f"load_demo. ip: {ip}. params: {url_params}") - ip_expiration_dict[ip] = time.time() + SESSION_EXPIRATION_TIME if args.model_list_mode == "reload": - models = get_model_list( - controller_url, - args.register_openai_compatible_models, - args.add_chatgpt, - args.add_claude, - args.add_palm, + models, all_models = get_model_list( + controller_url, args.register_api_endpoint_file, False ) return load_demo_single(models, url_params) @@ -227,14 +237,14 @@ def regenerate(state, request: gr.Request): ip = get_ip(request) logger.info(f"regenerate. ip: {ip}") state.conv.update_last_message(None) - return (state, state.to_gradio_chatbot(), "") + (disable_btn,) * 5 + return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 5 def clear_history(request: gr.Request): ip = get_ip(request) logger.info(f"clear_history. ip: {ip}") state = None - return (state, [], "") + (disable_btn,) * 5 + return (state, [], "", None) + (disable_btn,) * 5 def get_ip(request: gr.Request): @@ -245,7 +255,22 @@ def get_ip(request: gr.Request): return ip -def add_text(state, model_selector, text, request: gr.Request): +def _prepare_text_with_image(state, text, image): + if image is not None: + if len(state.conv.get_images()) > 0: + # reset convo with new image + state.conv = get_conversation_template(state.model_name) + + image = state.conv.convert_image_to_base64( + image + ) # PIL type is not JSON serializable + + text = text, [image] + + return text + + +def add_text(state, model_selector, text, image, request: gr.Request): ip = get_ip(request) logger.info(f"add_text. ip: {ip}. len: {len(text)}") @@ -262,8 +287,7 @@ def add_text(state, model_selector, text, request: gr.Request): # overwrite the original text text = MODERATION_MSG - conv = state.conv - if (len(conv.messages) - conv.offset) // 2 >= CONVERSATION_TURN_LIMIT: + if (len(state.conv.messages) - state.conv.offset) // 2 >= CONVERSATION_TURN_LIMIT: logger.info(f"conversation turn limit. ip: {ip}. 
text: {text}") state.skip_next = True return (state, state.to_gradio_chatbot(), CONVERSATION_LIMIT_MSG) + ( @@ -271,20 +295,10 @@ def add_text(state, model_selector, text, request: gr.Request): ) * 5 text = text[:INPUT_CHAR_LEN_LIMIT] # Hard cut-off - conv.append_message(conv.roles[0], text) - conv.append_message(conv.roles[1], None) - return (state, state.to_gradio_chatbot(), "") + (disable_btn,) * 5 - - -def post_process_code(code): - sep = "\n```" - if sep in code: - blocks = code.split(sep) - if len(blocks) % 2 == 1: - for i in range(1, len(blocks), 2): - blocks[i] = blocks[i].replace("\\_", "_") - code = sep.join(blocks) - return code + text = _prepare_text_with_image(state, text, image) + state.conv.append_message(state.conv.roles[0], text) + state.conv.append_message(state.conv.roles[1], None) + return (state, state.to_gradio_chatbot(), "", None) + (disable_btn,) * 5 def model_worker_stream_iter( @@ -296,6 +310,7 @@ def model_worker_stream_iter( repetition_penalty, top_p, max_new_tokens, + images, ): # Make requests gen_params = { @@ -309,8 +324,12 @@ def model_worker_stream_iter( "stop_token_ids": conv.stop_token_ids, "echo": False, } + logger.info(f"==== request ====\n{gen_params}") + if len(images) > 0: + gen_params["images"] = images + # Stream output response = requests.post( worker_addr + "/worker_generate_stream", @@ -325,7 +344,27 @@ def model_worker_stream_iter( yield data -def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request): +def is_limit_reached(model_name, ip): + monitor_url = "http://localhost:9090" + try: + ret = requests.get( + f"{monitor_url}/is_limit_reached?model={model_name}&user_id={ip}", timeout=1 + ) + obj = ret.json() + return obj + except Exception as e: + logger.info(f"monitor error: {e}") + return None + + +def bot_response( + state, + temperature, + top_p, + max_new_tokens, + request: gr.Request, + apply_rate_limit=True, +): ip = get_ip(request) logger.info(f"bot_response. ip: {ip}") start_tstamp = time.time() @@ -339,34 +378,22 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) yield (state, state.to_gradio_chatbot()) + (no_change_btn,) * 5 return + if apply_rate_limit: + ret = is_limit_reached(state.model_name, ip) + if ret is not None and ret["is_limit_reached"]: + error_msg = RATE_LIMIT_MSG + "\n\n" + ret["reason"] + logger.info(f"rate limit reached. ip: {ip}. 
error_msg: {ret['reason']}") + state.conv.update_last_message(error_msg) + yield (state, state.to_gradio_chatbot()) + (no_change_btn,) * 5 + return + conv, model_name = state.conv, state.model_name - if model_name in ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-3.5-turbo-1106"]: - prompt = conv.to_openai_api_messages() - stream_iter = openai_api_stream_iter( - model_name, prompt, temperature, top_p, max_new_tokens - ) - elif model_name in ["claude-2", "claude-1", "claude-instant-1"]: - prompt = conv.get_prompt() - stream_iter = anthropic_api_stream_iter( - model_name, prompt, temperature, top_p, max_new_tokens - ) - elif model_name == "palm-2": - stream_iter = palm_api_stream_iter( - state.palm_chat, conv.messages[-2][1], temperature, top_p, max_new_tokens - ) - elif model_name in openai_compatible_models_info: - model_info = openai_compatible_models_info[model_name] - prompt = conv.to_openai_api_messages() - stream_iter = openai_api_stream_iter( - model_info["model_name"], - prompt, - temperature, - top_p, - max_new_tokens, - api_base=model_info["api_base"], - api_key=model_info["api_key"], - ) - else: + model_api_dict = ( + api_endpoint_info[model_name] if model_name in api_endpoint_info else None + ) + images = conv.get_images() + + if model_api_dict is None: # Query worker address ret = requests.post( controller_url + "/get_worker_address", json={"model": model_name} @@ -407,6 +434,16 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) repetition_penalty, top_p, max_new_tokens, + images, + ) + else: + stream_iter = get_api_provider_stream_iter( + conv, + model_name, + model_api_dict, + temperature, + top_p, + max_new_tokens, ) conv.update_last_message("▌") @@ -430,8 +467,6 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) ) return output = data["text"].strip() - if "vicuna" in model_name: - output = post_process_code(output) conv.update_last_message(output) yield (state, state.to_gradio_chatbot()) + (enable_btn,) * 5 except requests.exceptions.RequestException as e: @@ -464,6 +499,20 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) finish_tstamp = time.time() logger.info(f"{output}") + # We load the image because gradio accepts base64 but that increases file size by ~1.33x + loaded_images = [load_image(image) for image in images] + images_hash = [hashlib.md5(image.tobytes()).hexdigest() for image in loaded_images] + for image, hash_str in zip(loaded_images, images_hash): + t = datetime.datetime.now() + filename = os.path.join( + LOGDIR, + "serve_images", + f"{hash_str}.jpg", + ) + if not os.path.isfile(filename): + os.makedirs(os.path.dirname(filename), exist_ok=True) + image.save(filename) + with open(get_conv_log_filename(), "a") as fout: data = { "tstamp": round(finish_tstamp, 4), @@ -478,13 +527,14 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) "finish": round(finish_tstamp, 4), "state": state.dict(), "ip": get_ip(request), + "images": images_hash, } fout.write(json.dumps(data) + "\n") block_css = """ -#notice_markdown { - font-size: 110% +#notice_markdown .prose { + font-size: 120% !important; } #notice_markdown th { display: none; @@ -493,8 +543,11 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) padding-top: 6px; padding-bottom: 6px; } -#leaderboard_markdown { - font-size: 110% +#model_description_markdown { + font-size: 120% !important; +} +#leaderboard_markdown .prose { + font-size: 120% !important; } 
#leaderboard_markdown td { padding-top: 6px; @@ -503,13 +556,22 @@ def bot_response(state, temperature, top_p, max_new_tokens, request: gr.Request) #leaderboard_dataframe td { line-height: 0.1em; } -#about_markdown { - font-size: 110% +#about_markdown .prose { + font-size: 120% !important; } -#input_box textarea { +#ack_markdown .prose { + font-size: 120% !important; } footer { - display:none !important + display:none !important; +} +.sponsor-image-about img { + margin: 0 20px; + margin-top: 20px; + height: 40px; + max-height: 100%; + width: auto; + float: left; } .image-container { display: flex; @@ -558,9 +620,9 @@ def get_model_description_md(models): def build_about(): - about_markdown = f""" + about_markdown = """ # About Us -Chatbot Arena is an open-source research project developed by members from [LMSYS](https://lmsys.org/about/) and UC Berkeley [SkyLab](https://sky.cs.berkeley.edu/). Our mission is to build an open crowdsourced platform to collect human feedback and evaluate LLMs under real-world scenarios. We open-source our code at [GitHub](https://github.com/lm-sys/FastChat) and release chat and human feedback datasets [here](https://github.com/lm-sys/FastChat/blob/main/docs/dataset_release.md). We invite everyone to join us in this journey! +Chatbot Arena is an open-source research project developed by members from [LMSYS](https://lmsys.org/about/) and UC Berkeley [SkyLab](https://sky.cs.berkeley.edu/). Our mission is to build an open crowdsourced platform to collect human feedback and evaluate LLMs under real-world scenarios. We open-source our [FastChat](https://github.com/lm-sys/FastChat) project at GitHub and release chat and human feedback datasets [here](https://github.com/lm-sys/FastChat/blob/main/docs/dataset_release.md). We invite everyone to join us in this journey! ## Read More - Chatbot Arena [launch post](https://lmsys.org/blog/2023-05-03-arena/), [data release](https://lmsys.org/blog/2023-07-20-dataset/) @@ -577,23 +639,21 @@ def build_about(): - File issues on [GitHub](https://github.com/lm-sys/FastChat) - Download our datasets and models on [HuggingFace](https://huggingface.co/lmsys) -## Sponsors -We thank [Kaggle](https://www.kaggle.com/), [MBZUAI](https://mbzuai.ac.ae/), [Anyscale](https://www.anyscale.com/), [HuggingFace](https://huggingface.co/) for their generous sponsorship. -Learn more about partnership [here](https://lmsys.org/donations/). - -
-[sponsor logo images]
+## Acknowledgment
+We thank [SkyPilot](https://github.com/skypilot-org/skypilot) and [Gradio](https://github.com/gradio-app/gradio) team for their system support.
+We also thank [Kaggle](https://www.kaggle.com/), [MBZUAI](https://mbzuai.ac.ae/), [a16z](https://www.a16z.com/), [Together AI](https://www.together.ai/), [Anyscale](https://www.anyscale.com/), [HuggingFace](https://huggingface.co/) for their generous sponsorship. Learn more about partnership [here](https://lmsys.org/donations/).
+
+
 """

-
-    # state = gr.State()
     gr.Markdown(about_markdown, elem_id="about_markdown")

-    # return [state]
-

 def build_single_model_ui(models, add_promotion_links=False):
     promotion = (
@@ -601,6 +661,8 @@
         - | [GitHub](https://github.com/lm-sys/FastChat) | [Dataset](https://github.com/lm-sys/FastChat/blob/main/docs/dataset_release.md) | [Twitter](https://twitter.com/lmsysorg) | [Discord](https://discord.gg/HSWAKCrnFx) |
         - Introducing Llama 2: The Next Generation Open Source Large Language Model. [[Website]](https://ai.meta.com/llama/)
         - Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality. [[Blog]](https://lmsys.org/blog/2023-03-30-vicuna/)
+
+## 🤖 Choose any model to chat
         """
         if add_promotion_links
         else ""
@@ -609,38 +671,42 @@
     notice_markdown = f"""
 # 🏔️ Chat with Open Large Language Models
 {promotion}
-
-## Choose any model to chat
 """

     state = gr.State()
-    model_description_md = get_model_description_md(models)
-    gr.Markdown(notice_markdown + model_description_md, elem_id="notice_markdown")
-
-    with gr.Row(elem_id="model_selector_row"):
-        model_selector = gr.Dropdown(
-            choices=models,
-            value=models[0] if len(models) > 0 else "",
-            interactive=True,
-            show_label=False,
-            container=False,
-        )
-
-    chatbot = gr.Chatbot(
-        elem_id="chatbot",
-        label="Scroll down and start chatting",
-        height=550,
-    )
-    with gr.Row():
-        with gr.Column(scale=20):
-            textbox = gr.Textbox(
+    gr.Markdown(notice_markdown, elem_id="notice_markdown")
+
+    with gr.Group(elem_id="share-region-named"):
+        with gr.Row(elem_id="model_selector_row"):
+            model_selector = gr.Dropdown(
+                choices=models,
+                value=models[0] if len(models) > 0 else "",
+                interactive=True,
                 show_label=False,
-                placeholder="👉 Enter your prompt and press ENTER",
                 container=False,
                 elem_id="input_box",
             )
-        with gr.Column(scale=1, min_width=50):
-            send_btn = gr.Button(value="Send", variant="primary")
+        with gr.Row():
+            with gr.Accordion(
+                f"🔍 Expand to see the descriptions of {len(models)} models",
+                open=False,
+            ):
+                model_description_md = get_model_description_md(models)
+                gr.Markdown(model_description_md, elem_id="model_description_markdown")
+
+    chatbot = gr.Chatbot(
+        elem_id="chatbot",
+        label="Scroll down and start chatting",
+        height=550,
+        show_copy_button=True,
+    )
+    with gr.Row():
+        textbox = gr.Textbox(
+            show_label=False,
+            placeholder="👉 Enter your prompt and press ENTER",
+            elem_id="input_box",
+        )
+        send_btn = gr.Button(value="Send", variant="primary", scale=0)

     with gr.Row() as button_row:
         upvote_btn = gr.Button(value="👍 Upvote", interactive=False)
         downvote_btn = gr.Button(value="👎 Downvote", interactive=False)
         flag_btn = gr.Button(value="⚠️ Flag", interactive=False)
         regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False)
         clear_btn = gr.Button(value="🗑️ Clear", interactive=False)

     with gr.Accordion("Parameters", open=False) as parameter_row:
         temperature = gr.Slider(
             minimum=0.0,
             maximum=1.0,
             value=0.7,
             step=0.1,
             interactive=True,
             label="Temperature",
         )
         top_p = gr.Slider(
             minimum=0.0,
             maximum=1.0,
             value=1.0,
             step=0.1,
             interactive=True,
             label="Top P",
         )
         max_output_tokens = gr.Slider(
             minimum=16,
-            maximum=1024,
-            value=512,
+            maximum=2048,
+            value=1024,
             step=64,
             interactive=True,
             label="Max output tokens",
         )

     if add_promotion_links:
-        gr.Markdown(acknowledgment_md)
+
gr.Markdown(acknowledgment_md, elem_id="ack_markdown") # Register listeners + imagebox = gr.State(None) btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, clear_btn] upvote_btn.click( upvote_last_response, @@ -695,17 +762,23 @@ def build_single_model_ui(models, add_promotion_links=False): [state, model_selector], [textbox, upvote_btn, downvote_btn, flag_btn], ) - regenerate_btn.click(regenerate, state, [state, chatbot, textbox] + btn_list).then( + regenerate_btn.click( + regenerate, state, [state, chatbot, textbox, imagebox] + btn_list + ).then( bot_response, [state, temperature, top_p, max_output_tokens], [state, chatbot] + btn_list, ) - clear_btn.click(clear_history, None, [state, chatbot, textbox] + btn_list) + clear_btn.click(clear_history, None, [state, chatbot, textbox, imagebox] + btn_list) - model_selector.change(clear_history, None, [state, chatbot, textbox] + btn_list) + model_selector.change( + clear_history, None, [state, chatbot, textbox, imagebox] + btn_list + ) textbox.submit( - add_text, [state, model_selector, textbox], [state, chatbot, textbox] + btn_list + add_text, + [state, model_selector, textbox, imagebox], + [state, chatbot, textbox, imagebox] + btn_list, ).then( bot_response, [state, temperature, top_p, max_output_tokens], @@ -713,8 +786,8 @@ def build_single_model_ui(models, add_promotion_links=False): ) send_btn.click( add_text, - [state, model_selector, textbox], - [state, chatbot, textbox] + btn_list, + [state, model_selector, textbox, imagebox], + [state, chatbot, textbox, imagebox] + btn_list, ).then( bot_response, [state, temperature, top_p, max_output_tokens], @@ -749,7 +822,7 @@ def build_demo(models): state, model_selector, ], - _js=load_js, + js=load_js, ) return demo @@ -794,41 +867,27 @@ def build_demo(models): help="Shows term of use before loading the demo", ) parser.add_argument( - "--add-chatgpt", - action="store_true", - help="Add OpenAI's ChatGPT models (gpt-3.5-turbo, gpt-4)", - ) - parser.add_argument( - "--add-claude", - action="store_true", - help="Add Anthropic's Claude models (claude-2, claude-instant-1)", - ) - parser.add_argument( - "--add-palm", - action="store_true", - help="Add Google's PaLM model (PaLM 2 for Chat: chat-bison@001)", - ) - parser.add_argument( - "--register-openai-compatible-models", + "--register-api-endpoint-file", type=str, - help="Register custom OpenAI API compatible models by loading them from a JSON file", + help="Register API-based model endpoints from a JSON file", ) parser.add_argument( "--gradio-auth-path", type=str, help='Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3"', ) + parser.add_argument( + "--gradio-root-path", + type=str, + help="Sets the gradio root path, eg /abc/def. 
Useful when running behind a reverse-proxy or at a custom URL path prefix", + ) args = parser.parse_args() logger.info(f"args: {args}") # Set global variables set_global_vars(args.controller_url, args.moderate) - models = get_model_list( - args.controller_url, - args.register_openai_compatible_models, - args.add_chatgpt, - args.add_claude, - args.add_palm, + models, all_models = get_model_list( + args.controller_url, args.register_api_endpoint_file, False ) # Set authorization credentials @@ -839,11 +898,14 @@ def build_demo(models): # Launch the demo demo = build_demo(models) demo.queue( - concurrency_count=args.concurrency_count, status_update_rate=10, api_open=False + default_concurrency_limit=args.concurrency_count, + status_update_rate=10, + api_open=False, ).launch( server_name=args.host, server_port=args.port, share=args.share, max_threads=200, auth=auth, + root_path=args.gradio_root_path, ) diff --git a/fastchat/serve/gradio_web_server_multi.py b/fastchat/serve/gradio_web_server_multi.py index b918f9d6b..538d7776b 100644 --- a/fastchat/serve/gradio_web_server_multi.py +++ b/fastchat/serve/gradio_web_server_multi.py @@ -9,9 +9,6 @@ import gradio as gr -from fastchat.constants import ( - SESSION_EXPIRATION_TIME, -) from fastchat.serve.gradio_block_arena_anony import ( build_side_by_side_ui_anony, load_demo_side_by_side_anony, @@ -22,6 +19,9 @@ load_demo_side_by_side_named, set_global_vars_named, ) +from fastchat.serve.gradio_block_arena_vision import ( + build_single_vision_language_model_ui, +) from fastchat.serve.gradio_web_server import ( set_global_vars, block_css, @@ -29,7 +29,6 @@ build_about, get_model_list, load_demo_single, - ip_expiration_dict, get_ip, ) from fastchat.serve.monitor.monitor import build_leaderboard_tab @@ -44,74 +43,78 @@ def load_demo(url_params, request: gr.Request): - global models + global models, all_models, vl_models ip = get_ip(request) logger.info(f"load_demo. ip: {ip}. params: {url_params}") - ip_expiration_dict[ip] = time.time() + SESSION_EXPIRATION_TIME selected = 0 if "arena" in url_params: selected = 0 elif "compare" in url_params: selected = 1 - elif "single" in url_params: + elif "direct" in url_params or "model" in url_params: selected = 2 - elif "leaderboard" in url_params: + elif "vision" in url_params: selected = 3 + elif "leaderboard" in url_params: + selected = 4 if args.model_list_mode == "reload": - if args.anony_only_for_proprietary_model: - models = get_model_list( - args.controller_url, - args.register_openai_compatible_models, - False, - False, - False, - ) - else: - models = get_model_list( - args.controller_url, - args.register_openai_compatible_models, - args.add_chatgpt, - args.add_claude, - args.add_palm, - ) + models, all_models = get_model_list( + args.controller_url, + args.register_api_endpoint_file, + False, + ) - single_updates = load_demo_single(models, url_params) + vl_models, all_vl_models = get_model_list( + args.controller_url, + args.register_api_endpoint_file, + True, + ) - models_anony = list(models) - if args.anony_only_for_proprietary_model: - # Only enable these models in anony battles. 
- if args.add_chatgpt: - models_anony += [ - "gpt-4", - "gpt-3.5-turbo", - "gpt-4-turbo", - "gpt-3.5-turbo-1106", - ] - if args.add_claude: - models_anony += ["claude-2", "claude-1", "claude-instant-1"] - if args.add_palm: - models_anony += ["palm-2"] - models_anony = list(set(models_anony)) - - side_by_side_anony_updates = load_demo_side_by_side_anony(models_anony, url_params) + single_updates = load_demo_single(models, url_params) + side_by_side_anony_updates = load_demo_side_by_side_anony(all_models, url_params) side_by_side_named_updates = load_demo_side_by_side_named(models, url_params) + vision_language_updates = load_demo_single(vl_models, url_params) + return ( - (gr.Tabs.update(selected=selected),) + (gr.Tabs(selected=selected),) + single_updates + side_by_side_anony_updates + side_by_side_named_updates + + vision_language_updates ) -def build_demo(models, elo_results_file, leaderboard_table_file): +def build_demo(models, vl_models, elo_results_file, leaderboard_table_file): text_size = gr.themes.sizes.text_md + if args.show_terms_of_use: + load_js = get_window_url_params_with_tos_js + else: + load_js = get_window_url_params_js + + head_js = """ + +""" + if args.ga_id is not None: + head_js += f""" + + + """ + with gr.Blocks( title="Chat with Open Large Language Models", theme=gr.themes.Default(text_size=text_size), css=block_css, + head=head_js, ) as demo: with gr.Tabs() as tabs: with gr.Tab("Arena (battle)", id=0): @@ -124,30 +127,39 @@ def build_demo(models, elo_results_file, leaderboard_table_file): single_model_list = build_single_model_ui( models, add_promotion_links=True ) + + with gr.Tab("Vision Direct Chat", id=3, visible=args.multimodal): + single_vision_language_model_list = ( + build_single_vision_language_model_ui( + vl_models, + add_promotion_links=True, + random_questions=args.random_questions, + ) + ) + if elo_results_file: - with gr.Tab("Leaderboard", id=3): + with gr.Tab("Leaderboard", id=4): build_leaderboard_tab(elo_results_file, leaderboard_table_file) with gr.Tab("About Us", id=4): about = build_about() + with gr.Tab("About Us", id=5): + about = build_about() + url_params = gr.JSON(visible=False) if args.model_list_mode not in ["once", "reload"]: raise ValueError(f"Unknown model list mode: {args.model_list_mode}") - if args.show_terms_of_use: - load_js = get_window_url_params_with_tos_js - else: - load_js = get_window_url_params_js - demo.load( load_demo, [url_params], [tabs] + single_model_list + side_by_side_anony_list - + side_by_side_named_list, - _js=load_js, + + side_by_side_named_list + + single_vision_language_model_list, + js=load_js, ) return demo @@ -192,29 +204,15 @@ def build_demo(models, elo_results_file, leaderboard_table_file): help="Shows term of use before loading the demo", ) parser.add_argument( - "--add-chatgpt", - action="store_true", - help="Add OpenAI's ChatGPT models (gpt-3.5-turbo, gpt-4)", - ) - parser.add_argument( - "--add-claude", - action="store_true", - help="Add Anthropic's Claude models (claude-2, claude-instant-1)", + "--multimodal", action="store_true", help="Show multi modal tabs." 
) parser.add_argument( - "--add-palm", - action="store_true", - help="Add Google's PaLM model (PaLM 2 for Chat: chat-bison@001)", + "--random-questions", type=str, help="Load random questions from a JSON file" ) parser.add_argument( - "--anony-only-for-proprietary-model", - action="store_true", - help="Only add ChatGPT, Claude, Bard under anony battle tab", - ) - parser.add_argument( - "--register-openai-compatible-models", + "--register-api-endpoint-file", type=str, - help="Register custom OpenAI API compatible models by loading them from a JSON file", + help="Register API-based model endpoints from a JSON file", ) parser.add_argument( "--gradio-auth-path", @@ -228,6 +226,17 @@ def build_demo(models, elo_results_file, leaderboard_table_file): parser.add_argument( "--leaderboard-table-file", type=str, help="Load leaderboard results and plots" ) + parser.add_argument( + "--gradio-root-path", + type=str, + help="Sets the gradio root path, eg /abc/def. Useful when running behind a reverse-proxy or at a custom URL path prefix", + ) + parser.add_argument( + "--ga-id", + type=str, + help="the Google Analytics ID", + default=None, + ) args = parser.parse_args() logger.info(f"args: {args}") @@ -235,22 +244,17 @@ def build_demo(models, elo_results_file, leaderboard_table_file): set_global_vars(args.controller_url, args.moderate) set_global_vars_named(args.moderate) set_global_vars_anony(args.moderate) - if args.anony_only_for_proprietary_model: - models = get_model_list( - args.controller_url, - args.register_openai_compatible_models, - False, - False, - False, - ) - else: - models = get_model_list( - args.controller_url, - args.register_openai_compatible_models, - args.add_chatgpt, - args.add_claude, - args.add_palm, - ) + models, all_models = get_model_list( + args.controller_url, + args.register_api_endpoint_file, + False, + ) + + vl_models, all_vl_models = get_model_list( + args.controller_url, + args.register_api_endpoint_file, + True, + ) # Set authorization credentials auth = None @@ -258,13 +262,21 @@ def build_demo(models, elo_results_file, leaderboard_table_file): auth = parse_gradio_auth_creds(args.gradio_auth_path) # Launch the demo - demo = build_demo(models, args.elo_results_file, args.leaderboard_table_file) + demo = build_demo( + models, + vl_models, + args.elo_results_file, + args.leaderboard_table_file, + ) demo.queue( - concurrency_count=args.concurrency_count, status_update_rate=10, api_open=False + default_concurrency_limit=args.concurrency_count, + status_update_rate=10, + api_open=False, ).launch( server_name=args.host, server_port=args.port, share=args.share, max_threads=200, auth=auth, + root_path=args.gradio_root_path, ) diff --git a/fastchat/serve/huggingface_api.py b/fastchat/serve/huggingface_api.py index 2a49bf5f1..8022fbc93 100644 --- a/fastchat/serve/huggingface_api.py +++ b/fastchat/serve/huggingface_api.py @@ -61,7 +61,7 @@ def main(args): add_model_args(parser) parser.add_argument("--temperature", type=float, default=0.7) parser.add_argument("--repetition_penalty", type=float, default=1.0) - parser.add_argument("--max-new-tokens", type=int, default=512) + parser.add_argument("--max-new-tokens", type=int, default=1024) parser.add_argument("--debug", action="store_true") parser.add_argument("--message", type=str, default="Hello! 
Who are you?") args = parser.parse_args() diff --git a/fastchat/serve/huggingface_api_worker.py b/fastchat/serve/huggingface_api_worker.py index 2d0611fe5..6ed8e6c8c 100644 --- a/fastchat/serve/huggingface_api_worker.py +++ b/fastchat/serve/huggingface_api_worker.py @@ -4,12 +4,18 @@ Register models in a JSON file with the following format: { "falcon-180b-chat": { - "model_path": "tiiuae/falcon-180B-chat", + "model_name": "falcon-180B-chat", "api_base": "https://api-inference.huggingface.co/models", - "token": "hf_xxx", - "context_length": 2048, - "model_names": "falcon-180b-chat", - "conv_template": null + "model_path": "tiiuae/falcon-180B-chat", + "token": "hf_XXX", + "context_length": 2048 + }, + "zephyr-7b-beta": { + "model_name": "zephyr-7b-beta", + "model_path": "", + "api_base": "xxx", + "token": "hf_XXX", + "context_length": 4096 } } diff --git a/fastchat/serve/lightllm_worker.py b/fastchat/serve/lightllm_worker.py new file mode 100644 index 000000000..ed0e21b68 --- /dev/null +++ b/fastchat/serve/lightllm_worker.py @@ -0,0 +1,512 @@ +""" +A model worker that executes the model based on LightLLM. + +See documentations at docs/lightllm_integration.md +""" + +import argparse +import asyncio +import json +import os +import torch +import uvicorn + +from transformers import AutoConfig + +from typing import List + +from fastapi import FastAPI, Request, BackgroundTasks +from fastapi.responses import StreamingResponse, JSONResponse + +from fastchat.serve.base_model_worker import BaseModelWorker +from fastchat.serve.model_worker import ( + logger, + worker_id, +) + +from lightllm.server.sampling_params import SamplingParams +from lightllm.server.multimodal_params import MultimodalParams +from lightllm.server.httpserver.manager import HttpServerManager +from lightllm.server.detokenization.manager import start_detokenization_process +from lightllm.server.router.manager import start_router_process +from lightllm.server.req_id_generator import ReqIDGenerator + +from lightllm.utils.net_utils import alloc_can_use_network_port +from lightllm.utils.start_utils import start_submodule_processes +from fastchat.utils import get_context_length, is_partial_stop + +app = FastAPI() +g_id_gen = ReqIDGenerator() + + +class LightLLMWorker(BaseModelWorker): + def __init__( + self, + controller_addr: str, + worker_addr: str, + worker_id: str, + model_path: str, + model_names: List[str], + limit_worker_concurrency: int, + no_register: bool, + conv_template: str, + tokenizer, + context_len, + ): + super().__init__( + controller_addr, + worker_addr, + worker_id, + model_path, + model_names, + limit_worker_concurrency, + conv_template, + ) + + logger.info( + f"Loading the model {self.model_names} on worker {worker_id}, worker type: LightLLM worker..." 
+ ) + self.tokenizer = tokenizer + self.context_len = context_len + + self.is_first = True + + if not no_register: + self.init_heart_beat() + + async def generate_stream(self, params): + self.call_ct += 1 + + prompt = params.pop("prompt") + request_id = params.pop("request_id") + temperature = float(params.get("temperature", 1.0)) + top_p = float(params.get("top_p", 1.0)) + top_k = params.get("top_k", -1.0) + presence_penalty = float(params.get("presence_penalty", 0.0)) + frequency_penalty = float(params.get("frequency_penalty", 0.0)) + repetition_penalty = float(params.get("repetition_penalty", 1.0)) + max_new_tokens = params.get("max_new_tokens", 256) + echo = params.get("echo", True) + stop_str = params.get("stop", None) + stop_token_ids = params.get("stop_token_ids", None) or [] + if self.tokenizer.eos_token_id is not None: + stop_token_ids.append(self.tokenizer.eos_token_id) + + request = params.get("request", None) + + # Handle stop_str + stop = set() + if isinstance(stop_str, str) and stop_str != "": + stop.add(stop_str) + elif isinstance(stop_str, list) and stop_str != []: + stop.update(stop_str) + + for tid in stop_token_ids: + if tid is not None: + s = self.tokenizer.decode(tid) + if s != "": + stop.add(s) + + if self.is_first: + loop = asyncio.get_event_loop() + loop.create_task(httpserver_manager.handle_loop()) + self.is_first = False + + # make sampling params in vllm + top_p = max(top_p, 1e-5) + if temperature <= 1e-5: + top_p = 1.0 + + sampling_params = SamplingParams( + do_sample=temperature > 0.0, + temperature=temperature, + top_p=top_p, + top_k=top_k, + presence_penalty=presence_penalty, + frequency_penalty=frequency_penalty, + repetition_penalty=repetition_penalty, + max_new_tokens=max_new_tokens, + stop_sequences=list(stop), + ) + sampling_params.verify() + + results_generator = httpserver_manager.generate( + prompt, sampling_params, request_id, MultimodalParams() + ) + + completion_tokens = 0 + text_outputs = "" + cumulative_logprob = 0.0 + + async for request_output, metadata, finish_status in results_generator: + text_outputs += request_output + completion_tokens += 1 + + partial_stop = any(is_partial_stop(text_outputs, i) for i in stop) + # prevent yielding partial stop sequence + if partial_stop: + continue + + if type(finish_status) is bool: # compatibility with old version + finish_reason = "stop" if finish_status else None + else: + finish_reason = finish_status.get_finish_reason() + + if request and await request.is_disconnected(): + await httpserver_manager.abort(request_id) + finish_reason = "abort" + + logprob = metadata.get("logprob", None) + if logprob is not None: + cumulative_logprob += logprob + + prompt_tokens = metadata["prompt_tokens"] + ret = { + "text": prompt + text_outputs if echo else text_outputs, + "error_code": 0, + "usage": { + "prompt_tokens": prompt_tokens, + "completion_tokens": completion_tokens, + "total_tokens": prompt_tokens + completion_tokens, + }, + "cumulative_logprob": cumulative_logprob, + } + + if finish_reason is not None: + yield ( + json.dumps({**ret, "finish_reason": None}, ensure_ascii=False) + + "\0" + ).encode("utf-8") + yield ( + json.dumps({**ret, "finish_reason": finish_reason}, ensure_ascii=False) + + "\0" + ).encode("utf-8") + + if finish_reason is not None: # In case of abort, we need to break the loop + break + + async def generate(self, params): + async for x in self.generate_stream(params): + pass + return json.loads(x[:-1].decode()) + + +def release_worker_semaphore(): + worker.semaphore.release() + + +def 
acquire_worker_semaphore():
+    if worker.semaphore is None:
+        worker.semaphore = asyncio.Semaphore(worker.limit_worker_concurrency)
+    return worker.semaphore.acquire()
+
+
+def create_background_tasks(request_id):
+    async def abort_request() -> None:
+        await httpserver_manager.abort(request_id)
+
+    background_tasks = BackgroundTasks()
+    background_tasks.add_task(release_worker_semaphore)
+    background_tasks.add_task(abort_request)
+    return background_tasks
+
+
+@app.post("/worker_generate_stream")
+async def api_generate_stream(request: Request):
+    params = await request.json()
+    await acquire_worker_semaphore()
+    request_id = g_id_gen.generate_id()
+    params["request_id"] = request_id
+    params["request"] = request
+    generator = worker.generate_stream(params)
+    background_tasks = create_background_tasks(request_id)
+    return StreamingResponse(generator, background=background_tasks)
+
+
+@app.post("/worker_generate")
+async def api_generate(request: Request):
+    params = await request.json()
+    await acquire_worker_semaphore()
+    request_id = g_id_gen.generate_id()
+    params["request_id"] = request_id
+    params["request"] = request
+    output = await worker.generate(params)
+    release_worker_semaphore()
+    await httpserver_manager.abort(request_id)
+    return JSONResponse(output)
+
+
+@app.post("/worker_get_status")
+async def api_get_status(request: Request):
+    return worker.get_status()
+
+
+@app.post("/count_token")
+async def api_count_token(request: Request):
+    params = await request.json()
+    return worker.count_token(params)
+
+
+@app.post("/worker_get_conv_template")
+async def api_get_conv(request: Request):
+    return worker.get_conv_template()
+
+
+@app.post("/model_details")
+async def api_model_details(request: Request):
+    return {"context_length": worker.context_len}
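For context, these endpoints follow FastChat's usual worker protocol: `/worker_generate_stream` yields JSON chunks terminated by NUL bytes (the `json.dumps(...) + "\0"` framing in `generate_stream` above, consumed with `iter_lines(..., delimiter=b"\0")` in `gradio_web_server.py`). Below is a minimal client sketch, assuming a worker on the default address; the model name and generation parameters are placeholder values.

```python
# Minimal client sketch for the worker's streaming endpoint. It mirrors how
# model_worker_stream_iter consumes the stream: JSON chunks delimited by "\0".
# The worker address, model name, and parameters below are placeholders.
import json

import requests

worker_addr = "http://localhost:21002"  # assumed local worker (--worker-address default)
gen_params = {
    "model": "my-model",  # hypothetical model name
    "prompt": "Hello! Who are you?",
    "temperature": 0.7,
    "top_p": 1.0,
    "max_new_tokens": 64,
    "echo": False,
    "stop": None,
}

response = requests.post(
    worker_addr + "/worker_generate_stream",
    headers={"User-Agent": "FastChat Client"},
    json=gen_params,
    stream=True,
    timeout=60,
)
for chunk in response.iter_lines(decode_unicode=False, delimiter=b"\0"):
    if chunk:
        data = json.loads(chunk.decode("utf-8"))
        # Each chunk carries the text generated so far plus usage counters.
        print(data["text"])
```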
+
+
+if __name__ == "__main__":
+    torch.multiprocessing.set_start_method("spawn")
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--host", type=str, default="127.0.0.1")
+    parser.add_argument("--port", type=int, default=8000)
+
+    parser.add_argument(
+        "--model-path",
+        dest="model_dir",
+        type=str,
+        default=None,
+        help="the model weight dir path; the app will load config, weights, and tokenizer from this dir",
+    )
+    parser.add_argument("--worker-address", type=str, default="http://localhost:21002")
+    parser.add_argument(
+        "--controller-address", type=str, default="http://localhost:21001"
+    )
+    parser.add_argument(
+        "--conv-template", type=str, default=None, help="Conversation prompt template."
+    )
+    parser.add_argument(
+        "--model-names",
+        type=lambda s: s.split(","),
+        help="Optional display comma separated names",
+    )
+    parser.add_argument("--limit-worker-concurrency", type=int, default=1024)
+    parser.add_argument("--no-register", action="store_true")
+
+    parser.add_argument(
+        "--tokenizer_mode",
+        type=str,
+        default="slow",
+        help="""tokenizer load mode, can be slow or auto. Slow mode loads fast but runs slowly;
+        it is good for debugging and testing. For the best performance, try auto mode""",
+    )
+    parser.add_argument(
+        "--load_way",
+        type=str,
+        default="HF",
+        help="the way of loading model weights; the default is HF (Huggingface format). Llama also supports DS (Deepspeed)",
+    )
+    parser.add_argument(
+        "--max_total_token_num",
+        type=int,
+        default=6000,
+        help="the total number of tokens the GPU and model can support, i.e. max_batch * (input_len + output_len)",
+    )
+    parser.add_argument(
+        "--batch_max_tokens",
+        type=int,
+        default=None,
+        help="max token count for a new batch; it controls the prefill batch size to prevent OOM",
+    )
+    parser.add_argument("--eos_id", type=int, default=2, help="eos stop token id")
+    parser.add_argument(
+        "--running_max_req_size",
+        type=int,
+        default=1000,
+        help="the max number of forward requests served at the same time",
+    )
+    parser.add_argument(
+        "--tp", type=int, default=1, help="model tensor-parallel size, the default is 1"
+    )
+    parser.add_argument(
+        "--max_req_input_len",
+        type=int,
+        default=None,
+        help="the max value for req input tokens num. If None, it will be derived from the config.",
+    )
+    parser.add_argument(
+        "--max_req_total_len",
+        type=int,
+        default=None,
+        help="the max value for req_input_len + req_output_len. If None, it will be derived from the config.",
+    )
+    parser.add_argument(
+        "--mode",
+        type=str,
+        default=[],
+        nargs="+",
+        help="""Model mode: [triton_int8kv | ppl_int8kv | ppl_fp16 | triton_flashdecoding
+        | triton_gqa_attention | triton_gqa_flashdecoding]
+        [triton_int8weight | triton_int4weight | lmdeploy_int4weight | ppl_int4weight],
+        triton_flashdecoding mode is for long context; it currently supports llama, llama2, and qwen;
+        triton_gqa_attention and triton_gqa_flashdecoding are fast kernels for models that use GQA;
+        triton_int8kv mode uses int8 to store the kv cache, which can increase token capacity, and uses triton kernels;
+        ppl_int8kv mode uses int8 to store the kv cache and uses ppl fast kernels;
+        ppl_fp16 mode uses the ppl fast fp16 decode attention kernel;
+        triton_int8weight, triton_int4weight, lmdeploy_int4weight, and ppl_int4weight modes use int8 and int4 to store weights;
+        you need to read the source code to confirm the supported modes for each model""",
+    )
+    parser.add_argument(
+        "--trust_remote_code",
+        action="store_true",
+        help="Whether or not to allow for custom models defined on the Hub in their own modeling files.",
+    )
+    parser.add_argument(
+        "--disable_log_stats",
+        action="store_true",
+        help="disable logging throughput stats.",
+    )
+    parser.add_argument(
+        "--log_stats_interval",
+        type=int,
+        default=10,
+        help="log stats interval in seconds.",
+    )
+
+    parser.add_argument(
+        "--router_token_ratio",
+        type=float,
+        default=0.0,
+        help="token ratio to control router dispatch",
+    )
+    parser.add_argument(
+        "--router_max_new_token_len",
+        type=int,
+        default=1024,
+        help="the request max new token len for the router",
+    )
+
+    parser.add_argument(
+        "--no_skipping_special_tokens",
+        action="store_true",
+        help="do not skip special tokens when decoding",
+    )
+    parser.add_argument(
+        "--no_spaces_between_special_tokens",
+        action="store_true",
+        help="do not add spaces between special tokens when decoding",
+    )
+
+    parser.add_argument(
+        "--splitfuse_mode", action="store_true", help="use splitfuse mode"
+    )
+    parser.add_argument(
+        "--splitfuse_block_size", type=int, default=256, help="splitfuse block size"
+    )
+    parser.add_argument(
+        "--prompt_cache_strs",
+        type=str,
+        default=[],
+        nargs="+",
+        help="""prompt cache strs""",
+    )
+    parser.add_argument(
+        "--cache_capacity",
+        type=int,
+        default=200,
+        help="cache server capacity for multimodal resources",
+    )
+    parser.add_argument(
+        "--cache_reserved_ratio",
+        type=float,
+        default=0.5,
+        help="cache server reserved capacity ratio after clearing",
+    )
+    parser.add_argument(
+        "--return_all_prompt_logprobs",
+        action="store_true",
+        help="return logprobs for all prompt tokens",
+    )
+    parser.add_argument(
+        "--long_truncation_mode",
+        type=str,
+        choices=[None, "head", "center"],
+        default=None,
+        help="""how to handle the case when input token len > max_req_input_len.
+        None: raise an Exception
+        head: remove some head tokens so that input token len <= max_req_input_len
+        center: remove some tokens in the center so that input token len <= max_req_input_len""",
+    )
+
+    args = parser.parse_args()
+
+    # The prompt cache feature is only supported in splitfuse mode
+    if not args.splitfuse_mode:
+        assert len(args.prompt_cache_strs) == 0
+
+    model_config = AutoConfig.from_pretrained(args.model_dir)
+    context_length = get_context_length(model_config)
+
+    if args.max_req_input_len is None:
+        args.max_req_input_len = context_length - 1
+    if args.max_req_total_len is None:
+        args.max_req_total_len = context_length
+
+    assert args.max_req_input_len < args.max_req_total_len
+    assert args.max_req_total_len <= args.max_total_token_num
+
+    if not args.splitfuse_mode:
+        # Normal (non-splitfuse) mode
+        if args.batch_max_tokens is None:
+            batch_max_tokens = int(1 / 6 * args.max_total_token_num)
+            batch_max_tokens = max(batch_max_tokens, args.max_req_total_len)
+            args.batch_max_tokens = batch_max_tokens
+        else:
+            assert (
+                args.batch_max_tokens >= args.max_req_total_len
+            ), "batch_max_tokens must >= max_req_total_len"
+    else:
+        # Splitfuse mode
+        # assert args.batch_max_tokens is not None, "need to set by yourself"
+        if args.batch_max_tokens is None:
+            batch_max_tokens = int(1 / 6 * args.max_total_token_num)
+            batch_max_tokens = max(batch_max_tokens, args.splitfuse_block_size)
+            args.batch_max_tokens = batch_max_tokens
+
+    can_use_ports = alloc_can_use_network_port(num=6 + args.tp)
+
+    assert can_use_ports is not None, "Can not alloc enough free ports."
+ ( + router_port, + detokenization_port, + httpserver_port, + visual_port, + cache_port, + nccl_port, + ) = can_use_ports[0:6] + args.nccl_port = nccl_port + model_rpc_ports = can_use_ports[6:] + + global httpserver_manager + httpserver_manager = HttpServerManager( + args, + router_port=router_port, + cache_port=cache_port, + visual_port=visual_port, + httpserver_port=httpserver_port, + enable_multimodal=False, + ) + + start_submodule_processes( + start_funcs=[start_router_process, start_detokenization_process], + start_args=[ + (args, router_port, detokenization_port, model_rpc_ports), + (args, detokenization_port, httpserver_port), + ], + ) + worker = LightLLMWorker( + args.controller_address, + args.worker_address, + worker_id, + args.model_dir, + args.model_names, + args.limit_worker_concurrency, + args.no_register, + args.conv_template, + httpserver_manager.tokenizer, + context_length, + ) + + uvicorn.run(app, host=args.host, port=args.port, log_level="info") diff --git a/fastchat/serve/mlx_worker.py b/fastchat/serve/mlx_worker.py new file mode 100644 index 000000000..a7e85f848 --- /dev/null +++ b/fastchat/serve/mlx_worker.py @@ -0,0 +1,288 @@ +""" +A model worker using Apple MLX + +https://github.com/ml-explore/mlx-examples/tree/main/llms + +Code based on vllm_worker https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/vllm_worker.py + +You must install MLX python: + +pip install mlx-lm +""" + +import argparse +import asyncio +import atexit +import json +from typing import List +import uuid + +from fastapi import FastAPI, Request, BackgroundTasks +from fastapi.concurrency import run_in_threadpool +from fastapi.responses import StreamingResponse, JSONResponse +import uvicorn + +from fastchat.serve.base_model_worker import BaseModelWorker +from fastchat.serve.model_worker import ( + logger, + worker_id, +) +from fastchat.utils import get_context_length, is_partial_stop + +import mlx.core as mx +from mlx_lm import load, generate +from mlx_lm.utils import generate_step + +app = FastAPI() + + +class MLXWorker(BaseModelWorker): + def __init__( + self, + controller_addr: str, + worker_addr: str, + worker_id: str, + model_path: str, + model_names: List[str], + limit_worker_concurrency: int, + no_register: bool, + llm_engine: "MLX", + conv_template: str, + ): + super().__init__( + controller_addr, + worker_addr, + worker_id, + model_path, + model_names, + limit_worker_concurrency, + conv_template, + ) + + logger.info( + f"Loading the model {self.model_names} on worker {worker_id}, worker type: MLX worker..." 
+ ) + + self.model_name = model_path + self.mlx_model, self.mlx_tokenizer = load(model_path) + + self.tokenizer = self.mlx_tokenizer + # self.context_len = get_context_length( + # llm_engine.engine.model_config.hf_config) + self.context_len = 2048 # hard code for now -- not sure how to get in MLX + + if not no_register: + self.init_heart_beat() + + async def generate_stream(self, params): + self.call_ct += 1 + + context = params.pop("prompt") + request_id = params.pop("request_id") + temperature = float(params.get("temperature", 1.0)) + top_p = float(params.get("top_p", 1.0)) + top_k = params.get("top_k", -1.0) + presence_penalty = float(params.get("presence_penalty", 0.0)) + frequency_penalty = float(params.get("frequency_penalty", 0.0)) + max_new_tokens = params.get("max_new_tokens", 256) + stop_str = params.get("stop", None) + stop_token_ids = params.get("stop_token_ids", None) or [] + if self.tokenizer.eos_token_id is not None: + stop_token_ids.append(self.tokenizer.eos_token_id) + echo = params.get("echo", True) + use_beam_search = params.get("use_beam_search", False) + best_of = params.get("best_of", None) + + # Handle stop_str + stop = set() + if isinstance(stop_str, str) and stop_str != "": + stop.add(stop_str) + elif isinstance(stop_str, list) and stop_str != []: + stop.update(stop_str) + + for tid in stop_token_ids: + if tid is not None: + s = self.tokenizer.decode(tid) + if s != "": + stop.add(s) + + print("Stop patterns: ", stop) + + top_p = max(top_p, 1e-5) + if temperature <= 1e-5: + top_p = 1.0 + + tokens = [] + skip = 0 + + context_mlx = mx.array(self.tokenizer.encode(context)) + + finish_reason = "length" + + iterator = await run_in_threadpool( + generate_step, context_mlx, self.mlx_model, temperature + ) + + for i in range(max_new_tokens): + (token, _) = await run_in_threadpool(next, iterator) + if token == self.mlx_tokenizer.eos_token_id: + finish_reason = "stop" + break + tokens.append(token.item()) + tokens_decoded = self.mlx_tokenizer.decode(tokens) + last_token_decoded = self.mlx_tokenizer.decode([token.item()]) + skip = len(tokens_decoded) + + partial_stop = any(is_partial_stop(tokens_decoded, i) for i in stop) + + if partial_stop: + finish_reason = "stop" + break + + ret = { + "text": tokens_decoded, + "error_code": 0, + "usage": { + "prompt_tokens": len(context), + "completion_tokens": len(tokens), + "total_tokens": len(context) + len(tokens), + }, + "cumulative_logprob": [], + "finish_reason": None, # hard code for now + } + # print(ret) + yield (json.dumps(ret) + "\0").encode() + ret = { + "text": self.mlx_tokenizer.decode(tokens), + "error_code": 0, + "usage": {}, + "cumulative_logprob": [], + "finish_reason": finish_reason, + } + yield (json.dumps(obj={**ret, **{"finish_reason": None}}) + "\0").encode() + yield (json.dumps(ret) + "\0").encode() + + async def generate(self, params): + async for x in self.generate_stream(params): + pass + return json.loads(x[:-1].decode()) + + +def release_worker_semaphore(): + worker.semaphore.release() + + +def acquire_worker_semaphore(): + if worker.semaphore is None: + worker.semaphore = asyncio.Semaphore(worker.limit_worker_concurrency) + return worker.semaphore.acquire() + + +def create_background_tasks(request_id): + async def abort_request() -> None: + print("trying to abort but not implemented") + + background_tasks = BackgroundTasks() + background_tasks.add_task(release_worker_semaphore) + background_tasks.add_task(abort_request) + return background_tasks + + +@app.post("/worker_generate_stream") +async def 
api_generate_stream(request: Request): + params = await request.json() + await acquire_worker_semaphore() + request_id = uuid.uuid4() + params["request_id"] = str(request_id) + generator = worker.generate_stream(params) + background_tasks = create_background_tasks(request_id) + return StreamingResponse(generator, background=background_tasks) + + +@app.post("/worker_generate") +async def api_generate(request: Request): + params = await request.json() + await acquire_worker_semaphore() + request_id = uuid.uuid4() + params["request_id"] = str(request_id) + output = await worker.generate(params) + release_worker_semaphore() + # await engine.abort(request_id) + print("Trying to abort but not implemented") + return JSONResponse(output) + + +@app.post("/worker_get_status") +async def api_get_status(request: Request): + return worker.get_status() + + +@app.post("/count_token") +async def api_count_token(request: Request): + params = await request.json() + return worker.count_token(params) + + +@app.post("/worker_get_conv_template") +async def api_get_conv(request: Request): + return worker.get_conv_template() + + +@app.post("/model_details") +async def api_model_details(request: Request): + return {"context_length": worker.context_len} + + +worker = None + + +def cleanup_at_exit(): + global worker + print("Cleaning up...") + del worker + + +atexit.register(cleanup_at_exit) + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("--host", type=str, default="localhost") + parser.add_argument("--port", type=int, default=21002) + parser.add_argument("--worker-address", type=str, default="http://localhost:21002") + parser.add_argument( + "--controller-address", type=str, default="http://localhost:21001" + ) + parser.add_argument("--model-path", type=str, default="microsoft/phi-2") + parser.add_argument( + "--model-names", + type=lambda s: s.split(","), + help="Optional comma-separated list of display names", + ) + parser.add_argument( + "--conv-template", type=str, default=None, help="Conversation prompt template."
+ ) + parser.add_argument( + "--trust_remote_code", + action="store_false", + default=True, + help="Trust remote code (e.g., from HuggingFace) when " + "downloading the model and tokenizer.", + ) + + args, unknown = parser.parse_known_args() + + if args.model_path: + args.model = args.model_path + + worker = MLXWorker( + args.controller_address, + args.worker_address, + worker_id, + args.model_path, + args.model_names, + 1024, + False, + "MLX", + args.conv_template, + ) + uvicorn.run(app, host=args.host, port=args.port, log_level="info") diff --git a/fastchat/serve/model_worker.py b/fastchat/serve/model_worker.py index 5e84a4262..683a78556 100644 --- a/fastchat/serve/model_worker.py +++ b/fastchat/serve/model_worker.py @@ -31,7 +31,6 @@ str_to_torch_dtype, ) - worker_id = str(uuid.uuid4())[:8] logger = build_logger("model_worker", f"model_worker_{worker_id}.log") @@ -49,6 +48,7 @@ def __init__( device: str, num_gpus: int, max_gpu_memory: str, + revision: str = None, dtype: Optional[torch.dtype] = None, load_8bit: bool = False, cpu_offloading: bool = False, @@ -76,6 +76,7 @@ def __init__( logger.info(f"Loading the model {self.model_names} on worker {worker_id} ...") self.model, self.tokenizer = load_model( model_path, + revision=revision, device=device, num_gpus=num_gpus, max_gpu_memory=max_gpu_memory, @@ -101,6 +102,10 @@ def __init__( self.init_heart_beat() def generate_stream_gate(self, params): + if self.device == "npu": + import torch_npu + + torch_npu.npu.set_device("npu:0") self.call_ct += 1 try: @@ -159,9 +164,13 @@ def __process_embed_chunk(self, input_ids, attention_mask, **model_type_dict): data = model_output.hidden_states[-1].transpose(0, 1) else: data = model_output.hidden_states[-1] - mask = attention_mask.unsqueeze(-1).expand(data.size()).float() - masked_embeddings = data * mask - sum_embeddings = torch.sum(masked_embeddings, dim=1) + + if hasattr(self.model, "use_cls_pooling") and self.model.use_cls_pooling: + sum_embeddings = data[:, 0] + else: + mask = attention_mask.unsqueeze(-1).expand(data.size()).float() + masked_embeddings = data * mask + sum_embeddings = torch.sum(masked_embeddings, dim=1) token_num = torch.sum(attention_mask).item() return sum_embeddings, token_num @@ -206,10 +215,14 @@ def get_embeddings(self, params): base64_encode = params.get("encoding_format", None) if self.embed_in_truncate: - chunk_embeddings, token_num = self.__process_embed_chunk( + embedding, token_num = self.__process_embed_chunk( input_ids, attention_mask, **model_type_dict ) - embedding = chunk_embeddings / token_num + if ( + not hasattr(self.model, "use_cls_pooling") + or not self.model.use_cls_pooling + ): + embedding = embedding / token_num normalized_embeddings = F.normalize(embedding, p=2, dim=1) ret["token_num"] = token_num else: @@ -219,10 +232,41 @@ def get_embeddings(self, params): chunk_input_ids = input_ids[:, i : i + self.context_len] chunk_attention_mask = attention_mask[:, i : i + self.context_len] + # add cls token and mask to get cls embedding + if ( + hasattr(self.model, "use_cls_pooling") + and self.model.use_cls_pooling + ): + cls_tokens = ( + torch.zeros( + (chunk_input_ids.size(0), 1), + dtype=chunk_input_ids.dtype, + device=chunk_input_ids.device, + ) + + tokenizer.cls_token_id + ) + chunk_input_ids = torch.cat( + [cls_tokens, chunk_input_ids], dim=-1 + ) + mask = torch.ones( + (chunk_attention_mask.size(0), 1), + dtype=chunk_attention_mask.dtype, + device=chunk_attention_mask.device, + ) + chunk_attention_mask = torch.cat( + [mask, chunk_attention_mask], dim=-1 +
) + chunk_embeddings, token_num = self.__process_embed_chunk( chunk_input_ids, chunk_attention_mask, **model_type_dict ) - all_embeddings.append(chunk_embeddings) + if ( + hasattr(self.model, "use_cls_pooling") + and self.model.use_cls_pooling + ): + all_embeddings.append(chunk_embeddings * token_num) + else: + all_embeddings.append(chunk_embeddings) all_token_num += token_num all_embeddings_tensor = torch.stack(all_embeddings) @@ -345,6 +389,7 @@ def create_model_worker(): args.model_path, args.model_names, args.limit_worker_concurrency, + revision=args.revision, no_register=args.no_register, device=args.device, num_gpus=args.num_gpus, diff --git a/fastchat/serve/monitor/basic_stats.py b/fastchat/serve/monitor/basic_stats.py index e1934bb07..3c1a8793d 100644 --- a/fastchat/serve/monitor/basic_stats.py +++ b/fastchat/serve/monitor/basic_stats.py @@ -13,50 +13,60 @@ NUM_SERVERS = 14 +LOG_ROOT_DIR = "~/fastchat_logs" def get_log_files(max_num_files=None): - dates = [] - for month in range(4, 12): - for day in range(1, 33): - dates.append(f"2023-{month:02d}-{day:02d}") - + log_root = os.path.expanduser(LOG_ROOT_DIR) filenames = [] - for d in dates: - for i in range(NUM_SERVERS): - name = os.path.expanduser(f"~/fastchat_logs/server{i}/{d}-conv.json") - if os.path.exists(name): - filenames.append(name) + for i in range(NUM_SERVERS): + for filename in os.listdir(f"{log_root}/server{i}"): + if filename.endswith("-conv.json"): + filepath = f"{log_root}/server{i}/{filename}" + name_tstamp_tuple = (filepath, os.path.getmtime(filepath)) + filenames.append(name_tstamp_tuple) + # sort by tstamp + filenames = sorted(filenames, key=lambda x: x[1]) + filenames = [x[0] for x in filenames] + max_num_files = max_num_files or len(filenames) filenames = filenames[-max_num_files:] return filenames -def load_log_files(log_files): +def load_log_files(filename): data = [] - for filename in tqdm(log_files, desc="read files"): - for retry in range(5): - try: - lines = open(filename).readlines() - break - except FileNotFoundError: - time.sleep(2) - - for l in lines: - row = json.loads(l) - - data.append( - dict( - type=row["type"], - tstamp=row["tstamp"], - model=row.get("model", ""), - models=row.get("models", ["", ""]), - ) + for retry in range(5): + try: + lines = open(filename).readlines() + break + except FileNotFoundError: + time.sleep(2) + + for l in lines: + row = json.loads(l) + data.append( + dict( + type=row["type"], + tstamp=row["tstamp"], + model=row.get("model", ""), + models=row.get("models", ["", ""]), ) - + ) return data +def load_log_files_parallel(log_files, num_threads=16): + data_all = [] + from multiprocessing import Pool + + with Pool(num_threads) as p: + ret_all = list(tqdm(p.imap(load_log_files, log_files), total=len(log_files))) + for ret in ret_all: + data_all.extend(ret) + return data_all + + def get_anony_vote_df(df): anony_vote_df = df[ df["type"].isin(["leftvote", "rightvote", "tievote", "bothbad_vote"]) @@ -77,7 +87,7 @@ def merge_counts(series, on, names): def report_basic_stats(log_files): - df_all = load_log_files(log_files) + df_all = load_log_files_parallel(log_files) df_all = pd.DataFrame(df_all) now_t = df_all["tstamp"].max() df_1_hour = df_all[df_all["tstamp"] > (now_t - 3600)] diff --git a/fastchat/serve/monitor/clean_battle_data.py b/fastchat/serve/monitor/clean_battle_data.py index 23357d08c..58541c3d0 100644 --- a/fastchat/serve/monitor/clean_battle_data.py +++ b/fastchat/serve/monitor/clean_battle_data.py @@ -27,6 +27,7 @@ "laion", "chatglm", "chatgpt", + "gpt-4", 
"openai", "anthropic", "claude", @@ -35,31 +36,26 @@ "lamda", "google", "llama", + "qianwan", + "alibaba", + "mistral", + "zhipu", + "KEG lab", + "01.AI", + "AI2", + "Tülu", + "Tulu", "NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.", "$MODERATION$ YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES.", + "API REQUEST ERROR. Please increase the number of max tokens.", + "**API REQUEST ERROR** Reason: The response was blocked.", + "**API REQUEST ERROR**", ] for i in range(len(IDENTITY_WORDS)): IDENTITY_WORDS[i] = IDENTITY_WORDS[i].lower() -def get_log_files(max_num_files=None): - dates = [] - for month in range(4, 12): - for day in range(1, 33): - dates.append(f"2023-{month:02d}-{day:02d}") - - filenames = [] - for d in dates: - for i in range(NUM_SERVERS): - name = os.path.expanduser(f"~/fastchat_logs/server{i}/{d}-conv.json") - if os.path.exists(name): - filenames.append(name) - max_num_files = max_num_files or len(filenames) - filenames = filenames[-max_num_files:] - return filenames - - def remove_html(raw): if raw.startswith("
<h3>
"): return raw[raw.find(": ") + 2 : -len("
</h3>
\n")] @@ -74,29 +70,54 @@ def to_openai_format(messages): return ret -def replace_model_name(old_name): - return ( - old_name.replace("bard", "palm-2") - .replace("claude-v1", "claude-1") - .replace("claude-instant-v1", "claude-instant-1") - .replace("oasst-sft-1-pythia-12b", "oasst-pythia-12b") - ) +def replace_model_name(old_name, tstamp): + replace_dict = { + "bard": "palm-2", + "claude-v1": "claude-1", + "claude-instant-v1": "claude-instant-1", + "oasst-sft-1-pythia-12b": "oasst-pythia-12b", + "claude-2": "claude-2.0", + } + if old_name in ["gpt-4", "gpt-3.5-turbo"]: + if tstamp > 1687849200: + return old_name + "-0613" + else: + return old_name + "-0314" + if old_name in replace_dict: + return replace_dict[old_name] + return old_name -def clean_battle_data(log_files, exclude_model_names): +def read_file(filename): data = [] - for filename in tqdm(log_files, desc="read files"): - for retry in range(5): - try: - lines = open(filename).readlines() - break - except FileNotFoundError: - time.sleep(2) - - for l in lines: - row = json.loads(l) - if row["type"] in VOTES: - data.append(row) + for retry in range(5): + try: + # lines = open(filename).readlines() + for l in open(filename): + row = json.loads(l) + if row["type"] in VOTES: + data.append(row) + break + except FileNotFoundError: + time.sleep(2) + return data + + +def read_file_parallel(log_files, num_threads=16): + data_all = [] + from multiprocessing import Pool + + with Pool(num_threads) as p: + ret_all = list(tqdm(p.imap(read_file, log_files), total=len(log_files))) + for ret in ret_all: + data_all.extend(ret) + return data_all + + +def clean_battle_data( + log_files, exclude_model_names, ban_ip_list=None, sanitize_ip=False +): + data = read_file_parallel(log_files, num_threads=16) convert_type = { "leftvote": "model_a", @@ -110,6 +131,7 @@ def clean_battle_data(log_files, exclude_model_names): ct_anony = 0 ct_invalid = 0 ct_leaked_identity = 0 + ct_banned = 0 battles = [] for row in data: if row["models"][0] is None or row["models"][1] is None: @@ -156,7 +178,9 @@ def clean_battle_data(log_files, exclude_model_names): messages = "" for i in range(2): state = row["states"][i] - for role, msg in state["messages"][state["offset"] :]: + for turn_idx, (role, msg) in enumerate( + state["messages"][state["offset"] :] + ): if msg: messages += msg.lower() for word in IDENTITY_WORDS: @@ -169,7 +193,11 @@ def clean_battle_data(log_files, exclude_model_names): continue # Replace bard with palm - models = [replace_model_name(m) for m in models] + models = [replace_model_name(m, row["tstamp"]) for m in models] + # Exclude certain models + if exclude_model_names and any(x in exclude_model_names for x in models): + ct_invalid += 1 + continue # Exclude certain models if any(x in exclude_model_names for x in models): @@ -186,8 +214,16 @@ def clean_battle_data(log_files, exclude_model_names): ip = row["ip"] if ip not in all_ips: - all_ips[ip] = len(all_ips) - user_id = all_ips[ip] + all_ips[ip] = {"ip": ip, "count": 0, "sanitized_id": len(all_ips)} + all_ips[ip]["count"] += 1 + if sanitize_ip: + user_id = f"arena_user_{all_ips[ip]['sanitized_id']}" + else: + user_id = f"{all_ips[ip]['ip']}" + + if ban_ip_list is not None and ip in ban_ip_list: + ct_banned += 1 + continue # Save the results battles.append( @@ -216,12 +252,19 @@ def clean_battle_data(log_files, exclude_model_names): print( f"#votes: {len(data)}, #invalid votes: {ct_invalid}, " - f"#leaked_identity: {ct_leaked_identity}" + f"#leaked_identity: {ct_leaked_identity} " + f"#banned: 
{ct_banned} " ) print(f"#battles: {len(battles)}, #anony: {ct_anony}") print(f"#models: {len(all_models)}, {all_models}") print(f"last-updated: {last_updated_datetime}") + if ban_ip_list is not None: + for ban_ip in ban_ip_list: + if ban_ip in all_ips: + del all_ips[ban_ip] + print("Top 30 IPs:") + print(sorted(all_ips.values(), key=lambda x: x["count"], reverse=True)[:30]) return battles @@ -232,10 +275,16 @@ def clean_battle_data(log_files, exclude_model_names): "--mode", type=str, choices=["simple", "conv_release"], default="simple" ) parser.add_argument("--exclude-model-names", type=str, nargs="+") + parser.add_argument("--ban-ip-file", type=str) + parser.add_argument("--sanitize-ip", action="store_true", default=False) args = parser.parse_args() log_files = get_log_files(args.max_num_files) - battles = clean_battle_data(log_files, args.exclude_model_names or []) + ban_ip_list = json.load(open(args.ban_ip_file)) if args.ban_ip_file else None + + battles = clean_battle_data( + log_files, args.exclude_model_names or [], ban_ip_list, args.sanitize_ip + ) last_updated_tstamp = battles[-1]["tstamp"] cutoff_date = datetime.datetime.fromtimestamp( last_updated_tstamp, tz=timezone("US/Pacific") diff --git a/fastchat/serve/monitor/clean_chat_data.py b/fastchat/serve/monitor/clean_chat_data.py index 7f0c9bd4f..1dd8b594d 100644 --- a/fastchat/serve/monitor/clean_chat_data.py +++ b/fastchat/serve/monitor/clean_chat_data.py @@ -2,7 +2,7 @@ Clean chatbot arena chat log. Usage: -python3 clean_chat_data.py --mode conv_release +python3 clean_chat_data.py """ import argparse import datetime diff --git a/fastchat/serve/monitor/elo_analysis.py b/fastchat/serve/monitor/elo_analysis.py index e95f157c8..d0ff0fb09 100644 --- a/fastchat/serve/monitor/elo_analysis.py +++ b/fastchat/serve/monitor/elo_analysis.py @@ -52,6 +52,41 @@ def get_bootstrap_result(battles, func_compute_elo, num_round=1000): return df[df.median().sort_values(ascending=False).index] +def compute_elo_mle_with_tie(df, SCALE=400, BASE=10, INIT_RATING=1000): + from sklearn.linear_model import LogisticRegression + + models = pd.concat([df["model_a"], df["model_b"]]).unique() + models = pd.Series(np.arange(len(models)), index=models) + + # duplicate battles + df = pd.concat([df, df], ignore_index=True) + p = len(models.index) + n = df.shape[0] + + X = np.zeros([n, p]) + X[np.arange(n), models[df["model_a"]]] = +math.log(BASE) + X[np.arange(n), models[df["model_b"]]] = -math.log(BASE) + + # one A win => two A win + Y = np.zeros(n) + Y[df["winner"] == "model_a"] = 1.0 + + # one tie => one A win + one B win + # find tie + tie (both bad) index + tie_idx = (df["winner"] == "tie") | (df["winner"] == "tie (bothbad)") + tie_idx[len(tie_idx) // 2 :] = False + Y[tie_idx] = 1.0 + + lr = LogisticRegression(fit_intercept=False) + lr.fit(X, Y) + + elo_scores = SCALE * lr.coef_[0] + INIT_RATING + # calibrate llama-13b to 800 if applicable + if "llama-13b" in models.index: + elo_scores += 800 - elo_scores[models["llama-13b"]] + return pd.Series(elo_scores, index=models.index).sort_values(ascending=False) + + def get_median_elo_from_bootstrap(bootstrap_df): median = dict(bootstrap_df.quantile(0.5)) median = {k: int(v + 0.5) for k, v in median.items()} @@ -185,12 +220,12 @@ def visualize_average_win_rate(battles, limit_show_number): return fig -def visualize_bootstrap_elo_rating(df, limit_show_number): +def visualize_bootstrap_elo_rating(df, df_final, limit_show_number): bars = ( pd.DataFrame( dict( lower=df.quantile(0.025), - rating=df.quantile(0.5), + 
rating=df_final, upper=df.quantile(0.975), ) ) @@ -215,7 +250,7 @@ def visualize_bootstrap_elo_rating(df, limit_show_number): return fig -def report_elo_analysis_results(battles_json): +def report_elo_analysis_results(battles_json, rating_system="bt", num_bootstrap=100): battles = pd.DataFrame(battles_json) battles = battles.sort_values(ascending=True, by=["tstamp"]) # Only use anonymous votes @@ -225,24 +260,48 @@ def report_elo_analysis_results(battles_json): # Online update elo_rating_online = compute_elo(battles) - # Bootstrap - bootstrap_df = get_bootstrap_result(battles, compute_elo) - elo_rating_median = get_median_elo_from_bootstrap(bootstrap_df) - model_order = list(elo_rating_median.keys()) - model_order.sort(key=lambda k: -elo_rating_median[k]) + if rating_system == "bt": + bootstrap_df = get_bootstrap_result( + battles, compute_elo_mle_with_tie, num_round=num_bootstrap + ) + elo_rating_final = compute_elo_mle_with_tie(battles) + elif rating_system == "elo": + bootstrap_df = get_bootstrap_result( + battles, compute_elo, num_round=num_bootstrap + ) + elo_rating_median = get_median_elo_from_bootstrap(bootstrap_df) + elo_rating_final = elo_rating_median + + model_order = list(elo_rating_final.keys()) + model_order.sort(key=lambda k: -elo_rating_final[k]) + + limit_show_number = 25 # limit show number to make plots smaller + model_order = model_order[:limit_show_number] + + # leaderboard_table_df: elo rating, variance, 95% interval, number of battles + leaderboard_table_df = pd.DataFrame( + { + "rating": elo_rating_final, + "variance": bootstrap_df.var(), + "rating_q975": bootstrap_df.quantile(0.975), + "rating_q025": bootstrap_df.quantile(0.025), + "num_battles": battles["model_a"].value_counts() + + battles["model_b"].value_counts(), + } + ) limit_show_number = 25 # limit show number to make plots smaller model_order = model_order[:limit_show_number] # Plots - leaderboard_table = visualize_leaderboard_table(elo_rating_median) + leaderboard_table = visualize_leaderboard_table(elo_rating_final) win_fraction_heatmap = visualize_pairwise_win_fraction(battles_no_ties, model_order) battle_count_heatmap = visualize_battle_count(battles_no_ties, model_order) average_win_rate_bar = visualize_average_win_rate( battles_no_ties, limit_show_number ) bootstrap_elo_rating = visualize_bootstrap_elo_rating( - bootstrap_df, limit_show_number + bootstrap_df, elo_rating_final, limit_show_number ) last_updated_tstamp = battles["tstamp"].max() @@ -251,8 +310,9 @@ def report_elo_analysis_results(battles_json): ).strftime("%Y-%m-%d %H:%M:%S %Z") return { + "rating_system": rating_system, "elo_rating_online": elo_rating_online, - "elo_rating_median": elo_rating_median, + "elo_rating_final": elo_rating_final, "leaderboard_table": leaderboard_table, "win_fraction_heatmap": win_fraction_heatmap, "battle_count_heatmap": battle_count_heatmap, @@ -260,6 +320,8 @@ def report_elo_analysis_results(battles_json): "bootstrap_elo_rating": bootstrap_elo_rating, "last_updated_datetime": last_updated_datetime, "last_updated_tstamp": last_updated_tstamp, + "bootstrap_df": bootstrap_df, + "leaderboard_table_df": leaderboard_table_df, } @@ -274,6 +336,11 @@ def pretty_print_elo_rating(rating): parser = argparse.ArgumentParser() parser.add_argument("--clean-battle-file", type=str) parser.add_argument("--max-num-files", type=int) + parser.add_argument("--num-bootstrap", type=int, default=100) + parser.add_argument( + "--rating-system", type=str, choices=["bt", "elo"], default="bt" + ) + parser.add_argument("--exclude-tie", 
action="store_true", default=False) args = parser.parse_args() np.random.seed(42) @@ -286,12 +353,14 @@ def pretty_print_elo_rating(rating): log_files = get_log_files(args.max_num_files) battles = clean_battle_data(log_files) - results = report_elo_analysis_results(battles) + results = report_elo_analysis_results( + battles, rating_system=args.rating_system, num_bootstrap=args.num_bootstrap + ) - print("# Online") + print("# Online Elo") pretty_print_elo_rating(results["elo_rating_online"]) print("# Median") - pretty_print_elo_rating(results["elo_rating_median"]) + pretty_print_elo_rating(results["elo_rating_final"]) print(f"last update : {results['last_updated_datetime']}") last_updated_tstamp = results["last_updated_tstamp"] diff --git a/fastchat/serve/monitor/monitor.py b/fastchat/serve/monitor/monitor.py index 580a2c866..1912ef6fe 100644 --- a/fastchat/serve/monitor/monitor.py +++ b/fastchat/serve/monitor/monitor.py @@ -8,11 +8,13 @@ import argparse import ast +import json import pickle import os import threading import time +import pandas as pd import gradio as gr import numpy as np @@ -22,24 +24,52 @@ from fastchat.utils import build_logger, get_window_url_params_js -notebook_url = "https://colab.research.google.com/drive/1RAWb22-PFNI-X1gPVzc927SGUdfr6nsR?usp=sharing" - +notebook_url = ( + "https://colab.research.google.com/drive/1KdwokPjirkTmpO_P1WByFNFiqxWQquwH" +) basic_component_values = [None] * 6 leader_component_values = [None] * 5 -def make_leaderboard_md(elo_results): +def make_default_md(arena_df, elo_results): + total_votes = sum(arena_df["num_battles"]) // 2 + total_models = len(arena_df) + + leaderboard_md = f""" +# 🏆 LMSYS Chatbot Arena Leaderboard +| [Vote](https://chat.lmsys.org) | [Blog](https://lmsys.org/blog/2023-05-03-arena/) | [GitHub](https://github.com/lm-sys/FastChat) | [Paper](https://arxiv.org/abs/2306.05685) | [Dataset](https://github.com/lm-sys/FastChat/blob/main/docs/dataset_release.md) | [Twitter](https://twitter.com/lmsysorg) | [Discord](https://discord.gg/HSWAKCrnFx) | + +LMSYS [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/) is a crowdsourced open platform for LLM evals. +We've collected over **200,000** human preference votes to rank LLMs with the Elo ranking system. +""" + return leaderboard_md + + +def make_arena_leaderboard_md(arena_df): + total_votes = sum(arena_df["num_battles"]) // 2 + total_models = len(arena_df) + leaderboard_md = f""" -# 🏆 Chatbot Arena Leaderboard -| [Blog](https://lmsys.org/blog/2023-05-03-arena/) | [GitHub](https://github.com/lm-sys/FastChat) | [Paper](https://arxiv.org/abs/2306.05685) | [Dataset](https://github.com/lm-sys/FastChat/blob/main/docs/dataset_release.md) | [Twitter](https://twitter.com/lmsysorg) | [Discord](https://discord.gg/HSWAKCrnFx) | +Total #models: **{total_models}**. Total #votes: **{total_votes}**. Last updated: Feb 2, 2024. + +Contribute your vote 🗳️ at [chat.lmsys.org](https://chat.lmsys.org)! Find more analysis in the [notebook]({notebook_url}). + +⚠️ **Some mobile users have reported that the leaderboard does not display correctly; please visit [our HF alternative](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) while we fix it**. +""" + return leaderboard_md + -This leaderboard is based on the following three benchmarks. -- [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/) - a crowdsourced, randomized battle platform. We use 100K+ user votes to compute Elo ratings. -- [MT-Bench](https://arxiv.org/abs/2306.05685) - a set of challenging multi-turn questions.
We use GPT-4 to grade the model responses. -- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a model's multitask accuracy on 57 tasks. +def make_full_leaderboard_md(elo_results): + leaderboard_md = """ +Three benchmarks are displayed: **Arena Elo**, **MT-Bench** and **MMLU**. +- [Chatbot Arena](https://chat.lmsys.org/?arena) - a crowdsourced, randomized battle platform based on human preference votes. +- [MT-Bench](https://arxiv.org/abs/2306.05685): a set of challenging multi-turn questions. We use GPT-4 to grade the model responses. +- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot): a test to measure a model's multitask accuracy on 57 tasks. -💻 Code: The Arena Elo ratings are computed by this [notebook]({notebook_url}). The MT-bench scores (single-answer grading on a scale of 10) are computed by [fastchat.llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge). The MMLU scores are mostly computed by [InstructEval](https://github.com/declare-lab/instruct-eval). Higher values are better for all benchmarks. Empty cells mean not available. Last updated: November, 2023. +💻 Code: The MT-bench scores (single-answer grading on a scale of 10) are computed by [fastchat.llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge). +The MMLU scores are mostly computed by [InstructEval](https://github.com/declare-lab/instruct-eval). +Higher values are better for all benchmarks. Empty cells mean not available. """ return leaderboard_md @@ -53,12 +83,17 @@ def make_leaderboard_md_live(elo_results): return leaderboard_md -def update_elo_components(max_num_files, elo_results_file): +def update_elo_components( + max_num_files, elo_results_file, ban_ip_file, exclude_model_names +): log_files = get_log_files(max_num_files) # Leaderboard if elo_results_file is None: # Do live update - battles = clean_battle_data(log_files, []) + ban_ip_list = json.load(open(ban_ip_file)) if ban_ip_file else None + battles = clean_battle_data( + log_files, exclude_model_names, ban_ip_list=ban_ip_list + ) elo_results = report_elo_analysis_results(battles) leader_component_values[0] = make_leaderboard_md_live(elo_results) @@ -91,10 +126,14 @@ def update_elo_components(max_num_files, elo_results_file): basic_component_values[5] = md4 -def update_worker(max_num_files, interval, elo_results_file): +def update_worker( + max_num_files, interval, elo_results_file, ban_ip_file, exclude_model_names +): while True: tic = time.time() - update_elo_components(max_num_files, elo_results_file) + update_elo_components( + max_num_files, elo_results_file, ban_ip_file, exclude_model_names + ) durtaion = time.time() - tic print(f"update duration: {durtaion:.2f} s") time.sleep(max(interval - durtaion, 0)) @@ -166,90 +205,186 @@ def build_basic_stats_tab(): return [md0, plot_1, md1, md2, md3, md4] -def build_leaderboard_tab(elo_results_file, leaderboard_table_file): +def get_full_table(arena_df, model_table_df): + values = [] + for i in range(len(model_table_df)): + row = [] + model_key = model_table_df.iloc[i]["key"] + model_name = model_table_df.iloc[i]["Model"] + # model display name + row.append(model_name) + if model_key in arena_df.index: + idx = arena_df.index.get_loc(model_key) + row.append(round(arena_df.iloc[idx]["rating"])) + else: + row.append(np.nan) + row.append(model_table_df.iloc[i]["MT-bench (score)"]) + row.append(model_table_df.iloc[i]["MMLU"]) + # Organization + row.append(model_table_df.iloc[i]["Organization"]) + # license + 
row.append(model_table_df.iloc[i]["License"]) + + values.append(row) + values.sort(key=lambda x: -x[1] if not np.isnan(x[1]) else 1e9) + return values + + +def get_arena_table(arena_df, model_table_df): + # sort by rating + arena_df = arena_df.sort_values(by=["rating"], ascending=False) + values = [] + for i in range(len(arena_df)): + row = [] + model_key = arena_df.index[i] + model_name = model_table_df[model_table_df["key"] == model_key]["Model"].values[ + 0 + ] + + # rank + row.append(i + 1) + # model display name + row.append(model_name) + # elo rating + row.append(round(arena_df.iloc[i]["rating"])) + upper_diff = round(arena_df.iloc[i]["rating_q975"] - arena_df.iloc[i]["rating"]) + lower_diff = round(arena_df.iloc[i]["rating"] - arena_df.iloc[i]["rating_q025"]) + row.append(f"+{upper_diff}/-{lower_diff}") + # num battles + row.append(round(arena_df.iloc[i]["num_battles"])) + # Organization + row.append( + model_table_df[model_table_df["key"] == model_key]["Organization"].values[0] + ) + # license + row.append( + model_table_df[model_table_df["key"] == model_key]["License"].values[0] + ) + + values.append(row) + return values + + +def build_leaderboard_tab(elo_results_file, leaderboard_table_file, show_plot=False): if elo_results_file is None: # Do live update - md = "Loading ..." + default_md = "Loading ..." p1 = p2 = p3 = p4 = None else: with open(elo_results_file, "rb") as fin: elo_results = pickle.load(fin) - md = make_leaderboard_md(elo_results) p1 = elo_results["win_fraction_heatmap"] p2 = elo_results["battle_count_heatmap"] p3 = elo_results["bootstrap_elo_rating"] p4 = elo_results["average_win_rate_bar"] + arena_df = elo_results["leaderboard_table_df"] + default_md = make_default_md(arena_df, elo_results) - md_1 = gr.Markdown(md, elem_id="leaderboard_markdown") - + md_1 = gr.Markdown(default_md, elem_id="leaderboard_markdown") if leaderboard_table_file: data = load_leaderboard_table_csv(leaderboard_table_file) - headers = [ - "Model", - "Arena Elo rating", - "MT-bench (score)", - "MMLU", - "License", - ] - values = [] - for item in data: - row = [] - for key in headers: - value = item[key] - row.append(value) - values.append(row) - values.sort(key=lambda x: -x[1] if not np.isnan(x[1]) else 1e9) - - headers[1] = "⭐ " + headers[1] - headers[2] = "📈 " + headers[2] - - gr.Dataframe( - headers=headers, - datatype=["markdown", "number", "number", "number", "str"], - value=values, - elem_id="leaderboard_dataframe", - ) - gr.Markdown( - """ ## Visit our [HF space](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) for more analysis! - If you want to see more models, please help us [add them](https://github.com/lm-sys/FastChat/blob/main/docs/arena.md#how-to-add-a-new-model). 
- """, - elem_id="leaderboard_markdown", - ) + model_table_df = pd.DataFrame(data) + + with gr.Tabs() as tabs: + # arena table + arena_table_vals = get_arena_table(arena_df, model_table_df) + with gr.Tab("Arena Elo", id=0): + md = make_arena_leaderboard_md(arena_df) + gr.Markdown(md, elem_id="leaderboard_markdown") + gr.Dataframe( + headers=[ + "Rank", + "🤖 Model", + "⭐ Arena Elo", + "📊 95% CI", + "🗳️ Votes", + "Organization", + "License", + ], + datatype=[ + "str", + "markdown", + "number", + "str", + "number", + "str", + "str", + ], + value=arena_table_vals, + elem_id="arena_leaderboard_dataframe", + height=700, + column_widths=[50, 200, 100, 100, 100, 150, 150], + wrap=True, + ) + with gr.Tab("Full Leaderboard", id=1): + md = make_full_leaderboard_md(elo_results) + gr.Markdown(md, elem_id="leaderboard_markdown") + full_table_vals = get_full_table(arena_df, model_table_df) + gr.Dataframe( + headers=[ + "🤖 Model", + "⭐ Arena Elo", + "📈 MT-bench", + "📚 MMLU", + "Organization", + "License", + ], + datatype=["markdown", "number", "number", "number", "str", "str"], + value=full_table_vals, + elem_id="full_leaderboard_dataframe", + column_widths=[200, 100, 100, 100, 150, 150], + height=700, + wrap=True, + ) + if not show_plot: + gr.Markdown( + """ ## Visit our [HF space](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) for more analysis! + If you want to see more models, please help us [add them](https://github.com/lm-sys/FastChat/blob/main/docs/arena.md#how-to-add-a-new-model). + """, + elem_id="leaderboard_markdown", + ) else: pass - leader_component_values[:] = [md, p1, p2, p3, p4] + leader_component_values[:] = [default_md, p1, p2, p3, p4] - """ - with gr.Row(): - with gr.Column(): - gr.Markdown( - "#### Figure 1: Fraction of Model A Wins for All Non-tied A vs. B Battles" - ) - plot_1 = gr.Plot(p1, show_label=False) - with gr.Column(): - gr.Markdown( - "#### Figure 2: Battle Count for Each Combination of Models (without Ties)" - ) - plot_2 = gr.Plot(p2, show_label=False) - with gr.Row(): - with gr.Column(): - gr.Markdown( - "#### Figure 3: Bootstrap of Elo Estimates (1000 Rounds of Random Sampling)" - ) - plot_3 = gr.Plot(p3, show_label=False) - with gr.Column(): - gr.Markdown( - "#### Figure 4: Average Win Rate Against All Other Models (Assuming Uniform Sampling and No Ties)" - ) - plot_4 = gr.Plot(p4, show_label=False) - """ + if show_plot: + gr.Markdown( + f"""## More Statistics for Chatbot Arena\n +Below are figures for more statistics. The code for generating them is also included in this [notebook]({notebook_url}). +You can find more discussions in this blog [post](https://lmsys.org/blog/2023-12-07-leaderboard/). + """, + elem_id="leaderboard_markdown", + ) + with gr.Row(): + with gr.Column(): + gr.Markdown( + "#### Figure 1: Fraction of Model A Wins for All Non-tied A vs. 
B Battles" + ) + plot_1 = gr.Plot(p1, show_label=False) + with gr.Column(): + gr.Markdown( + "#### Figure 2: Battle Count for Each Combination of Models (without Ties)" + ) + plot_2 = gr.Plot(p2, show_label=False) + with gr.Row(): + with gr.Column(): + gr.Markdown( + "#### Figure 3: Bootstrap of Elo Estimates (1000 Rounds of Random Sampling)" + ) + plot_3 = gr.Plot(p3, show_label=False) + with gr.Column(): + gr.Markdown( + "#### Figure 4: Average Win Rate Against All Other Models (Assuming Uniform Sampling and No Ties)" + ) + plot_4 = gr.Plot(p4, show_label=False) from fastchat.serve.gradio_web_server import acknowledgment_md - gr.Markdown(acknowledgment_md) + gr.Markdown(acknowledgment_md, elem_id="ack_markdown") - # return [md_1, plot_1, plot_2, plot_3, plot_4] + if show_plot: + return [md_1, plot_1, plot_2, plot_3, plot_4] return [md_1] @@ -266,7 +401,9 @@ def build_demo(elo_results_file, leaderboard_table_file): with gr.Tabs() as tabs: with gr.Tab("Leaderboard", id=0): leader_components = build_leaderboard_tab( - elo_results_file, leaderboard_table_file + elo_results_file, + leaderboard_table_file, + show_plot=True, ) with gr.Tab("Basic Stats", id=1): @@ -293,6 +430,8 @@ def build_demo(elo_results_file, leaderboard_table_file): parser.add_argument("--max-num-files", type=int) parser.add_argument("--elo-results-file", type=str) parser.add_argument("--leaderboard-table-file", type=str) + parser.add_argument("--ban-ip-file", type=str) + parser.add_argument("--exclude-model-names", type=str, nargs="+") args = parser.parse_args() logger = build_logger("monitor", "monitor.log") @@ -301,13 +440,21 @@ def build_demo(elo_results_file, leaderboard_table_file): if args.elo_results_file is None: # Do live update update_thread = threading.Thread( target=update_worker, - args=(args.max_num_files, args.update_interval, args.elo_results_file), + args=( + args.max_num_files, + args.update_interval, + args.elo_results_file, + args.ban_ip_file, + args.exclude_model_names, + ), ) update_thread.start() demo = build_demo(args.elo_results_file, args.leaderboard_table_file) demo.queue( - concurrency_count=args.concurrency_count, status_update_rate=10, api_open=False + default_concurrency_limit=args.concurrency_count, + status_update_rate=10, + api_open=False, ).launch( server_name=args.host, server_port=args.port, share=args.share, max_threads=200 ) diff --git a/fastchat/serve/monitor/summarize_cluster.py b/fastchat/serve/monitor/summarize_cluster.py index 1d5fbcddc..b461a68b2 100644 --- a/fastchat/serve/monitor/summarize_cluster.py +++ b/fastchat/serve/monitor/summarize_cluster.py @@ -6,10 +6,12 @@ import argparse import pickle +import pandas as pd + from fastchat.llm_judge.common import ( - chat_compeletion_openai, - chat_compeletion_openai_azure, - chat_compeletion_anthropic, + chat_completion_openai, + chat_completion_openai_azure, + chat_completion_anthropic, ) from fastchat.conversation import get_conv_template @@ -52,13 +54,13 @@ def truncate_string(s, l): if "azure-" in model: template_name = "chatgpt" - completion_func = chat_compeletion_openai_azure + completion_func = chat_completion_openai_azure elif "gpt" in model: template_name = "chatgpt" - completion_func = chat_compeletion_openai + completion_func = chat_completion_openai elif "claude" in model: template_name = "claude" - completion_func = chat_compeletion_anthropic + completion_func = chat_completion_anthropic conv = get_conv_template(template_name) conv.set_system_message(instruct) @@ -74,3 +76,10 @@ def truncate_string(s, l): print() 
print(f"topics: {topics}") + print(f"percentages: {percentages}") + + # save the information + df = pd.DataFrame() + df["topic"] = topics + df["percentage"] = percentages + + df.to_json(f"cluster_summary_{len(df)}.jsonl", lines=True, orient="records") diff --git a/fastchat/serve/monitor/topic_clustering.py b/fastchat/serve/monitor/topic_clustering.py index dd15c6edc..3d58e56bf 100644 --- a/fastchat/serve/monitor/topic_clustering.py +++ b/fastchat/serve/monitor/topic_clustering.py @@ -16,6 +16,7 @@ from sklearn.cluster import KMeans, AgglomerativeClustering import torch from tqdm import tqdm +from openai import OpenAI from fastchat.utils import detect_language @@ -46,6 +47,8 @@ def read_texts(input_file, min_length, max_length, english_only): line_texts = [ x["content"] for x in l["conversation"] if x["role"] == "user" ] + elif "turns" in l: + line_texts = l["turns"] for text in line_texts: text = text.strip() @@ -77,14 +80,26 @@ def get_embeddings(texts, model_name, batch_size): - model = SentenceTransformer(model_name) - embeddings = model.encode( - texts, - batch_size=batch_size, - show_progress_bar=True, - device="cuda", - convert_to_tensor=True, - ) + if model_name == "text-embedding-ada-002": + client = OpenAI() + texts = texts.tolist() + + embeddings = [] + for i in tqdm(range(0, len(texts), batch_size)): + text = texts[i : i + batch_size] + responses = client.embeddings.create(input=text, model=model_name).data + embeddings.extend([data.embedding for data in responses]) + embeddings = torch.tensor(embeddings) + else: + model = SentenceTransformer(model_name) + embeddings = model.encode( + texts, + batch_size=batch_size, + show_progress_bar=True, + device="cuda", + convert_to_tensor=True, + ) + embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1) return embeddings.cpu() @@ -218,6 +233,8 @@ def get_cluster_info(texts, labels, topk_indices): ) parser.add_argument("--show-top-k", type=int, default=200) parser.add_argument("--show-cut-off", type=int, default=512) + parser.add_argument("--save-embeddings", action="store_true") + parser.add_argument("--embeddings-file", type=str, default=None) args = parser.parse_args() num_clusters = args.num_clusters @@ -229,7 +246,15 @@ def get_cluster_info(texts, labels, topk_indices): ) print(f"#text: {len(texts)}") - embeddings = get_embeddings(texts, args.model, args.batch_size) + if args.embeddings_file is None: + embeddings = get_embeddings(texts, args.model, args.batch_size) + if args.save_embeddings: + # allow saving embeddings to save time and money + torch.save(embeddings, "embeddings.pt") + else: + embeddings = torch.load(args.embeddings_file) + print(f"embeddings shape: {embeddings.shape}") + if args.cluster_alg == "kmeans": centers, labels = run_k_means(embeddings, num_clusters) elif args.cluster_alg == "aggcls": @@ -249,7 +274,7 @@ def get_cluster_info(texts, labels, topk_indices): with open(filename_prefix + "_topk.txt", "w") as fout: fout.write(topk_str) - with open(filename_prefix + "_all.txt", "w") as fout: + with open(filename_prefix + "_all.jsonl", "w") as fout: for i in range(len(centers)): tmp_indices = labels == i tmp_embeddings = embeddings[tmp_indices] diff --git a/fastchat/serve/openai_api_server.py b/fastchat/serve/openai_api_server.py index 65fcab977..58bfbba92 100644 --- a/fastchat/serve/openai_api_server.py +++ b/fastchat/serve/openai_api_server.py @@ -10,7 +10,6 @@ import asyncio import argparse import json -import logging import os from typing
import Generator, Optional, Union, Dict, List, Any @@ -22,7 +21,11 @@ from fastapi.responses import StreamingResponse, JSONResponse from fastapi.security.http import HTTPAuthorizationCredentials, HTTPBearer import httpx -from pydantic import BaseSettings + +try: + from pydantic.v1 import BaseSettings +except ImportError: + from pydantic import BaseSettings import shortuuid import tiktoken import uvicorn @@ -61,6 +64,7 @@ APITokenCheckResponse, APITokenCheckResponseItem, ) +from fastchat.utils import build_logger ###### Shale @@ -75,7 +79,7 @@ from fastapi.requests import Request -logger = logging.getLogger(__name__) +logger = build_logger("openai_api_server", "openai_api_server.log") conv_template_map = {} @@ -213,7 +217,12 @@ def check_requests(request) -> Optional[JSONResponse]: if request.top_p is not None and request.top_p > 1: return create_error_response( ErrorCode.PARAM_OUT_OF_RANGE, - f"{request.top_p} is greater than the maximum of 1 - 'temperature'", + f"{request.top_p} is greater than the maximum of 1 - 'top_p'", + ) + if request.top_k is not None and (request.top_k > -1 and request.top_k < 1): + return create_error_response( + ErrorCode.PARAM_OUT_OF_RANGE, + f"{request.top_k} is out of Range. Either set top_k to -1 or >=1.", ) if request.top_k is not None and (request.top_k > -1 and request.top_k < 1): return create_error_response( @@ -236,10 +245,20 @@ def process_input(model_name, inp): inp = [inp] elif isinstance(inp, list): if isinstance(inp[0], int): - decoding = tiktoken.model.encoding_for_model(model_name) + try: + decoding = tiktoken.model.encoding_for_model(model_name) + except KeyError: + logger.warning("Warning: model not found. Using cl100k_base encoding.") + model = "cl100k_base" + decoding = tiktoken.get_encoding(model) inp = [decoding.decode(inp)] elif isinstance(inp[0], list): - decoding = tiktoken.model.encoding_for_model(model_name) + try: + decoding = tiktoken.model.encoding_for_model(model_name) + except KeyError: + logger.warning("Warning: model not found. Using cl100k_base encoding.") + model = "cl100k_base" + decoding = tiktoken.get_encoding(model) inp = [decoding.decode(text) for text in inp] return inp @@ -295,13 +314,29 @@ async def get_gen_params( prompt = messages elif isinstance(messages, list) and len(messages) > 0 and isinstance(messages[0], str): prompt = '. '.join(messages) + images = [] else: for message in messages: msg_role = message["role"] if msg_role == "system": conv.set_system_message(message["content"]) elif msg_role == "user": - conv.append_message(conv.roles[0], message["content"]) + if type(message["content"]) == list: + image_list = [ + item["image_url"]["url"] + for item in message["content"] + if item["type"] == "image_url" + ] + text_list = [ + item["text"] + for item in message["content"] + if item["type"] == "text" + ] + + text = "\n".join(text_list) + conv.append_message(conv.roles[0], (text, image_list)) + else: + conv.append_message(conv.roles[0], message["content"]) elif msg_role == "assistant": conv.append_message(conv.roles[1], message["content"]) else: @@ -310,6 +345,7 @@ async def get_gen_params( # Add a blank message for the assistant. 
conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() + images = conv.get_images() gen_params = { "model": model_name, @@ -325,6 +361,9 @@ async def get_gen_params( "stop_token_ids": conv.stop_token_ids, } + if len(images) > 0: + gen_params["images"] = images + if best_of is not None: gen_params.update({"best_of": best_of}) if use_beam_search is not None: @@ -455,6 +494,9 @@ async def create_chat_completion(request: ChatCompletionRequest): return create_error_response(ErrorCode.INTERNAL_ERROR, str(e)) usage = UsageInfo() for i, content in enumerate(all_tasks): + if isinstance(content, str): + content = json.loads(content) + if content["error_code"] != 0: return create_error_response(content["error_code"], content["text"]) choices.append( diff --git a/fastchat/serve/register_worker.py b/fastchat/serve/register_worker.py index 2c2c40295..aa57117b9 100644 --- a/fastchat/serve/register_worker.py +++ b/fastchat/serve/register_worker.py @@ -14,6 +14,7 @@ parser.add_argument("--controller-address", type=str) parser.add_argument("--worker-name", type=str) parser.add_argument("--check-heart-beat", action="store_true") + parser.add_argument("--multimodal", action="store_true") args = parser.parse_args() url = args.controller_address + "/register_worker" @@ -21,6 +22,7 @@ "worker_name": args.worker_name, "check_heart_beat": args.check_heart_beat, "worker_status": None, + "multimodal": args.multimodal, } r = requests.post(url, json=data) assert r.status_code == 200 diff --git a/fastchat/serve/sglang_worker.py b/fastchat/serve/sglang_worker.py new file mode 100644 index 000000000..b30668433 --- /dev/null +++ b/fastchat/serve/sglang_worker.py @@ -0,0 +1,313 @@ +""" +A model worker that executes the model based on SGLang. + +Usage: +python3 -m fastchat.serve.sglang_worker --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000 --worker-address http://localhost:30000 +""" + +import argparse +import asyncio +import json +import multiprocessing +from typing import List + +from fastapi import FastAPI, Request, BackgroundTasks +from fastapi.responses import StreamingResponse, JSONResponse +import uvicorn +import sglang as sgl +from sglang.srt.hf_transformers_utils import get_tokenizer, get_config +from sglang.srt.utils import load_image, is_multimodal_model + +from fastchat.conversation import IMAGE_PLACEHOLDER_STR +from fastchat.constants import ErrorCode, SERVER_ERROR_MSG +from fastchat.serve.base_model_worker import BaseModelWorker +from fastchat.serve.model_worker import ( + logger, + worker_id, +) +from fastchat.utils import get_context_length, is_partial_stop + +app = FastAPI() + + +@sgl.function +def pipeline(s, prompt, max_tokens): + for p in prompt: + if isinstance(p, str): + s += p + else: + s += sgl.image(p) + s += sgl.gen("response", max_tokens=max_tokens) + + +class SGLWorker(BaseModelWorker): + def __init__( + self, + controller_addr: str, + worker_addr: str, + worker_id: str, + model_path: str, + tokenizer_path: str, + model_names: List[str], + limit_worker_concurrency: int, + no_register: bool, + conv_template: str, + runtime: sgl.Runtime, + trust_remote_code: bool, + ): + super().__init__( + controller_addr, + worker_addr, + worker_id, + model_path, + model_names, + limit_worker_concurrency, + conv_template, + is_multimodal_model(model_path), + ) + + logger.info( + f"Loading the model {self.model_names} on worker {worker_id}, worker type: SGLang worker..." 
+ ) + + self.tokenizer = get_tokenizer(tokenizer_path) + self.context_len = get_context_length( + get_config(model_path, trust_remote_code=trust_remote_code) + ) + + if not no_register: + self.init_heart_beat() + + async def generate_stream(self, params): + self.call_ct += 1 + + prompt = params.pop("prompt") + images = params.get("images", []) + temperature = float(params.get("temperature", 1.0)) + top_p = float(params.get("top_p", 1.0)) + top_k = params.get("top_k", -1.0) + frequency_penalty = float(params.get("frequency_penalty", 0.0)) + presence_penalty = float(params.get("presence_penalty", 0.0)) + max_new_tokens = params.get("max_new_tokens", 256) + stop_str = params.get("stop", None) + stop_token_ids = params.get("stop_token_ids", None) or [] + echo = params.get("echo", True) + + # Handle stop_str + stop = [] + if isinstance(stop_str, str) and stop_str != "": + stop.append(stop_str) + elif isinstance(stop_str, list) and stop_str != []: + stop.extend(stop_str) + + for tid in stop_token_ids: + if tid is not None: + s = self.tokenizer.decode(tid) + if s != "": + stop.append(s) + + # make sampling params for sgl.gen + top_p = max(top_p, 1e-5) + if temperature <= 1e-5: + top_p = 1.0 + + # split prompt by image token + split_prompt = prompt.split(IMAGE_PLACEHOLDER_STR) + if prompt.count(IMAGE_PLACEHOLDER_STR) != len(images): + raise ValueError( + "The number of images passed in does not match the number of tokens in the prompt!" + ) + prompt = [] + for i in range(len(split_prompt)): + prompt.append(split_prompt[i]) + if i < len(images): + prompt[-1] = prompt[-1].strip() + prompt.append(load_image(images[i])) + + state = pipeline.run( + prompt, + max_new_tokens, + stop=stop, + temperature=temperature, + top_p=top_p, + top_k=top_k, + frequency_penalty=frequency_penalty, + presence_penalty=presence_penalty, + stream=True, + ) + + entire_output = prompt if echo else "" + async for out, meta_info in state.text_async_iter( + var_name="response", return_meta_data=True + ): + partial_stop = any(is_partial_stop(out, i) for i in stop) + + # prevent yielding partial stop sequence + if partial_stop: + continue + + entire_output += out + prompt_tokens = meta_info["prompt_tokens"] + completion_tokens = meta_info["completion_tokens"] + + ret = { + "text": entire_output, + "usage": { + "prompt_tokens": prompt_tokens, + "completion_tokens": completion_tokens, + "total_tokens": prompt_tokens + completion_tokens, + }, + "error_code": 0, + } + yield ret + + async def generate_stream_gate(self, params): + try: + async for ret in self.generate_stream(params): + yield json.dumps(ret).encode() + b"\0" + except (ValueError, RuntimeError) as e: + ret = { + "text": f"{SERVER_ERROR_MSG}\n\n({e})", + "error_code": ErrorCode.INTERNAL_ERROR, + } + yield json.dumps(ret).encode() + b"\0" + + async def generate_gate(self, params): + async for x in self.generate_stream_gate(params): + pass + return json.loads(x[:-1].decode()) + + +def release_worker_semaphore(): + worker.semaphore.release() + + +def acquire_worker_semaphore(): + if worker.semaphore is None: + worker.semaphore = asyncio.Semaphore(worker.limit_worker_concurrency) + return worker.semaphore.acquire() + + +def create_background_tasks(): + background_tasks = BackgroundTasks() + background_tasks.add_task(release_worker_semaphore) + return background_tasks + + +@app.post("/worker_generate_stream") +async def api_generate_stream(request: Request): + params = await request.json() + await acquire_worker_semaphore() + generator = worker.generate_stream_gate(params) + 
+    background_tasks = create_background_tasks()
+    return StreamingResponse(generator, background=background_tasks)
+
+
+@app.post("/worker_generate")
+async def api_generate(request: Request):
+    params = await request.json()
+    await acquire_worker_semaphore()
+    output = await worker.generate_gate(params)
+    release_worker_semaphore()
+    return JSONResponse(output)
+
+
+@app.post("/worker_get_status")
+async def api_get_status(request: Request):
+    return worker.get_status()
+
+
+@app.post("/count_token")
+async def api_count_token(request: Request):
+    params = await request.json()
+    return worker.count_token(params)
+
+
+@app.post("/worker_get_conv_template")
+async def api_get_conv(request: Request):
+    return worker.get_conv_template()
+
+
+@app.post("/model_details")
+async def api_model_details(request: Request):
+    return {"context_length": worker.context_len}
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--host", type=str, default="localhost")
+    parser.add_argument("--port", type=int, default=21002)
+    parser.add_argument("--worker-address", type=str, default="http://localhost:21002")
+    parser.add_argument(
+        "--controller-address", type=str, default="http://localhost:21001"
+    )
+    parser.add_argument("--model-path", type=str, default="lmsys/vicuna-7b-v1.5")
+    parser.add_argument("--tokenizer-path", type=str, default="")
+    parser.add_argument(
+        "--model-names",
+        type=lambda s: s.split(","),
+        help="Optional comma-separated display names",
+    )
+    parser.add_argument("--limit-worker-concurrency", type=int, default=1024)
+    parser.add_argument("--no-register", action="store_true")
+    parser.add_argument("--num-gpus", type=int, default=1)
+    parser.add_argument(
+        "--conv-template", type=str, default=None, help="Conversation prompt template."
+    )
+    parser.add_argument(
+        "--trust-remote-code",
+        action="store_false",
+        default=True,
+        help="Trust remote code (e.g., from HuggingFace) when "
+        "downloading the model and tokenizer.",
+    )
+    parser.add_argument(
+        "--mem-fraction-static",
+        type=float,
+        default=0.9,
+        help="The ratio (between 0 and 1) of GPU memory to "
+        "reserve for the model weights, activations, and KV cache. Higher "
+        "values will increase the KV cache size and thus improve the model's "
+        "throughput. However, if the value is too high, it may cause out-of-"
+        "memory (OOM) errors.",
+    )
+    parser.add_argument(
+        "--multimodal",
+        action="store_true",
+        required=False,
+        default=False,
+        help="Register this worker as serving a multimodal model.",
+    )
+
+    args = parser.parse_args()
+
+    args.tp_size = args.num_gpus if args.num_gpus > 1 else 1
+    args.tokenizer_path = (
+        args.model_path if args.tokenizer_path == "" else args.tokenizer_path
+    )
+
+    multiprocessing.set_start_method("spawn", force=True)
+    runtime = sgl.Runtime(
+        model_path=args.model_path,
+        tokenizer_path=args.tokenizer_path,
+        trust_remote_code=args.trust_remote_code,
+        mem_fraction_static=args.mem_fraction_static,
+        tp_size=args.tp_size,
+        log_level="info",
+    )
+    sgl.set_default_backend(runtime)
+
+    worker = SGLWorker(
+        args.controller_address,
+        args.worker_address,
+        worker_id,
+        args.model_path,
+        args.tokenizer_path,
+        args.model_names,
+        args.limit_worker_concurrency,
+        args.no_register,
+        args.conv_template,
+        runtime,
+        args.trust_remote_code,
+    )
+    uvicorn.run(app, host=args.host, port=args.port, log_level="info")
diff --git a/fastchat/serve/vllm_worker.py b/fastchat/serve/vllm_worker.py
index 46e876b2f..e2be90a24 100644
--- a/fastchat/serve/vllm_worker.py
+++ b/fastchat/serve/vllm_worker.py
@@ -22,7 +22,7 @@
     logger,
     worker_id,
 )
-from fastchat.utils import get_context_length
+from fastchat.utils import get_context_length, is_partial_stop
 
 app = FastAPI()
@@ -55,6 +55,10 @@ def __init__(
             f"Loading the model {self.model_names} on worker {worker_id}, worker type: vLLM worker..."
         )
         self.tokenizer = llm_engine.engine.tokenizer
+        # This is to support vllm >= 0.2.7 where TokenizerGroup was introduced
+        # and llm_engine.engine.tokenizer was no longer a raw tokenizer
+        if hasattr(self.tokenizer, "tokenizer"):
+            self.tokenizer = llm_engine.engine.tokenizer.tokenizer
         self.context_len = get_context_length(llm_engine.engine.model_config.hf_config)
 
         if not no_register:
@@ -79,6 +83,8 @@ async def generate_stream(self, params):
         use_beam_search = params.get("use_beam_search", False)
         best_of = params.get("best_of", None)
 
+        request = params.get("request", None)
+
         # Handle stop_str
         stop = set()
         if isinstance(stop_str, str) and stop_str != "":
@@ -88,7 +94,9 @@
         for tid in stop_token_ids:
             if tid is not None:
-                stop.add(self.tokenizer.decode(tid))
+                s = self.tokenizer.decode(tid)
+                if s != "":
+                    stop.add(s)
 
         # make sampling params in vllm
         top_p = max(top_p, 1e-5)
@@ -119,7 +127,20 @@
             else:
                 text_outputs = [output.text for output in request_output.outputs]
                 text_outputs = " ".join(text_outputs)
-            # Note: usage is not supported yet
+
+            partial_stop = any(is_partial_stop(text_outputs, i) for i in stop)
+            # prevent yielding partial stop sequence
+            if partial_stop:
+                continue
+
+            aborted = False
+            if request and await request.is_disconnected():
+                await engine.abort(request_id)
+                request_output.finished = True
+                aborted = True
+                for output in request_output.outputs:
+                    output.finish_reason = "abort"
+
             prompt_tokens = len(request_output.prompt_token_ids)
             completion_tokens = sum(
                 len(output.token_ids) for output in request_output.outputs
@@ -139,8 +160,15 @@
                 if len(request_output.outputs) == 1
                 else [output.finish_reason for output in request_output.outputs],
             }
+            # Emit twice here to ensure a 'finish_reason' with empty content in the OpenAI API response.
+            # This aligns with the behavior of model_worker.
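+            # The first chunk below repeats the final text with finish_reason=None;
+            # the second then carries the real finish_reason, so the OpenAI-compatible
+            # server can close the stream with an empty delta.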
+            if request_output.finished:
+                yield (json.dumps({**ret, **{"finish_reason": None}}) + "\0").encode()
             yield (json.dumps(ret) + "\0").encode()
 
+            if aborted:
+                break
+
     async def generate(self, params):
         async for x in self.generate_stream(params):
             pass
@@ -173,6 +201,7 @@ async def api_generate_stream(request: Request):
     await acquire_worker_semaphore()
     request_id = random_uuid()
     params["request_id"] = request_id
+    params["request"] = request
     generator = worker.generate_stream(params)
     background_tasks = create_background_tasks(request_id)
     return StreamingResponse(generator, background=background_tasks)
@@ -184,6 +213,7 @@ async def api_generate(request: Request):
     await acquire_worker_semaphore()
     request_id = random_uuid()
     params["request_id"] = request_id
+    params["request"] = request
     output = await worker.generate(params)
     release_worker_semaphore()
     await engine.abort(request_id)
diff --git a/fastchat/train/train_baichuan.py b/fastchat/train/train_baichuan.py
index 70c6488b5..b6b19b486 100644
--- a/fastchat/train/train_baichuan.py
+++ b/fastchat/train/train_baichuan.py
@@ -159,7 +159,7 @@ def preprocess(sources, tokenizer: transformers.PreTrainedTokenizer, **kwargs) -
     else:  # If the data volume is large, use multithreading for processing
         with Pool() as p:
             conversations, conv = p.apply_async(
-                apply_prompt_template, (sources, tokenizer, systems)
+                apply_prompt_template, (sources, systems)
             ).get()
             input_ids, targets = p.apply_async(
                 tokenize_conversations, (conversations, tokenizer)
diff --git a/fastchat/train/train_with_template.py b/fastchat/train/train_with_template.py
new file mode 100644
index 000000000..e5c5f353d
--- /dev/null
+++ b/fastchat/train/train_with_template.py
@@ -0,0 +1,400 @@
+# This code is based on tatsu-lab/stanford_alpaca. Below is the original copyright:
+#
+# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass, field
+import json
+import math
+import jsonlines
+import pathlib
+from multiprocessing import Pool
+from typing import Dict, Optional, Sequence
+
+import numpy as np
+import torch
+from torch.utils.data import Dataset
+import transformers
+from transformers import Trainer
+from transformers.trainer_pt_utils import LabelSmoother
+
+from fastchat.conversation import SeparatorStyle
+from fastchat.model.model_adapter import get_conversation_template
+
+IGNORE_TOKEN_ID = LabelSmoother.ignore_index
+
+
+@dataclass
+class ModelArguments:
+    model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
+
+
+@dataclass
+class DataArguments:
+    data_path: str = field(
+        default=None, metadata={"help": "Path to the training data."}
+    )
+    lazy_preprocess: bool = False
+
+
+@dataclass
+class TrainingArguments(transformers.TrainingArguments):
+    cache_dir: Optional[str] = field(default=None)
+    optim: str = field(default="adamw_torch")
+    model_max_length: int = field(
+        default=512,
+        metadata={
+            "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
+        },
+    )
+
+
+local_rank = None
+
+
+def rank0_print(*args):
+    if local_rank == 0:
+        print(*args)
+
+
+def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):
+    """Collects the state dict and dump to disk."""
+    state_dict = trainer.model.state_dict()
+    if trainer.args.should_save:
+        cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}
+        del state_dict
+        trainer._save(output_dir, state_dict=cpu_state_dict)  # noqa
+
+
+def apply_prompt_template(sources, template_id, systems=None):
+    conv = get_conversation_template(template_id)
+    roles = {"human": conv.roles[0], "gpt": conv.roles[1]}
+    conversations = []
+    for i, source in enumerate(sources):
+        if roles[source[0]["from"]] != conv.roles[0]:
+            source = source[1:]
+
+        conv.messages = []
+        for j, sentence in enumerate(source):
+            role = roles[sentence["from"]]
+            assert role == conv.roles[j % 2], f"{i}"
+            conv.append_message(role, sentence["value"])
+        if systems and systems[i]:
+            conv.set_system_message(systems[i])
+        prompt = conv.get_prompt()
+        conversations.append(prompt)
+    return conversations, conv
+
+
+def tokenize_conversations(conversations, tokenizer):
+    input_ids = tokenizer(
+        conversations,
+        return_tensors="pt",
+        padding="max_length",
+        max_length=tokenizer.model_max_length,
+        truncation=True,
+    ).input_ids
+    targets = input_ids.clone()
+    return input_ids, targets
+
+
+def get_prompt_separator(conv):
+    if conv.sep_style == SeparatorStyle.ADD_COLON_SINGLE:
+        user_turn_separator = conv.sep2
+        assistant_turn_separator = conv.roles[1] + ": "
+
+    elif conv.sep_style == SeparatorStyle.ADD_COLON_TWO:
+        user_turn_separator = conv.sep2
+        assistant_turn_separator = conv.roles[1] + ": "
+
+    elif conv.sep_style == SeparatorStyle.ADD_COLON_SPACE_SINGLE:
+        if conv.sep2 is None:
+            user_turn_separator = conv.roles[0] + ": "
+        else:
+            user_turn_separator = conv.sep2
+
+        assistant_turn_separator = conv.roles[1] + ": "
+
+    elif conv.sep_style == SeparatorStyle.LLAMA2:
+        user_turn_separator = conv.sep2
+        assistant_turn_separator = conv.roles[1] + " "
+
+    elif conv.sep_style == SeparatorStyle.CHATML:
+        if conv.sep2 is None:
+            user_turn_separator = conv.sep + "\n"
+        else:
+            user_turn_separator = conv.sep2 + "\n"
+
+        assistant_turn_separator = conv.roles[1] + "\n"
+
+    return user_turn_separator, assistant_turn_separator
+
+
+def mask_targets(conversations, targets, tokenizer, conv):
+    for conversation, target in zip(conversations, targets):
+        total_len = int(target.ne(tokenizer.pad_token_id).sum())
+        if tokenizer.eos_token is None:
+            cur_len = 0
+        elif tokenizer.eos_token is not None and target[0] != tokenizer.bos_token_id:
+            cur_len = 0
+        elif tokenizer.eos_token is not None and target[0] == tokenizer.bos_token_id:
+            cur_len = 1
+
+        target[:cur_len] = IGNORE_TOKEN_ID
+        user_turn_separator, assistant_turn_separator = get_prompt_separator(conv)
+        turns = conversation.split(user_turn_separator)
+        for i, turn in enumerate(turns):
+            if (
+                i < len(turns) - 1 and turn == ""
+            ):  # Last turn is the user_turn_separator
+                break
+
+            if i != 0:
+                turn = user_turn_separator + turn
+
+            turn_len = len(tokenizer(turn, add_special_tokens=False).input_ids)
+
+            if assistant_turn_separator in turn:
+                parts = turn.rsplit(assistant_turn_separator)
+                parts[0] += assistant_turn_separator
+            else:
+                parts = [turn]
+
+            instruction_len = len(
+                tokenizer(parts[0], add_special_tokens=False).input_ids
+            )
+
+            target[cur_len : cur_len + instruction_len] = IGNORE_TOKEN_ID
+            cur_len += turn_len
+
+        target[cur_len:] = IGNORE_TOKEN_ID
+
+        if False:  # Inspect and check the correctness of masking
+            z = target.clone()
+            z = torch.where(z == IGNORE_TOKEN_ID, tokenizer.unk_token_id, z)
+            rank0_print(tokenizer.decode(z))
+
+        if cur_len < tokenizer.model_max_length:
+            if cur_len != total_len:
+                target[:] = IGNORE_TOKEN_ID
+                rank0_print(
+                    f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}."
+                    f" (ignored)"
+                )
+    return targets
+
+
+def preprocess(
+    sources, tokenizer: transformers.PreTrainedTokenizer, template_id, **kwargs
+) -> Dict:
+    systems = None if not kwargs else kwargs.get("systems", None)
+
+    # If the data volume is small, process it directly in the main thread
+    if len(sources) <= 1000:
+        conversations, conv = apply_prompt_template(sources, template_id, systems)
+        input_ids, targets = tokenize_conversations(conversations, tokenizer)
+        targets = mask_targets(conversations, targets, tokenizer, conv)
+    else:  # If the data volume is large, use multithreading for processing
+        with Pool() as p:
+            conversations, conv = p.apply_async(
+                apply_prompt_template, (sources, template_id, systems)
+            ).get()
+            input_ids, targets = p.apply_async(
+                tokenize_conversations, (conversations, tokenizer)
+            ).get()
+            targets = p.apply_async(
+                mask_targets, (conversations, targets, tokenizer, conv)
+            ).get()
+            p.close()
+            p.join()
+
+    return dict(
+        input_ids=input_ids,
+        labels=targets,
+        attention_mask=input_ids.ne(tokenizer.pad_token_id),
+    )
+
+
+class SupervisedDataset(Dataset):
+    """Dataset for supervised fine-tuning."""
+
+    def __init__(
+        self, raw_data, tokenizer: transformers.PreTrainedTokenizer, template_id
+    ):
+        super(SupervisedDataset, self).__init__()
+
+        rank0_print("Formatting inputs...")
+        systems = [example.get("system", "") for example in raw_data]
+        sources = [example["conversations"] for example in raw_data]
+
+        data_dict = preprocess(sources, tokenizer, template_id, systems=systems)
+
+        self.input_ids = data_dict["input_ids"]
+        self.labels = data_dict["labels"]
+        self.attention_mask = data_dict["attention_mask"]
+
+    def __len__(self):
+        return len(self.input_ids)
+
+    def __getitem__(self, i) -> Dict[str, torch.Tensor]:
+        return dict(
+            input_ids=self.input_ids[i],
+            labels=self.labels[i],
+            attention_mask=self.attention_mask[i],
+        )
+
+
+class LazySupervisedDataset(Dataset):
+    """Dataset for supervised fine-tuning."""
+
+    def __init__(
+        self, raw_data, tokenizer: transformers.PreTrainedTokenizer, template_id
+    ):
+        super(LazySupervisedDataset, self).__init__()
+        self.tokenizer = tokenizer
+        self.template_id = template_id
+
+        rank0_print("Formatting inputs...Skip in lazy mode")
+        self.raw_data = raw_data
+        self.cached_data_dict = {}
+
+    def __len__(self):
+        return len(self.raw_data)
+
+    def __getitem__(self, i) -> Dict[str, torch.Tensor]:
+        if i in self.cached_data_dict:
+            return self.cached_data_dict[i]
+
+        ret = preprocess(
+            [self.raw_data[i]["conversations"]],
+            self.tokenizer,
+            self.template_id,
+            systems=[self.raw_data[i].get("system", "")],
+        )
+        ret = dict(
+            input_ids=ret["input_ids"][0],
+            labels=ret["labels"][0],
+            attention_mask=ret["attention_mask"][0],
+        )
+        self.cached_data_dict[i] = ret
+
+        return ret
+
+
+def make_supervised_data_module(
+    tokenizer: transformers.PreTrainedTokenizer,
+    data_args,
+    template_id,
+    train_ratio=0.98,
+) -> Dict:
+    """Make dataset and collator for supervised fine-tuning."""
+    train_ratio = min(train_ratio, 1.0)
+    # LazySupervisedDataset tokenizes each example on first access and caches it,
+    # trading startup time for memory; SupervisedDataset preprocesses everything up front.
+    dataset_cls = (
+        LazySupervisedDataset if data_args.lazy_preprocess else SupervisedDataset
+    )
+    rank0_print("Loading data...")
+    data_path = data_args.data_path
+    if data_path.endswith(".json"):
+        raw_data = json.load(open(data_path, "r"))
+    elif data_path.endswith(".jsonl"):
+        with jsonlines.open(data_path, mode="r") as reader:
+            raw_data = [item for item in reader]
+
+    # Split train/test
+    np.random.seed(0)
+    perm = np.random.permutation(len(raw_data))
+    split = int(len(perm) * train_ratio)
+    train_indices = perm[:split]
+    if train_ratio < 1:
+        eval_indices = perm[split:]
+    else:
+        # If train_ratio == 1, reuse the last 5% of the data as eval data
+        # so the trainer does not fail on an empty eval set
+        eval_indices = perm[-int(len(perm) * 0.05) :]
+    train_raw_data = [raw_data[i] for i in train_indices]
+    eval_raw_data = [raw_data[i] for i in eval_indices]
+    rank0_print(f"#train {len(train_raw_data)}, #eval {len(eval_raw_data)}")
+
+    train_dataset = dataset_cls(
+        train_raw_data, tokenizer=tokenizer, template_id=template_id
+    )
+    eval_dataset = dataset_cls(
+        eval_raw_data, tokenizer=tokenizer, template_id=template_id
+    )
+    return dict(train_dataset=train_dataset, eval_dataset=eval_dataset)
+
+
+def train():
+    global local_rank
+
+    parser = transformers.HfArgumentParser(
+        (ModelArguments, DataArguments, TrainingArguments)
+    )
+    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
+    local_rank = training_args.local_rank
+    config = transformers.AutoConfig.from_pretrained(
+        model_args.model_name_or_path,
+        trust_remote_code=True,
+        cache_dir=training_args.cache_dir,
+    )
+    # Set RoPE scaling factor
+    orig_ctx_len = getattr(config, "max_position_embeddings", None)
+    if orig_ctx_len and training_args.model_max_length > orig_ctx_len:
+        scaling_factor = float(
+            math.ceil(training_args.model_max_length / orig_ctx_len)
+        )
+        config.rope_scaling = {"type": "linear", "factor": scaling_factor}
+    config.use_cache = False
+    model = transformers.AutoModelForCausalLM.from_pretrained(
+        model_args.model_name_or_path,
+        config=config,
+        trust_remote_code=True,
+        cache_dir=training_args.cache_dir,
+    )
+    # Tie the weights
+    model.tie_weights()
+
+    tokenizer = transformers.AutoTokenizer.from_pretrained(
+        model_args.model_name_or_path,
+        config=config,
+        trust_remote_code=True,
+        cache_dir=training_args.cache_dir,
+        model_max_length=training_args.model_max_length,
+        padding_side="right",
+        use_fast=False,
+    )
+    # NOTE: a token id beyond the vocab size will crash training, so add the
+    # special tokens and resize the embedding table accordingly.
+    tokenizer.pad_token = tokenizer.unk_token
+    tokenizer.pad_token_id = tokenizer.unk_token_id
+    print(f"tokens len: {len(tokenizer)}")
+    model.resize_token_embeddings(len(tokenizer))
+
+    template_id = model_args.model_name_or_path
+    data_module = make_supervised_data_module(
+        tokenizer=tokenizer,
+        template_id=template_id,
+        train_ratio=0.98,
+        data_args=data_args,
+    )
+    trainer = Trainer(
+        model=model, tokenizer=tokenizer, args=training_args, **data_module
+    )
+
+    if list(pathlib.Path(training_args.output_dir).glob("checkpoint-*")):
+        trainer.train(resume_from_checkpoint=True)
+    else:
+        trainer.train()
+    trainer.save_state()
+    safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir)
+
+
+if __name__ == "__main__":
+    train()
diff --git a/fastchat/train/train_yuan2.py b/fastchat/train/train_yuan2.py
new file mode 100644
index 000000000..6f3c09a14
--- /dev/null
+++ b/fastchat/train/train_yuan2.py
@@ -0,0 +1,482 @@
+# This code is based on tatsu-lab/stanford_alpaca. Below is the original copyright:
+#
+# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass, field
+import json
+import math
+import pathlib
+from typing import Dict, Optional, Sequence
+
+import numpy as np
+import torch
+from torch.utils.data import Dataset
+import transformers
+from transformers import Trainer
+from transformers.trainer_pt_utils import LabelSmoother
+
+from fastchat.conversation import SeparatorStyle
+from fastchat.model.model_adapter import get_conversation_template
+
+IGNORE_TOKEN_ID = LabelSmoother.ignore_index
+
+
+@dataclass
+class ModelArguments:
+    model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
+    trust_remote_code: bool = field(
+        default=False,
+        metadata={
+            "help": "Whether or not to allow for custom models defined on the Hub in their own modeling files"
+        },
+    )
+    padding_side: str = field(
+        default="right", metadata={"help": "The padding side in tokenizer"}
+    )
+
+
+@dataclass
+class DataArguments:
+    data_path: str = field(
+        default=None, metadata={"help": "Path to the training data."}
+    )
+    eval_data_path: str = field(
+        default=None, metadata={"help": "Path to the evaluation data."}
+    )
+    lazy_preprocess: bool = False
+    last_response_loss: bool = False
+    split_example_loss: bool = False
+    efficient_loss: bool = False
+
+
+@dataclass
+class TrainingArguments(transformers.TrainingArguments):
+    cache_dir: Optional[str] = field(default=None)
+    optim: str = field(default="adamw_torch")
+    model_max_length: int = field(
+        default=512,
+        metadata={
+            "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
+        },
+    )
+
+
+local_rank = None
+
+
+def rank0_print(*args):
+    if local_rank == 0:
+        print(*args)
+
+
+def trainer_save_model_safe(trainer: transformers.Trainer):
+    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
+    from torch.distributed.fsdp import StateDictType, FullStateDictConfig
+
+    save_policy = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
+    with FSDP.state_dict_type(
+        trainer.model, StateDictType.FULL_STATE_DICT, save_policy
+    ):
+        trainer.save_model()
+
+
+# Added by wpf for Yuan testing
+def right_replace(string, old, new, max=1):
+    return string[::-1].replace(old[::-1], new[::-1], max)[::-1]
+
+
+def preprocess(
+    sources,
+    tokenizer: transformers.PreTrainedTokenizer,
+    data_args,
+) -> Dict:
+    conv = get_conversation_template("yuan2")  # wpf
+    roles = {"human": conv.roles[0], "gpt": conv.roles[1]}
+
+    # Apply prompt templates
+    conversations = []
+    for i, source in enumerate(sources):
+        if roles[source[0]["from"]] != conv.roles[0]:
+            # Skip the first one if it is not from human
+            source = source[1:]
+
+        conv.messages = []
+        for j, sentence in enumerate(source):
+            role = roles[sentence["from"]]
+            assert role == conv.roles[j % 2], f"{i}"
+            conv.append_message(role, sentence["value"])
+        conversations.append(conv.get_prompt())
+    if data_args.last_response_loss:
+        a = conversations[0].replace("", "")
+        a = right_replace(a, "", "")
+        # a=right_replace(a,"","\n",max=20)
+        conversations[0] = a
+    if data_args.split_example_loss:
+        a = conversations[0].replace("", "")
+        a = a.split("")
+        for i in range(int(len(a) / 2)):
+            if i == 0:
+                conversations[i] = ""
+            if i != 0:
+                conversations.append("")
+            for j in range(i * 2):
+                conversations[i] = conversations[i] + a[j] + ""
+            conversations[i] = (
+                conversations[i] + a[i * 2] + "" + a[i * 2 + 1] + ""
+            )
+
+    if data_args.efficient_loss:
+        a = conversations[0].replace("", "")
+        conversations[0] = a
+
+    print(conversations)
+
+    # Tokenize conversations
+    input_ids = tokenizer(
+        conversations,
+        return_tensors="pt",
+        padding="max_length",
+        max_length=tokenizer.model_max_length,
+        truncation=True,
+    ).input_ids
+    targets = input_ids.clone()
+
+    # assert conv.sep_style == SeparatorStyle.ADD_COLON_TWO #wpf
+    # Mask targets. Only compute loss on the assistant outputs.
+    # sep = conv.sep + conv.roles[1] + ": " #wpf
+
+    if data_args.split_example_loss:
+        for conversation, target in zip(conversations, targets):
+            total_len = int(target.ne(tokenizer.pad_token_id).sum())
+            turns = conversation.split("")
+            cur_len = 1
+            target[:cur_len] = IGNORE_TOKEN_ID
+
+            for i, turn in enumerate(turns):
+                if turn == "":
+                    break
+                if i == 0 or i == len(turns) - 1:
+                    turn_len = len(tokenizer(turn).input_ids)
+                else:
+                    turn_len = len(tokenizer(turn).input_ids) + 1
+                # parts = turn.split(sep)
+                # if len(parts) != 2:
+                #     break
+                # parts[0] += sep
+                # "-2" is hardcoded for the Llama tokenizer to make the offset correct.
+                instruction_len = 0
+                if i == len(turns) - 1:
+                    instruction_len = turn_len
+                target[cur_len : cur_len + instruction_len] = IGNORE_TOKEN_ID
+                cur_len += turn_len
+
+            target[cur_len:] = IGNORE_TOKEN_ID
+            # print("cur_len: ", cur_len)
+            # print("total_len: ", total_len)
+
+            if False:  # Inspect and check the correctness of masking
+                z = target.clone()
+                z = torch.where(z == IGNORE_TOKEN_ID, tokenizer.unk_token_id, z)
+                rank0_print(tokenizer.decode(z))
+                exit()
+
+            if cur_len < tokenizer.model_max_length:
+                if cur_len != total_len:
+                    target[:] = IGNORE_TOKEN_ID
+                    rank0_print(
+                        f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}."
+                        f" #turn = {len(turns) - 1}. (ignored)"
+                    )
+
+    if data_args.efficient_loss:
+        for conversation, target in zip(conversations, targets):
+            total_len = int(target.ne(tokenizer.pad_token_id).sum())
+
+            turns = conversation.split("")
+            cur_len = 1
+            target[:cur_len] = IGNORE_TOKEN_ID
+
+            for i, turn in enumerate(turns):
+                if turn == "":
+                    break
+                if i == 0 or i == len(turns) - 1:
+                    turn_len = len(tokenizer(turn).input_ids)
+                else:
+                    turn_len = len(tokenizer(turn).input_ids) + 1
+                # parts = turn.split(sep)
+                # if len(parts) != 2:
+                #     break
+                # parts[0] += sep
+                # "-2" is hardcoded for the Llama tokenizer to make the offset correct.
+                instruction_len = 0
+                if i % 2 == 0:
+                    instruction_len = turn_len
+
+                # if i != 0 and not tokenizer.legacy:
+                #     # The legacy and non-legacy modes handle special tokens differently
+                #     instruction_len -= 1
+
+                # Ignore the user instructions
+                target[cur_len : cur_len + instruction_len] = IGNORE_TOKEN_ID
+                cur_len += turn_len
+
+                if i != 0 and not tokenizer.legacy:
+                    # The legacy and non-legacy modes handle special tokens differently
+                    cur_len -= 1
+            target[cur_len:] = IGNORE_TOKEN_ID
+            # print("cur_len: ", cur_len)
+            # print("total_len: ", total_len)
+
+            if False:  # Inspect and check the correctness of masking
+                z = target.clone()
+                z = torch.where(z == IGNORE_TOKEN_ID, tokenizer.unk_token_id, z)
+                rank0_print(tokenizer.decode(z))
+                exit()
+
+            if cur_len < tokenizer.model_max_length:
+                if cur_len != total_len:
+                    target[:] = IGNORE_TOKEN_ID
+                    rank0_print(
+                        f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}."
+                        f" #turn = {len(turns) - 1}. (ignored)"
+                    )
+    if data_args.last_response_loss:
+        for conversation, target in zip(conversations, targets):
+            total_len = int(target.ne(tokenizer.pad_token_id).sum())
+
+            turns = conversation.split("")
+            cur_len = 1
+            target[:cur_len] = IGNORE_TOKEN_ID
+
+            for i, turn in enumerate(turns):
+                if turn == "":
+                    break
+                if i == 0 or i == len(turns) - 1:
+                    turn_len = len(tokenizer(turn).input_ids)
+                else:
+                    turn_len = len(tokenizer(turn).input_ids) + 1
+                # parts = turn.split(sep)
+                # if len(parts) != 2:
+                #     break
+                # parts[0] += sep
+                # "-2" is hardcoded for the Llama tokenizer to make the offset correct.
+                instruction_len = 0
+                if i == len(turns) - 1:
+                    instruction_len = turn_len
+
+                # if i != 0 and not tokenizer.legacy:
+                #     # The legacy and non-legacy modes handle special tokens differently
+                #     instruction_len -= 1
+
+                # Ignore the user instructions
+                target[cur_len : cur_len + instruction_len] = IGNORE_TOKEN_ID
+                cur_len += turn_len
+
+                # if i != 0 and not tokenizer.legacy:
+                #     # The legacy and non-legacy modes handle special tokens differently
+                #     cur_len -= 1
+
+            target[cur_len:] = IGNORE_TOKEN_ID
+            # print("cur_len: ", cur_len)
+            # print("total_len: ", total_len)
+
+            if False:  # Inspect and check the correctness of masking
+                z = target.clone()
+                z = torch.where(z == IGNORE_TOKEN_ID, tokenizer.unk_token_id, z)
+                rank0_print(tokenizer.decode(z))
+                exit()
+
+            if cur_len < tokenizer.model_max_length:
+                if cur_len != total_len:
+                    target[:] = IGNORE_TOKEN_ID
+                    rank0_print(
+                        f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}."
+                        f" #turn = {len(turns) - 1}. (ignored)"
+                    )
+
+    return dict(
+        input_ids=input_ids,
+        labels=targets,
+        attention_mask=input_ids.ne(tokenizer.pad_token_id),
+    )
+
+
+class SupervisedDataset(Dataset):
+    """Dataset for supervised fine-tuning."""
+
+    def __init__(
+        self, raw_data, data_args, tokenizer: transformers.PreTrainedTokenizer
+    ):
+        super(SupervisedDataset, self).__init__()
+
+        rank0_print("Formatting inputs...")
+        sources = [example["conversations"] for example in raw_data]
+        data_dict = preprocess(sources, tokenizer, data_args)
+
+        self.input_ids = data_dict["input_ids"]
+        self.labels = data_dict["labels"]
+        self.attention_mask = data_dict["attention_mask"]
+
+    def __len__(self):
+        return len(self.input_ids)
+
+    def __getitem__(self, i) -> Dict[str, torch.Tensor]:
+        return dict(
+            input_ids=self.input_ids[i],
+            labels=self.labels[i],
+            attention_mask=self.attention_mask[i],
+        )
+
+
+class LazySupervisedDataset(Dataset):
+    """Dataset for supervised fine-tuning."""
+
+    def __init__(
+        self, raw_data, data_args, tokenizer: transformers.PreTrainedTokenizer
+    ):
+        super(LazySupervisedDataset, self).__init__()
+        self.tokenizer = tokenizer
+
+        rank0_print("Formatting inputs...Skip in lazy mode")
+        self.tokenizer = tokenizer
+        self.raw_data = raw_data
+        self.data_args = data_args
+        self.cached_data_dict = {}
+
+    def __len__(self):
+        return len(self.raw_data)
+
+    def __getitem__(self, i) -> Dict[str, torch.Tensor]:
+        if i in self.cached_data_dict:
+            return self.cached_data_dict[i]
+
+        ret = preprocess(
+            [self.raw_data[i]["conversations"]], self.tokenizer, self.data_args
+        )
+        ret = dict(
+            input_ids=ret["input_ids"][0],
+            labels=ret["labels"][0],
+            attention_mask=ret["attention_mask"][0],
+        )
+        self.cached_data_dict[i] = ret
+
+        return ret
+
+
+def make_supervised_data_module(
+    tokenizer: transformers.PreTrainedTokenizer, data_args
+) -> Dict:
+    """Make dataset and collator for supervised fine-tuning."""
+    dataset_cls = (
+        LazySupervisedDataset if data_args.lazy_preprocess else SupervisedDataset
+    )
+    rank0_print("Loading data...")
+
+    train_json = json.load(open(data_args.data_path, "r"))
+    train_dataset = dataset_cls(train_json, data_args, tokenizer=tokenizer)
+
+    if data_args.eval_data_path:
+        eval_json = json.load(open(data_args.eval_data_path, "r"))
+        eval_dataset = dataset_cls(eval_json, data_args, tokenizer=tokenizer)
+    else:
+        eval_dataset = None
+
+    return dict(train_dataset=train_dataset, eval_dataset=eval_dataset)
+
+
+def train():
+    global local_rank
+
+    parser = transformers.HfArgumentParser(
+        (ModelArguments, DataArguments, TrainingArguments)
+    )
+    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
+    local_rank = training_args.local_rank
+
+    # Set RoPE scaling factor
+    config = transformers.AutoConfig.from_pretrained(
+        model_args.model_name_or_path,
+        cache_dir=training_args.cache_dir,
+        trust_remote_code=model_args.trust_remote_code,
+    )
+    orig_ctx_len = getattr(config, "max_position_embeddings", None)
+    if orig_ctx_len and training_args.model_max_length > orig_ctx_len:
+        scaling_factor = float(
+            math.ceil(training_args.model_max_length / orig_ctx_len)
+        )
+        config.rope_scaling = {"type": "linear", "factor": scaling_factor}
+    config.use_cache = False
+
+    # Load model and tokenizer
+    model = transformers.AutoModelForCausalLM.from_pretrained(
+        model_args.model_name_or_path,
+        config=config,
+        cache_dir=training_args.cache_dir,
+        trust_remote_code=model_args.trust_remote_code,
+    )
+    tokenizer = transformers.AutoTokenizer.from_pretrained(
+        model_args.model_name_or_path,
+        cache_dir=training_args.cache_dir,
+        model_max_length=training_args.model_max_length,
+        padding_side=model_args.padding_side,
+        use_fast=False,
+        trust_remote_code=model_args.trust_remote_code,
+    )
+
+    if tokenizer.pad_token != tokenizer.unk_token:
+        tokenizer.pad_token = tokenizer.unk_token
+    tokenizer.add_tokens(
+        [
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+            "",
+        ],
+        special_tokens=True,
+    )
+
+    # Load data
+    data_module = make_supervised_data_module(tokenizer=tokenizer, data_args=data_args)
+
+    # Start trainer
+    trainer = Trainer(
+        model=model, tokenizer=tokenizer, args=training_args, **data_module
+    )
+    if list(pathlib.Path(training_args.output_dir).glob("checkpoint-*")):
+        trainer.train(resume_from_checkpoint=True)
+    else:
+        trainer.train()
+
+    # Save model
+    model.config.use_cache = True
+    trainer.save_state()
+    if trainer.is_deepspeed_enabled:
+        trainer.save_model()
+    else:
+        trainer_save_model_safe(trainer)
+
+
+if __name__ == "__main__":
+    train()
diff --git a/fastchat/utils.py b/fastchat/utils.py
index b5e3ba543..70f61202f 100644
--- a/fastchat/utils.py
+++ b/fastchat/utils.py
@@ -2,6 +2,8 @@
 Common utilities.
 """
 from asyncio import AbstractEventLoop
+from io import BytesIO
+import base64
 import json
 import logging
 import logging.handlers
@@ -57,6 +59,9 @@ def build_logger(logger_name, logger_filename):
     logger = logging.getLogger(logger_name)
     logger.setLevel(logging.INFO)
 
+    # Avoid httpx flooding POST logs
+    logging.getLogger("httpx").setLevel(logging.WARNING)
+
     # if LOGDIR is empty, then don't try output log to local file
     if LOGDIR != "":
         os.makedirs(LOGDIR, exist_ok=True)
@@ -149,16 +154,21 @@ def oai_moderation(text):
     """
     import openai
 
-    openai.api_base = "https://api.openai.com/v1"
-    openai.api_key = os.environ["OPENAI_API_KEY"]
+    client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
+    threshold_dict = {
+        "sexual": 0.2,
+    }
 
     MAX_RETRY = 3
-    for i in range(MAX_RETRY):
+    for _ in range(MAX_RETRY):
         try:
-            res = openai.Moderation.create(input=text)
-            flagged = res["results"][0]["flagged"]
+            res = client.moderations.create(input=text)
+            flagged = res.results[0].flagged
+            for category, threshold in threshold_dict.items():
+                if getattr(res.results[0].category_scores, category) > threshold:
+                    flagged = True
             break
-        except (openai.error.OpenAIError, KeyError, IndexError) as e:
+        except (openai.OpenAIError, KeyError, IndexError) as e:
             # flag true to be conservative
             flagged = True
             print(f"MODERATION ERROR: {e}\nInput: {text}")
@@ -166,7 +176,7 @@ def oai_moderation(text):
 
 
 def moderation_filter(text, model_list):
-    MODEL_KEYWORDS = ["claude"]
+    MODEL_KEYWORDS = ["claude", "gpt-4", "bard"]
 
     for keyword in MODEL_KEYWORDS:
         for model in model_list:
@@ -223,7 +233,7 @@ def pretty_print_semaphore(semaphore):
         url_params = Object.fromEntries(params);
         console.log("url_params", url_params);
 
-        msg = "Users of this website are required to agree to the following terms:\\n\\nThe service is a research preview. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes.\\nThe service collects user dialogue data and reserves the right to distribute it under a Creative Commons Attribution (CC-BY) or a similar license."
+        msg = "Users of this website are required to agree to the following terms:\\n\\nThe service is a research preview. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes.\\nPlease do not upload any private information.\\nThe service collects user dialogue data, including both text and images, and reserves the right to distribute it under a Creative Commons Attribution (CC-BY) or a similar license."
 
         alert(msg);
         return url_params;
@@ -311,9 +321,9 @@ def is_sentence_complete(output: str):
 # NOTE: The ordering here is important. Some models have two of these and we
 # have a preference for which value gets used.
 SEQUENCE_LENGTH_KEYS = [
+    "max_position_embeddings",
     "max_sequence_length",
     "seq_length",
-    "max_position_embeddings",
     "max_seq_len",
     "model_max_length",
 ]
@@ -347,3 +357,24 @@ def str_to_torch_dtype(dtype: str):
         return torch.bfloat16
     else:
         raise ValueError(f"Unrecognized dtype: {dtype}")
+
+
+def load_image(image_file):
+    from PIL import Image
+    import requests
+
+    image = None
+
+    if image_file.startswith("http://") or image_file.startswith("https://"):
+        timeout = int(os.getenv("REQUEST_TIMEOUT", "3"))
+        response = requests.get(image_file, timeout=timeout)
+        image = Image.open(BytesIO(response.content))
+    elif image_file.lower().endswith(("png", "jpg", "jpeg", "webp", "gif")):
+        image = Image.open(image_file)
+    elif image_file.startswith("data:"):
+        image_file = image_file.split(",")[1]
+        image = Image.open(BytesIO(base64.b64decode(image_file)))
+    else:
+        # Fall back to treating the input as a bare base64-encoded payload
+        image = Image.open(BytesIO(base64.b64decode(image_file)))
+
+    return image
diff --git a/multigpu_inference.sh b/multigpu_inference.sh
index eef154da3..de4408ed6 100644
--- a/multigpu_inference.sh
+++ b/multigpu_inference.sh
@@ -1 +1 @@
-python3 -m fastchat.serve.cli --model-path /data/ml/llm/vicuna-13b-v1.1 --num-gpus 2
\ No newline at end of file
+python3 -m fastchat.serve.cli --model-path /data/ml/llm/OpenHermes-2.5-Mistral-7B --num-gpus 2 --max-gpu-memory 8GiB
\ No newline at end of file
diff --git a/playground/FastChat_API_GoogleColab.ipynb b/playground/FastChat_API_GoogleColab.ipynb
new file mode 100644
index 000000000..9fcdf8358
--- /dev/null
+++ b/playground/FastChat_API_GoogleColab.ipynb
@@ -0,0 +1,347 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "gpuType": "T4"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# FastChat API using Google Colab\n",
+        "\n",
+        "[ggcr](https://github.com/ggcr)"
+      ],
+      "metadata": {
+        "id": "1UDur96B5C7T"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "%cd /content/\n",
+        "\n",
+        "# clone FastChat\n",
+        "!git clone https://github.com/lm-sys/FastChat.git\n",
+        "\n",
+        "# install dependencies\n",
+        "%cd FastChat\n",
+        "!python3 -m pip install -e \".[model_worker,webui]\" --quiet"
+      ],
+      "metadata": {
+        "id": "NQWpzwse8PrC"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "See [openai_api.md](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md) from FastChat docs.\n",
+        "\n",
+        "Because Google Colab limits resources and running processes in the background is not stable, we will run each API process in a thread and have them communicate via explicit addresses:"
+      ],
+      "metadata": {
+        "id": "97181RzwSjha"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import subprocess\n",
+        "import threading\n",
+        "\n",
+        "%cd /content/\n",
+        "\n",
+        "# Using 127.0.0.1 because localhost does not work properly in Colab\n",
+        "\n",
+        "def run_controller():\n",
+        "    subprocess.run([\"python3\", \"-m\", \"fastchat.serve.controller\", \"--host\", \"127.0.0.1\"])\n",
+        "\n",
+        "def run_model_worker():\n",
+        "    subprocess.run([\"python3\", \"-m\", \"fastchat.serve.model_worker\", \"--host\", \"127.0.0.1\", \"--controller-address\", \"http://127.0.0.1:21001\", \"--model-path\", \"lmsys/vicuna-7b-v1.5\", \"--load-8bit\"])\n",
+        "\n",
+        "def run_api_server():\n",
+        "    subprocess.run([\"python3\", \"-m\", \"fastchat.serve.openai_api_server\", \"--host\", \"127.0.0.1\", \"--controller-address\", \"http://127.0.0.1:21001\", \"--port\", \"8000\"])\n"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "BrhPP9ZggVL0",
+        "outputId": "be510360-21ba-4f6f-d6b6-24c710bdff4d"
+      },
+      "execution_count": 11,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "/content\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Start controller thread\n",
+        "# see `controller.log` on the local storage provided by Colab\n",
+        "controller_thread = threading.Thread(target=run_controller)\n",
+        "controller_thread.start()"
+      ],
+      "metadata": {
+        "id": "3S8vDHy3gWUv"
+      },
+      "execution_count": 3,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Start model worker thread\n",
+        "\n",
+        "# see `controller.log` on the local storage provided by Colab\n",
+        "# important to wait until the checkpoint shards are fully downloaded\n",
+        "model_worker_thread = threading.Thread(target=run_model_worker)\n",
+        "model_worker_thread.start()\n"
+      ],
+      "metadata": {
+        "id": "UAU097ymgbNf"
+      },
+      "execution_count": 4,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Start API server thread\n",
+        "api_server_thread = threading.Thread(target=run_api_server)\n",
+        "api_server_thread.start()"
+      ],
+      "metadata": {
+        "id": "bTqHMMr1gcQJ"
+      },
+      "execution_count": 12,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "We now have the API running at http://127.0.0.1:8000/v1/ locally from Google Colab.\n",
+        "\n",
+        "We can run the examples from FastChat with curl."
+      ],
+      "metadata": {
+        "id": "iBdjt9I6fuSn"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "Try chat completion with"
+      ],
+      "metadata": {
+        "id": "KtaxADXqhazs"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "!curl http://127.0.0.1:8000/v1/chat/completions \\\n",
+        "  -H \"Content-Type: application/json\" \\\n",
+        "  -d '{ \\\n",
+        "    \"model\": \"vicuna-7b-v1.5\", \\\n",
+        "    \"messages\": [{\"role\": \"user\", \"content\": \"Hello, can you tell me a joke for me?\"}], \\\n",
+        "    \"temperature\": 0.5 \\\n",
+        "  }'"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "MZGd4y2SfBTT",
+        "outputId": "066835bb-f7f0-4e16-f54a-2f74b0e2f9d9"
+      },
+      "execution_count": 14,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "{\"id\":\"chatcmpl-3RViU5mrsEBNu8oSxexAEb\",\"object\":\"chat.completion\",\"created\":1705781842,\"model\":\"vicuna-7b-v1.5\",\"choices\":[{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"Sure thing! 
Here's one for you:\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!\"},\"finish_reason\":\"stop\"}],\"usage\":{\"prompt_tokens\":50,\"total_tokens\":82,\"completion_tokens\":32}}" + ] + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "Try embeddings with" + ], + "metadata": { + "id": "umgVIilThc6a" + } + }, + { + "cell_type": "code", + "source": [ + "!curl http://127.0.0.1:8000/v1/embeddings \\\n", + " -H \"Content-Type: application/json\" \\\n", + " -d '{ \\\n", + " \"model\": \"vicuna-7b-v1.5\", \\\n", + " \"input\": \"Hello, can you tell me a joke for me?\" \\\n", + " }'" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "VraqDkMahfAQ", + "outputId": "18710c2c-1994-4f36-eff1-6aff5a2a83a4" + }, + "execution_count": 18, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "{\"object\":\"list\",\"data\":[{\"object\":\"embedding\",\"embedding\":[0.0229715034365654,-0.020740192383527756,0.01663232035934925,0.013713006861507893,-0.01602417416870594,-0.006382038351148367,0.011642662808299065,-0.021167458966374397,0.004879815969616175,-0.005442662630230188,0.0034834356047213078,-0.010336925275623798,-0.009551243856549263,0.0005828586872667074,-0.0089940270408988,-0.0018360239919275045,-0.021827373653650284,0.007349758874624968,-0.0011765437666326761,-0.01432803925126791,0.012239773757755756,-0.018455859273672104,0.016475312411785126,-0.006144467741250992,-0.013893244788050652,-0.00961716752499342,0.00827623251825571,0.0013034207513555884,0.006355977617204189,0.007773293182253838,0.0029199880082160234,-0.014487813226878643,-0.01615595631301403,0.007242684718221426,-0.004686516709625721,-0.0034376305993646383,-0.0046915397979319096,0.0007899928605183959,-0.003679676679894328,-0.022176748141646385,-0.005467468872666359,-0.02236158587038517,0.02086811512708664,0.0029669292271137238,-0.0168694406747818,0.025603512302041054,0.009139388799667358,0.02165624313056469,-0.004472456872463226,0.0006205983809195459,0.0011453271145001054,0.014379195868968964,0.01994524523615837,-0.017613859847187996,0.005462903995066881,0.005702079739421606,-0.021057194098830223,-0.021468186751008034,-0.004666909575462341,-0.007595115341246128,-0.009129735641181469,-0.0161031112074852,0.009293882176280022,0.00953285675495863,-0.0013638428645208478,0.0007091081934049726,0.0018222536891698837,0.020376019179821014,0.01186810340732336,-0.013734177686274052,-0.004418510012328625,-0.006746952421963215,-0.0006970430840738118,-0.006644704379141331,-0.04453064501285553,0.003871878841891885,-0.01059865765273571,-0.024984514340758324,0.011757172644138336,-0.016218630596995354,-0.009141125716269016,-0.004623874556273222,-0.009732221253216267,-0.009169373661279678,-0.006947007961571217,-0.005838882178068161,-0.0068959807977080345,-0.000743469747249037,0.008742589503526688,-0.008120769634842873,-0.018119709566235542,-0.004530956968665123,-0.003916825633496046,0.02495340257883072,0.010598400607705116,0.010666633024811745,0.00679260678589344,-0.009019959717988968,-0.004487940575927496,-0.0026543298736214638,0.00286748050712049,0.012851846404373646,0.0012102456530556083,0.014895712956786156,-0.01030716486275196,0.01633496955037117,0.015731101855635643,-0.009079995565116405,0.016830960288643837,0.00940327625721693,-0.0014347939286381006,0.0207867082208395,0.06265891343355179,0.002649270463734865,-0.007526970934122801,0.004714089445769787,0.006397288292646408,-0.0029612022917717695,-0.0015034123789519072,-0.006392269395291805,-0.012309122830629349
,0.0040127672255039215,0.001810954650864005,-0.016414696350693703,-0.019156336784362793,0.0003308420709799975,0.007823580875992775,0.0020239183213561773,-0.0024881847202777863,-0.008919963613152504,-0.01775810308754444,-0.012687149457633495,0.0022407048381865025,-0.009261680766940117,0.006048525683581829,0.00518012186512351,0.0029072873294353485,-7.72168641560711e-06,0.012007351964712143,-0.0004918070626445115,0.0013227892341092229,0.006292788311839104,-0.010167273692786694,-0.009050589054822922,0.008057740516960621,0.006250383332371712,0.014853340573608875,0.02723078615963459,-0.02242557890713215,0.04399850592017174,0.00313431303948164,-0.022166002541780472,0.010024639777839184,0.003234871895983815,0.0030383227858692408,0.012888548895716667,0.01507903728634119,0.00479199830442667,-0.0024831658229231834,0.008515636436641216,0.0005489314789883792,0.004214818123728037,0.006590660661458969,-0.012804229743778706,0.011747709475457668,0.002035082783550024,0.0143223125487566,0.0134012121707201,-0.0008568498305976391,0.0025005715433508158,-0.012422841973602772,0.014866000972688198,0.020239505916833878,-0.0034607010893523693,-0.026886560022830963,-0.0023535056971013546,-0.0037942437920719385,0.013139543123543262,0.004902820568531752,0.008357052691280842,-0.011724174953997135,0.005840683821588755,0.009768190793693066,0.00013014259457122535,0.016845345497131348,-0.006546108052134514,-0.00838533416390419,-0.01408461295068264,-0.0022769987117499113,0.010644538328051567,0.002947496483102441,0.02589692734181881,0.012639564462006092,0.004540625493973494,-0.0176566019654274,-0.010239857248961926,0.01839127205312252,0.0031600680667907,0.011127336882054806,0.0036535318940877914,0.015353705734014511,-0.026527339592576027,-0.008746611885726452,0.01886408030986786,0.00887488853186369,-0.0001859961193986237,0.001222877879627049,0.0065072583965957165,-0.009838716126978397,0.008630175143480301,-0.00633110711351037,0.02635054476559162,-0.005968477576971054,-0.013434287160634995,0.01017901673913002,-0.003692896803840995,-0.005410553887486458,-0.006332104559987783,-0.017778540030121803,-0.017085647210478783,-0.005269246641546488,-0.013628004118800163,-0.0005570553475990891,0.010984581895172596,0.000956009142100811,0.009669160470366478,-0.0019082700600847602,-0.05074448138475418,-0.03876679390668869,0.0011635294649749994,-0.012585809454321861,0.008794615045189857,0.00023998660617507994,-0.00455761281773448,-0.0020947649609297514,0.017387693747878075,0.004844747018069029,0.008267332799732685,0.00747610442340374,0.02141532674431801,-0.02262278087437153,-0.014600872062146664,-0.021727152168750763,0.008812149986624718,0.009474638849496841,0.03191479295492172,-0.019652077928185463,0.01944698765873909,0.017112286761403084,0.015296016819775105,0.014461753889918327,-0.019157931208610535,0.009540014900267124,0.004215397406369448,-0.008012793958187103,0.013523118570446968,-0.009407458826899529,-0.029304828494787216,0.012041181325912476,0.015149015933275223,0.0031983305234462023,-0.0003109185490757227,0.03257888928055763,0.007614033296704292,-0.005175750236958265,-0.002383652376011014,0.006435382179915905,0.006068408954888582,-0.007524268701672554,0.02373131737112999,0.004817254841327667,0.005436067469418049,-0.0059105646796524525,-0.005925316829234362,-6.454042886616662e-05,-0.008412199094891548,-0.00655836658552289,-0.0010680218692868948,-0.004262322559952736,0.0015925978077575564,0.00412611523643136,-0.011034490540623665,0.009839101694524288,0.00415002042427659,-0.007727092131972313,-0.010377302765846252,0.0007711391081102192,
-0.009322070516645908,0.0035655524116009474,-0.026301125064492226,-0.006197007372975349,0.0006739745149388909,-0.00818476639688015,-0.02090131863951683,-0.002644758205860853,0.006994722411036491,-0.0016304099699482322,0.01705804094672203,-0.016460495069622993,0.017486274242401123,0.013187418691813946,0.0033816162031143904,0.017844069749116898,-0.017695210874080658,-0.011941025033593178,0.009029353968799114,0.0033719318453222513,-0.009064359590411186,0.012252643704414368,0.0011845449917018414,0.003185839159414172,0.003374891821295023,-0.007335654925554991,0.0029391313437372446,0.000280876352917403,0.0048222895711660385,-0.0003767217858694494,-0.045474909245967865,0.004725527483969927,0.0075803473591804504,0.005909985862672329,0.002949362387880683,-0.0036183823831379414,0.0026071954052895308,-0.005563989747315645,-0.012707033194601536,-0.004933884367346764,-0.016659578308463097,-0.0081319659948349,0.012579865753650665,-0.022291865199804306,-0.018159057945013046,-0.0069056968204677105,-0.00018650286074262112,-0.006835494190454483,0.0006484286277554929,0.005561383906751871,0.0062789213843643665,0.029090696945786476,0.002546998206526041,0.009344656951725483,-0.0038842656649649143,-0.012519339099526405,-0.0025535617023706436,-0.003679415676742792,-0.0033875037916004658,0.003728062380105257,-0.014787501655519009,0.0023771373089402914,0.005443841218948364,-0.00957341119647026,-0.015306569635868073,0.0046866778284311295,-0.016635537147521973,-0.01424899697303772,0.001698320615105331,-0.004534294828772545,0.0066452836617827415,0.010703673586249352,0.004293128848075867,-0.009486992843449116,-0.0031507215462625027,0.01611129753291607,-0.015744132921099663,-0.014641146175563335,0.0026989546604454517,0.01565713621675968,-0.005524931009858847,0.006648661568760872,0.0040243822149932384,-0.00559786893427372,-0.014391486532986164,0.026553215458989143,-0.009266120381653309,0.020683180540800095,0.00994131714105606,0.0026739235036075115,0.0038542025722563267,-0.012158502824604511,-0.010751161724328995,-0.00017412402667105198,-0.017064156010746956,-0.010691382922232151,0.00937278475612402,-0.014700417406857014,-0.005352479871362448,0.012342552654445171,0.009191831573843956,-0.011637836694717407,-0.012737436220049858,0.01105053722858429,0.020749129354953766,0.07297933101654053,0.027850160375237465,-0.005428216885775328,-0.019425511360168457,0.0016134463949128985,-0.007674881722778082,0.004896160680800676,-0.006309020332992077,0.0028925116639584303,-0.016418879851698875,-0.012568380683660507,-0.0076565672643482685,-0.002051394898444414,0.011267355643212795,0.01101701334118843,0.02482358179986477,0.011389358900487423,-0.01589033007621765,0.0005615596892312169,-0.027247965335845947,-0.008588980883359909,0.005675439722836018,0.008922569453716278,-0.003106530988588929,0.00925450585782528,-0.00030810333555564284,-0.002115500858053565,-0.007074093911796808,-0.005927231162786484,-0.017885340377688408,-0.016033342108130455,-0.0049004401080310345,0.006337509956210852,0.01978384517133236,0.001572070992551744,-0.0143946073949337,-0.008655560202896595,-0.0011587677290663123,-2.521412170608528e-05,-0.01082194410264492,0.010964666493237019,-0.011412781663239002,0.008038532920181751,0.006299568805843592,-0.008974144235253334,0.006545931100845337,0.0006125871441327035,0.00486041558906436,0.0042688059620559216,0.0018871801439672709,-0.006763682700693607,0.013578971847891808,-0.0020262349862605333,-0.0024552710819989443,-0.01506423857063055,0.0054992204532027245,0.011333892121911049,-0.007717472035437822,-0.005762179847806692,0.000
5,0.0004637596430256963,0.0059378482401371,-0.006037457846105099,-0.018181998282670975,0.0013030506670475006,0.007541135419160128,0.009224391542375088,0.010982869192957878,-0.0036199912428855896,-0.002958113793283701,0.01651797443628311,-0.03149764612317085,0.004628603812307119,0.00334406946785748,-0.007923029363155365,0.015490380115807056,0.020828863605856895,0.016824204474687576,-0.0038670848589390516,0.014724436216056347,0.000400498160161078,0.0663076639175415,0.00567030580714345,-0.013410317711532116,0.008589716628193855,-0.008427352644503117,-0.01424303650856018,0.0008962303982116282,-0.009365360252559185,0.008820024318993092,0.013941312208771706,-0.007390265353024006,0.015612092800438404,0.008377837017178535,-0.006962129846215248,0.01604386232793331,0.004204136785119772,0.0069089229218661785,-0.0185052789747715,-0.013314954936504364,0.007275469601154327,0.014722811058163643,0.008437100797891617,0.011726523749530315,0.016620544716715813,0.015615695156157017,0.0120353102684021,0.006396838463842869,-0.008448812179267406,-0.00602632574737072,0.010790380649268627,0.002144247991964221,-0.014843912795186043,0.013109751045703888,-0.0005983744049444795,-0.01191713660955429,-0.0060539147816598415,0.007560625206679106,0.018343864008784294,-0.02141418308019638,-0.0038201757706701756,-0.0008210405358113348,0.0037896588910371065,0.00903385877609253,0.02255813404917717,0.0149000883102417,0.010207773186266422,0.01298686396330595,0.01658656820654869,-0.009689725004136562,-0.000968685548286885,-0.0354095958173275,-0.0020211192313581705,0.0172839667648077,0.017595110461115837,-0.007312276400625706,-0.009096597321331501,-0.012832960113883018,0.006029736716300249,0.01993134617805481,-0.007445869967341423,-0.013995345681905746,-0.021392418071627617,0.013174227438867092,0.0006699688965454698,0.0026909918524324894,0.0032831323333084583,0.012930993922054768,0.0012651460710912943,0.000811227539088577,0.01763002574443817,-0.00523826340213418,0.016636181622743607,-0.011958190239965916,-0.00934743881225586,0.011710581369698048,-0.009352635592222214,0.001517037977464497,0.022132251411676407,-0.0027835392393171787,-0.021134112030267715,0.000661684141959995,0.0020901961252093315,0.008411427959799767,-0.02320259064435959,-0.023216569796204567,-0.02040291577577591,-0.0019324647728353739,-0.012253865599632263,-0.012067129835486412,-0.012556578032672405,-0.006384226027876139,0.008578809909522533,-0.0006862648879177868,0.018786733970046043,0.008309703320264816,-0.004579378291964531,0.008779493160545826,-0.012430795468389988,0.010612075217068195,0.006497509777545929,0.00468828622251749,0.020637301728129387,0.014828919433057308,0.008801830001175404,-0.0012163587380200624,0.011090272106230259,0.00605464493855834,-0.00599315483123064,0.003595965448766947,0.0026772695127874613,0.007111930754035711,-0.0021474009845405817,-0.15517501533031464,-0.007093977648764849,0.016207048669457436,-0.003689244855195284,0.02290702797472477,-0.024147450923919678,0.02058466523885727,-0.003728344105184078,0.0020039579831063747,0.0036031962372362614,-0.00701624620705843,0.001598936039954424,-0.015112241730093956,-0.026839423924684525,-0.0005213304539211094,0.04432762786746025,0.0021426393650472164,0.008228357881307602,0.0006260357331484556,-0.0051366910338401794,0.0046644131653010845,-0.0015309208538383245,0.007084615062922239,-0.010650690644979477,-0.01891385205090046,-0.017962105572223663,-0.019904641434550285,-0.003021359210833907,0.00939719658344984,0.014427713118493557,0.0003639488131739199,0.01590440608561039,-0.007913827896118164,-0.00879
4532157480717,-0.004160219803452492,-0.00011183575406903401,-0.023288607597351074,0.001976816216483712,0.022937526926398277,-0.009748597629368305,-0.014059019275009632,-0.022420817986130714,0.014181907288730145,0.0013818360166624188,0.0023023937828838825,-0.007540484424680471,0.01842080056667328,0.006028867792338133,-0.022552955895662308,-0.005644746124744415,-0.0043883309699594975,-0.004599744454026222,-0.008561484515666962,0.014006786048412323,-0.011542826890945435,-0.009602931328117847,-0.036284975707530975,0.0013754897518083453,0.012572064064443111,0.006309454329311848,-0.0002941721468232572,-0.004653667565435171,-0.013862421736121178,0.004336177371442318,0.010433993302285671,0.009525666013360023,-0.006532643456012011,-0.0015942708123475313,0.014698229730129242,0.013635436072945595,0.01483591366559267,0.004928945563733578,0.011660551652312279,0.00346562173217535,-0.009555619210004807,0.01836557686328888,0.011766644194722176,0.005703310016542673,-0.005696287844330072,0.008640498854219913,0.00856035016477108,-0.03719845414161682,0.016891704872250557,0.009445746429264545,-0.0034338664263486862,-0.005024726502597332,-0.016796855255961418,-0.008475210517644882,-0.017073003575205803,0.004128266125917435,0.016665266826748848,0.00954902358353138,0.010982382111251354,-0.008389675989747047,-0.012186558917164803,0.008364107459783554,0.017737936228513718,0.01394137553870678,0.013139929622411728,-0.008969285525381565,-0.01151264924556017,-0.007080208044499159,-0.02486042119562626,0.00451834499835968,0.01454064343124628,-0.0027549047954380512,-0.01847361959517002,0.012725340202450752,0.02681497111916542,0.0022874209098517895,0.0060871499590575695,-0.012228837236762047,-0.01910441741347313,-0.02300979010760784,0.004791234154254198,-0.00982105266302824,-0.007742567453533411,0.01883193850517273,0.0016032794956117868,-0.0007860033656470478,-0.00030844920547679067,0.0010288181947544217,-0.01645890437066555,0.014252045191824436,-0.01001357939094305,0.002469572238624096,-0.025139495730400085,-0.007612746674567461,-0.05701448768377304,0.008700916543602943,0.01902882568538189,-0.02189522795379162,0.015759384259581566,0.010229690931737423,-0.013251837342977524,-0.013460122980177402,-0.01524634100496769,0.0020383321680128574,0.014956198632717133,-0.007906491868197918,-0.013498730957508087,0.006993595976382494,0.003018873743712902,0.001712734461762011,0.03202492371201515,0.026156842708587646,0.008240841329097748,-0.017780285328626633,0.006188404746353626,-0.014345478266477585,0.0025132661685347557,0.011938242241740227,-0.00015267223352566361,0.0147481644526124,-0.00812479481101036,-0.0010659064864739776,-0.0005582457524724305,0.006272712256759405,-0.004541509784758091,0.0014816629700362682,-0.02871515043079853,0.0016121916705742478,-0.02394980750977993,0.0008420820813626051,-0.007255136035382748,-0.006515704095363617,-0.005095303524285555,-0.005030743312090635,-0.011658716946840286,0.028127659112215042,0.00975873228162527,0.021014409139752388,-0.0160182137042284,0.008259791880846024,-0.00808415561914444,-0.011482791975140572,-0.0018780268728733063,-0.0016436574514955282,0.01837550289928913,0.0003763035056181252,0.009928029961884022,-0.008596843108534813,-0.0039632199332118034,0.01536337286233902,0.0038513196632266045,0.01520631741732359,-0.012446328997612,0.01358643639832735,-0.01477467454969883,0.0018546526553109288,-0.013842265121638775,-0.0008109700866043568,0.015721803531050682,0.006470515858381987,-0.01047314889729023,-0.017738599330186844,-0.002085148822516203,-0.00151948316488415,0.000500236579682678,-0.0
11062928475439548,-0.012429083697497845,-0.008604375645518303,-0.0033165609929710627,0.0162813700735569,-0.00872577540576458,0.006237449590116739,0.0014139856211841106,0.00227738288231194,0.007259607780724764,-0.0024163410998880863,-0.000929530244320631,0.01526214275509119,0.0005013305344618857,0.012352321296930313,0.0024202982895076275,-0.004930940456688404,0.005372138228267431,0.013471262529492378,0.011361593380570412,0.020780909806489944,-0.016667872667312622,-0.01875338703393936,-0.0006402565049938858,-0.0038189534097909927,-0.0173107348382473,-0.0007631341577507555,-0.004413474816828966,0.006579649168998003,-0.0007289272034540772,-0.016239607706665993,0.007476409897208214,5.302224599290639e-05,-0.01624462567269802,-0.014696476981043816,-0.0008294378640130162,6.569868855876848e-05,-0.006026261951774359,-0.0035658427514135838,0.00035259153810329735,-0.003949449863284826,0.009364716708660126,-0.010776331648230553,0.002928385278210044,-0.009490063413977623,-0.01819232851266861,0.004032875876873732,-0.0032316383440047503,0.00964342150837183,-0.0010484643280506134,-0.016542362049221992,-0.013282490894198418,-0.02188814990222454,0.014662325382232666,0.003973450977355242,0.01259040366858244,0.003396448213607073,0.0023380222264677286,-0.01695997640490532,0.012070347554981709,0.007248966954648495,0.011380953714251518,-0.009349804371595383,0.005258500576019287,0.01802116073668003,0.00570098590105772,-0.011989140883088112,0.011402743868529797,0.010607988573610783,0.008799505420029163,-0.009475105442106724,0.008064079098403454,-0.012264966033399105,-0.006731090601533651,0.00045869231689721346,-0.014379839412868023,-0.007578159682452679,-0.019541822373867035,0.02880922518670559,-0.01217967364937067,-0.0017422698438167572,0.009241893887519836,0.011424331925809383,-0.0059761349111795425,-0.10590112954378128,0.01093854196369648,-0.019668808206915855,-0.008417797274887562,-0.012183469720184803,-0.015398330055177212,0.022412968799471855,-0.014847170561552048,0.012399098835885525,-0.011321166530251503,-0.020581383258104324,-0.012875880114734173,0.009312482550740242,-0.01491408422589302,0.010381936095654964,0.014163745567202568,-0.00536081288009882,0.0030865189619362354,-0.017042148858308792,0.009154188446700573,0.003824438899755478,0.004048094153404236,-0.005840908735990524,-0.004764570388942957,-0.0011096063535660505,-0.01651327684521675,0.004218435846269131,0.0076619721949100494,0.016768736764788628,-0.010754378512501717,-0.007011130917817354,-0.0018741177627816796,0.004677861928939819,-0.0013004607753828168,0.02279837615787983,0.015664083883166313,-0.003047492355108261,-0.006805235054343939,-0.023204054683446884,0.011979939416050911,-0.01936367340385914,0.020488401874899864,0.0002779807255137712,0.01603945530951023,0.011033518239855766,-0.0034474434796720743,0.003860779106616974,0.0030094629619270563,-0.0025448587257415056,0.016781283542513847,0.0010827252408489585,-0.02335255965590477,0.000616254925262183,-0.0035649340134114027,0.0007393514970317483,-0.008183765225112438,0.0014471083413809538,0.0038755787536501884,0.007099337410181761,-0.012667966075241566,0.006208354607224464,-0.011235825717449188,-0.005788819864392281,-0.013990281149744987,-0.005277065094560385,-0.019661838188767433,-0.011538130231201649,0.011401553638279438,0.0067108855582773685,0.001396434847265482,0.0769028514623642,-0.0029904483817517757,0.002209946746006608,0.009979894384741783,-0.0010606379946693778,-0.016086678951978683,0.007984510622918606,0.018508948385715485,0.0032983184792101383,-0.004930043593049049,0.013569834642112255,
1.877335125755053e-05,0.0041457414627075195,-0.0065275197848677635,0.01902691088616848,0.0049742781557142735,-0.008188189007341862,-0.004906102083623409,-0.0191107876598835,0.016605230048298836,-0.017471250146627426,0.010408093221485615,-0.008595138788223267,0.00039457817911170423,0.0075583732686936855,0.01484600454568863,0.011490130797028542,0.0035124020650982857,-0.006972779054194689,0.0128085408359766,0.006472124718129635,-0.011789342388510704,0.006717384327203035,-0.0022378091234713793,0.00325773935765028,0.0053901877254247665,0.008246632292866707,0.0030436997767537832,0.0072782342322170734,0.0012802877463400364,-0.00802643597126007,0.004147414583712816,0.008670682087540627,0.004049904178828001,0.0038673868402838707,0.014705437235534191,0.0026979250833392143,0.001775945769622922,-0.01869085803627968,0.0037806022446602583,0.012721864506602287,0.015738211572170258,-0.008133381605148315,-0.007445990107953548,-0.006062779109925032,0.005171599797904491,-0.007623749785125256,-0.001971603836864233,-0.03202363848686218,0.0014124091248959303,0.00964097585529089,-0.0062558529898524284,0.12542743980884552,-0.023395422846078873,-0.02142343297600746,0.00010404972999822348,0.0040498957969248295,0.009305443614721298,-0.005175766069442034,-0.006316371727734804,0.01862599514424801,0.01787419244647026,0.03209351748228073,-0.013965249061584473,-0.01298594195395708,0.003942033741623163,0.007697572000324726,-0.0037004253827035427,0.001353675965219736,0.004194419831037521,0.038188375532627106,-0.006305979564785957,0.008670156821608543,-0.011301315389573574,0.022354990243911743,0.011309697292745113,-0.006025111768394709,-0.02238098718225956,-0.014605054631829262,0.009788730181753635,-0.02146783284842968,-0.026633543893694878,0.008195299655199051,5.627179052680731e-05,-0.006054638884961605,0.018990008160471916,0.0018300878582522273,-0.006439500488340855,0.0015690467553213239,-0.004935315810143948,-0.005042776465415955,-0.008323850110173225,0.01732305809855461,0.004760194569826126,0.009951967746019363,0.002688618842512369,-0.02490813285112381,0.013938416726887226,-0.008612480014562607,0.017687037587165833,0.0007003569626249373,0.003144141985103488,0.00028641021344810724,0.006280304864048958,0.01704099029302597,-0.031904399394989014,-0.01954682171344757,0.006692659109830856,-0.0029927969444543123,-0.019856123253703117,0.01037242915481329,0.007297733798623085,-0.00034432284883223474,9.271252201870084e-05,3.400759305804968e-05,-0.008098633028566837,-0.017516130581498146,0.0009811046766117215,-0.007083006668835878,-0.013434672728180885,0.006502609234303236,0.00046227165148593485,-0.006619544234126806,-0.011502401903271675,-0.01764489896595478,-0.018358498811721802,-0.016132373362779617,0.01945388875901699,-0.004716904833912849,0.016170112416148186,0.002639401238411665,-0.008305462077260017,-0.030113548040390015,0.014484983868896961,0.049616213887929916,0.0026693870313465595,0.015345823019742966,0.0026869860012084246,0.019824400544166565,0.00838514044880867,0.0023412152659147978,-0.0035702185705304146,-0.007228761445730925,0.009889356791973114,-0.01150357536971569,0.006204118020832539,-0.007316265255212784,0.005138332024216652,-0.004389585927128792,-0.006546832155436277,-0.004268612712621689,0.022032320499420166,-0.014779822900891304,0.011949374340474606,0.0014258417068049312,0.0048449402675032616,0.02138534002006054,-0.0369078628718853,-0.0007908937404863536,-0.009307898581027985,0.009610539302229881,0.010517065413296223,-0.005397812929004431,-0.0021158468443900347,-0.003497409401461482,-0.0037914770655333996,-0.01
…(embedding values truncated)…],\"index\":0}],\"model\":\"vicuna-7b-v1.5\",\"usage\":{\"prompt_tokens\":13,\"total_tokens\":13}}" + ] + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "Try text completion with:" + ], + "metadata": { + "id": "-U2SZWTghxzc" + } + }, + { + "cell_type": "code", + "source": [ + "!curl http://127.0.0.1:8000/v1/completions \\\n", + " -H \"Content-Type: application/json\" \\\n", + " -d '{ \\\n", + " \"model\": \"vicuna-7b-v1.5\", \\\n", + " \"prompt\": \"Once upon a time\", \\\n", + " \"max_tokens\": 20, \\\n", + " \"temperature\": 0.5 \\\n", + " }'" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "85T5NO7Wh03R", + "outputId": "1a2c9568-2aa3-4a89-ecd8-8af496be1a41" + }, + "execution_count": 20, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "{\"id\":\"cmpl-kB3gg4KtgcGdif9V4eNbh6\",\"object\":\"text_completion\",\"created\":1705782008,\"model\":\"vicuna-7b-v1.5\",\"choices\":[{\"index\":0,\"text\":\", there was a little girl named Alice. Alice lived in a small village nestled in a valley\",\"logprobs\":null,\"finish_reason\":\"length\"}],\"usage\":{\"prompt_tokens\":5,\"total_tokens\":24,\"completion_tokens\":19}}" + ] + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "Try create_embeddings to analyze the prompts!" + ], + "metadata": { + "id": "EDxLbQDKVLiQ" + } + }, + { + "cell_type": "code", + "source": [ + "import json\n", + "import numpy as np\n", + "import requests\n", + "from scipy.spatial.distance import cosine\n", + "\n", + "\n", + "def get_embedding_from_api(word, model='vicuna-7b-v1.5'):\n", + " url = 'http://127.0.0.1:8000/v1/embeddings'\n", + " headers = {'Content-Type': 'application/json'}\n", + " data = json.dumps({\n", + " 'model': model,\n", + " 'input': word\n", + " })\n", + "\n", + " response = requests.post(url, headers=headers, data=data)\n", + " if response.status_code == 200:\n", + " embedding = np.array(response.json()['data'][0]['embedding'])\n", + " return embedding\n", + " else:\n", + " print(f\"Error: {response.status_code} - {response.text}\")\n", + " return None\n", + "\n", + "\n", + "def cosine_similarity(vec1, vec2):\n", + " return 1 - cosine(vec1, vec2)\n", + "\n", + "\n", + "def print_cosine_similarity(embeddings, texts):\n", + " for i in range(len(texts)):\n", + " for j in range(i + 1, len(texts)):\n", + " sim = cosine_similarity(embeddings[texts[i]], embeddings[texts[j]])\n", + " print(f\"Cosine similarity between '{texts[i]}' and '{texts[j]}': {sim:.2f}\")\n", + "\n", + "\n", + "texts = [\n", + " 'The quick brown fox',\n", + " 'The quick brown dog',\n", + " 'The fast brown fox',\n", + " 'A completely different sentence'\n", + "]\n", + "\n", + "embeddings = {}\n", + "for text in texts:\n", + " embeddings[text] = get_embedding_from_api(text)\n", + "\n", + "print_cosine_similarity(embeddings, texts)" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "bbrFoxgaplhK", + "outputId": "48e23158-1468-445d-a4cd-b5bd67bd3bde" + }, + "execution_count": 21, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Cosine similarity between 'The quick brown fox' and 'The quick brown dog': 0.90\n", + "Cosine similarity between 'The quick brown fox' and 'The fast brown fox': 0.86\n", + "Cosine 
similarity between 'The quick brown fox' and 'A completely different sentence': 0.58\n", + "Cosine similarity between 'The quick brown dog' and 'The fast brown fox': 0.84\n", + "Cosine similarity between 'The quick brown dog' and 'A completely different sentence': 0.66\n", + "Cosine similarity between 'The fast brown fox' and 'A completely different sentence': 0.62\n" + ] + } + ] + } + ] +} diff --git a/playground/test_embedding/test_sentence_similarity.py b/playground/test_embedding/test_sentence_similarity.py index 0b9a54081..d7a8f6e5f 100644 --- a/playground/test_embedding/test_sentence_similarity.py +++ b/playground/test_embedding/test_sentence_similarity.py @@ -7,7 +7,7 @@ from scipy.spatial.distance import cosine -def get_embedding_from_api(word, model="vicuna-7b-v1.1"): +def get_embedding_from_api(word, model="vicuna-7b-v1.5"): if "ada" in model: resp = openai.Embedding.create( model=model, diff --git a/pyproject.toml b/pyproject.toml index f54ab30de..3770b350d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "fschat" -version = "0.2.33" +version = "0.2.36" description = "An open platform for training, serving, and evaluating large language model based chatbots." readme = "README.md" requires-python = ">=3.8" @@ -13,15 +13,15 @@ classifiers = [ "License :: OSI Approved :: Apache Software License", ] dependencies = [ - "accelerate>=0.21", "einops", "fastapi", "gradio", "httpx", "markdown2[all]", "mysqlclient", "nh3", "numpy", - "peft", "prompt_toolkit>=3.0.0", "pydantic<2,>=1", "redis", "requests", "rich>=10.0.0", "sentencepiece", + "accelerate>=0.21", "aiohttp", "einops", "fastapi", "gradio", "httpx", "markdown2[all]", "mysqlclient", "nh3", "numpy", + "peft", "prompt_toolkit>=3.0.0", "pydantic", "redis", "requests", "rich>=10.0.0", "sentencepiece", "shortuuid", "SQLAlchemy", "slowapi", "tiktoken", "tokenizers>=0.12.1", "torch", - "transformers>=4.31.0", "uvicorn", "wandb", + "transformers>=4.31.0", "uvicorn", "wandb" ] [project.optional-dependencies] model_worker = ["accelerate>=0.21", "peft", "sentencepiece", "torch", "transformers>=4.31.0", "protobuf"] -webui = ["gradio"] +webui = ["gradio>=4.10"] train = ["einops", "flash-attn>=2.0", "wandb"] llm_judge = ["openai<1", "anthropic>=0.3", "ray"] dev = ["black==23.3.0", "pylint==2.8.2"] diff --git a/scripts/build-api.sh b/scripts/build-api.sh new file mode 100644 index 000000000..8198108e0 --- /dev/null +++ b/scripts/build-api.sh @@ -0,0 +1,60 @@ +#!/bin/bash +# A rather convenient script for spinning up models behind screens + + +# Variables +PROJECT_DIR="$(pwd)" +CONDA_ENV_NAME="fastchat" +#MODEL_PATH="HuggingFaceH4/zephyr-7b-beta" # beta is better than the alpha version, base model w/o quantization +MODEL_PATH="lmsys/vicuna-7b-v1.5" + +API_HOST="0.0.0.0" +API_PORT_NUMBER=8000 + + +# init the screens +check_and_create_screen() { + local SCREENNAME="$1" + if screen -list | grep -q "$SCREENNAME"; then + echo "Screen session '$SCREENNAME' exists. Doing nothing." + else + echo "Screen session '$SCREENNAME' not found. Creating..." + screen -d -m -S "$SCREENNAME" + echo "created!" 
+ fi +} + +# convenience function for sending commands to named screens +send_cmd() { + local SCREENNAME="$1" + local CMD="$2" + screen -DRRS "$SCREENNAME" -X stuff "$CMD \r" +} + +# hardcoded names, for baby api +SCREENNAMES=( + "controller" + "api" + # Worker screens include the devices they are bound to, if 'd0' is only worker it has full GPU access + "worker-d0" + "worker-d1" +) + +for screen in "${SCREENNAMES[@]}"; do + check_and_create_screen "$screen" + sleep 0.1 + # also activate the conda compute environment for these + screen -DRRS "$screen" -X stuff "conda deactivate \r" + screen -DRRS "$screen" -X stuff "conda activate $CONDA_ENV_NAME \r" + +done + + +# Send Commands on a per Screen Basis +screen -DRRS controller -X stuff "python3 -m fastchat.serve.controller \r" + +screen -DRRS worker-d0 -X stuff "CUDA_VISIBLE_DEVICES=0 python3 -m fastchat.serve.model_worker --model-path $MODEL_PATH --conv-template one_shot --limit-worker-concurrency 1 \r" +screen -DRRS worker-d1 -X stuff "CUDA_VISIBLE_DEVICES=1 python3 -m fastchat.serve.model_worker --model-path $MODEL_PATH --port 21003 --worker-address http://localhost:21003 --conv-template one_shot --limit-worker-concurrency 1 \r" + +screen -DRRS api -X stuff "python3 -m fastchat.serve.openai_api_server --host $API_HOST --port $API_PORT_NUMBER \r" diff --git a/tests/launch_openai_api_test_server.py b/tests/launch_openai_api_test_server.py index f555a3882..823e8734e 100644 --- a/tests/launch_openai_api_test_server.py +++ b/tests/launch_openai_api_test_server.py @@ -2,6 +2,7 @@ Launch an OpenAI API server with multiple model workers. """ import os +import argparse def launch_process(cmd): @@ -9,27 +10,41 @@ def launch_process(cmd): if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("--multimodal", action="store_true", default=False) + args = parser.parse_args() + launch_process("python3 -m fastchat.serve.controller") launch_process("python3 -m fastchat.serve.openai_api_server") - models = [ - ("lmsys/vicuna-7b-v1.5", "model_worker"), - ("lmsys/fastchat-t5-3b-v1.0", "model_worker"), - ("THUDM/chatglm-6b", "model_worker"), - ("mosaicml/mpt-7b-chat", "model_worker"), - ("meta-llama/Llama-2-7b-chat-hf", "vllm_worker"), - ] + if args.multimodal: + models = [ + ("liuhaotian/llava-v1.5-7b", "sglang_worker"), + ] + else: + models = [ + ("lmsys/vicuna-7b-v1.5", "model_worker"), + ("lmsys/fastchat-t5-3b-v1.0", "model_worker"), + ("THUDM/chatglm-6b", "model_worker"), + ("mosaicml/mpt-7b-chat", "model_worker"), + ("meta-llama/Llama-2-7b-chat-hf", "vllm_worker"), + ] for i, (model_path, worker_name) in enumerate(models): cmd = ( f"CUDA_VISIBLE_DEVICES={i} python3 -m fastchat.serve.{worker_name} " - f"--model-path {model_path} --port {30000+i} " - f"--worker-address http://localhost:{30000+i} " + f"--model-path {model_path} --port {40000+i} " + f"--worker-address http://localhost:{40000+i} " ) - if worker_name == "vllm_worker": - cmd += "--tokenizer hf-internal-testing/llama-tokenizer" - launch_process(cmd) + if "llava" in model_path.lower(): + cmd += f"--tokenizer-path llava-hf/llava-1.5-7b-hf" + + if worker_name == "vllm_worker": + cmd += "--tokenizer hf-internal-testing/llama-tokenizer" + + launch_process(cmd) + while True: pass diff --git a/tests/test_openai_api.py b/tests/test_openai_api.py index 064069833..4493dce2c 100644 --- a/tests/test_openai_api.py +++ b/tests/test_openai_api.py @@ -4,24 +4,25 @@ Launch: python3 launch_openai_api_test_server.py """ +import warnings import openai - from fastchat.utils import run_cmd + 
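+# Note: the edits below track the openai>=1 client API, where module-level
+# resources (openai.Completion, openai.ChatCompletion, openai.Embedding) become
+# accessor chains (openai.completions, openai.chat.completions, openai.embeddings)
+# and responses are pydantic objects, so fields are read as attributes.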
openai.api_key = "EMPTY" # Not supported yet -openai.api_base = "http://localhost:8000/v1" +openai.base_url = "http://localhost:8000/v1/" def test_list_models(): - model_list = openai.Model.list() - names = [x["id"] for x in model_list["data"]] + model_list = openai.models.list() + names = [x.id for x in model_list.data] return names def test_completion(model, logprob): prompt = "Once upon a time" - completion = openai.Completion.create( + completion = openai.completions.create( model=model, prompt=prompt, logprobs=logprob, @@ -38,7 +39,7 @@ def test_completion(model, logprob): def test_completion_stream(model): prompt = "Once upon a time" - res = openai.Completion.create( + res = openai.completions.create( model=model, prompt=prompt, max_tokens=64, @@ -47,19 +48,19 @@ def test_completion_stream(model): ) print(prompt, end="") for chunk in res: - content = chunk["choices"][0]["text"] + content = chunk.choices[0].text print(content, end="", flush=True) print() def test_embedding(model): - embedding = openai.Embedding.create(model=model, input="Hello world!") - print(f"embedding len: {len(embedding['data'][0]['embedding'])}") - print(f"embedding value[:5]: {embedding['data'][0]['embedding'][:5]}") + embedding = openai.embeddings.create(model=model, input="Hello world!") + print(f"embedding len: {len(embedding.data[0].embedding)}") + print(f"embedding value[:5]: {embedding.data[0].embedding[:5]}") def test_chat_completion(model): - completion = openai.ChatCompletion.create( + completion = openai.chat.completions.create( model=model, messages=[{"role": "user", "content": "Hello! What is your name?"}], temperature=0, @@ -69,11 +70,16 @@ def test_chat_completion(model): def test_chat_completion_stream(model): messages = [{"role": "user", "content": "Hello! 
What is your name?"}] - res = openai.ChatCompletion.create( + res = openai.chat.completions.create( model=model, messages=messages, stream=True, temperature=0 ) for chunk in res: - content = chunk["choices"][0]["delta"].get("content", "") + try: + content = chunk.choices[0].delta.content + if content is None: + content = "" + except Exception as e: + content = chunk.choices[0].delta.get("content", "") print(content, end="", flush=True) print() @@ -135,7 +141,7 @@ def test_openai_curl(): test_chat_completion_stream(model) try: test_embedding(model) - except openai.error.APIError as e: + except openai.APIError as e: print(f"Embedding error: {e}") print("===== Test curl =====") diff --git a/tests/test_openai_vision_api.py b/tests/test_openai_vision_api.py new file mode 100644 index 000000000..2f089c418 --- /dev/null +++ b/tests/test_openai_vision_api.py @@ -0,0 +1,162 @@ +""" +Test the OpenAI-compatible server + +Launch: +python3 launch_openai_api_test_server.py --multimodal +""" + +import openai + +from fastchat.utils import run_cmd + +openai.api_key = "EMPTY" # Not supported yet +openai.base_url = "http://localhost:8000/v1/" + + +def encode_image(image): + import base64 + from io import BytesIO + import requests + + from PIL import Image + + if image.startswith("http://") or image.startswith("https://"): + response = requests.get(image) + image = Image.open(BytesIO(response.content)).convert("RGB") + else: + image = Image.open(image).convert("RGB") + + buffered = BytesIO() + image.save(buffered, format="PNG") + img_b64_str = base64.b64encode(buffered.getvalue()).decode("utf-8") + + return img_b64_str + + +def test_list_models(): + model_list = openai.models.list() + names = [x.id for x in model_list.data] + return names + + +def test_chat_completion(model): + image_url = "https://picsum.photos/seed/picsum/1024/1024" + base64_image_url = f"data:image/png;base64,{encode_image(image_url)}" + + # No Image + completion = openai.chat.completions.create( + model=model, + messages=[ + { + "role": "user", + "content": [ + {"type": "text", "text": "Tell me about alpacas."}, + ], + } + ], + temperature=0, + ) + print(completion.choices[0].message.content) + print("=" * 25) + + # Image using url link + completion = openai.chat.completions.create( + model=model, + messages=[ + { + "role": "user", + "content": [ + {"type": "text", "text": "What’s in this image?"}, + {"type": "image_url", "image_url": {"url": image_url}}, + ], + } + ], + temperature=0, + ) + print(completion.choices[0].message.content) + print("=" * 25) + + # Image using base64 image url + completion = openai.chat.completions.create( + model=model, + messages=[ + { + "role": "user", + "content": [ + {"type": "text", "text": "What’s in this image?"}, + {"type": "image_url", "image_url": {"url": base64_image_url}}, + ], + } + ], + temperature=0, + ) + print(completion.choices[0].message.content) + print("=" * 25) + + +def test_chat_completion_stream(model): + image_url = "https://picsum.photos/seed/picsum/1024/1024" + + messages = [ + { + "role": "user", + "content": [ + {"type": "text", "text": "What’s in this image?"}, + {"type": "image_url", "image_url": {"url": image_url}}, + ], + } + ] + res = openai.chat.completions.create( + model=model, messages=messages, stream=True, temperature=0 + ) + for chunk in res: + try: + content = chunk.choices[0].delta.content + if content is None: + content = "" + except Exception as e: + content = chunk.choices[0].delta.get("content", "") + print(content, end="", flush=True) + print() + + +def test_openai_curl(): + run_cmd( + """curl http://localhost:8000/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "llava-v1.5-7b", + "messages": [ + { + "role": "user", + "content": [ + { + "type": "text", + "text": "What’s in this image?" + }, + { + "type": "image_url", + "image_url": { + "url": "https://picsum.photos/seed/picsum/1024/1024" + } + } + ] + } + ], + "max_tokens": 300 + }' + """ + ) + + print() + + +if __name__ == "__main__": + models = test_list_models() + print(f"models: {models}") + + for model in models: + print(f"===== Test {model} =====") + test_chat_completion(model) + test_chat_completion_stream(model) + test_openai_curl()
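
For a quick manual check of the migrated interface, here is a minimal sketch (not part of the patch) of the openai>=1 calling convention the updated tests rely on. It assumes openai>=1 is installed, a FastChat OpenAI-compatible server is already running on http://localhost:8000 (for example via python3 launch_openai_api_test_server.py), and that vicuna-7b-v1.5 is among the served models:

    import openai

    openai.api_key = "EMPTY"  # the test server does not check API keys
    openai.base_url = "http://localhost:8000/v1/"  # trailing slash, as in the tests above

    # openai>=1 returns pydantic objects, so fields are attributes rather than dict keys.
    completion = openai.chat.completions.create(
        model="vicuna-7b-v1.5",  # assumption: pick any name returned by test_list_models()
        messages=[{"role": "user", "content": "Hello! What is your name?"}],
        temperature=0,
    )
    print(completion.choices[0].message.content)

    # Streaming chunks can carry delta.content=None (e.g. role/stop chunks),
    # which is why test_chat_completion_stream guards the delta access above.
    stream = openai.chat.completions.create(
        model="vicuna-7b-v1.5",
        messages=[{"role": "user", "content": "Count to five."}],
        stream=True,
        temperature=0,
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)
    print()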