Issues: unslothai/unsloth
Will using Unsloth affect the training results, or does it only serve to accelerate the process?
#1337 · opened Nov 25, 2024 by lichaoahcil
[Issue] Triton Compilation Error in Unsloth Fine-Tuning Script on Kernel 5.4.0
#1336 · opened Nov 25, 2024 by gityeop
Can we use a custom chat template (or no template at all) for vision fine-tuning?
#1331 · opened Nov 24, 2024 by Any-Winter-4079
add generation prompt enforcement is too severe
Labels: currently fixing, Am fixing now!
#1330 · opened Nov 24, 2024 by RonanKMcGovern
Fail to Load LoRA Model for VLM Fine-Tune
Labels: currently fixing, Am fixing now!, URGENT BUG
#1329 · opened Nov 23, 2024 by krittaprot
Running into this issue randomly: ImportError: Unsloth: Cannot import unsloth_compiled_cache/Conv3d.py
Labels: currently fixing, Am fixing now!, URGENT BUG
#1328 · opened Nov 22, 2024 by saum7800
[Urgent] After reinstalling unsloth, Llama 3.2/3.1 fine tuning gets error with customized compute_metrics function
Labels: currently fixing, Am fixing now!, URGENT BUG
#1327 · opened Nov 22, 2024 by yuan-xia
qwen2-vl 2b 4-bit always getting OOM, yet llama3.2 11b works!
#1326 · opened Nov 22, 2024 by mehamednews
Llama 3.2 vision finetuning error (Unsupported: hasattr ConstDictVariable to)
Labels: currently fixing, Am fixing now!, URGENT BUG
#1325 · opened Nov 22, 2024 by adi7820
Unsloth Phi-3.5 LoRA: 3x the Number of Trainable Parameters with the Same Hyperparameters
#1324 · opened Nov 22, 2024 by KristianMoellmann
Saving the model with save_pretrained_merged failed.
Labels: currently fixing, Am fixing now!
#1323 · opened Nov 22, 2024 by WATCHARAPHON6912
Loading a vision lora fails with ValueError: Unrecognized model in lora_model. Should have a model_type key in its config.json
Labels: currently fixing, Am fixing now!, URGENT BUG
#1322 · opened Nov 22, 2024 by saum7800
How to fine-tune LLaMA 3.2 11B Vision using LoRA with the recent update?
#1319 · opened Nov 21, 2024 by yukiarimo
failed finetune qwen32b_awq_int4 using lora with llama-factory
#1314 · opened Nov 21, 2024 by Daya-Jin
The tokenizer does not have a {% if add_generation_prompt %}
#1312 · opened Nov 21, 2024 by Galaxy-Husky
Not able to load model from huggingface repo with correct path (FileNotFoundError: invalid repository id)
#1311 · opened Nov 20, 2024 by ygl1020
what was the quantisation algorithm used in unsloth/Llama-3.2-1B-bnb-4bit?
#1310 · opened Nov 20, 2024 by jayakommuru
Does tensorRT-LLM support serving 4bit quantised unsloth Llama model
#1309 · opened Nov 20, 2024 by jayakommuru