Releases: huggingface/huggingface_hub
[v0.26.2] Fix: Reflect API response changes in file and repo security status fields
This patch release includes updates to align with recent API response changes:
- Update how a file's security metadata is retrieved following changes in the API response (#2621).
- Expose repo security status field in ModelInfo (#2639).
Full Changelog: v0.26.1...v0.26.2
[v0.26.1] Hot-fix: fix Python 3.8 support for `huggingface-cli` commands
Full Changelog: v0.26.0...v0.26.1
See #2620 for more details.
v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements
🔐 Multiple access tokens support
Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.
To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:
- Store multiple tokens on your machine by simply logging in with the `login()` command with each token:
huggingface-cli login
- Switch between tokens and choose the one that will be used for all interactions with the Hub:
huggingface-cli auth switch
- List available access tokens on your machine:
huggingface-cli auth list
- Delete a specific token from your machine with:
huggingface-cli logout [--token-name TOKEN_NAME]
✅ Nothing changes if you are using the `HF_TOKEN` environment variable, as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
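These commands also have programmatic counterparts; a minimal sketch, assuming the `auth_switch`/`auth_list` helpers shipped alongside the CLI commands (the token name is illustrative):
from huggingface_hub import auth_list, auth_switch, login, logout

login()                             # store a new token (interactive prompt)
auth_list()                         # print the access tokens stored on this machine
auth_switch(token_name="my-token")  # make a stored token the active one
logout(token_name="my-token")       # delete a specific token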
- Support multiple tokens locally by @hanouticelina in #2549
⚡️ InferenceClient improvements
🖼️ Conversational VLMs support
Conversational vision-language model inference is now supported with `InferenceClient`'s chat completion!
from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded image
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)
print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
🔧 More complete support for inference parameters
You can now pass additional inference parameters to more task methods in the `InferenceClient`, including `image_classification`, `text_classification`, `image_segmentation`, `object_detection`, `document_question_answering`, and more!
For more details, visit the `InferenceClient` reference guide.
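For instance, here is a minimal sketch passing the new `top_k` parameter to `text_classification` (the prompt text and value are illustrative):
from huggingface_hub import InferenceClient

client = InferenceClient()
# Task-specific parameters can now be passed directly to the method,
# e.g. `top_k` to limit the number of returned labels.
labels = client.text_classification(
    "This movie was absolutely wonderful!",
    top_k=2,
)
print(labels)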
✅ Of course, all of these changes are also available in `AsyncInferenceClient`, the async equivalent 🤗
- Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
- [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
- Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569
✨ HfApi
`update_repo_settings` can now be used to switch the visibility status of a repo. This is a drop-in replacement for `update_repo_visibility`, which is deprecated and will be removed in version v0.29.0.
- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
- Feature: switch visibility with update_repo_settings by @WizKnight in #2541
📄 Daily papers API is now supported in `huggingface_hub`, enabling you to search for papers on the Hub and retrieve detailed paper information.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
🌐 📚 Documentation
Efforts from the Tamil-speaking community to translate guides and package references to Tamil! Check out the result here.
💔 Breaking changes
A few breaking changes have been introduced:
- `cached_download()`, `url_to_filename()` and `filename_to_url()` methods are now completely removed. From now on, you will have to use `hf_hub_download()` to benefit from the new cache layout.
- The `legacy_cache_layout` argument of `hf_hub_download()` has been removed as well.
These breaking changes have been announced with a regular deprecation cycle.
Also, all templating-related utilities have been removed from `huggingface_hub`. Client-side templating is no longer necessary now that all conversational text-generation models in the Inference API are served with TGI.
- Prepare for release 0.26 by @hanouticelina in #2579
- Remove templating utility by @Wauplin in #2611
🛠️ Small fixes and maintenance
😌 QoL improvements
- docs: move translations to `i18n` by @SauravMaheshkar in #2566
- Preserve card metadata format/ordering on load->save by @hlky in #2570
- Remove raw HTML from error message content and improve request ID capture by @hanouticelina in #2584
- [Inference Client] Factorize inference payload build by @hanouticelina in #2601
- Use proper logging in auth module by @hanouticelina in #2604
🐛 fixes
- Use repo_type in HfApi.grant_access url by @albertvillanova in #2551
- Raise error if encountered in chat completion SSE stream by @Wauplin in #2558
- Add 500 HTTP Error to retry list by @farzadab in #2567
- Add missing documentation by @adiaholic in #2572
- Serialization: take into account meta tensor when splitting the `state_dict` by @SunMarc in #2591
- Fix snapshot download when `local_dir` is provided by @hanouticelina in #2592
- Fix PermissionError while creating '.no_exist/' directory in cache by @Wauplin in #2594
- Fix 2609 - Import packaging by default by @Wauplin in #2610
🏗️ internal
- Fix test by @Wauplin in #2582
- Make SafeTensorsInfo.parameters a Dict instead of List by @adiaholic in #2585
- Fix tests listing text generation models by @Wauplin in #2593
- Skip flaky Repository test by @Wauplin in #2595
- Support python 3.12 by @hanouticelina in #2605
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @SauravMaheshkar
  - docs: move translations to `i18n` (#2566)
- @WizKnight
  - Feature: switch visibility with update_repo_settings (#2541)
- @hlky
  - Preserve card metadata format/ordering on load->save (#2570)
- @Raghul-M
  - Translated index.md and installation.md to Tamil (#2555)
[v0.25.2]: Fix snapshot download when `local_dir` is provided
Full Changelog : v0.25.1...v0.25.2
For more details, refer to the related PR #2592
[v0.25.1]: Raise error if encountered in chat completion SSE stream
Full Changelog : v0.25.0...v0.25.1
For more details, refer to the related PR #2558
v0.25.0: Large uploads made simple + quality of life improvements
📂 Upload large folders
Uploading large models or datasets is challenging. We've already written some tips and tricks to facilitate the process, but something was still missing. We are now glad to release the `huggingface-cli upload-large-folder` command. Consider it as a "please upload this no matter what, and be quick" command. Unlike `huggingface-cli upload`, this new command is more opinionated and will split the upload into several commits. Multiple workers are started locally to hash, pre-upload and commit the files in a way that is resumable, resilient to connection errors, and optimized against rate limits. This feature has already been stress-tested by the community over the last months to make it as easy and convenient to use as possible.
Here is how to use it:
huggingface-cli upload-large-folder <repo-id> <local-path> --repo-type=dataset
Every minute, a report is logged with the current status of the files and workers:
---------- 2024-04-26 16:24:25 (0:00:00) ----------
Files: hashed 104/104 (22.5G/22.5G) | pre-uploaded: 0/42 (0.0/22.5G) | committed: 58/104 (24.9M/22.5G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 6 | committing: 0 | waiting: 0
---------------------------------------------------
You can also run it from a script:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
... repo_id="HuggingFaceM4/Docmatix",
... repo_type="dataset",
... folder_path="/path/to/local/docmatix",
... )
For more details about the command options, run:
huggingface-cli upload-large-folder --help
or visit the upload guide.
- CLI to upload arbitrary huge folder by @Wauplin in #2254
- Reduce number of commits in upload large folder by @Wauplin in #2546
- Suggest using upload_large_folder when appropriate by @Wauplin in #2547
✨ HfApi & CLI improvements
🔍 Search API
The search API has been updated. You can now list gated models and datasets, and filter models by their inference status (warm, cold, frozen); a short sketch follows the list below.
- Add 'gated' search parameter by @Wauplin in #2448
- Filter models by inference status by @Wauplin in #2517
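A minimal sketch combining both new filters (`limit` is used here only to keep the output short):
from huggingface_hub import HfApi

api = HfApi()
# List gated models only
for model in api.list_models(gated=True, limit=5):
    print(model.id)
# List models whose serverless Inference API status is "warm"
for model in api.list_models(inference="warm", limit=5):
    print(model.id)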
More complete support for the `expand[]` parameter:
- Document baseModels and childrenModelCount as expand parameters by @Wauplin in #2475
- Better support for trending score by @Wauplin in #2513
- Add GGUF as supported expand[] parameter by @Wauplin in #2545
👤 User API
Organizations are now included when retrieving the user overview.
`get_user_followers` and `get_user_following` are now paginated. This was not the case before, leading to issues for users with more than 1000 followers.
📦 Repo API
Added `auth_check` to easily verify if a user has access to a repo. It raises `GatedRepoError` if the repo is gated and the user does not have permission, or `RepositoryNotFoundError` if the repo does not exist or is private. If the method does not raise an error, you can assume the user has permission to access the repo.
from huggingface_hub import auth_check
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")
- implemented `auth_check` by @cjfghk5697 in #2497
It is now possible to set a repo as gated from a script:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto") # Set to "auto", "manual" or False
- [Feature] Add `update_repo_settings` function to HfApi (#2447) by @WizKnight in #2502
⚡️ Inference Endpoint API
A few improvements in the `InferenceEndpoint` API. It's now possible to set a `scale_to_zero_timeout` parameter and to configure secrets when creating or updating an Inference Endpoint.
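As a sketch, both options can be passed when creating an endpoint (all other values below are illustrative placeholders):
from huggingface_hub import HfApi

api = HfApi()
endpoint = api.create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    scale_to_zero_timeout=15,        # minutes of inactivity before scaling to zero
    secrets={"MY_SECRET": "value"},  # secrets exposed to the endpoint container
)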
- Add scale_to_zero_timeout parameter to HFApi.create/update_inference_endpoint by @hommayushi3 in #2463
- Update endpoint.update signature by @Wauplin in #2477
- feat: ✨ allow passing secrets to the inference endpoint client by @LuisBlanche in #2486
💾 Serialization
The torch serialization module now supports tensor subclasses.
We also made sure that the library is tested with both `torch` 1.x and 2.x to ensure compatibility.
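For example, a minimal sketch of the torch serialization helpers (the tiny model and shard size are chosen only for illustration):
import torch
from huggingface_hub import split_torch_state_dict_into_shards

model = torch.nn.Linear(10, 10)
# Split a state dict into shards; tensor subclasses are now handled too.
split = split_torch_state_dict_into_shards(model.state_dict(), max_shard_size="5MB")
print(split.is_sharded)
print(split.filename_to_tensors)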
- Making wrapper tensor subclass to work in serialization by @jerryzh168 in #2440
- Torch: test on 1.11 and latest versions + explicitly load with
weights_only=True
by @Wauplin in #2488
💔 Breaking changes
Breaking changes:
- `InferenceClient.conversational` task has been removed in favor of `InferenceClient.chat_completion`. The `ConversationalOutput` data class has been removed as well.
- All `InferenceClient` output values are now dataclasses, not dictionaries.
- `list_repo_likers` is now paginated. This means the output is now an iterator instead of a list.
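For example, a minimal sketch of consuming the now-paginated output (the repo id is illustrative, and the `username` attribute is assumed on the returned items):
from itertools import islice
from huggingface_hub import HfApi

api = HfApi()
likers = api.list_repo_likers("gpt2")  # returns an iterator, not a list
for user in islice(likers, 10):        # materialize only the first 10 likers
    print(user.username)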
Deprecation:
- `multi_commit: bool` parameter in `upload_folder` is now deprecated, along with `create_commits_on_pr`. It is now recommended to use `upload_large_folder` instead. Though its API and internals are different, the goal is still to be able to upload many files in several commits, as sketched below.
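A minimal migration sketch (repo id and paths are illustrative):
from huggingface_hub import HfApi

api = HfApi()
# Before (deprecated):
# api.upload_folder(repo_id="user/my-repo", folder_path="/path/to/folder", multi_commit=True)
# After:
api.upload_large_folder(repo_id="user/my-repo", repo_type="model", folder_path="/path/to/folder")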
- Prepare for release 0.25 by @Wauplin in #2400
- Paginate repo likers endpoint by @hanouticelina in #2530
🛠️ Small fixes and maintenance
⚡️ InferenceClient fixes
Thanks to community feedback, we've been able to improve or fix significant things in both the `InferenceClient` and its async version, `AsyncInferenceClient`. These fixes have mainly focused on the OpenAI-compatible `chat_completion` method and the Inference Endpoints services.
- [Inference] Support `stop` parameter in `text-generation` instead of `stop_sequences` by @Wauplin in #2473
- [hot-fix] Handle [DONE] signal from TGI + remove logic for "non-TGI servers" by @Wauplin in #2410
- Fix chat completion url for OpenAI compatibility by @Wauplin in #2418
- Bug - [InferenceClient] - use proxy set in var env by @morgandiverrez in #2421
- Document the difference between model and base_url by @Wauplin in #2431
- Fix broken AsyncInferenceClient on [DONE] signal by @Wauplin in #2458
- Fix `InferenceClient` for HF Nvidia NIM API by @Wauplin in #2482
- Properly close session in `AsyncInferenceClient` by @Wauplin in #2496
- Fix unclosed aiohttp.ClientResponse objects by @Wauplin in #2528
- Fix resolve chat completion URL by @Wauplin in #2540
😌 QoL improvements
When uploading a folder, we now validate the README.md file before hashing all the files, not after.
This should save some precious time when uploading large folders that contain a corrupted model card.
Also, it is now possible to pass a `--max-workers` argument when uploading a folder from the CLI.
- huggingface-cli upload - Validate README.md before file hashing by @hlky in #2452
- Solved: Need to add the max-workers argument to the huggingface-cli command by @devymex in #2500
All custom exceptions raised by `huggingface_hub` are now defined in the `huggingface_hub.errors` module. This should make it easier to import them for your `try/except` statements.
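For instance, a minimal sketch catching a download error via the new module (repo and filename are illustrative):
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import EntryNotFoundError

try:
    hf_hub_download(repo_id="gpt2", filename="does-not-exist.txt")
except EntryNotFoundError:
    print("File not found on the Hub.")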
- Define error by @cjfghk5697 in #2444
- Define cache errors in errors.py by @010kim in #2470
On the same occasion, we've reworked how errors are formatted in `hf_raise_for_status` to print more relevant information to the users.
- Refacto error parsing (HfHubHttpError) by @Wauplin in #2474
- Raise with more info on 416 invalid range by @Wauplin in #2449
All constants in `huggingface_hub` are now imported as a module. This makes it easier to patch their values, for example in a test pipeline.
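For instance, a minimal sketch of patching a constant in a test, assuming pytest's `monkeypatch` fixture (the endpoint value is illustrative):
from huggingface_hub import constants

def test_with_staging_endpoint(monkeypatch):
    # Constants are accessed as module attributes, so they can be patched in place.
    monkeypatch.setattr(constants, "ENDPOINT", "https://hub-ci.huggingface.co")
    assert constants.ENDPOINT.startswith("https://hub-ci")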
- Update `constants` import to use module-level access #1172 by @WizKnight in #2453
- Update constants imports with module-level access #1172 by @WizKnight in #2469
- Refactor all constant imports to module-level access by @WizKnight in #2489
Other quality of life improvements:
- Warn if user tries to upload a parquet file to a model repo by @Wauplin in #2403
- Tag repos using `HFSummaryWriter` with 'hf-summary-writer' by @Wauplin in #2398
- Do not raise if branch exists and no write permission by @Wauplin in #2426
- expose scan_cache table generation to python by @rsxdalv in #2437
- Expose `RepoUrl` info in `CommitInfo` object by @Wauplin in #2487
- Add new hardware flavors by @apolinario in #2512
- http_backoff retry with SliceFileObj by @hlky in #2542
- Add version cli command by @010kim in #2498
🐛 fixes
- Fix filelock if flock not supported by @Wauplin in #2402
- Fix creating empty commit on PR by @Wauplin in #2413
- fix expand in CI by @Wauplin (direct commit on main)
- Update quick-start.md by @AxHa in #2422
- fix repo-files CLI example by @Wauplin in #2428
- Do not raise if chmod fails by @Wauplin in #2429
- fix .huggingface to .cache/huggingface in doc by @lizzzcai in #2432
- Fix shutil move by @Wauplin in #2433
- Correct "login" to "log in" when used as verb by @DePasqualeOrg in #2434
- Typo for plural by @david4096 in #2439
- fix typo in file download warning message about symlinks by @joetam in #2442
- Fix typo double assignment by @Wauplin in #2443
- [webhooks server] rely on SPACE_ID to check if app is local or in a Space by @Wauplin in #2450
- Fix error message on permission issue by @Wauplin in #2465
- Fix: do not erase existi...
[v0.24.7]: Fix race-condition issue when downloading from multiple threads
Full Changelog: v0.24.6...v0.24.7
For more details, refer to the related PR #2534.
[v0.24.6]: Fix [DONE] handling for `AsyncInferenceClient` on TGI 2.2.0+
Full Changelog: v0.24.5...v0.24.6
[v0.24.5] Fix download process on S3 mount (v2)
Follow-up after #2433 and the v0.24.4 patch release. This release will definitely fix things.
Full Changelog: v0.24.4...v0.24.5
[v0.24.4] Fix download process on S3 mount
When downloading a file, the process was failing if the filesystem did not support either `chmod` or `shutil.copy2` when moving a file from the tmp folder to the cache. This patch release fixes this. More details in #2429.
Full Changelog: v0.24.3...v0.24.4