Releases: huggingface/huggingface_hub
v0.20.0: Authentication, speed, safetensors metadata, access requests and more.
(Discuss the release in our Community Tab. Feedback welcome! 🤗)
🔐 Authentication
Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define a `HF_TOKEN` secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up asks whether you want to share the `HF_TOKEN` secret with this notebook, as an opt-in mechanism. This way, there is no need to call `huggingface_hub.login` and copy-paste your token anymore! 🔥🔥🔥
In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either with `huggingface_hub.login` or the `HF_TOKEN` environment variable, rather than hardcoding a token in your scripts. Check out the new guide here.
- Login/authentication enhancements by @Wauplin in #1895
- Catch `SecretNotFoundError` in Google Colab login by @Wauplin in #1912
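As a rough illustration of the recommended precedence (an explicitly passed token wins over the `HF_TOKEN` environment variable, which wins over a token previously saved with `huggingface_hub.login`), here is a hypothetical stdlib sketch; the function name and exact resolution order are illustrative, not the library's implementation:

```python
import os

def resolve_token(explicit_token=None, env=None, stored_token=None):
    """Hypothetical token resolution: explicit argument, then the HF_TOKEN
    environment variable, then a token previously saved by login().
    Illustrative only; not huggingface_hub's actual implementation."""
    env = os.environ if env is None else env
    if explicit_token:
        return explicit_token
    if env.get("HF_TOKEN"):
        return env["HF_TOKEN"]
    return stored_token
```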
🏎️ Faster HfFileSystem
`HfFileSystem` is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. Its implementation has been greatly improved to optimize `fs.find` performance.
Here is a quick benchmark with the bigcode/the-stack-dedup dataset:
| | v0.19.4 | v0.20.0 |
|---|---|---|
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=False)` | 46.2s | 1.63s |
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=True)` | 47.3s | 24.2s |
- Faster `HfFileSystem.find` by @mariosasko in #1809
- Faster `HfFileSystem.glob` by @lhoestq in #1815
- Fix common path in `_ls_tree` by @lhoestq in #1850
- Remove `maxdepth` param from `HfFileSystem.glob` by @mariosasko in #1875
- [HfFileSystem] Support quoted revisions in path by @lhoestq in #1888
- Deprecate `HfApi.list_files_info` by @mariosasko in #1910
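To give a feel for the paths `HfFileSystem` operates on (including the quoted-revision support added in #1888), here is a toy parser; the function name and the simplified resolution rules are illustrative, not the library's real path resolver:

```python
from urllib.parse import unquote

def parse_hf_path(path):
    """Toy parser for HfFileSystem-style paths such as
    'datasets/user/repo@refs%2Fconvert%2Fparquet/dir/file.txt'.
    Illustrative simplification, not the library's real resolver."""
    repo_type = "model"
    if path.startswith("datasets/"):
        repo_type, path = "dataset", path[len("datasets/"):]
    elif path.startswith("spaces/"):
        repo_type, path = "space", path[len("spaces/"):]
    namespace, rest = path.split("/", 1)
    repo_name, _, subpath = rest.partition("/")
    revision = "main"
    if "@" in repo_name:
        repo_name, _, revision = repo_name.partition("@")
        revision = unquote(revision)  # 'refs%2Fconvert%2Fparquet' -> 'refs/convert/parquet'
    return repo_type, f"{namespace}/{repo_name}", revision, subpath
```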
🚪 Access requests API (gated repos)
Models and datasets can be gated to monitor who's accessing the data you are sharing. You can also require manual approval of access requests. Access requests can now be managed programmatically using `HfApi`. This can be useful, for example, if you have advanced user screening requirements (for compliance purposes, etc.) or if you want to condition access to a model on completing a payment flow.
Check out this guide to learn more about gated repos.
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
🔍 Parse Safetensors metadata
Safetensors is a simple, fast and secure format for saving tensors. Its advantages make it the preferred format for hosting weights on the Hub. Thanks to its specification, the file metadata can be parsed on the fly. `HfApi` now provides `get_safetensors_metadata`, a helper to get safetensors metadata from a repo.
# Parse repo with single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
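The on-the-fly parsing works because of the safetensors layout itself: a file starts with an 8-byte little-endian length, followed by a JSON header describing every tensor (plus an optional `__metadata__` dict). Here is a minimal stdlib reader for that header, as a sketch of what the format specification allows, not what `get_safetensors_metadata` actually ships:

```python
import json
import struct

def read_safetensors_header(blob: bytes) -> dict:
    """Parse the JSON header at the start of a safetensors blob.
    Per the format spec: the first 8 bytes are a little-endian uint64
    header size, followed by that many bytes of JSON."""
    (header_size,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8 : 8 + header_size].decode("utf-8"))
```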
Other improvements
List and filter collections
You can now list collections on the Hub. You can filter them to return only collections containing a given item, or created by a given author.
>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
... print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
- add list_collections endpoint, solves #1835 by @ceferisbarov in #1856
- fix list collections sort values by @Wauplin in #1867
- Warn about truncation when listing collections by @Wauplin in #1873
Respect .gitignore
`upload_folder` now respects `.gitignore` files!
Previously, you could filter which files of a folder should be uploaded using the `allow_patterns` and `ignore_patterns` parameters. This can now be done automatically by simply creating a `.gitignore` file in your repo.
- Respect `.gitignore` file in commits by @Wauplin in #1868
- Remove respect_gitignore parameter by @Wauplin in #1876
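For intuition, pattern-based filtering boils down to something like the following stdlib sketch. Real `.gitignore` semantics are richer (negation, directory anchoring), and this toy helper is not the library's implementation:

```python
from fnmatch import fnmatch

def filter_uploads(paths, ignore_patterns):
    """Toy version of ignore-pattern filtering: keep a path only if it
    matches none of the ignore patterns. Illustrative, not the real logic."""
    return [p for p in paths if not any(fnmatch(p, pat) for pat in ignore_patterns)]
```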
Robust uploads
Uploading LFS files has also gotten more robust, with a retry mechanism if a transient error happens while uploading to S3.
Target language in InferenceClient.translation
`InferenceClient.translation` now supports `src_lang`/`tgt_lang` for applicable models.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
- add language support to translation client, solves #1763 by @ceferisbarov in #1869
Support source in reported EvalResult
`EvalResult` now supports `source_name` and `source_link` to provide a custom source for a reported result.
🛠️ Misc
Fetch all pull request refs with `list_repo_refs`.
Filter discussions when listing them with `get_repo_discussions`.
# List opened PR from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
... repo_id="openai/whisper-large-v3",
... author="sanchit-gandhi",
... discussion_type="pull_request",
... discussion_status="open",
... )
- ✨ Add filters to HfApi.get_repo_discussions by @SBrandeis in #1845
New field `createdAt` for `ModelInfo`, `DatasetInfo` and `SpaceInfo`.
It's now possible to create an inference endpoint running on a custom docker image (typically: a TGI container).
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
Upload CLI: create branch when revision does not exist
🖥️ Environment variables
`huggingface_hub.constants.HF_HOME` has been made a public constant (see reference).
Offline mode has gotten more consistent: if `HF_HUB_OFFLINE` is set, any HTTP call to the Hub will fail. In addition, the fallback mechanism of `snapshot_download` has been refactored to align with the `hf_hub_download` workflow: if offline mode is activated (or a connection error happens) and the files are already in the cache, `snapshot_download` returns the corresponding snapshot directory.
- Respect HF_HUB_OFFLINE for every http call by @Wauplin in #1899
- Improve `snapshot_download` offline mode by @Wauplin in #1913
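The resulting control flow can be summarized with this sketch, where `fetch` and `cached_dir` stand in for the real internals (assumed names, not the library's API):

```python
def resolve_snapshot(offline, cached_dir, fetch):
    """Sketch of the snapshot_download fallback: in offline mode (or on a
    connection error), return the cached snapshot directory if it exists."""
    if offline:
        if cached_dir is not None:
            return cached_dir
        raise FileNotFoundError("Offline mode is enabled and the snapshot is not cached.")
    try:
        return fetch()
    except ConnectionError:
        if cached_dir is not None:
            return cached_dir
        raise
```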
The `DO_NOT_TRACK` environment variable is now respected to deactivate telemetry calls. This is similar to `HF_HUB_DISABLE_TELEMETRY`, but not specific to Hugging Face.
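Such boolean environment flags are commonly interpreted along these lines (an illustrative sketch; see `huggingface_hub`'s constants module for the actual parsing):

```python
def _env_true(value):
    """Treat common truthy spellings ('1', 'ON', 'YES', 'TRUE') as enabled."""
    return value is not None and value.upper() in {"1", "ON", "YES", "TRUE"}

def telemetry_disabled(env):
    """Telemetry is off if either the HF-specific flag or the ecosystem-wide
    DO_NOT_TRACK convention is set. Illustrative helper, not the library's API."""
    return _env_true(env.get("HF_HUB_DISABLE_TELEMETRY")) or _env_true(env.get("DO_NOT_TRACK"))
```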
📚 Documentation
- Document more list repos behavior by @Wauplin in #1823
- [i18n-KO] 🌐 Translated `git_vs_http.md` to Korean by @heuristicwave in #1862
Doc fixes
v0.19.4 - Hot-fix: do not fail if pydantic install is corrupted
On Python 3.8, it is fairly easy to end up with a corrupted install of pydantic (more specifically, pydantic 2.x cannot run if tensorflow is installed, because of an incompatible requirement on `typing_extensions`). Since `pydantic` is an optional dependency of `huggingface_hub`, we do not want to crash at `huggingface_hub` import time if the pydantic install is corrupted. However, this was the case because of how imports are made in `huggingface_hub`. This hot-fix release fixes the bug: if pydantic is not correctly installed, we only raise a warning and continue as if it were not installed at all.
Related PR: #1829
Full Changelog: v0.19.3...v0.19.4
v0.19.3 - Hot-fix: pin `pydantic<2.0` on Python3.8
Hot-fix release after #1828.
In `0.19.0` we loosened the pydantic requirement to accept both 1.x and 2.x, since `huggingface_hub` is compatible with both. However, this started to cause issues when installing both `huggingface_hub[inference]` and `tensorflow` in a Python 3.8 environment. The problem is that on Python 3.8, pydantic 2.x and tensorflow are not compatible: tensorflow depends on `typing_extensions<=4.5.0` while pydantic 2.x requires `typing_extensions>=4.6`. This causes an `ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'` when importing `huggingface_hub`.
As a side note, tensorflow support for Python 3.8 has been dropped since 2.14.0, so this issue should affect fewer and fewer users over time.
Full Changelog: v0.19.2...v0.19.3
v0.19.2 - Patch: expose HF_HOME in constants
Not a hot-fix.
In #1786 (already released in `0.19.0`), we harmonized the environment variables in the HF ecosystem, with the goal of propagating this harmonization to other HF libraries. In that work, we forgot to expose `HF_HOME` as a constant value that can be reused, especially by `transformers` or `datasets`. This release fixes this (see #1825).
Full Changelog: v0.19.1...v0.19.2
v0.19.1 - Hot-fix: ignore TypeError when listing models with corrupted ModelCard
Full Changelog: v0.19.0...v0.19.1.
Fixes a regression bug (PR #1821) introduced in `0.19.0` that made looping over models with `list_models` fail. The problem came from the fact that the data returned by the server is now parsed into Python objects, but for some models the metadata in the model card is not valid. This is usually checked by the server, but some models created before correct metadata was enforced are invalid. This hot-fix fixes the issue by ignoring the corrupted data, if any.
v0.19.0: Inference Endpoints and robustness!
(Discuss the release in our Community Tab. Feedback welcome! 🤗)
🚀 Inference Endpoints API
Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. With `huggingface_hub>=0.19.0`, you can now manage your Inference Endpoints programmatically. Combined with the `InferenceClient`, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!
Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch and pause the endpoint again, all in a few lines of code! For more details, please check out our dedicated guide.
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint
# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()
# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])  # awaited in an async context (e.g. a notebook)
# Pause endpoint
>>> endpoint.pause()
- Implement API for Inference Endpoints by @Wauplin in #1779
- Fix inference endpoints docs by @Wauplin in #1785
⏬ Improved download experience
`huggingface_hub` is a library primarily used to transfer (huge!) files with the Hugging Face Hub. Our goal is to keep improving the experience for this core part of the library. In this release, we introduce a more robust download mechanism for slow/limited connections, while improving the UX for users with high bandwidth available!
More robust downloads
Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if a connection gets closed or a ReadTimeout error is raised. The download restarts exactly where it stopped, without having to redownload any bytes.
- Retry on ConnectionError/ReadTimeout when streaming file from server by @Wauplin in #1766
- Reset nb_retries if data has been received from the server by @Wauplin in #1784
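The two PRs above combine into a loop of roughly this shape: resume from the last received byte instead of restarting, and reset the retry budget whenever progress is made. This is a self-contained sketch with `read_chunk` standing in for an HTTP Range request; it is not the library's actual code:

```python
def download_with_retries(read_chunk, total_size, max_retries=5):
    """Resumable download sketch: `read_chunk(offset)` returns the next bytes
    starting at `offset` (like an HTTP Range request) or raises ConnectionError."""
    data = b""
    retries = 0
    while len(data) < total_size:
        try:
            data += read_chunk(len(data))  # resume exactly where we stopped
            retries = 0  # progress was made: reset the retry counter
        except ConnectionError:
            retries += 1
            if retries > max_retries:
                raise
    return data
```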
In addition to this, it is possible to configure `huggingface_hub` with higher timeouts, thanks to @Shahafgo. This should help to get around some issues on slower connections.
- Adding the ability to configure the timeout of get request by @Shahafgo in #1720
- Fix a bug to respect the HF_HUB_ETAG_TIMEOUT. by @Shahafgo in #1728
Progress bars while using hf_transfer
`hf_transfer` is a Rust-based library focused on improving upload and download speeds on machines with high bandwidth available. Once installed (`pip install -U hf_transfer`), it can be used transparently with `huggingface_hub` simply by setting the `HF_HUB_ENABLE_HF_TRANSFER=1` environment variable. The counterpart of higher performance is the lack of some user-friendly features such as better error handling or a retry mechanism, meaning it is recommended only for power users. In this release we still ship a new feature to improve UX: progress bars. No need to update any existing code; a simple library upgrade is enough.
- `hf-transfer` progress bar by @cbensimon in #1792
- Add support for progress bars in hf_transfer uploads by @Wauplin in #1804
📚 Documentation
huggingface-cli guide
`huggingface-cli` is the CLI tool shipped with `huggingface_hub`. It recently got some nice improvements, especially with commands to download and upload files directly from the terminal. All of this needed a guide, so here it is!
Environment variables
Environment variables are useful to configure how `huggingface_hub` should work. Historically, we had some inconsistencies in how those variables were named. This is now improved, with a backward-compatible approach. Please check the package reference for more details. The goal is to propagate those changes to the whole HF ecosystem, making configuration easier for everyone.
- Harmonize environment variables by @Wauplin in #1786
- Ensure backward compatibility for HUGGING_FACE_HUB_TOKEN env variable by @Wauplin in #1795
- Do not promote `HF_ENDPOINT` environment variable by @Wauplin in #1799
Hindi translation
Hindi documentation landed on the Hub thanks to @aneeshd27! Check out the Hindi version of the quickstart guide here.
- Added translation of 3 files as mentioned in issue by @aneeshd27 in #1772
Minor docs fixes
- Added `[[autodoc]]` for `ModelStatus` by @jamesbraza in #1758
- Expanded docstrings on `post` and `ModelStatus` by @jamesbraza in #1740
- Fix document link for manage-cache by @liuxueyang in #1774
- Minor doc fixes by @pcuenca in #1775
💔 Breaking changes
Legacy `ModelSearchArguments` and `DatasetSearchArguments` have been completely removed from `huggingface_hub`. This shouldn't cause problems, as they were already not in use (and unusable in practice).
- Removed GeneralTags, ModelTags and DatasetTags by @VictorHugoPilled in #1761
Classes containing details about a repo (`ModelInfo`, `DatasetInfo` and `SpaceInfo`) have been refactored by @mariosasko to be more Pythonic and aligned with the other classes in `huggingface_hub`. In particular, those objects are now based on the `dataclasses` module instead of a custom `ReprMixin` class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it ASAP.
- Replace `ReprMixin` with dataclasses by @mariosasko in #1788
- Fix SpaceInfo initialization + add test by @Wauplin in #1802
The legacy `Repository` and `InferenceAPI` classes are now deprecated but will not be removed before the next major release (`v1.0`).
Instead of the git-based `Repository`, we advise using the HTTP-based `HfApi`. Check out this guide explaining the reasons behind it. For `InferenceAPI`, we recommend switching to `InferenceClient`, which is much more feature-complete and will keep getting improved.
⚙️ Miscellaneous improvements, fixes and maintenance
InferenceClient
- Adding `InferenceClient.get_recommended_model` by @jamesbraza in #1770
- Fix InferenceClient.text_generation when pydantic is not installed by @Wauplin in #1793
- Supporting `pydantic<3` by @jamesbraza in #1727
HfFileSystem
- [hffs] Raise `NotImplementedError` on transaction commits by @Wauplin in #1736
- Fix huggingface filesystem repo_type not forwarded by @Wauplin in #1791
- Fix `HfFileSystemFile` when init fails + improve error message by @Wauplin in #1805
FIPS compliance
Misc fixes
- Fix UnboundLocalError when using commit context manager by @hahunavth in #1722
- Fixed improperly configured 'every' leading to test_sync_and_squash_history failure by @jamesbraza in #1731
- Testing `WEBHOOK_PAYLOAD_EXAMPLE` deserialization by @jamesbraza in #1732
- Keep lock files in a `/locks` folder to prevent rare concurrency issues by @beeender in #1659
- Fix Space runtime on static Space by @Wauplin in #1754
- Clearer error message on unprocessable entity. by @Wauplin in #1755
- Do not warn in ModelHubMixin on missing config file by @Wauplin in #1776
- Update SpaceHardware enum by @Wauplin in #1798
- change prop name by @julien-c in #1803
Internal
- Bump version to 0.19 by @Wauplin in #1723
- Make `@retry_endpoint` a default for all tests by @Wauplin in #1725
- Retry test on 502 Bad Gateway by @Wauplin in #1737
- Consolidated mypy type ignores in `InferenceClient.post` by @jamesbraza in #1742
- fix: remove useless token by @rtrompier in #1765
- Fix CI (typing-extensions minimal requirement) by @Wauplin in #1781
- remove black formatter to use only ruff by @Wauplin in #1783
- Separate test and prod cache (+ ruff formatter) by @Wauplin in #1789
- fix 3.8 tensorflow in ci by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @VictorHugoPilled
- Removed GeneralTags, ModelTags and DatasetTags (#1761)
- @aneeshd27
- Added translation of 3 files as mentioned in issue (#1772)
v0.18.0: Collection API, translated documentation and more!
(Discuss the release and provide feedback in the Community Tab!)
Collection API 🎉
Collection API is now fully supported in `huggingface_hub`!
A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this guide to understand in more detail what collections are and this guide to learn how to build them programmatically.
Create/get/update/delete collection:
- `get_collection`
- `create_collection`: title, description, namespace, private
- `update_collection_metadata`: title, description, position, private, theme
- `delete_collection`

Add/update/remove item from collection:
- `add_collection_item`: item id, item type, note
- `update_collection_item`: note, position
- `delete_collection_item`
Usage
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem: {
{'_id': '6507f6d5423b46492ee1413e',
'id': 'TheBloke/TigerBot-70B-Chat-GPTQ',
'author': 'TheBloke',
'item_type': 'model',
'lastModified': '2023-09-19T12:55:21.000Z',
(...)
}}
>>> from huggingface_hub import create_collection, add_collection_item
# Create collection
>>> collection = create_collection(
... title="ICCV 2023",
... description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
# Add item with a note
>>> add_collection_item(
... collection_slug=collection.slug, # e.g. "davanstrien/climate-64f99dc2a5067f6b65531bab"
... item_id="datasets/climate_fever",
... item_type="dataset",
... note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
- Add Collection API by @Wauplin in #1687
- Add
url
attribute to Collection class by @Wauplin in #1695 - [Fix] Add collections guide to overview page by @Wauplin in #1696
📚 Translated documentation
Documentation is now available in both German and Korean thanks to community contributions! This is an important milestone for Hugging Face in its mission to democratize good machine learning.
- 🌐 [i18n-DE] Translate docs to German by @martinbrose in #1646
- 🌐 [i18n-KO] Translated README, landing docs to Korean by @wonhyeongseo in #1667
- Update i18n template by @Wauplin in #1680
- Add German concepts guide by @martinbrose in #1686
Preupload files before committing
(Disclaimer: this is power-user functionality. It is not expected to be used directly by end users.)
When using `create_commit` (or `upload_file`/`upload_folder`), the internal workflow has 3 main steps:
1. List the files to upload and check if those are regular files (text) or LFS files (binaries or huge files)
2. Upload the LFS files to S3
3. Create a commit on the Hub (upload regular files + reference S3 urls at once). The LFS upload is important to avoid large payloads during the commit call.

In this release, we introduce `preupload_lfs_files` to perform step 2 independently of step 3. This is useful for libraries like `datasets` that generate huge files "on-the-fly" and want to preupload them one by one before making one commit with all the files. For more details, please read this guide.
- Preupload lfs files before committing by @Wauplin in #1699
- Hide `CommitOperationAdd`'s internal attributes by @mariosasko in #1716
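The step-2/step-3 split can be pictured with this stdlib sketch, where `preupload` and `commit` stand in for `preupload_lfs_files` and `create_commit` (an illustration of the flow, not the real API):

```python
def commit_with_preupload(generate_shards, preupload, commit):
    """Upload each (potentially huge) generated file as soon as it is ready
    (step 2, one call per file), then create a single commit referencing
    all of them at once (step 3)."""
    uploaded = []
    for shard in generate_shards():
        uploaded.append(preupload(shard))  # the shard can be discarded locally here
    return commit(uploaded)
```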
Miscellaneous improvements
❤️ List repo likers
Similarly to `list_user_likes` (listing all likes of a user), we now introduce `list_repo_likers` to list all likes on a repo, thanks to @issamarabi.
>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> len(likers)
204
>>> likers
[User(username=..., fullname=..., avatar_url=...), ...]
- Add list_repo_likers method to HfApi by @issamarabi in #1715
Refactored Dataset Card template
Template for the Dataset Card has been updated to be more aligned with the Model Card template.
- Dataset card template overhaul by @mariosasko in #1708
QOL improvements
This release also adds a few QOL improvements for users:
- Suggest to check firewall/proxy settings + default to local file by @Wauplin in #1670
- debug logs to debug level by @Wauplin (direct commit on main)
- Change `TimeoutError` => `asyncio.TimeoutError` by @matthewgrossman in #1666
- Handle `refs/convert/parquet` and PR revision correctly in hffs by @Wauplin in #1712
- Document hf_transfer more prominently by @Wauplin in #1714
Breaking change
A breaking change has been introduced in `CommitOperationAdd` in order to implement `preupload_lfs_files` in a way that is convenient for users. The main change is that `CommitOperationAdd` is no longer a static object but is modified internally by `preupload_lfs_files` and `create_commit`. This means that you cannot reuse a `CommitOperationAdd` object once it has been committed to the Hub. If you do so, an explicit exception will be raised. You can still reuse the operation objects if the commit call failed and you retry it. We hope this will not affect any users, but please open an issue if you encounter any problem.
⚙️ Small fixes and maintenance
Docs fixes
- Move repo size limitations to Hub docs by @Wauplin in #1660
- Correct typo in upload guide by @martinbrose in #1677
- Fix broken tips in login reference by @Wauplin in #1688
Misc fixes
- Fixes filtering by tags with list_models and adds test case by @martinbrose in #1673
- Add default user-agent to huggingface-cli by @Wauplin in #1664
- Automatically retry on create_repo if '409 conflicting op in progress' by @Wauplin in #1675
- Fix upload CLI when pushing to Space by @Wauplin in #1669
- longer pbar descr, drop D-word by @poedator in #1679
- Pin `fsspec` to use default `expand_path` by @mariosasko in #1681
- Address failing _check_disk_space() when path doesn't exist yet by @martinbrose in #1692
- Handle TGI error when streaming tokens by @Wauplin in #1711
Internal
- bump version to `0.18.0.dev0` by @Wauplin in #1658
- sudo apt update in CI by @Wauplin (direct commit on main)
- fix CI tests by @Wauplin (direct commit on main)
- Skip flaky InferenceAPI test by @Wauplin (direct commit on main)
- Respect `HTTPError` spec by @Wauplin in #1693
- skip flaky test by @Wauplin (direct commit on main)
- Fix LFS tests after password auth deprecation by @Wauplin in #1713
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @martinbrose
- @wonhyeongseo
- 🌐 [i18n-KO] Translated README, landing docs to Korean (#1667)
v0.17.3 - Hot-fix: ignore errors when checking available disk space
Full Changelog: v0.17.2...v0.17.3
Fixing a bug when downloading files to a non-existent directory. In #1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder doesn't exist yet, as reported in #1690. This hot-fix fixes it thanks to #1692, which recursively checks the parent directories if the full path doesn't exist. If it keeps failing (for any `OSError`), we silently ignore the error and keep going: not having the warning is better than breaking the download for legit users.
Check out the v0.17 release notes below to learn more about the v0.17 release.
v0.17.2 - Hot-fix: make `huggingface-cli upload` work with Spaces
Full Changelog: v0.17.1...v0.17.2
Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create a repo (even if it already exists) and was failing because `space_sdk` was not found in that case. More details in #1669.
Also updated the user-agent when using `huggingface-cli upload`. See #1664.
Check out the v0.17 release notes below to learn more about the v0.17 release.
v0.17.0: Inference, CLI and Space API
InferenceClient
All tasks are now supported! 💥
Thanks to a massive community effort, all inference tasks are now supported in `InferenceClient`. Newly added tasks are:
- Object detection by @dulayjm in #1548
- Text classification by @martinbrose in #1606
- Token classification by @martinbrose in #1607
- Translation by @martinbrose in #1608
- Question answering by @martinbrose in #1609
- Table question answering by @martinbrose in #1612
- Fill mask by @martinbrose in #1613
- Tabular classification by @martinbrose in #1614
- Tabular regression by @martinbrose in #1615
- Document question answering by @martinbrose in #1620
- Visual question answering by @martinbrose in #1621
- Zero shot classification by @Wauplin in #1644
Documentation, including examples, for each of these tasks can be found in this table.
All of those methods also support async mode via `AsyncInferenceClient`.
Get InferenceAPI status
Sometimes it is useful to know which models are currently available on the Inference API service. This release introduces two new helpers:
- `list_deployed_models` helps users discover which models are currently deployed, listed by task.
- `get_model_status` gets the status of a specific model. This is useful if you already know which model you want to use.

These two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]
# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
- Add get_model_status function by @sifisKoen in #1558
- Add list_deployed_models to inference client by @martinbrose in #1622
Few fixes
- Send Accept: image/png as header for image tasks by @Wauplin in #1567
- FIX `text_to_image` and `image_to_image` parameters by @Wauplin in #1582
- Distinguish _bytes_to_dict and _bytes_to_list + fix issues by @Wauplin in #1641
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
Download and upload files... from the CLI 🔥 🔥 🔥
This is a long-awaited feature, finally implemented! `huggingface-cli` now offers two new commands to easily transfer files from/to the Hub. The goal is to use them as a replacement for `git clone`, `git pull` and `git push`. Despite being less feature-complete than `git` (no `.git/` folder, no notion of local commits), it offers the flexibility required when working with large repositories.
Download
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json
# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
Upload
# Upload single file
huggingface-cli upload my-cool-model model.safetensors
# Upload entire directory
huggingface-cli upload my-cool-model ./models
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
Docs
For more examples, check out the documentation:
- Implemented CLI download functionality by @martinbrose in #1617
- Implemented CLI upload functionality by @martinbrose in #1618
🚀 Space API
Some new features have been added to the Space API to:
- request persistent storage for a Space
- set a description to a Space's secrets
- set variables on a Space
- configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
... repo_id=repo_id,
... repo_type="space",
... space_sdk="gradio",
... space_hardware="t4-medium",
... space_sleep_time="3600",
... space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
A special thanks to @martinbrose, who contributed largely to these new features.
- Request Persistent Storage by @freddyaboulton in #1571
- Support factory reboot when restarting a Space by @Wauplin in #1586
- Added support for secret description by @martinbrose in #1594
- Added support for space variables by @martinbrose in #1592
- Add settings for creating and duplicating spaces by @martinbrose in #1625
📚 Documentation
A new section has been added to the upload guide with tips about how to upload large models and datasets to the Hub, and what the limits are when doing so.
- Tips to upload large models/datasets by @Wauplin in #1565
- Add the hard limit of 50GB on LFS files by @severo in #1624
🗺️ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!
- Add translation guide + update repo structure by @Wauplin in #1602
- Fix i18n issue template links by @Wauplin in #1627
Breaking change
The behavior of `InferenceClient.feature_extraction` has been updated to fix a bug happening with certain models. The shape of the returned array for `transformers` models has changed from `(sequence_length, hidden_size)` to `(1, sequence_length, hidden_size)`, which is the breaking change.
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
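If existing code assumed the old shape, a tiny adapter restores it. This helper is a hypothetical convenience for migration, not part of the library:

```python
def drop_batch_axis(features):
    """Convert the new (1, sequence_length, hidden_size) response back to the
    old (sequence_length, hidden_size) shape by removing the leading batch axis."""
    assert len(features) == 1, "expected a single-item batch"
    return features[0]
```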
QOL improvements
`HfApi` helpers:
Two new helpers have been added to check if a file or a repo exists on the Hub:
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
- Check if repo or file exists by @martinbrose in #1591
Also, `hf_hub_download` and `snapshot_download` are now part of `HfApi` (keeping the same syntax and behavior).
Download improvements:
- When a user tries to download a model but the disk is full, a warning is triggered.
- When a user tries to download a model but an HTTP error happens, we still check locally if the file exists.
- Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by @jiamings in #1561
- Implemented check_disk_space function by @martinbrose in #1590
Small fixes and maintenance
⚙️ Doc fixes
- Fix table by @stevhliu in #1577
- Improve docstrings for text generation by @osanseviero in #1597
- Fix superfluous-typo by @julien-c in #1611
- minor missing paren by @julien-c in #1637
- update i18n template by @Wauplin (direct commit on main)
- Add documentation for modelcard Metadata. Resolves by @sifisKoen in #1448
⚙️ Other fixes
- Add `missing_ok` option in `delete_repo` by @Wauplin in #1640
- Implement `super_squash_history` in `HfApi` by @Wauplin in #1639
- 1546 fix empty metadata on windows by @Wauplin in #1547
- Fix tqdm by @NielsRogge in #1629
- Fix bug #1634 (drop finishing spaces and EOL) by @GBR-613 in #1638
⚙️ Internal
- Prepare for 0.17 by @Wauplin in #1540
- update mypy version + fix issues + remove deprecatedlist helper by @Wauplin in #1628
- mypy traceck by @Wauplin (direct commit on main)
- pin pydantic version by @Wauplin (direct commit on main)
- Fix ci tests by @Wauplin in #1630
- Fix test in contrib CI by @Wauplin (direct commit on main)
- skip gated repo test on contrib by @Wauplin (direct commit on main)
- skip failing test by @Wauplin (direct commit on main)
- Fix fsspec tests in ci by @Wauplin in #1635
- FIX windows CI by @Wauplin (direct commit on main)
- FIX style issues by pinning black version by @Wauplin (direct commit on main)
- forgot test case by @Wauplin (direct commit on main)
- shorter is better by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @dulayjm
- Add object detection to inference client (#1548)
- @martinbrose
- Added support for s...