Releases: mlcommons/ck
V1.3.0: indexing and silent mode turned on by default
- Turned on artifact indexing by default (can be turned off by setting CM_INDEX to "no", "off" or "false"; see the usage sketch after this list)
- Turned on --silent mode in "cm run script" by default (can be turned off via the --verbose or -v flags)
- Fixed duplicate version detection for Python packages
- Added --new_tags to "cm copy" and "cm add" to attach extra tags to newly created artifacts
- Added basic check for forbidden artifact names such as "."
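For illustration, a minimal shell sketch of the new defaults and options above; the script tags and artifact names are placeholders, and the positional form of "cm copy" is an assumption:

```bash
# Indexing is now ON by default; disable it for one invocation
# (the values "no", "off" and "false" are all accepted):
CM_INDEX=off cm run script --tags=detect,os

# "cm run script" is now silent by default; restore verbose output:
cm run script --tags=detect,os --verbose

# Attach extra tags to newly created artifacts (names are hypothetical):
cm add script my-script --new_tags=experimental,draft
cm copy script my-script my-script-copy --new_tags=fork
```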
V1.2.2: bug fixes
Fixed a minor bug in "cm detect repo" (turned off indexing).
V1.2.1
Fixed a bug in indexing to avoid returning duplicate artifacts when wildcards are used.
V1.2.0: Major update: transparent indexing of all artifacts to speed up search and scripts by ~10-50x.
Indexing is currently OFF by default for further testing. You can turn it on by setting the environment variable CM_INDEX to "yes", "on" or "true" (see the sketch below).
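A minimal sketch of enabling the experimental index for a shell session; the lookup command and tags are illustrative:

```bash
# Opt in to transparent indexing (accepted values: "yes", "on", "true"):
export CM_INDEX=yes

# Subsequent artifact lookups should now use the index, e.g.:
cm find script --tags=detect,os
```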
MLCommons CM v1.1.6
Minor fixes and updates: https://github.com/mlcommons/ck/blob/master/cm/CHANGES.md#v116
MLCommons CM aka CK2 v1.1.3
Stable release of the MLCommons CM automation meta-framework from the MLCommons taskforce on education and reproducibility:
- improved removal of CM entries on Windows
- fixed #574
- improved detection of CM entries with "."
- added --yaml option in "cm add" to save meta in YAML (see the sketch after this list)
- added --save_to_json to save output to JSON (useful for web services)
- extended "cm info {automation} {artifact}" (copy to clipboard)
Stable release of MLCommons CM v1.1.1
Stable release from the MLCommons taskforce on education and reproducibility to automate MLPerf inference at the Student Cluster Competition at SuperComputing'22.
Stable release of MLCommons CM v1.1.0
Stable release from the MLCommons taskforce on education and reproducibility to automate MLPerf inference at the Student Cluster Competition at SuperComputing'22.
cm-v1.0.5
Stable release from the MLCommons taskforce on education and reproducibility to test the MLPerf inference benchmark automation for the Student Cluster Competition at SC'22.
MLCommons CM v1.0.0 - the next generation of the MLCommons Collective Knowledge framework
This is the stable release of the MLCommons Collective Mind framework v1.0.1 with reusable and portable MLOps components: the next generation of the MLCommons Collective Knowledge framework, developed to modularize AI/ML systems and automate their benchmarking, optimization and design space exploration based on the mature MLPerf methodology.
After donating the CK framework to MLCommons, we have been developing this portable workflow automation technology as a community effort within the open education workgroup to modularize MLPerf and make it easier to plug in real-world tasks, models, data sets, software and hardware from the cloud to the edge.
We are very glad to see that more than 80% of all performance results and more than 95% of all power results were automated using MLCommons CK v2.6.1 in the latest MLPerf inference round, thanks to submissions from Qualcomm, Krai, Dell, HPE and Lenovo!
We invite you to join our public workgroup to continue developing this portable workflow framework and reusable automation for MLOps and DevOps as a community effort to:
- develop an open-source educational toolkit to make it easier to plug any real-world ML & AI tasks, models, data sets, software and hardware into the MLPerf benchmarking infrastructure;
- automate design space exploration of diverse ML/SW/HW stacks to trade off performance, accuracy, energy, size and costs;
- help end-users reproduce MLPerf results and deploy the most suitable ML/SW/HW stacks in production;
- support collaborative and reproducible research.
Copyright (C) MLCommons 2022
MLCommons CM toolkit v0.7.24 - the first stable release to modularize and automate MLPerf inference v2.1
Stable release of the MLCommons CM toolkit - the next generation of the CK framework, developed within the open workgroup to modularize and automate MLPerf benchmarks.