Remove deprecated Data and DataSerialize (#2703)
laggui authored Jan 16, 2025
1 parent f630b3b commit 93f8bad
Showing 7 changed files with 37 additions and 477 deletions.
35 changes: 19 additions & 16 deletions README.md
@@ -621,19 +621,20 @@ leads to more reliable, bug-free solutions built faster (after some practice
<br />

> **Deprecation Note**<br />Since `0.14.0`, the internal structure for tensor data has changed. The
> previous `Data` struct is being deprecated in favor of the new `TensorData` struct, which allows
> for more flexibility by storing the underlying data as bytes and keeping the data type as a field.
> If you are using `Data` in your code, make sure to switch to `TensorData`.
> previous `Data` struct was deprecated and officially removed in `0.17.0` in favor of the new
> `TensorData` struct, which allows for more flexibility by storing the underlying data as bytes and
> keeping the data type as a field. If you are using `Data` in your code, make sure to switch to
> `TensorData`.
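
The switch is mostly mechanical: where the old `Data<E, D>` carried the element type as a generic parameter, `TensorData` stores the raw bytes and keeps the data type as a runtime field. A minimal std-only sketch of that idea (toy types invented for illustration, not burn's actual API):

```rust
// Toy stand-ins for the `TensorData` design: values live in a byte
// buffer, and the element type is a field rather than a generic parameter.
#[derive(Debug, PartialEq)]
enum ToyDType {
    F32,
    I32,
}

struct ToyTensorData {
    bytes: Vec<u8>,
    dtype: ToyDType,
    shape: Vec<usize>,
}

impl ToyTensorData {
    // Build from f32 values by storing their little-endian bytes.
    fn from_f32(values: &[f32], shape: Vec<usize>) -> Self {
        let bytes = values.iter().flat_map(|v| v.to_le_bytes()).collect();
        Self { bytes, dtype: ToyDType::F32, shape }
    }

    // Recover the f32 values; returns None if the stored dtype differs.
    fn as_f32(&self) -> Option<Vec<f32>> {
        if self.dtype != ToyDType::F32 {
            return None;
        }
        Some(
            self.bytes
                .chunks_exact(4)
                .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
                .collect(),
        )
    }
}

fn main() {
    let data = ToyTensorData::from_f32(&[1.0, 2.0, 3.0], vec![3]);
    assert_eq!(data.dtype, ToyDType::F32);
    assert_eq!(data.as_f32().unwrap(), vec![1.0, 2.0, 3.0]);

    // A dtype mismatch is caught at runtime instead of at the type level.
    let other = ToyTensorData { bytes: vec![], dtype: ToyDType::I32, shape: vec![0] };
    assert!(other.as_f32().is_none());
    println!("round-trip ok, shape {:?}", data.shape);
}
```

The runtime `dtype` field is what lets the new struct hold any element type behind one concrete type, at the cost of a checked (rather than compile-time) element-type match.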
<!-- >
> In the event that you are trying to load a model record saved in a previous version, make sure to
> enable the `record-backward-compat` feature. Otherwise, the record won't be deserialized correctly
> and you will get an error message (which will also point you to the backward compatible feature
> flag). The backward compatibility is maintained for deserialization (loading), so as soon as you
> have saved the record again it will be saved according to the new structure and you won't need the
> backward compatible feature flag anymore. Please note that binary formats are not backward
> compatible. Thus, you will need to load your record in a previous version and save it to another
> of the self-describing record formats before using the new version with the
> enable the `record-backward-compat` feature using a previous version of burn (<=0.16.0). Otherwise,
> the record won't be deserialized correctly and you will get an error message (which will also point
> you to the backward compatible feature flag). The backward compatibility was maintained for
> deserialization (loading), so as soon as you have saved the record again it will be saved according
> to the new structure and you will be able to upgrade to this version. Please note that binary formats
> are not backward compatible. Thus, you will need to load your record in a previous version and save it
> to another of the self-describing record formats before using a compatible version (as described) with the
> `record-backward-compat` feature flag. -->

<details id="deprecation">
@@ -642,8 +643,9 @@ Loading Model Records From Previous Versions ⚠️
</summary>
<br />

In the event that you are trying to load a model record saved in a previous version, make sure to
enable the `record-backward-compat` feature flag.
In the event that you are trying to load a model record saved in a version older than `0.14.0`, make
sure to use a compatible version (`0.14`, `0.15` or `0.16`) with the `record-backward-compat`
feature flag.

```
features = [..., "record-backward-compat"]
```

@@ -652,13 +654,14 @@
Otherwise, the record won't be deserialized correctly and you will get an error message. This error
will also point you to the backward compatible feature flag.

The backward compatibility is maintained for deserialization when loading records. Therefore, as
soon as you have saved the record again it will be saved according to the new structure and you
won't need the backward compatible feature flag anymore.
The backward compatibility was maintained for deserialization when loading records. Therefore, as
soon as you have saved the record again it will be saved according to the new structure, and you can
then upgrade back to the current version.

Please note that binary formats are not backward compatible. Thus, you will need to load your record
in a previous version and save it in one of the self-describing record formats (e.g., using the
`NamedMpkFileRecorder`) before using a compatible version (as described above) with the
`record-backward-compat` feature flag.
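
The flow above can be illustrated with a std-only toy (both text formats below are invented for illustration; burn's real recorders serialize binary or MessagePack records): load the record in its legacy layout, re-save it in a self-describing layout that names its own version, and only then is it loadable by the newer code path.

```rust
// Legacy layout: bare comma-separated values, e.g. "1.0,2.0".
// New layout: self-describing header, e.g. "v2:f32:1.0,2.0".
// Both formats are invented stand-ins for illustration only.

fn load_legacy(input: &str) -> Result<Vec<f32>, String> {
    input
        .split(',')
        .map(|s| s.trim().parse::<f32>().map_err(|e| e.to_string()))
        .collect()
}

fn save_new(values: &[f32]) -> String {
    let body: Vec<String> = values.iter().map(|v| v.to_string()).collect();
    format!("v2:f32:{}", body.join(","))
}

fn load_new(input: &str) -> Result<Vec<f32>, String> {
    let body = input
        .strip_prefix("v2:f32:")
        .ok_or_else(|| "not a v2 record; re-save it with a previous version".to_string())?;
    load_legacy(body)
}

fn main() {
    // Migration: load the legacy record, save in the new format, reload.
    let legacy = "1.0, 2.5, -3.0";
    let values = load_legacy(legacy).unwrap();
    let migrated = save_new(&values);
    assert_eq!(load_new(&migrated).unwrap(), values);

    // Feeding the legacy record directly to the new loader fails, which
    // mirrors the error message described above.
    assert!(load_new(legacy).is_err());
    println!("migrated record: {migrated}");
}
```
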

</details>

3 changes: 0 additions & 3 deletions crates/burn-core/Cargo.toml
@@ -113,9 +113,6 @@ record-item-custom-serde = ["thiserror", "regex"]
# Serialization formats
experimental-named-tensor = ["burn-tensor/experimental-named-tensor"]

# Backwards compatibility with previous serialized data format.
record-backward-compat = []

test-cuda = ["cuda-jit"] # To use cuda during testing, default uses ndarray.
test-hip = ["hip-jit"] # To use hip during testing, default uses ndarray.
test-tch = ["tch"] # To use tch during testing, default uses ndarray.
21 changes: 1 addition & 20 deletions crates/burn-core/src/record/primitive.rs
@@ -5,9 +5,7 @@ use super::tensor::{BoolTensorSerde, FloatTensorSerde, IntTensorSerde};
use super::{PrecisionSettings, Record};
use crate::module::{Param, ParamId};

#[allow(deprecated)]
use burn_tensor::DataSerialize;
use burn_tensor::{backend::Backend, Bool, Element, Int, Tensor};
use burn_tensor::{backend::Backend, Bool, Int, Tensor};

use hashbrown::HashMap;
use serde::{
@@ -143,23 +141,6 @@ where
}
}

#[allow(deprecated)]
impl<E, B> Record<B> for DataSerialize<E>
where
E: Element,
B: Backend,
{
type Item<S: PrecisionSettings> = DataSerialize<S::FloatElem>;

fn into_item<S: PrecisionSettings>(self) -> Self::Item<S> {
self.convert()
}

fn from_item<S: PrecisionSettings>(item: Self::Item<S>, _device: &B::Device) -> Self {
item.convert()
}
}

/// (De)serialize parameters into a clean format.
#[derive(new, Debug, Clone, Serialize, Deserialize)]
pub struct ParamSerde<T> {
50 changes: 12 additions & 38 deletions crates/burn-core/src/record/tensor.rs
@@ -4,52 +4,26 @@ use super::{PrecisionSettings, Record};
use burn_tensor::{backend::Backend, Bool, DType, Element, Int, Tensor, TensorData};
use serde::{Deserialize, Serialize};

#[cfg(not(feature = "record-backward-compat"))]
use alloc::format;
#[cfg(feature = "record-backward-compat")]
use burn_tensor::DataSerialize;

/// Versioned serde data deserialization to maintain backward compatibility between formats.
#[cfg(feature = "record-backward-compat")]
#[allow(deprecated)]
#[derive(Serialize, Deserialize)]
#[serde(untagged)]
enum TensorDataSerde<E> {
V1(DataSerialize<E>),
V2(TensorData),
}

/// Deserialize the value into [`TensorData`].
fn deserialize_data<'de, E, De>(deserializer: De) -> Result<TensorData, De::Error>
where
E: Element + Deserialize<'de>,
De: serde::Deserializer<'de>,
{
#[cfg(feature = "record-backward-compat")]
{
let data = match TensorDataSerde::<E>::deserialize(deserializer)? {
TensorDataSerde::V1(data) => data.into_tensor_data(),
// NOTE: loading f32 weights with f16 precision will deserialize the f32 weights (bytes) first and then convert to f16
TensorDataSerde::V2(data) => data.convert::<E>(),
};
Ok(data)
}

#[cfg(not(feature = "record-backward-compat"))]
{
let data = TensorData::deserialize(deserializer).map_err(|e| {
serde::de::Error::custom(format!(
"{:?}\nThe internal data format has changed since version 0.14.0. If you are trying to load a record saved in a previous version, use the `record-backward-compat` feature flag. Once you have saved the record in the new format, you can disable the feature flag.\n",
e
))
})?;
let data = if let DType::QFloat(_) = data.dtype {
data // do not convert quantized tensors
} else {
data.convert::<E>()
};
Ok(data)
}
let data = TensorData::deserialize(deserializer).map_err(|e| {
serde::de::Error::custom(format!(
"{:?}\nThe internal data format has changed since version 0.14.0. If you are trying to load a record saved in a previous version, use the `record-backward-compat` feature flag with a previous version (<=0.16.0). Once you have saved the record in the new format, you can upgrade back to the current version.\n",
e
))
})?;
let data = if let DType::QFloat(_) = data.dtype {
data // do not convert quantized tensors
} else {
data.convert::<E>()
};
Ok(data)
}
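
The dtype dispatch at the end of `deserialize_data` can be sketched with std-only toy types (the enum below is invented; burn's real `DType` and `TensorData::convert` are richer): quantized data passes through untouched, because the packed values are only meaningful together with their quantization parameters, while everything else is converted to the requested element type.

```rust
// Toy stand-ins for burn's DType / TensorData; illustration only.
#[derive(Debug, PartialEq)]
enum ToyData {
    F64(Vec<f64>),
    F32(Vec<f32>),
    // Quantized payload: packed i8 values (quantization scheme elided).
    QFloat(Vec<i8>),
}

// Mirror of the rule above: do not convert quantized tensors;
// convert anything else to the target element type (f32 here).
fn convert_to_f32(data: ToyData) -> ToyData {
    match data {
        q @ ToyData::QFloat(_) => q, // leave quantized data untouched
        f @ ToyData::F32(_) => f,    // already the target type
        ToyData::F64(v) => ToyData::F32(v.into_iter().map(|x| x as f32).collect()),
    }
}

fn main() {
    let wide = ToyData::F64(vec![1.0, 2.0]);
    assert_eq!(convert_to_f32(wide), ToyData::F32(vec![1.0, 2.0]));

    // Quantized values pass through unchanged.
    let quant = ToyData::QFloat(vec![12, -7]);
    assert_eq!(convert_to_f32(quant), ToyData::QFloat(vec![12, -7]));
    println!("conversion rules ok");
}
```
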

/// This struct implements serde to lazily serialize and deserialize a float tensor
4 changes: 2 additions & 2 deletions crates/burn-tensor/src/lib.rs
@@ -1,8 +1,6 @@
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
// Allow deprecated `Data` and `DataSerialize`
#![allow(deprecated)]

//! This library provides multiple tensor implementations hidden behind an easy to use API
//! that supports reverse mode automatic differentiation.
@@ -59,6 +57,8 @@ mod cube_wgpu {
use crate::backend::{DeviceId, DeviceOps};
use cubecl::wgpu::WgpuDevice;

// Allow deprecated `WgpuDevice::BestAvailable`
#[allow(deprecated)]
impl DeviceOps for WgpuDevice {
fn id(&self) -> DeviceId {
match self {
