In pyarrow we differentiate between missing (null) values, which we define with a bitmask, and NaN float values.
From the dataframe interchange protocol specification we have understood that one can use NaN to indicate missing values, but that this does not need to be the case (one can also use NaN as a valid value).
From dataframe-api/protocol/dataframe_protocol.py, lines 195 to 213 in 4f7c1e0 (the `describe_null` docstring):

    """
    Return the missing value (or "null") representation the column dtype
    uses, as a tuple ``(kind, value)``.

    Kind:

        - 0 : non-nullable
        - 1 : NaN/NaT
        - 2 : sentinel value
        - 3 : bit mask
        - 4 : byte mask

    Value : if kind is "sentinel value", the actual value. If kind is a bit
    mask or a byte mask, the value (0 or 1) indicating a missing value. None
    otherwise.
    """
    pass
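To make the kind/value semantics above concrete, here is a minimal consumer-side sketch (the helper name and the numpy-based bit unpacking are my own illustration, not part of the protocol; it assumes an LSB-ordered bitmask, as Arrow uses) that turns a bit mask plus the reported `value` into a boolean "is missing" array:

```python
import numpy as np

def missing_mask_from_bitmask(validity_bits: np.ndarray, length: int, value: int) -> np.ndarray:
    """Boolean array that is True where an element is missing, given a packed
    validity bitmask and the `value` (0 or 1) that describe_null reports as
    marking a missing entry (kind == 3)."""
    bits = np.unpackbits(validity_bits, bitorder="little")[:length]
    return bits == value

# Arrow-style validity bitmap: a bit equal to 0 marks a missing entry, so value == 0.
bits = np.packbits(np.array([1, 0, 1], dtype=np.uint8), bitorder="little")
missing_mask_from_bitmask(bits, length=3, value=0)   # array([False,  True, False])
```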
There will be a discrepancy between pyarrow and pandas, for example, where NaN will be turned into a missing value. But we do not think it would be correct for pyarrow to change the null_count property, as the information about the difference would be lost for the libraries that would benefit from it. Also, the information in the bitmask and in null_count would then need to be made equal.
Is there a way a library could keep the behaviour of not treating NaNs as nulls?
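For concreteness, pyarrow's current behaviour can be illustrated like this (a small sketch added for illustration; only the bitmask-defined null counts as missing, while NaN stays a valid float):

```python
import pyarrow as pa

arr = pa.array([1.0, float("nan"), None])
arr.null_count                    # 1 -> only the None entry; NaN is a valid value
arr.is_null()                     # [false, false, true]
arr.is_null(nan_is_null=True)     # [false, true, true] -- treating NaN as null is opt-in
```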
My understanding here is that if PyArrow were exporting through the dataframe protocol, it would use option 3, indicating that a bit mask is used for null values, which means that NaN values should be treated as valid values.
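Put differently, a consumer's decision about whether NaN must be folded into "missing" should hinge entirely on the reported kind (a sketch; `column` is a hypothetical protocol column object):

```python
# column.describe_null for an Arrow-backed nullable column would be (3, 0):
# kind 3 = bit mask, and a bit equal to 0 marks a missing value.
kind, value = column.describe_null
treat_nan_as_missing = (kind == 1)   # only the NaN/NaT encoding implies that
```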
Thanks for opening this issue @AlenkaF. I agree with your request and with @kkraus14: that was the intent, and that is exactly why we spent so much time on allowing different ways to encode nulls and on having describe_null.
We discussed this yesterday, and it seems that something got lost in translation in the protocol test suite, and in a discussion on a Vaex PR. @honno took the action to investigate.
> There will be discrepancy between pyarrow and pandas,

Indeed, @jorisvandenbossche indicated that this is expected: roundtripping with pandas will lose the NaN/NA distinction, but that is due to a pandas design choice, and it does not mean NaN and NA aren't treated separately by the protocol.
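As an illustration of that pandas design choice (not of the protocol itself): converting an Arrow array that holds both a NaN and a null into a default float64 pandas column collapses the two.

```python
import pyarrow as pa

arr = pa.array([1.0, float("nan"), None])   # null_count == 1
s = arr.to_pandas()                         # float64 Series: [1.0, NaN, NaN]
int(s.isna().sum())                         # 2 -> NaN and null are no longer distinguishable
```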
(Connected issue in the arrow repo apache/arrow#34774)