When digging deeper, this condition compares the schemas as structs, checking both the field order and the data types:

if table_schema.as_struct() != task_schema.as_struct()

If the dataframe sent to append does not have its columns in the same order as the table schema, the write fails because the structs turn out to be:
table schema - struct<1: a: optional timestamptz, 2: b: optional timestamptz, 3: x: optional string, 4: y: optional string>
(table columns in this order: a, b, x, y)
dataframe schema - struct<1: a: optional timestamptz, 2: b: optional timestamptz, 4: y: optional string, 3: x: optional string>
(dataframe columns in this order: a, b, y, x)
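To illustrate why the order matters, here is a minimal sketch (the schemas below are made up to mirror the example above, not taken from the real table) showing that two Iceberg schemas with identical fields but a different field order are not equal as structs:

```python
from pyiceberg.schema import Schema
from pyiceberg.types import NestedField, StringType, TimestamptzType

# Table schema with columns in the order a, b, x, y.
table_schema = Schema(
    NestedField(1, "a", TimestamptzType(), required=False),
    NestedField(2, "b", TimestamptzType(), required=False),
    NestedField(3, "x", StringType(), required=False),
    NestedField(4, "y", StringType(), required=False),
)

# Same fields and field ids, but y before x, mirroring the dataframe column order.
df_schema = Schema(
    NestedField(1, "a", TimestamptzType(), required=False),
    NestedField(2, "b", TimestamptzType(), required=False),
    NestedField(4, "y", StringType(), required=False),
    NestedField(3, "x", StringType(), required=False),
)

# StructType equality compares the field tuples positionally, so this is False
# even though the two schemas contain exactly the same fields.
print(table_schema.as_struct() == df_schema.as_struct())
```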
I think the schema validation could compare the data types of the columns instead of their order, or at least the error message could be more helpful; "Mismatch in fields" does not make sense here when every field matches.
Apache Iceberg version
0.6.1
Please describe the bug 🐞
While writing a dataframe to Iceberg through tbl.append(df), a schema validation of the table schema and the dataframe schema takes place.
This function in append

_check_schema_compatible(self.schema(), other_schema=df.schema)

does the schema validation. Here the table schema and the dataframe's pyarrow schema are converted to struct form and compared, taking both the order of the dataframe columns and their data types into account.
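A hypothetical reproduction of the failing call (the catalog name, table name, and values below are assumptions; only the column order and types mirror the report):

```python
import pyarrow as pa
from pyiceberg.catalog import load_catalog

# Hypothetical catalog and table identifiers.
catalog = load_catalog("default")
table = catalog.load_table("default.events")

# Columns a, b, y, x: same names and types as the table, but y before x.
df = pa.table({
    "a": pa.array([None], type=pa.timestamp("us", tz="UTC")),
    "b": pa.array([None], type=pa.timestamp("us", tz="UTC")),
    "y": pa.array(["foo"], type=pa.string()),
    "x": pa.array(["bar"], type=pa.string()),
})

table.append(df)  # raises ValueError: Mismatch in fields
```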
This results in the following error:
Traceback (most recent call last):
  File "/Users/apple/Projects/bright/brightmoney_collections_system/utils/index.py", line 172, in <module>
    dff = write_to_iceberg(
  File "/Users/apple/Projects/bright/brightmoney_collections_system/utils/index.py", line 163, in write_to_iceberg
    table.append(pyarrow_df)
  File "/Users/apple/Projects/bright/brightmoney_collections_system/venv/lib/python3.9/site-packages/pyiceberg/table/__init__.py", line 1057, in append
    _check_schema_compatible(self.schema(), other_schema=df.schema)
  File "/Users/apple/Projects/bright/brightmoney_collections_system/venv/lib/python3.9/site-packages/pyiceberg/table/__init__.py", line 175, in _check_schema_compatible
    raise ValueError(f"Mismatch in fields:\n{console.export_text()}")
ValueError: Mismatch in fields:
┏━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃    ┃ Table field                ┃ Dataframe field            ┃
┡━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ✅ │ 1: a: optional timestamptz │ 1: a: optional timestamptz │
│ ✅ │ 2: b: optional timestamptz │ 2: b: optional timestamptz │
│ ✅ │ 3: x: optional string      │ 3: x: optional string      │
│ ✅ │ 4: y: optional string      │ 4: y: optional string      │
└────┴────────────────────────────┴────────────────────────────┘
Yet there is no mismatch between the fields of the table and the dataframe.
Ideally, shouldn't the schema compatibility check ignore the order in which the dataframe columns are sent?
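As a possible workaround until the check is relaxed (a sketch, not an official recommendation), the pyarrow table can be reordered to match the Iceberg table's column order before appending, using the table and pyarrow_df objects from the traceback above:

```python
# Reorder the pyarrow table's columns to the order declared in the Iceberg
# table schema, then append; pyarrow's Table.select keeps the data types intact.
column_order = [field.name for field in table.schema().fields]
table.append(pyarrow_df.select(column_order))
```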