Allow processing of duplicated attribute data columns #203

Merged · 6 commits · Feb 29, 2024
1 change: 1 addition & 0 deletions xl2times/transforms.py
@@ -468,6 +468,7 @@ def process_flexible_import_table(
known_columns = config.known_columns[datatypes.Tag.fi_t]
data_columns = [x for x in df.columns if x not in known_columns]

# TODO: Replace this with something similar to known columns from config
# Populate index columns
index_columns = [
"region",
10 changes: 8 additions & 2 deletions xl2times/utils.py
@@ -60,7 +60,14 @@ def explode(df, data_columns):
:return: Tuple with the exploded dataframe and a Series of the original
column name for each value in each new row.
"""
data = df[data_columns].values.tolist()
# Handle duplicate columns (https://pandas.pydata.org/docs/user_guide/duplicates.html)
if len(set(data_columns)) < len(data_columns):
cols = df.columns.to_list()
data_cols_idx = [idx for idx, val in enumerate(cols) if val in data_columns]
data = df.iloc[:, data_cols_idx].values.tolist()
else:
data = df[data_columns].values.tolist()
SamRWest (Collaborator) commented on Feb 29, 2024:
(edited - I misunderstood the logic here originally)
This doesn't throw an exception for austimes, but it seems to retain the duplicated columns, which then get turned into duplicate rows after the explode, e.g. the first row, which looks like this when passed into explode():
[screenshot: dataframe row as passed into explode()]

just gets exploded into this at the end of the function, retaining the duplicate fixom columns (I appended the name column for clarity):

[screenshot: exploded rows with duplicate fixom values]
This happens to be ok in this particular example because the values are identical, but if, say, the second fixom value was None, then dropping first duplicate rows would give the wrong answer later.

Whereas I understood that we want to 'fill-right' their values before exploding (e.g. using the merge function from #198), so that any missing values in the right-most column get filled with non-missing values from the next column to the left, etc.

Suggest we replace this (assuming the attribute/alias stuff is sorted at this point) with:

if len(set(data_columns)) < len(data_columns):
    df = _merge_duplicate_named_columns(df)  # see #198
olejandro (Member Author) commented:
@SamRWest, as I've tried to explain in #198, there may be an attribute alias (here START is an alias of NCAP_START), e.g.:

PROCESS  NCAP_START  START  NCAP_START
PRC1     2012               2020
PRC2     2012        2016
PRC3                 2016   2020

Using your approach will turn this into:

PROCESS  START  NCAP_START
PRC1            2020
PRC2     2016   2012
PRC3     2016   2020

It will later be transformed into:

PROCESS  ATTRIBUTE   VALUE
PRC1     NCAP_START  2020
PRC2     NCAP_START  2016
PRC2     NCAP_START  2012
PRC3     NCAP_START  2016
PRC3     NCAP_START  2020

Resulting in an incorrect NCAP_START value for PRC2, because the rows below overwrite the rows above.

What I try to ensure with my changes is that the original table results in:

PROCESS  ATTRIBUTE   VALUE
PRC1     NCAP_START  2012
PRC1     NCAP_START  2020
PRC2     NCAP_START  2012
PRC2     NCAP_START  2016
PRC3     NCAP_START  2016
PRC3     NCAP_START  2020

This way the NCAP_START value for PRC2 will be correct.
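To make the order-dependence concrete, here is a toy pandas sketch of this behaviour (assumptions: START has already been normalized to NCAP_START, and `drop_duplicates(keep="last")` stands in for the later rows-overwrite step):

```python
import pandas as pd

# Toy reproduction of the example above, after alias normalization:
# three data columns all named NCAP_START, with gaps.
df = pd.DataFrame(
    [
        ["PRC1", 2012.0, None, 2020.0],
        ["PRC2", 2012.0, 2016.0, None],
        ["PRC3", None, 2016.0, 2020.0],
    ],
    columns=["process", "NCAP_START", "NCAP_START", "NCAP_START"],
)
data_columns = ["NCAP_START"] * 3

# Positional selection, as in the patched explode(), since label-based
# selection is unreliable with duplicate column names.
idx = [i for i, c in enumerate(df.columns) if c in data_columns]
values = df.iloc[:, idx].values.tolist()

long = (
    df[["process"]]
    .assign(value=values)
    .explode("value", ignore_index=True)
    .assign(attribute="NCAP_START")
    .dropna(subset=["value"])  # remove rows with no VALUE
)
# Later processing overwrites earlier rows; keep="last" mimics that and
# recovers the right-most non-missing NCAP_START per process.
final = long.drop_duplicates(subset=["process", "attribute"], keep="last")
```

Because explode preserves left-to-right column order within each row, the "last row wins" step lands on the intended value (2016 for PRC2) rather than on whichever copy a merge happened to keep.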

olejandro (Member Author) commented:
I think the approach proposed in #198 should be applied in normalize_column_aliases, and only to columns not in the config.all_attributes and config.attr_aliases sets.

olejandro (Member Author) commented on Feb 29, 2024:
> This happens to be ok in this particular example because the values are identical, but if, say, the second fixom value was None, then dropping first duplicate rows would give the wrong answer later.

@SamRWest, I've modified Demo 7 to cover this case. It seems to be working fine. Also, we normally drop rows with invalid values before doing any kind of overwriting.

SamRWest (Collaborator) commented:
Oh ok, I think I get it (I didn't realise how aliases worked before). The explode() and later dropping of duplicate rows kind of does the work of resolving duplicates for you.

If you have time, it'd probably be worth writing a unit test to spell this logic out. But if you've checked that it matches VEDA, then that's the main thing, so I'm happy.

olejandro (Member Author) commented:
Ok, sure! Tbh, I have never written one, but there is always a first time. 😃

SamRWest (Collaborator) commented:
Nothing to it: just make a fake dataframe (like your examples above), run it through the appropriate transforms, and assert what it should look like on the other end.
All the boilerplate is already set up in test_transforms.py, so just add a new test_whatever() function there, and it'll get checked on every CI run, so we don't accidentally change the desired behaviour later.
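A test along those lines might look like the sketch below; the explode here is a trimmed stand-in for the utils.py function (value-column name and return shape assumed from the diff above), not the actual test_transforms.py boilerplate:

```python
import pandas as pd

def explode(df, data_columns):
    # Trimmed stand-in for xl2times.utils.explode, including the
    # duplicate-column branch under discussion; for illustration only.
    if len(set(data_columns)) < len(data_columns):
        cols = df.columns.to_list()
        data_cols_idx = [i for i, c in enumerate(cols) if c in data_columns]
        data = df.iloc[:, data_cols_idx].values.tolist()
    else:
        data = df[data_columns].values.tolist()
    other_columns = [c for c in df.columns.values if c not in data_columns]
    df = df[other_columns].assign(value=data)
    nrows = df.shape[0]
    df = df.explode("value", ignore_index=True)
    names = pd.Series(data_columns * nrows, index=df.index, dtype=str)
    keep = df["value"].notna()  # remove rows with no VALUE
    return df[keep], names[keep]

def test_explode_with_duplicate_columns():
    df = pd.DataFrame(
        [["PRC1", 2012.0, 2020.0], ["PRC2", 2012.0, None]],
        columns=["process", "fixom", "fixom"],
    )
    exploded, names = explode(df, ["fixom", "fixom"])
    # One row per non-missing value, original left-to-right order preserved.
    assert exploded["value"].tolist() == [2012.0, 2020.0, 2012.0]
    assert names.tolist() == ["fixom", "fixom", "fixom"]
```

Asserting on both the exploded values and the returned column names pins down exactly the ordering behaviour that the duplicate-row resolution later relies on.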


other_columns = [
colname for colname in df.columns.values if colname not in data_columns
]
@@ -69,7 +76,6 @@ def explode(df, data_columns):
df = df.assign(value=data)
nrows = df.shape[0]
df = df.explode(value_column, ignore_index=True)

names = pd.Series(data_columns * nrows, index=df.index, dtype=str)
# Remove rows with no VALUE
index = df[value_column].notna()