
Allow processing of duplicated attribute data columns #203

Merged · 6 commits · Feb 29, 2024

Conversation

olejandro (Member) commented Feb 28, 2024

Implications of duplicate labels for operations in pandas are described here: https://pandas.pydata.org/docs/user_guide/duplicates.html#consequences-of-duplicate-labels

This PR ensures the data in a duplicated column is returned only once (and not e.g. twice if 2 columns with the same name are present) during processing.

```python
# excerpt from the PR diff
    data_cols_idx = [idx for idx, val in enumerate(cols) if val in data_columns]
    data = df.iloc[:, data_cols_idx].values.tolist()
else:
    data = df[data_columns].values.tolist()
```
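A minimal reproduction (an assumed sketch, not xl2times code) of the duplicate-label behaviour the PR works around: label-based selection returns every column matching a label, so a duplicated column's data comes back once per physical copy, while positional selection takes each column exactly once.

```python
import pandas as pd

df = pd.DataFrame(
    [[2012, 2016, 2020]], columns=["NCAP_START", "START", "NCAP_START"]
)
data_columns = ["NCAP_START", "START"]

# Label-based selection: both NCAP_START columns are returned for one label.
by_label = df[["NCAP_START"]]
assert by_label.shape == (1, 2)

# Positional selection, as in the PR: each physical column is taken once.
cols = df.columns
data_cols_idx = [idx for idx, val in enumerate(cols) if val in data_columns]
data = df.iloc[:, data_cols_idx].values.tolist()
assert data == [[2012, 2016, 2020]]
```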
@SamRWest (Collaborator) commented Feb 29, 2024

(edited - I misunderstood the logic here originally)
This doesn't throw an exception for austimes, but it seems to retain the duplicated columns, which then get turned into duplicate rows after the explode. E.g. the first row, which looks like this when passed into explode():
[screenshot: row with duplicate fixom columns before explode()]

just gets exploded into this at the end of the function, retaining the duplicate fixom columns (I appended the name column for clarity):
[screenshot: exploded rows with duplicate fixom values]
This happens to be OK in this particular example because the values are identical, but if, say, the second fixom value were None, then dropping the first of the duplicate rows would give the wrong answer later.

Whereas I understood that we want to 'fill-right' their values before exploding (e.g. using the merge function from #198), so that any missing values in the right-most column get filled with non-missing values from the next column to the left, and so on.

Suggest we replace this (assuming the attribute/alias stuff is sorted at this point) with:

```python
if len(set(data_columns)) < len(data_columns):
    df = _merge_duplicate_named_columns(df)  # see #198
```
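A hypothetical sketch of the fill-right merge being proposed (the actual `_merge_duplicate_named_columns` helper from #198 is not shown in this thread, so this is an assumed implementation of the idea):

```python
import numpy as np
import pandas as pd


def merge_duplicate_named_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Fill-right merge: for each repeated column name, keep one column whose
    missing values are filled from the columns to its left (right-most wins)."""
    out = {}
    for name in df.columns.unique():
        same_name = df.loc[:, df.columns == name]  # all columns with this name
        merged = same_name.iloc[:, 0]
        for i in range(1, same_name.shape[1]):
            # values from the column further right win where present
            merged = same_name.iloc[:, i].combine_first(merged)
        out[name] = merged
    return pd.DataFrame(out)


# The alias example from the reply below: PRC2's right-most NCAP_START is
# empty, so its left NCAP_START value (2012) survives alongside START=2016.
df = pd.DataFrame(
    [[2012, np.nan, 2020], [2012, 2016, np.nan], [np.nan, 2016, 2020]],
    columns=["NCAP_START", "START", "NCAP_START"],
    index=["PRC1", "PRC2", "PRC3"],
)
merged = merge_duplicate_named_columns(df)
```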

olejandro (Member, Author) commented:

@SamRWest, as I've tried to explain in #198, there may be an attribute alias, e.g.:

```
PROCESS  NCAP_START  START  NCAP_START
PRC1     2012               2020
PRC2     2012        2016
PRC3                 2016   2020
```

Using your approach will turn this into:

```
PROCESS  START  NCAP_START
PRC1            2020
PRC2     2016   2012
PRC3     2016   2020
```

It will later be transformed into:

```
PROCESS  ATTRIBUTE   VALUE
PRC1     NCAP_START  2020
PRC2     NCAP_START  2016
PRC2     NCAP_START  2012
PRC3     NCAP_START  2016
PRC3     NCAP_START  2020
```

Resulting in an incorrect NCAP_START value for PRC2, because the rows below overwrite the rows above.

What I try to ensure with my changes is that the original table results in:

```
PROCESS  ATTRIBUTE   VALUE
PRC1     NCAP_START  2012
PRC1     NCAP_START  2020
PRC2     NCAP_START  2012
PRC2     NCAP_START  2016
PRC3     NCAP_START  2016
PRC3     NCAP_START  2020
```

This way the NCAP_START value for PRC2 will be correct.
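The "rows below overwrite the rows above" semantics can be sketched as a keep-last de-duplication over the long-form table (an assumed simplification of what the later transforms do):

```python
import pandas as pd

# The desired long-form table, with values in original column order so the
# right-most (alias) value for each process comes last.
long_form = pd.DataFrame(
    {
        "PROCESS": ["PRC1", "PRC1", "PRC2", "PRC2", "PRC3", "PRC3"],
        "ATTRIBUTE": ["NCAP_START"] * 6,
        "VALUE": [2012, 2020, 2012, 2016, 2016, 2020],
    }
)

# Later rows overwrite earlier ones, i.e. "keep last" per (PROCESS, ATTRIBUTE).
final = long_form.drop_duplicates(subset=["PROCESS", "ATTRIBUTE"], keep="last")
assert final.set_index("PROCESS")["VALUE"].to_dict() == {
    "PRC1": 2020,
    "PRC2": 2016,
    "PRC3": 2020,
}
```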

olejandro (Member, Author) commented:

I think the approach proposed in #198 should be applied in normalize_column_aliases, and only to columns not in the config.all_attributes and config.attr_aliases sets.

olejandro (Member, Author) commented Feb 29, 2024

> This happens to be ok in this particular example because the values are identical, but if, say, the second fixom value was None, then dropping first duplicate rows would give the wrong answer later.

@SamRWest, I've modified Demo 7 to cover this case. It seems to be working fine. Also, we normally drop rows with invalid values before doing any kind of overwriting.

@SamRWest (Collaborator) commented:

Oh ok, I think I get it (didn't realise how aliases worked before). The explode() and later dropping of duplicate rows kind of does the work of resolving duplicates for you.

If you have time, it'd probably be worth writing a unit test to spell this logic out. But if you've checked that it matches VEDA, then that's the main thing, so I'm happy.

olejandro (Member, Author) commented:

Ok, sure! Tbh, I have never written one, but there is always the first time. 😃

@SamRWest (Collaborator) commented:

Nothing to it: just make a fake dataframe (like your examples above), run it through the appropriate transforms, and assert what it should look like on the other end.
All the boilerplate is already set up in test_transforms.py, so just add a new test_whatever() function in there, and it'll get checked on every CI run so we don't accidentally change the desired behaviour later.
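A sketch of what such a test could look like (the transform here is a hypothetical stand-in, and the test name is a placeholder; the real boilerplate and transforms live in test_transforms.py and the package itself):

```python
import pandas as pd


def explode_attribute_columns(df: pd.DataFrame, attribute: str) -> pd.DataFrame:
    """Stand-in transform: melt all columns named `attribute` to long form,
    preserving the original left-to-right column order."""
    values = df.loc[:, df.columns == attribute]
    rows = [
        {"PROCESS": process, "ATTRIBUTE": attribute, "VALUE": value}
        for process, row in zip(df["PROCESS"], values.to_numpy())
        for value in row
    ]
    return pd.DataFrame(rows)


def test_duplicate_columns_keep_column_order():
    df = pd.DataFrame(
        [["PRC2", 2012, 2016]], columns=["PROCESS", "NCAP_START", "NCAP_START"]
    )
    out = explode_attribute_columns(df, "NCAP_START")
    # column order is preserved, so the right-most (alias) value comes last
    # and wins when later rows overwrite earlier ones
    assert out["VALUE"].tolist() == [2012, 2016]
```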

olejandro (Member, Author) commented Feb 29, 2024

I've updated the benchmarks to cover a case like this:

```
PROCESS  NCAP_START  START  NCAP_START
PRC1     2012               2020
PRC2     2012        2016
PRC3                 2016   2020
```

Currently the tests are failing on Demos7 due to additional records generated by an alias. The records appear in the correct order, so the input for GAMS will be correct.

olejandro (Member, Author) commented:

Now that the additional records are gone, the tests seem to be failing because the updated Demo 7 and Demo 7r benchmarks fail when run on main. This makes sense, because main cannot handle the type of duplicate columns introduced in Demo 7. @siddharth-krishna I guess we'll have to override the check?

@siddharth-krishna (Collaborator) left a comment:

Failing tests: would it work if we didn't update the demos-xlsx ref in this PR, merged it in, and updated the benchmarks in the next one?

+1 for having a unit test with the nice example you made. (Perhaps that makes updating the benchmarks redundant? Maybe for future PRs it may be easier to update unit tests than the benchmarks repo)

olejandro (Member, Author) commented:

Thanks guys! Will revert the change to benchmarks version and add it together with a unit test in my next PR.

@olejandro olejandro merged commit b12287d into main Feb 29, 2024
1 check passed
@olejandro olejandro deleted the olex/process-dupes branch February 29, 2024 14:54