try-badger tables #137
base: main
Conversation
block_timestamp,
tx_hash,
contract_address,
event_inputs:_id::string as badge_id,
Please either use raw topics / data or silver__decoded_logs. event_inputs will be deprecated in the next two weeks.
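If the decoded-logs route is taken, the select might look roughly like this; the `decoded_flat` column and the `_id` field name are assumptions about the decoded payload layout, not confirmed by this PR:

```sql
-- Sketch only: read the badge id from silver__decoded_logs instead of
-- the soon-to-be-deprecated event_inputs column. Field names illustrative.
SELECT
    block_number,
    block_timestamp,
    tx_hash,
    contract_address,
    decoded_flat :"_id" :: STRING AS badge_id
FROM {{ ref('silver__decoded_logs') }}
```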
SELECT
  MAX(
    _inserted_timestamp
  ) :: DATE - 1
No need to look back here; we should only look at newly inserted rows.
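A minimal sketch of the no-lookback incremental predicate the reviewer is asking for, using standard dbt macros (`{{ this }}` refers to the model's own existing table):

```sql
-- Sketch: only pull rows newer than what the model already contains,
-- instead of re-scanning from MAX(_inserted_timestamp)::DATE - 1.
{% if is_incremental() %}
AND _inserted_timestamp >= (
    SELECT MAX(_inserted_timestamp)
    FROM {{ this }}
)
{% endif %}
```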
_log_id,
_inserted_timestamp,
contract_address AS network_address,
event_inputs:_id::string AS badge_id,
Please either use raw topics / data or silver__decoded_logs. event_inputs will be deprecated in the next two weeks.
AND _inserted_timestamp >= (
  SELECT
    MAX(
      _inserted_timestamp
same comment as above
SELECT
  MAX(
    _inserted_timestamp
  ) :: DATE - 1
same as above
) }}

SELECT
  _log_id,
Let's pull in event_index and drop _log_id as a column in gold for all models. _log_id should still be the PK in silver, though.
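The split the reviewer describes could be sketched like this (the silver model name is a placeholder):

```sql
-- Sketch of the gold layer: expose event_index to consumers and drop
-- _log_id, which stays behind as the silver model's incremental PK.
SELECT
    block_number,
    block_timestamp,
    tx_hash,
    event_index,   -- replaces _log_id in the gold layer
    badge_id
FROM {{ ref('silver__badger_badges') }}  -- placeholder model name
```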
description: 'This table contains all badge movements from try badger'

columns:
  - name: _LOG_ID
replace with event index
block_number,
block_timestamp,
tx_hash,
_log_id,
replace with event index in gold, keep _log_id as PK
description: 'The block timestamp at which the badge was moved.'
- name: TX_HASH
  description: 'The transaction hash in which the badge was moved.'
- name: _LOG_ID
replace with event index in gold, keep _log_id as PK
Is tx_hash always unique here, or should the key be call_id? Please also add dbt tests to silver.
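A minimal schema.yml sketch for the requested silver tests; the model name is a placeholder:

```yaml
# Hypothetical silver-layer tests: _log_id as the unique, non-null key.
version: 2
models:
  - name: silver__badger_badges   # placeholder model name
    columns:
      - name: _log_id
        tests:
          - unique
          - not_null
```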
Hi @austinFlipside, made the review changes. Let me know if you find more issues :)
The folders are good, but the files are named core__*, so they will be built in the core schema. Please change core__ to badger__ in the file names and the corresponding .ymls to change the schema name.
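One way to route the renamed models into their own schema, sketched with standard dbt config (project and folder names are placeholders; this repo may instead derive the schema from the file-name prefix):

```yaml
# dbt_project.yml sketch: models under models/badger build into a
# custom "badger" schema rather than the default/core one.
models:
  my_project:          # placeholder project name
    badger:
      +schema: badger
```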
I wish we had ABIs to drive decoded logs for these, because I suspect we could support custom curated models built off of the decoded logs. V2 is coming and will have decoded ABIs on Polygonscan, which I'll submit for decoding. In the meantime I am very eager to make V1 analysis more accessible. Thank you @antonyip for moving this forward. Looking forward to QAing these with you live and working on changes. Another point: v2 of Badger is quite different. It would be cool to handle v1 and v2 in a cohesive way if possible. Unfortunately, it's tough for me to speak well to that until we're fully deployed and have data volume to compare. Maybe we should set the schema to
@antonyip I think @drakedanner makes good points about v1 vs. v2. We do not want to have to deprecate / change the tables when v2 launches. I am cool with the schema being either
Done.
No description provided.