
Compression algorithm of openHistorian #88

Open
deathfromheaven opened this issue Sep 27, 2022 · 2 comments

Comments

@deathfromheaven

Dear developers, thank you for your contribution to the GSF.
Recently, some questions about openHistorian's compression algorithm have come up. At present, I am using openHistorian to receive phasor data measured by a self-developed device. I would therefore like to improve openHistorian's built-in compression algorithm (the TSSC algorithm) to better suit our team's phasor data.
My questions are: where can I find the code of the TSSC algorithm in the GSF framework, and may I modify this algorithm?
Thank you for your kind attention; I look forward to your prompt reply!

@ritchiecarroll
Member

Here's the C# code used for compression / decompression:

https://github.com/GridProtectionAlliance/openHistorian/blob/7d8643dc7af0239eb791020c187e10047148e2aa/Source/Libraries/openHistorian.Core/Snap/Encoding/HistorianFileEncoding.cs

See the Encode / Decode methods.

You can write your own algorithm by giving it its own EncodingDefinition, basically a unique Guid and some type definitions, similar to this:

https://github.com/GridProtectionAlliance/openHistorian/blob/master/Source/Libraries/openHistorian.Core/Snap/Definitions/HistorianFileEncodingDefinition.cs

You are not restricted to the specified HistorianKey / HistorianValue types; you can use your own key/value pair definition.
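As a rough illustration of the pattern described above, a custom encoding definition would be registered under its own unique Guid, mirroring the linked HistorianFileEncodingDefinition. This is only a sketch: `MyTeamEncodingDefinition`, the namespaces, and the placeholder Guid are hypothetical, and the actual base-class and method signatures should be checked against the linked source files before use.

```csharp
// Sketch of a custom encoding definition for openHistorian, modeled on
// HistorianFileEncodingDefinition in the linked repository.
// NOTE: namespaces, base classes, and signatures here are assumptions --
// verify them against the actual GSF.Snap / openHistorian.Core sources.
using System;
using GSF.Snap.Encoding;  // assumed namespace for EncodingDefinition

public static class MyTeamEncodingDefinition  // hypothetical name
{
    // A freshly generated Guid uniquely identifies your algorithm so the
    // historian can select it at runtime. Replace the placeholder below
    // with your own Guid (e.g., from Guid.NewGuid()).
    public static readonly EncodingDefinition TypeGuid =
        new EncodingDefinition(new Guid("00000000-0000-0000-0000-000000000000"));

    // The matching encoder class would then override Encode / Decode,
    // analogous to HistorianFileEncoding.Encode / Decode in the linked
    // file, and may operate on your own key/value pair types instead of
    // HistorianKey / HistorianValue.
}
```

The important design point is the unique Guid: it is what lets the historian distinguish your team's TSSC variant from the stock encoding when opening archive files.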

I'm sure the compression can be improved if you are willing to invest more CPU cycles; however, openHistorian's design goal is to focus on speed. That said, for synchrophasor data the compression results are already quite good.

If you have a different kind of data in mind, certainly another algorithm may do better.

@deathfromheaven
Author

OK, thank you very much!
