
Benchmark metrics: model FLOPs utilization for @onefact benchmark? #21

Open
jaanli opened this issue Aug 13, 2024 · 0 comments

jaanli commented Aug 13, 2024

Hi! My team at @onefact is excited to benchmark this awesome project for our RAG tasks.

Quick question: how can we accurately estimate the any-to-any MFU (model FLOPs utilization) metric? Because the model is any-to-any, we need a good sense of the pre-training data mix statistics (median token length per modality, etc.).
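
For reference, here is a minimal sketch of how we currently estimate MFU on our side, assuming the standard ~6·N FLOPs-per-token transformer approximation (from the PaLM/scaling-laws literature); for an any-to-any model this per-token cost would need to be weighted by the per-modality token statistics mentioned above, and all numbers below are illustrative:

```python
# Minimal MFU sketch, assuming the common 6*N FLOPs-per-token
# approximation for dense transformers. Illustrative only; an
# any-to-any model needs modality-weighted per-token FLOPs.

def model_flops_utilization(
    n_params: float,           # trainable parameter count N
    tokens_per_second: float,  # observed training throughput
    peak_flops: float,         # hardware peak, e.g. 312e12 for A100 BF16
) -> float:
    """MFU = achieved model FLOPs/s divided by hardware peak FLOPs/s."""
    achieved = 6.0 * n_params * tokens_per_second  # ~6N FLOPs per token
    return achieved / peak_flops

# Example: a 7B-parameter model at 3,000 tokens/s on one A100 (312 TFLOP/s BF16)
print(f"MFU: {model_flops_utilization(7e9, 3_000, 312e12):.1%}")  # ~40.4%
```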

We also need to run it on edge devices. Have you figured out how to distill it for iPhone 15 Pro Max devices? WasmEdge and WebAssembly have been working well for us; feel free to email me: [email protected].
