
Can it support local large language models? #1

Open
Otoliths opened this issue Aug 25, 2024 · 6 comments

Comments

@Otoliths

@wilkox

Thanks for creating GPTscreenR! It’s a super useful tool for scoping reviews. I noticed that it currently supports GPT-4 through the OpenAI API, which works great, but I was wondering if you’ve considered adding support for local large language models in future updates. This could make the tool more flexible, especially for users who want to cut down on API costs or work in environments where internet access isn’t reliable.

I recently came across a paper called “Evaluating the effectiveness of large language models in abstract screening: a comparative analysis,” and it got me thinking about how adding this capability could really broaden GPTscreenR’s appeal.

Just a thought—thanks for all your hard work on this!

Cheers,
Liuyong

@wilkox
Owner

wilkox commented Aug 25, 2024

Thanks for your kind words Liuyong. Do you have a particular local LLM in mind? It wouldn't be too hard to adapt GPTscreenR to use a local model.

@Otoliths
Author

> Thanks for your kind words Liuyong. Do you have a particular local LLM in mind? It wouldn't be too hard to adapt GPTscreenR to use a local model.

Thanks. One local LLM platform I'd recommend is Ollama. It lets users run various large language models (e.g., llama3.1) directly on their local machines, which could be a great fit for GPTscreenR.

@Otoliths
Author

@wilkox Hi, I wanted to recommend the SYNERGY dataset (26 systematic reviews) as a potential resource for testing and evaluating GPTscreenR's performance. It's a free dataset that might offer valuable insight into how the tool performs in real-world scenarios, which could help guide future updates and improvements to the package.

@wilkox
Owner

wilkox commented Oct 1, 2024

Thanks for these suggestions. I do think it would be a good idea to add support for local models; to do this I'll need to make some modifications to my lemur package. It might take me some time to get around to it, though.

@wilkox
Owner

wilkox commented Oct 15, 2024

@Otoliths I've just updated GPTscreenR to support local LLMs with ollama. You'll need to install the new lemur 0.2.0 first with `install_github("wilkox/lemur")`. You can try it with `screen(..., service = "ollama", model = "llama3.2")`.
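For anyone landing here later, a minimal end-to-end sketch of the steps above. This assumes the model has already been pulled with `ollama pull llama3.2` and that Ollama is running locally; the shape of the `sources` object and any arguments beyond `service` and `model` are assumptions, not confirmed from the package docs.

```r
# Sketch only: install lemur 0.2.0 and GPTscreenR from GitHub
# (requires the remotes package: install.packages("remotes"))
remotes::install_github("wilkox/lemur")
remotes::install_github("wilkox/GPTscreenR")

library(GPTscreenR)

# `sources` is assumed to hold the titles/abstracts to screen,
# prepared however GPTscreenR expects; see the package README.
result <- screen(
  sources,
  service = "ollama",   # route requests to a local Ollama server
  model   = "llama3.2"  # any model already pulled with `ollama pull`
)
```

The `service`/`model` arguments are the ones shown in the comment above; everything else is a placeholder to make the call sequence concrete.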

@Otoliths
Author

@wilkox Awesome! I can’t wait to try it out. I’ll install the new version of lemur right away and test ollama with the llama3.2 model in GPTscreenR.
