Fix language usage in AI documentation #502

Merged 2 commits on Aug 19, 2024
12 changes: 6 additions & 6 deletions en/ai/README.md
@@ -1,25 +1,25 @@
# AI functionality in JabRef

Since version 6, JabRef has AI-functionality build in.
Since version 6, JabRef has AI functionality built in.

- AI can generate a summary of a research paper
- One can also chat with papers using a "smart" AI assistant
- You can also chat with papers using a "smart" AI assistant
Member comment: Different style. I opted for "one", because the other JabRef documentation does not have any "you". But I think it is OK. Consistency can be done later.


## AI summary tab

On activation of this tab, AI will generate for you a quick overview of the paper.
When you activate this tab, AI will generate a quick overview of the paper for you.

![AI summary tab screenshot](../.gitbook/assets/AiSummary.png)

The AI will mention main objectives of the research, methods used, key findings, and conclusions.
The AI will mention the main objectives of the research, methods used, key findings, and conclusions.

## AI chat tab

Here, one can ask questions, which are answered by the LLM.
Here, you can ask questions, which are answered by the LLM.

![AI chat tab screenshot](../.gitbook/assets/AiChat.png)

In this window you can see the following elements:
In this window, you can see the following elements:

- Chat history with your messages
- Prompt for sending messages
44 changes: 24 additions & 20 deletions en/ai/ai-providers-and-api-keys.md
@@ -2,51 +2,55 @@

## What is an AI provider?

An AI provider is a company or a service that gives you the ability to send requests to and receive responses from LLM. In order to get the response, you also need to send an API key to authenticate and manage billing.
An AI provider is a company or a service that gives you the ability to send requests to and receive responses from an LLM. In order to get the response, you also need to send an API key to authenticate and manage billing.

Here is the list of AI providers we currently support: OpenAI, Mistral AI, Hugging Face. Others include Google Vertex AI, Microsoft Azure OpenAI, Anthropic, etc. You can find more information on this topic on [langchain4j documentation website](https://docs.langchain4j.dev/category/language-models). This is the framework that we use in JabRef. This page lists available integrations.
Here is the list of AI providers we currently support: OpenAI, Mistral AI, Hugging Face. Others include Google Vertex AI, Microsoft Azure OpenAI, Anthropic, etc. You can find more information on this topic on the [`langchain4j` documentation website](https://docs.langchain4j.dev/category/language-models). This is the framework that we use in JabRef. This page lists available integrations.

## What is an API key?

An API key or API token is like a password that lets an app or program access information or services from another
app or website, such as an LLM service. It ensures only authorized users or applications can use
app or website, such as an LLM service. It ensures that only authorized users or applications can use
the service. For example, when an app uses an LLM service to generate text or answer questions, it includes its
unique API key in the request. The LLM service checks this key to make sure the request is legitimate before
providing the response. This process keeps the data secure and helps track how the service is being used.
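
For illustration, an authenticated request to an OpenAI-style chat endpoint looks roughly like the sketch below; this is only a minimal example, and the model name is an assumption, not a recommendation:

```bash
# Minimal sketch: an authenticated request to an LLM service.
# The secret API key travels in the Authorization header; without a
# valid key, the service rejects the request.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```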

## Which AI provider should I use?

We recomend you chosing the OpenAI.
For now, we recommend choosing [OpenAI](https://platform.openai.com/docs/models).

For Mistral AI you need to make a subscription, while for OpenAI you can send money one time.
For Mistral AI, you might need a subscription, whereas for OpenAI, a one-time payment option is available.

Hugging Face gives you access to numerous count of models for free. But it will take a very long time for Hugging Face to find a free computer resources for you, and the response time will be also long.
Hugging Face gives you access to a large number of models for free.
However, it may take a long time for Hugging Face to allocate free computing resources, resulting in longer response times.

In order to use any service, you always need an API key.
Please head to the [AI user documentation](https://docs.jabref.org/ai/ai-providers-and-api-keys) to learn how to receive a key and where to enter it in the preferences.

## How to get an API key?

### How to get an OpenAI API key?

To get an OpenAI API key you need to perform these steps:
To get an OpenAI API key, follow these steps:

1. Login or create an account on [OpenAI website](https://platform.openai.com/login?launch)
2. Go to "API" section
3. Go to "Dashboard" (upper-right corner)
4. Go to "API keys" (left menu)
1. Log in or create an account on the [OpenAI website](https://platform.openai.com/login?launch)
2. Go to the "API" section
3. Go to the "Dashboard" (upper-right corner)
4. Go to the "API keys" (left menu)
5. Click "Create new secret key"
6. Click "Create secret key"
7. OpenAI will show you the key
7. OpenAI will display the key
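
If you want to check that the new key works before entering it in JabRef, one option (a sketch, not part of the official steps) is to list the models the key can access:

```bash
# Optional sanity check: a valid key returns a JSON list of models,
# an invalid one returns an authentication error.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```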

### How to get a Mistral AI API key?

1. Login or create an account on [Mistral AI website](https://auth.mistral.ai/ui/login)
1. Log in or create an account on the [Mistral AI website](https://auth.mistral.ai/ui/login)
2. Go to the [dashboard -> API keys](https://console.mistral.ai/api-keys/)
3. There you will find a button "Create new key". Click on it
4. You can optionally setup a name to API key and its expiration date
4. You can optionally set up a name for the API key and its expiration date
5. After the creation, you will see "Your key is:" with a string of random characters after that
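
As with OpenAI, you can optionally verify the key from a terminal. This is only a sketch, assuming Mistral AI's OpenAI-style `/v1/models` endpoint:

```bash
# Optional check: list the models available to this Mistral AI key.
curl https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $MISTRAL_API_KEY"
```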

### How to get a Hugging Face API key?

Hugging Face call an "API key" as "Access Token". It does not make much difference, you can interchangably use either "API key", or "API token", or "access token".
Hugging Face refers to an "API key" as an "Access Token". It does not make much difference; you can use "API key", "API token", and "access token" interchangeably.

1. [Login](https://huggingface.co/login) or [create account](https://huggingface.co/join) on Hugging Face
2. Go to [create access token](https://huggingface.co/settings/tokens/new?)
@@ -56,9 +60,9 @@ Hugging Face call an "API key" as "Access Token". It does not make much differen

## What should I do with the API key and how can I enter it in JabRef?

Don't share the key to anyone, it's a secret that was created only for your account. Don't enter this key to unknown and unverfied services.
Do not share the key with anyone; it is a secret that was created only for your account. Do not enter this key into unknown or unverified services.

Now you need to copy and paste it in JabRef preferences. To do this:
Now you need to copy and paste it into JabRef preferences. To do this:

1. Launch JabRef
2. Go "File" -> "Preferences" -> "AI" (a new tab!)
@@ -76,9 +80,9 @@ If you have some money on your credit balance, you can chat with your library!

### OpenAI

In order to increase your credit balance on OpenAI, do this:
To increase your credit balance on OpenAI, follow these steps:

1. Add payment method [there](https://platform.openai.com/settings/organization/billing/payment-methods).
1. Add a payment method [here](https://platform.openai.com/settings/organization/billing/payment-methods).
2. Add credit balance on [this](https://platform.openai.com/settings/organization/billing/overview) page.

### Mistral AI
@@ -87,4 +91,4 @@ Make the subscription on [their website](https://console.mistral.ai/billing/subs

### Hugging Face

You don't have to pay any cent for Hugging Face in order to send requests to LLMs. Though, the speed is very slow.
You do not have to pay anything for Hugging Face in order to send requests to LLMs. However, the response speed is very slow.
28 changes: 14 additions & 14 deletions en/ai/local-llm.md
@@ -3,32 +3,32 @@
Notice:

1. This tutorial is intended for expert users
2. Local LLM model requires a lot of computational power
3. Smaller models typically have less performance then bigger ones like OpenAI models
2. A local LLM requires a lot of computational power
3. Smaller models typically have lower performance than bigger ones like OpenAI models

## General explanation
## High-level explanation

You can use any program that will create a server with OpenAI compatible API.
You can use any program that creates a server with an OpenAI-compatible API.

After you have started your service, you can do this:

1. The "Chat Model" field in AI preference is editable, so you can write any model you have downloaded
2. There is a field "API base URL" in "Expert Settings" where you need to supply the address of an OpenAI API compatible server
1. The "Chat Model" field in AI preferences is editable, so you can enter any model you have downloaded
2. There is a field called "API base URL" in "Expert Settings" where you need to provide the address of an OpenAI-compatible API server

Voi la! You can use a local LLM right away in JabRef.
Voilà! You can use a local LLM right away in JabRef.

## Step-by-step guide for `ollama`

The following steps guide you how to use `ollama` for downloading and running local LLMs.
The following steps show how to use `ollama` to download and run local LLMs.

1. Install `ollama` from [their website](https://ollama.com/download)
2. Select a model that you want to run. The `ollama` provides [a big list of models](https://ollama.com/library) to choose from (we recommend you to try [`gemma2:2b`](https://ollama.com/library/gemma2:2b), or [`mistral:7b`](https://ollama.com/library/mistral), or [`tinyllama`](https://ollama.com/library/tinyllama))
3. When you selected your model, type `ollama pull <MODEL>:<PARAMETERS>` in your terminal. `<MODEL>` refers to the model name like `gemma2` or `mistral`, and `<PARAMETERS>` referes to parameters count like `2b` or `9b`
2. Select a model that you want to run. `ollama` provides [a large list of models](https://ollama.com/library) to choose from (we recommend trying [`gemma2:2b`](https://ollama.com/library/gemma2:2b), [`mistral:7b`](https://ollama.com/library/mistral), or [`tinyllama`](https://ollama.com/library/tinyllama))
3. When you have selected your model, type `ollama pull <MODEL>:<PARAMETERS>` in your terminal. `<MODEL>` refers to the model name like `gemma2` or `mistral`, and `<PARAMETERS>` refers to the parameter count like `2b` or `9b`
4. `ollama` will download the model for you
5. After that you can run `ollama serve` to start a local web-server. It's a server to which you can send requests and it will respond with LLM output. Notice: `ollama` server may be already running, so don't be scared of `cannot bind` error
6. Got to JabRef Preferences -> AI
5. After that, you can run `ollama serve` to start a local web server. This server will accept requests and respond with LLM output. Note: the `ollama` server may already be running, so do not be alarmed by a `cannot bind` error.
6. Go to JabRef Preferences -> AI
7. Set the "AI provider" to "OpenAI"
8. Set the "Chat Model" to whichever model you've downloaded in form `<MODEL>:<PARAMETERS>`
9. Set the "API base URL" in "Expert Settings" to: `http://localhost:11434/v1/`
8. Set the "Chat Model" to the model you have downloaded in the format `<MODEL>:<PARAMETERS>`
9. Set the "API base URL" in "Expert Settings" to `http://localhost:11434/v1/`

Now, you are all set and can chat "locally".