llama.rn / LlamaContext
- bench
- completion
- detokenize
- embedding
- getFormattedChat
- loadSession
- release
- saveSession
- stopCompletion
- tokenize
• new LlamaContext(«destructured»: NativeLlamaContext)

Name | Type |
---|---|
«destructured» | NativeLlamaContext |
• gpu: boolean = false
• id: number
• model: Object = {}

Name | Type |
---|---|
isChatTemplateSupported? | boolean |
• reasonNoGPU: string = ''
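The constructor is normally not called directly; a context is usually obtained from llama.rn's initLlama helper, which resolves with a LlamaContext. A minimal sketch (the model path and numeric options are placeholders, not values from this page):

```ts
import { initLlama } from 'llama.rn'

// Model path and parameter values below are illustrative assumptions.
async function createContext() {
  const context = await initLlama({
    model: '/path/to/model.gguf', // assumed: path to a local GGUF file
    n_ctx: 2048,                  // assumed context-size option
    n_gpu_layers: 99,             // assumed GPU offload option
  })
  console.log(context.id, context.gpu, context.reasonNoGPU)
  return context
}
```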
▸ bench(pp, tg, pl, nr): Promise<BenchResult>

Name | Type |
---|---|
pp | number |
tg | number |
pl | number |
nr | number |

Returns: Promise<BenchResult>
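A possible way to call bench (the argument values are arbitrary; the mapping of pp/tg/pl/nr to prompt-processing size, token-generation count, parallel sequences, and repetitions follows llama.cpp benchmarking conventions and should be treated as an assumption):

```ts
import type { LlamaContext } from 'llama.rn'

async function runBench(context: LlamaContext) {
  // pp / tg / pl / nr values below are example placeholders.
  const result = await context.bench(512, 128, 1, 3)
  console.log('bench result:', result)
}
```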
▸ completion(params, callback?): Promise<NativeCompletionResult>

Name | Type |
---|---|
params | CompletionParams |
callback? | (data: TokenData) => void |

Returns: Promise<NativeCompletionResult>
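A sketch of a streaming completion. The exact CompletionParams fields (prompt, n_predict, stop) and the token field on TokenData are assumptions based on typical llama.cpp-style options, not taken from this page:

```ts
import type { LlamaContext } from 'llama.rn'

async function complete(context: LlamaContext) {
  const result = await context.completion(
    {
      prompt: 'Q: What is llama.rn?\nA:', // assumed parameter name
      n_predict: 128,                     // assumed parameter name
      stop: ['\n'],                       // assumed parameter name
    },
    (data) => {
      // Invoked per generated token while streaming.
      console.log('token:', data.token)   // assumed TokenData field
    },
  )
  console.log('full text:', result.text)  // assumed result field
}
```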
▸ detokenize(tokens): Promise<string>

Name | Type |
---|---|
tokens | number[] |

Returns: Promise<string>
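detokenize maps token ids back to text; for example (the token ids here are arbitrary placeholders):

```ts
import type { LlamaContext } from 'llama.rn'

async function showDetokenize(context: LlamaContext) {
  const text = await context.detokenize([1, 15043, 3186]) // placeholder ids
  console.log(text)
}
```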
▸ embedding(text, params?): Promise<NativeEmbeddingResult>

Name | Type |
---|---|
text | string |
params? | NativeEmbeddingParams |

Returns: Promise<NativeEmbeddingResult>
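embedding turns a string into a vector. The embedding field on NativeEmbeddingResult is an assumption, and the context typically needs to have been created with embedding support enabled:

```ts
import type { LlamaContext } from 'llama.rn'

async function embed(context: LlamaContext) {
  const result = await context.embedding('hello world')
  console.log('vector length:', result.embedding.length) // assumed field name
}
```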
▸ getFormattedChat(messages, template?): Promise<string>

Name | Type |
---|---|
messages | RNLlamaOAICompatibleMessage[] |
template? | string |

Returns: Promise<string>
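getFormattedChat renders chat messages into a single prompt string using the model's chat template (or the optional template argument). The { role, content } message shape is assumed from the OpenAI-compatible naming of RNLlamaOAICompatibleMessage:

```ts
import type { LlamaContext } from 'llama.rn'

async function formatChat(context: LlamaContext) {
  const prompt = await context.getFormattedChat([
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ])
  // The formatted prompt can then be passed to completion().
  console.log(prompt)
}
```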
▸ loadSession(filepath): Promise<NativeSessionLoadResult>

Load cached prompt & completion state from a file.

Name | Type |
---|---|
filepath | string |

Returns: Promise<NativeSessionLoadResult>
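Restoring previously saved state might look like the following; the file path is a placeholder and the shape of NativeSessionLoadResult is not documented on this page, so the result is simply logged:

```ts
import type { LlamaContext } from 'llama.rn'

async function restoreSession(context: LlamaContext) {
  // Placeholder path; point this at a file previously written by saveSession().
  const loaded = await context.loadSession('/path/to/session.bin')
  console.log('session loaded:', loaded)
}
```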
▸ release(): Promise<void>

Returns: Promise<void>
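release frees the native context; a common pattern is to call it in cleanup code once the context is no longer needed, for example:

```ts
import type { LlamaContext } from 'llama.rn'

async function withContext(context: LlamaContext) {
  try {
    // ... use the context ...
  } finally {
    await context.release() // free native resources when done
  }
}
```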
▸ saveSession(filepath, options?): Promise<number>

Save current cached prompt & completion state to a file.

Name | Type |
---|---|
filepath | string |
options? | Object |
options.tokenSize | number |

Returns: Promise<number>
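Persisting the current cache might look like this; the path and tokenSize value are placeholders, and the exact semantics of tokenSize (presumably a limit on how many tokens to save) are an assumption:

```ts
import type { LlamaContext } from 'llama.rn'

async function persistSession(context: LlamaContext) {
  // Placeholder path and illustrative tokenSize value.
  const count = await context.saveSession('/path/to/session.bin', { tokenSize: 1024 })
  console.log('tokens saved:', count)
}
```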
▸ stopCompletion(): Promise<void>

Returns: Promise<void>
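stopCompletion can be used to interrupt a completion that is still streaming, for example from a cancel-button handler:

```ts
import type { LlamaContext } from 'llama.rn'

async function onCancelPressed(context: LlamaContext) {
  // Interrupt the in-flight completion() call.
  await context.stopCompletion()
}
```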
▸ tokenize(text): Promise<NativeTokenizeResult>

Name | Type |
---|---|
text | string |

Returns: Promise<NativeTokenizeResult>
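tokenize returns the token ids for a string; the tokens field on NativeTokenizeResult is an assumption:

```ts
import type { LlamaContext } from 'llama.rn'

async function showTokens(context: LlamaContext) {
  const result = await context.tokenize('Hello, world!')
  console.log('tokens:', result.tokens) // assumed field name
}
```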