Releases: vercel/modelfusion
v0.130.0
Changed
- breaking change: updated `generateTranscription` interface. The function now takes a `mimeType` and `audioData` (base64-encoded string, `Uint8Array`, `Buffer`, or `ArrayBuffer`). Example:

  ```ts
  import { generateTranscription, openai } from "modelfusion";
  import fs from "node:fs";

  const transcription = await generateTranscription({
    model: openai.Transcriber({ model: "whisper-1" }),
    mimeType: "audio/mp3",
    audioData: await fs.promises.readFile("data/test.mp3"),
  });
  ```
- Images in instruction and chat prompts can be `Buffer` or `ArrayBuffer` instances (in addition to base64-encoded strings and `Uint8Array` instances).
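  A minimal sketch of passing an image as a `Buffer`, reusing the GPT Vision prompt format shown under v0.128.0 below (file path and model name are illustrative):

  ```ts
  import fs from "node:fs";
  import { openai, streamText } from "modelfusion";

  // read the image as a Buffer instead of a base64-encoded string:
  const image = fs.readFileSync("data/example-image.png");

  const textStream = await streamText({
    model: openai.ChatTextGenerator({
      model: "gpt-4-vision-preview",
      maxGenerationTokens: 1000,
    }),
    prompt: [
      openai.ChatMessage.user([
        { type: "text", text: "Describe the image in detail:\n\n" },
        { type: "image", image, mimeType: "image/png" },
      ]),
    ],
  });

  for await (const textPart of textStream) {
    process.stdout.write(textPart);
  }
  ```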
v0.129.1
v0.129.0
Changed
- breaking change: Usage of Node `async_hooks` has been renamed from `node:async_hooks` to `async_hooks` for easier Webpack configuration. To exclude `async_hooks` from client-side bundling, you can use the following config for Next.js (`next.config.mjs` or `next.config.js`):

  ```js
  /**
   * @type {import('next').NextConfig}
   */
  const nextConfig = {
    webpack: (config, { isServer }) => {
      if (isServer) {
        return config;
      }

      config.resolve = config.resolve ?? {};
      config.resolve.fallback = config.resolve.fallback ?? {};

      // async hooks is not available in the browser:
      config.resolve.fallback.async_hooks = false;

      return config;
    },
  };

  export default nextConfig; // in next.config.js, use `module.exports = nextConfig;` instead
  ```
v0.128.0
Changed
- breaking change: ModelFusion uses `Uint8Array` instead of `Buffer` for better cross-platform compatibility (see also "Goodbye, Node.js Buffer"). This can lead to breaking changes in your code if you use `Buffer`-specific methods; see the migration sketch after this list.
- breaking change: Image content in multi-modal instruction and chat inputs (e.g. for GPT Vision) is passed in the `image` property (instead of `base64Image`) and supports both base64 strings and `Uint8Array` inputs:

  ```ts
  import fs from "node:fs";
  import path from "node:path";
  import { openai, streamText } from "modelfusion";

  const image = fs.readFileSync(path.join("data", "example-image.png"), {
    encoding: "base64",
  });

  const textStream = await streamText({
    model: openai.ChatTextGenerator({
      model: "gpt-4-vision-preview",
      maxGenerationTokens: 1000,
    }),
    prompt: [
      openai.ChatMessage.user([
        { type: "text", text: "Describe the image in detail:\n\n" },
        { type: "image", image, mimeType: "image/png" },
      ]),
    ],
  });
  ```
- OpenAI-compatible providers with predefined API configurations have a customized provider name that shows up in the events.
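A minimal migration sketch for the `Buffer` change above, assuming your code relied on `Buffer`-specific methods such as `toString("base64")` (the `audioData` value is an illustrative placeholder for any binary result that ModelFusion now returns as a `Uint8Array`):

```ts
import { Buffer } from "node:buffer";

// illustrative placeholder for a Uint8Array returned by ModelFusion:
const audioData = new Uint8Array([104, 101, 108, 108, 111]);

// Buffer.from copies the bytes into a Node.js Buffer,
// so Buffer-specific methods keep working:
const base64 = Buffer.from(audioData).toString("base64");
console.log(base64);
```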
v0.127.0
Changed
- breaking change: `streamStructure` returns an async iterable over deep partial objects. If you need to get the fully validated final result, you can use the `fullResponse: true` option and await the `structurePromise` value. Example:

  ```ts
  import {
    jsonStructurePrompt,
    ollama,
    streamStructure,
    zodSchema,
  } from "modelfusion";
  import { z } from "zod";

  const { structureStream, structurePromise } = await streamStructure({
    model: ollama
      .ChatTextGenerator({
        model: "openhermes2.5-mistral",
        maxGenerationTokens: 1024,
        temperature: 0,
      })
      .asStructureGenerationModel(jsonStructurePrompt.text()),

    schema: zodSchema(
      z.object({
        characters: z.array(
          z.object({
            name: z.string(),
            class: z
              .string()
              .describe("Character class, e.g. warrior, mage, or thief."),
            description: z.string(),
          })
        ),
      })
    ),

    prompt: "Generate 3 character descriptions for a fantasy role playing game.",

    fullResponse: true,
  });

  for await (const partialStructure of structureStream) {
    console.clear();
    console.log(partialStructure);
  }

  const structure = await structurePromise;

  console.clear();
  console.log("FINAL STRUCTURE");
  console.log(structure);
  ```
- breaking change: Renamed the `text` value in `streamText` with `fullResponse: true` to `textPromise`.
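  A minimal sketch of the rename, assuming the streamed text is still exposed as `textStream` in the `fullResponse` object (model and prompt are illustrative):

  ```ts
  import { openai, streamText } from "modelfusion";

  const { textStream, textPromise } = await streamText({
    model: openai
      .ChatTextGenerator({ model: "gpt-3.5-turbo", maxGenerationTokens: 500 })
      .withTextPrompt(),
    prompt: "Write a haiku about autumn.",
    fullResponse: true,
  });

  for await (const textPart of textStream) {
    process.stdout.write(textPart);
  }

  // previously `text`, now `textPromise`:
  const fullText = await textPromise;
  console.log("\n\nFULL TEXT:\n", fullText);
  ```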
Fixed
- Ollama streaming.
- Ollama structure generation and streaming.
v0.126.0
v0.125.0
Added
- Perplexity AI chat completion support. Example:

  ```ts
  import { openaicompatible, streamText } from "modelfusion";

  const textStream = await streamText({
    model: openaicompatible
      .ChatTextGenerator({
        api: openaicompatible.PerplexityApi(),
        provider: "openaicompatible-perplexity",
        model: "pplx-70b-online", // online model with access to web search
        maxGenerationTokens: 500,
      })
      .withTextPrompt(),

    prompt: "What is RAG in AI?",
  });
  ```
v0.124.0
Added
- Embedding support for OpenAI-compatible providers. You can, for example, use the Together AI embedding endpoint:

  ```ts
  import { embed, openaicompatible } from "modelfusion";

  const embedding = await embed({
    model: openaicompatible.TextEmbedder({
      api: openaicompatible.TogetherAIApi(),
      provider: "openaicompatible-togetherai",
      model: "togethercomputer/m2-bert-80M-8k-retrieval",
    }),
    value: "At first, Nox didn't know what to do with the pup.",
  });
  ```
v0.123.0
Added
- `classify` model function (docs) for classifying values. The `SemanticClassifier` has been renamed to `EmbeddingSimilarityClassifier` and can be used in conjunction with `classify`:

  ```ts
  import { classify, EmbeddingSimilarityClassifier, openai } from "modelfusion";

  const classifier = new EmbeddingSimilarityClassifier({
    embeddingModel: openai.TextEmbedder({ model: "text-embedding-ada-002" }),
    similarityThreshold: 0.82,
    clusters: [
      {
        name: "politics" as const,
        values: [
          "they will save the country!",
          // ...
        ],
      },
      {
        name: "chitchat" as const,
        values: [
          "how's the weather today?",
          // ...
        ],
      },
    ],
  });

  // strongly typed result:
  const result = await classify({
    model: classifier,
    value: "don't you love politics?",
  });
  ```
v0.122.0
Changed
- breaking change: Switch from positional parameters to named parameters (a parameter object) for all model and tool functions. The parameter object is the first and only parameter of the function. Additional options (previously the last parameter) are now part of the parameter object. Example:

  ```ts
  // old:
  const text = await generateText(
    openai
      .ChatTextGenerator({
        model: "gpt-3.5-turbo",
        maxGenerationTokens: 1000,
      })
      .withTextPrompt(),
    "Write a short story about a robot learning to love",
    {
      functionId: "example-function",
    }
  );

  // new:
  const text = await generateText({
    model: openai
      .ChatTextGenerator({
        model: "gpt-3.5-turbo",
        maxGenerationTokens: 1000,
      })
      .withTextPrompt(),
    prompt: "Write a short story about a robot learning to love",
    functionId: "example-function",
  });
  ```
This change was made to make the API more flexible and to allow for future extensions.