An API to create local-first, human-friendly agents in the browser or Node.js
📚 Read the documentation
Check the 💻 examples
An agent is an anthropomorphic representation of a bot. It can:
- Think: use language model servers to run inference queries
- Interact: communicate with the user to get input and feedback
- Work: manage long-running jobs with multiple tasks, and use custom terminal commands
- Remember: use transient or semantic memory to store data
| Package | Description | Node.js | Browser |
| --- | --- | --- | --- |
| @agent-smith/body | The body | ✅ | ✅ |
| @agent-smith/brain | The brain | ✅ | ✅ |
| @agent-smith/jobs | Jobs | ✅ | ✅ |
| @agent-smith/tmem | Transient memory | ❌ | ✅ |
| @agent-smith/tmem-jobs | Jobs transient memory | ❌ | ✅ |
| @agent-smith/smem | Semantic memory | ✅ | ❌ |
| @agent-smith/tfm | Templates for models | ✅ | ✅ |
| @agent-smith/lmtask | Yaml model task | ✅ | ✅ |
| @agent-smith/cli | Terminal client | ✅ | ❌ |
| @agent-smith/feat-git | Git features | ✅ | ❌ |
- Composable: the packages have limited responsibilities and can work together
- Declarative: focus on the business logic by expressing features simply
- Explicit: simple and under user control, with no hidden magic
- What local or remote inference servers can I use?
It currently works with Llama.cpp, Koboldcpp and Ollama.
It also works in the browser, using GPU-only inference with small models.
- Can I use this with OpenAI or other big cloud APIs?
Sorry, no: this library favours local-first or private remote inference servers.
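To give a concrete idea of what "local first" means here, the sketch below builds a raw completion request of the kind a local Llama.cpp server accepts on its `/completion` endpoint (`prompt`, `temperature` and `n_predict` are Llama.cpp server fields; the URL, port and values are assumptions for illustration, and the library normally handles this layer for you):

```typescript
// Build a completion request for a local Llama.cpp server
// (default port 8080 assumed; adjust to your setup).
const serverUrl = "http://localhost:8080/completion";

const payload = {
  prompt: "List the planets of the solar system",
  temperature: 0.2, // low temperature for a factual answer
  n_predict: 128, // maximum number of tokens to generate
};

// The actual call would look like this (not executed here):
// const res = await fetch(serverUrl, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// });

console.log(JSON.stringify(payload));
```

Koboldcpp and Ollama expose similar local HTTP APIs, which is what makes server auto-discovery possible.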
Generate a commit message in a git repository (using the @agent-smith/feat-git plugin):

```shell
lm commit .
```
```typescript
// Imports (assuming the hooks are exported by the brain package)
import { useAgentBrain, useLmBackend, useLmExpert } from "@agent-smith/brain";

// Configure a backend for a local Koboldcpp server
const backend = useLmBackend({
  name: "koboldcpp",
  localLm: "koboldcpp",
  onToken: (t) => process.stdout.write(t),
});

// Create an expert bound to the backend, a prompt template and a model
const expert = useLmExpert({
  name: "koboldcpp",
  backend: backend,
  template: templateName,
  model: { name: modelName, ctx: 2048 },
});

const brain = useAgentBrain([expert]);
console.log("Auto discovering brain backend ...");
await brain.init();

// Make sure the expert's backend is up before querying it
brain.ex.checkStatus();
if (brain.ex.state.get().status != "ready") {
  throw new Error("The expert's backend is not ready");
}

// Run an inference query
const _prompt = "list the planets of the solar system";
await brain.think(_prompt, {
  temperature: 0.2,
  min_p: 0.05,
});
```
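The `min_p` option passed to the inference query is a sampling parameter: tokens whose probability falls below `min_p` times the top token's probability are excluded from sampling. A minimal self-contained sketch of that filtering rule (`minPFilter` is a hypothetical helper for illustration, not part of agent-smith):

```typescript
// min_p filtering: keep only tokens whose probability is at least
// min_p * (probability of the most likely token).
function minPFilter(probs: Record<string, number>, minP: number): string[] {
  const pMax = Math.max(...Object.values(probs));
  return Object.entries(probs)
    .filter(([, p]) => p >= minP * pMax)
    .map(([token]) => token);
}

// Example distribution: with min_p = 0.05 the threshold is 0.05 * 0.6 = 0.03,
// so the unlikely "cheese" token (p = 0.01) is dropped.
const probs = { sun: 0.6, star: 0.3, moon: 0.09, cheese: 0.01 };
console.log(minPFilter(probs, 0.05)); // → [ "sun", "star", "moon" ]
```

A low `min_p` with a low `temperature`, as in the example above, keeps generation focused while still pruning the long tail of improbable tokens.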
Powered by:
- Nanostores for state management and reactive variables
- Locallm for inference API server management
- Modprompt for prompt template management