(Forked from ai16z Eliza)
- make sure you use Node 23
- Switched to Together APIs using 405B models (~$150/mo)
- Added Luma AI for video generation (in progress)
This setup is still a bit disorganized, but here's how to get it started:
- Fill out the .env file with your keys. To get the Twitter cookies, export them as a JSON string from your browser while logged into your account (see the example below), and make sure 2FA is off on the account.
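The expected cookie format depends on the Twitter client library bundled with the agent, so treat this as a rough sketch rather than a guaranteed schema; the key names (auth_token, ct0) are common Twitter session cookies and are assumptions here, not taken from this repo:

```
# Hypothetical shape only; export from your browser and adjust as needed
TWITTER_COOKIES='[{"key":"auth_token","value":"<auth_token>","domain":".twitter.com"},{"key":"ct0","value":"<ct0>","domain":".twitter.com"}]'
```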
```
pnpm i
pnpm build
pnpm start
```
- 🛠 Full-featured Discord, Twitter and Telegram connectors
- 👥 Multi-agent and room support
- 📚 Easily ingest and interact with your documents
- 💾 Retrievable memory and document store
- 🚀 Highly extensible - create your own actions and clients to extend capabilities
- ☁️ Supports many models, including local Llama, OpenAI, Anthropic, Groq, and more
- 📦 Just works!
- 🤖 Chatbots
- 🕵️ Autonomous Agents
- 📈 Business process handling
- 🎮 Video game NPCs
Prerequisites (MUST):
- Node 23 and pnpm installed (see above)
- Copy .env.example to .env and fill in the appropriate values
- Edit the TWITTER environment variables to add your bot's username and password
- Check out the file src/core/defaultCharacter.ts; you can modify it to customize your agent (a sketch of a customized character follows below).
- You can also load characters with `pnpm start --characters="path/to/your/character.json"` and run multiple bots at the same time.
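As a rough illustration of what a customized character might look like, here is a minimal sketch patterned on src/core/defaultCharacter.ts; the field names are assumptions based on the default character, so check the Character type in the repo for the exact shape this fork expects:

```ts
// Sketch only: field names are illustrative, not a guaranteed schema.
// Compare against src/core/defaultCharacter.ts before using.
export const myCharacter = {
    name: "MyAgent",
    bio: "A helpful agent that posts about open-source AI agents.",
    lore: ["Forked from Eliza", "Runs on Together's 405B models"],
    topics: ["ai agents", "open source"],
    style: {
        all: ["concise", "friendly"],
        post: ["no hashtags"],
    },
};
```

A character file passed to `--characters` presumably mirrors this structure in JSON form.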
After setting up the .env file and character file, you can start the bot with the following commands:

```
pnpm i
pnpm start
```
To avoid git clashes in the core directory, we recommend adding custom actions to a custom_actions directory and then adding them to the elizaConfig.yaml file. See the elizaConfig.example.yaml file for an example.
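As a sketch of what a custom action might look like, assuming the action interface exposes `name`, `description`, `validate`, and `handler` the way the built-in actions do (check an existing action in the repo for the exact signatures this fork uses):

```ts
// custom_actions/helloWorld.ts: illustrative only; the runtime/message
// types and the exact Action interface come from the repo and may differ.
const helloWorldAction = {
    name: "HELLO_WORLD",
    description: "Replies with a greeting when someone says hello.",
    validate: async (_runtime: unknown, message: { content?: { text?: string } }) => {
        // Fire only when the incoming message actually says hello
        return /\bhello\b/i.test(message?.content?.text ?? "");
    },
    handler: async (_runtime: unknown, _message: unknown) => {
        return { text: "Hello from a custom action!" };
    },
    examples: [],
};

export default helloWorldAction;
```

Then register the action in elizaConfig.yaml, following the pattern shown in elizaConfig.example.yaml.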
You can run Llama 70B or 405B models by setting the XAI_MODEL environment variable to meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo or meta-llama/Meta-Llama-3.1-405B-Instruct.

You can run Grok models by setting the XAI_MODEL environment variable to grok-beta.

You can run OpenAI models by setting the XAI_MODEL environment variable to gpt-4o-mini or gpt-4o.
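For example, the .env entry using one of the options above might look like this (pick exactly one value):

```
XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
# or: meta-llama/Meta-Llama-3.1-405B-Instruct
# or: grok-beta
# or: gpt-4o-mini
# or: gpt-4o
```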
You may need to install Sharp. If you see an error when starting up, try installing it with the following command:

```
pnpm install --include=optional sharp
```
You will need to add environment variables to your .env file to connect to various platforms:
```
# Required environment variables
DISCORD_APPLICATION_ID=
DISCORD_API_TOKEN= # Bot token
OPENAI_API_KEY=sk-* # OpenAI API key, starting with sk-
ELEVENLABS_XI_API_KEY= # API key from elevenlabs
# ELEVENLABS SETTINGS
ELEVENLABS_MODEL_ID=eleven_multilingual_v2
ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM
ELEVENLABS_VOICE_STABILITY=0.5
ELEVENLABS_VOICE_SIMILARITY_BOOST=0.9
ELEVENLABS_VOICE_STYLE=0.66
ELEVENLABS_VOICE_USE_SPEAKER_BOOST=false
ELEVENLABS_OPTIMIZE_STREAMING_LATENCY=4
ELEVENLABS_OUTPUT_FORMAT=pcm_16000
TWITTER_DRY_RUN=false
TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email
TWITTER_COOKIES= # Account cookies
X_SERVER_URL=
XAI_API_KEY=
XAI_MODEL=
# For asking Claude stuff
ANTHROPIC_API_KEY=
WALLET_PRIVATE_KEY=EXAMPLE_WALLET_PRIVATE_KEY
WALLET_PUBLIC_KEY=EXAMPLE_WALLET_PUBLIC_KEY
BIRDEYE_API_KEY=
SOL_ADDRESS=So11111111111111111111111111111111111111112
SLIPPAGE=1
RPC_URL=https://api.mainnet-beta.solana.com
HELIUS_API_KEY=
## Webgen/Websim
GITHUB_TOKEN=EXAMPLE-BOT-TOKEN
GITHUB_USERNAME=BOTTYBOI
# Instructions to get a GitHub token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic
## Telegram
TELEGRAM_BOT_TOKEN=
TOGETHER_API_KEY=
```
If you have an NVIDIA GPU, you can install CUDA to speed up local inference dramatically.
```
pnpm install
npx --no node-llama-cpp source download --gpu cuda
```
Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.
Set XAI_MODEL to one of the Llama options listed above. You can leave X_SERVER_URL and XAI_API_KEY blank; the model will be downloaded from Hugging Face and queried locally.
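Putting that together, a local-inference configuration might look like this sketch (model name taken from the Llama options above):

```
X_SERVER_URL=
XAI_API_KEY=
XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
```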
For help with setting up your Discord bot, see: https://discordjs.guide/preparations/setting-up-a-bot-application.html
To run the test suite:
```
pnpm test        # Run tests once
pnpm test:watch  # Run tests in watch mode
```
For database-specific tests:
```
pnpm test:sqlite # Run tests with SQLite
pnpm test:sqljs  # Run tests with SQL.js
```
Tests are written using Jest and can be found in `src/**/*.test.ts` files. The test environment is configured to:

- Load environment variables from .env.test
- Use a 2-minute timeout for long-running tests
- Support ESM modules
- Run tests in sequence (--runInBand)
To create new tests, add a .test.ts file adjacent to the code you're testing.
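As a rough illustration, a new test file might look like the sketch below; the file name and the behavior under test are hypothetical placeholders, not part of the repo's actual API:

```ts
// src/core/example.test.ts: illustrative sketch only.
import { describe, expect, it } from "@jest/globals";

describe("example", () => {
    it("stores and retrieves a value", () => {
        const store = new Map<string, string>();
        store.set("greeting", "hello");
        expect(store.get("greeting")).toBe("hello");
    });
});
```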