With gpinterface, you can easily create APIs for your prompts.
A live demo is available at gpinterface.com.
Here is what gpinterface can do: you can select an LLM model, add context, and deploy your prompt; once deployed, you get accessible API endpoints for it.
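As an illustration, calling one of those deployed endpoints might look like the sketch below. The path, payload shape, and authentication scheme here are assumptions for illustration only; the actual endpoint details appear in the web interface after deployment.

```typescript
// Hypothetical call to a deployed gpinterface prompt endpoint.
// The path, payload, and auth header are illustrative assumptions;
// port 3001 is the chat/API server port used in this README's configuration.
async function main() {
  const res = await fetch("http://localhost:3001/chat/completion", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY", // hypothetical auth scheme
    },
    body: JSON.stringify({ message: "Hello, world!" }),
  });
  console.log(await res.json());
}

main();
```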
It is a powerful tool designed to streamline the testing and deployment of generative prompts across multiple large language models (LLMs). With an easy-to-use web interface, gpinterface enables rapid configuration and experimentation.
- Simple Deployment: Deploy your configured prompts, creating accessible endpoints instantly.
- Easy Prompt Testing: Utilize a web interface to test prompts with various configurations effortlessly.
- Support for Multiple LLMs: Seamlessly integrate and use different Large Language Models to ensure your application is versatile and robust.
gpinterface currently supports a variety of leading Large Language Models, including:
- AI21 Jamba
- Anthropic Claude
- Cohere Command
- Google Gemini
- Meta Llama
- Mistral AI Mistral
- OpenAI ChatGPT
This diverse support allows you to choose the best model for your specific needs.
Don't see your favorite model? Feel free to open a PR or contact me!
The application requires a PostgreSQL database. Start the database using Docker:

```bash
cd backend
docker-compose up -d
```
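The compose file ships with the repository, but for reference, a minimal docker-compose.yml for a local PostgreSQL instance matching the default DATABASE_URL shown below might look like this sketch (the actual file in backend/ is authoritative):

```yaml
# Minimal PostgreSQL service sketch; the project's actual
# backend/docker-compose.yml is authoritative.
version: "3.8"
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres # assumed to match the default DATABASE_URL
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```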
The backend uses Prisma to manage the database schema and migrations. Run the following command when initializing the database:

```bash
npm run prisma:migrate
```
To seed the database with initial data for each supported Large Language Model, run:

```bash
npx ts-node prisma/seed
```
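To give a sense of what seeding involves, a script like this might upsert one row per supported provider. This is a minimal sketch: the `provider` model and its fields are assumptions for illustration, and the real definitions live in prisma/schema.prisma.

```typescript
// prisma/seed.ts — minimal sketch of a Prisma seed script.
// The `provider` model and its `name` field are illustrative assumptions;
// the real schema lives in prisma/schema.prisma.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  const providers = [
    "AI21",
    "Anthropic",
    "Cohere",
    "Google",
    "Meta",
    "Mistral AI",
    "OpenAI",
  ];
  for (const name of providers) {
    // upsert keeps the seed idempotent, so re-running it won't duplicate rows
    await prisma.provider.upsert({
      where: { name },
      update: {},
      create: { name },
    });
  }
}

main()
  .catch((e) => {
    console.error(e);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());
```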
Backend
The backend requires the following environment variables to be set:
```
CLIENT_URL="http://localhost:3003"
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres" # can be replaced with your DB endpoint
AI21_API_KEY="YOUR_AI21_API_KEY"
ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
COHERE_API_KEY="YOUR_COHERE_API_KEY"
GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
MISTRAL_API_KEY="YOUR_MISTRAL_API_KEY"
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY" # you need Llama model access in AWS Bedrock
AWS_SECRET_ACCESS_KEY="AWS_SECRET_KEY" # you need Llama model access in AWS Bedrock
JWT_SECRET="SECURE_RANDOM_STRING"
COOKIE_SECRET="SECURE_RANDOM_STRING"
NODE_ENV="development" # for development logging
```
Ensure these variables are set in a .env file in the backend directory before starting the application.
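JWT_SECRET and COOKIE_SECRET should each be a long, cryptographically random value. One way to generate one (an example using Node's built-in crypto module):

```typescript
// generate-secret.ts — prints a 64-character hex string suitable for
// JWT_SECRET or COOKIE_SECRET. Run with: npx ts-node generate-secret.ts
import { randomBytes } from "crypto";

console.log(randomBytes(32).toString("hex"));
```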
Frontend
The frontend application requires the following environment variables:
```
NEXT_PUBLIC_API_ENDPOINT="http://localhost:3000"
NEXT_PUBLIC_CHAT_ENDPOINT="http://localhost:3001"
NEXT_PUBLIC_HOSTNAME="http://localhost:3003"
NEXT_PUBLIC_GOOGLE_OAUTH_CLIENT_KEY=""
```
Set these to the endpoints where the backend services are running so the frontend can communicate with them.
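For reference, Next.js inlines NEXT_PUBLIC_-prefixed variables at build time, so browser code can read them directly. Here is a minimal sketch of how the frontend might consume them (the actual usage lives in the frontend source, and the /health path is hypothetical):

```typescript
// NEXT_PUBLIC_* variables are inlined by Next.js at build time.
// The fallbacks mirror the default local ports from this README.
const API_ENDPOINT =
  process.env.NEXT_PUBLIC_API_ENDPOINT ?? "http://localhost:3000";
const CHAT_ENDPOINT =
  process.env.NEXT_PUBLIC_CHAT_ENDPOINT ?? "http://localhost:3001";

// Example request to the backend (hypothetical path, for illustration only)
fetch(`${API_ENDPOINT}/health`).then((res) => console.log(res.status));
```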
To run the application components separately for development purposes:
Run Backend

```bash
cd backend && npm run dev
```

Run Frontend

```bash
cd frontend && npm run dev
```
To build all components for production deployment, run the following steps in order (the shared package is built first because the other components build on top of it):

```bash
cd shared
npm run build
cd ../backend
npm run build
cd ../frontend
npm run build
```
To start the servers:

Run Backend

```bash
cd backend && npm run start
```

Run Chat Server (API Server)

```bash
cd backend && npm run start:chat
```

Run Frontend

```bash
cd frontend && npm run start
```