This project is a simple web app built on the Google Gemini 1.5 model.
User input can be text, an image, or audio recorded through the client's microphone.
I created this to help others building apps with Gemini 1.5. If you find it helpful, please give it a ⭐.
Visit Google AI to get your API key.

Create a `.env.local` file and add your API key:

```
GOOGLE_API_KEY=
```
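
If you're curious how that key is used, the sketch below shows one way to initialize the Gemini client on the server with the official `@google/generative-ai` SDK. The file path, model name, and helper function are illustrative assumptions, not necessarily how this project is wired up.

```ts
// lib/gemini.ts — hypothetical path; a minimal sketch, assuming the
// official @google/generative-ai SDK is used on the server.
import { GoogleGenerativeAI } from "@google/generative-ai";

// The key comes from .env.local and should only be read server-side.
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);

// Gemini 1.5 accepts text, image, and audio parts in a single request.
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

// Simple text-only helper: send a prompt, return the model's reply.
export async function askGemini(prompt: string): Promise<string> {
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```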
First, install the packages:

```bash
yarn
```

Then, run the development server:

```bash
yarn dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
Add at least one image to use Vision.
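
As a rough idea of what a Vision request looks like, the sketch below sends a text prompt together with one inline image using the `@google/generative-ai` SDK. The base64 input and the `gemini-1.5-flash` model name are assumptions for illustration; the app may handle image uploads differently (for example via uploadthing).

```ts
// A hedged sketch of a multimodal (Vision) request: text plus one image part.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

export async function describeImage(imageBase64: string, mimeType: string) {
  const result = await model.generateContent([
    { text: "Describe this image." },
    // inlineData expects base64-encoded bytes plus the image MIME type.
    { inlineData: { data: imageBase64, mimeType } },
  ]);
  return result.response.text();
}
```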
To learn more about the OpenAI API, check out the OpenAI API Documentation.

To learn more about uploadthing storage, check out the uploadthing API Documentation.
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
You can check out the Next.js GitHub repository - your feedback and contributions are welcome!
The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.
Check out our Next.js deployment documentation for more details.