We are here to help. File issues for problems you encounter and we will resolve them.
Thanks to Fahd Mirza for creating an installation video for Transcribe. Please subscribe to his YouTube channel and read his blog.
Join the community: share your email in an issue to receive an invite to the community channel.
Transcribe provides real-time transcription for microphone and speaker output. It generates a suggested conversation response, relevant to the current conversation, using OpenAI's GPT API.
- Use most of the functionality for FREE
- Multilingual support
- Choose between GPT-4, GPT-3.5, or other inference models from OpenAI, or a plethora of inference models from Together
- Fast streaming LLM responses instead of waiting for a complete response
- Up to date with the latest OpenAI libraries
- Get LLM responses for selected text
- Install and use without python or other dependencies
- Security Features
- Choose Audio Inputs (Speaker or Mic or Both)
- Speech to Text
- Offline - FREE
- Online - paid
- OpenAI Whisper (encouraged)
- Deepgram
- Chat Inference Engines
- OpenAI
- Together
- Perplexity
- Azure hosted OpenAI
- Conversation Summary
- Prompt customization
- Save chat history
- Response Audio
Response generation requires a paid account and an API key from OpenAI (encouraged), Deepgram, Together ($25 free credits), or Azure.
The OpenAI gpt-4 model provides the best response generation capabilities. Earlier models work reasonably well, but can sometimes provide inaccurate answers when there is not enough conversation content at the beginning.
Together provides a large selection of inference models. Any of these can be used by making changes to the override.yaml file.
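For illustration, an override might look like the fragment below. The key names (`Together`, `api_key`, `ai_model`) and the model identifier are assumptions for this sketch; consult the sample override.yaml in the repo for the exact schema.

```yaml
# Hypothetical override.yaml fragment -- key names and model id are
# assumptions; check the repo's sample override.yaml for the real schema.
Together:
  api_key: 'TOGETHER_API_KEY'
  ai_model: 'mistralai/Mixtral-8x7B-Instruct-v0.1'
```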
When using OpenAI without an OpenAI key, continuous response or any action that requires interaction with the online LLM gives an error similar to the one below:
Error when attempting to get a response from LLM.
Error code: 401 - {'error': {'message': 'Incorrect API key provided: API_KEY. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
With a valid OpenAI key but no available credits, using continuous response gives an error similar to the one below:
Error when attempting to get a response from LLM. Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
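The two failure modes above can be told apart from the `code` field of the error payload. A minimal sketch, assuming payloads shaped like the error bodies quoted above (the remediation messages here are illustrative, not from Transcribe itself):

```python
# Map an OpenAI-style error payload to an actionable hint.
# Payload shape follows the 401/429 error bodies quoted above.
def classify_llm_error(payload: dict) -> str:
    code = payload.get("error", {}).get("code")
    if code == "invalid_api_key":
        return "Check the api_key value in override.yaml."
    if code == "insufficient_quota":
        return "Add credits to the account or switch providers."
    return "Unrecognized LLM error; see the message field for details."

# Abbreviated versions of the two payloads shown above.
err_401 = {"error": {"code": "invalid_api_key",
                     "message": "Incorrect API key provided"}}
err_429 = {"error": {"code": "insufficient_quota",
                     "message": "You exceeded your current quota"}}

print(classify_llm_error(err_401))  # -> Check the api_key value in override.yaml.
```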
We develop mutually beneficial features on demand. Create an issue in the repo to request them.
Connect on LinkedIn to discuss further.
- Multilingual support
- Response Customization
- Audio Customization
- Response for selected text
- Speech Mode
- Save Content
- Model Selection
- Batch Operations
- Application Configuration
- OpenAI API Compatible Provider Support
- Secret scanning: Continuous Integration with GitGuardian
- Static Code Analysis: Regular static code scans with Bandit
- Secure Transmission: All network communications use secure channels
- Dependency Security: Strictest security features enabled in the GitHub repo
Note that installation files are generated every few weeks. Generated binaries will almost always trail the latest codebase available in the repo.
Latest Binary
- Generated: 2024-01-30
- Git version: bbe1f4
- Install FFmpeg
First, install Chocolatey, a package manager for Windows.
Open PowerShell as Administrator and run the following command:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
Once Chocolatey is installed, install FFmpeg by running the following command in PowerShell:
choco install ffmpeg
Run these commands in a PowerShell window with administrator privileges. For any issues during the installation, visit the official Chocolatey and FFmpeg websites for troubleshooting.
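To confirm the install succeeded, you can check that the `ffmpeg` executable is reachable on PATH. A small sketch (this check is not part of Transcribe; it is only a convenience):

```python
# Verify that FFmpeg is reachable on PATH after the Chocolatey install.
import shutil

def ffmpeg_installed() -> bool:
    """Return True if an ffmpeg executable is found on PATH."""
    return shutil.which("ffmpeg") is not None

if ffmpeg_installed():
    print("ffmpeg found")
else:
    print("ffmpeg missing - re-run: choco install ffmpeg")
```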
- Download the zip file from
https://drive.google.com/file/d/1vJCHv8eEjp6q7HEnCMY5mlX_8Ys2_06u/view?usp=drive_link
Using a GPU provides 2-3 times faster response times, depending on the processing power of the GPU.
- Unzip the files into a folder.
- (Optional) Add the OpenAI API key in the override.yaml file in the transcribe directory. Create an OpenAI account, or an account with another provider, and add the OpenAI API key to the override.yaml file manually. Open it in a text editor and add these lines:
OpenAI:
api_key: 'API_KEY'
Replace "API_KEY" with the actual OpenAI API key. Save the file.
- Execute the file transcribe\transcribe.exe\transcribe.exe
Application performs best with GPU support.
Make sure you have installed CUDA libraries if you have GPU: https://developer.nvidia.com/cuda-downloads
Application will automatically detect and use GPU once CUDA libraries are installed.
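As an optional sanity check, CUDA availability can be probed before launching. The sketch below assumes the speech models run on PyTorch (an assumption for illustration; the app performs its own detection):

```python
# Optional sketch: probe whether CUDA is usable, assuming a PyTorch-based
# ML stack. Falls back cleanly when torch is not installed.
def cuda_available() -> bool:
    try:
        import torch  # only present when the ML stack is installed
        return torch.cuda.is_available()
    except ImportError:
        return False

print("GPU acceleration:", "on" if cuda_available() else "off (CPU fallback)")
```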
Follow the steps below to run Transcribe on your local machine.
- Python >=3.11.0 and < 3.12.0
- (Optional) An OpenAI API key (set up a paid OpenAI account)
- Windows OS (Not tested on others)
- FFmpeg
Steps to install FFmpeg on your system.
First, install Chocolatey, a package manager for Windows.
Open PowerShell as Administrator and run the following command:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
Once Chocolatey is installed, install FFmpeg by running the following command in PowerShell:
choco install ffmpeg
Run these commands in a PowerShell window with administrator privileges. For any issues during the installation, visit the official Chocolatey and FFmpeg websites for troubleshooting.
- Clone the transcribe repository:
git clone https://github.com/vivekuppal/transcribe
- Navigate to the app\transcribe folder:
cd app\transcribe
- Create a virtual environment and install the required packages:
python -m venv venv
venv\Scripts\activate.bat
pip install -r app\transcribe\requirements.txt
Virtual environments can also be created using conda or a tool of choice.
- (Optional) Provide the OpenAI API key in the override.yaml file in the transcribe directory. Create the following section in the override.yaml file:
OpenAI:
  api_key: 'API_KEY'
Replace "API_KEY" with the actual OpenAI API key. Save the file.
- Run the main script from the app\transcribe folder:
python main.py
Upon initiation, Transcribe will begin transcribing microphone input and speaker output in real time, optionally generating a suggested response based on the conversation. It is suggested to use the continuous response feature after 1-2 minutes, once the transcription window has enough content to give the LLM sufficient context.
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Open issues or submit pull requests to improve Transcribe.
- Installation video, thanks to Fahd Mirza.
- Fireside chat for Transcribe.