eliataylor/promptautomator

A tool to automate testing and comparing ChatGPT responses based on variations in prompts and configurations




interface.png — view the demo at https://promptautomator.taylormadetraffic.com


Environment Setup

  • git clone git@github.com:eliataylor/promptautomator.git
  • cd promptautomator
  • cp .env.public .env
  • Update your .env file with your OpenAI API key
  • python3.9 -m venv .venv
  • source .venv/bin/activate
  • pip install -r requirements.txt
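For reference, the copied .env file only needs to carry your key. The variable name below is an assumption — confirm it against the keys already present in .env.public:

```shell
# Hypothetical .env contents — check .env.public for the exact variable name
OPENAI_API_KEY=sk-...
```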

Run Demo:

Review and compare results (preloaded with an example dataset of playlists and playlist themes)

  • npm install
  • npm start
  • open http://localhost:3000/

To index and test your own data:

Copy this Google Sheet to your own account

  1. Reuse the "Prompts" to list your prompts. The columns are:
  • Instructions: Give your AI a persona.
  • Prompt: Write your prompt.
  • Response: Your expected response structure
  • Prompt ID: Any id to group prompts

The following tokens will be replaced as described:

  • __RFORMAT__ gets replaced with your desired response structure from your prompts sheet above
  • __FILENAME__ gets replaced with your File Path below
  • __USERDATA__ gets replaced with each set of Survey Answers below
  1. Reuse the "Survey Answers" sheet:
  • Write the Questions down the Rows and add responses to questions along Columns. Question-Answers will be grouped into a paragraph during testing.
  1. Reuse the "Configs" sheets. The columns are:
  • Model: Selected any text based OpenAIs Model
  • File Path: Set the file path to any data set optionally referenced in your prompt. Embeddings requires a .pkl file. All others currently require a .json file OR a valid OpenAI file ID. You can use indexer.py to convert them from CSVs
  • Fine Tuning: Set your Fine-Tuning configs to be passed directly into the prompt
  • Executable: Select which Executable. (Threads / Completion / Embeddings)
  • File Search / Code Interpret: True or False, only used in Threads
  • Assistant / Vector Store: True, False, or a valid OpenAI id to reuse. Setting an ID will speed up further tests and reduce API usage.
  4. Export each CSV sheet to yourfolder/[sheetname].csv

  5. Index your surveys for the React app to display: python indexer.py index_surveys dataset/bags-userdata.csv
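The token substitution described above can be sketched roughly as follows. The function and variable names here are illustrative, not the actual identifiers in main.py:

```python
# Illustrative sketch of the token-replacement step (not the actual main.py code).

def fill_prompt(prompt, rformat, filename, survey):
    """Replace the documented tokens with their values."""
    # Question-answer pairs are grouped into a single paragraph, as described above.
    userdata = " ".join(f"{q}: {a}." for q, a in survey.items())
    return (prompt
            .replace("__RFORMAT__", rformat)
            .replace("__FILENAME__", filename)
            .replace("__USERDATA__", userdata))

filled = fill_prompt(
    "Use __FILENAME__. Answers: __USERDATA__ Respond as __RFORMAT__",
    rformat="a JSON list of playlists",
    filename="public/music-catalogue.json",
    survey={"Favorite genre": "jazz"},
)
```

Each prompt row is filled once per config and per survey-answer set, which is why a single sheet can fan out into many test runs.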

Normalize your dataset

  • Convert your CSV to JSON and rename your internal ID column to source_id:

    • python indexer.py normalize_dataset examples/music-catalogue.csv id

  • If testing Embeddings, convert your JSON to a PKL:

    • python indexer.py build_embeddings public/music-catalogue.json
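The normalization step amounts to reading the CSV and renaming the caller's ID column. This is a minimal sketch of the idea, not the actual indexer.py code:

```python
# Sketch of normalize_dataset's CSV-to-JSON step (illustrative, not indexer.py itself).
import csv
import io
import json

def normalize_rows(reader, id_field="id"):
    """Rename the dataset's own ID column to source_id in every row."""
    rows = []
    for row in reader:
        if id_field in row:
            row["source_id"] = row.pop(id_field)
        rows.append(row)
    return rows

# Demo with an in-memory CSV; real usage would read examples/music-catalogue.csv.
sample = io.StringIO("id,title\n42,Blue in Green\n")
rows = normalize_rows(csv.DictReader(sample))
as_json = json.dumps(rows)
```

Renaming to source_id gives every dataset a uniform key, so the test runner and the React app never need to know each dataset's original column name.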

Run Prompt Tests

  • To run all prompts, against all configurations, against all userdata sets:

    • python main.py examples/music-catalogue-prompts.csv examples/music-catalogue-configs.csv examples/music-catalogue-userdata.csv
  • To combine the individual results into a single index file for the front-end to load:

    • python indexer.py index_results
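Conceptually, index_results just merges the per-run result files into one file the front-end can fetch. A minimal sketch — the file layout and key names here are assumptions, not the actual indexer.py behavior:

```python
# Sketch of combining per-run result files into one index (illustrative only).
import json
import tempfile
from pathlib import Path

def index_results(results_dir, out_path):
    """Concatenate every individual result JSON into a single index file."""
    combined = [json.loads(p.read_text(encoding="utf-8"))
                for p in sorted(Path(results_dir).glob("*.json"))]
    Path(out_path).write_text(json.dumps(combined, indent=2), encoding="utf-8")
    return len(combined)

# Demo with two throwaway result files in a temp directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "run1.json").write_text(json.dumps({"prompt_id": "p1"}), encoding="utf-8")
(tmp / "run2.json").write_text(json.dumps({"prompt_id": "p2"}), encoding="utf-8")
count = index_results(tmp, tmp / "index.json")
```

Serving one combined file keeps the React app to a single fetch instead of one request per test run.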

DEV ISSUES / ROADMAP

  • Implement Code Interpreter
  • Validate JSON response by reading requested format from instructions
  • Optimize reuse to reduce token usage
  • Pass along Fine-Tuning variables like {temperature:1, max_tokens:256, top_p:1, frequency_penalty:0, presence_penalty:0}
  • Fix "Could not validate from dataset 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte"
