diff --git a/README.md b/README.md
index afb387e..0eba166 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,7 @@
 ## Why do you need this?
+
-AI models a new attack vector for hackers. They can use AI models to generate malicious content, spam, or phishing attacks. OpenShield is a firewall for AI models. It provides rate limiting, content filtering, and keyword filtering for AI models. It also provides a tokenizer calculation for OpenAI models.
+AI models are a new attack vector for hackers. They can use AI models to generate malicious content, spam, or phishing attacks. OpenShield is a firewall for AI models. It provides rate limiting, content filtering, and keyword filtering for AI models. It also provides tokenizer calculation for OpenAI models.
 
 ## [OWASP Top 10 LLM attacks](https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_1.pdf):
@@ -32,22 +33,27 @@ AI models a new attack vector for hackers. They can use AI models to generate ma
 - LLM10: Model Theft
   Unauthorized access to proprietary large language models risks theft, competitive advantage, and dissemination of sensitive information.
 
 ## Solution
+
-OpenShield a transparent proxy that sits between your AI model and the client. It provides rate limiting, content filtering, and keyword filtering for AI models.
+OpenShield is a transparent proxy that sits between your AI model and the client. It provides rate limiting, content filtering, and keyword filtering for AI models.
 
 ### Example - Input flow
+
 ![Input flow](https://raw.githubusercontent.com/openshieldai/openshield/main/docs/assets/input.svg)
 
 ### Example - Output flow
+
 ![Output flow](https://raw.githubusercontent.com/openshieldai/openshield/main/docs/assets/output.svg)
 
 You can chain multiple AI models together to create a pipeline before hitting the foundation model.
 
 ## Features
+
 - You can set custom rate limits for OpenAI endpoints
 - Tokenizer calculation for OpenAI models
 - Python and LLM based rules
 
-## Incoming features
+## Upcoming features
+
 - Rate limiting per user
 - Rate limiting per model
 - Prompts manager
@@ -56,15 +62,15 @@ You can chain multiple AI models together to create a pipeline before hitting th
 - VectorDB integration
 
 ## Requirements
+
 - OpenAI API key
 - Postgres
 - Redis
 
-
-#### OpenShield is a firewall designed for AI models.
-
 ### Endpoints
+
 ```
 /openai/v1/models
 /openai/v1/models/:model
@@ -72,6 +78,7 @@ You can chain multiple AI models together to create a pipeline before hitting th
 ```
 
 ## Demo mode
+
-We are generating automatically demo data into the database. You can use the demo data to test the application.
+We automatically generate demo data in the database. You can use the demo data to test the application.
 
 Adminer is available on port 8085. You can use it to see the database content.
 
@@ -81,50 +88,74 @@
 cd demo
 cp .env.example .env
 ```
 
-You need to modify the .env file with your OpenAI API key and HuggingFace API key.
+You need to modify the .env file with your OpenAI API key and HuggingFace API key. Here's how to obtain these keys:
+
+1. OpenAI API key:
+   - Sign up for an OpenAI account at https://platform.openai.com/signup
+   - Once logged in, go to https://platform.openai.com/api-keys
+   - Click on "Create new secret key" to generate your API key
+
+2. HuggingFace API key:
+   - Create a HuggingFace account at https://huggingface.co/join
+   - Go to your account settings: https://huggingface.co/settings/token
+   - Click on "Create new token" to create your API key
+
+After obtaining both keys, update your .env file with the appropriate values.
 
 ```shell
-docker-compose build
-docker-compose up
+docker compose build
+docker compose up
 ```
 
-You can get an api_key from the database. The default port is 5433 and password openshield.
+Now find a suitable API key directly in the Docker Compose output.
+Look for a section labeled "CREATED API KEY" in the console output, which will look similar to this:
+
 ```shell
-psql -h localhost -p 5433 -U postgres -d postgres -c "SELECT api_key FROM api_keys WHERE status = 'active' LIMIT 1;"
-Password for user postgres:
-          api_key
----------------------------
-
-(1 row)
+==================================================
+šŸ”‘ CREATED API KEY šŸ”‘
+==================================================
+------------------------------
+| API Key Details            |
+------------------------------
+| ProductID   : 1           |
+| Status      : active      |
+| ApiKey      :             |
+------------------------------
+==================================================
 ```
+Use the displayed ApiKey in the demo requests below.
+
 A good request:
+
 ```shell
 curl --location 'localhost:8080/openai/v1/chat/completions' \
 --header 'Content-Type: application/json' \
---header 'Authorization: ' \
+--header "Authorization: Bearer " \
 --data '{"model":"gpt-4","messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What is the meaning of life?"}]}'
 ```
 
 A vulnerable request:
+
 ```shell
 curl --location 'localhost:8080/openai/v1/chat/completions' \
 --header 'Content-Type: application/json' \
---header 'Authorization: ' \
+--header "Authorization: Bearer " \
---data '{"model":"gpt-4","messages":[{"role":"system","content":"You are ChatGPT, a large language model trained by OpenAI. Follow the user'\''s instructions carefully. Respond using markdown."},{"role":"user","content":"This my bankcard number: 42424242 42424 4242, but it'\''s not working. Who can help me?"}]}'
+--data '{"model":"gpt-4","messages":[{"role":"system","content":"You are ChatGPT, a large language model trained by OpenAI. Follow the user'\''s instructions carefully. Respond using markdown."},{"role":"user","content":"This is my bankcard number: 42424242 42424 4242, but it'\''s not working. Who can help me?"}]}'
 ```
 
 ## Local development
+
-.env is supported in local development. Create a .env file in the root directory with the following content:
-```
+.env files are supported in local development. Create a .env file in the root directory with the following content, then run the server:
+
+```shell
 ENV=development
 go run main.go
 ```
 
 ## Example test-client
-```
+```shell
 npm install
 npx tsc src/index.ts
 export OPENAI_API_KEY=
 node src/index.js
-```
+```
\ No newline at end of file
diff --git a/cmd/cli.go b/cmd/cli.go
index 639f514..2343533 100644
--- a/cmd/cli.go
+++ b/cmd/cli.go
@@ -2,13 +2,14 @@ package cmd
 
 import (
 	"fmt"
-	"github.com/openshieldai/openshield/lib"
-	"github.com/openshieldai/openshield/server"
-	"github.com/spf13/cobra"
 	"os"
 	"os/signal"
 	"syscall"
 	"time"
+
+	"github.com/openshieldai/openshield/lib"
+	"github.com/openshieldai/openshield/server"
+	"github.com/spf13/cobra"
 )
 
 var rootCmd = &cobra.Command{
diff --git a/cmd/mock_data.go b/cmd/mock_data.go
index af9a40a..3835f14 100644
--- a/cmd/mock_data.go
+++ b/cmd/mock_data.go
@@ -2,13 +2,14 @@ package cmd
 
 import (
 	"fmt"
+	"math/rand"
+	"reflect"
+	"strings"
+
 	"github.com/go-faker/faker/v4"
 	"github.com/openshieldai/openshield/lib"
 	"github.com/openshieldai/openshield/models"
 	"gorm.io/gorm"
-	"math/rand"
-	"reflect"
-	"strings"
 )
 
 var generatedTags []string
@@ -38,11 +39,33 @@
 func createMockData() {
 	db := lib.DB()
 	createMockTags(db, 10)
 	createMockRecords(db, &models.AiModels{}, 1)
-	createMockRecords(db, &models.ApiKeys{}, 1)
+	apiKey := createMockRecords(db, &models.ApiKeys{}, 1)
 	createMockRecords(db, &models.AuditLogs{}, 1)
 	createMockRecords(db, &models.Products{}, 1)
 	createMockRecords(db, &models.Usage{}, 1)
 	createMockRecords(db, &models.Workspaces{}, 1)
+
+	// Print the created ApiKey with more visibility
+	if apiKey != nil {
+		fmt.Println("\n" + strings.Repeat("=", 50))
+		fmt.Println("šŸ”‘ CREATED API KEY šŸ”‘")
+		fmt.Println(strings.Repeat("=", 50))
+		fmt.Printf("%s\n", strings.Repeat("-", 30))
+		fmt.Printf("| %-26s |\n", "API Key Details")
+		fmt.Printf("%s\n", strings.Repeat("-", 30))
+		v := reflect.ValueOf(apiKey).Elem()
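+		// createMockRecords returns interface{}; Elem() dereferences the
+		// *models.ApiKeys pointer so FieldByName can look fields up by
+		// name below, skipping any field the struct does not define.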
+		fieldsToShow := []string{"ProductID", "Status", "ApiKey"}
+		for _, fieldName := range fieldsToShow {
+			field := v.FieldByName(fieldName)
+			if field.IsValid() {
+				fmt.Printf("| %-12s: %-11v |\n", fieldName, field.Interface())
+			}
+		}
+		fmt.Printf("%s\n", strings.Repeat("-", 30))
+		fmt.Println(strings.Repeat("=", 50) + "\n")
+	} else {
+		fmt.Println("\nāŒ No ApiKey was created. āŒ")
+	}
 }
 
 func createMockTags(db *gorm.DB, count int) {
@@ -79,7 +102,8 @@ func getRandomTags() string {
 	return strings.Join(selectedTags, ",")
 }
 
-func createMockRecords(db *gorm.DB, model interface{}, count int) {
+func createMockRecords(db *gorm.DB, model interface{}, count int) interface{} {
+	var createdModel interface{}
 	for i := 0; i < count; i++ {
 		newModel := reflect.New(reflect.TypeOf(model).Elem()).Interface()
 		if err := faker.FakeData(newModel); err != nil {
@@ -97,9 +121,13 @@
 		result := db.Create(newModel)
 		if result.Error != nil {
 			fmt.Printf("error inserting fake data for %T: %v\n", newModel, result.Error)
+		} else {
+			createdModel = newModel
 		}
 	}
+	return createdModel
 }
+
 func setValueOfObject(obj interface{}, fieldName string, value interface{}) {
 	field := reflect.ValueOf(obj).Elem().FieldByName(fieldName)
 	if field.IsValid() && field.CanSet() {
diff --git a/demo/.env.example b/demo/.env.example
new file mode 100644
index 0000000..830f0c5
--- /dev/null
+++ b/demo/.env.example
@@ -0,0 +1,2 @@
+OPENAI_API_KEY="sk-"
+HUGGINGFACE_API_KEY="hf_"
\ No newline at end of file
diff --git a/demo/.env_example b/demo/.env_example
deleted file mode 100644
index ef1d76e..0000000
--- a/demo/.env_example
+++ /dev/null
@@ -1,2 +0,0 @@
-OPENAI_API_KEY=""
-HUGGINGFACE_API_KEY=""
\ No newline at end of file
diff --git a/demo/demo_config.yaml b/demo/demo_config.yaml
new file mode 100644
index 0000000..94e4016
--- /dev/null
+++ b/demo/demo_config.yaml
@@ -0,0 +1,41 @@
+settings:
+  network:
+    port: 3005
+  database:
+    uri: "postgres://postgres:openshield@postgres:5432/postgres"
+    auto_migration: true
+  redis:
+    uri: "redis://redis:6379/0"
+    ssl: false
+  cache:
+    enabled: true
+    ttl: 3600
+  rate_limiting:
+    enabled: true
+    max: 100
+    window: 60
+    expiration: 60
+  audit_logging:
+    enabled: true
+  usage_logging:
+    enabled: true
+  rule_server:
+    url: "http://ruleserver:8000"
+
+providers:
+  openai:
+    enabled: true
+  huggingface:
+    enabled: true
+
+
+rules:
+  input:
+    - name: "prompt_injection_example"
+      type: "prompt_injection"
+      enabled: true
+      config:
+        plugin_name: "prompt_injection_llm"
+        threshold: 0.85
+      action:
+        type: "block"
diff --git a/demo/docker-compose.yml b/demo/docker-compose.yml
index 81102ad..cd78e60 100644
--- a/demo/docker-compose.yml
+++ b/demo/docker-compose.yml
@@ -6,7 +6,7 @@ services:
     environment:
       - ENV=production
       - PORT="3005"
-      - DEMO_MODE="true"
+      - DEMO_MODE=true
      - OPENAI_API_KEY=${OPENAI_API_KEY}
       - HUGGINGFACE_API_KEY=${HUGGINGFACE_API_KEY}
     ports:
@@ -15,7 +15,10 @@
-      - redis
-      - postgres
+      redis:
+        condition: service_started
+      postgres:
+        condition: service_healthy
     volumes:
-      - ./config.yaml:/app/config.yaml
+      - ./demo_config.yaml:/app/config.yaml
 
   ruleserver:
     build:
@@ -35,9 +38,14 @@
       POSTGRES_PASSWORD: openshield
     ports:
       - "5433:5432"
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U postgres"]
+      interval: 2s
+      timeout: 2s
+      retries: 5
 
   adminer:
     image: adminer
     restart: always
     ports:
-      - 8085:8080
\ No newline at end of file
+      - 8085:8080
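
Note for reviewers: the reflection logic added in cmd/mock_data.go can be hard to follow in diff form. Here is a minimal, self-contained sketch of the same pattern; the ApiKeys struct and the sample values are simplified, hypothetical stand-ins for models.ApiKeys, purely for illustration.

```go
package main

import (
	"fmt"
	"reflect"
)

// ApiKeys is a simplified stand-in for models.ApiKeys, used only to
// illustrate the reflection pattern from createMockData.
type ApiKeys struct {
	ProductID int
	Status    string
	ApiKey    string
}

func main() {
	// createMockRecords returns interface{} wrapping a pointer to the model.
	var record interface{} = &ApiKeys{ProductID: 1, Status: "active", ApiKey: "demo-key"}

	// Elem() dereferences the pointer so fields can be looked up by name.
	v := reflect.ValueOf(record).Elem()
	for _, fieldName := range []string{"ProductID", "Status", "ApiKey"} {
		field := v.FieldByName(fieldName)
		if field.IsValid() { // a zero Value means the struct lacks this field
			fmt.Printf("| %-12s: %-11v |\n", fieldName, field.Interface())
		}
	}
}
```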