Konveyor AI (kai)

Konveyor AI (kai) is Konveyor's approach to easing the modernization of application source code to a new target by leveraging LLMs, guided by static code analysis and augmented with data in Konveyor about how an Organization solved similar problems in the past.

Pronunciation of 'kai': https://www.howtopronounce.com/kai

Blog Posts

Approach

Our approach is to use static code analysis to find the areas in source code that need to be transformed. 'kai' will iterate through analysis information and work with LLMs to generate code changes to resolve incidents identified from analysis.

This approach does not require fine-tuning of LLMs. Instead, we augment an LLM's knowledge via the prompt, similar to RAG approaches: we leverage external data from inside Konveyor and from analysis rules to help the LLM construct better results.

For example, analyzer-lsp Rules such as these (Java EE to Quarkus rulesets) are leveraged to guide an LLM in updating a legacy Java EE application to Quarkus.

Note: For the purposes of this initial prototype we are using Java EE to Quarkus as an example. That is an arbitrary choice to show the viability of this approach; the code and the approach will work with other targets that Konveyor has rules for.

What happens technically to make this work?

  • Konveyor contains information related to an Organization's Application Portfolio, a view into all of the applications an Organization is managing. This view includes a history of analysis information over time, access to each application's source repositories, and metadata that tracks work in progress/completed with regard to each application being migrated to a given technology.

  • When 'Konveyor AI' wants to fix a specific issue in a given application, it mines data in Konveyor to extract two sources of information to inject into a given LLM prompt.

    1. Static Code Analysis

      • We pinpoint where to begin work by leveraging static code analysis to guide us

      • The static code analysis is informed by a collection of crowdsourced knowledge contained in our rulesets, augmented by custom rules

      • We include analysis metadata in the prompt to give the LLM more context, such as:

        remote-ejb-to-quarkus-00000:
          description: Remote EJBs are not supported in Quarkus
          incidents:
          - uri: file:///tmp/source-code/src/main/java/com/redhat/coolstore/service/ShippingService.java
            message: "Remote EJBs are not supported in Quarkus, and therefore its use must be removed and replaced with REST functionality. In order to do this:\n 1. Replace the `@Remote` annotation on the class with a `@jakarta.ws.rs.Path(\"<endpoint>\")` annotation. An endpoint must be added to the annotation in place of `<endpoint>` to specify the actual path to the REST service.\n 2. Remove `@Stateless` annotations if present. Given that REST services are stateless by nature, it makes it unnecessary.\n 3. For every public method on the EJB being converted, do the following:\n - Annotate the method with `@jakarta.ws.rs.GET`\n - Annotate the method with `@jakarta.ws.rs.Path(\"<endpoint>\")` and give it a proper endpoint path. As a rule of thumb... <snip for readability>"
            lineNumber: 12
            variables:
              file: file:///tmp/source-code/src/main/java/com/redhat/coolstore/service/ShippingService.java
              kind: Class
              name: Stateless
              package: com.redhat.coolstore.service
          links:
          - url: https://jakarta.ee/specifications/restful-ws/
            title: Jakarta RESTful Web Services

    2. Solved Examples - these are source code diffs that show an LLM how a similar problem was seen in another application the Organization has and how that Organization decided to fix it.

      • We mine data Konveyor has stored from the Application Hub to find where other applications have fixed the same rule violations, learn how they fixed them, and pass that information into the prompt to aid the LLM
      • Leveraging how the issue was seen and fixed in the past gives the LLM extra context and yields a higher-quality result.
      • This is an early prompt we created to help give a feel for this in action, along with the result we got back from an LLM

Pre-Requisites

Access to a Large Language Model (LLM)

  • If you want to run Kai against an LLM you will likely need to configure an LLM API key to access your service (unless you are running against a local model)
    • We do provide a means of running Kai against previously cached data from a few models to aid demo flows. This allows you to run through the steps using previously cached data without requiring access to an LLM. Note that if you do not provide LLM API access, the DEMO_MODE flow will only be able to replay previously cached responses.
      • We call this 'DEMO_MODE', i.e. DEMO_MODE=true make run-server
  • Note that results vary widely between models.

LLM API Keys

  • We expect that you have configured the environment variables required for the LLM you are attempting to use (a shell example follows this list).
    • For example:
      • OpenAI service requires: OPENAI_API_KEY=my-secret-api-key-value
      • IBM BAM service requires: GENAI_KEY=my-secret-api-key-value
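
For example, in a POSIX shell you might export the key for your chosen service before starting the server (a minimal sketch; the values are placeholders):

  # Export only the variable required by the provider you are using
  export OPENAI_API_KEY="my-secret-api-key-value"   # OpenAI service
  export GENAI_KEY="my-secret-api-key-value"        # IBM BAM service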

IBM BAM Service

OpenAI Service

  • If you have a valid API Key for OpenAI you may use this with Kai.
  • Ensure you have OPENAI_API_KEY=my-secret-api-key-value defined in your shell

Selecting a Model

We offer configuration choices for several models via config.toml, which correspond to the choices defined in kai/model_provider.py.

To change which LLM you are targeting, open config.toml and change the [models] section to one of the following:

IBM served granite

[models]
  provider = "ChatIBMGenAI"

  [models.args]
  model_id = "ibm/granite-13b-chat-v2"

IBM served mistral

[models]
  provider = "ChatIBMGenAI"

  [models.args]
  model_id = "mistralai/mixtral-8x7b-instruct-v01"

IBM served llama2

[models]
  provider = "ChatIBMGenAI"

  [models.args]
  model_id = "meta-llama/llama-2-13b-chat"

IBM served llama3

  # Note:  llama3 complains if we use more than 2048 tokens
  # See:  https://github.com/konveyor-ecosystem/kai/issues/172
[models]
  provider = "ChatIBMGenAI"

  [models.args]
  model_id = "meta-llama/llama-3-70b-instruct"
  parameters.max_new_tokens = 2048

Ollama

[models]
  provider = "ChatOllama"

  [models.args]
  model = "mistral"

OpenAI GPT 4

[models]
  provider = "ChatOpenAI"

  [models.args]
  model = "gpt-4"

OpenAI GPT 3.5

[models]
  provider = "ChatOpenAI"

  [models.args]
  model = "gpt-3.5-turbo"

Kai will also work with OpenAI API Compatible alternatives.
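
A hypothetical example of pointing Kai at an OpenAI API compatible endpoint follows; it assumes your service exposes an OpenAI-style API and that the provider accepts a base_url argument, so adjust the names and values to match your setup:

[models]
  provider = "ChatOpenAI"

  [models.args]
  # Hypothetical model name and endpoint; substitute your own
  model = "my-served-model"
  base_url = "http://localhost:8000/v1"

Depending on the service, you may also still need OPENAI_API_KEY set in your shell.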

Setup

Running Kai's backend involves running two processes (a consolidated shell sketch of the full sequence follows the numbered steps below):

  • Postgres instance which we deliver via container
  • Backend REST API server

Steps

  1. Clone the repo and ensure you have the virtual environment set up
    1. git clone https://github.com/konveyor-ecosystem/kai.git
    2. cd kai
    3. python3 -m venv env
      • We've tested this with Python 3.11 and 3.12
    4. source env/bin/activate
    5. pip install -r ./requirements.txt
    6. pip install -e .
  2. Run the Postgres DB via podman
    1. Open a new shell tab
    2. source env/bin/activate
    3. Let this run in the background: make run-postgres
  3. Run the Kai server in the background
    1. Open a new shell tab
    2. source env/bin/activate
    3. Let this run in the background: make run-server
      • If you want to run with cached LLM responses run with DEMO_MODE=true
        • Replace the above command and instead run: DEMO_MODE=true make run-server
        • The DEMO_MODE option will cache responses and play them back on subsequent runs.
      • If you want to run with debug information set the environment variable LOG_LEVEL=debug
        • Example: LOG_LEVEL=debug make run-server
  4. Load data into the database
    1. source env/bin/activate
    2. Fetch sample apps: pushd samples; ./fetch_apps.py; popd
    3. make load-data
      • This will complete in ~1-2 minutes
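
Taken together, the steps above look roughly like the following shell session (a sketch that simply strings together the commands documented above; run the Postgres and server steps in their own shell tabs):

  # 1. Clone the repo and set up the virtual environment (tested with Python 3.11 and 3.12)
  git clone https://github.com/konveyor-ecosystem/kai.git
  cd kai
  python3 -m venv env
  source env/bin/activate
  pip install -r ./requirements.txt
  pip install -e .

  # 2. In a new shell tab: run the Postgres DB via podman
  source env/bin/activate
  make run-postgres

  # 3. In another shell tab: run the Kai server
  #    (optionally DEMO_MODE=true or LOG_LEVEL=debug, as described above)
  source env/bin/activate
  make run-server

  # 4. In a shell with the virtual environment activated: load data into the database (~1-2 minutes)
  pushd samples; ./fetch_apps.py; popd
  make load-data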

How to use Kai?

Client Usage

  • There are a few ways to use Kai

Demo

Demo Overview

  • We have a demo that walks through the migration of a sample application written for EAP with Java EE, bringing it to Quarkus.
  • Sample Application

What are the general steps of the demo?

  1. We launch VSCode with our Kai VS Code extension from konveyor-ecosystem/kai-vscode-plugin
  2. We open a git checkout of a sample application: coolstore
  3. We run Kantra inside of VSCode to do an analysis of the application to learn what issues are present that need to be addressed before migrating to Quarkus
  4. We view the analysis information in VSCode
  5. We look at the impacted files and choose what files/issues we want to fix
  6. We click 'Generate Fix' in VSCode on a given file/issue and wait ~45 seconds for the Kai backend to generate a fix
  7. We view the suggested fix as a 'Diff' in VSCode
  8. We accept the generated fix
  9. The file in question has now been updated
  10. We move onto the next file/issue and repeat

Demo Video


Guided walk-through using Kai

  • See docs/demo.md for a guided walkthrough of how to use Kai to aid in a Java EE to Quarkus migration

Notes on DEMO_MODE and cached responses

The kai server will always cache responses in the kai/data/vcr/<application_name>/<model> directory. In non-demo mode, these responses will be overwritten whenever a new request is made. When the server is run with DEMO_MODE=true, these responses will be played back. The request will be matched on everything except for authorization headers, cookies, content-length and request body.
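
For example, cached responses for the coolstore sample app and a given model would live under a path like kai/data/vcr/coolstore/gpt-3.5-turbo/ (the model directory name here is illustrative).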

DEMO_MODE Cached Responses

  • We do not actively maintain cached responses for all models/requests.
  • You may look at: kai/data/vcr/coolstore to see a list of what models have cached responses.
    • In general when we cache responses we are running: example/run_demo.py and saving those responses.
      • This corresponds to a 'KAI Fix All' being run per file in Analysis.
  • When running from the IDE and attempting to use cached responses, note that we likely only have cached responses for 'Fix All'; we do not have cached responses for individual issues in a file.

DEMO_MODE Updating Cached Responses

There are two ways to record new responses:

  1. Run the requests while the server is not in DEMO_MODE
  2. Delete the specific existing cached response (under kai/data/vcr/<application_name>/<model>/<source-file-path-with-slashes-replaced-with-dashes.java.yaml>), then rerun (see the example below). When a cached response does not exist, a new one will be recorded and played back on subsequent runs.
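
For example, with hypothetical application, model, and source-file names (substitute your own):

  # Hypothetical cached-response path; file names are the source path with slashes replaced by dashes
  rm kai/data/vcr/coolstore/gpt-3.5-turbo/src-main-java-com-redhat-coolstore-service-ShippingService.java.yaml
  # Restart the server with DEMO_MODE=true and re-send the request; the missing
  # response is recorded fresh and played back on subsequent runs
  DEMO_MODE=true make run-server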

Contributors

  • The information below is only needed for those who are looking to contribute, run e2e tests, etc.

Updating requirements.txt

  • If you are a developer working on Kai and you are updating requirements.txt, you will need to make some manual changes beyond just a pip freeze > ./requirements.txt: we have a few directives that address differences on 'darwin' systems, and these need to be preserved. Add them back manually after a 'freeze', since the freeze command is not aware of what already exists in requirements.txt. Please consult the diff against the prior version and note the extra directives for python_version and/or sys_platform (an illustrative example follows this note).
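
For illustration, such directives are environment markers appended to a requirement line; the package names below are hypothetical placeholders, and the real directives to preserve are the ones already present in the checked-in requirements.txt:

  # Hypothetical examples of the kind of markers to preserve after a freeze
  some-darwin-only-package==1.2.3; sys_platform == "darwin"
  some-backport-package==4.5.6; python_version < "3.12"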

Ensure you have the source code for the sample applications checked out locally

  1. cd ./samples
  2. ./fetch_apps.py
    • This will check out the sample app source code to: ./samples/sample_repos
      • This directory is in .gitignore

(OPTIONAL) Run an analysis of a sample app (example for MacOS)

Note: We have checked in analysis runs for all sample applications, so you do NOT need to run the analysis yourself. The instructions below are ONLY needed if you want to recreate the analysis; this is NOT required.

  1. Install podman so you can run Kantra for static code analysis
  2. cd samples
  3. ./fetch_apps.py # this will git clone example source code apps
  4. cd macos
  5. ./restart_podman_machine.sh # sets up the podman VM on MacOS so it will mount the host filesystem into the VM
  6. ./get_latest_kantra_cli.sh # fetches 'kantra' our analyzer tool and stores it in ../bin
  7. cd ..
  8. ./analyze_apps.py # Analyzes all sample apps we know about, in both the 'initial' and 'solved' states; expect this to run for ~2-3 hours.

Analysis data will be stored in: samples/analysis_reports/{APP_NAME}/<initial|solved>/output.yaml
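
For example, the initial-state analysis for the coolstore sample app would be stored at samples/analysis_reports/coolstore/initial/output.yaml.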

Tracing

The Kai server is able to capture LLM tracing information and write it to disk to aid debugging. Kai's tracing is currently simple: we write to disk but are not integrated with any LLM observability tools.

Tracing will gather information and write to various files under the 'logs/trace' directory.

Tracing can be enabled or disabled. It is enabled via:

  • Environment variable: TRACE=true

  • kai/config.toml

    trace_enabled = true
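
For example, to enable tracing when starting the server via the environment variable (using the same make target as in Setup):

  TRACE=true make run-server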
    

Example of information captured with tracing:

  • Prompt
  • LLM Result
  • Request Parameters
  • Exceptions
  • Duration of each request

Tracing info is written to a directory hierarchy of:

logs/trace/{model}/{app_name}/{src_file_path}/{batch_mode}/{timestamp_of_request}/{incident_batch_number}/{retry_attempt}

Example of hierarchy:

  └── trace
      └── gpt-3.5-turbo << MODEL ID>>
          └── coolstore << APP Name >>
              ├── pom.xml << Source File Path >>
              │   └── single_group << Incident Batch Mode >>
              │       └── 1719673609.8266618 << Start of Request Time Stamp >>
              │           ├── 1 << Incident Batch Number >>
              │           │   ├── 0 << Retry Attempt  >>
              │           │   │   └── llm_result << Contains the response from the LLM prior to us parsing >>
              │           │   ├── prompt << The formatted prompt prior to sending to LLM >>
              │           │   └── prompt_vars.json << The prompt variables which are injected into the prompt template >>
              │           ├── params.json << Request parameters >>
              │           └── timing << Duration of a Successful Request >>
              └── src
                  └── main
                      ├── java
                      │   └── com
                      │       └── redhat
                      │           └── coolstore
                      │               ├── model
                      │               │   ├── InventoryEntity.java
                      │               │   │   └── single_group
                      │               │   │       └── 1719673609.827135
                      │               │   │           ├── 1
                      │               │   │           │   ├── 0
                      │               │   │           │   │   └── llm_result
                      │               │   │           │   ├── prompt
                      │               │   │           │   └── prompt_vars.json
                      │               │   │           ├── params.json
                      │               │   │           └── timing
                      │               │   ├── Order.java
                      │               │   │   └── single_group
                      │               │   │       └── 1719673609.826999
                      │               │   │           ├── 1
                      │               │   │           │   ├── 0
                      │               │   │           │   │   └── llm_result
                      │               │   │           │   ├── prompt
                      │               │   │           │   └── prompt_vars.json
                      │               │   │           ├── params.json
                      │               │   │           └── timing

Linting

  1. Install trunk via: https://docs.trunk.io/check#install-the-cli
  2. Run the linters: trunk check
  3. Format code: trunk fmt

Testing

How to run regression tests

  1. Install the prerequisites in Setup and activate the python virtual environment
  2. Ensure you've checked out the source code for sample applications: Run: ./samples/fetch_sample_apps.sh
  3. Run: ./run_tests.sh

Prototype

This repository represents a prototype implementation as the team explores the solution space. The intent is for this work to remain in the konveyor-ecosystem as the team builds knowledge in the domain and experiments with solutions. As the approach matures, we will integrate this properly into Konveyor and seek to promote it to the github.com/konveyor organization.

Code of Conduct

Refer to Konveyor's Code of Conduct here.
