Update docs (autogenhub#11)
* docs update WIP

* getting started guide updated

* update getting started guide

* clarify github app creation

* add webhook secret to getting started guide and gh-flow app

* restructure Readme

* fix the Organization assumption

* add mermaid diagram of the event flow

* devtunnel feature to devcontainer

* throw all the exceptions and add the history to the prompt

* Update github-flow.md

* update readme
kostapetan authored Mar 14, 2024
1 parent 36951aa commit baabd41
Showing 23 changed files with 294 additions and 429 deletions.
6 changes: 4 additions & 2 deletions .devcontainer/Dockerfile
Original file line number Diff line number Diff line change
@@ -1,5 +1,7 @@
FROM mcr.microsoft.com/devcontainers/dotnet:8.0
# Install the xz-utils package
RUN apt-get update && apt-get install -y xz-utils ca-certificates curl gnupg
# RUN apt-get update && apt-get install -y xz-utils ca-certificates curl gnupg

RUN curl -fsSL https://aka.ms/install-azd.sh | bash
# RUN curl -fsSL https://aka.ms/install-azd.sh | bash

# RUN curl -sL https://aka.ms/DevTunnelCliInstall | bash
15 changes: 6 additions & 9 deletions .devcontainer/devcontainer.json
@@ -8,17 +8,14 @@
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"features": {
"ghcr.io/devcontainers/features/azure-cli:1": {},
"ghcr.io/devcontainers/features/common-utils:2": {},
"ghcr.io/devcontainers/features/common-utils:2": {
"configureZshAsDefaultShell" : true
},
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
"ghcr.io/azure/azure-dev/azd:latest": {},
"ghcr.io/devcontainers/features/node:1": {
"nodeGypDependencies": true,
"version": "18",
"nvmVersion": "latest"
},
"ghcr.io/azure/azure-dev/azd:0": {
"version": "stable"
}
"ghcr.io/devcontainers/features/node:1": {},
"ghcr.io/azure/azure-dev/azd:0": {},
"ghcr.io/stuartleeks/dev-container-features/dev-tunnels:0": {}
},
"postCreateCommand": "bash .devcontainer/startup.sh",
"hostRequirements": {
10 changes: 5 additions & 5 deletions .devcontainer/startup.sh
@@ -1,8 +1,8 @@
#!/bin/bash

curl -k https://localhost:8081/_explorer/emulator.pem > ~/emulatorcert.crt
sudo cp ~/emulatorcert.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
sleep 10
# curl -k https://localhost:8081/_explorer/emulator.pem > ~/emulatorcert.crt
# sudo cp ~/emulatorcert.crt /usr/local/share/ca-certificates/
# sudo update-ca-certificates
# sleep 10
dotnet restore sk-dev-team.sln
dotnet build util/seed-memory/seed-memory.csproj && dotnet util/seed-memory/bin/Debug/net7.0/seed-memory.dll
# dotnet build util/seed-memory/seed-memory.csproj && dotnet util/seed-memory/bin/Debug/net7.0/seed-memory.dll
3 changes: 2 additions & 1 deletion .gitignore
@@ -496,4 +496,5 @@ elsa-core/
sk-azfunc-server/local.settings.json
.azure
temp

.mono/**
**/values.xml
162 changes: 39 additions & 123 deletions README.md
@@ -1,127 +1,43 @@

# sk-dev-team

# Build a Virtual AI Dev Team using Semantic Kernel Skills

# Goal

From a natural language specification, set out to integrate a team of AI copilot skills into your team’s dev process, either for discrete tasks on an existing repo (unit tests, pipeline expansions, PRs for specific intents), developing a new feature, or even building an application from scratch. Starting from an existing repo and a broad statement of intent, work with multiple AI copilot dev skills, each of which has a different emphasis - from architecture, to task breakdown, to plans for individual tasks, to code output, code review, efficiency, documentation, build, writing tests, setting up pipelines, deployment, integration tests, and then validation.
The system will present a view that facilitates chain-of-thought coordination across multiple trees of reasoning with the dev team skills.

## Status

* You can iterate on building a workflow for your Semantic Kernel AI dev skills using the Elsa Workflows designer and run these workflows to see the results. The workflows do not yet support adding memory context.
* You can use the CLI project to run the SK dev skills from the command line. The CLI supports using the [Microsoft Azure Well-Architected Framework](https://learn.microsoft.com/en-us/azure/well-architected/) as memory context for the skill invocations.

## Trying it out

### Elsa.SemanticKernel

SemanticKernel Activity Provider for Elsa Workflows 3.x

The project supports running [Microsoft Semantic Kernel](https://github.com/microsoft/semantic-kernel) Skills as workflows using [Elsa Workflows](https://v3.elsaworkflows.io). You can build the workflows as .NET code or in the visual designer.
To run the designer:

```bash
> cd WorkflowsApp
> cp .env_example .env
# Edit the .env file to choose your AI model, add your API Endpoint, and secrets.
> . ./.env
> dotnet build
> dotnet run
# Open browser to the URI in the console output
```

By default you can use "admin" and "password" to log in. Please review [Workflow Security](https://v3.elsaworkflows.io/docs/installation/aspnet-apps-workflow-server) for info on securing the app, using API tokens, and more.

To [invoke](https://v3.elsaworkflows.io/docs/guides/invoking-workflows) a workflow, it must first be published: when your workflow is ready, click the "Publish" button. If the workflow has a trigger activity, you can use that to start it; you can also execute it via the API. For the API route, find the workflow definition ID, then from a command line use `curl`:

```bash
> curl --location 'https://localhost:5001/elsa/api/workflow-definitions/{workflow_definition_id}/execute' \
--header 'Content-Type: application/json' \
--header 'Authorization: ApiKey {api_key}' \
--data '{
}'
```

Once you have the app running locally, you can log in (admin/password; see [Elsa Workflows](https://v3.elsaworkflows.io) for info about securing the app). Then click "new workflow" to begin building your workflow with Semantic Kernel skills.

1. Drag workflow Activity blocks into the designer, and examine the settings.
2. Connect the Activities to specify an order of operations.
3. You can use Workflow Variables to pass state between activities.
1. Create a Workflow Variable, "MyVariable"
2. Click on the Activity that you want to use to populate the variable.
3. In the Settings box for the Activity, Click "Output"
4. Set the "Output" to the variable chosen.
5. Click the Activity that will use the variable. Click on "Settings".
6. Find the text box representing the variable that you want to populate, in this case usually "input".
7. Click the "..." widget above the text box, and select "javascript"
8. Set the value of the text box to

```javascript
`${getMyVariable()}`
```

9. Run the workflow.

## Via CLI

The easiest way to run the project is in Codespaces. Codespaces will start a qdrant instance for you.

1. Create a new codespace from the *code* button on the main branch.
2. Once the code space setup is finished, from the terminal:

```bash
> cd cli
cli> cp ../WorkflowsApp/.env_example ./.env
# Edit the .env file to choose your AI model, add your API Endpoint, and secrets.
cli> . ./.env
cli> dotnet build
cli> dotnet run --file util/ToDoListSamplePrompt.txt do it
```

You will find the output in the *output/* directory.

## Proposed UX

* Possible UI: Start with an existing repo (GH or ADO), either populated or empty, and API Keys / config for access – once configured / loaded split view between three columns:
* Settings/History/Tasks (allows browsing into each of the chats with a copilot dev team role) | [Central Window Chat interface with Copilot DevTeam] | Repo browsing/editing
* Alternate interface will be via VS Code plugin/other IDE plugins, following the plugin idiom for each IDE
* Settings include teams channel for conversations, repo config and api keys, model config and api keys, and any desired prompt template additions
* CLI: start simple with a CLI that can be passed a file as prompt input and takes optional arguments as to which skills to invoke
* User begins with specifying a repository and then statement of what they want to accomplish, natural language, as simple or as detailed as needed.
* SK DevTeam skill will use dialog to refine the intent as needed, returns a plan, proposes necessary steps
* User approves the plan or gives feedback, requests iteration
* Plan is parceled out to the appropriate further skills
* Eg, for a new app:
* Architecture is passed to DevLead skill gives plan/task breakdown.
* DevLead breaks down tasks into smaller tasks, each of these is fed to a skill to decide if it is a single code module or multiple
* Each module is further fed to a dev lead to break down again or specify a prompt for a coder
* Each code module prompt is fed to a coder
* Each module output from a coder is fed to a code reviewer (with context, specific goals)
* Each reviewer proposes changes, which result in a new prompt for the original coder
* Changes are accepted by the coder
* Each module fed to a builder
* If it doesn’t build sent back to review
* (etc)

## Proposed Architecture

* SK Kernel Service – ASP.NET Core Service with REST API
* SK Skills:
* PM Skill – generates pot, word docs, describing app,
* Designer Skill – mockups?
* Architect Skill – proposes overall arch
* DevLead Skill – proposes task breakdown
* CoderSkill – builds code modules for each task
* ReviewerSkill – improves code modules
* TestSkill – writes tests
* Etc
* Web app: prompt front end and wizard style editor of app
* Build service sandboxes – using branches and actions/pipelines 1st draft; Alternate – ephemeral build containers
* Logging service streaming back to azure logs analytics, app insights, and teams channel
* Deployment service – actions/pipelines driven
* Azure Dev Skill – lean into azure integrations – crawl the azure estate to inventory a tenant’s existing resources to memory and help inform new code. Eg: you have a large azure sql estate? Ok, most likely you want to wire your new app to one of those dbs, etc….
# AI Agents

Build a Dev Team using event-driven agents.
This project is an experiment and is not intended for production use.

# Background - initial idea

From a natural language specification, set out to integrate a team of AI agents into your team’s dev process, either for discrete tasks on an existing repo (unit tests, pipeline expansions, PRs for specific intents), developing a new feature, or even building an application from scratch. Starting from an existing repo and a broad statement of intent, work with multiple AI agents, each of which has a different emphasis - from architecture, to task breakdown, to plans for individual tasks, to code output, code review, efficiency, documentation, build, writing tests, setting up pipelines, deployment, integration tests, and then validation.
The system will present a view that facilitates chain-of-thought coordination across multiple trees of reasoning with the dev team agents.

# Emerging framework - AI Agents

While building the dev team agents, we stumbled upon a few patterns and abstractions that we think are useful for building a variety of agentic systems.
At the moment they reside in `src/libs/Microsoft.AI.DevTeam`, but we plan to move them to a separate repo and nuget package.

# GitHub dev agents demo

https://github.com/microsoft/azure-openai-dev-skills-orchestrator/assets/10728102/cafb1546-69ab-4c27-aaf5-1968313d637f

## How it works

* The user begins by creating an issue and stating what they want to accomplish in natural language, as simple or as detailed as needed.
* The product manager agent responds with a README, which can be iterated upon.
* The user approves the README or gives feedback via issue comments.
* Once the README is approved, the user closes the issue and the README is committed to a PR.
* The developer lead agent responds with a decomposed plan for development, which can also be iterated upon.
* The user approves the plan or gives feedback via issue comments.
* Once the plan is approved, the user closes the issue and the plan is used to break the work down across different developer agents.
* The developer agents respond with code, which can be iterated upon.
* The user approves the code or gives feedback via issue comments.
* Once the code is approved, the user closes the issue and the code is committed to a PR.
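The issue-driven loop above can be sketched as a sequence diagram. This is a simplified view; the participant names are illustrative labels, not the agents' actual class names:

```mermaid
sequenceDiagram
    participant User
    participant PM as Product manager agent
    participant Lead as Dev lead agent
    participant Devs as Developer agents
    User->>PM: Create issue describing the goal
    PM-->>User: README draft (iterate via comments)
    User->>PM: Approve and close issue
    PM->>Lead: README committed to PR
    Lead-->>User: Decomposed development plan (iterate via comments)
    User->>Lead: Approve and close issue
    Lead->>Devs: Plan broken down into tasks
    Devs-->>User: Code per task (iterate via comments)
    User->>Devs: Approve and close issue
    Devs-->>User: Code committed to the PR
```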

# How to run the GitHub dev agents locally

Check [the getting started guide](./docs/github-flow-getting-started.md)

# Other scenarios using the AI agents

## TODO

# Contributing

64 changes: 64 additions & 0 deletions docs/elsa-workflows.md
@@ -0,0 +1,64 @@
# SemanticKernel Activity Provider for Elsa Workflows 3.x

The project supports running [Microsoft Semantic Kernel](https://github.com/microsoft/semantic-kernel) Skills as workflows using [Elsa Workflows](https://v3.elsaworkflows.io). You can build the workflows as .NET code or in the visual designer.
To run the designer:

```bash
> cd WorkflowsApp
> cp .env_example .env
# Edit the .env file to choose your AI model, add your API Endpoint, and secrets.
> . ./.env
> dotnet build
> dotnet run
# Open browser to the URI in the console output
```
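The `.env` file referenced above is plain shell variable assignments that you source into your session before `dotnet run`. A minimal sketch, assuming illustrative variable names (use whichever keys your `.env_example` actually defines):

```shell
# Write an illustrative .env; the variable names below are assumptions,
# not necessarily the keys the real .env_example defines.
cat > .env <<'EOF'
AI_MODEL=gpt-4
AI_ENDPOINT=https://example.openai.azure.com/
AI_API_KEY=replace-me
EOF

# 'set -a' marks every sourced assignment for export, so child
# processes such as 'dotnet run' can see them.
set -a
. ./.env
set +a

echo "Model: $AI_MODEL"
```

`. ./.env` alone sets the variables only in the current shell; wrapping it in `set -a` / `set +a` is what makes them visible to the app process.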

By default you can use "admin" and "password" to log in. Please review [Workflow Security](https://v3.elsaworkflows.io/docs/installation/aspnet-apps-workflow-server) for info on securing the app, using API tokens, and more.

To [invoke](https://v3.elsaworkflows.io/docs/guides/invoking-workflows) a workflow, it must first be published: when your workflow is ready, click the "Publish" button. If the workflow has a trigger activity, you can use that to start it; you can also execute it via the API. For the API route, find the workflow definition ID, then from a command line use `curl`:

```bash
> curl --location 'https://localhost:5001/elsa/api/workflow-definitions/{workflow_definition_id}/execute' \
--header 'Content-Type: application/json' \
--header 'Authorization: ApiKey {api_key}' \
--data '{
}'
```

Once you have the app running locally, you can log in (admin/password; see [Elsa Workflows](https://v3.elsaworkflows.io) for info about securing the app). Then click "new workflow" to begin building your workflow with Semantic Kernel skills.

1. Drag workflow Activity blocks into the designer, and examine the settings.
2. Connect the Activities to specify an order of operations.
3. You can use Workflow Variables to pass state between activities.
1. Create a Workflow Variable, "MyVariable"
2. Click on the Activity that you want to use to populate the variable.
3. In the Settings box for the Activity, Click "Output"
4. Set the "Output" to the variable chosen.
5. Click the Activity that will use the variable. Click on "Settings".
6. Find the text box representing the variable that you want to populate, in this case usually "input".
7. Click the "..." widget above the text box, and select "javascript"
8. Set the value of the text box to

```javascript
`${getMyVariable()}`
```

9. Run the workflow.

## Run via codespaces

The easiest way to run the project is in Codespaces. Codespaces will start a qdrant instance for you.

1. Create a new codespace from the *code* button on the main branch.
2. Once the code space setup is finished, from the terminal:

```bash
> cd cli
cli> cp ../WorkflowsApp/.env_example ./.env
# Edit the .env file to choose your AI model, add your API Endpoint, and secrets.
cli> . ./.env
cli> dotnet build
cli> dotnet run --file util/ToDoListSamplePrompt.txt do it
```

You will find the output in the *output/* directory.
1 change: 0 additions & 1 deletion docs/github-flow-architecture.md

This file was deleted.

