diff --git a/samples/apps/autogen-studio/.gitignore b/samples/apps/autogen-studio/.gitignore index e94e41454a85..e1e3c9942ec1 100644 --- a/samples/apps/autogen-studio/.gitignore +++ b/samples/apps/autogen-studio/.gitignore @@ -1,6 +1,7 @@ database.sqlite .cache/* autogenstudio/web/files/user/* +autogenstudio/test autogenstudio/web/files/ui/* OAI_CONFIG_LIST scratch/ diff --git a/samples/apps/autogen-studio/README.md b/samples/apps/autogen-studio/README.md index 49f7e3d657be..1e60b5362dba 100644 --- a/samples/apps/autogen-studio/README.md +++ b/samples/apps/autogen-studio/README.md @@ -15,6 +15,8 @@ Code for AutoGen Studio is on GitHub at [microsoft/autogen](https://github.com/m > AutoGen Studio is currently under active development and we are iterating quickly. Kindly consider that we may introduce breaking changes in the releases during the upcoming weeks, and also the `README` might be outdated. We'll update the `README` as soon as we stabilize the API. > [!NOTE] Updates +> April 17: The AutoGen Studio database layer has been rewritten to use [SQLModel](https://sqlmodel.tiangolo.com/) (Pydantic + SQLAlchemy). This provides entity linking (skills, models, agents and workflows are linked via association tables) and supports multiple [database backend dialects](https://docs.sqlalchemy.org/en/20/dialects/) available in SQLAlchemy (SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server). The backend database can be specified using a `--database-uri` argument when running the application. For example, `autogenstudio ui --database-uri sqlite:///database.sqlite` for SQLite and `autogenstudio ui --database-uri postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. + > March 12: Default directory for AutoGen Studio is now /home//.autogenstudio. You can also specify this directory using the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. 
This will store the database and other files in the specified directory e.g. `/path/to/folder/database.sqlite`. `.env` files in that directory will be used to set environment variables for the app. ### Capabilities / Roadmap @@ -84,7 +86,14 @@ autogenstudio ui --port 8081 ``` This will start the application on the specified port. Open your web browser and go to `http://localhost:8081/` to begin using AutoGen Studio. -AutoGen Studio also takes a `--host ` argument to specify the host address. By default, it is set to `localhost`. You can also use the `--appdir ` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to the directory where autogen pip package is installed. + +AutoGen Studio also takes several parameters to customize the application: + +- `--host ` argument to specify the host address. By default, it is set to `localhost`. +- `--appdir ` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to a `.autogenstudio` directory in the user's home directory. +- `--port ` argument to specify the port number. By default, it is set to `8080`. +- `--reload` argument to enable auto-reloading of the server when changes are made to the code. By default, it is set to `False`. +- `--database-uri` argument to specify the database URI. Example values include `sqlite:///database.sqlite` for SQLite and `postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. If this is not specified, the database URI defaults to a `database.sqlite` file in the `--appdir` directory. Now that you have AutoGen Studio installed and running, you are ready to explore its capabilities, including defining and modifying agent workflows, interacting with agents and sessions, and expanding agent skills. @@ -98,8 +107,6 @@ AutoGen Studio proposes some high-level concepts. 
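The `--appdir` and `--database-uri` flags listed above are exported as environment variables before the server starts (the variable names appear in `cli.py` later in this diff); a minimal sketch of that mapping, with `apply_cli_options` as an illustrative name rather than an actual function in the codebase:

```python
import os
from typing import Optional


def apply_cli_options(appdir: Optional[str] = None, database_uri: Optional[str] = None) -> None:
    """Export CLI options as environment variables, mirroring what the CLI does."""
    if appdir:
        os.environ["AUTOGENSTUDIO_APPDIR"] = appdir
    if database_uri:
        os.environ["AUTOGENSTUDIO_DATABASE_URI"] = database_uri


apply_cli_options(appdir="/path/to/folder", database_uri="sqlite:///database.sqlite")
print(os.environ["AUTOGENSTUDIO_DATABASE_URI"])  # sqlite:///database.sqlite
```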
**Skills**: Skills are functions (e.g., Python functions) that describe how to solve a task. In general, a good skill has a descriptive name (e.g. `generate_images`), extensive docstrings and good defaults (e.g., writing out files to disk for persistence and reuse). You can add new skills AutoGen Studio app via the provided UI. At inference time, these skills are made available to the assistant agent as they address your tasks. -AutoGen Studio comes with 3 example skills: `fetch_profile`, `find_papers`, `generate_images`. The default skills, agents and workflows are based on the [dbdefaults.json](autogentstudio/utils/dbdefaults.json) file which is used to initialize the database. - ## Example Usage Consider the following query. @@ -116,8 +123,6 @@ The agent workflow responds by _writing and executing code_ to create a python p > Note: You can also view the debug console that generates useful information to see how the agents are interacting in the background. - - ## Contribution Guide We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project: @@ -134,7 +139,7 @@ We welcome contributions to AutoGen Studio. We recommend the following general s **Q: How do I specify the directory where files(e.g. database) are stored?** -A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory e.g. `/path/to/folder/database.sqlite`. +A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database (by default) and other files in the specified directory e.g. `/path/to/folder/database.sqlite`. 
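When no `--database-uri` is given, the README describes the default as a `database.sqlite` file inside the app directory; the derivation can be pictured as below, where `default_database_uri` is a hypothetical helper, not an actual function in the codebase:

```python
import os


def default_database_uri(appdir: str) -> str:
    """Hypothetical helper: the default SQLite URI derived from --appdir."""
    return "sqlite:///" + os.path.join(appdir, "database.sqlite")


print(default_database_uri("/path/to/folder"))  # sqlite:////path/to/folder/database.sqlite
```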
**Q: Where can I adjust the default skills, agent and workflow configurations?** A: You can modify agent configurations directly from the UI or by editing the [dbdefaults.json](autogenstudio/utils/dbdefaults.json) file which is used to initialize the database. @@ -146,7 +151,7 @@ A: To reset your conversation history, you can delete the `database.sqlite` file A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages. **Q: Can I use other models with AutoGen Studio?** -Yes. AutoGen standardizes on the openai model api format, and you can use any api server that offers an openai compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details including `model`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model` is the deployment name or deployment id, and the `type` is "azure". +Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details including `model`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure". For other OSS models, we recommend using a server such as vLLM to instantiate an OpenAI-compliant endpoint. 
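The Azure OpenAI endpoint details named in the answer above can be pictured as a plain dictionary. This is illustrative only: the deployment name, endpoint URL, and API version below are placeholder assumptions, and the exact field labels in the UI form may differ.

```python
# Illustrative Azure OpenAI entry for an agent's llm_config (placeholder values).
azure_llm_entry = {
    "model": "my-gpt4-deployment",                      # the Azure deployment id / engine, not the base model name
    "api_key": "<azure-api-key>",                       # from the Azure portal
    "base_url": "https://<resource>.openai.azure.com",  # the resource endpoint
    "api_type": "azure",                                # the "model type" field in the UI
    "api_version": "2024-02-01",                        # an example version string
}
```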
**Q: The server starts but I can't access the UI** diff --git a/samples/apps/autogen-studio/autogenstudio/chatmanager.py b/samples/apps/autogen-studio/autogenstudio/chatmanager.py index 674ae3506a2a..84b85673f07c 100644 --- a/samples/apps/autogen-studio/autogenstudio/chatmanager.py +++ b/samples/apps/autogen-studio/autogenstudio/chatmanager.py @@ -4,14 +4,18 @@ import time from datetime import datetime from queue import Queue -from typing import Any, Dict, List, Optional, Tuple +from typing import Any, Dict, List, Optional, Tuple, Union import websockets from fastapi import WebSocket, WebSocketDisconnect -from .datamodel import AgentWorkFlowConfig, Message, SocketMessage -from .utils import extract_successful_code_blocks, get_modified_files, summarize_chat_history -from .workflowmanager import AutoGenWorkFlowManager +from .datamodel import Message, SocketMessage, Workflow +from .utils import ( + extract_successful_code_blocks, + get_modified_files, + summarize_chat_history, +) +from .workflowmanager import WorkflowManager class AutoGenChatManager: @@ -41,7 +45,7 @@ def chat( self, message: Message, history: List[Dict[str, Any]], - flow_config: Optional[AgentWorkFlowConfig] = None, + workflow: Any = None, connection_id: Optional[str] = None, user_dir: Optional[str] = None, **kwargs, @@ -59,78 +63,93 @@ def chat( """ # create a working director for workflow based on user_dir/session_id/time_hash - work_dir = os.path.join(user_dir, message.session_id, datetime.now().strftime("%Y%m%d_%H-%M-%S")) + work_dir = os.path.join( + user_dir, + str(message.session_id), + datetime.now().strftime("%Y%m%d_%H-%M-%S"), + ) os.makedirs(work_dir, exist_ok=True) # if no flow config is provided, use the default - if flow_config is None: - raise ValueError("flow_config must be specified") + if workflow is None: + raise ValueError("Workflow must be specified") - flow = AutoGenWorkFlowManager( - config=flow_config, + workflow_manager = WorkflowManager( + workflow=workflow, history=history, 
work_dir=work_dir, send_message_function=self.send, connection_id=connection_id, ) + workflow = Workflow.model_validate(workflow) + message_text = message.content.strip() start_time = time.time() - flow.run(message=f"{message_text}", clear_history=False) + workflow_manager.run(message=f"{message_text}", clear_history=False) end_time = time.time() metadata = { - "messages": flow.agent_history, - "summary_method": flow_config.summary_method, + "messages": workflow_manager.agent_history, + "summary_method": workflow.summary_method, "time": end_time - start_time, "files": get_modified_files(start_time, end_time, source_dir=work_dir), } - print("Modified files: ", len(metadata["files"])) - - output = self._generate_output(message_text, flow, flow_config) + output = self._generate_output(message_text, workflow_manager, workflow) output_message = Message( user_id=message.user_id, - root_msg_id=message.root_msg_id, role="assistant", content=output, - metadata=json.dumps(metadata), + meta=json.dumps(metadata), session_id=message.session_id, ) return output_message def _generate_output( - self, message_text: str, flow: AutoGenWorkFlowManager, flow_config: AgentWorkFlowConfig + self, + message_text: str, + workflow_manager: WorkflowManager, + workflow: Workflow, ) -> str: """ Generates the output response based on the workflow configuration and agent history. :param message_text: The text of the incoming message. - :param flow: An instance of `AutoGenWorkFlowManager`. + :param workflow_manager: An instance of `WorkflowManager`. :param flow_config: An instance of `AgentWorkFlowConfig`. :return: The output response as a string. 
""" output = "" - if flow_config.summary_method == "last": - successful_code_blocks = extract_successful_code_blocks(flow.agent_history) - last_message = flow.agent_history[-1]["message"]["content"] if flow.agent_history else "" + if workflow.summary_method == "last": + successful_code_blocks = extract_successful_code_blocks(workflow_manager.agent_history) + last_message = ( + workflow_manager.agent_history[-1]["message"]["content"] if workflow_manager.agent_history else "" + ) successful_code_blocks = "\n\n".join(successful_code_blocks) output = (last_message + "\n" + successful_code_blocks) if successful_code_blocks else last_message - elif flow_config.summary_method == "llm": - model = flow.config.receiver.config.llm_config.config_list[0] + elif workflow.summary_method == "llm": + client = workflow_manager.receiver.client status_message = SocketMessage( type="agent_status", - data={"status": "summarizing", "message": "Generating summary of agent dialogue"}, - connection_id=flow.connection_id, + data={ + "status": "summarizing", + "message": "Summarizing agent dialogue", + }, + connection_id=workflow_manager.connection_id, ) self.send(status_message.dict()) - output = summarize_chat_history(task=message_text, messages=flow.agent_history, model=model) + output = summarize_chat_history( + task=message_text, + messages=workflow_manager.agent_history, + client=client, + ) - elif flow_config.summary_method == "none": + elif workflow.summary_method == "none": output = "" return output @@ -141,7 +160,9 @@ class WebSocketConnectionManager: """ def __init__( - self, active_connections: List[Tuple[WebSocket, str]] = None, active_connections_lock: asyncio.Lock = None + self, + active_connections: List[Tuple[WebSocket, str]] = None, + active_connections_lock: asyncio.Lock = None, ) -> None: """ Initializes WebSocketConnectionManager with an optional list of active WebSocket connections. 
@@ -185,7 +206,7 @@ async def disconnect_all(self) -> None: for connection, _ in self.active_connections[:]: await self.disconnect(connection) - async def send_message(self, message: Dict, websocket: WebSocket) -> None: + async def send_message(self, message: Union[Dict, str], websocket: WebSocket) -> None: """ Sends a JSON message to a single WebSocket connection. @@ -202,7 +223,7 @@ async def send_message(self, message: Dict, websocket: WebSocket) -> None: print("Error: WebSocket connection closed normally") await self.disconnect(websocket) except Exception as e: - print(f"Error in sending message: {str(e)}") + print(f"Error in sending message: {str(e)}", message) await self.disconnect(websocket) async def broadcast(self, message: Dict) -> None: diff --git a/samples/apps/autogen-studio/autogenstudio/cli.py b/samples/apps/autogen-studio/autogenstudio/cli.py index aafb13317c84..42642bcd68af 100644 --- a/samples/apps/autogen-studio/autogenstudio/cli.py +++ b/samples/apps/autogen-studio/autogenstudio/cli.py @@ -1,10 +1,10 @@ import os +from typing import Optional import typer import uvicorn from typing_extensions import Annotated -from .utils.dbutils import DBManager from .version import VERSION app = typer.Typer() @@ -18,6 +18,7 @@ def ui( reload: Annotated[bool, typer.Option("--reload")] = False, docs: bool = False, appdir: str = None, + database_uri: Optional[str] = None, ): """ Run the AutoGen Studio UI. @@ -29,11 +30,14 @@ def ui( reload (bool, optional): Whether to reload the UI on code changes. Defaults to False. docs (bool, optional): Whether to generate API docs. Defaults to False. appdir (str, optional): Path to the AutoGen Studio app directory. Defaults to None. + database_uri (str, optional): Database URI to connect to. Defaults to None. Examples include sqlite:///autogenstudio.db, postgresql://user:password@localhost/autogenstudio. 
""" os.environ["AUTOGENSTUDIO_API_DOCS"] = str(docs) if appdir: os.environ["AUTOGENSTUDIO_APPDIR"] = appdir + if database_uri: + os.environ["AUTOGENSTUDIO_DATABASE_URI"] = database_uri uvicorn.run( "autogenstudio.web.app:app", diff --git a/samples/apps/autogen-studio/autogenstudio/database/__init__.py b/samples/apps/autogen-studio/autogenstudio/database/__init__.py new file mode 100644 index 000000000000..0518c24ba4fa --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/__init__.py @@ -0,0 +1,3 @@ +# from .dbmanager import * +from .dbmanager import * +from .utils import * diff --git a/samples/apps/autogen-studio/autogenstudio/database/alembic.ini b/samples/apps/autogen-studio/autogenstudio/database/alembic.ini new file mode 100644 index 000000000000..cd413a26066c --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/alembic.ini @@ -0,0 +1,116 @@ +# A generic, single database configuration. + +[alembic] +# path to migration scripts +script_location = migrations + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file +# for all available tokens +# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s + +# sys.path path, will be prepended to sys.path if present. +# defaults to the current working directory. +prepend_sys_path = . + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python>=3.9 or backports.zoneinfo library. 
+# Any required deps can installed by adding `alembic[tz]` to the pip requirements +# string value is passed to ZoneInfo() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the +# "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; This defaults +# to migrations/versions. When using multiple version +# directories, initial revisions must be specified with --version-path. +# The path separator used here should be the separator specified by "version_path_separator" below. +# version_locations = %(here)s/bar:%(here)s/bat:migrations/versions + +# version path separator; As mentioned above, this is the character used to split +# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep. +# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas. +# Valid values for version_path_separator are: +# +# version_path_separator = : +# version_path_separator = ; +# version_path_separator = space +version_path_separator = os # Use os.pathsep. Default configuration used for new projects. + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +sqlalchemy.url = driver://user:pass@localhost/dbname + + +[post_write_hooks] +# post_write_hooks defines scripts or Python functions that are run +# on newly generated revision scripts. 
See the documentation for further +# detail and examples + +# format using "black" - use the console_scripts runner, against the "black" entrypoint +# hooks = black +# black.type = console_scripts +# black.entrypoint = black +# black.options = -l 79 REVISION_SCRIPT_FILENAME + +# lint with attempts to fix using "ruff" - use the exec runner, execute a binary +# hooks = ruff +# ruff.type = exec +# ruff.executable = %(here)s/.venv/bin/ruff +# ruff.options = --fix REVISION_SCRIPT_FILENAME + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/samples/apps/autogen-studio/autogenstudio/database/dbmanager.py b/samples/apps/autogen-studio/autogenstudio/database/dbmanager.py new file mode 100644 index 000000000000..00d3714b63fa --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/dbmanager.py @@ -0,0 +1,472 @@ +from datetime import datetime +from typing import Optional + +from loguru import logger +from sqlalchemy import exc +from sqlmodel import Session, SQLModel, and_, create_engine, select + +from ..datamodel import ( + Agent, + AgentLink, + AgentModelLink, + AgentSkillLink, + Model, + Response, + Skill, + Workflow, + WorkflowAgentLink, +) +from .utils import init_db_samples + +valid_link_types = ["agent_model", "agent_skill", "agent_agent", "workflow_agent"] + + +class DBManager: + """A class to manage database operations""" + + def __init__(self, engine_uri: str): + connection_args = {"check_same_thread": True} if "sqlite" in engine_uri else {} + self.engine = 
create_engine(engine_uri, connect_args=connection_args) + # run_migration(engine_uri=engine_uri) + + def create_db_and_tables(self): + """Create a new database and tables""" + try: + SQLModel.metadata.create_all(self.engine) + try: + init_db_samples(self) + except Exception as e: + logger.info("Error while initializing database samples: " + str(e)) + except Exception as e: + logger.info("Error while creating database tables:" + str(e)) + + def upsert(self, model: SQLModel): + """Create a new entity""" + # check if the model exists, update else add + status = True + model_class = type(model) + existing_model = None + + with Session(self.engine) as session: + try: + existing_model = session.exec(select(model_class).where(model_class.id == model.id)).first() + if existing_model: + model.updated_at = datetime.now() + for key, value in model.model_dump().items(): + setattr(existing_model, key, value) + model = existing_model + session.add(model) + else: + session.add(model) + session.commit() + session.refresh(model) + except Exception as e: + session.rollback() + logger.error("Error while upserting %s", e) + status = False + + response = Response( + message=( + f"{model_class.__name__} Updated Successfully " + if existing_model + else f"{model_class.__name__} Created Successfully" + ), + status=status, + data=model.model_dump(), + ) + + return response + + def _model_to_dict(self, model_obj): + return {col.name: getattr(model_obj, col.name) for col in model_obj.__table__.columns} + + def get_items( + self, + model_class: SQLModel, + session: Session, + filters: dict = None, + return_json: bool = False, + order: str = "desc", + ): + """List all entities""" + result = [] + status = True + status_message = "" + + try: + if filters: + conditions = [getattr(model_class, col) == value for col, value in filters.items()] + statement = select(model_class).where(and_(*conditions)) + + if hasattr(model_class, "created_at") and order: + if order == "desc": + statement = 
statement.order_by(model_class.created_at.desc()) + else: + statement = statement.order_by(model_class.created_at.asc()) + else: + statement = select(model_class) + + if return_json: + result = [self._model_to_dict(row) for row in session.exec(statement).all()] + else: + result = session.exec(statement).all() + status_message = f"{model_class.__name__} Retrieved Successfully" + except Exception as e: + session.rollback() + status = False + status_message = f"Error while fetching {model_class.__name__}" + logger.error("Error while getting %s: %s", model_class.__name__, e) + + response: Response = Response( + message=status_message, + status=status, + data=result, + ) + return response + + def get( + self, + model_class: SQLModel, + filters: dict = None, + return_json: bool = False, + order: str = "desc", + ): + """List all entities""" + + with Session(self.engine) as session: + response = self.get_items(model_class, session, filters, return_json, order) + return response + + def delete(self, model_class: SQLModel, filters: dict = None): + """Delete all entities matching the filters""" + rows = None + status_message = "" + status = True + + with Session(self.engine) as session: + try: + if filters: + conditions = [getattr(model_class, col) == value for col, value in filters.items()] + rows = session.exec(select(model_class).where(and_(*conditions))).all() + else: + rows = session.exec(select(model_class)).all() + if rows: + for row in rows: + session.delete(row) + session.commit() + status_message = f"{model_class.__name__} Deleted Successfully" + else: + logger.info("Row with filters %s not found", filters) + status_message = "Row not found" + except exc.IntegrityError as e: + session.rollback() + logger.error("Integrity error while deleting: %s", e) + status_message = f"The {model_class.__name__} is linked to another entity and cannot be deleted." 
+ status = False + except Exception as e: + session.rollback() + logger.error("Error while deleting: %s", e) + status_message = f"Error while deleting: {e}" + status = False + response = Response( + message=status_message, + status=status, + data=None, + ) + return response + + def get_linked_entities( + self, + link_type: str, + primary_id: int, + return_json: bool = False, + agent_type: Optional[str] = None, + ): + """ + Get all entities linked to the primary entity. + + Args: + link_type (str): The type of link to retrieve, e.g., "agent_model". + primary_id (int): The identifier for the primary model. + return_json (bool): Whether to return the result as a JSON object. + + Returns: + List[SQLModel]: A list of linked entities. + """ + + linked_entities = [] + + if link_type not in valid_link_types: + return [] + + status = True + status_message = "" + + with Session(self.engine) as session: + try: + if link_type == "agent_model": + # get the agent + agent = self.get_items(Agent, filters={"id": primary_id}, session=session).data[0] + linked_entities = agent.models + elif link_type == "agent_skill": + agent = self.get_items(Agent, filters={"id": primary_id}, session=session).data[0] + linked_entities = agent.skills + elif link_type == "agent_agent": + agent = self.get_items(Agent, filters={"id": primary_id}, session=session).data[0] + linked_entities = agent.agents + elif link_type == "workflow_agent": + linked_entities = session.exec( + select(Agent) + .join(WorkflowAgentLink) + .where( + WorkflowAgentLink.workflow_id == primary_id, + WorkflowAgentLink.agent_type == agent_type, + ) + ).all() + except Exception as e: + logger.error("Error while getting linked entities: %s", e) + status_message = f"Error while getting linked entities: {e}" + status = False + if return_json: + linked_entities = [self._model_to_dict(row) for row in linked_entities] + + response = Response( + message=status_message, + status=status, + data=linked_entities, + ) + + return response + + 
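The `DBManager` linking methods maintain association tables (e.g. `AgentModelLink`) and guard against duplicate links with a check-then-insert step. That pattern can be sketched independently of SQLModel using the stdlib `sqlite3` module; the table and function names here are illustrative, not the actual schema:

```python
import sqlite3

# In-memory stand-in for the agent/model association table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agent_model_link (agent_id INTEGER, model_id INTEGER)")


def link_agent_model(agent_id: int, model_id: int) -> bool:
    """Insert the link only if it does not already exist; return True when created."""
    existing = conn.execute(
        "SELECT 1 FROM agent_model_link WHERE agent_id = ? AND model_id = ?",
        (agent_id, model_id),
    ).fetchone()
    if existing:  # mirrors the "already linked" Response returned by DBManager.link
        return False
    conn.execute("INSERT INTO agent_model_link VALUES (?, ?)", (agent_id, model_id))
    conn.commit()
    return True


print(link_agent_model(1, 2))  # True  (link created)
print(link_agent_model(1, 2))  # False (duplicate rejected)
```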
def link( + self, + link_type: str, + primary_id: int, + secondary_id: int, + agent_type: Optional[str] = None, + ) -> Response: + """ + Link two entities together. + + Args: + link_type (str): The type of link to create, e.g., "agent_model". + primary_id (int): The identifier for the primary model. + secondary_id (int): The identifier for the secondary model. + agent_type (Optional[str]): The type of agent, e.g., "sender" or receiver. + + Returns: + Response: The response of the linking operation, including success status and message. + """ + + # TBD verify that is creator of the primary entity being linked + status = True + status_message = "" + primary_model = None + secondary_model = None + + if link_type not in valid_link_types: + status = False + status_message = f"Invalid link type: {link_type}. Valid link types are: {valid_link_types}" + else: + with Session(self.engine) as session: + try: + if link_type == "agent_model": + primary_model = session.exec(select(Agent).where(Agent.id == primary_id)).first() + secondary_model = session.exec(select(Model).where(Model.id == secondary_id)).first() + if primary_model is None or secondary_model is None: + status = False + status_message = "One or both entity records do not exist." 
+ else: + # check if the link already exists + existing_link = session.exec( + select(AgentModelLink).where( + AgentModelLink.agent_id == primary_id, + AgentModelLink.model_id == secondary_id, + ) + ).first() + if existing_link: # link already exists + return Response( + message=( + f"{secondary_model.__class__.__name__} already linked " + f"to {primary_model.__class__.__name__}" + ), + status=False, + ) + else: + primary_model.models.append(secondary_model) + elif link_type == "agent_agent": + primary_model = session.exec(select(Agent).where(Agent.id == primary_id)).first() + secondary_model = session.exec(select(Agent).where(Agent.id == secondary_id)).first() + if primary_model is None or secondary_model is None: + status = False + status_message = "One or both entity records do not exist." + else: + # check if the link already exists + existing_link = session.exec( + select(AgentLink).where( + AgentLink.parent_id == primary_id, + AgentLink.agent_id == secondary_id, + ) + ).first() + if existing_link: + return Response( + message=( + f"{secondary_model.__class__.__name__} already linked " + f"to {primary_model.__class__.__name__}" + ), + status=False, + ) + else: + primary_model.agents.append(secondary_model) + + elif link_type == "agent_skill": + primary_model = session.exec(select(Agent).where(Agent.id == primary_id)).first() + secondary_model = session.exec(select(Skill).where(Skill.id == secondary_id)).first() + if primary_model is None or secondary_model is None: + status = False + status_message = "One or both entity records do not exist." 
+ else: + # check if the link already exists + existing_link = session.exec( + select(AgentSkillLink).where( + AgentSkillLink.agent_id == primary_id, + AgentSkillLink.skill_id == secondary_id, + ) + ).first() + if existing_link: + return Response( + message=( + f"{secondary_model.__class__.__name__} already linked " + f"to {primary_model.__class__.__name__}" + ), + status=False, + ) + else: + primary_model.skills.append(secondary_model) + elif link_type == "workflow_agent": + primary_model = session.exec(select(Workflow).where(Workflow.id == primary_id)).first() + secondary_model = session.exec(select(Agent).where(Agent.id == secondary_id)).first() + if primary_model is None or secondary_model is None: + status = False + status_message = "One or both entity records do not exist." + else: + # check if the link already exists + existing_link = session.exec( + select(WorkflowAgentLink).where( + WorkflowAgentLink.workflow_id == primary_id, + WorkflowAgentLink.agent_id == secondary_id, + WorkflowAgentLink.agent_type == agent_type, + ) + ).first() + if existing_link: + return Response( + message=( + f"{secondary_model.__class__.__name__} already linked " + f"to {primary_model.__class__.__name__}" + ), + status=False, + ) + else: + # primary_model.agents.append(secondary_model) + workflow_agent_link = WorkflowAgentLink( + workflow_id=primary_id, + agent_id=secondary_id, + agent_type=agent_type, + ) + session.add(workflow_agent_link) + # add and commit the link + session.add(primary_model) + session.commit() + status_message = ( + f"{secondary_model.__class__.__name__} successfully linked " + f"to {primary_model.__class__.__name__}" + ) + + except Exception as e: + session.rollback() + logger.error("Error while linking: %s", e) + status = False + status_message = f"Error while linking due to an exception: {e}" + + response = Response( + message=status_message, + status=status, + ) + + return response + + def unlink( + self, + link_type: str, + primary_id: int, + 
secondary_id: int, + agent_type: Optional[str] = None, + ) -> Response: + """ + Unlink two entities. + + Args: + link_type (str): The type of link to remove, e.g., "agent_model". + primary_id (int): The identifier for the primary model. + secondary_id (int): The identifier for the secondary model. + agent_type (Optional[str]): The type of agent, e.g., "sender" or receiver. + + Returns: + Response: The response of the unlinking operation, including success status and message. + """ + status = True + status_message = "" + + if link_type not in valid_link_types: + status = False + status_message = f"Invalid link type: {link_type}. Valid link types are: {valid_link_types}" + return Response(message=status_message, status=status) + + with Session(self.engine) as session: + try: + if link_type == "agent_model": + existing_link = session.exec( + select(AgentModelLink).where( + AgentModelLink.agent_id == primary_id, + AgentModelLink.model_id == secondary_id, + ) + ).first() + elif link_type == "agent_skill": + existing_link = session.exec( + select(AgentSkillLink).where( + AgentSkillLink.agent_id == primary_id, + AgentSkillLink.skill_id == secondary_id, + ) + ).first() + elif link_type == "agent_agent": + existing_link = session.exec( + select(AgentLink).where( + AgentLink.parent_id == primary_id, + AgentLink.agent_id == secondary_id, + ) + ).first() + elif link_type == "workflow_agent": + existing_link = session.exec( + select(WorkflowAgentLink).where( + WorkflowAgentLink.workflow_id == primary_id, + WorkflowAgentLink.agent_id == secondary_id, + WorkflowAgentLink.agent_type == agent_type, + ) + ).first() + + if existing_link: + session.delete(existing_link) + session.commit() + status_message = "Link removed successfully." + else: + status = False + status_message = "Link does not exist." 
+ + except Exception as e: + session.rollback() + logger.error("Error while unlinking: %s", e) + status = False + status_message = f"Error while unlinking due to an exception: {e}" + + return Response(message=status_message, status=status) diff --git a/samples/apps/autogen-studio/autogenstudio/database/migrations/README b/samples/apps/autogen-studio/autogenstudio/database/migrations/README new file mode 100644 index 000000000000..2500aa1bcf72 --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/migrations/README @@ -0,0 +1 @@ +Generic single-database configuration. diff --git a/samples/apps/autogen-studio/autogenstudio/database/migrations/__init__.py b/samples/apps/autogen-studio/autogenstudio/database/migrations/__init__.py new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/samples/apps/autogen-studio/autogenstudio/database/migrations/env.py b/samples/apps/autogen-studio/autogenstudio/database/migrations/env.py new file mode 100644 index 000000000000..1431492ad910 --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/migrations/env.py @@ -0,0 +1,80 @@ +import os +from logging.config import fileConfig + +from alembic import context +from sqlalchemy import engine_from_config, pool +from sqlmodel import SQLModel + +from autogenstudio.datamodel import * +from autogenstudio.utils import get_db_uri + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config +config.set_main_option("sqlalchemy.url", get_db_uri()) + +# Interpret the config file for Python logging. +# This line sets up loggers basically. 
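The `link`/`unlink` pair above follows a single contract: reject a duplicate link on creation, and report a missing link on removal. A minimal in-memory sketch of that contract (the `TinyLinker` class and the simplified `Response` shape here are illustrative stand-ins, not the real `DBManager` API):

```python
from dataclasses import dataclass, field


@dataclass
class Response:
    message: str
    status: bool


@dataclass
class TinyLinker:
    """In-memory stand-in for the association-table logic above."""

    links: set = field(default_factory=set)

    def link(self, link_type: str, primary_id: int, secondary_id: int) -> Response:
        key = (link_type, primary_id, secondary_id)
        if key in self.links:  # mirrors the "already linked" guard
            return Response("Already linked", False)
        self.links.add(key)
        return Response("Successfully linked", True)

    def unlink(self, link_type: str, primary_id: int, secondary_id: int) -> Response:
        key = (link_type, primary_id, secondary_id)
        if key not in self.links:  # mirrors the "Link does not exist." branch
            return Response("Link does not exist.", False)
        self.links.remove(key)
        return Response("Link removed successfully.", True)


linker = TinyLinker()
assert linker.link("agent_skill", 1, 2).status is True
assert linker.link("agent_skill", 1, 2).status is False   # duplicate rejected
assert linker.unlink("agent_skill", 1, 2).status is True
assert linker.unlink("agent_skill", 1, 2).status is False  # already removed
```

The real implementation does the same checks with `select(...).where(...)` against the link tables inside a session, rolling back on any exception.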
+if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +target_metadata = SQLModel.metadata + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = config.get_main_option("sqlalchemy.url") + context.configure( + url=url, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. 
+ + """ + connectable = engine_from_config( + config.get_section(config.config_ini_section, {}), + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + with connectable.connect() as connection: + context.configure(connection=connection, target_metadata=target_metadata) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/samples/apps/autogen-studio/autogenstudio/database/migrations/script.py.mako b/samples/apps/autogen-studio/autogenstudio/database/migrations/script.py.mako new file mode 100644 index 000000000000..6ce3351093cf --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/migrations/script.py.mako @@ -0,0 +1,27 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa +import sqlmodel +${imports if imports else ""} + +# revision identifiers, used by Alembic. 
+revision: str = ${repr(up_revision)} +down_revision: Union[str, None] = ${repr(down_revision)} +branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} +depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} + + +def upgrade() -> None: + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + ${downgrades if downgrades else "pass"} diff --git a/samples/apps/autogen-studio/autogenstudio/database/utils.py b/samples/apps/autogen-studio/autogenstudio/database/utils.py new file mode 100644 index 000000000000..c14003b414c3 --- /dev/null +++ b/samples/apps/autogen-studio/autogenstudio/database/utils.py @@ -0,0 +1,323 @@ +# from .util import get_app_root +import os +import time +from datetime import datetime +from pathlib import Path +from typing import Any + +from alembic import command, util +from alembic.config import Config +from loguru import logger + +# from ..utils.db_utils import get_db_uri +from sqlmodel import Session, create_engine, text + +from autogen.agentchat import AssistantAgent + +from ..datamodel import ( + Agent, + AgentConfig, + AgentType, + CodeExecutionConfigTypes, + Model, + Skill, + Workflow, + WorkflowAgentLink, +) + + +def workflow_from_id(workflow_id: int, dbmanager: Any): + workflow = dbmanager.get(Workflow, filters={"id": workflow_id}).data + if not workflow or len(workflow) == 0: + raise ValueError("The specified workflow does not exist.") + workflow = workflow[0].model_dump(mode="json") + workflow_agent_links = dbmanager.get(WorkflowAgentLink, filters={"workflow_id": workflow_id}).data + + def dump_agent(agent: Agent): + exclude = [] + if agent.type != AgentType.groupchat: + exclude = [ + "admin_name", + "messages", + "max_round", + "admin_name", + "speaker_selection_method", + "allow_repeat_speaker", + ] + return agent.model_dump(warnings=False, mode="json", exclude=exclude) + + def get_agent(agent_id): + with Session(dbmanager.engine) as session: + agent: Agent = dbmanager.get_items(Agent, 
filters={"id": agent_id}, session=session).data[0] + agent_dict = dump_agent(agent) + agent_dict["skills"] = [Skill.model_validate(skill.model_dump(mode="json")) for skill in agent.skills] + model_exclude = [ + "id", + "agent_id", + "created_at", + "updated_at", + "user_id", + "description", + ] + models = [model.model_dump(mode="json", exclude=model_exclude) for model in agent.models] + agent_dict["models"] = [model.model_dump(mode="json") for model in agent.models] + + if len(models) > 0: + agent_dict["config"]["llm_config"] = agent_dict.get("config", {}).get("llm_config", {}) + llm_config = agent_dict["config"]["llm_config"] + if llm_config: + llm_config["config_list"] = models + agent_dict["config"]["llm_config"] = llm_config + agent_dict["agents"] = [get_agent(agent.id) for agent in agent.agents] + return agent_dict + + for link in workflow_agent_links: + agent_dict = get_agent(link.agent_id) + workflow[str(link.agent_type.value)] = agent_dict + return workflow + + +def run_migration(engine_uri: str): + database_dir = Path(__file__).parent + script_location = database_dir / "migrations" + + engine = create_engine(engine_uri) + buffer = open(script_location / "alembic.log", "w") + alembic_cfg = Config(stdout=buffer) + alembic_cfg.set_main_option("script_location", str(script_location)) + alembic_cfg.set_main_option("sqlalchemy.url", engine_uri) + + print(f"Running migrations with engine_uri: {engine_uri}") + + should_initialize_alembic = False + with Session(engine) as session: + try: + session.exec(text("SELECT * FROM alembic_version")) + except Exception: + logger.info("Alembic not initialized") + should_initialize_alembic = True + else: + logger.info("Alembic already initialized") + + if should_initialize_alembic: + try: + logger.info("Initializing alembic") + command.ensure_version(alembic_cfg) + command.upgrade(alembic_cfg, "head") + logger.info("Alembic initialized") + except Exception as exc: + logger.error(f"Error initializing alembic: {exc}") + raise 
RuntimeError("Error initializing alembic") from exc + + logger.info(f"Running DB migrations in {script_location}") + + try: + buffer.write(f"{datetime.now().isoformat()}: Checking migrations\n") + command.check(alembic_cfg) + except Exception as exc: + if isinstance(exc, (util.exc.CommandError, util.exc.AutogenerateDiffsDetected)): + try: + command.upgrade(alembic_cfg, "head") + time.sleep(3) + except Exception as exc: + logger.error(f"Error running migrations: {exc}") + + try: + buffer.write(f"{datetime.now().isoformat()}: Checking migrations\n") + command.check(alembic_cfg) + except util.exc.AutogenerateDiffsDetected as exc: + logger.info(f"AutogenerateDiffsDetected: {exc}") + # raise RuntimeError( + # f"There's a mismatch between the models and the database.\n{exc}") + except util.exc.CommandError as exc: + logger.error(f"CommandError: {exc}") + # raise RuntimeError(f"Error running migrations: {exc}") + + +def init_db_samples(dbmanager: Any): + workflows = dbmanager.get(Workflow).data + workflow_names = [w.name for w in workflows] + if "Default Workflow" in workflow_names and "Travel Planning Workflow" in workflow_names: + logger.info("Database already initialized with Default and Travel Planning Workflows") + return + logger.info("Initializing database with Default and Travel Planning Workflows") + # models + gpt_4_model = Model( + model="gpt-4-1106-preview", description="OpenAI GPT-4 model", user_id="guestuser@gmail.com", api_type="open_ai" + ) + azure_model = Model( + model="gpt4-turbo", + description="Azure OpenAI model", + user_id="guestuser@gmail.com", + api_type="azure", + base_url="https://api.yourazureendpoint.com/v1", + ) + zephyr_model = Model( + model="zephyr", + description="Local Huggingface Zephyr model via vLLM, LMStudio or Ollama", + base_url="http://localhost:1234/v1", + user_id="guestuser@gmail.com", + api_type="open_ai", + ) + + google_gemini_model = Model( + model="gemini-1.5-pro-latest", + description="Google's Gemini model", + 
user_id="guestuser@gmail.com", + api_type="google", + ) + + # skills + + generate_image_skill = Skill( + name="generate_images", + description="Generate and save images based on a user's query.", + content='\nfrom typing import List\nimport uuid\nimport requests # to perform HTTP requests\nfrom pathlib import Path\n\nfrom openai import OpenAI\n\n\ndef generate_and_save_images(query: str, image_size: str = "1024x1024") -> List[str]:\n """\n Function to paint, draw or illustrate images based on the users query or request. Generates images from a given query using OpenAI\'s DALL-E model and saves them to disk. Use the code below anytime there is a request to create an image.\n\n :param query: A natural language description of the image to be generated.\n :param image_size: The size of the image to be generated. (default is "1024x1024")\n :return: A list of filenames for the saved images.\n """\n\n client = OpenAI() # Initialize the OpenAI client\n response = client.images.generate(model="dall-e-3", prompt=query, n=1, size=image_size) # Generate images\n\n # List to store the file names of saved images\n saved_files = []\n\n # Check if the response is successful\n if response.data:\n for image_data in response.data:\n # Generate a random UUID as the file name\n file_name = str(uuid.uuid4()) + ".png" # Assuming the image is a PNG\n file_path = Path(file_name)\n\n img_url = image_data.url\n img_response = requests.get(img_url)\n if img_response.status_code == 200:\n # Write the binary content to a file\n with open(file_path, "wb") as img_file:\n img_file.write(img_response.content)\n print(f"Image saved to {file_path}")\n saved_files.append(str(file_path))\n else:\n print(f"Failed to download the image from {img_url}")\n else:\n print("No image data found in the response!")\n\n # Return the list of saved files\n return saved_files\n\n\n# Example usage of the function:\n# generate_and_save_images("A cute baby sea otter")\n', + user_id="guestuser@gmail.com", + ) + + # 
agents + user_proxy_config = AgentConfig( + name="user_proxy", + description="User Proxy Agent Configuration", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message="You are a helpful assistant", + code_execution_config=CodeExecutionConfigTypes.local, + default_auto_reply="TERMINATE", + llm_config=False, + ) + user_proxy = Agent( + user_id="guestuser@gmail.com", type=AgentType.userproxy, config=user_proxy_config.model_dump(mode="json") + ) + + painter_assistant_config = AgentConfig( + name="default_assistant", + description="Assistant Agent", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message=AssistantAgent.DEFAULT_SYSTEM_MESSAGE, + code_execution_config=CodeExecutionConfigTypes.none, + llm_config={}, + ) + painter_assistant = Agent( + user_id="guestuser@gmail.com", type=AgentType.assistant, config=painter_assistant_config.model_dump(mode="json") + ) + + planner_assistant_config = AgentConfig( + name="planner_assistant", + description="Assistant Agent", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message="You are a helpful assistant that can suggest a travel plan for a user. You are the primary coordinator who will receive suggestions or advice from other agents (local_assistant, language_assistant). You must ensure that the final plan integrates the suggestions from other agents or team members. YOUR FINAL RESPONSE MUST BE THE COMPLETE PLAN. 
When the plan is complete and all perspectives are integrated, you can respond with TERMINATE.", + code_execution_config=CodeExecutionConfigTypes.none, + llm_config={}, + ) + planner_assistant = Agent( + user_id="guestuser@gmail.com", type=AgentType.assistant, config=planner_assistant_config.model_dump(mode="json") + ) + + local_assistant_config = AgentConfig( + name="local_assistant", + description="Local Assistant Agent", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message="You are a local assistant that can suggest local activities or places to visit for a user. You can suggest local activities, places to visit, restaurants to eat at, etc. You can also provide information about the weather, local events, etc. You can provide information about the local area, but you cannot suggest a complete travel plan. You can only provide information about the local area.", + code_execution_config=CodeExecutionConfigTypes.none, + llm_config={}, + ) + local_assistant = Agent( + user_id="guestuser@gmail.com", type=AgentType.assistant, config=local_assistant_config.model_dump(mode="json") + ) + + language_assistant_config = AgentConfig( + name="language_assistant", + description="Language Assistant Agent", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message="You are a helpful assistant that can review travel plans, providing feedback on important/critical tips about how best to address language or communication challenges for the given destination. 
If the plan already includes language tips, you can mention that the plan is satisfactory, with rationale.", + code_execution_config=CodeExecutionConfigTypes.none, + llm_config={}, + ) + language_assistant = Agent( + user_id="guestuser@gmail.com", + type=AgentType.assistant, + config=language_assistant_config.model_dump(mode="json"), + ) + + # group chat + travel_groupchat_config = AgentConfig( + name="travel_groupchat", + admin_name="groupchat", + description="Group Chat Agent Configuration", + human_input_mode="NEVER", + max_consecutive_auto_reply=25, + system_message="You are a group chat manager", + code_execution_config=CodeExecutionConfigTypes.none, + default_auto_reply="TERMINATE", + llm_config={}, + speaker_selection_method="auto", + ) + travel_groupchat_agent = Agent( + user_id="guestuser@gmail.com", type=AgentType.groupchat, config=travel_groupchat_config.model_dump(mode="json") + ) + + # workflows + default_workflow = Workflow(name="Default Workflow", description="Default workflow", user_id="guestuser@gmail.com") + + travel_workflow = Workflow( + name="Travel Planning Workflow", description="Travel workflow", user_id="guestuser@gmail.com" + ) + + with Session(dbmanager.engine) as session: + session.add(zephyr_model) + session.add(google_gemini_model) + session.add(azure_model) + session.add(gpt_4_model) + session.add(generate_image_skill) + session.add(user_proxy) + session.add(painter_assistant) + session.add(travel_groupchat_agent) + session.add(planner_assistant) + session.add(local_assistant) + session.add(language_assistant) + + session.add(default_workflow) + session.add(travel_workflow) + session.commit() + + dbmanager.link(link_type="agent_model", primary_id=painter_assistant.id, secondary_id=gpt_4_model.id) + dbmanager.link(link_type="agent_skill", primary_id=painter_assistant.id, secondary_id=generate_image_skill.id) + dbmanager.link( + link_type="workflow_agent", primary_id=default_workflow.id, secondary_id=user_proxy.id, agent_type="sender" + 
) + dbmanager.link( + link_type="workflow_agent", + primary_id=default_workflow.id, + secondary_id=painter_assistant.id, + agent_type="receiver", + ) + + # link agents to travel groupchat agent + + dbmanager.link(link_type="agent_agent", primary_id=travel_groupchat_agent.id, secondary_id=planner_assistant.id) + dbmanager.link(link_type="agent_agent", primary_id=travel_groupchat_agent.id, secondary_id=local_assistant.id) + dbmanager.link( + link_type="agent_agent", primary_id=travel_groupchat_agent.id, secondary_id=language_assistant.id + ) + dbmanager.link(link_type="agent_agent", primary_id=travel_groupchat_agent.id, secondary_id=user_proxy.id) + dbmanager.link(link_type="agent_model", primary_id=travel_groupchat_agent.id, secondary_id=gpt_4_model.id) + dbmanager.link(link_type="agent_model", primary_id=planner_assistant.id, secondary_id=gpt_4_model.id) + dbmanager.link(link_type="agent_model", primary_id=local_assistant.id, secondary_id=gpt_4_model.id) + dbmanager.link(link_type="agent_model", primary_id=language_assistant.id, secondary_id=gpt_4_model.id) + + dbmanager.link( + link_type="workflow_agent", primary_id=travel_workflow.id, secondary_id=user_proxy.id, agent_type="sender" + ) + dbmanager.link( + link_type="workflow_agent", + primary_id=travel_workflow.id, + secondary_id=travel_groupchat_agent.id, + agent_type="receiver", + ) + logger.info("Successfully initialized database with Default and Travel Planning Workflows") diff --git a/samples/apps/autogen-studio/autogenstudio/datamodel.py b/samples/apps/autogen-studio/autogenstudio/datamodel.py index 083bddccfcfe..3dbd46c357ee 100644 --- a/samples/apps/autogen-studio/autogenstudio/datamodel.py +++ b/samples/apps/autogen-studio/autogenstudio/datamodel.py @@ -1,318 +1,262 @@ -import uuid -from dataclasses import asdict, field from datetime import datetime +from enum import Enum from typing import Any, Callable, Dict, List, Literal, Optional, Union -from pydantic.dataclasses import dataclass - - -@dataclass 
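The `workflow_agent` links created above depend on `WorkflowAgentLink` having a composite primary key over `(workflow_id, agent_id, agent_type)`: the same agent can appear in one workflow as both `sender` and `receiver`, while an exact duplicate row is rejected. A plain-`sqlite3` sketch of that constraint (the schema here is illustrative, not the generated one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Composite primary key mirrors WorkflowAgentLink above.
conn.execute(
    """CREATE TABLE workflow_agent_link (
           workflow_id INTEGER NOT NULL,
           agent_id    INTEGER NOT NULL,
           agent_type  TEXT NOT NULL,
           PRIMARY KEY (workflow_id, agent_id, agent_type)
       )"""
)
conn.execute("INSERT INTO workflow_agent_link VALUES (1, 7, 'sender')")
# Same agent in a different role is allowed:
conn.execute("INSERT INTO workflow_agent_link VALUES (1, 7, 'receiver')")
try:
    # Exact duplicate violates the composite primary key:
    conn.execute("INSERT INTO workflow_agent_link VALUES (1, 7, 'sender')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

assert duplicate_rejected
assert conn.execute("SELECT COUNT(*) FROM workflow_agent_link").fetchone()[0] == 2
```

This is why `link(link_type="workflow_agent", ...)` takes an `agent_type` argument while the other link types do not.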
-class Message(object): - user_id: str +from sqlalchemy import ForeignKey, Integer, orm +from sqlmodel import ( + JSON, + Column, + DateTime, + Field, + Relationship, + SQLModel, + func, +) +from sqlmodel import ( + Enum as SqlEnum, +) + +SQLModel.model_config["protected_namespaces"] = () +# pylint: disable=protected-access + + +class Message(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable + user_id: Optional[str] = None role: str content: str - root_msg_id: Optional[str] = None - msg_id: Optional[str] = None - timestamp: Optional[str] = None - personalize: Optional[bool] = False - ra: Optional[str] = None - code: Optional[str] = None - metadata: Optional[Any] = None - session_id: Optional[str] = None - - def __post_init__(self): - if self.msg_id is None: - self.msg_id = str(uuid.uuid4()) - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - - def dict(self): - result = asdict(self) - return result - - -@dataclass -class Skill(object): - title: str - content: str - file_name: Optional[str] = None - id: Optional[str] = None - description: Optional[str] = None - timestamp: Optional[str] = None + session_id: Optional[int] = Field( + default=None, sa_column=Column(Integer, ForeignKey("session.id", ondelete="CASCADE")) + ) + connection_id: Optional[str] = None + meta: Optional[Dict] = Field(default={}, sa_column=Column(JSON)) + + +class Session(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + 
default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable user_id: Optional[str] = None + workflow_id: Optional[int] = Field(default=None, foreign_key="workflow.id") + name: Optional[str] = None + description: Optional[str] = None - def __post_init__(self): - if self.id is None: - self.id = str(uuid.uuid4()) - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.user_id is None: - self.user_id = "default" - - def dict(self): - result = asdict(self) - return result +class AgentSkillLink(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + agent_id: int = Field(default=None, primary_key=True, foreign_key="agent.id") + skill_id: int = Field(default=None, primary_key=True, foreign_key="skill.id") -# web api data models +class AgentModelLink(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + agent_id: int = Field(default=None, primary_key=True, foreign_key="agent.id") + model_id: int = Field(default=None, primary_key=True, foreign_key="model.id") -# autogenflow data models -@dataclass -class Model: - """Data model for Model Config item in LLMConfig for AutoGen""" - model: str - api_key: Optional[str] = None - base_url: Optional[str] = None - api_type: Optional[str] = None - api_version: Optional[str] = None - id: Optional[str] = None - timestamp: Optional[str] = None +class Skill(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + 
sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable user_id: Optional[str] = None + name: str + content: str description: Optional[str] = None + secrets: Optional[Dict] = Field(default={}, sa_column=Column(JSON)) + libraries: Optional[Dict] = Field(default={}, sa_column=Column(JSON)) + agents: List["Agent"] = Relationship(back_populates="skills", link_model=AgentSkillLink) - def dict(self): - result = asdict(self) - return result - def __post_init__(self): - if self.id is None: - self.id = str(uuid.uuid4()) - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.user_id is None: - self.user_id = "default" - - -@dataclass -class LLMConfig: +class LLMConfig(SQLModel, table=False): """Data model for LLM Config for AutoGen""" - config_list: List[Any] = field(default_factory=list) + config_list: List[Any] = Field(default_factory=list) temperature: float = 0 cache_seed: Optional[Union[int, None]] = None timeout: Optional[int] = None - max_tokens: Optional[int] = None + max_tokens: Optional[int] = 1000 extra_body: Optional[dict] = None - def dict(self): - result = asdict(self) - result["config_list"] = [c.dict() for c in self.config_list] - return result +class ModelTypes(str, Enum): + openai = "open_ai" + google = "google" + azure = "azure" -@dataclass -class AgentConfig: - """Data model for Agent Config for AutoGen""" - name: str - llm_config: Optional[Union[LLMConfig, bool]] = False +class Model(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable + user_id: Optional[str] = None + 
model: str + api_key: Optional[str] = None + base_url: Optional[str] = None + api_type: ModelTypes = Field(default=ModelTypes.openai, sa_column=Column(SqlEnum(ModelTypes))) + api_version: Optional[str] = None + description: Optional[str] = None + agents: List["Agent"] = Relationship(back_populates="models", link_model=AgentModelLink) + + +class CodeExecutionConfigTypes(str, Enum): + local = "local" + docker = "docker" + none = "none" + + +class AgentConfig(SQLModel, table=False): + name: Optional[str] = None human_input_mode: str = "NEVER" max_consecutive_auto_reply: int = 10 system_message: Optional[str] = None is_termination_msg: Optional[Union[bool, str, Callable]] = None - code_execution_config: Optional[Union[bool, str, Dict[str, Any]]] = None + code_execution_config: CodeExecutionConfigTypes = Field( + default=CodeExecutionConfigTypes.local, sa_column=Column(SqlEnum(CodeExecutionConfigTypes)) + ) default_auto_reply: Optional[str] = "" description: Optional[str] = None + llm_config: Optional[Union[LLMConfig, bool]] = Field(default=False, sa_column=Column(JSON)) - def dict(self): - result = asdict(self) - if isinstance(result["llm_config"], LLMConfig): - result["llm_config"] = result["llm_config"].dict() - return result - - -@dataclass -class AgentFlowSpec: - """Data model to help flow load agents from config""" - - type: Literal["assistant", "userproxy"] - config: AgentConfig - id: Optional[str] = None - timestamp: Optional[str] = None - user_id: Optional[str] = None - skills: Optional[Union[None, List[Skill]]] = None - - def __post_init__(self): - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.id is None: - self.id = str(uuid.uuid4()) - if self.user_id is None: - self.user_id = "default" - - def dict(self): - result = asdict(self) - return result - - -@dataclass -class GroupChatConfig: - """Data model for GroupChat Config for AutoGen""" - - agents: List[AgentFlowSpec] = field(default_factory=list) - admin_name: str = 
"Admin" - messages: List[Dict] = field(default_factory=list) - max_round: Optional[int] = 10 admin_name: Optional[str] = "Admin" + messages: Optional[List[Dict]] = Field(default_factory=list) + max_round: Optional[int] = 100 speaker_selection_method: Optional[str] = "auto" - # TODO: match the new group chat default and support transition spec - allow_repeat_speaker: Optional[Union[bool, List[AgentConfig]]] = True + allow_repeat_speaker: Optional[Union[bool, List["AgentConfig"]]] = True - def dict(self): - result = asdict(self) - result["agents"] = [a.dict() for a in self.agents] - return result +class AgentType(str, Enum): + assistant = "assistant" + userproxy = "userproxy" + groupchat = "groupchat" -@dataclass -class GroupChatFlowSpec: - """Data model to help flow load agents from config""" - type: Literal["groupchat"] - config: AgentConfig = field(default_factory=AgentConfig) - groupchat_config: Optional[GroupChatConfig] = field(default_factory=GroupChatConfig) - id: Optional[str] = None - timestamp: Optional[str] = None - user_id: Optional[str] = None - skills: Optional[Union[None, List[Skill]]] = None +class WorkflowAgentType(str, Enum): + sender = "sender" + receiver = "receiver" + planner = "planner" - def __post_init__(self): - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.id is None: - self.id = str(uuid.uuid4()) - if self.user_id is None: - self.user_id = "default" - def dict(self): - result = asdict(self) - # result["config"] = self.config.dict() - # result["groupchat_config"] = self.groupchat_config.dict() - return result +class WorkflowAgentLink(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + workflow_id: int = Field(default=None, primary_key=True, foreign_key="workflow.id") + agent_id: int = Field(default=None, primary_key=True, foreign_key="agent.id") + agent_type: WorkflowAgentType = Field( + default=WorkflowAgentType.sender, + sa_column=Column(SqlEnum(WorkflowAgentType), 
primary_key=True), + ) -@dataclass -class AgentWorkFlowConfig: - """Data model for Flow Config for AutoGen""" +class AgentLink(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + parent_id: Optional[int] = Field(default=None, foreign_key="agent.id", primary_key=True) + agent_id: Optional[int] = Field(default=None, foreign_key="agent.id", primary_key=True) + +class Agent(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable + user_id: Optional[str] = None + type: AgentType = Field(default=AgentType.assistant, sa_column=Column(SqlEnum(AgentType))) + config: AgentConfig = Field(default_factory=AgentConfig, sa_column=Column(JSON)) + skills: List[Skill] = Relationship(back_populates="agents", link_model=AgentSkillLink) + models: List[Model] = Relationship(back_populates="agents", link_model=AgentModelLink) + workflows: List["Workflow"] = Relationship(link_model=WorkflowAgentLink, back_populates="agents") + parents: List["Agent"] = Relationship( + back_populates="agents", + link_model=AgentLink, + sa_relationship_kwargs=dict( + primaryjoin="Agent.id==AgentLink.agent_id", + secondaryjoin="Agent.id==AgentLink.parent_id", + ), + ) + agents: List["Agent"] = Relationship( + back_populates="parents", + link_model=AgentLink, + sa_relationship_kwargs=dict( + primaryjoin="Agent.id==AgentLink.parent_id", + secondaryjoin="Agent.id==AgentLink.agent_id", + ), + ) + + +class WorkFlowType(str, Enum): + twoagents = "twoagents" + groupchat = "groupchat" + + +class WorkFlowSummaryMethod(str, Enum): + last = "last" + none = "none" + llm = "llm" + + +class 
Workflow(SQLModel, table=True): + __table_args__ = {"sqlite_autoincrement": True} + id: Optional[int] = Field(default=None, primary_key=True) + created_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), server_default=func.now()), + ) # pylint: disable=not-callable + updated_at: datetime = Field( + default_factory=datetime.now, + sa_column=Column(DateTime(timezone=True), onupdate=func.now()), + ) # pylint: disable=not-callable + user_id: Optional[str] = None name: str description: str - sender: AgentFlowSpec - receiver: Union[AgentFlowSpec, GroupChatFlowSpec] - type: Literal["twoagents", "groupchat"] = "twoagents" - id: Optional[str] = None - user_id: Optional[str] = None - timestamp: Optional[str] = None - # how the agent message summary is generated. last: only last message is used, none: no summary, llm: use llm to generate summary - summary_method: Optional[Literal["last", "none", "llm"]] = "last" - - def init_spec(self, spec: Dict): - """initialize the agent spec""" - if not isinstance(spec, dict): - spec = spec.dict() - if spec["type"] == "groupchat": - return GroupChatFlowSpec(**spec) - else: - return AgentFlowSpec(**spec) - - def __post_init__(self): - if self.id is None: - self.id = str(uuid.uuid4()) - self.sender = self.init_spec(self.sender) - self.receiver = self.init_spec(self.receiver) - if self.user_id is None: - self.user_id = "default" - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - - def dict(self): - result = asdict(self) - result["sender"] = self.sender.dict() - result["receiver"] = self.receiver.dict() - return result - - -@dataclass -class Session(object): - """Data model for AutoGen Chat Session""" - - user_id: str - id: Optional[str] = None - timestamp: Optional[str] = None - flow_config: AgentWorkFlowConfig = None - name: Optional[str] = None - description: Optional[str] = None + agents: List[Agent] = Relationship(back_populates="workflows", 
link_model=WorkflowAgentLink) + type: WorkFlowType = Field(default=WorkFlowType.twoagents, sa_column=Column(SqlEnum(WorkFlowType))) + summary_method: Optional[WorkFlowSummaryMethod] = Field( + default=WorkFlowSummaryMethod.last, + sa_column=Column(SqlEnum(WorkFlowSummaryMethod)), + ) - def __post_init__(self): - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.id is None: - self.id = str(uuid.uuid4()) - - def dict(self): - result = asdict(self) - result["flow_config"] = self.flow_config.dict() - return result - - -@dataclass -class Gallery(object): - """Data model for Gallery Item""" - - session: Session - messages: List[Message] - tags: List[str] - id: Optional[str] = None - timestamp: Optional[str] = None - - def __post_init__(self): - if self.timestamp is None: - self.timestamp = datetime.now().isoformat() - if self.id is None: - self.id = str(uuid.uuid4()) - - def dict(self): - result = asdict(self) - return result - - -@dataclass -class ChatWebRequestModel(object): - """Data model for Chat Web Request for Web End""" - - message: Message - flow_config: AgentWorkFlowConfig - - -@dataclass -class DeleteMessageWebRequestModel(object): - user_id: str - msg_id: str - session_id: Optional[str] = None - - -@dataclass -class DBWebRequestModel(object): - user_id: str - msg_id: Optional[str] = None - session: Optional[Session] = None - skill: Optional[Skill] = None - tags: Optional[List[str]] = None - agent: Optional[AgentFlowSpec] = None - workflow: Optional[AgentWorkFlowConfig] = None - model: Optional[Model] = None - message: Optional[Message] = None - connection_id: Optional[str] = None + +class Response(SQLModel): + message: str + status: bool + data: Optional[Any] = None -@dataclass -class SocketMessage(object): +class SocketMessage(SQLModel, table=False): connection_id: str data: Dict[str, Any] type: str - - def dict(self): - result = asdict(self) - return result diff --git 
a/samples/apps/autogen-studio/autogenstudio/utils/__init__.py b/samples/apps/autogen-studio/autogenstudio/utils/__init__.py index f37b0b0486a2..16281fe0b66d 100644 --- a/samples/apps/autogen-studio/autogenstudio/utils/__init__.py +++ b/samples/apps/autogen-studio/autogenstudio/utils/__init__.py @@ -1,2 +1 @@ -from .dbutils import * from .utils import * diff --git a/samples/apps/autogen-studio/autogenstudio/utils/dbutils.py b/samples/apps/autogen-studio/autogenstudio/utils/dbutils.py deleted file mode 100644 index dca0fc6b0a64..000000000000 --- a/samples/apps/autogen-studio/autogenstudio/utils/dbutils.py +++ /dev/null @@ -1,860 +0,0 @@ -import json -import logging -import os -import sqlite3 -import threading -from typing import Any, Dict, List, Optional, Tuple - -from ..datamodel import AgentFlowSpec, AgentWorkFlowConfig, Gallery, Message, Model, Session, Skill -from ..version import __version__ as __db_version__ - -VERSION_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS version ( - - version TEXT NOT NULL, - UNIQUE (version) - ) - """ - -MODELS_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS models ( - id TEXT NOT NULL, - user_id TEXT NOT NULL, - timestamp DATETIME NOT NULL, - model TEXT, - api_key TEXT, - base_url TEXT, - api_type TEXT, - api_version TEXT, - description TEXT, - UNIQUE (id, user_id) - ) - """ - - -MESSAGES_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS messages ( - user_id TEXT NOT NULL, - session_id TEXT, - root_msg_id TEXT NOT NULL, - msg_id TEXT, - role TEXT NOT NULL, - content TEXT NOT NULL, - metadata TEXT, - timestamp DATETIME, - UNIQUE (user_id, root_msg_id, msg_id) - ) - """ - -SESSIONS_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS sessions ( - id TEXT NOT NULL, - user_id TEXT NOT NULL, - timestamp DATETIME NOT NULL, - name TEXT, - flow_config TEXT, - UNIQUE (user_id, id) - ) - """ - -SKILLS_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS skills ( - id TEXT NOT NULL, - user_id TEXT NOT NULL, - timestamp DATETIME NOT NULL, - content TEXT, - title TEXT, 
- file_name TEXT, - UNIQUE (id, user_id) - ) - """ -AGENTS_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS agents ( - - id TEXT NOT NULL, - user_id TEXT NOT NULL, - timestamp DATETIME NOT NULL, - config TEXT, - type TEXT, - skills TEXT, - UNIQUE (id, user_id) - ) - """ - -WORKFLOWS_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS workflows ( - id TEXT NOT NULL, - user_id TEXT NOT NULL, - timestamp DATETIME NOT NULL, - sender TEXT, - receiver TEXT, - type TEXT, - name TEXT, - description TEXT, - summary_method TEXT, - UNIQUE (id, user_id) - ) - """ - -GALLERY_TABLE_SQL = """ - CREATE TABLE IF NOT EXISTS gallery ( - id TEXT NOT NULL, - session TEXT, - messages TEXT, - tags TEXT, - timestamp DATETIME NOT NULL, - UNIQUE ( id) - ) - """ - - -lock = threading.Lock() -logger = logging.getLogger() - - -class DBManager: - """ - A database manager class that handles the creation and interaction with an SQLite database. - """ - - def __init__(self, path: str = "database.sqlite", **kwargs: Any) -> None: - """ - Initializes the DBManager object, creates a database if it does not exist, and establishes a connection. - - Args: - path (str): The file path to the SQLite database file. - **kwargs: Additional keyword arguments to pass to the sqlite3.connect method. - """ - - self.path = path - # check if the database exists, if not create it - # self.reset_db() - if not os.path.exists(self.path): - logger.info("Creating database") - self.init_db(path=self.path, **kwargs) - - try: - self.conn = sqlite3.connect(self.path, check_same_thread=False, **kwargs) - self.cursor = self.conn.cursor() - self.migrate() - except Exception as e: - logger.error("Error connecting to database: %s", e) - raise e - - def migrate(self): - """ - Run migrations to update the database schema. 
- """ - self.add_column_if_not_exists("sessions", "name", "TEXT") - self.add_column_if_not_exists("models", "description", "TEXT") - - def add_column_if_not_exists(self, table: str, column: str, column_type: str): - """ - Adds a new column to the specified table if it does not exist. - - Args: - table (str): The table name where the column should be added. - column (str): The column name that should be added. - column_type (str): The data type of the new column. - """ - try: - self.cursor.execute(f"PRAGMA table_info({table})") - column_names = [row[1] for row in self.cursor.fetchall()] - if column not in column_names: - self.cursor.execute(f"ALTER TABLE {table} ADD COLUMN {column} {column_type}") - self.conn.commit() - logger.info(f"Migration: New '{column}' column has been added to the '{table}' table.") - else: - logger.info(f"'{column}' column already exists in the '{table}' table.") - - except Exception as e: - print(f"Error while checking and updating '{table}' table: {e}") - - def reset_db(self): - """ - Reset the database by deleting the database file and creating a new one. - """ - print("resetting db") - if os.path.exists(self.path): - os.remove(self.path) - self.init_db(path=self.path) - - def init_db(self, path: str = "database.sqlite", **kwargs: Any) -> None: - """ - Initializes the database by creating necessary tables. - - Args: - path (str): The file path to the SQLite database file. - **kwargs: Additional keyword arguments to pass to the sqlite3.connect method. 
- """ - # Connect to the database (or create a new one if it doesn't exist) - self.conn = sqlite3.connect(path, check_same_thread=False, **kwargs) - self.cursor = self.conn.cursor() - - # Create the version table - self.cursor.execute(VERSION_TABLE_SQL) - self.cursor.execute("INSERT INTO version (version) VALUES (?)", (__db_version__,)) - - # Create the models table - self.cursor.execute(MODELS_TABLE_SQL) - - # Create the messages table - self.cursor.execute(MESSAGES_TABLE_SQL) - - # Create a sessions table - self.cursor.execute(SESSIONS_TABLE_SQL) - - # Create a skills - self.cursor.execute(SKILLS_TABLE_SQL) - - # Create a gallery table - self.cursor.execute(GALLERY_TABLE_SQL) - - # Create a agents table - self.cursor.execute(AGENTS_TABLE_SQL) - - # Create a workflows table - self.cursor.execute(WORKFLOWS_TABLE_SQL) - - # init skills table with content of defaultskills.json in current directory - current_dir = os.path.dirname(os.path.realpath(__file__)) - with open(os.path.join(current_dir, "dbdefaults.json"), "r", encoding="utf-8") as json_file: - data = json.load(json_file) - skills = data["skills"] - agents = data["agents"] - models = data["models"] - for model in models: - model = Model(**model) - self.cursor.execute( - "INSERT INTO models (id, user_id, timestamp, model, api_key, base_url, api_type, api_version, description) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", - ( - model.id, - "default", - model.timestamp, - model.model, - model.api_key, - model.base_url, - model.api_type, - model.api_version, - model.description, - ), - ) - - for skill in skills: - skill = Skill(**skill) - - self.cursor.execute( - "INSERT INTO skills (id, user_id, timestamp, content, title, file_name) VALUES (?, ?, ?, ?, ?, ?)", - (skill.id, "default", skill.timestamp, skill.content, skill.title, skill.file_name), - ) - for agent in agents: - agent = AgentFlowSpec(**agent) - agent.skills = [skill.dict() for skill in agent.skills] if agent.skills else None - self.cursor.execute( - "INSERT 
INTO agents (id, user_id, timestamp, config, type, skills) VALUES (?, ?, ?, ?, ?, ?)", - ( - agent.id, - "default", - agent.timestamp, - json.dumps(agent.config.dict()), - agent.type, - json.dumps(agent.skills), - ), - ) - - for workflow in data["workflows"]: - workflow = AgentWorkFlowConfig(**workflow) - self.cursor.execute( - "INSERT INTO workflows (id, user_id, timestamp, sender, receiver, type, name, description, summary_method) VALUES (?, ?, ?, ?, ?, ?, ?, ?,?)", - ( - workflow.id, - "default", - workflow.timestamp, - json.dumps(workflow.sender.dict()), - json.dumps(workflow.receiver.dict()), - workflow.type, - workflow.name, - workflow.description, - workflow.summary_method, - ), - ) - - # Commit the changes and close the connection - self.conn.commit() - - def query(self, query: str, args: Tuple = (), return_json: bool = False) -> List[Dict[str, Any]]: - """ - Executes a given SQL query and returns the results. - - Args: - query (str): The SQL query to execute. - args (Tuple): The arguments to pass to the SQL query. - return_json (bool): If True, the results will be returned as a list of dictionaries. - - Returns: - List[Dict[str, Any]]: The result of the SQL query. - """ - try: - with lock: - self.cursor.execute(query, args) - result = self.cursor.fetchall() - self.commit() - if return_json: - result = [dict(zip([key[0] for key in self.cursor.description], row)) for row in result] - return result - except Exception as e: - logger.error("Error running query with query %s and args %s: %s", query, args, e) - raise e - - def commit(self) -> None: - """ - Commits the current transaction Modelto the database. - """ - self.conn.commit() - - def close(self) -> None: - """ - Closes the database connection. - """ - self.conn.close() - - -def get_models(user_id: str, dbmanager: DBManager) -> List[dict]: - """ - Get all models for a given user from the database. 
- - Args: - user_id: The user id to get models for - dbmanager: The DBManager instance to interact with the database - - Returns: - A list of model configurations - """ - query = "SELECT * FROM models WHERE user_id = ? OR user_id = ?" - args = (user_id, "default") - results = dbmanager.query(query, args, return_json=True) - return results - - -def upsert_model(model: Model, dbmanager: DBManager) -> List[dict]: - """ - Insert or update a model configuration in the database. - - Args: - model: The Model object containing model configuration data - dbmanager: The DBManager instance to interact with the database - - Returns: - A list of model configurations - """ - - # Check if the model config with the provided id already exists in the database - existing_model = get_item_by_field("models", "id", model.id, dbmanager) - - if existing_model: - # If the model config exists, update it with the new data - updated_data = { - "model": model.model, - "api_key": model.api_key, - "base_url": model.base_url, - "api_type": model.api_type, - "api_version": model.api_version, - "user_id": model.user_id, - "timestamp": model.timestamp, - "description": model.description, - } - update_item("models", model.id, updated_data, dbmanager) - else: - # If the model config does not exist, insert a new one - query = """ - INSERT INTO models (id, user_id, timestamp, model, api_key, base_url, api_type, api_version, description) - VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?) - """ - args = ( - model.id, - model.user_id, - model.timestamp, - model.model, - model.api_key, - model.base_url, - model.api_type, - model.api_version, - model.description, - ) - dbmanager.query(query=query, args=args) - - # Return the inserted or updated model config - models = get_models(model.user_id, dbmanager) - return models - - -def delete_model(model: Model, dbmanager: DBManager) -> List[dict]: - """ - Delete a model configuration from the database where id = model.id and user_id = model.user_id. 
- - Args: - model: The Model object containing model configuration data - dbmanager: The DBManager instance to interact with the database - - Returns: - A list of model configurations - """ - - query = "DELETE FROM models WHERE id = ? AND user_id = ?" - args = (model.id, model.user_id) - dbmanager.query(query=query, args=args) - - # Return the remaining model configs - models = get_models(model.user_id, dbmanager) - return models - - -def create_message(message: Message, dbmanager: DBManager) -> List[dict]: - """ - Save a message in the database using the provided database manager. - - :param message: The Message object containing message data - :param dbmanager: The DBManager instance used to interact with the database - """ - query = "INSERT INTO messages (user_id, root_msg_id, msg_id, role, content, metadata, timestamp, session_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)" - args = ( - message.user_id, - message.root_msg_id, - message.msg_id, - message.role, - message.content, - message.metadata, - message.timestamp, - message.session_id, - ) - dbmanager.query(query=query, args=args) - messages = get_messages(user_id=message.user_id, session_id=message.session_id, dbmanager=dbmanager) - return messages - - -def get_messages(user_id: str, session_id: str, dbmanager: DBManager) -> List[dict]: - """ - Load messages for a specific user and session from the database, sorted by timestamp. - - :param user_id: The ID of the user whose messages are to be loaded - :param session_id: The ID of the session whose messages are to be loaded - :param dbmanager: The DBManager instance to interact with the database - - :return: A list of dictionaries, each representing a message - """ - query = "SELECT * FROM messages WHERE user_id = ? AND session_id = ?" 
- args = (user_id, session_id) - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=False) - return result - - -def get_sessions(user_id: str, dbmanager: DBManager) -> List[dict]: - """ - Load sessions for a specific user from the database, sorted by timestamp. - - :param user_id: The ID of the user whose sessions are to be loaded - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a session - """ - query = "SELECT * FROM sessions WHERE user_id = ?" - args = (user_id,) - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=True) - for row in result: - row["flow_config"] = json.loads(row["flow_config"]) - return result - - -def create_session(user_id: str, session: Session, dbmanager: DBManager) -> List[dict]: - """ - Create a new session for a specific user in the database. - - :param user_id: The ID of the user whose session is to be created - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a session - """ - query = "INSERT INTO sessions (user_id, id, timestamp, flow_config) VALUES (?, ?, ?,?)" - args = (session.user_id, session.id, session.timestamp, json.dumps(session.flow_config.dict())) - dbmanager.query(query=query, args=args) - sessions = get_sessions(user_id=user_id, dbmanager=dbmanager) - - return sessions - - -def rename_session(name: str, session: Session, dbmanager: DBManager) -> List[dict]: - """ - Edit a session for a specific user in the database. 
- - :param name: The new name of the session - :param session: The Session object containing session data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a session - """ - - query = "UPDATE sessions SET name = ? WHERE id = ?" - args = (name, session.id) - dbmanager.query(query=query, args=args) - sessions = get_sessions(user_id=session.user_id, dbmanager=dbmanager) - - return sessions - - -def delete_session(session: Session, dbmanager: DBManager) -> List[dict]: - """ - Delete a specific session and all messages for that session in the database. - - :param session: The Session object containing session data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of the remaining sessions - """ - - query = "DELETE FROM sessions WHERE id = ?" - args = (session.id,) - dbmanager.query(query=query, args=args) - - query = "DELETE FROM messages WHERE session_id = ?" - args = (session.id,) - dbmanager.query(query=query, args=args) - - return get_sessions(user_id=session.user_id, dbmanager=dbmanager) - - -def create_gallery(session: Session, dbmanager: DBManager, tags: List[str] = []) -> Gallery: - """ - Publish a session to the gallery table in the database. Fetches the session messages first, then saves session and messages object to the gallery database table. 
- :param session: The Session object containing session data - :param dbmanager: The DBManager instance used to interact with the database - :param tags: A list of tags to associate with the session - :return: A gallery object containing the session and messages objects - """ - - messages = get_messages(user_id=session.user_id, session_id=session.id, dbmanager=dbmanager) - gallery_item = Gallery(session=session, messages=messages, tags=tags) - query = "INSERT INTO gallery (id, session, messages, tags, timestamp) VALUES (?, ?, ?, ?,?)" - args = ( - gallery_item.id, - json.dumps(gallery_item.session.dict()), - json.dumps([message.dict() for message in gallery_item.messages]), - json.dumps(gallery_item.tags), - gallery_item.timestamp, - ) - dbmanager.query(query=query, args=args) - return gallery_item - - -def get_gallery(gallery_id, dbmanager: DBManager) -> List[Gallery]: - """ - Load gallery items from the database, sorted by timestamp. If gallery_id is provided, only the gallery item with the matching gallery_id will be returned. - - :param gallery_id: The ID of the gallery item to be loaded - :param dbmanager: The DBManager instance to interact with the database - :return: A list of Gallery objects - """ - - if gallery_id: - query = "SELECT * FROM gallery WHERE id = ?" - args = (gallery_id,) - else: - query = "SELECT * FROM gallery" - args = () - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=True) - gallery = [] - for row in result: - gallery_item = Gallery( - id=row["id"], - session=Session(**json.loads(row["session"])), - messages=[Message(**message) for message in json.loads(row["messages"])], - tags=json.loads(row["tags"]), - timestamp=row["timestamp"], - ) - gallery.append(gallery_item) - return gallery - - -def get_skills(user_id: str, dbmanager: DBManager) -> List[Skill]: - """ - Load skills from the database, sorted by timestamp. 
Load skills where id = user_id or user_id = default. - - :param user_id: The ID of the user whose skills are to be loaded - :param dbmanager: The DBManager instance to interact with the database - :return: A list of Skill objects - """ - - query = "SELECT * FROM skills WHERE user_id = ? OR user_id = ?" - args = (user_id, "default") - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=True) - skills = [] - for row in result: - skill = Skill(**row) - skills.append(skill) - return skills - - -def upsert_skill(skill: Skill, dbmanager: DBManager) -> List[Skill]: - """ - Insert or update a skill for a specific user in the database. - - If the skill with the given ID already exists, it will be updated with the new data. - Otherwise, a new skill will be created. - - :param skill: The Skill object containing skill data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a skill - """ - - existing_skill = get_item_by_field("skills", "id", skill.id, dbmanager) - - if existing_skill: - updated_data = { - "user_id": skill.user_id, - "timestamp": skill.timestamp, - "content": skill.content, - "title": skill.title, - "file_name": skill.file_name, - } - update_item("skills", skill.id, updated_data, dbmanager) - else: - query = "INSERT INTO skills (id, user_id, timestamp, content, title, file_name) VALUES (?, ?, ?, ?, ?, ?)" - args = (skill.id, skill.user_id, skill.timestamp, skill.content, skill.title, skill.file_name) - dbmanager.query(query=query, args=args) - - skills = get_skills(user_id=skill.user_id, dbmanager=dbmanager) - - return skills - - -def delete_skill(skill: Skill, dbmanager: DBManager) -> List[Skill]: - """ - Delete a skill for a specific user in the database. 
- - :param skill: The Skill object containing skill data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a skill - """ - # delete where id = skill.id and user_id = skill.user_id - query = "DELETE FROM skills WHERE id = ? AND user_id = ?" - args = (skill.id, skill.user_id) - dbmanager.query(query=query, args=args) - - return get_skills(user_id=skill.user_id, dbmanager=dbmanager) - - -def delete_message( - user_id: str, msg_id: str, session_id: str, dbmanager: DBManager, delete_all: bool = False -) -> List[dict]: - """ - Delete a specific message or all messages for a user and session from the database. - - :param user_id: The ID of the user whose messages are to be deleted - :param msg_id: The ID of the specific message to be deleted (ignored if delete_all is True) - :param session_id: The ID of the session whose messages are to be deleted - :param dbmanager: The DBManager instance to interact with the database - :param delete_all: If True, all messages for the user will be deleted - :return: A list of the remaining messages if not all were deleted, otherwise an empty list - """ - - if delete_all: - query = "DELETE FROM messages WHERE user_id = ? AND session_id = ?" - args = (user_id, session_id) - dbmanager.query(query=query, args=args) - return [] - else: - query = "DELETE FROM messages WHERE user_id = ? AND msg_id = ? AND session_id = ?" - args = (user_id, msg_id, session_id) - dbmanager.query(query=query, args=args) - messages = get_messages(user_id=user_id, session_id=session_id, dbmanager=dbmanager) - return messages - - -def get_agents(user_id: str, dbmanager: DBManager) -> List[AgentFlowSpec]: - """ - Load agents from the database, sorted by timestamp. Load agents where id = user_id or user_id = default. 
- - :param user_id: The ID of the user whose agents are to be loaded - :param dbmanager: The DBManager instance to interact with the database - :return: A list of AgentFlowSpec objects - """ - - query = "SELECT * FROM agents WHERE user_id = ? OR user_id = ?" - args = (user_id, "default") - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=True) - agents = [] - for row in result: - row["config"] = json.loads(row["config"]) - row["skills"] = json.loads(row["skills"] or "[]") - agent = AgentFlowSpec(**row) - agents.append(agent) - return agents - - -def upsert_agent(agent_flow_spec: AgentFlowSpec, dbmanager: DBManager) -> List[Dict[str, Any]]: - """ - Insert or update an agent for a specific user in the database. - - If the agent with the given ID already exists, it will be updated with the new data. - Otherwise, a new agent will be created. - - :param agent_flow_spec: The AgentFlowSpec object containing agent configuration - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing an agent after insertion or update - """ - - existing_agent = get_item_by_field("agents", "id", agent_flow_spec.id, dbmanager) - - if existing_agent: - updated_data = { - "user_id": agent_flow_spec.user_id, - "timestamp": agent_flow_spec.timestamp, - "config": json.dumps(agent_flow_spec.config.dict()), - "type": agent_flow_spec.type, - "skills": json.dumps([x.dict() for x in agent_flow_spec.skills] if agent_flow_spec.skills else []), - } - update_item("agents", agent_flow_spec.id, updated_data, dbmanager) - else: - query = "INSERT INTO agents (id, user_id, timestamp, config, type, skills) VALUES (?, ?, ?, ?, ?,?)" - config_json = json.dumps(agent_flow_spec.config.dict()) - args = ( - agent_flow_spec.id, - agent_flow_spec.user_id, - agent_flow_spec.timestamp, - config_json, - agent_flow_spec.type, - 
json.dumps([x.dict() for x in agent_flow_spec.skills] if agent_flow_spec.skills else []), - ) - dbmanager.query(query=query, args=args) - - agents = get_agents(user_id=agent_flow_spec.user_id, dbmanager=dbmanager) - return agents - - -def delete_agent(agent: AgentFlowSpec, dbmanager: DBManager) -> List[Dict[str, Any]]: - """ - Delete an agent for a specific user from the database. - - :param agent: The AgentFlowSpec object containing agent configuration - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing an agent after deletion - """ - - # delete based on agent.id and agent.user_id - query = "DELETE FROM agents WHERE id = ? AND user_id = ?" - args = (agent.id, agent.user_id) - dbmanager.query(query=query, args=args) - - return get_agents(user_id=agent.user_id, dbmanager=dbmanager) - - -def get_item_by_field(table: str, field: str, value: Any, dbmanager: DBManager) -> Optional[Dict[str, Any]]: - query = f"SELECT * FROM {table} WHERE {field} = ?" - args = (value,) - result = dbmanager.query(query=query, args=args) - return result[0] if result else None - - -def update_item(table: str, item_id: str, updated_data: Dict[str, Any], dbmanager: DBManager) -> None: - set_clause = ", ".join([f"{key} = ?" for key in updated_data.keys()]) - query = f"UPDATE {table} SET {set_clause} WHERE id = ?" - args = (*updated_data.values(), item_id) - dbmanager.query(query=query, args=args) - - -def get_workflows(user_id: str, dbmanager: DBManager) -> List[Dict[str, Any]]: - """ - Load workflows for a specific user from the database, sorted by timestamp. - - :param user_id: The ID of the user whose workflows are to be loaded - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a workflow - """ - query = "SELECT * FROM workflows WHERE user_id = ? OR user_id = ?" 
- args = (user_id, "default") - result = dbmanager.query(query=query, args=args, return_json=True) - # Sort by timestamp ascending - result = sorted(result, key=lambda k: k["timestamp"], reverse=True) - workflows = [] - for row in result: - row["sender"] = json.loads(row["sender"]) - row["receiver"] = json.loads(row["receiver"]) - workflow = AgentWorkFlowConfig(**row) - workflows.append(workflow) - return workflows - - -def upsert_workflow(workflow: AgentWorkFlowConfig, dbmanager: DBManager) -> List[Dict[str, Any]]: - """ - Insert or update a workflow for a specific user in the database. - - If the workflow with the given ID already exists, it will be updated with the new data. - Otherwise, a new workflow will be created. - - :param workflow: The AgentWorkFlowConfig object containing workflow data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a workflow after insertion or update - """ - existing_workflow = get_item_by_field("workflows", "id", workflow.id, dbmanager) - - # print(workflow.receiver) - - if existing_workflow: - updated_data = { - "user_id": workflow.user_id, - "timestamp": workflow.timestamp, - "sender": json.dumps(workflow.sender.dict()), - "receiver": json.dumps( - [receiver.dict() for receiver in workflow.receiver] - if isinstance(workflow.receiver, list) - else workflow.receiver.dict() - ), - "type": workflow.type, - "name": workflow.name, - "description": workflow.description, - "summary_method": workflow.summary_method, - } - update_item("workflows", workflow.id, updated_data, dbmanager) - else: - query = "INSERT INTO workflows (id, user_id, timestamp, sender, receiver, type, name, description, summary_method) VALUES (?, ?, ?, ?, ?, ?, ?, ?,?)" - args = ( - workflow.id, - workflow.user_id, - workflow.timestamp, - json.dumps(workflow.sender.dict()), - json.dumps( - [receiver.dict() for receiver in workflow.receiver] - if isinstance(workflow.receiver, list) - else 
workflow.receiver.dict() - ), - workflow.type, - workflow.name, - workflow.description, - workflow.summary_method, - ) - dbmanager.query(query=query, args=args) - - return get_workflows(user_id=workflow.user_id, dbmanager=dbmanager) - - -def delete_workflow(workflow: AgentWorkFlowConfig, dbmanager: DBManager) -> List[Dict[str, Any]]: - """ - Delete a workflow for a specific user from the database. If the workflow does not exist, do nothing. - - :param workflow: The AgentWorkFlowConfig object containing workflow data - :param dbmanager: The DBManager instance to interact with the database - :return: A list of dictionaries, each representing a workflow after deletion - """ - - # delete where workflow.id =id and workflow.user_id = user_id - - query = "DELETE FROM workflows WHERE id = ? AND user_id = ?" - args = (workflow.id, workflow.user_id) - dbmanager.query(query=query, args=args) - - return get_workflows(user_id=workflow.user_id, dbmanager=dbmanager) diff --git a/samples/apps/autogen-studio/autogenstudio/utils/utils.py b/samples/apps/autogen-studio/autogenstudio/utils/utils.py index 49a8ac91acdc..ed533ec3883c 100644 --- a/samples/apps/autogen-studio/autogenstudio/utils/utils.py +++ b/samples/apps/autogen-studio/autogenstudio/utils/utils.py @@ -3,15 +3,17 @@ import os import re import shutil +from datetime import datetime from pathlib import Path -from typing import Dict, List, Tuple, Union +from typing import Any, Dict, List, Tuple, Union from dotenv import load_dotenv +from loguru import logger -import autogen -from autogen.oai.client import OpenAIWrapper +from autogen.coding import DockerCommandLineCodeExecutor, LocalCommandLineCodeExecutor +from autogen.oai.client import ModelClient, OpenAIWrapper -from ..datamodel import AgentConfig, AgentFlowSpec, AgentWorkFlowConfig, LLMConfig, Model, Skill +from ..datamodel import CodeExecutionConfigTypes, Model, Skill from ..version import APP_NAME @@ -25,6 +27,23 @@ def md5_hash(text: str) -> str: return 
hashlib.md5(text.encode()).hexdigest() +def check_and_cast_datetime_fields(obj: Any) -> Any: + if hasattr(obj, "created_at") and isinstance(obj.created_at, str): + obj.created_at = str_to_datetime(obj.created_at) + + if hasattr(obj, "updated_at") and isinstance(obj.updated_at, str): + obj.updated_at = str_to_datetime(obj.updated_at) + + return obj + + +def str_to_datetime(dt_str: str) -> datetime: + if dt_str[-1] == "Z": + # Replace 'Z' with '+00:00' for UTC timezone + dt_str = dt_str[:-1] + "+00:00" + return datetime.fromisoformat(dt_str) + + def clear_folder(folder_path: str) -> None: """ Clear the contents of a folder. @@ -98,7 +117,16 @@ def get_file_type(file_path: str) -> str: CSV_EXTENSIONS = {".csv", ".xlsx"} # Supported image extensions - IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".svg", ".webp"} + IMAGE_EXTENSIONS = { + ".png", + ".jpg", + ".jpeg", + ".gif", + ".bmp", + ".tiff", + ".svg", + ".webp", + } # Supported (web) video extensions VIDEO_EXTENSIONS = {".mp4", ".webm", ".ogg", ".mov", ".avi", ".wmv"} @@ -199,20 +227,42 @@ def get_modified_files(start_timestamp: float, end_timestamp: float, source_dir: return modified_files -def init_app_folders(app_file_path: str) -> Dict[str, str]: +def get_app_root() -> str: """ - Initialize folders needed for a web server, such as static file directories - and user-specific data directories. Also load any .env file if it exists. + Get the root directory of the application. - :param root_file_path: The root directory where webserver folders will be created - :return: A dictionary with the path of each created folder + :return: The root directory of the application. 
""" - app_name = f".{APP_NAME}" default_app_root = os.path.join(os.path.expanduser("~"), app_name) if not os.path.exists(default_app_root): os.makedirs(default_app_root, exist_ok=True) app_root = os.environ.get("AUTOGENSTUDIO_APPDIR") or default_app_root + return app_root + + +def get_db_uri(app_root: str) -> str: + """ + Get the default database URI for the application. + + :param app_root: The root directory of the application. + :return: The default database URI. + """ + db_uri = f"sqlite:///{os.path.join(app_root, 'database.sqlite')}" + db_uri = os.environ.get("AUTOGENSTUDIO_DATABASE_URI") or db_uri + logger.info(f"Using database URI: {db_uri}") + return db_uri + + +def init_app_folders(app_file_path: str) -> Dict[str, str]: + """ + Initialize folders needed for a web server, such as static file directories + and user-specific data directories. Also load any .env file if it exists. + + :param root_file_path: The root directory where webserver folders will be created + :return: A dictionary with the path of each created folder + """ + app_root = get_app_root() if not os.path.exists(app_root): os.makedirs(app_root, exist_ok=True) @@ -220,7 +270,7 @@ def init_app_folders(app_file_path: str) -> Dict[str, str]: # load .env file if it exists env_file = os.path.join(app_root, ".env") if os.path.exists(env_file): - print(f"Loading environment variables from {env_file}") + logger.info(f"Loaded environment variables from {env_file}") load_dotenv(env_file) files_static_root = os.path.join(app_root, "files/") @@ -233,8 +283,9 @@ def init_app_folders(app_file_path: str) -> Dict[str, str]: "files_static_root": files_static_root, "static_folder_root": static_folder_root, "app_root": app_root, + "database_engine_uri": get_db_uri(app_root=app_root), } - print(f"Initialized application data folder: {app_root}") + logger.info(f"Initialized application data folder: {app_root}") return folders @@ -258,11 +309,11 @@ def get_skills_from_prompt(skills: List[Skill], work_dir: str) -> 
str: for skill in skills: prompt += f""" -##### Begin of {skill.title} ##### +##### Begin of {skill.name} ##### {skill.content} -#### End of {skill.title} #### +#### End of {skill.name} #### """ @@ -290,7 +341,6 @@ def delete_files_in_folder(folders: Union[str, List[str]]) -> None: for folder in folders: # Check if the folder exists if not os.path.isdir(folder): - print(f"The folder {folder} does not exist.") continue # List all the entries in the directory @@ -306,56 +356,7 @@ def delete_files_in_folder(folders: Union[str, List[str]]) -> None: shutil.rmtree(path) except Exception as e: # Print the error message and skip - print(f"Failed to delete {path}. Reason: {e}") - - -def get_default_agent_config(work_dir: str) -> AgentWorkFlowConfig: - """ - Get a default agent flow config . - """ - - llm_config = LLMConfig( - config_list=[{"model": "gpt-4"}], - temperature=0, - ) - - USER_PROXY_INSTRUCTIONS = """If the request has been addressed sufficiently, summarize the answer and end with the word TERMINATE. Otherwise, ask a follow-up question. - """ - - userproxy_spec = AgentFlowSpec( - type="userproxy", - config=AgentConfig( - name="user_proxy", - human_input_mode="NEVER", - system_message=USER_PROXY_INSTRUCTIONS, - code_execution_config={ - "work_dir": work_dir, - "use_docker": False, - }, - max_consecutive_auto_reply=10, - llm_config=llm_config, - is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"), - ), - ) - - assistant_spec = AgentFlowSpec( - type="assistant", - config=AgentConfig( - name="primary_assistant", - system_message=autogen.AssistantAgent.DEFAULT_SYSTEM_MESSAGE, - llm_config=llm_config, - ), - ) - - flow_config = AgentWorkFlowConfig( - name="default", - sender=userproxy_spec, - receiver=assistant_spec, - type="default", - description="Default agent flow config", - ) - - return flow_config + logger.info(f"Failed to delete {path}. 
Reason: {e}") def extract_successful_code_blocks(messages: List[Dict[str, str]]) -> List[str]: @@ -392,7 +393,7 @@ def sanitize_model(model: Model): Sanitize model dictionary to remove None values and empty strings and only keep valid keys. """ if isinstance(model, Model): - model = model.dict() + model = model.model_dump() valid_keys = ["model", "base_url", "api_key", "api_type", "api_version"] # only add key if value is not None sanitized_model = {k: v for k, v in model.items() if (v is not None and v != "") and k in valid_keys} @@ -410,16 +411,36 @@ def test_model(model: Model): return response.choices[0].message.content -# summarize_chat_history (messages, model) .. returns a summary of the chat history +def load_code_execution_config(code_execution_type: CodeExecutionConfigTypes, work_dir: str): + """ + Load the code execution configuration based on the code execution type. + :param code_execution_type: The code execution type. + :param work_dir: The working directory to store code execution files. + :return: The code execution configuration. -def summarize_chat_history(task: str, messages: List[Dict[str, str]], model: Model): + """ + work_dir = Path(work_dir) + work_dir.mkdir(exist_ok=True) + executor = None + if code_execution_type == CodeExecutionConfigTypes.local: + executor = LocalCommandLineCodeExecutor(work_dir=work_dir) + elif code_execution_type == CodeExecutionConfigTypes.docker: + executor = DockerCommandLineCodeExecutor(work_dir=work_dir) + elif code_execution_type == CodeExecutionConfigTypes.none: + return False + else: + raise ValueError(f"Invalid code execution type: {code_execution_type}") + code_execution_config = { + "executor": executor, + } + return code_execution_config + + +def summarize_chat_history(task: str, messages: List[Dict[str, str]], client: ModelClient): """ Summarize the chat history using the model endpoint and returning the response. 
""" - - sanitized_model = sanitize_model(model) - client = OpenAIWrapper(config_list=[sanitized_model]) summarization_system_prompt = f""" You are a helpful assistant that is able to review the chat history between a set of agents (userproxy agents, assistants etc) as they try to address a given TASK and provide a summary. Be SUCCINCT but also comprehensive enough to allow others (who cannot see the chat history) understand and recreate the solution. @@ -427,7 +448,7 @@ def summarize_chat_history(task: str, messages: List[Dict[str, str]], model: Mod === {task} === - The summary should focus on extracting the actual solution to the task from the chat history (assuming the task was addressed) such that any other agent reading the summary will understand what the actual solution is. Use a neutral tone and DO NOT directly mention the agents. Instead only focus on the actions that were carried out (e.g. do not say 'assistant agent generated some code visualization code ..' instead say say 'visualization code was generated ..' ). + The summary should focus on extracting the actual solution to the task from the chat history (assuming the task was addressed) such that any other agent reading the summary will understand what the actual solution is. Use a neutral tone and DO NOT directly mention the agents. Instead only focus on the actions that were carried out (e.g. do not say 'assistant agent generated some code visualization code ..' instead say say 'visualization code was generated ..'. The answer should be framed as a response to the user task. E.g. if the task is "What is the height of the Eiffel tower", the summary should be "The height of the Eiffel Tower is ..."). 
""" summarization_prompt = [ { diff --git a/samples/apps/autogen-studio/autogenstudio/version.py b/samples/apps/autogen-studio/autogenstudio/version.py index 18b7f42aac34..bafe37f75b14 100644 --- a/samples/apps/autogen-studio/autogenstudio/version.py +++ b/samples/apps/autogen-studio/autogenstudio/version.py @@ -1,3 +1,3 @@ -VERSION = "0.0.54" +VERSION = "0.0.56rc9" __version__ = VERSION APP_NAME = "autogenstudio" diff --git a/samples/apps/autogen-studio/autogenstudio/web/app.py b/samples/apps/autogen-studio/autogenstudio/web/app.py index 6d5412e9fed5..76ab8139ebc3 100644 --- a/samples/apps/autogen-studio/autogenstudio/web/app.py +++ b/samples/apps/autogen-studio/autogenstudio/web/app.py @@ -1,25 +1,23 @@ import asyncio -import json import os import queue import threading import traceback from contextlib import asynccontextmanager +from typing import Any -from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect +from fastapi import FastAPI, WebSocket, WebSocketDisconnect from fastapi.middleware.cors import CORSMiddleware from fastapi.staticfiles import StaticFiles +from loguru import logger from openai import OpenAIError from ..chatmanager import AutoGenChatManager, WebSocketConnectionManager -from ..datamodel import ( - DBWebRequestModel, - DeleteMessageWebRequestModel, - Message, - Session, -) -from ..utils import DBManager, dbutils, init_app_folders, md5_hash, test_model -from ..version import APP_NAME, VERSION +from ..database import workflow_from_id +from ..database.dbmanager import DBManager +from ..datamodel import Agent, Message, Model, Response, Session, Skill, Workflow +from ..utils import check_and_cast_datetime_fields, init_app_folders, md5_hash, test_model +from ..version import VERSION managers = {"chat": None} # manage calls to autogen # Create thread-safe queue for messages between api thread and autogen threads @@ -27,18 +25,29 @@ active_connections = [] active_connections_lock = asyncio.Lock() websocket_manager = 
WebSocketConnectionManager( - active_connections=active_connections, active_connections_lock=active_connections_lock + active_connections=active_connections, + active_connections_lock=active_connections_lock, ) def message_handler(): while True: message = message_queue.get() - print("Active Connections: ", [client_id for _, client_id in websocket_manager.active_connections]) - print("Current message connection id: ", message["connection_id"]) + logger.info( + "** Processing Agent Message on Queue: Active Connections: " + + str([client_id for _, client_id in websocket_manager.active_connections]) + + " **" + ) for connection, socket_client_id in websocket_manager.active_connections: if message["connection_id"] == socket_client_id: + logger.info( + f"Sending message to connection_id: {message['connection_id']}. Connection ID: {socket_client_id}" + ) asyncio.run(websocket_manager.send_message(message, connection)) + else: + logger.info( + f"Skipping message for connection_id: {message['connection_id']}. 
Connection ID: {socket_client_id}" + ) message_queue.task_done() @@ -46,10 +55,19 @@ def message_handler(): message_handler_thread.start() +app_file_path = os.path.dirname(os.path.abspath(__file__)) +folders = init_app_folders(app_file_path) +ui_folder_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ui") + +database_engine_uri = folders["database_engine_uri"] +dbmanager = DBManager(engine_uri=database_engine_uri) + + @asynccontextmanager async def lifespan(app: FastAPI): print("***** App started *****") managers["chat"] = AutoGenChatManager(message_queue=message_queue) + dbmanager.create_db_and_tables() yield # Close all active connections @@ -75,477 +93,312 @@ async def lifespan(app: FastAPI): ) -app_file_path = os.path.dirname(os.path.abspath(__file__)) -# init folders skills, workdir, static, files etc -folders = init_app_folders(app_file_path) -ui_folder_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ui") - api = FastAPI(root_path="/api") # mount an api route such that the main route serves the ui and the /api app.mount("/api", api) app.mount("/", StaticFiles(directory=ui_folder_path, html=True), name="ui") -api.mount("/files", StaticFiles(directory=folders["files_static_root"], html=True), name="files") +api.mount( + "/files", + StaticFiles(directory=folders["files_static_root"], html=True), + name="files", +) -db_path = os.path.join(folders["app_root"], "database.sqlite") -dbmanager = DBManager(path=db_path) # manage database operations # manage websocket connections -@api.post("/messages") -async def add_message(req: DBWebRequestModel): - message = Message(**req.message.dict()) - user_history = dbutils.get_messages(user_id=message.user_id, session_id=req.message.session_id, dbmanager=dbmanager) - - # save incoming message to db - dbutils.create_message(message=message, dbmanager=dbmanager) - user_dir = os.path.join(folders["files_static_root"], "user", md5_hash(message.user_id)) - os.makedirs(user_dir, exist_ok=True) - +def 
create_entity(model: Any, model_class: Any, filters: dict = None): + """Create a new entity""" + model = check_and_cast_datetime_fields(model) try: - response_message: Message = managers["chat"].chat( - message=message, - history=user_history, - user_dir=user_dir, - flow_config=req.workflow, - connection_id=req.connection_id, - ) - - # save agent's response to db - messages = dbutils.create_message(message=response_message, dbmanager=dbmanager) - response = { - "status": True, - "message": "Message processed successfully", - "data": messages, - # "metadata": json.loads(response_message.metadata), - } - return response - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while processing message: " + str(ex_error), - } - - -@api.get("/messages") -async def get_messages(user_id: str = None, session_id: str = None): - if user_id is None: - raise HTTPException(status_code=400, detail="user_id is required") - try: - user_history = dbutils.get_messages(user_id=user_id, session_id=session_id, dbmanager=dbmanager) - - return { - "status": True, - "data": user_history, - "message": "Messages retrieved successfully", - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving messages: " + str(ex_error), - } - + response: Response = dbmanager.upsert(model) + return response.model_dump(mode="json") -@api.get("/gallery") -async def get_gallery_items(gallery_id: str = None): - try: - gallery = dbutils.get_gallery(gallery_id=gallery_id, dbmanager=dbmanager) - return { - "status": True, - "data": gallery, - "message": "Gallery items retrieved successfully", - } except Exception as ex_error: print(ex_error) return { "status": False, - "message": "Error occurred while retrieving messages: " + str(ex_error), + "message": f"Error occurred while creating {model_class.__name__}: " + str(ex_error), } -@api.get("/sessions") -async def 
get_user_sessions(user_id: str = None): - """Return a list of all sessions for a user""" - if user_id is None: - raise HTTPException(status_code=400, detail="user_id is required") +def list_entity( + model_class: Any, + filters: dict = None, + return_json: bool = True, + order: str = "desc", +): + """List all entities for a user""" + return dbmanager.get(model_class, filters=filters, return_json=return_json, order=order) - try: - user_sessions = dbutils.get_sessions(user_id=user_id, dbmanager=dbmanager) - return { - "status": True, - "data": user_sessions, - "message": "Sessions retrieved successfully", - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving sessions: " + str(ex_error), - } +def delete_entity(model_class: Any, filters: dict = None): + """Delete an entity""" + return dbmanager.delete(filters=filters, model_class=model_class) -@api.post("/sessions") -async def create_user_session(req: DBWebRequestModel): - """Create a new session for a user""" - # print(req.session, "**********" ) - - try: - session = Session(user_id=req.session.user_id, flow_config=req.session.flow_config) - user_sessions = dbutils.create_session(user_id=req.user_id, session=session, dbmanager=dbmanager) - - return { - "status": True, - "message": "Session created successfully", - "data": user_sessions, - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while creating session: " + str(ex_error), - } - - -@api.post("/sessions/rename") -async def rename_user_session(name: str, req: DBWebRequestModel): - """Rename a session for a user""" - print("Rename: " + name) - print("renaming session for user: " + req.user_id + " to: " + name) - try: - session = dbutils.rename_session(name=name, session=req.session, dbmanager=dbmanager) - return { - "status": True, - "message": "Session renamed successfully", - "data": session, - } - except 
Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while renaming session: " + str(ex_error), - } - - -@api.post("/sessions/publish") -async def publish_user_session_to_gallery(req: DBWebRequestModel): - """Create a new session for a user""" - - try: - gallery_item = dbutils.create_gallery(req.session, tags=req.tags, dbmanager=dbmanager) - return { - "status": True, - "message": "Session successfully published", - "data": gallery_item, - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while publishing session: " + str(ex_error), - } +@api.get("/skills") +async def list_skills(user_id: str): + """List all skills for a user""" + filters = {"user_id": user_id} + return list_entity(Skill, filters=filters) -@api.delete("/sessions/delete") -async def delete_user_session(req: DBWebRequestModel): - """Delete a session for a user""" - try: - sessions = dbutils.delete_session(session=req.session, dbmanager=dbmanager) - return { - "status": True, - "message": "Session deleted successfully", - "data": sessions, - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while deleting session: " + str(ex_error), - } +@api.post("/skills") +async def create_skill(skill: Skill): + """Create a new skill""" + filters = {"user_id": skill.user_id} + return create_entity(skill, Skill, filters=filters) -@api.post("/messages/delete") -async def remove_message(req: DeleteMessageWebRequestModel): - """Delete a message from the database""" +@api.delete("/skills/delete") +async def delete_skill(skill_id: int, user_id: str): + """Delete a skill""" + filters = {"id": skill_id, "user_id": user_id} + return delete_entity(Skill, filters=filters) - try: - messages = dbutils.delete_message( - user_id=req.user_id, msg_id=req.msg_id, session_id=req.session_id, dbmanager=dbmanager - ) - return { - 
"status": True, - "message": "Message deleted successfully", - "data": messages, - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while deleting message: " + str(ex_error), - } +@api.get("/models") +async def list_models(user_id: str): + """List all models for a user""" + filters = {"user_id": user_id} + return list_entity(Model, filters=filters) -@api.get("/skills") -async def get_user_skills(user_id: str): - try: - skills = dbutils.get_skills(user_id, dbmanager=dbmanager) - return { - "status": True, - "message": "Skills retrieved successfully", - "data": skills, - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving skills: " + str(ex_error), - } +@api.post("/models") +async def create_model(model: Model): + """Create a new model""" + return create_entity(model, Model) -@api.post("/skills") -async def create_user_skills(req: DBWebRequestModel): +@api.post("/models/test") +async def test_model_endpoint(model: Model): + """Test a model""" try: - skills = dbutils.upsert_skill(skill=req.skill, dbmanager=dbmanager) + response = test_model(model) return { "status": True, - "message": "Skills retrieved successfully", - "data": skills, + "message": "Model tested successfully", + "data": response, } - - except Exception as ex_error: - print(ex_error) + except (OpenAIError, Exception) as ex_error: return { "status": False, - "message": "Error occurred while creating skills: " + str(ex_error), + "message": "Error occurred while testing model: " + str(ex_error), } -@api.delete("/skills/delete") -async def delete_user_skills(req: DBWebRequestModel): - """Delete a skill for a user""" +@api.delete("/models/delete") +async def delete_model(model_id: int, user_id: str): + """Delete a model""" + filters = {"id": model_id, "user_id": user_id} + return delete_entity(Model, filters=filters) - try: - skills = dbutils.delete_skill(req.skill, 
dbmanager=dbmanager) - return { - "status": True, - "message": "Skill deleted successfully", - "data": skills, - } +@api.get("/agents") +async def list_agents(user_id: str): + """List all agents for a user""" + filters = {"user_id": user_id} + return list_entity(Agent, filters=filters) - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while deleting skill: " + str(ex_error), - } +@api.post("/agents") +async def create_agent(agent: Agent): + """Create a new agent""" + return create_entity(agent, Agent) -@api.get("/agents") -async def get_user_agents(user_id: str): - try: - agents = dbutils.get_agents(user_id, dbmanager=dbmanager) - return { - "status": True, - "message": "Agents retrieved successfully", - "data": agents, - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving agents: " + str(ex_error), - } +@api.delete("/agents/delete") +async def delete_agent(agent_id: int, user_id: str): + """Delete an agent""" + filters = {"id": agent_id, "user_id": user_id} + return delete_entity(Agent, filters=filters) -@api.post("/agents") -async def create_user_agents(req: DBWebRequestModel): - """Create a new agent for a user""" +@api.post("/agents/link/model/{agent_id}/{model_id}") +async def link_agent_model(agent_id: int, model_id: int): + """Link a model to an agent""" + return dbmanager.link(link_type="agent_model", primary_id=agent_id, secondary_id=model_id) - try: - agents = dbutils.upsert_agent(agent_flow_spec=req.agent, dbmanager=dbmanager) - return { - "status": True, - "message": "Agent created successfully", - "data": agents, - } +@api.delete("/agents/link/model/{agent_id}/{model_id}") +async def unlink_agent_model(agent_id: int, model_id: int): + """Unlink a model from an agent""" + return dbmanager.unlink(link_type="agent_model", primary_id=agent_id, secondary_id=model_id) - except Exception as ex_error: - 
print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while creating agent: " + str(ex_error), - } +@api.get("/agents/link/model/{agent_id}") +async def get_agent_models(agent_id: int): + """Get all models linked to an agent""" + return dbmanager.get_linked_entities("agent_model", agent_id, return_json=True) -@api.delete("/agents/delete") -async def delete_user_agent(req: DBWebRequestModel): - """Delete an agent for a user""" - try: - agents = dbutils.delete_agent(agent=req.agent, dbmanager=dbmanager) +@api.post("/agents/link/skill/{agent_id}/{skill_id}") +async def link_agent_skill(agent_id: int, skill_id: int): + """Link a skill to an agent""" + return dbmanager.link(link_type="agent_skill", primary_id=agent_id, secondary_id=skill_id) - return { - "status": True, - "message": "Agent deleted successfully", - "data": agents, - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while deleting agent: " + str(ex_error), - } +@api.delete("/agents/link/skill/{agent_id}/{skill_id}") +async def unlink_agent_skill(agent_id: int, skill_id: int): + """Unlink a skill from an agent""" + return dbmanager.unlink(link_type="agent_skill", primary_id=agent_id, secondary_id=skill_id) -@api.get("/models") -async def get_user_models(user_id: str): - try: - models = dbutils.get_models(user_id, dbmanager=dbmanager) +@api.get("/agents/link/skill/{agent_id}") +async def get_agent_skills(agent_id: int): + """Get all skills linked to an agent""" + return dbmanager.get_linked_entities("agent_skill", agent_id, return_json=True) - return { - "status": True, - "message": "Models retrieved successfully", - "data": models, - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving models: " + str(ex_error), - } +@api.post("/agents/link/agent/{primary_agent_id}/{secondary_agent_id}") +async def 
link_agent_agent(primary_agent_id: int, secondary_agent_id: int): + """Link an agent to another agent""" + return dbmanager.link( + link_type="agent_agent", + primary_id=primary_agent_id, + secondary_id=secondary_agent_id, + ) -@api.post("/models") -async def create_user_models(req: DBWebRequestModel): - """Create a new model for a user""" - try: - models = dbutils.upsert_model(model=req.model, dbmanager=dbmanager) +@api.delete("/agents/link/agent/{primary_agent_id}/{secondary_agent_id}") +async def unlink_agent_agent(primary_agent_id: int, secondary_agent_id: int): + """Unlink an agent from another agent""" + return dbmanager.unlink( + link_type="agent_agent", + primary_id=primary_agent_id, + secondary_id=secondary_agent_id, + ) - return { - "status": True, - "message": "Model created successfully", - "data": models, - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while creating model: " + str(ex_error), - } +@api.get("/agents/link/agent/{agent_id}") +async def get_linked_agents(agent_id: int): + """Get all agents linked to an agent""" + return dbmanager.get_linked_entities("agent_agent", agent_id, return_json=True) -@api.post("/models/test") -async def test_user_models(req: DBWebRequestModel): - """Test a model to verify it works""" +@api.get("/workflows") +async def list_workflows(user_id: str): + """List all workflows for a user""" + filters = {"user_id": user_id} + return list_entity(Workflow, filters=filters) - try: - response = test_model(model=req.model) - return { - "status": True, - "message": "Model tested successfully", - "data": response, - } - except OpenAIError as oai_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while testing model: " + str(oai_error), - } - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred while testing model: " + str(ex_error), - } 
+@api.get("/workflows/{workflow_id}") +async def get_workflow(workflow_id: int, user_id: str): + """Get a workflow""" + filters = {"id": workflow_id, "user_id": user_id} + return list_entity(Workflow, filters=filters) -@api.delete("/models/delete") -async def delete_user_model(req: DBWebRequestModel): - """Delete a model for a user""" +@api.post("/workflows") +async def create_workflow(workflow: Workflow): + """Create a new workflow""" + return create_entity(workflow, Workflow) - try: - models = dbutils.delete_model(model=req.model, dbmanager=dbmanager) - return { - "status": True, - "message": "Model deleted successfully", - "data": models, - } +@api.delete("/workflows/delete") +async def delete_workflow(workflow_id: int, user_id: str): + """Delete a workflow""" + filters = {"id": workflow_id, "user_id": user_id} + return delete_entity(Workflow, filters=filters) + + +@api.post("/workflows/link/agent/{workflow_id}/{agent_id}/{agent_type}") +async def link_workflow_agent(workflow_id: int, agent_id: int, agent_type: str): + """Link an agent to a workflow""" + return dbmanager.link( + link_type="workflow_agent", + primary_id=workflow_id, + secondary_id=agent_id, + agent_type=agent_type, + ) + + +@api.delete("/workflows/link/agent/{workflow_id}/{agent_id}/{agent_type}") +async def unlink_workflow_agent(workflow_id: int, agent_id: int, agent_type: str): + """Unlink an agent from a workflow""" + return dbmanager.unlink( + link_type="workflow_agent", + primary_id=workflow_id, + secondary_id=agent_id, + agent_type=agent_type, + ) + + +@api.get("/workflows/link/agent/{workflow_id}/{agent_type}") +async def get_linked_workflow_agents(workflow_id: int, agent_type: str): + """Get all agents linked to a workflow""" + return dbmanager.get_linked_entities( + link_type="workflow_agent", + primary_id=workflow_id, + agent_type=agent_type, + return_json=True, + ) - except Exception as ex_error: - print(traceback.format_exc()) - return { - "status": False, - "message": "Error occurred 
while deleting model: " + str(ex_error), - } +@api.get("/sessions") +async def list_sessions(user_id: str): + """List all sessions for a user""" + filters = {"user_id": user_id} + return list_entity(Session, filters=filters) -@api.get("/workflows") -async def get_user_workflows(user_id: str): - try: - workflows = dbutils.get_workflows(user_id, dbmanager=dbmanager) - return { - "status": True, - "message": "Workflows retrieved successfully", - "data": workflows, - } - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while retrieving workflows: " + str(ex_error), - } +@api.post("/sessions") +async def create_session(session: Session): + """Create a new session""" + return create_entity(session, Session) -@api.post("/workflows") -async def create_user_workflow(req: DBWebRequestModel): - """Create a new workflow for a user""" - try: - workflow = dbutils.upsert_workflow(workflow=req.workflow, dbmanager=dbmanager) - return { - "status": True, - "message": "Workflow created successfully", - "data": workflow, - } +@api.delete("/sessions/delete") +async def delete_session(session_id: int, user_id: str): + """Delete a session""" + filters = {"id": session_id, "user_id": user_id} + return delete_entity(Session, filters=filters) - except Exception as ex_error: - print(ex_error) - return { - "status": False, - "message": "Error occurred while creating workflow: " + str(ex_error), - } +@api.get("/sessions/{session_id}/messages") +async def list_messages(user_id: str, session_id: int): + """List all messages for a use session""" + filters = {"user_id": user_id, "session_id": session_id} + return list_entity(Message, filters=filters, order="asc", return_json=True) -@api.delete("/workflows/delete") -async def delete_user_workflow(req: DBWebRequestModel): - """Delete a workflow for a user""" +@api.post("/sessions/{session_id}/workflow/{workflow_id}/run") +async def run_session_workflow(message: Message, session_id: int, 
workflow_id: int): + """Run a workflow on the provided message""" try: - workflow = dbutils.delete_workflow(workflow=req.workflow, dbmanager=dbmanager) - return { - "status": True, - "message": "Workflow deleted successfully", - "data": workflow, - } + user_message_history = ( + dbmanager.get( + Message, + filters={"user_id": message.user_id, "session_id": message.session_id}, + return_json=True, + ).data + if session_id is not None + else [] + ) + # save incoming message + dbmanager.upsert(message) + user_dir = os.path.join(folders["files_static_root"], "user", md5_hash(message.user_id)) + os.makedirs(user_dir, exist_ok=True) + workflow = workflow_from_id(workflow_id, dbmanager=dbmanager) + agent_response: Message = managers["chat"].chat( + message=message, + history=user_message_history, + user_dir=user_dir, + workflow=workflow, + connection_id=message.connection_id, + ) + response: Response = dbmanager.upsert(agent_response) + return response.model_dump(mode="json") except Exception as ex_error: - print(ex_error) + print(traceback.format_exc()) return { "status": False, - "message": "Error occurred while deleting workflow: " + str(ex_error), + "message": "Error occurred while processing message: " + str(ex_error), } @@ -558,11 +411,16 @@ async def get_version(): } +# websockets + + async def process_socket_message(data: dict, websocket: WebSocket, client_id: str): print(f"Client says: {data['type']}") if data["type"] == "user_message": - user_request_body = DBWebRequestModel(**data["data"]) - response = await add_message(user_request_body) + user_message = Message(**data["data"]) + session_id = data["data"].get("session_id", None) + workflow_id = data["data"].get("workflow_id", None) + response = await run_session_workflow(message=user_message, session_id=session_id, workflow_id=workflow_id) response_socket_message = { "type": "agent_response", "data": response, diff --git a/samples/apps/autogen-studio/autogenstudio/workflowmanager.py 
b/samples/apps/autogen-studio/autogenstudio/workflowmanager.py
index c5475e58d830..8b41caab4285 100644
--- a/samples/apps/autogen-studio/autogenstudio/workflowmanager.py
+++ b/samples/apps/autogen-studio/autogenstudio/workflowmanager.py
@@ -1,23 +1,26 @@
 import os
 from datetime import datetime
-from typing import Dict, List, Optional, Union
-
-from requests import Session
+from typing import Any, Dict, List, Optional, Union

 import autogen

-from .datamodel import AgentConfig, AgentFlowSpec, AgentWorkFlowConfig, Message, SocketMessage
-from .utils import clear_folder, get_skills_from_prompt, sanitize_model
+from .datamodel import (
+    Agent,
+    AgentType,
+    Message,
+    SocketMessage,
+)
+from .utils import clear_folder, get_skills_from_prompt, load_code_execution_config, sanitize_model


-class AutoGenWorkFlowManager:
+class WorkflowManager:
     """
-    AutoGenWorkFlowManager class to load agents from a provided configuration and run a chat between them
+    WorkflowManager class to load agents from a provided configuration and run a chat between them
     """

     def __init__(
         self,
-        config: AgentWorkFlowConfig,
+        workflow: Dict,
         history: Optional[List[Message]] = None,
         work_dir: str = None,
         clear_work_dir: bool = True,
@@ -33,20 +36,57 @@ def __init__(
             history: An optional list of previous messages to populate the agents' history.
""" + # TODO - improved typing for workflow self.send_message_function = send_message_function self.connection_id = connection_id self.work_dir = work_dir or "work_dir" if clear_work_dir: clear_folder(self.work_dir) - self.config = config - # given the config, return an AutoGen agent object - self.sender = self.load(config.sender) - # given the config, return an AutoGen agent object - self.receiver = self.load(config.receiver) + self.workflow = workflow + self.sender = self.load(workflow.get("sender")) + self.receiver = self.load(workflow.get("receiver")) self.agent_history = [] if history: - self.populate_history(history) + self._populate_history(history) + + def _serialize_agent( + self, + agent: Agent, + mode: str = "python", + include: Optional[List[str]] = {"config"}, + exclude: Optional[List[str]] = None, + ) -> Dict: + """ """ + # exclude = ["id","created_at", "updated_at","user_id","type"] + exclude = exclude or {} + include = include or {} + if agent.type != AgentType.groupchat: + exclude.update( + { + "config": { + "admin_name", + "messages", + "max_round", + "admin_name", + "speaker_selection_method", + "allow_repeat_speaker", + } + } + ) + else: + include = { + "config": { + "admin_name", + "messages", + "max_round", + "admin_name", + "speaker_selection_method", + "allow_repeat_speaker", + } + } + result = agent.model_dump(warnings=False, exclude=exclude, include=include, mode=mode) + return result["config"] def process_message( self, @@ -84,25 +124,14 @@ def process_message( if request_reply is not False or sender_type == "groupchat": self.agent_history.append(message_payload) # add to history if self.send_message_function: # send over the message queue - socket_msg = SocketMessage(type="agent_message", data=message_payload, connection_id=self.connection_id) + socket_msg = SocketMessage( + type="agent_message", + data=message_payload, + connection_id=self.connection_id, + ) self.send_message_function(socket_msg.dict()) - def 
_sanitize_history_message(self, message: str) -> str: - """ - Sanitizes the message e.g. remove references to execution completed - - Args: - message: The message to be sanitized. - - Returns: - The sanitized message. - """ - to_replace = ["execution succeeded", "exitcode"] - for replace in to_replace: - message = message.replace(replace, "") - return message - - def populate_history(self, history: List[Message]) -> None: + def _populate_history(self, history: List[Message]) -> None: """ Populates the agent message history from the provided list of messages. @@ -127,19 +156,12 @@ def populate_history(self, history: List[Message]) -> None: silent=True, ) - def sanitize_agent_spec(self, agent_spec: AgentFlowSpec) -> AgentFlowSpec: - """ - Sanitizes the agent spec by setting loading defaults - - Args: - config: The agent configuration to be sanitized. - agent_type: The type of the agent. + def sanitize_agent(self, agent: Dict) -> Agent: + """ """ - Returns: - The sanitized agent configuration. - """ - - agent_spec.config.is_termination_msg = agent_spec.config.is_termination_msg or ( + skills = agent.get("skills", []) + agent = Agent.model_validate(agent) + agent.config.is_termination_msg = agent.config.is_termination_msg or ( lambda x: "TERMINATE" in x.get("content", "").rstrip()[-20:] ) @@ -149,40 +171,33 @@ def get_default_system_message(agent_type: str) -> str: else: return "You are a helpful AI Assistant." - # sanitize llm_config if present - if agent_spec.config.llm_config is not False: + if agent.config.llm_config is not False: config_list = [] - for llm in agent_spec.config.llm_config.config_list: + for llm in agent.config.llm_config.config_list: # check if api_key is present either in llm or env variable if "api_key" not in llm and "OPENAI_API_KEY" not in os.environ: - error_message = f"api_key is not present in llm_config or OPENAI_API_KEY env variable for agent ** {agent_spec.config.name}**. Update your workflow to provide an api_key to use the LLM." 
+ error_message = f"api_key is not present in llm_config or OPENAI_API_KEY env variable for agent ** {agent.config.name}**. Update your workflow to provide an api_key to use the LLM." raise ValueError(error_message) # only add key if value is not None sanitized_llm = sanitize_model(llm) config_list.append(sanitized_llm) - agent_spec.config.llm_config.config_list = config_list - if agent_spec.config.code_execution_config is not False: - code_execution_config = agent_spec.config.code_execution_config or {} - code_execution_config["work_dir"] = self.work_dir - # tbd check if docker is installed - code_execution_config["use_docker"] = False - agent_spec.config.code_execution_config = code_execution_config - - if agent_spec.skills: - # get skill prompt, also write skills to a file named skills.py - skills_prompt = "" - skills_prompt = get_skills_from_prompt(agent_spec.skills, self.work_dir) - if agent_spec.config.system_message: - agent_spec.config.system_message = agent_spec.config.system_message + "\n\n" + skills_prompt - else: - agent_spec.config.system_message = ( - get_default_system_message(agent_spec.type) + "\n\n" + skills_prompt - ) - - return agent_spec - - def load(self, agent_spec: AgentFlowSpec) -> autogen.Agent: + agent.config.llm_config.config_list = config_list + + agent.config.code_execution_config = load_code_execution_config( + agent.config.code_execution_config, work_dir=self.work_dir + ) + + if skills: + skills_prompt = "" + skills_prompt = get_skills_from_prompt(skills, self.work_dir) + if agent.config.system_message: + agent.config.system_message = agent.config.system_message + "\n\n" + skills_prompt + else: + agent.config.system_message = get_default_system_message(agent.type) + "\n\n" + skills_prompt + return agent + + def load(self, agent: Any) -> autogen.Agent: """ Loads an agent based on the provided agent specification. @@ -192,43 +207,40 @@ def load(self, agent_spec: AgentFlowSpec) -> autogen.Agent: Returns: An instance of the loaded agent. 
""" - agent_spec = self.sanitize_agent_spec(agent_spec) - if agent_spec.type == "groupchat": - agents = [ - self.load(self.sanitize_agent_spec(agent_config)) for agent_config in agent_spec.groupchat_config.agents - ] - group_chat_config = agent_spec.groupchat_config.dict() - group_chat_config["agents"] = agents + if not agent: + raise ValueError( + "An agent configuration in this workflow is empty. Please provide a valid agent configuration." + ) + + linked_agents = agent.get("agents", []) + agent = self.sanitize_agent(agent) + if agent.type == "groupchat": + groupchat_agents = [self.load(agent) for agent in linked_agents] + group_chat_config = self._serialize_agent(agent) + group_chat_config["agents"] = groupchat_agents groupchat = autogen.GroupChat(**group_chat_config) agent = ExtendedGroupChatManager( - groupchat=groupchat, **agent_spec.config.dict(), message_processor=self.process_message + groupchat=groupchat, + message_processor=self.process_message, + llm_config=agent.config.llm_config.model_dump(), ) return agent else: - agent = self.load_agent_config(agent_spec.config, agent_spec.type) + if agent.type == "assistant": + agent = ExtendedConversableAgent( + **self._serialize_agent(agent), + message_processor=self.process_message, + ) + elif agent.type == "userproxy": + agent = ExtendedConversableAgent( + **self._serialize_agent(agent), + message_processor=self.process_message, + ) + else: + raise ValueError(f"Unknown agent type: {agent.type}") return agent - def load_agent_config(self, agent_config: AgentConfig, agent_type: str) -> autogen.Agent: - """ - Loads an agent based on the provided agent configuration. - - Args: - agent_config: The configuration of the agent to be loaded. - agent_type: The type of the agent to be loaded. - - Returns: - An instance of the loaded agent. 
- """ - if agent_type == "assistant": - agent = ExtendedConversableAgent(**agent_config.dict(), message_processor=self.process_message) - elif agent_type == "userproxy": - agent = ExtendedConversableAgent(**agent_config.dict(), message_processor=self.process_message) - else: - raise ValueError(f"Unknown agent type: {agent_type}") - - return agent - def run(self, message: str, clear_history: bool = False) -> None: """ Initiates a chat between the sender and receiver agents with an initial message @@ -262,6 +274,9 @@ def receive( super().receive(message, sender, request_reply, silent) +"" + + class ExtendedGroupChatManager(autogen.GroupChatManager): def __init__(self, message_processor=None, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/samples/apps/autogen-studio/frontend/src/components/atoms.tsx b/samples/apps/autogen-studio/frontend/src/components/atoms.tsx index 8bc70f89a90b..c4c1368a1232 100644 --- a/samples/apps/autogen-studio/frontend/src/components/atoms.tsx +++ b/samples/apps/autogen-studio/frontend/src/components/atoms.tsx @@ -4,53 +4,18 @@ import { Cog8ToothIcon, XMarkIcon, ClipboardIcon, - PlusIcon, - UserGroupIcon, - UsersIcon, - ExclamationTriangleIcon, InformationCircleIcon, } from "@heroicons/react/24/outline"; import React, { ReactNode, useEffect, useRef, useState } from "react"; import Icon from "./icons"; -import { - Button, - Divider, - Dropdown, - Input, - MenuProps, - Modal, - Select, - Slider, - Table, - Space, - Tooltip, - message, - theme, -} from "antd"; +import { Modal, Table, Tooltip, theme } from "antd"; import Editor from "@monaco-editor/react"; import Papa from "papaparse"; import remarkGfm from "remark-gfm"; import ReactMarkdown from "react-markdown"; import { atomDark } from "react-syntax-highlighter/dist/esm/styles/prism"; import { Prism as SyntaxHighlighter } from "react-syntax-highlighter"; -import { - checkAndSanitizeInput, - fetchJSON, - getServerUrl, - obscureString, - truncateText, -} from "./utils"; -import 
{ - IAgentFlowSpec, - IFlowConfig, - IGroupChatFlowSpec, - ILLMConfig, - IModelConfig, - ISkill, - IStatus, -} from "./types"; -import TextArea from "antd/es/input/TextArea"; -import { appContext } from "../hooks/provider"; +import { truncateText } from "./utils"; const { useToken } = theme; interface CodeProps { @@ -162,12 +127,13 @@ export const Card = ({ border = hoverable ? border : "border-secondary"; return ( -
-
+
{title && (
{title} @@ -176,7 +142,7 @@ export const Card = ({
{subtitle}
{children}
-
+ ); }; @@ -303,7 +269,7 @@ export const MessageBox = ({ title, children, className }: IProps) => { export const GroupView = ({ children, title, - className = " bg-primary ", + className = "text-primary bg-primary ", }: any) => { return (
@@ -590,19 +556,21 @@ export const ControlRowView = ({ value, control, className, + truncateLength = 20, }: { title: string; description: string; - value: string | number; + value: string | number | boolean; control: any; className?: string; + truncateLength?: number; }) => { return (
{title} - {truncateText(value + "", 20)} + {truncateText(value + "", truncateLength)} {" "} @@ -614,291 +582,6 @@ export const ControlRowView = ({ ); }; -export const ModelSelector = ({ - configs, - setConfigs, - className, -}: { - configs: IModelConfig[]; - setConfigs: (configs: IModelConfig[]) => void; - className?: string; -}) => { - // const [configs, setConfigs] = useState(modelConfigs); - const [isModalVisible, setIsModalVisible] = useState(false); - const [newModelConfig, setNewModelConfig] = useState( - null - ); - const [editIndex, setEditIndex] = useState(null); - const [loading, setLoading] = useState(false); - const [error, setError] = useState(null); - - const [models, setModels] = useState([]); - const serverUrl = getServerUrl(); - - const { user } = React.useContext(appContext); - const listModelsUrl = `${serverUrl}/models?user_id=${user?.email}`; - - // const sanitizeModelConfig = (config: IModelConfig) => { - // const sanitizedConfig: IModelConfig = { model: config.model }; - // if (config.api_key) sanitizedConfig.api_key = config.api_key; - // if (config.base_url) sanitizedConfig.base_url = config.base_url; - // if (config.api_type) sanitizedConfig.api_type = config.api_type; - // if (config.api_version) sanitizedConfig.api_version = config.api_version; - // return sanitizedConfig; - // }; - - const handleRemoveConfig = (index: number) => { - const updatedConfigs = configs.filter((_, i) => i !== index); - - setConfigs(updatedConfigs); - }; - - const showModal = (config: IModelConfig | null, index: number | null) => { - setNewModelConfig(config); - setEditIndex(index); - setIsModalVisible(true); - }; - - const fetchModels = () => { - setError(null); - setLoading(true); - // const fetch; - const payLoad = { - method: "GET", - headers: { - "Content-Type": "application/json", - }, - }; - - const onSuccess = (data: any) => { - if (data && data.status) { - // message.success(data.message); - setModels(data.data); - } else { - 
message.error(data.message); - } - setLoading(false); - }; - const onError = (err: any) => { - setError(err); - message.error(err.message); - setLoading(false); - }; - fetchJSON(listModelsUrl, payLoad, onSuccess, onError); - }; - - useEffect(() => { - fetchModels(); - }, []); - - const modelItems: MenuProps["items"] = - models.length > 0 - ? models.map((model: IModelConfig, index: number) => ({ - key: index, - label: ( - <> -
{model.model}
-
- {truncateText(model.description || "", 20)} -
- - ), - value: index, - })) - : [ - { - key: -1, - label: <>No models found, - value: 0, - }, - ]; - - const modelOnClick: MenuProps["onClick"] = ({ key }) => { - const selectedIndex = parseInt(key.toString()); - let selectedModel = models[selectedIndex]; - const updatedConfigs = [...configs, selectedModel]; - setConfigs(updatedConfigs); - }; - - const menuStyle: React.CSSProperties = { - boxShadow: "none", - }; - - const { token } = useToken(); - const contentStyle: React.CSSProperties = { - backgroundColor: token.colorBgElevated, - borderRadius: token.borderRadiusLG, - boxShadow: token.boxShadowSecondary, - }; - - const addModelsMessage = ( - - {" "} - Please - create models in the Model tab - - ); - - const AddModelsDropDown = () => { - return ( - ( -
- {React.cloneElement(menu as React.ReactElement, { - style: menuStyle, - })} - {models.length === 0 && ( - <> - - -
{addModelsMessage}
- - )} -
- )} - > -
- add -
-
- ); - }; - - const handleOk = () => { - if (newModelConfig?.model.trim()) { - const sanitizedConfig = newModelConfig; - - if (editIndex !== null) { - // Edit existing model - const updatedConfigs = [...configs]; - updatedConfigs[editIndex] = sanitizedConfig; - setConfigs(updatedConfigs); - } else { - // Add new model - setConfigs([...configs, sanitizedConfig]); - } - setIsModalVisible(false); - setNewModelConfig(null); - setEditIndex(null); - } else { - // Handle case where 'model' field is empty - // Could provide user feedback here (e.g., input validation error) - message.error("Model name cannot be empty"); - } - }; - - const handleCancel = () => { - setIsModalVisible(false); - setNewModelConfig(null); - setEditIndex(null); - }; - - const updateNewModelConfig = (field: keyof IModelConfig, value: string) => { - setNewModelConfig((prevState) => - prevState ? { ...prevState, [field]: value } : null - ); - }; - - const modelButtons = configs.map((config, i) => { - const tooltipText = ( - <> -
{config.model}
- {config.base_url &&
{config.base_url}
} - {config.api_key &&
{obscureString(config.api_key, 3)}
} -
- {truncateText(config.description || "", 90)} -
- - ); - return ( -
showModal(config, i)} - > -
- {" "} - -
{config.model}
{" "} -
-
{ - e.stopPropagation(); // Prevent opening the modal to edit - handleRemoveConfig(i); - }} - className="ml-1 text-primary hover:text-accent duration-300" - > - -
-
-
- ); - }); - - return ( -
-
- {modelButtons} - -
- - Cancel - , - , - ]} - > -
Enter parameters for your model.
- updateNewModelConfig("model", e.target.value)} - /> - updateNewModelConfig("api_key", e.target.value)} - /> - updateNewModelConfig("base_url", e.target.value)} - /> - updateNewModelConfig("api_type", e.target.value)} - /> - - updateNewModelConfig("api_version", e.target.value)} - /> -
-
- ); -}; - export const BounceLoader = ({ className, title = "", @@ -937,7 +620,7 @@ export const ImageLoader = ({ Dynamic content setIsLoading(false)} @@ -1077,946 +760,6 @@ export const PdfViewer = ({ url }: { url: string }) => { ); }; -export const AgentFlowSpecView = ({ - title = "Agent Specification", - flowSpec, - setFlowSpec, -}: { - title: string; - flowSpec: IAgentFlowSpec; - setFlowSpec: (newFlowSpec: IAgentFlowSpec) => void; - editMode?: boolean; -}) => { - // Local state for the FlowView component - const [localFlowSpec, setLocalFlowSpec] = - React.useState(flowSpec); - - // Required to monitor localAgent updates that occur in GroupChatFlowSpecView and reflect updates. - useEffect(() => { - setLocalFlowSpec(flowSpec); - }, [flowSpec]); - - // Event handlers for updating local state and propagating changes - - const onControlChange = (value: any, key: string) => { - if (key === "llm_config") { - if (value.config_list.length === 0) { - value = false; - } - } - const updatedFlowSpec = { - ...localFlowSpec, - config: { ...localFlowSpec.config, [key]: value }, - }; - - setLocalFlowSpec(updatedFlowSpec); - setFlowSpec(updatedFlowSpec); - }; - - const llm_config: ILLMConfig = localFlowSpec?.config?.llm_config || { - config_list: [], - temperature: 0.1, - }; - - const nameValidation = checkAndSanitizeInput(flowSpec?.config?.name); - - return ( - <> -
{title}
- {flowSpec?.config?.name}
- className="mb-4 bg-primary " - > - - { - onControlChange(e.target.value, "name"); - }} - /> - {!nameValidation.status && ( -
- {nameValidation.message} -
- )} - - } - /> - - { - onControlChange(e.target.value, "description"); - }} - /> - } - /> - - { - onControlChange(value, "max_consecutive_auto_reply"); - }} - /> - } - /> - - { - onControlChange(e.target.value, "default_auto_reply"); - }} - /> - } - /> - - { - onControlChange(value, "human_input_mode"); - }} - options={ - [ - { label: "NEVER", value: "NEVER" }, - // { label: "TERMINATE", value: "TERMINATE" }, - // { label: "ALWAYS", value: "ALWAYS" }, - ] as any - } - /> - } - /> - - {llm_config && llm_config.config_list.length > 0 && ( - { - onControlChange(e.target.value, "system_message"); - }} - /> - } - /> - )} - - {llm_config && ( - { - const llm_config = { - ...(flowSpec.config.llm_config || { temperature: 0.1 }), - config_list, - }; - onControlChange(llm_config, "llm_config"); - }} - /> - } - /> - )} - - {llm_config && llm_config.config_list.length > 0 && ( - { - const llm_config = { - ...flowSpec.config.llm_config, - temperature: value, - }; - onControlChange(llm_config, "llm_config"); - }} - /> - } - /> - )} - - { - { - const updatedFlowSpec = { - ...localFlowSpec, - skills, - }; - setLocalFlowSpec(updatedFlowSpec); - setFlowSpec(updatedFlowSpec); - }} - /> - } - /> - } - - - ); -}; - -interface SkillSelectorProps { - skills: ISkill[]; - setSkills: (skills: ISkill[]) => void; - className?: string; -} - -export const SkillSelector: React.FC = ({ - skills, - setSkills, - className, -}) => { - const [isModalVisible, setIsModalVisible] = useState(false); - const [showSkillModal, setShowSkillModal] = React.useState(false); - const [newSkill, setNewSkill] = useState(null); - - const [localSkills, setLocalSkills] = useState(skills); - const [selectedSkill, setSelectedSkill] = useState(null); - - const handleRemoveSkill = (index: number) => { - const updatedSkills = localSkills.filter((_, i) => i !== index); - setLocalSkills(updatedSkills); - setSkills(updatedSkills); - }; - - const handleAddSkill = () => { - if (newSkill) { - const updatedSkills = 
[...localSkills, newSkill]; - setLocalSkills(updatedSkills); - setSkills(updatedSkills); - setNewSkill(null); - } - }; - - useEffect(() => { - if (selectedSkill) { - setShowSkillModal(true); - } - }, [selectedSkill]); - - return ( - <> - { - setShowSkillModal(false); - setSelectedSkill(null); - }} - onCancel={() => { - setShowSkillModal(false); - setSelectedSkill(null); - }} - > - {selectedSkill && ( -
-
{selectedSkill.file_name}
- -
- )} -
- -
- {localSkills.map((skill, index) => ( -
- { - setSelectedSkill(skill); - }} - className=" inline-block " - > - {skill.title} - - handleRemoveSkill(index)} - className="ml-1 text-primary hover:text-accent duration-300 w-4 h-4 inline-block" - /> -
- ))} - -
{ - setIsModalVisible(true); - }} - > - add -
-
- - setIsModalVisible(false)} - footer={[ - , - , - ]} - > - - - - ); -}; - -export const SkillLoader = ({ - skill, - setSkill, -}: { - skill: ISkill | null; - setSkill: (skill: ISkill | null) => void; -}) => { - const [skills, setSkills] = useState([]); - const [loading, setLoading] = useState(false); - const [error, setError] = React.useState({ - status: true, - message: "All good", - }); - const serverUrl = getServerUrl(); - const { user } = React.useContext(appContext); - const listSkillsUrl = `${serverUrl}/skills?user_id=${user?.email}`; - - const fetchSkills = () => { - setError(null); - setLoading(true); - // const fetch; - const payLoad = { - method: "GET", - headers: { - "Content-Type": "application/json", - }, - }; - - const onSuccess = (data: any) => { - if (data && data.status) { - message.success(data.message); - setSkills(data.data); - if (data.data.length > 0) { - setSkill(data.data[0]); - } - } else { - message.error(data.message); - } - setLoading(false); - }; - const onError = (err: any) => { - setError(err); - message.error(err.message); - setLoading(false); - }; - fetchJSON(listSkillsUrl, payLoad, onSuccess, onError); - }; - - useEffect(() => { - fetchSkills(); - }, []); - - const skillOptions = skills.map((skill: ISkill, index: number) => ({ - label: skill.title, - value: index, - })); - return ( -
- - - {skills && ( - <> - ({ - label: spec.config.name, - value: index, - }))} - /> -
- )} - {/* {JSON.stringify(localAgent)} */} - - ); -}; - -export const AgentSelector = ({ - flowSpec, - setFlowSpec, -}: { - flowSpec: IAgentFlowSpec | null; - setFlowSpec: (agent: IAgentFlowSpec | null) => void; -}) => { - const [isModalVisible, setIsModalVisible] = useState(false); - - return ( -
-
setIsModalVisible(true)} - className="hover:bg-secondary h-full duration-300 border border-dashed rounded p-2" - > - {flowSpec && ( -
- {flowSpec.type === "groupchat" ? ( - - ) : ( - - )} - {flowSpec.config.name} -
- {" "} - {flowSpec.config.description || flowSpec.config.name} -
-
- {" "} - - {(flowSpec.skills && flowSpec.skills?.length) || 0} skills - - - | max replies: {flowSpec.config.max_consecutive_auto_reply} - -
-
- )} -
- { - <> - { - setFlowSpec(agent); - }} - /> - - } -
- ); -}; -export const FlowConfigViewer = ({ - flowConfig, - setFlowConfig, -}: { - flowConfig: IFlowConfig; - setFlowConfig: (newFlowConfig: IFlowConfig) => void; -}) => { - // Local state for sender and receiver FlowSpecs - const [senderFlowSpec, setSenderFlowSpec] = - React.useState(flowConfig.sender); - - const [localFlowConfig, setLocalFlowConfig] = - React.useState(flowConfig); - - const [receiverFlowSpec, setReceiverFlowSpec] = - React.useState(flowConfig.receiver); - - // Update the local state and propagate changes to the parent component - const updateSenderFlowSpec = (newFlowSpec: IAgentFlowSpec | null) => { - setSenderFlowSpec(newFlowSpec); - if (newFlowSpec) { - setFlowConfig({ ...flowConfig, sender: newFlowSpec }); - } - }; - - const updateReceiverFlowSpec = (newFlowSpec: IAgentFlowSpec | null) => { - setReceiverFlowSpec(newFlowSpec); - if (newFlowSpec) { - setFlowConfig({ ...flowConfig, receiver: newFlowSpec }); - } - }; - - const updateFlowConfig = (key: string, value: string) => { - // When an updatedFlowConfig is created using localFlowConfig, if the contents of FlowConfigViewer Modal are changed after the Agent Specification Modal is updated, the updated contents of the Agent Specification Modal are not saved. Fixed to localFlowConfig->flowConfig. Fixed a bug. - const updatedFlowConfig = { ...flowConfig, [key]: value }; - console.log("updatedFlowConfig: ", updatedFlowConfig); - setLocalFlowConfig(updatedFlowConfig); - setFlowConfig(updatedFlowConfig); - }; - - return ( - <> - {/*
{flowConfig.name}
*/} - updateFlowConfig("name", e.target.value)} - /> - } - /> - - updateFlowConfig("description", e.target.value)} - /> - } - /> - - updateFlowConfig("summary_method", value)} - options={ - [ - { label: "last", value: "last" }, - { label: "none", value: "none" }, - { label: "llm", value: "llm" }, - ] as any - } - /> - } - /> -
-
-
Sender
- -
-
-
Receiver
- -
-
- - ); -}; - export const MonacoEditor = ({ value, editorRef, diff --git a/samples/apps/autogen-studio/frontend/src/components/header.tsx b/samples/apps/autogen-studio/frontend/src/components/header.tsx index 8ec853269233..d0adf2e0a3ab 100644 --- a/samples/apps/autogen-studio/frontend/src/components/header.tsx +++ b/samples/apps/autogen-studio/frontend/src/components/header.tsx @@ -25,7 +25,7 @@ const Header = ({ meta, link }: any) => { const links: any[] = [ { name: "Build", href: "/build" }, { name: "Playground", href: "/" }, - { name: "Gallery", href: "/gallery" }, + // { name: "Gallery", href: "/gallery" }, // { name: "Data Explorer", href: "/explorer" }, ]; diff --git a/samples/apps/autogen-studio/frontend/src/components/types.ts b/samples/apps/autogen-studio/frontend/src/components/types.ts index 522682a4884e..eba391446028 100644 --- a/samples/apps/autogen-studio/frontend/src/components/types.ts +++ b/samples/apps/autogen-studio/frontend/src/components/types.ts @@ -2,14 +2,13 @@ export type NotificationType = "success" | "info" | "warning" | "error"; export interface IMessage { user_id: string; - root_msg_id: string; - msg_id?: string; role: string; content: string; - timestamp?: string; - personalize?: boolean; - ra?: string; - session_id?: string; + created_at?: string; + updated_at?: string; + session_id?: number; + connection_id?: string; + workflow_id?: number; } export interface IStatus { @@ -21,7 +20,7 @@ export interface IStatus { export interface IChatMessage { text: string; sender: "user" | "bot"; - metadata?: any; + meta?: any; msg_id: string; } @@ -30,6 +29,7 @@ export interface ILLMConfig { timeout?: number; cache_seed?: number | null; temperature: number; + max_tokens: number; } export interface IAgentConfig { @@ -40,47 +40,36 @@ export interface IAgentConfig { system_message: string | ""; is_termination_msg?: boolean | string; default_auto_reply?: string | null; - code_execution_config?: boolean | string | { [key: string]: any } | null; + 
code_execution_config?: "none" | "local" | "docker"; description?: string; -} -export interface IAgentFlowSpec { - type: "assistant" | "userproxy" | "groupchat"; - config: IAgentConfig; - timestamp?: string; - id?: string; - skills?: Array; - user_id?: string; + admin_name?: string; + messages?: Array; + max_round?: number; + speaker_selection_method?: string; + allow_repeat_speaker?: boolean; } -export interface IGroupChatConfig { - agents: Array; - admin_name: string; - messages: Array; - max_round: number; - speaker_selection_method: "auto" | "round_robin" | "random"; - allow_repeat_speaker: boolean | Array; -} - -export interface IGroupChatFlowSpec { - type: "groupchat"; +export interface IAgent { + type?: "assistant" | "userproxy" | "groupchat"; config: IAgentConfig; - groupchat_config: IGroupChatConfig; - id?: string; - timestamp?: string; + created_at?: string; + updated_at?: string; + id?: number; + skills?: Array; user_id?: string; - description?: string; } -export interface IFlowConfig { +export interface IWorkflow { name: string; description: string; - sender: IAgentFlowSpec; - receiver: IAgentFlowSpec | IGroupChatFlowSpec; + sender: IAgent; + receiver: IAgent; type: "twoagents" | "groupchat"; - timestamp?: string; + created_at?: string; + updated_at?: string; summary_method?: "none" | "last" | "llm"; - id?: string; + id?: number; user_id?: string; } @@ -89,11 +78,12 @@ export interface IModelConfig { api_key?: string; api_version?: string; base_url?: string; - api_type?: string; + api_type?: "open_ai" | "azure" | "google"; user_id?: string; - timestamp?: string; + created_at?: string; + updated_at?: string; description?: string; - id?: string; + id?: number; } export interface IMetadataFile { @@ -105,27 +95,29 @@ export interface IMetadataFile { } export interface IChatSession { - id: string; + id?: number; user_id: string; - timestamp: string; - flow_config: IFlowConfig; + workflow_id?: number; + created_at?: string; + updated_at?: string; name: 
string; } export interface IGalleryItem { - id: string; + id: number; messages: Array; session: IChatSession; tags: Array; - timestamp: string; + created_at: string; + updated_at: string; } export interface ISkill { - title: string; - file_name?: string; + name: string; content: string; - id?: string; - timestamp?: string; + id?: number; description?: string; user_id?: string; + created_at?: string; + updated_at?: string; } diff --git a/samples/apps/autogen-studio/frontend/src/components/utils.ts b/samples/apps/autogen-studio/frontend/src/components/utils.ts index 73b9f42207c2..2264f5c66a21 100644 --- a/samples/apps/autogen-studio/frontend/src/components/utils.ts +++ b/samples/apps/autogen-studio/frontend/src/components/utils.ts @@ -1,12 +1,11 @@ import { + IAgent, IAgentConfig, - IAgentFlowSpec, - IFlowConfig, - IGroupChatFlowSpec, ILLMConfig, IModelConfig, ISkill, IStatus, + IWorkflow, } from "./types"; export const getServerUrl = () => { @@ -66,7 +65,8 @@ export function fetchJSON( url: string | URL, payload: any = {}, onSuccess: (data: any) => void, - onError: (error: IStatus) => void + onError: (error: IStatus) => void, + onFinal: () => void = () => {} ) { return fetch(url, payload) .then(function (response) { @@ -95,6 +95,9 @@ export function fetchJSON( status: false, message: `There was an error connecting to server. (${err}) `, }); + }) + .finally(() => { + onFinal(); }); } export const capitalize = (s: string) => { @@ -243,60 +246,138 @@ export const formatDuration = (seconds: number) => { return parts.length > 0 ? 
parts.join(" ") : "0 sec"; }; -export const sampleAgentConfig = (user_id: string = "guestuser@gmail.com") => { - const sampleAgent: IAgentFlowSpec = { +export const sampleModelConfig = (modelType: string = "open_ai") => { + const openaiConfig: IModelConfig = { + model: "gpt-4-1106-preview", + api_type: "open_ai", + description: "OpenAI GPT-4 model", + }; + const azureConfig: IModelConfig = { + model: "gpt-4", + api_type: "azure", + api_version: "v1", + base_url: "https://youazureendpoint.azure.com/", + description: "Azure model", + }; + + const googleConfig: IModelConfig = { + model: "gemini-1.0-pro", + api_type: "google", + description: "Google Gemini Model model", + }; + + switch (modelType) { + case "open_ai": + return openaiConfig; + case "azure": + return azureConfig; + case "google": + return googleConfig; + default: + return openaiConfig; + } +}; + +export const getRandomIntFromDateAndSalt = (salt: number = 43444) => { + const currentDate = new Date(); + const seed = currentDate.getTime() + salt; + const randomValue = Math.sin(seed) * 10000; + const randomInt = Math.floor(randomValue) % 100; + return randomInt; +}; + +export const sampleAgentConfig = (agent_type: string = "assistant") => { + const llm_config: ILLMConfig = { + config_list: [], + temperature: 0.1, + timeout: 600, + cache_seed: null, + max_tokens: 1000, + }; + + const userProxyConfig: IAgentConfig = { + name: "userproxy", + human_input_mode: "NEVER", + description: "User Proxy", + max_consecutive_auto_reply: 25, + system_message: "You are a helpful assistant.", + default_auto_reply: "TERMINATE", + llm_config: false, + code_execution_config: "local", + }; + const userProxyFlowSpec: IAgent = { + type: "userproxy", + config: userProxyConfig, + }; + + const assistantConfig: IAgentConfig = { + name: "primary_assistant", + description: "Primary Assistant", + llm_config: llm_config, + human_input_mode: "NEVER", + max_consecutive_auto_reply: 25, + code_execution_config: "none", + system_message: + "You 
are a helpful AI assistant. Solve tasks using your coding and language skills. In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute. 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself. 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly. Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill. When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user. If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try. When you find an answer, verify the answer carefully. 
Include verifiable evidence in your response if possible. Reply 'TERMINATE' in the end when everything is done.", + }; + + const assistantFlowSpec: IAgent = { type: "assistant", - user_id: user_id, - config: { - name: "sample_assistant", - description: "Sample assistant", - llm_config: { - config_list: [ - { - model: "gpt-4-1106-preview", - }, - ], - temperature: 0.1, - timeout: 600, - cache_seed: null, - }, - human_input_mode: "NEVER", - code_execution_config: false, - max_consecutive_auto_reply: 8, - system_message: - "You are a helpful AI assistant. Solve tasks using your coding and language skills. In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute. 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself. 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly. Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill. When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. 
Instead, use 'print' function for the output when relevant. Check the execution result returned by the user. If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try. When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible. Reply 'TERMINATE' in the end when everything is done.", + config: assistantConfig, + }; + + const groupChatAssistantConfig = Object.assign( + { + admin_name: "groupchat_assistant", + messages: [], + max_round: 10, + speaker_selection_method: "auto", + allow_repeat_speaker: false, }, + assistantConfig + ); + groupChatAssistantConfig.name = "groupchat_assistant"; + groupChatAssistantConfig.system_message = + "You are a helpful assistant skilled at coordinating a group of other assistants to solve a task.
"; + groupChatAssistantConfig.description = "Group Chat Assistant"; + + const groupChatFlowSpec: IAgent = { + type: "groupchat", + config: groupChatAssistantConfig, }; - return sampleAgent; + + if (agent_type === "userproxy") { + return userProxyFlowSpec; + } else if (agent_type === "assistant") { + return assistantFlowSpec; + } else if (agent_type === "groupchat") { + return groupChatFlowSpec; + } else { + return assistantFlowSpec; + } }; export const sampleWorkflowConfig = (type = "twoagents") => { - const llm_model_config: IModelConfig[] = [ - { - model: "gpt-4-1106-preview", - }, - ]; + const llm_model_config: IModelConfig[] = []; const llm_config: ILLMConfig = { config_list: llm_model_config, temperature: 0.1, timeout: 600, cache_seed: null, + max_tokens: 1000, }; const userProxyConfig: IAgentConfig = { name: "userproxy", human_input_mode: "NEVER", - max_consecutive_auto_reply: 5, + max_consecutive_auto_reply: 15, system_message: "You are a helpful assistant.", default_auto_reply: "TERMINATE", llm_config: false, - code_execution_config: { - work_dir: null, - use_docker: false, - }, + code_execution_config: "local", }; - const userProxyFlowSpec: IAgentFlowSpec = { + const userProxyFlowSpec: IAgent = { type: "userproxy", config: userProxyConfig, }; @@ -306,17 +387,17 @@ export const sampleWorkflowConfig = (type = "twoagents") => { llm_config: llm_config, human_input_mode: "NEVER", max_consecutive_auto_reply: 8, - code_execution_config: false, + code_execution_config: "none", system_message: "You are a helpful AI assistant. Solve tasks using your coding and language skills. In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute. 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. 
After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself. 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly. Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill. When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user. If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try. When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible. 
Reply 'TERMINATE' in the end when everything is done.", }; - const assistantFlowSpec: IAgentFlowSpec = { + const assistantFlowSpec: IAgent = { type: "assistant", config: assistantConfig, }; - const workFlowConfig: IFlowConfig = { + const workFlowConfig: IWorkflow = { name: "Default Agent Workflow", description: "Default Agent Workflow", sender: userProxyFlowSpec, @@ -324,26 +405,27 @@ export const sampleWorkflowConfig = (type = "twoagents") => { type: "twoagents", }; - const groupChatAssistantConfig = Object.assign({}, assistantConfig); - groupChatAssistantConfig.name = "groupchat_assistant"; - groupChatAssistantConfig.system_message = - "You are a helpful assistant skilled at cordinating a group of other assistants to solve a task. "; - - const groupChatFlowSpec: IGroupChatFlowSpec = { - type: "groupchat", - config: groupChatAssistantConfig, - groupchat_config: { - agents: [assistantFlowSpec, assistantFlowSpec], + const groupChatAssistantConfig = Object.assign( + { admin_name: "groupchat_assistant", messages: [], max_round: 10, speaker_selection_method: "auto", allow_repeat_speaker: false, + description: "Group Chat Assistant", }, - description: "Default Group Workflow", + assistantConfig + ); + groupChatAssistantConfig.name = "groupchat_assistant"; + groupChatAssistantConfig.system_message = + "You are a helpful assistant skilled at coordinating a group of other assistants to solve a task.
"; + + const groupChatFlowSpec: IAgent = { + type: "groupchat", + config: groupChatAssistantConfig, }; - const groupChatWorkFlowConfig: IFlowConfig = { + const groupChatWorkFlowConfig: IWorkflow = { name: "Default Group Workflow", description: "Default Group Workflow", sender: userProxyFlowSpec, @@ -359,79 +441,72 @@ export const sampleWorkflowConfig = (type = "twoagents") => { return workFlowConfig; }; -export const getModels = () => { - const models = [ - { - model: "gpt-4-1106-preview", - }, - { - model: "gpt-3.5-turbo-16k", - }, - { - model: "TheBloke/zephyr-7B-alpha-AWQ", - base_url: "http://localhost:8000/v1", - }, - ]; - return models; -}; - export const getSampleSkill = () => { const content = ` - ## This is a sample skill. Replace with your own skill function - ## In general, a good skill must have 3 sections: - ## 1. Imports (import libraries needed for your skill) - ## 2. Function definition AND docstrings (this helps the LLM understand what the function does and how to use it) - ## 3. Function body (the actual code that implements the function) - - import numpy as np - import matplotlib.pyplot as plt - from matplotlib import font_manager as fm - - def save_cat_ascii_art_to_png(filename='ascii_cat.png'): - """ - Creates ASCII art of a cat and saves it to a PNG file. - - :param filename: str, the name of the PNG file to save the ASCII art. 
- """ - # ASCII art string - cat_art = [ - " /\_/\ ", - " ( o.o ) ", - " > ^ < " - ] - - # Determine shape of output array - height = len(cat_art) - width = max(len(line) for line in cat_art) - - # Create a figure and axis to display ASCII art - fig, ax = plt.subplots(figsize=(width, height)) - ax.axis('off') # Hide axes - - # Get a monospace font - prop = fm.FontProperties(family='monospace') - - # Display ASCII art using text - for y, line in enumerate(cat_art): - ax.text(0, height-y-1, line, fontproperties=prop, fontsize=12) - - # Adjust layout - plt.tight_layout() - - # Save figure to file - plt.savefig(filename, dpi=120, bbox_inches='tight', pad_inches=0.1) - plt.close(fig)`; +from typing import List +import uuid +import requests # to perform HTTP requests +from pathlib import Path + +from openai import OpenAI + + +def generate_and_save_images(query: str, image_size: str = "1024x1024") -> List[str]: + """ + Function to paint, draw or illustrate images based on the users query or request. Generates images from a given query using OpenAI's DALL-E model and saves them to disk. Use the code below anytime there is a request to create an image. + + :param query: A natural language description of the image to be generated. + :param image_size: The size of the image to be generated. (default is "1024x1024") + :return: A list of filenames for the saved images. 
+ """ + + client = OpenAI() # Initialize the OpenAI client + response = client.images.generate(model="dall-e-3", prompt=query, n=1, size=image_size) # Generate images + + # List to store the file names of saved images + saved_files = [] + + # Check if the response is successful + if response.data: + for image_data in response.data: + # Generate a random UUID as the file name + file_name = str(uuid.uuid4()) + ".png" # Assuming the image is a PNG + file_path = Path(file_name) + + img_url = image_data.url + img_response = requests.get(img_url) + if img_response.status_code == 200: + # Write the binary content to a file + with open(file_path, "wb") as img_file: + img_file.write(img_response.content) + print(f"Image saved to {file_path}") + saved_files.append(str(file_path)) + else: + print(f"Failed to download the image from {img_url}") + else: + print("No image data found in the response!") + + # Return the list of saved files + return saved_files + + +# Example usage of the function: +# generate_and_save_images("A cute baby sea otter") + `; const skill: ISkill = { - title: "save_cat_ascii_art_to_png", - description: "save cat ascii art to png", + name: "generate_images", + description: "Generate and save images based on a user's query.", content: content, }; return skill; }; -export const timeAgo = (dateString: string): string => { +export const timeAgo = ( + dateString: string, + returnFormatted: boolean = false +): string => { // if dateStr is empty, return empty string if (!dateString) { return ""; @@ -454,10 +529,20 @@ export const timeAgo = (dateString: string): string => { const minutesAgo = Math.floor(timeDifference / (1000 * 60)); const hoursAgo = Math.floor(minutesAgo / 60); - // Format the date into a readable format e.g. "November 27" - const options: Intl.DateTimeFormatOptions = { month: "long", day: "numeric" }; + // Format the date into a readable format e.g. 
"November 27, 2021, 3:45 PM" + const options: Intl.DateTimeFormatOptions = { + month: "long", + day: "numeric", + year: "numeric", + hour: "numeric", + minute: "numeric", + }; const formattedDate = timestamp.toLocaleDateString(undefined, options); + if (returnFormatted) { + return formattedDate; + } + // Determine the time difference string let timeAgoStr: string; if (minutesAgo < 1) { @@ -527,7 +612,7 @@ export const fetchVersion = () => { */ export const sanitizeConfig = ( data: any, - keys: string[] = ["api_key", "id"] + keys: string[] = ["api_key", "id", "created_at", "updated_at"] ): any => { if (Array.isArray(data)) { return data.map((item) => sanitizeConfig(item, keys)); diff --git a/samples/apps/autogen-studio/frontend/src/components/views/builder/agents.tsx b/samples/apps/autogen-studio/frontend/src/components/views/builder/agents.tsx index be8a30f72476..8800ebfbdd37 100644 --- a/samples/apps/autogen-studio/frontend/src/components/views/builder/agents.tsx +++ b/samples/apps/autogen-studio/frontend/src/components/views/builder/agents.tsx @@ -8,24 +8,17 @@ import { } from "@heroicons/react/24/outline"; import { Dropdown, MenuProps, Modal, message } from "antd"; import * as React from "react"; -import { IAgentFlowSpec, IStatus } from "../../types"; +import { IAgent, IStatus } from "../../types"; import { appContext } from "../../../hooks/provider"; import { fetchJSON, getServerUrl, - sampleAgentConfig, sanitizeConfig, timeAgo, truncateText, } from "../../utils"; -import { - AgentFlowSpecView, - BounceLoader, - Card, - CardHoverBar, - LaunchButton, - LoadingOverlay, -} from "../../atoms"; +import { BounceLoader, Card, CardHoverBar, LoadingOverlay } from "../../atoms"; +import { AgentViewer } from "./utils/agentconfig"; const AgentsView = ({}: any) => { const [loading, setLoading] = React.useState(false); @@ -37,25 +30,30 @@ const AgentsView = ({}: any) => { const { user } = React.useContext(appContext); const serverUrl = getServerUrl(); const listAgentsUrl = 
`${serverUrl}/agents?user_id=${user?.email}`; - const saveAgentsUrl = `${serverUrl}/agents`; - const deleteAgentUrl = `${serverUrl}/agents/delete`; - const [agents, setAgents] = React.useState([]); - const [selectedAgent, setSelectedAgent] = - React.useState(null); + const [agents, setAgents] = React.useState([]); + const [selectedAgent, setSelectedAgent] = React.useState(null); const [showNewAgentModal, setShowNewAgentModal] = React.useState(false); const [showAgentModal, setShowAgentModal] = React.useState(false); - const sampleAgent = sampleAgentConfig(user?.email || ""); - const [newAgent, setNewAgent] = React.useState( - sampleAgent - ); + const sampleAgent = { + config: { + name: "sample_agent", + description: "Sample agent description", + human_input_mode: "NEVER", + max_consecutive_auto_reply: 3, + system_message: "", + }, + }; + const [newAgent, setNewAgent] = React.useState(sampleAgent); - const deleteAgent = (agent: IAgentFlowSpec) => { + const deleteAgent = (agent: IAgent) => { setError(null); setLoading(true); + + const deleteAgentUrl = `${serverUrl}/agents/delete?user_id=${user?.email}&agent_id=${agent.id}`; // const fetch; const payLoad = { method: "DELETE", @@ -71,8 +69,7 @@ const AgentsView = ({}: any) => { const onSuccess = (data: any) => { if (data && data.status) { message.success(data.message); - console.log("agents", data.data); - setAgents(data.data); + fetchAgents(); } else { message.error(data.message); } @@ -98,8 +95,6 @@ const AgentsView = ({}: any) => { const onSuccess = (data: any) => { if (data && data.status) { - // message.success(data.message); - setAgents(data.data); } else { message.error(data.message); @@ -114,42 +109,6 @@ const AgentsView = ({}: any) => { fetchJSON(listAgentsUrl, payLoad, onSuccess, onError); }; - const saveAgent = (agent: IAgentFlowSpec) => { - setError(null); - setLoading(true); - // const fetch; - - const payLoad = { - method: "POST", - headers: { - Accept: "application/json", - "Content-Type": 
"application/json", - }, - body: JSON.stringify({ - user_id: user?.email, - agent: agent, - }), - }; - - const onSuccess = (data: any) => { - if (data && data.status) { - message.success(data.message); - // console.log("agents", data.data); - setAgents(data.data); - } else { - message.error(data.message); - } - setLoading(false); - setNewAgent(sampleAgent); - }; - const onError = (err: any) => { - setError(err); - message.error(err.message); - setLoading(false); - }; - fetchJSON(saveAgentsUrl, payLoad, onSuccess, onError); - }; - React.useEffect(() => { if (user) { // console.log("fetching messages", messages); @@ -157,7 +116,7 @@ const AgentsView = ({}: any) => { } }, []); - const agentRows = (agents || []).map((agent: IAgentFlowSpec, i: number) => { + const agentRows = (agents || []).map((agent: IAgent, i: number) => { const cardItems = [ { title: "Download", @@ -185,11 +144,10 @@ const AgentsView = ({}: any) => { let newAgent = { ...agent }; newAgent.config.name = `${agent.config.name}_copy`; newAgent.user_id = user?.email; - newAgent.timestamp = new Date().toISOString(); + newAgent.updated_at = new Date().toISOString(); if (newAgent.id) { delete newAgent.id; } - setNewAgent(newAgent); setShowNewAgentModal(true); }, @@ -206,27 +164,41 @@ const AgentsView = ({}: any) => { }, ]; return ( -
-
- {truncateText(agent.config.name, 25)}
- } - onClick={() => { - setSelectedAgent(agent); - setShowAgentModal(true); - }} - > -
- {" "} - {truncateText(agent.config.description || "", 70)} +
  • + + {truncateText(agent.config.name || "", 25)}
  • -
    {timeAgo(agent.timestamp || "")}
    - - -
    -
    + } + onClick={() => { + setSelectedAgent(agent); + setShowAgentModal(true); + }} + > + +
    + {timeAgo(agent.updated_at || "")} +
    + + + ); }); @@ -237,45 +209,39 @@ const AgentsView = ({}: any) => { setShowAgentModal, handler, }: { - agent: IAgentFlowSpec | null; - setAgent: (agent: IAgentFlowSpec | null) => void; + agent: IAgent | null; + setAgent: (agent: IAgent | null) => void; showAgentModal: boolean; setShowAgentModal: (show: boolean) => void; - handler?: (agent: IAgentFlowSpec | null) => void; + handler?: (agent: IAgent | null) => void; }) => { - const [localAgent, setLocalAgent] = React.useState( - agent - ); + const [localAgent, setLocalAgent] = React.useState(agent); + + const closeModal = () => { + setShowAgentModal(false); + if (handler) { + handler(localAgent); + } + }; return ( - Agent Specification{" "} - - {agent?.config?.name || ""} - {" "} - - } + title={<>Agent Configuration} width={800} open={showAgentModal} onOk={() => { - setAgent(null); - setShowAgentModal(false); - if (handler) { - handler(localAgent); - } + closeModal(); }} onCancel={() => { - setAgent(null); - setShowAgentModal(false); + closeModal(); }} + footer={[]} > {agent && ( - )} {/* {JSON.stringify(localAgent)} */} @@ -344,10 +310,8 @@ const AgentsView = ({}: any) => { setAgent={setSelectedAgent} setShowAgentModal={setShowAgentModal} showAgentModal={showAgentModal} - handler={(agent: IAgentFlowSpec | null) => { - if (agent) { - saveAgent(agent); - } + handler={(agent: IAgent | null) => { + fetchAgents(); }} /> @@ -356,10 +320,8 @@ const AgentsView = ({}: any) => { setAgent={setNewAgent} setShowAgentModal={setShowNewAgentModal} showAgentModal={showNewAgentModal} - handler={(agent: IAgentFlowSpec | null) => { - if (agent) { - saveAgent(agent); - } + handler={(agent: IAgent | null) => { + fetchAgents(); }} /> @@ -397,7 +359,7 @@ const AgentsView = ({}: any) => { {agents && agents.length > 0 && (
    -
    {agentRows}
    +
      {agentRows}
    )} diff --git a/samples/apps/autogen-studio/frontend/src/components/views/builder/models.tsx b/samples/apps/autogen-studio/frontend/src/components/views/builder/models.tsx index be2c11099e38..2a3b0506d79c 100644 --- a/samples/apps/autogen-studio/frontend/src/components/views/builder/models.tsx +++ b/samples/apps/autogen-studio/frontend/src/components/views/builder/models.tsx @@ -2,7 +2,6 @@ import { ArrowDownTrayIcon, ArrowUpTrayIcon, DocumentDuplicateIcon, - ExclamationTriangleIcon, InformationCircleIcon, PlusIcon, TrashIcon, @@ -18,8 +17,15 @@ import { timeAgo, truncateText, } from "../../utils"; -import { BounceLoader, Card, CardHoverBar, LoadingOverlay } from "../../atoms"; +import { + BounceLoader, + Card, + CardHoverBar, + ControlRowView, + LoadingOverlay, +} from "../../atoms"; import TextArea from "antd/es/input/TextArea"; +import { ModelConfigView } from "./utils/modelconfig"; const ModelsView = ({}: any) => { const [loading, setLoading] = React.useState(false); @@ -31,8 +37,7 @@ const ModelsView = ({}: any) => { const { user } = React.useContext(appContext); const serverUrl = getServerUrl(); const listModelsUrl = `${serverUrl}/models?user_id=${user?.email}`; - const saveModelsUrl = `${serverUrl}/models`; - const deleteModelUrl = `${serverUrl}/models/delete`; + const createModelUrl = `${serverUrl}/models`; const testModelUrl = `${serverUrl}/models/test`; const defaultModel: IModelConfig = { @@ -50,28 +55,23 @@ const ModelsView = ({}: any) => { ); const [showNewModelModal, setShowNewModelModal] = React.useState(false); - const [showModelModal, setShowModelModal] = React.useState(false); const deleteModel = (model: IModelConfig) => { setError(null); setLoading(true); - // const fetch; + const deleteModelUrl = `${serverUrl}/models/delete?user_id=${user?.email}&model_id=${model.id}`; const payLoad = { method: "DELETE", headers: { "Content-Type": "application/json", }, - body: JSON.stringify({ - user_id: user?.email, - model: model, - }), }; const onSuccess 
= (data: any) => { if (data && data.status) { message.success(data.message); - setModels(data.data); + fetchModels(); } else { message.error(data.message); } @@ -111,9 +111,10 @@ const ModelsView = ({}: any) => { fetchJSON(listModelsUrl, payLoad, onSuccess, onError); }; - const saveModel = (model: IModelConfig) => { + const createModel = (model: IModelConfig) => { setError(null); setLoading(true); + model.user_id = user?.email; const payLoad = { method: "POST", @@ -121,17 +122,14 @@ const ModelsView = ({}: any) => { Accept: "application/json", "Content-Type": "application/json", }, - body: JSON.stringify({ - user_id: user?.email, - model: model, - }), + body: JSON.stringify(model), }; const onSuccess = (data: any) => { if (data && data.status) { message.success(data.message); - // console.log("models", data.data); - setModels(data.data); + const updatedModels = [data.data].concat(models || []); + setModels(updatedModels); } else { message.error(data.message); } @@ -142,7 +140,7 @@ const ModelsView = ({}: any) => { message.error(err.message); setLoading(false); }; - fetchJSON(saveModelsUrl, payLoad, onSuccess, onError); + fetchJSON(createModelUrl, payLoad, onSuccess, onError); }; React.useEffect(() => { @@ -180,7 +178,7 @@ const ModelsView = ({}: any) => { let newModel = { ...model }; newModel.model = `${model.model} Copy`; newModel.user_id = user?.email; - newModel.timestamp = new Date().toISOString(); + newModel.updated_at = new Date().toISOString(); if (newModel.id) { delete newModel.id; } @@ -200,27 +198,35 @@ const ModelsView = ({}: any) => { }, ]; return ( -
    -
    - {truncateText(model.model || "", 20)}
    - } - onClick={() => { - setSelectedModel(model); - setShowModelModal(true); - }} +
  • + {truncateText(model.model || "", 20)}
  • + } + onClick={() => { + setSelectedModel(model); + setShowModelModal(true); + }} + > +
    + {" "} + {truncateText(model.description || model.model || "", 70)} +
    +
    -
    - {" "} - {truncateText(model.description || model.model || "", 70)} -
    -
    {timeAgo(model.timestamp || "")}
    - - -
    -
    + {timeAgo(model.updated_at || "")} +
    + + + ); }); @@ -231,47 +237,20 @@ const ModelsView = ({}: any) => { setShowModelModal, handler, }: { - model: IModelConfig | null; + model: IModelConfig; setModel: (model: IModelConfig | null) => void; showModelModal: boolean; setShowModelModal: (show: boolean) => void; handler?: (agent: IModelConfig) => void; }) => { - const [loadingModelTest, setLoadingModelTest] = React.useState(false); - const [modelStatus, setModelStatus] = React.useState(null); - - const [localModel, setLocalModel] = React.useState( - model - ); - const testModel = (model: IModelConfig) => { - setModelStatus(null); - setLoadingModelTest(true); - const payLoad = { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ - user_id: user?.email, - model: model, - }), - }; + const [localModel, setLocalModel] = React.useState(model); - const onSuccess = (data: any) => { - if (data && data.status) { - message.success(data.message); - setModelStatus(data.data); - } else { - message.error(data.message); - } - setLoadingModelTest(false); - setModelStatus(data); - }; - const onError = (err: any) => { - message.error(err.message); - setLoadingModelTest(false); - }; - fetchJSON(testModelUrl, payLoad, onSuccess, onError); + const closeModal = () => { + setModel(null); + setShowModelModal(false); + if (handler) { + handler(model); + } }; return ( @@ -284,137 +263,21 @@ const ModelsView = ({}: any) => { } width={800} open={showModelModal} - footer={[ - , - , - , - ]} + footer={[]} onOk={() => { - setModel(null); - setShowModelModal(false); - if (handler) { - if (localModel) { - handler(localModel); - } - } + closeModal(); }} onCancel={() => { - setModel(null); - setShowModelModal(false); + closeModal(); }} > -
    -
    Enter parameters for your model.
    - { - setLocalModel({ ...localModel, model: e.target.value }); - }} - /> - { - if (localModel) { - setLocalModel({ ...localModel, api_key: e.target.value }); - } - }} - /> - { - if (localModel) { - setLocalModel({ ...localModel, base_url: e.target.value }); - } - }} - /> - { - if (localModel) { - setLocalModel({ ...localModel, api_type: e.target.value }); - } - }} + {model && ( + - { - if (localModel) { - setLocalModel({ ...localModel, api_version: e.target.value }); - } - }} - /> -