Devv (#5)
* Fixed a bug in setup_text_llm

* chore: update test suite

This adds a system_message prepending test and a .reset() test, and makes the math testing a little more robust, while also trying to prevent edge cases where the LLM would respond with explanations or an affirmative "Sure, I can do that. Here's the result..." instead of just the expected result.
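A common mitigation for that kind of chatter, sketched here as a hypothetical helper (not the actual test code), is to extract only the final number from a reply:

```python
import re

# Hypothetical helper, not the actual test code: pull the last number
# out of a reply even when the model prepends chatter like
# "Sure I can do that. Here's the result: ..."
def extract_number(reply):
    matches = re.findall(r"-?\d+(?:\.\d+)?", reply)
    return float(matches[-1]) if matches else None
```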

* New documentation site: https://docs.openinterpreter.com/

* feat: add %tokens magic command that counts tokens via tiktoken

* feat: add estimated cost from litellm to token counter

* fix: add note about only including current messages

* chore: add %tokens to README

* fix: include generated code in token count; round to 6 decimals
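The counting side of `%tokens` can be sketched roughly as follows; the function name is illustrative, and the fallback path is only a crude estimate for environments without tiktoken:

```python
# Rough sketch of the token counting behind %tokens; illustrative only.
# Uses tiktoken when installed, otherwise falls back to a crude
# whitespace-based estimate.
def count_tokens(text, model="gpt-3.5-turbo"):
    try:
        import tiktoken
        encoding = tiktoken.encoding_for_model(model)
        return len(encoding.encode(text))
    except ImportError:
        return len(text.split())
```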

* Put quotes around sys.executable (bug fix)
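The quoting fix matters when the interpreter path contains spaces (for example under `C:\Program Files`); a minimal standard-library illustration of the idea:

```python
import shlex
import sys

# Without quoting, a sys.executable path containing spaces would be
# split into multiple shell words; shlex.quote makes it a single token.
def python_command(script_path):
    return f"{shlex.quote(sys.executable)} {shlex.quote(script_path)}"
```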

* Added powershell language

* Adding Mistral support

* Removed /archive, adding Mistral support

* Removed /archive, adding Mistral support

* First version of ooba-powered setup_local_text_llm

* First version of ooba-powered setup_local_text_llm

* Second version of ooba-powered setup_local_text_llm

* Testing tests

* More flexible tests

* Paused math test

Let's look into this soon; it's failing a lot.

* Improved tests

* feat: add support for loading different config.yaml files

This adds a --config_file option that allows users to specify a path to a config file or the name of a config file in their Open Interpreter config directory and use that config file when invoking interpreter.

It also adds similar functionality to the --config parameter allowing users to open and edit different config files.

To simplify finding and loading files I also added a utility to return the path to a directory in the Open Interpreter config directory and moved some other points in the code from using a manually constructed path to utilizing the same utility method for consistency and simplicity.
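A sketch of such a utility, with a hypothetical name and config location (the real implementation may differ):

```python
import os

# Hypothetical sketch of the path utility described above; the actual
# function name and configuration directory may differ.
CONFIG_DIR = os.path.join(os.path.expanduser("~"), ".config", "open-interpreter")

def get_config_dir_path(subdirectory=None):
    """Return the config directory, or a subdirectory inside it."""
    if subdirectory is None:
        return CONFIG_DIR
    return os.path.join(CONFIG_DIR, subdirectory)
```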

* feat: add optional prompt token/cost estimate to %tokens

This gives `%tokens` an optional `prompt` argument that estimates the tokens and cost of any provided prompt, letting users consider the implications of what they are about to send before it affects their token usage.

* Paused math test

* Switched tests to turbo

* More Ooba

* Using Eric's tests

* The Local Update

* Alignment

* Alignment

* Fixed shell blocks not ending on error bug

* Added useful flags to generator

* Fixed Mistral HTML entities + backticks problem

* Fixed Mistral HTML entities + backticks problem

* OpenAI messages -> text LLMs are now non-function-calling

* OpenAI messages -> text LLMs are now non-function-calling

* Better messaging

* Incremented version, updated litellm

* Skipping nested test

* Exposed Procedures

* Exposed get_relevant_procedures_string

* Better procedures exposure

* Better procedures exposure

* Exits properly in colab

* Better exposed procedures

* Better exposed procedures

* More powerful reset function, incremented version

* WELCOME HACKERS!

The Open Interpreter Hackathon is on.

* Welcome hackers!

* Fix typo in setup_text_llm.py

recieve -> receive

* Welcome hackers!

* The OI hackathon has wrapped! Thank you everyone!

* THE HACKATHON IS ON

* ● The Open Interpreter Hackathon has been extended!

* Join the hackathon! https://lablab.ai/event/open-interpreter-hackathon

* Thank you hackathon participants!

* Fix "depracated" typo

* Update python.py

Resolves issue: OpenInterpreter#635

* Update python.py

More robust handling.

* Fix indentation in language_map.py

* Made semgrep optional, updated packages, pinned LiteLLM

* Fixed end_of_message and end_of_code flags

* Add container timeout for easier server integration of OI, controllable via the env var 'OI_CONTAINER_TIMEOUT'. Defaults to no timeout. Also adds type safety to core/core.py

* Update things, resolve merge conflicts.

* Fixed the tests: they imported `interpreter` and assumed it was an instance, but it wasn't. They now use `interpreter.create_interpreter()`.

---------

Co-authored-by: Kyle Huang <[email protected]>
Co-authored-by: Eric allen <[email protected]>
Co-authored-by: killian <[email protected]>
Co-authored-by: DaveChini <[email protected]>
Co-authored-by: Ikko Eltociear Ashimine <[email protected]>
Co-authored-by: Jamie Dubs <[email protected]>
Co-authored-by: Leif Taylor <[email protected]>
Co-authored-by: chenpeng08 <[email protected]>
9 people authored Oct 20, 2023
1 parent e2716cd commit 65b7ad0
Showing 49 changed files with 2,688 additions and 3,579 deletions.
99 changes: 90 additions & 9 deletions README.md

![banner 2](https://github.com/KillianLucas/open-interpreter/assets/63927363/c1aec011-6d3c-4960-ab55-749326b8a7c9)
<h1 align="center">● Open Interpreter</h1>

<p align="center">
<a href="https://discord.gg/6p3fD6rBVm">
<a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
<a href="README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
<img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License"/>
<br><br>
<b>Open Interpreter</b> lets language models run code on your computer.<br>
An open-source, locally running implementation of OpenAI's Code Interpreter.<br>
<br><a href="https://openinterpreter.com">Get early access to the desktop app</a>‎ ‎ |‎ ‎ <b><a href="https://docs.openinterpreter.com/">Read our new docs</a></b><br>
</p>

<br>

![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)

<br>

```shell
pip install open-interpreter
```
In interactive mode, you can use the below commands to enhance your experience.

**Available Commands:**
`%debug [true/false]`: Toggle debug mode. Without arguments or with 'true', it
enters debug mode. With 'false', it exits debug mode.
`%reset`: Resets the current session.
`%undo`: Removes the previous message and its response from the message history.
`%save_message [path]`: Saves messages to a specified JSON path. If no path is
provided, it defaults to 'messages.json'.
`%load_message [path]`: Loads messages from a specified JSON path. If no path
is provided, it defaults to 'messages.json'.
`%tokens [prompt]`: Calculate the tokens used by the current conversation's messages and estimate their cost, and optionally calculate the tokens and estimated cost of a `prompt` if one is provided. Relies on [LiteLLM's `cost_per_token()` method](https://docs.litellm.ai/docs/completion/token_usage#2-cost_per_token) for estimated cost.
`%help`: Show the help message.
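The cost half of `%tokens` can be sketched as follows, assuming LiteLLM is installed; the fallback rate below is purely hypothetical, not a real price:

```python
# Sketch of the cost estimate behind %tokens. Uses LiteLLM's
# cost_per_token() when available; the fallback rate is hypothetical.
def estimate_cost(prompt_tokens, completion_tokens=0, model="gpt-3.5-turbo"):
    try:
        from litellm import cost_per_token
        prompt_cost, completion_cost = cost_per_token(
            model=model,
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
        )
        return round(prompt_cost + completion_cost, 6)
    except ImportError:
        hypothetical_rate = 0.000002  # assumed $/token, not a real price
        return round((prompt_tokens + completion_tokens) * hypothetical_rate, 6)
```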

### Configuration
Run the following command to open the configuration file:
interpreter --config
```

#### Multiple Configuration Files

Open Interpreter supports multiple `config.yaml` files, allowing you to easily switch between configurations via the `--config_file` argument.

**Note**: `--config_file` accepts either a file name or a file path. File names will use the default configuration directory, while file paths will use the specified path.

To create or edit a new configuration, run:

```shell
interpreter --config --config_file $config_path
```

To have Open Interpreter load a specific configuration file, run:

```shell
interpreter --config_file $config_path
```

**Note**: Replace `$config_path` with the name of or path to your configuration file.
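The name-versus-path behavior can be sketched like this (a hypothetical helper, not the actual implementation):

```python
import os

# Hypothetical sketch of --config_file resolution: a bare file name is
# looked up in the Open Interpreter config directory, while anything
# containing a directory component is treated as a path.
def resolve_config_file(config_file, config_dir):
    if os.path.basename(config_file) == config_file:
        return os.path.join(config_dir, config_file)
    return config_file
```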

##### CLI Example

1. Create a new `config.turbo.yaml` file
```shell
interpreter --config --config_file config.turbo.yaml
```
2. Edit the `config.turbo.yaml` file to set `model` to `gpt-3.5-turbo`
3. Run Open Interpreter with the `config.turbo.yaml` configuration
```shell
interpreter --config_file config.turbo.yaml
```

##### Python Example

You can also load configuration files when calling Open Interpreter from Python scripts:

```python
import os
import interpreter

currentPath = os.path.dirname(os.path.abspath(__file__))
config_path = os.path.join(currentPath, './config.test.yaml')

interpreter.extend_config(config_path=config_path)

message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)
```

## Sample FastAPI Server

The generator update enables Open Interpreter to be controlled via HTTP REST endpoints:

```python
# server.py

from fastapi import FastAPI, Response
import interpreter

app = FastAPI()

@app.get("/chat")
def chat_endpoint(message):
    return Response(interpreter.chat(message, stream=True), media_type="text/event-stream")

@app.get("/history")
def history_endpoint():
    return interpreter.messages
```
```shell
pip install fastapi uvicorn
uvicorn server:app --reload
```

## Safety Notice

Since generated code is executed in your local environment, it can interact with your files and system settings, potentially leading to unexpected outcomes like data loss or security risks.
43 changes: 0 additions & 43 deletions interpreter/archive/(wip)_model_explorer.py

This file was deleted.

8 changes: 0 additions & 8 deletions interpreter/archive/README.md

This file was deleted.

212 changes: 0 additions & 212 deletions interpreter/archive/cli.py

This file was deleted.

