
Commit

up
emrgnt-cmplxty committed Oct 31, 2023
1 parent aa37f86 commit f6a9a1d
Showing 5 changed files with 130 additions and 40 deletions.
96 changes: 96 additions & 0 deletions docs/source/api/main.rst
@@ -0,0 +1,96 @@

SciPhi API Documentation
========================

Welcome to the SciPhi API documentation. Here you'll find a detailed guide to the endpoints provided by the SciPhi service. This API lets you interact with the functionality of the SciPhi codebase, bringing the power of large language models directly to your applications.

Endpoint Overview
-----------------

1. **Search**: This endpoint allows you to use the Retriever to fetch related documents for a given set of queries. Meta's `Contriever` embeddings are used in this process. Currently only Wikipedia is embedded, but the goal is to scale to a comprehensive database embedded with recent state-of-the-art methods.
2. **OpenAI Formatted LLM Request (v1)**: SciPhi models are served via an API that is compatible with the OpenAI API.

Detailed Endpoint Descriptions
------------------------------

Search Endpoint
~~~~~~~~~~~~~~~

- **URL**: ``/search``
- **Method**: ``POST``
- **Description**: This endpoint interacts with the Retriever module of the SciPhi codebase, allowing you to search for related documents based on the provided queries.

**Request Body**:

- ``queries``: List of query strings for which related documents are to be retrieved.
- ``top_k``: (Optional) The number of top related documents to retrieve for each query.

**Response**:
A list of lists of Document objects, where each inner list contains the related documents for the corresponding query.

**Example**:

.. code-block:: bash

   curl -X POST http://<api_url>/search \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -d '{"queries": ["What is general relativity?", "Who is Albert Einstein?"], "top_k": 5}'

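The same request can be issued from Python. The sketch below uses only the standard library; the helper names and the ``<api_url>`` placeholder are illustrative, and the response shape follows the description above (one list of documents per query):

```python
import json
import urllib.request


def build_search_request(
    api_url: str, api_key: str, queries: list[str], top_k: int = 5
) -> urllib.request.Request:
    """Build the POST request for the /search endpoint described above."""
    payload = json.dumps({"queries": queries, "top_k": top_k}).encode()
    return urllib.request.Request(
        f"{api_url}/search",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def search(api_url: str, api_key: str, queries: list[str], top_k: int = 5):
    """POST the queries and return one list of related documents per query."""
    request = build_search_request(api_url, api_key, queries, top_k)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Example usage (requires a valid API key):
# docs_per_query = search(
#     "http://<api_url>", "YOUR_API_KEY",
#     ["What is general relativity?", "Who is Albert Einstein?"], top_k=5,
# )
```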
OpenAI API (v1) Endpoint
~~~~~~~~~~~~~~~~~~~~~~~~

- **URL**: ``/v1/{path:path}``
- **Method**: ``GET``, ``POST``, ``PUT``, ``DELETE``
- **Description**: This endpoint is designed to forward requests to another server, such as vLLM. It can act as a middleware, allowing you to utilize other services while managing access through the SciPhi API.

**Request Body**:
The body should match the request format of the service to which you are forwarding the request.

**Response**:
The response received from the forwarded service.

**Example**:

.. code-block:: bash

   curl -X POST https://api.sciphi.ai/v1/completion \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -d '{"prompt": "Describe the universe.", ...}'

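The curl request above can also be sketched in Python with the standard library. The helper name and the ``max_tokens`` field are illustrative assumptions; the payload should match whatever the downstream service (e.g. vLLM) expects:

```python
import json
import urllib.request

API_BASE = "https://api.sciphi.ai/v1"


def build_v1_request(path: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style POST request for the /v1 proxy to forward."""
    return urllib.request.Request(
        f"{API_BASE}/{path}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


request = build_v1_request(
    "completion",
    {"prompt": "Describe the universe.", "max_tokens": 256},
    "YOUR_API_KEY",
)
# To send it (requires a valid API key):
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```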
Alternatively, with the SciPhi framework, you may execute a generation as shown:


.. code-block:: python

   from sciphi.interface import LLMInterfaceManager, RAGInterfaceManager
   from sciphi.llm import GenerationConfig

   # LLM provider settings (the llm_* variables, rag_interface,
   # LLMProviderName, and SciPhiFormatter are assumed to be defined or
   # imported elsewhere).
   llm_interface = LLMInterfaceManager.get_interface_from_args(
       LLMProviderName(llm_provider_name),
       api_key=llm_api_key,
       api_base=llm_api_base,
       rag_interface=rag_interface,
       model_name=llm_model_name,
   )

   # Set up typical LLM generation settings
   completion_config = GenerationConfig(
       temperature=llm_temperature,
       top_k=llm_top_k,
       max_tokens_to_sample=llm_max_tokens_to_sample,
       model_name=llm_model_name,
       skip_special_tokens=llm_skip_special_tokens,
       stop_token=SciPhiFormatter.INIT_PARAGRAPH_TOKEN,
   )

   # Get the completion for a prompt
   completion = llm_interface.get_completion(prompt, completion_config)

API Key and Signup
------------------

To access the SciPhi API, you will need an API key. If you don't have one, you can sign up `here <https://www.sciphi.ai/signup>`_. Ensure you include the API key in your request headers as shown in the examples.
6 changes: 3 additions & 3 deletions docs/source/conf.py
@@ -17,9 +17,9 @@

# -- Project information -----------------------------------------------------

project = "SciPHi"
copyright = "2023, SciPHi Team"
author = "the SciPHi Team"
project = "SciPhi"
copyright = "2023, Emergent AGI Inc."
author = "the SciPhi Team"


# -- General configuration ---------------------------------------------------
35 changes: 20 additions & 15 deletions docs/source/index.rst
@@ -51,13 +51,27 @@ Developers can also instantiate their own LLM and RAG providers using the SciPhi
For a detailed setup guide, deeper feature exploration, and developer insights, refer to:

* `SciPhi GitHub Repository <https://github.com/emrgnt-cmplxty/sciphi>`_
* `Example Textbook Generated with SciPhi <https://github.com/SciPhi-AI/sciphi/data/sample/textbooks/Aerodynamics_of_Viscous_Fluids.md>`_
* `Default Settings for Textbook Generation <https://github.com/SciPhi-AI/sciphi/config/generation_settings/textbook_generation_settings.yaml>`_
* `Library of SciPhi Books <https://github.com/SciPhi-AI/github.com/SciPhi-AI/library-of-phi>`_
* `Example Textbook Generated with SciPhi <https://github.com/SciPhi-AI/sciphi/blob/main/sciphi/data/sample/textbooks/Aerodynamics_of_Viscous_Fluids.md>`_
* `ToC Used for Sample Textbook Generation <https://github.com/SciPhi-AI/sciphi/blob/main/sciphi/data/sample/table_of_contents/Aerodynamics_of_Viscous_Fluids.yaml>`_
* `Default Settings for Textbook Generation <https://github.com/SciPhi-AI/sciphi/blob/main/sciphi/config/generation_settings/textbook_generation_settings.yaml>`_
* `Library of SciPhi Books <https://github.com/SciPhi-AI/library-of-phi/>`_

Do consider citing our work if SciPhi aids your research. Check the citation section for details.

Citing Our Work
---------------

If you're using SciPhi in your research or project, please cite our work:

.. code-block:: plaintext

   @software{SciPhi,
     author = {Colegrove, Owen},
     doi = {Pending},
     month = {09},
     title = {{SciPhi: A Framework for LLM Powered Data}},
     url = {https://github.com/sciphi-ai/sciphi},
     year = {2023}
   }

Documentation
-------------
@@ -71,15 +85,6 @@ Documentation

.. toctree::
:maxdepth: 1
:caption: Serving

serving/distributed_serving
serving/run_on_sky
serving/deploying_with_triton

.. toctree::
:maxdepth: 1
:caption: Models
:caption: API

models/supported_models
models/adding_model
api/main
24 changes: 2 additions & 22 deletions docs/source/setup/installation.rst
@@ -1,12 +1,8 @@
.. _sciphi_installation:

Installation for SciPhi [ΨΦ]: AI's Knowledge Engine 💡
Installation for SciPhi
=====================================================

<p align="center">
<img width="716" alt="SciPhi Logo" src="https://github.com/emrgnt-cmplxty/sciphi/assets/68796651/195367d8-54fd-4281-ace0-87ea8523f982">
</p>

SciPhi is a powerful knowledge engine that integrates with multiple LLM providers and RAG providers, allowing for customizable data creation, retriever-augmented generation, and even textbook generation.

Requirements
@@ -74,20 +70,4 @@ To set up SciPhi for development:
Licensing and Acknowledgment
---------------------------

SciPhi is licensed under the [Apache-2.0 License](./LICENSE).

Citing Our Work
---------------

If you're using SciPhi in your research or project, please cite our work:

.. code-block:: plaintext
@software{SciPhi,
author = {Colegrove, Owen},
doi = {Pending},
month = {09},
title = {{SciPhi: A Framework for LLM Powered Data}},
url = {https://github.com/sciphi-ai/sciphi},
year = {2023}
}
SciPhi is licensed under the Apache-2.0 License.
9 changes: 9 additions & 0 deletions docs/source/setup/quickstart.rst
@@ -76,6 +76,15 @@ Here's a simple example of how you can utilize SciPhi to work with your own LLM
This example showcases the flexibility and power of SciPhi, allowing you to seamlessly integrate various LLM and RAG providers into your applications.


Generating Completions with SciPhi
----------------------------------

SciPhi supports multiple LLM providers (e.g. OpenAI, Anthropic, HuggingFace, and vLLM) and RAG providers (e.g. SciPhi). To run an example completion like the code shown above, execute:

.. code-block:: bash

   python -m sciphi.scripts.sciphi_gen_completion \
     --llm_provider_name=sciphi \
     --llm_api_key=YOUR_SCIPHI_API_KEY \
     --llm_api_base=https://api.sciphi.ai/v1 \
     --rag_api_base=https://api.sciphi.ai \
     --llm_model_name=SciPhi/SciPhi-Self-RAG-Mistral-7B-32k \
     --query="Write a few paragraphs on general relativity. Include the mathematical definition of Einstein's field equation in your writeup."

Generating Data with SciPhi
---------------------------
