diff --git a/docs/source/api/main.rst b/docs/source/api/main.rst
new file mode 100644
index 0000000..c7d06d9
--- /dev/null
+++ b/docs/source/api/main.rst
@@ -0,0 +1,96 @@
+# SciPhi API Documentation (in reStructuredText format)
+
+SciPhi API Documentation
+========================
+
+Welcome to the SciPhi API documentation. Here you'll find a detailed guide to the endpoints provided by the SciPhi service. This API lets you interact with the functionality of the SciPhi codebase, bringing the power of large language models directly to your applications.
+
+Endpoint Overview
+-----------------
+
+1. **Search**: This endpoint uses the Retriever to fetch related documents for a given set of queries. Meta's `Contriever` embeddings are used in this process. Currently only Wikipedia is embedded, but the goal is to scale this to a comprehensive database embedded via recent SOTA methods.
+2. **OpenAI Formatted LLM Request (v1)**: SciPhi models are served via an API that is compatible with the OpenAI API.
+
+Detailed Endpoint Descriptions
+------------------------------
+
+Search Endpoint
+~~~~~~~~~~~~~~~
+
+- **URL**: ``/search``
+- **Method**: ``POST``
+- **Description**: This endpoint interacts with the Retriever module of the SciPhi codebase, allowing you to search for related documents based on the provided queries.
+
+**Request Body**:
+
+- ``queries``: List of query strings for which related documents are to be retrieved.
+- ``top_k``: (Optional) The number of top related documents to retrieve for each query.
+
+**Response**:
+A list of lists of Document objects, where each inner list contains the related documents for the corresponding query.
+
+**Example**:
+
+.. code-block:: bash
+
+   curl -X POST https://api.sciphi.ai/search \
+     -H "Authorization: Bearer YOUR_API_KEY" \
+     -d '{"queries": ["What is general relativity?", "Who is Albert Einstein?"], "top_k": 5}'
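+
+The same request can be issued programmatically. The snippet below is a minimal sketch using the Python ``requests`` library; it assumes the host shown in the example above and the request and response shapes documented for this endpoint.
+
+.. code-block:: python
+
+   import requests
+
+   # Fetch related documents for one or more queries from the search endpoint.
+   response = requests.post(
+       "https://api.sciphi.ai/search",
+       headers={"Authorization": "Bearer YOUR_API_KEY"},
+       json={
+           "queries": ["What is general relativity?", "Who is Albert Einstein?"],
+           "top_k": 5,  # optional: number of documents to return per query
+       },
+   )
+   response.raise_for_status()
+
+   # The response is a list of document lists, one inner list per query.
+   results = response.json()
+   print(len(results[0]))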
+
+OpenAI API (v1) Endpoint
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+- **URL**: ``/v1/{path:path}``
+- **Method**: ``GET``, ``POST``, ``PUT``, ``DELETE``
+- **Description**: This endpoint forwards requests to another server, such as vLLM. It can act as middleware, allowing you to utilize other services while managing access through the SciPhi API.
+
+**Request Body**:
+The body should match the request format of the service to which you are forwarding the request.
+
+**Response**:
+The response received from the forwarded service.
+
+**Example**:
+
+.. code-block:: bash
+
+   curl -X POST https://api.sciphi.ai/v1/completion \
+     -H "Authorization: Bearer YOUR_API_KEY" \
+     -d '{"prompt": "Describe the universe.", ...}'
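+
+Because this endpoint is OpenAI-compatible, an OpenAI client pointed at the SciPhi API base can also be used. The snippet below is an illustrative sketch only: it assumes the pre-1.0 ``openai`` Python package, assumes the proxy exposes the standard completions route, and reuses the model name from the quickstart example in these docs.
+
+.. code-block:: python
+
+   import openai
+
+   # Point the OpenAI client at the SciPhi API instead of api.openai.com.
+   openai.api_key = "YOUR_API_KEY"
+   openai.api_base = "https://api.sciphi.ai/v1"
+
+   response = openai.Completion.create(
+       model="SciPhi/SciPhi-Self-RAG-Mistral-7B-32k",
+       prompt="Describe the universe.",
+       max_tokens=256,
+   )
+   print(response["choices"][0]["text"])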
+
+Alternatively, with the SciPhi framework, you may execute a generation as shown:
+
+.. code-block:: python
+
+   from sciphi.interface import LLMInterfaceManager, RAGInterfaceManager
+   from sciphi.llm import GenerationConfig
+
+   # NOTE: this snippet assumes that LLMProviderName, SciPhiFormatter, a configured
+   # rag_interface, the llm_* settings, and the prompt are imported or defined
+   # elsewhere; see the quickstart for a complete, runnable script.
+
+   # LLM Provider Settings
+   llm_interface = LLMInterfaceManager.get_interface_from_args(
+       LLMProviderName(llm_provider_name),
+       api_key=llm_api_key,
+       api_base=llm_api_base,
+       rag_interface=rag_interface,
+       model_name=llm_model_name,
+   )
+
+   # Set up typical LLM generation settings
+   generation_config = GenerationConfig(
+       temperature=llm_temperature,
+       top_k=llm_top_k,
+       max_tokens_to_sample=llm_max_tokens_to_sample,
+       model_name=llm_model_name,
+       skip_special_tokens=llm_skip_special_tokens,
+       stop_token=SciPhiFormatter.INIT_PARAGRAPH_TOKEN,
+   )
+
+   # Get the completion for a prompt
+   completion = llm_interface.get_completion(prompt, generation_config)
+
+API Key and Signup
+------------------
+
+To access the SciPhi API, you will need an API key. If you don't have one, you can sign up `here `_. Ensure you include the API key in your request headers as shown in the examples.
\ No newline at end of file
diff --git a/docs/source/conf.py b/docs/source/conf.py
index c41a2cd..8a9fdd9 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -17,9 +17,9 @@
 
 # -- Project information -----------------------------------------------------
 
-project = "SciPHi"
-copyright = "2023, SciPHi Team"
-author = "the SciPHi Team"
+project = "SciPhi"
+copyright = "2023, Emergent AGI Inc."
+author = "the SciPhi Team"
 
 
 # -- General configuration ---------------------------------------------------
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 4833308..c5162e1 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -51,13 +51,27 @@ Developers can also instantiate their own LLM and RAG providers using the SciPhi
 For a detailed setup guide, deeper feature exploration, and developer insights, refer to:
 
 * `SciPhi GitHub Repository `_
-* `Example Textbook Generated with SciPhi `_
-* `Default Settings for Textbook Generation `_
-* `Library of SciPhi Books `_
+* `Example Textbook Generated with SciPhi `_
+* `ToC Used for Sample Textbook Generation `_
+* `Default Settings for Textbook Generation `_
+* `Library of SciPhi Books `_
 
-Do consider citing our work if SciPhi aids your research. Check the citation section for details.
+Citing Our Work
+---------------
+
+If you're using SciPhi in your research or project, please cite our work:
+
+.. code-block:: plaintext
+
+   @software{SciPhi,
+     author = {Colegrove, Owen},
+     doi = {Pending},
+     month = {09},
+     title = {{SciPhi: A Framework for LLM Powered Data}},
+     url = {https://github.com/sciphi-ai/sciphi},
+     year = {2023}
+   }
 
 Documentation
 -------------
@@ -71,15 +85,6 @@ Documentation
 
 .. toctree::
    :maxdepth: 1
-   :caption: Serving
-
-   serving/distributed_serving
-   serving/run_on_sky
-   serving/deploying_with_triton
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Models
+   :caption: API
 
-   models/supported_models
-   models/adding_model
\ No newline at end of file
+   api/main
\ No newline at end of file
diff --git a/docs/source/setup/installation.rst b/docs/source/setup/installation.rst
index c04499a..2e02baf 100644
--- a/docs/source/setup/installation.rst
+++ b/docs/source/setup/installation.rst
@@ -1,12 +1,8 @@
 .. _sciphi_installation:
 
-Installation for SciPhi [ΨΦ]: AI's Knowledge Engine 💡
+Installation for SciPhi
 =====================================================
-
-SciPhi Logo
-
 
 SciPhi is a powerful knowledge engine that integrates with multiple LLM providers and RAG providers, allowing for customizable data creation, retriever-augmented generation, and even textbook generation.
 
 Requirements
@@ -74,20 +70,4 @@ To set up SciPhi for development:
 Licensing and Acknowledgment
 ---------------------------
 
-SciPhi is licensed under the [Apache-2.0 License](./LICENSE).
-
-Citing Our Work
----------------
-
-If you're using SciPhi in your research or project, please cite our work:
-
-.. code-block:: plaintext
-
-   @software{SciPhi,
-     author = {Colegrove, Owen},
-     doi = {Pending},
-     month = {09},
-     title = {{SciPhi: A Framework for LLM Powered Data}},
-     url = {https://github.com/sciphi-ai/sciphi},
-     year = {2023}
-   }
+SciPhi is licensed under the Apache-2.0 License.
diff --git a/docs/source/setup/quickstart.rst b/docs/source/setup/quickstart.rst
index 21217ca..6e83541 100644
--- a/docs/source/setup/quickstart.rst
+++ b/docs/source/setup/quickstart.rst
@@ -76,6 +76,15 @@ Here's a simple example of how you can utilize SciPhi to work with your own LLM
 
 This example showcases the flexibility and power of SciPhi, allowing you to seamlessly integrate various LLM and RAG providers into your applications.
 
+Generating Completions with SciPhi
+----------------------------------
+
+SciPhi supports multiple LLM providers (e.g. OpenAI, Anthropic, HuggingFace, and vLLM) and RAG providers (e.g. SciPhi). To run an example completion with SciPhi, using the code shown above, execute:
+
+.. code-block:: bash
+
+   python -m sciphi.scripts.sciphi_gen_completion --llm_provider_name=sciphi --llm_api_key=YOUR_SCIPHI_API_KEY --llm_api_base=https://api.sciphi.ai/v1 --rag_api_base=https://api.sciphi.ai --llm_model_name=SciPhi/SciPhi-Self-RAG-Mistral-7B-32k --query="Write a few paragraphs on general relativity. Include the mathematical definition of Einstein's field equation in your writeup."
+
 Generating Data with SciPhi
 ---------------------------