v0.5.0 (#153)
Changes proposed in this pull request:

 * Adjusted AI example to use automatic model download feature
 * Adjusted Makefiles of all examples
 * Fixed missing default icon for `files_dropdown_menu` API
 * Updated README.md

---------

Signed-off-by: Alexander Piskun <[email protected]>
bigcat88 authored Oct 23, 2023
1 parent 84de0f2 commit df8f252
Showing 34 changed files with 206 additions and 376 deletions.
4 changes: 2 additions & 2 deletions .run/Skeleton.run.xml → .run/Skeleton (28).run.xml
@@ -1,15 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="Skeleton" type="PythonConfigurationType" factoryName="Python">
<configuration default="false" name="Skeleton (28)" type="PythonConfigurationType" factoryName="Python">
<module name="nc_py_api" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
<envs>
<env name="PYTHONUNBUFFERED" value="1" />
<env name="APP_ID" value="skeleton" />
<env name="APP_PORT" value="9030" />
<env name="APP_SECRET" value="12345" />
<env name="APP_VERSION" value="1.0.0" />
<env name="NEXTCLOUD_URL" value="http://nextcloud.local/index.php" />
<env name="PYTHONUNBUFFERED" value="1" />
</envs>
<option name="SDK_HOME" value="" />
<option name="WORKING_DIRECTORY" value="$PROJECT_DIR$" />
4 changes: 2 additions & 2 deletions .run/TalkBot.run.xml → .run/TalkBot (28).run.xml
@@ -1,15 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="TalkBot" type="PythonConfigurationType" factoryName="Python">
<configuration default="false" name="TalkBot (28)" type="PythonConfigurationType" factoryName="Python">
<module name="nc_py_api" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
<envs>
<env name="PYTHONUNBUFFERED" value="1" />
<env name="APP_ID" value="talk_bot" />
<env name="APP_PORT" value="9032" />
<env name="APP_SECRET" value="12345" />
<env name="APP_VERSION" value="1.0.0" />
<env name="NEXTCLOUD_URL" value="http://nextcloud.local/index.php" />
<env name="PYTHONUNBUFFERED" value="1" />
</envs>
<option name="SDK_HOME" value="" />
<option name="WORKING_DIRECTORY" value="$PROJECT_DIR$" />
29 changes: 0 additions & 29 deletions .run/TalkBotMulti.run.xml

This file was deleted.

5 changes: 3 additions & 2 deletions CHANGELOG.md
@@ -2,15 +2,16 @@

All notable changes to this project will be documented in this file.

## [0.4.1 - 2023-10-17]
## [0.5.0 - 2023-10-23]

### Added

- Support for the new AppAPI endpoint `/init` and automatically downloading models from `huggingface`. #151
- Support for the new `/init` AppAPI endpoint and the ability to automatically load models from `huggingface`. #151

### Changed

- All examples were adjusted to changes in AppAPI.
- The examples now use FastAPI's `lifespan` instead of the deprecated `on_event`.

## [0.4.0 - 2023-10-15]

47 changes: 32 additions & 15 deletions README.md
@@ -57,6 +57,38 @@ as long as it doesn't involve calls that require user password verification.

**NextcloudApp** is available only for Nextcloud 27.1.2 and later versions with **AppAPI** installed.

### Nextcloud skeleton app in Python

```python3
from contextlib import asynccontextmanager

from fastapi import FastAPI

from nc_py_api import NextcloudApp
from nc_py_api.ex_app import LogLvl, run_app, set_handlers


@asynccontextmanager
async def lifespan(_app: FastAPI):
    set_handlers(APP, enabled_handler)
    yield


APP = FastAPI(lifespan=lifespan)


def enabled_handler(enabled: bool, nc: NextcloudApp) -> str:
    if enabled:
        nc.log(LogLvl.WARNING, "Hello from nc_py_api.")
    else:
        nc.log(LogLvl.WARNING, "Bye bye from nc_py_api.")
    return ""


if __name__ == "__main__":
    run_app("main:APP", log_level="trace")
```

### Support

You can support us in several ways:
@@ -81,18 +113,3 @@ You can support us in several ways:
- [Issues](https://github.com/cloud-py-api/nc_py_api/issues)
- [Setting up dev environment](https://cloud-py-api.github.io/nc_py_api/DevSetup.html)
- [Changelog](https://github.com/cloud-py-api/nc_py_api/blob/main/CHANGELOG.md)

### Motivation

_Python's language, elegant and clear,_<br>
_Weaves logic's threads without fear,_<br>
_And in the sky, where clouds take form,_<br>
_Nextcloud emerges, a digital norm._<br>

_Together they stand, a duo bright,_<br>
_Python and Nextcloud, day and night,_<br>
_In a digital dance, they guide and sail,_<br>
_Shaping tomorrow, where new ideas prevail._<br>

#### **Know that we are always here to support and assist you on your journey.**
### P.S: **_Good luck, and we hope you have fun!_**
15 changes: 7 additions & 8 deletions docs/NextcloudApp.rst
@@ -12,12 +12,12 @@ Skeleton

What's going on in the skeleton?

First, it's important to understand that an external application acts more like a microservice, with its endpoints being called by Nextcloud.
In `FastAPI lifespan <https://fastapi.tiangolo.com/advanced/events/?h=lifespan#lifespan>`_ we call the ``set_handlers`` function to further process the application installation logic.

Therefore, when the application receives a request at the endpoint ``/enable``,
it should register all its functionalities in the cloud and wait for requests from Nextcloud.
Since this is a simple skeleton application, we only define the ``/enable`` endpoint.

.. note:: This doesn't apply to system applications, which will be covered in the next chapter.
When the application receives a request at the endpoint ``/enable``,
it should register all its functionalities in the cloud and wait for requests from Nextcloud.

So, calling:

@@ -54,7 +54,7 @@ Here they are:

* APP_ID - ID of the application.
* APP_PORT - Port on which the application listens for requests from Nextcloud.
* APP_SECRET - Secret for ``hmac`` signature generation.
* APP_SECRET - Shared secret between Nextcloud and Application.
* APP_VERSION - Version of the application.
* AA_VERSION - Version of the AppAPI.
* NEXTCLOUD_URL - URL at which the application can access the Nextcloud API.
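
Purely as an illustration (an application does not normally need to do this itself, since **nc_py_api** reads these variables internally), they can be inspected like any other environment variables:

.. code-block::

    import os

    # Print the values provided by AppAPI, e.g. while debugging a manual_install setup.
    for name in ("APP_ID", "APP_PORT", "APP_SECRET", "APP_VERSION", "AA_VERSION", "NEXTCLOUD_URL"):
        print(name, "=", os.environ.get(name))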
@@ -67,7 +67,7 @@ After launching your application, execute the following command in the Nextcloud
php occ app_api:app:register YOUR_APP_ID manual_install --json-info \
"{\"appid\":\"YOUR_APP_ID\",\"name\":\"YOUR_APP_DISPLAY_NAME\",\"daemon_config_name\":\"manual_install\",\"version\":\"YOU_APP_VERSION\",\"secret\":\"YOUR_APP_SECRET\",\"host\":\"host.docker.internal\",\"scopes\":{\"required\":[2, 10, 11],\"optional\":[30, 31, 32, 33]},\"port\":SELECTED_PORT,\"protocol\":\"http\",\"system_app\":0}" \
-e --force-scopes
--force-scopes
You can see how **nc_py_api** registers in ``scripts/dev_register.sh``.

@@ -79,8 +79,7 @@ Examples for such Makefiles can be found in this repository:
`ToGif <https://github.com/cloud-py-api/nc_py_api/blob/main/examples/as_app/to_gif/Makefile>`_ ,
`nc_py_api <https://github.com/cloud-py-api/nc_py_api/blob/main/scripts/dev_register.sh>`_

During the execution of `php occ app_api:app:register`, the **enabled_handler** will be called,
as we pass the flag ``-e``, meaning ``enable after registration``.
During the execution of `php occ app_api:app:register`, the **enabled_handler** will be called.

This is likely all you need to start debugging and developing an application for Nextcloud.

68 changes: 14 additions & 54 deletions docs/NextcloudTalkBotTransformers.rst
@@ -37,7 +37,7 @@ Requirements
We opt for the latest version of the Transformers library.
Because the example was developed on a Mac, we ended up using Torchvision.

`If you're working solely with Nvidia, you're free to use TensorFlow instead of PyTorch.`
`You're free to use TensorFlow instead of PyTorch.`

Next, we integrate the latest version of `nc_py_api` to minimize code redundancy and focus on the application's logic.

@@ -52,63 +52,20 @@ We specify the model name globally so that we can easily change the model name i

**When Should We Download the Language Model?**

Although the example uses the smallest model available, weighing in at 300 megabytes, it's common knowledge that larger language models can be substantially bigger.
Downloading such models should not begin when a processing request is already received.
To make the process of initializing applications more robust, separate logic was introduced with an ``/init`` endpoint.

So we have two options:

* Heartbeat
* enabled_handler

This can't be accomplished in the **app on/off handler** as Nextcloud expects an immediate response regarding the app's operability.

Thus, we place the model downloading within the Heartbeat:

.. code-block::

    # Thread that performs model download.
    def download_models():
        pipeline("text2text-generation", model=MODEL_NAME)  # this will download model


    def heartbeat_handler() -> str:
        global MODEL_INIT_THREAD
        print("heartbeat_handler: called")  # for debug
        # if it is the first heartbeat, then start background thread to download a model
        if MODEL_INIT_THREAD is None:
            MODEL_INIT_THREAD = Thread(target=download_models)
            MODEL_INIT_THREAD.start()
            print("heartbeat_handler: started initialization thread")  # for debug
        # if thread is finished then we will have "ok" in response, and AppAPI will consider that program is ready.
        r = "init" if MODEL_INIT_THREAD.is_alive() else "ok"
        print(f"heartbeat_handler: result={r}")  # for debug
        return r


    @APP.on_event("startup")
    def initialization():
        # Provide our custom **heartbeat_handler** to set_handlers
        set_handlers(APP, enabled_handler, heartbeat_handler)

.. note:: While this may not be the most ideal way to download models, it remains a viable method.
    In the future, a more efficient wrapper for model downloading is planned to make the process even more convenient.

Model Storage
"""""""""""""

By default, models will be downloaded to a directory that's removed when updating the app.
To persistently store the models even after updates, add the following line to your code:
This library also provides additional functionality on top of this endpoint for easily downloading models from `huggingface <https://huggingface.co>`_.

.. code-block::

    from nc_py_api.ex_app import persist_transformers_cache # noqa # isort:skip

    @asynccontextmanager
    async def lifespan(_app: FastAPI):
        set_handlers(APP, enabled_handler, models_to_fetch=[MODEL_NAME])
        yield
This will set ``TRANSFORMERS_CACHE`` environment variable to point to the application persistent storage.
Import of this **must be** at the top, before importing any code that itself imports the ``transformers`` library.
This will automatically download the models specified in the ``models_to_fetch`` parameter to the application persistent storage.

And that is all: ``transformers`` will automatically download all models you use to the **Application Persistent Storage**, and AppAPI will keep them between updates.
If you want to write your own logic, you can always pass your own defined ``init_handler`` callback to ``set_handlers``.
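
For illustration only, here is a minimal sketch of such a custom handler; the exact ``init_handler`` signature accepted by ``set_handlers`` is an assumption here, and only ``snapshot_download`` and ``persistent_storage`` are taken from the example above:

.. code-block::

    from huggingface_hub import snapshot_download

    from nc_py_api.ex_app import persistent_storage


    def init_handler():
        # Download the model into the application's persistent storage so it survives updates.
        # Reporting progress/completion back to AppAPI may require additional calls not shown here.
        snapshot_download(MODEL_NAME, cache_dir=persistent_storage())


    @asynccontextmanager
    async def lifespan(_app: FastAPI):
        set_handlers(APP, enabled_handler, init_handler=init_handler)
        yield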

Working with Language Models
""""""""""""""""""""""""""""
@@ -122,13 +79,16 @@ Finally, we arrive at the core aspect of the application, where we interact with
    r = re.search(r"@ai\s(.*)", message.object_content["message"], re.IGNORECASE)
    if r is None:
        return
    model = pipeline("text2text-generation", model=MODEL_NAME)
    model = pipeline(
        "text2text-generation",
        model=snapshot_download(MODEL_NAME, local_files_only=True, cache_dir=persistent_storage()),
    )
    # Pass all text after "@ai" to the Language model.
    response_text = model(r.group(1), max_length=64, do_sample=True)[0]["generated_text"]
    AI_BOT.send_message(response_text, message)
Simply put, the AI logic is just two lines of code when using Transformers, which is incredibly efficient and cool.
Simply put, AI logic is a few lines of code when using Transformers, which is incredibly efficient and cool.

Messages from the AI model are then sent back to Talk Chat as you would expect from a typical chatbot.
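
As a hedged sketch (condensed from the example application; the exact ``TalkBot`` constructor arguments shown here are assumptions), this is roughly where the ``AI_BOT`` object used above comes from and how the bot gets registered in Talk when the app is enabled:

.. code-block::

    from nc_py_api import NextcloudApp, talk_bot

    AI_BOT = talk_bot.TalkBot("/ai_talk_bot", "AI talk bot", "Usage: `@ai What sounds do cats make?`")


    def enabled_handler(enabled: bool, nc: NextcloudApp) -> str:
        # Register the bot in Talk when the app is enabled, remove it when disabled.
        AI_BOT.enabled_handler(enabled, nc)
        return ""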

20 changes: 16 additions & 4 deletions examples/as_app/skeleton/Makefile
Expand Up @@ -24,11 +24,16 @@ build-push:
	docker login ghcr.io
	docker buildx build --push --platform linux/arm64/v8,linux/amd64 --tag ghcr.io/cloud-py-api/skeleton:latest .

.PHONY: deploy
deploy:
.PHONY: deploy28
deploy28:
	docker exec master-nextcloud-1 sudo -u www-data php occ app_api:app:deploy skeleton docker_dev \
		--info-xml https://raw.githubusercontent.com/cloud-py-api/nc_py_api/main/examples/as_app/skeleton/appinfo/info.xml

.PHONY: deploy27
deploy27:
	docker exec master-stable27-1 sudo -u www-data php occ app_api:app:deploy skeleton docker_dev \
		--info-xml https://raw.githubusercontent.com/cloud-py-api/nc_py_api/main/examples/as_app/skeleton/appinfo/info.xml

.PHONY: run28
run28:
	docker exec master-nextcloud-1 sudo -u www-data php occ app_api:app:unregister skeleton --silent || true
@@ -41,9 +46,16 @@ run27:
	docker exec master-stable27-1 sudo -u www-data php occ app_api:app:register skeleton docker_dev --force-scopes \
		--info-xml https://raw.githubusercontent.com/cloud-py-api/nc_py_api/main/examples/as_app/skeleton/appinfo/info.xml

.PHONY: manual_register
manual_register:
.PHONY: register28
register28:
	docker exec master-nextcloud-1 sudo -u www-data php occ app_api:app:unregister skeleton --silent || true
	docker exec master-nextcloud-1 sudo -u www-data php occ app_api:app:register skeleton manual_install --json-info \
		"{\"appid\":\"skeleton\",\"name\":\"App Skeleton\",\"daemon_config_name\":\"manual_install\",\"version\":\"1.0.0\",\"secret\":\"12345\",\"host\":\"host.docker.internal\",\"port\":9030,\"scopes\":{\"required\":[],\"optional\":[]},\"protocol\":\"http\",\"system_app\":0}" \
		--force-scopes

.PHONY: register27
register27:
	docker exec master-stable27-1 sudo -u www-data php occ app_api:app:unregister skeleton --silent || true
	docker exec master-stable27-1 sudo -u www-data php occ app_api:app:register skeleton manual_install --json-info \
		"{\"appid\":\"skeleton\",\"name\":\"App Skeleton\",\"daemon_config_name\":\"manual_install\",\"version\":\"1.0.0\",\"secret\":\"12345\",\"host\":\"host.docker.internal\",\"port\":9030,\"scopes\":{\"required\":[],\"optional\":[]},\"protocol\":\"http\",\"system_app\":0}" \
		--force-scopes
2 changes: 1 addition & 1 deletion examples/as_app/skeleton/requirements.txt
@@ -1 +1 @@
nc_py_api[app]>=0.4.1
nc_py_api[app]>=0.5.0
21 changes: 10 additions & 11 deletions examples/as_app/skeleton/src/main.py
@@ -1,13 +1,19 @@
"""
Simplest example.
"""
"""Simplest example."""
from contextlib import asynccontextmanager

from fastapi import FastAPI

from nc_py_api import NextcloudApp
from nc_py_api.ex_app import LogLvl, run_app, set_handlers

APP = FastAPI()

@asynccontextmanager
async def lifespan(_app: FastAPI):
    set_handlers(APP, enabled_handler)
    yield


APP = FastAPI(lifespan=lifespan)


def enabled_handler(enabled: bool, nc: NextcloudApp) -> str:
@@ -22,13 +28,6 @@ def enabled_handler(enabled: bool, nc: NextcloudApp) -> str:
    return ""


# Of course, you can use `FastAPI lifespan: <https://fastapi.tiangolo.com/advanced/events/#lifespan>` instead of this.
# The only requirement for the application is to define `/enabled` and `/heartbeat` handlers.
@APP.on_event("startup")
def initialization():
    set_handlers(APP, enabled_handler)


if __name__ == "__main__":
    # Wrapper around `uvicorn.run`.
    # You are free to call it directly, with just using the `APP_HOST` and `APP_PORT` variables from the environment.
2 changes: 1 addition & 1 deletion examples/as_app/talk_bot/HOW_TO_INSTALL.md
@@ -8,7 +8,7 @@ How To Install

to deploy a docker image with Bot to docker.

4. php occ app_api:app:register talk_bot "daemon_deploy_name" -e --force-scopes \
4. php occ app_api:app:register talk_bot "daemon_deploy_name" --force-scopes \
--info-xml https://raw.githubusercontent.com/cloud-py-api/nc_py_api/main/examples/as_app/talk_bot/appinfo/info.xml

to call its **enable** handler and accept all required API scopes by default.
