Support inbuilt functionality for controlling home-assistant #6

Merged · 4 commits · Feb 20, 2024
2 changes: 1 addition & 1 deletion .github/settings.yml
@@ -10,7 +10,7 @@ repository:
  description: This custom component for Home Assistant allows you to generate text responses using GigaChain LLM framework (like GigaChat or YandexGPT and ChatGPT).

  # A URL with more information about the repository
  homepage: https://github.com/gritaro/gigachat
  homepage: https://github.com/gritaro/gigachain

  # A comma-separated list of topics to set on the repository
  topics: openai, gpt, homeassistant, voice-assistant, hacs-integration, chatgpt, yandexgpt, gigachat, gigachain, langchain
16 changes: 16 additions & 0 deletions .github/workflows/hacs.yaml
@@ -0,0 +1,16 @@
name: HACS Action

on:
  push:
    branches:
      - main

jobs:
  hacs:
    name: HACS Action
    runs-on: "ubuntu-latest"
    steps:
      - name: HACS Action
        uses: "hacs/action@main"
        with:
          category: "integration"
13 changes: 13 additions & 0 deletions .github/workflows/hassfest.yaml
@@ -0,0 +1,13 @@
name: Validate with hassfest

on:
  push:
    branches:
      - main

jobs:
  validate:
    runs-on: "ubuntu-latest"
    steps:
      - uses: "actions/checkout@v3"
      - uses: "home-assistant/actions/hassfest@master"
15 changes: 15 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,15 @@
repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.280
    hooks:
      - id: ruff

  - repo: https://github.com/psf/black
    rev: 23.7.0
    hooks:
      - id: black

  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0
    hooks:
      - id: isort
12 changes: 11 additions & 1 deletion README-ru.md
@@ -87,10 +87,20 @@
Температура выборки. Значение температуры должно быть не меньше ноля. Чем выше значение, тем более случайным будет ответ модели. При значениях температуры больше двух, набор токенов в ответе модели может отличаться избыточной случайностью.
Значение по умолчанию зависит от выбранной модели.

* Максимум токенов (max_tokens, int)
* Максимум токенов (max_tokens, `int`)

Максимальное количество токенов, которые будут использованы для создания ответов.

* _Использовать встроенный HA командный процессор_ (process_builtin_sentences, `bool`)

Если включено, все фразы сначала будут отдаваться [встроенному в HA процессору шаблонных фраз](https://www.home-assistant.io/voice_control/builtin_sentences).
Это основное поведение встроенной в Home Assistant диалоговой системы, что позволяет использовать команды вида `включи телевизор в зале`.
Если фраза не распознана встроенным процессором, она будет передана выбранной языковой модели.

* История сообщений (chat_history, `bool`)

Если у вашей модели дорогой тариф, либо ваш сценарий использования это позволяет, вы можете отключить историю. В противном случае вся история диалога передаётся в каждом запросе.

## Использование в качестве диалоговой системы
Создайте и настройте новый голосовой ассистент:

13 changes: 12 additions & 1 deletion README.md
@@ -82,10 +82,21 @@ Language model is used for text generation
A value that determines the level of creativity and risk-taking the model should use when generating text.
A higher temperature means the model is more likely to generate unexpected results, while a lower temperature results in more deterministic results.

* Max Tokens (max_tokens, int)
* Max Tokens (max_tokens, `int`)

The maximum number of tokens the model may generate when completing the prompt.

* _Process HA Builtin Sentences_ (process_builtin_sentences, `bool`)

If enabled, the integration will first pass every sentence to the [HA built-in sentence processor](https://www.home-assistant.io/voice_control/builtin_sentences).
This is the default behaviour of the Home Assistant Voice Assistant engine, and it allows commands such as `turn on the living room light`.
If a sentence is not recognized by HA, it is passed on to the chosen LLM.

* Chat History (chat_history, `bool`)

If enabled, the full conversation history is sent with every request. If your model's pricing is expensive, or your use case allows it, you can disable the history.

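The `chat_history` option corresponds roughly to keeping a per-conversation message list, as in this simplified sketch (assumed names, not the integration's real classes): with history on, the whole dialogue is replayed on each request; with history off, each request starts from the system prompt alone.

```python
# Hypothetical simplification of per-conversation history handling.
from dataclasses import dataclass


@dataclass
class Message:
    role: str  # "system", "human" or "ai"
    content: str


history: dict[str, list[Message]] = {}


def build_messages(conversation_id: str, text: str,
                   chat_history: bool, prompt: str) -> list[Message]:
    """Return the message list sent to the model for this request."""
    if chat_history and conversation_id in history:
        messages = history[conversation_id]  # replay the whole dialogue
    else:
        messages = [Message("system", prompt)]  # start fresh each time
    messages.append(Message("human", text))
    history[conversation_id] = messages
    return messages
```

Disabling the history keeps each request at two messages (system prompt plus the new sentence), which is what makes it cheaper on per-token pricing.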

## Using as Voice Assistant
Create and configure Voice Assistant:

39 changes: 31 additions & 8 deletions custom_components/gigachain/__init__.py
@@ -3,20 +3,22 @@
import logging
from typing import Literal

from home_assistant_intents import get_languages
from homeassistant.components import conversation
from homeassistant.components.conversation import agent
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import MATCH_ALL
from homeassistant.core import HomeAssistant
from homeassistant.helpers import intent, template
from homeassistant.util import ulid
from langchain.schema import BaseMessage, HumanMessage, SystemMessage
from langchain.schema import BaseMessage, HumanMessage, SystemMessage, AIMessage

from .client_util import get_client
from .const import (CONF_API_KEY, CONF_CHAT_MODEL, CONF_CHAT_MODEL_USER,
                    CONF_ENGINE, CONF_FOLDER_ID, CONF_MAX_TOKENS,
                    CONF_PROFANITY, CONF_PROMPT, CONF_TEMPERATURE,
                    DEFAULT_CHAT_MODEL, DEFAULT_PROFANITY, DEFAULT_PROMPT,
                    CONF_PROCESS_BUILTIN_SENTENCES, DEFAULT_PROCESS_BUILTIN_SENTENCES,
                    CONF_CHAT_HISTORY, DEFAULT_CHAT_HISTORY,
                    DEFAULT_TEMPERATURE, DOMAIN, ID_GIGACHAT)

LOGGER = logging.getLogger(__name__)
@@ -50,7 +52,11 @@ async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
    _client = await get_client(engine, common_args, entry)

    hass.data.setdefault(DOMAIN, {})[entry.entry_id] = _client
    conversation.async_set_agent(hass, entry, GigaChatAI(hass, entry))
    _agent = GigaChatAI(hass, entry)
    await _agent.async_initialize(
        hass.data.get("conversation_config")
    )
    conversation.async_set_agent(hass, entry, _agent)
    return True


@@ -61,31 +67,48 @@ async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
    return True


class GigaChatAI(conversation.AbstractConversationAgent):
class GigaChatAI(conversation.DefaultAgent):
    def __init__(self, hass: HomeAssistant, entry: ConfigEntry) -> None:
        """Initialize the agent."""
        super().__init__(hass)
        self.hass = hass
        self.entry = entry
        self.history: dict[str, list[BaseMessage]] = {}

    @property
    def supported_languages(self) -> list[str] | Literal["*"]:
        """Return a list of supported languages."""
        return MATCH_ALL
        return get_languages()

    async def async_process(
        self, user_input: agent.ConversationInput
    ) -> agent.ConversationResult:
        """Process a sentence."""
        raw_prompt = self.entry.options.get(CONF_PROMPT, DEFAULT_PROMPT)
        if user_input.conversation_id in self.history:
        chat_history_enabled = self.entry.options.get(CONF_CHAT_HISTORY, DEFAULT_CHAT_HISTORY)

        if user_input.conversation_id in self.history and chat_history_enabled:
            conversation_id = user_input.conversation_id
            messages = self.history[conversation_id]
        else:
            conversation_id = ulid.ulid()
            prompt = self._async_generate_prompt(raw_prompt)
            messages = [SystemMessage(content=prompt)]

        messages.append(HumanMessage(content=user_input.text))

        use_builtin_sentences = self.entry.options.get(CONF_PROCESS_BUILTIN_SENTENCES,
                                                       DEFAULT_PROCESS_BUILTIN_SENTENCES)
        if use_builtin_sentences:
            default_agent_response = await super(GigaChatAI, self).async_process(user_input)

            if default_agent_response.response.intent:
                messages.append(AIMessage(content=default_agent_response.response.speech.get("plain").get("speech")))
                self.history[conversation_id] = messages
                return agent.ConversationResult(
                    conversation_id=conversation_id, response=default_agent_response.response
                )

        _client = self.hass.data[DOMAIN][self.entry.entry_id]

        try:
@@ -103,11 +126,11 @@ async def async_process(

        messages.append(res)
        self.history[conversation_id] = messages
        LOGGER.debug(messages)
        LOGGER.info(messages)

        response = intent.IntentResponse(language=user_input.language)
        response.async_set_speech(res.content)
        LOGGER.debug(response)
        LOGGER.info(response)
        return agent.ConversationResult(
            conversation_id=conversation_id, response=response
        )
17 changes: 16 additions & 1 deletion custom_components/gigachain/config_flow.py
@@ -26,6 +26,8 @@
                    DEFAULT_MODELS, DEFAULT_PROFANITY, DEFAULT_PROMPT,
                    DEFAULT_SKIP_VALIDATION, DEFAULT_TEMPERATURE, DOMAIN,
                    ID_GIGACHAT, ID_OPENAI, ID_YANDEX_GPT, UNIQUE_ID,
                    CONF_PROCESS_BUILTIN_SENTENCES, DEFAULT_PROCESS_BUILTIN_SENTENCES,
                    CONF_CHAT_HISTORY, DEFAULT_CHAT_HISTORY,
                    UNIQUE_ID_GIGACHAT)

LOGGER = logging.getLogger(__name__)
@@ -66,6 +68,7 @@
        CONF_PROMPT: DEFAULT_PROMPT,
        CONF_CHAT_MODEL: DEFAULT_CHAT_MODEL,
        CONF_CHAT_MODEL_USER: DEFAULT_CHAT_MODEL,
        CONF_PROCESS_BUILTIN_SENTENCES: DEFAULT_PROCESS_BUILTIN_SENTENCES,
    }
)

@@ -211,7 +214,19 @@ def common_config_option_schema(
            description={
                "suggested_value": options.get(CONF_MAX_TOKENS)
            },
        ): int
        ): int,
        vol.Optional(
            CONF_PROCESS_BUILTIN_SENTENCES,
            description={
                "suggested_value": options.get(CONF_PROCESS_BUILTIN_SENTENCES, DEFAULT_PROCESS_BUILTIN_SENTENCES)
            },
            default=DEFAULT_PROCESS_BUILTIN_SENTENCES): bool,
        vol.Optional(
            CONF_CHAT_HISTORY,
            description={
                "suggested_value": options.get(CONF_CHAT_HISTORY, DEFAULT_CHAT_HISTORY)
            },
            default=DEFAULT_CHAT_HISTORY): bool
    })
if unique_id == UNIQUE_ID_GIGACHAT:
schema = schema.extend(
4 changes: 4 additions & 0 deletions custom_components/gigachain/const.py
@@ -14,6 +14,10 @@
CONF_MAX_TOKENS = "max_tokens"
CONF_SKIP_VALIDATION = "skip_validation"
DEFAULT_SKIP_VALIDATION = False
CONF_PROCESS_BUILTIN_SENTENCES = "process_builtin_sentences"
DEFAULT_PROCESS_BUILTIN_SENTENCES = True
CONF_CHAT_HISTORY = "chat_history"
DEFAULT_CHAT_HISTORY = True
CONF_PROMPT = "prompt"
DEFAULT_PROMPT = """Ты HAL 9000, компьютер из цикла произведений «Космическая одиссея» Артура Кларка, обладающий способностью к самообучению.
Мы находимся в умном доме под управлением системы Home Assistant.
3 changes: 2 additions & 1 deletion custom_components/gigachain/manifest.json
@@ -10,8 +10,9 @@
  "iot_class": "cloud_polling",
  "issue_tracker": "https://github.com/gritaro/gigachain/issues",
  "requirements": [
    "home-assistant-intents",
    "gigachain==0.1.4",
    "yandexcloud==0.259.0"
  ],
  "version": "0.1.4"
  "version": "0.1.5"
}
4 changes: 3 additions & 1 deletion custom_components/gigachain/strings.json
@@ -54,7 +54,9 @@
        "model_openai": "Custom Model Name (leave empty to use from list above)",
        "temperature": "Temperature",
        "max_tokens": "Max Tokens",
        "profanity": "Profanity"
        "profanity": "Profanity",
        "process_builtin_sentences": "Process HA Builtin Sentences",
        "chat_history": "Chat History"
      }
    }
  }
4 changes: 3 additions & 1 deletion custom_components/gigachain/translations/en.json
@@ -54,7 +54,9 @@
        "model_openai": "Custom Model Name (leave empty to use from list above)",
        "temperature": "Temperature",
        "max_tokens": "Max Tokens",
        "profanity": "Profanity"
        "profanity": "Profanity",
        "process_builtin_sentences": "Process HA Builtin Sentences",
        "chat_history": "Chat History"
      }
    }
  }
4 changes: 3 additions & 1 deletion custom_components/gigachain/translations/ru.json
@@ -54,7 +54,9 @@
        "model_openai": "Своё имя модели (оставьте пустым для использования имени из списка)",
        "temperature": "Температура",
        "max_tokens": "Максимум токенов",
        "profanity": "Цензура"
        "profanity": "Цензура",
        "process_builtin_sentences": "Использовать встроенный HA командный процессор",
        "chat_history": "История сообщений"
      }
    }
  }