add ruff and pre-commit hooks
fgebhart committed Dec 4, 2024
1 parent 6193b74 commit 832dbbe
Showing 10 changed files with 357 additions and 295 deletions.
4 changes: 3 additions & 1 deletion .gitignore
@@ -139,4 +139,6 @@ cython_debug/

# IDEs
.vscode/
.idea/
.idea/

.ruff_cache/
13 changes: 13 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,13 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.1
hooks:
- id: ruff
- id: ruff-format
- repo: https://github.com/pre-commit/mirrors-mypy
rev: 'v1.13.0'
hooks:
- id: mypy
pass_filenames: false
args: ['aleph_alpha_client', 'tests']
language: system
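With this config in place, the hooks can be installed and exercised locally. This is standard `pre-commit` CLI usage and assumes the tool is installable via pip in the active environment.

```shell
# Install the pre-commit tool and register the git hook for this repo
pip install pre-commit
pre-commit install

# Run ruff, ruff-format and mypy against every file once
pre-commit run --all-files
```

After `pre-commit install`, the same checks run automatically on each `git commit`.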
8 changes: 4 additions & 4 deletions aleph_alpha_client/completion.py
@@ -40,12 +40,12 @@ class CompletionRequest:
presence_penalty (float, optional, default 0.0)
The presence penalty reduces the likelihood of generating tokens that are already present in the
generated text (`repetition_penalties_include_completion=true`) respectively the prompt (`repetition_penalties_include_prompt=true`).
Presence penalty is independent of the number of occurences. Increase the value to produce text that is not repeating the input.
Presence penalty is independent of the number of occurrences. Increase the value to produce text that is not repeating the input.
frequency_penalty (float, optional, default 0.0)
The frequency penalty reduces the likelihood of generating tokens that are already present in the
generated text (`repetition_penalties_include_completion=true`) respectively the prompt (`repetition_penalties_include_prompt=true`).
Frequency penalty is dependent on the number of occurences of a token.
Frequency penalty is dependent on the number of occurrences of a token.
repetition_penalties_include_prompt (bool, optional, default False)
Flag deciding whether presence penalty or frequency penalty are updated from the prompt
@@ -107,7 +107,7 @@ class CompletionRequest:
stop_sequences (List(str), optional, default None)
List of strings which will stop generation if they're generated. Stop sequences may be helpful in structured texts.
Example: In a question answering scenario a text may consist of lines starting with either "Question: " or "Answer: " (alternating). After producing an answer, the model will be likely to generate "Question: ". "Question: " may therfore be used as stop sequence in order not to have the model generate more questions but rather restrict text generation to the answers.
Example: In a question answering scenario a text may consist of lines starting with either "Question: " or "Answer: " (alternating). After producing an answer, the model will be likely to generate "Question: ". "Question: " may therefore be used as stop sequence in order not to have the model generate more questions but rather restrict text generation to the answers.
tokens (bool, optional, default False)
return tokens of completion
@@ -131,7 +131,7 @@ class CompletionRequest:
(if repetition_penalties_include_prompt is True) and prior completion (if repetition_penalties_include_completion is True).
sequence_penalty_min_length (int, default 2)
Minimal number of tokens to be considered as sequence. Must be greater or eqaul 2.
Minimal number of tokens to be considered as sequence. Must be greater or equal 2.
use_multiplicative_sequence_penalty (bool, default False)
Flag deciding whether sequence penalty is applied multiplicatively (True) or additively (False).
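The distinction drawn in the docstring above — presence penalty independent of occurrence count, frequency penalty proportional to it — can be pictured with a toy additive adjustment. This is an illustrative sketch only: `apply_penalties`, the flat logit dictionary, and the additive form are assumptions, not the client's or the API's actual implementation.

```python
from collections import Counter


def apply_penalties(logits, generated_tokens, presence_penalty=0.0, frequency_penalty=0.0):
    """Toy penalty sketch: presence subtracts a flat amount once a token has
    appeared at all; frequency subtracts proportionally to its count."""
    counts = Counter(generated_tokens)
    adjusted = dict(logits)
    for tok, n in counts.items():
        if tok in adjusted:
            adjusted[tok] -= presence_penalty        # independent of n
            adjusted[tok] -= frequency_penalty * n   # scales with n
    return adjusted
```

With both penalties at 0.1 and a history of `["a", "a", "a", "b"]`, token `"a"` is penalized harder than `"b"` only through the frequency term; the presence term hits both equally.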
12 changes: 6 additions & 6 deletions aleph_alpha_client/prompt.py
@@ -54,7 +54,7 @@ class TokenControl:
factor (float, required):
The amount to adjust model attention by.
Values between 0 and 1 will supress attention.
Values between 0 and 1 will suppress attention.
A value of 1 will have no effect.
Values above 1 will increase attention.
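The factor semantics described above (values between 0 and 1 suppress, 1 is a no-op, values above 1 boost) can be pictured with a minimal sketch. `scale_attention` and the plain weight list are hypothetical — this is not how the client or the model applies controls internally.

```python
def scale_attention(weights, start, length, factor):
    # Multiply the attention weights of the controlled span by `factor`:
    # factor < 1 suppresses, factor == 1 leaves weights unchanged,
    # factor > 1 increases attention on the span.
    return [
        w * factor if start <= i < start + length else w
        for i, w in enumerate(weights)
    ]
```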
@@ -121,7 +121,7 @@ class TextControl:
The amount of characters to apply the factor to.
factor (float, required):
The amount to adjust model attention by.
Values between 0 and 1 will supress attention.
Values between 0 and 1 will suppress attention.
A value of 1 will have no effect.
Values above 1 will increase attention.
token_overlap (ControlTokenOverlap, optional):
@@ -163,7 +163,7 @@ class Text:
text (str, required):
The text prompt
controls (list of TextControl, required):
A list of TextControls to manilpulate attention when processing the prompt.
A list of TextControls to manipulate attention when processing the prompt.
Can be empty if no manipulation is required.
Examples:
@@ -227,7 +227,7 @@ class ImageControl:
Must be a value between 0 and 1, where 1 means the full height of the image.
factor (float, required):
The amount to adjust model attention by.
Values between 0 and 1 will supress attention.
Values between 0 and 1 will suppress attention.
A value of 1 will have no effect.
Values above 1 will increase attention.
token_overlap (ControlTokenOverlap, optional):
@@ -285,7 +285,7 @@ class Image:
>>> image = Image.from_url(url)
"""

# We use a base_64 reperesentation, because we want to embed the image
# We use a base_64 representation, because we want to embed the image
# into a prompt send in JSON.
base_64: str
cropping: Optional[Cropping]
@@ -310,7 +310,7 @@ def from_image_source(
p = urlparse(image_source)
if p.scheme:
return cls.from_url(url=image_source, controls=controls)
except Exception as e:
except Exception:
# we assume that if the string raises an exception it is not a valid url
pass

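The scheme check in `from_image_source` above can be distilled into a small standalone sketch. `looks_like_url` is a hypothetical helper name; the real method goes on to load the image from a URL, a file path, or raw bytes depending on the outcome.

```python
from urllib.parse import urlparse


def looks_like_url(image_source: str) -> bool:
    # A non-empty scheme (e.g. "https") means the string parses as a URL;
    # a string with no scheme, or one that fails to parse, is treated as
    # a local file path instead.
    try:
        return bool(urlparse(image_source).scheme)
    except ValueError:
        return False
```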