
Commit

Update README.md
philipperemy authored Mar 19, 2023
1 parent 4dd76ac commit f6db9c8
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -5,7 +5,7 @@
[![license](https://img.shields.io/badge/License-Apache_2.0-brightgreen.svg)](https://github.com/philipperemy/keras-attention-mechanism/blob/master/LICENSE) [![dep1](https://img.shields.io/badge/Tensorflow-2.0+-brightgreen.svg)](https://www.tensorflow.org/)
![Simple Keras Attention CI](https://github.com/philipperemy/keras-attention-mechanism/workflows/Simple%20Keras%20Attention%20CI/badge.svg)

-Many-to-one attention mechanism for Keras.
+Attention Layer for Keras. Supports the score functions of Luong and Bahdanau.

<p align="center">
<img src="examples/equations.png" width="600">
@@ -34,7 +34,7 @@ Attention(

### Arguments

-- `units`: Integer. The number of units in the attention vector ($a_t$).
+- `units`: Integer. The number of (output) units in the attention vector ($a_t$).
- `score`: String. The score function $score(h_t, \bar{h_s})$. Possible values are `luong` or `bahdanau`.
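
As context for the arguments above (not part of this diff), here is a minimal usage sketch. It assumes the layer is importable as `from attention import Attention` and that, as a many-to-one mechanism, it collapses the time dimension into a single vector of size `units`:

```python
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model

from attention import Attention  # assumed import path for the layer documented here

seq_len, num_features = 10, 1
model_input = Input(shape=(seq_len, num_features))
x = LSTM(64, return_sequences=True)(model_input)  # (batch, seq_len, 64)
x = Attention(units=32, score='luong')(x)         # many-to-one: (batch, 32)
output = Dense(1)(x)

model = Model(model_input, output)
model.compile(loss='mae', optimizer='adam')
model.summary()
```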


