InvariantPointAttention
Documentation for InvariantPointAttention.
InvariantPointAttention.BackboneUpdate
InvariantPointAttention.IPA
InvariantPointAttention.IPACache
InvariantPointAttention.IPAStructureModuleLayer
InvariantPointAttention.IPCrossA
InvariantPointAttention.IPCrossAStructureModuleLayer
InvariantPointAttention.IPA_settings
InvariantPointAttention.dotproducts
InvariantPointAttention.get_T
InvariantPointAttention.get_T_batch
InvariantPointAttention.get_rotation
InvariantPointAttention.get_translation
InvariantPointAttention.left_to_right_mask
InvariantPointAttention.right_to_left_mask
InvariantPointAttention.right_to_left_mask
InvariantPointAttention.softmax1
InvariantPointAttention.BackboneUpdate — Type
Projects the frame embedding to 6 dimensions, and uses this to transform the input frames.
InvariantPointAttention.IPA — Type
Strictly Self-IPA initialization.
InvariantPointAttention.IPACache — Method
IPACache(settings, batchsize)
Initialize an empty IPA cache.
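A minimal sketch of the documented constructor, using IPA_settings (documented below) to build the settings; the dims value and batch size are illustrative:

    using InvariantPointAttention

    settings = IPA_settings(256)   # dims = 256, everything else default
    cache = IPACache(settings, 1)  # empty cache for a batch of size 1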
InvariantPointAttention.IPAStructureModuleLayer — Type
Self-IPA partial structure module initialization, single layer, adapted from AF2.
InvariantPointAttention.IPCrossA — Type
IPCrossA(settings)
Invariant Point Cross Attention (IPCrossA). Information flows from L (Keys, Values) to R (Queries). Get settings with IPA_settings.
InvariantPointAttention.IPCrossAStructureModuleLayer — Type
Cross-IPA partial structure module initialization, single layer, adapted from AF2. Information flows from left to right.
InvariantPointAttention.IPA_settings — Method
IPA_settings(
    dims;
    c = 16,
    N_head = 12,
    # (other keyword arguments omitted)
    Typ = Float32,
    use_softmax1 = false,
    scaling_qk = :default,
)
Returns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to IPA and IPCrossAStructureModuleLayer.
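As a brief usage sketch (the dims value is illustrative), the settings tuple is built once and handed to a layer such as the documented IPCrossA:

    using InvariantPointAttention

    settings = IPA_settings(256)   # defaults for everything except dims
    ipca = IPCrossA(settings)      # cross-attention layer from the settings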
InvariantPointAttention.dotproducts — Method
RoPEdotproducts(iparope::IPARoPE, q, k; chain_diffs = nothing)
chain_diffs is either nothing or an array of 0s and 1s, where the entry at ij is 1 if the pair ij pertains to the same chain, and 0 otherwise.
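For illustration, a chain_diffs array of the kind described here can be derived from per-residue chain labels (the chain_ids vector below is a made-up example, not a package API):

    # Hypothetical chain labels for 5 residues across two chains.
    chain_ids = [1, 1, 1, 2, 2]

    # Entry (i, j) is 1 when residues i and j share a chain, else 0.
    chain_diffs = [Int(ci == cj) for ci in chain_ids, cj in chain_ids]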
InvariantPointAttention.get_T — Method
get_T(coords::Array{<:Real, 3})
Get the associated SE(3) frame for all residues in a protein backbone represented as a 3x3xL array of coordinates.
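A minimal sketch of the documented call, with random coordinates standing in for a real backbone (the shape follows the docstring; the contents are illustrative):

    using InvariantPointAttention

    L = 10                            # number of residues (illustrative)
    coords = randn(Float32, 3, 3, L)  # 3x3xL backbone coordinates
    T = get_T(coords)                 # associated SE(3) frame per residue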
InvariantPointAttention.get_T_batch — Method
Get the associated SE(3) frames for all residues in a batch of proteins.
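No signature is shown above; by analogy with get_T, a batch is presumably a 3x3xLxB coordinate array. The layout below is an assumption for illustration only:

    coords_batch = randn(Float32, 3, 3, 10, 4)  # assumed 3x3xLxB layout
    Ts = get_T_batch(coords_batch)              # frames for each protein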
InvariantPointAttention.get_rotation — Method
get_rotation([T=Float32,] dims...)
Generates random rotation matrices of the given size.
InvariantPointAttention.get_translation — Method
get_translation([T=Float32,] dims...)
Generates random translations of the given size.
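For example (how dims... maps onto the output shape is an assumption based on the signatures above):

    using InvariantPointAttention

    rot = get_rotation(Float32, 10)       # 10 random rotation matrices
    trans = get_translation(Float32, 10)  # 10 matching random translations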
InvariantPointAttention.left_to_right_mask — Method
left_to_right_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)
InvariantPointAttention.right_to_left_mask — Method
right_to_left_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)
InvariantPointAttention.right_to_left_mask — Method
right_to_left_mask([T=Float32,] N::Integer)
Create a right-to-left mask for the self-attention mechanism. The mask is an N x N matrix whose diagonal and lower-triangular part are set to zero and whose upper-triangular part is set to infinity.
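The described matrix can be written out directly; this sketch reproduces the documented structure by hand rather than calling the package:

    N = 4
    # Upper triangle (j > i) gets Inf; diagonal and lower triangle get 0.
    mask = Float32[j > i ? Inf : 0 for i in 1:N, j in 1:N]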
InvariantPointAttention.softmax1 — Method
softmax1(x, dims = 1)
Behaves like softmax, but as though there were an additional logit of zero along dims (which is excluded from the output), so the values sum to something between zero and one.
See https://www.evanmiller.org/attention-is-off-by-one.html
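Equivalently, softmax1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)), with the implicit zero logit contributing exp(0) = 1 to the denominator only. A quick by-hand check of this description (not a call into the package):

    x = [1.0, 2.0, 3.0]
    s = exp.(x) ./ (1 + sum(exp.(x)))  # softmax1 along the only dimension
    sum(s)                             # ≈ 0.97, less than 1 as documented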