From 2fa02b832acb54a329dad99101eba40fb7d6d49a Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Tue, 11 Jun 2024 11:16:33 +0000 Subject: [PATCH] build based on 9ddd90f --- dev/.documenter-siteinfo.json | 2 +- dev/index.html | 4 ++-- dev/objects.inv | Bin 827 -> 591 bytes dev/search_index.js | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 41a2f8b..2fb9d67 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-11T10:26:18","documenter_version":"1.4.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-11T11:16:30","documenter_version":"1.4.1"}} \ No newline at end of file diff --git a/dev/index.html b/dev/index.html index 9cb4440..ffa7835 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,5 +1,5 @@ -Home · InvariantPointAttention.jl

InvariantPointAttention

Documentation for InvariantPointAttention.

Core.UnionMethod

Cross IPA Partial Structure Module - single layer - adapted from AF2. From left to right.

source
InvariantPointAttention.IPA_settingsMethod
IPA_settings(
+Home · InvariantPointAttention.jl

InvariantPointAttention

Documentation for InvariantPointAttention.

InvariantPointAttention.IPA_settingsMethod
IPA_settings(
     dims;
     c = 16,
     N_head = 12,
@@ -9,4 +9,4 @@
     Typ = Float32,
     use_softmax1 = false,
     scaling_qk = :default,
-)

Returns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to the IPA and IPCrossAStructureModuleLayer.

source
InvariantPointAttention.T_R3Method

Applies the SE(3) transformations T = (rot, trans) ∈ SE(3)^N to N batches of m points in R3, i.e., mat ∈ R^(3 x m x N) ↦ T(mat) ∈ R^(3 x m x N). Note that the rotations here are represented in matrix form.

source
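For concreteness, a minimal self-contained Julia sketch of the action just described; the 3 x 3 x N rotation, 3 x 1 x N translation and 3 x m x N point layouts are assumptions taken from this docstring, and this is not the package's T_R3 implementation.

# Illustrative sketch only: apply T = (rot, trans) to batched points.
function apply_T_sketch(mat::AbstractArray, rot::AbstractArray, trans::AbstractArray)
    out = similar(mat)
    for n in axes(mat, 3)
        out[:, :, n] = rot[:, :, n] * mat[:, :, n] .+ trans[:, :, n]
    end
    return out
end

rot, trans, mat = rand(3, 3, 5), rand(3, 1, 5), rand(3, 10, 5)  # random arrays only illustrate shapes, not valid rotations
apply_T_sketch(mat, rot, trans)  # 3 x 10 x 5 array of transformed points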
InvariantPointAttention.T_R3_invMethod

Applies the group inverse of the SE3 transformations T = (R,t) ∈ SE(3)^N to N batches of m points in R3, such that T^-1(T*x) = T^-1(Rx+t) = R^T(Rx+t-t) = x.

source
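A matching sketch of the inverse action, under the same assumed array layout (again illustrative, not the package source):

# Illustrative sketch only: apply T^-1 to batched points, i.e. R'(x - t).
function apply_T_inv_sketch(mat::AbstractArray, rot::AbstractArray, trans::AbstractArray)
    out = similar(mat)
    for n in axes(mat, 3)
        out[:, :, n] = rot[:, :, n]' * (mat[:, :, n] .- trans[:, :, n])
    end
    return out
end
# When rot holds valid rotation matrices, apply_T_inv_sketch(apply_T_sketch(mat, rot, trans), rot, trans) ≈ mat.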
InvariantPointAttention.T_TMethod

Returns the composition of two SE(3) transformations T1 and T2. If T1 = (R1, t1) and T2 = (R2, t2), then T1T2 = (R1R2, R1*t2 + t1).

source
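The same composition rule written out as a hedged Julia sketch, assuming the (3 x 3 x N, 3 x 1 x N) tuple layout used in the sketches above:

# Illustrative sketch: compose T1 and T2 batch-wise as (R1*R2, R1*t2 + t1).
function compose_T_sketch((R1, t1), (R2, t2))
    R = similar(R1)
    t = similar(t1)
    for n in axes(R1, 3)
        R[:, :, n] = R1[:, :, n] * R2[:, :, n]
        t[:, :, n] = R1[:, :, n] * t2[:, :, n] .+ t1[:, :, n]
    end
    return (R, t)
end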
InvariantPointAttention.right_to_left_maskMethod
right_to_left_mask([T=Float32,] N::Integer)

Create a right-to-left mask for the self-attention mechanism. The mask is a matrix of size N x N where the diagonal and the lower triangular part are set to zero and the upper triangular part is set to infinity.

source
InvariantPointAttention.softmax1Method

softmax1(x, dims = 1)

Behaves like softmax, but as though there were an additional logit of zero along dims (which is excluded from the output), so the values sum to a value between zero and one.

source
+)

Returns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to the IPA and IPCrossAStructureModuleLayer.

source
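A hedged usage sketch of the call described above; the concrete dims value and the constructor calls on the settings tuple are illustrative assumptions based on these docstrings, not verified output.

using InvariantPointAttention

settings = IPA_settings(128; N_head = 8, use_softmax1 = true)  # only dims (here 128) is required
ipa = IPA(settings)          # self-attention variant (constructor form assumed to mirror IPCrossA(settings))
cross = IPCrossA(settings)   # cross-attention variant, with information flowing from L to R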
InvariantPointAttention.get_TMethod
get_T(coords::Array{<:Real, 3})

Get the associated SE(3) frame for all residues in a protein backbone represented as a 3x3xL array of coordinates.

source
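A small usage sketch of get_T as documented above; the ordering of the three backbone atoms inside each residue block is an assumption, and the random coordinates only illustrate the expected shape.

using InvariantPointAttention

coords = rand(Float32, 3, 3, 50)  # xyz coordinates x backbone atoms (order assumed, e.g. N, CA, C) x residues
frames = get_T(coords)            # per-residue SE(3) frames (rotations and translations)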
InvariantPointAttention.right_to_left_maskMethod
right_to_left_mask([T=Float32,] N::Integer)

Create a right-to-left mask for the self-attention mechanism. The mask is a matrix of size N x N where the diagonal and the lower triangular part are set to zero and the upper triangular part is set to infinity.

source
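A standalone sketch that builds the mask exactly as this docstring describes it, with zeros on and below the diagonal and infinity above; whether the package applies the mask with a positive or negative sign internally is not shown here.

# Illustrative sketch following the docstring, not the package source.
right_to_left_mask_sketch(::Type{T}, N::Integer) where {T<:AbstractFloat} =
    [j > i ? T(Inf) : zero(T) for i in 1:N, j in 1:N]

right_to_left_mask_sketch(Float32, 4)
# 4x4 Matrix{Float32}:
#  0.0  Inf   Inf   Inf
#  0.0  0.0   Inf   Inf
#  0.0  0.0   0.0   Inf
#  0.0  0.0   0.0   0.0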
InvariantPointAttention.softmax1Method
softmax1(x, dims = 1)

Behaves like softmax, but as though there were an additional logit of zero along dims (which is excluded from the output), so the values sum to a value between zero and one.

See https://www.evanmiller.org/attention-is-off-by-one.html

source
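For concreteness, a minimal re-implementation sketch of the behaviour described above (not the package's own code), together with the property it guarantees:

# Sketch: softmax with one extra implicit zero logit along dims.
function softmax1_sketch(x::AbstractArray; dims = 1)
    m = max.(maximum(x; dims = dims), 0)       # fold the implicit zero logit into the usual max-shift
    e = exp.(x .- m)
    return e ./ (exp.(-m) .+ sum(e; dims = dims))
end

x = randn(Float32, 5, 3)
sum(softmax1_sketch(x; dims = 1); dims = 1)    # each column sums to a value strictly between 0 and 1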
diff --git a/dev/objects.inv b/dev/objects.inv index 2c524ffcd835a2586c45911346e8f6a123fa13c2..e9481936be7357c10b4d2729e357bdfc53ed864b 100644 GIT binary patch delta 460 zcmV;-0Wd3%y8Dcs%N)0c;u%y1g05vI$R|QUnG|jyhY@l+ zbbE(&myO&VUJsPi+P@_=iD;tmQpOXG7cfP|9(kXpV`OVEw10e{9A5Xkcf#Dxg2|ZU z%pYk|5RtDGOlQG--eJWHbID#(RLBBLkRzoswiNB)y`vL{W6li<``kM^wa3B0JJ_oR z)erX^seQ9D4!^DI6NsKhQsCV<2Ew+p3_+-oC=-e9mPv`}yj>EXC)*T+@qW6R@+a*k zNTG>YXb$$i6n`C_I43Qwe2Hr7GPvgC^#)$gx9D}D1xIaowPy<- zgd?CRtx|UDuu$A#hnwhZ09gU$dRq|b6dtaXWvUoJlwzsr3UZ9yWrqy~VS=jtN>~6g zTHc`wEhXG)IUFBM3o=w$l2Vw1ermQ`ePCx?0?liUn?;IUM)R@m23_5I<)pnW*V&h~ z_cp41ZQ;GOg%_5McbCDRCA^I0FB4XhetSb`LjK-vQ2pfEZh7Bq?O)4shTi~a2-?__ Cd*CSm delta 698 zcmV;r0!97L1iJ>1jepHoU2obj6n)>Xu(a_8RSQU!s=ZLAX-I2Tb!lHMlUyLHV>A1v z10nwV9ET)?e4}=ogcK6n=iGblhizYYupt_~8&A4qa7D;B8Ni2`cxB3G0w1I=aDoT| zeJSij&gQ_QsUJU)LDkHQ*~3&xttUTFg)W8)?xYuT_|CSV$bV0MgWENb1`I717_iON z;7OR*ofED(>?U)Kin%yZifwn!e%~kYfcZ;)uRsBDB7y*LCB2BE9enTl*5a6Fjw1Tr zyZ#dWhST2gGMSWqytJgzNUa=`wqD;d_hl{xJgq&3&~}xQ#urrnhAxy0=&z9BATQXR z9o~Juc?8CdR)4KDs)Rj~DF!wldZTG?G>PD2ZEDPQ8Fxc&{G$rA?}g?mkwrt436C0F zp}Bo8^`fQqU0F(;s&m2wm1;or#Zo6`ifv2Vii;z5v8|_cDEgddWPQ?}@p$cl%Vu;+ zI+Y48Y%vV(V}@}Veqovc8=pW`+weHbm9vx9?`dHh>VL*BT}wWrKPq>pP}`*qAa=0Z zNS7AjQ0LsZIyFJ9TD1f$DKp2knxsYa$um<-Ep1Y_w51d^Qp4aDI)pVe=16nDIen-v zUjK0lWhVT8X=M7!l({Q71oWN-;Mu=@G7|_(Q0*Jk5s)z@gNjW(z%{3tL+1?LRg&FS z_&CS0xwE)-$EomcuDLcW*h}zS22nAlshe0Jrn= g)QFPHO?qmtzTK2=X8CpTl;Q@JrHzs54|TH 6, and uses this to transform the input frames.\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPA","page":"Home","title":"InvariantPointAttention.IPA","text":"Strictly Self-IPA initialization\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPACache-Tuple{NamedTuple, Integer}","page":"Home","title":"InvariantPointAttention.IPACache","text":"IPACache(settings, batchsize)\n\nInitialize an empty IPA cache.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.IPCrossA","page":"Home","title":"InvariantPointAttention.IPCrossA","text":"IPCrossA(settings)\n\nInvariant Point Cross Attention (IPCrossA). Information flows from L (Keys, Values) to R (Queries).\n\nGet settings with IPA_settings\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPCrossAStructureModuleLayer","page":"Home","title":"InvariantPointAttention.IPCrossAStructureModuleLayer","text":"Cross IPA Partial Structure Module initialization - single layer - adapted from AF2. From left to right. \n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPA_settings-Tuple{Any}","page":"Home","title":"InvariantPointAttention.IPA_settings","text":"IPA_settings(\n dims;\n c = 16,\n N_head = 12,\n N_query_points = 4,\n N_point_values = 8,\n c_z = 0,\n Typ = Float32,\n use_softmax1 = false,\n scaling_qk = :default,\n)\n\nReturns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to the IPA and IPCrossAStructureModuleLayer.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.T_R3-Union{Tuple{T}, Tuple{AbstractArray{T}, AbstractArray{T}, AbstractArray{T}}} where T","page":"Home","title":"InvariantPointAttention.T_R3","text":"Applies the SE3 transformations T = (rot,trans) ∈ SE(3)^N to N batches of m points in R3, i.e., mat ∈ R^(3 x m x N) ↦ T(mat) ∈ R^(3 x m x N). Note here that rotations here are represented in matrix form. 
\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.T_R3_inv-Union{Tuple{T}, Tuple{AbstractArray{T}, AbstractArray{T}, AbstractArray{T}}} where T","page":"Home","title":"InvariantPointAttention.T_R3_inv","text":"Applies the group inverse of the SE3 transformations T = (R,t) ∈ SE(3)^N to N batches of m points in R3, such that T^-1(T*x) = T^-1(Rx+t) = R^T(Rx+t-t) = x.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.T_T-Tuple{Any, Any}","page":"Home","title":"InvariantPointAttention.T_T","text":"Returns the composition of two SE(3) transformations T1 and T2. If T1 = (R1,t1), and T2 = (R2,t2) then T1T2 = (R1R2, R1*t2 + t1).\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.T_till-Tuple{Any, Any}","page":"Home","title":"InvariantPointAttention.T_till","text":"Index into a T up to index i. \n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.bcds2quats-Union{Tuple{AbstractMatrix{T}}, Tuple{T}, Tuple{AbstractMatrix{T}, T}} where T<:Real","page":"Home","title":"InvariantPointAttention.bcds2quats","text":"Takes a 3xN matrix of imaginary quaternion components, bcd, sets the real part to a, and normalizes to unit quaternions.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.calculate_residue_rotation_and_translation-Tuple{AbstractMatrix}","page":"Home","title":"InvariantPointAttention.calculate_residue_rotation_and_translation","text":"Get frame from residue\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_T-Tuple{Array{<:Real, 3}}","page":"Home","title":"InvariantPointAttention.get_T","text":"Get the assosciated SE(3) frame for all residues in a prot\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_T_batch-Tuple{Array{<:Real, 4}}","page":"Home","title":"InvariantPointAttention.get_T_batch","text":"Get the assosciated SE(3) frames for all residues in a batch of prots \n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_rotation-Tuple{Type{<:Real}, Vararg{Any}}","page":"Home","title":"InvariantPointAttention.get_rotation","text":"get_rotation([T=Float32,] dims...)\n\nGenerates random rotation matrices of given size. \n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_translation-Tuple{Type{<:Real}, Vararg{Any}}","page":"Home","title":"InvariantPointAttention.get_translation","text":"get_translation([T=Float32,] dims...)\n\nGenerates random translations of given size.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.left_to_right_mask-Tuple{Type{<:AbstractFloat}, Integer, Integer}","page":"Home","title":"InvariantPointAttention.left_to_right_mask","text":"left_to_right_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.right_to_left_mask-Tuple{Type{<:AbstractFloat}, Integer, Integer}","page":"Home","title":"InvariantPointAttention.right_to_left_mask","text":"right_to_left_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.right_to_left_mask-Tuple{Type{<:AbstractFloat}, Integer}","page":"Home","title":"InvariantPointAttention.right_to_left_mask","text":"right_to_left_mask([T=Float32,] N::Integer)\n\nCreate a right-to-left mask for the self-attention mechanism. 
The mask is a matrix of size N x N where the diagonal and the lower triangular part are set to zero and the upper triangular part is set to infinity.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.rotmatrix_from_quat-Tuple{AbstractMatrix{<:Real}}","page":"Home","title":"InvariantPointAttention.rotmatrix_from_quat","text":"Takes a 4xN matrix of unit quaternions and returns a 3x3xN array of rotation matrices.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.softmax1-Union{Tuple{AbstractArray{T}}, Tuple{T}} where T","page":"Home","title":"InvariantPointAttention.softmax1","text":"softmax1(x, dims = 1)\n\nBehaves like softmax, but as though there was an additional logit of zero along dims (which is excluded from the output). So the values will sum to a value between zero and 1.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.update_frame-Tuple{Any, Any}","page":"Home","title":"InvariantPointAttention.update_frame","text":"Takes a 6-dim vec and maps to a rotation matrix and translation vector, which is then applied to the input frames.\n\n\n\n\n\n","category":"method"}] +[{"location":"","page":"Home","title":"Home","text":"CurrentModule = InvariantPointAttention","category":"page"},{"location":"#InvariantPointAttention","page":"Home","title":"InvariantPointAttention","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Documentation for InvariantPointAttention.","category":"page"},{"location":"","page":"Home","title":"Home","text":"","category":"page"},{"location":"","page":"Home","title":"Home","text":"Modules = [InvariantPointAttention]","category":"page"},{"location":"#InvariantPointAttention.BackboneUpdate","page":"Home","title":"InvariantPointAttention.BackboneUpdate","text":"Projects the frame embedding => 6, and uses this to transform the input frames.\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPA","page":"Home","title":"InvariantPointAttention.IPA","text":"Strictly Self-IPA initialization\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPACache-Tuple{NamedTuple, Integer}","page":"Home","title":"InvariantPointAttention.IPACache","text":"IPACache(settings, batchsize)\n\nInitialize an empty IPA cache.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.IPAStructureModuleLayer","page":"Home","title":"InvariantPointAttention.IPAStructureModuleLayer","text":"Self IPA Partial Structure Module initialization - single layer - adapted from AF2. \n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPCrossA","page":"Home","title":"InvariantPointAttention.IPCrossA","text":"IPCrossA(settings)\n\nInvariant Point Cross Attention (IPCrossA). Information flows from L (Keys, Values) to R (Queries).\n\nGet settings with IPA_settings\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPCrossAStructureModuleLayer","page":"Home","title":"InvariantPointAttention.IPCrossAStructureModuleLayer","text":"Cross IPA Partial Structure Module initialization - single layer - adapted from AF2. From left to right. 
\n\n\n\n\n\n","category":"type"},{"location":"#InvariantPointAttention.IPA_settings-Tuple{Any}","page":"Home","title":"InvariantPointAttention.IPA_settings","text":"IPA_settings(\n dims;\n c = 16,\n N_head = 12,\n N_query_points = 4,\n N_point_values = 8,\n c_z = 0,\n Typ = Float32,\n use_softmax1 = false,\n scaling_qk = :default,\n)\n\nReturns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to the IPA and IPCrossAStructureModuleLayer.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_T-Tuple{Array{<:Real, 3}}","page":"Home","title":"InvariantPointAttention.get_T","text":"get_T(coords::Array{<:Real, 3})\n\nGet the assosciated SE(3) frame for all residues in a protein backbone represented as a 3x3xL array of coordinates.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_T_batch-Tuple{Array{<:Real, 4}}","page":"Home","title":"InvariantPointAttention.get_T_batch","text":"Get the associated SE(3) frames for all residues in a batch of proteins\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_rotation-Tuple{Type{<:Real}, Vararg{Any}}","page":"Home","title":"InvariantPointAttention.get_rotation","text":"get_rotation([T=Float32,] dims...)\n\nGenerates random rotation matrices of given size. \n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.get_translation-Tuple{Type{<:Real}, Vararg{Any}}","page":"Home","title":"InvariantPointAttention.get_translation","text":"get_translation([T=Float32,] dims...)\n\nGenerates random translations of given size.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.left_to_right_mask-Tuple{Type{<:AbstractFloat}, Integer, Integer}","page":"Home","title":"InvariantPointAttention.left_to_right_mask","text":"left_to_right_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.right_to_left_mask-Tuple{Type{<:AbstractFloat}, Integer, Integer}","page":"Home","title":"InvariantPointAttention.right_to_left_mask","text":"right_to_left_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.right_to_left_mask-Tuple{Type{<:AbstractFloat}, Integer}","page":"Home","title":"InvariantPointAttention.right_to_left_mask","text":"right_to_left_mask([T=Float32,] N::Integer)\n\nCreate a right-to-left mask for the self-attention mechanism. The mask is a matrix of size N x N where the diagonal and the lower triangular part are set to zero and the upper triangular part is set to infinity.\n\n\n\n\n\n","category":"method"},{"location":"#InvariantPointAttention.softmax1-Union{Tuple{AbstractArray{T}}, Tuple{T}} where T","page":"Home","title":"InvariantPointAttention.softmax1","text":"softmax1(x, dims = 1)\n\nBehaves like softmax, but as though there was an additional logit of zero along dims (which is excluded from the output). So the values will sum to a value between zero and 1.\n\nSee https://www.evanmiller.org/attention-is-off-by-one.html\n\n\n\n\n\n","category":"method"}] }