ACT-R provides ways to set base-level activations on individual chunks directly instead of computing them. The base level is used in the activation equation (pp. 290-291).
vanilla
There does not seem to be a way to set the base level when declaring a chunk.
set-base-levels (pg. 326)
Explicitly sets a base level on the listed chunks.
This will set c's base level to 10:
```lisp
(define-model test-base-levels
  (sgp :esc t :bll nil)
  (add-dm (a isa chunk)
          (b isa chunk)
          (c isa chunk)))

(set-base-levels (c 10))
```
set-all-base-levels (pg. 326)
Sets the base level on all chunks:

```lisp
(set-all-base-levels 1.5)
```
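The semantics of these two commands (a fixed base-level value stored per chunk and used as-is, rather than computed from usage history) can be sketched in plain Python. The class and method names below are hypothetical illustrations, not the API of any of the implementations discussed here:

```python
# Hypothetical sketch of fixed base levels, mirroring the semantics of
# set-base-levels and set-all-base-levels: each chunk carries an explicit
# base-level value instead of one learned from its usage history.
class DeclarativeMemory:
    def __init__(self):
        self.base_levels = {}  # chunk name -> fixed base-level activation

    def set_base_levels(self, **levels):
        # analogous to (set-base-levels (c 10))
        self.base_levels.update(levels)

    def set_all_base_levels(self, value):
        # analogous to (set-all-base-levels 1.5)
        for chunk in self.base_levels:
            self.base_levels[chunk] = value

dm = DeclarativeMemory()
dm.set_base_levels(a=0.0, b=0.0, c=0.0)
dm.set_base_levels(c=10)      # c's base level is now 10
dm.set_all_base_levels(1.5)   # every chunk's base level is now 1.5
```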
ccm
Base levels seem to be set using the `DMBaseLevel` memory sub-module, but I have not yet found an example of how it works.
pyactr
These might be set using `DecMem`'s `add_activation`? I will have to look at the math to see if that is what is happening. There are no examples using it.
In addition, pyactr's defaults for base-level learning seem to differ from ACT-R's.
The `baselevel_learning` function in the utilities takes `bll` and `decay` as parameters, which are passed in from the model's parameters:

- `baselevel_learning`: a boolean which defaults to `True`
- `decay`: a float which defaults to `0.5`
In ACT-R, however, :bll controls both:
> The base-level learning parameter controls whether base-level learning is enabled, and also sets the value of the decay parameter, d. It can be set to any positive number or the value nil. The value nil means do not use base-level learning and is the default value, a number means that base-level is enabled and the given value is the decay parameter. The recommended value for :bll is .5, and it is one of the few parameters which have a strong recommended value. (pg. 298)
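For reference, base-level learning computes B_i = ln(Σ_j t_j^(-d)), where the t_j are the times elapsed since each past use of chunk i and d is the decay parameter (the value of `:bll`). A minimal sketch in plain Python, assuming no particular library API:

```python
import math

def base_level(presentation_times, now, d=0.5):
    """Base-level learning: B_i = ln(sum_j (now - t_j) ** -d).

    presentation_times: times at which the chunk was presented/used.
    d: the decay parameter (:bll in vanilla ACT-R; recommended value 0.5).
    """
    return math.log(sum((now - t) ** -d for t in presentation_times))

# A chunk used at t = 1 and t = 4, evaluated at t = 5:
# B = ln(4 ** -0.5 + 1 ** -0.5) = ln(0.5 + 1.0) = ln(1.5) ~ 0.405
b = base_level([1.0, 4.0], now=5.0)
```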
So to match ACT-R, the default in pyactr should be `baselevel_learning=False`.