Releases
pomdp 1.1.1 (09/04/2023)
Changes
plot_policy_graph(): The parameter order has changed slightly; belief_col is now called state_col, and unreachable states are now suppressed.
policy() gained parameters alpha and action.
color palettes are now exported.
POMDP accessors gained parameter drop.
The POMDP constructor and read_POMDP() gained parameter normalize and now normalize the POMDP definition by default (see the sketch after this list).
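The following is a minimal sketch of the changed interfaces, using the Tiger problem that ships with the package; exact signatures may differ slightly, and the colors and the read_POMDP() file name shown are only illustrative.

```r
library(pomdp)

data("Tiger")
sol <- solve_POMDP(Tiger)

## plot_policy_graph(): belief_col is now called state_col
## (the colors here are just placeholders).
plot_policy_graph(sol, state_col = c("skyblue", "orange"))

## policy() with the new alpha and action parameters (assumed to toggle
## whether alpha vectors and the corresponding actions are included).
policy(sol, alpha = TRUE, action = TRUE)

## Accessor with the new drop parameter (assumed to keep the list
## structure even when a single action is requested).
transition_matrix(sol, action = "listen", drop = FALSE)

## read_POMDP() also gained normalize (the file name here is hypothetical):
# read_POMDP("my_problem.POMDP", normalize = FALSE)
```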
New Features
Large POMDP descriptions are now handled better by keeping the reward as a data.frame and
supporting sparse matrices in the C++ code.
New function value_function() to access alpha vectors.
New function regret() to calculate the regret of a policy.
New function transition_graph() to visualize the transition model (see the sketch below).
Problem descriptions are now normalized by default.
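A short sketch of the new helpers, again using the Tiger example; the benchmark argument name for regret() is an assumption based on the description above.

```r
library(pomdp)

data("Tiger")
sol <- solve_POMDP(Tiger)

## Access the alpha vectors of the solved POMDP.
value_function(sol)

## Regret of a policy relative to a benchmark solution (comparing the
## policy with itself should yield a regret of 0).
regret(sol, benchmark = sol)

## Visualize the transition model of the (normalized) problem description.
transition_graph(Tiger)
```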
pomdp 1.1.0 (01/23/2023)
New Features
Added C++ (Rcpp) support, speeding up simulate_POMDP(), sample_belief_space(), reward(), ...
simulate_POMDP() and sample_belief_space() now have parallel (foreach) support (see the sketch after this list).
Sparse matrices from package Matrix are now used for matrices with a density below 50%.
Added support for parsing matrices in POMDP files.
Added model normalization.
is_solved_POMDP(), is_converged_POMDP(), is_timedependent_POMDP(), and is_solved_MDP() are now exported.
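A sketch of the parallel simulation support: a foreach backend is registered (doParallel is assumed here, any foreach backend should work) and simulate_POMDP() and sample_belief_space() are expected to pick it up.

```r
library(pomdp)
library(doParallel)   # provides a foreach backend

data("Tiger")
sol <- solve_POMDP(Tiger)

## Register a parallel backend for foreach; the simulation and sampling
## functions are assumed to use it automatically.
registerDoParallel(cores = 2)

sim  <- simulate_POMDP(sol, n = 1000, horizon = 10)
samp <- sample_belief_space(sol, n = 1000)

## Newly exported checks.
is_solved_POMDP(sol)
is_converged_POMDP(sol)
```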
Changes
accessors are now called transition_val() and observation_val().
simulate_POMDP() and simulate_MDP() now return a list (see the sketch after this list).
reimplemented round_stochastic() to improve speed.
MDP policy now uses factors for actions.
estimate_belief_for_nodes() can now also use trajectories to estimate beliefs faster.
cleaned up the interface for episodes and epochs.
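A sketch of the changed return value of simulate_POMDP(); the list component names are version dependent, and the argument names for the renamed accessor are guesses kept in a comment.

```r
library(pomdp)

data("Tiger")
sol <- solve_POMDP(Tiger)

## simulate_POMDP() now returns a list instead of a single value;
## inspect its components (names depend on the package version).
sim <- simulate_POMDP(sol, n = 100, horizon = 10)
str(sim, max.level = 1)

## Renamed single-value accessors (argument names below are hypothetical):
# transition_val(Tiger, action = "listen",
#                start.state = "tiger-left", end.state = "tiger-left")

## round_stochastic() rounds a probability vector so it still sums to 1.
round_stochastic(c(0.3333, 0.3333, 0.3334), digits = 2)
```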