pomdp_0.99.0 (05/04/2020)
Changes from pomdp_0.9.2
Added support for finite-horizon POMDPs; solutions now store the policy for each epoch.
reward now supports different epochs, calculates the optimal action, and has improved parameter names.
solve_POMDP now checks for convergence.
solve_POMDP gained parameter terminal_values.
solve_POMDP gained parameter discount to overwrite the discount rate specified in the model.
solve_POMDP can now solve POMDPs with time-dependent transition probabilities, observation probabilities and reward structure.
solve_POMDP gained parameter grid in its parameter list to specify a custom belief-point grid for the grid method.
write_POMDP and solve_POMDP gained parameter digits.
added read_POMDP to read POMDP files.
plot for POMDP has been replaced by plot_policy_graph.
added policy graph visualization using visNetwork.
added plot_value_function.
added function sample_belief_space to sample from the belief space.
added function plot_belief_space.
added function transition_matrix.
added function observation_matrix.
added function reward_matrix.
POMDP model now also contains horizon and terminal_values.
added MDP to formulate an MDP as a POMDP.
added policy function to extract a more readable policy.
added update_belief.
added simulate_POMDP.
added round_stochastic.
added optimal_action.
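A minimal sketch of how several of the additions above fit together, assuming the Tiger example problem ships with the package and that the parameter names (horizon, discount, belief, action, observation) match the released signatures:

```r
library(pomdp)
data(Tiger)

# Solve a finite-horizon version of the problem, overriding the
# discount rate specified in the model (new parameters in 0.99.0).
sol <- solve_POMDP(Tiger, horizon = 3, discount = 0.9)

# Extract a readable policy (new policy function).
policy(sol)

# Inspect the model components (new accessor functions).
transition_matrix(Tiger)
observation_matrix(Tiger)
reward_matrix(Tiger)

# Update a belief state after acting and observing (new update_belief).
update_belief(Tiger, belief = c(0.5, 0.5), action = "listen",
              observation = "tiger-left")
```

This is an illustrative sketch, not an excerpt from the package documentation; consult the reference manual for the exact argument names and defaults.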