
v0.4.1

Released by @github-actions on 27 Oct 18:50 · 139 commits to master since this release · e8d868c

SeaPearl v0.4.1

Diff since v0.4.0

Closed issues:

  • Tackle the "stochastic" issue while training: taking the difficulty of an episode into account. (#23)
  • Explore the idea of having two (or more) agents to solve a problem (#30)
  • Make it possible to give rewards later (not directly after an action is taken) (#32)
  • Try to give some metrics as input of the RL agent (#34)
  • Make the variable heuristic an RL agent learned heuristic as well (#35)
  • Add other type of search strategy (#38)
  • Support Max sense for @objective (#42)
  • Make JuMP GraphColoring tests deterministic (#43)
  • Representable variables (#72)
  • TstptwReward weird behaviour with numberOfDecisions (#96)
  • Set up documentation (#102)
  • Testset for CPreward2 (#190)
  • [CP] No easy way to generate a multiple of a variable, y = ax (#196)
  • [CP] No easy way to generate an offset variable, y = x + c (#197)
  • [CP] No easy way to generate the opposite of a variable, y = -x (#198)
  • [CP][Boolean Operator] Implement the truth-functional operator of logical conjunction: ∧ (#200)
  • Remove dead code, useless/test comments and empty files (#201)
  • TSPTW problem refactoring (#211)
  • DefaultStateRepresentation improvement (#212)
  • Add a random value selection heuristic. (#213)
  • Generic reward encouraging smart exploration of the tree. (#216)
  • Refactor of the Graph Convolution (#225)
  • Implementation of the heterogeneous graph (#227)
  • Adapt New heterogeneous Graph convolution to FullFeaturedCPNN and VariableOutputCPNN (#230)

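Issues #196, #197, and #198 all ask for lightweight derived variables (y = ax, y = x + c, y = -x). In many CP solvers these are implemented as affine "views" that wrap an existing variable rather than copying its domain, so pruning on the view immediately prunes the underlying variable. The sketch below illustrates that general idea in Python; the names `IntVar` and `AffineView` are hypothetical and do not reflect SeaPearl's actual API.

```python
# Generic sketch of CP "view" variables: y = a*x + c wraps x's domain
# instead of duplicating it. Hypothetical names; not SeaPearl's API.

class IntVar:
    def __init__(self, domain):
        self.domain = set(domain)

    def values(self):
        return set(self.domain)

    def remove(self, v):
        self.domain.discard(v)


class AffineView:
    """View y = a*x + c over an IntVar x; no domain is copied."""

    def __init__(self, x, a=1, c=0):
        assert a != 0
        self.x, self.a, self.c = x, a, c

    def values(self):
        # Domain of y, computed on the fly from x's current domain.
        return {self.a * v + self.c for v in self.x.values()}

    def remove(self, v):
        # Removing y = v prunes x = (v - c) / a when that is integral.
        q, r = divmod(v - self.c, self.a)
        if r == 0:
            self.x.remove(q)


x = IntVar(range(1, 6))        # x in {1..5}
y = AffineView(x, a=3)         # y = 3x   (issue #196)
z = AffineView(x, c=2)         # z = x+2  (issue #197)
w = AffineView(x, a=-1)        # w = -x   (issue #198)

print(sorted(y.values()))      # [3, 6, 9, 12, 15]
y.remove(9)                    # prunes x = 3 through the view
print(sorted(x.domain))        # [1, 2, 4, 5]
```

The design choice being requested is exactly this: a constant-space wrapper rather than a fresh variable plus an equality constraint, which keeps propagation cheap.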
Merged pull requests: