- Improved methods for fitting PyTorch surrogates, including auto-stopping by parameter value norm
- Greatly reduced the number of samples for DNN posterior, speeding up optimization
- Stabilized the mean estimate of ensemble surrogates by avoiding resampling
- Disabled root caching for ensemble surrogates during optimization
- Increased the maximum length of a category name to 32 characters
- More optional outputs for verbose settings
- Parameters in ParamSpace can also be indexed by name
- Parameters now have search_space property, to modify the optimizer search space from the full space
- Continuous parameters have search_min/search_max; Discrete parameters have search_categories
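The search-space narrowing described above can be sketched with a minimal stand-in class; this is an illustration of the concept only, not obsidian's actual implementation, assuming search bounds must lie within the full parameter range.

```python
# Minimal stand-in illustrating search_min/search_max narrowing.
# Not obsidian's actual class; names follow the bullets above.
class ContinuousParam:
    def __init__(self, name, min_val, max_val):
        self.name = name
        self.min, self.max = min_val, max_val                # full space
        self.search_min, self.search_max = min_val, max_val  # optimizer space

    def set_search(self, lo, hi):
        # Search bounds must stay inside the full parameter range
        if not (self.min <= lo <= hi <= self.max):
            raise ValueError("search bounds must lie within the full range")
        self.search_min, self.search_max = lo, hi

temp = ContinuousParam("temperature", 0.0, 100.0)
temp.set_search(20.0, 40.0)  # optimize only within [20, 40]
```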
- Constraints are now defined by Constraint class
- Input constraints can now be included in ParamSpace, and serialized from there
- Output constraints can now be included in Campaign, and serialized from there
- New interface class IParamSpace to address circular import issues between ParamSpace and Constraint
- Optimizer and Campaign X_space attributes are now assigned using setter
- Optimizer.maximize() appropriately recognizes fixed_var argument
- Torch device references and options (GPU compatibility may be re-added)
- Campaign X_best method
- Optimizer X_best_f attribute(s)
- Sequence of colors "color_list" to branding
- Informative hoverdata for MDS plot
- Created Product_Objective and Divide_Objective
- Switched all usages of X_ref = X_space.mean() to optimizer.X_best_f
- Refactored mpl "visualize_inputs" as plotly "visualize_inputs" for better interactivity
- Text formatting for some plotly hoverdata
- Default values for NParEGO scalarization_weights
- SHAP PDP ICE plots now work with categorical values
- Added scikit-learn to dependencies, for MDS
- Added MDS plot
- SHAP PDP ICE plots must now have color and x-axis indices that are distinct
- Project metadata properly captured on PyPI based on changes in pyproject.toml
- Fixed infobar on dash app
- Better handling of X_space on dash app
- Bug fixes for optim_progress
- Improved color and axes of parity_plot
- Major improvements to testing and numerous small bug fixes to improve code robustness
- Code coverage > 90%
- New method for asserting equivalence of state_dicts during serialization
- Objective PyTests separated
- Constraint PyTests separated
- Campaign.Explainer now added to PyTests
- Docstrings and typing to Explainer methods
- Campaign.out property to dynamically capture measured responses "y" or objectives as appropriate
- Campaign.evaluate method to map optimizer.evaluate method
- DNN to PyTests
- Fixed SHAP explainer analysis and visualization functions
- Changed SHAP visualization colors to use obsidian branding
- Moved sensitivity method from campaign.analysis to campaign.explainer
- Moved Explainer testing from optimizer pytests to campaign pytests
- Generalized plotting function MOO_results and renamed optim_progress
- Campaign analysis and plotting methods fixed
- Greatly increased the number of samples used for DNNPosterior, increasing the stability of posterior predictions
- Removed code chunks regarding unused optional inputs to PDP ICE function imported from SHAP GitHub
- More informative docstrings for optimizer.bayesian, optimizer.predict, to explain choices of surrogate models and aq_funcs
- Renamed aq_func hyperparameter "Xi_f" to "inflate"
- Moved default aq_func choices for single/multi into aq_defaults of acquisition.config
- Fixed and improved campaign analysis methods
- Documentation improvements
- Naming conventions for modules (config, utils, base)
- Import order convention for modules
- First (working) release on PyPI
- Added DNN surrogate model using EnsemblePosterior. Requires PosteriorList and IndexSampler during optimization
- Added EHVI and NIPV aq_funcs
- Added discarding of NaN X and Y values
- Target transforms now ignore NaN values
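NaN-ignoring transforms can be sketched in plain Python; the function name here is illustrative, not the library's:

```python
import math

def nan_mean(values):
    """Mean over finite entries only; NaNs are ignored rather than propagated."""
    finite = [v for v in values if not math.isnan(v)]
    if not finite:
        raise ValueError("all values are NaN")
    return sum(finite) / len(finite)

print(nan_mean([1.0, float("nan"), 3.0]))  # 2.0
```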
- Quantile prediction to optimizer.predict() and surrogate.predict() to better suit non-GP models and non-normal distributions
- Generalized surrogate_botorch fitting for models which are not GPs
- Generalized torch.dtype using a global variable, imported as obsidian.utils.TORCH_DTYPE
- Improved some tensor.repeat() using tensor.repeat_interleave()
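The distinction between the two PyTorch calls can be shown with plain lists: `tensor.repeat(2)` tiles the whole tensor along a dimension, while `repeat_interleave(2)` repeats each element in place.

```python
def repeat(seq, n):
    # Tiles the whole sequence, like torch.Tensor.repeat along one dim
    return seq * n

def repeat_interleave(seq, n):
    # Repeats each element consecutively, like torch.repeat_interleave
    return [x for x in seq for _ in range(n)]

print(repeat([1, 2], 2))             # [1, 2, 1, 2]
print(repeat_interleave([1, 2], 2))  # [1, 1, 2, 2]
```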
- Switched EI, NEI, and qNEHVI to Log-EI versions based on BoTorch warning
- Dropped "q" from qNEHVI / qNParEGO names
- Simplified surrogate model loading in BO_optimizer
- Changed name of surrogate.args/kwargs to surrogate.hps
- Corrected typing in target transforms
- Thompson sampling aq_func
- Removed explainer from campaign attributes
- Updated "features objectives" to operate on real space instead of scaled space; virtually no speed difference and tensor grads not needed
- Improvements to campaign object and explainer object features
- Switched all GenericMC and GenericMOMC objectives to custom objectives to simplify SOO vs MOO, and enable serialization
- Added set_objective to campaign, and objective serialization
- Bug fix for surrogate.save_state() with train_Y
- Marked GPflat + categorical space test as an expected fail in pytest
- Minor bug fixes and test improvements to improve coverage
- Bug fix for qMean, made sure objective is used
- Set up basic validation tests for parameters package
- Moved plotting dependencies to main build
- Updated Contributing section
- Minor modifications to main Readme
- Moved jupyterlab from docs to dev group
- docs/readme.md; moved contents to CONTRIBUTING
- Added imported members to sphinx-autodoc results using __all__ at the module level
- Included readme on main docs page, removed redundant link
- Various cosmetic changes to autodoc and autosummary options
- Shortened object names on TOC tree for readability
- Removed autosummary from package and subpackage-level autodoc. Automated only at module level with templates
- Split subpackages into regular and "shallow" templates, to expose different meaningful levels on the toctree
- Updated sphinx theme with more preferred navigation features
- Unused dependencies from pyproject.toml
- Evaluate now does not calculate aq_func by default, to speed up evaluation of objective
- Bug fixes for optimizer.evaluate() under fringe tests
- Updated license to GPLv3 on Readme
- Removed Merck references and updated branding
- optimizer.evaluate() to handle y_predict, f_predict, o_predict, and a_predict independently of optimizer.suggest()
- Regardless of q_batch, evaluate aq functions on each sample individually, then as a joint sample
- Made sure that X_baseline accepted X_pending
- Made sure that ref_point and f_best in aq_hps are now based on objectives, not raw responses
- Removed o_dim and optim_type from optimizer.attrs to better support stateless operation
- No longer calculate hypervolume or pareto front on raw responses; only after considering objectives
- Removed y_pareto from optimizer.attrs
- Temporarily removed campaign._analyze due to bugs with _hv
- Added plotting module to test coverage
- Reloading/fitting f_train in optimizer.load_state
- Added default_campaign.json to test directory, for faster testing using pre-fit objects
- Added matplotlib plotting library
- Can now provide X_pending to suggest, so that users can manually do iterative suggestions
- Various plotting bug fixes
- Ensure that paramspace encode/unit_map return double dtypes
- Added getitem to ParamSpace to enable X_space[idx]
- Overhaul of all encode/decode and map/demap functions using a decorator to handle robust typing
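The decorator pattern described above can be sketched as a wrapper that normalizes scalar vs. list inputs and restores the matching shape on return; this illustrates the pattern only and is not obsidian's code:

```python
from functools import wraps

def robust_typing(func):
    # Accept a scalar or a sequence; always pass a list to the wrapped
    # function and return a result matching the input shape
    @wraps(func)
    def wrapper(values):
        scalar = not isinstance(values, (list, tuple))
        out = func([values] if scalar else list(values))
        return out[0] if scalar else out
    return wrapper

@robust_typing
def encode(values):
    # Stand-in for an encode/map function that expects a list
    return [v * 2 for v in values]

print(encode(3))       # 6
print(encode([1, 2]))  # [2, 4]
```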
- Bug fix for SHAP_explain resulting from object dtypes with new encode functions
- Added notebook tutorials to docs
- repr method for Target
- Bug fixes in factor_plot with X_ref provided
- Fixed bug where categorical encoding wouldn't work if all categories were numerical strings
- Changed categorical OH-encode separator from _ to ^ and protected against usage in X_names
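The two encoding bullets above can be sketched as a minimal one-hot encoder: column names join the parameter name and category with "^", and the scheme works even when every category is a numeric string. This is an illustration, not obsidian's implementation.

```python
def one_hot_columns(name, categories):
    # "^" separator avoids ambiguity when names contain "_"
    return [f"{name}^{c}" for c in categories]

def one_hot_encode(name, categories, value):
    if value not in categories:
        raise ValueError(f"unknown category: {value}")
    return {col: int(col == f"{name}^{value}")
            for col in one_hot_columns(name, categories)}

# Numeric-string categories encode without issue
print(one_hot_encode("level", ["1", "2", "3"], "2"))
```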
- Dash app updates with module refactoring
- Minor refactoring of param_space discrete handling
- Replaced assert statements with appropriate Python base exceptions, added custom obsidian exceptions as necessary
- Moved benchmark module to obsidian.experiment
- Removed deprecated examples
- Dev capabilities for document generation using sphinx
- Module docstrings
- Overhaul of class and method docstrings
- Removed parameter.type and replaced with isinstance checking and class.__name__ loading
- Custom exception handling
- Composite objectives
- Utopian point support for SOO and MOO
- Bounded target support for SOO and MOO
- Bounding for only selected targets in multi-output scenarios
- Completely refactored weighted MOO to separate out the component parts of scalarization, utopian distance, norming, and bounding
- Greatly simplified the structure of aq_kwargs based on the above
- weightedMOO acquisition function
- Dynamic reference point utility
- More custom objectives
- Switched default single-objective aq from EI to NEI
- Simplex sampling of weights for weightedMOO; if weights aren't provided, even weights are used
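Simplex weight sampling can be sketched with the standard exponential-normalization trick (normalized i.i.d. exponentials are uniform on the probability simplex); this is an illustration, not obsidian's sampler:

```python
import random

def sample_simplex_weights(n, rng=random.Random(0)):
    # Normalized i.i.d. exponentials are uniform on the unit simplex
    draws = [rng.expovariate(1.0) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

def even_weights(n):
    # Fallback when no weights are provided
    return [1.0 / n] * n

w = sample_simplex_weights(3)
print(sum(w))  # ~1.0
```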
- Moved "explain" functionality to base optimizer
- Added backup exceptions to catch fit_gpytorch
- Enabled multi-output objectives to expand single-response models (e.g. using X features)
- Enabled single-output objectives to condense multi-response models (e.g. scalarization)
- Updated demo notebooks
- Fixed error with calling campaign._profile_max after every fit
- Redundant objective formulations
- Removed GPFlex model, which is now redundant with multi-output objective
- Updated all plotly plotting methods for new optimizer methods
- Updated typing and enabled multi-objective on custom aq functions
- Added new features to campaign object
- Moved "seed" kwarg to ExpDesigner.__init__, consistent with BayesianOptimizer
- Added hypervolume and pareto calculations (incl. pf_distance) to base optimizer
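A Pareto-front calculation (maximization) can be sketched as follows; this is a conceptual illustration, not the library's implementation:

```python
def pareto_front(points):
    """Return the non-dominated subset of points, assuming maximization."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

pts = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.0, 1.0)]
print(pareto_front(pts))  # [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```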
- Fixed bug with target_transforms sharing hyperparameters because of bad initialization
- Task parameters and multi-task learning
- Added "index" objective to select tasks for optimization in multi-task learning
- Fixed bugs related to torch dtype mismatches
- Fixed cat_dims specification for surrogate models so that they do not include ordinal params
- Overhauled f_transform approach to avoid scikit-learn and be more customizable
- Enforced abstractmethods on various classes
- Added utopian point subtraction to all scalarization methods, made optional also
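The optional utopian-point subtraction can be sketched for a weighted-Chebyshev scalarization (maximization form); the function below is an illustrative stand-in, not obsidian's code:

```python
def chebyshev_scalarize(f, weights, utopian=None):
    """Weighted-Chebyshev scalarization (maximization form).

    If a utopian point is given, objectives are measured relative to it;
    otherwise raw objective values are used.
    """
    if utopian is not None:
        f = [fi - ui for fi, ui in zip(f, utopian)]
    # Worst-case weighted objective; maximizing it pushes all objectives up
    return min(w * fi for w, fi in zip(weights, f))

print(chebyshev_scalarize([2.0, 4.0], [0.5, 0.5]))              # 1.0
print(chebyshev_scalarize([2.0, 4.0], [0.5, 0.5], [3.0, 3.0]))  # -0.5
```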
- Moved PI_bounded weightedMOO to its own objective called "boundedMOO"
- Fixed error where default hyperparameters were being written back to objects outside of the optimizer
- Fixed bugs with new parameter types in experiment design modules
- Added pytest parametrization to improve scope of tests
- Added pytest attributes (slow, fast) to manage speed
- Fixed bug with _fixed_features generation when fixed_var is specified
- Updated all MOO custom aq functions to match current BoTorch patterns
- PI_bounded acquisition function, due to various issues. MOO_weighted with PI_bounded+weights scalarization does work
- Added Param_Discrete_Numeric class (parent Param_Discrete)
- Added Param_Observational subclass to the parent Continuous class, which can be used for fitting but is excluded from optimization
- Implemented checks and validation to enforce the order of X, y, and targets across ParamSpace, Optimizer, Surrogate as appropriate
- Note: Only Optimizer can handle extraneous or re-ordered columns, but they will be processed before passing to Surrogate
- Fixed input/output constraints
- Added de/transformation to output constraints (applied to target samples)
- Added de/transformation to input constraints (applied to coefficients and RHS)
- Implemented a de/transform map for ParamSpace in order to handle constraints in the encoded input space
- Implemented a non-linear input constraint which keeps the range of one dimension in a joint optimization < 1% of the max-min
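The nonlinear constraint described above (the range of one dimension across the joint batch kept under 1% of its max-min span) can be sketched as a callable factory; the >= 0 feasibility convention follows BoTorch's nonlinear inequality constraints, and all names here are illustrative:

```python
def make_range_constraint(dim, span, frac=0.01):
    """Feasible (>= 0) when the spread of column `dim` across the joint
    batch X is below `frac` of the full parameter span."""
    def constraint(X):
        col = [row[dim] for row in X]
        return frac * span - (max(col) - min(col))
    return constraint

c = make_range_constraint(dim=0, span=100.0)
print(c([[10.0, 1.0], [10.5, 2.0]]))  # 0.5  -> feasible
print(c([[10.0, 1.0], [15.0, 2.0]]))  # -4.0 -> infeasible
```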
- Constrained multi-objective example notebook
- Allowed optimizer.suggest() on a subset of fit responses
- Enabled optimizer.maximize() for multi-response models based on the above
- Added custom multi-output objective class
- Fixed constraints specification, as we were using output constraints. Added input constraints to TODO
- Added parameter name to lb/ub labels in optimizer.predict() to avoid index issues
- Custom constraints are now specified as a constructor, so that parameters can be added and a callable is returned
- New object-oriented design for several classes:
  - Campaign
  - Parameter
  - ParamSpace
  - Target
  - Objective
  - Constraint