# GPopt's Code of Conduct

## 1. Purpose

A primary goal of this project is to be inclusive to the largest number of contributors, and most importantly with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
This code of conduct outlines our expectations for all those who participate in the project, as well as the consequences for unacceptable behavior.

We invite all who participate to help us create safe and positive experiences for everyone.
## 2. Open [Source/Culture/Tech] Citizenship

A supplemental goal of this Code of Conduct is to encourage participants to recognize and strengthen the relationships between our actions and their effects on other participants.

Communities mirror the societies in which they exist, and positive action is essential to counteract the many forms of inequality and abuses of power that exist in society.
## 3. Expected Behavior

The following behaviors are expected and requested of all contributors:

- Attempt collaboration before conflict.
- Participate in an authentic and active way. In doing so, you contribute to the health and longevity of this project.
- Exercise consideration and respect in your speech and actions.
- Refrain from demeaning, discriminatory, or harassing behavior and speech.
- Be mindful of your surroundings and of your fellow participants.
## 4. Unacceptable Behavior

The following behaviors are considered harassment and are unacceptable:

- Violence, threats of violence or violent language directed against another person.
- Sexist, racist, homophobic, transphobic, ableist or otherwise discriminatory jokes and language.
- Posting or displaying sexually explicit or violent material.
- Posting or threatening to post other people's personally identifying information ("doxing").
- Personal insults, particularly those related to gender, sexual orientation, race, religion, or disability.
- Inappropriate photography or recording.
- Unwelcome sexual attention, including sexualized comments or jokes.
- Deliberate intimidation, stalking or following (online or in person).
- Advocating for, or encouraging, any of the above behavior.
## 5. Consequences of Unacceptable Behavior

Unacceptable behavior from any contributor will not be tolerated. Anyone asked to stop unacceptable behavior is expected to comply immediately. If a contributor engages in unacceptable behavior, appropriate action will be taken, up to and including a temporary ban or permanent expulsion without warning.
## 6. Scope

We expect all contributors to abide by this Code of Conduct in all venues, online and in person.
## 7. Contact info

thierry.moudiki AT gmail.com
## 8. License and attribution

Portions of text derived from the Citizen Code of Conduct.
# The Clear BSD License

Copyright (c) [2019] [Thierry Moudiki]
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted (subject to the limitations in the disclaimer below) provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# GPopt

Bayesian optimization using Gaussian Process Regression.

## GPOpt

```python
GPopt.GPOpt.GPOpt.GPOpt(
    lower_bound,
    upper_bound,
    objective_func=None,
    x_init=None,
    y_init=None,
    n_init=10,
    n_choices=25000,
    n_iter=190,
    alpha=1e-06,
    n_restarts_optimizer=25,
    seed=123,
    save=None,
    n_jobs=1,
    per_second=False,
    log_scale=False,
)
```

Class GPOpt.

Arguments:

- `lower_bound`: a numpy array; lower bound of the search space for the minimum
- `upper_bound`: a numpy array; upper bound of the search space for the minimum
- `objective_func`: a function; the objective function to be minimized
- `x_init`: initial design points where `objective_func` is evaluated (optional)
- `y_init`: values of `objective_func` at the initial design points (optional)
- `n_init`: an integer; number of points in the initial design, when `x_init` and `y_init` are not provided
- `n_choices`: an integer; number of candidate points used for the calculation of expected improvement
- `n_iter`: an integer; number of iterations of the minimization algorithm
- `alpha`: a float; value added to the diagonal of the kernel matrix during fitting (for the Matern 5/2 kernel)
- `n_restarts_optimizer`: an integer; number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood
- `seed`: an integer; reproducibility seed
- `save`: a string; specifies where to save the optimizer in its current state
- `n_jobs`: an integer; number of jobs for parallel computation on the initial design (can be -1)
- `per_second`: a boolean; experimental, default is `False` (leave at the default for now)
- `log_scale`: a boolean; experimental, default is `False` (leave at the default for now)

See also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt).
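As a quick illustration, here is a minimal usage sketch. It assumes the package is importable as `GPopt` with the class reachable as `gp.GPOpt` (as in the blog posts linked above); the objective function and bound values are toy choices made for the example:

```python
import numpy as np
import GPopt as gp  # assumes the class is re-exported at the package top level

# Toy 2D objective for illustration; its minimum is 0, attained at (1, -2)
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

opt = gp.GPOpt(
    lower_bound=np.array([-5.0, -5.0]),
    upper_bound=np.array([5.0, 5.0]),
    objective_func=objective,
    n_init=10,  # size of the initial design
    n_iter=50,  # budget for the optimization loop
    seed=123,
)

x_min, y_min = opt.optimize(verbose=1)
print(x_min, y_min)  # approximate minimizer and minimum value
```

At each iteration, the Gaussian Process surrogate is refit to the evaluated points, and the next point is selected among the `n_choices` candidate points by expected improvement.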
## optimize

```python
GPOpt.optimize(verbose=1, n_more_iter=None, abs_tol=None, min_budget=50, func_args=None)
```

Launch the optimization loop.

Arguments:

- `verbose`: an integer; `verbose = 0`: nothing is printed; `verbose = 1`: a progress bar is printed (takes longer than 0); `verbose = 2`: information about each iteration is printed (takes longer than 1)
- `n_more_iter`: an integer; additional number of iterations for an optimizer that has already been run once
- `abs_tol`: a float; tolerance for convergence of the optimizer (early stopping based on expected improvement)
- `min_budget`: an integer (default is 50); minimum number of iterations before the early stopping controlled by `abs_tol`
- `func_args`: a list; additional parameters for the objective function (if necessary)

See also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt).
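The call patterns below sketch how these arguments combine, continuing the toy `opt` instance from the previous example. The numeric values are illustrative; `abs_tol=1e-4` is the value suggested in the source comments for a budget of around 200 iterations:

```python
# Full run with a progress bar
opt.optimize(verbose=1)

# Ask the already-run optimizer for 20 additional iterations
opt.optimize(verbose=1, n_more_iter=20)

# Early stopping: stop once the change in expected improvement drops
# below abs_tol, but only after at least min_budget iterations
# (min_budget must be > 20 when abs_tol is provided)
opt.optimize(verbose=2, abs_tol=1e-4, min_budget=50)
```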
## load

```python
GPOpt.load(path)
```

Load data from a stored shelve.

Arguments:

- `path`: a string; path to the stored shelve.

See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
## close_shelve

```python
GPOpt.close_shelve()
```

Close the shelve (the optimizer's state can no longer be modified through it afterwards).

Arguments: none.

See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
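Putting `save`, `close_shelve` and `load` together, a save-and-resume session might look like the sketch below. The shelve filename is illustrative, and the resume workflow follows the blog post linked above:

```python
import numpy as np
import GPopt as gp

# Same toy objective as in the earlier example
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])

# Session 1: optimize with `save` set, then close the shelve
opt = gp.GPOpt(lower_bound=lb, upper_bound=ub,
               objective_func=objective,
               save="gpopt_state")  # illustrative path
opt.optimize(verbose=1)
opt.close_shelve()

# Session 2 (possibly later): restore the stored state and continue
opt2 = gp.GPOpt(lower_bound=lb, upper_bound=ub, objective_func=objective)
opt2.load(path="gpopt_state")
opt2.optimize(verbose=1, n_more_iter=25)
```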
Welcome to GPopt's website.

GPopt does Bayesian optimization using Gaussian Process regression.

## Installing

1st method: using `pip` at the command line, for the stable version:

```bash
pip install GPopt
```

## Quickstart

- [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt)
- [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
- [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt)

## Documentation

The documentation can be found (work in progress) here.

## Contributing

Want to contribute to GPopt's development on GitHub? Read this!
a/docs/site/sitemap.xml b/docs/site/sitemap.xml deleted file mode 100644 index 0f8724e..0000000 --- a/docs/site/sitemap.xml +++ /dev/null @@ -1,3 +0,0 @@ - -21class GPOpt: - 22 """Class GPOpt. - 23 - 24 # Arguments: - 25 - 26 lower_bound: a numpy array; - 27 lower bound for researched minimum - 28 - 29 upper_bound: a numpy array; - 30 upper bound for researched minimum - 31 - 32 objective_func: a function; - 33 the objective function to be minimized - 34 - 35 gp_obj: a GaussianProcessRegressor object; - 36 An ML model for estimating the uncertainty around the objective function - 37 - 38 x_init: - 39 initial setting of points where `objective_func` is evaluated (optional) - 40 - 41 y_init: - 42 initial setting values at points where `objective_func` is evaluated (optional) - 43 - 44 n_init: an integer; - 45 number of points in the initial setting, when `x_init` and `y_init` are not provided - 46 - 47 n_choices: an integer; - 48 number of points for the calculation of expected improvement - 49 - 50 n_iter: an integer; - 51 number of iterations of the minimization algorithm - 52 - 53 alpha: a float; - 54 Value added to the diagonal of the kernel matrix during fitting (for Matern 5/2 kernel) - 55 - 56 n_restarts_optimizer: an integer; - 57 The number of restarts of the optimizer for finding the kernel’s parameters which maximize the log-marginal likelihood. - 58 - 59 seed: an integer; - 60 reproducibility seed - 61 - 62 save: a string; - 63 Specifies where to save the optimizer in its current state - 64 - 65 n_jobs: an integer; - 66 number of jobs for parallel computing on initial setting (can be -1) - 67 - 68 per_second: a boolean; - 69 __experimental__, default is False (leave to default for now) - 70 - 71 log_scale: a boolean; - 72 __experimental__, default is False (leave to default for now) - 73 - 74 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) - 75 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) - 76 - 77 """ - 78 - 79 def __init__( - 80 self, - 81 lower_bound, - 82 upper_bound, - 83 objective_func=None, - 84 gp_obj=None, - 85 x_init=None, - 86 y_init=None, - 87 n_init=10, - 88 n_choices=25000, - 89 n_iter=190, - 90 alpha=1e-6, - 91 n_restarts_optimizer=25, - 92 seed=123, - 93 save=None, - 94 n_jobs=1, - 95 per_second=False, # /!\ very experimental - 96 log_scale=False, # /!\ experimental - 97 ): - 98 - 99 n_dims = len(lower_bound) -100 -101 assert n_dims == len( -102 upper_bound -103 ), "'upper_bound' and 'lower_bound' must have the same dimensions" -104 -105 self.objective_func = objective_func -106 self.lower_bound = lower_bound -107 self.upper_bound = upper_bound -108 self.y_init = y_init -109 self.log_scale = log_scale -110 self.n_dims = n_dims -111 self.n_init = n_init -112 self.n_choices = n_choices -113 self.n_iter = n_iter -114 self.alpha = alpha -115 self.n_restarts_optimizer = n_restarts_optimizer -116 self.seed = seed -117 self.save = save -118 self.per_second = per_second -119 self.x_min = None -120 self.y_min = None -121 self.y_mean = None -122 self.y_std = None -123 self.ei = np.array([]) -124 self.max_ei = [] -125 if gp_obj is None: -126 self.gp_obj = GaussianProcessRegressor( -127 kernel=Matern(nu=2.5), -128 alpha=self.alpha, -129 normalize_y=True, -130 n_restarts_optimizer=self.n_restarts_optimizer, -131 random_state=self.seed, -132 ) -133 else: -134 self.gp_obj = gp_obj -135 -136 # Sobol seqs for initial design and choices 
-137 sobol_seq_init = np.transpose( -138 generate_sobol2( -139 n_dims=self.n_dims, -140 n_points=self.n_init, -141 skip=2, -142 ) -143 ) -144 sobol_seq_choices = np.transpose( -145 generate_sobol2( -146 n_dims=self.n_dims, -147 n_points=self.n_choices, -148 skip=self.n_init + 2, -149 ) -150 ) -151 -152 # Sobol seqs for initial design and choices with bounds -153 if self.log_scale == False: -154 -155 bounds_range = upper_bound - lower_bound -156 self.x_init = ( -157 bounds_range * sobol_seq_init + lower_bound -158 if x_init is None -159 else x_init -160 ) -161 self.x_choices = bounds_range * sobol_seq_choices + lower_bound -162 -163 else: # (!) experimental -164 -165 assert ( -166 lower_bound > 0 -167 ).all(), "all elements of `lower_bound` must be > 0" -168 assert ( -169 upper_bound > 0 -170 ).all(), "all elements of `upper_bound` must be > 0" -171 -172 log_lower_bound = np.log(lower_bound) -173 log_upper_bound = np.log(upper_bound) -174 log_bounds_range = log_upper_bound - log_lower_bound -175 self.x_init = ( -176 np.minimum( -177 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), -178 1.7976931348623157e308, -179 ) -180 if x_init is None -181 else x_init -182 ) -183 self.x_choices = np.minimum( -184 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), -185 1.7976931348623157e308, -186 ) -187 -188 # shelve for saving (not for loading) -189 if self.save is not None: -190 self.sh = shelve.open(filename=save, flag="c", writeback=True) -191 -192 if self.per_second: -193 self.timings = [] -194 self.rf_obj = RandomForestRegressor( -195 n_estimators=250, random_state=self.seed -196 ) +@@ -823,6 +843,9 @@22class GPOpt: + 23 """Class GPOpt. + 24 + 25 # Arguments: + 26 + 27 lower_bound: a numpy array; + 28 lower bound for researched minimum + 29 + 30 upper_bound: a numpy array; + 31 upper bound for researched minimum + 32 + 33 objective_func: a function; + 34 the objective function to be minimized + 35 + 36 params_names: a list; + 37 names of the parameters of the objective function (optional) + 38 + 39 gp_obj: a GaussianProcessRegressor object; + 40 An ML model for estimating the uncertainty around the objective function + 41 + 42 x_init: + 43 initial setting of points where `objective_func` is evaluated (optional) + 44 + 45 y_init: + 46 initial setting values at points where `objective_func` is evaluated (optional) + 47 + 48 n_init: an integer; + 49 number of points in the initial setting, when `x_init` and `y_init` are not provided + 50 + 51 n_choices: an integer; + 52 number of points for the calculation of expected improvement + 53 + 54 n_iter: an integer; + 55 number of iterations of the minimization algorithm + 56 + 57 alpha: a float; + 58 Value added to the diagonal of the kernel matrix during fitting (for Matern 5/2 kernel) + 59 + 60 n_restarts_optimizer: an integer; + 61 The number of restarts of the optimizer for finding the kernel’s parameters which maximize the log-marginal likelihood. 
+ 62 + 63 seed: an integer; + 64 reproducibility seed + 65 + 66 save: a string; + 67 Specifies where to save the optimizer in its current state + 68 + 69 n_jobs: an integer; + 70 number of jobs for parallel computing on initial setting (can be -1) + 71 + 72 per_second: a boolean; + 73 __experimental__, default is False (leave to default for now) + 74 + 75 log_scale: a boolean; + 76 __experimental__, default is False (leave to default for now) + 77 + 78 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) + 79 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) + 80 + 81 """ + 82 + 83 def __init__( + 84 self, + 85 lower_bound, + 86 upper_bound, + 87 objective_func=None, + 88 params_names=None, + 89 gp_obj=None, + 90 x_init=None, + 91 y_init=None, + 92 n_init=10, + 93 n_choices=25000, + 94 n_iter=190, + 95 alpha=1e-6, + 96 n_restarts_optimizer=25, + 97 seed=123, + 98 save=None, + 99 n_jobs=1, +100 per_second=False, # /!\ very experimental +101 log_scale=False, # /!\ experimental +102 ): +103 +104 n_dims = len(lower_bound) +105 +106 assert n_dims == len( +107 upper_bound +108 ), "'upper_bound' and 'lower_bound' must have the same dimensions" +109 +110 self.objective_func = objective_func +111 self.params_names = params_names +112 self.lower_bound = lower_bound +113 self.upper_bound = upper_bound +114 self.y_init = y_init +115 self.log_scale = log_scale +116 self.n_dims = n_dims +117 self.n_init = n_init +118 self.n_choices = n_choices +119 self.n_iter = n_iter +120 self.alpha = alpha +121 self.n_restarts_optimizer = n_restarts_optimizer +122 self.seed = seed +123 self.save = save +124 self.per_second = per_second +125 self.x_min = None +126 self.y_min = None +127 self.y_mean = None +128 self.y_std = None +129 self.ei = np.array([]) +130 self.max_ei = [] +131 if gp_obj is None: +132 self.gp_obj = GaussianProcessRegressor( +133 kernel=Matern(nu=2.5), +134 alpha=self.alpha, +135 normalize_y=True, +136 n_restarts_optimizer=self.n_restarts_optimizer, +137 random_state=self.seed, +138 ) +139 else: +140 self.gp_obj = gp_obj +141 +142 # Sobol seqs for initial design and choices +143 sobol_seq_init = np.transpose( +144 generate_sobol2( +145 n_dims=self.n_dims, +146 n_points=self.n_init, +147 skip=2, +148 ) +149 ) +150 sobol_seq_choices = np.transpose( +151 generate_sobol2( +152 n_dims=self.n_dims, +153 n_points=self.n_choices, +154 skip=self.n_init + 2, +155 ) +156 ) +157 +158 # Sobol seqs for initial design and choices with bounds +159 if self.log_scale == False: +160 +161 bounds_range = upper_bound - lower_bound +162 self.x_init = ( +163 bounds_range * sobol_seq_init + lower_bound +164 if x_init is None +165 else x_init +166 ) +167 self.x_choices = bounds_range * sobol_seq_choices + lower_bound +168 +169 else: # (!) 
experimental +170 +171 assert ( +172 lower_bound > 0 +173 ).all(), "all elements of `lower_bound` must be > 0" +174 assert ( +175 upper_bound > 0 +176 ).all(), "all elements of `upper_bound` must be > 0" +177 +178 log_lower_bound = np.log(lower_bound) +179 log_upper_bound = np.log(upper_bound) +180 log_bounds_range = log_upper_bound - log_lower_bound +181 self.x_init = ( +182 np.minimum( +183 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), +184 1.7976931348623157e308, +185 ) +186 if x_init is None +187 else x_init +188 ) +189 self.x_choices = np.minimum( +190 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), +191 1.7976931348623157e308, +192 ) +193 +194 # shelve for saving (not for loading) +195 if self.save is not None: +196 self.sh = shelve.open(filename=save, flag="c", writeback=True) 197 -198 self.n_jobs = n_jobs -199 -200 # from sklearn.base -201 def get_params(self): -202 """Get object attributes. +198 if self.per_second: +199 self.timings = [] +200 self.rf_obj = RandomForestRegressor( +201 n_estimators=250, random_state=self.seed +202 ) 203 -204 Returns -205 ------- -206 params : mapping of string to any -207 Parameter names mapped to their values. -208 """ -209 out = dict() -210 param_names = dir(self) -211 for key in param_names: -212 if key.startswith("_") is False: -213 out[key] = getattr(self, key, None) -214 -215 return out -216 -217 # for parallel case on initial design -218 def eval_objective(self, arg): -219 try: -220 return self.objective_func(self.x_init[arg, :]) -221 except: -222 return 1e06 -223 -224 # load data from stored shelve -225 def load(self, path): -226 """load data from stored shelve. -227 -228 # Arguments +204 self.n_jobs = n_jobs +205 +206 # from sklearn.base +207 def get_params(self): +208 """Get object attributes. +209 +210 Returns +211 ------- +212 params : mapping of string to any +213 Parameter names mapped to their values. +214 """ +215 out = dict() +216 param_names = dir(self) +217 for key in param_names: +218 if key.startswith("_") is False: +219 out[key] = getattr(self, key, None) +220 +221 return out +222 +223 # for parallel case on initial design +224 def eval_objective(self, arg): +225 try: +226 return self.objective_func(self.x_init[arg, :]) +227 except: +228 return 1e06 229 -230 path : a string; path to stored shelve. -231 -232 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -233 """ -234 -235 self.sh = shelve.open(filename=path) -236 for key, value in self.sh.items(): -237 setattr(self, key, value) -238 -239 # update shelve in optimization loop -240 def update_shelve(self): -241 for key, value in self.get_params().items(): -242 if (callable(value) is False) & (key != "sh"): -243 self.sh[key] = value -244 self.sh.sync() -245 -246 # closing shelve (can't be modified after) -247 def close_shelve(self): -248 """Close shelve. -249 -250 # Arguments +230 # load data from stored shelve +231 def load(self, path): +232 """load data from stored shelve. +233 +234 # Arguments +235 +236 path : a string; path to stored shelve. 
+237 +238 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +239 """ +240 +241 self.sh = shelve.open(filename=path) +242 for key, value in self.sh.items(): +243 setattr(self, key, value) +244 +245 # update shelve in optimization loop +246 def update_shelve(self): +247 for key, value in self.get_params().items(): +248 if (callable(value) is False) & (key != "sh"): +249 self.sh[key] = value +250 self.sh.sync() 251 -252 No argument. -253 -254 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -255 """ -256 -257 self.sh.close() -258 -259 # fit predict -260 def gp_fit_predict(self, X_train, y_train, X_test): -261 -262 if len(X_train.shape) == 1: -263 X_train = X_train.reshape((-1, 1)) -264 X_test = X_test.reshape((-1, 1)) -265 -266 # Get mean and standard deviation -267 return self.gp_obj.fit(X_train, y_train).predict( -268 X_test, return_std=True -269 ) -270 -271 # fit predict timings -272 def timings_fit_predict(self, X_train, y_train, X_test): -273 -274 if len(X_train.shape) == 1: -275 X_train = X_train.reshape((-1, 1)) -276 X_test = X_test.reshape((-1, 1)) -277 -278 # Get mean preds for timings -279 return self.rf_obj.fit(X_train, y_train).predict(X_test) -280 -281 # find next parameter by using expected improvement (ei) -282 def next_parameter_by_ei(self, seed, i): +252 # closing shelve (can't be modified after) +253 def close_shelve(self): +254 """Close shelve. +255 +256 # Arguments +257 +258 No argument. +259 +260 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +261 """ +262 +263 self.sh.close() +264 +265 # fit predict +266 def gp_fit_predict(self, X_train, y_train, X_test): +267 +268 if len(X_train.shape) == 1: +269 X_train = X_train.reshape((-1, 1)) +270 X_test = X_test.reshape((-1, 1)) +271 +272 # Get mean and standard deviation +273 return self.gp_obj.fit(X_train, y_train).predict( +274 X_test, return_std=True +275 ) +276 +277 # fit predict timings +278 def timings_fit_predict(self, X_train, y_train, X_test): +279 +280 if len(X_train.shape) == 1: +281 X_train = X_train.reshape((-1, 1)) +282 X_test = X_test.reshape((-1, 1)) 283 -284 gamma_hat = (self.y_min - self.y_mean) / self.y_std -285 -286 self.ei = -self.y_std * ( -287 gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat) -288 ) +284 # Get mean preds for timings +285 return self.rf_obj.fit(X_train, y_train).predict(X_test) +286 +287 # find next parameter by using expected improvement (ei) +288 def next_parameter_by_ei(self, seed, i): 289 -290 # find max index ----- +290 gamma_hat = (self.y_min - self.y_mean) / self.y_std 291 -292 if self.per_second is False: -293 -294 # find index for max. ei -295 max_index = self.ei.argmin() -296 -297 else: # self.per_second is True -298 -299 # predict timings on self.x_choices -300 # train on X = self.parameters and y = self.timings -301 # (must have same shape[0]) -302 timing_preds = self.timings_fit_predict( -303 X_train=np.asarray(self.parameters), -304 y_train=np.asarray(self.timings), -305 X_test=self.x_choices, -306 ) -307 -308 # find index for max. ei (and min. 
timings) -309 max_index = (-self.ei / timing_preds).argmax() -310 -311 self.max_ei.append(np.abs(self.ei[max_index])) -312 -313 # Select next choice -314 next_param = self.x_choices[max_index, :] -315 -316 if next_param in np.asarray(self.parameters): -317 -318 if self.log_scale == False: -319 -320 np.random.seed(self.seed * i + 1000) -321 next_param = ( -322 self.upper_bound - self.lower_bound -323 ) * np.random.rand(self.n_dims) + self.lower_bound -324 -325 else: # /!\ very... experimental -326 -327 np.random.seed(self.seed) -328 log_upper_bound = np.log(self.upper_bound) -329 log_lower_bound = np.log(self.lower_bound) -330 log_bounds_range = log_upper_bound - log_lower_bound -331 -332 next_param = np.minimum( -333 np.exp( -334 log_bounds_range * np.random.rand(self.n_dims) -335 + log_lower_bound -336 ), -337 1.7976931348623157e308, -338 ) -339 -340 return next_param -341 -342 # optimize the objective -343 def optimize( -344 self, -345 verbose=1, -346 n_more_iter=None, -347 abs_tol=None, # suggested 1e-4, for n_iter = 200 -348 min_budget=50, # minimum budget for early stopping -349 func_args=None, -350 ): -351 """Launch optimization loop. -352 -353 # Arguments: -354 -355 verbose: an integer; -356 verbose = 0: nothing is printed, -357 verbose = 1: a progress bar is printed (longer than 0), -358 verbose = 2: information about each iteration is printed (longer than 1) -359 -360 n_more_iter: an integer; -361 additional number of iterations for the optimizer (which has been run once) -362 -363 abs_tol: a float; -364 tolerance for convergence of the optimizer (early stopping based on expected improvement) +292 self.ei = -self.y_std * ( +293 gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat) +294 ) +295 +296 # find max index ----- +297 +298 if self.per_second is False: +299 +300 # find index for max. ei +301 max_index = self.ei.argmin() +302 +303 else: # self.per_second is True +304 +305 # predict timings on self.x_choices +306 # train on X = self.parameters and y = self.timings +307 # (must have same shape[0]) +308 timing_preds = self.timings_fit_predict( +309 X_train=np.asarray(self.parameters), +310 y_train=np.asarray(self.timings), +311 X_test=self.x_choices, +312 ) +313 +314 # find index for max. ei (and min. timings) +315 max_index = (-self.ei / timing_preds).argmax() +316 +317 self.max_ei.append(np.abs(self.ei[max_index])) +318 +319 # Select next choice +320 next_param = self.x_choices[max_index, :] +321 +322 if next_param in np.asarray(self.parameters): +323 +324 if self.log_scale == False: +325 +326 np.random.seed(self.seed * i + 1000) +327 next_param = ( +328 self.upper_bound - self.lower_bound +329 ) * np.random.rand(self.n_dims) + self.lower_bound +330 +331 else: # /!\ very... experimental +332 +333 np.random.seed(self.seed) +334 log_upper_bound = np.log(self.upper_bound) +335 log_lower_bound = np.log(self.lower_bound) +336 log_bounds_range = log_upper_bound - log_lower_bound +337 +338 next_param = np.minimum( +339 np.exp( +340 log_bounds_range * np.random.rand(self.n_dims) +341 + log_lower_bound +342 ), +343 1.7976931348623157e308, +344 ) +345 +346 return next_param +347 +348 # optimize the objective +349 def optimize( +350 self, +351 verbose=1, +352 n_more_iter=None, +353 abs_tol=None, # suggested 1e-4, for n_iter = 200 +354 min_budget=50, # minimum budget for early stopping +355 func_args=None, +356 ): +357 """Launch optimization loop. 
+358 +359 # Arguments: +360 +361 verbose: an integer; +362 verbose = 0: nothing is printed, +363 verbose = 1: a progress bar is printed (longer than 0), +364 verbose = 2: information about each iteration is printed (longer than 1) 365 -366 min_budget: an integer (default is 50); -367 minimum number of iterations before early stopping controlled by `abs_tol` +366 n_more_iter: an integer; +367 additional number of iterations for the optimizer (which has been run once) 368 -369 func_args: a list; -370 additional parameters for the objective function (if necessary) +369 abs_tol: a float; +370 tolerance for convergence of the optimizer (early stopping based on expected improvement) 371 -372 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) -373 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) +372 min_budget: an integer (default is 50); +373 minimum number of iterations before early stopping controlled by `abs_tol` 374 -375 """ -376 -377 # verbose = 0: nothing is printed -378 # verbose = 1: a progress bar is printed (longer than 0) -379 # verbose = 2: information about each iteration is printed (longer than 1) -380 if func_args is None: -381 func_args = [] +375 func_args: a list; +376 additional parameters for the objective function (if necessary) +377 +378 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) +379 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) +380 +381 """ 382 -383 if ( -384 n_more_iter is None -385 ): # initial optimization, before more iters are requested -386 -387 n_iter = self.n_iter -388 # stopping iter for early stopping (default is total budget) -389 iter_stop = n_iter # potentially # got to check this -390 -391 # initial design ---------- +383 # verbose = 0: nothing is printed +384 # verbose = 1: a progress bar is printed (longer than 0) +385 # verbose = 2: information about each iteration is printed (longer than 1) +386 if func_args is None: +387 func_args = [] +388 +389 if ( +390 n_more_iter is None +391 ): # initial optimization, before more iters are requested 392 -393 if (verbose == 1) | (verbose == 2): -394 print(f"\n Creating initial design... \n") -395 -396 if verbose == 1: -397 progbar = Progbar(target=self.n_init) +393 n_iter = self.n_iter +394 # stopping iter for early stopping (default is total budget) +395 iter_stop = n_iter # potentially # got to check this +396 +397 # initial design ---------- 398 -399 self.parameters = self.x_init.tolist() -400 self.scores = [] +399 if (verbose == 1) | (verbose == 2): +400 print(f"\n Creating initial design... 
\n") 401 -402 if self.save is not None: -403 self.update_shelve() +402 if verbose == 1: +403 progbar = Progbar(target=self.n_init) 404 -405 if self.y_init is None: # calculate scores on initial design -406 -407 assert ( -408 self.objective_func is not None -409 ), "self.y_init is None: must have 'objective_func' not None" +405 self.parameters = self.x_init.tolist() +406 self.scores = [] +407 +408 if self.save is not None: +409 self.update_shelve() 410 -411 if self.n_jobs == 1: +411 if self.y_init is None: # calculate scores on initial design 412 -413 for i in range(self.n_init): -414 -415 x_next = self.x_init[i, :] +413 assert ( +414 self.objective_func is not None +415 ), "self.y_init is None: must have 'objective_func' not None" 416 -417 try: +417 if self.n_jobs == 1: 418 -419 if self.per_second is True: +419 for i in range(self.n_init): 420 -421 start = time() -422 score = self.objective_func(x_next, *func_args) -423 if (np.isfinite(score) == False) or ( -424 np.isnan(score) == True -425 ): -426 continue -427 self.timings.append(np.log(time() - start)) -428 -429 else: # self.per_second is False -430 -431 score = self.objective_func(x_next, *func_args) -432 if (np.isfinite(score) == False) or ( -433 np.isnan(score) == True -434 ): -435 continue +421 x_next = self.x_init[i, :] +422 +423 try: +424 +425 if self.per_second is True: +426 +427 start = time() +428 score = self.objective_func(x_next, *func_args) +429 if (np.isfinite(score) == False) or ( +430 np.isnan(score) == True +431 ): +432 continue +433 self.timings.append(np.log(time() - start)) +434 +435 else: # self.per_second is False 436 -437 self.scores.append(score) -438 -439 if self.save is not None: -440 self.update_shelve() -441 -442 except: -443 -444 continue -445 -446 if verbose == 1: -447 progbar.update(i) # update progress bar -448 -449 if verbose == 2: -450 print(f"point: {x_next}; score: {score}") -451 # end loop # calculate scores on initial design -452 -453 if verbose == 1: -454 progbar.update(self.n_init) -455 -456 else: # self.n_jobs != 1 -457 -458 assert ( -459 self.per_second is False -460 ), "timings not calculated here" +437 score = self.objective_func(x_next, *func_args) +438 if (np.isfinite(score) == False) or ( +439 np.isnan(score) == True +440 ): +441 continue +442 +443 self.scores.append(score) +444 +445 if self.save is not None: +446 self.update_shelve() +447 +448 except: +449 +450 continue +451 +452 if verbose == 1: +453 progbar.update(i) # update progress bar +454 +455 if verbose == 2: +456 print(f"point: {x_next}; score: {score}") +457 # end loop # calculate scores on initial design +458 +459 if verbose == 1: +460 progbar.update(self.n_init) 461 -462 scores = Parallel(n_jobs=self.n_jobs, prefer="threads")( -463 delayed(self.objective_func)(self.x_init[i, :]) -464 for i in range(self.n_init) -465 ) -466 -467 self.scores = scores -468 -469 if self.save is not None: -470 self.update_shelve() -471 -472 else: # if self.y_init is None: -473 -474 assert self.x_init.shape[0] == len( -475 self.y_init -476 ), "must have: self.x_init.shape[0] == len(self.y_init)" +462 else: # self.n_jobs != 1 +463 +464 assert ( +465 self.per_second is False +466 ), "timings not calculated here" +467 +468 scores = Parallel(n_jobs=self.n_jobs, prefer="threads")( +469 delayed(self.objective_func)(self.x_init[i, :]) +470 for i in range(self.n_init) +471 ) +472 +473 self.scores = scores +474 +475 if self.save is not None: +476 self.update_shelve() 477 -478 self.scores = pickle.loads( -479 pickle.dumps(self.y_init.tolist(), -1) -480 ) -481 
-482 # current best score on initial design -483 min_index = (np.asarray(self.scores)).argmin() -484 self.y_min = self.scores[min_index] -485 self.x_min = self.x_init[min_index, :] -486 -487 # current gp mean and std on initial design -488 y_mean, y_std = self.gp_fit_predict( -489 np.asarray(self.parameters), -490 np.asarray(self.scores), -491 self.x_choices, -492 ) -493 self.y_mean = y_mean -494 self.y_std = np.maximum(2.220446049250313e-16, y_std) -495 -496 # saving after initial design computation -497 if self.save is not None: -498 self.update_shelve() -499 -500 else: # if n_more_iter is not None +478 else: # if self.y_init is None: +479 +480 assert self.x_init.shape[0] == len( +481 self.y_init +482 ), "must have: self.x_init.shape[0] == len(self.y_init)" +483 +484 self.scores = pickle.loads( +485 pickle.dumps(self.y_init.tolist(), -1) +486 ) +487 +488 # current best score on initial design +489 min_index = (np.asarray(self.scores)).argmin() +490 self.y_min = self.scores[min_index] +491 self.x_min = self.x_init[min_index, :] +492 +493 # current gp mean and std on initial design +494 y_mean, y_std = self.gp_fit_predict( +495 np.asarray(self.parameters), +496 np.asarray(self.scores), +497 self.x_choices, +498 ) +499 self.y_mean = y_mean +500 self.y_std = np.maximum(2.220446049250313e-16, y_std) 501 -502 assert self.n_iter > 5, "you must have n_iter > 5" -503 n_iter = n_more_iter -504 iter_stop = len(self.max_ei) + n_more_iter # potentially +502 # saving after initial design computation +503 if self.save is not None: +504 self.update_shelve() 505 -506 if (verbose == 1) | (verbose == 2): -507 print(f"\n ...Done. \n") -508 try: -509 print(np.hstack((self.x_init, self.y_init.reshape(-1, 1)))) -510 except: -511 pass -512 -513 # end init design ---------- -514 -515 # if n_more_iter is None: # initial optimization, before more iters are requested -516 -517 if (verbose == 1) | (verbose == 2): -518 print(f"\n Optimization loop... \n") -519 -520 # early stopping? -521 if abs_tol is not None: -522 assert ( -523 min_budget > 20 -524 ), "With 'abs_tol' provided, you must have 'min_budget' > 20" +506 else: # if n_more_iter is not None +507 +508 assert self.n_iter > 5, "you must have n_iter > 5" +509 n_iter = n_more_iter +510 iter_stop = len(self.max_ei) + n_more_iter # potentially +511 +512 if (verbose == 1) | (verbose == 2): +513 print(f"\n ...Done. \n") +514 try: +515 print(np.hstack((self.x_init, self.y_init.reshape(-1, 1)))) +516 except: +517 pass +518 +519 # end init design ---------- +520 +521 # if n_more_iter is None: # initial optimization, before more iters are requested +522 +523 if (verbose == 1) | (verbose == 2): +524 print(f"\n Optimization loop... \n") 525 -526 if verbose == 1: -527 progbar = Progbar(target=n_iter) -528 -529 # main loop ---------- -530 -531 for i in range(n_iter): -532 -533 # find next set of parameters (vector), maximizing ei -534 next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i) -535 -536 try: -537 -538 if self.per_second is True: -539 -540 start = time() +526 # early stopping? 
+527 if abs_tol is not None: +528 assert ( +529 min_budget > 20 +530 ), "With 'abs_tol' provided, you must have 'min_budget' > 20" +531 +532 if verbose == 1: +533 progbar = Progbar(target=n_iter) +534 +535 # main loop ---------- +536 +537 for i in range(n_iter): +538 +539 # find next set of parameters (vector), maximizing ei +540 next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i) 541 -542 if self.objective_func is not None: +542 try: 543 -544 score_next_param = self.objective_func( -545 next_param, *func_args -546 ) +544 if self.per_second is True: +545 +546 start = time() 547 -548 if (np.isfinite(score_next_param) == False) or ( -549 np.isnan(score_next_param) == True -550 ): -551 continue -552 -553 else: -554 -555 assert (self.x_init is not None) and ( -556 self.y_init is not None -557 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +548 if self.objective_func is not None: +549 +550 score_next_param = self.objective_func( +551 next_param, *func_args +552 ) +553 +554 if (np.isfinite(score_next_param) == False) or ( +555 np.isnan(score_next_param) == True +556 ): +557 continue 558 -559 print(f"\n next param: {next_param} \n") -560 score_next_param = float( -561 input("get new score: \n") -562 ) # or an API response -563 -564 if (np.isfinite(score_next_param) == False) or ( -565 np.isnan(score_next_param) == True -566 ): -567 continue -568 -569 self.timings.append(np.log(time() - start)) -570 -571 else: # self.per_second is False: -572 -573 if self.objective_func is not None: +559 else: +560 +561 assert (self.x_init is not None) and ( +562 self.y_init is not None +563 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +564 +565 print(f"\n next param: {next_param} \n") +566 score_next_param = float( +567 input("get new score: \n") +568 ) # or an API response +569 +570 if (np.isfinite(score_next_param) == False) or ( +571 np.isnan(score_next_param) == True +572 ): +573 continue 574 -575 score_next_param = self.objective_func( -576 next_param, *func_args -577 ) +575 self.timings.append(np.log(time() - start)) +576 +577 else: # self.per_second is False: 578 -579 if (np.isfinite(score_next_param) == False) or ( -580 np.isnan(score_next_param) == True -581 ): -582 continue -583 -584 else: -585 -586 assert (self.x_init is not None) and ( -587 self.y_init is not None -588 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +579 if self.objective_func is not None: +580 +581 score_next_param = self.objective_func( +582 next_param, *func_args +583 ) +584 +585 if (np.isfinite(score_next_param) == False) or ( +586 np.isnan(score_next_param) == True +587 ): +588 continue 589 -590 print(f"\n next param: {next_param} \n") -591 score_next_param = float( -592 input("get new score: \n") -593 ) # or an API response -594 -595 if (np.isfinite(score_next_param) == False) or ( -596 np.isnan(score_next_param) == True -597 ): -598 continue -599 -600 except: -601 -602 continue -603 -604 self.parameters.append(next_param.tolist()) +590 else: +591 +592 assert (self.x_init is not None) and ( +593 self.y_init is not None +594 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +595 +596 print(f"\n next param: {next_param} \n") +597 score_next_param = float( +598 input("get new score: \n") +599 ) # or an API response +600 +601 if (np.isfinite(score_next_param) == False) or ( +602 
np.isnan(score_next_param) == True +603 ): +604 continue 605 -606 self.scores.append(score_next_param) +606 except: 607 -608 if self.save is not None: -609 self.update_shelve() -610 -611 if verbose == 2: -612 print(f"iteration {i + 1} -----") -613 print(f"current minimum: {self.x_min}") -614 print(f"current minimum score: {self.y_min}") -615 print(f"next parameter: {next_param}") -616 print(f"score for next parameter: {score_next_param} \n") -617 -618 if score_next_param < self.y_min: -619 self.x_min = next_param -620 self.y_min = score_next_param -621 if self.save is not None: -622 self.update_shelve() +608 continue +609 +610 self.parameters.append(next_param.tolist()) +611 +612 self.scores.append(score_next_param) +613 +614 if self.save is not None: +615 self.update_shelve() +616 +617 if verbose == 2: +618 print(f"iteration {i + 1} -----") +619 print(f"current minimum: {self.x_min}") +620 print(f"current minimum score: {self.y_min}") +621 print(f"next parameter: {next_param}") +622 print(f"score for next parameter: {score_next_param} \n") 623 -624 self.y_mean, self.y_std = self.gp_fit_predict( -625 np.asarray(self.parameters), -626 np.asarray(self.scores), -627 self.x_choices, -628 ) +624 if score_next_param < self.y_min: +625 self.x_min = next_param +626 self.y_min = score_next_param +627 if self.save is not None: +628 self.update_shelve() 629 -630 if self.save is not None: -631 self.update_shelve() -632 -633 if verbose == 1: -634 progbar.update(i + 1) # update progress bar +630 self.y_mean, self.y_std = self.gp_fit_predict( +631 np.asarray(self.parameters), +632 np.asarray(self.scores), +633 self.x_choices, +634 ) 635 -636 # early stopping -637 -638 if abs_tol is not None: -639 -640 # if self.max_ei.size > (self.n_init + self.n_iter * min_budget_pct): -641 if len(self.max_ei) > min_budget: -642 -643 diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei))) -644 -645 if diff_max_ei[-1] <= abs_tol: -646 -647 iter_stop = len(self.max_ei) # index i starts at 0 +636 if self.save is not None: +637 self.update_shelve() +638 +639 if verbose == 1: +640 progbar.update(i + 1) # update progress bar +641 +642 # early stopping +643 +644 if abs_tol is not None: +645 +646 # if self.max_ei.size > (self.n_init + self.n_iter * min_budget_pct): +647 if len(self.max_ei) > min_budget: 648 -649 break +649 diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei))) 650 -651 # end main loop ---------- +651 if diff_max_ei[-1] <= abs_tol: 652 -653 if (verbose == 1) & (i < (n_iter - 1)): -654 progbar.update(n_iter) -655 -656 self.n_iter = iter_stop -657 if self.save is not None: -658 self.update_shelve() -659 -660 return (self.x_min, self.y_min) +653 iter_stop = len(self.max_ei) # index i starts at 0 +654 +655 break +656 +657 # end main loop ---------- +658 +659 if (verbose == 1) & (i < (n_iter - 1)): +660 progbar.update(n_iter) +661 +662 self.n_iter = iter_stop +663 if self.save is not None: +664 self.update_shelve() +665 +666 DescribeResult = namedtuple( +667 "DescribeResult", ("best_params", "best_score") +668 ) +669 +670 if self.params_names is None: +671 +672 return DescribeResult(self.x_min, self.y_min) +673 +674 else: +675 +676 return DescribeResult( +677 dict(zip(self.params_names, self.x_min)), self.y_min +678 )Arguments:
 objective_func: a function;
     the objective function to be minimized

+params_names: a list;
+    names of the parameters of the objective function (optional)

 gp_obj: a GaussianProcessRegressor object;
     An ML model for estimating the uncertainty around the objective function
@@ -872,132 +895,134 @@
Arguments:
- GPOpt( lower_bound, upper_bound, objective_func=None, gp_obj=None, x_init=None, y_init=None, n_init=10, n_choices=25000, n_iter=190, alpha=1e-06, n_restarts_optimizer=25, seed=123, save=None, n_jobs=1, per_second=False, log_scale=False) + GPOpt( lower_bound, upper_bound, objective_func=None, params_names=None, gp_obj=None, x_init=None, y_init=None, n_init=10, n_choices=25000, n_iter=190, alpha=1e-06, n_restarts_optimizer=25, seed=123, save=None, n_jobs=1, per_second=False, log_scale=False)-+79 def __init__( - 80 self, - 81 lower_bound, - 82 upper_bound, - 83 objective_func=None, - 84 gp_obj=None, - 85 x_init=None, - 86 y_init=None, - 87 n_init=10, - 88 n_choices=25000, - 89 n_iter=190, - 90 alpha=1e-6, - 91 n_restarts_optimizer=25, - 92 seed=123, - 93 save=None, - 94 n_jobs=1, - 95 per_second=False, # /!\ very experimental - 96 log_scale=False, # /!\ experimental - 97 ): - 98 - 99 n_dims = len(lower_bound) -100 -101 assert n_dims == len( -102 upper_bound -103 ), "'upper_bound' and 'lower_bound' must have the same dimensions" -104 -105 self.objective_func = objective_func -106 self.lower_bound = lower_bound -107 self.upper_bound = upper_bound -108 self.y_init = y_init -109 self.log_scale = log_scale -110 self.n_dims = n_dims -111 self.n_init = n_init -112 self.n_choices = n_choices -113 self.n_iter = n_iter -114 self.alpha = alpha -115 self.n_restarts_optimizer = n_restarts_optimizer -116 self.seed = seed -117 self.save = save -118 self.per_second = per_second -119 self.x_min = None -120 self.y_min = None -121 self.y_mean = None -122 self.y_std = None -123 self.ei = np.array([]) -124 self.max_ei = [] -125 if gp_obj is None: -126 self.gp_obj = GaussianProcessRegressor( -127 kernel=Matern(nu=2.5), -128 alpha=self.alpha, -129 normalize_y=True, -130 n_restarts_optimizer=self.n_restarts_optimizer, -131 random_state=self.seed, -132 ) -133 else: -134 self.gp_obj = gp_obj -135 -136 # Sobol seqs for initial design and choices -137 sobol_seq_init = np.transpose( -138 generate_sobol2( -139 n_dims=self.n_dims, -140 n_points=self.n_init, -141 skip=2, -142 ) -143 ) -144 sobol_seq_choices = np.transpose( -145 generate_sobol2( -146 n_dims=self.n_dims, -147 n_points=self.n_choices, -148 skip=self.n_init + 2, -149 ) -150 ) -151 -152 # Sobol seqs for initial design and choices with bounds -153 if self.log_scale == False: -154 -155 bounds_range = upper_bound - lower_bound -156 self.x_init = ( -157 bounds_range * sobol_seq_init + lower_bound -158 if x_init is None -159 else x_init -160 ) -161 self.x_choices = bounds_range * sobol_seq_choices + lower_bound -162 -163 else: # (!) 
experimental -164 -165 assert ( -166 lower_bound > 0 -167 ).all(), "all elements of `lower_bound` must be > 0" -168 assert ( -169 upper_bound > 0 -170 ).all(), "all elements of `upper_bound` must be > 0" -171 -172 log_lower_bound = np.log(lower_bound) -173 log_upper_bound = np.log(upper_bound) -174 log_bounds_range = log_upper_bound - log_lower_bound -175 self.x_init = ( -176 np.minimum( -177 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), -178 1.7976931348623157e308, -179 ) -180 if x_init is None -181 else x_init -182 ) -183 self.x_choices = np.minimum( -184 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), -185 1.7976931348623157e308, -186 ) -187 -188 # shelve for saving (not for loading) -189 if self.save is not None: -190 self.sh = shelve.open(filename=save, flag="c", writeback=True) -191 -192 if self.per_second: -193 self.timings = [] -194 self.rf_obj = RandomForestRegressor( -195 n_estimators=250, random_state=self.seed -196 ) +@@ -1014,6 +1039,17 @@83 def __init__( + 84 self, + 85 lower_bound, + 86 upper_bound, + 87 objective_func=None, + 88 params_names=None, + 89 gp_obj=None, + 90 x_init=None, + 91 y_init=None, + 92 n_init=10, + 93 n_choices=25000, + 94 n_iter=190, + 95 alpha=1e-6, + 96 n_restarts_optimizer=25, + 97 seed=123, + 98 save=None, + 99 n_jobs=1, +100 per_second=False, # /!\ very experimental +101 log_scale=False, # /!\ experimental +102 ): +103 +104 n_dims = len(lower_bound) +105 +106 assert n_dims == len( +107 upper_bound +108 ), "'upper_bound' and 'lower_bound' must have the same dimensions" +109 +110 self.objective_func = objective_func +111 self.params_names = params_names +112 self.lower_bound = lower_bound +113 self.upper_bound = upper_bound +114 self.y_init = y_init +115 self.log_scale = log_scale +116 self.n_dims = n_dims +117 self.n_init = n_init +118 self.n_choices = n_choices +119 self.n_iter = n_iter +120 self.alpha = alpha +121 self.n_restarts_optimizer = n_restarts_optimizer +122 self.seed = seed +123 self.save = save +124 self.per_second = per_second +125 self.x_min = None +126 self.y_min = None +127 self.y_mean = None +128 self.y_std = None +129 self.ei = np.array([]) +130 self.max_ei = [] +131 if gp_obj is None: +132 self.gp_obj = GaussianProcessRegressor( +133 kernel=Matern(nu=2.5), +134 alpha=self.alpha, +135 normalize_y=True, +136 n_restarts_optimizer=self.n_restarts_optimizer, +137 random_state=self.seed, +138 ) +139 else: +140 self.gp_obj = gp_obj +141 +142 # Sobol seqs for initial design and choices +143 sobol_seq_init = np.transpose( +144 generate_sobol2( +145 n_dims=self.n_dims, +146 n_points=self.n_init, +147 skip=2, +148 ) +149 ) +150 sobol_seq_choices = np.transpose( +151 generate_sobol2( +152 n_dims=self.n_dims, +153 n_points=self.n_choices, +154 skip=self.n_init + 2, +155 ) +156 ) +157 +158 # Sobol seqs for initial design and choices with bounds +159 if self.log_scale == False: +160 +161 bounds_range = upper_bound - lower_bound +162 self.x_init = ( +163 bounds_range * sobol_seq_init + lower_bound +164 if x_init is None +165 else x_init +166 ) +167 self.x_choices = bounds_range * sobol_seq_choices + lower_bound +168 +169 else: # (!) 
experimental +170 +171 assert ( +172 lower_bound > 0 +173 ).all(), "all elements of `lower_bound` must be > 0" +174 assert ( +175 upper_bound > 0 +176 ).all(), "all elements of `upper_bound` must be > 0" +177 +178 log_lower_bound = np.log(lower_bound) +179 log_upper_bound = np.log(upper_bound) +180 log_bounds_range = log_upper_bound - log_lower_bound +181 self.x_init = ( +182 np.minimum( +183 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), +184 1.7976931348623157e308, +185 ) +186 if x_init is None +187 else x_init +188 ) +189 self.x_choices = np.minimum( +190 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), +191 1.7976931348623157e308, +192 ) +193 +194 # shelve for saving (not for loading) +195 if self.save is not None: +196 self.sh = shelve.open(filename=save, flag="c", writeback=True) 197 -198 self.n_jobs = n_jobs +198 if self.per_second: +199 self.timings = [] +200 self.rf_obj = RandomForestRegressor( +201 n_estimators=250, random_state=self.seed +202 ) +203 +204 self.n_jobs = n_jobsArguments:
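Since the new `params_names` argument only labels the returned optimum, a minimal instantiation sketch may help. It assumes the package is importable as `import GPopt as gp` (as in the linked blog posts) and uses a toy two-dimensional objective; names and values are illustrative, not part of the library:

```python
import numpy as np
import GPopt as gp  # assumed import style; adjust to your install

# Toy objective: six-hump camel function on [-3, 3] x [-2, 2]
def six_hump_camel(x):
    x1, x2 = x[0], x[1]
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2
            + (-4 + 4 * x2**2) * x2**2)

opt = gp.GPOpt(
    lower_bound=np.array([-3.0, -2.0]),
    upper_bound=np.array([3.0, 2.0]),
    objective_func=six_hump_camel,
    params_names=["x1", "x2"],  # labels used in the returned best parameters
    n_init=10,
    n_iter=50,
    seed=123,
)
```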
@@ -1246,21 +1282,21 @@
Arguments:
-201 def get_params(self): -202 """Get object attributes. -203 -204 Returns -205 ------- -206 params : mapping of string to any -207 Parameter names mapped to their values. -208 """ -209 out = dict() -210 param_names = dir(self) -211 for key in param_names: -212 if key.startswith("_") is False: -213 out[key] = getattr(self, key, None) -214 -215 return out +@@ -1285,11 +1321,11 @@207 def get_params(self): +208 """Get object attributes. +209 +210 Returns +211 ------- +212 params : mapping of string to any +213 Parameter names mapped to their values. +214 """ +215 out = dict() +216 param_names = dir(self) +217 for key in param_names: +218 if key.startswith("_") is False: +219 out[key] = getattr(self, key, None) +220 +221 return outReturns
-218 def eval_objective(self, arg): -219 try: -220 return self.objective_func(self.x_init[arg, :]) -221 except: -222 return 1e06 +@@ -1307,19 +1343,19 @@224 def eval_objective(self, arg): +225 try: +226 return self.objective_func(self.x_init[arg, :]) +227 except: +228 return 1e06Returns
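`eval_objective` exists for the parallel evaluation of the initial design. A short sketch of that mode follows, under the same `import GPopt as gp` assumption as above, with a hypothetical sphere objective; note the source asserts `per_second is False` whenever `n_jobs != 1`:

```python
import numpy as np
import GPopt as gp  # assumed import style

def sphere(x):
    # simple separable benchmark, minimum 0 at the origin
    return float(np.sum(x**2))

# n_jobs != 1 evaluates the n_init initial points in parallel
# (joblib threads); per_second must remain False in that case,
# since evaluation timings are not recorded
opt = gp.GPOpt(
    lower_bound=np.full(4, -5.0),
    upper_bound=np.full(4, 5.0),
    objective_func=sphere,
    n_init=20,
    n_iter=30,
    n_jobs=-1,
)
res = opt.optimize(verbose=0)
```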
-225 def load(self, path): -226 """load data from stored shelve. -227 -228 # Arguments -229 -230 path : a string; path to stored shelve. -231 -232 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -233 """ -234 -235 self.sh = shelve.open(filename=path) -236 for key, value in self.sh.items(): -237 setattr(self, key, value) +@@ -1345,11 +1381,11 @@231 def load(self, path): +232 """load data from stored shelve. +233 +234 # Arguments +235 +236 path : a string; path to stored shelve. +237 +238 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +239 """ +240 +241 self.sh = shelve.open(filename=path) +242 for key, value in self.sh.items(): +243 setattr(self, key, value)Arguments
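`load` pairs with the `save` constructor argument and `close_shelve` to checkpoint and resume an optimizer, as in the linked "save and resume" blog post. A sketch of that workflow, with an illustrative checkpoint path and the same import assumption as above:

```python
import numpy as np
import GPopt as gp  # assumed import style

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

# Session 1: optimize while checkpointing state to a shelve file
opt = gp.GPOpt(
    lower_bound=lower, upper_bound=upper,
    objective_func=rosenbrock, n_init=10, n_iter=25,
    save="./gpopt_checkpoint",  # illustrative path
)
opt.optimize(verbose=0)
opt.close_shelve()  # the shelve cannot be modified afterwards

# Session 2: rebuild an optimizer, restore its state, ask for more iterations
opt2 = gp.GPOpt(
    lower_bound=lower, upper_bound=upper,
    objective_func=rosenbrock,
)
opt2.load(path="./gpopt_checkpoint")
opt2.optimize(n_more_iter=25, verbose=0)
```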
-240 def update_shelve(self): -241 for key, value in self.get_params().items(): -242 if (callable(value) is False) & (key != "sh"): -243 self.sh[key] = value -244 self.sh.sync() +@@ -1367,17 +1403,17 @@246 def update_shelve(self): +247 for key, value in self.get_params().items(): +248 if (callable(value) is False) & (key != "sh"): +249 self.sh[key] = value +250 self.sh.sync()Arguments
-247 def close_shelve(self): -248 """Close shelve. -249 -250 # Arguments -251 -252 No argument. -253 -254 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -255 """ -256 -257 self.sh.close() +@@ -1403,16 +1439,16 @@253 def close_shelve(self): +254 """Close shelve. +255 +256 # Arguments +257 +258 No argument. +259 +260 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +261 """ +262 +263 self.sh.close()Arguments
-260 def gp_fit_predict(self, X_train, y_train, X_test): -261 -262 if len(X_train.shape) == 1: -263 X_train = X_train.reshape((-1, 1)) -264 X_test = X_test.reshape((-1, 1)) -265 -266 # Get mean and standard deviation -267 return self.gp_obj.fit(X_train, y_train).predict( -268 X_test, return_std=True -269 ) +@@ -1430,14 +1466,14 @@266 def gp_fit_predict(self, X_train, y_train, X_test): +267 +268 if len(X_train.shape) == 1: +269 X_train = X_train.reshape((-1, 1)) +270 X_test = X_test.reshape((-1, 1)) +271 +272 # Get mean and standard deviation +273 return self.gp_obj.fit(X_train, y_train).predict( +274 X_test, return_std=True +275 )Arguments
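`gp_fit_predict` is a thin wrapper around scikit-learn. The standalone equivalent below, on synthetic data, shows the surrogate defaults used when `gp_obj` is not supplied (`Matern(nu=2.5)`, `normalize_y=True`) and the `return_std=True` call that yields the posterior mean and standard deviation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(123)
X_train = rng.random((10, 2))             # points evaluated so far
y_train = np.sin(X_train).sum(axis=1)     # their (toy) scores
X_test = rng.random((5, 2))               # candidate points

gp_obj = GaussianProcessRegressor(
    kernel=Matern(nu=2.5),
    alpha=1e-6,
    normalize_y=True,
    n_restarts_optimizer=25,
    random_state=123,
)
# posterior mean and standard deviation on the candidates
y_mean, y_std = gp_obj.fit(X_train, y_train).predict(X_test, return_std=True)
```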
-272 def timings_fit_predict(self, X_train, y_train, X_test): -273 -274 if len(X_train.shape) == 1: -275 X_train = X_train.reshape((-1, 1)) -276 X_test = X_test.reshape((-1, 1)) -277 -278 # Get mean preds for timings -279 return self.rf_obj.fit(X_train, y_train).predict(X_test) +@@ -1455,65 +1491,65 @@278 def timings_fit_predict(self, X_train, y_train, X_test): +279 +280 if len(X_train.shape) == 1: +281 X_train = X_train.reshape((-1, 1)) +282 X_test = X_test.reshape((-1, 1)) +283 +284 # Get mean preds for timings +285 return self.rf_obj.fit(X_train, y_train).predict(X_test)Arguments
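For the experimental `per_second=True` mode, this random forest predicts log evaluation times on the candidate grid, and the acquisition is rescaled by that prediction. A self-contained sketch of just the selection step, on synthetic data (all arrays here are stand-ins for the optimizer's state):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(123)
parameters = rng.random((20, 2))                  # evaluated points
timings = np.log(rng.uniform(1.5, 4.0, size=20))  # stored log evaluation times
x_choices = rng.random((100, 2))                  # candidate points
ei = -rng.random(100)                             # negative expected improvement

rf_obj = RandomForestRegressor(n_estimators=250, random_state=123)
timing_preds = rf_obj.fit(parameters, timings).predict(x_choices)

# favor candidates with high expected improvement per predicted log-cost
max_index = (-ei / timing_preds).argmax()
next_param = x_choices[max_index, :]
```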
-282 def next_parameter_by_ei(self, seed, i): -283 -284 gamma_hat = (self.y_min - self.y_mean) / self.y_std -285 -286 self.ei = -self.y_std * ( -287 gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat) -288 ) +@@ -1531,324 +1567,336 @@288 def next_parameter_by_ei(self, seed, i): 289 -290 # find max index ----- +290 gamma_hat = (self.y_min - self.y_mean) / self.y_std 291 -292 if self.per_second is False: -293 -294 # find index for max. ei -295 max_index = self.ei.argmin() -296 -297 else: # self.per_second is True -298 -299 # predict timings on self.x_choices -300 # train on X = self.parameters and y = self.timings -301 # (must have same shape[0]) -302 timing_preds = self.timings_fit_predict( -303 X_train=np.asarray(self.parameters), -304 y_train=np.asarray(self.timings), -305 X_test=self.x_choices, -306 ) -307 -308 # find index for max. ei (and min. timings) -309 max_index = (-self.ei / timing_preds).argmax() -310 -311 self.max_ei.append(np.abs(self.ei[max_index])) -312 -313 # Select next choice -314 next_param = self.x_choices[max_index, :] -315 -316 if next_param in np.asarray(self.parameters): -317 -318 if self.log_scale == False: -319 -320 np.random.seed(self.seed * i + 1000) -321 next_param = ( -322 self.upper_bound - self.lower_bound -323 ) * np.random.rand(self.n_dims) + self.lower_bound -324 -325 else: # /!\ very... experimental -326 -327 np.random.seed(self.seed) -328 log_upper_bound = np.log(self.upper_bound) -329 log_lower_bound = np.log(self.lower_bound) -330 log_bounds_range = log_upper_bound - log_lower_bound -331 -332 next_param = np.minimum( -333 np.exp( -334 log_bounds_range * np.random.rand(self.n_dims) -335 + log_lower_bound -336 ), -337 1.7976931348623157e308, -338 ) -339 -340 return next_param +292 self.ei = -self.y_std * ( +293 gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat) +294 ) +295 +296 # find max index ----- +297 +298 if self.per_second is False: +299 +300 # find index for max. ei +301 max_index = self.ei.argmin() +302 +303 else: # self.per_second is True +304 +305 # predict timings on self.x_choices +306 # train on X = self.parameters and y = self.timings +307 # (must have same shape[0]) +308 timing_preds = self.timings_fit_predict( +309 X_train=np.asarray(self.parameters), +310 y_train=np.asarray(self.timings), +311 X_test=self.x_choices, +312 ) +313 +314 # find index for max. ei (and min. timings) +315 max_index = (-self.ei / timing_preds).argmax() +316 +317 self.max_ei.append(np.abs(self.ei[max_index])) +318 +319 # Select next choice +320 next_param = self.x_choices[max_index, :] +321 +322 if next_param in np.asarray(self.parameters): +323 +324 if self.log_scale == False: +325 +326 np.random.seed(self.seed * i + 1000) +327 next_param = ( +328 self.upper_bound - self.lower_bound +329 ) * np.random.rand(self.n_dims) + self.lower_bound +330 +331 else: # /!\ very... experimental +332 +333 np.random.seed(self.seed) +334 log_upper_bound = np.log(self.upper_bound) +335 log_lower_bound = np.log(self.lower_bound) +336 log_bounds_range = log_upper_bound - log_lower_bound +337 +338 next_param = np.minimum( +339 np.exp( +340 log_bounds_range * np.random.rand(self.n_dims) +341 + log_lower_bound +342 ), +343 1.7976931348623157e308, +344 ) +345 +346 return next_paramArguments
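In the code's notation, `next_parameter_by_ei` computes the classical expected improvement for minimization and stores its negative, which is why the "max. ei" comment pairs with `argmin`:

$$
\hat{\gamma}(x) = \frac{y_{\min} - \mu(x)}{\sigma(x)},
\qquad
\mathrm{EI}(x) = (y_{\min} - \mu(x))\,\Phi(\hat{\gamma}(x)) + \sigma(x)\,\varphi(\hat{\gamma}(x))
= \sigma(x)\,\bigl[\hat{\gamma}(x)\,\Phi(\hat{\gamma}(x)) + \varphi(\hat{\gamma}(x))\bigr]
$$

where $\mu$ and $\sigma$ are the GP posterior mean and standard deviation and $\Phi$, $\varphi$ the standard normal CDF and PDF. `self.ei` holds $-\mathrm{EI}(x)$ on the candidate grid, so `self.ei.argmin()` selects the candidate maximizing expected improvement.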
@@ -826,646 +847,663 @@343 def optimize( -344 self, -345 verbose=1, -346 n_more_iter=None, -347 abs_tol=None, # suggested 1e-4, for n_iter = 200 -348 min_budget=50, # minimum budget for early stopping -349 func_args=None, -350 ): -351 """Launch optimization loop. -352 -353 # Arguments: -354 -355 verbose: an integer; -356 verbose = 0: nothing is printed, -357 verbose = 1: a progress bar is printed (longer than 0), -358 verbose = 2: information about each iteration is printed (longer than 1) -359 -360 n_more_iter: an integer; -361 additional number of iterations for the optimizer (which has been run once) -362 -363 abs_tol: a float; -364 tolerance for convergence of the optimizer (early stopping based on expected improvement) +diff --git a/gpopt-docs/GPopt/GPOpt/GPOpt.html b/gpopt-docs/GPopt/GPOpt/GPOpt.html index 4a56053..52f5959 100644 --- a/gpopt-docs/GPopt/GPOpt/GPOpt.html +++ b/gpopt-docs/GPopt/GPOpt/GPOpt.html @@ -39,6 +39,9 @@349 def optimize( +350 self, +351 verbose=1, +352 n_more_iter=None, +353 abs_tol=None, # suggested 1e-4, for n_iter = 200 +354 min_budget=50, # minimum budget for early stopping +355 func_args=None, +356 ): +357 """Launch optimization loop. +358 +359 # Arguments: +360 +361 verbose: an integer; +362 verbose = 0: nothing is printed, +363 verbose = 1: a progress bar is printed (longer than 0), +364 verbose = 2: information about each iteration is printed (longer than 1) 365 -366 min_budget: an integer (default is 50); -367 minimum number of iterations before early stopping controlled by `abs_tol` +366 n_more_iter: an integer; +367 additional number of iterations for the optimizer (which has been run once) 368 -369 func_args: a list; -370 additional parameters for the objective function (if necessary) +369 abs_tol: a float; +370 tolerance for convergence of the optimizer (early stopping based on expected improvement) 371 -372 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) -373 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) +372 min_budget: an integer (default is 50); +373 minimum number of iterations before early stopping controlled by `abs_tol` 374 -375 """ -376 -377 # verbose = 0: nothing is printed -378 # verbose = 1: a progress bar is printed (longer than 0) -379 # verbose = 2: information about each iteration is printed (longer than 1) -380 if func_args is None: -381 func_args = [] +375 func_args: a list; +376 additional parameters for the objective function (if necessary) +377 +378 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) +379 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) +380 +381 """ 382 -383 if ( -384 n_more_iter is None -385 ): # initial optimization, before more iters are requested -386 -387 n_iter = self.n_iter -388 # stopping iter for early stopping (default is total budget) -389 iter_stop = n_iter # potentially # got to check this -390 -391 # initial design ---------- +383 # verbose = 0: nothing is printed +384 # verbose = 1: a progress bar is printed (longer than 0) +385 # verbose = 2: information about each iteration is printed (longer than 1) +386 if func_args is None: +387 func_args = [] +388 +389 if ( +390 n_more_iter is None +391 ): # initial optimization, before more iters are requested 392 -393 if (verbose == 1) | (verbose == 2): -394 print(f"\n Creating 
initial design... \n") -395 -396 if verbose == 1: -397 progbar = Progbar(target=self.n_init) +393 n_iter = self.n_iter +394 # stopping iter for early stopping (default is total budget) +395 iter_stop = n_iter # potentially # got to check this +396 +397 # initial design ---------- 398 -399 self.parameters = self.x_init.tolist() -400 self.scores = [] +399 if (verbose == 1) | (verbose == 2): +400 print(f"\n Creating initial design... \n") 401 -402 if self.save is not None: -403 self.update_shelve() +402 if verbose == 1: +403 progbar = Progbar(target=self.n_init) 404 -405 if self.y_init is None: # calculate scores on initial design -406 -407 assert ( -408 self.objective_func is not None -409 ), "self.y_init is None: must have 'objective_func' not None" +405 self.parameters = self.x_init.tolist() +406 self.scores = [] +407 +408 if self.save is not None: +409 self.update_shelve() 410 -411 if self.n_jobs == 1: +411 if self.y_init is None: # calculate scores on initial design 412 -413 for i in range(self.n_init): -414 -415 x_next = self.x_init[i, :] +413 assert ( +414 self.objective_func is not None +415 ), "self.y_init is None: must have 'objective_func' not None" 416 -417 try: +417 if self.n_jobs == 1: 418 -419 if self.per_second is True: +419 for i in range(self.n_init): 420 -421 start = time() -422 score = self.objective_func(x_next, *func_args) -423 if (np.isfinite(score) == False) or ( -424 np.isnan(score) == True -425 ): -426 continue -427 self.timings.append(np.log(time() - start)) -428 -429 else: # self.per_second is False -430 -431 score = self.objective_func(x_next, *func_args) -432 if (np.isfinite(score) == False) or ( -433 np.isnan(score) == True -434 ): -435 continue +421 x_next = self.x_init[i, :] +422 +423 try: +424 +425 if self.per_second is True: +426 +427 start = time() +428 score = self.objective_func(x_next, *func_args) +429 if (np.isfinite(score) == False) or ( +430 np.isnan(score) == True +431 ): +432 continue +433 self.timings.append(np.log(time() - start)) +434 +435 else: # self.per_second is False 436 -437 self.scores.append(score) -438 -439 if self.save is not None: -440 self.update_shelve() -441 -442 except: -443 -444 continue -445 -446 if verbose == 1: -447 progbar.update(i) # update progress bar -448 -449 if verbose == 2: -450 print(f"point: {x_next}; score: {score}") -451 # end loop # calculate scores on initial design -452 -453 if verbose == 1: -454 progbar.update(self.n_init) -455 -456 else: # self.n_jobs != 1 -457 -458 assert ( -459 self.per_second is False -460 ), "timings not calculated here" +437 score = self.objective_func(x_next, *func_args) +438 if (np.isfinite(score) == False) or ( +439 np.isnan(score) == True +440 ): +441 continue +442 +443 self.scores.append(score) +444 +445 if self.save is not None: +446 self.update_shelve() +447 +448 except: +449 +450 continue +451 +452 if verbose == 1: +453 progbar.update(i) # update progress bar +454 +455 if verbose == 2: +456 print(f"point: {x_next}; score: {score}") +457 # end loop # calculate scores on initial design +458 +459 if verbose == 1: +460 progbar.update(self.n_init) 461 -462 scores = Parallel(n_jobs=self.n_jobs, prefer="threads")( -463 delayed(self.objective_func)(self.x_init[i, :]) -464 for i in range(self.n_init) -465 ) -466 -467 self.scores = scores -468 -469 if self.save is not None: -470 self.update_shelve() -471 -472 else: # if self.y_init is None: -473 -474 assert self.x_init.shape[0] == len( -475 self.y_init -476 ), "must have: self.x_init.shape[0] == len(self.y_init)" +462 else: # self.n_jobs != 
1 +463 +464 assert ( +465 self.per_second is False +466 ), "timings not calculated here" +467 +468 scores = Parallel(n_jobs=self.n_jobs, prefer="threads")( +469 delayed(self.objective_func)(self.x_init[i, :]) +470 for i in range(self.n_init) +471 ) +472 +473 self.scores = scores +474 +475 if self.save is not None: +476 self.update_shelve() 477 -478 self.scores = pickle.loads( -479 pickle.dumps(self.y_init.tolist(), -1) -480 ) -481 -482 # current best score on initial design -483 min_index = (np.asarray(self.scores)).argmin() -484 self.y_min = self.scores[min_index] -485 self.x_min = self.x_init[min_index, :] -486 -487 # current gp mean and std on initial design -488 y_mean, y_std = self.gp_fit_predict( -489 np.asarray(self.parameters), -490 np.asarray(self.scores), -491 self.x_choices, -492 ) -493 self.y_mean = y_mean -494 self.y_std = np.maximum(2.220446049250313e-16, y_std) -495 -496 # saving after initial design computation -497 if self.save is not None: -498 self.update_shelve() -499 -500 else: # if n_more_iter is not None +478 else: # if self.y_init is None: +479 +480 assert self.x_init.shape[0] == len( +481 self.y_init +482 ), "must have: self.x_init.shape[0] == len(self.y_init)" +483 +484 self.scores = pickle.loads( +485 pickle.dumps(self.y_init.tolist(), -1) +486 ) +487 +488 # current best score on initial design +489 min_index = (np.asarray(self.scores)).argmin() +490 self.y_min = self.scores[min_index] +491 self.x_min = self.x_init[min_index, :] +492 +493 # current gp mean and std on initial design +494 y_mean, y_std = self.gp_fit_predict( +495 np.asarray(self.parameters), +496 np.asarray(self.scores), +497 self.x_choices, +498 ) +499 self.y_mean = y_mean +500 self.y_std = np.maximum(2.220446049250313e-16, y_std) 501 -502 assert self.n_iter > 5, "you must have n_iter > 5" -503 n_iter = n_more_iter -504 iter_stop = len(self.max_ei) + n_more_iter # potentially +502 # saving after initial design computation +503 if self.save is not None: +504 self.update_shelve() 505 -506 if (verbose == 1) | (verbose == 2): -507 print(f"\n ...Done. \n") -508 try: -509 print(np.hstack((self.x_init, self.y_init.reshape(-1, 1)))) -510 except: -511 pass -512 -513 # end init design ---------- -514 -515 # if n_more_iter is None: # initial optimization, before more iters are requested -516 -517 if (verbose == 1) | (verbose == 2): -518 print(f"\n Optimization loop... \n") -519 -520 # early stopping? -521 if abs_tol is not None: -522 assert ( -523 min_budget > 20 -524 ), "With 'abs_tol' provided, you must have 'min_budget' > 20" +506 else: # if n_more_iter is not None +507 +508 assert self.n_iter > 5, "you must have n_iter > 5" +509 n_iter = n_more_iter +510 iter_stop = len(self.max_ei) + n_more_iter # potentially +511 +512 if (verbose == 1) | (verbose == 2): +513 print(f"\n ...Done. \n") +514 try: +515 print(np.hstack((self.x_init, self.y_init.reshape(-1, 1)))) +516 except: +517 pass +518 +519 # end init design ---------- +520 +521 # if n_more_iter is None: # initial optimization, before more iters are requested +522 +523 if (verbose == 1) | (verbose == 2): +524 print(f"\n Optimization loop... \n") 525 -526 if verbose == 1: -527 progbar = Progbar(target=n_iter) -528 -529 # main loop ---------- -530 -531 for i in range(n_iter): -532 -533 # find next set of parameters (vector), maximizing ei -534 next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i) -535 -536 try: -537 -538 if self.per_second is True: -539 -540 start = time() +526 # early stopping? 
+527 if abs_tol is not None: +528 assert ( +529 min_budget > 20 +530 ), "With 'abs_tol' provided, you must have 'min_budget' > 20" +531 +532 if verbose == 1: +533 progbar = Progbar(target=n_iter) +534 +535 # main loop ---------- +536 +537 for i in range(n_iter): +538 +539 # find next set of parameters (vector), maximizing ei +540 next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i) 541 -542 if self.objective_func is not None: +542 try: 543 -544 score_next_param = self.objective_func( -545 next_param, *func_args -546 ) +544 if self.per_second is True: +545 +546 start = time() 547 -548 if (np.isfinite(score_next_param) == False) or ( -549 np.isnan(score_next_param) == True -550 ): -551 continue -552 -553 else: -554 -555 assert (self.x_init is not None) and ( -556 self.y_init is not None -557 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +548 if self.objective_func is not None: +549 +550 score_next_param = self.objective_func( +551 next_param, *func_args +552 ) +553 +554 if (np.isfinite(score_next_param) == False) or ( +555 np.isnan(score_next_param) == True +556 ): +557 continue 558 -559 print(f"\n next param: {next_param} \n") -560 score_next_param = float( -561 input("get new score: \n") -562 ) # or an API response -563 -564 if (np.isfinite(score_next_param) == False) or ( -565 np.isnan(score_next_param) == True -566 ): -567 continue -568 -569 self.timings.append(np.log(time() - start)) -570 -571 else: # self.per_second is False: -572 -573 if self.objective_func is not None: +559 else: +560 +561 assert (self.x_init is not None) and ( +562 self.y_init is not None +563 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +564 +565 print(f"\n next param: {next_param} \n") +566 score_next_param = float( +567 input("get new score: \n") +568 ) # or an API response +569 +570 if (np.isfinite(score_next_param) == False) or ( +571 np.isnan(score_next_param) == True +572 ): +573 continue 574 -575 score_next_param = self.objective_func( -576 next_param, *func_args -577 ) +575 self.timings.append(np.log(time() - start)) +576 +577 else: # self.per_second is False: 578 -579 if (np.isfinite(score_next_param) == False) or ( -580 np.isnan(score_next_param) == True -581 ): -582 continue -583 -584 else: -585 -586 assert (self.x_init is not None) and ( -587 self.y_init is not None -588 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +579 if self.objective_func is not None: +580 +581 score_next_param = self.objective_func( +582 next_param, *func_args +583 ) +584 +585 if (np.isfinite(score_next_param) == False) or ( +586 np.isnan(score_next_param) == True +587 ): +588 continue 589 -590 print(f"\n next param: {next_param} \n") -591 score_next_param = float( -592 input("get new score: \n") -593 ) # or an API response -594 -595 if (np.isfinite(score_next_param) == False) or ( -596 np.isnan(score_next_param) == True -597 ): -598 continue -599 -600 except: -601 -602 continue -603 -604 self.parameters.append(next_param.tolist()) +590 else: +591 +592 assert (self.x_init is not None) and ( +593 self.y_init is not None +594 ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)" +595 +596 print(f"\n next param: {next_param} \n") +597 score_next_param = float( +598 input("get new score: \n") +599 ) # or an API response +600 +601 if (np.isfinite(score_next_param) == False) or ( +602 
np.isnan(score_next_param) == True +603 ): +604 continue 605 -606 self.scores.append(score_next_param) +606 except: 607 -608 if self.save is not None: -609 self.update_shelve() -610 -611 if verbose == 2: -612 print(f"iteration {i + 1} -----") -613 print(f"current minimum: {self.x_min}") -614 print(f"current minimum score: {self.y_min}") -615 print(f"next parameter: {next_param}") -616 print(f"score for next parameter: {score_next_param} \n") -617 -618 if score_next_param < self.y_min: -619 self.x_min = next_param -620 self.y_min = score_next_param -621 if self.save is not None: -622 self.update_shelve() +608 continue +609 +610 self.parameters.append(next_param.tolist()) +611 +612 self.scores.append(score_next_param) +613 +614 if self.save is not None: +615 self.update_shelve() +616 +617 if verbose == 2: +618 print(f"iteration {i + 1} -----") +619 print(f"current minimum: {self.x_min}") +620 print(f"current minimum score: {self.y_min}") +621 print(f"next parameter: {next_param}") +622 print(f"score for next parameter: {score_next_param} \n") 623 -624 self.y_mean, self.y_std = self.gp_fit_predict( -625 np.asarray(self.parameters), -626 np.asarray(self.scores), -627 self.x_choices, -628 ) +624 if score_next_param < self.y_min: +625 self.x_min = next_param +626 self.y_min = score_next_param +627 if self.save is not None: +628 self.update_shelve() 629 -630 if self.save is not None: -631 self.update_shelve() -632 -633 if verbose == 1: -634 progbar.update(i + 1) # update progress bar +630 self.y_mean, self.y_std = self.gp_fit_predict( +631 np.asarray(self.parameters), +632 np.asarray(self.scores), +633 self.x_choices, +634 ) 635 -636 # early stopping -637 -638 if abs_tol is not None: -639 -640 # if self.max_ei.size > (self.n_init + self.n_iter * min_budget_pct): -641 if len(self.max_ei) > min_budget: -642 -643 diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei))) -644 -645 if diff_max_ei[-1] <= abs_tol: -646 -647 iter_stop = len(self.max_ei) # index i starts at 0 +636 if self.save is not None: +637 self.update_shelve() +638 +639 if verbose == 1: +640 progbar.update(i + 1) # update progress bar +641 +642 # early stopping +643 +644 if abs_tol is not None: +645 +646 # if self.max_ei.size > (self.n_init + self.n_iter * min_budget_pct): +647 if len(self.max_ei) > min_budget: 648 -649 break +649 diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei))) 650 -651 # end main loop ---------- +651 if diff_max_ei[-1] <= abs_tol: 652 -653 if (verbose == 1) & (i < (n_iter - 1)): -654 progbar.update(n_iter) -655 -656 self.n_iter = iter_stop -657 if self.save is not None: -658 self.update_shelve() -659 -660 return (self.x_min, self.y_min) +653 iter_stop = len(self.max_ei) # index i starts at 0 +654 +655 break +656 +657 # end main loop ---------- +658 +659 if (verbose == 1) & (i < (n_iter - 1)): +660 progbar.update(n_iter) +661 +662 self.n_iter = iter_stop +663 if self.save is not None: +664 self.update_shelve() +665 +666 DescribeResult = namedtuple( +667 "DescribeResult", ("best_params", "best_score") +668 ) +669 +670 if self.params_names is None: +671 +672 return DescribeResult(self.x_min, self.y_min) +673 +674 else: +675 +676 return DescribeResult( +677 dict(zip(self.params_names, self.x_min)), self.y_min +678 )API Documentation
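Putting the pieces together, here is an end-to-end sketch of `optimize` with early stopping and the named result introduced by `params_names`. It keeps the `import GPopt as gp` assumption and uses Branin as a stand-in benchmark (global minimum about 0.3979); recall the source requires `min_budget > 20` whenever `abs_tol` is set:

```python
import numpy as np
import GPopt as gp  # assumed import style

def branin(x):
    # Branin function on [-5, 10] x [0, 15], a common 2-D benchmark
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return (a * (x[1] - b * x[0]**2 + c * x[0] - r)**2
            + s * (1 - t) * np.cos(x[0]) + s)

opt = gp.GPOpt(
    lower_bound=np.array([-5.0, 0.0]),
    upper_bound=np.array([10.0, 15.0]),
    objective_func=branin,
    params_names=["x1", "x2"],
    n_init=10, n_iter=90, seed=123,
)

# abs_tol stops early once the best expected improvement changes by
# no more than 1e-4, after at least min_budget iterations
res = opt.optimize(verbose=0, abs_tol=1e-4, min_budget=50)
print(res.best_params)  # a dict {'x1': ..., 'x2': ...} since params_names was given
print(res.best_score)
```

Without `params_names`, `res.best_params` is the raw numpy vector, matching the plain tuple returned by earlier versions.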
objective_func ++ params_names + lower_bound @@ -161,656 +164,674 @@7import numpy as np 8import pickle 9import shelve - 10from sklearn.gaussian_process import GaussianProcessRegressor - 11from sklearn.ensemble import RandomForestRegressor - 12from sklearn.gaussian_process.kernels import Matern - 13import scipy.stats as st - 14from joblib import Parallel, delayed - 15from time import time - 16from ..utils import generate_sobol2 - 17from ..utils import Progbar - 18 + 10from collections import namedtuple + 11from sklearn.gaussian_process import GaussianProcessRegressor + 12from sklearn.ensemble import RandomForestRegressor + 13from sklearn.gaussian_process.kernels import Matern + 14import scipy.stats as st + 15from joblib import Parallel, delayed + 16from time import time + 17from ..utils import generate_sobol2 + 18from ..utils import Progbar 19 - 20class GPOpt: - 21 """Class GPOpt. - 22 - 23 # Arguments: - 24 - 25 lower_bound: a numpy array; - 26 lower bound for researched minimum - 27 - 28 upper_bound: a numpy array; - 29 upper bound for researched minimum - 30 - 31 objective_func: a function; - 32 the objective function to be minimized - 33 - 34 gp_obj: a GaussianProcessRegressor object; - 35 An ML model for estimating the uncertainty around the objective function - 36 - 37 x_init: - 38 initial setting of points where `objective_func` is evaluated (optional) - 39 - 40 y_init: - 41 initial setting values at points where `objective_func` is evaluated (optional) - 42 - 43 n_init: an integer; - 44 number of points in the initial setting, when `x_init` and `y_init` are not provided - 45 - 46 n_choices: an integer; - 47 number of points for the calculation of expected improvement - 48 - 49 n_iter: an integer; - 50 number of iterations of the minimization algorithm - 51 - 52 alpha: a float; - 53 Value added to the diagonal of the kernel matrix during fitting (for Matern 5/2 kernel) - 54 - 55 n_restarts_optimizer: an integer; - 56 The number of restarts of the optimizer for finding the kernel’s parameters which maximize the log-marginal likelihood. 
- 57 - 58 seed: an integer; - 59 reproducibility seed - 60 - 61 save: a string; - 62 Specifies where to save the optimizer in its current state - 63 - 64 n_jobs: an integer; - 65 number of jobs for parallel computing on initial setting (can be -1) - 66 - 67 per_second: a boolean; - 68 __experimental__, default is False (leave to default for now) - 69 - 70 log_scale: a boolean; - 71 __experimental__, default is False (leave to default for now) - 72 - 73 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) - 74 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) - 75 - 76 """ - 77 - 78 def __init__( - 79 self, - 80 lower_bound, - 81 upper_bound, - 82 objective_func=None, - 83 gp_obj=None, - 84 x_init=None, - 85 y_init=None, - 86 n_init=10, - 87 n_choices=25000, - 88 n_iter=190, - 89 alpha=1e-6, - 90 n_restarts_optimizer=25, - 91 seed=123, - 92 save=None, - 93 n_jobs=1, - 94 per_second=False, # /!\ very experimental - 95 log_scale=False, # /!\ experimental - 96 ): - 97 - 98 n_dims = len(lower_bound) - 99 -100 assert n_dims == len( -101 upper_bound -102 ), "'upper_bound' and 'lower_bound' must have the same dimensions" -103 -104 self.objective_func = objective_func -105 self.lower_bound = lower_bound -106 self.upper_bound = upper_bound -107 self.y_init = y_init -108 self.log_scale = log_scale -109 self.n_dims = n_dims -110 self.n_init = n_init -111 self.n_choices = n_choices -112 self.n_iter = n_iter -113 self.alpha = alpha -114 self.n_restarts_optimizer = n_restarts_optimizer -115 self.seed = seed -116 self.save = save -117 self.per_second = per_second -118 self.x_min = None -119 self.y_min = None -120 self.y_mean = None -121 self.y_std = None -122 self.ei = np.array([]) -123 self.max_ei = [] -124 if gp_obj is None: -125 self.gp_obj = GaussianProcessRegressor( -126 kernel=Matern(nu=2.5), -127 alpha=self.alpha, -128 normalize_y=True, -129 n_restarts_optimizer=self.n_restarts_optimizer, -130 random_state=self.seed, -131 ) -132 else: -133 self.gp_obj = gp_obj -134 -135 # Sobol seqs for initial design and choices -136 sobol_seq_init = np.transpose( -137 generate_sobol2( -138 n_dims=self.n_dims, -139 n_points=self.n_init, -140 skip=2, -141 ) -142 ) -143 sobol_seq_choices = np.transpose( -144 generate_sobol2( -145 n_dims=self.n_dims, -146 n_points=self.n_choices, -147 skip=self.n_init + 2, -148 ) -149 ) -150 -151 # Sobol seqs for initial design and choices with bounds -152 if self.log_scale == False: -153 -154 bounds_range = upper_bound - lower_bound -155 self.x_init = ( -156 bounds_range * sobol_seq_init + lower_bound -157 if x_init is None -158 else x_init -159 ) -160 self.x_choices = bounds_range * sobol_seq_choices + lower_bound -161 -162 else: # (!) 
experimental -163 -164 assert ( -165 lower_bound > 0 -166 ).all(), "all elements of `lower_bound` must be > 0" -167 assert ( -168 upper_bound > 0 -169 ).all(), "all elements of `upper_bound` must be > 0" -170 -171 log_lower_bound = np.log(lower_bound) -172 log_upper_bound = np.log(upper_bound) -173 log_bounds_range = log_upper_bound - log_lower_bound -174 self.x_init = ( -175 np.minimum( -176 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), -177 1.7976931348623157e308, -178 ) -179 if x_init is None -180 else x_init -181 ) -182 self.x_choices = np.minimum( -183 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), -184 1.7976931348623157e308, -185 ) -186 -187 # shelve for saving (not for loading) -188 if self.save is not None: -189 self.sh = shelve.open(filename=save, flag="c", writeback=True) -190 -191 if self.per_second: -192 self.timings = [] -193 self.rf_obj = RandomForestRegressor( -194 n_estimators=250, random_state=self.seed -195 ) + 20 + 21class GPOpt: + 22 """Class GPOpt. + 23 + 24 # Arguments: + 25 + 26 lower_bound: a numpy array; + 27 lower bound for researched minimum + 28 + 29 upper_bound: a numpy array; + 30 upper bound for researched minimum + 31 + 32 objective_func: a function; + 33 the objective function to be minimized + 34 + 35 params_names: a list; + 36 names of the parameters of the objective function (optional) + 37 + 38 gp_obj: a GaussianProcessRegressor object; + 39 An ML model for estimating the uncertainty around the objective function + 40 + 41 x_init: + 42 initial setting of points where `objective_func` is evaluated (optional) + 43 + 44 y_init: + 45 initial setting values at points where `objective_func` is evaluated (optional) + 46 + 47 n_init: an integer; + 48 number of points in the initial setting, when `x_init` and `y_init` are not provided + 49 + 50 n_choices: an integer; + 51 number of points for the calculation of expected improvement + 52 + 53 n_iter: an integer; + 54 number of iterations of the minimization algorithm + 55 + 56 alpha: a float; + 57 Value added to the diagonal of the kernel matrix during fitting (for Matern 5/2 kernel) + 58 + 59 n_restarts_optimizer: an integer; + 60 The number of restarts of the optimizer for finding the kernel’s parameters which maximize the log-marginal likelihood. 
+ 61 + 62 seed: an integer; + 63 reproducibility seed + 64 + 65 save: a string; + 66 Specifies where to save the optimizer in its current state + 67 + 68 n_jobs: an integer; + 69 number of jobs for parallel computing on initial setting (can be -1) + 70 + 71 per_second: a boolean; + 72 __experimental__, default is False (leave to default for now) + 73 + 74 log_scale: a boolean; + 75 __experimental__, default is False (leave to default for now) + 76 + 77 see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt) + 78 and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt) + 79 + 80 """ + 81 + 82 def __init__( + 83 self, + 84 lower_bound, + 85 upper_bound, + 86 objective_func=None, + 87 params_names=None, + 88 gp_obj=None, + 89 x_init=None, + 90 y_init=None, + 91 n_init=10, + 92 n_choices=25000, + 93 n_iter=190, + 94 alpha=1e-6, + 95 n_restarts_optimizer=25, + 96 seed=123, + 97 save=None, + 98 n_jobs=1, + 99 per_second=False, # /!\ very experimental +100 log_scale=False, # /!\ experimental +101 ): +102 +103 n_dims = len(lower_bound) +104 +105 assert n_dims == len( +106 upper_bound +107 ), "'upper_bound' and 'lower_bound' must have the same dimensions" +108 +109 self.objective_func = objective_func +110 self.params_names = params_names +111 self.lower_bound = lower_bound +112 self.upper_bound = upper_bound +113 self.y_init = y_init +114 self.log_scale = log_scale +115 self.n_dims = n_dims +116 self.n_init = n_init +117 self.n_choices = n_choices +118 self.n_iter = n_iter +119 self.alpha = alpha +120 self.n_restarts_optimizer = n_restarts_optimizer +121 self.seed = seed +122 self.save = save +123 self.per_second = per_second +124 self.x_min = None +125 self.y_min = None +126 self.y_mean = None +127 self.y_std = None +128 self.ei = np.array([]) +129 self.max_ei = [] +130 if gp_obj is None: +131 self.gp_obj = GaussianProcessRegressor( +132 kernel=Matern(nu=2.5), +133 alpha=self.alpha, +134 normalize_y=True, +135 n_restarts_optimizer=self.n_restarts_optimizer, +136 random_state=self.seed, +137 ) +138 else: +139 self.gp_obj = gp_obj +140 +141 # Sobol seqs for initial design and choices +142 sobol_seq_init = np.transpose( +143 generate_sobol2( +144 n_dims=self.n_dims, +145 n_points=self.n_init, +146 skip=2, +147 ) +148 ) +149 sobol_seq_choices = np.transpose( +150 generate_sobol2( +151 n_dims=self.n_dims, +152 n_points=self.n_choices, +153 skip=self.n_init + 2, +154 ) +155 ) +156 +157 # Sobol seqs for initial design and choices with bounds +158 if self.log_scale == False: +159 +160 bounds_range = upper_bound - lower_bound +161 self.x_init = ( +162 bounds_range * sobol_seq_init + lower_bound +163 if x_init is None +164 else x_init +165 ) +166 self.x_choices = bounds_range * sobol_seq_choices + lower_bound +167 +168 else: # (!) 
experimental +169 +170 assert ( +171 lower_bound > 0 +172 ).all(), "all elements of `lower_bound` must be > 0" +173 assert ( +174 upper_bound > 0 +175 ).all(), "all elements of `upper_bound` must be > 0" +176 +177 log_lower_bound = np.log(lower_bound) +178 log_upper_bound = np.log(upper_bound) +179 log_bounds_range = log_upper_bound - log_lower_bound +180 self.x_init = ( +181 np.minimum( +182 np.exp(log_bounds_range * sobol_seq_init + log_lower_bound), +183 1.7976931348623157e308, +184 ) +185 if x_init is None +186 else x_init +187 ) +188 self.x_choices = np.minimum( +189 np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound), +190 1.7976931348623157e308, +191 ) +192 +193 # shelve for saving (not for loading) +194 if self.save is not None: +195 self.sh = shelve.open(filename=save, flag="c", writeback=True) 196 -197 self.n_jobs = n_jobs -198 -199 # from sklearn.base -200 def get_params(self): -201 """Get object attributes. +197 if self.per_second: +198 self.timings = [] +199 self.rf_obj = RandomForestRegressor( +200 n_estimators=250, random_state=self.seed +201 ) 202 -203 Returns -204 ------- -205 params : mapping of string to any -206 Parameter names mapped to their values. -207 """ -208 out = dict() -209 param_names = dir(self) -210 for key in param_names: -211 if key.startswith("_") is False: -212 out[key] = getattr(self, key, None) -213 -214 return out -215 -216 # for parallel case on initial design -217 def eval_objective(self, arg): -218 try: -219 return self.objective_func(self.x_init[arg, :]) -220 except: -221 return 1e06 -222 -223 # load data from stored shelve -224 def load(self, path): -225 """load data from stored shelve. -226 -227 # Arguments +203 self.n_jobs = n_jobs +204 +205 # from sklearn.base +206 def get_params(self): +207 """Get object attributes. +208 +209 Returns +210 ------- +211 params : mapping of string to any +212 Parameter names mapped to their values. +213 """ +214 out = dict() +215 param_names = dir(self) +216 for key in param_names: +217 if key.startswith("_") is False: +218 out[key] = getattr(self, key, None) +219 +220 return out +221 +222 # for parallel case on initial design +223 def eval_objective(self, arg): +224 try: +225 return self.objective_func(self.x_init[arg, :]) +226 except: +227 return 1e06 228 -229 path : a string; path to stored shelve. -230 -231 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -232 """ -233 -234 self.sh = shelve.open(filename=path) -235 for key, value in self.sh.items(): -236 setattr(self, key, value) -237 -238 # update shelve in optimization loop -239 def update_shelve(self): -240 for key, value in self.get_params().items(): -241 if (callable(value) is False) & (key != "sh"): -242 self.sh[key] = value -243 self.sh.sync() -244 -245 # closing shelve (can't be modified after) -246 def close_shelve(self): -247 """Close shelve. -248 -249 # Arguments +229 # load data from stored shelve +230 def load(self, path): +231 """load data from stored shelve. +232 +233 # Arguments +234 +235 path : a string; path to stored shelve. 
+236 +237 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +238 """ +239 +240 self.sh = shelve.open(filename=path) +241 for key, value in self.sh.items(): +242 setattr(self, key, value) +243 +244 # update shelve in optimization loop +245 def update_shelve(self): +246 for key, value in self.get_params().items(): +247 if (callable(value) is False) & (key != "sh"): +248 self.sh[key] = value +249 self.sh.sync() 250 -251 No argument. -252 -253 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) -254 """ -255 -256 self.sh.close() -257 -258 # fit predict -259 def gp_fit_predict(self, X_train, y_train, X_test): -260 -261 if len(X_train.shape) == 1: -262 X_train = X_train.reshape((-1, 1)) -263 X_test = X_test.reshape((-1, 1)) -264 -265 # Get mean and standard deviation -266 return self.gp_obj.fit(X_train, y_train).predict( -267 X_test, return_std=True -268 ) -269 -270 # fit predict timings -271 def timings_fit_predict(self, X_train, y_train, X_test): -272 -273 if len(X_train.shape) == 1: -274 X_train = X_train.reshape((-1, 1)) -275 X_test = X_test.reshape((-1, 1)) -276 -277 # Get mean preds for timings -278 return self.rf_obj.fit(X_train, y_train).predict(X_test) -279 -280 # find next parameter by using expected improvement (ei) -281 def next_parameter_by_ei(self, seed, i): +251 # closing shelve (can't be modified after) +252 def close_shelve(self): +253 """Close shelve. +254 +255 # Arguments +256 +257 No argument. +258 +259 See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt) +260 """ +261 +262 self.sh.close() +263 +264 # fit predict +265 def gp_fit_predict(self, X_train, y_train, X_test): +266 +267 if len(X_train.shape) == 1: +268 X_train = X_train.reshape((-1, 1)) +269 X_test = X_test.reshape((-1, 1)) +270 +271 # Get mean and standard deviation +272 return self.gp_obj.fit(X_train, y_train).predict( +273 X_test, return_std=True +274 ) +275 +276 # fit predict timings +277 def timings_fit_predict(self, X_train, y_train, X_test): +278 +279 if len(X_train.shape) == 1: +280 X_train = X_train.reshape((-1, 1)) +281 X_test = X_test.reshape((-1, 1)) 282 -283 gamma_hat = (self.y_min - self.y_mean) / self.y_std -284 -285 self.ei = -self.y_std * ( -286 gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat) -287 ) +283 # Get mean preds for timings +284 return self.rf_obj.fit(X_train, y_train).predict(X_test) +285 +286 # find next parameter by using expected improvement (ei) +287 def next_parameter_by_ei(self, seed, i): 288 -289 # find max index ----- +289 gamma_hat = (self.y_min - self.y_mean) / self.y_std 290 -291 if self.per_second is False: -292 -293 # find index for max. ei -294 max_index = self.ei.argmin() -295 -296 else: # self.per_second is True -297 -298 # predict timings on self.x_choices -299 # train on X = self.parameters and y = self.timings -300 # (must have same shape[0]) -301 timing_preds = self.timings_fit_predict( -302 X_train=np.asarray(self.parameters), -303 y_train=np.asarray(self.timings), -304 X_test=self.x_choices, -305 ) -306 -307 # find index for max. ei (and min. 
timings)
            max_index = (-self.ei / timing_preds).argmax()

        self.max_ei.append(np.abs(self.ei[max_index]))

        # Select next choice
        next_param = self.x_choices[max_index, :]

        if next_param in np.asarray(self.parameters):

            if self.log_scale == False:

                np.random.seed(self.seed * i + 1000)
                next_param = (
                    self.upper_bound - self.lower_bound
                ) * np.random.rand(self.n_dims) + self.lower_bound

            else:  # /!\ very... experimental

                np.random.seed(self.seed)
                log_upper_bound = np.log(self.upper_bound)
                log_lower_bound = np.log(self.lower_bound)
                log_bounds_range = log_upper_bound - log_lower_bound

                next_param = np.minimum(
                    np.exp(
                        log_bounds_range * np.random.rand(self.n_dims)
                        + log_lower_bound
                    ),
                    1.7976931348623157e308,
                )

        return next_param

    # optimize the objective
    def optimize(
        self,
        verbose=1,
        n_more_iter=None,
        abs_tol=None,  # suggested 1e-4, for n_iter = 200
        min_budget=50,  # minimum budget for early stopping
        func_args=None,
    ):
        """Launch optimization loop.

        # Arguments:

            verbose: an integer;
                verbose = 0: nothing is printed,
                verbose = 1: a progress bar is printed (longer than 0),
                verbose = 2: information about each iteration is printed (longer than 1)

            n_more_iter: an integer;
                additional number of iterations for the optimizer (which has been run once)

            abs_tol: a float;
                tolerance for convergence of the optimizer (early stopping based on expected improvement)

            min_budget: an integer (default is 50);
                minimum number of iterations before early stopping controlled by `abs_tol`

            func_args: a list;
                additional parameters for the objective function (if necessary)

        see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt)
        and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt)

        """

        # verbose = 0: nothing is printed
        # verbose = 1: a progress bar is printed (longer than 0)
        # verbose = 2: information about each iteration is printed (longer than 1)
        if func_args is None:
            func_args = []

        if (
            n_more_iter is None
        ):  # initial optimization, before more iters are requested

            n_iter = self.n_iter
            # stopping iter for early stopping (default is total budget)
            iter_stop = n_iter  # potentially # got to check this

            # initial design ----------

            if (verbose == 1) | (verbose == 2):
                print(f"\n Creating initial design... \n")

            if verbose == 1:
                progbar = Progbar(target=self.n_init)

            self.parameters = self.x_init.tolist()
            self.scores = []

            if self.save is not None:
                self.update_shelve()

            if self.y_init is None:  # calculate scores on initial design

                assert (
                    self.objective_func is not None
                ), "self.y_init is None: must have 'objective_func' not None"

                if self.n_jobs == 1:

                    for i in range(self.n_init):

                        x_next = self.x_init[i, :]

                        try:

                            if self.per_second is True:

                                start = time()
                                score = self.objective_func(x_next, *func_args)
                                if (np.isfinite(score) == False) or (
                                    np.isnan(score) == True
                                ):
                                    continue
                                self.timings.append(np.log(time() - start))

                            else:  # self.per_second is False

                                score = self.objective_func(x_next, *func_args)
                                if (np.isfinite(score) == False) or (
                                    np.isnan(score) == True
                                ):
                                    continue

                            self.scores.append(score)

                            if self.save is not None:
                                self.update_shelve()

                        except:

                            continue

                        if verbose == 1:
                            progbar.update(i)  # update progress bar

                        if verbose == 2:
                            print(f"point: {x_next}; score: {score}")
                    # end loop # calculate scores on initial design

                    if verbose == 1:
                        progbar.update(self.n_init)

                else:  # self.n_jobs != 1

                    assert (
                        self.per_second is False
                    ), "timings not calculated here"

                    scores = Parallel(n_jobs=self.n_jobs, prefer="threads")(
                        delayed(self.objective_func)(self.x_init[i, :])
                        for i in range(self.n_init)
                    )

                    self.scores = scores

                    if self.save is not None:
                        self.update_shelve()

            else:  # if self.y_init is None:

                assert self.x_init.shape[0] == len(
                    self.y_init
                ), "must have: self.x_init.shape[0] == len(self.y_init)"

                self.scores = pickle.loads(
                    pickle.dumps(self.y_init.tolist(), -1)
                )

            # current best score on initial design
            min_index = (np.asarray(self.scores)).argmin()
            self.y_min = self.scores[min_index]
            self.x_min = self.x_init[min_index, :]

            # current gp mean and std on initial design
            y_mean, y_std = self.gp_fit_predict(
                np.asarray(self.parameters),
                np.asarray(self.scores),
                self.x_choices,
            )
            self.y_mean = y_mean
            self.y_std = np.maximum(2.220446049250313e-16, y_std)

            # saving after initial design computation
            if self.save is not None:
                self.update_shelve()

        else:  # if n_more_iter is not None

            assert self.n_iter > 5, "you must have n_iter > 5"
            n_iter = n_more_iter
            iter_stop = len(self.max_ei) + n_more_iter  # potentially

        if (verbose == 1) | (verbose == 2):
            print(f"\n ...Done. \n")
            try:
                print(np.hstack((self.x_init, self.y_init.reshape(-1, 1))))
            except:
                pass

        # end init design ----------

        # if n_more_iter is None: # initial optimization, before more iters are requested

        if (verbose == 1) | (verbose == 2):
            print(f"\n Optimization loop... \n")

        # early stopping?
        if abs_tol is not None:
            assert (
                min_budget > 20
            ), "With 'abs_tol' provided, you must have 'min_budget' > 20"

        if verbose == 1:
            progbar = Progbar(target=n_iter)

        # main loop ----------

        for i in range(n_iter):

            # find next set of parameters (vector), maximizing ei
            next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i)

            try:

                if self.per_second is True:

                    start = time()

                    if self.objective_func is not None:

                        score_next_param = self.objective_func(
                            next_param, *func_args
                        )

                        if (np.isfinite(score_next_param) == False) or (
                            np.isnan(score_next_param) == True
                        ):
                            continue

                    else:

                        assert (self.x_init is not None) and (
                            self.y_init is not None
                        ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)"

                        print(f"\n next param: {next_param} \n")
                        score_next_param = float(
                            input("get new score: \n")
                        )  # or an API response

                        if (np.isfinite(score_next_param) == False) or (
                            np.isnan(score_next_param) == True
                        ):
                            continue

                    self.timings.append(np.log(time() - start))

                else:  # self.per_second is False:

                    if self.objective_func is not None:

                        score_next_param = self.objective_func(
                            next_param, *func_args
                        )

                        if (np.isfinite(score_next_param) == False) or (
                            np.isnan(score_next_param) == True
                        ):
                            continue

                    else:

                        assert (self.x_init is not None) and (
                            self.y_init is not None
                        ), "self.objective_func is not None: must have (self.x_init is not None) and (self.y_init is not None)"

                        print(f"\n next param: {next_param} \n")
                        score_next_param = float(
                            input("get new score: \n")
                        )  # or an API response

                        if (np.isfinite(score_next_param) == False) or (
                            np.isnan(score_next_param) == True
                        ):
                            continue

            except:

                continue

            self.parameters.append(next_param.tolist())

            self.scores.append(score_next_param)

            if self.save is not None:
                self.update_shelve()

            if verbose == 2:
                print(f"iteration {i + 1} -----")
                print(f"current minimum: {self.x_min}")
                print(f"current minimum score: {self.y_min}")
                print(f"next parameter: {next_param}")
                print(f"score for next parameter: {score_next_param} \n")

            if score_next_param < self.y_min:
                self.x_min = next_param
                self.y_min = score_next_param
                if self.save is not None:
                    self.update_shelve()

            self.y_mean, self.y_std = self.gp_fit_predict(
                np.asarray(self.parameters),
                np.asarray(self.scores),
                self.x_choices,
            )

            if self.save is not None:
                self.update_shelve()

            if verbose == 1:
                progbar.update(i + 1)  # update progress bar

            # early stopping

            if abs_tol is not None:

                # if self.max_ei.size > (self.n_init + self.n_iter * min_budget_pct):
                if len(self.max_ei) > min_budget:

                    diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei)))

                    if diff_max_ei[-1] <= abs_tol:

                        iter_stop = len(self.max_ei)  # index i starts at 0

                        break

        # end main loop ----------

        if (verbose == 1) & (i < (n_iter - 1)):
            progbar.update(n_iter)

        self.n_iter = iter_stop
        if self.save is not None:
            self.update_shelve()

        DescribeResult = namedtuple(
            "DescribeResult", ("best_params", "best_score")
        )

        if self.params_names is None:

            return DescribeResult(self.x_min, self.y_min)

        else:

            return DescribeResult(
                dict(zip(self.params_names, self.x_min)), self.y_min
            )
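To make the return type concrete, here is a minimal usage sketch, not taken from the package's own docs: the toy sphere objective is illustrative, and the top-level import is an assumption (the fully qualified path is GPopt.GPOpt.GPOpt.GPOpt). `optimize` returns a `DescribeResult` namedtuple with fields `best_params` and `best_score`, and a finished run can be continued with `n_more_iter`.

# --- illustrative sketch, assuming a top-level GPOpt export ---
import numpy as np
import GPopt as gp

def sphere(x):
    # toy objective to be minimized
    return float(np.sum(x ** 2))

opt = gp.GPOpt(
    lower_bound=np.array([-5.0, -5.0]),
    upper_bound=np.array([5.0, 5.0]),
    objective_func=sphere,
    n_init=10,
    n_iter=50,
    seed=123,
)

res = opt.optimize(verbose=0)
print(res.best_params, res.best_score)  # DescribeResult namedtuple

# request 20 additional iterations on the already-run optimizer
res = opt.optimize(verbose=0, n_more_iter=20)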
class GPOpt:
    """Class GPOpt.

    # Arguments:

        lower_bound: a numpy array;
            lower bound for researched minimum

        upper_bound: a numpy array;
            upper bound for researched minimum

        objective_func: a function;
            the objective function to be minimized

        params_names: a list;
            names of the parameters of the objective function (optional)

        gp_obj: a GaussianProcessRegressor object;
            An ML model for estimating the uncertainty around the objective function

        x_init:
            initial setting of points where `objective_func` is evaluated (optional)

        y_init:
            initial setting values at points where `objective_func` is evaluated (optional)

        n_init: an integer;
            number of points in the initial setting, when `x_init` and `y_init` are not provided

        n_choices: an integer;
            number of points for the calculation of expected improvement

        n_iter: an integer;
            number of iterations of the minimization algorithm

        alpha: a float;
            Value added to the diagonal of the kernel matrix during fitting (for Matern 5/2 kernel)

        n_restarts_optimizer: an integer;
            The number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood.

        seed: an integer;
            reproducibility seed

        save: a string;
            Specifies where to save the optimizer in its current state

        n_jobs: an integer;
            number of jobs for parallel computing on initial setting (can be -1)

        per_second: a boolean;
            __experimental__, default is False (leave to default for now)

        log_scale: a boolean;
            __experimental__, default is False (leave to default for now)

    see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt)
    and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt)

    """

    def __init__(
        self,
        lower_bound,
        upper_bound,
        objective_func=None,
        params_names=None,
        gp_obj=None,
        x_init=None,
        y_init=None,
        n_init=10,
        n_choices=25000,
        n_iter=190,
        alpha=1e-6,
        n_restarts_optimizer=25,
        seed=123,
        save=None,
        n_jobs=1,
        per_second=False,  # /!\ very experimental
        log_scale=False,  # /!\ experimental
    ):

        n_dims = len(lower_bound)

        assert n_dims == len(
            upper_bound
        ), "'upper_bound' and 'lower_bound' must have the same dimensions"

        self.objective_func = objective_func
        self.params_names = params_names
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound
        self.y_init = y_init
        self.log_scale = log_scale
        self.n_dims = n_dims
        self.n_init = n_init
        self.n_choices = n_choices
        self.n_iter = n_iter
        self.alpha = alpha
        self.n_restarts_optimizer = n_restarts_optimizer
        self.seed = seed
        self.save = save
        self.per_second = per_second
        self.x_min = None
        self.y_min = None
        self.y_mean = None
        self.y_std = None
        self.ei = np.array([])
        self.max_ei = []
        if gp_obj is None:
            self.gp_obj = GaussianProcessRegressor(
                kernel=Matern(nu=2.5),
                alpha=self.alpha,
                normalize_y=True,
                n_restarts_optimizer=self.n_restarts_optimizer,
                random_state=self.seed,
            )
        else:
            self.gp_obj = gp_obj

        # Sobol seqs for initial design and choices
        sobol_seq_init = np.transpose(
            generate_sobol2(
                n_dims=self.n_dims,
                n_points=self.n_init,
                skip=2,
            )
        )
        sobol_seq_choices = np.transpose(
            generate_sobol2(
                n_dims=self.n_dims,
                n_points=self.n_choices,
                skip=self.n_init + 2,
            )
        )

        # Sobol seqs for initial design and choices with bounds
        if self.log_scale == False:

            bounds_range = upper_bound - lower_bound
            self.x_init = (
                bounds_range * sobol_seq_init + lower_bound
                if x_init is None
                else x_init
            )
            self.x_choices = bounds_range * sobol_seq_choices + lower_bound

        else:  # (!) experimental

            assert (
                lower_bound > 0
            ).all(), "all elements of `lower_bound` must be > 0"
            assert (
                upper_bound > 0
            ).all(), "all elements of `upper_bound` must be > 0"

            log_lower_bound = np.log(lower_bound)
            log_upper_bound = np.log(upper_bound)
            log_bounds_range = log_upper_bound - log_lower_bound
            self.x_init = (
                np.minimum(
                    np.exp(log_bounds_range * sobol_seq_init + log_lower_bound),
                    1.7976931348623157e308,
                )
                if x_init is None
                else x_init
            )
            self.x_choices = np.minimum(
                np.exp(log_bounds_range * sobol_seq_choices + log_lower_bound),
                1.7976931348623157e308,
            )

        # shelve for saving (not for loading)
        if self.save is not None:
            self.sh = shelve.open(filename=save, flag="c", writeback=True)

        if self.per_second:
            self.timings = []
            self.rf_obj = RandomForestRegressor(
                n_estimators=250, random_state=self.seed
            )

        self.n_jobs = n_jobs

    # from sklearn.base
    def get_params(self):
        """Get object attributes.

        Returns
        -------
        params : mapping of string to any
            Parameter names mapped to their values.
        """
        out = dict()
        param_names = dir(self)
        for key in param_names:
            if key.startswith("_") is False:
                out[key] = getattr(self, key, None)

        return out

    # for parallel case on initial design
    def eval_objective(self, arg):
        try:
            return self.objective_func(self.x_init[arg, :])
        except:
            return 1e06

    # load data from stored shelve
    def load(self, path):
        """load data from stored shelve.

        # Arguments

        path : a string; path to stored shelve.

        See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
        """

        self.sh = shelve.open(filename=path)
        for key, value in self.sh.items():
            setattr(self, key, value)

    # update shelve in optimization loop
    def update_shelve(self):
        for key, value in self.get_params().items():
            if (callable(value) is False) & (key != "sh"):
                self.sh[key] = value
        self.sh.sync()

    # closing shelve (can't be modified after)
    def close_shelve(self):
        """Close shelve.

        # Arguments

        No argument.

        See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
        """

        self.sh.close()

    # fit predict
    def gp_fit_predict(self, X_train, y_train, X_test):

        if len(X_train.shape) == 1:
            X_train = X_train.reshape((-1, 1))
            X_test = X_test.reshape((-1, 1))

        # Get mean and standard deviation
        return self.gp_obj.fit(X_train, y_train).predict(
            X_test, return_std=True
        )

    # fit predict timings
    def timings_fit_predict(self, X_train, y_train, X_test):

        if len(X_train.shape) == 1:
            X_train = X_train.reshape((-1, 1))
            X_test = X_test.reshape((-1, 1))

        # Get mean preds for timings
        return self.rf_obj.fit(X_train, y_train).predict(X_test)
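The `save` argument, together with `load`, `update_shelve`, and `close_shelve`, supports a save-and-resume workflow (see the "save and resume" post linked in the docstrings). A minimal sketch under the same assumptions as before (top-level import, toy objective) and a writable `./gpopt_state` shelve path:

# --- illustrative save-and-resume sketch ---
import numpy as np
import GPopt as gp

def sphere(x):
    return float(np.sum(x ** 2))

# first session: state is persisted by update_shelve() during optimize()
opt = gp.GPOpt(
    lower_bound=np.array([-5.0, -5.0]),
    upper_bound=np.array([5.0, 5.0]),
    objective_func=sphere,
    n_iter=25,
    save="./gpopt_state",
)
opt.optimize(verbose=0)
opt.close_shelve()  # the shelve can no longer be modified afterwards

# later session: restore attributes from the stored shelve, then continue
opt2 = gp.GPOpt(
    lower_bound=np.array([-5.0, -5.0]),
    upper_bound=np.array([5.0, 5.0]),
    objective_func=sphere,  # callables are not stored in the shelve
)
opt2.load(path="./gpopt_state")
res = opt2.optimize(verbose=0, n_more_iter=10)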
Arguments:
objective_func: a function;
    the objective function to be minimized

params_names: a list;
    names of the parameters of the objective function (optional)

gp_obj: a GaussianProcessRegressor object;
    An ML model for estimating the uncertainty around the objective function
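When `params_names` is provided, the `best_params` field of the result is a dict keyed by those names rather than a bare array (see the `dict(zip(self.params_names, self.x_min))` return in `optimize`). A short sketch; the hyperparameter names and the quadratic stand-in objective are illustrative only:

# --- illustrative sketch of params_names ---
import numpy as np
import GPopt as gp

opt = gp.GPOpt(
    lower_bound=np.array([1e-3, 1.0]),
    upper_bound=np.array([1.0, 100.0]),
    objective_func=lambda x: float((x[0] - 0.1) ** 2 + (x[1] - 10.0) ** 2),
    params_names=["learning_rate", "n_estimators"],
    n_iter=30,
)
res = opt.optimize(verbose=0)
print(res.best_params)  # e.g. {'learning_rate': ..., 'n_estimators': ...}
print(res.best_score)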
GPOpt(lower_bound, upper_bound, objective_func=None, params_names=None, gp_obj=None, x_init=None, y_init=None, n_init=10, n_choices=25000, n_iter=190, alpha=1e-06, n_restarts_optimizer=25, seed=123, save=None, n_jobs=1, per_second=False, log_scale=False)

Arguments:
def get_params(self):
    """Get object attributes.

    Returns
    -------
    params : mapping of string to any
        Parameter names mapped to their values.
    """
    out = dict()
    param_names = dir(self)
    for key in param_names:
        if key.startswith("_") is False:
            out[key] = getattr(self, key, None)

    return out

Returns
def eval_objective(self, arg):
    try:
        return self.objective_func(self.x_init[arg, :])
    except:
        return 1e06

Returns
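`eval_objective` exists for the parallel evaluation of the initial design (the `n_jobs != 1` branch of `optimize`), which uses the standard joblib pattern. A self-contained sketch of that pattern with a toy objective, not the package's own code:

# --- illustrative joblib pattern for the initial design ---
import numpy as np
from joblib import Parallel, delayed

def sphere(x):
    return float(np.sum(x ** 2))

x_init = np.random.rand(10, 2)  # stand-in for the Sobol initial design

# evaluate the objective on each initial point, using threads
scores = Parallel(n_jobs=4, prefer="threads")(
    delayed(sphere)(x_init[i, :]) for i in range(x_init.shape[0])
)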
def load(self, path):
    """load data from stored shelve.

    # Arguments

    path : a string; path to stored shelve.

    See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
    """

    self.sh = shelve.open(filename=path)
    for key, value in self.sh.items():
        setattr(self, key, value)
def update_shelve(self):
    for key, value in self.get_params().items():
        if (callable(value) is False) & (key != "sh"):
            self.sh[key] = value
    self.sh.sync()
def close_shelve(self):
    """Close shelve.

    # Arguments

    No argument.

    See also: [Bayesian Optimization with GPopt Part 2 (save and resume)](https://thierrymoudiki.github.io/blog/2021/04/30/python/misc/gpopt)
    """

    self.sh.close()
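Together, `load`, `update_shelve`, and `close_shelve` support the save-and-resume workflow described in the linked blog post. A hedged sketch follows; the shelve path "./opt_state" and the toy objective are illustrative assumptions, and note that callables are not shelved (see `update_shelve`), so the objective must be passed again when resuming:

import numpy as np
import GPopt as gp  # assumes the top-level GPOpt export

bounds_lo, bounds_hi = np.array([-3.0, -3.0]), np.array([3.0, 3.0])
obj = lambda x: float(np.sum(x ** 2))  # toy objective

# first session: optimize while shelving the state, then close the shelve
opt = gp.GPOpt(objective_func=obj, lower_bound=bounds_lo,
               upper_bound=bounds_hi, save="./opt_state")
opt.optimize(verbose=0)
opt.close_shelve()

# later session: rebuild the object, reload the shelved state, continue
opt2 = gp.GPOpt(objective_func=obj, lower_bound=bounds_lo,
                upper_bound=bounds_hi)
opt2.load(path="./opt_state")
opt2.optimize(n_more_iter=10)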
def gp_fit_predict(self, X_train, y_train, X_test):

    if len(X_train.shape) == 1:
        X_train = X_train.reshape((-1, 1))
        X_test = X_test.reshape((-1, 1))

    # Get mean and standard deviation
    return self.gp_obj.fit(X_train, y_train).predict(
        X_test, return_std=True
    )
def timings_fit_predict(self, X_train, y_train, X_test):

    if len(X_train.shape) == 1:
        X_train = X_train.reshape((-1, 1))
        X_test = X_test.reshape((-1, 1))

    # Get mean preds for timings
    return self.rf_obj.fit(X_train, y_train).predict(X_test)
def next_parameter_by_ei(self, seed, i):

    gamma_hat = (self.y_min - self.y_mean) / self.y_std

    self.ei = -self.y_std * (
        gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat)
    )

    # find max index -----

    if self.per_second is False:

        # find index for max. ei
        max_index = self.ei.argmin()

    else:  # self.per_second is True

        # predict timings on self.x_choices
        # train on X = self.parameters and y = self.timings
        # (must have same shape[0])
        timing_preds = self.timings_fit_predict(
            X_train=np.asarray(self.parameters),
            y_train=np.asarray(self.timings),
            X_test=self.x_choices,
        )

        # find index for max. ei (and min. timings)
        max_index = (-self.ei / timing_preds).argmax()

    self.max_ei.append(np.abs(self.ei[max_index]))

    # Select next choice
    next_param = self.x_choices[max_index, :]

    if next_param in np.asarray(self.parameters):

        if self.log_scale == False:

            np.random.seed(self.seed * i + 1000)
            next_param = (
                self.upper_bound - self.lower_bound
            ) * np.random.rand(self.n_dims) + self.lower_bound

        else:  # /!\ very... experimental

            np.random.seed(self.seed)
            log_upper_bound = np.log(self.upper_bound)
            log_lower_bound = np.log(self.lower_bound)
            log_bounds_range = log_upper_bound - log_lower_bound

            next_param = np.minimum(
                np.exp(
                    log_bounds_range * np.random.rand(self.n_dims)
                    + log_lower_bound
                ),
                1.7976931348623157e308,
            )

    return next_param
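The acquisition here is the classical expected improvement for minimization: with gamma_hat = (y_min - mu) / sigma, EI = sigma * (gamma_hat * Phi(gamma_hat) + phi(gamma_hat)). The code stores -EI in `self.ei`, which is why `argmin` selects the candidate with the largest expected improvement. A standalone sketch of the same quantity (names are illustrative):

import numpy as np
from scipy import stats as st

def expected_improvement(y_min, y_mean, y_std):
    # EI for minimization, mirroring the formula in next_parameter_by_ei
    gamma_hat = (y_min - y_mean) / y_std
    return y_std * (gamma_hat * st.norm.cdf(gamma_hat) + st.norm.pdf(gamma_hat))

# given surrogate predictions mu, sigma over the candidate set x_choices:
# next_param = x_choices[expected_improvement(y_min, mu, sigma).argmax(), :]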
def optimize(
    self,
    verbose=1,
    n_more_iter=None,
    abs_tol=None,  # suggested 1e-4, for n_iter = 200
    min_budget=50,  # minimum budget for early stopping
    func_args=None,
):
    """Launch optimization loop.

    # Arguments:

        verbose: an integer;
            verbose = 0: nothing is printed,
            verbose = 1: a progress bar is printed (longer than 0),
            verbose = 2: information about each iteration is printed (longer than 1)

        n_more_iter: an integer;
            additional number of iterations for the optimizer (which has been run once)

        abs_tol: a float;
            tolerance for convergence of the optimizer (early stopping based on expected improvement)

        min_budget: an integer (default is 50);
            minimum number of iterations before early stopping controlled by `abs_tol`

        func_args: a list;
            additional parameters for the objective function (if necessary)

    see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt)
    and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt)
    """

    if func_args is None:
        func_args = []

    if n_more_iter is None:  # initial optimization, before more iters are requested

        n_iter = self.n_iter
        # stopping iter for early stopping (default is the total budget)
        iter_stop = n_iter

        # initial design ----------

        if (verbose == 1) | (verbose == 2):
            print("\n Creating initial design... \n")

        if verbose == 1:
            progbar = Progbar(target=self.n_init)

        self.parameters = self.x_init.tolist()
        self.scores = []

        if self.save is not None:
            self.update_shelve()

        if self.y_init is None:  # calculate scores on initial design

            assert (
                self.objective_func is not None
            ), "self.y_init is None: must have 'objective_func' not None"

            if self.n_jobs == 1:

                for i in range(self.n_init):

                    x_next = self.x_init[i, :]

                    try:
                        if self.per_second is True:
                            start = time()
                            score = self.objective_func(x_next, *func_args)
                            if (np.isfinite(score) == False) or (
                                np.isnan(score) == True
                            ):
                                continue
                            self.timings.append(np.log(time() - start))
                        else:  # self.per_second is False
                            score = self.objective_func(x_next, *func_args)
                            if (np.isfinite(score) == False) or (
                                np.isnan(score) == True
                            ):
                                continue

                        self.scores.append(score)

                        if self.save is not None:
                            self.update_shelve()
                    except:
                        continue

                    if verbose == 1:
                        progbar.update(i)  # update progress bar

                    if verbose == 2:
                        print(f"point: {x_next}; score: {score}")
                # end loop # calculate scores on initial design

                if verbose == 1:
                    progbar.update(self.n_init)

            else:  # self.n_jobs != 1

                assert (
                    self.per_second is False
                ), "timings not calculated here"

                scores = Parallel(n_jobs=self.n_jobs, prefer="threads")(
                    delayed(self.objective_func)(self.x_init[i, :])
                    for i in range(self.n_init)
                )

                self.scores = scores

                if self.save is not None:
                    self.update_shelve()

        else:  # self.y_init is provided

            assert self.x_init.shape[0] == len(
                self.y_init
            ), "must have: self.x_init.shape[0] == len(self.y_init)"

            self.scores = pickle.loads(
                pickle.dumps(self.y_init.tolist(), -1)
            )

        # current best score on initial design
        min_index = (np.asarray(self.scores)).argmin()
        self.y_min = self.scores[min_index]
        self.x_min = self.x_init[min_index, :]

        # current gp mean and std on initial design
        y_mean, y_std = self.gp_fit_predict(
            np.asarray(self.parameters),
            np.asarray(self.scores),
            self.x_choices,
        )
        self.y_mean = y_mean
        self.y_std = np.maximum(2.220446049250313e-16, y_std)

        # saving after initial design computation
        if self.save is not None:
            self.update_shelve()

    else:  # n_more_iter is not None

        assert self.n_iter > 5, "you must have n_iter > 5"
        n_iter = n_more_iter
        iter_stop = len(self.max_ei) + n_more_iter

    if (verbose == 1) | (verbose == 2):
        print("\n ...Done. \n")
        try:
            print(np.hstack((self.x_init, self.y_init.reshape(-1, 1))))
        except:
            pass

    # end init design ----------

    if (verbose == 1) | (verbose == 2):
        print("\n Optimization loop... \n")

    # early stopping?
    if abs_tol is not None:
        assert (
            min_budget > 20
        ), "With 'abs_tol' provided, you must have 'min_budget' > 20"

    if verbose == 1:
        progbar = Progbar(target=n_iter)

    # main loop ----------

    for i in range(n_iter):

        # find the next set of parameters (vector), maximizing ei
        next_param = self.next_parameter_by_ei(seed=len(self.max_ei), i=i)

        try:
            if self.per_second is True:

                start = time()

                if self.objective_func is not None:
                    score_next_param = self.objective_func(
                        next_param, *func_args
                    )
                    if (np.isfinite(score_next_param) == False) or (
                        np.isnan(score_next_param) == True
                    ):
                        continue
                else:
                    assert (self.x_init is not None) and (
                        self.y_init is not None
                    ), "self.objective_func is None: must have (self.x_init is not None) and (self.y_init is not None)"
                    print(f"\n next param: {next_param} \n")
                    score_next_param = float(
                        input("get new score: \n")
                    )  # or an API response
                    if (np.isfinite(score_next_param) == False) or (
                        np.isnan(score_next_param) == True
                    ):
                        continue

                self.timings.append(np.log(time() - start))

            else:  # self.per_second is False

                if self.objective_func is not None:
                    score_next_param = self.objective_func(
                        next_param, *func_args
                    )
                    if (np.isfinite(score_next_param) == False) or (
                        np.isnan(score_next_param) == True
                    ):
                        continue
                else:
                    assert (self.x_init is not None) and (
                        self.y_init is not None
                    ), "self.objective_func is None: must have (self.x_init is not None) and (self.y_init is not None)"
                    print(f"\n next param: {next_param} \n")
                    score_next_param = float(
                        input("get new score: \n")
                    )  # or an API response
                    if (np.isfinite(score_next_param) == False) or (
                        np.isnan(score_next_param) == True
                    ):
                        continue
        except:
            continue

        self.parameters.append(next_param.tolist())
        self.scores.append(score_next_param)

        if self.save is not None:
            self.update_shelve()

        if verbose == 2:
            print(f"iteration {i + 1} -----")
            print(f"current minimum: {self.x_min}")
            print(f"current minimum score: {self.y_min}")
            print(f"next parameter: {next_param}")
            print(f"score for next parameter: {score_next_param} \n")

        if score_next_param < self.y_min:
            self.x_min = next_param
            self.y_min = score_next_param
            if self.save is not None:
                self.update_shelve()

        self.y_mean, self.y_std = self.gp_fit_predict(
            np.asarray(self.parameters),
            np.asarray(self.scores),
            self.x_choices,
        )

        if self.save is not None:
            self.update_shelve()

        if verbose == 1:
            progbar.update(i + 1)  # update progress bar

        # early stopping
        if abs_tol is not None:
            if len(self.max_ei) > min_budget:
                diff_max_ei = np.abs(np.diff(np.asarray(self.max_ei)))
                if diff_max_ei[-1] <= abs_tol:
                    iter_stop = len(self.max_ei)  # index i starts at 0
                    break

    # end main loop ----------

    if (verbose == 1) & (i < (n_iter - 1)):
        progbar.update(n_iter)

    self.n_iter = iter_stop
    if self.save is not None:
        self.update_shelve()

    DescribeResult = namedtuple(
        "DescribeResult", ("best_params", "best_score")
    )

    if self.params_names is None:
        return DescribeResult(self.x_min, self.y_min)

    return DescribeResult(
        dict(zip(self.params_names, self.x_min)), self.y_min
    )
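To illustrate the loop end-to-end, a hedged sketch of a full run; the toy quadratic objective and bounds are illustrative, not part of the library. `abs_tol` switches on the expected-improvement early stopping (which requires `min_budget` > 20), and a second call with `n_more_iter` resumes the same optimizer:

import numpy as np
import GPopt as gp  # assumes the top-level GPOpt export

opt = gp.GPOpt(
    objective_func=lambda x: float(np.sum(x ** 2)),  # toy objective
    lower_bound=np.array([-3.0, -3.0]),
    upper_bound=np.array([3.0, 3.0]),
    n_init=10,
    n_iter=50,
)

# early stopping on stalled EI, allowed once 25 iterations have run
res = opt.optimize(verbose=0, abs_tol=1e-4, min_budget=25)
print(res.best_params, res.best_score)

res = opt.optimize(n_more_iter=10)  # request 10 additional iterations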
Class GPOpt.

Arguments:

    lower_bound: a numpy array;
        lower bound of the search domain for the minimum

    upper_bound: a numpy array;
        upper bound of the search domain for the minimum

    objective_func: a function;
        the objective function to be minimized

    params_names: a list;
        names of the parameters of the objective function (optional)

    gp_obj: a GaussianProcessRegressor object;
        an ML model for estimating the uncertainty around the objective function

    x_init:
        initial design points where `objective_func` is evaluated (optional)

    y_init:
        initial values of `objective_func` at the points in `x_init` (optional)

    n_init: an integer;
        number of points in the initial design, when `x_init` and `y_init` are not provided

    n_choices: an integer;
        number of candidate points for the calculation of expected improvement

    n_iter: an integer;
        number of iterations of the minimization algorithm

    alpha: a float;
        value added to the diagonal of the kernel matrix during fitting (for the Matern 5/2 kernel)

    n_restarts_optimizer: an integer;
        number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood

    seed: an integer;
        reproducibility seed

    save: a string;
        specifies where to save the optimizer in its current state

    n_jobs: an integer;
        number of jobs for parallel computing on the initial design (can be -1)

    per_second: a boolean;
        __experimental__, default is False (leave as default for now)

    log_scale: a boolean;
        __experimental__, default is False (leave as default for now)

see also [Bayesian Optimization with GPopt](https://thierrymoudiki.github.io/blog/2021/04/16/python/misc/gpopt)
and [Hyperparameters tuning with GPopt](https://thierrymoudiki.github.io/blog/2021/06/11/python/misc/hyperparam-tuning-gpopt)
attributes.
\n\nReturns
\n\nparams : mapping of string to any\n Parameter names mapped to their values.
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.eval_objective", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.eval_objective", "kind": "function", "doc": "\n", "signature": "(self, arg):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.load", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.load", "kind": "function", "doc": "load data from stored shelve.
\n\nArguments
\n\npath : a string; path to stored shelve.
\n\nSee also: Bayesian Optimization with GPopt Part 2 (save and resume)
\n", "signature": "(self, path):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.update_shelve", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.update_shelve", "kind": "function", "doc": "\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.close_shelve", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.close_shelve", "kind": "function", "doc": "Close shelve.
\n\nArguments
\n\nNo argument.
\n\nSee also: Bayesian Optimization with GPopt Part 2 (save and resume)
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.gp_fit_predict", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.gp_fit_predict", "kind": "function", "doc": "\n", "signature": "(self, X_train, y_train, X_test):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.timings_fit_predict", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.timings_fit_predict", "kind": "function", "doc": "\n", "signature": "(self, X_train, y_train, X_test):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.next_parameter_by_ei", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.next_parameter_by_ei", "kind": "function", "doc": "\n", "signature": "(self, seed, i):", "funcdef": "def"}, {"fullname": "GPopt.GPOpt.GPOpt.optimize", "modulename": "GPopt.GPOpt", "qualname": "GPOpt.optimize", "kind": "function", "doc": "Launch optimization loop.
\n\nArguments:
\n\n\n\nverbose: an integer;\n verbose = 0: nothing is printed,\n verbose = 1: a progress bar is printed (longer than 0),\n verbose = 2: information about each iteration is printed (longer than 1)\n\nn_more_iter: an integer;\n additional number of iterations for the optimizer (which has been run once)\n\nabs_tol: a float;\n tolerance for convergence of the optimizer (early stopping based on expected improvement)\n\nmin_budget: an integer (default is 50);\n minimum number of iterations before early stopping controlled by `abs_tol`\n\nfunc_args: a list;\n additional parameters for the objective function (if necessary)\n
see also Bayesian Optimization with GPopt\nand Hyperparameters tuning with GPopt
\n", "signature": "(\tself,\tverbose=1,\tn_more_iter=None,\tabs_tol=None,\tmin_budget=50,\tfunc_args=None):", "funcdef": "def"}]; // mirrored in build-search-index.js (part 1) // Also split on html tags. this is a cheap heuristic, but good enough. diff --git a/requirements.txt b/requirements.txt index 9f9430b..43ece8a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,7 +1,9 @@ joblib matplotlib +nnetsauce numpy pandas scipy scikit-learn -threadpoolctl \ No newline at end of file +threadpoolctl +tqdm \ No newline at end of file diff --git a/save.db b/save.db new file mode 100644 index 0000000..06208da Binary files /dev/null and b/save.db differ diff --git a/setup.py b/setup.py index 4dc4545..19a09ad 100644 --- a/setup.py +++ b/setup.py @@ -3,7 +3,7 @@ from codecs import open from os import path -__version__ = "0.4.1" +__version__ = "0.6.0" subprocess.call("pip install -r requirements.txt", shell=True)