Commit
Merge pull request #26 from MSDLLCpapers/release_docs
Add more Wiki pages and Tutorials
kstone40 authored Aug 15, 2024
2 parents 06c4e1f + a744522 commit 8145b8f
Showing 15 changed files with 18,111 additions and 3,779 deletions.
@@ -7,34 +7,47 @@
"outputs": [],
"source": [
"import obsidian\n",
"obsidian.__version__"
"print(f'obsidian version: {obsidian.__version__}')\n",
"\n",
"import pandas as pd\n",
"import plotly.express as px\n",
"import plotly.io as pio\n",
"pio.renderers.default = \"plotly_mimetype+notebook\""
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import plotly.express as px"
"## Introduction"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"from obsidian.parameters import ParamSpace, Param_Continuous\n",
"from obsidian.experiment import ExpDesigner"
"In this tutorial, we will see how to use _obsidian_ for multi-output optimization. To demonstrate the versatility of the approach, we will seek to maximize one response while minimizing the other.\n",
"\n",
"$$\underset{X}{\mathrm{argmax}}\; HV\left(+f(y_1), -f(y_2)\right)$$\n",
"\n",
"Furthermore, we will apply a linear constraint on the input variables, requiring that $X_1 + X_2 \leq 6$."
]
},
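The hypervolume (HV) objective above can be illustrated outside the notebook with a minimal 2D sketch, where HV is the area jointly dominated by the Pareto points above a reference point. This is not obsidian's implementation; the function name, the Pareto points, and the reference point below are all hypothetical:

```python
# Minimal 2D hypervolume sketch (assumes both transformed objectives,
# +y1 and -y2 as in the tutorial, are maximized).
def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded below by `ref`."""
    # Keep only points that strictly dominate the reference point
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    # Sweep in decreasing order of the first objective, accumulating
    # the new horizontal slice each non-dominated point contributes
    pts.sort(key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

front = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]  # hypothetical Pareto points
print(hypervolume_2d(front, ref=(0.0, 0.0)))  # → 6.0
```

Maximizing HV pushes the suggested points toward a broader, better Pareto front, which is why the `ref_point` passed to the acquisition function later in the notebook matters: only the region above it counts.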
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up parameter space and initialize a design"
"## Set up parameter space and initialize a design"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obsidian import Campaign, Target, ParamSpace, BayesianOptimizer\n",
"from obsidian.parameters import Param_Continuous"
]
},
{
@@ -49,8 +62,12 @@
" ]\n",
"\n",
"X_space = ParamSpace(params)\n",
"designer = ExpDesigner(X_space, seed=0)\n",
"X0 = designer.initialize(4, 'LHS')\n",
"target = [\n",
" Target('Response 1', aim='max'),\n",
" Target('Response 2', aim='min')\n",
"]\n",
"campaign = Campaign(X_space, target, seed=0)\n",
"X0 = campaign.designer.initialize(4, 'LHS')\n",
"\n",
"X0"
]
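Latin hypercube sampling (the `'LHS'` method used by the designer above) places exactly one sample in each of `n` equal strata per dimension, giving good one-dimensional coverage from few runs. A minimal sketch on the unit cube, assuming uniform strata; the `lhs` function here is illustrative, not obsidian's designer, which also scales samples into the parameter space:

```python
import random

# Minimal Latin hypercube sketch: one sample per stratum per dimension,
# with strata shuffled independently across rows.
def lhs(n, d, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        # One uniform draw inside each stratum [i/n, (i+1)/n)
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    # Transpose column-per-dimension into row-per-run
    return [[cols[j][i] for j in range(d)] for i in range(n)]

X0_sketch = lhs(4, 2)  # 4 runs, 2 factors, like the design above
```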
@@ -59,7 +76,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Collect results (e.g. from a simulation)"
"## Collect results (e.g. from a simulation)"
]
},
{
@@ -75,23 +92,15 @@
"y0 = simulator.simulate(X0)\n",
"Z0 = pd.concat([X0, y0], axis=1)\n",
"\n",
"Z0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Z0.plot(x='Response 1', y='Response 2', kind='scatter', figsize=(4,3))"
"campaign.add_data(Z0)\n",
"campaign.data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a campaign to track optimization"
"### Fit an optimizer and visualize results"
]
},
{
@@ -100,7 +109,7 @@
"metadata": {},
"outputs": [],
"source": [
"from obsidian.campaign import Campaign"
"campaign.fit()"
]
},
{
@@ -109,25 +118,23 @@
"metadata": {},
"outputs": [],
"source": [
"my_campaign = Campaign(X_space)\n",
"my_campaign.add_data(Z0)\n",
"my_campaign.data"
"from obsidian.plotting import surface_plot, optim_progress"
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"### Fit an optimizer"
"surface_plot(campaign.optimizer)"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"from obsidian.parameters import Target"
"## Optimize new experiment suggestions"
]
},
{
@@ -136,29 +143,16 @@
"metadata": {},
"outputs": [],
"source": [
"target = [\n",
" Target('Response 1', aim='max'),\n",
" Target('Response 2', aim='min')\n",
"]\n",
"\n",
"my_campaign.set_target(target)\n",
"my_campaign.fit()\n"
"from obsidian.constraints import InConstraint_Generic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Make new experiment suggestions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obsidian.constraints import InConstraint_Generic"
"_Note:_ It is a good idea to balance a set of acquisition functions with those that prefer design-space exploration. This helps to ensure that the optimizer is not severely misled by deficiencies in the dataset, particularly when data are scarce. It also improves the chances of locating a global optimum.\n",
"\n",
"A simple choice is __Space Filling (SF)__, although __Negative Integrated Posterior Variance (NIPV)__ is available for single-output optimizations; and there are various other acquisition functions whose hyperparameters can be tuned to manage the \"explore-exploit\" balance."
]
},
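The space-filling idea above can be sketched as a maximin selection: from a pool of candidates, prefer the one farthest from the existing design, supplying the "explore" half of a batch. This only illustrates the concept; `maximin_candidate` and the points below are hypothetical, not obsidian's SF acquisition:

```python
# Maximin space-filling sketch: pick the candidate whose nearest
# neighbor in the current design is farthest away.
def maximin_candidate(pool, design):
    def min_dist(p):
        # Euclidean distance to the closest already-run point
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for q in design)
    return max(pool, key=min_dist)

design = [(0.0, 0.0), (1.0, 1.0)]                 # points already run
pool = [(0.5, 0.5), (0.0, 1.0), (0.9, 0.9)]       # candidate points
print(maximin_candidate(pool, design))            # → (0.0, 1.0)
```

Pairing one such exploratory point with one acquisition-driven point per batch, as the `suggest` call below does with `'SF'`, hedges against an early surrogate model that fits the few observed points too confidently.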
{
@@ -167,13 +161,9 @@
"metadata": {},
"outputs": [],
"source": [
"# # X1 + X2 >= 2\n",
"# optim_kwargs = {'m_batch':2, 'acquisition':[{'qNEHVI':{'ref_point':[-350,0]}}], 'ineq_constraints':[InConstraint_Generic(X_space, indices=[0,1], coeff=[1,1], rhs=2)]}\n",
"\n",
"# X1 + X2 <= 6 aka -X1 - X2 >= -6\n",
"optim_kwargs = {'m_batch':2, 'acquisition':[{'qNEHVI':{'ref_point':[-350,0]}}], 'ineq_constraints':[InConstraint_Generic(X_space, indices=[0,1], coeff=[-1,-1], rhs=-6)]}\n",
"\n",
"X_suggest, eval_suggest = my_campaign.optimizer.suggest(**optim_kwargs)"
"X_suggest, eval_suggest = campaign.optimizer.suggest(acquisition = [{'NEHVI':{'ref_point':[-350, -20]}}, 'SF'],\n",
" # X1 + X2 <= 6, written as -X1 - X2 >= -6\n",
" ineq_constraints = [InConstraint_Generic(X_space, indices=[0,1], coeff=[-1,-1], rhs=-6)])"
]
},
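The inequality rewrite in the cell above (an upper bound expressed as a lower bound for `ineq_constraints`) follows a common solver convention: a constraint of the form sum(coeff · x) ≤ rhs is negated on both sides into sum(−coeff · x) ≥ −rhs. A generic sketch; `as_geq` is a hypothetical helper, not part of the obsidian API:

```python
# Negating both sides converts an upper-bound (<=) linear constraint
# into the lower-bound (>=) form the optimizer expects.
def as_geq(coeff, rhs):
    return [-c for c in coeff], -rhs

coeff, rhs = as_geq([1, 1], 6)  # X1 + X2 <= 6  ->  -X1 - X2 >= -6
print(coeff, rhs)               # → [-1, -1] -6
```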
{
@@ -189,7 +179,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Collect data at new suggestions"
"## Collect data at new suggestions"
]
},
{
@@ -200,15 +190,15 @@
"source": [
"y_iter1 = pd.DataFrame(simulator.simulate(X_suggest))\n",
"Z_iter1 = pd.concat([X_suggest, y_iter1, eval_suggest], axis=1)\n",
"my_campaign.add_data(Z_iter1)\n",
"my_campaign.data"
"campaign.add_data(Z_iter1)\n",
"campaign.data.tail()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Repeat as desired"
"## Repeat as desired"
]
},
{
@@ -218,11 +208,12 @@
"outputs": [],
"source": [
"for _ in range(5):\n",
" my_campaign.fit()\n",
" X_suggest, eval_suggest = my_campaign.optimizer.suggest(**optim_kwargs)\n",
" campaign.fit()\n",
" X_suggest, eval_suggest = campaign.optimizer.suggest(acquisition = [{'NEHVI':{'ref_point':[-350, -20]}}, 'SF'],\n",
" ineq_constraints = [InConstraint_Generic(X_space, indices=[0,1], coeff=[-1,-1], rhs=-6)])\n",
" y_iter = pd.DataFrame(simulator.simulate(X_suggest))\n",
" Z_iter = pd.concat([X_suggest, y_iter, eval_suggest], axis=1)\n",
" my_campaign.add_data(Z_iter)"
" campaign.add_data(Z_iter)"
]
},
{
@@ -231,8 +222,7 @@
"metadata": {},
"outputs": [],
"source": [
"fig = px.scatter(my_campaign.data, x='Response 1', y='Response 2', color='Iteration')\n",
"fig.update_layout(height=300, width=400, template='ggplot2')"
"optim_progress(campaign)"
]
},
{
@@ -241,15 +231,17 @@
"metadata": {},
"outputs": [],
"source": [
"my_campaign.data"
"surface_plot(campaign.optimizer, response_id = 0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
"source": [
"surface_plot(campaign.optimizer, response_id = 1)"
]
}
],
"metadata": {