Commit a04e67a

Revert "Merge branch 'dev' into tutorial-updates"
This reverts commit dafd4c1, reversing changes made to 2645e9d.
1 parent dafd4c1 commit a04e67a

807 files changed: +71176, -4890 lines


.gitignore (-3 lines)

```diff
@@ -54,6 +54,3 @@ venv/*
 .conda*/
 .python-version
 reports/~$honegumi-logo.pptx
-docs/curriculum/assignments/Assignment_1_SOBO.ipynb
-docs/curriculum/assignments/Assignment_2_MOBO.ipynb
-src/honegumi/core/honegumi-pyodide-refactor-backup.html.jinja
```

.pre-commit-config.yaml (+7 -7 lines)

```diff
@@ -18,13 +18,13 @@ repos:
   - id: mixed-line-ending
     args: ['--fix=auto'] # replace 'auto' with 'lf' to enforce Linux/Mac line endings or 'crlf' for Windows
 
-# # If you want to automatically "modernize" your Python code:
-# - repo: https://github.com/asottile/pyupgrade
-#   rev: v3.9.0
-#   hooks:
-#   - id: pyupgrade
-#     args: ['--py37-plus']
-#     exclude: ^tests/generated_scripts/
+# If you want to automatically "modernize" your Python code:
+- repo: https://github.com/asottile/pyupgrade
+  rev: v3.9.0
+  hooks:
+  - id: pyupgrade
+    args: ['--py37-plus']
+    exclude: ^tests/generated_scripts/
 
 # # If you want to avoid flake8 errors due to unused vars or imports:
 # Failing due to
```

CONTRIBUTING.md (+7 -34 lines)

`````diff
@@ -103,45 +103,19 @@ python3 -m http.server --directory 'docs/_build/html'
 
 ## Code Contributions
 
-For a high-level roadmap of Honegumi's development, see https://github.com/sgbaird/honegumi/discussions/2. Honegumi uses Python, Javascript, Jinja2, pytest, and GitHub actions to automate the generation, testing, and deployment of templates with a focus on Bayesian optimization packages. As of 2024-06-18, only [Meta's Ax Platform](https://ax.dev) is supported. The plumbing and logic that creates this is thorough and scalable.
+For a high-level roadmap of Honegumi's development, see https://github.com/sgbaird/honegumi/discussions/2. Honegumi uses Python, Javascript, Jinja2, pytest, and GitHub actions to automate the generation, testing, and deployment of templates with a focus on Bayesian optimization packages. As of 2023-08-21, only a single package ([Meta's Ax Platform](https://ax.dev)) is supported, for a small set of features. However, the plumbing and logic that creates this is thorough and scalable. I focused first on getting all the pieces together before scaling up to many features (and thus slowing down the development cycle).
 
-Here are some ways you can help with the https://github.com/sgbaird/honegumi/blob/main/
+Here are some ways you can help with the project:
 1. Use the tool and let us know what you think 😉
 2. [Provide feedback](https://github.com/sgbaird/honegumi/discussions/2) on the overall organization, logic, and workflow of the project
 3. Extend the Ax features to additional options (i.e., additional rows and options within rows) via direct edits to [ax/main.py.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja)
-4. Extend the [`honegumi.html.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [`main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja) templates (make sure to run [`generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py) after changes). See below for more information.
-5. Extend Honegumi to additional platforms such as BoFire, Atlas, or BayBE
-6. Spread the word about the tool
+4. Improve the `honegumi.html` and `honegumi.ipynb` templates (may also need to update `generate_scripts.py`). See below for more information.
+5. Extend Honegumi to additional platforms such as BoFire or Atlas
+6. Spread the word about the tool
 
-For those unfamiliar with Jinja2, see the Google Colab tutorial: [_A Gentle Introduction to Jinja2_](https://colab.research.google.com/github/sgbaird/honegumi/blob/main/notebooks/1.0-sgb-gentle-introduction-jinja.ipynb). The main template file for Meta's Adaptive Experimentation (Ax) Platform is [`ax/main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja). The main file that interacts with this template is at [`scripts/generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py). The generated scripts are [available on GitHub](https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax). Each script is tested [via `pytest`](https://github.com/sgbaird/honegumi/blob/main/tests/) and [GitHub Actions](https://github.com/sgbaird/honegumi/actions/workflows/ci.yml) to ensure it can run error-free. Finally, the results are passed to [core/honegumi.html.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [core/honegumi.ipynb.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja) to create the scripts and notebooks, respectively.
+For those unfamiliar with Jinja2, see the Google Colab tutorial: [_A Gentle Introduction to Jinja2_](https://colab.research.google.com/github/sgbaird/honegumi/blob/main/notebooks/1.0-sgb-gentle-introduction-jinja.ipynb). The main template file for Meta's Adaptive Experimentation (Ax) Platform is [`ax/main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja). The main file that interacts with this template is at [`scripts/generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py). The generated scripts are [available on GitHub](https://github.com/sgbaird/honegumi/tree/main/docs/generated_scripts/ax). Each script is tested [via `pytest`](https://github.com/sgbaird/honegumi/tree/main/tests) and [GitHub Actions](https://github.com/sgbaird/honegumi/actions/workflows/ci.yml) to ensure it can run error-free. Finally, the results are passed to [core/honegumi.html.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [core/honegumi.ipynb.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja) to create the scripts and notebooks, respectively.
 
-```{figure} _static/honegumi-mermaid.png
-Behind-the-scenes flowchart for Honegumi.
-```
-
-```{evalrst}
-flowchart TD
-A[main.py.jinja Template] -->|Used by| B[generate_scripts.py]
-B -->|Generates| C[.py Files]
-B -->|Generates| D[_test.py Files]
-B -->|Generates| E[.ipynb Files]
-B -->|Generates| F[honegumi.html]
-D -->|Tested via| G[GitHub Actions running pytest]
-G -->|If Tests Pass| H[Documentation]
-F -->|Included in| H
-
-click A href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja" "main.py.jinja Template"
-click B href "https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py" "generate_scripts.py"
-click C href "https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax" ".py Files"
-click D href "https://github.com/sgbaird/honegumi/blob/main/tests/" "_test.py Files"
-click E href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja" ".ipynb Files"
-click F href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja" "honegumi.html"
-click G href "https://github.com/sgbaird/honegumi/actions/workflows/ci.yml" "GitHub Actions"
-click H href "https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax" "Documentation"
-```
-
-````{tip}
-If you are committing some of the generated scripts or notebooks on Windows, you will [likely need to run the following command](https://stackoverflow.com/questions/22575662/filename-too-long-in-git-for-windows) in a terminal (e.g., git bash) as an administrator to avoid an `lstat(...) Filename too long` error:
+NOTE: If you are committing some of the generated scripts or notebooks on Windows, you will [likely need to run this command](https://stackoverflow.com/questions/22575662/filename-too-long-in-git-for-windows) in a terminal (e.g., git bash) as an administrator to avoid an `lstat(...) Filename too long` error:
 
 ```bash
 git config --system core.longpaths true
@@ -161,7 +135,6 @@ To only commit non-generated files, you can add all files and reset the generate
 git add .
 git reset docs/generated_scripts docs/generated_notebooks tests/generated_scripts
 ```
-````
 
 ## Project Organization
 
`````
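The template-driven pipeline described in the CONTRIBUTING.md text above (one Jinja2 template plus a grid of option values, rendered into many generated scripts) can be sketched with the standard library alone. Here `string.Template` is only an illustrative stand-in for the Jinja2 rendering Honegumi actually uses, and the option names and key format are hypothetical:

```python
# Minimal stand-in for the template -> generated-scripts pipeline.
# string.Template substitutes for Jinja2; option names are hypothetical.
from itertools import product
from string import Template

# A toy "main.py.jinja" equivalent with two placeholders.
template = Template("objective = '$objective'\nuse_batch = $use_batch\n")

# The grid of options (rows in the Honegumi UI correspond to option axes).
options = {
    "objective": ["single", "multi"],
    "use_batch": [False, True],
}

# Render one script per combination, keyed similarly to generated filenames.
scripts = {}
for objective, use_batch in product(options["objective"], options["use_batch"]):
    key = f"objective-{objective}+use_batch-{use_batch}"
    scripts[key] = template.substitute(objective=objective, use_batch=use_batch)

print(len(scripts))  # 4
```

Each rendered script would then be written to disk and exercised by a generated `_test.py` file, mirroring the `generate_scripts.py` -> `pytest` flow the document describes.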

docs/_static/honegumi-mermaid.png (-352 KB)

Binary file not shown.
@@ -0,0 +1,53 @@

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # second objective has x1 and x2 swapped
    y2 = float(
        (x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
        + 10
    )

    return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
        obj2_name: ObjectiveProperties(minimize=True),
    },
)

batch_size = 2

for _ in range(19):
    parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
    for trial_index, parameterization in list(parameterizations.items()):
        # extract parameters
        x1 = parameterization["x1"]
        x2 = parameterization["x2"]

        results = branin_moo(x1, x2)
        ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
```
@@ -0,0 +1,50 @@

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # second objective has x1 and x2 swapped
    y2 = float(
        (x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
        + 10
    )

    return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
        obj2_name: ObjectiveProperties(minimize=True),
    },
)

for _ in range(19):
    parameterization, trial_index = ax_client.get_next_trial()

    # extract parameters
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]

    results = branin_moo(x1, x2)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
```
@@ -0,0 +1,53 @@

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # second objective has x1 and x2 swapped
    y2 = float(
        (x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
        + 10
    )

    return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True, threshold=25.0),
        obj2_name: ObjectiveProperties(minimize=True, threshold=15.0),
    },
)

batch_size = 2

for _ in range(19):
    parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
    for trial_index, parameterization in list(parameterizations.items()):
        # extract parameters
        x1 = parameterization["x1"]
        x2 = parameterization["x2"]

        results = branin_moo(x1, x2)
        ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
```
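The `threshold` values passed to `ObjectiveProperties` in the variant above act as reference bounds for the multi-objective search: for a minimized objective, outcomes worse than the threshold are not considered acceptable trade-offs. The filtering idea can be sketched with the standard library; this is an illustration of the concept with made-up candidate outcomes, not Ax's internal logic:

```python
# Illustrative threshold filter for two minimized objectives.
# The bounds mirror the ObjectiveProperties(threshold=...) values above.
thresholds = {"branin": 25.0, "branin_swapped": 15.0}

# Hypothetical observed outcomes for three candidate points.
candidates = [
    {"branin": 12.0, "branin_swapped": 9.0},   # within both thresholds
    {"branin": 30.0, "branin_swapped": 5.0},   # violates the branin threshold
    {"branin": 20.0, "branin_swapped": 14.0},  # within both thresholds
]

# Keep only candidates at or below every threshold (minimization).
acceptable = [
    c for c in candidates
    if all(c[name] <= bound for name, bound in thresholds.items())
]
print(len(acceptable))  # 2
```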
@@ -0,0 +1,50 @@

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # second objective has x1 and x2 swapped
    y2 = float(
        (x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
        + 10
    )

    return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True, threshold=25.0),
        obj2_name: ObjectiveProperties(minimize=True, threshold=15.0),
    },
)

for _ in range(19):
    parameterization, trial_index = ax_client.get_next_trial()

    # extract parameters
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]

    results = branin_moo(x1, x2)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
```
@@ -0,0 +1,69 @@

```python
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2, c1):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )

    # add a made-up penalty based on category
    penalty_lookup = {"A": 1.0, "B": 0.0, "C": 2.0}
    y += penalty_lookup[c1]

    # second objective has x1 and x2 swapped
    y2 = float(
        (x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
        + 10
    )

    # add a made-up penalty based on category
    penalty_lookup = {"A": 0.0, "B": 2.0, "C": 1.0}
    y2 += penalty_lookup[c1]

    return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
        {
            "name": "c1",
            "type": "choice",
            "is_ordered": False,
            "values": ["A", "B", "C"],
        },
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
        obj2_name: ObjectiveProperties(minimize=True),
    },
)

batch_size = 2

for _ in range(21):
    parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
    for trial_index, parameterization in list(parameterizations.items()):
        # extract parameters
        x1 = parameterization["x1"]
        x2 = parameterization["x2"]
        c1 = parameterization["c1"]

        results = branin_moo(x1, x2, c1)
        ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
```
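Each generated script ends by asking Ax for the non-dominated trade-off set via `get_pareto_optimal_parameters()`. The underlying idea, for two minimized objectives, is that a point is kept only if no other point is at least as good on both objectives and strictly better on one. The helper below is a stdlib illustration of that definition, not Ax's implementation:

```python
def pareto_front(points):
    """Return the non-dominated (y1, y2) pairs, both objectives minimized.

    A point p is dominated if some other point q is <= p in both
    objectives (and is a different point).
    """
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1]
            for q in points
        )
        if not dominated:
            front.append(p)
    return front


# Made-up observed objective pairs: (3, 3) and (5, 5) are dominated by (2, 2).
observed = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
print(pareto_front(observed))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```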
