3 changes: 3 additions & 0 deletions .gitignore
Expand Up @@ -214,3 +214,6 @@ __marimo__/

# Streamlit
.streamlit/secrets.toml

# Mac stuff
.DS_Store
64 changes: 64 additions & 0 deletions code_snippets/IOHClustering/README.md
@@ -0,0 +1,64 @@
# IOH Clustering

Evaluates a function from the [IOHclustering](https://github.com/IOHprofiler/IOHClustering) benchmark suite — a set of continuous black-box optimization problems derived from data clustering tasks, integrated with the [IOHprofiler](https://github.com/IOHprofiler) framework.

Each problem encodes a k-means-style clustering objective: the search vector represents `k` cluster centers in a 2D feature space (after PCA or feature selection), so the problem dimensionality is `k × 2`. All variables are bounded in `[0, 1]`.
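The cost being minimized is conceptually the classic k-means objective. A minimal pure-Python sketch, assuming a mean-of-squared-distances cost (the suite's exact objective and scaling may differ):

```python
def clustering_cost(x, data, k):
    # x is a flat vector of length k * 2: [c1x, c1y, c2x, c2y, ...]
    centers = [(x[2 * i], x[2 * i + 1]) for i in range(k)]
    # each data point contributes its squared distance to the nearest center
    total = 0.0
    for px, py in data:
        total += min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
    return total / len(data)

# two points sitting exactly on the two candidate centers: cost 0
data = [(0.2, 0.2), (0.8, 0.8)]
print(clustering_cost([0.2, 0.2, 0.8, 0.8], data, k=2))  # 0.0
```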

## Quick Start

1. Install [uv](https://docs.astral.sh/uv/) if you don't have it yet:

```bash
pip install uv
```

No extra setup is needed beyond having `uv` installed. The `ioh` and `iohclustering` packages are resolved automatically.

> **Note:** This snippet requires **Python 3.10**. The inline metadata enforces this via `requires-python = "==3.10.*"`. Make sure a Python 3.10 interpreter is available on your system.

## Usage

```bash
uv run call_IOHClustering.py
```

## What the Snippet Does

The script creates clustering problem 5 (`iris_pca`) with `k=2` clusters, evaluates it at the origin, and prints the result. You can adjust the behavior by editing these variables in the script:

- **`fid`** — problem ID (integer) or dataset name (string) passed to `get_problem()` (default: `5`)
- **`k`** — number of cluster centers (default: `2`; available values: `2`, `3`, `5`, `10`)
- **`instance`** — problem instance for transformation-based generalization (default: `1`)
- **`eval_point`** — the point at which the function is evaluated (default: all zeros)

> **Note:** `get_problem()` returns a tuple `(problem, retransform)`. The `retransform` function converts a solution vector back into cluster center coordinates in the original data space.
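That conversion can be pictured as a hypothetical per-feature min-max rescaling (a sketch only; the real `retransform`'s behavior may differ):

```python
def retransform_sketch(x, feature_min, feature_max, k):
    # Map [0, 1]-normalized center coordinates back to the original
    # feature ranges, one (x, y) pair per cluster center.
    centers = []
    for i in range(k):
        cx = feature_min[0] + x[2 * i] * (feature_max[0] - feature_min[0])
        cy = feature_min[1] + x[2 * i + 1] * (feature_max[1] - feature_min[1])
        centers.append((cx, cy))
    return centers

print(retransform_sketch([0.0, 0.5, 1.0, 0.5],
                         feature_min=(-2.0, -3.0), feature_max=(2.0, 3.0), k=2))
# [(-2.0, 0.0), (2.0, 0.0)]
```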

### Available Datasets

| ID | Name |
|----|------|
| 1 | breast_pca |
| 2 | diabetes_pca |
| 3 | german_postal_selected |
| 4 | glass_pca |
| 5 | iris_pca |
| 6 | kc1_pca |
| 7 | mfeat-fourier_pca |
| 8 | ruspini_selected |
| 9 | segment_pca |
| 10 | wine_pca |

### Available Values of k

| k | Dimensionality |
|---|----------------|
| 2 | 4 |
| 3 | 6 |
| 5 | 10 |
| 10 | 20 |

## Resources

- [IOHClustering GitHub repository](https://github.com/IOHprofiler/IOHClustering)
- [IOHexperimenter GitHub repository](https://github.com/IOHprofiler/IOHexperimenter)
- [Benchmark paper (arXiv)](https://arxiv.org/abs/2505.09233)
20 changes: 20 additions & 0 deletions code_snippets/IOHClustering/call_IOHClustering.py
@@ -0,0 +1,20 @@
# /// script
# requires-python = "==3.10.*"
# dependencies = [
# "ioh",
# "iohclustering",
# ]
# ///

from iohclustering import get_problem

# Get benchmark problem by its ID (e.g., ID=5) with k=2 clusters
# Alternatively, by name (e.g., "iris_pca")
clustering_problem, retransform = get_problem(fid=5, k=2)

### evaluation point
dim = clustering_problem.meta_data.n_variables
eval_point = [0.0]*dim

### print function value for eval_point
print(clustering_problem(eval_point))
57 changes: 57 additions & 0 deletions code_snippets/PBO/README.md
@@ -0,0 +1,57 @@
# IOH PBO

Evaluates a function from the [IOHexperimenter](https://github.com/IOHprofiler/IOHexperimenter) **PBO** (Pseudo-Boolean Optimization) problem class — a suite of 25 test functions defined on {0, 1}^n. All problems are maximization problems.

## Quick Start

1. Install [uv](https://docs.astral.sh/uv/) if you don't have it yet:

```bash
pip install uv
```

No extra setup is needed beyond having `uv` installed. The `ioh` package is resolved automatically.

> **Note:** This snippet requires **Python 3.10**. The inline metadata enforces this via `requires-python = "==3.10.*"`. Make sure a Python 3.10 interpreter is available on your system.

## Usage

```bash
uv run call_pbo.py
```

## What the Snippet Does

The script creates PBO problem 1 (OneMax, instance 1) in 16 dimensions, evaluates it at the all-zeros bitstring, and prints the result. You can adjust the behavior by editing these variables in the script:

- **Problem ID** — the first argument to `ioh.get_problem()` selects which of the 25 functions to load (default: `1`)
- **`dim`** — problem dimensionality, i.e. bitstring length (default: `16`)
- **`instance`** — problem instance; controls transformations such as objective scaling (default: `1`)
- **`eval_point`** — the bitstring at which the function is evaluated (default: all zeros)

> **Note:** PBO problems take **integer** inputs in {0, 1}^n (not floats). The evaluation point should be a list of `0`s and `1`s.
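For orientation, the default function here (problem 1, OneMax) just counts the ones in the bitstring. A plain-Python sketch of the untransformed function (instance transformations may shift or scale the value that `ioh` actually returns):

```python
def onemax(bits):
    # OneMax: number of 1-bits; maximized by the all-ones string
    return sum(bits)

dim = 16
print(onemax([0] * dim))  # 0
print(onemax([1] * dim))  # 16
```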

### Available Functions

| ID | Name | ID | Name |
|----|------|----|------|
| 1 | OneMax | 14 | LeadingOnesEpistasis |
| 2 | LeadingOnes | 15 | LeadingOnesRuggedness1 |
| 3 | Linear | 16 | LeadingOnesRuggedness2 |
| 4 | OneMaxDummy1 | 17 | LeadingOnesRuggedness3 |
| 5 | OneMaxDummy2 | 18 | LABS |
| 6 | OneMaxNeutrality | 19 | IsingRing |
| 7 | OneMaxEpistasis | 20 | IsingTorus |
| 8 | OneMaxRuggedness1 | 21 | IsingTriangular |
| 9 | OneMaxRuggedness2 | 22 | MIS |
| 10 | OneMaxRuggedness3 | 23 | NQueens |
| 11 | LeadingOnesDummy1 | 24 | ConcatenatedTrap |
| 12 | LeadingOnesDummy2 | 25 | NKLandscapes |
| 13 | LeadingOnesNeutrality | | |

## Resources

- [PBO problem descriptions](https://iohprofiler.github.io/IOHproblem/PBO)
- [PBO class documentation](https://iohprofiler.github.io/IOHexperimenter/python/pbo.html)
- [PBO source code](https://github.com/IOHprofiler/IOHexperimenter/tree/master/include/ioh/problem/pbo)
- [IOHexperimenter GitHub repository](https://github.com/IOHprofiler/IOHexperimenter)
18 changes: 18 additions & 0 deletions code_snippets/PBO/call_pbo.py
@@ -0,0 +1,18 @@
# /// script
# requires-python = "==3.10.*"
# dependencies = [
# "ioh",
# ]
# ///

import ioh

### evaluation point
dim = 16
eval_point = [0]*dim

### input
f = ioh.get_problem(1, instance=1, dimension=dim, problem_class=ioh.ProblemClass.PBO)

### print function value for eval_point
print(f(eval_point))
51 changes: 51 additions & 0 deletions code_snippets/README.md
@@ -0,0 +1,51 @@
# OPL Code Snippets

A collection of minimal, self-contained code snippets for evaluating optimization benchmark functions from the OPL library. Each snippet uses [uv](https://docs.astral.sh/uv/) as the script runner and requires no manual virtual-environment setup.

## Repository Structure

Every benchmark problem has its own directory containing:

- **`call_<problem>.py`** — the evaluation script, with inline dependency metadata ([PEP 723](https://peps.python.org/pep-0723/)) so `uv` resolves everything automatically.
- **`README.md`** — problem-specific instructions covering any prerequisites (cloning external repos, running setup scripts, downloading executables, etc.) and the usage example.

**Always start by reading the README inside the problem's directory.** Some benchmarks need extra setup steps before the snippet will run.

## Quick Start

1. Install **uv** if you don't have it yet:

```bash
pip install uv
```

2. Navigate to the problem's directory and follow its README.

3. Run the snippet:

```bash
uv run call_<problem>.py
```

## Available Benchmarks

| Directory | Benchmark | Description |
|-----------|-----------|-------------|
| `cocoex/` | [COCO/BBOB](https://github.com/numbbo/coco) | Evaluates function 1 from the BBOB suite (2-D) |
| `IOHClustering/` | [IOHClustering](https://github.com/IOHprofiler/IOHClustering) | Evaluates a clustering problem (`iris_pca`, `k=2`) |
| `PBO/` | [IOHexperimenter PBO](https://github.com/IOHprofiler/IOHexperimenter) | Evaluates OneMax on a 16-bit string |
| `bbob/` | [COCO/BBOB](https://github.com/numbbo/coco) | Evaluates function 1 from the bbob suite (2-D) |
| `bbob_largescale/` | [COCO/BBOB](https://github.com/numbbo/coco) | Evaluates function 1 from the bbob-largescale suite (20-D) |
| `bbob_mixint/` | [COCO/BBOB](https://github.com/numbbo/coco) | Evaluates function 1 from the bbob-mixint suite (5-D) |
| `mf2/` | [mf2](https://github.com/sjvrijn/mf2) | Evaluates the Branin function at high and low fidelity |
| … | … | See the full list in the [OPL Library](#) |

## Contributing a New Snippet

1. Create a new directory named after the problem.
2. Add a `call_<problem>.py` file with the inline dependency block at the top:
```python
# /// script
# dependencies = [
# "your-package",
# ]
# ///
```
3. Write your evaluation code below the dependency block.
4. Add a `README.md` that documents any setup steps a user must complete before running the script (cloning repos, installing non-Python dependencies, downloading data, etc.).
5. Update the table above to include your new benchmark.
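Putting steps 2 and 3 together, a minimal stdlib-only template (the `sphere` function below is a placeholder standing in for a real benchmark evaluation call):

```python
# /// script
# dependencies = []
# ///

### evaluation point
dim = 2
eval_point = [0.0] * dim

### placeholder objective (replace with the real benchmark's evaluation call)
def sphere(x):
    return sum(v * v for v in x)

### print function value for eval_point
print(sphere(eval_point))  # 0.0
```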
33 changes: 33 additions & 0 deletions code_snippets/bbob/README.md
@@ -0,0 +1,33 @@
# BBOB

Evaluates a function from the [COCO](https://github.com/numbbo/coco) **bbob** suite — the core set of 24 noiseless single-objective benchmark functions widely used for comparing continuous optimizers.

## Quick Start

1. Install [uv](https://docs.astral.sh/uv/) if you don't have it yet:

```bash
pip install uv
```

No extra setup is needed beyond having `uv` installed. The `coco-experiment` package is resolved automatically.

## Usage

```bash
uv run call_bbob.py
```

## What the Snippet Does

The script evaluates function 1 (instance 1) from the `bbob` suite in 2 dimensions at the origin and prints the result. You can adjust the behavior by editing these variables in the script:

- **`function_indices`** — which benchmark function(s) to load (default: `1`)
- **`dimensions`** — problem dimensionality (default: `2`)
- **`instances`** — problem instance(s) (default: `1`)
- **`eval_point`** — the point at which the function is evaluated (default: all zeros)

## Resources

- [COCO documentation](https://numbbo.github.io/coco/)
- [bbob suite function definitions](https://numbbo.github.io/coco/testsuites/bbob)
24 changes: 24 additions & 0 deletions code_snippets/bbob/call_bbob.py
@@ -0,0 +1,24 @@
# /// script
# dependencies = [
# "coco-experiment",
# ]
# ///

import cocoex

### evaluation point
dim = 2
eval_point = [0.0]*dim

### input
suite_name = "bbob"

### prepare
suite = cocoex.Suite(suite_name,
"instances: 1",
"function_indices: 1 dimensions: 2")
print(suite)

### go
for problem in suite:  # evaluate every problem matched by the filters (a single one here)
print(problem(eval_point))
33 changes: 33 additions & 0 deletions code_snippets/bbob_largescale/README.md
@@ -0,0 +1,33 @@
# BBOB Large-Scale

Evaluates a function from the [COCO](https://github.com/numbbo/coco) **bbob-largescale** suite — a set of large-scale benchmark functions designed for testing optimizers in higher dimensions.

## Quick Start

1. Install [uv](https://docs.astral.sh/uv/) if you don't have it yet:

```bash
pip install uv
```

No extra setup is needed beyond having `uv` installed. The `coco-experiment` package is resolved automatically.

## Usage

```bash
uv run call_bbob_largescale.py
```

## What the Snippet Does

The script evaluates function 1 (instance 1) from the `bbob-largescale` suite in 20 dimensions at the origin and prints the result. You can adjust the behavior by editing these variables in the script:

- **`function_indices`** — which benchmark function(s) to load (default: `1`)
- **`dimensions`** — problem dimensionality (default: `20`)
- **`instances`** — problem instance(s) (default: `1`)
- **`eval_point`** — the point at which the function is evaluated (default: all zeros)

## Resources

- [COCO documentation](https://numbbo.github.io/coco/)
- [bbob-largescale suite definition](https://numbbo.github.io/coco/testsuites/bbob-largescale)
24 changes: 24 additions & 0 deletions code_snippets/bbob_largescale/call_bbob_largescale.py
@@ -0,0 +1,24 @@
# /// script
# dependencies = [
# "coco-experiment",
# ]
# ///

import cocoex

### evaluation point
dim = 20
eval_point = [0.0]*dim

### input
suite_name = "bbob-largescale"

### prepare
suite = cocoex.Suite(suite_name,
"instances: 1",
"function_indices: 1 dimensions: 20")
print(suite)

### go
for problem in suite:  # evaluate every problem matched by the filters (a single one here)
print(problem(eval_point))
33 changes: 33 additions & 0 deletions code_snippets/bbob_mixint/README.md
@@ -0,0 +1,33 @@
# BBOB Mixed-Integer

Evaluates a function from the [COCO](https://github.com/numbbo/coco) **bbob-mixint** suite — benchmark functions with a mix of continuous and integer variables, designed for testing mixed-integer optimization algorithms.

## Quick Start

1. Install [uv](https://docs.astral.sh/uv/) if you don't have it yet:

```bash
pip install uv
```

No extra setup is needed beyond having `uv` installed. The `coco-experiment` package is resolved automatically.

## Usage

```bash
uv run call_bbob_mixint.py
```

## What the Snippet Does

The script evaluates function 1 (instance 1) from the `bbob-mixint` suite in 5 dimensions at the origin and prints the result. You can adjust the behavior by editing these variables in the script:

- **`function_indices`** — which benchmark function(s) to load (default: `1`)
- **`dimensions`** — problem dimensionality (default: `5`)
- **`instances`** — problem instance(s) (default: `1`)
- **`eval_point`** — the point at which the function is evaluated (default: all zeros)

## Resources

- [COCO documentation](https://numbbo.github.io/coco/)
- [bbob-mixint suite definition](https://numbbo.github.io/coco/testsuites/bbob-mixint)