nxbench.benchmarking package

Submodules

nxbench.benchmarking.benchmark module

nxbench.benchmarking.benchmark.benchmark_suite(algorithms, datasets, backends, threads, graphs)[source]

Run the full suite of benchmarks in parallel using asyncio.

Return type:

list[dict[str, Any]]

Parameters:
  • algorithms

  • datasets

  • backends

  • threads

  • graphs

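A sketch of driving the suite directly. The list shapes of backends and threads, the backend name, and the synchronous invocation style are assumptions, not confirmed by this reference:

    from nxbench.benchmarking.benchmark import benchmark_suite, setup_cache
    from nxbench.benchmarking.utils import get_benchmark_config

    cfg = get_benchmark_config()        # BenchmarkConfig (see the config module below)
    graphs = setup_cache(cfg.datasets)  # dict[str, tuple[Graph, dict[str, Any]]]
    results = benchmark_suite(
        algorithms=cfg.algorithms,
        datasets=cfg.datasets,
        backends=["networkx"],          # backend names are an assumption
        threads=[1],                    # list-of-ints shape is an assumption
        graphs=graphs,
    )
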
nxbench.benchmarking.benchmark.collect_metrics(execution_time, execution_time_with_preloading, peak_memory, graph, algo_config, backend, dataset_name, num_thread, validation_status, validation_message, error=None)[source]

Assemble the collected measurements and validation outcome for a single benchmark run into a result dictionary.

Return type:

dict[str, Any]

Parameters:
  • execution_time

  • execution_time_with_preloading

  • peak_memory

  • graph

  • algo_config

  • backend

  • dataset_name

  • num_thread

  • validation_status

  • validation_message

  • error

nxbench.benchmarking.benchmark.configure_backend(original_graph, backend, num_thread)[source]

Convert a NetworkX graph into the representation required by the specified backend.

Return type:

Any

Parameters:
  • original_graph (Graph)

  • backend (str)

  • num_thread (int)
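
A short sketch; "networkx" as a backend identifier is an assumption, since valid backend names are not listed in this reference:

    import networkx as nx

    from nxbench.benchmarking.benchmark import configure_backend

    G = nx.karate_club_graph()
    # Convert the graph for the chosen backend before benchmarking.
    backend_graph = configure_backend(G, backend="networkx", num_thread=1)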

nxbench.benchmarking.benchmark.load_config()[source]

Load benchmark configuration dynamically.

Return type:

dict[str, Any]
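
For example (the keys of the returned dict are not specified in this reference):

    from nxbench.benchmarking.benchmark import load_config

    config = load_config()        # dict[str, Any]
    print(sorted(config.keys()))  # inspect what the loader discovered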

async nxbench.benchmarking.benchmark.main_benchmark(results_dir=PosixPath('results'))[source]

Execute benchmarks using Prefect.

Parameters:

results_dir (Path)
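
Because main_benchmark is a coroutine, it is driven with asyncio.run, for example:

    import asyncio
    from pathlib import Path

    from nxbench.benchmarking.benchmark import main_benchmark

    # Run the whole benchmark workflow, writing outputs under results/.
    asyncio.run(main_benchmark(results_dir=Path("results")))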

nxbench.benchmarking.benchmark.run_algorithm(graph, algo_config, num_thread, backend)[source]

Run the algorithm on the configured backend.

Return type:

tuple[Any, float, int, str | None]

Parameters:
  • graph

  • algo_config

  • num_thread

  • backend

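A self-contained sketch; the meaning of each tuple element is inferred from the surrounding API, not documented here, and the func import-path convention is an assumption:

    import networkx as nx

    from nxbench.benchmarking.benchmark import run_algorithm
    from nxbench.benchmarking.config import AlgorithmConfig

    G = nx.karate_club_graph()
    algo = AlgorithmConfig(name="pagerank", func="networkx.pagerank")
    # Inferred element meanings: (result, execution_time, peak_memory, error)
    result, elapsed, peak_mem, error = run_algorithm(
        graph=G, algo_config=algo, num_thread=1, backend="networkx"
    )
    if error is not None:
        print(f"algorithm failed: {error}")
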
async nxbench.benchmarking.benchmark.run_single_benchmark(backend, num_thread, algo_config, dataset_config, original_graph)[source]

Run a single algorithm/backend/thread-count combination on one dataset and collect its metrics.

Return type:

dict[str, Any] | None

Parameters:
  • backend

  • num_thread

  • algo_config

  • dataset_config

  • original_graph

nxbench.benchmarking.benchmark.setup_cache(datasets)[source]

Load and cache datasets to avoid redundant loading.

Return type:

dict[str, tuple[Graph, dict[str, Any]]]

Parameters:

datasets (list[DatasetConfig])
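
A sketch; the source value and the cache being keyed by dataset name are assumptions:

    from nxbench.benchmarking.benchmark import setup_cache
    from nxbench.benchmarking.config import DatasetConfig

    datasets = [DatasetConfig(name="karate", source="networkx")]  # illustrative values
    cache = setup_cache(datasets)
    graph, metadata = cache["karate"]  # keyed by dataset name (assumed)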

nxbench.benchmarking.benchmark.teardown_specific(backend)[source]

If the backend provides a teardown function, call it.

Parameters:

backend (str)

nxbench.benchmarking.benchmark.validate_results(result, algo_config, graph)[source]

Validate an algorithm result, returning a (status, message) pair.

Return type:

tuple[str, str]

Parameters:
  • result

  • algo_config

  • graph
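
A sketch; the exact status strings returned are not documented:

    import networkx as nx

    from nxbench.benchmarking.benchmark import validate_results
    from nxbench.benchmarking.config import AlgorithmConfig

    G = nx.karate_club_graph()
    algo = AlgorithmConfig(name="pagerank", func="networkx.pagerank")
    result = nx.pagerank(G)
    status, message = validate_results(result, algo, G)
    print(status, message)  # e.g. "passed" / "" -- status values are assumptions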

nxbench.benchmarking.config module

Benchmark configuration handling.

class nxbench.benchmarking.config.AlgorithmConfig(name, func, params=<factory>, requires_directed=False, requires_undirected=False, requires_weighted=False, validate_result=None, groups=<factory>)[source]

Bases: object

Configuration for a graph algorithm to benchmark.

__init__(name, func, params=<factory>, requires_directed=False, requires_undirected=False, requires_weighted=False, validate_result=None, groups=<factory>)

Return type:

None

func: str
get_callable(backend_name)[source]

Retrieve a callable suitable for the given backend.

Return type:

Any

Parameters:

backend_name (str)

get_func_ref()[source]
get_validate_ref()[source]
groups: list[str]
name: str
params: dict[str, Any]
requires_directed: bool = False
requires_undirected: bool = False
requires_weighted: bool = False
validate_result: str | None = None
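
A construction sketch; the dotted import-path convention for func and the "networkx" backend name are assumptions:

    from nxbench.benchmarking.config import AlgorithmConfig

    algo = AlgorithmConfig(
        name="pagerank",
        func="networkx.pagerank",   # assumed to be a dotted import path
        params={"alpha": 0.85},
        groups=["centrality"],
    )
    nx_func = algo.get_callable("networkx")  # resolve for a given backend
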
class nxbench.benchmarking.config.BenchmarkConfig(algorithms, datasets, machine_info=<factory>, output_dir=<factory>, env_data=<factory>)[source]

Bases: object

Complete benchmark suite configuration.

__init__(algorithms, datasets, machine_info=<factory>, output_dir=<factory>, env_data=<factory>)

Return type:

None

algorithms: list[AlgorithmConfig]
datasets: list[DatasetConfig]
env_data: dict[str, Any]
classmethod from_yaml(path)[source]

Load configuration from YAML file.

Return type:

BenchmarkConfig

Parameters:

path (str | Path)

machine_info: dict[str, Any]
output_dir: Path
to_yaml(path)[source]

Save configuration to YAML file.

Return type:

None

Parameters:

path (str | Path)
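
For example, round-tripping a configuration through YAML (file names are illustrative):

    from nxbench.benchmarking.config import BenchmarkConfig

    cfg = BenchmarkConfig.from_yaml("nxbench_config.yaml")
    cfg.to_yaml("nxbench_config_copy.yaml")  # write it back out unchanged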

class nxbench.benchmarking.config.BenchmarkMetrics(execution_time, memory_used)[source]

Bases: object

Container for benchmark metrics.

__init__(execution_time, memory_used)

Return type:

None

execution_time: float
memory_used: float

class nxbench.benchmarking.config.BenchmarkResult(algorithm, dataset, execution_time, execution_time_with_preloading, memory_used, num_nodes, num_edges, is_directed, is_weighted, backend, num_thread, date, metadata, validation='unknown', validation_message='', error=None)[source]

Bases: object

Container for benchmark execution results.

__init__(algorithm, dataset, execution_time, execution_time_with_preloading, memory_used, num_nodes, num_edges, is_directed, is_weighted, backend, num_thread, date, metadata, validation='unknown', validation_message='', error=None)

Return type:

None

algorithm: str
backend: str
dataset: str
date: int
error: str | None = None
execution_time: float
execution_time_with_preloading: float
is_directed: bool
is_weighted: bool
memory_used: float
metadata: dict[str, Any]
num_edges: int
num_nodes: int
num_thread: int
validation: str = 'unknown'
validation_message: str = ''

class nxbench.benchmarking.config.DatasetConfig(name, source, params=<factory>, metadata=None)[source]

Bases: object

Configuration for a dataset to load and benchmark.

__init__(name, source, params=<factory>, metadata=None)

Return type:

None

metadata: dict[str, Any] | None = None
name: str
params: dict[str, Any]
source: str
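
A construction sketch with illustrative values (valid source identifiers are not listed in this reference):

    from nxbench.benchmarking.config import DatasetConfig

    ds = DatasetConfig(
        name="karate",
        source="networkx",                  # assumed source identifier
        params={},
        metadata={"category": "social"},    # illustrative metadata
    )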

nxbench.benchmarking.export module

class nxbench.benchmarking.export.ResultsExporter(results_file)[source]

Bases: object

Handle loading, processing, and exporting of benchmark results.

Parameters:

results_file (Path)

__init__(results_file)[source]

Initialize the results exporter.

Parameters:

results_file (Path) – Path to the benchmark results file (JSON or CSV)

export_results(output_path, form='csv', if_exists='replace')[source]

Export benchmark results in the specified format (csv, sql, or json).

Return type:

None

Parameters:
  • output_path

  • form

  • if_exists

load_results()[source]

Load benchmark results from the workflow outputs (JSON or CSV), integrating all known fields into BenchmarkResult and treating unknown fields as metadata.

Return type:

list[BenchmarkResult]

query_results(algorithm=None, backend=None, dataset=None, date_range=None)[source]

Query benchmark results with optional filtering.

Return type:

DataFrame

Parameters:
  • algorithm (str | None)

  • backend (str | None)

  • dataset (str | None)

  • date_range (tuple[str, str] | None)

to_dataframe()[source]

Convert the loaded benchmark results into a pandas DataFrame.

Return type:

DataFrame
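
A typical workflow, with illustrative file paths:

    from pathlib import Path

    from nxbench.benchmarking.export import ResultsExporter

    exporter = ResultsExporter(Path("results/results.json"))   # example path
    df = exporter.query_results(algorithm="pagerank")           # filtered DataFrame
    exporter.export_results(Path("results/summary.csv"), form="csv")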

nxbench.benchmarking.utils module

class nxbench.benchmarking.utils.MemorySnapshot(snapshot=None)[source]

Bases: object

Class to store and diff memory snapshots.

__init__(snapshot=None)[source]

Initialize with optional tracemalloc snapshot.

compare_to(other)[source]

Compare this snapshot to another and return (current, peak) memory diff in bytes.

Return type:

tuple[int, int]

Parameters:

other (MemorySnapshot)

take()[source]

Take a new snapshot.
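
A usage sketch; whether take() requires tracemalloc tracing to already be active, and the comparison direction (after vs. before), are assumptions:

    import tracemalloc

    from nxbench.benchmarking.utils import MemorySnapshot

    tracemalloc.start()           # assumption: take() expects tracing to be active

    before = MemorySnapshot()
    before.take()
    data = [0] * 1_000_000        # allocation to measure
    after = MemorySnapshot()
    after.take()
    current, peak = after.compare_to(before)
    print(f"current diff: {current} bytes, peak diff: {peak} bytes")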

nxbench.benchmarking.utils.add_seeding(kwargs, algo_func, algorithm_name)[source]

Return type:

dict

Parameters:
  • kwargs (dict)

  • algo_func (Any)

  • algorithm_name (str)

nxbench.benchmarking.utils.configure_benchmarks(config)[source]

Parameters:

config (BenchmarkConfig | str)

nxbench.benchmarking.utils.get_available_algorithms()[source]

Collect available algorithms from selected NetworkX submodules and from custom algorithms.

Returns:

Dictionary of available algorithms.

Return type:

dict[str, Callable]
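
For example (the specific key names in the returned mapping are assumptions):

    from nxbench.benchmarking.utils import get_available_algorithms

    algos = get_available_algorithms()   # dict[str, Callable]
    print(len(algos), "algorithms discovered")
    pagerank = algos.get("pagerank")     # key name is an assumption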

nxbench.benchmarking.utils.get_benchmark_config()[source]

Return type:

BenchmarkConfig

nxbench.benchmarking.utils.get_machine_info()[source]

nxbench.benchmarking.utils.get_python_version()[source]

Get formatted Python version string.

Return type:

str

nxbench.benchmarking.utils.list_available_backends()[source]

Return a dict of all registered backends that are installed, mapped to their version strings.

Return type:

dict[str, str]
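
For example:

    from nxbench.benchmarking.utils import list_available_backends

    backends = list_available_backends()   # e.g. {"networkx": "3.4", ...} (illustrative)
    for name, version in backends.items():
        print(f"{name}: {version}")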

nxbench.benchmarking.utils.load_default_config()[source]

Return type:

BenchmarkConfig

nxbench.benchmarking.utils.memory_tracker()[source]

Track memory usage of a code block.

Returns a dict with ‘current’ and ‘peak’ memory usage in bytes. Memory usage is measured as the difference between before and after execution.
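
A sketch assuming memory_tracker is used as a context manager whose yielded dict is populated once the block exits; that pattern is inferred from the description above, not confirmed:

    from nxbench.benchmarking.utils import memory_tracker

    # Assumption: context-manager usage with an in-place-updated stats dict.
    with memory_tracker() as mem:
        data = [0] * 1_000_000
    print(mem["current"], mem["peak"])  # bytes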

nxbench.benchmarking.utils.process_algorithm_params(params)[source]

Process and separate algorithm parameters into positional and keyword arguments.

  1. Keys starting with “_” go into pos_args (list).

  2. Other keys become kwargs (dict).

  3. If a param is a string that looks like a float or int, parse it.

  4. If a param is a dict containing {“func”: “…”}, dynamically load that function.

Parameters:

params (dict[str, Any])

Return type:

tuple[list[Any], dict[str, Any]]
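
Illustrating the four rules above (the dotted-path convention for “func” is an assumption):

    from nxbench.benchmarking.utils import process_algorithm_params

    params = {
        "_graph": "dummy",                    # leading "_": goes to pos_args
        "alpha": "0.85",                      # numeric-looking string: parsed
        "weight_fn": {"func": "math.sqrt"},   # dict with "func": loaded dynamically
    }
    pos_args, kwargs = process_algorithm_params(params)
    # Per the rules: pos_args == ["dummy"], kwargs["alpha"] == 0.85,
    # and kwargs["weight_fn"] is the imported math.sqrt callable.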

Module contents