nxbench.benchmarking package¶
Submodules¶
nxbench.benchmarking.benchmark module¶
- nxbench.benchmarking.benchmark.benchmark_suite(algorithms, datasets, backends, threads, graphs)[source]¶
Run the full suite of benchmarks in parallel using asyncio.
- nxbench.benchmarking.benchmark.collect_metrics(execution_time, execution_time_with_preloading, peak_memory, graph, algo_config, backend, dataset_name, num_thread, validation_status, validation_message, error=None)[source]¶
- nxbench.benchmarking.benchmark.configure_backend(original_graph, backend, num_thread)[source]¶
Convert a NetworkX graph for the specified backend.
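Example (a minimal sketch; the karate-club graph and the "networkx" backend name are illustrative assumptions, not values prescribed by this reference):
import networkx as nx
from nxbench.benchmarking.benchmark import configure_backend

# Convert a plain NetworkX graph into the form the chosen backend expects.
g = nx.karate_club_graph()
backend_graph = configure_backend(original_graph=g, backend="networkx", num_thread=1)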
- async nxbench.benchmarking.benchmark.main_benchmark(results_dir=PosixPath('results'))[source]¶
Execute benchmarks using Prefect.
- Parameters:
results_dir (Path)
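Example (a minimal sketch; main_benchmark is a coroutine, so it must be driven by an event loop):
import asyncio
from pathlib import Path

from nxbench.benchmarking.benchmark import main_benchmark

# Run the full Prefect-backed benchmark workflow, writing to ./results by default.
asyncio.run(main_benchmark(results_dir=Path("results")))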
- nxbench.benchmarking.benchmark.run_algorithm(graph, algo_config, num_thread, backend)[source]¶
Run the algorithm on the configured backend.
- async nxbench.benchmarking.benchmark.run_single_benchmark(backend, num_thread, algo_config, dataset_config, original_graph)[source]¶
- Return type:
- Parameters:
backend (str)
num_thread (int)
algo_config (AlgorithmConfig)
dataset_config (DatasetConfig)
original_graph (Graph)
- nxbench.benchmarking.benchmark.setup_cache(datasets)[source]¶
Load and cache datasets to avoid redundant loading.
nxbench.benchmarking.config module¶
Benchmark configuration handling.
- class nxbench.benchmarking.config.AlgorithmConfig(name, func, params=<factory>, requires_directed=False, requires_undirected=False, requires_weighted=False, validate_result=None, groups=<factory>)[source]¶
Bases:
object
Configuration for a graph algorithm to benchmark.
- Parameters:
- __init__(name, func, params=<factory>, requires_directed=False, requires_undirected=False, requires_weighted=False, validate_result=None, groups=<factory>)¶
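Example (a minimal sketch; the algorithm name, the dotted path passed as func, and the params values are illustrative assumptions):
from nxbench.benchmarking.config import AlgorithmConfig

# Describe one algorithm to benchmark; params are assumed to be forwarded to the call.
algo = AlgorithmConfig(
    name="pagerank",
    func="networkx.pagerank",  # assumption: func accepts a dotted import path
    params={"alpha": 0.85},
)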
- class nxbench.benchmarking.config.BenchmarkConfig(algorithms, datasets, machine_info=<factory>, output_dir=<factory>, env_data=<factory>)[source]¶
Bases:
object
Complete benchmark suite configuration.
- Parameters:
- __init__(algorithms, datasets, machine_info=<factory>, output_dir=<factory>, env_data=<factory>)¶
- algorithms: list[AlgorithmConfig]¶
- datasets: list[DatasetConfig]¶
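Example (a minimal sketch; DatasetConfig is referenced but not documented in this section, so its import location and its fields here are assumptions):
from nxbench.benchmarking.config import AlgorithmConfig, BenchmarkConfig, DatasetConfig

algo = AlgorithmConfig(name="pagerank", func="networkx.pagerank")
ds = DatasetConfig(name="karate", source="networkx")  # hypothetical fields

config = BenchmarkConfig(algorithms=[algo], datasets=[ds])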
- class nxbench.benchmarking.config.BenchmarkMetrics(execution_time, memory_used)[source]¶
Bases:
object
Container for benchmark metrics.
- __init__(execution_time, memory_used)¶
- class nxbench.benchmarking.config.BenchmarkResult(algorithm, dataset, execution_time, execution_time_with_preloading, memory_used, num_nodes, num_edges, is_directed, is_weighted, backend, num_thread, date, metadata, validation='unknown', validation_message='', error=None)[source]¶
Bases:
object
Container for benchmark execution results.
- Parameters:
- __init__(algorithm, dataset, execution_time, execution_time_with_preloading, memory_used, num_nodes, num_edges, is_directed, is_weighted, backend, num_thread, date, metadata, validation='unknown', validation_message='', error=None)¶
- Parameters:
- Return type:
None
nxbench.benchmarking.export module¶
- class nxbench.benchmarking.export.ResultsExporter(results_file)[source]¶
Bases:
object
Handle loading, processing, and exporting of benchmark results.
- Parameters:
results_file (Path)
- __init__(results_file)[source]¶
Initialize the results exporter.
- Parameters:
results_file (Path) – Path to the benchmark results file (JSON or CSV)
- export_results(output_path, form='csv', if_exists='replace')[source]¶
Export benchmark results in the specified format (csv, sql, or json).
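Example (a minimal sketch; the file paths are illustrative assumptions):
from pathlib import Path

from nxbench.benchmarking.export import ResultsExporter

exporter = ResultsExporter(results_file=Path("results/results.json"))
# form may be "csv", "sql", or "json", per export_results above.
exporter.export_results(output_path=Path("results/results.csv"), form="csv")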
- load_results()[source]¶
Load benchmark results from the workflow outputs (JSON or CSV), integrating all known fields into BenchmarkResult and treating unknown fields as metadata.
- Return type:
nxbench.benchmarking.utils module¶
- class nxbench.benchmarking.utils.MemorySnapshot(snapshot=None)[source]¶
Bases:
object
Class to store and diff memory snapshots.
- compare_to(other)[source]¶
Compare this snapshot to another and return (current, peak) memory diff in bytes.
- Return type:
tuple[int, int]
- Parameters:
other (MemorySnapshot)
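Example (a sketch assuming the snapshot argument wraps a tracemalloc snapshot; the class docs above do not pin down the snapshot type, so treat this as an assumption):
import tracemalloc

from nxbench.benchmarking.utils import MemorySnapshot

tracemalloc.start()
before = MemorySnapshot(tracemalloc.take_snapshot())  # assumed snapshot type
payload = [0] * 1_000_000  # allocate something measurable
after = MemorySnapshot(tracemalloc.take_snapshot())
current, peak = after.compare_to(before)  # (current, peak) diff in bytes
tracemalloc.stop()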
- nxbench.benchmarking.utils.configure_benchmarks(config)[source]¶
- Parameters:
config (BenchmarkConfig | str)
- nxbench.benchmarking.utils.get_available_algorithms()[source]¶
Get algorithms from specified NetworkX submodules and custom algorithms.
- Returns:
Dictionary of available algorithms.
- Return type:
Dict[str, Callable]
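Example (a minimal sketch of the documented Dict[str, Callable] return value):
from nxbench.benchmarking.utils import get_available_algorithms

algos = get_available_algorithms()  # algorithm name -> callable
print(sorted(algos)[:5])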
- nxbench.benchmarking.utils.get_python_version()[source]¶
Get formatted Python version string.
- Return type:
str
- nxbench.benchmarking.utils.list_available_backends()[source]¶
Return a dict of all registered backends that are installed, mapped to their version strings.
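Example (a minimal sketch):
from nxbench.benchmarking.utils import list_available_backends

for backend, version in list_available_backends().items():
    print(f"{backend}: {version}")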
- nxbench.benchmarking.utils.memory_tracker()[source]¶
Track the memory usage of a code block.
Returns a dict with ‘current’ and ‘peak’ memory usage in bytes. Memory usage is measured as the difference between before and after execution.
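Example (a sketch assuming memory_tracker is used as a context manager that yields the documented dict; the exact protocol is an assumption):
from nxbench.benchmarking.utils import memory_tracker

with memory_tracker() as mem:  # assumption: context-manager usage
    data = [i * i for i in range(1_000_000)]
print(mem["current"], mem["peak"])  # bytes, per the description above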
- nxbench.benchmarking.utils.process_algorithm_params(params)[source]¶
Process and separate algorithm parameters into positional and keyword arguments.
- Return type:
tuple[list[Any], dict[str, Any]]
1. Keys starting with “_” go into pos_args (list).
2. Other keys become kwargs (dict).
3. If a param is a string that looks like a float or int, parse it.
4. If a param is a dict containing {“func”: “…”}, then dynamically load that function.
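Example (a minimal sketch derived from the four rules above; the key names and values are illustrative):
from nxbench.benchmarking.utils import process_algorithm_params

pos_args, kwargs = process_algorithm_params(
    {"_pos": "node_a", "alpha": "0.85", "max_iter": "100"}
)
# Expected per the rules above:
#   pos_args == ["node_a"]
#   kwargs == {"alpha": 0.85, "max_iter": 100}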