Usage¶
Quick Start¶
Configure Your Benchmarks: Create a YAML configuration file (e.g., configs/example.yaml):

algorithms:
  - name: "pagerank"
    func: "networkx.pagerank"
    params:
      alpha: 0.85
    groups: ["centrality"]

datasets:
  - name: "karate"
    source: "networkrepository"
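The same file can also pin backend versions via the environ section described under “Version Pinning” below. A small illustrative fragment (the pinned version here is purely an example):

environ:
  backend:
    networkx:
      - "networkx==3.4.2"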
Start an instance of a Prefect (Orion) server in a separate terminal window:
export PREFECT_API_URL="http://127.0.0.1:4200/api"
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:pass@localhost:5432/prefect_db"
prefect server start
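Before kicking off a run, you can optionally confirm that the server is reachable. Prefect 2.x exposes a health endpoint under the API root (adjust the URL if your Prefect version differs); a minimal check in Python:

import urllib.request

# Assumes the Prefect server from the previous step is running locally.
with urllib.request.urlopen("http://127.0.0.1:4200/api/health") as resp:
    print(resp.status, resp.read().decode())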
Run Benchmarks Based on the Configuration:
nxbench --config 'nxbench/configs/example.yaml' benchmark run
Export Results:
nxbench --config 'nxbench/configs/example.yaml' benchmark export 'results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format csv --output-file 'results/results.csv'  # convert results from the run with hash 9e3e8baa4a3443c392dc8fee00373b11_20241220002902 into CSV format
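The exported CSV can be inspected with any tabular tool. The exact columns depend on the nxbench version, so this pandas sketch simply peeks at what the export contains:

import pandas as pd

# Path from the export step above.
df = pd.read_csv("results/results.csv")
print(df.columns.tolist())  # see which fields the export contains
print(df.head())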
View Results:
nxbench viz serve # launch the interactive results visualization dashboard.
Advanced Command-Line Interface¶
The CLI provides comprehensive management of benchmarks, datasets, and visualization.
Data Management¶
Download a Specific Dataset:
nxbench data download karate
List Available Datasets by Category:
nxbench data list --category social
Benchmarking¶
Run Benchmarks with Verbose Output:
nxbench --config 'nxbench/configs/example.yaml' -vvv benchmark run
Export Results to a SQL Database:
nxbench --config 'nxbench/configs/example.yaml' benchmark export 'results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format sql --output-file 'results/benchmarks.sqlite'
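The SQLite export can likewise be queried directly. Since the exporter's table layout isn't documented here, this sketch discovers the tables first rather than assuming their names:

import sqlite3

conn = sqlite3.connect("results/benchmarks.sqlite")
# List all tables the exporter created before querying them.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)
conn.close()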
Visualization¶
Launch the Dashboard:
nxbench viz serve
Reproducible Benchmarking Through Containerization¶
Running Benchmarks with GPU Support¶
docker-compose up nxbench
Running Benchmarks on CPU Only¶
NUM_GPU=0 docker-compose up nxbench
Starting the Visualization Dashboard¶
docker-compose up dashboard
Running Benchmarks with a Specific Backend¶
docker-compose -f docker/docker-compose.cpu.yaml run --rm nxbench --config 'nxbench/configs/example.yaml' benchmark run --backend networkx
Exporting Results from a Run with Hash 9e3e8baa4a3443c392dc8fee00373b11_20241220002902¶
docker-compose -f docker/docker-compose.cpu.yaml run --rm nxbench --config 'nxbench/configs/example.yaml' benchmark export 'nxbench_results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format csv --output-file 'nxbench_results/results.csv'
Adding a New Backend¶
Note: The following guide assumes you have a recent version of NxBench with the new BackendManager and associated tools (e.g., core.py and registry.py) already in place. It also assumes that your backend follows the guidelines for developing custom NetworkX backends.
1. Verify Your Backend is Installable¶
Install your backend via pip (or conda, etc.). For example, if your backend library is my_cool_backend, ensure that the following succeeds:

pip install my_cool_backend
Check the import: NxBench’s detection system simply looks for importlib.util.find_spec("my_cool_backend"), so if your library is not found by Python, NxBench will conclude it is unavailable.
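You can reproduce NxBench’s availability check yourself in a Python shell:

import importlib.util

# Mirrors nxbench's detection: find_spec returns None when the module
# cannot be located on the current Python path.
if importlib.util.find_spec("my_cool_backend") is None:
    print("my_cool_backend is NOT importable; NxBench will treat it as unavailable")
else:
    print("my_cool_backend is importable")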
2. Write a Conversion Function¶
In NxBench, a “backend” is simply a library or extension that converts a networkx.Graph
into an alternate representation. You must define one or more conversion functions:
import networkx

def convert_my_cool_backend(nx_graph: networkx.Graph, num_threads: int):
    import my_cool_backend
    # Possibly configure multi-threading if relevant:
    # my_cool_backend.configure_threads(num_threads)
    # Convert the NetworkX graph to your library's internal representation:
    return my_cool_backend.from_networkx(nx_graph)
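To make this concrete with libraries that actually exist, here is a runnable analogue in which the “internal representation” is a SciPy sparse matrix; nx.to_scipy_sparse_array stands in for your library’s converter, and everything else follows the same shape as the sketch above:

import networkx as nx

def convert_sparse_backend(nx_graph: nx.Graph, num_threads: int):
    # num_threads is unused here; a real backend might use it to
    # configure parallelism before converting.
    return nx.to_scipy_sparse_array(nx_graph)

# Quick sanity check on a small built-in graph:
adj = convert_sparse_backend(nx.karate_club_graph(), num_threads=1)
print(type(adj), adj.shape)  # a 34x34 adjacency matrix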
3. (Optional) Write a Teardown Function¶
If your backend has special cleanup needs (e.g., free GPU memory, close connections, revert global state, etc.), define a teardown function:
def teardown_my_cool_backend():
    # e.g.:
    # import my_cool_backend
    # my_cool_backend.shutdown()
    pass
If your backend doesn’t need cleanup, skip this or simply define an empty function.
4. Register with NxBench¶
Locate NxBench’s registry.py (or a similar file where other backends are registered) and add your call to backend_manager.register_backend(...):
from nxbench.backends.registry import backend_manager
import networkx as nx  # only if needed

def convert_my_cool_backend(nx_graph: nx.Graph, num_threads: int):
    import my_cool_backend
    # Possibly configure my_cool_backend with num_threads
    return my_cool_backend.from_networkx(nx_graph)

def teardown_my_cool_backend():
    # e.g. release resources
    pass

backend_manager.register_backend(
    name="my_cool_backend",         # the name NxBench will use to refer to it
    import_name="my_cool_backend",  # the importable Python module name
    conversion_func=convert_my_cool_backend,
    teardown_func=teardown_my_cool_backend,  # optional
)
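Note that both functions import my_cool_backend lazily, inside the function body rather than at module top level. This keeps registry.py importable even on machines where the backend isn’t installed; the import only needs to succeed if the backend is actually selected for a run.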
Important: name is the “human-readable” alias in NxBench; import_name is the actual module import path. They can be the same (most common) or different if your library’s PyPI name differs from its Python import path.
5. Confirm It Works¶
Check NxBench logs: when NxBench runs, it will detect whether "my_cool_backend" is installed by calling importlib.util.find_spec("my_cool_backend").

Run a quick benchmark:

nxbench --config my_config.yaml benchmark run
If you see logs like “Chosen backends: [‘my_cool_backend’ …]” then NxBench recognized your backend. If it fails with “No valid backends found,” ensure your library is installed and spelled correctly.
6. (Optional) Version Pinning¶
If you want NxBench to run your backend only when it matches a pinned version (e.g., my_cool_backend==2.1.0), add something like this to your NxBench config YAML:
environ:
  backend:
    my_cool_backend:
      - "my_cool_backend==2.1.0"
NxBench will:

Detect the installed version automatically (via my_cool_backend.__version__ or PyPI metadata)
Skip running the backend if the installed version doesn’t match 2.1.0.
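If you want to sanity-check a pin locally before a run, here is a minimal sketch using only the standard library (the package name is illustrative; substitute your backend's):

from importlib import metadata

def version_matches(package: str, pinned: str) -> bool:
    """Return True if the installed version of `package` equals `pinned`."""
    try:
        return metadata.version(package) == pinned
    except metadata.PackageNotFoundError:
        return False

print(version_matches("my_cool_backend", "2.1.0"))  # False unless installed at that version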
That’s it¶
You’ve successfully added a new backend to NxBench! Now, NxBench can detect it, convert graphs for it, optionally tear it down, and track its version during benchmarking.