Running Performance Benchmark Tests

You can use forge/test/benchmark/benchmark.py to run performance benchmark tests:

python forge/test/benchmark/benchmark.py [options]

Available Options:

| Option | Short | Type | Default | Description |
|--------|-------|------|---------|-------------|
| --model | -m | string | required | Model to benchmark (e.g. bert, mnist_linear); the test file name without the .py extension |
| --config | -c | string | None | Model configuration to benchmark (e.g. tiny, base, large) |
| --training | -t | flag | False | Benchmark training mode |
| --batch_size | -bs | integer | 1 | Batch size: number of samples to process at once |
| --loop_count | -lp | integer | 1 | Number of times to run the benchmark |
| --input_size | -isz | integer | None | Input size of the input sample (if the model supports variable input size) |
| --hidden_size | -hs | integer | None | Hidden layer size (if the model supports variable hidden size) |
| --output | -o | string | None | Output JSON file to write results to; results are appended if the file exists |
| --task | -ts | string | "na" | Task to benchmark (e.g. classification, segmentation) |
| --data_format | -df | string | "float32" | Data format (e.g. float32, bfloat16) |
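Note that --output appends to an existing file rather than overwriting it, so repeated runs can accumulate results in one place. The actual result schema produced by benchmark.py is not documented here; the sketch below only illustrates that append pattern, using a hypothetical record layout and assuming the file holds a JSON array of run records:

```python
import json
import os

def append_result(path, record):
    """Append one benchmark record to a JSON file, creating the file if absent."""
    results = []
    if os.path.exists(path):
        # Load the existing array of records so the new one is appended, not overwritten.
        with open(path) as f:
            results = json.load(f)
    results.append(record)
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

# Hypothetical records; the real benchmark.py defines its own fields.
append_result("results.json", {"model": "mobilenetv2_basic", "batch_size": 8})
append_result("results.json", {"model": "mobilenetv2_basic", "batch_size": 32})
# results.json now holds both records in a single JSON array.
```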

Example:

python forge/test/benchmark/benchmark.py -m mobilenetv2_basic -ts classification -bs 8 -df bfloat16 -lp 32 -o forge-fe-benchmark-e2e-mobilenetv2_basic.json

Alternatively, you can run specific model tests using pytest:

pytest [model_path]

Example:

pytest -svv forge/test/benchmark/benchmark/models/yolo_v8.py