
voxl-benchmark-vio

Utility for benchmarking VIO performance on VOXL.

voxl-benchmark-vio orchestrates repeatable, automated replay-based benchmarking of voxl-open-vins-server (or voxl-open-sqrtvins2-server) over one or more recorded logs. For each entry in an input definition JSON it stages the appropriate camera/VIO config, spawns the VIO server, an analyzer that listens to its outputs, and voxl-replay to feed it the log data, then collects per-run metrics (pose, CPU usage, etc.) into a combined summary JSON.

Table of contents

  1. What gets installed
  2. How a benchmark run works
  3. Expected directory layout
    1. Logs
    2. Results
  4. Input JSON structure
    1. Field reference
  5. Usage
    1. Example
    2. Comparing two runs
  6. Notes / gotchas

What gets installed

The Debian/IPK package installs the following binaries into /usr/bin/:

| Binary | Purpose |
| --- | --- |
| voxl-benchmark-vio | Main orchestrator — drives replay + VIO + analyzer for a list of logs. |
| voxl-vio-analyzer | Subscribes to VIO server output pipes and writes per-log metrics JSON. |
| voxl-cpu-analyzer | CPU sampling helper (used internally / standalone). |
| voxl-benchmark-cmp-res | Diffs two benchmark summary JSONs key-by-key. |

A legacy Python wrapper (scripts/voxl-benchmark-vio) also exists for an older flow that downloads logs from URLs and shells out to voxl-feature-tracker / voxl-feature-analyzer. It is not the primary entry point — the C++ voxl-benchmark-vio binary is what should be used for VIO benchmarking, and the rest of this page describes that binary.


How a benchmark run works

For each test entry in the input JSON, voxl-benchmark-vio:

  1. Backs up the device’s current /etc/modalai/*.conf and /data/modalai/*.yml into a temp folder under /data/benchmark-logs/ so they can be restored at the end (or on SIGINT / SIGSEGV).
  2. Stages config from the log. Copies <log>/etc/modalai/*.conf into /etc/modalai/ and <log>/data/modalai/*.yml into /data/modalai/ so the VIO server runs with the calibration/config that was active when the log was recorded.
  3. Optionally overrides the open-vins config with the file passed via -c/--config.
  4. Patches voxl-open-vins-server.conf to force en_ext_feature_tracker=false and to set en_gpu_for_tracking per the test entry’s en_gpu flag.
  5. If en_cam_as_ion_buf is set, appends _ion to every pipe_for_tracking entry in /etc/modalai/vio_cams.conf so the VIO server consumes ion-buffer pipes.
  6. Drops the page cache (echo 3 > /proc/sys/vm/drop_caches) to keep per-run timing comparable.
  7. Forks three processes, in order:
    • voxl-vio-analyzer -o <output_dir>/<id>[_gpu]_vio.json — collects VIO metrics into per-log JSON.
    • voxl-open-vins-server (or voxl-open-sqrtvins2-server if use_sqrt_vins is true).
    • voxl-replay -p <log_path> -y [-b] [-i <pipe>...] — replays the log. -b is added when en_cam_as_ion_buf is true, and one -i <pipe> is added per entry in pipe_includes.
  8. Tracks CPU usage of the VIO server in a background thread for the duration of the replay.
  9. Waits for voxl-replay to exit, then SIGTERMs the VIO server and analyzer. Median and total CPU usage are appended into the per-log JSON under ovins-median-cpu-pct and ovins-total-cpu-pct.
  10. After all tests in all Monte Carlo runs are finished, all per-log JSONs in the output folder are merged into a single summary.json.
  11. Original configs are restored, temp folders removed.
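
As a concrete illustration of step 7, the voxl-replay command line can be assembled from a test entry roughly like this (the helper name `build_replay_cmd` is hypothetical; the flags follow the description above):

```python
def build_replay_cmd(log_path, en_cam_as_ion_buf=False, pipe_includes=""):
    """Assemble the voxl-replay argv for one test entry (illustrative sketch)."""
    cmd = ["voxl-replay", "-p", log_path, "-y"]
    if en_cam_as_ion_buf:
        cmd.append("-b")          # replay camera frames as ion buffers
    for pipe in filter(None, pipe_includes.split(",")):
        cmd += ["-i", pipe]       # one -i flag per included pipe
    return cmd
```

For example, an entry with `pipe_includes` of `tracking,imu_apps` yields `voxl-replay -p <log_path> -y -i tracking -i imu_apps`.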

Expected directory layout

Logs

Logs live (by default) under /data/benchmark-logs/. Each log is a directory named after its id and must contain at minimum the things voxl-replay expects (an info.json and the recorded pipe data) plus the original device config that produced it:

/data/benchmark-logs/
└── <log-id>/
    ├── info.json
    ├── run/mpa/<pipe-name>/...        # recorded pipe data consumed by voxl-replay
    │   ...
    ├── etc/modalai/
    │   ├── voxl-open-vins-server.conf
    │   ├── vio_cams.conf
    │   └── ...                         # any other *.conf the log needs
    └── data/modalai/
        └── *.yml                       # camera calibration etc.

If a test entry’s log_folder is set, logs are looked up at <log_folder>/<id>/ instead of /data/benchmark-logs/<id>/.
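
The lookup rule can be sketched as follows (the function name is illustrative, not part of the tool):

```python
DEFAULT_LOG_ROOT = "/data/benchmark-logs/"

def resolve_log_path(entry):
    """Return the directory a test entry's log is expected in.

    Uses entry['log_folder'] when set (the orchestrator expects it to end
    with '/'), otherwise the default /data/benchmark-logs/.
    """
    root = entry.get("log_folder") or DEFAULT_LOG_ROOT
    return root + entry["id"] + "/"
```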

Results

Per-log analyzer output and the merged summary are written under /data/benchmark-results/<output-name>/:

/data/benchmark-results/<output-name>/
├── <log-id>_vio.json          # one per log (or <id>_gpu_vio.json if en_gpu)
├── <other-log-id>_vio.json
└── summary.json               # merged across all logs

The output directory is wiped at the start of a run if it already exists.
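
The per-log result name follows a simple rule; a minimal sketch (helper name hypothetical):

```python
def result_filename(log_id, en_gpu=False):
    """Per-log analyzer output name: <id>_vio.json, or <id>_gpu_vio.json
    when the test entry enables GPU tracking."""
    return f"{log_id}_gpu_vio.json" if en_gpu else f"{log_id}_vio.json"
```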


Input JSON structure

The input JSON passed via -p/--path must contain a top-level replay_defs array. Each element describes one log/test:

{
  "replay_defs": [
    {
      "id":                "indoor-flight-01",
      "log_folder":        "/data/benchmark-logs/",
      "url":               "https://storage.googleapis.com/.../indoor-flight-01.tar.gz",
      "pipe_includes":     "tracking,stereo_front_l,stereo_front_r,imu_apps",
      "en_ext_tracker":    false,
      "en_gpu":            false,
      "en_cam_as_ion_buf": false,
      "use_sqrt_vins":     false
    },
    {
      "id":                "outdoor-loop-02",
      "pipe_includes":     "tracking,imu_apps",
      "en_gpu":            true,
      "use_sqrt_vins":     true
    }
  ]
}

Field reference

| Field | Type | Required | Meaning |
| --- | --- | --- | --- |
| id | string | yes | Folder name of the log under <log_folder> (or /data/benchmark-logs/). Also used to name the per-log result JSON. |
| log_folder | string | no | Parent directory of the log. Defaults to /data/benchmark-logs/. Must end with /. |
| url | string | no | Informational / used by the legacy Python wrapper to fetch logs. The C++ binary does not download. |
| pipe_includes | string | no | Comma-separated list of pipe names to pass to voxl-replay as -i <pipe> (one flag per pipe). Restricts which recorded pipes are replayed. |
| en_ext_tracker | bool | no | Forces external feature tracker on/off in voxl-open-vins-server.conf. The orchestrator currently always writes this as false. |
| en_gpu | bool | no | Sets en_gpu_for_tracking in voxl-open-vins-server.conf. When true, the per-log result is named <id>_gpu_vio.json. |
| en_cam_as_ion_buf | bool | no | If true, passes -b to voxl-replay and appends _ion to every pipe_for_tracking in vio_cams.conf. |
| use_sqrt_vins | bool | no | If true, runs voxl-open-sqrtvins2-server instead of voxl-open-vins-server. |

All bool/string fields are optional and default to false / empty.
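
A sketch of how the optional fields could be normalized when parsing replay_defs (all field names come from the reference above; the loader function itself is a hypothetical illustration, not the tool's actual parser):

```python
import json

BOOL_FIELDS = ("en_ext_tracker", "en_gpu", "en_cam_as_ion_buf", "use_sqrt_vins")
STR_FIELDS = ("log_folder", "url", "pipe_includes")

def load_replay_defs(path):
    """Load the input JSON and apply the documented defaults
    (missing bools -> False, missing strings -> "")."""
    with open(path) as f:
        defs = json.load(f)["replay_defs"]
    for entry in defs:
        if "id" not in entry:
            raise ValueError("each replay_defs entry needs an 'id'")
        for key in BOOL_FIELDS:
            entry.setdefault(key, False)
        for key in STR_FIELDS:
            entry.setdefault(key, "")
    return defs
```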


Usage

Run on the VOXL target as root: writing config files under /etc/modalai/ and dropping the page cache both require it.

Important: before running a replay, stop voxl-camera-server and voxl-imu-server. The replay supplies its own recorded camera and IMU data over the same MPA pipes, and leaving the live servers running will cause pipe conflicts and contaminate the inputs the VIO server sees.

systemctl stop voxl-camera-server
systemctl stop voxl-imu-server
voxl-benchmark-vio -p <input.json> -o <output-name> [options]
| Flag | Description |
| --- | --- |
| -p, --path <path> | Required. Path to the input definition JSON. |
| -o, --output <name> | Required. Folder name created under /data/benchmark-results/. |
| -i, --index <n> | Run only the test at this index in replay_defs. May be passed multiple times. |
| -c, --config <path> | Override the open-vins config with this file (copied into /etc/modalai/). |
| -m, --mc_runs <n> | Number of Monte Carlo repetitions over the full test list. Default 1. |
| -v, --verbose | Show stdout/stderr from the spawned child processes; otherwise they are silenced to /dev/null. |
| -h, --help | Print usage and exit. |

Example

voxl-benchmark-vio \
  -p /data/benchmark-logs/my-suite.json \
  -o my-suite-2025-10-28 \
  -m 3 \
  -v

This will run all tests in my-suite.json three times each, write per-log results into /data/benchmark-results/my-suite-2025-10-28/, and produce summary.json at the end.

Comparing two runs

voxl-benchmark-cmp-res \
  /data/benchmark-results/run-A/summary.json \
  /data/benchmark-results/run-B/summary.json

Reports the per-key numerical differences between two summary JSONs.
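
Conceptually the comparison is a key-by-key numeric diff of the two summaries. A simplified sketch, assuming flat {key: number} dictionaries (the real tool works on actual summary.json structures):

```python
import json

def diff_summaries(path_a, path_b):
    """Report value differences (B - A) for numeric keys present in both
    summaries. Simplified: assumes flat {key: number} dictionaries."""
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)
    return {k: b[k] - a[k]
            for k in a.keys() & b.keys()
            if isinstance(a[k], (int, float)) and isinstance(b[k], (int, float))}
```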


Notes / gotchas

  • The orchestrator modifies device config in place (/etc/modalai/*.conf, /data/modalai/*.yml). It backs the originals up to /data/benchmark-logs/{etc,data}_config_temp/ and restores them on clean exit, SIGINT, or SIGSEGV. If the process is killed harder (e.g. SIGKILL), manually restore from those temp folders.
  • Page cache is dropped at the start of every Monte Carlo iteration to make timing comparisons less noisy.
  • The output folder under /data/benchmark-results/<output-name>/ is cleared at the start of each run.
  • Each log directory must include the etc/modalai/ and data/modalai/ subtrees that were active when it was recorded; otherwise the VIO server will run against whatever happens to be on the device.
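
If a run is killed hard and the orchestrator never restores the configs itself, the copy back from the temp folders can be done by hand; a minimal sketch of the equivalent restore (backup paths from the first bullet above; the function and its dry_run parameter are illustrative):

```python
import glob
import os
import shutil

def restore_configs(backup_root="/data/benchmark-logs", dry_run=False):
    """Copy backed-up *.conf / *.yml files back into place after a hard
    kill (e.g. SIGKILL) prevented the orchestrator from restoring them.
    Returns the list of (source, destination_dir) pairs it would copy."""
    pairs = [(os.path.join(backup_root, "etc_config_temp"), "/etc/modalai"),
             (os.path.join(backup_root, "data_config_temp"), "/data/modalai")]
    restored = []
    for backup_dir, dest in pairs:
        for src in glob.glob(os.path.join(backup_dir, "*")):
            restored.append((src, dest))
            if not dry_run:
                shutil.copy(src, dest)
    return restored
```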