Quick Start

This guide will walk you through running your first evaluation and training experiment.

1. Running an Evaluation

Glue Factory makes it easy to evaluate pre-trained models on standard benchmarks. The required benchmark data is downloaded automatically to data/ the first time you run an evaluation.

To evaluate SuperPoint + LightGlue on the HPatches benchmark:

python -m gluefactory.eval.hpatches --conf superpoint+lightglue-official --overwrite
  • --conf: The name of a configuration file in gluefactory/configs/ (without the .yaml extension).
  • --overwrite: Recompute and overwrite any existing predictions for this configuration.
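
Other benchmarks follow the same pattern. As a sketch, assuming the MegaDepth-1500 evaluation module (gluefactory.eval.megadepth1500) and its data are available in your setup:

python -m gluefactory.eval.megadepth1500 --conf superpoint+lightglue-official --overwrite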

Customizing Evaluation

You can override configuration parameters directly from the command line using dot notation:

python -m gluefactory.eval.hpatches \
    --conf superpoint+lightglue-official \
    --overwrite \
    eval.estimator=poselib \
    eval.ransac_th=-1
  • eval.estimator=poselib: use PoseLib instead of OpenCV for pose estimation.
  • eval.ransac_th=-1: a value of -1 automatically tunes the RANSAC inlier threshold.
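
You can also pin the threshold instead of tuning it. A sketch, assuming the OpenCV estimator is exposed under the same eval.estimator key and accepts a fixed pixel threshold via eval.ransac_th:

python -m gluefactory.eval.hpatches \
    --conf superpoint+lightglue-official \
    --overwrite \
    eval.estimator=opencv \
    eval.ransac_th=0.5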

2. Training a Model

Glue Factory typically uses a two-stage training process: pre-training on homographies, followed by fine-tuning on MegaDepth.

Homography Pre-training

To pre-train LightGlue with SuperPoint features on the homography dataset:

python -m gluefactory.train sp+lg_homography \
    --conf gluefactory/configs/superpoint+lightglue_homography.yaml
  • sp+lg_homography: The name of your experiment. Outputs will be saved to outputs/training/sp+lg_homography.
  • Note: The default batch size is 128. If you run out of GPU memory, reduce it by appending data.batch_size=32 to the command (the same dot-notation overrides as for evaluation).
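
MegaDepth Fine-tuning

The second stage fine-tunes the pre-trained model on MegaDepth. As a sketch, assuming a config named superpoint+lightglue_megadepth.yaml exists in gluefactory/configs/ and that train.load_experiment is the key that points the trainer at a previous run (check the configs directory for the exact names):

python -m gluefactory.train sp+lg_megadepth \
    --conf gluefactory/configs/superpoint+lightglue_megadepth.yaml \
    train.load_experiment=sp+lg_homography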

3. Visualization

After running an evaluation, you can inspect the results visually:

python -m gluefactory.eval.inspect hpatches superpoint+lightglue-official

This will open an interactive viewer allowing you to explore matches and error metrics.
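
If you have evaluated several configurations, the same viewer can compare them side by side. A sketch, assuming the inspect script accepts multiple experiment names and that you have also run, e.g., a superpoint+superglue-official evaluation:

python -m gluefactory.eval.inspect hpatches \
    superpoint+lightglue-official \
    superpoint+superglue-official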