Inference and Evaluation
This guide explains how to run inference with a trained OVSAM model to evaluate its performance on a dataset.
Step 1: Download Pre-trained Weights
The official pre-trained models for OVSAM are hosted on Hugging Face. You can download them by cloning the model repository:
git clone https://huggingface.co/HarborYuan/ovsam_models models
This command downloads the model checkpoints into a models/ directory at the root of your project. Make sure your configuration files point to the correct checkpoint paths within this directory.
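Note that checkpoints on Hugging Face are stored with Git LFS, so the clone only fetches the full weight files if Git LFS is set up. A sketch of the download flow, assuming git-lfs is available (the alternative command assumes the huggingface_hub CLI is installed):

# Set up Git LFS first so the clone pulls the actual weight files,
# not small pointer stubs (assumes git-lfs is installed).
git lfs install
git clone https://huggingface.co/HarborYuan/ovsam_models models

# Alternative, assuming the huggingface_hub CLI is available:
# huggingface-cli download HarborYuan/ovsam_models --local-dir models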
Step 2: Run Inference
Inference is performed with the tools/test.py script, orchestrated by tools/dist.sh for multi-GPU evaluation. The command requires a configuration file that defines the model architecture, dataset, and evaluation metrics.
# Example: Run inference on COCO with 8 GPUs
bash tools/dist.sh test seg/configs/ovsam/ovsam_coco_rn50x16_point.py 8
Command Breakdown:
- bash tools/dist.sh: The script for distributed execution.
- test: The command to execute, which corresponds to tools/test.py.
- seg/configs/ovsam/ovsam_coco_rn50x16_point.py: The configuration file for the OVSAM model trained on COCO. This file also specifies the validation dataset and evaluation metrics to be used.
- 8: The number of GPUs to use for inference.
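The same pattern applies with a different GPU count; for example, to evaluate on a single-GPU machine (the run simply takes longer):

# Example: Run inference on COCO with a single GPU
bash tools/dist.sh test seg/configs/ovsam/ovsam_coco_rn50x16_point.py 1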
Evaluation Output
During the run, the script iterates through the validation set, generates predictions, and then computes the metrics defined in the val_evaluator section of your config file. The results, such as mean Intersection over Union (mIoU) and classification scores, are printed to the console at the end of the process.
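Because the metrics only appear on the console, it is often handy to keep a copy of the output for later comparison; a simple sketch (the log file name is arbitrary):

# Example: Save the evaluation output to a log file while still printing it
bash tools/dist.sh test seg/configs/ovsam/ovsam_coco_rn50x16_point.py 8 2>&1 | tee eval_coco.log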