Contributing to Optimum-NVIDIA

Contributions are welcome! This guide provides instructions for setting up your development environment, running quality checks, and executing tests.

The official CONTRIBUTING.md in the repository is currently a placeholder stating "Coming soon!", but the following instructions are derived from the project's structure and CI/CD workflows.

Development Setup

  1. Clone the repository:

    git clone https://github.com/huggingface/optimum-nvidia.git
    cd optimum-nvidia
  2. Install in editable mode:

    It is recommended to install the package in editable mode (-e) with the testing dependencies, so that changes to the source take effect immediately without reinstalling.

    # Install with testing dependencies
    pip install -e '.[tests]'
    Note: The pyproject.toml also defines a quality extra, but it is commented out in setup.py; the required tools are black and ruff, which can be installed directly (see the sketch after this list).
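
    If the quality extra is not installable in your environment, installing the tools directly is a reasonable fallback; the project may pin specific versions, so treat this as a minimal sketch rather than the exact CI setup:

    # Install the linting/formatting tools directly (unpinned; CI may use specific versions)
    pip install ruff black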

Code Quality

This project uses ruff for both linting and formatting to ensure code quality and a consistent style. A Makefile provides a convenient way to run these checks.

To check for quality issues and formatting, run:

make quality

This command will run ruff check and ruff format --check.
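
If make is unavailable, invoking ruff directly should be roughly equivalent; the exact paths and flags are an assumption here, so check the Makefile for the authoritative recipe:

# Approximate equivalent of 'make quality' (paths/flags assumed, not taken from the Makefile)
ruff check .
ruff format --check .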

To automatically fix the issues that ruff can resolve, run:

make fix-quality

These checks are also enforced in the CI pipeline, as seen in .github/workflows/pr_quality.yml.
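
Similarly, the fixes applied by make fix-quality can be approximated by running ruff directly; again, treat the exact flags as an assumption and defer to the Makefile:

# Approximate equivalent of 'make fix-quality' (flags assumed)
ruff check --fix .
ruff format .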

Running Tests

The test suite is located in the tests/ directory and uses pytest.

The CI workflow in .github/workflows/pr_tests.yml shows the standard procedure for running tests.

  1. Install test dependencies:

    If you haven't already, install the necessary packages:

    pip install -e '.[tests]'

  2. Run the test suite:

    Execute pytest from the root of the repository.

    # Run all tests
    pytest
    
    # Run a specific test file
    pytest tests/integration/test_causal_lm.py

    The tests require an NVIDIA GPU and download models from the Hugging Face Hub, so make sure you have an active internet connection and, if you need access to gated or private models, that you have logged in via huggingface-cli login.
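
    A few pytest invocations that are often handy during development are sketched below; the -k expression is illustrative and does not correspond to a guaranteed test name:

    # Authenticate with the Hugging Face Hub (needed for gated or private models)
    huggingface-cli login

    # Run only the tests whose names match an expression, with verbose output
    pytest -k "causal_lm" -v

    # Stop at the first failure and show test output, useful when debugging locally
    pytest -x -s tests/integration/test_causal_lm.py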