An intelligent tagging assistant for Adobe Lightroom Classic that uses OpenAI's CLIP model to automatically generate relevant keywords for your photos.
- Automatically analyze images in your Lightroom catalog using the CLIP neural network
- Generate relevant keywords based on image content
- Update XMP sidecar files with AI-generated keywords while preserving existing ones
- Configurable confidence threshold and maximum keywords per image
- Supports common RAW formats (NEF, CR2, ARW) and JPEGs
- Uses the Foundation List 2.0.1 keyword hierarchy for consistent tagging
- Python 3.9+
- Adobe Lightroom Classic
- PyTorch
- transformers
- Pillow
- sqlite3 (included in the Python standard library)
You can install lr-autotag directly from PyPI:

`pip install lr-autotag`
Or, for development:

- Clone this repository:

  `git clone https://github.com/jsakkos/lr-autotag.git`
  `cd lr-autotag`

- Install with UV in development mode:

  `uv venv`
  `source .venv/bin/activate` (on Windows: `.venv\Scripts\activate`)
  `uv pip install -e ".[dev]"`
- Close Lightroom Classic if it's running.
- Run the CLI, pointing it at your Lightroom catalog file or a directory of images:

  `lr-autotag --catalog "path/to/catalog.lrcat"`
  `lr-autotag --images "path/to/images"`

- Open Lightroom Classic and wait for the catalog to reload.
- Load the XMP sidecar files to see the generated keywords.
- Enjoy your newly tagged images!
- `--catalog` or `-c`: Path to the Lightroom catalog file
- `--images` or `-i`: Path to a directory of images
- `--threshold` or `-t`: Confidence threshold for keyword suggestions (default: 0.5)
- `--max_keywords` or `-m`: Maximum number of keywords per image (default: 20)
- `--max_size` or `-s`: Maximum image dimension for processing (default: 1024)
- `--help` or `-h`: Show the help message
- Always ensure you have enough disk space for catalog backups
- Backup files are not automatically cleaned up; you may want to periodically remove old backups
- The backup process might take a few moments for large catalogs
You can adjust these parameters in the script:

- `threshold`: Confidence threshold for keyword suggestions (default: 0.5)
- `max_keywords`: Maximum number of keywords per image (default: 20)
- `max_size`: Maximum image dimension for processing (default: 1024)
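The interplay of these parameters can be illustrated with a small sketch (the function name and scores below are hypothetical, not the tool's actual code): keywords whose similarity score clears `threshold` are kept, sorted by score, and truncated to `max_keywords`.

```python
def select_keywords(similarities, threshold=0.5, max_keywords=20):
    """Pick keywords whose similarity score clears the threshold.

    similarities: dict mapping keyword -> similarity score in [0, 1].
    Illustrative only; the real scores come from CLIP embeddings.
    """
    above = [(kw, s) for kw, s in similarities.items() if s >= threshold]
    above.sort(key=lambda pair: pair[1], reverse=True)  # best matches first
    return [kw for kw, _ in above[:max_keywords]]


# Example: with threshold=0.5 and max_keywords=2, only the two
# strongest matches above the cutoff survive.
scores = {"sunset": 0.82, "beach": 0.61, "dog": 0.12, "ocean": 0.55}
print(select_keywords(scores, threshold=0.5, max_keywords=2))  # ['sunset', 'beach']
```

Raising `threshold` trades recall for precision: a stricter cutoff yields fewer but more confident keywords.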
- The script connects to your Lightroom catalog's SQLite database to get image locations
- Each image is processed through the CLIP neural network
- The image embeddings are compared against pre-computed embeddings of the Foundation List keywords
- Keywords with similarity scores above the threshold are selected
- The keywords are written to XMP sidecar files that Lightroom can read
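The final step above boils down to writing keywords into an XMP sidecar's `dc:subject` bag. A minimal sketch (simplified: the real tool also preserves existing keywords and other metadata, and its function names may differ):

```python
from pathlib import Path

# Minimal XMP packet with a Dublin Core subject bag, which Lightroom
# reads as the keyword list for the adjacent image file.
XMP_TEMPLATE = """<?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
{items}
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>"""


def write_sidecar(image_path, keywords):
    """Write keywords to an XMP sidecar next to the image file."""
    items = "\n".join(f"     <rdf:li>{kw}</rdf:li>" for kw in keywords)
    sidecar = Path(image_path).with_suffix(".xmp")
    sidecar.write_text(XMP_TEMPLATE.format(items=items), encoding="utf-8")
    return sidecar
```

For `photo.nef`, this produces `photo.xmp` in the same folder, which Lightroom picks up via Metadata > Read Metadata from Files.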
Before any operation that accesses the Lightroom catalog, the tool automatically creates a timestamped backup of your catalog file. The backup is stored in the same directory as your original catalog, named `[original_name]_YYYYMMDD_HHMMSS.backup`.
If the backup process fails for any reason, the tool will not proceed with catalog operations to ensure your data's safety.
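The backup step amounts to a timestamped copy of the `.lrcat` file. A minimal sketch of the idea (the function name is illustrative, not the tool's actual API):

```python
import shutil
from datetime import datetime
from pathlib import Path


def backup_catalog(catalog_path):
    """Copy the catalog to [original_name]_YYYYMMDD_HHMMSS.backup
    in the same directory. shutil.copy2 raises on failure, so any
    caller can abort catalog operations when the backup fails."""
    src = Path(catalog_path)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = src.with_name(f"{src.stem}_{stamp}.backup")
    shutil.copy2(src, dest)  # copy2 also preserves file timestamps
    return dest
```

Because `shutil.copy2` raises an exception when the copy cannot complete (e.g. disk full), no catalog operation proceeds without a successful backup, matching the safety behavior described above.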
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI's CLIP model
- The [Digital Photography School Foundation List](https://lightroom-keyword-list-project.blogspot.com/)
- Adobe Lightroom Classic SDK documentation
This tool is not affiliated with or endorsed by Adobe. Use at your own risk and always backup your Lightroom catalog before using any third-party tools.