In 2022, our top priorities are:
- (v0.4) Windows OS Support
- (v0.4) Python 3.9 Support
- Time-Series Support
- (v0.4) HuggingFace Integration
- Improved Multi-modal Modeling
- Advance SOTA in Stack Ensemble Research (Deeper Stack Ensembles)
- (v0.4) Cloud Training & Deployment
- (v0.4) Parallel Model Training
- Parallel Hyperparameter Tuning
- Distributed Model Training
- Distributed Hyperparameter Tuning
- (v0.4) Semi-supervised Learning
- (v0.4) Automated Model Calibration via Temperature Scaling
- Enhanced Model Distillation
- (v0.4) Online Inference Optimization
- Meta-Learning
- Improved Large-scale Data Handling (10M+ Rows)
- Improved Feature Type Inference
- Improved Feature Preprocessing
- Refactor autogluon.features into Standalone Module
- Covariate Shift Detection
- Covariate Shift Correction
- Exploratory Analysis
- Model Interpretability
- Model Uncertainty
- Model Monitoring
- Model Calibration (Conformal Methods)
- Image Model Inference Optimization
- Text Model Inference Optimization
- Advanced Custom Model Tutorial
v0.4 Release Highlights:
- Windows OS Support
- Python 3.9 Support
- HuggingFace Integration
- Torch Migration (Remove MXNet dependency)
- Parallel Model Training (2x training speed-up for bagging/stacking)
- Automated Feature Pruning/Selection
- Semi-supervised & Transductive Learning Support
- Automated Model Calibration via Temperature Scaling
- Cloud Training & Deployment Tutorials
- Feature Preprocessing Tutorial
- Documentation Overhaul
- Hyperparameter Tuning Overhaul
- Memory Usage Optimizations
- Various Performance Optimizations
- Various Bug Fixes
In 2021, our top priorities are:
- Make AutoGluon the most versatile AutoML framework via dedicated multi-modal image-text-tabular support (paper).
- Modularization of the various components of AutoGluon.
- Model Training Speed Optimizations.
- Model Inference Speed Optimizations.
- Model Quality Optimizations.
- Integration with NVIDIA RAPIDS for accelerated GPU training.
- Integration with Intel sklearnex for accelerated CPU training.
- Improved documentation and tutorials.
- Training and Inference containers.
In 2020, we plan to focus on improving code quality, extensibility, and robustness of the package.
We will work towards unifying the APIs of the separate tasks (Tabular, Image, Text) to simplify and streamline development and improve the user experience.
- v0.0.15 Release Notes (December 2020)
- v0.0.14 Release Notes (October 2020, Highlight: Added FastAI Neural Network Model)
- v0.0.13 Release Notes (August 2020, Highlight: Added model distillation (paper))
- v0.0.12 Release Notes (July 2020, Highlight: Added custom model support)
- v0.0.11 Release Notes (June 2020)
- v0.0.10 Release Notes (June 2020, Highlight: Implemented feature importance)
- v0.0.9 Release Notes (May 2020)
- v0.0.8 Release Notes (May 2020)
- v0.0.7 Release Notes (May 2020, Highlight: first addition of the `presets` argument)
- v0.0.6 Release Notes (March 2020, first release tagged on GitHub with release notes)
- v0.0.5 Release (February 2020, used in the original AutoGluon-Tabular paper)
- v0.0.4 Release (January 2020)
In 2019, we plan to release the initial open source version of AutoGluon, featuring Tabular, Text, and Image classification and regression tasks, along with Object Detection.
- v0.0.3 Release (December 2019)
- v0.0.2 Release (December 2019, Initial Open Source Release)