Visit me here: https://yorkiedev.github.io/CanIRunThisLLM/
A web-based tool that checks whether your system can run a specific LLM from Hugging Face. Simply enter a GGUF model URL along with your system's VRAM and RAM, and the tool will tell you whether you can run the model.
Features:
- Compatibility checks for GGUF models hosted on Hugging Face
- Support for both direct file URLs and repository URLs (see the sketch after this list)
- Detailed system requirements for each model
- Guides for finding your system specifications on Windows, macOS, and Linux
- Real-time model size checking
- A clean, user-friendly interface
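To make the URL handling concrete, here is a minimal sketch of how a direct file URL and a repository URL can each be resolved to a GGUF file size. This is not the project's actual source; it assumes the public Hugging Face model API and `/resolve/` download endpoints behave as described, and all function names are illustrative:

```javascript
// Illustrative only: resolving Hugging Face URLs to a GGUF file size.

// A direct file URL points at a .gguf file; anything else is treated
// as a repository URL.
function isDirectFileUrl(url) {
  return url.includes(".gguf");
}

// For a repository URL like https://huggingface.co/user/model-GGUF,
// the public model API lists the repo's files under `siblings`.
async function listGgufFiles(repoUrl) {
  const repoId = new URL(repoUrl).pathname.replace(/^\/|\/$/g, "");
  const res = await fetch(`https://huggingface.co/api/models/${repoId}`);
  const meta = await res.json();
  return meta.siblings
    .map((s) => s.rfilename)
    .filter((name) => name.endsWith(".gguf"));
}

// A HEAD request to a /resolve/ download URL reports the file size in
// its headers (Content-Length, or x-linked-size for LFS-hosted files).
async function getFileSizeBytes(fileUrl) {
  const res = await fetch(fileUrl, { method: "HEAD" });
  const size =
    res.headers.get("x-linked-size") ?? res.headers.get("content-length");
  return size ? Number(size) : null;
}
```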
To use the tool:
- Visit the website
- Enter a Hugging Face GGUF model URL
- Enter your system's VRAM and RAM
- Click "Check Compatibility" to see whether you can run the model (the sketch after these steps shows the kind of check involved)
This is a simple HTML/JavaScript application with no dependencies. To run locally:
- Clone the repository
- Open `index.html` in your browser
You can deploy this project in several ways:
- GitHub Pages: Enable GitHub Pages in your repository settings
- Static Hosting: Deploy to any static hosting service (Netlify, Vercel, etc.)
- Local Server: Serve using any web server (Apache, Nginx, etc.)
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - feel free to use this project however you'd like.