Add a first example for the new Dlib layers to build a transformer-type network #3041

Open
Cydral wants to merge 1 commit into master
Conversation

@Cydral (Contributor) commented Jan 7, 2025

This example demonstrates a minimal implementation of a Very Small Language Model (VSLM) using Dlib's Transformer architecture.
The code showcases key features of the new Transformer layers, including attention mechanisms, positional embeddings, and a classification head, while maintaining a simple character-based tokenization approach.
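For readers new to character-level tokenization, the sketch below shows the basic idea: each distinct character in the corpus becomes an integer token id, and text windows are encoded as id sequences. The helper names `build_vocabulary` and `encode` are illustrative, not the example's actual functions.

```cpp
#include <map>
#include <string>
#include <vector>

// Map each distinct character in the corpus to an integer token id.
std::map<char, int> build_vocabulary(const std::string& corpus)
{
    std::map<char, int> char_to_id;
    for (char c : corpus)
        if (char_to_id.find(c) == char_to_id.end())
            char_to_id[c] = static_cast<int>(char_to_id.size());
    return char_to_id;
}

// Turn a text window into the sequence of token ids the network consumes.
std::vector<int> encode(const std::string& text, const std::map<char, int>& vocab)
{
    std::vector<int> ids;
    ids.reserve(text.size());
    for (char c : text)
        ids.push_back(vocab.at(c));
    return ids;
}
```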

Using Shakespeare's text as training data, the example illustrates both the training process and text generation capabilities, making it an excellent educational tool for understanding Transformer architecture basics.
The implementation is intentionally lightweight, with a small parameter count so that training and generation run quickly. Even so, it achieves perfect memorization of the training sequences, demonstrating the effectiveness of attention mechanisms in sequence-learning tasks.
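For orientation, here is a minimal sketch of the train-then-generate loop. The `dnn_trainer` calls are standard dlib API, but the network itself is a plain fc/relu stand-in (the exact transformer layer names are defined by this PR), the vocabulary size of 128 is assumed, and the greedy decoding loop is illustrative only.

```cpp
#include <dlib/dnn.h>
#include <vector>

using namespace dlib;

// Stand-in next-character classifier; the real example replaces these
// fc/relu layers with the PR's transformer stack (attention, positional
// embeddings, classification head). 128 is an assumed vocabulary size.
using net_type = loss_multiclass_log<fc<128, relu<fc<256, input<matrix<float,0,1>>>>>>;

int main()
{
    std::vector<matrix<float,0,1>> samples; // sliding windows over the encoded corpus
    std::vector<unsigned long> labels;      // token id of the character after each window
    // ... fill samples/labels from the encoded Shakespeare text ...
    if (samples.empty())
        return 0;

    net_type net;
    dnn_trainer<net_type> trainer(net);
    trainer.set_learning_rate(1e-3);
    trainer.set_min_learning_rate(1e-6);
    trainer.set_mini_batch_size(32);
    trainer.train(samples, labels); // runs until the learning rate decays below the minimum

    // Greedy generation: predict the next character id, then slide the
    // window forward and repeat.
    matrix<float,0,1> window = samples.front();
    for (int i = 0; i < 100; ++i)
    {
        const unsigned long next_id = net(window);
        // ... map next_id back to a character, append it to the output,
        //     and shift it into the window ...
        (void)next_id;
    }
    return 0;
}
```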
