DataSet : https://www.kaggle.com/datasets/francoisxa/ds2ostraffictraces
Faster-KAN : https://github.com/AthanasiosDelis/faster-kan
A Kolmogorov-Arnold Network (KAN) is an approach to neural networks inspired by the Kolmogorov-Arnold representation theorem. Unlike an MLP, a KAN uses learnable activation functions rather than fixed ones, which lets them fill the role of both weights and activations. These activation functions live on the edges of the network rather than on the nodes. Overall, KANs offer better accuracy and interpretability, but fall short on computational efficiency.
For better computational efficiency, I used Faster-KAN by Athanasios Delis, which makes use of Radial Basis Functions and of basis functions with reflectional symmetry.
A Radial Basis Function is a real-valued function whose value depends only on the distance between the input and a fixed center. Among the various types, the Gaussian Radial Basis Function is the most common:
$b_{i}(u) = \exp\left(-\frac{(u - u_{i})^{2}}{h}\right)$
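As a minimal sketch of the formula above (stdlib only; `gaussian_rbf` and its default width `h = 1.0` are my own illustrative names, not Faster-KAN's API):

```python
import math

def gaussian_rbf(u: float, u_i: float, h: float = 1.0) -> float:
    """Gaussian radial basis function b_i(u) = exp(-(u - u_i)^2 / h),
    where u_i is the grid point (center) and h controls the width."""
    return math.exp(-((u - u_i) ** 2) / h)

# The basis peaks at its center and decays symmetrically with distance.
print(gaussian_rbf(0.0, 0.0))   # 1.0 at the center
print(gaussian_rbf(1.0, 0.0))   # e^{-1} ≈ 0.3679
print(gaussian_rbf(-1.0, 0.0))  # same value: depends only on distance
```

In a KAN layer, each edge evaluates a weighted sum of such basis functions over a grid of centers `u_i`.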
Using basis functions that have reflectional symmetry allows us to retain performance while reducing computation time. Faster-KAN uses the following tanh-based basis:
$b_{i}(u) = 1 - \tanh^{2}\left(\frac{u - u_{i}}{h}\right)$
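A sketch of this basis (again stdlib only; the function name is mine). One nice property, which the gradient computation can exploit, is that $1 - \tanh^2(x)$ is exactly the derivative of $\tanh(x)$:

```python
import math

def reflectional_basis(u: float, u_i: float, h: float = 1.0) -> float:
    """Basis b_i(u) = 1 - tanh((u - u_i)/h)^2, i.e. sech^2((u - u_i)/h).

    Like the Gaussian RBF it peaks at the grid point u_i and decays
    symmetrically, but it is built from a single cheap tanh evaluation,
    and d/dx tanh(x) = 1 - tanh(x)^2, so the same quantity serves as a
    gradient term during backpropagation.
    """
    t = math.tanh((u - u_i) / h)
    return 1.0 - t * t

print(reflectional_basis(0.0, 0.0))  # 1.0 at the grid point
```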
Trained two Faster-KAN models with the hidden layer sizes listed below. Both models achieved approximately the same accuracy and final loss, but the smaller model was considerably faster, executing roughly 45 iterations/sec versus about 13 iterations/sec for the larger one.
- Input Dimension : 11
- Hidden Layer Size (Original) : 100
- Hidden Layer Size (Small) : 21
- Training Accuracy : 100%
- Validation Accuracy : 99.3%
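A rough back-of-the-envelope for why the smaller model trains faster: in a KAN, each edge carries its own learnable activation (a set of basis coefficients), so cost scales with the edge count. The sketch below assumes a single hidden layer and an output dimension of 8, which is my assumption about the class count, not a figure from this write-up:

```python
def kan_edge_count(dims: list[int]) -> int:
    """Number of edges (learnable activation functions) in a KAN with the
    given layer widths; each edge additionally holds G basis coefficients."""
    return sum(a * b for a, b in zip(dims, dims[1:]))

IN, OUT = 11, 8  # OUT = 8 is an assumed class count, not from the write-up
big = kan_edge_count([IN, 100, OUT])    # 11*100 + 100*8 = 1900 edges
small = kan_edge_count([IN, 21, OUT])   # 11*21  + 21*8  = 399 edges
print(big, small)         # the small model has ~4.8x fewer edges
```

The ~4.8x reduction in edges is in the same ballpark as the observed ~3.5x speedup (45 vs. 13 iterations/sec); the gap is plausibly fixed per-iteration overhead.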