As a developer focused on mathematical optimization and machine learning, I released a package named perming, built on PyTorch, for solving supervised learning problems with perceptron-based models.
In my view, perceptron-based models learn a differentiable hidden latent space in which high-dimensional data become linearly separable, so I think it would be worthwhile to extend the dask_ml project with a perceptron algorithm that integrates parallel computing and compiled operators, including activation functions such as relu, tanh, and so on.
For example, I adopt operators released by PyTorch so that any supervised learning task on labeled tabular data is possible, and I wrap numpy.ndarray datasets into torch.Tensor for efficient CUDA computation. Moreover, an early_stop stage is part of training and validation for every algorithm in perming. A simple configuration of perming illustrates this workflow.
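As a rough sketch of the pattern described above (wrapping a numpy.ndarray into torch.Tensor, a small perceptron/MLP with relu, and a simple early_stop check), written in plain PyTorch rather than perming's actual API, which is not reproduced here:

```python
# Hypothetical sketch, not perming's real configuration: wrap numpy data
# into torch.Tensor, train a small MLP, and stop early when the training
# loss stops improving.
import numpy as np
import torch
from torch import nn

def train_mlp(X: np.ndarray, y: np.ndarray, hidden: int = 16,
              patience: int = 5, max_epochs: int = 200):
    torch.manual_seed(0)
    # Move data to the GPU when available, as described above.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    X_t = torch.as_tensor(X, dtype=torch.float32, device=device)
    y_t = torch.as_tensor(y, dtype=torch.long, device=device)
    model = nn.Sequential(
        nn.Linear(X.shape[1], hidden), nn.ReLU(),
        nn.Linear(hidden, int(y_t.max()) + 1),
    ).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    best, waited = float("inf"), 0
    for _ in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_t), y_t)
        loss.backward()
        opt.step()
        # early_stop: give up after `patience` epochs without improvement.
        if loss.item() < best - 1e-4:
            best, waited = loss.item(), 0
        else:
            waited += 1
            if waited >= patience:
                break
    return model, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4)).astype(np.float32)
    y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)
    model, final_loss = train_mlp(X, y)
    print(f"final training loss: {final_loss:.4f}")
```

The returned model can then be called from any downstream predictive pipeline, e.g. `model(torch.as_tensor(new_X, dtype=torch.float32)).argmax(dim=1)` for class predictions.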
main.model can be deployed into any pipeline for a predictive task, so I recommend that dask_ml release a perceptron model to provide more compatible support for processing linearly inseparable datasets.