
"Optimizing Stock Price Prediction with LightGBM and CatBoost: Choosing the Right Model for Efficiency and Accuracy" #17

Closed


praveenarjun
Contributor

For stock price prediction, you might choose LightGBM or CatBoost depending on the nature of your dataset and specific requirements. If your dataset includes many categorical features, CatBoost might be more suitable. If you need to handle a very large dataset efficiently, LightGBM could be the better choice.

The main reason for using LightGBM or CatBoost is that both have built-in support for categorical features, which can improve performance and accuracy.

These models also cut training time considerably.

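As a rough illustration of the categorical-feature point above, here is a minimal sketch; the `sector` column and the tiny synthetic frame are hypothetical placeholders, not data from this repository:

```python
import pandas as pd
import lightgbm as lgb
from catboost import CatBoostRegressor

# Hypothetical frame: one categorical feature plus one numeric feature
df = pd.DataFrame({
    "sector": ["tech", "energy", "tech", "finance"] * 25,
    "lag_1": range(100),
    "target": [x * 0.5 for x in range(100)],
})
X, y = df[["sector", "lag_1"]], df["target"]

# LightGBM handles categoricals via the pandas 'category' dtype,
# so no one-hot encoding is needed
X_lgb = X.copy()
X_lgb["sector"] = X_lgb["sector"].astype("category")
lgb.LGBMRegressor(n_estimators=50).fit(X_lgb, y)

# CatBoost accepts raw string categoricals directly via cat_features
CatBoostRegressor(iterations=50, verbose=0).fit(X, y, cat_features=["sector"])
```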
@praveenarjun
Contributor Author

Hi,

I have reviewed the changes in this pull request. If any changes are needed or there are any issues, please leave a comment so I can address them.

Additionally, could you please add the labels "gssoc", "level", and "hacktober"?

Thank you!

@rohitinu6 rohitinu6 added gssoc-ext GSSoC'24 Extended Version hacktoberfest-accepted Hacktoberfest 2024 level2 25 Points 🥈(GSSoC) labels Oct 4, 2024
@rohitinu6
Owner

@praveenarjun
Thank you for your valuable contribution.
Please ensure that you star this repository; your PR will soon be reviewed by the mentors.

@jvedsaqib
Collaborator

@praveenarjun can you please be more specific?

@praveenarjun
Contributor Author

praveenarjun commented Oct 4, 2024

Reasons for Using LightGBM and CatBoost

LightGBM:
- Efficiency: LightGBM is designed to be highly efficient and can train on large datasets quickly.
- Accuracy: It often provides better accuracy than other gradient boosting algorithms.
- Scalability: LightGBM can handle large-scale data and high-dimensional features.
- Support for categorical features: It can handle categorical features directly, without one-hot encoding.

CatBoost:
- Handling categorical features: CatBoost is specifically designed to handle categorical features effectively, reducing the need for extensive preprocessing.
- Robustness: It is less prone to overfitting and performs robustly across varied datasets.
- Ease of use: CatBoost requires minimal parameter tuning and is easy to use.
- Efficiency: It is optimized for fast training and prediction times.

These models are particularly useful for tasks involving structured data and can provide significant improvements in performance and efficiency.

Import necessary libraries:

```python
import lightgbm as lgb
from catboost import CatBoostRegressor

# Regression metrics; classification metrics such as accuracy_score,
# precision_score, and confusion_matrix do not apply to a continuous
# price target
from sklearn.metrics import (
    mean_squared_error,
    mean_absolute_error,
    mean_absolute_percentage_error,
)
```
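To make the workflow concrete, here is a minimal end-to-end sketch of training and comparing both regressors; the synthetic lag/volume features are stand-ins for whatever feature engineering the project actually uses:

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Illustrative synthetic data standing in for engineered stock features
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "lag_1": rng.normal(size=500),
    "lag_5": rng.normal(size=500),
    "volume": rng.lognormal(size=500),
})
y = X["lag_1"] * 0.6 + X["lag_5"] * 0.3 + rng.normal(scale=0.1, size=500)

# Time-series data should not be shuffled when splitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

# LightGBM: fast histogram-based gradient boosting
lgb_model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
lgb_model.fit(X_train, y_train)

# CatBoost: ordered boosting with strong defaults
cb_model = CatBoostRegressor(iterations=200, learning_rate=0.05, verbose=0)
cb_model.fit(X_train, y_train)

# Compare both models on held-out data with regression metrics
for name, model in [("LightGBM", lgb_model), ("CatBoost", cb_model)]:
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mae = mean_absolute_error(y_test, pred)
    print(f"{name}: RMSE={rmse:.4f}, MAE={mae:.4f}")
```

On real stock data, the chronological split (hence `shuffle=False`) matters as much as the model choice, since shuffled splits leak future information into training.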

@praveenarjun
Contributor Author

praveenarjun commented Oct 4, 2024

I am facing some issues merging the pull request. Can you please find the issue?

@praveenarjun
Contributor Author

To improve model speed, we can use LightGBM and CatBoost:
#15
This is my issue.

@praveenarjun
Contributor Author

I used another branch for this, so you can merge it from there.

@praveenarjun
Contributor Author

Added the pull request:
#39

@praveenarjun
Contributor Author

How much time will it take to review and merge the code?

@jvedsaqib jvedsaqib closed this Oct 5, 2024