Hi, this is a really good application. I would like your advice on using my own dataset. My images are very different from widely known datasets such as ImageNet, which contain common objects (cars, cats, dogs, etc.). My data is quite technical: dynamometer cards, which are commonly used in the oil industry, as shown here: https://knepublishing.com/index.php/KnE-Engineering/article/download/3083/6588/15176
Unlike common objects, my images always come in a neutral position: they are never rotated or flipped, have no color distortion (they are just black and white), and always show the full card (unlike a car, which might appear only partially in an image). None of the variations that occur with real-world objects apply here. For common object recognition, I assume the usual data augmentations are needed precisely to introduce such variations to the model.
Do I need to apply the same data augmentations that this repo uses by default? Or is it OK for BYOL to use no augmentations, or only minimal ones? What do you think would be the best approach for my dataset, especially in terms of augmentation?
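For concreteness, here is a minimal sketch of the kind of reduced pipeline I have in mind, assuming the BYOL wrapper accepts custom augmentation callables (the `augment_fn` / `augment_fn2` names below follow lucidrains/byol-pytorch and are an assumption, not necessarily this repo's exact API):

```python
# Sketch of a reduced augmentation pipeline for black-and-white dynamometer cards:
# no flips, no rotations, no color jitter -- just a gentle crop and blur so the two
# BYOL views of the same card are not identical.
import torch
from torchvision import transforms as T

card_augment = torch.nn.Sequential(
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),       # mild crop; the card stays mostly visible
    T.GaussianBlur(kernel_size=9, sigma=(0.1, 2.0)),  # slight blur as a second source of variation
)

# Hypothetical wiring -- the argument names below follow lucidrains/byol-pytorch and
# may differ in this repo:
# learner = BYOL(
#     resnet,
#     image_size=256,
#     hidden_layer='avgpool',
#     augment_fn=card_augment,  # a second, slightly different pipeline can go in augment_fn2
# )
```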
thank you
Hi @ramdhan1989, could you please share the source of the dataset you are using?
I am also working on a similar problem, and it would be very helpful if you could share your dynacard dataset.