Why are you updating the software version right in the container? #14
Comments
@schernysh maybe you could take a look, please?
@holms - we would agree with your thoughts if concurrency were an issue. We also just generally like that someone's looking at this code! However, this system runs on exactly one AWS EC2 instance, and downloading PBJS is not a high-volume activity, so there aren't currently any concurrency issues. We can't afford to redesign the system at this point, but if there's anyone in the community who would like to take that on, the path is to become a prebid.org member and reach out to me for permissions to make this happen. (!)
@bretg For now, as far as I understand, the best thing I can do is to override CMD when launching a service with this Docker image. If that works, it's probably not such a big deal :) But checkout.sh should still run at least once, as far as I understand. Is that correct?
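For reference, a CMD override at launch might look like the sketch below. The image name and the replacement serve command are placeholders for illustration rather than values taken from this repository; only checkout.sh is referenced in the thread above.

```sh
# Run the image with a replaced CMD: perform a single checkout up front instead of
# starting crond for periodic checkouts. The image name, the script location and
# "node server.js" are assumptions, not taken from the repo.
docker run -d \
  --name pbjs-builder \
  my-registry/prebid-js-build-generator:latest \
  sh -c "./checkout.sh && exec node server.js"
```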
Why are you trying to run this, @holms? It's already running behind https://docs.prebid.org/download.html -- unless you're setting up a competing managed service, we never imagined anyone else would spin one up, and we aren't really prepared to support anyone doing so. FWIW, there are docs at https://github.com/prebid/prebid-js-build-generator/tree/master/docs. Let us know your use case and we'll decide whether to spend the time writing documentation for others to make use of it. If you'd like to submit a pull request to make this system concurrent, we'll review it. Thanks.
@bretg glad you answered :) We need to generate our own prebid.js with selected modules and our own globalVarName. I'd be happy to use your web service if it had API access. We're spinning up our own workers which would generate prebid.js on the fly with a certain config for certain placements. We're currently doing a lot of manual work to achieve this, and I just want to automate it completely. The basic need we have is API access with a custom globalVarName :) I can of course fork both projects and modify globalVarName, but that's technical debt in a way; I'd rather submit a PR that allows setting globalVarName through an environment variable, for example. For this particular ticket, overriding CMD seems to do the job.
There is API access, but the issue is that building the PBJS package isn't fast, so every module is pre-built for every version -- and the global name isn't currently configurable. Making it configurable might not be so hard, and it might not be so slow at runtime, since I think the definition of the global is limited to core. If you propose an update, we'll ask the team to review it.
Currently I have an idea for overriding globalVarName. The docs linked here say to add a certain piece to the webpack config.
Then I'd probably pass the value through a webpack env var. Next I'd edit gulp.js to detect whether the env var from the shell exists and, if so, call webpack with that param. And finally, pass the env var to the Docker image (a rough sketch is below). Not too complicated; I think I'd be able to submit a PR. Regarding the slow API, well, that's what AMQP is for :) My request would be asynchronous: we'd use workers to ask for the new version, keep serving the cached or older version in the meantime, and then cache and serve the new one once it's available. Something like that. P.S. that's a different ticket; this one is about the Dockerfile. Currently we've launched this project in Kubernetes, two containers in one pod, with a single instance for now. I'm still not sure whether checking out the prebid.js repo on cron is a good idea when multiple instances of the containers are used.
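A rough sketch of how that environment variable could flow into the container. The variable name PBJS_GLOBAL_VAR_NAME and the image name are hypothetical; the gulpfile/webpack change proposed above is what would actually read it.

```sh
# Launch the builder with the desired global name exposed as an env var.
# The proposed gulpfile change would read it (e.g. process.env.PBJS_GLOBAL_VAR_NAME)
# and forward it to webpack as a build parameter. Names here are hypothetical.
docker run -d \
  -e PBJS_GLOBAL_VAR_NAME=ourPbjs \
  my-registry/prebid-js-build-generator:latest
```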
I'm actually highly confused about this line:
prebid-js-build-generator/docker/Dockerfile.builder, line 23 at commit 741643f
I see that you launch crond, which constantly checks out the new version, but those crons will never run at the same time, will they? Imagine I launch 10 pods in Kubernetes: all the cron sessions would start at different times, so for some amount of time you'd have pods serving different versions. Plus, with the other cron tasks that clear the bundle, you'll end up in a situation where a user might get a 500 error because the bundle is gone at the moment you're building the next one.
I believe this is fundamentally incorrect. That's what CI/CD exists for: all containers would be started with the new version instantly. But in this case it seems like you'll have a bad outcome. You should have version tags for each Docker image, and the rest should be handled by CI/CD pipelines.