This repository has been archived by the owner on Jan 23, 2020. It is now read-only.

docker4x/logger-azure:azure-v1.13.0-1 logs also to docker daemon #6

Closed
rocketraman opened this issue Jan 30, 2017 · 5 comments


rocketraman commented Jan 30, 2017

The container with image docker4x/logger-azure:azure-v1.13.0-1 is presumably responsible for writing logs to Azure storage. However, it also appears to write logs to the docker daemon. Does this mean the logs are duplicated in multiple places: in the daemon's local log storage as well as in Azure storage? I use logspout to send my docker daemon logs to Elasticsearch, and I have a lot of irrelevant output from editions_logger ending up in ES.
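
For reference, a minimal sketch of a stock logspout invocation, assuming a syslog-style route (the endpoint URI below is a placeholder, not the actual setup):

$ docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog+tcp://logs.example.internal:514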

@FrenchBen
Contributor

The logger's job is to accumulate the logs from the different services and send them to Azure storage.
This means that if I deploy a service and try to get the logs for one of the specific containers created by that service, I will get the following error:

$ docker logs b8dd7af5ace8
"logs" command is supported only for "json-file" and "journald" logging drivers (got: syslog)

You could get duplicate logs if logspout also captured the logs from the editions_logger container, since that container prints everything it receives for the host to stdout.
In other words, running:

$ docker logs editions_logger

will output the logs it received from other containers on that host.


This bit from the logspout docs is interesting:

logspout will gather logs from other containers that are started without the -t option and are configured with a logging driver that works with docker logs (journald and json-file)

Since the logging driver in use doesn't work with docker logs, I'm unsure what you're collecting or what your setup looks like; but feel free to share how you have logspout set up so I can provide more info.
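
(As an aside, a quick sketch of one way to check the daemon-wide default logging driver on a node; the output shown is an assumption and will vary with the deployment:)

$ docker info --format '{{.LoggingDriver}}'
syslog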

@rocketraman
Author

I like to collect logs directly from my containers -- that way my logs have the container information as context. In order to make this happen, I start up each of my services with --log-driver=json-file, which makes the logs collectable by logspout. This all works perfectly.
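
For illustration, that boils down to something like the following (the service name and image are placeholders):

$ docker service create --name myapp --log-driver=json-file myorg/myapp:latest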

Based on your explanation, presumably everything logged by every container not explicitly started with --log-driver=json-file is being collected by editions_logger, and since editions_logger logs to standard output using the json-file driver, the logs it receives are also being sent by logspout to my logging backend.

So if I'm understanding correctly, the good bit is that there won't be duplicates, because containers that I have explicitly set the log driver for won't ever be seen by editions_logger. Everything else gets sent to editions_logger, and from there it gets written to Azure storage as well as (currently) to my logging backend via logspout.

I created issue gliderlabs/logspout#258 for adding container exclusions to logspout -- that would probably be the best way to avoid sending the logs collected by editions_logger to my logging backend, if I want to turn that off.

If all of the above makes sense, feel free to close this! Thanks.

@FrenchBen
Contributor

@rocketraman that's exactly right! I didn't know that logspout couldn't exclude certain containers.
I also opened an internal ticket to provide an option to opt out of the storage-account logging feature.

I'll close the issue but feel free to add comments (or re-open it) if needed.


pewallin commented May 4, 2017

Any word on the opt-out for the built-in log forwarder? It seems to be hogging the syslog daemon, which makes integration with external logging services difficult.

@FrenchBen
Contributor

@pewallin Should be out with our next stable release.
