
Docker image cannot access other containers by name #68

Open
cdzombak opened this issue Oct 14, 2023 · 3 comments

Comments

cdzombak (Contributor) commented Oct 14, 2023

I have been using tsnsrv inside a docker-compose stack to serve private services. An example is this stack, which I spun up a month or so ago and has been running since. Note that tsnsrv accesses the registry-ui container by name.

Trying to start a new service following the same pattern this week, using the pre-built tsnsrv main image, I'm seeing connections to tsnsrv hang, and the proxied service does not seem to be receiving any traffic.

Here's the docker-compose stack that's not working:

---
version: "3"
services:
  aptly:
    container_name: aptly
    hostname: aptly
    image: urpylka/aptly:latest
    restart: unless-stopped
    volumes:
      - ./aptly.conf:/etc/aptly.conf:ro
      - /srv/dev-disk-by-label-storage/aptly:/opt/aptly

  aptly-tsnsrv:
    container_name: "aptly-tsnsrv"
    hostname: "aptly.tailnet-003a.ts.net"
    image: "ghcr.io/boinkor-net/tsnsrv:main"
    command: ["-name", "aptly", "http://aptly:80"]
    environment:
      - "TS_AUTHKEY=${APTLY_TS_AUTHKEY}"
      - "TS_STATE_DIR=/var/lib/tailscale"
    volumes:
      - /opt/docker/data/aptly/lib_tailscale:/var/lib/tailscale
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: "unless-stopped"

tsnsrv logs the following when I make a request to it:

2023/10/14 16:58:32 wgengine: idle peer [BTi8b] now active, reconfiguring WireGuard
2023/10/14 16:58:32 wgengine: Reconfig: configuring userspace WireGuard config (with 1/9 peers)
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Created
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Updating endpoint
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Removing all allowedips
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Updating persistent keepalive interval
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Starting
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Received handshake initiation
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Sending handshake response
2023/10/14 16:58:32 [v1] magicsock: derp route for [BTi8b] set to derp-12 (shared home)
2023/10/14 16:58:32 [v1] peer keys: [BTi8b]
2023/10/14 16:58:32 [v1] v1.50.1-ERR-BuildInfo peers: 148/92
2023/10/14 16:58:32 magicsock: disco: node [BTi8b] d:40d6f5b1c01fdd5c now using 192.168.1.10:41641
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 64 tcp ok
2023/10/14 16:58:32 [unexpected] localbackend: got TCP conn without TCP config for port 443; from 100.85.220.69:65072
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 387 tcp non-syn
2023/10/14 16:58:34 netcheck: [v1] report: udp=true v6=false v6os=false mapvarydest=false hair=false portmap= v4a=24.247.165.150:54236 derp=12 derpdist=1v4:44ms,12v4:27ms,21v4:35ms
2023/10/14 16:58:43 wg: [v2] [BTi8b] - Receiving keepalive packet
2023/10/14 16:58:58 netcheck: [v1] report: udp=true v6=false v6os=false mapvarydest=false hair=false portmap= v4a=24.247.165.150:54236 derp=12 derpdist=1v4:74ms,12v4:39ms,21v4:74ms
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 16:59:05 WARN proxy error error="context canceled"
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 40 tcp non-syn
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 40 tcp non-syn

(The WARN proxy error error="context canceled" line appears when I kill the curl process making the request.)


If I expose the aptly port on the host via 8003:80 and change the aptly-tsnsrv definition to include the following, then tsnsrv successfully connects to the proxied service and behaves as expected:

    command: ["-name", "aptly", "http://host.docker.internal:8003"]
    extra_hosts:
      - "host.docker.internal:host-gateway"
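
Put together, the workaround amounts to the following compose changes (a sketch based on the stack above; 8003 is the arbitrary host port from this report):

---
version: "3"
services:
  aptly:
    # ... other settings unchanged ...
    ports:
      - "8003:80"  # publish aptly on the host so tsnsrv can reach it via the gateway

  aptly-tsnsrv:
    # ... other settings unchanged ...
    command: ["-name", "aptly", "http://host.docker.internal:8003"]
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the Docker host's gateway IP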

tsnsrv logs the following for this (successful) request:

2023/10/14 17:03:46 wgengine: idle peer [BTi8b] now active, reconfiguring WireGuard
2023/10/14 17:03:46 wgengine: Reconfig: configuring userspace WireGuard config (with 1/9 peers)
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Created
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Updating endpoint
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Removing all allowedips
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Updating persistent keepalive interval
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Starting
2023/10/14 17:03:46 [v1] peer keys: [BTi8b]
2023/10/14 17:03:46 [v1] v1.50.1-ERR-BuildInfo peers: 0/0
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Received handshake initiation
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Sending handshake response
2023/10/14 17:03:46 [v1] magicsock: derp route for [BTi8b] set to derp-12 (shared home)
2023/10/14 17:03:46 magicsock: disco: node [BTi8b] d:40d6f5b1c01fdd5c now using 192.168.1.10:41641
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 64 tcp ok
2023/10/14 17:03:46 [unexpected] localbackend: got TCP conn without TCP config for port 443; from 100.85.220.69:49785
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 387 tcp non-syn
2023/10/14 17:03:47 INFO served original=/api rewritten=http://host.docker.internal:8003/api [email protected] origin_node=XX.tailnet-003a.ts.net. duration=1.919322ms http_status=301

I'm not knowledgeable about how the images are being built after #57, but: is it possible the new build process results in a binary that's unable to use DNS to look up the proxied service by name?
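One way to narrow this down would be to check whether the binary's resolver can see Docker's embedded DNS at all. A minimal Go sketch (hypothetical diagnostic, not part of tsnsrv; "aptly" is the container name from the compose file above, and it would need to run inside the same compose network):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"time"
)

// resolve looks up host using the process's default resolver. Inside a
// compose network this should hit Docker's embedded DNS (127.0.0.11),
// which is what maps container names to their network addresses.
func resolve(host string) ([]string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	return net.DefaultResolver.LookupHost(ctx, host)
}

func main() {
	// "aptly" is the container name from the compose stack above;
	// pass a different name as the first argument to test others.
	host := "aptly"
	if len(os.Args) > 1 {
		host = os.Args[1]
	}
	addrs, err := resolve(host)
	if err != nil {
		fmt.Println("lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("resolved:", addrs)
}
```

If this fails inside the tsnsrv container but succeeds in another container on the same network, that would point at the binary's resolver configuration rather than at Docker networking.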

cdzombak (Contributor, author) added:

(And, while I'm here, I'll add: tsnsrv is a very neat tool; thank you for creating & maintaining it 😄)

juev commented Mar 3, 2024

@cdzombak
One question: was it possible to reach containers by name before #57? Did proxying work?

cdzombak (Contributor, author) replied:

> @cdzombak One question: was it possible to reach containers by name before #57? Did proxying work?

I don’t recall exactly when this broke, but yes: the docker compose file I posted above worked before #57.

I have since moved to using the official Tailscale Docker image for this use case, as described in https://tailscale.com/blog/docker-tailscale-guide.
