
I'm desperately trying to set up an nginx proxy in front of a WebSocket server. The WebSocket connects, but then I get:

2018/02/10 19:30:34 [info] 7#7: *238 client closed connection while waiting for request, client: 172.18.0.1, server: 0.0.0.0:8888

Here's a minimal example:

nginx.conf

worker_processes  1;

events {
    worker_connections  1024;
}
error_log /dev/stdout debug;

http  {
    resolver 127.0.0.11 ipv6=off;
    include       mime.types;
    access_log /dev/stdout;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    upstream tornado {
      server ws:8888;
      #server ws:8889; I may add another upstream here
    }

    server {
         add_header X-Frame-Options SAMEORIGIN;
         add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

        listen 8888;

        server_name pychat.org;
        charset     utf-8;
        client_max_body_size 75M;

        location / {
             proxy_pass http://tornado/;
             #>>> otherwise error 400
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
             proxy_set_header Host $host;
             proxy_send_timeout 330;
             proxy_read_timeout 330;
             #>>>>>>
             #proxy_redirect off;
             proxy_set_header X-Real-IP $remote_addr;
             #proxy_set_header X-Scheme $scheme;
        }

    }
}

server.py

import tornado.ioloop
from tornado.websocket import WebSocketHandler, WebSocketClosedError
from tornado import web
class MainHandler(WebSocketHandler):

    def open(self):
        print("WebSocket opened")

    def on_message(self, message):
        print("ws mess" +  message)
        self.write_message(u"You said: " + message)

    def on_close(self):
        print("WebSocket closed")

    def check_origin(self, origin):
        return True

def make_app():
    return tornado.web.Application([
        (r'.*', MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

docker-compose.yml

version: '2.3'
services:
  nginx:
    build:
      context: ../
      dockerfile: ./dockerfilenginx
    ports:
     - 8888:8888
  ws:
    build:
      context: ../
      dockerfile: ./docker/Dockerfilews

Dockerfilenginx:

FROM alpine:3.6
RUN apk update && \
    apk add vim nginx ca-certificates wget && \
    update-ca-certificates
COPY ./docker/nginx-test.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "pid /tmp/nginx.pid; daemon off;"]

Dockerfilews:

FROM alpine:3.6
RUN apk update && \
    apk add python3 && \
    pip3 install tornado
WORKDIR /usr/src
COPY ./server.py /usr/src
CMD python3 server.py

Then just try to open a WebSocket to the server, e.g. from the browser console (Ctrl+Shift+I):

ws = new WebSocket('ws://localhost:8888')
ws.send("wsdata")

If I remove the nginx proxy and just expose the WebSocket port from Docker, everything works.
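To confirm the handler itself is sound, the round trip can be reproduced entirely in Python with Tornado's own WebSocket client, bypassing both the browser and nginx. This is a minimal sketch, assuming Tornado is installed; the port-discovery trick reads the server's private `_sockets` attribute, which is an implementation detail:

```python
# Sketch: exercise the echo handler directly, with no nginx in between
# (matches the observation that it works when the port is exposed directly).
import tornado.ioloop
import tornado.web
from tornado.websocket import WebSocketHandler, websocket_connect

class MainHandler(WebSocketHandler):
    def on_message(self, message):
        self.write_message(u"You said: " + message)

    def check_origin(self, origin):
        return True

async def roundtrip():
    app = tornado.web.Application([(r'.*', MainHandler)])
    server = app.listen(0)  # port 0 = let the OS pick a free port
    port = next(iter(server._sockets.values())).getsockname()[1]
    conn = await websocket_connect("ws://127.0.0.1:%d" % port)
    await conn.write_message("wsdata")
    reply = await conn.read_message()
    conn.close()
    server.stop()
    return reply

result = tornado.ioloop.IOLoop.current().run_sync(roundtrip)
print(result)
```

If this prints the echoed message while the browser-through-nginx path fails, the problem is in the proxy hop rather than in server.py.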

2 Answers


Can you try this proxy config for nginx? I use it for Django and Daphne.

proxy_http_version 1.1; solved the issue: nginx talks HTTP/1.0 to upstreams by default, and the WebSocket Upgrade handshake requires HTTP/1.1.

location / {
    proxy_pass http://ws:8888;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_redirect     off;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Host $server_name;
}

4 Comments

It doesn't differ from my config. I tried it and get the same result. This configuration works fine without Docker; inside Docker I get the error I described.
Have you tried the proxy without SSL (just ws://)?
Yep, it doesn't matter; it doesn't work without SSL either.
Can you post the logs from nginx (and the ws service) when you access the server with the JavaScript?

I also had this problem and believed it was related to Nginx, but that was a red herring.

In my case, my code was periodically raising an exception when sending a message on channels. This only became apparent when I introduced Nginx, which (as far as I can tell) closed the "dead" channel. To me it looked like a timeout, so I checked proxy timeouts and all sorts of other things.

Eventually, I added more logging to my code and realised the error was in fact caused partly by the following (example):

async_to_sync(channel_layer.group_send)(
    'notifications', {
        "type": "notification",
        "data": some_packet
    }
)

My consumer was expecting a "text" attribute when this message was received, and it didn't handle it well.
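The failure mode above can be demonstrated in plain Python, with no channels dependency: the event dict carries a "data" key, but a handler that assumes "text" raises a KeyError (the handler names here are hypothetical, for illustration only):

```python
# Sketch of the mismatch: the event carries "data", the handler reads "text".
def fragile_handler(event):
    return event["text"]  # raises KeyError for this event shape

def tolerant_handler(event):
    # Fall back instead of assuming one particular key exists
    return event.get("text") or event.get("data")

event = {"type": "notification", "data": {"id": 1}}

try:
    fragile_handler(event)
except KeyError as exc:
    print("fragile handler raised KeyError:", exc)

print(tolerant_handler(event))
```

In a real consumer the equivalent fix is to read the event defensively (or log and drop malformed events) rather than let the exception kill the socket.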

The socket then closed, Nginx dropped it, and I believed it was some issue with Nginx/Daphne/Docker-networking for 2 days.

Hope this helps someone.

1 Comment

Is there a question?
