s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if os.path.exists(socket_path):
    os.unlink(socket_path)
s.bind(socket_path)
# Use a backlog of 50, which seems to be fairly common.
s.listen(50)
# Adopt this socket into Twisted's reactor setting the endpoint.
endpoint = AdoptedStreamServerEndpoint(reactor, s.fileno(), s.family)
endpoint.socket = s  # Prevent garbage collection.
return endpoint
Unfortunately, only one process is going to serve the requests, as there is no load balancing for unix sockets. (Ignore the race condition on os.path.exists: I noticed that when we hit the race, an exception is thrown and silently ignored.)
When we execute

if os.path.exists(socket_path): os.unlink(socket_path)

we unlink the unix socket that another worker was already listening on, cutting that worker off from any new connections.
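The clobbering is easy to demonstrate outside MAAS with two plain listener sockets standing in for two regiond workers (a minimal sketch; the worker1/worker2 names and temp path are ours, not MAAS code):

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

# "Worker 1" binds and listens on the shared unix socket path.
worker1 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
worker1.bind(path)
worker1.listen(5)

# "Worker 2" starts later: the path exists, so it unlinks and rebinds,
# exactly like the os.path.exists()/os.unlink() dance above.
# No error is raised, and worker 1 is never notified.
os.unlink(path)
worker2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
worker2.bind(path)
worker2.listen(5)

# A client connecting to the path now only ever reaches worker 2.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = worker2.accept()

# Worker 1 still holds a valid-looking listening socket, but its
# filesystem entry is gone, so it never sees another connection.
worker1.settimeout(0.2)
try:
    worker1.accept()
    worker1_starved = False
except socket.timeout:
    worker1_starved = True

print(worker1_starved)  # True: worker 1 is silently cut off
```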
Unless I missed something, the only fix is to create a separate unix socket for each process and let nginx round-robin the requests across those sockets.
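That fix would look roughly like the following nginx configuration (a sketch only: the socket paths and port are made-up placeholders, not MAAS's real layout):

```nginx
# Hypothetical per-worker sockets; nginx's default
# load-balancing method for an upstream is round-robin.
upstream regiond_workers {
    server unix:/var/lib/maas/maas-regiond-webapp.0.sock;
    server unix:/var/lib/maas/maas-regiond-webapp.1.sock;
    server unix:/var/lib/maas/maas-regiond-webapp.2.sock;
    server unix:/var/lib/maas/maas-regiond-webapp.3.sock;
}

server {
    listen 5240;
    location / {
        proxy_pass http://regiond_workers;
    }
}
```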
I was able to reproduce this. Thanks for reporting!
We start multiple regiond processes, and each of them is running

def _makeEndpoint(self):
    """Make the endpoint for the webapp."""
    socket_path = os.getenv(
        "MAAS_HTTP_SOCKET_PATH",
        get_maas_data_path("maas-regiond-webapp.sock"),
    )
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.path.exists(socket_path):
        os.unlink(socket_path)
    s.bind(socket_path)
    # Use a backlog of 50, which seems to be fairly common.
    s.listen(50)
    # Adopt this socket into Twisted's reactor setting the endpoint.
    endpoint = AdoptedStreamServerEndpoint(reactor, s.fileno(), s.family)
    endpoint.socket = s  # Prevent garbage collection.
    return endpoint
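For comparison, a minimal sketch of the per-process variant suggested above (make_worker_socket_path and the worker_id parameter are hypothetical names for illustration, not MAAS API):

```python
import os
import socket

def make_worker_socket_path(base_path: str, worker_id: int) -> str:
    # Hypothetical helper: derive ".../maas-regiond-webapp.sock.<id>"
    # so every worker owns a distinct filesystem entry.
    return f"{base_path}.{worker_id}"

def make_listening_socket(path: str, backlog: int = 50) -> socket.socket:
    # Each worker unlinks only its OWN path, so it can never clobber
    # the socket another worker is serving.
    if os.path.exists(path):
        os.unlink(path)
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.bind(path)
    s.listen(backlog)
    return s
```

Each per-worker socket would then get its own server entry in the nginx upstream, which round-robins requests across them.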