The Trouble With Socket Timeout

Hi. We’re currently upgrading a Ruby driver on our platform at work. At the socket level, the old version of this driver uses IO.select, which boils down to the OS’s select system call. A tried and true solution that works as expected in every scenario: it waits for a certain time, and if the time runs out it simply returns nothing and execution resumes. So if a client connects to a server that stops responding without closing the connection, the client can decide what to do about it. Here’s an example of that:

require 'socket'

delay = 5

server =

loop do
  client = server.accept
  puts "#{} > Client arrived. Sleeping for #{delay}s."
  sleep delay
  puts "#{} > Done, replying."
  client.puts "Done. Bye!"
  client.close
end

require 'socket'

host = 'localhost'
port = 2000
timeout = 2

s =, Socket::SOCK_STREAM, 0)
s.connect(Socket.pack_sockaddr_in(port, host))

rs, =[s], [], [], timeout)
if rs
  puts rs[0].read(1000)
else
  puts 'Timeout'
end


Run the server, and then run client-io-select.rb. As expected, the client times out after 2s while the server deliberately sleeps for 5s. Change the client timeout to 6s and it prints the server’s reply. The new version of the driver changed that implementation in favour of setting the timeout value as an option on the socket, as specified in the socket man page and elsewhere. So instead of using IO.select, it calls Socket’s setsockopt method before connecting to set both SO_RCVTIMEO and SO_SNDTIMEO, which translate to the OS’s socket options. After connecting it uses the socket’s read method directly, trusting Ruby and the OS to handle the timeouts, which sounds nice. However, we found that support for those options is somewhat inconsistent across Ruby MRI versions – I didn’t test other Ruby implementations – and across operating systems. An example of a client using this approach:

require 'socket'

host = 'localhost'
port = 2000
timeout = 2

tv = [timeout, 0].pack 'l_2'

s = Socket::AF_INET, Socket::SOCK_STREAM, 0
s.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, tv
s.setsockopt Socket::SOL_SOCKET, Socket::SO_SNDTIMEO, tv
s.connect Socket.pack_sockaddr_in(port, host)

begin
  while data =
    puts data
  end
rescue => e
  puts e
end

We ran that client with Ruby 1.8.7-p374, 1.9.3-p545 and 2.1.2 on Mac OS X 10.9.4, all installed via rvm, against the same server from the first example. On old Ruby 1.8 the client timed out as expected. On the other Ruby versions it waited for the server’s response instead. Before reaching that conclusion, we also ran some tests in C, because we suspected that different operating systems might or might not honour those socket options. Here is the C client we wrote to test it:

#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <netdb.h>

int main(int argc, char *argv[])
{
    char *host = "localhost";
    int port = 2000;
    int timeout = 2;

    int sockfd, n;

    char buffer[256];

    struct sockaddr_in serv_addr;
    struct hostent *server;
    struct timeval tv;

    tv.tv_sec = timeout;
    tv.tv_usec = 0;

    server = gethostbyname(host);
    memset(&serv_addr, 0, sizeof(serv_addr));
    bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
    serv_addr.sin_port = htons(port);
    serv_addr.sin_family = AF_INET;

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(struct timeval));
    setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(struct timeval));
    connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));

    n = read(sockfd, buffer, 255);

    if (n < 0) {
        perror("error reading from socket");
        return 1;
    }

    buffer[n] = '\0';
    printf("%s\n", buffer);
    return 0;
}
We ran that client on Mac OS X 10.9.4 with LLVM 5.1, on Ubuntu 14.04 with GCC 4.8.2 and on CentOS 5.8 with GCC 4.1.2, against the same server from the first example. On OS X the client timed out as expected, but on Ubuntu and CentOS it didn’t. Don’t forget to test it yourself, especially with newer Ruby versions: one of the posts we found while investigating this described a different behaviour because it was based on Ruby 1.8 five years ago. I couldn’t find the reason for the difference between Ruby versions – it might be a build option that had a different default before, but I can’t pinpoint it without better knowledge of the Ruby codebase. The same applies to the difference between operating systems. But the lesson is: setting timeout options on sockets in those Ruby builds does not currently produce the expected behaviour.




    Hi, thanks for raising this issue.

    For me, I’ve tested both the socket options and IO.select. The socket options work well on Ruby on both the Linux and Windows systems:

    Linux Mint 17.1: Ruby 2.2.3p173
    Windows 7: Ruby 2.2.3p173



      On Ruby 2.2.3p173 I tried:

      require 'socket'

      client ='localhost', 2000) # Client, connects to server on port 2000
      rhost = client.peeraddr.last # Get the remote server's IP address

      timeval = [1, 0].pack("l_2")
      client.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval
      client.setsockopt Socket::SOL_SOCKET, Socket::SO_SNDTIMEO, timeval

      begin
        while data =
          puts data
        end
      rescue => e
        puts e
      end

      The server result was:
      2015-10-17 19:12:47 +0300 > Client arrived. Sleeping for 5s.
      2015-10-17 19:12:52 +0300 > Done, replying.

      The client result was:
      Done. Bye!
