pycassa.pool.MaximumRetryException: Retried 1 times. Last failure was TTransportException: TSocket read 0 bytes

Bug #1091067 reported by Haw Loeung

Bug Description


So another cause of retracers exiting seems to be stability issues with one of the Cassandra nodes (jumbee). Whenever it stops accepting connections, this happens:

2012-12-17 02:48:09,683:INFO:root:Decompressing to /srv/
2012-12-17 02:48:10,794:INFO:root:Writing back to Cassandra
2012-12-17 02:48:10,806:INFO:pycassa.pool:Connection 47028496 ( in pool 47028112 failed: TSocket read 0 bytes
Traceback (most recent call last):
  File "/srv/", line 517, in <module>
  File "/srv/", line 514, in main
  File "/srv/", line 146, in listen
    self.run_forever(channel, self.callback, queue=retrace)
  File "/srv/", line 159, in run_forever
  File "/usr/lib/python2.7/dist-packages/amqplib/client_0_8/", line 97, in wait
    return self.dispatch_method(method_sig, args, content)
  File "/usr/lib/python2.7/dist-packages/amqplib/client_0_8/", line 117, in dispatch_method
    return amqp_method(self, args, content)
  File "/usr/lib/python2.7/dist-packages/amqplib/client_0_8/", line 2060, in _basic_deliver
  File "/srv/", line 372, in callback
    self.update_retrace_stats(release, day_key, retracing_time, True)
  File "/srv/", line 193, in update_retrace_stats
    self.retrace_stats_fam.add(day_key, release + status)
  File "/usr/lib/pymodules/python2.7/pycassa/", line 1025, in add
  File "/usr/lib/pymodules/python2.7/pycassa/", line 577, in execute
    return getattr(conn, f)(*args, **kwargs)
  File "/usr/lib/pymodules/python2.7/pycassa/", line 146, in new_f
    (self._retry_count, exc.__class__.__name__, exc))
pycassa.pool.MaximumRetryException: Retried 1 times. Last failure was TTransportException: TSocket read 0 bytes

Any chance you could take a look into this?



Revision history for this message
Haw Loeung (hloeung) wrote :

From what I understand, it will only retry (up to max_retries) if it fails due to a TimedOutException or an UnavailableException. In this case, it's failing due to a TTransportException.
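The retry behaviour described above can be sketched roughly as follows. This is a self-contained illustration with stand-in exception classes, not the real pycassa/Thrift internals: only the "retryable" server-side errors get another attempt, while a transport failure surfaces immediately wrapped in a MaximumRetryException.

```python
# Stand-ins for thrift/pycassa exception classes (assumptions, not the
# real library types).
class TimedOutException(Exception): pass
class UnavailableException(Exception): pass
class TTransportException(Exception): pass
class MaximumRetryException(Exception): pass

RETRYABLE = (TimedOutException, UnavailableException)

def call_with_retries(fn, max_retries=1):
    """Retry fn() only on 'retryable' server-side errors; a transport
    failure (e.g. a dead socket reading 0 bytes) is not retried and is
    re-raised wrapped in MaximumRetryException."""
    retries = 0
    while True:
        try:
            return fn()
        except RETRYABLE:
            retries += 1
            if retries > max_retries:
                raise MaximumRetryException('Retried %d times.' % retries)
        except TTransportException as exc:
            raise MaximumRetryException(
                'Last failure was TTransportException: %s' % exc)
```

Under this model, a node that flaps during a timeout gets retried, but a node that drops the connection outright fails the whole operation on the spot.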

tags: added: canonical-webops-eng
Revision history for this message
Haw Loeung (hloeung) wrote :

We're also seeing this on the frontends, e.g.:

Oops-Id: OOPS-2575daisy.ubuntu.com7
Exception-Type: MaximumRetryException
Exception-Value: Retried 1 times. Last failure was TTransportException: TSocket read 0 bytes
Date: 2013-01-18T00:51:04.569353+00:00

Any chance you could take a look?



Changed in daisy:
importance: Undecided → High
Revision history for this message
Brian Murray (brian-murray) wrote :

Looking at the errors from 2013-02-28, there are 163 occurrences of this TTransportException. It seems to happen when trying to update the counters column family for either the release:source_package counter or the MissingSAS counter.

This also happened before the branch adding the MissingSAS and release:source_package counters was merged into daisy; if we look earlier in 2013-02, we can see it happening with KernelCrash.

2013-02-01/69917.daisy.ubuntu.com9: counters_fam.add('KernelCrash', day_key)
2013-02-01/73433.daisy.ubuntu.com11: counters_fam.add('KernelCrash', day_key)
2013-02-05/85220.daisy.ubuntu.com25: counters_fam.add('KernelCrash', day_key)
2013-02-05/52849.daisy.ubuntu.com38: counters_fam.add('KernelCrash', day_key)

I'd imagine it happens less frequently for kernel crashes since there are fewer of those.

Haw Loeung (hloeung)
Changed in daisy:
status: New → Confirmed
Revision history for this message
Evan (ev) wrote :

Yes. Any time you see "Retried 1 times", that's a counter increment that failed. By default, Cassandra does not retry counters, since incrementing a counter is not an idempotent operation.

We should resolve this by doing three things:

1. Never retry counters. The result is unpredictable and can lead to significant over-counting whenever the Cassandra nodes come under heavy load (such as during a compaction).
2. Catch the exceptions and pass on them. We have pycassa wired to statsd, and it should be producing graphs for retries at If it's not, we need to implement that.
3. Anytime we care about the accuracy of a counter, it should be matched with a column family that uses timeuuids (like the oops identifiers) or something else unique in a wide row. This should be matched with a cron job to count the wide row and repair the counter. See for more details on this.
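Point 2 might look something like the following minimal sketch. The `statsd_client` and `counters_fam` objects here are hypothetical stand-ins (any pycassa counter column family and any statsd-style client with an `incr()` method would do), and `MaximumRetryException` stands in for `pycassa.pool.MaximumRetryException`.

```python
# Stand-in for pycassa.pool.MaximumRetryException (assumption).
class MaximumRetryException(Exception):
    pass

def bump_counter(counters_fam, row_key, column, statsd_client=None):
    """Best-effort counter increment: never retried, never fatal.
    Failures are swallowed but recorded, so they still show up on a
    graph instead of crashing the retracer."""
    try:
        counters_fam.add(row_key, column)
    except MaximumRetryException:
        if statsd_client is not None:
            statsd_client.incr('counter_increment_failed')
```

This keeps a flaky node from taking down the whole pipeline while still leaving a visible signal of how often increments are being lost.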

eBay covered this approach a while back:
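The repair job in point 3 can be sketched like this, using plain dicts in place of Cassandra column families (an assumption purely for illustration). The wide row holds one unique column per event, e.g. a timeuuid per OOPS, so counting it gives the ground truth; the cron job overwrites whatever value the counter has drifted to.

```python
def repair_counter(counters, wide_row, day_key, column):
    """Recompute the counter for (day_key, column) from the
    authoritative wide row of unique identifiers, and overwrite
    whatever drifted value the counter currently holds."""
    true_count = len(wide_row.get((day_key, column), ()))
    counters.setdefault(day_key, {})[column] = true_count
    return true_count
```

Because the wide row records each event exactly once, re-counting it is idempotent, which is exactly the property the raw counter increments lack.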
