Discussion:
[rabbitmq-discuss] Publisher Confirms stop occurring when Consumer is present and queue is large
Cameron Davison
2011-10-31 23:24:02 UTC
I am using version 2.6.1 of the RabbitMQ server.

I created a durable direct exchange with the Java client, declared a
durable, not exclusive, not autoDelete queue, and bound the exchange
and the queue together. I then put the channel into confirm mode and
started writing messages to the queue. I am publishing messages to
the broker in batches of 2000 as MINIMAL_PERSISTENT_BASIC and then
calling channel.waitForConfirms() to block until all the messages in
the batch have been confirmed. This works rather well both with an
empty queue and a consumer attached, and when there is no consumer
present. I then set up a basicConsume with a QueueingConsumer and
explicit acks, acking every 1000 messages I receive with the multiple
flag set to true. I am reading and writing basically as fast as
possible, with very little code in the critical path, just to test
that RabbitMQ behaves as I would expect.
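
Roughly, the two processes look like this (a simplified sketch of my
test, not the exact code; the host, exchange and queue names are
placeholders):

import com.rabbitmq.client.*;

public class ConfirmTest {
    static final String EXCHANGE = "test-x", QUEUE = "test-q", KEY = "test-key";

    // writer process: durable topology, confirm mode, batches of 2000 persistent messages
    static void writer(Connection conn) throws Exception {
        Channel ch = conn.createChannel();
        ch.exchangeDeclare(EXCHANGE, "direct", true);          // durable direct exchange
        ch.queueDeclare(QUEUE, true, false, false, null);      // durable, not exclusive, not autoDelete
        ch.queueBind(QUEUE, EXCHANGE, KEY);
        ch.confirmSelect();                                    // enable publisher confirms
        byte[] body = "payload".getBytes();
        while (true) {
            for (int i = 0; i < 2000; i++) {
                ch.basicPublish(EXCHANGE, KEY,
                        MessageProperties.MINIMAL_PERSISTENT_BASIC, body);
            }
            ch.waitForConfirms();                              // block until the whole batch is confirmed
        }
    }

    // reader process: explicit acks, one ack per 1000 deliveries with multiple=true
    static void reader(Connection conn) throws Exception {
        Channel ch = conn.createChannel();
        QueueingConsumer consumer = new QueueingConsumer(ch);
        ch.basicConsume(QUEUE, false, consumer);               // autoAck=false, ack manually
        long received = 0;
        while (true) {
            QueueingConsumer.Delivery d = consumer.nextDelivery();
            if (++received % 1000 == 0) {
                ch.basicAck(d.getEnvelope().getDeliveryTag(), true); // ack everything up to this tag
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                          // placeholder broker address
        Connection conn = factory.newConnection();
        if (args.length > 0 && args[0].equals("reader")) reader(conn); else writer(conn);
    }
}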

I start the writer process; it begins writing and waiting for confirms
from Rabbit and runs fine. After about 30 seconds, by which point many
messages have built up in the queue, I start the reader. As soon as I
start the reader the writer process stops receiving confirms. I can
even see in the management console that the writer channel has 2000
messages awaiting confirms. I tried restarting the writer, but it
still does not get any confirms while the reader process is consuming
from RabbitMQ. It stays like this until the reader empties the queue;
at that point, I assume, it starts acking messages as soon as they are
written, so the confirms come from the consumer acks rather than from
the writes to disk. Is this expected behavior? Do I have it configured
not to fsync to disk while there is a consumer reading the messages,
and would that even be possible?

Thanks,
Cameron
Matthew Sackman
2011-11-01 06:31:39 UTC
Hi Cameron,

Thanks for the detailed report.

Your report has prompted us to think carefully about your test, and we
can see how the broker can end up behaving this way. This is not the
desired behaviour of the broker and thus it does constitute a bug.
There is no work-around.

What is happening is that your publisher is waiting for the broker to
send the confirms back to it. For this to happen, internally, the
queues have to send some messages to themselves. Those internal
messages are being starved out because Rabbit prioritises getting rid
of messages, i.e. delivering them to consumers. Thus the queue's
eagerness to drive the consumer prevents it from processing the
internal messages that would lead it to issue the confirms back up,
via the channel, and out to the client.

At a minimum, we need to adjust some message priorities internally. But
the actual fix may turn out to be bigger than that. Given current
timings, I wouldn't expect a fix for this to be in the next release, but
in the one after that.

Best wishes,

Matthew
Cameron Davison
2011-11-02 05:10:21 UTC
Matthew,

Thank you for the reply. Do y'all have a bug tracker that I would be
able to watch so that I can know when y'all address this issue? Do you
know if this same problem would be even worse on a mirrored-queue
RabbitMQ cluster? I am seeing a lot of degradation in write throughput
while reading when using a mirrored-queue cluster. All I really want
is high availability, such that if one server crashes the slave in the
cluster becomes the master and allows for continued throughput. Is
this the correct way to do that?

Cameron
Matthew Sackman
2011-11-04 15:38:22 UTC
Hi Cameron,
Post by Cameron Davison
Thank you for the reply. Do y'all have a bug tracker that I would be
able to watch so that I can know when y'all address this issue?
We do have a bug tracker but alas it's not public because we all like to
curse and scream at each other and generally feel that making it public
would preclude us from doing that, which would be bad for morale.
Post by Cameron Davison
Do you know if this same problem would be even worse on a
mirrored-queue RabbitMQ cluster?
I don't think this bug will affect mirrored queues in a worse way than
non-mirrored queues...
Post by Cameron Davison
I am seeing a lot of degradation in write throughput while reading
when using a mirrored-queue cluster. All I really want is high
availability, such that if one server crashes the slave in the
cluster becomes the master and allows for continued throughput. Is
this the correct way to do that?
Yes - you are doing things the right way. Mirrored queues are much
slower than non-mirrored queues due to the additional work that has to
be done. You should probably expect to see about a ten-fold decrease in
performance.

In http://old.nabble.com/Mirror-queues-and-poor-write-performance-td32727693.html
I report getting a little over 2kHz (roughly 2,000 messages per second)
on a single mirrored queue with one consumer keeping the queue empty.
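
For reference, on the 2.6.x broker series mirroring is requested
per-queue via the x-ha-policy argument when the queue is declared, so
the queueDeclare call in a test like yours becomes something like the
following (just a sketch; ch is the declaring channel, the queue name
is a placeholder, and the arguments map is a java.util.HashMap):

Map<String, Object> args = new HashMap<String, Object>();
args.put("x-ha-policy", "all");                       // mirror the queue onto every node in the cluster
ch.queueDeclare("test-q", true, false, false, args);  // durable, not exclusive, not autoDelete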

But the bug that you've identified will certainly affect mirrored queues
as well as non-mirrored queues.

One thing you could try on the consumer:
1. Set a fairly healthy qos prefetch of N (e.g. N=100).
2. Only ack every N/2 (e.g. 50) messages, but turn on the "multiple"
flag in the ack.

The effect of these changes will be to reduce the number of
consumer-related messages the queue has to deal with, which should
hopefully allow the queue to process the publishes faster (and issue
the confirms).
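
In code, that would look something like this (a rough sketch against
the Java client; conn is your existing connection and the queue name
is a placeholder):

Channel ch = conn.createChannel();
int prefetch = 100;                                          // the qos prefetch, N
ch.basicQos(prefetch);                                       // at most N unacked deliveries in flight
QueueingConsumer consumer = new QueueingConsumer(ch);
ch.basicConsume("test-q", false, consumer);                  // autoAck=false, explicit acks
long received = 0;
while (true) {
    QueueingConsumer.Delivery d = consumer.nextDelivery();
    if (++received % (prefetch / 2) == 0) {                  // every N/2 = 50 messages...
        ch.basicAck(d.getEnvelope().getDeliveryTag(), true); // ...one ack with multiple=true
    }
}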

If you're able to try these changes, I'd be interested in what
improvements (if any) they achieve for you.

Best wishes,

Matthew
Jerry Kuch
2011-11-04 17:32:11 UTC
Do watch for our upcoming "Cursing and Screaming as a Service" (CaSaaS)
offering! :-)

Jerry

Cameron Davison
2012-08-29 21:07:10 UTC
It has been a while since I wrote to the group. I was wondering if
this bug was ever fixed in a more recent release?

Thanks,
Cameron
Matthias Radestock
2012-08-29 21:48:52 UTC
Post by Cameron Davison
I was wondering if this bug was ever fixed in a more recent release?
Yes.

Matthias.
