OperatorObserveOn onComplete can be emitted despite onError being called #2929
Conversation
Seeing as I don't use AtomicInteger methods on …
force-pushed from 80748b7 to 0de98b2
I expect also that the lines below in

```java
if (error != null) {
    return;
}
```

… I'll remove them in an update to the PR now.
force-pushed from e44d6c4 to 9b40386
Nice catch, but you don't need to introduce that state variable. Remove …
Ok, I'll have a look at that. I agree that the …
force-pushed from 9b40386 to 56b5390
Righto, I've run with your suggestion. I also removed the … By the way, I renamed …
Looks good. Could you do a perf comparison?
Sure
Benchmarks improved a few percent in general.
Ran using …
I'll just fix a comment and I don't think …
force-pushed from 56b5390 to 54b5c9d
The variability in perfs is pretty large. I'll run the full benchmarks and report back on those.
I usually run the observeOn perf with -r 5.
Thanks, I used …
I'd call it a draw: …
I think the main cause of the fluctuation is the thread hopping of the emission of the source. If it hops to the observation thread, that is less traffic, I guess. If you have time, you could modify the perf by adding subscribeOn, which guarantees there is always a thread boundary crossed.
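For illustration, a perf of roughly this shape would force the hop; the operators, count and schedulers here are assumptions, not the actual benchmark code:

```java
import rx.Observable;
import rx.schedulers.Schedulers;

public class ObserveOnHopSketch {
    public static void main(String[] args) {
        // Illustrative sketch only: subscribing on a computation thread means
        // observeOn always has a real thread boundary to cross, so the perf
        // no longer depends on which thread the source happens to emit on.
        Observable.range(1, 1_000_000)
                .subscribeOn(Schedulers.computation()) // source emits off the caller thread
                .observeOn(Schedulers.computation())   // the hand-off being measured
                .toBlocking()
                .last();
    }
}
```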
I added …
Extra perfs so they can be compared with the old ones anytime.
Righto, here are the perfs including new onSubscribe ones: …
force-pushed from 95ee64d to 1012b2b
@akarnokd I can maybe get a few percent improvement in most of the benchmarks by touching the volatile …
```java
} else {
if (requested == 0 && completed && queue.isEmpty()) {
if (finished) {
if (error != null) {
```
Avoid reading volatile twice here.
```java
if (_finished) {
    if (error != null) {
        // even if there are onNext in the queue we eagerly notify of error
        child.onError(error);
```
I meant error.
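The pattern being asked for, as I read it, is a single read of the volatile error field into a local before both the test and the emission. A minimal sketch with the surrounding fields stubbed in (names are taken from the snippet above, not the PR code):

```java
// Read the volatile `error` exactly once so the null check and the emission
// cannot observe two different values.
class ErrorReadOnceSketch {
    volatile Throwable error;
    rx.Subscriber<Object> child;

    boolean checkError() {
        Throwable e = error;      // single volatile read
        if (e != null) {
            child.onError(e);     // uses the same value that was tested
            return true;
        }
        return false;
    }
}
```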
@akarnokd Seeing as error bypasses the queue, do you think I should add …
In fact could clear the queue in …
That would break the queue. Instead, clear the queue on the pollQueue side before emitting onError.
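A rough sketch of what that looks like inside the drain loop; the field names follow the snippets above, and this is not the exact PR code:

```java
// Inside the pollQueue/drain loop: on a terminal error, drop anything still
// buffered and emit the error eagerly, so no stale onNext values can follow.
if (finished) {
    Throwable e = error;       // single volatile read
    if (e != null) {
        queue.clear();         // discard buffered onNext values
        child.onError(e);
        return;
    } else if (queue.isEmpty()) {
        child.onCompleted();
        return;
    }
}
```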
force-pushed from 9a5ca7e to 099da59
PR updated with clearing the queue on the pollQueue side before emitting onError.
I've got this on my list to review.
One more race condition. We need to increment …
I never understood why it does that many request updates. Here is a simpler version:

```java
int emitted = 0;
do {
    counter = 1;                          // collapse pending signals into this pass
    long produced = 0;
    long r = requested;
    while (!child.isUnsubscribed()) {
        Throwable error;
        if (completed) {
            if ((error = this.error) != null) {
                // the error bypasses any queued onNext values
                child.onError(error);
                return;
            } else if (queue.isEmpty()) {
                child.onCompleted();
                return;
            }
        }
        if (r > 0) {
            Object o = queue.poll();
            if (o != null) {
                child.onNext(on.getValue(o));
                r--;
                emitted++;
                produced++;
            } else {
                break;
            }
        } else {
            break;
        }
    }
    if (produced > 0) {
        REQUESTED.addAndGet(this, -produced);
    }
} while (COUNTER_UPDATER.decrementAndGet(this) > 0);
if (emitted > 0) {
    request(emitted);                     // ask upstream only for replacements
}
```
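The counter = 1 line leans on the usual "missed work" drain pattern. A stripped-down, self-contained sketch of just that mechanism (illustrative, not the PR code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Every signal (onNext enqueued, request, terminal event) calls schedule();
// only the caller that moves the counter 0 -> 1 runs the drain loop, and the
// loop keeps going while more signals arrived during a pass.
class DrainSketch {
    final AtomicInteger counter = new AtomicInteger();

    void schedule() {
        if (counter.getAndIncrement() == 0) {
            drainLoop();
        }
    }

    void drainLoop() {
        do {
            counter.set(1);       // collapse all signals seen so far into this pass
            // ... poll the queue and emit to the child here ...
        } while (counter.decrementAndGet() > 0); // signals arrived mid-pass: go again
    }
}
```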
Nice David, I was going to do that in a later PR but now seems good!
I've updated the PR with the new …
force-pushed from 5153f1a to 18caf0e
Rebased into 2 commits.
@akarnokd I was thinking of changing the last lines from

```java
if (emitted > 0)
    request(emitted);
```

to

```java
if (emitted > 0) {
    request(Math.min(RxRingBuffer.SIZE - queue.size(), emitted));
}
```

This was just to reduce the probability of an overflow of the RingBuffer. I haven't seen it happen, but it looks to me that it is possible for …
In …
There is no need for that because the queue is bounded and you'd produce up to the capacity and request only replacements after. Even though the downstream requests more, it isn't directly translated to requests for upstream. For example, imagine the queue is full and there are plenty of requests available. Now the inner loop can drain the queue completely and exit when it becomes empty. Since the upstream honors backpressure, there won't be any new enqueueing of values during this time and you get …
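A tiny simulation of that argument (the capacity of 128 and all names here are assumptions for illustration, not code from the PR): the queue size plus the outstanding upstream requests never exceeds the initial request, so the ring buffer cannot overflow.

```java
import java.util.ArrayDeque;

public class ReplacementRequestSketch {
    public static void main(String[] args) {
        int capacity = 128;                      // assumed ring buffer size
        int outstanding = capacity;              // initial request(capacity) to upstream
        ArrayDeque<Integer> queue = new ArrayDeque<Integer>();

        for (int round = 0; round < 10_000; round++) {
            // upstream honors backpressure: it pushes at most what was requested
            int pushed = Math.min(outstanding, 7);
            outstanding -= pushed;
            for (int i = 0; i < pushed; i++) queue.add(i);

            // drain pass hands some values to the child, then requests replacements
            int emitted = Math.min(queue.size(), 5);
            for (int i = 0; i < emitted; i++) queue.poll();
            outstanding += emitted;              // request(emitted)

            // invariant: queued + outstanding == capacity, so no overflow
            if (queue.size() + outstanding != capacity || queue.size() > capacity) {
                throw new AssertionError("ring buffer invariant violated");
            }
        }
        System.out.println("queue never exceeded " + capacity);
    }
}
```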
Thanks. I forgot about the counter = 1 line!
I think that's my fault. I had a version that did batching at one point but it was buggy, so I got it "working" and left that for later optimization. It is far better if we can request in batches.
Ah, wrong … Changes look good.
OperatorObserveOn onComplete can be emitted despite onError being called
This is a fix for a race condition in OperatorObserveOn where, if thread A gets to L164 and thread B starts the pollQueue loop, then it will act as if the stream had completed normally instead of with an error. The effect is that a stream could appear to complete normally when in fact an error had occurred.

Using two boolean volatiles, completed and failed, that as a pair were not atomically updated/read exposed us to this race condition. The fix is to use a single volatile integer status to represent the states ACTIVE, COMPLETED and ERRORED, replacing completed and failed.
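For illustration, the "single volatile integer" idea described above could look roughly like this; the names are mine, and the merged code may differ since the review above dropped the extra state variable:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: one CAS-guarded state field means a racing
// onCompleted can never be reported after (or instead of) an onError.
class TerminalStateSketch {
    static final int ACTIVE = 0;
    static final int COMPLETED = 1;
    static final int ERRORED = 2;

    final AtomicInteger status = new AtomicInteger(ACTIVE);
    volatile Throwable error;

    boolean tryComplete() {
        // only the first terminal transition wins
        return status.compareAndSet(ACTIVE, COMPLETED);
    }

    boolean tryError(Throwable e) {
        error = e;                                    // publish the error first
        return status.compareAndSet(ACTIVE, ERRORED); // then flip the single state field
    }
}
```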