
The default write coalescing configuration increases CPU utilization #1191


Closed

drew-richardson opened this issue Sep 14, 2018 · 5 comments · Fixed by #1208

Comments

@drew-richardson

Please answer these questions before submitting your issue. Thanks!

What version of Cassandra are you using?

$ nodetool -h localhost version
ReleaseVersion: 3.11.1

What version of Gocql are you using?

Master (5a139e)

What did you do?

Create a session with the default WriteCoalesceWaitTime value in an application server, run a few queries, then do nothing but wait for incoming requests.
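For reference, a minimal sketch of the setup described above, assuming a local single-node cluster; the session uses gocql's default ClusterConfig, so WriteCoalesceWaitTime keeps its default value:

package main

import (
	"log"
	"time"

	"github.com/gocql/gocql"
)

func main() {
	// Assumption: a local single-node Cassandra cluster.
	cluster := gocql.NewCluster("127.0.0.1")
	// WriteCoalesceWaitTime is left at its default.

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run a few queries...
	for i := 0; i < 3; i++ {
		if err := session.Query(`SELECT now() FROM system.local`).Exec(); err != nil {
			log.Fatal(err)
		}
	}

	// ...then do nothing but wait, as an application server would between requests.
	time.Sleep(time.Hour)
}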

What did you expect to see?

I expected to see very low CPU utilization.

What did you see instead?

Instead, I saw much higher CPU utilization. I bisected this down to the write coalescing change (#1175). Before that commit I see the expected CPU utilization. I can also set WriteCoalesceWaitTime to zero and see the same low CPU utilization.
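As a sketch of that workaround (assuming the WriteCoalesceWaitTime knob from #1175 is exposed on ClusterConfig):

cluster := gocql.NewCluster("127.0.0.1")
// Zero disables write coalescing, restoring the previous low idle CPU usage.
cluster.WriteCoalesceWaitTime = 0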


If you are having connectivity related issues, please share the following additional information.

Describe your Cassandra cluster

Please provide the following information:

  • output of nodetool status
  • output of SELECT peer, rpc_address FROM system.peers
  • rebuild your application with the gocql_debug tag and post the output
@hnx116

hnx116 commented Sep 15, 2018

I have the same issue, high CPU without traffic.

@IvanSafonov

Same issue with the CPU utilization, and queries run at least 3 times slower than before.

revision f596bd36e19ecaa7a9be478cf137fedc75036b02
BenchmarkModel/Create_session-8      	   10000	    125255 ns/op	    2989 B/op	      51 allocs/op
BenchmarkModel/Get_session-8         	   10000	    148158 ns/op	    2546 B/op	      41 allocs/op

9bf6ce5bbcf1d790a892d0d470044bc891b798d6
BenchmarkModel/Create_session-8      	   10000	    141681 ns/op	    3485 B/op	      63 allocs/op
BenchmarkModel/Get_session-8         	   10000	    172861 ns/op	    3058 B/op	      53 allocs/op

fccc3082740443f2c7e49a895cae905a9ab8149a
BenchmarkModel/Create_session-8      	   10000	    136531 ns/op	    3485 B/op	      63 allocs/op
BenchmarkModel/Get_session-8         	   10000	    175200 ns/op	    3059 B/op	      53 allocs/op

e898b2baaf086eead546b4a2fadfb552e18ee330
BenchmarkModel/Create_session-8      	    3000	    524392 ns/op	    3476 B/op	      64 allocs/op
BenchmarkModel/Get_session-8         	    2000	    679636 ns/op	    3092 B/op	      54 allocs/op

5a139e8dcc59d560335dbda641b9f42085e59b0a
BenchmarkModel/Create_session-8      	    3000	    507443 ns/op	    3479 B/op	      64 allocs/op
BenchmarkModel/Get_session-8         	    3000	    585601 ns/op	    3095 B/op	      54 allocs/op

@Zariel
Contributor

Zariel commented Sep 20, 2018

@IvanSafonov use golang.org/x/perf/cmd/benchstat with 5+ runs before and after to check for performance regressions.
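Something along these lines, assuming the benchmarks live in the package being tested and benchstat is installed:

$ go test -run=NONE -bench=BenchmarkModel -count=5 > old.txt   # at the old revision
$ git checkout <new-revision>
$ go test -run=NONE -bench=BenchmarkModel -count=5 > new.txt
$ benchstat old.txt new.txt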

This is related to golang/go#27707

@nejordan15

nejordan15 commented Sep 25, 2018

I saw what seemed to be the same issue as well: an app that was typically using ~8% CPU on gocql version d93886f was maxing out the CPU after updating to 799fb03, even without any traffic.
Will look into whether CPU use drops with this addition. EDIT: when #1198 is merged.

@alourie
Contributor

alourie commented Sep 26, 2018

@nejordan15 you should probably wait until #1198 is fixed too, but with the current master the CPU usage should be down.
