linux write data 100% cpu #77
Comments
Maybe refers to: #75 |
Hi, I submitted a possible fix: you can find more information about it in the thread of #75. Hope it will solve the issue :) Best! |
On Linux, get also spins up to 100% CPU. Stack trace:
#0 0x00007fbc25bcb6d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 |
Multiple threads get data from Redis as follows: bool Get(const std::string& key, std::string& value, bool async = true)
} Thanks. |
Hi @LazyPlanet , Are you experiencing high CPU? Thanks |
Most of the threads are in that state. It is caused by multi-threading. Linux CentOS 7.2, built with GCC, 16 cores, about 1000 calls. |
@Cylix, after migrating to a condition variable, I experience 100% CPU only very rarely. It would be great if @LazyPlanet could find a way to reproduce this bug. |
The demo: I define a class as follows, and when I want some data I call Redis().Get(string key); class Redis
public: Redis()
bool Get(const std::string& key, std::string& value, bool async = true)
}; In multiple threads I call get or save 500 times, which causes the following. Am I misusing it? |
Hi, You should call commit or sync_commit. Remember: when you call .get, .set, or any other Redis command, the command is not sent to the Redis server yet. Instead it is buffered and flushed when you call commit or sync_commit. |
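The buffering behavior described above can be sketched as follows. This is only an illustration of the commit pattern, assuming cpp_redis 4.x and a Redis server reachable at 127.0.0.1:6379 (both are assumptions, not details from the thread):

```cpp
#include <cpp_redis/cpp_redis>
#include <iostream>

int main() {
  cpp_redis::client client;
  try {
    client.connect("127.0.0.1", 6379);  // assumed local server

    // get() only buffers the command; nothing has been sent yet.
    auto reply_future = client.get("some_key");

    // sync_commit() flushes the buffer and waits for the replies,
    // so reading the future below will not block indefinitely.
    client.sync_commit();

    auto reply = reply_future.get();
    if (reply.is_string())
      std::cout << reply.as_string() << std::endl;
  } catch (const cpp_redis::redis_error& e) {
    std::cerr << e.what() << std::endl;
  }
  return 0;
}
```

Calling reply_future.get() before the commit is what leaves a thread parked in pthread_cond_wait: the future can never become ready because the command was never flushed.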
Oh, sorry!! |
Sorry to bother you again: there is a lock involved. Am I doing something wrong? There are about 100 connections to Redis at this point. void Game::SavePlayBack()
} |
I want to know: if I use _client.sync_commit(std::chrono::milliseconds(100));, can it avoid blocking in pthread_cond_wait? |
In the new example you showed, you again called the blocking get() on the future before committing. You need to call commit before it.
|
But sometimes it causes the error and sometimes it does not. I am very sorry to ask this question again. |
I have changed my code, but it still does not work, as below: Code example:
If I delete line: |
OK, it has caused this bug all the same. Code example:
bool Get(const std::string& key, google::protobuf::Message& value, bool async = true)
{
if (!_client.is_connected()) {
_client.connect(_hostname, _port);
if (!_client.is_connected()) { return false; }
}
auto has_auth = _client.auth(_password);
auto get = _client.get(key);
if (async) {
_client.commit();
} else {
_client.sync_commit(std::chrono::milliseconds(100)); ////////////Call this.
}
/*
if (has_auth.get().ko()) {
return false;
}
*/
auto reply = get.get(); //////////////////////Wait Error
if (!reply.is_string()) { return false; }
auto success = value.ParseFromString(reply.as_string());
if (!success) { return false; }
return true;
}
bool Save(const std::string& key, const std::string& value, bool async = true)
Can you help me, please? @Cylix |
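For reference, a minimal reworking of the Get function above, offered only as a sketch: it assumes cpp_redis 4.x and the same members (_client, _hostname, _port, _password) used in the thread. The changes are to authenticate once per connection rather than queueing an unchecked auth on every call, and to bound the wait on the future so a lost reply cannot park the thread in pthread_cond_wait forever:

```cpp
// Sketch only: assumes cpp_redis 4.x and members _client, _hostname,
// _port, _password as defined elsewhere in the poster's class.
bool Get(const std::string& key, google::protobuf::Message& value, bool async = true) {
  if (!_client.is_connected()) {
    _client.connect(_hostname, _port);
    if (!_client.is_connected()) { return false; }
    _client.auth(_password);  // authenticate once per connection, not per call
  }

  auto get = _client.get(key);

  if (async) {
    _client.commit();         // flush; replies arrive asynchronously
  } else {
    _client.sync_commit(std::chrono::milliseconds(100));
  }

  // Bound the wait instead of blocking forever in get.get().
  if (get.wait_for(std::chrono::milliseconds(500)) != std::future_status::ready) {
    return false;
  }

  auto reply = get.get();
  if (!reply.is_string()) { return false; }
  return value.ParseFromString(reply.as_string());
}
```

cpp_redis commands return std::future<cpp_redis::reply>, so wait_for with a timeout is available; the 500 ms value here is an arbitrary illustration.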
Which version of the library are you using? If not the latest, can you try to upgrade to see if it solves your issue? Best |
v3.5.3 (July 2nd, 2017). I will get master and try it, thank you very much! |
Yep, if you can try a version 4.0 or above, that would be perfect. There are lots of changes and fixes. Best |
The problem above was solved when I used v4.3.0 and cpp_redis::network::set_default_nb_workers(3). |
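The fix mentioned above can be sketched as follows (an illustration only, assuming cpp_redis 4.x as in the comment; the assumption here is that the call should happen before the first client is constructed so the worker count applies from the start):

```cpp
#include <cpp_redis/cpp_redis>

int main() {
  // Limit the shared network io_service to 3 worker threads.
  // Assumed best called before any client/subscriber is created.
  cpp_redis::network::set_default_nb_workers(3);

  cpp_redis::client client;
  // ... connect, issue commands, then commit()/sync_commit() ...
  return 0;
}
```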
10,000 connections sounds like a design problem in your software, to be honest. The connect() failure must happen either because you cannot create any more sockets (you have used all the fds allowed by the OS) or because the redis-server no longer accepts connections from you (flooded by the number of connections). Connection pooling should be done on your side by keeping a pool of cpp_redis clients and subscribers (both classes are thread-safe), and you should control how many clients you have. Destroying a client or subscriber instance automatically disconnects it and cleans up all OS resources (sockets). I don't know how you handle it, but to reach 10k it seems you are spawning a new client for each command without deleting it. Try to re-use existing clients. Best |
I thought that after executing my command the client would disconnect automatically from the Redis server, so for every command I created a new Redis object that connects to the server. |
I am sorry to report this deadlock. I don't know if it is a bug; the call stack:
DEADLOCK: [Switching to thread 7 (Thread 0x7fd463fff700 (LWP 10358))] Thank you very much! @Cylix |
I think a lock should be added in cpp_redis::client::~client(). |
Hi, Do you have a code example to help me reproduce this issue? Best |
I am not sure it causes this problem 100% of the time. I use your library to build a game server. Most of the time there is no problem, but after running for 2 days it appears and all players can no longer operate... In multiple threads, many players save and get data using Redis().Get(xxx) or Redis().Save(xxx) as follows... Code example: class Redis
public:
} Thank you very much, and sorry to cause this problem. |
int main()
} I have been using this framework for a few days; it works on Windows but crashes on Linux (Ubuntu). Info: and sometimes crash info: How can I solve this problem? Thank you so much! |
Hi @Eggache666, Your issue is unrelated to the current thread, so could you open a new issue instead? Additionally, concerning your problem, you may first try connecting to your server using redis-cli. If redis-cli fails to connect, double-check that your server is running; if it is, check which IP/port it listens on and whether that IP is reachable with your network configuration. Best |
Thread 4 (Thread 0x7f6ca5eaa700 (LWP 910)):
#0 0x00007f6cacedc933 in select () from /lib64/libc.so.6
#1 0x00000000004ad6fa in tacopie::io_service::poll() ()
#2 0x00007f6cad77d230 in ?? () from /lib64/libstdc++.so.6
#3 0x00007f6cacbdadf3 in start_thread () from /lib64/libpthread.so.0
#4 0x00007f6cacee51ad in clone () from /lib64/libc.so.6
Threads: 2344 total, 3 running, 2341 sleeping, 0 stopped, 0 zombie
%Cpu(s): 21.0 us, 32.5 sy, 0.0 ni, 46.3 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem: 16269700 total, 15801636 used, 468064 free, 158872 buffers
KiB Swap: 0 total, 0 used, 0 free. 2272844 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
910 root 20 0 416432 1336 980 R 99.8 0.0 46:30.66 PlayBackServer