|
I'm writing a server/client program in Java using RMI. When the server crashes it's not a problem: clients get a RemoteException and disconnect.
However, I have problems when a client crashes. My server uses a Timer to ping all client objects every now and then; when it can't reach a client, it catches a RemoteException.
It's then supposed to remove the client object from the server (just by removing it from a list), but this seems impossible, because when I try to do anything with the client proxy object it throws another RemoteException. How can I solve this problem?
|
|
You need to be more specific and post some code. There could literally be 1000 things wrong with your code.
If you are just removing the client proxy objects from a list, I don't understand why you would need to invoke any actions upon the object itself...?
edit:
Let me rephrase it better: if you are simply removing an object from the list, you aren't actually doing anything with the object itself. You are merely removing its reference from the container.
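To illustrate, here's a minimal sketch of the kind of ping task I mean, assuming a hypothetical ClientCallback remote interface with a ping() method (all names invented, not taken from your code). The point is that the removal in the catch block is purely local, so it cannot throw a RemoteException:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Iterator;
import java.util.List;
import java.util.TimerTask;

// Assumed remote interface implemented by each client.
interface ClientCallback extends Remote {
    void ping() throws RemoteException;
}

// Hypothetical sketch: ping each client proxy and drop the dead ones.
class PingTask extends TimerTask {
    private final List<ClientCallback> clients;

    PingTask(List<ClientCallback> clients) {
        this.clients = clients;
    }

    @Override
    public void run() {
        synchronized (clients) {
            Iterator<ClientCallback> it = clients.iterator();
            while (it.hasNext()) {
                try {
                    it.next().ping(); // remote call; throws if the client is gone
                } catch (RemoteException e) {
                    // Purely local: just drops the proxy reference from the list.
                    it.remove();
                }
            }
        }
    }
}
```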
edit2:
Also, I don't understand your design: you have your host ping your clients? A better approach would be to have your clients send a pulse to your server, and you can determine which client is down from that.
|
They are correct.
If you call getUser() on a broken client, it will obviously throw exceptions.
A lot of valuable information in that thread if you are patient enough to sift through it.
re: client pinging
It's inefficient because the number of clients can grow without bound, and your server is doing all the work. If you have your clients ping your server instead, you distribute the workload onto the clients.
|
It was a stupid mistake. I'm thinking about your method of pinging the server instead, but I don't see how that would be more efficient...
|
Your question makes no sense. By itself, the RMI server (registry) has no way of calling a method on a client; the server simply exposes a set of methods that an RMI client can call.
If your question is what happens when a client makes an RMI call on your server and the server becomes unresponsive while processing the request: you can set the socket timeout on the client and/or the server to handle that situation cleanly without resorting to polling (look at RMISocketFactory, or write a custom one).
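As a sketch of what I mean, here's a custom socket factory that puts an SO_TIMEOUT on every socket RMI creates, so a hung call fails with a SocketTimeoutException instead of blocking forever (the 10-second value is just an example):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

// Sketch: sockets created by RMI get a read timeout instead of blocking forever.
public class TimeoutSocketFactory extends RMISocketFactory {
    private static final int TIMEOUT_MS = 10000; // example value, tune to your needs

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        Socket socket = new Socket(host, port);
        socket.setSoTimeout(TIMEOUT_MS); // reads now fail with SocketTimeoutException
        return socket;
    }

    @Override
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port);
    }

    public static void install() throws IOException {
        // Must run before any RMI traffic; applies to subsequently created sockets.
        RMISocketFactory.setSocketFactory(new TimeoutSocketFactory());
    }
}
```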
|
On February 14 2011 02:49 Freezard wrote:
It was a stupid mistake. I'm thinking about your method of pinging the server instead, but I don't see how that would be more efficient...

You have (for example) each client ping the server once every minute. Every two minutes, the server goes through its list of clients and dumps any it hasn't heard from since its last sweep. This is a lot more efficient for your server, as all it has to do is listen (which it does anyway) and go through a list checking a flag every couple of minutes. If you put your server in charge of the pinging, it has to periodically go through all the clients, ping each one, wait for responses, tag all of them, decide what an appropriate timeout is, and so on. The total amount of work may be similar, but you'd rather have that work distributed to the clients, which only have to maintain one connection each, than have the server maintaining many connections.
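A minimal sketch of that heartbeat scheme (the interface and names are invented for illustration, and the RMI export/registry plumbing is omitted): clients call ping() once a minute, the server just records when it last heard from each one, and a periodic sweep drops the silent ones:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Assumed remote interface: each client calls this once a minute.
interface HeartbeatService extends Remote {
    void ping(String clientId) throws RemoteException;
}

class HeartbeatServer implements HeartbeatService {
    private static final long TIMEOUT_MS = 2 * 60 * 1000; // two-minute sweep window
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    @Override
    public void ping(String clientId) {
        // Cheap per-client work: just record the time of the last heartbeat.
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // Schedule this from a Timer (or ScheduledExecutorService) every two minutes.
    void sweep() {
        long cutoff = System.currentTimeMillis() - TIMEOUT_MS;
        lastSeen.values().removeIf(t -> t < cutoff); // drop clients not heard from
    }
}
```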
|
On February 14 2011 04:04 Macavenger wrote:
You have (for example) each client ping the server once every minute. Every two minutes, the server goes through its list of clients and dumps any it hasn't heard from since its last sweep. [...]

OK. So in my case I would add a list of users. Every client pings the server with its user as the argument, and the server adds that user to the list (plus any new clients who connect). Every two minutes, the server checks, for every user in the Map<User, Client>, whether the user also exists in the list; if not, it removes that user. It ends by clearing the list.
Is this what you mean? I guess it's more efficient, like you say.
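Something like this is what I'm picturing (rough sketch only; User and Client stand in for my own classes, and User would need proper equals/hashCode to work as a map key):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class User { /* placeholder for my user class; needs equals/hashCode */ }
class Client { /* placeholder for my client proxy/state */ }

class ClientRegistry {
    private final Map<User, Client> clients = new HashMap<>();
    private final Set<User> pingedSinceLastSweep = new HashSet<>();

    // Called (via RMI) by each client once a minute.
    synchronized void ping(User user) {
        pingedSinceLastSweep.add(user);
    }

    // Called by a Timer every two minutes.
    synchronized void sweep() {
        // Keep only the users that pinged since the last sweep...
        clients.keySet().retainAll(pingedSinceLastSweep);
        // ...then clear the list for the next interval.
        pingedSinceLastSweep.clear();
    }
}
```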
|
On February 14 2011 05:04 Freezard wrote:
Is this what you mean? I guess it's more efficient, like you say. [...]

Well, I'm not familiar with exactly what data structures you're using, or with network coding in Java (I've only done it in C++), but that sounds essentially correct. I'm used to doing it as something like a list of IP/port pairs (clients) the server has active connections to, storing a boolean flag for each that indicates whether the server has heard from it recently, and on each sweep either clearing the flag or logging out the client if the flag isn't set.
|
Thank you. I am glad that I found this thread online; I was very impressed with the discussion of the topic.
We have an application in the production environment of our organization: a server/client program in Java using RMI. Both the server and the client(s) use a Timer to ping each other at a regular interval. If either side does not receive a reply when pinging the other, then after reaching a limit (> ping interval + tolerance) they disconnect from each other. In that process, the RMI server sometimes knocks out all the other clients. Things seemed to be working fine until about a year ago, when the user base moved to a new building. Since then we have noticed a few network issues in the infrastructure, and this particular application has been affected very badly: the application outage has been recurring once or twice per week. I inherited this application's code and maintenance from a vendor after I joined the organization. Recently we migrated the database part of the application to Oracle RAC, and that caused some of the queries to take longer than they did before. Ironically, all these issues surface only in the production environment, not in Test/Dev. It's been extremely challenging to resolve them. I would truly appreciate it if someone could provide me some guidance.
For the past two days, three of the use-case scenarios seem to be taking longer to finish in the database, and by that time the socket timeout (set to 15 seconds) is reached in each scenario. To our knowledge there is no issue with the database SQL, but we have started looking into it to find the root cause. In the meantime, I thought of increasing the RMI server socket timeout to 5 minutes as an interim solution.
Does this change have any side effects in this client/server model?
Please advise.
Thanks, Ram
|
timeouts are usually used for error cases, so increasing the timeouts *should* not change anything for the normal program flow. however, as every engineer knows, changing anything can break everything. ;P
of course a "nicer" solution would be to use multithreading, so as to still be able to reply to pings while the database is working. that is however considerably more work, which might not be justified depending on your further plans for said application.
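a rough sketch of what i mean (all names invented; in a real rmi setup you'd return a job id and let the client poll, since a Future can't travel over the wire):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// sketch: run the slow database work on a worker pool, so the thread that
// accepted the request is free again and pings still get answered in time.
class QueryService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // returns a handle immediately instead of blocking on the query
    Future<String> runQueryAsync(String sql) {
        return pool.submit(() -> runQuery(sql)); // slow call happens off-thread
    }

    private String runQuery(String sql) {
        // placeholder for the actual jdbc work
        return "result of " + sql;
    }
}
```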
/edit: oh and also the same goes for you, better post on stackoverflow.com for instance. this is not really a programmer help forum here, although quite a few programmers frequent it.
|