
I have a gRPC server / client that will occasionally hang, causing issues. It is being called from a Flask application that checks in with a background worker process to make sure it's alive and functioning. To make the request to the gRPC server, I have:

try:
    health = self.grpc_client.Health(self.health_ping)
    if health.message == u'PONG':
        return {
            u'healthy': True,
            u'message': {
                u'healthy': True,
                u'message': u'success'
            },
            u'status_code': 200
        }
except Exception as e:
    if str(e.code()) == u'StatusCode.UNAVAILABLE':
        return {
            u'healthy': False,
            u'message': {
                u'healthy': False,
                u'message': (u'[503 Unavailable] connection to worker '
                             u'failed')},
            u'status_code': 200}
    elif str(e.code()) == u'StatusCode.INTERNAL':
        return {
            u'healthy': False,
            u'message': {
                u'healthy': False,
                u'message': (u'[500 Internal] worker encountered '
                             u'an error while responding')},
            u'status_code': 200}
    return {
        u'healthy': False,
        u'message': {u'healthy': False, u'message': e.message},
        u'status_code': 500
    }

the client is a stub:

# grpc and the generated stub class are assumed to be imported, e.g.:
# import grpc
# from projectworker_pb2_grpc import WorkerStub  (module name depends on how the proto was compiled)
channel = grpc.insecure_channel(address)
stub = WorkerStub(channel)
return stub

the proto is:

syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.company.project.worker";
option java_outer_classname = "ProjectWorker";
option objc_class_prefix = "PJW";

package projectworker;

service Worker {
  rpc Health (Ping) returns (Pong) {}
}

// The request message containing PONG
message Ping {
  string message = 1;
}

// The response message containing PONG
message Pong {
  string message = 1;
}

Using this code, how would I then add a timeout to ensure that I can always respond rather than fail and hang?

3 Answers


timeout is an optional keyword parameter on RPC invocation, so you should change

health = self.grpc_client.Health(self.health_ping)

to

health = self.grpc_client.Health(self.health_ping, timeout=my_timeout_in_seconds)

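Applied to the health check in the question, that might look like the sketch below; the timeout value and the extra DEADLINE_EXCEEDED branch are illustrative additions, not part of the original code:

HEALTH_TIMEOUT_SECONDS = 2  # hypothetical value; pick one that suits your worker

try:
    health = self.grpc_client.Health(self.health_ping,
                                     timeout=HEALTH_TIMEOUT_SECONDS)
    if health.message == u'PONG':
        ...  # existing success handling
except Exception as e:
    if str(e.code()) == u'StatusCode.DEADLINE_EXCEEDED':
        # The worker did not answer within the deadline; report it as
        # unhealthy instead of letting the Flask request hang.
        return {
            u'healthy': False,
            u'message': {
                u'healthy': False,
                u'message': u'worker health check timed out'},
            u'status_code': 200}
    ...  # existing UNAVAILABLE / INTERNAL / fallback handling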


2 Comments

Ah, ok. Thank you so much for that information! Can't believe I missed that.
Is there a synonymous setting among the channel options here: github.com/grpc/grpc/blob/v1.36.x/include/grpc/impl/codegen/…? Is it grpc.grpclb_call_timeout_ms?

To define a timeout on the client side, add an optional parameter timeout=<timeout in seconds> when you invoke a service function:

channel = grpc.insecure_channel(...)
stub = my_service_pb2_grpc.MyServiceStub(channel)
request = my_service_pb2.DoSomethingRequest(data='this is my data')
response = stub.DoSomething(request, timeout=0.5)

💡 Note that a timeout raises an exception (grpc.RpcError) rather than returning an error response.
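As a rough illustration, reusing the hypothetical stub and request from the snippet above, a call with a deliberately tiny timeout raises grpc.RpcError with the DEADLINE_EXCEEDED status:

import grpc

try:
    # Deliberately tiny timeout so the deadline is exceeded.
    response = stub.DoSomething(request, timeout=0.001)
except grpc.RpcError as e:
    print(e.code())     # StatusCode.DEADLINE_EXCEEDED
    print(e.details())  # e.g. "Deadline Exceeded"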



You may also want to catch and handle timeouts differently from other errors. Sadly, the documentation is not great on this topic, so here's what you have:

try:
    health = self.grpc_client.Health(self.health_ping, timeout=my_timeout_in_seconds)
except grpc.RpcError as e:
    e.details()              # human-readable error details
    status_code = e.code()   # a grpc.StatusCode enum member
    status_code.name         # e.g. 'DEADLINE_EXCEEDED'
    status_code.value        # e.g. (4, 'deadline exceeded')

A timeout will yield a status_code of grpc.StatusCode.DEADLINE_EXCEEDED (its .name is 'DEADLINE_EXCEEDED').
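Putting the two ideas together for the question's Worker service, a self-contained health check might look like the following sketch; the generated module names, timeout value, and return shape are assumptions for illustration:

import grpc

# Module names depend on how the .proto file was compiled; these are assumed.
from projectworker_pb2 import Ping
from projectworker_pb2_grpc import WorkerStub


def check_worker_health(address, timeout_seconds=2.0):
    """Ping the worker and report its health without ever hanging."""
    channel = grpc.insecure_channel(address)
    stub = WorkerStub(channel)
    try:
        pong = stub.Health(Ping(message='PING'), timeout=timeout_seconds)
        return {'healthy': pong.message == 'PONG', 'message': 'success'}
    except grpc.RpcError as e:
        if e.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
            return {'healthy': False, 'message': 'worker health check timed out'}
        if e.code() == grpc.StatusCode.UNAVAILABLE:
            return {'healthy': False, 'message': 'connection to worker failed'}
        return {'healthy': False, 'message': e.details()}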

