
...aside from the benefit in separate performance monitoring and logging.

For logging, I am confident I can get granularity by manually adding the name of the "routine" to each call. This is how it works now, with several discrete functions for different parts of the system:

[Screenshot: typical logs with redacted private data]

There are multiple automatic logs: the start and finish of each routine, for example. It would be more challenging to find out how expensive certain routines are, but not impossible.
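A rough sketch of that manual tagging, assuming structured JSON logs; the routine names and helper functions here are illustrative, not the poster's actual code:

```javascript
// Sketch: tag every log line with the routine name so a single
// handler still gives per-routine granularity in the logs.
// Cloud Logging parses JSON written to stdout into structured fields.
function logWithRoutine(routine, message) {
  console.log(JSON.stringify({ routine, message, ts: Date.now() }));
}

// Wrapping each routine also gives the "how expensive is it" answer
// mentioned above, via a simple start/finish timing pair.
function runRoutine(routine, fn) {
  logWithRoutine(routine, "start");
  const t0 = Date.now();
  const result = fn();
  logWithRoutine(routine, `finish in ${Date.now() - t0}ms`);
  return result;
}
```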

The reason I want the entire logic of the application handled by a single handler function is to reduce cold starts: one function means only one container that needs to be kept persistently alive when there are very few users of the app.
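The single-handler idea can be sketched as a dispatch table behind one HTTPS function; the `runWith({ minInstances })` option in the firebase-functions v1 API is what keeps a container warm. The routine paths below are hypothetical:

```javascript
// A dispatch table keeps one deployed function but discrete "routines".
const routines = {
  "/ping": () => "pong",       // hypothetical routines
  "/version": () => "1.0.0",
};

function dispatch(path) {
  const fn = routines[path];
  return fn ? fn() : null; // null -> 404 in the HTTP wrapper
}

// Wiring it into a single warm function (firebase-functions v1 API):
//
// const functions = require("firebase-functions");
// exports.api = functions
//   .runWith({ minInstances: 1 })   // one container kept alive
//   .https.onRequest((req, res) => {
//     const out = dispatch(req.path);
//     out === null ? res.status(404).end() : res.send(out);
//   });
```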

If a month is ~2.6m seconds and we assume the system uses 1 GB RAM and 1 GHz CPU frequency at all times, that's:

2600000 * 0.0000025 + 2600000 * 0.000001042 = USD$9.21 a month

...for one minimum instance.
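As a sanity check on the arithmetic, using the idle-instance rates implied by the post (verify against current GCP pricing before relying on them):

```javascript
// Reproducing the monthly minimum-instance estimate above.
// Rates taken from the post itself: $0.0000025 per GB-s of RAM and
// $0.000001042 per GHz-s of CPU (idle pricing; may have changed).
const secondsPerMonth = 2600000;                    // ~30 days
const ramCost = secondsPerMonth * 1 * 0.0000025;    // 1 GB RAM
const cpuCost = secondsPerMonth * 1 * 0.000001042;  // 1 GHz CPU
const total = ramCost + cpuCost;
console.log(total.toFixed(2)); // "9.21"
```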

I should also state that all of my functions have the bare minimum amount of global scope code; it just sets up Firebase assets (RTDB and Firestore).

From a billing, performance (based on user wait time), and user/developer experience perspective, is there any reason why it would be smart to keep all my functions discrete?

I'd also accept an answer saying "one single function for all logic is reasonable" as long as there's a reason for it.

Thanks!

1 Answer

If you have a very small app with ~5 endpoints and very low traffic, sure, you could do something like this. But here is why you might not want to:

  • billing and performance

The important thing to realize is that every concurrent request may be served by a new instance of your function, which means there could be tens of them running at the same time.

If you would like to have just one instance handling all the traffic, you should explore GCP Cloud Run, where a single container handles multiple concurrent requests and scales out only when that is not sufficient.


Imagine you have several endpoints, and every one of them has different performance requirements:

  • one may need only 128 MB of RAM
  • another may need 1 GB of RAM

(FYI: you can also control the function's CPU speed via the RAM setting; higher memory tiers get faster CPUs, which can speed up execution in some cases.)
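To make the RAM/CPU coupling concrete, here is a small sketch. The tier-to-CPU mapping is my assumption based on public gen-1 Cloud Functions docs, and `pickTier` is a hypothetical helper, not part of any Firebase API:

```javascript
// Gen-1 Cloud Functions couple CPU speed to the memory tier
// (mapping assumed from public GCP docs; verify current values).
const tiers = {
  "128MB": { gb: 0.125, ghz: 0.2 }, // 200 MHz CPU
  "1GB":   { gb: 1.0,   ghz: 1.4 }, // 1.4 GHz CPU
};

// Hypothetical helper: smallest tier that satisfies a routine's RAM need.
function pickTier(neededGb) {
  return Object.entries(tiers).find(([, t]) => t.gb >= neededGb)?.[0] ?? null;
}

// With discrete functions, each one declares its own tier:
//   exports.light = functions.runWith({ memory: "128MB" }).https.onRequest(...);
//   exports.heavy = functions.runWith({ memory: "1GB" }).https.onRequest(...);
```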

If you had only one function with 1 GB of RAM, every request would allocate an instance of that size, and in some cases most of the memory would go to waste.

But if you split it into multiple functions, some requests will require far fewer resources, which can save you money once you reach a larger number of executions per month (tens of thousands and up).

Let's imagine a function with a 3-second execution time and 10k executions/month:

  • 128MB would cost you $0.0693
  • 1024MB would cost you $0.495

As you can see, with a small app the difference can be negligible, but at scale it matters. (*Costs can vary by datacenter.)
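The two figures above can be reproduced with the tier-1 rates I believe apply here ($0.0000025/GB-s RAM, $0.00001/GHz-s CPU, invocation fees and free tier ignored); treat the rates as an assumption to check against current pricing:

```javascript
// Reproducing the answer's comparison: 3 s execution, 10k calls/month.
// Rates assumed: $0.0000025 per GB-s and $0.00001 per GHz-s.
function monthlyCost(gb, ghz, seconds, calls) {
  const gbSeconds = gb * seconds * calls;
  const ghzSeconds = ghz * seconds * calls;
  return gbSeconds * 0.0000025 + ghzSeconds * 0.00001;
}

const small = monthlyCost(0.125, 0.2, 3, 10000); // 128 MB tier (200 MHz CPU)
const large = monthlyCost(1.0, 1.4, 3, 10000);   // 1024 MB tier (1.4 GHz CPU)
console.log(small, large); // ~0.0694 and 0.495, matching the answer
```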

As for the logging, I don't think it matters. In bigger systems, messages usually travel through several functions anyway, so you have to deal with that regardless.

As for the cold start: you just need a good UI to accommodate it. At first I was worried about it in our apps, but later you get used to the fact that some actions can take ~2 s to execute (a cold start). And you should show a "loading" state in the UI regardless, because you never know whether the function will take ~100 ms or 3 s due to a bad connection.
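The "always show loading" advice can be sketched as a small wrapper; the element shape and call sites are hypothetical, not from the answer:

```javascript
// Sketch: wrap every backend call in a loading state, since any call
// might hit a cold start. `el` is anything with a textContent field.
async function withLoading(promise, el) {
  el.textContent = "Loading…"; // shown whether the call takes 100 ms or 3 s
  try {
    return await promise;
  } finally {
    el.textContent = ""; // always cleared, even on failure
  }
}

// Usage (hypothetical): withLoading(fetch("/api/ping"), statusEl);
```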


4 Comments

Cheers, that's quite helpful. I'm surprised to hear that every single request creates a new instance (I'd imagine that's a container on the backend). Some reading says it's only for requests in parallel but even that could be quite slow.
I think with the scale I expect at the moment and the frankly unacceptable cold start speed (nearly 10 seconds on average per request) I'm going to have to go with a different backend such as Cloud Run, as you mentioned. Thanks again!
Internally it could be a single instance, but as a developer you don't know how Google handles things; to you, every request is counted as an "invocation". Btw, I am surprised by the ~10 s cold start; in our case it's ~3 s for a function located in the EU, and we use Node 12. But yeah, I agree that it's not great in several cases.
In that case, I think many of my issues have to do with that odd 10 s invocation time. I'm using us-central1 (we're expecting a global userbase, though we are testing from Australia), but I'd still expect better speeds than I'm getting. Perhaps there's a problem in my code that I'm not seeing, and that should be my plan of attack. Thanks again for the help @pagep!
