
I have a Laravel app (on Forge) that's posting messages to SQS. I then have another box on Forge which is running Supervisor with queue workers that are consuming the messages from SQS.

Right now, I just have one daemon worker processing a particular tube of data from SQS. When messages come up, they do take some time to process - anywhere from 30 to 60 seconds. The memory usage on the box is fine, but the CPU spikes almost instantly and then everything seems to get slower.

Is there any way to handle this? Should I instead dispatch many smaller jobs (which can be consumed by multiple workers) rather than one large job which can't be split amongst workers?

Also, I noted that Supervisor is only using one of my two cores. Any way to have it use both?

  • Well, your queue job is massive. Whether you can split the code into smaller packages depends on your app; that's hard for us to judge from here. In my app I have several smaller jobs, each responsible for one specific task. That way we also see which one fails, instead of the whole package failing. On multiple cores, this is what I found: https://gist.github.com/didip/802561 (search for "multiple cores"). Commented Jul 18, 2016 at 18:53
  • Thanks, I broke the jobs down and now it's much better. Commented Jul 18, 2016 at 21:44
  • @hogan if you create an answer for this, I can accept it. Commented Jul 20, 2016 at 19:55
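On the multiple-cores question: Supervisor itself won't spread one worker across cores, but it can spawn several worker processes, and the OS will schedule those across both cores. A minimal sketch of such a config, assuming a typical Forge layout (the program name, paths, queue name, and the `--daemon` flag are assumptions — the exact `queue:work` options depend on your Laravel version):

```ini
[program:sqs-worker]
; Hypothetical paths and queue name - adjust to your Forge setup
command=php /home/forge/default/artisan queue:work sqs --daemon --tries=3
process_name=%(program_name)s_%(process_num)02d
numprocs=2            ; run two workers, roughly one per core
autostart=true
autorestart=true
user=forge
redirect_stderr=true
stdout_logfile=/home/forge/default/storage/logs/worker.log
```

With `numprocs=2`, Supervisor starts two independent worker processes, each pulling its own messages from SQS, so both cores can be busy at once.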

1 Answer


Memory-intensive applications are manageable as long as you can scale, but CPU spikes are harder to deal with: a spike saturates a single core, and when that happens everything on the box slows down (on shared infrastructure your server might even get sandboxed).

To answer your question, I see two possible ways to handle your problem.

  1. Concurrent programming. Keep the job as it is, and check whether the larger task can be parallelized. If it can, split the code so that each core handles a specific part of the large task, then gather the partial results in one coordinating process and assemble the final result. (If the workload suits it, this can also be done efficiently with GPU programming.)
  2. Dispatch smaller jobs (as suggested in the question). This is a good approach if you can have multiple workers processing the smaller tasks and a mechanism to combine their results at the end, e.g. a master-worker arrangement. It makes each piece simple (parallelizing a single monolithic task is hard), but you still need to coordinate everything together.
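The second option can be sketched in Laravel roughly like this. This is a sketch under stated assumptions: `ProcessRecords` and the `Record` model are hypothetical names, not from the question, and the global `dispatch()` helper assumes a reasonably recent Laravel 5.x:

```php
<?php

use App\Jobs\ProcessRecords;
use App\Record;

// Instead of one job that runs 30-60 seconds, chunk the work
// and dispatch one small job per chunk. Each chunk becomes its
// own SQS message, so multiple queue workers can consume them
// in parallel.
$recordIds = Record::pending()->pluck('id');

foreach ($recordIds->chunk(100) as $chunk) {
    // ProcessRecords is a hypothetical job class implementing
    // ShouldQueue; it handles just the IDs it is given.
    dispatch(new ProcessRecords($chunk->all()));
}
```

If the chunks must be recombined, a final coordinating job (the "master" in the master-worker arrangement above) can run once all chunk jobs have reported completion.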
