14

How does PHP handle multiple requests from users? Does it process them all at once, or one at a time, waiting for the first request to complete before moving on to the next?

Actually, I'm adding a bit of wiki functionality to a static site where users will be able to edit the addresses of businesses if they find them inaccurate or think they can be improved. Only registered users may do so. When a user edits a business name, that name, along with its other occurrences, is changed in different rows of the table. I'm a little worried about what would happen if 10 users were doing this simultaneously; it could become a real mishmash. So does PHP do things one at a time, in the order received, per script (update.php), or all at once?

6
  • 8
    It's the web server that handles requests, not PHP. Commented Jan 24, 2014 at 18:45
  • 1
    The web server will process requests in the manner you have configured it to. The database will handle concurrency issues, but you need to decide how to handle conflicts and what behavior the users will encounter if they are editing an older version of a record. Commented Jan 24, 2014 at 18:47
  • You might want to consider implementing table or row locking when editing any rows in the database to prevent multiple concurrent accesses to the same row. Commented Jan 24, 2014 at 18:47
  • Every HTTP request is independent of the others, and everything that handles that request will be as well. If your 10 users all load up the same page and all edit it, the LAST person submitting their changes will "win". Commented Jan 24, 2014 at 18:50
  • @Chimera I was thinking of doing this. But will this return an error or deny access to users who aren't updating but simply reading things? Commented Jan 24, 2014 at 18:51

2 Answers

9

Requests are handled in parallel by the web server (which runs the PHP script).

Updating data in the database is pretty fast, so any update will appear instantaneous, even if you need to update multiple tables.

Regarding the mishmash: for the database, handling 10 requests within 1 second is the same as handling 10 requests within 10 seconds; it won't confuse them and will simply execute them one after the other.

If you need to update 2 tables and absolutely need those 2 updates to run consecutively, without being interrupted by another update query, then you can use transactions.
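
For example, here is a minimal sketch of what that could look like with PDO and MySQL; the table and column names (businesses, business_locations) are placeholders for illustration, not something from the question:

    <?php
    // Minimal sketch of a transactional multi-table update with PDO (MySQL/InnoDB assumed).
    // Table and column names below are illustrative placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $oldName = 'Acme';
    $newName = 'Acme Inc';

    try {
        $pdo->beginTransaction();

        // Both updates are applied together or not at all.
        $stmt = $pdo->prepare('UPDATE businesses SET name = :new WHERE name = :old');
        $stmt->execute([':new' => $newName, ':old' => $oldName]);

        $stmt = $pdo->prepare('UPDATE business_locations SET business_name = :new WHERE business_name = :old');
        $stmt->execute([':new' => $newName, ':old' => $oldName]);

        $pdo->commit();
    } catch (PDOException $e) {
        $pdo->rollBack(); // undo any partial changes before rethrowing
        throw $e;
    }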

EDIT:

If you don't want 2 users editing the same form at the same time, you have several options for preventing that. Here are a few ideas:

  1. You can "lock" that record for editing whenever a user opens the page to edit it, and not let other users open it for editing. You might run into a few problems if a user doesn't "unlock" the record after they are done.
  2. You can notify a user in real time (with AJAX) that the entry they are editing has been modified, just like on Stack Overflow when a new answer or comment is posted while you are typing.
  3. When a user submits an edit, you can check whether the record was edited between when they started editing and when they tried to submit, and show them the new version beside their version, so that they can manually "merge" the two updates (see the sketch below).

There probably are more solutions but these should get you started.
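
A rough sketch of option 3, assuming a PDO connection in $pdo and a hypothetical last_updated column; the edit form would also submit the timestamp it was loaded with (here as loaded_at), and showMergePage() is a made-up helper:

    // Sketch of option 3: detect a concurrent edit via a "last_updated" column.
    // Table/column names and the showMergePage() helper are hypothetical.
    $stmt = $pdo->prepare('SELECT name, last_updated FROM businesses WHERE id = :id');
    $stmt->execute([':id' => $_POST['id']]);
    $current = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($current['last_updated'] !== $_POST['loaded_at']) {
        // Someone saved a change after this form was loaded:
        // show both versions and let the user merge them by hand.
        showMergePage($current['name'], $_POST['name']);
    } else {
        $stmt = $pdo->prepare(
            'UPDATE businesses SET name = :name, last_updated = NOW() WHERE id = :id'
        );
        $stmt->execute([':name' => $_POST['name'], ':id' => $_POST['id']]);
    }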


1 Comment

+1 good answer. I would add that Norman might be concerned with 2 users editing something at the same time and submitting one after another: "Acme" gets changed to "Acme Inc" by user A and to "Acme LLC" by user B, but if user B had seen that it was already "Acme Inc", he might not have wanted to change it at all. One might use a "last updated" timestamp to compare when the edit form was loaded against when the content was last updated, and inform the user that the content has changed since they opened the edit form, instead of clobbering existing changes.
5

It depends on which version of Apache you are using and how it is configured, but a common default configuration uses multiple workers with multiple threads to handle simultaneous requests. See http://httpd.apache.org/docs/2.2/mod/worker.html for a rundown of how this works. The end result is that your PHP scripts may together have dozens of open database connections, possibly sending several queries at the exact same time.

However, your DBMS is designed to handle this. If you are only doing simple INSERT queries, then your code doesn't need to do anything special. Your DBMS will take care of the necessary locks on its own. Row-level locking will be fastest for multiple INSERTs, so if you use MySQL, you should consider the InnoDB storage engine.
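
If it helps, choosing InnoDB is just part of the table definition; a hypothetical example (the table itself is only for illustration):

    // Hypothetical table definition; ENGINE=InnoDB gives row-level locking and transactions.
    $pdo->exec(
        'CREATE TABLE businesses (
             id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
             name VARCHAR(255) NOT NULL
         ) ENGINE=InnoDB'
    );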

Of course, your query can always fail, whether due to too many database connections, a conflict on a unique index, or something else. Wrap your queries in try/catch blocks to handle this case.
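
A small sketch of that, assuming PDO configured to throw exceptions (PDO::ERRMODE_EXCEPTION) and an existing $pdo connection:

    // Sketch: catch query failures (lost connection, duplicate key, etc.).
    try {
        $stmt = $pdo->prepare('INSERT INTO businesses (name) VALUES (:name)');
        $stmt->execute([':name' => $name]);
    } catch (PDOException $e) {
        // SQLSTATE class 23000 covers integrity violations such as a unique-index conflict.
        if ($e->getCode() == 23000) {
            // handle the duplicate (e.g. tell the user the name already exists)
        } else {
            throw $e; // or log it and show a generic error page
        }
    }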

If you have other application-layer concerns about concurrency, such as one user overwriting another user's changes, then you will need to handle these in the PHP script. One way to handle this is to use revision numbers stored along with your data, and refusing to execute the query if the revision number has changed, but how you handle it all depends on your application.
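
One possible shape of the revision-number approach (the revision column and form fields are hypothetical): include the revision the form was loaded with in the WHERE clause and check how many rows the UPDATE touched.

    // Sketch of optimistic locking with a revision counter.
    $stmt = $pdo->prepare(
        'UPDATE businesses
            SET name = :name, revision = revision + 1
          WHERE id = :id AND revision = :revision'
    );
    $stmt->execute([
        ':name'     => $_POST['name'],
        ':id'       => $_POST['id'],
        ':revision' => $_POST['revision'],
    ]);

    if ($stmt->rowCount() === 0) {
        // Someone else updated the record first; refuse the edit and ask the user to retry.
    }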
