Something to note: you appear to want to execute this simultaneously, as in having 30 people in a room all hitting the Enter key at the same instant to submit a request. If you are trying to represent a natural population, this is problematic, as users arrive and depart chaotically. That chaos can sometimes be compressed into a short window, such as 300,000 people showing up in two minutes to purchase a limited-release album or a celebrity shoe drop, but it is still not "simultaneous."
To paraphrase Obi-Wan, "use the logs, Luke!" Look to your current infrastructure for evidence of how this request, or a peer/earlier equivalent, is actually used. There will be some delay between requests of this type. Build a model of what that looks like, either in raw form, such as a range of 500 ms to 2500 ms, or as an actual distribution equation, then place that amount of delay in front of your request.
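As a minimal sketch of that modeling step, the snippet below derives a raw delay range from inter-arrival gaps. The timestamp list is hypothetical stand-in data; in practice you would parse arrival times for this request type out of your access logs, and you might fit a proper distribution rather than a simple uniform range.

```python
import random

# Hypothetical arrival timestamps (in seconds) for one request type;
# in a real exercise, parse these from your web/app server logs.
arrivals = [0.0, 0.7, 1.9, 2.4, 4.1, 4.8, 6.5]

# Inter-arrival gaps between successive requests of this type.
gaps = [later - earlier for earlier, later in zip(arrivals, arrivals[1:])]

# Raw-range form of the model: the observed min and max gap.
low, high = min(gaps), max(gaps)

def think_time():
    """Sample a delay from the observed range (crude uniform model)."""
    return random.uniform(low, high)
```

Each virtual user would call `think_time()` (a name invented here) before issuing its request, so pacing reflects what the logs actually show.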
Execute the 30-user thread group as noted above, with each user having some amount of delay in front of the request. You will get your concurrency in a natural window. The risk the other way is that you engineer an unnatural act likely to produce a performance issue that would be a ghost, unlikely ever to occur in production.
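The shape of that execution can be sketched as below: 30 workers, each pausing for a randomized modeled delay before issuing its request. The 500-2500 ms range is the example range from above (scaled down so the demo runs quickly), and the appended list is a stand-in for the real HTTP call; in a real test your load tool (e.g. a JMeter thread group with a random timer) plays this role.

```python
import random
import threading
import time

results = []
results_lock = threading.Lock()

def simulated_user(user_id, low=0.5, high=2.5):
    # Modeled think time before the request fires; scaled by 0.01
    # so this demonstration completes in well under a second.
    time.sleep(random.uniform(low, high) * 0.01)
    with results_lock:
        results.append(user_id)  # stand-in for issuing the real request

threads = [threading.Thread(target=simulated_user, args=(i,)) for i in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each user's delay is drawn independently, the 30 requests land spread across a natural window rather than in one artificial spike.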
On the other hand, if you have an engineering requirement to examine true simultaneity in some code (multiple users, same section of code, same time) due to critical-section issues, and you want to better understand the locks and blocks associated with that code, then you can ignore all of my suggestions above.
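For that narrower case, one common way to force near-true simultaneity is a barrier: every thread blocks until all have arrived, then all are released at once into the section under test. A minimal sketch, assuming a Python harness (the critical section here is just an illustrative list append guarded by a lock):

```python
import threading

NUM_USERS = 8
barrier = threading.Barrier(NUM_USERS)  # releases all parties together
hits = []
hits_lock = threading.Lock()

def hammer(user_id):
    barrier.wait()        # every thread stalls here until all arrive
    with hits_lock:       # the critical section under examination
        hits.append(user_id)

threads = [threading.Thread(target=hammer, args=(i,)) for i in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This maximizes contention on the lock, which is exactly the unnatural act you would avoid in a population-realistic test but want when studying locking behavior.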