I agree with @Nandan that setting concurrency can be a good option in your case.
But you can try the approach below as well.
I have observed that you are copying every csv file in the list and deleting it after the copy. With your current pipeline structure, the first trigger run will copy and delete every file in the source, so each subsequent trigger run will only copy the csv file that triggered it.
So, first copy and delete every existing csv file in the source by running the pipeline in debug mode with the above structure, and then change the pipeline structure as below for the trigger runs.
In the trigger, add a filter condition for the csv files like below.

For the file path, use trigger parameters like @triggerBody().fileName and store the trigger parameter in a pipeline parameter.
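As a reference, here is a minimal sketch of what such a storage event trigger could look like in JSON. The names (CsvCreatedTrigger, CopyTriggeredCsv, the fileName pipeline parameter) and the container path are placeholders; substitute your own values.

```json
{
    "name": "CsvCreatedTrigger",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/input-container/blobs/",
            "blobPathEndsWith": ".csv",
            "ignoreEmptyBlobs": true,
            "events": [ "Microsoft.Storage.BlobCreated" ],
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "CopyTriggeredCsv",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "fileName": "@triggerBody().fileName"
                }
            }
        ]
    }
}
```

The `blobPathEndsWith: ".csv"` filter makes sure only csv files fire the trigger, and the `parameters` mapping is where the trigger parameter is handed to the pipeline parameter.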
Use a dataset parameter for the file name in the source dataset (use this same dataset in both the copy activity source and the delete activity).
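A sketch of such a parameterized source dataset, assuming a delimited text dataset on Azure Blob Storage (SourceCsvDataset, AzureBlobStorageLS, input-container and sourceFileName are placeholder names):

```json
{
    "name": "SourceCsvDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLS",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "sourceFileName": { "type": "string" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "input-container",
                "fileName": {
                    "value": "@dataset().sourceFileName",
                    "type": "Expression"
                }
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}
```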
- First, take the copy activity and give this dataset as its source. In the copy activity source, pass the pipeline parameter to the dataset parameter for the filename (see the pipeline JSON sketch after this list). Go through this SO answer to understand how to use trigger parameters in the pipeline.
- In the sink, give your target SQL table name (you can use a dataset parameter for the table name and pass the same pipeline parameter value to it in the copy activity sink) and select Auto create table.
- After the copy activity, use the delete activity and give it the same source dataset. This will delete the exact csv file that triggered the run.
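Putting the steps above together, a rough sketch of the pipeline JSON could look like the following. CopyCsvToSql, DeleteTriggeredCsv, SqlSinkDataset and its tableName parameter are placeholder names, and the store/format settings assume Blob Storage and Azure SQL Database; adjust them to your own linked services.

```json
{
    "name": "CopyTriggeredCsv",
    "properties": {
        "parameters": {
            "fileName": { "type": "string" }
        },
        "activities": [
            {
                "name": "CopyCsvToSql",
                "type": "Copy",
                "inputs": [
                    {
                        "referenceName": "SourceCsvDataset",
                        "type": "DatasetReference",
                        "parameters": { "sourceFileName": "@pipeline().parameters.fileName" }
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "SqlSinkDataset",
                        "type": "DatasetReference",
                        "parameters": { "tableName": "@pipeline().parameters.fileName" }
                    }
                ],
                "typeProperties": {
                    "source": {
                        "type": "DelimitedTextSource",
                        "storeSettings": { "type": "AzureBlobStorageReadSettings" },
                        "formatSettings": { "type": "DelimitedTextReadSettings" }
                    },
                    "sink": {
                        "type": "AzureSqlSink",
                        "tableOption": "autoCreate"
                    }
                }
            },
            {
                "name": "DeleteTriggeredCsv",
                "type": "Delete",
                "dependsOn": [
                    { "activity": "CopyCsvToSql", "dependencyConditions": [ "Succeeded" ] }
                ],
                "typeProperties": {
                    "dataset": {
                        "referenceName": "SourceCsvDataset",
                        "type": "DatasetReference",
                        "parameters": { "sourceFileName": "@pipeline().parameters.fileName" }
                    },
                    "enableLogging": false,
                    "storeSettings": {
                        "type": "AzureBlobStorageReadSettings",
                        "recursive": false
                    }
                }
            }
        ]
    }
}
```

Note that the delete activity only runs when the copy succeeds (`dependencyConditions: ["Succeeded"]`), and both activities resolve the same dataset parameter from the pipeline parameter, so only the triggered file is copied and removed.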
This approach triggers the pipeline for every csv file, copies it to the target table, deletes it after the copy activity, and won't be affected by concurrent trigger runs.