
As I've stated above: I have a fairly large but simple script that fetches a single JSON file from a website, decodes it, and saves the data in a PostgreSQL database. It takes about 4 to 5 minutes to finish (about 300,000 records) on my computer (a laptop with an i3-M CPU), but takes about 10 to 15 times longer to do the same thing on the server I have just rented.
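
For context, the script is structurally something like the following minimal sketch (simplified; the URL, table and column names here are made up, the real script is only assumed to follow this pattern):

    <?php
    // Rough sketch of the workload: fetch one JSON file, decode it,
    // insert every record into PostgreSQL. Names are placeholders.
    $pdo  = new PDO('pgsql:host=localhost;dbname=mydb', 'user', 'password');
    $json = file_get_contents('https://example.com/data.json');
    $rows = json_decode($json, true);

    $stmt = $pdo->prepare('INSERT INTO records (id, payload) VALUES (:id, :payload)');
    foreach ($rows as $row) {
        $stmt->execute([
            ':id'      => $row['id'],
            ':payload' => json_encode($row),
        ]);
    }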

The dedicated server has an Intel Xeon quad-core CPU (3 GHz), much better overall specifications, and a much faster internet connection, so I'm pretty sure the hardware is not the problem. It runs the latest Debian with Apache 2.4.10, PHP 5.6.22, and PostgreSQL 9.5.
I've tried to copy the settings and modules from my WAMP setup, figuring it might help. Unfortunately, it didn't. I'm not sure what information might help in solving this problem, but I'm more than happy to answer any questions.
I am almost positive it has something to do with some option I must have skipped, so any help would be much appreciated.

PS: The local WAMP stack uses Apache 2.4.17, PHP 5.6.16, and PostgreSQL 9.5.

  • It might be better to ask this on serverfault.com – Commented Jul 9, 2016 at 22:21

2 Answers


Performance issues can be caused by many things.

  • First of all, I'm fairly sure your local PC has an SSD, while the server probably runs on plain SAS drives. This can make a huge difference when it comes to reading from and writing to disk, especially for random I/O (i.e. many SELECTs with conditions and a database buffer that is too small).
  • Do you have a dedicated (root) server, or a rented VM? If it's a VM, keep in mind that you share your resources with other VMs (especially access to the SAS disks, but also CPU time).

You should write some scripts to identify the bottleneck (a rough sketch follows below):

  • Write a PHP script that uses as much CPU power as possible, run a few million iterations, and compare the results between the two machines. (This is probably not the problem.)
  • Write a PHP script that produces heavy disk I/O and compare the results over thousands of iterations.
  • Write a PHP script that uses a large amount of memory and compare again. (Be careful with this one: using too much memory causes data to be swapped to disk, which distorts the result.)

If you didn't encounter an unexpected difference by now, you have ruled out hardware issues. Then repeat the test with heavy database load to figure out whether the database might be misconfigured. Sometimes a single boolean flag has a heavy performance impact.
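
For illustration, a minimal sketch of the CPU and disk benchmarks meant above (the iteration counts are arbitrary placeholders; adjust them so each test runs for at least a few seconds):

    <?php
    // CPU benchmark: a few million iterations of pure computation.
    $start = microtime(true);
    $x = 0;
    for ($i = 0; $i < 5000000; $i++) {
        $x += sqrt($i);
    }
    printf("CPU:  %.2f s\n", microtime(true) - $start);

    // Disk I/O benchmark: write and re-read a small file many times.
    $start = microtime(true);
    $file = sys_get_temp_dir() . '/io_benchmark.tmp';
    for ($i = 0; $i < 5000; $i++) {
        file_put_contents($file, str_repeat('x', 8192));
        file_get_contents($file);
    }
    unlink($file);
    printf("Disk: %.2f s\n", microtime(true) - $start);

Run the same file on both machines and compare the timings; the number that diverges most between laptop and server points at the bottleneck. The same microtime() pattern can then be wrapped around a block of database inserts to test the database configuration.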


1 Comment

Thank you for your answer; I've learned a lot while checking the steps you wrote. I had to configure the postgresql.conf file, and it is a lot faster now.

To expand on dognose's answer, I find that optimizing your DB access can make a big difference in performance.

It might be interesting to see what happens to the run time if you comment out the DB queries. This will tell you how much of the run time is spent on the DB.
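
One way to do that without actually deleting code is a switch that skips the DB writes, for example (sketch only, assuming an insert loop like the one described in the question):

    <?php
    // Sketch: toggle the DB writes to measure their share of the run time.
    $skipDb = true;                      // set to false for a normal run
    $start  = microtime(true);

    foreach ($rows as $row) {            // $rows = the decoded JSON records
        if (!$skipDb) {
            $stmt->execute([':id' => $row['id'], ':payload' => json_encode($row)]);
        }
    }

    printf("Run took %.2f s (DB writes %s)\n",
           microtime(true) - $start,
           $skipDb ? 'skipped' : 'included');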

If the DB is taking significant time, try batching your requests: instead of sending a single insert at a time, hold the rows in a variable and send 50 to 100 inserts (or more) per batch, as in the sketch below. Depending on how you set up your DB connection, there can be significant overhead for each request.
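
For example, with PDO one way to batch is to build a multi-row INSERT per batch (sketch only; the table and column names are placeholders):

    <?php
    // Sketch: collect rows and send them as one multi-row INSERT per batch.
    $pdo = new PDO('pgsql:host=localhost;dbname=mydb', 'user', 'password');
    $batchSize = 100;
    $buffer = [];

    function flushBatch(PDO $pdo, array $buffer) {
        if (!$buffer) {
            return;
        }
        // One "(?, ?)" placeholder pair per buffered row.
        $placeholders = implode(', ', array_fill(0, count($buffer), '(?, ?)'));
        $stmt = $pdo->prepare("INSERT INTO records (id, payload) VALUES $placeholders");
        $stmt->execute(array_merge(...$buffer));
    }

    foreach ($rows as $row) {                // $rows = the decoded JSON data
        $buffer[] = [$row['id'], json_encode($row)];
        if (count($buffer) >= $batchSize) {
            flushBatch($pdo, $buffer);
            $buffer = [];
        }
    }
    flushBatch($pdo, $buffer);               // flush any remaining rows

Wrapping the whole loop in $pdo->beginTransaction() and $pdo->commit() helps for a similar reason: PostgreSQL otherwise commits every single INSERT on its own, and that per-statement commit is a large part of the overhead.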

2 Comments

Thanks, I'll definitely try batching the requests, I've never thought about that.
Working with batches is very important if you use something like Hibernate; otherwise you end up sending millions of trivial single-row queries.
