1000 rows/sec is definitely achievable... the question is whether it's achievable on your database/network infrastructure.
Even with detailed knowledge of your infrastructure it would still be very hard to say... it depends on so many factors: network speed, network latency, the size of the database rows being transferred, etc.
The only way to tell for sure is to test it.
I would look on this as a good thing - the process of building the test is bound to teach you a lot about how it could work. It will throw up a number of issues that you're going to have to consider at some point: how do you handle backlogs when they form? What is the maximum throughput you can achieve? You'll learn what kind of data transfer works best for you (e.g. single rows at a time or larger batches), and you might want to try it with mechanisms other than SQL (e.g. queues).
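As a rough idea of what such a test could look like, here's a minimal sketch in Python. It uses the standard sqlite3 module purely as a stand-in, and the table name, payload size and batch size are all assumptions - you'd point it at your real database with its own driver and tune the numbers to match your actual rows:

    import sqlite3
    import time

    BATCH_SIZE = 500          # assumption: vary this to compare batch sizes
    TOTAL_ROWS = 100_000      # assumption: enough rows for a stable measurement

    # Stand-in connection; a real test would connect to your actual database.
    conn = sqlite3.connect("throughput_test.db")
    conn.execute("CREATE TABLE IF NOT EXISTS transfer_test (id INTEGER, payload TEXT)")

    rows = [(i, "x" * 200) for i in range(TOTAL_ROWS)]   # ~200-byte payload per row

    start = time.perf_counter()
    for offset in range(0, TOTAL_ROWS, BATCH_SIZE):
        batch = rows[offset:offset + BATCH_SIZE]
        conn.executemany("INSERT INTO transfer_test VALUES (?, ?)", batch)
        conn.commit()                                    # commit per batch, as a real feed would
    elapsed = time.perf_counter() - start

    print(f"{TOTAL_ROWS / elapsed:,.0f} rows/sec with batch size {BATCH_SIZE}")
    conn.close()

Running it a few times with different batch sizes will quickly tell you how much batching buys you on your setup.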
You say that you don't think the network / hard disk access will be an issue - again, you need to test this assumption. Every database has a limiting factor on its performance somewhere (or it would be infinitely fast!), and quite often disk access is that limiting factor. In this case I would speculate that the network may be the limiting factor, but there's no way to know for sure without measuring it.
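One cheap experiment for starting to isolate the limiting factor is to run the same insert workload twice - once committing every row, once committing in large batches. If the per-row figure is dramatically worse, the per-round-trip / flush cost (network latency or disk sync) is dominating rather than raw bandwidth. A rough sketch, again using sqlite3 only as a local stand-in and a hypothetical table `t`:

    import sqlite3
    import time

    def timed_insert(rows, commit_every):
        """Insert the same rows, committing every `commit_every` rows,
        and return elapsed seconds. Point this at your real database
        for a meaningful number."""
        conn = sqlite3.connect("bottleneck_test.db")
        conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
        conn.execute("DELETE FROM t")
        start = time.perf_counter()
        for i, row in enumerate(rows, 1):
            conn.execute("INSERT INTO t VALUES (?, ?)", row)
            if i % commit_every == 0:
                conn.commit()
        conn.commit()
        conn.close()
        return time.perf_counter() - start

    rows = [(i, "x" * 200) for i in range(20_000)]
    print("commit per row:  %.1fs" % timed_insert(rows, 1))
    print("commit per 1000: %.1fs" % timed_insert(rows, 1000))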