If a query is constantly sent to a database at short intervals, say every 5 seconds, could the number of reads generated cause problems in terms of performance or availability? If the database is Oracle are there any tricks that can be used to avoid a performance hit? If the queries are coming from an application is there a way to reduce any impact through software design?
-
It depends on the query and how many reads it does, the value of x (the length of the interval), and the spec of the hardware. Perhaps you could explain more about your actual scenario... – Mitch Wheat, Feb 22, 2011 at 0:37
-
Read: akadia.com/services/ora_bind_variables.html – OMG Ponies, Feb 22, 2011 at 0:40
-
@Mitch Wheat: The interval is something like 5-10 seconds. The information retrieved will be some statistics to draw graphics, so the query will most probably span more than one table. I have no information about the hardware. What solutions would there be in a worst-case scenario? – James P., Feb 22, 2011 at 0:47
-
@OMG Ponies: Thanks for sharing. Will take a look. – James P., Feb 22, 2011 at 0:48
-
@James P., does the data change every time the query is called, or does it change less often than the query is called? How long does one individual call take? How many resources does one query utilize? – Samuel Neff, Feb 22, 2011 at 1:00
3 Answers
Unless your query is very intensive or horribly written, it won't cause any noticeable issues running once every few seconds. That's not very often for queries whose execution times are generally measured in milliseconds.
You may still want to optimize it, though, simply because there are better ways to do it. With Oracle and ADO.NET you can create an OracleDependency for the command that ran the query the first time and then subscribe to its OnChange event, which will be raised automatically whenever the underlying data changes in a way that would alter the query results.
It depends on the query. I assume the reason you want to execute it periodically is that the data being returned will change frequently. If that's the case, then application-level caching is obviously not an option.
Past that, is this query "big" in terms of the number of rows returned, tables joined, or data aggregated/calculated? If so, it could be a problem if:
You are querying faster than the query can execute. If you are calling it once a second but it takes two seconds to run, that's going to become a problem.
If the query is touching a lot of data and you have a lot of other queries accessing the same tables, you could run into lock escalation issues.
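One way to guard against the first problem is to make sure a new poll is skipped, rather than queued up, while the previous one is still running. A minimal sketch of that idea (the query callable and the `NonOverlappingPoller` name are placeholders, not part of any library):

```python
import threading

class NonOverlappingPoller:
    """Wraps a query callable so that a poll attempted while a previous
    one is still executing is skipped instead of piling up behind it."""

    def __init__(self, run_query):
        self._run_query = run_query
        self._busy = threading.Lock()

    def maybe_run(self):
        # Non-blocking acquire: if the previous query is still in flight,
        # report a skipped tick instead of queuing more work.
        if not self._busy.acquire(blocking=False):
            return False
        try:
            self._run_query()
            return True
        finally:
            self._busy.release()
```

The caller invokes `maybe_run()` on every timer tick; overlapping ticks return `False` and cost nothing, so a slow query degrades gracefully instead of snowballing.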
As with most performance questions, the only real answer is to test. In this case, test with realistic data in the database and run this query concurrently with the other query load you expect on the system.
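A rough harness for that kind of test might look like the following; the query callable, client count, and call count are placeholders you would replace with your real query and expected concurrency:

```python
import statistics
import threading
import time

def load_test(run_query, clients, calls_per_client):
    """Fire run_query from several threads concurrently and
    report simple latency statistics."""
    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(calls_per_client):
            start = time.perf_counter()
            run_query()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    return {
        "calls": len(latencies),
        "mean_s": statistics.mean(latencies),
        "max_s": max(latencies),
    }
```

Comparing `mean_s` and `max_s` under concurrent load against your polling interval tells you quickly whether the query can keep up.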
Along the lines of Samuel's suggestion, Oracle provides facilities in JDBC to do database change notification so that your application can subscribe to changes in the underlying data rather than re-running the query every few seconds. If the data is changing less frequently than you're running the query, this can be a major performance benefit.
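The shape of that pattern, independent of the JDBC or ADO.NET specifics, is sketched below. `ChangeSource` here is a hypothetical stand-in for the database's registration API (with Oracle, that role is played by the driver's change-notification subscription), not a real class:

```python
class ChangeSource:
    """Hypothetical stand-in for a database change-notification
    registration; the real database would invoke fire() on change."""

    def __init__(self):
        self._listeners = []

    def register(self, listener):
        self._listeners.append(listener)

    def fire(self):
        # Simulates the database notifying subscribers of a data change.
        for listener in self._listeners:
            listener()

class NotifiedCache:
    """Re-runs the query only when the source reports a change,
    instead of polling it every few seconds."""

    def __init__(self, source, run_query):
        self._run_query = run_query
        self._result = run_query()  # initial load
        source.register(self._refresh)

    def _refresh(self):
        self._result = self._run_query()

    def latest(self):
        # Cheap read: no database round-trip unless the data changed.
        return self._result
```

If the data changes once a minute but the graph refreshes every 5 seconds, this turns roughly twelve queries per minute into one.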
Another option would be to use Oracle TimesTen as an in-memory cache of the data on the middle-tier machine(s). That will eliminate network round-trips to the database, and retrieval will go through a highly optimized path.
Finally, I'd take a look at using the query result cache (for example, via the RESULT_CACHE hint) to have Oracle cache the results on the server side.