Does one ADO methodology fit all situations? I have seen developers use a variety of methods for creating ADO connections and recordsets in an attempt to optimize performance. These range from storing ADO connection objects in Application and Session variables and reusing the open connection across all of their pages, to creating one ADO connection per page, and everything in between.
With this in mind, I’ve put together a little script to demonstrate why the following steps should always be taken with each ADO database action in the ASP page. At least this is what we’ve been led to believe…
1. Create the ADO objects needed.
2. Perform the database task (select, insert, update, delete, etc…) with stored procedures if available.
3. Close open database connections on all ADO objects.
4. Destroy all ADO objects.
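In classic ASP, the four steps above look roughly like this. This is a sketch, not code from my test script; the connection string, stored procedure name (dbo.GetCustomers), and error handling are placeholders of my own:

```vbscript
<%
' 1. Create the ADO objects needed.
Dim cn, rs
Set cn = Server.CreateObject("ADODB.Connection")

' 2. Perform the database task, preferring a stored procedure.
cn.Open "Provider=SQLOLEDB;Data Source=(local);" & _
        "Initial Catalog=Northwind;Integrated Security=SSPI;"
Set rs = cn.Execute("EXEC dbo.GetCustomers")  ' hypothetical procedure

Do While Not rs.EOF
    Response.Write rs("CompanyName") & "<br>"
    rs.MoveNext
Loop

' 3. Close open database connections on all ADO objects.
rs.Close
cn.Close

' 4. Destroy all ADO objects so the physical connection
'    returns to the pool as soon as possible.
Set rs = Nothing
Set cn = Nothing
%>
```

Closing and destroying in that order matters: the Close calls release the database resources, and setting the variables to Nothing releases the COM objects themselves.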
Windows DNA teaches us that objects should be created, used, and destroyed as quickly as possible in order to free up resources on the server. With connection pooling via IIS, we allow IIS to manage the physical database connections for us after we release the connection and objects in our code. IIS can then simply hand the existing connection off to another user without the normal overhead of establishing a new one. As a side note, Microsoft Access does not take advantage of connection pooling. But with the smaller number of visitors a site using MS Access would typically have, this methodology still applies without a performance hit.

We should also avoid storing connected ADO objects in Application and Session variables. Why? Because a connected ADO object (typically a recordset) is not thread-safe. All subsequent requests for the object are bound to the single thread of execution it was instantiated on, effectively forcing IIS to make all requests WAIT until the object has been used and released by everybody in front of you, no matter how many other threads are available (125 by default).
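To make the anti-pattern concrete, this is the kind of thing to avoid (my own illustrative sketch, not code from the test):

```vbscript
<%
' DON'T DO THIS: a connected ADO object stored in Session ties every
' later request for it to the single thread it was created on.
' Stored in Application, it serializes the entire site's requests.
Set Session("cn") = Server.CreateObject("ADODB.Connection")
Session("cn").Open strConnect   ' strConnect is a placeholder

' Every subsequent page that touches Session("cn") now waits
' for that one thread, no matter how many others are idle.
%>
```

Because the object never goes out of scope, the physical connection is also never handed back to the pool, defeating connection pooling entirely for that user.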
I recommend adjusting the code sample and trying it with various database tasks. In this example, I've executed 300 database tasks in order to show a variance in performance under a larger load average. Keep in mind, IIS connection pooling is not turned on by default under IIS 4.0 (it is on by default in Windows 2000 with IIS 5.0); you'll have to adjust a registry setting in order to activate it. Under the ASP Parameters registry key, set the following entry if it has not already been done:

StartConnectionPool = 1
As always, I try to test Microsoft theories before preaching them as gospel. What I’ve said so far pretty much matches common theory among engineers.
Let’s run a little test with the Northwind database to see if we are correct in all cases. Pay particular attention to the differences between stored procedures and string queries!
Run the test now…
Download the code…
A raised eyebrow, eh? Of course, you'll want to run the test several times and look for a consistent pattern, because ASP is not a "perfect" timer. I also put all of these tests in the same script. During testing, I didn't notice any consistent difference when they were broken up into separate files, but you may opt to conduct your own individual-file test.
To summarize, we learned that reusing the same connection and objects can improve performance. However, this test assumes the database interaction is all closely tied together. In normal ASP pages, this isn’t always the case. If we are processing HTML or various routines in between database calls, we should seriously consider destroying the current connection and objects beforehand. Then, recreate the objects and connections as needed.
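For pages that mix database work with heavy HTML processing, the release-and-recreate pattern looks roughly like this. Again, a sketch under my own assumptions (connection string and procedure names are hypothetical):

```vbscript
<%
Dim cn, rs

' First database task: create, use, and release immediately.
Set cn = Server.CreateObject("ADODB.Connection")
cn.Open strConnect                          ' strConnect is a placeholder
Set rs = cn.Execute("EXEC dbo.GetOrders")   ' hypothetical procedure
' ... copy what you need from rs into local variables ...
rs.Close
Set rs = Nothing
cn.Close
Set cn = Nothing   ' the physical connection returns to the pool here

' ... long stretch of HTML building and non-database routines,
'     during which we hold no connection at all ...

' Second database task: recreate, use, and release again.
Set cn = Server.CreateObject("ADODB.Connection")
cn.Open strConnect
cn.Execute "EXEC dbo.LogPageView"           ' hypothetical procedure
cn.Close
Set cn = Nothing
%>
```

With pooling on, the second Open is cheap because IIS hands back an already-established physical connection, so you pay little for releasing early.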
We don’t want to hold onto connections and objects any longer than necessary while processing non-database tasks, or run the risk that an error leaves connections open and objects alive because the cleanup code is never reached. This is just one more reason why we should try to modularize our ASP code and not intertwine it with our HTML any more than absolutely necessary (not to mention that modular code is simply easier to support…). I would love to hear your feedback on the test and ways to make it more accurate.
Robbe D. Morris