Last post on the topic — at least for now!

Wow! It’s great to see discussion, that’s the best way to learn new things.

Frans makes some very valid counter-points. I’ll eat crow over my original posting for stating:

‘When stored procedures are used SQL Server can cache or pre-compile the ‘execution plan’ that it uses to execute the SQL vs. having to recalculate the execution plan on each request.’

I’ve edited the original post to reflect the fact that SQL Server does not pre-compile the stored procedure. I was incorrectly using cache/pre-compile interchangeably and semantically this was incorrect, mea culpa.

I also edited the post to add a note on using parameterized ad-hoc SQL which is also an effective counter-measure to SQL Script injection attacks – as many people pointed out, and which I should have included originally.
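To make the parameterization point concrete, here is a minimal sketch of why parameterized SQL defeats injection. It uses Python's sqlite3 rather than the ADO.NET/SQL Server stack the post is about, and the table/data are invented for illustration – but the mechanism (the driver passes values separately from the SQL text) is the same idea:

```python
import sqlite3

# Illustrative schema and data -- stand-ins for the post's SQL Server examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (CustomerID TEXT, EmployeeID INTEGER)")
conn.execute("INSERT INTO Orders VALUES ('ALFKI', 1)")

# Malicious input that would subvert naive string concatenation.
customer_id = "ALFKI' OR '1'='1"

# Parameterized: the value is bound separately from the SQL text,
# so the input is never interpreted as SQL.
rows = conn.execute(
    "SELECT * FROM Orders WHERE CustomerID = ?", (customer_id,)
).fetchall()
print(rows)  # [] -- the injected string matches no customer

# By contrast, concatenation lets the attacker's SQL execute:
unsafe = "SELECT * FROM Orders WHERE CustomerID = '" + customer_id + "'"
print(conn.execute(unsafe).fetchall())  # [('ALFKI', 1)] -- injection succeeded
```

The same guarantee is what you get from `SqlParameter` objects in ADO.NET, or from the parameters of a stored procedure.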

At the end of the day it’s important to make your own decisions. However, I do find it somewhat interesting that the two biggest critics of the stored procedure approach both build O/R mappers; I expected to get some feedback from Thona – whom I’m looking forward to meeting some day! I do want to call out that O/R mappers are incredibly valuable tools that can save a ton of dev-time. I don’t have any personal experience with either Thona’s or Frans’ solutions, but I’m sure both deliver – personally, I’m looking forward to replacing my stored procedures with CLR code in SQL Server ‘Yukon’.

I did stumble across this while catching up this morning:

This benchmark is both invalid and valid – all depending on how you write your code. Both of the stored procedure examples it uses encapsulate checks on the parameters, while the ad-hoc SQL approach does not. A more valid approach (IMHO) would be to encapsulate either the ad-hoc SQL or the stored-proc call in a data access layer class. The data access layer (or a higher-level business logic class) should validate the inputs – the validation should not occur in the stored procedure. In that case the sproc should look something like:

CREATE PROCEDURE GetOrders ( -- procedure name is illustrative
 @sCustomerID nchar(5),
 @iEmployeeID int,
 @iShipperID int
)
AS
SELECT * FROM Orders
WHERE
 CustomerID = @sCustomerID AND
 EmployeeID = @iEmployeeID AND
 ShipVia = @iShipperID
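Here is a small sketch of what that layering might look like – the class and method names are hypothetical, and Python/sqlite3 stands in for a .NET DAL over SQL Server, but it shows the division of labor: validation lives in the data access (or business logic) layer, and the SQL stays a plain parameterized statement:

```python
import sqlite3

class OrdersDal:
    """Hypothetical data access layer class: inputs are validated here,
    not in the stored procedure / SQL itself."""

    def __init__(self, conn):
        self.conn = conn

    def get_orders(self, customer_id, employee_id, shipper_id):
        # Validation at the DAL boundary (mirroring nchar(5) / int params).
        if not customer_id or len(customer_id) > 5:
            raise ValueError("customer_id must be 1-5 characters")
        if employee_id <= 0 or shipper_id <= 0:
            raise ValueError("employee_id and shipper_id must be positive")
        # The query itself does no checking -- just parameterized data access.
        return self.conn.execute(
            "SELECT * FROM Orders WHERE CustomerID = ? "
            "AND EmployeeID = ? AND ShipVia = ?",
            (customer_id, employee_id, shipper_id),
        ).fetchall()

# Illustrative setup and usage.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders (CustomerID TEXT, EmployeeID INTEGER, ShipVia INTEGER)"
)
conn.execute("INSERT INTO Orders VALUES ('ALFKI', 1, 2)")
dal = OrdersDal(conn)
print(dal.get_orders("ALFKI", 1, 2))  # [('ALFKI', 1, 2)]
```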

In this case it’s pretty obvious that a sproc is now overkill. I’ve re-posted my original side-note from the first article:

Side-note: if you jumped to the conclusion that the point of all this is to advocate putting all business logic in stored procedures, you missed the point. There needs to be balance. My general rule of thumb is: return only the necessary data – no more, no less. The more efficiently your data access code can open and close the connection to the database, the better; stored procedures provide you with an effective tool for accomplishing this. Performing business logic in the database – transforming the data or performing calculations on it – is frowned upon. Use the power of the database to thresh the wheat from the chaff. Use your business logic to turn the wheat into dough 😉

Several people have missed the point. Stored procedures provide an effective way of accessing the data, but they are not the only way. No matter which you choose, neither should enforce business logic rules on the data.

In cases such as the above sproc it doesn’t take a rocket scientist to tell you that ad-hoc SQL is the better/simpler solution. However, take a more complex application such as the ASP.NET Forums, which powers sites like the Xbox forums. There are 472,000 posts in the Xbox Game Discussions forum. When viewing posts, does it make sense to retrieve all 472K posts and then throw away all but the 25 you are currently viewing? Most likely not (if you didn’t guess, the correct answer is no). In this case a stored procedure is used to page the data in the database and return only the set of data that is applicable. If you think ad-hoc SQL is a better solution for this – retrieving everything and then filtering the data in your DAL/BAL – please email me: I’ve got some spectacular beach property in Montana that I’d like to sell you 😉

To state it one more time: there needs to be balance. Choose the solution that best solves the problem. If you take the attitude that one approach is always right and the other always wrong, the only guarantee is that you’ll always be wrong.

In the forums example, when I talk about filtering the data, I don’t simply mean using a WHERE clause to reduce the set. I mean the ability to selectively choose a smaller subset of the set that SQL would produce to satisfy the original WHERE clause – i.e. when there are 1,000 matching results but you want to selectively page through them.
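Database-side paging can be sketched as follows. This uses Python and SQLite’s LIMIT/OFFSET as a stand-in for whatever paging technique the forums’ T-SQL sproc actually uses (the table and function names here are invented for illustration) – the point is simply that only one page of rows ever leaves the database:

```python
import sqlite3

# Illustrative data: 1,000 forum posts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Posts (PostID INTEGER PRIMARY KEY, Subject TEXT)")
conn.executemany(
    "INSERT INTO Posts (Subject) VALUES (?)",
    [("Post %d" % i,) for i in range(1, 1001)],
)

def get_post_page(conn, page, page_size=25):
    # Paging happens in the database: only page_size rows cross the wire,
    # instead of the full result set being filtered in the DAL/BAL.
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT PostID, Subject FROM Posts ORDER BY PostID LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

page_2 = get_post_page(conn, 2)
print(len(page_2))  # 25
print(page_2[0])    # (26, 'Post 26')
```

Whether the paging lives in a stored procedure or in parameterized ad-hoc SQL, the win is the same: the 472K-row set never gets materialized on the web server.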