Triggers Part 1: Introduction to madness, plus whores

This is part of a three-part epic mini-series on triggers; see Part 2 and Part 3.

According to the standard developer canon about databases, you should avoid triggers like you avoid wearing your “I love goto statements” t-shirt at the company Christmas party (again).  Knowledge of triggers is verboten, almost as bad as saying you love cursors.  Why are they such a bad idea, and why does the DBA let the air out of your Oldsmobile Firenza tires whenever you write one?  Let’s explore these ideas, and leave being grown up and driving an Oldsmobile for another day. First, a brief overview of triggers if you aren’t familiar with them at all (lucky):

Triggers are blocks of T-SQL that run upon the firing of certain events within SQL Server, the most common being an insert, update, or delete to rows in a table. They can also fire upon login, the creation of tables, and other system events. Triggers have certain constraints that they have to live with, such as not being able to return results or perform certain operations such as CREATE DATABASE, and they are fed only certain inputs – @@ROWCOUNT, the inserted and deleted tables, and the ambient transaction. They are typically complained about by DBAs and developers alike as being slow and hard to deal with.
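To make that concrete, here is a minimal sketch of a DML trigger that audits changes using the inserted and deleted tables. The table and trigger names are made up for illustration; assume a dbo.Orders table with an OrderId column exists.

```sql
-- Hypothetical audit table; names are illustrative only.
CREATE TABLE dbo.OrderAudit
(
    OrderId   int          NOT NULL,
    Action    nvarchar(10) NOT NULL,
    ChangedAt datetime2    NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;  -- suppress extra "rows affected" chatter back to the client

    -- inserted holds the new row images, deleted holds the old ones;
    -- which is populated depends on the statement that fired the trigger.
    INSERT INTO dbo.OrderAudit (OrderId, Action)
    SELECT i.OrderId, N'INS/UPD'
    FROM inserted AS i;

    INSERT INTO dbo.OrderAudit (OrderId, Action)
    SELECT d.OrderId, N'DELETE'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.OrderId = d.OrderId);
END;
```

Note that the body is written set-based: a single UPDATE that touches a thousand rows fires the trigger once, with a thousand rows in inserted – not a thousand times.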

Where do triggers come from?

When a whore and a demon love each other they make a baby, and that baby is a trigger with a cursor in it that handles its own transactions and savepoints whilst trying to send email via a call to the SMTP service using xp_cmdshell.

Are triggers in the original relational model?

No, but neither are isolation levels, recovery models, or stored procedures. Never fear: very little that you have heard of is in the original relational model, unless you married your sister and eat German chocolate bars, in which case you are familiar with joined relations and relvars.

Why are triggers given such a bad name – aren’t they just a certain type of stored procedure that happens at a certain time?

Triggers are given a bad name for a few reasons:

What they are typically used for:

They are quite powerful, and it is easy to overuse them for things that are better supported by the DBMS elsewhere. The classic example is constraint or foreign key enforcement – a trigger can stop the insert/update from occurring, or check the updated data right before the modification. Business logic can live in a trigger, so that it runs “just in time”. Another common pattern is using a trigger to keep two separate databases in sync – every update to one causes an insert into another across a linked server, and so on. Triggers are dark-alleyed shortcuts, and there are normally better paths to build the functionality you want.
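The constraint-enforcement example looks something like this. Both halves are sketches with made-up names (dbo.Orders, Quantity); the point is that the second form hands the work to the engine instead of reimplementing it.

```sql
-- The trigger way: works, but reinvents what the engine already does.
CREATE TRIGGER trg_Orders_CheckQty
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted WHERE Quantity <= 0)
    BEGIN
        ROLLBACK TRANSACTION;  -- undoes the entire triggering statement
        RAISERROR (N'Quantity must be positive.', 16, 1);
    END;
END;
GO

-- The better path: a declarative constraint the optimizer knows about.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0);
```

The CHECK constraint is enforced before the row is ever written, shows up in the table's metadata where people expect to find it, and can be used by the optimizer; the trigger does none of those things.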

How hard they are to work with:

Most developers don’t know how to properly code a trigger – there, I said it.  In addition, triggers are added T-SQL surface area that doesn’t show up in most of the places people casually look for code (Activity Monitor, syscomments, execution plans except indirectly, a lot of source control systems, etc.).
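Since triggers hide so well, it helps to know where they actually live. A sketch of a metadata query that lists every DML trigger in the current database, using the sys.triggers and sys.sql_modules catalog views:

```sql
-- List all DML triggers in the current database with their definitions.
SELECT
    t.name                   AS trigger_name,
    OBJECT_NAME(t.parent_id) AS parent_table,
    t.is_disabled,
    m.definition             -- full T-SQL source of the trigger body
FROM sys.triggers AS t
JOIN sys.sql_modules AS m
    ON m.object_id = t.object_id
WHERE t.parent_class_desc = 'OBJECT_OR_COLUMN';  -- table/view triggers only
```

Dropping the WHERE clause also surfaces database-level (DDL) triggers, which are even easier to forget about.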

They are slow:

Triggers run inside the transaction doing the insert, update, or delete.  This is the essential core of the database system – very close to the DBMS kernel in terms of your application usage. Adding a trigger that takes half a second can bring your system to a messy halt.  In addition, in the past triggers populated the inserted and deleted tables by building a view from reading the transaction log.  This led many DBAs to see non-sequential behavior in storage subsystems that were built for sequential writes, leading to even greater slowdowns.  As of SQL Server 2005, triggers are implemented via row versioning, which uses tempdb instead of reading the log – but DBAs never forget.  In either event, the behavior of triggers is simply different from that of other statements, leading DBAs to be even more unhappy – and they are pretty much unhappy in DisneyWorld.
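The "inside the transaction" point is worth seeing on the page. In this sketch (table and column names hypothetical), the caller's UPDATE does not complete, and does not release its locks, until the trigger body has also finished:

```sql
-- A trigger fires inside the triggering statement's transaction, so its
-- elapsed time and its locks are charged to that statement.
CREATE TRIGGER trg_Orders_Summary
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Hypothetical cross-table bookkeeping; while this runs, the locks
    -- taken by the caller's UPDATE are still held.
    UPDATE s
    SET    s.LastOrderChange = SYSUTCDATETIME()
    FROM   dbo.OrderSummary AS s
    JOIN   inserted AS i
        ON i.CustomerId = s.CustomerId;
END;
GO

-- From the caller's point of view this is one atomic unit: statement
-- plus trigger commit or roll back together.
UPDATE dbo.Orders SET Quantity = Quantity + 1 WHERE OrderId = 42;
```

A half-second trigger body therefore means every such UPDATE holds its locks for at least half a second, which is how one innocent-looking trigger backs up an entire system.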

Tune in for Triggers Part 2: Facts and Frequently-Occurring-Problems (FOP)