Question

In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment?

update mytable set counter = counter + 1

I would assume that in the general case, where this update statement is part of a larger transaction, it wouldn't be. For example, I think this scenario is possible:

  • update the counter within transaction #1
  • do some other stuff in transaction #1
  • update the counter within transaction #2
  • commit transaction #2
  • commit transaction #1

In this situation, wouldn't the counter end up only being incremented by 1? Does it make a difference if that is the only statement in a transaction?

How does a site like stackoverflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?


Solution


According to the MSSQL Help, you could do it like this:

UPDATE tablename SET counterfield = counterfield + 1 OUTPUT INSERTED.counterfield

This will update the field by one, and return the updated value as a SQL recordset.
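
If you need the new value inside T-SQL rather than as a result set, the OUTPUT clause can also write into a table variable. A minimal sketch, assuming a table named mytable with an INT column named counter:

DECLARE @NewValue TABLE (counter INT)

UPDATE mytable
   SET counter = counter + 1
OUTPUT INSERTED.counter INTO @NewValue

SELECT counter FROM @NewValue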

2010-04-23

Solution


Read Committed Snapshot only deals with locks on selecting data from tables.

In t1 and t2, however, you're issuing UPDATEs against the data, which is a different scenario.

When you UPDATE the counter you take a write lock on the row, preventing the other update from proceeding. t2 could read, but t2 will block on its UPDATE until t1 is done, and t2 won't be able to commit before t1 (which is contrary to your timeline). Only one transaction at a time gets to update the counter, therefore both will update the counter correctly given the code presented (tested; see the two-session sketch after the timeline below).

  • counter = 0
  • t1 update counter (counter => 1)
  • t2 update counter (blocked)
  • t1 commit (counter = 1)
  • t2 unblocked (can now update counter) (counter => 2)
  • t2 commit
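
You can reproduce this yourself with two Management Studio sessions. A rough sketch, assuming a MyTable row with MyID = 1234 (illustrative names, not from the question):

-- session 1 (t1)
BEGIN TRANSACTION
UPDATE MyTable SET counter = counter + 1 WHERE MyID = 1234
-- leave the transaction open and do other work

-- session 2 (t2): this UPDATE blocks until session 1 commits or rolls back
BEGIN TRANSACTION
UPDATE MyTable SET counter = counter + 1 WHERE MyID = 1234
COMMIT

-- back in session 1
COMMIT
-- counter has gone up by 2; neither increment was lost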

Read Committed just means you can only read committed values, but it doesn't mean you have Repeatable Reads. Thus, if you read and depend on the counter value, and intend to update it later, you might be running the transactions at the wrong isolation level.

You can either use a repeatable-read lock, or, if you will only sometimes update the counter, do it yourself with an optimistic locking technique, e.g. a timestamp column on the counter table or a conditional update:

DECLARE @CounterInitialValue INT
DECLARE @NewCounterValue INT
SELECT @CounterInitialValue = counter FROM MyTable WHERE MyID = 1234

-- do stuff with the counter value

UPDATE MyTable
   SET counter = counter + 1
WHERE
   MyID = 1234
   AND 
   counter = @CounterInitialValue -- prevents the update if counter changed.

-- the value of counter must not change in this scenario.
-- so we rollback if the update affected no rows
IF( @@ROWCOUNT = 0 )
    ROLLBACK
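
If you would rather block than retry, one way to get the repeatable-read-style locking mentioned above is to read the counter with an UPDLOCK hint, so the row can't change between your read and your update. A sketch, using the same illustrative names (note that plain REPEATABLE READ can deadlock here, because both transactions would first take shared locks):

BEGIN TRANSACTION

DECLARE @CounterInitialValue INT

-- UPDLOCK holds an update lock on the row until the transaction ends,
-- so no other transaction can change counter before our own UPDATE
SELECT @CounterInitialValue = counter
  FROM MyTable WITH (UPDLOCK)
 WHERE MyID = 1234

-- do stuff with the counter value

UPDATE MyTable
   SET counter = counter + 1
 WHERE MyID = 1234

COMMIT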

This devx article is informative, although it talks about the features while they were still in beta, so it may not be completely accurate.


Update: As Justice indicates, if t2 is a transaction nested inside t1, the semantics are different. Again, both updates are applied to counter correctly (+2), because from t2's perspective inside t1, counter was already updated once. The nested t2 has no access to what counter was before t1 updated it.

  • counter = 0
  • t1 update counter (counter => 1)
  • t2 update counter (nested transaction) (counter => 2)
  • t2 commit
  • t1 commit (counter = 2)

With a nested transaction, if t1 issues a ROLLBACK after t2's COMMIT, counter returns to its original value, because the ROLLBACK also undoes t2's commit, as the sketch below shows.
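
A short sketch of that behaviour, again with the illustrative names: in SQL Server the inner COMMIT only decrements @@TRANCOUNT, so the outer ROLLBACK undoes both updates:

BEGIN TRANSACTION        -- t1, @@TRANCOUNT = 1
UPDATE MyTable SET counter = counter + 1 WHERE MyID = 1234

BEGIN TRANSACTION        -- t2 (nested), @@TRANCOUNT = 2
UPDATE MyTable SET counter = counter + 1 WHERE MyID = 1234
COMMIT                   -- only decrements @@TRANCOUNT back to 1

ROLLBACK                 -- undoes both updates; counter is back where it started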

2008-10-11