I faced one question from the OP, who was curious about the whole table getting locked during a delete operation. One user replied that if locks exceed 5000 rows, the lock will be escalated to a table-level lock. Also, some blogs which I came across on the Internet just take the 5000 value as a threshold point and do not consider other parameters. I showed him that 5000 is not the only value which is taken into consideration; apart from that, the following would also be used.

"Books Online states that the threshold for a lock escalation is reached when the statement has acquired in excess of 5,000 row or page level locks on a single instance of an object. When rows exceed 5000 an attempt is made to convert a higher-level lock on the object (either an IS or an IX lock) to an S lock or an X lock on the table. If the lock is being used by someone else, then SQL Server will back off and try again if it acquires another 1,250 locks on the same instance of the object. Remember that all locks attained must originate from the same object within the statement to qualify for exceeding the lock threshold, so 4,000 locks from tableA that is self-joined (i.e., another set of 4,000 locks) would not qualify." [taken from the book "Professional SQL Server 2008 Internals and Troubleshooting" by Christian Bolton et al.]

"Exceeding either one of the following conditions will cause lock escalation to be triggered:

1. Forty percent of the memory used by the database engine is used by lock objects when the locks property of sp_configure is set to 0.

2. Forty percent of the value set for the locks property in sp_configure. The locks property in sp_configure contains the total number of locks SQL Server is permitted to issue. This therefore affects the amount of memory used by SQL Server for locking. For example, if the locks option had been set to 100 and a single object in a query had exceeded 40 locks, then SQL Server would attempt to escalate the lock." [taken from the book "Professional SQL Server 2008 Internals and Troubleshooting" by Christian Bolton et al.]

"During query runtime SQL Server will initially be taking row locks or page locks as the lowest level of lock granularity prior to any decision about lock escalation. How SQL Server decides which one is a closely guarded secret, but one factor is the width of the table: narrow tables with many rows are much more likely to see page locks when range scans are being used to query the data than wide fact or denormalized tables." [taken from the book "Professional SQL Server 2008 Internals and Troubleshooting" by Christian Bolton et al.]

To prove this, I did a test on the AdventureWorks2008R2 DB. Even for records exceeding 5000 no exclusive lock was taken on the table: for a record count of 6000 still no exclusive lock was taken on the table, but for 10,000 an exclusive lock was taken on the table.
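A minimal sketch of how such a test can be reproduced. The table name and row count below are placeholders (not the exact script used in the AdventureWorks2008R2 test); the idea is to hold an open transaction after the delete and inspect sys.dm_tran_locks for an OBJECT-level exclusive lock:

```sql
-- Hypothetical table dbo.TestTable; vary the TOP (n) value around the
-- 5000-lock threshold to observe when escalation kicks in.
BEGIN TRANSACTION;

DELETE TOP (6000) FROM dbo.TestTable;

-- While the transaction is still open, check the lock granularity held
-- by this session. If the row/page locks were escalated, you will see
-- resource_type = 'OBJECT' with request_mode = 'X' (a table-level
-- exclusive lock); otherwise you will see many KEY/PAGE locks instead.
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode, request_status;

ROLLBACK TRANSACTION;
```

Note that, per the conditions quoted above, the exact point of escalation also depends on memory pressure and the sp_configure locks setting, which is consistent with escalation not occurring at exactly 5000 rows in the test.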