SQL Server : Deadlocks

Deadlocks can be difficult to investigate because they are intermittent and hard to reproduce. Here are some ways to investigate deadlock issues.

  1. You can run SQL Profiler with the deadlock template, but it is resource-heavy and not a good idea to run on production environments, as it can slow the server down. You also wouldn't know when the next deadlock will happen.
  2. You can use the xml_deadlock_report event in Extended Events. The built-in system_health session captures this event by default, so you can run the query below to retrieve the captured deadlock reports in XML format.


select xed.value('@timestamp', 'datetime') as Creation_Date
, xed.query('.') as Extend_Event
from
(
select cast([target_data] as xml) as Target_Data
from sys.dm_xe_session_targets as xt
join sys.dm_xe_sessions as xs on xs.address = xt.event_session_address
where xs.name = N'system_health'
and xt.target_name = N'ring_buffer'
) as XML_Data
cross apply Target_Data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') as XEventData(xed)
order by Creation_Date desc
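
If you need the key details rather than the raw XML, the same ring buffer data can be shredded further. This is a sketch assuming the xml_deadlock_report layout used by recent SQL Server versions (element names such as process-list and inputbuf can vary between versions):

select xed.value('@timestamp', 'datetime') as event_time
, p.value('@spid', 'int') as spid
, p.value('@waitresource', 'nvarchar(256)') as wait_resource
, p.value('(inputbuf/text())[1]', 'nvarchar(max)') as input_buffer
from
(
select cast([target_data] as xml) as Target_Data
from sys.dm_xe_session_targets as xt
join sys.dm_xe_sessions as xs on xs.address = xt.event_session_address
where xs.name = N'system_health'
and xt.target_name = N'ring_buffer'
) as XML_Data
cross apply Target_Data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') as XEventData(xed)
cross apply xed.nodes('data/value/deadlock/process-list/process') as DeadlockProcess(p)
order by event_time desc

This returns one row per process involved in each deadlock, with the statement each session was running.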

  3. You can turn deadlock tracing on by running the following in a query window:

DBCC TRACEON (1204, -1)
DBCC TRACEON (1222, -1)

Then the deadlock details are written to the SQL Server error log, which you can view in SSMS under Management -> SQL Server Logs.
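
On SQL Server 2012 and later, you can also create a dedicated Extended Events session so deadlock reports are kept in a file rather than aged out of the ring buffer. This is a sketch; the session name deadlock_capture and the file target settings are placeholders you would adjust:

-- Create a dedicated Extended Events session that writes deadlock reports
-- to an .xel file (session name and file settings are placeholders).
CREATE EVENT SESSION [deadlock_capture] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
    (SET filename = N'deadlock_capture.xel', max_file_size = 5, max_rollover_files = 4)
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [deadlock_capture] ON SERVER STATE = START;

The captured file can later be read with sys.fn_xe_file_target_read_file or opened directly in SSMS.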


DBA : Find the most expensive queries in SQL Server

SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time

-- Queries taking the longest elapsed time:
SELECT TOP 10
qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds,
qs.total_elapsed_time / 1000000.0 AS total_seconds,
qs.execution_count,
SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) AS individual_query,
o.name AS object_name,
DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
where qt.dbid = DB_ID()
ORDER BY average_seconds DESC;

-- Queries doing the most I/O:

SELECT TOP 10
(total_logical_reads + total_logical_writes) / qs.execution_count AS average_IO,
(total_logical_reads + total_logical_writes) AS total_IO,
qs.execution_count AS execution_count,
SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) AS individual_query,
o.name AS object_name,
DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
where qt.dbid = DB_ID()
ORDER BY average_IO DESC;

The above queries were taken from
http://blog.sqlauthority.com/2010/05/14/sql-server-find-most-expensive-queries-using-dmv/

DBA : Get table locks in SQL Server

-- Note:
-- Edit the script below to change the database name and the table name.

select o.name, l.*
from [DatabaseName].sys.dm_tran_locks as l
join [DatabaseName].sys.objects as o on l.resource_associated_entity_id = o.object_id
where l.resource_type = 'OBJECT'
and l.resource_database_id = DB_ID('DatabaseName')
--and o.name like '[table name]'
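
Related to locks: when you suspect live blocking rather than a deadlock, a quick way to see who is blocking whom is the query below (a sketch; the columns were chosen for illustration):

-- Show currently blocked requests and the statement each one is running.
select r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text as running_statement
from sys.dm_exec_requests as r
cross apply sys.dm_exec_sql_text(r.sql_handle) as t
where r.blocking_session_id <> 0;

Follow blocking_session_id up the chain to find the head blocker.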