SQL Server DBA: Database in Recovery Pending state

I recently encountered an issue where one of the databases suddenly moved to the Recovery Pending state.

I first noticed the problem when an SSRS report returned the error below:

Database [dbname] cannot be opened due to inaccessible files or insufficient memory or disk space.

I checked the database and it was in the Recovery Pending state. Recovery Pending means that recovery cannot be started: until the cause is fixed, recovery cannot run and the database cannot come online. I then checked the drives on the server and found that one of them was down and not accessible.

We brought the drive back online, but the status of the database did not change. I had to restart the SQL Server service manually, after which the database recovered automatically.
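A quick way to check the state of every database on the server is to query sys.databases. Once the underlying storage issue is fixed, it may also be possible to trigger recovery without restarting the whole service; the ALTER below is a suggestion to try, not a guaranteed fix:

select name, state_desc from sys.databases;
-- after fixing the underlying storage issue, this may kick off recovery:
alter database [dbname] set online;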

SQL Server DBA: Optimize Index

A SQL Server database can become very slow if it is not maintained and monitored properly. One such maintenance task is to optimize the indexes periodically.

I recently created stored procedures to optimize the indexes in the database and scheduled them to run every week.

There are two ways an index can be optimized:

  1. Rebuild Index
  2. Reorganize Index

It is not a good idea to rebuild every index in the database. After reading plenty of material online, and from my own experience, I would recommend rebuilding indexes where fragmentation is above 30% and reorganizing indexes where fragmentation is between 10% and 30%.

There are DMVs to find missing indexes, unused indexes and so on; however, I am focusing only on optimizing existing indexes in this post.

Indexes can be rebuilt in online or offline mode. In offline mode, the server locks the table against reads and writes for the duration of the rebuild. In online mode, users can still access the table, but the rebuild can take a little longer because SQL Server builds the new index alongside the old one, which remains available to users.

Note: Online index operations can only be performed in the Enterprise edition of SQL Server.
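For example, on Enterprise edition an online rebuild looks like this (the table and index names here are purely illustrative):

alter index [IX_Orders_CustomerId] on [dbo].[Orders] rebuild with (online = on);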

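Here is the query; it is the same SELECT used inside the procedures below:

select  s.name as SchemaName,
        b.name as TableName,
        c.name as IndexName,
        c.fill_factor as IndexFillFactor,
        d.[rows] as Row_Count,
        a.avg_fragmentation_in_percent as AvgFragmentationInPercent,
        a.page_count as Page_Count
from    sys.dm_db_index_physical_stats(db_id(), null, null, null, null) a
join    sys.tables b on a.object_id = b.object_id
join    sys.schemas s on b.schema_id = s.schema_id
join    sys.indexes c on b.object_id = c.object_id and a.index_id = c.index_id
join    sys.partitions d on b.object_id = d.object_id and a.index_id = d.index_id
where   c.index_id > 0
order by a.avg_fragmentation_in_percent desc;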

The above query returns all indexes with their fragmentation percentage, table name, schema name, row count and page count.

You can filter the indexes you want to optimize using this query. I wouldn't optimize indexes that have only a few pages or a few rows. index_id 0 is a heap, so I have excluded it from my query.

I have written two separate procedures, one for rebuild and one for reorganize, just to keep things simple; they share most of their code and could be merged into one.

Once you have created the table and the procedures, place the procedures in a job and schedule it to run weekly or monthly, depending on your database's needs. Monitor the audit table regularly and amend the job's schedule as needed.

Here is the code for the audit table and the two procedures, one for index rebuild and the other for reorganize:

--create schema dba
--go
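-- audit table: records every rebuild/reorganize action with timings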
if object_id('dba.IndexMaintenanceAudit','U') is null
begin
create table dba.IndexMaintenanceAudit
(
Id int identity(1,1),
SchemaName varchar(200),
TableName varchar(200),
IndexName varchar(200),
IndexFillFactor tinyint,
Row_Count int,
AvgFragmentationInPercent float,
Page_Count int,
MaintenanceAction varchar(10),
StartTime datetime,
EndTime datetime
)
end
go
if object_id('dba.IndexOptimize_Rebuild','P') is not null
drop procedure dba.IndexOptimize_Rebuild
go
create procedure dba.IndexOptimize_Rebuild
(
@fragmentation_rebuild float = 30.0,
@fragmentation_reorganize float = 10.0,
@pageCount int = 5,
@rowCount int = 5000,
@tables int = 10
)
as
begin
set nocount on;
-- drop table #RebuildIndex
create table #RebuildIndex
(
Id int identity(1,1),
SchemaName varchar(200),
TableName varchar(200),
IndexName varchar(200),
IndexFillFactor tinyint,
Row_Count int,
AvgFragmentationInPercent float,
Page_Count int,
StartTime datetime,
EndTime datetime
)
insert #RebuildIndex
(
SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count
)
select    top(@tables)
s.name as SchemaName,
b.Name as TableName,
c.Name as IndexName,
c.Fill_factor as IndexFillFactor,
d.[Rows] as RowsCount,
a.Avg_fragmentation_in_percent,
a.Page_count
from    sys.dm_db_index_physical_stats(db_id(),null,null,null,null) a
join sys.tables b on a.object_id = b.object_id
join sys.schemas s on b.schema_id = s.schema_id
join sys.indexes c on b.object_id = c.object_id and a.index_id = c.index_id
join sys.partitions d on b.object_id = d.object_id and a.index_id = d.index_id
where    c.Index_id > 0
and a.Avg_fragmentation_in_percent > @fragmentation_rebuild
and Page_count > @pageCount
and D.[Rows] > @rowCount
order by
a.Avg_fragmentation_in_percent desc
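-- loop over the captured indexes one at a time, rebuild each and log start/end times to the audit table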
declare    @i int,
@count int,
@cmd nvarchar(4000)
set @i = 1;
select @count = max(Id) from #RebuildIndex;
while (@i <= @count)
begin
update    #RebuildIndex
set        StartTime = getdate()
where    Id  = @i;
-- Online index operations can only be performed in Enterprise edition of SQL Server.
select    @cmd = 'alter index [' + IndexName + '] on [' + SchemaName + '].[' + TableName + '] rebuild with (online = off)'
from    #RebuildIndex
where    Id = @i;
--print @cmd
execute sp_executesql @cmd;
update #RebuildIndex
set    EndTime = getdate()
where    Id  = @i;
insert dba.IndexMaintenanceAudit
(
SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count,
StartTime,
EndTime,
MaintenanceAction
)
select    SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count,
StartTime,
EndTime,
'REBUILD'
from    #RebuildIndex
where    Id  = @i;
set @i = @i + 1;
end
end
go
if object_id('dba.IndexOptimize_Reorganize','P') is not null
drop procedure dba.IndexOptimize_Reorganize
go
create procedure dba.IndexOptimize_Reorganize
(
@fragmentation_rebuild float = 30.0,
@fragmentation_reorganize float = 10.0,
@pageCount int = 5,
@rowCount int = 5000,
@tables int = 5
)
as
begin
set nocount on;
-- drop table #RebuildIndex
create table #RebuildIndex
(
Id int identity(1,1),
SchemaName varchar(200),
TableName varchar(200),
IndexName varchar(200),
IndexFillFactor tinyint,
Row_Count int,
AvgFragmentationInPercent float,
Page_Count int,
StartTime datetime,
EndTime datetime
)
insert #RebuildIndex
(
SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count
)
select    top(@tables)
s.name as SchemaName,
b.Name as TableName,
c.Name as IndexName,
c.Fill_factor as IndexFillFactor,
d.[Rows] as RowsCount,
a.Avg_fragmentation_in_percent,
a.Page_count
from    sys.dm_db_index_physical_stats(db_id(),null,null,null,null) a
join sys.tables b on a.object_id = b.object_id
join sys.schemas s on b.schema_id = s.schema_id
join sys.indexes c on b.object_id = c.object_id and a.index_id = c.index_id
join sys.partitions d on b.object_id = d.object_id and a.index_id = d.index_id
where    c.Index_id > 0
and a.Avg_fragmentation_in_percent < @fragmentation_rebuild
and a.Avg_fragmentation_in_percent > @fragmentation_reorganize
and Page_count > @pageCount
and D.[Rows] > @rowCount
order by
a.Avg_fragmentation_in_percent desc;
--select * from #RebuildIndex
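-- loop over the captured indexes one at a time, reorganize each and log start/end times to the audit table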
declare    @i int,
@count int,
@cmd nvarchar(4000);
set @i = 1;
select @count = max(Id) from #RebuildIndex;
while (@i <= @count)
begin
update    #RebuildIndex
set        StartTime = getdate()
where    Id  = @i;
-- Online index operations can only be performed in Enterprise edition of SQL Server.
select    @cmd = 'alter index [' + IndexName + '] on [' + SchemaName + '].[' + TableName + '] reorganize'
from    #RebuildIndex
where    Id = @i;
--print @cmd
execute sp_executesql @cmd;
update #RebuildIndex
set    EndTime = getdate()
where    Id  = @i;
insert dba.IndexMaintenanceAudit
(
SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count,
StartTime,
EndTime,
MaintenanceAction
)
select    SchemaName,
TableName,
IndexName,
IndexFillFactor,
Row_Count,
AvgFragmentationInPercent,
Page_Count,
StartTime,
EndTime,
'REORGANIZE'
from    #RebuildIndex
where    Id  = @i;
set @i = @i + 1;
end
end
go
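To run the procedures manually with their defaults (or override the parameters), something like this should work:

exec dba.IndexOptimize_Rebuild;
exec dba.IndexOptimize_Reorganize @tables = 10;
-- review what was done
select * from dba.IndexMaintenanceAudit order by StartTime desc;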

PowerShell: Script to delete files older than N days

I recently set up a job to back up the transaction logs of the live databases every 15 minutes. To prevent these files from filling up the drive, I needed a script to delete the old log files so the clean-up could be automated.

Here is the PowerShell script I wrote to delete files older than 7 days. The number of days and the path are variables at the top of the script, so the values can be changed as required.


The script then needs to be added as a scheduled task in Windows Task Scheduler. When scheduling a PowerShell script, in the Action window enter Powershell.exe in the Program/script box, and put the script's full path in the Add arguments box, e.g. -File "<path>\DeleteOldBackups.ps1" (the file name here is illustrative).


Here is the text of the PowerShell script for anyone to copy and paste:

## Deletes files older than 7 days
## Alter $Path and $days to change the folder and the number of days
$Path = "I:\TransactionLogBackup\UK_NAV"
$days = -7
Get-ChildItem -Path $Path |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays($days) } |
    Remove-Item   # add -WhatIf here first to preview what would be deleted

SQL Server DBA: Transaction Log Backup

Here is a stored procedure to back up a database's transaction log. It takes the database name and the backup path as input parameters, so it can be used for any database on the server. Place it in a job and schedule it to run every 'N' minutes.

Make sure you copy the transaction log backup files to a different server so that they can be used for disaster recovery. Also write a batch file (or use the PowerShell script above) to delete the old files, otherwise they will fill your drive.

Here is the code if you want to copy and paste:

if object_id('dbo.BackupDatabaseTransactionLogs','P') is not null
drop procedure dbo.BackupDatabaseTransactionLogs
go
create procedure dbo.BackupDatabaseTransactionLogs
(
@databaseName nvarchar(100),
@backupPath nvarchar(500)
)
as
begin
set nocount on;
declare @date nvarchar(20),
@backupFileName nvarchar(100),
@script nvarchar(500)
-- build a file name suffix in the form yyyymmdd_hhmmss
set @date = convert(varchar(10), getdate(), 112) + '_' + replace(convert(varchar(10), getdate(), 108), ':', '');
set @backupFileName = @databaseName + @date + '.trn';
-- make sure the path ends with a backslash before appending the file name
if right(@backupPath, 1) <> '\' set @backupPath = @backupPath + '\';
set @script = 'BACKUP LOG [' + @databaseName + '] TO DISK = N''' + @backupPath + @backupFileName + ''' WITH COMPRESSION';
exec (@script);
end
go
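For example, to back up the log of TestDB (the path here is illustrative):

exec dbo.BackupDatabaseTransactionLogs
     @databaseName = 'TestDB',
     @backupPath = 'I:\TransactionLogBackup\';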

SQL Server DBA: Reduce the size of a large database log file

I recently came across a situation where one of the drives on a production server was almost full.

There were two large databases on the server, and they were backed up only once daily; no other backup plans were in place. The drive was filled by the log files of these two databases (~200 GB each).

This happened because the databases were in the full recovery model but their transaction logs were never backed up.

Here are the steps to solve this issue:

  1. Take a full backup of the databases (not required, but a sensible precaution)
  2. Set the recovery model to simple
  3. Shrink the log file
  4. Set the recovery model back to full
  5. Take a full backup
  6. Setup a job to backup transaction log frequently

Note: Shrinking the log file is not something you should do regularly. I used it here because this was a one-off, and the log file had only grown this big because of the lack of log backups.

Here is the script to perform the above tasks:

-- Full Backup
backup database TestDB to  disk = N'H:\Backup\TestDB_20150922_1308.bak'
with name = N'TestDB-Full Database Backup', compression,  stats = 10
go
alter database TestDB
set recovery simple;
go
-- See the name of the log file
sp_helpfile
go
-- shrink the truncated log file to 100 mb.
dbcc shrinkfile (TestDB_log, 100);
go
-- reset the database recovery model.
alter database TestDB
set recovery full;
go
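Finally, take a new full backup (step 5) to restart the backup chain, and set up the transaction log backup job (step 6), for example with the procedure above. The full backup is the same command as before (file name illustrative):

backup database TestDB to disk = N'H:\Backup\TestDB_20150922_1400.bak'
with name = N'TestDB-Full Database Backup', compression, stats = 10
go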

SQL Server DBA: Move (database) log files to another drive

I recently encountered a situation where I had to move the log file of a database to a different drive.

Here is the code to move the file. The database will not be accessible while this task is being performed.

use master
go
-- Set database to single user mode
alter database TestDB
set single_user
go
-- Detach the database
exec sp_detach_db 'TestDB'
go
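-- Now physically move/copy the log file to its new location (here I:\Log) before attaching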
-- Attach the database
exec sp_attach_DB @dbname = 'TestDB',
@filename1 = 'H:\Data\TestDB.mdf',
@filename2 = 'H:\Data\TestDB_0.ndf',
@filename3 = 'I:\Log\TestDB_1.ldf'
go
alter database TestDB
set multi_user
go
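An alternative that avoids detaching is to update the file path with ALTER DATABASE ... MODIFY FILE, take the database offline, move the physical file, and bring it back online. A minimal sketch (the logical file name TestDB_log is an assumption; check sys.database_files for the real one):

alter database TestDB modify file (name = TestDB_log, filename = 'I:\Log\TestDB_1.ldf');
go
alter database TestDB set offline with rollback immediate;
-- move the physical .ldf file to I:\Log while the database is offline, then:
alter database TestDB set online;
go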

SQL Server: Bringing a database online from restoring or standby modes

If the database is in the restoring state, run the command below to bring it online:

RESTORE DATABASE <Database Name> WITH RECOVERY


If the database is in read-only (standby) mode, you can bring it online in two steps:

Step 01: Disconnect all the connections to the database
USE master
ALTER DATABASE <Database Name> SET OFFLINE WITH ROLLBACK IMMEDIATE
ALTER DATABASE <Database Name> SET ONLINE

Step 02: Bring the database online
RESTORE DATABASE <Database Name> WITH RECOVERY

SQL Server: Restore Database using multiple backup files

Here is an example script to restore a database from multiple (striped) backup files, where the destination database has several data files:


USE [master]
RESTORE DATABASE [TestDB]
FROM  DISK = N'G:\Temp\TestDB_20141031_033500_00.bak',
DISK = N'G:\Temp\TestDB_20141031_033500_01.bak',
DISK = N'G:\Temp\TestDB_20141031_033500_02.bak'
WITH  FILE = 1,
MOVE N'TestDB_Data' TO N'E:\SQLData\TestDB_Data.mdf',
MOVE N'TestDB_1_Data' TO N'E:\SQLData\TestDB_1_Data.ndf',
MOVE N'TestDB_2_Data' TO N'E:\SQLData\TestDB_2_Data.ndf',
MOVE N'TestDB_3_Data' TO N'E:\SQLData\TestDB_3_Data.ndf',
MOVE N'TestDB_Log' TO N'H:\SQLLog\TestDB_Log.ldf',
REPLACE, RECOVERY
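For reference, a striped backup set like the one restored above is produced by listing several DISK targets in a single BACKUP command (paths illustrative):

BACKUP DATABASE [TestDB]
TO  DISK = N'G:\Temp\TestDB_20141031_033500_00.bak',
    DISK = N'G:\Temp\TestDB_20141031_033500_01.bak',
    DISK = N'G:\Temp\TestDB_20141031_033500_02.bak'
WITH COMPRESSION, STATS = 10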

SQL Server: Logshipping Errors

Error 1:

The log shipping backup job fails at the primary server with the error message below:


Message
2014-10-31 14:36:18.44    *** Error: Backup failed for Server ‘<Server Name>’. (Microsoft.SqlServer.SmoExtended) ***
2014-10-31 14:36:18.45    *** Error: An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) ***
2014-10-31 14:36:18.45    *** Error: BACKUP LOG cannot be performed because there is no current database backup.
BACKUP LOG is terminating abnormally.(.Net SqlClient Data Provider) ***
2014-10-31 14:36:18.50    —– END OF TRANSACTION LOG BACKUP   —–

Solution:

This message is thrown because the database has never had a full backup. Take a full backup of the primary database to resolve the issue.

Error 2:

The restore job on the secondary server failed with the error message below:

Error: The restore operation cannot proceed because the secondary database ‘<Database Name>’ is not in NORECOVERY/STANDBY mode.(Microsoft.SqlServer.Management.LogShipping)

Solution:

Restore the secondary database from a full backup of the primary database using the NORECOVERY and REPLACE options. An example is given below:
USE [master]
RESTORE DATABASE [TestLST] FROM  DISK = N'G:\Temp\TestLS.bak' WITH  FILE = 1,
MOVE N'TestLS' TO N'E:\SQLData\TestLS.mdf',
MOVE N'TestLS_log' TO N'H:\SQLLog\TestLS_log.ldf',
NORECOVERY,  NOUNLOAD,  REPLACE,  STATS = 5
GO

Error 3:

After some disk issues on the production server, log shipping failed continually. So I took a full backup of the primary database, restored the secondary database from it, and started log shipping again. The log shipping backup job on the primary database then failed with the error message below:

Message
2014-10-31 13:34:00.33    *** Error: Backup failed for Server ‘<Server Name>’. (Microsoft.SqlServer.SmoExtended) ***
2014-10-31 13:34:00.33    *** Error: An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) ***
2014-10-31 13:34:00.33    *** Error: BACKUP detected corruption in the database log. Check the errorlog for more information.
BACKUP LOG is terminating abnormally.
10 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
50 percent processed.
60 percent processed.
70 percent processed.
80 percent processed.
Processed 8 pages for database ‘<Database Name>’, file ‘<Database Name>_Data’ on file 1.
Processed 2013584 pages for database ‘<Database Name>’, file ‘<Database Name>_1_Data’ on file 1.
Processed 1380536 pages for database ‘<Database Name>’, file ‘<Database Name>_2_Data’ on file 1.
Processed 1671736 pages for database ‘<Database Name>’, file ‘<Database Name>_3_Data’ on file 1.
90 percent processed.(.Net SqlClient Data Provider) ***
2014-10-31 13:34:00.37    —– END OF TRANSACTION LOG BACKUP   —–
Exit Status: 1 (Error)

Solution:

One solution is to configure log shipping all over again, so that the log shipping job reinitializes the secondary database.

The other solution is to follow the steps below; a T-SQL sketch of steps 2-4 follows the list. (Ref: http://www.sqlskills.com/blogs/paul/how-can-a-log-backup-fail-but-a-full-backup-succeed/)

  1. Stop all user activity in the primary database
  2. Switch to the SIMPLE recovery model (breaking the log backup chain and removing the requirement that the damaged portion of log must be backed up)
  3. Switch to the FULL recovery model
  4. Take a full database backup (thus starting a new log backup chain)
  5. Start the logshipping job
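
A minimal T-SQL sketch of steps 2-4 (database and file names illustrative):

alter database TestDB set recovery simple;
go
alter database TestDB set recovery full;
go
backup database TestDB to disk = N'H:\Backup\TestDB_full.bak'
with compression, stats = 10;
go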

Error 4:

The log shipping backup job on the primary server failed with the following error:

Message
2014-11-03 14:53:02.80    *** Error: Backup failed for Server ‘<Server>’. (Microsoft.SqlServer.SmoExtended) ***
2014-11-03 14:53:02.85    *** Error: An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) ***
2014-11-03 14:53:02.85    *** Error: Cannot open backup device ‘\\<Server>\LogShippingBackups\DatabaseName\DatabaseName_20141103145302.trn’. Operating system error 5(failed to retrieve text for this error. Reason: 15105).
BACKUP LOG is terminating abnormally.(.Net SqlClient Data Provider) ***

Solution:

The SQL Server Agent service account does not have the appropriate permissions on the shared folder (operating system error 5 means access is denied). Granting it access solves the problem.
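To see which account the Agent service runs under (and therefore which account needs access to the share), this query should help on SQL Server 2008 R2 SP1 and later:

select servicename, service_account
from sys.dm_server_services;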