You might be dealing with very large databases, hundreds of GB in size, while still working with the same maintenance window, so the backup job may take a long time to complete. For situations like this we can consider using the data transfer options available in the BACKUP command. Before I go into the available options, let me explain how a backup works internally.
During a backup, SQL Server creates one reader thread for each volume that the database files reside on. The reader thread simply reads the contents of the files; each time it reads a portion of a file, it stores that portion in a buffer. There are multiple buffers in use, so the reader thread will keep on reading as long as there are free buffers to write to. SQL Server also creates one writer thread for each backup device, which writes the data from the buffers out to disk or tape. Once the data in a buffer has been written, the buffer can be reused by the reader thread.
Let us consider the example below.
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\backups\AdventureWorks1.bak',
   DISK = 'E:\backups\AdventureWorks2.bak'
WITH BUFFERCOUNT = 10,
     MAXTRANSFERSIZE = 1048576
Let us assume that AdventureWorks is spread across 4 volumes. The backup will then create 4 reader threads to read from those 4 volumes into 10 buffers (BUFFERCOUNT = 10), each of size 1 MB (MAXTRANSFERSIZE = 1048576 bytes), and two writer threads to write from the buffers to the two backup files (one writer thread per file you are writing to).
If the BUFFERCOUNT and MAXTRANSFERSIZE parameters are not specified, SQL Server will dynamically determine the number of buffers to use and the size of each buffer. The total memory used by the backup buffers is BUFFERCOUNT * MAXTRANSFERSIZE; in our example that is 10 * 1 MB = 10 MB.
This memory is allocated from non-buffer-pool memory, also known as MemToLeave memory. If you specify values larger than the available memory, SQL Server will raise an insufficient memory error.
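If you want to double-check that arithmetic before settling on values, a quick calculation like the one below (the variable names are just illustrative) shows how much buffer memory a given combination will ask for:

DECLARE @buffercount INT = 10;
DECLARE @maxtransfersize BIGINT = 1048576;   -- 1 MB per buffer
-- total backup buffer memory = BUFFERCOUNT * MAXTRANSFERSIZE
SELECT (@buffercount * @maxtransfersize) / 1048576 AS backup_buffer_memory_mb;   -- 10 MB in this example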
Coming back to our actual issue: we were trying to back up a 100 GB database over the network with the below SQL command.
BACKUP DATABASE [DBNAME]
TO DISK = N'\\XXXXX\sql_backups\daily\DBNAME_FULL_20131113_213000.BAK'
WITH INIT, SKIP, NOREWIND, NOUNLOAD, STATS = 10, COMPRESSION
After running for several hours (around 7), the command would fail with the below error.
BackupIoRequest::ReportIoError: write failure on backup device '\\XXXXX\sql_backups\daily\DBNAME_FULL_20131113_213000.BAK'. Operating system error 64 (The specified network name is no longer available.).
After that we changed the command to the below:
BACKUP DATABASE [DBNAME]
TO DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL1_20131113_213000.BAK',
   DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL2_20131113_213000.BAK',
   DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL3_20131113_213000.BAK',
   DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL4_20131113_213000.BAK'
WITH INIT, FORMAT, SKIP, NOREWIND, NOUNLOAD, STATS = 1, COMPRESSION,
     BUFFERCOUNT = 12, MAXTRANSFERSIZE = 4194304
With the above values the backup time dropped to almost half. I will be trying different values to see how quickly the backup completes and will share that information in a future post.
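While experimenting, one easy way to compare how long each attempt took is to query the backup history in msdb (the database name below is just a placeholder; compressed_backup_size is available from SQL Server 2008 onwards):

SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       DATEDIFF(MINUTE, bs.backup_start_date, bs.backup_finish_date) AS duration_minutes,
       bs.compressed_backup_size / 1024 / 1024 AS compressed_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'DBNAME'
  AND bs.type = 'D'          -- full database backups only
ORDER BY bs.backup_start_date DESC;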
And lastly, if you want to know how much memory your backup is using with the values specified in the backup command, you can enable the below trace flags and the information will be written to the SQL Server error log.
DBCC TRACEON(3605,3213,-1)
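Once a backup has run with these flags enabled, you can pull the relevant lines back out of the error log and then switch the flags off again. sp_readerrorlog is undocumented, so treat this as a convenience sketch rather than a guaranteed interface:

EXEC sp_readerrorlog 0;           -- read the current error log; look for the buffer count / max transfer size messages
DBCC TRACEOFF(3605, 3213, -1);    -- turn the trace flags back off when you are done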
Hope the above information helps you all…