Tuesday, March 11, 2014

Error formatting query, probably invalid parameters [SQLSTATE 42000] (Error 22050). The step failed.

A scheduled job may fail with the error below even though the same SQL script succeeds when you run it from SQL Server Management Studio.

Error:

Error formatting query, probably invalid parameters [SQLSTATE 42000] (Error 22050).  The step failed.

Troubleshooting:


I had faced a similar issue a while back: a DELETE statement was failing when executed through a job but succeeded when run from SSMS, and the resolution was to add “SET QUOTED_IDENTIFIER ON” to the job step.
So with that background, when I came across the same error again I initially assumed the cause was the same, went ahead and added “SET QUOTED_IDENTIFIER ON” to the SQL job step, and started the job. Unfortunately the job failed again, so I had to pay closer attention to the script in the SQL Agent job.
The script simply checks for databases that have not been backed up in the last 24 hours and mails the result to the DBA team using the sp_send_dbmail stored procedure.
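For context, here is a minimal sketch of that kind of job step; the mail profile, recipient address, and query are placeholders rather than the exact script from the job:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA_Profile',           -- hypothetical Database Mail profile
    @recipients = 'dba-team@example.com',    -- hypothetical DBA team address
    @subject = 'Databases with no backup in the last 24 hours',
    @query = N'SELECT d.name
               FROM sys.databases d
               LEFT JOIN msdb.dbo.backupset b
                      ON b.database_name = d.name
                     AND b.backup_finish_date > DATEADD(HOUR, -24, GETDATE())
               WHERE b.database_name IS NULL
                 AND d.name <> ''tempdb'';',
    @append_query_error = 1;                 -- attach the underlying query error to the mail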
This stored procedure has a useful parameter, @append_query_error, which helped me identify the issue easily: when I enabled the parameter and executed the job, I received the mail with the actual error attached, and the error is below.
Msg 916, Level 14, State 1, Server XXXXXX, Line 1
The server principal "XXXXX" is not able to access the database "Admin" under the current security context.
So this was an access issue masked by a generic formatting error. I granted the account the required privileges on the database and that was it… issue fixed.
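The fix was along these lines, a minimal sketch assuming the job step runs under a Windows account (the login name is a placeholder; “Admin” is the database from the error above):

USE [Admin];
GO
-- Map the job's execution account into the database and grant it read access
CREATE USER [DOMAIN\JobAccount] FOR LOGIN [DOMAIN\JobAccount];
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\JobAccount';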
Hope the above information helps you troubleshoot similar issues in your environment.


Wednesday, November 27, 2013

Error: 17805, Severity: 20, State: 3 Invalid buffer received from client



If you have configured alerts in SQL Server for severity 20 errors, you might receive an alert whenever this event occurs on the instance. So what exactly is this error about, what causes it, and what is the resolution? Below are the details…
1.)    If the data type is not specified for the parameters in the application code (C# or VB), the .NET Framework tries to select the correct data type based on the data that is passed, and if that fails the above error is raised. It is therefore always recommended to define the data type of each parameter on the SqlParameter object in application programming.

2.)    Another cause is the size parameter being incorrectly specified, i.e. larger than the maximum length allowed by SQL Server for that data type.

Ex: nvarchar is variable-length Unicode character data of n characters, where n must be a value from 1 through 4000. If you specify a size greater than 4000 for an nvarchar parameter, you receive the above error message.

3.)    The final cause is declaring an instance of a SqlClient class as static in C# (or Shared in Visual Basic). Such an instance can then be accessed concurrently from more than one thread in the application, causing the above error.

Resolution:


So overall, to eliminate these errors we need to follow the guidelines below…
•  Specify the SqlDbType enumeration for the SqlParameter object so that there is no inferred type.

•  Specify a parameter size that is within the limits of the data type.

•  Do not use a SqlClient class in a Finalize method or in a C# destructor.




Thursday, November 14, 2013

SPEEDING UP DATABASE BACKUPS IN SQL SERVER

You might be dealing with very large databases, hundreds of GBs in size, while still having the same maintenance window, so the backup job may take a long time to complete. For situations like this we can consider using the data transfer options available in the BACKUP command.
Before I go into the available options, let me explain how backup works internally.

During a backup, SQL Server creates one reader thread for each volume that the database files reside on.  The reader thread simply reads the contents of the files.  Each time that it reads a portion of the file, it stores them in a buffer.  There are multiple buffers in use, so the reader thread will keep on reading as long as there are free buffers to write to.  SQL Server also creates one writer thread for each backup device to write the contents of the buffers out to disk or tape.  The writer thread writes the data from the buffer to the disk or tape.  Once the data has been written, the buffer can be reused by the reader thread.

Let us consider below example.

BACKUP DATABASE AdventureWorks TO
DISK = 'D:\backups\AdventureWorks1.bak',
DISK = 'E:\backups\AdventureWorks2.bak'
WITH BUFFERCOUNT = 10, MAXTRANSFERSIZE = 1048576

Let us assume that AdventureWorks is spread across 4 volumes. The backup will then create 4 reader threads to read from the 4 volumes into 10 buffers (BUFFERCOUNT = 10), each of size 1 MB (MAXTRANSFERSIZE = 1 MB), and two writer threads will be created to write from the buffers to the two files (the number of writer threads depends on how many files you are writing to).

If the BUFFERCOUNT and MAXTRANSFERSIZE parameters are not specified, SQL Server dynamically determines the number of buffers to use and the size of each buffer. The total memory used by the backup buffers is BUFFERCOUNT * MAXTRANSFERSIZE.

In our example it will be 10 * 1 MB = 10 MB.

This memory is allocated from non-buffer-pool memory, also known as MemToLeave memory. If you specify values larger than the available memory, SQL Server will raise an insufficient-memory error.
Coming back to our actual issue: we were trying to back up a 100 GB database over the network with the SQL command below.

BACKUP DATABASE [DBNAME] TO DISK = N'\\XXXXX\sql_backups\daily\DBNAMe_FULL_20131113_213000.BAK' WITH INIT, SKIP, NOREWIND, NOUNLOAD,  STATS = 10, COMPRESSION
After running for several hours (around 7), the command would fail with the error below.
BackupIoRequest::ReportIoError: write failure on backup device '\\XXXXX\sql_backups\daily\DBNAME_FULL_20131113_213000.BAK'. Operating system error 64 (The specified network name is no longer available.).

After which we changed the command to the following:
BACKUP DATABASE [DBName] TO
DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL1_20131113_213000.BAK',
DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL2_20131113_213000.BAK',
DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL3_20131113_213000.BAK',
DISK = N'\\XXXXXXX\sql_backups\daily\DBNAME_FULL4_20131113_213000.BAK'
WITH INIT, FORMAT, SKIP, NOREWIND, NOUNLOAD, STATS = 1, COMPRESSION,
BUFFERCOUNT = 12, MAXTRANSFERSIZE = 4194304
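With these settings the backup buffers consume 12 * 4 MB = 48 MB of non-buffer-pool memory, per the BUFFERCOUNT * MAXTRANSFERSIZE formula above.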

With the above values the backup time decreased by almost half. I will be trying further values to see how quickly the backup completes and will share that information in the future.

And lastly, if you want to know how much memory your backup is using with the values specified in your backup command, you can enable the trace flags below and the information will be written to the SQL Server error log.

DBCC TRACEON(3605,3213,-1)
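As a usage sketch (the database name and backup path are placeholders, and xp_readerrorlog is an undocumented procedure, so you can equally open the error log from SSMS):

DBCC TRACEON(3605, 3213, -1);
BACKUP DATABASE AdventureWorks TO DISK = 'D:\backups\AdventureWorks.bak'
WITH COMPRESSION, BUFFERCOUNT = 12, MAXTRANSFERSIZE = 4194304;
EXEC master.dbo.xp_readerrorlog 0, 1;   -- read the current error log for the backup buffer configuration messages
DBCC TRACEOFF(3605, 3213, -1);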
Hope the above information helps you all…


Friday, April 26, 2013

SPID OF RUNNING JOBS IN SQL SERVER

Today I was alerted by one of our long-running-job monitor jobs that a particular job had been running for longer than its average running time.


I logged into the SQL Server and queried sys.sysprocesses with the command below to get the SPID of the job that was taking so long…

Select * from sys.sysprocesses where program_name like '%SQLAgent - TSQL JobStep%'

But unfortunately I got more than 10 records, which means more than 10 jobs were running on the instance, and as you may know the program_name column stores the job_id in hexadecimal form. So I needed a SQL query that returns the SPID of each job together with the job name…


So if you find yourself in the same situation, the query below will help you retrieve the SPIDs and job names…

QUERY:

SELECT DISTINCT sp.spid, sj.name
FROM master.dbo.sysprocesses sp
JOIN msdb.dbo.sysjobs sj
  -- program_name looks like 'SQLAgent - TSQL JobStep (Job 0x<32 hex chars> : Step n)',
  -- so strip the prefix and keep the first 34 characters ('0x' plus 32 hex digits)
  ON master.dbo.fn_varbintohexstr(CONVERT(VARBINARY(16), sj.job_id)) COLLATE Latin1_General_CI_AI
   = SUBSTRING(REPLACE(sp.program_name, 'SQLAgent - TSQL JobStep (Job ', ''), 1, 34)

Hope this helps…..

Thursday, April 25, 2013

Transaction Log shipping Restore Job running very slow in STANDBY mode - Explanation

Description:


Sometimes you may notice that the restore job on the secondary server takes much longer than usual, so what is wrong?

This is because of long-running (uncommitted) transactions spanning multiple log backups while log shipping is configured in STANDBY mode…

Detailed Explanation:

• When restoring a transaction log with the STANDBY option, SQL Server goes over the transaction log and saves all uncommitted transactions to a file with the .tuf (Transaction Undo File) extension.

• You will not see the uncommitted data in the secondary DB even if you read the data under the READ UNCOMMITTED isolation level. This is because SQL Server performs UNDO for all uncommitted transactions and saves their data to the .tuf file; it simply saves the entire content of the pages that it had to undo.



• The .tuf file will be used when you restore the next transaction log file. First, SQL Server will REDO all the uncommitted transactions that are in the .tuf file and then continue to restore the next transaction log. The .tuf file contains the uncommitted transactions not just from the latest transaction log but from all previous transaction logs that have been restored.

• So with the above explanation, let us assume an index rebuild on a big table runs for 45 minutes and, say, 3 log backups are taken in this duration. The uncommitted transactions from the first log backup are written to the .tuf file during its restore. When the second log backup is restored, SQL Server redoes all the uncommitted transactions from the .tuf file and continues restoring the log backup; but the second log backup also contains uncommitted transactions, so SQL Server again undoes the index operation and saves the uncommitted transactions to the .tuf file. This process repeats until the transaction commits in a later log backup, causing the delay in the log restore…
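For reference, here is a minimal sketch of the kind of restore involved; the database name, backup path, and undo file path are placeholders:

RESTORE LOG [UserDB]
FROM DISK = N'\\backupserver\logs\UserDB_20130425.trn'
WITH STANDBY = N'D:\standby\UserDB_rollback_undo.tuf';   -- the undo file that holds the uncommitted work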

I hope the above explanation helps you understand the reason behind slow restores on a secondary standby log shipping server.

References: Dan's madebysql Blog.

An attempt was made to send an email when no email session has been established

You may come across a scenario where you are able to send mails through the Database Mail test mail, but the notifications configured on job schedules do not trigger any mails on job failure/success… So what’s wrong?
As a first step, you need to check whether SQL Server Agent is configured to use a Database Mail profile; below are the steps to check this…

• Right-click on SQL Server Agent → select Properties → select the Alert System tab → make sure that Enable mail profile is checked → select a mail profile → click OK.

o If the above is not enabled, that is the reason for this behavior. If you are still facing the error in spite of enabling this setting, follow the steps below for resolution.

• These steps require SQL Server Agent to be restarted, so please check for any running jobs before you go ahead…
o Uncheck “Enable mail profile” in the SQL Server Agent properties, then stop the Agent and start the Agent.
o Now re-enable the mail profile in the properties and restart SQL Server Agent again.
o This will get rid of the error message “An attempt was made to send an email when no email session has been established” and you will successfully receive mails from the Agent notifications on job failures/successes.
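If you want to confirm from T-SQL whether Agent notifications are reaching Database Mail at all, one quick check, as a sketch, is to look at the mail queue in msdb:

SELECT TOP (20) send_request_date, sent_status, subject
FROM msdb.dbo.sysmail_allitems
ORDER BY send_request_date DESC;   -- Agent notification mails should appear here once the profile is enabled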

Tuesday, April 23, 2013

DELETE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'

You may come across a scenario where your SQL Agent job scheduled to run a DELETE statement against a table fails with the error below.


DELETE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

But the same DELETE statement runs fine when you run it from SSMS, so what’s wrong with my agent job? Below is the reason for the failure…
From the error message it is quite clear that the DELETE operation failed on a table that is involved in indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.
For these types of operations QUOTED_IDENTIFIER must be set to ON. Whenever a new connection is made through SSMS, QUOTED_IDENTIFIER is ON by default, but this is not the case for a SQL Server Agent job step.
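As an illustration, here is a minimal repro sketch; the table and index names are hypothetical:

-- Hypothetical table with a filtered index (creating it requires QUOTED_IDENTIFIER ON, the SSMS default)
CREATE TABLE dbo.DemoOrders (ID int PRIMARY KEY, Status varchar(20));
CREATE NONCLUSTERED INDEX IX_DemoOrders_Stale ON dbo.DemoOrders (Status) WHERE Status = 'Stale';
GO
SET QUOTED_IDENTIFIER OFF;
DELETE FROM dbo.DemoOrders;   -- fails with the error above, just like the job step
GO
SET QUOTED_IDENTIFIER ON;
DELETE FROM dbo.DemoOrders;   -- succeeds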

So job steps that run DELETE statements against such tables should have the code below prepended to them…

SET QUOTED_IDENTIFIER ON
GO
-- your DELETE statement here

The above change will make your job step succeed…

Additional Info:

• You can use DBCC USEROPTIONS, which returns the SET options active for the current connection.
• You can also set Quoted Identifier to ON at the server level: Server Properties → Connections tab → check “Quoted Identifier” → click OK.