Thursday, March 14, 2013

Find Processes which are consuming Memory in SQL Server


Which Individual Processes are taking up Memory in SQL SERVER
Here is a simple script to find the individual processes that are consuming memory in SQL Server. The output includes the program name, login name, reads, writes, and more, along with the CPU time for each session. I have used TOP 10 in the script to return the ten sessions consuming the most memory.

SELECT TOP 10
    session_id,
    login_time,
    host_name,
    program_name,
    login_name,
    status,
    cpu_time,
    memory_usage,
    total_scheduled_time,
    total_elapsed_time,
    last_request_start_time,
    last_request_end_time,
    reads,
    writes,
    logical_reads,
    transaction_isolation_level,
    lock_timeout,
    deadlock_priority,
    row_count,
    prev_error
FROM sys.dm_exec_sessions
ORDER BY memory_usage DESC
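Note that the memory_usage column reports memory in 8 KB pages. If you prefer the result in megabytes, a trimmed-down variation of the same query (the column list is shortened here just for illustration) can do the conversion inline:

```sql
SELECT TOP 10
    session_id,
    login_name,
    program_name,
    memory_usage,                                   -- number of 8 KB pages
    (memory_usage * 8) / 1024.0 AS memory_usage_mb  -- converted to MB
FROM sys.dm_exec_sessions
ORDER BY memory_usage DESC
```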

Friday, February 22, 2013

How to disable and enable all the Jobs in SQL Server

Disable and enable all the SQL jobs at one time

I have found many blogs advising readers to update the system tables in the msdb database to enable or disable SQL jobs. This does not seem right to me: Microsoft recommends not updating system tables directly.

Here is a simple script that I have written to disable or enable all the SQL jobs at one time.

Disable the Jobs:

DECLARE @sql_command VARCHAR(MAX)
SET @sql_command=''
SELECT @sql_command=@sql_command+' EXEC msdb..sp_update_job @job_id='''+cast(job_id AS VARCHAR(100))+''',@enabled=0;' FROM msdb..sysjobs 
WHERE enabled=1
EXEC(@sql_command)

Enable the Jobs:

DECLARE @sql_command VARCHAR(MAX)
SET @sql_command=''
SELECT @sql_command=@sql_command+' EXEC msdb..sp_update_job @job_id='''+cast(job_id AS VARCHAR(100))+''',@enabled=1;' FROM msdb..sysjobs 
WHERE enabled=0
EXEC(@sql_command)

NOTE: Remember, if you have to disable all the SQL jobs on a server, you can simply stop the SQL Server Agent service instead; no scheduled job will run while the Agent is stopped. If that is not an option, use the script above. If you only have to disable some of the jobs, extend the WHERE clause, for example with the LIKE keyword, to filter the jobs you need. It is up to you to modify the script to get the required result.
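For example, the LIKE-based filtering mentioned in the note can look like this (the 'Backup%' job-name pattern is just an assumed example; substitute your own pattern):

```sql
DECLARE @sql_command VARCHAR(MAX)
SET @sql_command = ''

-- Disable only jobs whose name matches a pattern (pattern is hypothetical)
SELECT @sql_command = @sql_command
    + 'EXEC msdb..sp_update_job @job_id=''' + CAST(job_id AS VARCHAR(100)) + ''', @enabled=0;'
FROM msdb..sysjobs
WHERE enabled = 1
  AND name LIKE 'Backup%'

EXEC (@sql_command)
```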

Hope this helps.

Truncate and Shrink Transaction Log in SQL Server

Transaction Log Truncation and Shrinking in SQL Server

Let’s understand the concept of transaction log truncation and shrinking in SQL Server.

The transaction log is critical in SQL Server, especially when we talk about truncation and shrinking. Let me share something about these two terms so that they become clear to most DBAs. I assume you already know the transaction log architecture: internally, the transaction log is divided into virtual log files (VLFs) for processing transactions. This section does not cover the complete architecture of the transaction log; please refer to SQL Server Books Online for a thorough picture.

Let’s categorize this by recovery model. As we know, there are three recovery models available in SQL Server: Simple, Full, and Bulk-Logged. The major difference between them is the transaction log behaviour. Let’s take them one by one.

Simple Recovery Model:

In the Simple recovery model, when a checkpoint occurs, the virtual log files whose transactions have been committed become reusable for other transactions. Remember that every transaction is recorded in the virtual log files with an LSN. Once a transaction is committed and a checkpoint occurs, SQL Server marks those virtual log files as reusable, which makes sense because they are no longer required for rollback. This is called TRUNCATION of the log.

Truncation of the log is actually truncation of the active portion of the transaction log. The PHYSICAL SIZE of the log file remains unchanged unless the Auto Shrink option is ON for the database. Most importantly, in the SIMPLE recovery model this is handled by SQL Server automatically after every checkpoint.

So, in summary: when transactions are committed or rolled back, SQL Server can reuse the space occupied by those transactions after every checkpoint.

Full or Bulk Logged Recovery Models:

In the Full or Bulk-Logged recovery models, virtual log files are never reused by SQL Server until you take a transaction log backup. After every checkpoint, SQL Server writes all the dirty pages to disk but does not make the virtual log files reusable. Even a full backup of the database does not make those virtual log files reusable; in other words, truncation of the log will not happen. Truncation happens when we take a transaction log backup: the moment we take it, the active portion of the log is truncated (with no change in the PHYSICAL SIZE), meaning the virtual log files holding committed transactions become reusable.

Sometimes your transaction log never gets truncated even though you have taken transaction log backups in the Full or Bulk-Logged recovery model. The usual reasons are transactions held active by transactional replication or long-running transactions. You can find out the details using the query below.
 

SELECT name, log_reuse_wait_desc FROM sys.databases
 

If log_reuse_wait_desc displays ACTIVE_TRANSACTION, it means there are active transactions preventing your log from truncating. If it says LOG_BACKUP, it means you have to take a transaction log backup.
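When the wait reason is ACTIVE_TRANSACTION, DBCC OPENTRAN can help you find the oldest open transaction; run it in the context of the database in question (the database name below is just a placeholder):

```sql
USE YourDatabase;  -- hypothetical database name
GO
-- Reports the oldest active transaction, if any, with its SPID and start time
DBCC OPENTRAN;
```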

Remember, when a database is in the Full or Bulk-Logged recovery model, sometimes the log never grows. This is usually because you have not yet taken a full backup of the database. Until the first full backup, the database effectively behaves like the SIMPLE recovery model: after every checkpoint the transaction log is truncated. Once you take a full backup, it works exactly as described above.

Now let’s talk about SHRINKING the transaction log. Shrinking means reducing the physical size of the file. How do we do this, and how does SQL Server decide which part of the log file can be shrunk?

First of all, let me cover the SQL command used to shrink files. I will not talk about shrinking the database here, only about shrinking the transaction log file.

We use the DBCC SHRINKFILE command to do so. Below is the syntax: the first parameter is the logical name of the log file and the second parameter is the size in MB to which you want to shrink it.
 

DBCC SHRINKFILE ('DatabaseName_log', 2)
 

Remember, a transaction log can only be shrunk to a virtual log file boundary. You can use the DBCC LOGINFO command to get information about the virtual log files. So, if you find you cannot shrink the transaction log to a particular size, run DBCC LOGINFO to see how large each virtual log file is and to what sizes you can actually shrink the file. The last row with Status = 2 marks the boundary up to which you can shrink the log file. If you want to shrink further, take a transaction log backup; once the log is truncated, the virtual log file boundary changes and DBCC SHRINKFILE can shrink the file to the new boundary.
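You can inspect the virtual log file layout before and after shrinking like this (DBCC LOGINFO is undocumented but widely used; on SQL Server 2016 SP2 and later, the documented alternative is sys.dm_db_log_info):

```sql
USE YourDatabase;  -- hypothetical database name
GO
-- One row per virtual log file; Status = 2 means the VLF is active (not reusable)
DBCC LOGINFO;
```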

For example, assume a 600 MB log file that contains six virtual log files, where the logical log starts in virtual log file 3 and ends in virtual log file 4, and you run a DBCC SHRINKFILE statement with a target_size of 275 MB, which is three-quarters of the way into virtual log file 3.

Virtual log files 5 and 6 are freed immediately because they do not contain any part of the logical log. To meet the specified target size, virtual log file 4 should also be freed, but it cannot be, because it holds the end portion of the logical log. After freeing virtual log files 5 and 6, the Database Engine fills the remaining part of virtual log file 4 with dummy records, which forces the end of the logical log to the end of virtual log file 4. In most systems, all transactions starting in virtual log file 4 will be committed within seconds, at which point the entire active portion of the log wraps around to virtual log file 1.

The DBCC SHRINKFILE statement also issues an informational message stating that it could not free all the space requested, and that you can run a BACKUP LOG statement to free the remaining space. Once the active portion of the log has moved to virtual log file 1, a BACKUP LOG statement truncates the logical log that was in virtual log file 4.

Because virtual log file 4 no longer holds any portion of the logical log, you can now run the same DBCC SHRINKFILE statement with a target size of 275 MB. Virtual log file 4 is then freed and the size of the physical log file is reduced to the size you requested.

A shrink-file operation can remove only inactive virtual log files. If no target size is specified, a shrink-file operation removes only the inactive virtual log files beyond the last active virtual log file in the file. If a target size is specified, a given shrink-file operation removes only enough inactive virtual log files to approach but not exceed the target size. After shrinking, the log file is typically somewhat larger than the target size, and it will never be smaller. The virtual log files make it difficult to predict how much the log file will actually shrink.
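Putting the pieces together, a typical shrink cycle looks like the sketch below (the database name and backup path are assumed placeholders):

```sql
-- 1. Back up the log so the active portion is truncated and VLFs become reusable
BACKUP LOG YourDatabase TO DISK = 'F:\Backup\YourDatabase_log.trn';  -- hypothetical path
GO
USE YourDatabase;
GO
-- 2. Shrink the log file down to (approximately) the target size in MB
DBCC SHRINKFILE ('YourDatabase_log', 275);
GO
-- 3. If the file did not shrink enough, repeat the log backup and shrink once more
```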

That’s all about transaction log truncation and shrinking that I can think of. To summarize: truncation affects only the active portion of the transaction log, with no change in the physical size of the file; shrinking changes the physical size of the file via the DBCC SHRINKFILE command. How it all works depends on the recovery model of your database.

Hope this is helpful.

Wednesday, February 20, 2013

Migration from SQL Server 2005 to SQL Server 2008 R2

20 Steps to Migrate from SQL server 2005 to SQL Server 2008 R2.

Here is a HIGH-LEVEL PLAN to migrate from SQL Server 2005 to SQL Server 2008 R2, covered in the 20 steps below.

Of course, it depends on how much downtime you can afford for the migration, but this plan helps keep the downtime as short as possible as far as the databases are concerned.

Here is the plan, step by step.

1.      First, check with the business what the maximum downtime can be and when the migration will be done. Generally this happens over a weekend, after business hours.

2.      Install the Upgrade Advisor tool on the SQL Server 2005 server. This tool is a free download from Microsoft. Analyze its output very carefully and find out whether it is safe to move the databases from 2005 to 2008 R2. This will give you the list of blockers that may prevent migration. To be on the safe side, resolve all the blockers before proceeding.

3.      Get the new server according to your configuration. From the DBA side, make sure you install SQL Server 2008 R2 on the server (the edition depends on your organization’s license) and apply the most recent SQL Server 2008 R2 patch. It won’t hurt to run some tests against the storage drives using SQLIO, a tool from Microsoft to test storage drive or SAN performance.

4.      Take a full backup of all the databases one day prior to migration. After taking the full backup, immediately schedule differential and transaction log backups on the existing SQL Server 2005 server. Disable any previous backup schedules: since you have already taken a full backup and scheduled differential and log backups, there is no point in keeping the old schedules.

5.      Copy the full backup files taken in step 4 to the new SQL Server and start restoring all the databases WITH NORECOVERY. Do this before the downtime and make sure it completes before the downtime starts; remember this process may take time if you have large databases.

6.      Start moving all the SQL logins to the new server using sp_help_revlogin. This can be done before the downtime starts.

7.      Start moving all the SQL jobs, maintenance plans, and SSIS packages to the new server. SQL jobs can be moved by generating SQL scripts; SSIS packages and maintenance plans can be exported to the new server. Make sure all the SQL jobs and maintenance plans stay in a disabled state, otherwise they will start running and failing unnecessarily. Again, you can do this before the downtime starts.

8.     Once the downtime starts, make sure all connections are closed. Ask the application team to stop all their services, but do not simply trust them when they say “YES, we have stopped all the services”. Use the command and suggestions below.

9.      Once the downtime starts, put all the databases in SINGLE_USER mode. You can use the command below to kill all sessions forcibly and bring a database into single-user mode.

ALTER DATABASE <DB_NAME> SET SINGLE_USER WITH ROLLBACK IMMEDIATE

This makes sure the database is in single-user mode and no transactions can happen, even if the application team has left some services running by mistake, as they always do.

10.   Now take a transaction log backup manually for all the databases and disable the backup jobs you created in step 4.

11.   Move the last differential backup and all the transaction log backups, including the latest one, for all the databases to the new server.

12.    Restore the last differential backup WITH NORECOVERY, then restore all the log backups sequentially WITH NORECOVERY except the last one; finally, restore the last transaction log backup WITH RECOVERY. You will need to do this for every database.
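The restore sequence in step 12 looks like this for a single database (names and paths are assumed placeholders):

```sql
-- The full backup was already restored WITH NORECOVERY in step 5
RESTORE DATABASE YourDatabase FROM DISK = 'F:\Backup\YourDatabase_diff.bak' WITH NORECOVERY;
RESTORE LOG YourDatabase FROM DISK = 'F:\Backup\YourDatabase_log1.trn' WITH NORECOVERY;
-- ...repeat for each intermediate log backup, in sequence...
RESTORE LOG YourDatabase FROM DISK = 'F:\Backup\YourDatabase_log_final.trn' WITH RECOVERY;
```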

13.   Now all your databases are ONLINE on the new server and you have made sure you did not miss any transactions. It is a very good idea to run DBCC CHECKDB on all the databases now to make sure there are no consistency errors.

14.   Change the database COMPATIBILITY LEVEL to 100.
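Step 14 can be done per database as follows (compatibility level 100 corresponds to SQL Server 2008/2008 R2; the database name is a placeholder):

```sql
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;  -- hypothetical database name
```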

15.   Ask the application team to change the connection string to point to the new server.

16.   Enable all the SQL jobs and maintenance plan jobs. If you need a new maintenance plan, this is the best time to create it.

17.   Discuss the backup policy with the business and implement it on the new server.

18.   Verify that the backup jobs are scheduled and running fine. Do not forget to schedule transaction log backups for the databases whose recovery model is Full or Bulk-Logged.

19.   Verify the database maintenance jobs, such as reindexing, update statistics, and consistency checks, and make sure they all run fine.

20.   Talk to the application team and see if they have any issues connecting to the new server; if so, fix them.

These are the 20 migration steps I can think of. This way, the downtime from the SQL perspective is kept minimal: most of the tasks that take a lot of time are done in advance.

Hope this is helpful for DBAs planning to work on a migration.

Monday, February 18, 2013

Start SQL Service without Tempdb

Start SQL Service without Temp DB: Can we???

Hello!!! Yes, we can definitely start the SQL service without tempdb. Suppose you installed SQL Server and specified the location of tempdb. Now somebody stops the SQL services and changes the tempdb path at the Windows level so that it no longer matches the path stored in the master database. What do we do then? Will we be able to start the SQL service and change the tempdb location again?

The answer is YES, we can definitely do this. How?

We need to start the SQL service with minimal configuration and in single-user mode. Here is how:

How to start SQL service with minimal configuration and in single user mode

Start SQL Server Configuration Manager -> go to the SQL service -> right-click -> Properties -> click the Advanced tab -> scroll down to Startup Parameters. Enter the -f and -m options at the very beginning, something like this:

-f;-m;-dC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf

Click Apply -> go to the Service tab and start the service. You will see it starts successfully, and you can now connect to SQL Server.

Connect to SQL Server and issue the following command to alter the tempdb path to the correct value (change the paths accordingly):

USE master
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = 'tempdev', FILENAME = 'F:\MSSQL\Data\tempdb.mdf');
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = 'templog', FILENAME = 'L:\MSSQL\Log\templog.ldf');
GO

Cool!!! Now go ahead and remove the –f and –m option from the start up parameters and start the SQL service again.
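Once the service is back in multi-user mode, you can verify the new tempdb file locations with a quick metadata query:

```sql
-- Lists each tempdb file with its current on-disk path
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```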

Now you are back in action. Just go and meet the person who actually changed the tempdb path after stopping the SQL service (I know nobody does that, but we need to be ready for every situation).

Hope this is helpful!!!!




Sunday, February 17, 2013

Date of Transaction Log File is Unchanged

Transaction log file date is not getting changed at windows level



Last week I had a very strange experience that I would like to share here. I was at a client site, and one day the client told me that an application user was not able to see some transactions on one of the servers. The application team was sure that some transactions had been committed very recently, within a fraction of a second.

Let me tell you that this was the server where, 10 days earlier, I had moved the data and log files to different LUNs, removed the older LUNs, and then renamed the new LUN drive letters to match the old ones. I did this to reclaim wasted space on the LUNs, since most of the space was going unused due to the small size of the databases.

The client’s Windows team went to the server, looked at the transaction log file date in Windows Explorer, and found that it had not changed in the last 10 days. They quickly concluded that somehow transactions were not happening at the SQL Server level and that there was some problem with the database.

The client came to me saying: please check why the transaction log file is not changing; we must have missed something when we changed the location of the data and log files.

This was the first time I had heard of this. We DBAs never think about the file date at the Windows level; for us, the only thing that matters is that transactions (insert/update/delete, etc.) are happening at the database level. To prove that transactions were happening, I quickly created a table, inserted some data, updated something, and retrieved the rows. Everything was fine, but even then the transaction log file date did not change; it kept showing the older date.

At that point I was sure there was no problem with the transactions and that the application team had to look elsewhere. I asked them for the data criteria they were trying to retrieve, tried the same, and SQL Server returned the data successfully through SSMS.

Now the question was why the transaction log file date was not changing even though transactions were happening in the database. I had a suspicion, so I tried shrinking the log file by a very small amount (it is not recommended to shrink log files in production unless you are very sure of what you are doing). The moment I executed DBCC SHRINKFILE, the transaction log file date changed.

So here is the conclusion: inserts and updates may not change the transaction log file date at the Windows level. The date only changes when the physical size of the file changes (shrink, growth, etc.). If there is no physical change at the file level, the date will not change.

This is fairly meaningless as far as DBAs are concerned, but sometimes we have to answer clients immediately.

Please let me know if you have any suggestions, questions, or feedback on this, as it was the first time in my career I had heard of it.

Friday, February 15, 2013

Backup Details for Older Days


How to find details of older backups available on the server for all the databases

Here is a script that lists the backup details for the last 7 days for all the databases on a server.

SELECT CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
       msdb.dbo.backupset.database_name,
       msdb.dbo.backupset.backup_start_date,
       msdb.dbo.backupset.backup_finish_date,
       msdb.dbo.backupset.expiration_date,
       CASE msdb.dbo.backupset.type
            WHEN 'D' THEN 'Database'
            WHEN 'I' THEN 'Differential'
            WHEN 'L' THEN 'Log'
       END AS backup_type,
       msdb.dbo.backupset.backup_size,
       msdb.dbo.backupmediafamily.logical_device_name,
       msdb.dbo.backupmediafamily.physical_device_name,
       msdb.dbo.backupset.name AS backupset_name,
       msdb.dbo.backupset.description
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset
        ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7)
ORDER BY msdb.dbo.backupset.database_name, msdb.dbo.backupset.backup_finish_date

Wednesday, February 13, 2013

Bulk Logged Recovery Model in SQL server


Understanding Bulk Logged Recovery Model

 

I interviewed two or three candidates recently and asked them a simple question: “What is the Bulk-Logged recovery model and HOW DOES IT WORK?” I know a candidate is happy to hear such an easy question in an interview; I always start with simple questions before going deep, so that the candidate feels at ease.

I was surprised by the answer I got from all of them: “Bulk operations never get logged in the transaction log, and this is how it increases performance.” I can expect such an answer from someone who is a fresher or has 2-3 years of experience, but from a person with more than 5 years of experience, an answer like this will force me to reject the candidate right away.

That prompted me to write something about the BULK-LOGGED RECOVERY model to remove some of the confusion about this mode.

 

First of all, there is a difference between NOT LOGGED and MINIMALLY LOGGED. In the Bulk-Logged recovery model, bulk operations are minimally logged rather than fully logged. This does not mean that bulk operations are never written to the transaction log.

Here are the operations that are minimally logged when your database is in the Bulk-Logged recovery model:

·         Bulk import operations (bcp, BULK INSERT, and INSERT... SELECT).

·         SELECT INTO operations

·         WRITETEXT and UPDATETEXT statements when inserting or appending new data into the text, ntext, and image data type columns

·         Partial updates to large value data types, using the .WRITE clause in the UPDATE statement when inserting or appending new data

·         CREATE INDEX operations (including indexed views).

·         ALTER INDEX REBUILD or DBCC DBREINDEX operations.

 

In the Bulk-Logged recovery model, only the allocations made by bulk operations are recorded in the transaction log, not the complete data pages. Since the data pages are not logged, this recovery model does not take much space in the log, and that is why the transaction log does not grow when we run costly bulk operations, especially index rebuilds. However, when we take a transaction log backup, the backup includes the complete data pages as well, to give you the ability to restore the bulk operations. This is why a transaction log backup is sometimes larger than the transaction log file itself.

The data files are what actually hold the data pages of bulk operations. This is why, if a minimally logged operation has been performed since the last log backup and one or more data files are damaged and offline because of a disaster, a tail-of-the-log backup cannot be performed, and all user transactions performed since the last log backup will be lost.

Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations while still fully logging other transactions.

Let me talk about the advantages and disadvantages of this recovery model in SQL Server.

 

Advantage:

Since bulk operations are minimally logged, their performance improves, and the log does not grow unexpectedly during costly operations such as index rebuilds or index creation.

Disadvantage:

If the transaction log is damaged, changes since the most recent log backup must be redone. Also, if any bulk operations have occurred since the most recent log backup, those changes must be redone, because point-in-time recovery is not possible for that interval.

Remember:

1.    You cannot restore to a point in time within a log backup that contains bulk-logged operations.

2.    You have to take the transaction log backups else your log will not be truncated.

3.    Switching the recovery model from Full to Bulk-Logged does not break the log backup chain, but it is always good practice to take a log backup before and after changing the recovery model.

4.    Microsoft recommends using this recovery model only while you run bulk operations (as listed above). Once the operation(s) complete, switch back to the FULL recovery model so that your database can be restored to a point in time if disaster strikes.
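Points 3 and 4 can be sketched as follows (the database name and backup paths are assumed placeholders):

```sql
-- Take a log backup before switching, to anchor the log chain
BACKUP LOG YourDatabase TO DISK = 'F:\Backup\YourDatabase_before.trn';

ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;

-- ...run the bulk operation here, e.g. an index rebuild or BULK INSERT...

ALTER DATABASE YourDatabase SET RECOVERY FULL;

-- Take a log backup after switching back, so point-in-time recovery resumes from here
BACKUP LOG YourDatabase TO DISK = 'F:\Backup\YourDatabase_after.trn';
```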

 

Hope this helps you understand the concept of the BULK-LOGGED RECOVERY model in SQL Server. I find this recovery model very useful for databases where point-in-time recovery is not needed and many bulk operations run.

Wednesday, February 6, 2013

SQL Server Database Backup Getting Skipped





Why some Microsoft SQL Server databases are skipped while performing backups using T-SQL

I created a stored procedure to take backups of all my databases and executed it through a SQL job. Since the total database size was around 300 GB, I left it running and monitored the status of the job. Once the job completed successfully, I assumed my backups were done, but when I went to the backup location I found that backups were missing for 3 of my databases. Strange!!!
 

I looked at my stored procedure again and again and could not find anything wrong with it. I was simply asking SQL Server to loop through all my user databases with a CURSOR and back each of them up to a network location.


Here is the Stored Procedure script that I used.
 

CREATE PROCEDURE [dbo].[DBA_DBs_Copy_Only_Backup]

      -- Add the parameters for the stored procedure here

AS
BEGIN

      SET NOCOUNT ON;

      DECLARE @name VARCHAR(50) -- database name 
      DECLARE @path VARCHAR(256) -- path for backup files 
      DECLARE @fileName VARCHAR(256) -- filename for backup 
      DECLARE @fileDate VARCHAR(20) -- used for file name

      -- specify database backup directory
      SET @path = 'Specify your local/network path here' 

       -- specify filename format
      SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112) + REPLACE(CONVERT(VARCHAR(20),GETDATE(),108),':','')

      DECLARE db_cursor CURSOR FOR 
      SELECT name
      FROM master.dbo.sysdatabases
      WHERE name NOT IN ('tempdb')  -- exclude these databases

      OPEN db_cursor  
      FETCH NEXT FROM db_cursor INTO @name  

      WHILE @@FETCH_STATUS = 0  
      BEGIN  
            SET @fileName = @path + @name + '_' + @fileDate + '.BAK'

            BACKUP DATABASE @name TO DISK = @fileName 
            WITH COPY_ONLY, SKIP, NOINIT
            FETCH NEXT FROM db_cursor INTO @name  

      END  

      CLOSE db_cursor  
      DEALLOCATE db_cursor

END
 

Finally, I found the problem was with the cursor. To resolve the issue I made the CURSOR STATIC in my code, and after that no databases were skipped from backup. This is what I did to make the cursor static:
 
DECLARE db_cursor CURSOR STATIC FOR
 
Remember, by default a cursor is dynamic: it can re-evaluate its result set while your script runs, so rows can be missed when the database state changes mid-loop. Making the cursor STATIC causes the retrieved rows to be copied into tempdb when the cursor opens, so the list of databases never changes for the duration of the script.