Saturday, November 27, 2010

How to set up SQL Server 2005 transaction log shipping on a large database


I have tried a lot of combinations over the years in order to find an effective method for implementing transaction log shipping between servers which are in a workgroup, not in a domain. I realized the things you learn from articles and books are for small and medium sized databases. When your database becomes 10 GB or bigger, things become a lot harder than they look. Additionally, many things changed in SQL Server 2005, so it's even more difficult to configure log shipping properly nowadays.

Here are the steps that I finally found to work. Let's assume there are two servers running SQL Server 2005. Make sure both servers have the latest service pack; Service Pack 1 has already been released.

1. Create a new user Account named "SyncAccount" on both computers. Use the exact same user name and password.

2. Make sure File Sharing is enabled on the local area connection between the servers. Also allow file sharing through the firewall.

3. Make sure the link between the servers is not a regular LAN connection. It must be a gigabit link with near-zero data corruption; both the cable and the switch need to be perfect. If possible, connect both servers directly NIC-to-NIC with fibre optic cable in order to avoid a separate switch.

4. Now create a folder named "TranLogs" on both servers. Let's assume the folder is on E:\Tranlogs.

5. On the primary database server, share the "TranLogs" folder and allow SyncAccount "Full Control" on the share. Then also allow SyncAccount "Full Control" on the TranLogs folder itself. So you are setting the same permission from both the "Sharing" tab and the "Security" tab.

6. On the secondary database server, allow SyncAccount "Full Control" on the TranLogs folder. There is no need to share it.

7. Test whether SyncAccount can really connect between the servers. On the secondary server, open a Command Prompt running as SyncAccount:
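
A quick sketch of what I mean, assuming SyncAccount is a local account on the secondary server (you will be prompted for its password):

runas /user:SyncAccount cmd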


9. Now you have a command prompt which is running with SyncAccount's privileges. Let's confirm the account can read and write the "TranLogs" shares on both servers.
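
For example, assuming the primary server is reachable as PRIMARY (replace it with your actual server name), try listing the share and writing a small test file from that SyncAccount command prompt:

dir \\PRIMARY\TranLogs
echo test > \\PRIMARY\TranLogs\synctest.txt
del \\PRIMARY\TranLogs\synctest.txt
dir E:\TranLogs

If none of these commands reports "Access is denied", SyncAccount has the permissions it needs.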


11. This is exactly what SQL Server Agent will be doing during log shipping. It will copy log files from the primary server's network share to its own log file folder. So SyncAccount needs to be able both to read files from the primary server's network share and to write to its own TranLogs folder. The test above verifies exactly that.

12. This is something new in SQL Server 2005: add SyncAccount to the SQL Server Agent Windows group "SqlServer2005SqlAgentUser….". You will find this Windows user group after installing SQL Server 2005.

13. Now go to Control Panel -> Administrative Tools -> Services and find the SQL Server Agent service. Open its properties and, on the Log On tab, set SyncAccount as the service account. Restart the service. Do this on both servers.


15. I use the sa account to configure the log shipping. So, do this on both servers (a T-SQL sketch follows the two sub-steps):

a. Enable "sa" account. By default, sa is disabled in SQL Server 2005.

b. On the "sa" account, turn off the password expiration policy. This prevents the sa password from expiring automatically.
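
Both sub-steps can also be done from a query window; a T-SQL sketch (set or verify the sa password separately):

ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH CHECK_EXPIRATION = OFF;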

16. On the secondary server, you need to allow remote connections. By default, SQL Server 2005 disables remote TCP/IP connections, and as a result you cannot log in to the server from another server. Launch the Surface Area Configuration tool from Start -> Programs -> Microsoft SQL Server 2005 and go to the "Remote Connections" section. Choose the third option, which allows both TCP/IP based remote connections and local named pipe connections.

17. On Secondary Server firewall, open port 1433 so that primary server can connect to it.

18. Restart SQL Server. Yes, you need to restart SQL Server.

18. On the primary server, go to Database Properties -> Options and set the Recovery Model to "Full". If it was already set to Full, it is wise to first set it to Simple, shrink the transaction log file, and then set it back to "Full". This truncates the transaction log file for sure (a T-SQL sketch follows).
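
A T-SQL sketch of that Simple/shrink/Full cycle; the database name MyDatabase and the logical log file name MyDatabase_Log are placeholders for your own names:

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE ('MyDatabase_Log', 100);  -- shrink the log file to roughly 100 MB
ALTER DATABASE MyDatabase SET RECOVERY FULL;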

19. Now take a full backup of the database. During the backup, make sure you put the backup file on a physically separate hard drive from the drive where the MDF is located. Remember, not different logical drives, different physical drives. So you should have at least two hard drives in the server. During a backup, SQL Server reads from the MDF and writes to the backup file; if both the MDF and the backup file are on the same hard drive, the backup takes more than double the time, keeps the disk fully occupied, and makes the server very slow.
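
The backup itself can also be scripted; a sketch, assuming the data files are on D: and a second physical drive is mounted as F: (names and paths are placeholders):

BACKUP DATABASE MyDatabase
TO DISK = 'F:\Backups\MyDatabase_Full.bak'
WITH INIT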

20. After the backup is done, RAR the backup file. This ensures that when you copy it to the other server you will know whether there was any data corruption while the file was being transferred: if you fail to unRAR the file on the secondary server, you have assurance that there is some problem on the network and you must replace the network infrastructure. The RAR should also be written to a separate hard drive from the one where the backup file is located, for the same reason: the read is on one drive and the write is on another. Better yet, RAR directly to the destination server using the network share. That has two benefits:

a. Your server's IO is saved. There is no local write, only a read.

b. Both the RAR and the network copy are done in one step.


22. By the time you are done with the backup, RAR, copy over the network, and restore on the other server, the transaction log file (LDF) on the primary database server may have become very big. For us, it becomes around 2 to 3 GB. So we have to manually take a transaction log backup and ship it to the secondary server before we configure transaction log shipping:
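
A sketch of that manual transaction log backup (database name and path are placeholders); the resulting .trn file is what you RAR and copy to the secondary server:

BACKUP LOG MyDatabase
TO DISK = 'E:\TranLogs\MyDatabase_Manual1.trn'
WITH INIT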


24. When you are done copying the transaction log backup to the secondary server, first restore the full backup on the secondary server.


26. But before restoring, go to the Options tab and choose RESTORE WITH STANDBY.


28. When the full backup is restored, restore the transaction log backup.

29. REMEMBER: go to the Options tab and set the Recovery State to "RESTORE WITH STANDBY" before you hit the OK button. (A T-SQL sketch of both restores follows.)
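
If you prefer scripting the two restores instead of clicking through the GUI, a rough T-SQL equivalent looks like this (file paths, logical file names, and the standby/undo file are all placeholders):

RESTORE DATABASE MyDatabase
FROM DISK = 'E:\TranLogs\MyDatabase_Full.bak'
WITH MOVE 'MyDatabase' TO 'E:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_Log' TO 'E:\Data\MyDatabase_Log.ldf',
     STANDBY = 'E:\TranLogs\MyDatabase_Undo.dat'

RESTORE LOG MyDatabase
FROM DISK = 'E:\TranLogs\MyDatabase_Manual1.trn'
WITH STANDBY = 'E:\TranLogs\MyDatabase_Undo.dat'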

30. This generally takes a long time. Too long, in fact. Every time I do the manual full backup, RAR, copy, unRAR, and restore, the transaction log (LDF) file grows to 2 to 3 GB again. It then takes a long time to back up, copy, and restore that transaction log (more than an hour for the restore alone), and within this time the log file on the primary server again becomes large. As a result, when log shipping starts, the first log ship is huge. So you need to plan this carefully and do it only when you have the least amount of traffic.

31. I usually have to do this manual Transaction Log backup twice. First one is around 3 GB. Second one is around 500 MB.

32. Now you have a database on the secondary server ready to be configured for Log shipping.

33. Go to the primary server, select the database, right click, "Tasks" -> "Shrink", and shrink the log file.

34. Go to the primary server, bring up the database properties, go to the Transaction Log Shipping page, and enable log shipping.


36. Now configure the backup settings.


38. Remember, the first path is the network path that we tested from the command prompt on the secondary server. The second path is the local folder on the primary server's hard drive which is shared and accessible through that network path.

39. Add a secondary server. This is the server where you have restored the database backup.


41. Choose "No, the secondary database is initialized" because we have already restored the database.

42. Go to the second tab, "Copy Files", and enter the path on the secondary server where log files will be copied to. Note: the secondary server will fetch the log files from the primary server's network share into its own local folder, so the path you specify here is a path on the secondary server. Do not be confused if it happens to be the same path as on the primary server; I just use the same folder layout on all servers. It can be D:\TranLogs if you have the TranLogs folder on the D: drive of the secondary server.


44. On the third tab, "Restore Transaction Log", configure the restore settings.


46. It is very important to choose "Disconnect users in database…". If you don't do this and by any chance Management Studio is open on the database on the secondary server, log shipping will keep on failing. So, force a disconnect of all users while the database backup is being restored.

47. Optionally, set up a Monitor Server. It keeps track of the backup, copy, and restore history from both servers and raises alerts when log shipping falls behind or fails; note that making the secondary server the primary when the primary crashes is still a manual step.


49. In the end, the transaction log shipping configuration window should show the backup settings, the secondary server, and the monitor server (if you added one).


51. When you press OK, SQL Server applies the configuration and shows the progress and status of each step.

52. Do not be happy just because every step shows "Success". Even if you got all the paths and settings wrong, it will still show as successful. Log in to the secondary server, go to SQL Server Agent -> Jobs and find the log shipping restore job. If the job is not there, your configuration was wrong. If it is there, right click it and select "View History". Wait about 15 minutes for one log ship to complete, then refresh and check the list (you can also check from a query window; see the query after this list). If you see all OK, then it is really OK. If not, there are two possibilities:

a. See whether the Log Ship Copy job failed or not. If it fails, then you entered an incorrect path. The problem can be one of the following:

  1. The network location on primary server is wrong
  2. The local folder was specified wrong
  3. You did not set SyncAccount as the account that runs SQL Server Agent, or you did but forgot to restart the service.

b. If restore fails, then the problems can be one of the following:

i. SyncAccount is not a valid login in SQL Server. From SQL Server Management Studio, add SyncAccount as a login.

ii. You forgot to restore the database on secondary server as Standby.

iii. You probably took some manual transaction log backup on the primary server in the meantime. As a result, the backup that log shipping took was not the right sequence.
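
Here is the kind of query I mean; run it against msdb on the secondary server. The name filter assumes the wizard-generated restore job name starts with "LSRestore", so adjust it to whatever your job is actually called (run_status 1 means success, 0 means failure):

SELECT TOP 20 j.name, h.step_id, h.run_date, h.run_time, h.run_status, h.message
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
WHERE j.name LIKE 'LSRestore%'
ORDER BY h.run_date DESC, h.run_time DESC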

53. If everything is OK, the job history will show every copy and restore as successful.


Thursday, November 25, 2010

Restore Transaction logs SQL Server


The scenario

A coworker calls you in a panic because he accidentally deleted some production data, and he wants you to restore the lost records.

If you are lucky, you have a data auditing system in place, and you can restore these records from an audit table. If you do not have a tool that can read a transaction log so that you can undo transactions, you will likely need to restore the altered database to a certain point in time, on the same server or on a separate server from the one hosting the current database instance.

The restoration process

Note that, for the purpose of this article, I am assuming that your database recovery mode is set to FULL.

The first step in the process is to perform a tail-log backup. You want to perform this type of backup before a database restore to ensure that any records that have changed since the last backup are available to be included in the restore process.
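
A sketch of a tail-log backup, with the database name and path as placeholders (add WITH NO_TRUNCATE if the data files themselves are damaged):

BACKUP LOG YourDatabase
TO DISK = 'D:\BackupFiles\YourDatabase_TailLog.trn'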

Next you should locate where the database backup files are stored on the machine or the network. It may be a good idea to copy these files to your target server if you are going to be restoring the database on a different server. In the backup file location, find the very last full database backup that was completed (these files usually end with the extension .bak); you need to restore this full backup. The script below applies the full backup file to the NewDatabase database:

RESTORE DATABASE NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabaseFullBackup.bak'
WITH
MOVE 'PreviousDatabase' TO 'D:\DataFiles\TestDatabase.mdf',
MOVE 'PreviousDatabase_log' TO 'D:\DataFiles\TestDatabase_Log.ldf',
NORECOVERY

The code specifies that the location of the full backup file is on the server's D: drive and that you are restoring the file to the database named NewDatabase. The MOVE options relocate the data file and the log file from the full backup to new files for the TestDatabase database. The last option in the script, NORECOVERY, is crucial. NORECOVERY is one of three available options, which are outlined below.

|> NORECOVERY: Tells SQL Server that you are not finished restoring the database and that subsequent restores will follow. While the database is in this state, the database is not yet available, so no connections are allowed.
|> RECOVERY: Tells SQL Server that you are finished restoring the database, and it is ready to be used. This is the default option, and it is by far the one that is used most often.
|> STANDBY: Tells SQL Server that the current database is not yet ready to be fully recovered and that subsequent log files can be applied to the restore. You can use this option so that connections are available to the restore database if necessary. However, future transaction logs can only be applied to the database if no current connections exist.

Once you restore the full backup using the NORECOVERY option, you can begin applying the transaction log backups or the differential backup.

Differential backup
A differential backup is a backup of any changes to the database that have occurred since the last full database backup. If you have multiple differential backups, you will only need to restore the very last one taken. In this situation, there are no differential backups, so you can move directly to the transaction log backups.

Transaction log backups
A transaction log backup keeps track of all transactions that have occurred since the last transaction log backup; it also allows you to restore your database to a point in time before a database error occurred. Transaction log backups occur in sequence, creating a chain. When restoring a sequence of transaction log backups to a point in time, it is required that the transaction log files are restored in order.

When you use a database maintenance plan to create the transaction log backups, a time indicator is typically included in the transaction log file name. The script below applies the first three transaction log backups using the NORECOVERY option; the last statement applies the fourth and brings the database back to availability as of the very end of that last transaction log file.

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup1.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup2.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup3.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup4.trn'
WITH RECOVERY

Restoring to a point in time
In the example above, you restore the database to the end of the last transaction log. If you want to recover your database to a specific point in time before the end of the transaction log, you must use the STOPAT option. The script below restores the fourth transaction log in the log sequence to 4:01 PM -- just before the database mishap occurred.

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup4.trn'
WITH STOPAT = N'6/28/2007 4:01:45 PM', RECOVERY

Now that you have restored the database to the point you need, it is time to decide how to help the developers and make their situation a little easier. My suggestion is to copy the table the developers need into a separate table on the server so that you or they can correct the data problem.
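
For example, assuming the deleted rows came from a table named Orders and the restored NewDatabase sits on the same instance as production, a copy along these lines gives the developers something safe to work from (all names here are hypothetical):

SELECT *
INTO ProductionDatabase.dbo.Orders_Recovered
FROM NewDatabase.dbo.Orders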

Be prepared

Restoring your database to a point in time is one of those things that you never want to have to use, but you need to be able to complete it if necessary. I took an overview approach to restoring a SQL Server database using transaction logs for this particular circumstance. It is important to note that companies use different approaches to backing up data, so it is very important to be thoroughly involved in your company's database backup process. Be sure to test restores and restore scenarios often so that you are ready when a disaster does occur.


Database Mail in SQL Server 2005


The SQL Mail problems that we faced in SQL Server 7.0 and 2000 are no more. SQL Server 2005 supports and uses SMTP email, and there is no longer a need for a MAPI client to send email. In SQL Server 2005, the mail feature is called Database Mail. In this article, I am going to demonstrate step-by-step, with illustrations, how to configure Database Mail and send email from SQL Server.

Database Mail has four components.

1.     Configuration Component

The configuration component has two sub-components. One is the Database Mail account, which contains information such as the SMTP server name, the email address, and the login and password used for SMTP mail.

The second sub-component is the Database Mail profile. A mail profile can be public, meaning members of the DatabaseMailUserRole in the msdb database can send email with it. For a private profile, a set of users must be defined.

2.     Messaging Component

The messaging component is basically all of the objects related to sending email, stored in the msdb database.

3.     Database Mail Executable

Database Mail uses the DatabaseMail90.exe executable to send email.

4.     Logging and Auditing component

Database Mail stores its log information in the msdb database, and it can be queried using sysmail_event_log.

Step 1

Before setting up the Database Mail profile and accounts, we have to enable the Database Mail feature on the server. This can be done in two ways. The first method is to use Transact SQL to enable Database Mail. The second method is to use a GUI.

In the SQL Server Management Studio, execute the following statement.

use master
go
sp_configure 'show advanced options',1
go
reconfigure with override
go
sp_configure 'Database Mail XPs',1
--go
--sp_configure 'SQL Mail XPs',0
go
reconfigure
go

Alternatively, you could use the SQL Server Surface area configuration. Refer Fig 1.0.


Fig 1.0

Step 2

The configuration component's Database Mail account can be created by using the sysmail_add_account_sp procedure. In this article, we are going to create the account "MyMailAccount", using mail.optonline.net as the mail server and makclaire@optonline.net as the e-mail account.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'MyMailAccount',
@description = 'Mail account for Database Mail',
@email_address = 'makclaire@optonline.net',
@display_name = 'MyAccount',
@username='makclaire@optonline.net',
@password='abc123',
@mailserver_name = 'mail.optonline.net'

Step 3

The second sub component of the configuration requires us to create a Mail profile.

In this article, we are going to create a Database Mail profile named "MyMailProfile" using the sysmail_add_profile_sp procedure.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'MyMailProfile',
@description = 'Profile used for database mail'

Step 4

Now execute the sysmail_add_profileaccount_sp procedure to add the Database Mail account we created in step 2 to the Database Mail profile we created in step 3.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'MyMailProfile',
@account_name = 'MyMailAccount',
@sequence_number = 1

Step 5

Use the sysmail_add_principalprofile_sp procedure to grant the Database Mail profile access to the msdb public database role and to make the profile the default Database Mail profile.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'MyMailProfile',
@principal_name = 'public',
@is_default = 1 ;

Step 6

Now let us send a test email from SQL Server.

Please execute the statement below.

declare @body1 varchar(100)
set @body1 = 'Server :'+@@servername+ ' My First Database Email '
EXEC msdb.dbo.sp_send_dbmail @recipients='mak_999@yahoo.com',
@subject = 'My Mail Test',
@body = @body1,
@body_format = 'HTML' ;

You will get the message shown in Fig 1.1.


Fig 1.1

Moreover, in a few moments you will receive the email message shown in Fig 1.2.


Fig 1.2
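
If the message never arrives, you can also check the status of queued and sent items directly; sysmail_allitems is a standard view in msdb:

SELECT mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC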

You may get the error message below, if you haven't run the SQL statements from step 1.

Msg 15281, Level 16, State 1, Procedure sp_send_dbmail, Line 0
SQL Server blocked access to procedure 'dbo.sp_send_dbmail' of
component 'Database Mail XPs' because this component is turned off as part of
the security configuration for this server. A system administrator can enable
the use of 'Database Mail XPs' by using sp_configure. For more information
about enabling 'Database Mail XPs', see "Surface Area Configuration"
in SQL Server Books Online.

You may see this in the database mail log if port 25 is blocked. Refer Fig 1.3.


Fig 1.3

Please make sure port 25 is not blocked by a firewall or anti virus software etc. Refer Fig 1.4.


Fig 1.4

Step 7

You can check the configuration of the Database Mail profile and account using SQL Server Management Studio by right-clicking Database Mail [Refer Fig 1.5] and clicking Configure Database Mail. [Refer Fig 1.6]


Fig 1.5


Fig 1.6

Step 8

The log related to Database Mail can be viewed by executing the statement below. Refer Fig 1.7.

SELECT * FROM msdb.dbo.sysmail_event_log


Fig 1.7

Conclusion

This article has demonstrated, step by step and with illustrations, how to configure Database Mail and send email from SQL Server.


Wednesday, November 24, 2010

The log or differential backup cannot be restored because no files are ready to rollforward.


If you have found this page, it is likely that you encountered the following error when you tried to restore a differential backup using Microsoft SQL Server 2005.

Restore failed for Server ''. (Microsoft.SqlServer.Smo)

Additional Information:
System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

What this error is telling you is that there is no database left in a non-operational (restoring) state, that is, no database where uncommitted transactions have not yet been rolled back and further backups can still be applied.

The easy way to reproduce this error is to back up your database using the full recovery model, taking both full and differential backups. Once you have your full and differential backup files, restoring the database is simply a matter of restoring the full backup first and then one of the differential files (differential backups have all the changes since the last full backup), which brings you up to the point you want to restore to.

You will get the above error when you try to restore the differential backup (after you just restored the full backup).

Unfortunately, you forgot one critical detail (just like I did at first). You MUST restore everything but the last backup (in this case the full backup) with the NORECOVERY option.

In Microsoft SQL Server Management Studio there are three options on the Options page when you restore a database.

Option 1 (the default): Leave the database ready to use by rolling back uncommitted transactions. Additional transaction logs cannot be restored. (RESTORE WITH RECOVERY)

Option 2: Leave the database non-operational, and do not roll back uncommitted transactions. Additional transaction logs can be restored. (RESTORE WITH NORECOVERY)

Option 3: Leave the database in read-only mode. Undo uncommitted transactions, but save the undo actions in a standby file so that recovery effects can be reversed. (RESTORE WITH STANDBY)

To properly restore a database that is backed up using the full recovery model with full and differential backups, here are the steps you need to follow to avoid the above error (a T-SQL equivalent follows the two lists).

Restore Full backup
  1. Open the Restore Database window in Microsoft SQL Server Management Studio
  2. Ensure the To database field is filled in with the name you want.
  3. Choose From device as the Source for restore.
  4. Choose the full backup file you want to restore. In most cases this is the most recent full backup file.
  5. Click the Options page on the left navigation.
  6. Choose Leave the database non-operational, and do not roll back uncommitted transactions. Additional transaction logs can be restored.(RESTORE WITH NORECOVERY). This is the most important step!!!
Restore Differential backup
  1. Open the Restore Database window in Microsoft SQL Server Management Studio
  2. Ensure the To database field is filled in with the name you want. This is the same name that you specified in step 2 of the full backup restore.
  3. Choose From device as the Source for restore.
  4. Choose the differential backup file you want to restore. In most cases this is the most recent differential backup file.
  5. Click the Options page on the left navigation.
  6. Choose the default: Leave the database ready to use by rolling back uncommitted transactions. Additional transaction logs cannot be restored.(RESTORE WITH RECOVERY) Make sure to choose this if you want to use your database after the restore.
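
If you prefer T-SQL over the GUI, the same two-step restore looks roughly like this (the database name and file paths are placeholders):

RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_Full.bak'
WITH NORECOVERY

RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_Diff.bak'
WITH RECOVERY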

Database Backup Types in SQL Server



Database backups are at the core of any SQL Server disaster recovery planning for any production system. Backups may be used to provide a means of recovery to a point-in-time when the database was last operational. Microsoft® SQL Server™ provides several types of backups that may be combined to formulate a customized disaster recovery plan depending on the nature of the data and the recovery requirements. It is highly recommended that all SQL Server databases be backed up periodically.

SQL Server backup media

A database may be backed up to disk or to tape. The examples in this article assume a disk backup directly into a disk file (as opposed to a disk backup device). Any database can be backed up to a random disk file at any time. The file may either be initialized (using WITH INIT) or appended with the new backup.

Types of backups in SQL Server

SQL Server provides several different kinds of backups including Complete, Differential, Transaction Log, and File(s) and Filegroup(s) backups. A combination of these backups may be used to formulate a robust disaster recovery strategy. The following paragraphs explain each SQL Server backup type.

Complete database backup

A complete database backup creates a stand-alone image of the entire database. A complete database backup is self-contained and may be restored to either the same or a new database on the same or a different server. This provides plenty of flexibility at the time when this backup has to be restored. A complete backup may be restored without the need for any other kind of backup. It may also be performed for databases in any recovery model. Restoring a complete database backup typically would be considered a starting point for a disaster recovery situation where the entire database is lost or damaged. It is recommended that a complete database backup be performed at regular intervals for all production databases. It is also recommended that a complete backup should be performed for system databases if there are any changes performed to the SQL Server operating environment such as creating or removing databases, configuring security, creating and modifying DTS/SSIS packages or scheduled jobs, adding and removing linked servers, etc.

Backup syntax

BACKUP DATABASE Northwind
TO DISK = 'c:\backups\northwind.bak'
WITH INIT

Restore syntax (Same database)

RESTORE DATABASE Northwind
FROM DISK = 'c:\backups\northwind.bak'

Restore syntax (New database and/or server)

RESTORE DATABASE Northwind_new
FROM DISK = 'c:\backups\northwind.bak'
WITH MOVE 'northwind' TO 'c:\new_location\Northwind_new.mdf',
     MOVE 'northwind_log' TO 'c:\new_location\Northwind_new_log.ldf'

Differential database backup

A differential backup backs up only modified extents since the last complete backup. An extent is a group of 8 data pages each consisting of 8 KB (64 KB in total). By definition, differential backups are cumulative. The most recent differential backup contains all changes from all previous differential backups performed since the most recent complete database backup. Differential backups may be considered as an alternative for databases that are large and are modified infrequently. These would include data warehouse type of databases. Differential backups have several limitations including the following:

  • They do not provide point-in-time restore capabilities
  • They may only be restored after a complete database backup is restored
  • They may not be performed on the master database

Backup syntax

BACKUP DATABASE Northwind
TO DISK = 'c:\backups\northwind_diff.bak'
WITH INIT, DIFFERENTIAL

Restore syntax (Same database - Note that a complete database backup is restored first using WITH NORECOVERY)

RESTORE DATABASE Northwind
FROM DISK = 'c:\backups\northwind.bak'
WITH NORECOVERY

RESTORE DATABASE Northwind
FROM DISK = 'c:\backups\northwind_diff.bak'
WITH RECOVERY

Transaction log backup

An SQL Server database consists of two components: data file(s) and transaction log file(s). A transaction log captures the modifications made to the database. A simple transaction may place several records in the transaction log. Each of these records is known as a log record and is assigned a unique identification number known as the log sequence number (LSN). Log records that belong to the same transaction are linked together through the LSN. If SQL Server service shuts down unexpectedly, upon restart the recovery process examines the entries in the transaction log and if there are transactions that have not been rolled forward completely, the recovery process rolls back the changes performed as part of these incomplete transactions. This operation is extremely important as it forms the basis of transactional recovery. Entries in the transaction log are also used if transactional replication is configured for the specific database.

A transaction log backup backs up all transactions since either the previous transaction log backup, or the complete database backup if there have been no transaction log backups performed for the database in the past. This backup may then be used to apply the backed-up changes, in case disaster recovery is required. Transaction log backups may only be applied to a database in an unrecovered state. A database may be in an unrecovered state if it is being restored from a set of backups as part of a disaster recovery procedure, or if it is configured as a standby database on a warm backup server. A transaction log backup also truncates the inactive portion of the transaction log, unless the database is configured as a Publisher in transactional replication and there are transactions pending propagation to Subscribers.

Each transaction log backup contains a First and Last log sequence number (LSN). Consecutive transaction log backups should have sequential LSNs for the boundary log records. These LSN values may be examined using the RESTORE HEADERONLY command. If LastLSN from the previously restored transaction log backup does not match the FirstLSN from the backup that is currently being restored, the restore operation fails with the following error: "This backup set cannot be restored because the database has not been rolled forward far enough. You must first restore all earlier logs before restoring this log". If the above message is generated while restoring a particular transaction log backup, which is part of a set of transaction log backups that are to be restored, any attempts to restore further transaction log backups will fail with this message. There could be several reasons for consecutive transaction log backups being out of sequence. Some of the most common reasons noted from support experience have been:
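
For example, to inspect the FirstLSN and LastLSN recorded in a particular log backup file (the path is just a placeholder):

RESTORE HEADERONLY
FROM DISK = 'c:\backups\northwind_log_1.bak'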

  • The database recovery model has been changed to Simple and back to either Full or Bulk-Logged. Switching the recovery mode to Simple causes the transaction log to be truncated
  • Another transaction log backup was performed between the previous successfully restored backup and the one generating this message
  • The transaction log was manually truncated between the two backups
  • The database was in Bulk-Logged recovery model and non-logged operations were performed
  • Transaction log backups are not allowed for databases in Simple Recovery model. While in Simple Recovery model, a database's transaction log is truncated every time a CHECKPOINT is invoked for the database

Transaction log backups provide the possibility of performing a point-in-time restore or point-of-failure restore. You can also perform a restore to a named transaction with transaction log backups.

Backup syntax

BACKUP LOG Northwind
TO DISK = 'c:\backups\northwind_log_1.bak'
WITH INIT

Restore syntax (Same database - Note that a complete database backup is restored first using WITH NORECOVERY, then a sequence of transaction log backups)

RESTORE DATABASE Northwind
FROM DISK = 'c:\backups\northwind.bak'
WITH NORECOVERY

RESTORE LOG Northwind
FROM DISK = 'c:\backups\northwind_log_1.bak'
WITH RECOVERY

File(s) and Filegroup(s) backup

Use BACKUP to back up database files and filegroups instead of the full database when time constraints make a full database backup impractical. To back up a file instead of the full database, put procedures in place to ensure that all files in the database are backed up regularly. Also, separate transaction log backups must be performed. After restoring a file backup, apply the transaction log to roll the file contents forward to make it consistent with the rest of the database.
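
Backup syntax (a sketch; 'PRIMARY' is simply the default filegroup that every database has, so substitute your own file or filegroup names)

BACKUP DATABASE Northwind
FILEGROUP = 'PRIMARY'
TO DISK = 'c:\backups\northwind_fg_primary.bak'
WITH INIT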


Wednesday, November 3, 2010

Java Batch Updates


  Statements

The batch update facility allows a Statement object to submit a set of heterogeneous update commands together as a single unit, or batch, to the underlying DBMS. In the example below all of the update operations required to insert a new employee into a fictitious company database are submitted as a single batch.

// turn off autocommit
con.setAutoCommit(false);

Statement stmt = con.createStatement();

stmt.addBatch("INSERT INTO employees VALUES (1000, 'Joe Jones')");
stmt.addBatch("INSERT INTO departments VALUES (260, 'Shoe')");
stmt.addBatch("INSERT INTO emp_dept VALUES (1000, 260)");

// submit a batch of update commands for execution
int[] updateCounts = stmt.executeBatch();


In the example, autocommit mode is disabled to prevent the driver from committing the transaction when Statement.executeBatch() is called. Disabling autocommit allows an application to decide whether or not to commit the transaction in the event that an error occurs and some of the commands in a batch cannot be processed successfully. For this reason, autocommit should always be turned off when batch updates are done. The commit behavior of executeBatch is always implementation defined when an error occurs and autocommit is true.

To keep our discussion of batch updates general, we define the term element to refer to an individual member of a batch. As we have seen, an element in a batch is just a simple command when a Statement object is being used. Although we are focusing on using Statement objects to do batch updates in this section, the discussion that follows applies to PreparedStatement and CallableStatement objects as well.

In the new JDBC API, a Statement object has the ability to keep track of a list of commands, or batch, that can be submitted together for execution. When a Statement object is created, its associated batch is empty: the batch contains no elements. The Statement.addBatch() method adds an element to the calling statement's batch. The method Statement.clearBatch() (not shown above) can be called to reset a batch if the application decides not to submit a batch of commands that has been constructed for a statement.

Successful execution

The Statement.executeBatch() method submits a statement's batch to the underlying data source for execution. Batch elements are executed serially (at least logically) in the order in which they were added to the batch. When all of the elements in a batch execute successfully, executeBatch() returns an integer array containing one entry for each element in the batch. The entries in the array are ordered according to the order in which the elements were processed (which, again, is the same as the order in which the elements were originally added to the batch). An entry in the array may have the following values:

  1. If the value of an array entry is greater than or equal to zero, then the batch element was processed successfully and the value is an update count indicating the number of rows in the database that were affected by the element's execution.
  2. A value of -2 indicates that an element was processed successfully, but that the number of affected rows is unknown.
Calling executeBatch() closes the calling Statement object's current result set if one is open. The statement's internal list of batch elements is reset to empty once executeBatch() returns. The behavior of the executeQuery, executeUpdate, or execute methods is implementation defined when a statement's batch is non-empty.

executeBatch() throws a BatchUpdateException if any of the elements in the batch fail to execute properly, or if an element attempts to return a result set. Only DDL and DML commands that return a simple update count may be executed as part of a batch. When a BatchUpdateException is thrown, the BatchUpdateException.getUpdateCounts() method can be called to obtain an integer array of update counts that describes the outcome of the batch execution.

Handling failures during execution

A JDBC driver may or may not continue processing the remaining elements in a batch once execution of an element in a batch fails. However, a JDBC driver must always provide the same behavior when used with a particular DBMS. For example, a driver cannot continue processing after a failure for one batch, and not continue processing for another batch.

If a driver stops processing after the first failure, the array returned by BatchUpdateException.getUpdateCounts() will always contain fewer entries than there were elements in the batch. Since elements are executed in the order that they are added to the batch, if the array contains N elements, this means that the first N elements in the batch were processed successfully when executeBatch() was called.

When a driver continues processing in the presence of failures, the number of elements, N, in the array returned by BatchUpdateException.getUpdateCounts() is always equal to the number of elements in the batch. The following additional array value is returned when a BatchUpdateException is thrown and the driver continues processing after a failure:

  1. A value of -3 indicates that the command or element failed to execute successfully. This value is also returned for elements that could not be processed for some reason; such elements fail implicitly.
JDBC drivers that do not continue processing after a failure never return -3 in an update count array. Drivers of this type simply return a status array containing an entry for each command that was processed successfully.

A JDBC technology based application can distinguish a JDBC driver that continues processing after a failure from one that does not by examining the size of the array returned by BatchUpdateException.getUpdateCounts(). A JDBC driver that continues processing always returns an array containing one entry for each element in the batch. A JDBC driver that does not continue processing after a failure will always return an array whose number of entries is less than the number of elements in the batch.

  PreparedStatements

An element in a batch consists of a parameterized command and an associated set of parameters when a PreparedStatement is used. The batch update facility is used with a PreparedStatement to associate multiple sets of input parameter values with a single PreparedStatement object. The sets of parameter values together with their associated parameterized update command can then be sent to the underlying DBMS engine for execution as a single unit.

The example below inserts two new employee records into a database as a single batch. The PreparedStatement.setXXX() methods are used to create each parameter set (one for each employee), while the PreparedStatement.addBatch() method adds a set of parameters to the current batch.

// turn off autocommit
con.setAutoCommit(false);

PreparedStatement stmt = con.prepareStatement(
"INSERT INTO employees VALUES (?, ?)");

stmt.setInt(1, 2000);
stmt.setString(2, "Kelly Kaufmann");
stmt.addBatch();

stmt.setInt(1, 3000);
stmt.setString(2, "Bill Barnes");
stmt.addBatch();

// submit the batch for execution
int[] updateCounts = stmt.executeBatch();


Finally, PreparedStatement.executeBatch() is called to submit the updates to the DBMS. Calling PreparedStatement.executeBatch() clears the statement's associated list of batch elements. The array returned by PreparedStatement.executeBatch() contains an element for each set of parameters in the batch, similar to the case for Statement. Each element either contains an update count or the generic 'success' indicator (-2).

Error handling in the case of PreparedStatement objects is the same as error handling in the case of Statement objects. Some drivers may stop processing as soon as an error occurs, while others may continue processing the rest of the batch. As for Statement, the number of elements in the array returned by BatchUpdateException.getUpdateCounts() indicates whether or not the driver continues processing after a failure. The same three array element values are possible, as for Statement. The order of the entries in the array is the same order as the order in which elements were added to the batch.

  Callable Statements

The batch update facility works the same with CallableStatement objects as it does with PreparedStatement objects. Multiple sets of input parameter values may be associated with a callable statement and sent to the DBMS together. Stored procedures invoked using the batch update facility with a callable statement must return an update count, and may not have out or inout parameters. The CallableStatement.executeBatch() method should throw an exception if this restriction is violated. Error handling is analogous to PreparedStatement.
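
As an illustration, assuming con is an open Connection as in the earlier examples and raise_salary is a hypothetical stored procedure that takes an employee id and an amount and returns only an update count:

// turn off autocommit
con.setAutoCommit(false);

CallableStatement cstmt = con.prepareCall("{call raise_salary(?, ?)}");

cstmt.setInt(1, 2000);
cstmt.setBigDecimal(2, new java.math.BigDecimal("500.00"));
cstmt.addBatch();

cstmt.setInt(1, 3000);
cstmt.setBigDecimal(2, new java.math.BigDecimal("750.00"));
cstmt.addBatch();

// submit the batch for execution
int[] updateCounts = cstmt.executeBatch();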


Difference between RequestDispatcher's forward(ServletRequest request, ServletResponse response) method and HttpServletResponse's sendRedirect(String location) method

The forward method of RequestDispatcher will forward the ServletRequest and ServletResponse that it is passed to the path that was specified in getRequestDispatcher(String path). The response will not be sent back to the client, so the client will not know about this change of resource on the server. This method is useful for communicating between server resources (servlet to servlet). Because the request and response are forwarded to another resource, all request parameters are maintained and available for use. Since the client does not know about this forward on the server, no history of it will be stored on the client, so using the back and forward buttons will not work. This method is faster than using sendRedirect, as no network round trip to the server and back is required.

An example using forward:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
  RequestDispatcher rd = request.getRequestDispatcher("pathToResource");
  rd.forward(request, response);
}


The sendRedirect(String path) method of HttpServletResponse will tell the client that it should send a request to the specified path. So the client will build a new request and submit it to the server, because a new request is being submitted all previous parameters stored in the request will be unavailable. The client's history will be updated so the forward and back buttons will work. This method is useful for redirecting to pages on other servers and domains.

An example using sendRedirect:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
  response.sendRedirect("pathToResource");
}