Tuesday, February 15, 2011

How to Minimize SQL Server Blocking

Blocking occurs when one connection to SQL Server locks one or more records, and a second connection requires a conflicting lock type on the records locked by the first connection. This causes the second connection to wait until the first connection releases its locks. By default, a connection will wait an unlimited amount of time for the blocking lock to go away. Blocking is not the same thing as a deadlock.

A certain amount of blocking is normal and unavoidable. Too much blocking can cause connections (representing applications and users) to wait extensive periods of time, hurting overall SQL Server performance. In the worst cases, blocking can escalate as more and more connections are waiting for locks to be released, creating extreme slowdowns. The goal should be to reduce blocking as much as possible.

Under the default READ COMMITTED isolation level, locks held by SELECT statements are released as soon as the data has been read, not held for the entire length of the transaction. On the other hand, locks held by INSERT, UPDATE, and DELETE statements are held until the entire transaction is complete. This is done to allow a transaction to be rolled back, if necessary.
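This difference can be seen in a small sketch (the table and column names here are invented for illustration):

BEGIN TRANSACTION
-- The UPDATE's locks are held until COMMIT, so other connections
-- that touch the same rows will block for the whole transaction.
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42
-- ...any other work done here prolongs the blocking...
COMMIT TRANSACTION  -- locks on the updated rows are released only now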

Some causes of excessive blocking, and ways to help avoid blocking, include:

  • Long-running queries. Anytime a query of any type, whether it is a SELECT, INSERT, UPDATE, or DELETE, takes more than a few seconds to complete, blocking is likely. The obvious solution to this is to keep transactions as short as possible. There are many tips on this web site on how to help reduce transaction time, but some of them include optimizing Transact-SQL code, optimizing indexes, breaking long transactions into multiple smaller transactions, and avoiding cursors.
  • Canceling queries, but not rolling them back. If your application's code allows a running query to be cancelled, it is important that the code also roll back the transaction. If this does not happen, locks held by the query will not be released, which means blocking can occur.
  • Distributed client/server deadlock. No, this is not your typical deadlock that is handled automatically by SQL Server, but a very special situation that is not automatically resolved by SQL Server.

    Here is what can happen. Let's say that an application opens two connections to SQL Server. The application then asynchronously starts a transaction and sends a query through the first connection to SQL Server, waiting for results. The application then starts a second transaction and sends a query through the second connection to SQL Server, waiting for results. At some point, one of the queries from one of the connections will begin to return results, and the application will then begin to process them.

    As the application processes the results, at some point the remainder of the results can become blocked by the query running on the other connection. In other words, the first query cannot complete because it is being blocked by the second query. So, in essence, the first connection is blocked and cannot continue until the second query completes. But the second query then tries to return its results, and because the application is blocked (on the first query), those results cannot be processed. This means the second query cannot complete, which means the block on the first query can never end, and a deadlock situation occurs. Neither connection will give up, so neither connection ever ends, and the deadlock never ends.

    SQL Server is unable to resolve this type of deadlock, so unless you want to write applications that hang forever, you can take these steps to prevent this unusual situation: 1) Add a query time-out for each of the queries, or 2) Add a lock time-out for each of the queries, or 3) Use a bound connection for the application.

In many ways, the best way to avoid blocking is to write well-tuned applications that follow the tuning advice found on this website.


By default, blocking locks do not time out. The waiting connection waits until the lock is released, and the block is over. If you like, you can set a lock time-out so that a connection does not wait indefinitely for the blocking lock to be released. This is accomplished using the LOCK_TIMEOUT setting.

When the LOCK_TIMEOUT setting is used to set a maximum amount of time that a connection can wait for a blocking lock to go away, the connection that holds the lock and is causing the blocking problem is not affected, but the waiting connection is halted and receives error message 1222, "Lock request time-out period exceeded."

This means that the application needs to include the appropriate error-handling code to deal with this situation and take the appropriate action, which includes rolling back the transaction. If the application does not know how to deal with this error message, and the transaction is not rolled back, it is possible that the application will continue as if the transaction had not been cancelled. Because of this, you should not use the LOCK_TIMEOUT setting unless the application(s) that will be affected by it know what to do when they receive this message from SQL Server.

The syntax for SET LOCK_TIMEOUT is:

SET LOCK_TIMEOUT timeout_period

Where timeout_period is the number of milliseconds that a connection waits for a blocking lock to go away before an error is returned from SQL Server to the application. A value of -1 is the default, which means to wait indefinitely. A value of 0 tells SQL Server not to wait at all, and to return the error immediately.

This setting applies on a per-connection basis, and stays with the connection until the connection is closed or a new SET LOCK_TIMEOUT command is issued.
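For example, to make the current connection give up after five seconds instead of waiting indefinitely (dbo.Orders is a hypothetical table):

SET LOCK_TIMEOUT 5000  -- wait at most 5,000 ms for any lock
-- Any statement that now waits longer than 5 seconds for a blocking
-- lock fails with error 1222, which the application must handle.
SELECT * FROM dbo.Orders
-- Restore the default behavior (wait indefinitely):
SET LOCK_TIMEOUT -1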


Avoid INSERTing, UPDATEing, or DELETEing large numbers of records in a single transaction. If you do, all the records affected by your action will be locked until the transaction is done. If you find that you need to perform mass data changes, it is better to batch them into smaller, shorter transactions to prevent unnecessary locking.
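For example, instead of deleting millions of rows in one statement, a common pattern (sketched here with hypothetical table and column names; DELETE TOP requires SQL Server 2005 or later) is to delete in small batches, so each transaction is short and locks are released between batches:

WHILE 1 = 1
BEGIN
    -- Each iteration is its own short transaction.
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LogDate < '20100101'

    IF @@ROWCOUNT = 0 BREAK  -- nothing left to delete
END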

In addition, changing the recovery model from full to simple will also reduce the log overhead incurred by long-running transactions. Once you are done with the long-running activity, you can switch back to the full recovery model.
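The switch itself is a one-line change (the database name and backup path are hypothetical). Remember that the simple recovery model breaks the log backup chain, so take a full backup after switching back:

ALTER DATABASE MyDB SET RECOVERY SIMPLE
-- ...perform the long-running bulk operation...
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak'  -- restart the backup chain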

Lock request time out period exceeded. Error: 1222

The First Solution
The first and easiest solution is to identify the locking (offending) process and then kill it. However, this will cause you to lose any in-flight work if the offending query was a DDL statement; you must re-run that SQL statement again.

To find the offending process, run sp_who2. All that is required is to type sp_who2 and execute it.

sp_who2 shows all the sessions that are currently established in the database.

The sp_who2 system procedure allows users to view current activity on the database. It provides a view into several system tables (e.g., syslocks, sysprocesses, etc.). sp_who2 returns the following information:

SPID—The system process ID.
Status—The status of the process (e.g., RUNNABLE, SLEEPING).
Login—Login name of the user.
HostName—Machine name of the user.
BlkBy—If the process is being blocked, this value is the SPID of the blocking process.
DBName—Name of the database the process is using.
Command—The command currently being executed (e.g., SELECT, INSERT).
CPUTime—Total CPU time the process has taken.
DiskIO—Total amount of disk reads for the process.
LastBatch—Last time a client called a procedure or executed a query.
ProgramName—Application that initiated the connection (e.g., Visual Basic, MS SQL Query Analyzer).

Every session (or process) has its own unique SPID (Server Process ID).
Look at the column BlkBy (Blocked By); this column tells you which process (SPID) is blocking the given process. The BlkBy column has a recursive relationship with SPID.

Processes that are being blocked by another process are listed in this column. Search for the process that is blocking other processes. For example, you might find SPID 65 blocking many other processes. After finding that process, the next step is to kill the offending process using the KILL command.

Simply write KILL <SPID>. For the above case write:

KILL 65

Run sp_who2 again and you will see that the offending process has been killed. This will remove your locking error.

Another approach is to execute this Select

SELECT DISTINCT object_name(l.rsc_objid), l.req_spid, p.loginame
FROM master.dbo.syslockinfo l (NOLOCK)
JOIN master.dbo.sysprocesses p (NOLOCK) ON l.req_spid = p.spid
WHERE object_name(l.rsc_objid) IS NOT NULL

This script will show which table was locked, the process that locked the table and the login name used by the process. Once you find out which process is locking a table, you can issue a "kill" on that SPID. The offending process will be causing a lock on the syscolumns, syscomments, and sysobjects tables.
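On SQL Server 2005 and later, the same information is also available from the dynamic management views, which avoids scanning the sp_who2 output by eye (a sketch):

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0  -- only sessions that are currently blocked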

The second or alternative solution

Sometimes, however, the requirement is that we cannot terminate anything. Even if we know which transaction is locking up resources and the database, we still need to run that same transaction.

Alternate fix/workaround/solution:
In this scenario the following changes should be made to the offending transaction:
1) Modify the transaction to use query hints (e.g., the RECOMPILE and MAXDOP hints).
2) Break the big transaction into smaller transactions.
3) Upgrade hardware if possible.
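As a sketch of point 1, query hints are added through an OPTION clause (the statement itself is hypothetical):

UPDATE dbo.BigTable
SET Flag = 1
WHERE BatchID = 7
OPTION (RECOMPILE, MAXDOP 1)  -- compile a fresh plan and limit parallelism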

Friday, February 11, 2011

Windows XP Multiuser Remote Desktop ( Terminal Server )

Remote Desktop Connection

An interesting feature of Windows XP is the ability to be remote controlled from a second PC: the so-called "Remote Desktop Connection" can be used over a dial-up connection or on a local Ethernet network.

XP (and Media Center Edition), unlike the Server versions of Windows, has a limit: a single PC can be controlled by a single "local" user (the "real" person in place), OR a single "remote" user. If someone logs into the computer remotely, the local user is disconnected. The following procedure deactivates this block and allows multiple people to connect to and use a single computer remotely.
Very useful, for example, if you have a very powerful PC and you want your wife/friend/brother to use an old computer as a "terminal" to run applications on the new one at the same time as you. Another application of the same technique: you're at work and you want to connect to your home PC, without blocking your wife who is using the same computer to check email ;)

UPDATE: it seems that XP is limited, even after this modification, to 3 concurrent users. So don't waste time trying to raise the maximum number of connections above three (see step 5) because, at this time, I don't think there's a way to use the same XP PC with more than 3 people at the same time (e.g. a local user and 2 remote users).

This procedure is a "hack": do it at your own risk:

Start your Windows in Safe Mode (tap F8 before the Windows loading splash screen appears);
click on "My Computer" with right mouse button and choose "Properties";
go to "Remote" tab and uncheck "Allow users to connect remotely to this computer" (if it's already unchecked, just do nothing);
click OK.

Go to Start -> Control Panel;
open "Administrative Tools" and then "Services";
double click "Terminal Services", in the list;
choose "Disabled" for "Startup Type" option;
click OK.

Go to C:\windows\system32\dllcache;
rename the termsrv.dll file to termsrv.original or another name you like;
copy into the folder this unrestricted old version of termsrv.dll;
go to C:\windows\system32 (the upper folder of the current one);
do the same operation: rename termsrv.dll here as well, and put another copy of the file I linked above.

Click Start, then "Run…", type "regedit" (without quotes) and press ENTER;
navigate in the Windows Registry Tree to reach this path:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core;
click with right mouse button on blank space in the right part of the registry window, choose "New" > DWORD, name the new key "EnableConcurrentSessions" (without quotes), then edit it and set its value to 1;
close the editor.
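The same change can be captured in a .reg file (using the key path above), which is handy if a service pack ever resets it:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core]
"EnableConcurrentSessions"=dword:00000001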

Click Start, then "Run…", type "gpedit.msc" (without quotes) and press ENTER;
open Computer Configuration > Administrative Templates > Windows Components > Terminal Services;
double click "Limit number of connections", choose "Enabled" and set the maximum number of concurrent connections you want to allow (2 or more), then Restart Windows in normal mode.

Go back to Remote tab of My Computer's properties (see step 1) and activate "Allow users to connect remotely to this computer";
Go back to "Terminal services" in "Services" (see step 2) and set its "Startup type" to "Manual"

Now restart Windows. Your operating system should be ready to accept multiple remote desktop connections ;)
Remember that you have to prepare a different Windows user for every "physical" user that wants to connect to your desktop, so they authenticate with separate logins/passwords. User accounts configuration is reachable in the Control Panel, and the list of users that can connect to the PC is editable in the Remote tab of My Computer.

Concurrent Remote Desktop Sessions in Windows XP SP2 ( Terminal Server )

I mentioned before that Windows XP does not allow concurrent sessions for its Remote Desktop feature. What this means is that if a user is logged on at the local console, a remote user has to kick him off (and ironically, this can be done even without his permission) before starting work on the box. This is irritating and removes much of the productivity that Remote Desktop brings to Windows. Read on to learn how to remove that limitation in Windows XP SP2.

A much-touted feature in SP2 (Service Pack 2), since removed, was the ability to do just this: have a user logged on locally while another connects to the terminal remotely. Microsoft, however, removed the feature in the final build. The reason probably is that the EULA (End User License Agreement) allows only a single user to use a computer at a time. This is (IMHO) a silly reason to curtail Remote Desktop's functionality, so we'll use a workaround.

Microsoft did try out the feature in earlier builds of Service Pack 2 and it is this that we're going to exploit here. We're going to replace termserv.dll (The Terminal Server) with one from an earlier build (2055).

To get Concurrent Sessions in Remote Desktop working, follow the steps below exactly:

  1. Download the termserv.zip file below and extract it somewhere. (You have to be registered to see the file)
  2. Reboot into Safe Mode. This is necessary to remove Windows File Protection.
  3. Copy the termserv.dll in the zip to %windir%\System32 and %windir%\ServicePackFiles\i386. If the second folder doesn't exist, don't copy it there. Delete termserv.dll from the dllcache folder: %windir%\system32\dllcache
  4. Merge the contents of Concurrent Sessions SP2.reg file into the registry.
  5. Make sure Fast User Switching is turned on. Go Control Panel -> User Accounts -> Change the way users log on or off and turn on Fast User Switching.
  6. Open up the Group Policy Editor: Start Menu > Run > 'gpedit.msc'. Navigate to Computer Configuration > Administrative Templates > Windows Components > Terminal Services. Enable 'Limit Number of Connections' and set the number of connections to 3 (or more). This enables you to have more than one person remotely logged on.
  7. Now reboot back into normal Windows and try out whether Concurrent Sessions in Remote Desktop works. It should!

If anything goes wrong, the termserv_sp2.dll is the original file you replaced. Just rename it to termserv.dll, reboot into safe mode and copy it back.

The termserv.dl_ file provided in the zip is for you slipstreamers out there. Just replace the corresponding file in the Windows installation disks with it.

termserv.zip (357.92 KB)

Tuesday, February 8, 2011

Copy table data in one table with data from another table

This blog post illustrates how to update more than one column in a table with values from columns in another table and explains how to do it in the three RDBMS that we support.

Table Structures and values:

TableA has four columns: a, b, c, d (a is the primary key column)
TableB has five columns: a1, b1, c1, d1, e1 (a1 and b1 together constitute the primary key for this table)

The foreign key relationship between the two tables is based on A.a = B.a1

The data in these 2 tables is as follows:
I.    TableA
a    b    c    d
1    x    y    z
2    a    b    c
3    t    x    z

II.    TableB
a1    b1    c1    d1    e1
1    x1    y1    z1    40
2    a1    b1    c1    50

The requirement is to write a SQL to update columns b, c and d in TableA from the columns b1, c1 and d1 from TableB where-ever the join condition satisfies and e1 > 40 in TABLEB.


Oracle:

UPDATE TABLEA
SET (b, c, d) = (SELECT b1, c1, d1 FROM TABLEB WHERE TABLEB.a1 = TABLEA.a AND TABLEB.e1 > 40)
WHERE EXISTS (SELECT 1 FROM TABLEB WHERE TABLEB.a1 = TABLEA.a AND TABLEB.e1 > 40)

Results after the update:

a    b    c    d
1    x    y    z
2    a1   b1   c1
3    t    x    z

SQL Server:

UPDATE TABLEA
SET b = TABLEB.b1,
    c = TABLEB.c1,
    d = TABLEB.d1
FROM TABLEA
INNER JOIN TABLEB ON TABLEA.a = TABLEB.a1
WHERE TABLEB.e1 > 40

Note: This is an extension in SQL Server (i.e., the FROM clause) – it makes the statement simple to understand and is a nice feature.

Results after the update:

a    b    c    d
1    x    y    z
2    a1   b1   c1
3    t    x    z


–Same as Oracle–

UPDATE TABLEA
SET (b, c, d) = (SELECT b1, c1, d1 FROM TABLEB WHERE TABLEB.a1 = TABLEA.a AND TABLEB.e1 > 40)
WHERE EXISTS (SELECT 1 FROM TABLEB WHERE TABLEB.a1 = TABLEA.a AND TABLEB.e1 > 40)

Results after the update:

a    b    c    d
1    x    y    z
2    a1   b1   c1
3    t    x    z


It is very important to make sure that the WHERE clause of your UPDATE statement is correct, since that is what identifies the records the UPDATE will qualify and act upon. If it is incorrect, you can get wrong results. I mention this because I have seen people write a wrong WHERE clause and then wonder what went wrong, because they specified the correct condition in the SET clause.

In the above example, if the WHERE condition were omitted, the other records' columns would be updated to NULL, and this would be the final result set:

a    b    c    d
1    Null Null Null
2    a1   b1   c1
3    Null Null Null

Thursday, February 3, 2011

Witness server- DATABASE MIRRORING SQL Server 2005

Witness server role:

In the database mirroring topology you may have an optional third server called the witness. A witness server is required for enabling automatic failover from the principal to the mirror server or vice versa. Unlike the principal and mirror servers, the witness server does not serve the database. The role of the witness is to verify whether a given partner server is up and functioning. Supporting automatic failover is the only function of the witness server. It uses quorum to identify which server holds the principal copy and which server holds the mirror copy of the database.

Database Mirroring Witness

To support automatic failover, a database mirroring session must be configured in high-safety mode and also possess a third server instance, known as the witness. The witness is an optional instance of SQL Server that enables the mirror server in a high-safety mode session to recognize whether to initiate an automatic failover. Unlike the two partners, the witness does not serve the database. Supporting automatic failover is the only role of the witness.
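As a sketch, once the partners are established, the witness is added from the principal server (the database name, server name, and port below are hypothetical):

ALTER DATABASE MyDB SET WITNESS = 'TCP://witness.example.com:5022'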

In high-performance mode, the witness can adversely affect availability. If a witness is configured for a database mirroring session, the principal server must be connected to at least one of the other server instances, the mirror server or the witness, or both of them. Otherwise, the database becomes unavailable and forcing service (with possible data loss) is impossible. Therefore, for high-performance mode, we strongly recommend that you always keep the witness set to OFF. For the impact of a witness on high-performance mode, see "Quorum: How a Witness Affects Database Availability," later in this topic.

The following illustration shows a high-safety mode session with a witness.

Mirroring session with a witness

A specific server instance can act as a witness in concurrent database mirroring sessions, each for a different database. Different sessions can be with different partners. The following illustration shows a server instance that is a witness in two database mirroring sessions with different partners.

Server instance that is a witness for 2 databases

A single-server instance can also function at the same time as a witness in some sessions and a partner in other sessions. However, in practice, a server instance typically functions as either a witness or a partner. This is because the partners require sophisticated computers that have enough hardware to support a production database, whereas the witness can run on any available Windows system that supports SQL Server 2005.

We strongly recommend that the witness reside on a separate computer from the partners. Unlike database mirroring partners, which are supported only by SQL Server 2005 Standard Edition and SQL Server 2005 Enterprise Edition, witnesses are also supported by SQL Server 2005 Workgroup Edition and SQL Server 2005 Express Edition. A witness can run on any reliable computer system that supports SQL Server 2005, but we recommend that every server instance that is used as a witness meet the minimum configuration that is required for SQL Server 2005 Standard Edition.

Throughout a database mirroring session, all the server instances monitor their connection status. If the partners become disconnected from each other, they rely on the witness to make sure that only one of them is currently serving the database. If a synchronized mirror server loses its connection to the principal server but remains connected to the witness, the mirror server contacts the witness to determine whether the witness has lost its connection to the principal server:

  • If the principal server is still connected to the witness, automatic failover does not occur. Instead, the principal server continues to serve the database while accumulating log records to send to the mirror server when the partners reconnect.
  • If the witness is also disconnected from the principal server, the mirror server knows that the principal database has become unavailable. In this case, the mirror server immediately initiates an automatic failover.
  • If the mirror server is disconnected from the witness and also from the principal server, automatic failover is not possible, regardless of the state of the principal server.

The requirement that at least two of the server instances be connected is known as quorum. Quorum makes sure that the database can only be served by one partner at a time. For how quorum works and its impact on a session, see "Quorum: How a Witness Affects Database Availability," later in this topic.

Asynchronous Database Mirroring (High-Performance Mode)

When transaction safety is set to OFF, the database mirroring session operates asynchronously. Asynchronous operation supports only one operating mode—high-performance mode. This mode enhances performance at the expense of high availability. High-performance mode uses just the principal server and the mirror server. Problems on the mirror server never impact the principal server. On the loss of the principal server, the mirror database is marked DISCONNECTED but is available as a warm standby.

High-performance mode supports only one form of role switching: forced service (with possible data loss), which uses the mirror server as a warm standby server. Forced service is one of the possible responses to the failure of the principal server. Because data loss is possible, you should consider other alternatives before forcing service to the mirror. For more information, see "Responding to Failure of the Principal," later in this topic.

The following figure shows the configuration of a session using high-performance mode.

Partner-only configuration of a session

In high-performance mode, as soon as the principal server sends the log for a transaction to the mirror server, the principal server sends a confirmation to the client, without waiting for an acknowledgement from the mirror server. Transactions commit without waiting for the mirror server to write the log to disk. Asynchronous operation permits the principal server to run with minimum transaction latency.

The mirror server attempts to keep up with the log records sent by the principal server. But the mirror database might lag somewhat behind the principal database, though typically the gap between the databases is small. However, the gap can become substantial if the principal server is under a heavy workload or the system of the mirror server is overloaded.

High-performance mode can be useful in a disaster-recovery scenario in which the principal and mirror servers are separated by a significant distance and where you do not want small errors to impact the principal server.

Log shipping can be a supplement to database mirroring and is a favorable alternative to asynchronous database mirroring. For information about the advantages of log shipping, see Configuring High Availability.

If you use Transact-SQL to configure high-performance mode, whenever the SAFETY property is set to OFF, we strongly recommend that the WITNESS property also be set to OFF. A witness can coexist with high-performance mode, but the witness provides no benefit and introduces risk.
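The two recommended settings correspond to these commands, run on the principal (the database name is hypothetical):

ALTER DATABASE MyDB SET PARTNER SAFETY OFF  -- high-performance (asynchronous) mode
ALTER DATABASE MyDB SET WITNESS OFF         -- no witness, so no quorum requirement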

If the witness is disconnected from the session when either partner goes down, the database becomes unavailable. This is because, even though high-performance mode does not require a witness, if one is set, the session requires a quorum consisting of two or more server instances. If the session loses quorum, it cannot serve the database.

When a witness is set in a high-performance mode session, the enforcement of quorum means that:

  • If the mirror server is lost, the principal server must be connected to the witness. Otherwise, the principal server takes its database offline until either the witness or mirror server rejoins the session.
  • If the principal server is lost, forcing service to the mirror server requires that the mirror server be connected to the witness.

The types of quorums are described in "Quorum: How a Witness Affects Database Availability," later in this topic.

When the principal fails, the database owner has several choices, as follows:

  • Leave the database unavailable until the principal becomes available again.
    If the principal database and its transaction log are intact, this choice preserves all of the committed transactions at the expense of availability.
  • Stop the database mirroring session, manually update the database, and then begin a new database mirroring session.
    If the principal database is lost but the principal server is still running, immediately attempt to back up the tail of the log on the principal database. If the tail-log backup succeeds, removing mirroring may be your best alternative. After removing mirroring, you can restore the log onto the former mirror database, which preserves all of the data.

    If the tail-log backup failed and you cannot wait for the principal server to recover, consider forcing service, which has the advantage of maintaining the session state.
  • Force service (with possible data loss) on the mirror server.
    Forced service is strictly a disaster recovery method and should be used sparingly. Forcing service is possible only if the principal server is down, the session is asynchronous (transaction safety is set to OFF), and either the session does not have any witness (the WITNESS property is set to OFF) or the witness is connected to the mirror server (that is, they have quorum).
    Forcing service causes the mirror server to assume the role of principal and serve its copy of the database to clients. When service is forced, whatever transaction logs the principal has not yet sent to the mirror server are lost. Therefore, you should limit forced service to situations where possible data loss is acceptable and immediate database availability is critical.
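The recovery actions described above correspond to these commands (the database name and backup path are hypothetical; run the backup on the failed principal if it is still up, and the forced service on the mirror):

-- On the principal: attempt to back up the tail of the log.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_tail.trn' WITH NO_TRUNCATE

-- On the mirror: force service, accepting possible data loss.
ALTER DATABASE MyDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS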
Quorum: How a Witness Affects Database Availability

Whenever a witness is set for a database mirroring session, quorum is required. Quorum is a relationship that exists when two or more server instances in a database mirroring session are connected to each other. Typically, quorum involves three interconnected server instances. When a witness is set, quorum is required to make the database available. Designed for high-safety mode with automatic failover, quorum makes sure that a database is owned by only one partner at a time.

If a particular server instance becomes disconnected from a mirroring session, that instance loses quorum. If no server instances are connected, the session loses quorum and the database becomes unavailable. Three types of quorum are possible:

  • A full quorum includes both partners and the witness.
  • A witness-to-partner quorum consists of the witness and either partner.
  • A partner-to-partner quorum consists of the two partners.

The following figure shows these types of quorum.

Quorums: full; witness and partner; both partners

As long as the current principal server has quorum, this server owns the role of principal and continues to serve the database, unless the database owner performs a manual failover. If the principal server loses quorum, it stops serving the database. Automatic failover can occur only if the principal database has lost quorum, which guarantees that it is no longer serving the database.

A disconnected server instance saves its most recent role in the session. Typically, a disconnected server instance reconnects to the session when it restarts and regains quorum.

The witness should be set only when you intend to use high-safety mode with automatic failover. In high-performance mode, for which a witness is never required, we strongly recommend setting the WITNESS property to OFF. The effect of a witness on database availability in a high-performance mode session is described earlier in this topic.

In high-safety mode, quorum allows automatic failover by providing a context in which the server instances with quorum arbitrate which partner owns the role of principal. The principal server serves the database if it has quorum. If the principal server loses quorum when the synchronized mirror server and witness retain quorum, automatic failover occurs.

The quorum scenarios for high-safety mode are as follows:

  • A full quorum that consists of both partners and the witness.
    Ordinarily, all three server instances participate in a three-way quorum, called a full quorum. With a full quorum, the principal and mirror servers continue to perform their respective roles (unless manual failover occurs).
  • A witness-to-partner quorum that consists of the witness and either partner.
    If the network connection between the partners is lost because one of the partners has been lost, two cases are possible:
    • The mirror server is lost, and the principal server and witness retain quorum.
      In this case, the principal sets its database to DISCONNECTED and runs with mirroring in a SUSPENDED state. (This is referred to as running exposed, because the database is currently not being mirrored.) When the mirror server rejoins the session, the server regains quorum as mirror and starts resynchronizing its copy of the database.
    • The principal server is lost, and the witness and the mirror server retain quorum.
      In this case, automatic failover occurs.
    Rarely, the network connection between failover partners is lost while both partners remain connected to the witness. In this event, two separate witness-to-partner quorums exist, with the witness as a liaison. The witness informs the mirror server that the principal server is still connected. Therefore, automatic failover does not occur. Instead, the mirror server retains the mirror role and waits to reconnect to the principal. If the redo queue contains log records at this point, the mirror server continues to roll forward the mirror database. On reconnecting, the mirror server will resynchronize the mirror database.
  • A partner-to-partner quorum that consists of the two partners.
    As long as the partners retain quorum, the database continues in a SYNCHRONIZED state, and manual failover remains possible. Without the witness, automatic failover is not possible; but when the witness regains quorum, the session resumes regular operation, and automatic failover is supported again.
  • The session loses quorum.
    If all the server instances become disconnected from each other, the session is said to have lost quorum. As server instances reconnect to each other, they regain quorum with each other.
    • If the principal server reconnects with either of the other server instances, the database becomes available.
    • If the principal server remains disconnected, but the mirror and witness reconnect to each other, automatic failover cannot occur because data loss might result. Therefore, the database remains unavailable until the principal server rejoins the session.
    • When all three server instances have reconnected, full quorum is regained, and the session resumes its regular operation.

When a session has a partner-to-partner quorum, if either partner loses quorum, the session loses quorum. Therefore, if you expect the witness to remain disconnected for a significant amount of time, we recommend that you temporarily remove the witness from the session. Removing the witness removes the requirement for quorum. Then, if the mirror server becomes disconnected, the principal server can continue to serve the database.

How Quorum Affects Database Availability

The following illustration shows how the witness and the partners cooperate to make sure that, at any given time, only one partner owns the role of principal and only the current principal server can bring its database online. Both scenarios start with full quorum, with Partner_A in the principal role and Partner_B in the mirror role.

How the witness and partners cooperate

Scenario 1 shows how, after the original principal server (Partner_A) fails, the witness and mirror agree that the principal, Partner_A, is no longer available and form quorum. The mirror, Partner_B, then assumes the principal role. Automatic failover occurs, and Partner_B brings its copy of the database online. Then Partner_B goes down, and the database goes offline. Later, the former principal server, Partner_A, reconnects to the witness, regaining quorum; but on communicating with the witness, Partner_A learns that it cannot bring its copy of the database online, because Partner_B now owns the principal role. When Partner_B rejoins the session, it brings the database back online.

In Scenario 2, the witness loses quorum, while the partners, Partner_A and Partner_B, retain quorum with each other, and the database remains online. Then the partners lose their quorum, too, and the database goes offline. Later, the principal server, Partner_A, reconnects to the witness, regaining quorum. The witness confirms that Partner_A still owns the principal role, and Partner_A brings the database back online.

Wednesday, February 2, 2011

The Remote Copy Of Database Has Not Been Rolled Forward To A Point In Time That Is Encompassed In The Local Copy

I set up DB mirroring between a primary (SQL1) and a mirror (SQL2); no witness. I have a problem when I issue the command:

alter database DBmirrorTest

Set Partner = N'TCP://SQL2.mycom.com:5022';

The error message is:

The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.

I have the steps below prior to the command. (Note that both servers' service accounts use the same domain account.  The domain account I login to do db mirror setup is a member of the local admin group.)

1. backup database DBmirrorTest on SQL1

2. backup the database log on SQL1

3. copy db and log backup files to SQL2

4. restore db with norecovery

5. restore log with norecovery

6. create endpoints on both SQL1 and SQL2

7. enable mirror on mirror server SQL2

:connect SQL2

alter database DBmirrorTest

Set Partner = N'TCP://SQL1.mycom.com:5022';


8. Enable mirror on primary server SQL1

:connect SQL1

alter database DBmirrorTest

Set Partner = N'TCP://SQL2.mycom.com:5022';


This is where I got the error. 

The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy
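This error generally means the mirror database has not been rolled forward far enough: any transaction log backup taken on the principal after the full backup must be restored on the mirror WITH NORECOVERY before the partnership is established. A sketch of the missing step, assuming hypothetical backup paths:

```sql
-- On SQL1 (principal): take a fresh log backup after the full backup
BACKUP LOG DBmirrorTest TO DISK = N'C:\Backup\DBmirrorTest_log.trn';

-- On SQL2 (mirror): apply it, leaving the database in the restoring state
RESTORE LOG DBmirrorTest
FROM DISK = N'C:\Backup\DBmirrorTest_log.trn'
WITH NORECOVERY;
```

After the mirror has been rolled forward to a point in time covered by the principal's log, the SET PARTNER command on the principal should succeed.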

SQL Server Replication 2

If your current SQL Server is overwhelmed with work and performance is suffering, one option is to move part of its workload onto one or more additional SQL Servers using replication.

Replication allows you to "scale out" your SQL Server by distributing the processing load over multiple servers. For example, you might want to consider moving your reporting function from the main (transaction) server to another server (a reporting server). This task is relatively easy to accomplish by implementing transactional replication.

Replication offers many opportunities to scale out your current SQL Server, helping to boost the overall performance of your SQL Server-based applications. [6.5, 7.0, 2000, 2005] 


If your plan is to use replication as part of your SQL Server-based application, and your goal is high scalability and performance, then this factor should be considered when the database is first designed. The fact that you will be using replication, and the type of replication you plan to use, can directly affect how your databases should be designed.

In addition, using replication requires a different hardware strategy, which means you also have to plan for this. [6.5, 7.0, 2000, 2005]


For optimum replication performance, consider these hardware suggestions for servers involved in replication:

  • Use multiple CPUs, including Dual- and Quad-Core CPUs.
  • Add additional RAM (besides what you need for using SQL Server without using replication).
  • If you need maximum performance for your transactional or merge replication, you should place the log of every database you want replicated on its own separate disk array. The transaction log is used heavily during transactional and merge replication, and the more you can isolate disk I/O for these files, the higher the performance.
  • Connect all SQL Servers involved in replication using high-speed network connections and switches.
  • Ideally, the distribution component of replication should be on its own dedicated server, not located on the server running the publishing or subscribing components of replication.

[6.5, 7.0, 2000, 2005]


Don't publish more data than you need, whether you are talking about tables, rows, or columns in a table. You can use vertical and/or horizontal filtering to prevent specific rows or columns of a table from being published.

If your tables include columns with IMAGE, NTEXT, or TEXT data types, you should seriously consider filtering out these columns (vertical filtering) because these data types can greatly slow down replication. [6.5, 7.0, 2000, 2005] Updated 8-3-2003


If close to real-time replication is not required, then don't use continuous replication. Instead, schedule replication to occur at regular intervals, such as once an hour, or every four hours. This reduces unnecessary server overhead. [6.5, 7.0, 2000, 2005]


When creating the distribution database and its log, don't take the defaults for the location of the database. Instead, manually specify that the log and database be placed on appropriate disk arrays.

For best performance, consider putting the distribution database on a RAID 1 or RAID 10 disk array (RAID 5 is not ideal because of the many writes produced by replication), and putting the database log file on its own dedicated RAID 1 disk array. [6.5, 7.0, 2000, 2005]


If the distribution and publishing components of replication have to be on the same server (because of resource considerations), and this server is reaching its performance capacity, then use Pull Subscriptions instead of Push Subscriptions. This will offload some of the work to the subscribing servers. Of course, this assumes the subscribing servers have the resources to do this. [7.0, 2000, 2005] 


When creating a publication, SQL Server gives you the ability to specify if one or more of the subscribers will be Microsoft Jet databases. If you need this feature, then turn it on. But if you will not have any Microsoft Jet subscribers, then don't select this option. What happens when you select this option is that the Snapshot Agent will use BCP character format, not the faster BCP native format, when running. This also means that any SQL Server subscribers will have to do extra work translating the BCP character format when they receive the publication, which causes a slight performance hit. [7.0, 2000]


For best performance, avoid replicating columns in your publications that include TEXT, NTEXT or IMAGE data types. These data types require more overhead than standard data types. [7.0, 2000, 2005] 


If you are filtering the data on the publisher before it gets to any of the subscribers, be sure that the column or columns you are filtering on have an appropriate index. If not, then SQL Server will perform a table scan to filter the data, putting an increased load on the publisher and reducing performance. [6.5, 7.0, 2000, 2005] Updated 9-17-2004

If the replication load is very heavy between the publisher and the distributor, or between the distributor and the subscriber(s), and each of these is on its own dedicated SQL Server connected by a fast LAN connection, and the latency among the servers is longer than you prefer, consider this option. Instead of using public network connections between the servers, use a dedicated private network. This can be accomplished by adding an additional NIC to each server and connecting them using a private hub, switch, or crossover cable.

This technique will boost performance because data replication no longer has to compete with all of the data already traveling on your public network. To ensure the highest performance, use 100 Mbps or 1 Gbps NICs and matching connection hardware and cables. [6.5, 7.0, 2000, 2005]


When you first click on the Replication Monitor group in Enterprise Manager, after having installed Replication on your server, you see a message asking you if you want to enable polling for Replication Monitor.

What this means is that the replication agent status information, found in the Replication Monitor group in Enterprise Manager, will automatically be refreshed periodically if you answer Yes, or it won't be automatically refreshed if you answer No.

If you answer Yes, then you will see another screen that allows you to specify how long the refresh interval will be.

As you might imagine, if you answer Yes, SQL Server will have to incur some additional overhead to automatically refresh the status information. For the most part, the default 10-second refresh interval won't produce a significant amount of overhead, and it is a good place to start. If you enter a refresh interval of less than 10 seconds, such as 1 or 2 seconds, you will probably notice the overhead and be unhappy with the result. If you are currently having performance problems, you may want to increase the default 10 seconds to a larger number, such as 30 seconds or so.

Once you set the refresh interval, you can always change it by right-clicking on Replication Monitor in Enterprise Manager and selecting "Refresh Rate and Settings." [7.0, 2000] 


If the subscriber data is being indexed heavily for the decision support needs of the organization, keep in mind that those same indexes can slow down how long it takes data to be moved from the distributor to the subscriber. The more indexes there are, the slower the replication process. This is because any new data moved to the subscriber must be indexed. In some cases, it might be beneficial to create an indexed view on the publisher, and then publish it as a table to subscribers. [2000, 2005] 
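A minimal sketch of the indexed-view approach (table and column names are hypothetical, and the summed column is assumed NOT NULL, which indexed views require): the view pre-aggregates the data on the publisher, so subscribers receive already-summarized rows instead of indexing raw detail rows themselves.

```sql
-- Hypothetical indexed view that pre-aggregates data on the publisher
CREATE VIEW dbo.vOrderSummary
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*) AS OrderCount,    -- COUNT_BIG is required in an indexed view with GROUP BY
       SUM(Quantity) AS TotalQuantity
FROM dbo.OrderDetails
GROUP BY ProductID;
GO

-- The unique clustered index materializes the view so it can be published as a table
CREATE UNIQUE CLUSTERED INDEX IX_vOrderSummary ON dbo.vOrderSummary (ProductID);
```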


The distribution agent, the log reader agent, the merge agent, and the snapshot agent all log their activity as they work. The amount of data they log depends on their verbosity setting. The more verbose the setting, the more overhead there is. The less verbose the setting, the less overhead there is. According to Microsoft, setting all of these settings to their lowest setting can boost replication performance from 10 to 15 percent. In most cases, there are two verbosity settings that need to be made:

HistoryVerboseLevel and OutputVerboseLevel.

These settings are made using the various agent utility programs, which are located in the \Binn folder. Here are the commands used to set the verbosity levels of each of these agents to their least verbose setting:

Distribution Agent

distrib -HistoryVerboseLevel 1
distrib -OutputVerboseLevel 0

Log Reader Agent

logread -HistoryVerboseLevel 1
logread -OutputVerboseLevel 0

Merge Agent

replmerg -HistoryVerboseLevel 1
replmerg -OutputVerboseLevel 0

Snapshot Agent

snapshot -HistoryVerboseLevel 1
snapshot -OutputVerboseLevel 0

The default value for the HistoryVerboseLevel is 2 (not 1), and the default value for the OutputVerboseLevel is 2 (not 0).

Of course, if you are troubleshooting replication, you will not want to reduce these values. But if you are in production, and all is going well, you should consider changing the relevant default values in order to boost performance. [7.0, 2000] 

If you need replication agents to run frequently, say every minute or so, it is more efficient to set them to run continuously than to schedule them to run frequently. This is because the agents incur overhead each time they start and stop. So if you have set the replication agents to run often, consider setting them to run continuously instead. On the other hand, if you only need the agents to run occasionally, once an hour or less often, then scheduled execution is more efficient than running them continuously. You may have to perform tests to see which option best meets your specific replication needs. [7.0, 2000, 2005]


To optimize the performance of the distribution database, manually size it to what you believe will be the largest size it will ever reach. This can reduce overhead, as SQL Server will not have to increase its size automatically as needed. In addition, be sure not to turn on the "Auto Shrink" and "Auto Close" database options, as they add unnecessary overhead to the database. [7.0, 2000, 2005]


If your SQL Server 2000 replication configuration will cause a large number of rows to be affected, consider using stored procedure replication. Significant overhead can be reduced if the execution of stored procedures is replicated instead of the data changes caused by the execution of one or more stored procedures. This is generally accomplished in snapshot or transactional replication when one or multiple stored procedures are specified as articles. [2000, 2005]


To help reduce overhead and the size of the distribution database, consider reducing the amount of time that log history and transaction retention is held. To do this, right-click on the "Replication" folder, then select "Configure Publishing, Subscribers, and Distributors." Next, click on the "Properties" button for the distribution database to make your change. [2000]


Avoid creating triggers on tables that contain subscribed data. They can significantly add to overhead and slow replication performance, including snapshot replication, transactional replication, and merge replication configurations. [7.0, 2000, 2005] 


To help you identify any potential replication performance issues, consider tracking these Performance Monitor counters:

  • Dist: Delivered Cmds/sec: Tracks the number of commands per second sent to subscribers from the distributor.
  • Dist: Delivered Trans/sec: Tracks the number of transactions per second sent to subscribers from the distributor.
  • Dist: Delivery Latency: The average amount of time it takes for a transaction to be moved from the distributor to a subscriber.
The first two counters help you to get a feel for how busy your replication setup is. There are no absolute numbers to watch out for. Instead, you need to watch these counters over time to see if there is any significant change, and if so, you need to investigate why, as this could indicate a potential performance issue. The Delivery Latency gives you a good feel for how much time it takes subscribers to get updated. Again, there is no ideal number for this counter. Instead, evaluate whether or not the latency you are seeing is livable. If it is not, then you will have to take steps to reduce latency to an acceptable figure. [6.5, 7.0, 2000, 2005]

SQL Server Replication

SQL Server replication allows database administrators to distribute data to various servers throughout an organization. You may wish to implement replication in your organization for a number of reasons, such as:
  • Load balancing. Replication allows you to disseminate your data to a number of servers and then distribute the query load among those servers.

  • Offline processing. You may wish to manipulate data from your database on a machine that is not always connected to the network.

  • Redundancy. Replication allows you to build a fail-over database server that's ready to pick up the processing load at a moment's notice.

In any replication scenario, there are two main components:
  • Publishers have data to offer to other servers. Any given replication scheme may have one or more publishers.

  • Subscribers are database servers that wish to receive updates from the Publisher when data is modified.

There's nothing preventing a single system from acting in both of these capacities. In fact, this is often done in large-scale distributed database systems. Microsoft SQL Server supports three types of database replication. This article provides a brief introduction to each of these models, while future articles will explore them in further detail. They are:
  • Snapshot replication acts in the manner its name implies. The publisher simply takes a snapshot of the entire replicated database and shares it with the subscribers. Of course, this is a very time and resource-intensive process. For this reason, most administrators don't use snapshot replication on a recurring basis for databases that change frequently. There are two scenarios where snapshot replication is commonly used. First, it is used for databases that rarely change. Second, it is used to set a baseline to establish replication between systems while future updates are propagated using transactional or merge replication.

  • Transactional replication offers a more flexible solution for databases that change on a regular basis. With transactional replication, the replication agent monitors the publisher for changes to the database and transmits those changes to the subscribers. This transmission can take place immediately or on a periodic basis.

  • Merge replication allows the publisher and subscriber to independently make changes to the database. Both entities can work without an active network connection. When they are reconnected, the merge replication agent checks for changes on both sets of data and modifies each database accordingly. If changes conflict with each other, it uses a predefined conflict resolution algorithm to determine the appropriate data. Merge replication is commonly used by laptop users and others who can not be constantly connected to the publisher.

Each one of these replication techniques serves a useful purpose and is well-suited to particular database scenarios.

If you're working with SQL Server 2005, you'll need to choose your edition based upon your replication needs. Each edition has differing capabilities:
  • Express Edition has extremely limited replication capabilities. It's able to act as a replication client only.
  • Workgroup Edition adds limited publishing capabilities. It's able to serve five clients using transactional replication and up to 25 clients using merge replication. It can also act as a replication client.
  • Standard Edition has full, unlimited replication capabilities with other SQL Server databases.
  • Enterprise Edition adds a powerful tool for those operating in mixed database environments: it's capable of replication with Oracle databases.

As you've undoubtedly recognized by this point, SQL Server's replication capabilities offer database administrators a powerful tool for managing and scaling databases in an enterprise environment.



Question: How do I convert a string to URL-encoding?

Answer: You can convert a string to the URL-encoded form (suitable for transmission as a query string or, generally speaking, as part of a URL) using the JavaScript functions escape, encodeURI and encodeURIComponent. Below is a detailed discussion of these functions.

escape   In all browsers that support JavaScript, you can use the escape function. This function works as follows: digits, Latin letters and the characters + - * / . _ @ remain unchanged; all other characters in the original string are replaced by escape-sequences %XX, where XX is the hexadecimal code of the original character. Example:

escape("It's me!") // result: It%27s%20me%21

For Unicode input strings, the function escape has a more complex behavior. If the input is a Unicode string, then non-ASCII Unicode characters will be converted to the Unicode escape-sequences %uXXXX. For example, escape will encode the capital Cyrillic letter A as %u0410.

encodeURI and encodeURIComponent   In addition to escape, modern browsers support two more functions for URL-encoding: encodeURI and encodeURIComponent. These functions are similar to escape, except that they leave intact some characters that escape encodes (e.g. apostrophe, tilde, parentheses); moreover, encodeURIComponent encodes some characters (+ / @) that escape leaves intact. Unlike escape, the functions encodeURI and encodeURIComponent do not produce %uXXXX for Unicode input; instead, they produce %XX%XX. For example, encodeURI and encodeURIComponent will encode the capital Cyrillic letter A as %D0%90.
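The contrast is easy to see in a console (a quick sketch; escape is deprecated but still available in browsers and Node.js):

```javascript
var s = "price=5+5 \u20AC"; // contains "=", "+", a space, and the euro sign

console.log(escape(s));             // "price%3D5+5%20%u20AC"     (+ left intact, %uXXXX for Unicode)
console.log(encodeURI(s));          // "price=5+5%20%E2%82%AC"    (= and + left intact, UTF-8 %XX%XX)
console.log(encodeURIComponent(s)); // "price%3D5%2B5%20%E2%82%AC" (all reserved characters encoded)
```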

The following tables illustrate the differences between escape, encodeURI, and encodeURIComponent for

  • lower ASCII characters (codes 1 through 127),
  • upper ASCII characters (codes 128 through 255), and
  • Unicode characters.

  • Differences between encodeURI, encodeURIComponent, and escape:
    lower ASCII characters (codes 1-127)

    chr     escape(chr)   encodeURI(chr)  encodeURIComponent(chr)
    _ _ _ _
    - - - -
    . . . .
    * * * *
    + + + %2B
    / / / %2F
    @ @ @ %40
    ~ %7E ~ ~
    ! %21 ! !
    ' %27 ' '
    ( %28 ( (
    ) %29 ) )
    # %23 # %23
    $ %24 $ %24
    & %26 & %26
    , %2C , %2C
    : %3A : %3A
    ; %3B ; %3B
    = %3D = %3D
    ? %3F ? %3F

    all other lower-ASCII characters produce identical results:

    space %20 %20 %20
    " %22 %22 %22
    % %25 %25 %25
    < %3C %3C %3C
    > %3E %3E %3E
    [ %5B %5B %5B
    \ %5C %5C %5C
    ] %5D %5D %5D
    ^ %5E %5E %5E
    { %7B %7B %7B
    | %7C %7C %7C
    } %7D %7D %7D
    ... ... ... ...


    Differences between encodeURI, encodeURIComponent, and escape:
    upper ASCII characters (codes 128-255)

    As shown in the table below, encodeURI, encodeURIComponent, and escape produce different results for upper ASCII characters. For example, the non-breaking space character (ASCII-code 0xA0, or 160) will be encoded as %A0 if you use escape, and %C2%A0 if you use encodeURI or encodeURIComponent. The small ü or u-umlaut letter (ASCII-code 0xFC, or 252) will be encoded as %FC if you use escape, and %C3%BC if you use encodeURI or encodeURIComponent.
    chr    escape(chr)   encodeURI(chr)  encodeURIComponent(chr)
      %A0 %C2%A0 %C2%A0
    ¡ %A1 %C2%A1 %C2%A1
    ¢ %A2 %C2%A2 %C2%A2
    £ %A3 %C2%A3 %C2%A3
    ¤ %A4 %C2%A4 %C2%A4
    ¥ %A5 %C2%A5 %C2%A5
    ¦ %A6 %C2%A6 %C2%A6
    § %A7 %C2%A7 %C2%A7
    ¨ %A8 %C2%A8 %C2%A8
    © %A9 %C2%A9 %C2%A9
    ª %AA %C2%AA %C2%AA
    « %AB %C2%AB %C2%AB
    ¬ %AC %C2%AC %C2%AC
    ­ %AD %C2%AD %C2%AD
    ® %AE %C2%AE %C2%AE
    ¯ %AF %C2%AF %C2%AF
    ° %B0 %C2%B0 %C2%B0
    ± %B1 %C2%B1 %C2%B1
    ² %B2 %C2%B2 %C2%B2
    ³ %B3 %C2%B3 %C2%B3
    ´ %B4 %C2%B4 %C2%B4
    µ %B5 %C2%B5 %C2%B5
    ¶ %B6 %C2%B6 %C2%B6
    · %B7 %C2%B7 %C2%B7
    ¸ %B8 %C2%B8 %C2%B8
    ¹ %B9 %C2%B9 %C2%B9
    º %BA %C2%BA %C2%BA
    » %BB %C2%BB %C2%BB
    ¼ %BC %C2%BC %C2%BC
    ½ %BD %C2%BD %C2%BD
    ¾ %BE %C2%BE %C2%BE
    ¿ %BF %C2%BF %C2%BF
    À %C0 %C3%80 %C3%80
    Á %C1 %C3%81 %C3%81
    Â %C2 %C3%82 %C3%82
    Ã %C3 %C3%83 %C3%83
    Ä %C4 %C3%84 %C3%84
    Å %C5 %C3%85 %C3%85
    Æ %C6 %C3%86 %C3%86
    Ç %C7 %C3%87 %C3%87
    È %C8 %C3%88 %C3%88
    É %C9 %C3%89 %C3%89
    Ê %CA %C3%8A %C3%8A
    Ë %CB %C3%8B %C3%8B
    Ì %CC %C3%8C %C3%8C
    Í %CD %C3%8D %C3%8D
    Î %CE %C3%8E %C3%8E
    Ï %CF %C3%8F %C3%8F
    Ð %D0 %C3%90 %C3%90
    Ñ %D1 %C3%91 %C3%91
    Ò %D2 %C3%92 %C3%92
    Ó %D3 %C3%93 %C3%93
    Ô %D4 %C3%94 %C3%94
    Õ %D5 %C3%95 %C3%95
    Ö %D6 %C3%96 %C3%96
    × %D7 %C3%97 %C3%97
    Ø %D8 %C3%98 %C3%98
    Ù %D9 %C3%99 %C3%99
    Ú %DA %C3%9A %C3%9A
    Û %DB %C3%9B %C3%9B
    Ü %DC %C3%9C %C3%9C
    Ý %DD %C3%9D %C3%9D
    Þ %DE %C3%9E %C3%9E
    ß %DF %C3%9F %C3%9F
    à %E0 %C3%A0 %C3%A0
    á %E1 %C3%A1 %C3%A1
    â %E2 %C3%A2 %C3%A2
    ã %E3 %C3%A3 %C3%A3
    ä %E4 %C3%A4 %C3%A4
    å %E5 %C3%A5 %C3%A5
    æ %E6 %C3%A6 %C3%A6
    ç %E7 %C3%A7 %C3%A7
    è %E8 %C3%A8 %C3%A8
    é %E9 %C3%A9 %C3%A9
    ê %EA %C3%AA %C3%AA
    ë %EB %C3%AB %C3%AB
    ì %EC %C3%AC %C3%AC
    í %ED %C3%AD %C3%AD
    î %EE %C3%AE %C3%AE
    ï %EF %C3%AF %C3%AF
    ð %F0 %C3%B0 %C3%B0
    ñ %F1 %C3%B1 %C3%B1
    ò %F2 %C3%B2 %C3%B2
    ó %F3 %C3%B3 %C3%B3
    ô %F4 %C3%B4 %C3%B4
    õ %F5 %C3%B5 %C3%B5
    ö %F6 %C3%B6 %C3%B6
    ÷ %F7 %C3%B7 %C3%B7
    ø %F8 %C3%B8 %C3%B8
    ù %F9 %C3%B9 %C3%B9
    ú %FA %C3%BA %C3%BA
    û %FB %C3%BB %C3%BB
    ü %FC %C3%BC %C3%BC
    ý %FD %C3%BD %C3%BD
    þ %FE %C3%BE %C3%BE
    ÿ %FF %C3%BF %C3%BF


    Differences between encodeURI, encodeURIComponent, and escape:
    Unicode (non-ASCII) characters

    As a simple example of Unicode (non-ASCII) characters, the table below shows the URL encodings for part of the Cyrillic subset of the Unicode character set (\u0410 thru \u042F). Note that the same Unicode character may produce the encoding %uXXXX if you use escape and the encoding %XX%XX if you use encodeURI or encodeURIComponent. Importantly, the functions escape, encodeURI, and encodeURIComponent work like that for other parts of the Unicode character set as well: only the escape function may return the encoding of the form %uXXXX.
    chr    escape(chr)   encodeURI(chr)  encodeURIComponent(chr)
    А %u0410 %D0%90 %D0%90
    Б %u0411 %D0%91 %D0%91
    В %u0412 %D0%92 %D0%92
    Г %u0413 %D0%93 %D0%93
    Д %u0414 %D0%94 %D0%94
    Е %u0415 %D0%95 %D0%95
    Ж %u0416 %D0%96 %D0%96
    З %u0417 %D0%97 %D0%97
    И %u0418 %D0%98 %D0%98
    Й %u0419 %D0%99 %D0%99
    К %u041A %D0%9A %D0%9A
    Л %u041B %D0%9B %D0%9B
    М %u041C %D0%9C %D0%9C
    Н %u041D %D0%9D %D0%9D
    О %u041E %D0%9E %D0%9E
    П %u041F %D0%9F %D0%9F
    Р %u0420 %D0%A0 %D0%A0
    С %u0421 %D0%A1 %D0%A1
    Т %u0422 %D0%A2 %D0%A2
    У %u0423 %D0%A3 %D0%A3
    Ф %u0424 %D0%A4 %D0%A4
    Х %u0425 %D0%A5 %D0%A5
    Ц %u0426 %D0%A6 %D0%A6
    Ч %u0427 %D0%A7 %D0%A7
    Ш %u0428 %D0%A8 %D0%A8
    Щ %u0429 %D0%A9 %D0%A9
    Ъ %u042A %D0%AA %D0%AA
    Ы %u042B %D0%AB %D0%AB
    Ь %u042C %D0%AC %D0%AC
    Э %u042D %D0%AD %D0%AD
    Ю %u042E %D0%AE %D0%AE
    Я %u042F %D0%AF %D0%AF

    Encode a query string in javascript

    encodeURIComponent will work. (You may or may not want the leading '?', depending on what the script is expecting.)

    var c = 'd e';
    var query = '?a=b&c=' + encodeURIComponent(c);
    var uri = 'http://www.example.com/script?query=' + encodeURIComponent(query);
    window.location = uri;

    Takes me to:


    When you hover over that it may appear once-decoded in the browser's status bar, but you will end up in the right place.

    escape/unescape() is the wrong thing for encoding query parameters; it gets Unicode characters and pluses wrong. There is almost never a case where escape() is what you really need.
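A small sketch of the plus-sign problem: escape leaves + alone, and servers conventionally decode + in a query string as a space, so the original character is lost. encodeURIComponent avoids the ambiguity:

```javascript
var value = "1+1";

console.log(escape(value));             // "1+1"   - a server will decode this as "1 1"
console.log(encodeURIComponent(value)); // "1%2B1" - decodes back to "1+1" as intended
```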

    How to get international unicode characters from a form input field/servlet parameter into a string


    [ I have a servlet-based app that generates and processes HTML forms. I would like to support multiple languages/character sets that will be stored as Unicode (UTF-8) in the database. I am setting the following tags, for example:

    <meta content="text/html; charset=Shift_JIS" http-equiv="Content-Type"> and
    <form accept-charset="Shift_JIS"...

    I am also setting the locale and content type on the HttpServletResponse to "ja" and "text/html; charset=Shift_JIS" respectively.

    Unfortunately, the CharacterEncoding on the HttpServletRequest from the post is always null even though it's set properly in the browser.

    Does anyone have a sample on how to get international unicode characters from a form input field/servlet parameter into a string, and from a Java string into a form input or text field? ]


    If request.getCharacterEncoding() is null, the parameters are parsed using the default encoding, ISO-8859-1.

    So if you want to get a Unicode (UTF-8) string, you have to do something like this:

    String myparam = request.getParameter("myparamname");
    if (myparam != null) {
        myparam = new String(myparam.getBytes("ISO-8859-1"), "UTF-8");
    }
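The re-decoding trick can be verified outside a servlet container with plain strings (a self-contained sketch; the class and method names are made up for illustration):

```java
import java.io.UnsupportedEncodingException;

public class EncodingFixDemo {

    // Re-interpret a string that was decoded as ISO-8859-1 but really holds UTF-8 bytes
    static String fix(String misdecoded) throws UnsupportedEncodingException {
        return new String(misdecoded.getBytes("ISO-8859-1"), "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        String original = "héllo";
        // Simulate what the servlet sees: UTF-8 bytes decoded with the ISO-8859-1 default
        String asReceived = new String(original.getBytes("UTF-8"), "ISO-8859-1");
        System.out.println(fix(asReceived).equals(original)); // prints "true"
    }
}
```

Note that where the container supports it, calling request.setCharacterEncoding("UTF-8") before the first getParameter call is the cleaner fix.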

    Tuesday, February 1, 2011

    SQL SERVER – Rules for Optimizing Any Query

    This is a very deep subject, but today we will go through it quickly, covering the most important points. Following even a few of these points may help users improve query performance right away. In this article I am not focusing on in-depth analysis of the database, but on simple tricks a DBA can apply to gain an immediate performance improvement.

    • The table should have a primary key
    • The table should have a minimum of one clustered index
    • The table should have an appropriate number of non-clustered indexes
    • Non-clustered indexes should be created on columns of the table based on the queries that run against it
    • The following priority order should be followed when any index is created: a) WHERE clause, b) JOIN clause, c) ORDER BY clause, d) SELECT clause
    • Do not use views, or replace views with the original source tables
    • Triggers should not be used if possible; incorporate the logic of the trigger into a stored procedure
    • Remove any ad hoc queries and use stored procedures instead
    • Check that at least 30% of the hard disk is empty – it improves the performance a bit
    • If possible, move the logic of UDFs into stored procedures as well
    • Remove * from SELECT and use only the columns that are necessary in the code
    • Remove any unnecessary joins from the query
    • If a cursor is used in the query, see if there is any other way to avoid it (either by SELECT … INTO or INSERT … INTO, etc.)
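As a sketch of the last point, a row-by-row cursor can often be replaced with a single set-based statement (table and column names here are hypothetical):

```sql
-- Instead of opening a cursor over dbo.Orders and inserting rows one at a time,
-- archive all matching rows in one set-based statement
INSERT INTO dbo.OrderArchive (OrderID, CustomerID, OrderTotal)
SELECT OrderID, CustomerID, OrderTotal
FROM dbo.Orders
WHERE OrderDate < '20100101';
```

The set-based form lets the optimizer choose a single efficient plan instead of paying the per-row overhead of a cursor loop.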

    There are a few hardware upgrades that can be considered as well, such as placing indexes on a separate disk drive or moving tempdb to another drive. However, I am not suggesting them here, as they are not a quick way to improve the performance of a query.