Friday, March 18, 2011

Table Variables In SQL Server



Microsoft introduced table variables with SQL Server 2000 as an alternative to using temporary tables. In many cases a table variable can outperform a solution using a temporary table, so in this article we will review the strengths and weaknesses of each.

Table variables store a set of records, so naturally the declaration syntax looks very similar to a CREATE TABLE statement, as you can see in the following example:

DECLARE @ProductTotals TABLE

(

  ProductID int,

  Revenue money

)

While connected to the Northwind database, we could write the following SELECT statement to populate the table variable.

INSERT INTO @ProductTotals (ProductID, Revenue)

  SELECT ProductID, SUM(UnitPrice * Quantity)

    FROM [Order Details]

    GROUP BY ProductID

You can use table variables in batches, stored procedures, and user-defined functions (UDFs). We can UPDATE records in our table variable as well as DELETE records.

UPDATE @ProductTotals

  SET Revenue = Revenue * 1.15

WHERE ProductID = 62

 

DELETE FROM @ProductTotals

WHERE ProductID = 60

 

 

SELECT TOP 5 *

FROM @ProductTotals

ORDER BY Revenue DESC

You might think table variables work just like temporary tables (CREATE TABLE #ProductTotals), but there are some differences.

Scope

Unlike the majority of the other data types in SQL Server, you cannot use a table variable as an input or an output parameter. In fact, a table variable is scoped to the stored procedure, batch, or user-defined function just like any local variable you create with a DECLARE statement. The variable will no longer exist after the procedure exits - there will be no table to clean up with a DROP statement.
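
For example, a table variable declared in one batch is out of scope in the next batch. A minimal sketch (GO ends the batch):

DECLARE @T TABLE (ID int)
INSERT INTO @T VALUES (1)
GO

-- The next batch fails with "Must declare the variable '@T'."
SELECT * FROM @T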

Although you cannot use a table variable as an input or output parameter, you can return a table variable from a user-defined function – we will see an example later in this article. However, because you can't pass a table variable to another stored procedure as input, there are still scenarios where you'll be required to use a temporary table – for example, when nested stored procedures need to share a set of results.

The restricted scope of a table variable gives SQL Server some liberty to perform optimizations.

Performance

Because of the well-defined scope, a table variable will generally use fewer resources than a temporary table. Transactions touching table variables only last for the duration of the update on the table variable, so there is less locking and logging overhead.

Using a temporary table inside of a stored procedure may result in additional re-compilations of the stored procedure. Table variables can often avoid this recompilation hit. For more information on why stored procedures may recompile, look at Microsoft knowledge base article 243586 (INF: Troubleshooting Stored Procedure Recompilation).

Other Features

Constraints are an excellent way to ensure the data in a table meets specific requirements, and you can use constraints with table variables. The following example ensures ProductID values in the table will be unique, and all prices are less than 10.0.

DECLARE @MyTable TABLE

(

  ProductID int UNIQUE,

  Price money CHECK(Price < 10.0)

)

You can also declare primary keys, identity columns, and default values.

DECLARE @MyTable TABLE

(

  ProductID int IDENTITY(1,1) PRIMARY KEY,

  Name varchar(10) NOT NULL DEFAULT('Unknown')

)

So far it seems that table variables can do anything temporary tables can do (within the scope of a stored procedure, batch, or UDF), but there are some drawbacks.

Restrictions

You cannot create a non-clustered index on a table variable, unless the index is a side effect of a PRIMARY KEY or UNIQUE constraint on the table (SQL Server enforces any UNIQUE or PRIMARY KEY constraints using an index).

Also, SQL Server does not maintain statistics on a table variable, and statistics are used heavily by the query optimizer to determine the best method to execute a query. Neither of these restrictions should be a problem, however, as table variables generally exist for a specific purpose and aren't used for a wide range of ad-hoc queries.

The table definition of a table variable cannot change after the DECLARE statement. Any ALTER TABLE query attempting to alter a table variable will fail with a syntax error. Along the same lines, you cannot use a table variable with SELECT INTO or INSERT EXEC queries. If you are using a table variable in a join, you will need to alias the table in order to execute the query.

SELECT ProductName, Revenue

FROM Products P

  INNER JOIN @ProductTotals PT ON P.ProductID = PT.ProductID

You can use a table variable with dynamic SQL, but you must declare the table inside the dynamic SQL itself. The following query will fail with the error "Must declare the variable '@MyTable'."

DECLARE @MyTable TABLE

(

  ProductID int ,

  Name varchar(10)

)

 

EXEC sp_executesql N'SELECT * FROM @MyTable'
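
The workaround is to declare and use the table variable within the dynamic SQL string itself. A minimal sketch of the idea (the inserted values are hypothetical):

EXEC sp_executesql N'
  DECLARE @MyTable TABLE (ProductID int, Name varchar(10))
  INSERT INTO @MyTable VALUES (1, ''Widget'')
  SELECT * FROM @MyTable'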

It's also important to note that table variables do not participate in transaction rollbacks. Although this can be a performance benefit, it can also catch you off guard if you are not aware of the behavior. To demonstrate, the following batch will return a count of 77 records even though the INSERT took place inside a transaction that was rolled back.

DECLARE @ProductTotals TABLE

(

  ProductID int ,

  Revenue money

)

 

BEGIN TRANSACTION

 

  INSERT INTO @ProductTotals (ProductID, Revenue)

    SELECT ProductID, SUM(UnitPrice * Quantity)

    FROM  [Order Details]

    GROUP BY ProductID

 

ROLLBACK TRANSACTION

 

SELECT COUNT(*) FROM @ProductTotals

 

 

Choosing Between Temporary Tables and Table Variables

Now you've come to a stored procedure that needs temporary resultset storage. Knowing what we have learned so far, how do you decide between a table variable and a temporary table?

First, we know there are situations which demand the use of a temporary table. These include calling nested stored procedures which use the resultset, certain scenarios using dynamic SQL, and cases where you need transaction rollback support.

Secondly, the size of the resultset will help determine which solution to choose. If the table stores a resultset so large that you require indexes to improve query performance, you'll need to stick with a temporary table. In borderline cases, try some benchmark testing to see which approach offers the best performance. If the resultset is small, the table variable is almost always the better choice.

An Example: Split

Table variables are a superior alternative to temporary tables in many situations. The ability to use a table variable as the return value of a UDF is one of the best uses of table variables. In the following sample, we will address a common need: a function to parse a delimited string into pieces. In other words, given the string "1,5,9" – we will want to return a table with a record for each value: 1, 5, and 9.

The following user-defined function will walk through an incoming string and parse out the individual entries. The UDF inserts the entries into a table variable and returns the table variable as a result. As an example, calling the UDF with the following SELECT statement:

SELECT * FROM fn_Split('foo,bar,widget', ',')

will return the following result set.

position  value
--------  ------
1         foo
2         bar
3         widget

We could use the resultset in another stored procedure or batch as a table to select against or filter with – a sketch appears after the source code below. We will see why the split function can be useful in the next OdeToCode article. For now, here is the source code to fn_Split.

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[fn_Split]') and xtype in (N'FN', N'IF', N'TF'))

drop function [dbo].[fn_Split]

GO

 

 

SET QUOTED_IDENTIFIER OFF

GO

SET ANSI_NULLS OFF

GO

 

CREATE  FUNCTION fn_Split(@text varchar(8000), @delimiter varchar(20) = ' ')

RETURNS @Strings TABLE

(   

  position int IDENTITY PRIMARY KEY,

  value varchar(8000)  

)

AS

BEGIN

 

DECLARE @index int
SET @index = -1

WHILE (LEN(@text) > 0)
BEGIN

  SET @index = CHARINDEX(@delimiter, @text)

  IF (@index = 0) AND (LEN(@text) > 0)
    BEGIN
      -- no delimiter remains: store the rest of the string and stop
      INSERT INTO @Strings VALUES (@text)
      BREAK
    END

  IF (@index > 1)
    BEGIN
      -- store the text before the delimiter, then trim it off
      INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
      SET @text = RIGHT(@text, (LEN(@text) - @index))
    END
  ELSE
    -- the delimiter is the first character: skip the empty entry
    SET @text = RIGHT(@text, (LEN(@text) - @index))

END

RETURN

END

 

GO

 

SET QUOTED_IDENTIFIER OFF

GO

 

SET ANSI_NULLS ON

GO
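
As promised, here is a sketch of using the function's resultset as a table to join against and filter with (assuming the Northwind Products table):

SELECT P.ProductID, P.ProductName
FROM Products P
  INNER JOIN dbo.fn_Split('1,5,9', ',') S
    ON P.ProductID = CAST(S.value AS int)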

Summary

Next time you find yourself using a temporary table, think of table variables instead. Table variables can offer performance benefits and flexibility when compared to temporary tables, and you can let the server clean up afterwards.



Thursday, March 17, 2011

Copy or move database diagrams for SQL Server 2005



All of a database's diagram information is stored in the dbo.sysdiagrams table.
If you wish to see the dbo.sysdiagrams table, which is in the System Tables folder of database A, then run the following statement:


select * from A.dbo.sysdiagrams	 


It will list all the diagram information.

Now if you need to move or copy all the diagrams from database A to database B, then write the following SQL statement and run it:


insert into B.dbo.sysdiagrams ([name], principal_id, version, definition)
select [name], principal_id, version, definition
from A.dbo.sysdiagrams

The following will copy a specific diagram from database A to database B:


insert into B.dbo.sysdiagrams ([name], principal_id, version, definition)
select [name], principal_id, version, definition
from A.dbo.sysdiagrams
where diagram_id = 13





SQL – Temporary Tables


Introduction

Temporary Tables are a great T-SQL feature that lets you store and process intermediate results by using the same selection, update, and join capabilities that you can use with typical SQL Server tables.
"In the early days when I used Access I used to create tables that I treated as temporary and then delete them whenever I finished my task". Using SQL Server this is much simpler."

Types of Temporary Tables in SQL

You can create two types of temporary tables in SQL Server: local and global. The two types differ from each other in their names, their visibility, and their availability.
Local Temporary Tables
  • Local temporary tables are prefixed with a single number sign (#) as the first character of their names, like #table_name.
  • Local temporary tables are visible only in the current session; that is, they are visible only to the connection that created them.
  • They are deleted when the user disconnects from the instance of Microsoft SQL Server.
Global Temporary Tables
  • Global temporary tables are prefixed with a double number sign (##) as the first character of their names, like ##table_name.
  • Global temporary tables are visible to all sessions; that is, any user can see them after they are created.
  • They are deleted when all users referencing the table disconnect from Microsoft SQL Server.
For example:
If you create a table named employees, the table can be used by any person who has the security permissions in the database to use it, until the table is deleted. If you create a local temporary table named #employees, you are the only person who can work with the table, and it is deleted when you disconnect. If you create a global temporary table named ##employees, any user in the database can work with this table. If no other user works with this table after you create it, the table is deleted when you disconnect. If another user works with the table after you create it, SQL Server deletes it when both of you disconnect.
SQL statements for creating the temporary table using the CREATE TABLE statement:

CREATE TABLE #MyTempTable (cola INT PRIMARY KEY)
INSERT INTO #MyTempTable VALUES (1)
SELECT * FROM #MyTempTable

When you create local or global temporary tables, the CREATE TABLE syntax supports constraint definitions with the exception of FOREIGN KEY constraints. If a FOREIGN KEY constraint is specified in a temporary table, the statement returns a warning message indicating that the constraint was skipped, and the table is still created without the FOREIGN KEY constraints. Temporary tables cannot be referenced in FOREIGN KEY constraints.
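
For instance, the following sketch (assuming the Northwind Customers table) creates the temporary table, but SQL Server skips the FOREIGN KEY with a warning:

CREATE TABLE #Orders
(
  OrderID int PRIMARY KEY,
  CustomerID nchar(5) FOREIGN KEY REFERENCES Customers(CustomerID) -- skipped
)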

Naming Of Temporary Tables

Temporary tables are always created in tempdb, no matter whether they are created inside a stored procedure or from the SQL Query Analyzer window.
If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, SQL Server has to be able to distinguish the tables created by the different users. SQL Server does this by internally appending a numeric suffix to each local temporary table name. The full name of a temporary table as stored in the sysobjects table in tempdb consists of the table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for the suffix, the table name specified for a local temporary table cannot exceed 116 characters.
Reason:
"Table names in SQL are the combination of owner.table_name and it must be unique within the database. Table_name can contain a maximum of 128 characters, except for local temporary table names (names prefixed with a single number sign (#)) that cannot exceed 116 characters."
For example:
CREATE TABLE #Yaks (YakID int, YakName char(30))
SELECT Name FROM tempdb.dbo.sysobjects WHERE Name LIKE '#Yak%'

The above SELECT query will return a result something like this:

name
------------------------------------------------------------------
#Yaks__________________________________________________00000000001D

(1 row(s) affected)

SQL Server stores a temporary table as a database object in tempdb with a unique numeric suffix appended to the end of its name. Because all temporary tables live in the tempdb database, two users could otherwise create temporary tables with the same name, and object names must be unique within a database. To avoid this problem, whenever anyone creates a temporary table, SQL Server automatically appends a unique string to the name the user supplied. You don't need to do anything with this extra string; SQL Server handles it automatically. You simply refer to the table by the #table_name you created, anywhere in your code.


Deletion of Temporary Tables
  • A local temporary table created in a stored procedure is dropped automatically when the stored procedure completes. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process which called the stored procedure that created the table.
  • All other local temporary tables are dropped automatically at the end of the current session.
  • Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
For example:
SQL statements for deleting the temporary table using the DROP TABLE statement:
DROP TABLE #Table_name
A local temporary table created within a stored procedure or trigger is distinct from a temporary table with the same name created before the stored procedure or trigger is called.
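
A minimal sketch of that distinction (the procedure name is hypothetical):

CREATE TABLE #Results (x int)
INSERT INTO #Results VALUES (1)
GO

CREATE PROCEDURE ShadowDemo
AS
BEGIN
  CREATE TABLE #Results (x int) -- distinct from the outer #Results
  INSERT INTO #Results VALUES (99)
  SELECT x FROM #Results        -- returns 99
END
GO

EXEC ShadowDemo                 -- returns 99
SELECT x FROM #Results          -- returns 1; the outer table is untouched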

Use of Temporary Table

Temporary tables are used in several ways. The most common uses are:
  • to keep the results of a called stored procedure,
  • to reduce the number of rows for joins,
  • to aggregate data from different sources, or
  • to replace cursors and parameterized views.

SQL Server cursors have huge overhead. Maintenance of code is much easier if you use temporary tables in your T-SQL. It will also be much easier to debug your stored procedures when you are using temporary tables, as the intermediate data is saved in them along the way.


Limitations of Temporary Tables

Temporary tables are created in the tempdb database and create additional overhead for SQL Server, reducing overall performance. This happens because all reads and writes to temporary tables go through the tempdb database.

Using Temporary Tables Effectively

If you do not have any option other than to use temporary tables, use them effectively. There are a few steps to take.
  • Only include the necessary columns and rows rather than all the columns and all the data, which would defeat the purpose of using temporary tables. Always filter the data going into your temporary tables.
  • When creating temporary tables, do not use SELECT INTO statements, as they place locks on system objects. Instead of SELECT INTO, create the table using Transact-SQL DDL statements and use INSERT INTO to populate it (see the sketch after this list).
  • Use indexes on temporary tables. In the early days, I always forgot to use an index on temporary tables. Especially for large temporary tables, consider using clustered and non-clustered indexes. You will have to test to see if indexes help or hurt overall performance.
  • After you finish using your temporary table, drop it. This will free the tempdb resources. Yes, temporary tables are deleted when the connection ends, but do not wait until then.
  • When creating a temporary table, do not create it within a transaction. If you do, it will lock some system tables (syscolumns, sysindexes, syscomments) and prevent others from executing the same query.
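
As referenced above, a sketch of the recommended pattern for populating a temporary table without SELECT INTO (the source table and column names are hypothetical):

-- Instead of: SELECT cola INTO #MyTempTable FROM SourceTable
CREATE TABLE #MyTempTable (cola INT PRIMARY KEY)

INSERT INTO #MyTempTable (cola)
SELECT cola
FROM SourceTable
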
In general, temp tables should be avoided, if possible. Because they are created in the tempdb database, they create additional overhead for SQL Server, slowing overall performance. As an alternative to temp tables, consider the following:
  • Rewrite your code so that the action you need completed can be done using a standard query or stored procedure, without using a temp table.
  • Use a derived table (see the sketch after this list).
  • Consider using a correlated sub-query.
  • Use a permanent table instead.
  • Use a UNION statement to mimic a temp table.
  • Use a table variable.
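
To illustrate the derived table alternative, a sketch assuming Northwind-style Customers and Orders tables:

SELECT C.CompanyName, T.TotalOrders
FROM Customers C
  INNER JOIN (SELECT CustomerID, COUNT(*) AS TotalOrders
              FROM Orders
              GROUP BY CustomerID) T
    ON C.CustomerID = T.CustomerID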

It all depends on your requirements at the time you are creating your query, but as written in Microsoft SQL Server Books Online,
"Consider using table variables instead of temporary tables. Temporary tables are useful in cases when indexes need to be created explicitly on them, or when the table values need to be visible across multiple stored procedures or functions. In general, table variables contribute to more efficient query processing."

If you are using SQL Server 2000 or higher, you can take advantage of the TABLE variable type. These are similar to temporary tables but offer more flexibility. Note that, contrary to popular belief, table variables are not purely in-memory objects; they too are backed by tempdb.
The code using a table variable might look like this:

DECLARE @TibetanYaks TABLE
(
  YakID int,
  YakName char(30)
)

INSERT INTO @TibetanYaks (YakID, YakName)
  SELECT YakID, YakName
  FROM dbo.Yaks
  WHERE YakType = 'Tibetan'

Table variables don't need to be dropped when you are done with them.


Which to use
  • If you have fewer than 100 rows, generally use a table variable. Otherwise use a temporary table. This is because SQL Server won't create statistics on table variables.
  • If you need to create indexes on it then you must use a temporary table.
  • When using temporary tables, create them and any indexes first, and then use them. This will help reduce recompilations. The impact of this is reduced starting in SQL Server 2005, but it's still a good idea.
Conclusion
Generally, temporary tables should be avoided as much as possible. If you need to use them, follow the steps above so that you have the minimum impact on server performance.


Friday, March 11, 2011

UNC path


Universal/Uniform Naming Convention. A UNC path describes the location of a volume, directory, or file.

The format for a UNC path is \\server\volume\directory\file and is not case-sensitive. For example:
\\Shared1_svr\Shared1\WGroups\Network\Orders.xls
Rather than describe the location of a file or directory by drive letter, the Network Group will typically communicate a UNC path to describe the actual location of a file or directory. Windows drive letter mappings are arbitrary, whereas a UNC path is specific and applies to all operating systems.

Note: The UNC method started with the UNIX operating system. UNIX uses the forward-slash character as a path separator. Many network services (ex. FTP) have their origins in the UNIX operating system, so they use forward-slashes instead of the backslashes that DOS/Windows uses. It is important to recognize this distinction when using these services.


Tuesday, March 8, 2011

How can I make my SQL queries case sensitive?


If you installed SQL Server with the default collation options, you might find that the following queries return the same results: 
 
CREATE TABLE mytable 
( 
    mycolumn VARCHAR(10) 
) 
GO 
 
SET NOCOUNT ON 
 
INSERT mytable VALUES('Case') 
GO 
 
SELECT mycolumn FROM mytable WHERE mycolumn='Case' 
SELECT mycolumn FROM mytable WHERE mycolumn='caSE' 
SELECT mycolumn FROM mytable WHERE mycolumn='case'
 
You can alter your query by forcing collation at the column level: 
 
SELECT myColumn FROM myTable  
    WHERE myColumn COLLATE Latin1_General_CS_AS = 'caSE' 
 
SELECT myColumn FROM myTable  
    WHERE myColumn COLLATE Latin1_General_CS_AS = 'case' 
 
SELECT myColumn FROM myTable  
    WHERE myColumn COLLATE Latin1_General_CS_AS = 'Case' 
 
-- if myColumn has an index, you will likely benefit by adding 
-- AND myColumn = 'case'
 
If you want to do this in a more global way, instead of modifying each individual query, you can force the collation at the database level, or at the column level, using the ALTER DATABASE and ALTER TABLE commands, respectively. You can see the current collation level on the properties tab of the database server, through Enterprise Manager (if you're going to change this setting, MAKE NOTE OF THIS VALUE).
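
For example, forcing the collation at the database level might look like this (a sketch; the database name is hypothetical, and the database must not be in use at the time):

ALTER DATABASE MyDatabase
COLLATE Latin1_General_CS_AS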
 
 
And you can see the description from running the following query: 
 
SELECT DATABASEPROPERTYEX('<database name>', 'Collation')
 
As changing this setting can impact applications and SQL queries, I would isolate this test first. In SQL Server 2000, you can easily run an ALTER TABLE statement to change the sort order of a specific column, forcing it to be case sensitive. First, execute the following query to determine what you need to change it back to: 
 
EXEC sp_help 'mytable'
 
The second recordset should contain the following information, in a default scenario: 
 
Column_Name Collation 
----------- ---------------------------------------------- 
mycolumn    SQL_Latin1_General_CP1_CI_AS
 
Whatever the 'Collation' column returns, you now know what you need to change it back to after you make the following change, which will force case sensitivity: 
 
ALTER TABLE mytable 
    ALTER COLUMN mycolumn VARCHAR(10) 
    COLLATE Latin1_General_CS_AS 
GO 
 
SELECT mycolumn FROM mytable WHERE mycolumn='Case' 
SELECT mycolumn FROM mytable WHERE mycolumn='caSE' 
SELECT mycolumn FROM mytable WHERE mycolumn='case'
 
If this screws things up, you can change it back, simply by issuing a new ALTER TABLE statement (be sure to replace my COLLATE identifier with the one you found previously): 
 
ALTER TABLE mytable 
    ALTER COLUMN mycolumn VARCHAR(10) 
    COLLATE SQL_Latin1_General_CP1_CI_AS
 
If you are stuck with SQL Server 7.0, you can try this workaround, which might be a little more of a performance hit (you should only get a result for the FIRST match): 
 
SELECT mycolumn FROM mytable WHERE 
    mycolumn = 'case' AND 
    CAST(mycolumn AS VARBINARY(10)) = CAST('Case' AS VARBINARY(10)) 
 
SELECT mycolumn FROM mytable WHERE 
    mycolumn = 'case' AND 
    CAST(mycolumn AS VARBINARY(10)) = CAST('caSE' AS VARBINARY(10)) 
 
SELECT mycolumn FROM mytable WHERE 
    mycolumn = 'case' AND 
    CAST(mycolumn AS VARBINARY(10)) = CAST('case' AS VARBINARY(10)) 
 

Friday, March 4, 2011

SQL Server Optimization tips


  • Set a reasonable size for your database.
    First of all, before database creation, you should estimate how large
    your database will be.
    To estimate the reasonable database size, you should first estimate
    the size of each table individually, and then add the values obtained.

  • Set a reasonable size for your transaction log.
    The general rule of thumb is to set the transaction log size to
    20-25 percent of the database size. The smaller your database, the
    larger the transaction log should be in proportion, and vice versa.
    For example, if the estimated database size is 10Mb, you should set
    the transaction log size to 4-5Mb, but if the estimated database
    size is over 500Mb, then 50Mb can be enough for the transaction log.


  • Leave the autogrow feature on for the data files and for the log files.
    Leave this feature on to let SQL Server increase allocated resources
    when necessary without DBA intervention. The autogrow feature is
    especially useful when there is no DBA in your firm, or your DBA has
    limited experience.


  • Set a reasonable size for the Autogrow increment.
    Automatically growing does result in some performance degradation,
    therefore you should set a reasonable size for the autogrow increment
    to avoid automatic growing too often. Try to set the initial size
    of the database and the size of the autogrow increment so that
    automatic growing will only arise at most once per week.


  • Don't enable the autoshrink feature.
    Autoshrinking results in some performance degradation, therefore you
    should shrink the database manually, or create a scheduled task to
    shrink the database periodically during off-peak times, rather than
    enabling the autoshrink feature.


  • Set the maximum size of the data and log files.
    Specify the maximum size for which the files can grow in order to
    prevent disk drives from running out of space.
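
    A sketch of capping a data file's growth (the database name, logical
    file name, and size are hypothetical):

    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Data, MAXSIZE = 2GB)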


  • Create a user-defined filegroup and make it the default filegroup.
    In general, it's a good decision to store and manage system and
    user objects separately from one another. This is so that the user
    objects will not compete with system objects for space in the primary
    filegroup. Usually a user-defined filegroup is not created for the
    small database if, for example, your database is less than 100Mb.
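
    A sketch of the idea (the database, file, and filegroup names are
    hypothetical):

    ALTER DATABASE MyDB ADD FILEGROUP UserData
    GO
    ALTER DATABASE MyDB
    ADD FILE (NAME = MyDB_User1, FILENAME = 'c:\mssql\data\MyDB_User1.ndf',
              SIZE = 50MB) TO FILEGROUP UserData
    GO
    ALTER DATABASE MyDB MODIFY FILEGROUP UserData DEFAULT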


  • Create a user-defined filegroup and create some tables in it to run
    maintenance tasks (backups, DBCC, update statistics, and so on)
    against these tables.

    LOAD TABLE and DUMP TABLE are no longer supported in SQL Server 7.0
    (and higher), but you can place a table in its own filegroup and can
    backup and restore only this table. This will allow you to group user
    objects with similar maintenance requirements into the same filegroup.


  • If you have several physical disks, try to create as many files
    per filegroup as there are physical disks, and put one file
    per disk.

    This will improve performance, because when a table is accessed
    sequentially, a separate thread is created for each file to read
    the table's data in parallel.


  • Don't create many data and log files on the same physical disk.
    Leaving the autogrow feature on for the data and for the log files
    can cause fragmentation of those files if there are many files
    on the same physical disk.
    In most cases, it's enough to have 1-2 database files on the same
    physical disk.


  • For a heavily accessed table, place this table in one filegroup and
    place the table's indexes in a different filegroup on different
    physical disks.

    This will improve performance, because the table's data and the
    index's data can then be read in parallel by separate threads.
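
    A sketch with hypothetical filegroup names (TableData and IndexData
    are assumed to live on different physical disks):

    CREATE TABLE BigOrders
    (
      OrderID int,
      ProductID int
    ) ON TableData

    CREATE NONCLUSTERED INDEX IX_BigOrders_ProductID
      ON BigOrders (ProductID) ON IndexData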


  • For a heavily accessed table with text/image columns, place this
    table in one filegroup and place text/image columns in a different
    filegroup on different physical disks.

    You can use a CREATE TABLE statement with TEXTIMAGE_ON keyword to
    place text/image columns in a different filegroup.
    See SQL BOL for details.
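
    A sketch with hypothetical table and filegroup names:

    CREATE TABLE Documents
    (
      DocID int PRIMARY KEY,
      Title varchar(100),
      Body text
    ) ON UserData TEXTIMAGE_ON BlobData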


  • Place the log files on different physical disk(s) than data files.
    Because logging is more write-intensive, it's important that
    the disks containing SQL Server log files have sufficient
    disk I/O performance.


  • If one of the join queries is used most often, place the tables
    used in this query in different filegroups on different
    physical disks.


  • If you have read-only tables, place these tables in different
    filegroups on different physical disks and use ALTER DATABASE
    statements to make just these filegroups READ ONLY.

    This not only increases read performance, but it prevents any
    data changes and allows you to control permissions to this data.
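
    A sketch with hypothetical names (SQL Server 2000 uses the READONLY
    keyword here):

    ALTER DATABASE MyDB MODIFY FILEGROUP ArchiveData READONLY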


  • Use Windows NT Performance Monitor to determine the appropriate
    number of the data and log files by checking the Disk Queue Length
    counter.

    The more database files and filegroups, the more difficult
    administering this database will be. Consider reducing the number
    of files if Disk Queue length is above 3, and continue monitoring.

SQL Server 2000 vs Oracle 9i




    Introduction

    People in newsgroups often ask for a comparison of Oracle and Microsoft SQL Server. In this article, I compare SQL Server 2000 with Oracle 9i Database regarding price, performance, supported platforms, SQL dialects and product limits.

    Platform comparison

    SQL Server 2000 only works on Windows-based platforms, including Windows 9x, Windows NT, Windows 2000 and Windows CE.
    In comparison with SQL Server 2000, Oracle 9i Database supports all known platforms, including Windows-based platforms, AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel, Sun Solaris and so on.

    Hardware requirements

    To install SQL Server 2000, you should have an Intel or compatible platform and the following hardware:

    Hardware Requirements
    Processor Pentium 166 MHz or higher
    Memory 32 MB RAM (minimum for Desktop Engine),
    64 MB RAM (minimum for all other editions),
    128 MB RAM or more recommended
    Hard disk space 270 MB (full installation),
    250 MB (typical),
    95 MB (minimum),
    Desktop Engine: 44 MB
    Analysis Services: 50 MB minimum and 130 MB typical
    English Query: 80 MB

    Oracle 9i supports Intel or compatible platforms, AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel, Sun Solaris and so on.

    To install Oracle 9i on an Intel or compatible platform, you should have the following hardware:

    Hardware Requirements
    Processor Pentium 166 MHz or higher
    Memory RAM: 128 MB (256 MB recommended)
    Virtual Memory: Initial Size 200 MB, Maximum Size 400 MB
    Hard disk space 140 MB on the System Drive
    plus 4.5 GB for the Oracle Home Drive (FAT)
    or 2.8 GB for the Oracle Home Drive (NTFS)

    To install Oracle 9i Database under the UNIX Systems, such as AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, and Sun Solaris, you should have the following hardware:

    Hardware Requirements
    Memory A minimum of 512 MB RAM
    Swap Space A minimum of 2 x RAM or 400 MB, whichever is greater
    Hard disk space 4.5 GB

    Software requirements

    SQL Server 2000 comes in six editions: Enterprise, Standard, Personal, Developer, Desktop Engine, and SQL Server CE (a compatible version for Windows CE) and requires the following software:

    Operating System Enterprise Edition Standard Edition Personal Edition Developer Edition Desktop Engine SQL Server CE
    Windows CE No No No No No Yes
    Windows 9x No No Yes No Yes No
    Windows NT 4.0 Workstation with Service Pack 5 No No Yes Yes Yes No
    Windows NT 4.0 Server with Service Pack 5 Yes Yes Yes Yes Yes No
    Windows NT 4.0 Server Enterprise Edition with Service Pack 5 Yes Yes Yes Yes Yes No
    Windows 2000 Professional No No Yes Yes Yes No
    Windows 2000 Server Yes Yes Yes Yes Yes No
    Windows 2000 Advanced Server Yes Yes Yes Yes Yes No
    Windows 2000 DataCenter Yes Yes Yes Yes Yes No
    Windows XP Professional No No Yes Yes Yes No

    Oracle 9i Database comes in three editions: Enterprise, Standard and Personal and requires the following software:

    Platform Operating System Version Required Patches
    Windows-based Windows NT 4.0 Service Pack 5
    Windows-based Windows 2000 Service Pack 1
    Windows-based Windows XP Not Necessary
    AIX-Based AIX 4.3.3 Maintenance Level 09 and IY24568,
    IY25282, IY27614, IY30151
    AIX-Based AIX 5.1 AIX 5L release 5.1 ML01+ (IY22854),
    IY26778, IY28766, IY28949, IY29965, IY30150
    Compaq Tru64 UNIX Tru64 5.1 5.1 patchkit 4
    Compaq Tru64 UNIX Tru64 5.1A 5.1A patchkit 1
    HP-UX HP-UX version 11.0 (64-bit) Sept. 2001 Quality Pack, PHCO_23792,
    PHCO_24148, PHKL_24268, PHKL_24729,
    PHKL_ 25475, PHKL_25525, PHNE_24715,
    PHSS_23670, PHSS_24301, PHSS_24303,
    PHSS_24627, PHSS_22868
    Linux SuSE Linux Enterprise Server 7
    (or SLES-7) with kernel 2.4.7,
    and glibc 2.2.2
    Not Necessary
    Sun Solaris Solaris 32-Bit 2.6 (5.6), 7 (5.7)
    or 8 (5.8)
    Not Necessary
    Sun Solaris Solaris 64-Bit 8 (5.8) Update 5

    Performance comparison

    It is very difficult to make a performance comparison between SQL Server 2000 and Oracle 9i Database. The performance of your databases depends more on the experience of the database developers and database administrator than on the database vendor. You can use either of these RDBMS to build a stable and efficient system. However, it is possible to define the typical transactions used in inventory control systems, airline reservation systems and banking systems, and then run them under different database management systems working on different hardware and software platforms.

    TPC tests

    The Transaction Processing Performance Council (TPC.Org) is an independent organization that specifies the typical transactions (transactions used in inventory control systems, airline reservation systems and banking systems) and some general rules these transactions should satisfy.

    The TPC produces benchmarks that measure transaction processing and database performance in terms of how many transactions a given system and database can perform per unit of time, e.g., transactions per second or transactions per minute.

    The TPC organization has produced specifications for many tests. There are TPC-C, TPC-H, TPC-R, TPC-W and some older tests, such as TPC-A, TPC-B and TPC-D. The most popular test is the TPC-C test (the OLTP test).

    At the moment this article was written, SQL Server 2000 held the top TPC-C performance results with cluster systems based on Distributed Partitioned Views.


    At the moment this article was written, SQL Server 2000 also held the top TPC-C price/performance results.

    Note. Because most organizations do not really run very large databases, the key points on which SQL Server 2000 won the TPC-C benchmarks do not matter to the vast majority of companies.

    Price comparison

    One of the main advantages of Microsoft SQL Server 2000 in comparison with Oracle 9i Database is that SQL Server is cheaper. Another SQL Server advantage is that Microsoft includes Online Analytical Processing (OLAP) and Data Mining as standard features in SQL Server 2000 Enterprise Edition, so you can save up to four times with SQL Server 2000 Enterprise Edition if you use OLAP and Data Mining.

    The price comparisons below were based on the

    Compare pricing for SQL Server 2000 Standard Edition and Oracle9i Standard Edition:

    Number of CPUs   Oracle9i Standard Edition   SQL Server 2000 Standard Edition
    1                $15,000                     $4,999
    2                $30,000                     $9,998
    4                $60,000                     $19,996
    8                $120,000                    $39,992
    16               $240,000                    $79,984
    32               $480,000                    $159,968

    Compare pricing for SQL Server 2000 Enterprise Edition (which includes OLAP and Data Mining) and Oracle9i Enterprise Edition (EE) with OLAP and/or Data Mining:

    Number of CPUs   Oracle9i EE   Oracle9i EE + OLAP or Data Mining   Oracle9i EE + OLAP and Data Mining   SQL Server 2000 Enterprise Edition
    1                $40,000       $60,000                             $80,000                              $19,999
    2                $80,000       $120,000                            $160,000                             $39,998
    4                $160,000      $240,000                            $320,000                             $79,996
    8                $320,000      $480,000                            $640,000                             $159,992
    16               $640,000      $960,000                            $1,280,000                           $319,984
    32               $1,280,000    $1,920,000                          $2,560,000                           $639,968

    Note. This is not a full price comparison between SQL Server 2000 and Oracle 9i Database; it is only a brief comparison. Discounts may apply, and prices can increase or decrease in the future. See Microsoft and Oracle for more information about the pricing of their products.

    Features comparison

    Both SQL Server 2000 and Oracle 9i Database support the ANSI SQL-92 entry level and do not support the ANSI SQL-92 intermediate level. In this section, I make a brief comparison of Transact-SQL with PL/SQL and show some SQL Server 2000 and Oracle 9i Database limits.

    T-SQL vs PL/SQL

    The dialect of SQL supported by Microsoft SQL Server 2000 is called Transact-SQL (T-SQL). The dialect of SQL supported by Oracle 9i Database is called PL/SQL. PL/SQL is a more powerful language than T-SQL. Here is a brief comparison of PL/SQL and T-SQL:

    Feature      PL/SQL                                      T-SQL
    Indexes      B-Tree indexes,                             B-Tree indexes
                 Bitmap indexes,
                 Partitioned indexes,
                 Function-based indexes,
                 Domain indexes
    Tables       Relational tables,                          Relational tables,
                 Object tables,                              Temporary tables
                 Temporary tables,
                 Partitioned tables,
                 External tables,
                 Index organized tables
    Triggers     BEFORE triggers,                            AFTER triggers,
                 AFTER triggers,                             INSTEAD OF triggers
                 INSTEAD OF triggers,
                 Database Event triggers
    Procedures   PL/SQL statements,                          T-SQL statements
                 Java methods,
                 third-generation language (3GL) routines
    Arrays       Supported                                   Not Supported

    SQL Server 2000 and Oracle 9i limits

    Here you can find some SQL Server 2000 and Oracle 9i Database limits:

    Feature                          SQL Server 2000   Oracle 9i Database
    database name length             128               8
    column name length               128               30
    index name length                128               30
    table name length                128               30
    view name length                 128               30
    stored procedure name length     128               30
    max columns per index            16                32
    max char() size                  8000              2000
    max varchar() size               8000              4000
    max columns per table            1024              1000
    max table row length             8036              255000
    max query size                   16777216          16777216
    recursive subqueries             40                64
    constant string size in SELECT   16777207          4000
    constant string size in WHERE    8000              4000

    Conclusion

    It is not true that SQL Server 2000 is better than Oracle 9i, or vice versa. Both products can be used to build stable and efficient systems, and the stability and effectiveness of your applications and databases depend more on the experience of the database developers and database administrator than on the database vendor. But SQL Server 2000 has some advantages in comparison with Oracle 9i, and vice versa.

    The SQL Server 2000 advantages:

    • SQL Server 2000 is cheaper to buy than Oracle 9i Database.
    • SQL Server 2000 holds the top TPC-C performance and price/performance results.
    • SQL Server 2000 is generally accepted as easier to install, use and manage.
    The Oracle 9i Database advantages:

    • Oracle 9i Database supports all known platforms, not only the Windows-based platforms.
    • PL/SQL is a more powerful language than T-SQL.
    • More fine-tuning to the configuration can be done via start-up parameters.