Thursday, December 27, 2007

Caching Sql Parameters in C#.Net

Here I am going to discuss how to cache SQL parameters.

Suppose you want to add records to a table; you can use either direct SQL or a stored procedure. A stored procedure's execution plan is cached, so it is usually faster than direct SQL.

If you are using a stored procedure, you can call it from the client side either with EXEC or with a parameterized call.

A parameterized call reuses the existing execution plan, whereas calling with EXEC creates a new execution plan every time.

So a parameterized call is more efficient than EXEC.

If you are using a parameterized call, you have to create an array of SqlParameters to pass the parameter values, and you have to create that array every time you call the stored procedure.

If you call it 100 times, you create the parameter array 100 times.

Cache:

Instead of creating the parameters every time, you can cache the array of SQL parameters on the first call. On subsequent calls, you get a clone of the SqlParameters from the cache and then assign the values to the cloned parameters.


Microsoft's Enterprise Library ships with a parameter caching mechanism.

Here I am going to implement similar parameter caching in a simpler way.

Parameter Cache Code:


using System;
using System.Collections;
using System.Data;

public class ParameterCache
{

private static Hashtable paramCache = Hashtable.Synchronized(new Hashtable());


// Create and return a copy of the IDataParameter array.
private static IDataParameter[] CloneParameters(IDataParameter[] originalParameters)
{
// Create the clone array with the same runtime element type as the original
// (e.g. SqlParameter[]), so callers can safely cast the result back to that type.
IDataParameter[] clonedParameters = (IDataParameter[])Array.CreateInstance(originalParameters.GetType().GetElementType(), originalParameters.Length);

for (int i = 0, j = originalParameters.Length; i < j; i++)
{
clonedParameters[i] = (IDataParameter)((ICloneable)originalParameters[i]).Clone();
}

return clonedParameters;
}

// Empties all items from the cache
public static void Clear()
{
paramCache.Clear();
}

// Add a parameter array to the cache for the command.
public static void AddParameterSetToCache(string connectionString, string storedProcedure, IDataParameter[] parameters)
{
string key = CreateHashKey(connectionString, storedProcedure);
paramCache[key] = parameters;
}

// Gets a parameter array from the cache for the command. Returns null if no parameters are found.
public static IDataParameter[] GetCachedParameterSet(string connectionString, string storedProcedure)
{
string key = CreateHashKey(connectionString, storedProcedure);
IDataParameter[] cachedParameters = (IDataParameter[])(paramCache[key]);

if (cachedParameters == null)
return null;

return CloneParameters(cachedParameters);
}

// Gets if a given stored procedure on a specific connection string has a cached parameter set
public static bool IsParameterSetCached(string connectionString, string storedProcedure)
{
string hashKey = CreateHashKey(connectionString, storedProcedure);
return paramCache[hashKey] != null;
}

// create the hash key based on connectionstring and stored procedure name
private static string CreateHashKey(string connectionString, string storedProcedure)
{
return connectionString + ":" + storedProcedure;
}

}




The above class caches the parameter arrays and gives access to the cached parameters.

Hashtable – stores the parameter arrays against a hash key; it acts as the cache.

CreateHashKey – builds the hash key from the connection string and the stored procedure name.

AddParameterSetToCache – caches the parameters in the Hashtable under the connection string and stored procedure name.

CloneParameters – returns a clone of the cached parameters.

GetCachedParameterSet – gets a cloned parameter set from the Hashtable (cache) based on the hash key.

IsParameterSetCached – checks whether parameters are already cached for the given hash key.


Clear – clears the Hashtable (cache).


Customer Class:


public class Customer
{

#region C# 3.0 Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion

}



How to use the ParameterCache class:



//Add the Customer Information to Customer Table
public void AddCustomer(Customer cust, string connectionString)
{
SqlConnection conn = null;
SqlCommand cmd = null;
SqlParameter[] param = null;
string spName = "AddCustomer";

try
{
conn = new SqlConnection(connectionString);
cmd = new SqlCommand(spName, conn);
cmd.CommandType = CommandType.StoredProcedure;

param = GetParameters(cust, connectionString, spName);
cmd.Parameters.AddRange(param);

conn.Open();
cmd.ExecuteNonQuery();

}
catch (Exception)
{
throw;
}
finally
{
if (conn != null && conn.State == ConnectionState.Open)
conn.Close();

if (cmd != null)
cmd.Dispose();

param = null;
}




}



The above method adds a record to the Customer table. To do that, you have to prepare the parameters for the stored procedure; the GetParameters() method prepares them.



// Prepare the SqlParameters for adding a Customer.
// Get cloned parameters if a set is already cached for this connection string and stored procedure;
// otherwise create a new SqlParameter array and add it to the cache for next time.
private SqlParameter[] GetParameters(Customer cust, string connectionString, string storeProcedure)
{
SqlParameter[] param = null;

if (ParameterCache.IsParameterSetCached(connectionString, storeProcedure))
{
param = (SqlParameter[])ParameterCache.GetCachedParameterSet(connectionString, storeProcedure);

param[0].Value = cust.CustomerID;
param[1].Value = cust.Name;
}
else
{
param = new SqlParameter[2];

param[0] = new SqlParameter("@CustomerID", SqlDbType.BigInt, 8);
param[0].Value = cust.CustomerID;
param[1] = new SqlParameter("@Name", SqlDbType.VarChar, 50);
param[1].Value = cust.Name;

ParameterCache.AddParameterSetToCache(connectionString, storeProcedure, param);
}

return param;
}




The above method returns the array of SQL parameters. First it checks the cache; if the set already exists, it gets the parameters from the cache and assigns the values to the cloned parameters.
If it does not exist, it creates a new array of SQL parameters and adds the prepared parameters to the cache for next time.

Only the first call spends time building and caching the parameters; from the next call onwards, preparing the SQL parameters is faster.
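For example, in a hypothetical bulk-insert loop (assuming a customers list and the AddCustomer method above), only the first iteration pays the cost of building the SqlParameter array; every later iteration works with a clone from the cache:

// Hypothetical bulk insert: the parameter array is created and cached on the first call;
// every subsequent call clones the cached array and only assigns the new values.
foreach (Customer cust in customers)
{
AddCustomer(cust, connectionString);
}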

Conclusion:
You can get better performance by caching the SQL parameters if you are calling the stored procedure frequently.

Tuesday, December 25, 2007

Parallel Extensions to .Net Framework - PLINQ

Microsoft released the CTP version of Parallel Extensions to the .Net Framework.
With Parallel Extensions, we can write programs that use multi-core (dual, quad, etc.) processors.

Usually we write programs for a single-core processor, so the hardware resources are not fully utilized. With Parallel Extensions, we can utilize the hardware resources effectively.

You can download the CTP version of Parallel Extensions here


Parallel Extensions provide several new ways to express parallelism in your code:


Declarative data parallelism(PLINQ) - Parallel Language Integrated Query (or Parallel LINQ) is an implementation of LINQ-to-Objects that executes queries in parallel, scaling to utilize the available cores and processors of the machine. Because queries are declarative, you are able to express what you want to accomplish, rather than how you want to accomplish it.


Imperative data parallelism - Parallel Extensions also contains mechanisms to express common imperative data-oriented operations such as for and foreach loops, automatically dividing the work in the loop to run on parallel hardware.


Imperative task parallelism - Rather than using data to drive parallelism, Parallel Extensions enables you to express potential parallelism via expressions and statements that take the form of lightweight tasks. Parallel Extensions schedules these tasks to run on parallel hardware and provides capabilities to cancel and wait on tasks.
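As a rough illustration of the imperative data-parallel style described above (a sketch only, assuming the CTP's System.Threading.Parallel class; the exact CTP API may differ from the final release), an ordinary loop can be parallelized like this:

// Sequential loop: processes the items one by one on a single core.
// Process() and data are hypothetical placeholders.
for (int i = 0; i < data.Length; i++)
{
Process(data[i]);
}

// Parallel loop: Parallel.For divides the iterations among the available cores.
Parallel.For(0, data.Length, i => Process(data[i]));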

Code(PLINQ):

1. Create a new project in VS 2008.
2. Reference System.Threading.dll from the Parallel Extensions installation path.
3. Create a class like the following.


class ValidNumbers
{
//Sequential Method
public static int SequentialValidNumberCount(IEnumerable<string> list)
{
return list.Where(n => IsNumber(n)).Count();
}

//Parallel Method
public static int ParallelValidNumberCount(IEnumerable<string> list)
{
return list.AsParallel().Where(n => IsNumber(n)).Count();
}

//validate whether the string contains only digits
private static bool IsNumber(string input)
{
return !(Regex.IsMatch(input, "[^0-9]"));
}
}



In this class, we have declared two methods that count how many valid numbers are in the list.

SequentialValidNumberCount - the sequential version of the method; it uses a single core.

ParallelValidNumberCount - the parallel version of the method. AsParallel() is an extension method that lets the query use all the cores of the processor.


Calling code:



string[] arr = { "1", "a", "2" };
int count = 0;

//Sequential way of accessing
count = ValidNumbers.SequentialValidNumberCount(arr);

//Parallel way of accessing
count = ValidNumbers.ParallelValidNumberCount(arr);




For a large input, the parallel method runs faster than the sequential method because it uses all the cores of the processor. You can see the difference between the two in the Task Manager performance monitor.
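If you want to measure the difference in code rather than in Task Manager, a simple hypothetical harness (requires System.Diagnostics and System.Linq) could time both methods over a large input with a Stopwatch:

// Build a large test input so that the parallel overhead pays off (hypothetical data).
string[] data = Enumerable.Range(0, 1000000).Select(i => i.ToString()).ToArray();

Stopwatch watch = Stopwatch.StartNew();
int sequentialCount = ValidNumbers.SequentialValidNumberCount(data);
watch.Stop();
Console.WriteLine("Sequential: {0} ms", watch.ElapsedMilliseconds);

watch = Stopwatch.StartNew();
int parallelCount = ValidNumbers.ParallelValidNumberCount(data);
watch.Stop();
Console.WriteLine("Parallel:   {0} ms", watch.ElapsedMilliseconds);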

Wednesday, December 19, 2007

Exception Hunter from Red Gate

Red Gate has introduced a new tool called Exception Hunter. It hunts for exceptions in your code.

By using this tool, we can avoid unhandled exceptions and application crashes at run time. It analyses the code you have written and generates a report of the possible exceptions that can occur in it.

So we can find unhandled exceptions easily in the development cycle itself instead of in the QA cycle.

You can download it here

Tuesday, December 18, 2007

ASP.NET 3.5 Extensions CTP Preview

Microsoft released the CTP version of ASP.Net 3.5 Extensions. It adds features on top of Framework 3.5. You can download it here.

ASP.Net 3.5 Extensions contains:

1. MVC (Model View Controller):
It provides clear separation of presentation and logic, and also provides URL routing.

2. Dynamic Data Support :
It provides faster creation of dynamic data driven websites.

You can create a simple data-driven website like this:
1. Create Dynamic Data Web Application Project
2. Add LINQ To SQL Class Item to the project.
3. Drag the required tables from database into LINQ to SQL Class.
4. Set enableTemplates to true in Web.Config, like
<dynamicData dataContextType="" enableTemplates="true">
5. Run the Application to view, add, edit and delete the table records.

3. Silverlight Controls:
It provides additional Silverlight controls.

4. AJAX Enhancements:
It provides browser history support for the AJAX ASPX pages.

5. ADO.Net Entity Framework:
It provides the Entity Framework, which enables developers to model the database closer to the real-world application.

Monday, December 10, 2007

C# 3.0 Extension Methods

Microsoft introduced a cool and nice feature called extension methods in C# 3.0.
You can add methods to any type, like string, int, a collection, or even a custom class such as Customer.

You have to define and implement extension methods in a static class, and the methods themselves must also be static.

Extension Class and Methods:


using System.Collections;
using System.Text.RegularExpressions;

static class MyExtensionMethods
{

// this method applies to the string type
public static bool IsNumber(this string input)
{
return !(Regex.IsMatch(input, "[^0-9]"));
}

// this method applies to any type, since every type derives from object
public static bool IsExistsIn(this object obj, IEnumerable list)
{
foreach (object o in list)
{
if (o.Equals(obj))
return true;
}

return false;
}
}



Using Extension Methods:

string number = "45";
bool isNum = false;

// returns true
isNum = number.IsNumber();

// returns false
isNum = "45fg".IsNumber();


bool isExists = false;
string[] arr = { "Ayyanar", "Senthil", "Vaithy" };

// returns true
isExists = "Ayyanar".IsExistsIn(arr);

// returns false
isExists = "Ayya".IsExistsIn(arr);



Now we will see how to use these extension methods with a custom class object.

Class:


class Customer
{

#region Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion

#region Override Methods

public override bool Equals(object obj)
{
Customer c = (Customer)obj;

if (c.CustomerID == this.CustomerID && c.Name == this.Name)
return true;
else
return false;
}

public override int GetHashCode()
{
return base.GetHashCode();
}

#endregion
}



Using Extension Method for Customer Object:


// C# 3.0 Collection Initializers


List<Customer> custList = new List<Customer>{
new Customer { CustomerID = 1, Name = "Ayyanar" },
new Customer { CustomerID = 2, Name = "Senthil" },
new Customer { CustomerID = 3, Name = "Vaithy" }
};

// C# 3.0 Object Initializers
Customer c = new Customer { CustomerID = 1, Name = "Ayyanar" };

// return true
isExists = c.IsExistsIn(custList);

C# 3.0 Collection Initializers

C# 3.0 has a nice feature for creating a collection of objects in a simple way, similar to the C# 3.0 object initializers.

Class Structure:

class Customer
{

#region Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion

}


Conventional Method:

// C# 3.0 object Initialization Method
Customer custmObj1 = new Customer { CustomerID = 1, Name = "Ayyanar" };
Customer custmObj2 = new Customer { CustomerID = 2, Name = "Senthil" };
Customer custmObj3 = new Customer { CustomerID = 3, Name = "Vaithy" };

//List of Customer

List<Customer> listCust = new List<Customer>();

// Adding the object into the Customer List
listCust.Add(custmObj1);
listCust.Add(custmObj2);
listCust.Add(custmObj3);



C# 3.0 :


List<Customer> listStr = new List<Customer> {
new Customer{CustomerID = 1, Name = "Ayyanar"},
new Customer{CustomerID = 2, Name = "Senthil"},
new Customer{CustomerID = 3, Name = "Vaithy"},
};





In a single statement, we can populate multiple objects into the list.

C# 3.0: Anonymous Types

C# 3.0 anonymous types are a nice feature. You can create an object without declaring the class, its private variables, or its properties. Suppose you want to create a class with two properties:

Conventional Method:

class Customer
{

#region Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion
}


Object Creation:

Customer custm = new Customer();
custm.CustomerID = 1;
custm.Name = "Rich";


C# 3.0 Anonymous Types:

You can create an object of an anonymous class (a class with no name) with property initialization.


var cust = new { CustomerID = 1, Name = "Rich" };

The above code creates an object of an anonymous class with two properties, CustomerID and Name.

Visual Studio provides IntelliSense for the anonymous object even without compiling.

The compiler generates a class for the anonymous object, roughly (not exactly) like the one below.


class Anonymous_1
{

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

}

If you create multiple anonymous objects, the compiler will reuse the same class if the property names and types are the same.


var cust = new { CustomerID = 1, Name = "Rich" };

var cust1 = new { CustomerID = 2, Name = "David" };

In the above code, the compiler uses the same anonymous class for creating both objects.


var cust = new { CustomerID = 1, Name = "Rich" };

var cust1 = new { CustomerID = "2", Name = "David" };


In the above code, the compiler generates two classes, one for each object, because the type of the CustomerID property in the second anonymous object differs from the first.


Once you have created an anonymous type object, you can't reassign the values of its properties.

cust.CustomerID = 3;

The above code will cause a compiler error.

Sunday, December 09, 2007

SCRUM Development Process - Agile Project Management

Here we are going to discuss Agile project management with Scrum. Nowadays many companies have started using the Scrum development process; it is a practice to follow as a project/product development framework. This process is best suited to projects whose requirements change frequently and keep gaining enhancements.

Definition:
An iterative, incremental process for developing any product or managing any work. It produces a potentially shippable set of functionality at the end of every iteration.

Attributes of SCRUM process:
• An agile process to manage and control development work.
• A wrapper for existing engineering practices.
• A team-based approach to iteratively, incrementally develop systems and products when requirements are rapidly changing
• A process that controls the chaos of conflicting interests and needs.
• A way to improve communications and maximize co-operation.


SCRUM Process Diagram:



Detailed SCRUM Process Diagram:




Terminology:

Product Owner

o This is the person who makes decisions about features that will be added to the project
o This person may be a customer, but can be a person in the company who acts as the customer’s representative
o This person will be the ultimate judge of a project's successful completion

SCRUM Master

o Equivalent to a project manager for a small project or a team lead on a larger project, with a size of 4 to 8 people.
o The scrum master ensures that the rules of scrum are followed
 By the development team
 By management and customers

Product Backlog

o The product backlog is generally equivalent to the requirements document in traditional projects
o Features are added to the product backlog and include the following information
 ID
 Priority
• Multiple priority scales can be used
o 1-5
o Showstopper, Major, Minor, Trivial
 Description
 Preliminary Hour Estimate
o It can also contain
 Customer/Contract information
 Requirement Owner
 Anticipated Version
 Assigned Development Team
 Assigned QA Team
 Feature Status
o Features are prioritized by the Product Owner

SPRINT

• A Sprint is a single development cycle
• Developers work as a team during a sprint to complete tasks that are defined at the beginning of the sprint
• Typical duration of 4 weeks, but is flexible
• The output of the sprint should be completed software modules that provide demonstrable capability
• The entire team is responsible for all of the tasks on the sprint
o If someone finishes their tasks early, they help others complete the remaining tasks
• The end date of the sprint is fixed. Tasks must get dropped from a sprint rather than move the end date
• Sprint Planning Meeting:
• Occurs at the beginning of sprint
• Features from the product backlog are broken down into smaller tasks and assigned to developers
• Estimates refined by developers based on implementation responsibility
• Attended by development team, product owner and the scrum master
The scrum master runs the meeting

Sprint Backlog

• The Sprint Backlog tracks the progress of an individual sprint
• All of the tasks that were defined during the sprint planning meeting are listed on the sprint backlog
• The estimates for tasks are stored and updated on this document
• Daily SCRUM:
• Daily progress meeting
o Attended by the development team
o Run by the scrum master
• Three questions answered by each team member
o What did you do yesterday?
o What is keeping you from accomplishing your tasks?
o What do you plan on doing today?
• Estimates are updated on the sprint backlog
o Number of hours worked are logged
o Hours remaining are “re-estimated”
• Information gathered at these meetings can be used to track progress of tasks


SCRUM Rules
• The dates are fixed
o If tasks are not going to be finished, they are dropped from this iteration
o Priorities are VERY important
• No Feature Creep
o Features may not be added to a sprint by the product owner after the Sprint Planning Meeting
o Exception – the burn down chart shows that all tasks will be completed early and the team agrees to take on the next task from the product backlog
o Sprints can be cancelled and restarted by the product owner if there is a high priority need
• All tasks must be TESTED to be complete
o Developers must COMPLETE the tasks
o The next sprint will start immediately after this one ends, so there will not be time for revisiting this task


Challenges

o It takes practice by the entire team
o People tend to over-commit during the Sprint Planning Meeting
o Proper estimates must include time for:
 Design
 Development
 Reviews (Design and Code)
 Unit Test Creation
 Testing
 Integration

Wednesday, December 05, 2007

C# 3.0 : Object Initialization

Usually an object is initialized by passing values to a constructor. If we want to initialize different property values in different situations, we need to create more constructors to cover all of them.

Conventional Method:

class Customer
{
#region Constructors

public Customer() {}

public Customer(long customerID)
{
CustomerID = customerID;
}

public Customer(long customerID, string name)
{
CustomerID = customerID;
Name = name;
}

#endregion


#region Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion
}

Initialization:

Customer custObj = new Customer(1);
Customer custObj1 = new Customer(1, "Pankaj");
Customer custObj2 = new Customer();
custObj2.CustomerID = 1;
custObj2.Name = "Pankaj";

C# 3.0:

But in C# 3.0 there is no need to create so many constructors for initialization; you can initialize the properties without creating extra constructors.

class Customer
{

#region Automatic Properties

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

#endregion
}

Initialization:

Customer custSingle = new Customer { CustomerID = 1 };
Customer cust = new Customer { CustomerID = 1, Name = "Pankaj" };


Constructor and Initialization:

You can also call a constructor and initialize properties at the same time.

Customer custConst = new Customer(1) { CustomerID = 2, Name = "Pankaj" };


The constructor is called first, and then the property initializer runs.

So the final CustomerID value will be 2 instead of 1.

Saturday, November 24, 2007

C#.Net 3.0 features: Automatic Property

Microsoft introduced a new feature in C#.Net 3.0 called automatic properties.

There is no longer any need to declare the private backing variable for a property; the compiler creates it automatically. But you can still use the conventional way of declaring private variables when you need to initialize a specific default value or use the field for other calculations.

Conventional Method:

class Customer
{
private long m_customerID;
private string m_name;

public long CustomerID
{
get
{
return m_customerID;
}
set
{
m_customerID = value;
}
}

public string Name
{
get
{
return m_name;
}
set
{
m_name = value;
}
}

}




C#.Net 3.0 New Method:

class Customer
{

public long CustomerID
{
get;
set;
}

public string Name
{
get;
set;
}

}


From the above declaration, you can see that no private variables are declared.


You may ask what the default value of such a property would be. Here is the answer:

1. Value type = 0
2. Reference type = null
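For example (a small sketch using the Customer class above):

Customer cust = new Customer();

// CustomerID is a value type (long), so its default is 0.
Console.WriteLine(cust.CustomerID);    // prints 0

// Name is a reference type (string), so its default is null.
Console.WriteLine(cust.Name == null);  // prints True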

Friday, November 23, 2007

Performance difference between EXEC and sp_executesql

I presume that you already know about execution plans. Sometimes we come to a situation where we have to use dynamic SQL instead of direct T-SQL.

If we are using direct T-SQL (not dynamic) in a stored procedure, SQL Server reuses the execution plan from the cache, i.e. SQL Server will not compile the stored procedure again.

If we are using dynamic SQL in a stored procedure, SQL Server may not reuse the execution plan; it recreates the plan every time the SQL string differs.

So we have to think about performance while using dynamic SQL.

To execute dynamic SQL in a stored procedure, we can use one of the following:

1. EXEC (Non- parameterized)
2. sp_executesql (Parameterized)



There is a performance difference between the two.

The execution plan is not created until you execute the dynamic SQL. If you execute it using EXEC, an execution plan is created for every execution, even when only the values change. If you use sp_executesql, the SQL Server optimizer will try to reuse the same execution plan, because the dynamic SQL string stays the same and only the values change; it is treated like a stored procedure with input parameters.

Use the following query to test,


CREATE TABLE [dbo].[Item]
(
ID INT
)

GO

INSERT INTO [dbo].[Item](ID) VALUES (1)
INSERT INTO [dbo].[Item](ID) VALUES (2)

GO


DBCC FREEPROCCACHE

DECLARE @ItemID INT
DECLARE @ItemIDStr NVARCHAR(10)
DECLARE @Query NVARCHAR(200)

SET @Query = 'SELECT * FROM [dbo].[Item] WHERE ID = '

SET @ItemIDStr = '1'
EXEC( @Query + @ItemIDStr)

SET @ItemIDStr = '2'
EXEC( @Query + @ItemIDStr)

SET @Query = 'SELECT * FROM [dbo].[Item] WHERE ID = @ID'

SET @ItemID = 1
EXEC sp_executesql @Query, N'@ID INT', @ID = @ItemID

SET @ItemID = 2
EXEC sp_executesql @Query, N'@ID INT', @ID = @ItemID




To view the execution plan, use the following query.

SELECT usecounts, sql FROM sys.syscacheobjects



Results:

UseCounts SQL

1 SELECT * FROM [dbo].[Item] WHERE ID = 1
2 (@ID INT)SELECT * FROM [dbo].[Item] WHERE ID = @ID
1 SELECT usecounts, sql FROM sys.syscacheobjects
1 SELECT * FROM [dbo].[Item] WHERE ID = 2


From the results, the dynamic SQL executed using sp_executesql reuses the same execution plan (usecounts = 2), while EXEC creates a new execution plan for every execution.



Conclusion:

Always try to use sp_executesql to execute dynamic SQL to improve performance.

Tuesday, October 09, 2007

Avoiding Global temporary table problems between databases

Global temp tables are used to store values that can be shared across stored procedures in a particular application run. The table disappears when the session that created it ends and no other session is referencing it.

A global temp table is created with the following query:

CREATE TABLE [dbo].[##Temp] (ID INT)

If the global temp table is created and used by a single instance of the application only, the problem (two temp tables created with the same name) will not occur. The problem occurs when another instance creates the same temp table at the same time, whether it is running against the same database or against another database, because the temp table name is the same.

To avoid this conflict, we can append a GUID to the temp table name. But this method is costlier and harder to handle, because the temp table name becomes very long.

DECLARE @Sql NVARCHAR(200)
SET @Sql = 'CREATE TABLE [dbo].[##Temp' + CONVERT(VARCHAR(36), NEWID()) + '] (ID INT)'
EXEC (@Sql)


Alternatively, we can use an identity value from some table in the temp table name.

SET @Sql = 'CREATE TABLE [dbo].[##Temp' + @ID + '] (ID INT)'
EXEC (@Sql)


The above method solves the problem even when multiple instances run against the same database, because each instance creates a unique identity value and uses that ID in the temp table name.

But a problem can still occur between different databases that produce the same identity value. This happens when multiple instances run against different databases at the same time, because each instance tries to create a temp table with the same name.

To avoid conflicts between different databases, include DB_ID() or DB_NAME() in the temp table name.

SET @Sql = 'CREATE TABLE [dbo].[##Temp' + CONVERT(VARCHAR(10), DB_ID()) + @ID + '] (ID INT)'
EXEC (@Sql)

SET @Sql = 'CREATE TABLE [dbo].[##Temp' + DB_NAME() + @ID + '] (ID INT)'
EXEC (@Sql)

DB_ID() and DB_NAME() are unique for each database.

Tuesday, September 11, 2007

Adding Documentation for Enum Members in .Net

To add documentation for each member of an enum, we have to add a summary tag to each member, like the example below.

CODE:
-----


///<summary>
/// Priority
///</summary>
public enum Priority
{
///<summary>
/// Low Priority
///</summary>
Low,
///<summary>
/// Medium Priority
///</summary>
Medium,
///<summary>
/// High Priority
///</summary>
High,


}

Tuesday, September 04, 2007

Randomly selecting records from the Table - SQL Server

If you would like to retrieve 10 records randomly, you can use either the RAND() or the NEWID() function.

RAND():
------
SELECT TOP 10 EmployeeID, RAND() AS RNumber FROM dbo.Employee ORDER BY 2


NEWID():
--------
SELECT TOP 10 EmployeeID FROM dbo.Employee ORDER BY NEWID()


The above two methods of retrieving records from the table have their own merits and demerits.

The RAND() method gives the same value for every row within a query, so ordering by it does not really shuffle the rows; you have to come up with your own logic to produce a different random number for each record.

The NEWID() method gives an easy solution to the above problem. The idea behind it is that SQL Server generates a unique identifier for each row, and ordering by it shuffles the rows.

There is a performance overhead with this approach when you use it on a table with many records, because it scans the whole table. You can reduce the cost by limiting the records with a WHERE condition.

SQL Server 2005.
---------------

SQL Server 2005 provides a new option to get records randomly: TABLESAMPLE. This keyword is used with the FROM clause. This approach does not read through the entire table; it just takes a sample of records instead of scanning the whole table.

TABLESAMPLE:
------------

SELECT EmployeeID FROM dbo.Employee TABLESAMPLE (50 ROWS)

SELECT EmployeeID FROM dbo.Employee TABLESAMPLE (50 PERCENT)

SELECT TOP 10 EmployeeID FROM dbo.Employee TABLESAMPLE (50 ROWS)


The first query above will not return exactly 50 rows.
SQL Server first converts the ROWS value into a percentage and then selects the records randomly.

The random selection is based on the table's data pages (8 KB) rather than on individual row identifiers, so it will not always produce the expected number of rows.

To get a predictable number of rows, we have to use the TOP clause in the select query. The TOP number should be less than the number of records specified in TABLESAMPLE.

Sometimes even this approach will not produce the expected result, so use it carefully with your own logic.

Thursday, July 26, 2007

Handling Database NULL in .Net while using SqlParameter (.Net 2.0)

If you want to set NULL for a database column while inserting or updating, plain "null" or "DBNull.Value" is not suitable.

C# code:

param[0] = new SqlParameter("@MiddleName", SqlDbType.VarChar , 50);
param[0].Value = null;

param[0] = new SqlParameter("@MiddleName", SqlDbType.VarChar , 50);
param[0].Value = DBNull.Value;


If you are using SqlParameter to pass the parameter value to a stored procedure, the above null values are not supported; it will throw an error like "Parameter @MiddleName not specified.".

C# code:

using System.Data.SqlTypes;

param[0] = new SqlParameter("@MiddleName", SqlDbType.VarChar , 50);
param[0].Value = SqlString.Null;

The above code will execute properly without any error.

You have to use the corresponding SqlTypes for the other data types, for example:
DateTime --> SqlDateTime.Null
Int32 --> SqlInt32.Null, etc.
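For example (hypothetical parameters, following the same pattern as above):

param[1] = new SqlParameter("@BirthDate", SqlDbType.DateTime);
param[1].Value = SqlDateTime.Null;

param[2] = new SqlParameter("@Age", SqlDbType.Int);
param[2].Value = SqlInt32.Null;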

Tuesday, July 24, 2007

Performance Difference between Parameterized and Non-Parameterized call to Stored Procedure

You can call SQL stored procedures either with parameters (using SqlParameter) or with a normal SQL string (using EXEC dbo.spName). There can be a significant performance difference between the two.

Non – Parameterized Stored Procedure Call:
If you call the stored procedure using EXEC (like a normal SQL query) with the command type set to Text, the execution plan is not reused. SQL Server caches the execution plan of a stored procedure so that it can be reused, but when you call the procedure via EXEC, a new execution plan is created for every different EXEC string instead of reusing the plan from the cache. Creating a new execution plan means SQL Server goes through the parsing, optimizing, and compiling process again.


Parameterized Stored Procedure Call:

If you call the stored procedure with parameters (for example, SqlParameter) and the command type set to StoredProcedure, the execution plan of the stored procedure is reused instead of a new one being created for each set of parameter values.
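For example, the two calling styles from C# might look like the sketch below (a minimal example assuming an open SqlConnection named conn and a hypothetical dbo.AddCustomer stored procedure):

// Non-parameterized: the SQL text changes with every value, so a new plan is compiled each time.
SqlCommand execCmd = new SqlCommand("EXEC dbo.AddCustomer 1, 'Ayyanar'", conn);
execCmd.CommandType = CommandType.Text;
execCmd.ExecuteNonQuery();

// Parameterized: the command text stays constant, so the cached plan can be reused.
SqlCommand paramCmd = new SqlCommand("dbo.AddCustomer", conn);
paramCmd.CommandType = CommandType.StoredProcedure;
paramCmd.Parameters.Add("@CustomerID", SqlDbType.BigInt).Value = 1;
paramCmd.Parameters.Add("@Name", SqlDbType.VarChar, 50).Value = "Ayyanar";
paramCmd.ExecuteNonQuery();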

Conclusion:

Use the parameterized method of calling instead of the EXEC method when you call a stored procedure. You won't notice the difference with only a few calls, but you will feel it when the stored procedure is called many times (for example, 10,000 frequent calls).

Monday, July 09, 2007

OUTPUT Command in SQL Server 2005

This is not about the OUTPUT parameters that can be used in stored procedures; it is about returning the affected data of a table with a new feature in SQL Server 2005.
SQL Server 2000

A simple question for you: if you want to retrieve the last inserted identity value, what do you do? Obviously SCOPE_IDENTITY() or @@IDENTITY will be your answer. There is a small difference between these two, which I am not going to discuss right now. Even though both satisfy the current requirement, I will use SCOPE_IDENTITY(), which is the correct one.

CREATE TABLE TempTable
(
ID INT IDENTITY(1 , 1)
, Code VARCHAR(25)
, Name VARCHAR(50)
, Salary Numeric(10 , 2)
)
INSERT INTO TempTable ( Code , Name , Salary )
VALUES ( 'A001' , 'John' , 100 )
INSERT INTO TempTable ( Code , Name , Salary )
VALUES ( 'A002' , 'Ricky' , 200 )

SELECT SCOPE_IDENTITY() AS LastInsertID

However, this is only valid when you need the last inserted ID. A problem arises when you need the last updated or deleted data. In SQL Server 2000, you don't have any option other than writing a trigger or triggers to capture them via the inserted and/or deleted tables.

more...

Thursday, July 05, 2007

Features in Microsoft SQL Server 2008 CTP

Microsoft is making a series of announcements at Tech*Ed related to SQL Server 2008 -- previously codenamed "Katmai". I've got some details on some of the new features including the MERGE statement, Table Valued Parameters, Change Data Capture and the Declarative Management Framework. There should also be a download of the June CTP available inside Connect.
more...

Other Links:
Four Pillar

Thursday, June 28, 2007

Outlook with .NET 2.0

To access Outlook and its features, you have to add a reference to the Microsoft Outlook 11.0 Object Library, version 9.2 (a COM component), to your project. This COM component provides various objects through which we can access Outlook; a short usage sketch follows the list.

1. Microsoft.Office.Interop.Outlook.Application
2. Microsoft.Office.Interop.Outlook.Explorer
3. Microsoft.Office.Interop.Outlook.Inspector
4. Microsoft.Office.Interop.Outlook.MAPIFolder
5. Microsoft.Office.Interop.Outlook.MailItem
6. Microsoft.Office.Interop.Outlook.AppointmentItem
7. Microsoft.Office.Interop.Outlook.TaskItem
8. Microsoft.Office.Interop.Outlook.ContactItem
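As a rough sketch (assuming the interop reference above has been added and Outlook is installed; the recipient address is just a placeholder), a new mail item can be created and sent like this:

using Outlook = Microsoft.Office.Interop.Outlook;

// Create an Outlook Application object and a new mail item, then send it.
Outlook.Application outlookApp = new Outlook.Application();
Outlook.MailItem mail = (Outlook.MailItem)outlookApp.CreateItem(Outlook.OlItemType.olMailItem);

mail.To = "someone@example.com";
mail.Subject = "Test mail from .NET 2.0";
mail.Body = "Mail created through the Outlook object model.";
mail.Send();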

more...

Monday, June 18, 2007

Debugging SQL Server 2005 Stored Procedures in Visual Studio

With Microsoft SQL Server 2000 it was possible to debug stored procedures from directly within Query Analyzer (see Debugging a SQL Stored Procedure from inside SQL Server 2000 Query Analyzer for more information). With SQL Server 2005, however, this functionality was moved out of SQL Server Management Studio and into the Visual Studio IDE. Using this technique, it is possible to step into your stored procedures, one statement at a time, from within Visual Studio. It is also possible to set breakpoints within your stored procedures' statements and have these breakpoints hit when debugging your application. more...

Tuesday, June 05, 2007

New XML Capabilities in SQL Server 2005

Prior to SQL Server 2005, if developers wanted to convert XML data to relational data, they had to use combinations of the stored procedure sp_xml_preparedocument and the OPENXML function. While still valid, this methodology introduces some overhead. SQL 2005 provides native support for the XML data type, and new methods to directly parse and read the data. more

Wednesday, May 30, 2007

Passing a Table to A Stored Procedure in SQL Server 2005

In this article, the author presents a solution to the above scenario by using XML as the format for passing a table to a stored procedure. The CALLER can transform the table (query result) into an XML variable and pass it to the stored procedure. The CALLEE can either convert the XML parameter back to a TABLE variable or use XQuery directly on the XML variable. more

Monday, May 14, 2007

Enable CLR in SQL Server 2005

If you need to run CLR objects in SQL Server 2005, you have to enable the CLR option.
I have used the following commands to enable the CLR features.


EXEC sp_configure 'show advanced options' , '1';
go
reconfigure;
go
EXEC sp_configure 'clr enabled' , '1'
go
reconfigure;

Tuesday, April 24, 2007

Compressing ViewState in ASP.Net 2.0

Developers often worry about the performance of their web sites; every developer wants his web site to be optimized for good performance. There are several factors affecting the performance of your web site, and one of them is ViewState. In this article, the author shows a way in which you can compress ViewState, thus improving performance. more..
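The article's own code is behind the link, but a minimal sketch of one common approach (an assumption here, not taken from the article) is to override the page's ViewState persistence methods and GZip-compress the serialized state; the hidden-field name __CVIEWSTATE is just a placeholder:

using System;
using System.IO;
using System.IO.Compression;
using System.Web.UI;

public class CompressedViewStatePage : Page
{
// Compress the serialized ViewState and store it in a custom hidden field.
protected override void SavePageStateToPersistenceMedium(object state)
{
MemoryStream raw = new MemoryStream();
new ObjectStateFormatter().Serialize(raw, state);

MemoryStream compressed = new MemoryStream();
using (GZipStream zip = new GZipStream(compressed, CompressionMode.Compress))
{
byte[] bytes = raw.ToArray();
zip.Write(bytes, 0, bytes.Length);
}

ClientScript.RegisterHiddenField("__CVIEWSTATE", Convert.ToBase64String(compressed.ToArray()));
}

// Read the hidden field, decompress it and deserialize the ViewState back.
protected override object LoadPageStateFromPersistenceMedium()
{
byte[] bytes = Convert.FromBase64String(Request.Form["__CVIEWSTATE"]);

using (GZipStream zip = new GZipStream(new MemoryStream(bytes), CompressionMode.Decompress))
using (MemoryStream plain = new MemoryStream())
{
byte[] buffer = new byte[4096];
int read;
while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
plain.Write(buffer, 0, read);

plain.Position = 0;
return new ObjectStateFormatter().Deserialize(plain);
}
}
}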

Thursday, April 05, 2007

Deployment: Customize User Interfaces and Pass User Input to Installer Classes

This article demonstrates how to customize your MSI install to prompt the user for some information and then pass that information to an installer class. This can be useful when you need to do something during an install based on user input. There are two key parts to this process: the first is the addition of a custom user interface dialog, and the second is passing whatever information is entered into the new user interface to the installer class so that something can be done with it during installation. more..

ASP.Net 2.0: Export GridView to Excel

The focus of the article is the Export to Excel functionality - the GridView and its data binding are only for demonstrating the Export functionality.
The code fragments for the Export to Excel functionality below are not linked to the backend structure and can be re-used across projects for the common functionality provided. In this article, we will assume you are starting with a web page which holds a GridView named GridView1. The GridView in our demo code is bound to a table named "ContactPhone" in a SQL Express database. The following code which exports the databound GridView to Excel is not dependent on the specific databindings and can be used without changes for your scenario. more..

Wednesday, April 04, 2007

SQL Server 2005 Reporting Services

Included with SQL Server 2005 is a group of interrelated applications, collectively known as SQL Server Reporting Services (SSRS). SSRS includes all the development and management pieces necessary to publish end user reports in HTML, PDF, Excel, and CSV formats. Originally released as a separate add-on for SQL 2000 that could be downloaded from the web, all the Reporting Services pieces are now bundled in SQL 2005. With Reporting Services built into SQL, expect to see product adoption rise quickly. Microsoft's accounting package, Solomon, will soon discontinue use of Crystal Reports in favor of SSRS. more..

Data partitioning in SQL Server 2005

Data partitioning, a new feature added to SQL Server 2005, provides a way to divide large tables and indexes into smaller parts. By doing so, it makes the life of a database administrator easier when doing backups, loading data, recovery and query processing.Data partitioning improves the performance, reduces contention and increases availability of data.A Table can be partitioned based on any column in the table. Microsoft defines that column as the partition key. more..