
How to show an icon in the browser header (favicon)

To complete this task:

Visit http://www.html-kit.com/favicon/

Provide the image you want to use as an icon.

The site will generate the icon for you; follow the steps given there.

Then write the following code in the header section of your page:

<link rel="shortcut icon" href="Images/favicon.ico" type="image/x-icon">

Tips For Database Optimization/Operations with C#

Recommended reading C# optimization tips

1) Return Multiple Resultsets

If your data-access code has request paths that go to the database more than once, each extra round-trip reduces the number of requests per second your application can serve.
Solution:
Return multiple resultsets in a single database request, so that you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
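A minimal sketch of this idea, assuming a hypothetical connection string and two hypothetical tables (Customers and Orders): two SELECT statements are batched into one command, and SqlDataReader.NextResult walks both result sets in a single round trip.

using System;
using System.Data.SqlClient;

class MultipleResultsetsDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("YourConnectionString"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Name FROM Customers; SELECT Id, Total FROM Orders;", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // First result set: customers.
                while (reader.Read())
                    Console.WriteLine("Customer {0}: {1}", reader.GetInt32(0), reader.GetString(1));

                // Move to the second result set: orders.
                if (reader.NextResult())
                {
                    while (reader.Read())
                        Console.WriteLine("Order {0}: {1}", reader.GetInt32(0), reader.GetDecimal(1));
                }
            }
        }
    }
}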


2) Connection Pooling and Object Pooling
Connection pooling is a useful way to reuse connections for multiple requests, rather than paying the overhead of opening and closing a connection for each request. It's done implicitly, but you get one pool per unique connection string. Make sure you call Close or Dispose on a connection as soon as possible. When pooling is enabled, calling Close or Dispose returns the connection to the pool instead of closing the underlying database connection.
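As an illustration of the "open late, close early" advice above, here is a minimal sketch (the connection string and table name are placeholders); the using block guarantees Dispose, which hands the connection back to the pool immediately.

using System.Data.SqlClient;

class PooledConnectionDemo
{
    public static int CountOrders()
    {
        // The same connection string on every call means one shared pool.
        using (SqlConnection conn = new SqlConnection("YourConnectionString"))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();                       // taken from the pool
            return (int)cmd.ExecuteScalar();
        }                                      // Dispose returns it to the pool
    }
}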
Account for the following issues when pooling is a part of your design:
  • Share connections
  • Avoid per-user logons to the database
  • Do not vary connection strings
  • Do not cache connections
3) Use SqlDataReader Instead of Dataset wherever it is possible
If you are reading a table sequentially you should use the DataReader rather than the DataSet. The DataReader creates a read-only, forward-only stream of data that improves your application's performance because only one row is in memory at a time.

4) Keep Your Datasets Lean
Remember that the dataset stores all of its data in memory, and that the more data you request, the longer it will take to transmit across the wire.
Therefore, only put the records you need into the DataSet.

5) Avoid Inefficient queries
How it affects performance:
Queries that process and then return more columns or rows than necessary waste processing cycles that could be better used for servicing other requests.

Cause of Inefficient queries:
  • Too much data in your results is usually the result of inefficient queries.
  • The SELECT * query often causes this problem. You do not usually need to return all the columns in a row. Also, analyze the WHERE clause in your queries to ensure that you are not returning too many rows. Try to make the WHERE clause as specific as possible to ensure that the least number of rows are returned.
  • Queries that do not take advantage of indexes may also cause poor performance.
6) Unnecessary round trips
How it affects performance:
Round trips significantly affect performance. They are subject to network latency and to downstream server latency. Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
Solution:
Keep round trips to an absolute minimum.

7) Too many open connections
Connections are an expensive and scarce resource, which should be shared between callers by using connection pooling. Opening a connection for each caller limits scalability.
Solution:
To ensure the efficient use of connection pooling, avoid keeping connections open and avoid varying connection strings.

8) Avoid Transaction misuse
How it affects performance:
If you select the wrong type of transaction management, you may add latency to each operation. Additionally, if you keep transactions active for long periods of time, the active transactions may cause resource pressure.
Solution:
Transactions are necessary to ensure the integrity of your data, but you need to ensure that you use the appropriate type of transaction for the shortest duration possible and only where necessary.

9) Avoid Over Normalized tables
Over Normalized tables may require excessive joins for simple operations. These additional steps may significantly affect the performance and scalability of your application, especially as the number of users and requests increases.

10) Reduce Serialization
Dataset serialization is more efficiently implemented in .NET Framework version 1.1 than in version 1.0. However, Dataset serialization often introduces performance bottlenecks.
You can reduce the performance impact in a number of ways:
  • Use column name aliasing
  • Avoid serializing multiple versions of the same data
  • Reduce the number of DataTable objects that are serialized
11) Do Not Use CommandBuilder at Run Time
How it affects performance:
CommandBuilder objects such as SqlCommandBuilder and OleDbCommandBuilder are useful when you are designing and prototyping your application. However, you should not use them in production applications. The processing required to generate the commands affects performance.
Solution:
Manually create stored procedures for your commands, or use the Visual Studio® .NET design-time wizard and customize them later if necessary.

12) Use Stored Procedures Whenever Possible
  • Stored procedures are highly optimized tools that result in excellent performance when used effectively.
  • Set up stored procedures to handle inserts, updates, and deletes with the data adapter
  • Stored procedures do not have to be interpreted, compiled or even transmitted from the client, and cut down on both network traffic and server overhead.
  • Be sure to use CommandType.StoredProcedure instead of CommandType.Text
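Building on the last point, a minimal sketch of calling a stored procedure through ADO.NET (the procedure name, parameter, and connection are placeholders, not part of the original article):

using System.Data;
using System.Data.SqlClient;

class StoredProcedureDemo
{
    // conn is assumed to be an already-open SqlConnection.
    public static SqlDataReader GetCustomerOrders(SqlConnection conn, int customerId)
    {
        SqlCommand cmd = new SqlCommand("usp_GetCustomerOrders", conn);
        cmd.CommandType = CommandType.StoredProcedure;   // not CommandType.Text
        cmd.Parameters.AddWithValue("@CustomerId", customerId);
        return cmd.ExecuteReader();
    }
}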
13) Avoid Auto-Generated Commands
When using a data adapter, avoid auto-generated commands. These require additional trips to the server to retrieve meta data, and give you a lower level of interaction control. While using auto-generated commands is convenient, it's worth the effort to do it yourself in performance-critical applications.

14) Use Sequential Access as Often as Possible
With a data reader, use CommandBehavior.SequentialAccess. This is essential for dealing with BLOB data types since it allows data to be read off the wire in small chunks. While you can only work with one piece of the data at a time, the latency for loading a large data type disappears. If you don't need to work with the whole object at once, using SequentialAccess will give you much better performance.
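A minimal sketch of this technique, assuming a hypothetical Documents table with a BLOB column named Content; GetBytes streams the value in small chunks instead of loading it all at once.

using System.Data;
using System.Data.SqlClient;
using System.IO;

class SequentialAccessDemo
{
    // conn is assumed to be an already-open SqlConnection.
    public static void SaveDocument(SqlConnection conn, int docId, string path)
    {
        SqlCommand cmd = new SqlCommand("SELECT Content FROM Documents WHERE Id = @Id", conn);
        cmd.Parameters.AddWithValue("@Id", docId);

        using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        using (FileStream file = File.Create(path))
        {
            if (reader.Read())
            {
                byte[] buffer = new byte[8192];
                long offset = 0;
                long read;
                // GetBytes pulls the blob off the wire in 8 KB chunks.
                while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    file.Write(buffer, 0, (int)read);
                    offset += read;
                }
            }
        }
    }
}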

Best Practices to Improve ASP.Net Web Application Performance. Part - 4

Recommended read part - 3

14) Avoid Unnecessary Indirection
How it affects performance:
When you use byRef, you pass pointers instead of the actual object.
Many times this makes sense (side-effecting functions, for example), but you don't always need it. Passing pointers results in more indirection, which is slower than accessing a value that is on the stack.
Solution:
When you don't need to go through the heap, it is best to avoid the extra indirection.

15) Use "ArrayLists" in place of arrays
How it improves performance
An ArrayList has everything that is good about an array, plus automatic sizing and helper methods such as Add, Insert, Remove, Sort, and BinarySearch.
These helper methods come from its implementation of the IList interface.
Tradeoffs:
The downside of an ArrayList is the need to cast objects upon retrieval.
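A small sketch of the trade-off described above; the cast on retrieval is the price of the extra flexibility (in .NET 2.0 and later the generic List<T> avoids it).

using System;
using System.Collections;

class ArrayListDemo
{
    static void Main()
    {
        ArrayList names = new ArrayList();   // grows automatically
        names.Add("Alice");
        names.Add("Charlie");
        names.Insert(1, "Bob");
        names.Sort();

        // The downside: items come back as object and must be cast.
        foreach (object item in names)
        {
            string name = (string)item;
            Console.WriteLine(name);
        }
    }
}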

16) Always check Page.IsValid when using Validator Controls
Always make sure you check Page.IsValid before processing your forms when using Validator Controls.
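A minimal sketch, assuming a hypothetical button handler on a page that uses validator controls:

using System;
using System.Web.UI;

public partial class OrderForm : Page
{
    protected void btnSave_Click(object sender, EventArgs e)
    {
        // Do nothing if any validator failed, even when client-side
        // script was bypassed or disabled.
        if (!Page.IsValid)
            return;

        // ... process the posted data here ...
    }
}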

17) Use Paging
Take advantage of paging's simplicity in .NET. Only show small subsets of data at a time, allowing the page to load faster.
Tradeoffs:
Just be careful when you mix in caching. Don't cache all the data in the grid.

18) Store your content by using caching
How it improves performance:
ASP.NET allows you to cache entire pages, fragments of pages, or controls. You can also cache variable data by specifying the parameters the data depends on. By using caching you help the ASP.NET engine return data for repeated requests for the same page much faster.
When and Why Use Caching:
Proper use and fine-tuning of caching will result in better performance and scalability of your site. However, improper use of caching will actually slow the site down and waste a lot of server memory.
Good candidates for caching are data that changes infrequently and static page content.
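A minimal sketch of caching infrequently changing data with the ASP.NET Cache API; the cache key, expiration time, and data-access helper are placeholders, and page or fragment caching can equally be declared with the OutputCache directive.

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class CategoryCache
{
    public static DataTable GetCategories()
    {
        Cache cache = HttpRuntime.Cache;
        DataTable categories = cache["Categories"] as DataTable;
        if (categories == null)
        {
            categories = LoadCategoriesFromDatabase();   // hypothetical data-access call
            // Keep the table for 10 minutes; infrequently changing data is a good candidate.
            cache.Insert("Categories", categories, null,
                         DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);
        }
        return categories;
    }

    static DataTable LoadCategoriesFromDatabase()
    {
        // Placeholder; fill this from your database in a real application.
        return new DataTable("Categories");
    }
}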


19) Use low cost authentication
Authentication can also have an impact on the performance of your application. For example, Passport authentication is slower than forms-based authentication, which in turn is slower than Windows authentication.

20) Minimize the number of web server controls
How it affects performance:
The use of web server controls increases the response time of your application because they need time to be processed on the server side before they are rendered on the client side.
Solution:
One way to minimize the number of web server controls is to use plain HTML elements where they are suitable, for example when you only need to display static text.

21) Avoid using unmanaged code
How it affects performance:
Calls to unmanaged code are a costly marshaling operation.
Solution:
Try to reduce the number of calls between managed and unmanaged code. Consider doing more work in each call rather than making frequent calls to do small tasks.

22) Avoid making frequent calls across processes
If you are working with distributed applications, this involves additional overhead negotiating network and application level protocols. In this case network speed can also be a bottleneck. Try to do as much work as possible in fewer calls over the network.

23) Cleaning Up Style Sheets and Script Files
  •  A quick and easy way to improve your web application's performance is by going back and cleaning up your CSS Style Sheets and Script Files of unnecessary code or old styles and functions. It is common for old styles and functions to still exist in your style sheets and script files during development cycles and when improvements are made to a website.
  • Many websites use a single CSS Style Sheet or Script File for the entire website. Sometimes, just going through these files and cleaning them up can improve the performance of your site by reducing the page size. If you are referencing images in your style sheet that are no longer used on your website, it's a waste of performance to leave them in there and have them loaded each time the style sheet is loaded.
  • Run a web page analyzer against pages in your website so that you can see exactly what is being loaded and what takes the most time to load.
24) Design with ValueTypes
Use simple structs when you can, and when you don't do a lot of boxing and unboxing.
Tradeoffs:
ValueTypes are far less flexible than Objects, and end up hurting performance if used incorrectly. You need to be very careful about when you treat them like objects. This adds extra boxing and unboxing overhead to your program, and can end up costing you more than it would if you had stuck with objects.
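A small sketch of the boxing cost mentioned above; the struct itself is cheap, but storing it as object copies it to the heap and back.

using System;
using System.Collections;

struct Point
{
    public int X;
    public int Y;
    public Point(int x, int y) { X = x; Y = y; }
}

class ValueTypeDemo
{
    static void Main()
    {
        Point p = new Point(3, 4);          // cheap: lives on the stack

        ArrayList list = new ArrayList();
        list.Add(p);                        // boxed: copied to the heap as object
        Point copy = (Point)list[0];        // unboxed: copied back out

        Console.WriteLine("{0},{1}", copy.X, copy.Y);
    }
}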

25) Minimize assemblies
Minimize the number of assemblies you use to keep your working set small. If you load an entire assembly just to use one method, you're paying a tremendous cost for very little benefit. See if you can duplicate that method's functionality using code that you already have loaded.

26) Encode Using ASCII When You Don't Need UTF
By default, ASP.NET comes configured to encode requests and responses as UTF-8.
If ASCII is all your application needs, eliminating the UTF-8 overhead can give you back a few cycles. Note that this can only be done on a per-application basis.

27) Avoid Recursive Functions / Nested Loops
These are general habits to adopt in any programming language: deeply nested loops and recursive functions consume a lot of memory and CPU time. Avoid them where you can to improve performance.

28) Minimize the Use of Format ()
When you can, use ToString() instead of String.Format(). In most cases, it will provide you with the functionality you need, with much less overhead.
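A small illustration of the difference, with placeholder values:

using System;

class FormatDemo
{
    static void Main()
    {
        int count = 42;

        // Lighter: ToString goes straight to the value's own formatting code.
        string a = "Items: " + count.ToString();

        // Heavier: Format has to parse the composite format string first.
        string b = String.Format("Items: {0}", count);

        Console.WriteLine(a);
        Console.WriteLine(b);
    }
}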

29) Place StyleSheets into the Header
Web developers who care about performance want the browser to render whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users with slow Internet connections. When the browser loads the page progressively, the header, the logo, and the navigation components serve as visual feedback for the user. When we place style sheets near the bottom of the HTML, most browsers stop rendering to avoid redrawing elements of the page if their styles change, which decreases the performance of the page. So, always place style sheets in the header.

30) Put Scripts to the end of Document
Unlike style sheets, it is better to place scripts at the end of the document. Progressive rendering is blocked until all style sheets have been downloaded, and scripts cause progressive rendering to stop for all content below the script until they are fully loaded. Moreover, while downloading a script, the browser does not start any other component downloads, even on different hostnames.
So, always put scripts at the end of the document.

31) Make JavaScript and CSS External
Using external files generally produces faster pages because the JavaScript and CSS files are cached by the browser.
Inline JavaScript and CSS increases the HTML document size but reduces the number of HTTP requests. With cached external files, the size of the HTML is kept small without increasing the number of HTTP requests thus improving the performance.

Best Practices to Improve ASP.Net Web Application Performance. Part - 3

Recommended Read Part - 2

6) Use StringBuilder to concatenate strings
How it affects performance:

Strings are immutable, which makes repeated appending and concatenation expensive and something to avoid as much as possible. When a string is modified, the runtime creates a new string and returns it, leaving the original to be garbage collected. Most of the time this is a fast and simple way to work, but when a string is modified repeatedly it becomes a burden on performance: all of those allocations eventually get expensive.

Solution:
Use StringBuilder whenever repeated string concatenation is needed; it appends into a single internal buffer instead of creating a new string object for every operation.
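A minimal sketch contrasting the two approaches; the loop bounds are arbitrary:

using System;
using System.Text;

class StringBuilderDemo
{
    static void Main()
    {
        // Slow for many iterations: each += allocates a brand-new string.
        string s = string.Empty;
        for (int i = 0; i < 1000; i++)
            s += i.ToString() + ",";

        // Faster: StringBuilder appends into one internal buffer.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++)
            sb.Append(i).Append(',');

        Console.WriteLine(sb.Length);
    }
}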

7) Avoid throwing exceptions
How it affects performance:
Exceptions are probably one of the heaviest resource hogs and causes of slowdowns you will ever see in web applications, as well as windows applications.
Solution:
You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.
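For instance, a minimal sketch of replacing exception-based flow control with TryParse when handling user input (the method and default value are illustrative):

using System;

class ParseDemo
{
    static int ReadQuantity(string input)
    {
        // Avoid: int.Parse inside try/catch used purely as flow control.
        // Prefer: TryParse reports failure without throwing.
        int quantity;
        if (int.TryParse(input, out quantity))
            return quantity;

        return 0;   // a sensible default instead of an exception
    }

    static void Main()
    {
        Console.WriteLine(ReadQuantity("12"));
        Console.WriteLine(ReadQuantity("not a number"));
    }
}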

8) Use the finally block to release resources
The finally block executes regardless of the outcome of the try block. Always use it to release resources, for example to close database connections and files.
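A minimal sketch, assuming a placeholder connection string; the finally block closes the connection whether or not the query throws.

using System.Data.SqlClient;

class FinallyDemo
{
    public static object GetOrderCount()
    {
        SqlConnection conn = new SqlConnection("YourConnectionString");
        try
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn);
            return cmd.ExecuteScalar();
        }
        finally
        {
            conn.Close();   // runs on success and on exception alike
        }
    }
}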

9) Use Client Side Scripts for validations
User input must be thoroughly validated before processing, both to avoid unnecessary overhead and to prevent possible injection attacks on your application.
How It improves performance:
Client-side validation can help reduce the round trips required to process a user's request. In ASP.NET you can use validation controls that also validate on the client. However, always perform the check on the server side too, to cover the infamous JavaScript-disabled scenarios.

10) Avoid unnecessary round trips to the server
How it affects performance:
Round trips significantly affect performance. They are subject to network latency and to downstream server latency.
Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
Solution:
  • Keep round trips to an absolute minimum
  • Implement Ajax UI whenever possible. The idea is to avoid full page refresh and only update the portion of the page that needs to be changed.
11) Use Page.IsPostBack
Make sure you don't execute code needlessly. Use the Page.IsPostBack property to ensure that you only perform page initialization logic when a page is loaded for the first time and not in response to client postbacks.
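A minimal sketch in a hypothetical page class (the grid-binding method is a placeholder):

using System;
using System.Web.UI;

public partial class ProductList : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Bind the grid only on the first request;
        // postbacks reuse the already-initialized state.
        if (!Page.IsPostBack)
        {
            BindGrid();   // hypothetical data-binding method
        }
    }

    private void BindGrid()
    {
        // ... load data and bind it to a grid control here ...
    }
}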

12) Include Return Statements with in the Function/Method
How it improves performance
Explicitly using return allows the JIT to perform slightly more optimizations. Without a return statement, each function/method is given several local variables on the stack to transparently support returning values without the keyword. Keeping these around makes it harder for the JIT to optimize, and can impact the performance of your code.
Look through your functions/methods and insert return as needed. It doesn't change the semantics of the code at all, and it can help you get more speed from your application.

13) Use Foreach loop instead of For loop for String Iteration
foreach is far more readable, and for special cases like iterating over the characters of a string it performs about as well as a for loop. Unless string iteration is a real performance hot spot for you, the slightly messier for-loop code may not be worth it.

A name was started with an invalid character. Error processing resource

This error appears when you run your application on localhost. Generally the reason behind it is the configuration of IIS.

The usual cause is installing Visual Studio 2005 (.NET 2.0) before installing IIS.

To know how to configure IIS click here.

Here is the solution. Go to Run, type cmd, and then run aspnet_regiis.exe with the -i switch from the Framework directory. For example:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
or
C:\WINNT\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

Hope this helps.

How to configure site on IIS (Internet information server)

When I was new to web development I did not have any idea about IIS. Configuring a web site in IIS felt like a major task: IIS has many properties that must be set to run an ASP.NET application.

Here are a few things to keep in mind while setting up a machine for .NET 2005.

Steps to configure IIS:-
  1. After windows installation we have to install IIS.
  2. Install Sql server 2005
  3. Install .net 2005
If you install .NET 2005 before installing IIS, it will raise an error after you configure your ASP.NET site.
The error will be

“A name was started with an invalid character. Error processing resource”

For solution click here

Following are the steps to configure IIS:

  1. Go to Start >> Run, type inetmgr, and the IIS window will appear.
  2. Expand the tree and go to Default Web Site.
  3. Right-click on Default Web Site.
  4. Choose New >> Virtual Directory.
  5. Click the Next button.
  6. Provide an alias name for your site (for example, test).
  7. Select the physical directory.
  8. Select all the options.
  9. Click Finish.

Now open your Internet Explorer and type http://Localhost/test

"test" is the alias used here, so make sure you use the site name you provided.



How to show images with Animation

You will find many scripts that do the same task, but one question arises with all of them: are they browser compatible? Maybe yes, maybe no.
Here is a solution: use "SpryEffects.js". It is browser compatible and easy to implement.
Below is a small example that uses this JS file to animate images.

Download "SpryEffects.js" from the web.

<script language="javascript" type="text/javascript">
    function MM_effectGrowShrink(targetElement, duration, from, to, toggle, referHeight, growFromCenter)
    {
        Spry.Effect.DoGrow(targetElement, {duration: duration, from: from, to: to, toggle: toggle, referHeight: referHeight, growCenter: growFromCenter});
    }

    // Shows the enlarged image: swaps in the large image path,
    // positions the overlay and slides it into view.
    function showimage(str_path)
    {
        var obj = document.getElementById("a1");
        str_path = str_path.replace('images/', 'images/large/');

        // Centre the overlay horizontally on screens wider than 1024px.
        var int_left = screen.width;
        if (int_left > 1024)
        {
            int_left = int_left - 1024;
            int_left = int_left / 2;
            int_left = int_left + 251;
        }
        else
        {
            int_left = 50;
        }

        obj.style.top = "50px";
        obj.style.left = int_left + "px";
        obj.style.display = 'block';

        document.getElementById('<%=img_show.ClientID%>').src = str_path;
        //MM_effectGrowShrink('a1', 1000, '0%', '100%', false, false, true);
        MM_effectSlide('a1', 1000, '0%', '100%', false);
    }

    function MM_effectAppearFade(targetElement, duration, from, to, toggle)
    {
        Spry.Effect.DoFade(targetElement, {duration: duration, from: from, to: to, toggle: toggle});
    }

    // Hides the enlarged image by sliding the overlay closed.
    function hidediv()
    {
        //MM_effectGrowShrink('a1', 1000, '100%', '0%', false, false, true);
        MM_effectSlide('a1', 1000, '100%', '0%', false);
    }

    function MM_effectSquish(targetElement)
    {
        Spry.Effect.DoSquish(targetElement);
    }

    function MM_effectSlide(targetElement, duration, from, to, toggle)
    {
        Spry.Effect.DoSlide(targetElement, {duration: duration, from: from, to: to, toggle: toggle});
    }
</script>
<div ></div>
<div style="left:50px;top:50px;position:absolute;width:701px">
<img src="images/jagriti_004.jpg" onclick="showimage(this.src);" />
</div>
<div id="a1" style="position: absolute; background-color: Gray; display: none; width: 711px;
height: 500px;">

<img id="img_show" src="" runat="server" alt="Click to Close Enlarged View" style="margin: 0px;
padding: 0px; border: solid 5px #ffffff; height: 500px; width: 701px;" align="absmiddle" onclick="hidediv();" onmouseover="style.cursor='pointer'"/>

</div>

You can also download the code, which uses C#.


Best Practices to Improve ASP.Net Web Application Performance. Part - 2

Recommended Read Part - 1

3) Disable View State of a Page if possible


View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page.
When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls.
View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.

How it affects performance:
There are a number of drawbacks to the use of view state, however:
  • It increases the total payload of the page, both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server.
  • View state increases the memory allocations on the server. Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed.
Solution:
Pages that do not have any server postback events can have the view state turned off.
The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, simply set the EnableViewState property to false, or set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you turn view state off for a page or control, make sure you thoroughly test your pages to verify that they continue to function correctly.

4) Set debug=false in web.config
When you create the application, by default this attribute is set to "true" which is very useful while developing.
However, when you are deploying your application, always set it to "false".
How it affects performance:
Setting it to "true" causes debug symbols (PDB information) to be generated and inserted into the compiled output, which results in a comparatively larger file, and hence processing will be slower.
Solution:
Therefore, always set debug="false" before deployment.

5) Avoid Response.Redirect
Response.Redirect () method simply tells the browser to visit another page.
How it affects performance:
Redirects are also very chatty. They should only be used when you are transferring people to another physical web server.
Solution:
For any transfers within your own server, use Server.Transfer. You will save a lot of needless HTTP requests. Instead of telling the browser to redirect, it simply changes the "focus" on the Web server and transfers the request. This means you don't get as many HTTP requests coming through, which eases the pressure on your Web server and makes your applications run faster.
Tradeoffs:
  • Server.Transfer only works for pages on the same server; to send the user to another site you must use Response.Redirect.
  • Server.Transfer maintains the original URL in the browser. This can really help streamline data entry techniques, although it may make for confusion when debugging.
Note: To reduce the CLR exception count, use Response.Redirect(".aspx", false) instead of Response.Redirect(".aspx").
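A minimal sketch showing the two calls side by side; the page names and the login check are placeholders. Passing false as the second argument to Response.Redirect avoids the ThreadAbortException that the single-argument overload raises, which is what the note above refers to.

using System;
using System.Web.UI;

public partial class CheckoutPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (NeedsLogin())   // hypothetical check
        {
            // Same server: hand the request over without an extra HTTP round trip.
            Server.Transfer("Login.aspx");
        }
        else
        {
            // External site (or when the browser URL must change):
            // the second argument avoids the ThreadAbortException.
            Response.Redirect("http://www.example.com/", false);
        }
    }

    private bool NeedsLogin()
    {
        return Session["UserId"] == null;   // placeholder logic
    }
}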

Continue reading part 3

Best Practices to Improve ASP.Net Web Application Performance. Part -1

Performance tuning can be tricky. It's especially tough in Internet-related projects with lots of components running
around, like the HTML client, HTTP network, Web server, middle-tier components, database components, resource-management components, TCP/IP networks, and database servers. Performance tuning depends on a lot of parameters and sometimes, by changing a single parameter, performance can increase drastically.

Introduction
This document lists some tips for optimizing ASP.NET Web applications and discusses a number of common traps and pitfalls.

Tips For Web Application
1) Turn off tracing unless it is required
Tracing is a wonderful feature that lets us follow the application's trace and sequence of events. However, it is useful mainly for developers, so set it to "false" unless you need to monitor the trace logging.
How it affects performance:
Enabling tracing adds performance overhead and might expose private information, so it should be enabled only while
an application is being actively analyzed.
Solution:
When not needed, tracing can be turned off in web.config using <trace enabled="false" /> or on a single page with the <%@ Page Trace="false" %> directive.

2) Turn off Session State, if not required
One extremely powerful feature of ASP.NET is its ability to store session state for users, such as a shopping cart on an e-commerce site or a browser history.
How it affects performance:
Since ASP.NET manages session state by default, you pay the cost in memory even if you don't use it. Whether you store your data in-process, on a state server, or in a SQL Server database, session state requires memory, and it also takes time to store or retrieve data from it.
Solution:
You may not require session state when your pages are static or when you do not need to store information captured in the page. In such cases where you need not use session state, disable it on your web form using the directive,
<%@ Page EnableSessionState="false" %>
In case you use the session state only to retrieve data from it and not to update it, make the session state read only by
using the directive,
<%@ Page EnableSessionState="ReadOnly" %>

Continue reading part 2

URL rewriting and SEO


Download C# code

What is URL REWRITING?
URL rewriting is the process of intercepting an incoming Web request and redirecting the request to a different resource. When performing URL rewriting, typically the URL being requested is checked and, based on its value, the request is redirected to a different URL. For example, in the case where a website restructuring caused all of the Web pages in the /people/ directory to be moved to a /info/employees/ directory, you would want to use URL rewriting to check if a Web request was intended for a file in the /people/ directory. If the request was for a file in the /people/ directory, you'd want to automatically redirect the request to the same file, but in the /info/employees/ directory instead.

Common Uses of URL Rewriting
Creating data-driven ASP.NET websites often results in a single Web page that displays a subset of the database's data based on querystring parameters. For example, in designing an e-commerce site, one of your tasks would be to allow users to browse through the products for sale. To facilitate this, you might create a page called displayproduct.aspx that would display the products for a given category. The category's products to view would be specified by a querystring parameter. That is, if the user wanted to browse the Widgets for sale and all Widgets had a CategoryID of 5, the user would visit:
http://yousite.com/displayproduct.aspx?CategoryID=5.

There are downsides to creating a website with such URLs. From the end user's perspective, the URL http://yousite.com/displayCategory.aspx?CategoryID=5 is a mess. For users, we should choose URLs that:
  • Are short.
  • Are easy to type.
  • Visualize the site structure.
  • "Hackable," allowing the user to navigate through the site by hacking off parts of the URL.
  • Are easy to remember
The URL http://yousite.com/displayproduct.aspx?CategoryID=5 meets none of the above criteria, nor is it easy to remember. Asking users to type in querystring values makes a URL hard to type and makes the URL "hackable" only by experienced Web developers who have an understanding of the purpose of querystring parameters and their name/value pair structure.

A better approach is to allow for a sensible, memorable URL, such as http://yoursite.com/products/Books. Just by looking at the URL you can tell what will be displayed: information about Books. The URL is easy to remember and share, too. You can tell anybody, "Check out yoursite.com/products/books," and he'll likely be able to bring up the page without needing to ask what the URL was. (Try doing that with, say, an http://login.live.com/?id=1 page!) The URL also appears, and should behave, "hackable." That is, if the user hacks off the end of the URL and types in http://yoursite.com/products, they should see a listing of all products, or at least a listing of all the categories of products they can view.

For example, with http://login.live.com/?id=1 the page looks one way, and if you enter http://login.live.com/?id=2 or http://login.live.com/?id=3 the behavior is different again.

What Happens When a Request Reaches IIS
Before we examine exactly how to implement URL rewriting, it's important that we have an understanding of how incoming requests are handled by Microsoft® Internet Information Services (IIS). When a request arrives at an IIS Web server, IIS examines the requested file's extension to determine how to handle the request. Requests can be handled natively by IIS (as are HTML pages, images, and other static content) or IIS can route the request to an ISAPI extension. (An ISAPI extension is an unmanaged, compiled class that handles an incoming Web request. Its task is to generate the content for the requested resource.)

For example, if a request comes in for a Web page named Info.asp, IIS will route the message to the asp.dll ISAPI extension. This ISAPI extension will then load the requested ASP page, execute it, and return its rendered HTML to IIS, which will then send it back to the requesting client. For ASP.NET pages, IIS routes the message to the aspnet_isapi.dll ISAPI extension. The aspnet_isapi.dll ISAPI extension then hands off processing to the managed ASP.NET worker process, which processes the request, returning the ASP.NET Web page's rendered HTML.
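As a rough sketch of where rewriting can hook into this pipeline inside ASP.NET, the request can be intercepted early (here in Global.asax) and handed a new path with HttpContext.RewritePath; the /people/ to /info/employees/ mapping mirrors the earlier example and the matching logic is only illustrative.

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string path = app.Request.Path;

        // Requests for the old /people/ directory are served
        // from /info/employees/ without the user noticing.
        if (path.StartsWith("/people/", StringComparison.OrdinalIgnoreCase))
        {
            string newPath = "/info/employees/" + path.Substring("/people/".Length);
            app.Context.RewritePath(newPath);
        }
    }
}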

What is SEO
Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results for targeted keywords. Usually, the earlier a site is presented in the search results or the higher it "ranks", the more searchers will visit that site. SEO can also target different kinds of search, including image search, local search, and industry-specific vertical search engines.

As a marketing strategy for increasing a site's relevance, SEO considers how search algorithms work and what people search for. SEO efforts may involve a site's coding, presentation, and structure, as well as fixing problems that could prevent search engine indexing programs from fully spidering a site.

Static URLs are known to be better than dynamic URLs for a number of reasons:

1. Static URLs typically rank better in search engines.
2. Search engines are known to index the content of dynamic pages much more slowly than that of static pages.
3. Static URLs look friendlier to end users.

Example of a dynamic URL
http://www.yourdomain.com/profile.aspx?mode=view&u=7

URL rewriting converts dynamic URLs into static-looking HTML URLs.

Examples of the above dynamic URL re-written:
http://www.yourdomain.com/profile-mode-view-u-7.html
or
http://www.yourdomain.com/profile/mode/view/u/7/


What SEO involves
There are three HTML tags which affect ranking: title tags, keyword tags, and description tags.

Title Tags - The HTML title describes the contents of the web page. This title is most likely to appear in the results of search engines and in bookmarks, and should be made relevant to the contents of the web page.

Keyword Tags - Keyword frequency, weight, prominence and proximity are a few techniques that can help improve search engine rankings. Most of these techniques have one thing in common - the use of keywords. Keywords are the words people type into a search engine... the words that lead them to your site. Having the right keywords in your website's source code, in the form of a meta tag, is the first step to better search engine positioning.

Description Tags - Description tags are HTML tags that describe in brief the contents of the page. These are visible to the surfer when your page appears in the results of a search. These can be effectively used to increase the frequency of keywords in the HTML of your web page.

Alt Tags Composition & Upload

ALT tags are basically image descriptions within a website. You can attach text to an image to describe it so that Search Engines can also find it. ALT descriptions help you rank higher in search engines. Search engine algorithms calculate the number of times keywords are repeated and give higher rank to pages that use them often. Keywords in the ALT descriptive text help you increase their frequency on the page. Search engines assume the terms are more relevant and important if they're used in the page content.

For example:

<title>http://an-it-solution.blogspot.com/</title>

<meta name="description" content="http://an-it-solution.blogspot.com/">
<meta name="keywords" content="http://an-it-solution.blogspot.com/">
<meta name="abstract" content="http://an-it-solution.blogspot.com/">

Conclusion
The major goal of URL rewriting is to make URLs static-looking and short, so they become readable, memorable, and better ranked by search engines.

Validation of viewstate MAC failed.

Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.


I had been facing this error for the last two days and tried many suggested resolutions to solve the problem.

I tried to track down the reason for this error but, unfortunately, I was not able to pin down its cause.

However, I did find a workaround for it.

Update your web.config with the following attributes on the pages element:
<pages validateRequest="false" enableEventValidation="false" viewStateEncryptionMode="Never" enableViewStateMac="false" />

Download Sample Web.config


Use Google maps in Asp.net C#

Download this example with code

It is very simple to use Google Maps in your application.

To use Google Maps you must have a Google account.

Go to http://code.google.com/apis/maps/signup.html

Log in and create a key for your website.

Please use the example attached with this post.

After generating the key, replace the key in web.config.

For any queries please let me know.


Exporting datagrid to Excel using C#.net

The DataGrid is one of the coolest controls in ASP.NET.

One thing that many developers need is to put DataGrid data into an Excel sheet. In this article I will show you how you can export your DataGrid data to an Excel file.

Exporting a DataGrid to Excel might sound complex, but it's pretty simple. Let's see how this can be done.

Here is the code:

protected void img_Export_Click(object sender, EventArgs e)
{
    // Build some sample data and bind it to the grid.
    DataTable dt = new DataTable();
    dt.Columns.Add("a");
    dt.Columns.Add("b");
    dt.Columns.Add("c");
    dt.Rows.Add();
    dt.Rows[0]["a"] = "aaaaaaaaaaaaaa";
    dt.Rows[0]["b"] = "bbbbbbb";
    dt.Rows[0]["c"] = "ccccccccccc";
    dt.Rows.Add();
    dt.Rows[1]["a"] = "11111111111111111";
    dt.Rows[1]["b"] = "222222222222";
    dt.Rows[1]["c"] = "333333333333";

    DataGrid1.DataSource = dt;
    DataGrid1.DataBind();

    // Send the rendered grid to the browser as an Excel attachment.
    Response.Clear();
    Response.Buffer = true;
    Response.ContentType = "application/vnd.ms-excel";
    Response.Charset = "";
    this.EnableViewState = false;
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(1));
    Response.Write("");
    Response.Write("\r\n");
    Response.Write("");

    System.IO.StringWriter oStringWriter = new System.IO.StringWriter();
    System.Web.UI.HtmlTextWriter oHtmlTextWriter = new System.Web.UI.HtmlTextWriter(oStringWriter);
    this.ClearControls(DataGrid1);
    DataGrid1.RenderControl(oHtmlTextWriter);
    Response.AppendHeader("content-disposition", "attachment;filename=DailyApplicationReport_" + DateTime.Now.Year + DateTime.Now.Month + DateTime.Now.Day + "_" + DateTime.Now.Hour + DateTime.Now.Minute + DateTime.Now.Second + ".xls");
    Response.Write(oStringWriter.ToString());
    Response.End();
}

// Replaces interactive child controls with plain literals so that only
// their text is rendered into the exported HTML/Excel output.
private void ClearControls(Control control)
{
    for (int i = control.Controls.Count - 1; i >= 0; i--)
    {
        ClearControls(control.Controls[i]);
    }
    if (!(control is TableCell))
    {
        if (control.GetType().GetProperty("SelectedItem") != null)
        {
            LiteralControl literal = new LiteralControl();
            control.Parent.Controls.Add(literal);
            try
            {
                literal.Text = (string)control.GetType().GetProperty("SelectedItem").GetValue(control, null);
            }
            catch
            {
            }
            control.Parent.Controls.Remove(control);
        }
        else if (control.GetType().GetProperty("Text") != null)
        {
            LiteralControl literal = new LiteralControl();
            control.Parent.Controls.Add(literal);
            literal.Text = (string)control.GetType().GetProperty("Text").GetValue(control, null);
            control.Parent.Controls.Remove(control);
        }
    }
    return;
}


Download code

Code of logout in c#.net

Try this code; it may help. Paste it in the master page's Load event.

If you are not using master pages, then you have to paste this code into all of your pages (in the Load event).


Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();

On logout

You have to write

Session.Abandon();
Session.Clear();

And redirect to login page.

How to add any number of zeros after the decimal point in C#.NET

Here is the code. In the format string, "0" forces a digit to appear (padding with zero), while "#" shows a digit only when one is present:

int yournumber = 12;
Console.WriteLine(yournumber.ToString("0.00##################"));
Console.WriteLine(yournumber.ToString("0.000#################"));
Console.WriteLine(yournumber.ToString("0.0000################"));

Convert Video to .Flv Using c#.net on Web.

Having trouble playing .flv in IIS click here for solution

Having trouble Setting up IIS click here for solution

The method to convert video to .flv is quite easy. Download the following files from the web:
1) ffmpeg.exe
2) ffplay.exe
3) pthreadGC2.dll

After downloading all the files, follow these steps:

1) Make a new .NET web site or Windows application.
2) Copy all three files above to the root location.
3) Copy and paste the code written below.
4) Put a FileUpload control on the page and rename it to "fileuploadImageVideo".
5) Put a Button on the page and rename it to "btn_Submit".
6) Make three folders: OriginalVideo, ConvertVideo, Thumbs.
7) Import the namespace: using System.IO;



private bool ReturnVideo(string fileName)
{
    // Rename if a file with the same name already exists.
    int j = 0;
    string AppPath;
    string inputPath;
    string outputPath;
    string imgpath;
    AppPath = Request.PhysicalApplicationPath;      // application path
    inputPath = AppPath + "OriginalVideo";          // path of the original file
    outputPath = AppPath + "ConvertVideo";          // path of the converted file
    imgpath = AppPath + "Thumbs";                   // path of the preview image

    string filepath = Server.MapPath("~/OriginalVideo/" + fileName);
    while (File.Exists(filepath))
    {
        j = j + 1;
        int dotPos = fileName.LastIndexOf(".");
        string namewithoutext = fileName.Substring(0, dotPos);
        string ext = fileName.Substring(dotPos + 1);
        fileName = namewithoutext + j + "." + ext;
        filepath = Server.MapPath("~/OriginalVideo/" + fileName);
    }

    try
    {
        this.fileuploadImageVideo.SaveAs(filepath);
    }
    catch
    {
        return false;
    }

    // Wait until the uploaded file is fully written to disk.
    string outPutFile = "~/OriginalVideo/" + fileName;
    int i = this.fileuploadImageVideo.PostedFile.ContentLength;
    System.IO.FileInfo a = new System.IO.FileInfo(Server.MapPath(outPutFile));
    while (a.Exists == false)
    {
    }
    long b = a.Length;
    while (i != b)
    {
    }

    // Convert the video to .flv, then grab a 280x200 preview frame.
    string cmd = " -i \"" + inputPath + "\\" + fileName + "\" \"" + outputPath + "\\" + fileName.Remove(fileName.IndexOf(".")) + ".flv" + "\"";
    ConvertNow(cmd);
    string imgargs = " -i \"" + outputPath + "\\" + fileName.Remove(fileName.IndexOf(".")) + ".flv" + "\" -f image2 -ss 1 -vframes 1 -s 280x200 -an \"" + imgpath + "\\" + fileName.Remove(fileName.IndexOf(".")) + ".jpg" + "\"";
    ConvertNow(imgargs);

    return true;
}

private void ConvertNow(string cmd)
{
    string AppPath = Request.PhysicalApplicationPath;   // application path
    // Path of the exe that will be executed; only for "filebuffer" would it be "flvtool2.exe".
    string exepath = AppPath + "ffmpeg.exe";

    System.Diagnostics.Process proc = new System.Diagnostics.Process();
    proc.StartInfo.FileName = exepath;
    proc.StartInfo.Arguments = cmd;                      // the command that will be executed
    proc.StartInfo.UseShellExecute = false;
    proc.StartInfo.CreateNoWindow = true;
    proc.StartInfo.RedirectStandardOutput = false;
    proc.Start();

    // Wait for ffmpeg to finish before returning.
    while (proc.HasExited == false)
    {
    }
}

protected void btn_Submit_Click(object sender, EventArgs e)
{
    ReturnVideo(this.fileuploadImageVideo.FileName.ToString());
}


Now run the application and select a video file; it will be converted and placed in the ConvertVideo folder.

You can also download this example with code:

Click Here to download

Cluster and Non Cluster indexes

When I first started using SQL Server as a novice, I was initially confused as to the differences between clustered and non-clustered indexes. As a developer, and new DBA, I took it upon myself to learn everything I could about these index types, and when they should be used. This article is a result of my learning and experience, and explains the differences between clustered and non-clustered index data structures for the DBA or developer new to SQL Server. If you are new to SQL Server, I hope you find this article useful.

As you read this article, if you choose, you can cut and paste the code I have provided in order to more fully understand and appreciate the differences between clustered and non-clustered indexes.



Part I: Non-Clustered Index
Creating a Table

To better explain SQL Server non-clustered indexes, let's start by creating a new table and populating it with some sample data using the following scripts. I assume you have a database you can use for this. If not, you will want to create one for these examples.

Create Table DummyTable1
(
EmpId Int,
EmpName Varchar(8000)
)

When you first create a new table, there is no index created by default. In technical terms, a table without an index is called a "heap". We can confirm that this new table doesn't have an index by taking a look at the sysindexes system table, which contains a row for this table with indid = 0. The sysindexes table, which exists in every database, tracks table and index information. "Indid" refers to Index ID, and is used to identify indexes. An indid of 0 means that a table does not have an index, and is stored by SQL Server as a heap.

Now let’s add a few records in this table using this script:

Insert Into DummyTable1 Values (4, Replicate ('d',2000))
GO

Insert Into DummyTable1 Values (6, Replicate ('f',2000))
GO

Insert Into DummyTable1 Values (1, Replicate ('a',2000))
GO

Insert Into DummyTable1 Values (3, Replicate ('c',2000))
GO

Now, let’s view the contests of the table by executing the following command in Query Analyzer for our new table.

Select EmpID From DummyTable1
GO

Empid

4

6

1

3

As you would expect, the data we inserted earlier has been displayed. Note that the order of the results is in the same order that I inserted them in, which is in no order at all.

Now, let’s execute the following commands to display the actual page information for the table we created and is now stored in SQL Server.

dbcc ind(dbid, tabid, -1) – This is an undocumented command.

DBCC TRACEON (3604)
GO

Declare @DBID Int, @TableID Int
Select @DBID = db_id(), @TableID = object_id('DummyTable1')

DBCC ind(@DBID, @TableID, -1)
GO

This script will display many columns, but we are only interested in three of them, as shown below.

PagePID     IndexID     PageType
26408       0           10
26255       0           1
26409       0           1

Here’s what the information displayed means:

PagePID is the physical page number used to store the table. In this case, three pages are currently used to store the data.

IndexID is the type of index,

Where:

0 – Data page (heap)

1 – Clustered index

2 or greater – Index page (non-clustered index and ordinary index)

PageType tells you what kind of data is stored in each page,

Where:

10 – IAM (Index Allocation Map)

1 – Data page

2 – Index page

Now, let us execute DBCC PAGE command. This is an undocumented command.

DBCC page(dbid, fileno, pageno, option)

Where:

dbid = database id.

Fileno = fileno of the page. Usually it will be 1, unless we use more than one file for a database.

Pageno = we can take the output of the dbcc ind page no.

Option = it can be 0, 1, 2, 3. I use 3 to get a display of the data. You can try yourself for the other options.

Run this script to execute the command:

DBCC TRACEON (3604)
GO

DBCC page(@DBID, 1, 26408, 3)
GO

The output will be page allocation details.

DBCC TRACEON (3604)
GO

dbcc page(@DBID, 1, 26255, 3)
GO


The data will be displayed in the order it was entered in the table. This is how SQL Server stores the data in pages. Actually, 26255 and 26409 are both data pages.

I have displayed the data page information for page 26255 only. This is how MS SQL stores the contents of a data page: each column name with its respective value.

Record Type = PRIMARY_RECORD

EmpId = 4

EmpName = ddddddddddddddddddddddddddddddddddddddddddddddddddd
ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd
ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd
ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd
ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd



Record Type = PRIMARY_RECORD

EmpId = 6

EmpName = ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff



Record Type = PRIMARY_RECORD

EmpId = 1

EmpName = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa



This displays the exact data storage in SQL Server, without any index on the table. Now, let's go and create a unique non-clustered index on the EmpID column.



Creating a Non-Clustered Index

Now, we will create a unique non-clustered index on the empid column to see how it affects the data, and how the data is stored in SQL Server.

CREATE UNIQUE NONCLUSTERED INDEX DummyTable1_empid
ON DummyTable1 (empid)
GO

Now, execute the DBCC ind (dbid, tabid, -1)

DBCC TRACEON (3604)
GO

Declare @DBID Int, @TableID Int
Select @DBID = db_id(), @TableID = object_id('DummyTable1')
DBCC ind(@DBID, @TableID, -1)
GO

Here are the results:

PagePID     IndexID     PageType
26408       0           10
26255       0           1
26409       0           1
26411       2           10
26410       2           2

Now we see two more rows than before, which contain the index page details. Page 26408 displays the page allocation details, and pages 26255 and 26409 display the data page details, as before.

In regard to the new pages, page 26411 displays the page allocation details of an index page and page 26410 displays the index page details.

MS SQL generates an IAM page (PageType = 10) for the index, which describes the page allocation for that index and shows how many index pages it occupies.

Let us see what would be the output for page 26411, that is page type = 10



IAM: Single Page Allocations @0x308A608E

-----------------------------------------

Slot 0 = (1:26410)



Let us view page 26410 to see the index page details.

DBCC TRACEON (3604)
GO

DBCC page(10, 1, 26410, 3)
GO

SQL populates the index column data in order. The last column (?) is the row locator.

Here are the results, using two different methods:
Method I

FileID    PageID    EMPID    ?
1         26410     1        0x8F66000001000200
1         26410     3        0x2967000001000000
1         26410     4        0x8F66000001000000
1         26410     6        0x8F66000001000100

The row locator is displayed in one of two ways:

* If the table does not have a clustered index, the row locator is a combination of the file number, the page number, and the row number within the page.
* If the table does have a clustered index, the row locator is the clustered index key value.

Non-clustered indexes are particularly handy when we want to return a single row from a table.

For example, to search for employee ID (empid = 3) in a table that has a non-clustered index on the empid column, SQL Server looks through the index to find an entry that lists the exact page and row in the table where the matching empid can be found, and then goes directly to that page and row. This greatly speeds up accessing the record in question.

Select EmpID, EmpName From DummyTable1 WHERE EMPID = 3

Now, let’s insert some more rows in our table and view the data page storage of our non-clustered index.

Insert Into DummyTable1 Values (10, Replicate ('j',2000))
GO

Insert Into DummyTable1 Values (2, Replicate ('b',2000))
GO

Insert Into DummyTable1 Values (5, Replicate ('e',2000))
GO

Insert Into DummyTable1 Values (8, Replicate ('h',2000))
GO

Insert Into DummyTable1 Values (9, Replicate ('i',2000))
GO

Insert Into DummyTable1 Values (7, Replicate ('g',2000))
GO

Now, let’s view the data in our table.

Execute:

Select EmpID From DummyTable1

Here are the results:

EmpID

4

6

1

3

10

2

5

8

9

7

As you may notice above, the data is still in the order we entered it, and not in any particular order. This is because adding the non-clustered index didn’t change how the data was stored and ordered on the data pages.

Now, let’s view the results of the DBCC IND command. In order to find out what happened when the new data was added to the table.

DBCC TRACEON (3604)
GO

Declare @DBID Int, @TableID Int
Select @DBID = db_id(), @TableID = object_id('DummyTable1')
DBCC ind(@DBID, @TableID, -1)
GO

Here are the results:

PagePID     IndexID     PageType
26408       0           10
26255       0           1
26409       0           1
26412       0           1
26413       0           1
26411       2           10
26410       2           2

Let us execute the page 26410 again and get the index page details.

DBCC TRACEON (3604)
GO

dbcc page(10, 1, 26410, 3)
GO

SQL Server populates the index column data in order. The last column (?) is the row locator.

Here are the results:

Method I

FileID    PageID    EMPID    ?
1         26410     1        0x8F66000001000200
1         26410     2        0x2C67000001000000
1         26410     3        0x2967000001000000
1         26410     4        0x8F66000001000000
1         26410     5        0x2C67000001000100
1         26410     6        0x8F66000001000100
1         26410     7        0x2D67000001000000
1         26410     8        0x2C67000001000200
1         26410     9        0x2967000001000200
1         26410     10       0x2967000001000100

As I explained earlier, there are two types of row locators. We have seen Method I. Now, let's try Method II with the help of a clustered and a non-clustered index on the same table. DummyTable1 already has a non-clustered index. Let's now add a new column to the DummyTable1 table and add a clustered index on that column.

Alter Table DummyTable1 Add EmpIndex Int IDENTITY(1,1)
GO

With a clustered index in place, the non-clustered index will use the clustered index key value as its row locator, instead of the combination of file number, page number and row number within the page.

This adds the Empindex column to DummyTable1. I have used an identity column so that we will not have null values on that column.

You can execute DBCC IND and DBCC PAGE to check whether there is any change after the new column is added to the table. If you don't want to check this yourself, I can tell you that adding the new column did not affect the total number of pages currently allocated to the table by SQL Server.

Now, let’s add a unique clustered index on the empindex column and then view the differences in page 26410.

First, we execute the DBCC ind command. This displays a new set of pages for dummytable1.

DBCC TRACEON (3604)
GO

Declare @DBID Int, @TableID Int
Select @DBID = db_id(), @TableID = object_id('DummyTable1')
DBCC ind(@DBID, @TableID, -1)
GO

Here are the results:

PagePID     IndexID     PageType
26415       1           10
26414       0           1
26416       1           2
26417       0           1
26418       0           1
26420       2           10
26419       2           2

Pages 26415 and 26420 have page allocation details. Pages 26414, 26417 and 26418 have data page details.

Now, let’s view pages 26416 and 26419 and see the output.

DBCC TRACEON (3604)
GO

DBCC page(10, 1, 26416, 3)
GO

Here are the results:

FileID    PageID    ChildPageID    EMPID
1         26416     26414          0
1         26416     26417          5
1         26416     26418          9

This displays the output of the clustered index page, which has a link to each data page (ChildPageID). EMPID is the index column and shows the starting row of each page.

DBCC TRACEON (3604)
GO

DBCC page(10, 1, 26419, 3)
GO

Here are the results:

Method II

FileID    PageID    EMPID    EMPIndex
1         26419     1        1
1         26419     2        2
1         26419     3        3
1         26419     4        4
1         26419     5        5
1         26419     6        6
1         26419     7        7
1         26419     8        8
1         26419     9        9
1         26419     10       10

It is interesting to see the differences now. There is a clear difference between Method I and Method II: in Method II the row locator is the clustered index key.

The main difference between Method I and Method II is therefore how each one links to a row in a data page.

Play Video with asp.net and c#

.flv is a preferred format for playing videos online. You can download an .flv player from http://www.any-flv-player.com and then go to Publish >> Publish for web.

This will generate code; copy and paste that code into your .aspx page and it will start playing your .flv file.

You may face a major problem on the server: IIS will not play the video until you go to the IIS settings and add a MIME type mapping for the .flv extension (commonly video/x-flv).

How to make user in Sql server database.

To create a user in SQL Server you must have the necessary rights. To create the user, go to Security >> Logins and right-click on Logins to add a new login.

Login name: jayant

Select the option: SQL Server authentication

Uncheck: Enforce password policy. Refer to the image below.





Now close SQL Server Management Studio and log in again as jayant.

Try to access other databases; it will show an error, which means you have successfully created a new user for the new database.


Crosstab report in SQL server (Pivot)

Microsoft introduced new operators, PIVOT and UNPIVOT, in SQL Server 2005. Traditionally we created cross-tab reports using the CASE statement and aggregate functions. This article illustrates the usage of the new operators, PIVOT and UNPIVOT.
Let us assume that we have a table as described below.
Create table #MyTable
(yearofJoining int,
EmpId int,
Deptid int)
go
insert into #MyTable select 1990,1,1
insert into #MyTable select 1991,2,2
insert into #MyTable select 1990,3,4
insert into #MyTable select 1991,4,2
insert into #MyTable select 1990,5,1
insert into #MyTable select 1990,6,3
insert into #MyTable select 1992,7,3
insert into #MyTable select 1990,8,4
insert into #MyTable select 1993,9,1
insert into #MyTable select 1994,10,2
insert into #MyTable select 1990,11,3
insert into #MyTable select 1995,12,3
insert into #MyTable select 1995,14,3
insert into #MyTable select 1995,15,3
insert into #MyTable select 1995,16,6
go

In order to create a cross tab report, we used to execute the query as described below.
--Original Cross Tab query
select YearofJoining,
count(case [DeptId] when 1 then 1 else null end) as [Department-1],
count(case [DeptId] when 2 then 1 else null end) as [Department-2],
count(case [DeptId] when 3 then 1 else null end) as [Department-3]
from #MyTable where deptid in(1,2,3)
group by Yearofjoining
This would produce the result as shown below.
YearofJoining Department-1 Department-2 Department-3
------------- ------------ ------------ ------------
1990 2 0 2
1991 0 2 0
1992 0 0 1
1993 1 0 0
1994 0 1 0
1995 0 0 3


The same results can be reproduced using the operator, PIVOT.
--New PIVOT Operator in SQL 2005
SELECT YearofJoining, [1] as [Department-1],[2] as [Department-2],
[3] as [Department-3] FROM
(SELECT YearOfJoining,Deptid from #MyTable) p
PIVOT
( Count(DeptId) for DEPTID in ([1],[2],[3]))
AS pvt
ORDER BY Yearofjoining
Now let us assume that we have a table as described below.
create table #MyTable2 (BatchID int ,Status int)
go
insert into #MyTable2 select 1001 ,1
insert into #MyTable2 select 1001 ,2
insert into #MyTable2 select 1002 ,0
insert into #MyTable2 select 1002 ,3
insert into #MyTable2 select 1002 ,4
insert into #MyTable2 select 1003 ,4
insert into #MyTable2 select 1004 ,4
go


In order to create a cross tab report, we used to execute the query as described below.
--Original Cross Tab query
select batchid,
sum(case status when 0 then 1 else 0 end) as [status-0],
sum(case status when 1 then 1 else 0 end) as [status-1],
sum(case status when 2 then 1 else 0 end) as [status-2],
sum(case status when 3 then 1 else 0 end) as [status-3],
sum(case status when 4 then 1 else 0 end) as [status-4]
from #MyTable2
group by batchid
This would produce the result as shown below.
BatchId Status-0 Status-1 Status-2 Status-3 Status-4
----------- ----------- ----------- ----------- ----------- -----------
1001 0 1 1 0 0
1002 1 0 0 1 1
1003 0 0 0 0 1
1004 0 0 0 0 1



--New PIVOT Operator in SQL 2005
SELECT BatchId, [0]as [Status-0],
[1]as [Status-1],
[2] as [Status-2],
[3]as [Status-3],
[4]as [Status-4]
FROM
(SELECT BatchId,status from #MyTable2) p
PIVOT
( count(Status) for status in ([0],[1],[2],[3],[4]))
AS pvt
ORDER BY BatchId
Notice that in the traditional cross-tab query I am using the aggregate function SUM, while in the new PIVOT query I am using COUNT.
Let us try using the aggregate function SUM in the new PIVOT query.
--New PIVOT Operator in SQL 2005
SELECT BatchId, [0]as [Status-0],
[1]as [Status-1],
[2] as [Status-2],
[3]as [Status-3],
[4]as [Status-4]
FROM
(SELECT BatchId,status from #MyTable2) p
PIVOT
(sum(Status) for status in ([0],[1],[2],[3],[4]))
AS pvt
ORDER BY BatchId
You would get the result as described below.
BatchId Status-0 Status-1 Status-2 Status-3 Status-4
----------- ----------- ----------- ----------- ----------- -----------
1001 NULL 1 2 NULL NULL
1002 0 NULL NULL 3 4
1003 NULL NULL NULL NULL 4
1004 NULL NULL NULL NULL 4

As you can see, when we use the PIVOT operator with SUM instead of COUNT we do not get the desired results: instead of the 0/1 counts we get the actual status values, with NULL where no row exists for that combination.
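If you do want to stick with SUM, one common workaround (a sketch, not from the original article) is to pivot over a derived constant column and wrap the pivoted columns in ISNULL so that missing combinations show 0 instead of NULL:

--PIVOT with SUM over a derived constant column
SELECT BatchId, ISNULL([0],0) as [Status-0],
ISNULL([1],0) as [Status-1],
ISNULL([2],0) as [Status-2],
ISNULL([3],0) as [Status-3],
ISNULL([4],0) as [Status-4]
FROM
(SELECT BatchId, status, 1 as cnt FROM #MyTable2) p
PIVOT
(sum(cnt) for status in ([0],[1],[2],[3],[4]))
AS pvt
ORDER BY BatchId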
Let's assume we have a table that looks like the cross tab report described below.
create table #mycrosstab(
BatchId int, [Status-0] int, [Status-1] int,
[Status-2] int, [Status-3] int, [Status-4] int)
go
insert into #mycrosstab select 1001,0,1, 1, 0, 0
insert into #mycrosstab select 1002,1,0, 0, 1, 1
insert into #mycrosstab select 1003,0,0, 0, 0, 1
insert into #mycrosstab select 1004,0,0, 0, 0, 1
go
Let's try to reverse the crosstab report in order to get the original table using the new operator UNPIVOT.
SELECT BatchId, Status,StatusValue
FROM
(SELECT BatchId , [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4]
FROM #mycrosstab ) p
UNPIVOT
(StatusValue FOR status IN
( [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4])
)AS unpvt

BatchId Status StatusValue
1001 Status-0 0
1001 Status-1 1
1001 Status-2 1
1001 Status-3 0
1001 Status-4 0
1002 Status-0 1
1002 Status-1 0
1002 Status-2 0
1002 Status-3 1
1002 Status-4 1
1003 Status-0 0
1003 Status-1 0
1003 Status-2 0
1003 Status-3 0
1003 Status-4 1
1004 Status-0 0
1004 Status-1 0
1004 Status-2 0
1004 Status-3 0
1004 Status-4 1
Using a CASE statement together with the UNPIVOT operator, you can bring back something close to the original table, as described below.
SELECT BatchId, MyStatus= case
when status= 'Status-0' and statusvalue =1 then '0'
when status= 'Status-1' and statusvalue =1 then '1'
when status= 'Status-2' and statusvalue =1 then '2'
when status= 'Status-3' and statusvalue =1 then '3'
when status= 'Status-4' and statusvalue =1 then '4' end
,StatusValue
FROM
(SELECT BatchId , [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4]
FROM #mycrosstab ) p
UNPIVOT
(StatusValue FOR status IN
( [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4])
)AS unpvt
BatchId MyStatus StatusValue
1001 NULL 0
1001 1 1
1001 2 1
1001 NULL 0
1001 NULL 0
1002 0 1
1002 NULL 0
1002 NULL 0
1002 3 1
1002 4 1
1003 NULL 0
1003 NULL 0
1003 NULL 0
1003 NULL 0
1003 4 1
1004 NULL 0
1004 NULL 0
1004 NULL 0
1004 NULL 0
1004 4 1
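The zero rows still show up with a NULL MyStatus, so to recover only the original (BatchId, Status) rows you can filter on StatusValue and strip the column-name prefix (a sketch building on the query above; the REPLACE-based conversion is just one way to do it):

SELECT BatchId, Replace(Status, 'Status-', '') as Status
FROM
(SELECT BatchId , [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4]
FROM #mycrosstab ) p
UNPIVOT
(StatusValue FOR Status IN
( [Status-0] , [Status-1] , [Status-2] , [Status-3], [Status-4])
)AS unpvt
WHERE StatusValue = 1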
Share:
Read More
,

Event after Load

Yes, there is an event: LoadComplete.
For example:
protected void Page_LoadComplete(object sender, EventArgs e)

This event is for tasks that require that all other controls on the page be loaded.

Suppose I have to show a message on the page and then redirect to some other page.

Here is a small example: I have a page that links out to some other URLs, but for some reason I had to move that page to another location.

Now I have to show a message on the page:
“This page has been moved to another location. It will automatically redirect you to the new page.”

Since this can only be done after the page has loaded, LoadComplete is the best event in which to place that code.
Share:
Read More