Thursday, December 27, 2007

ASP.NET interview questions, part 1

Explain the differences between Server-side and Client-side code?
Server-side scripting means that the script is executed on the server and the result is
sent to the browser. Classic ASP does not have some functionality, such as sockets or
file uploading; for these you have to build custom components, usually in VB or VC++.
Client-side scripting means that the script is executed immediately in the browser, for
things such as form field validation, clocks, e-mail address validation, and so on. Client-side
scripting is usually done in VBScript or JavaScript. Its drawbacks are download time,
browser compatibility, and visible code - since JavaScript and VBScript code is included
in the HTML page, anyone can see the code by viewing the page source. It can also pose
a security hazard for the client computer.
What type of code (server or client) is found in a Code-Behind class?
Server-side code; the code-behind class runs on the server and is typically written in C# or VB.NET.
Should validation (did the user enter a real date) occur server-side or client-side?
Why?
Client-side validation, because there is no need for a round trip to the server when the
date can be checked on the client machine. In practice the input should still be re-validated
on the server, since client-side checks can be bypassed.
What does the "EnableViewState" property do? Why would I want it on or off?
EnableViewState turns on the automatic state-management feature that enables
server controls to re-populate their values on a round trip without requiring you to
write any code. This feature is not free, however, since the state of a control is
passed to and from the server in a hidden form field. You should be aware of
when ViewState is helping you and when it is not. For example, if you are
binding a control to data on every round trip (as in the datagrid example in tip
#4), then you do not need the control to maintain its view state, since you will
wipe out any re-populated data in any case. ViewState is enabled for all server
controls by default. To disable it, set the EnableViewState property of the control
to false.
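A minimal code-behind sketch of that trade-off (the control name myGrid and the placeholder data are assumptions, not part of the original answer): because the grid is re-bound on every request, its view state is switched off so the hidden __VIEWSTATE field stays small.

using System;
using System.Data;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ProductsPage : Page
{
    // Declared in the .aspx as <asp:DataGrid id="myGrid" runat="server" />
    protected DataGrid myGrid;

    protected void Page_Load(object sender, EventArgs e)
    {
        myGrid.EnableViewState = false;          // state would be wiped by the re-bind anyway

        DataTable products = new DataTable();    // placeholder data, re-bound on every round trip
        products.Columns.Add("Name");
        products.Rows.Add(new object[] { "Tea" });
        products.Rows.Add(new object[] { "Coffee" });

        myGrid.DataSource = products;
        myGrid.DataBind();
    }
}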
What is the difference between Server.Transfer and Response.Redirect? Why
would I choose one over the other?
Server.Transfer(): the client's address bar still shows the URL of the requesting page,
but all of the content comes from the requested (destination) page. Data can be
persisted across the pages using the Context.Items collection, which is one of the
best ways to transfer data from one page to another while keeping the page state
alive. Response.Redirect(): the client knows the physical location (page name and
query string as well), and Context.Items loses its contents when you navigate to the
destination page. In earlier versions of IIS, if we wanted to send a user to a new
Web page, the only option we had was Response.Redirect. While this method does
accomplish our goal, it has several important drawbacks. The biggest problem is that
this method causes each page to be treated as a separate transaction. Besides making
it difficult to maintain your transactional integrity, Response.Redirect introduces some
additional headaches. First, it prevents good encapsulation of code. Second, you lose
access to all of the properties in the Request object. Sure, there are workarounds, but
they are difficult. Finally, Response.Redirect necessitates a round trip to the client,
which, on high-volume sites, causes scalability problems. As you might suspect,
Server.Transfer fixes all of these problems. It does this by performing the transfer
on the server without requiring a round trip to the client.
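A small code-behind sketch contrasting the two calls (the page name Details.aspx and the CustomerId key are hypothetical, not from the original answer):

using System;
using System.Web.UI;

public class SourcePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Response.Redirect sends an HTTP 302 to the browser, which then requests the
        // new URL itself: an extra round trip, the address bar changes, and Context.Items
        // does not survive.
        // Response.Redirect("Details.aspx?id=5");

        // Server.Transfer ends this page and executes Details.aspx on the server within
        // the same request: no extra round trip, the browser still shows the original URL,
        // and Context.Items carries data across to the destination page.
        Context.Items["CustomerId"] = 5;
        Server.Transfer("Details.aspx");
    }
}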
Can you give an example of when it would be appropriate to use a web service as
opposed to a non-serviced .NET component?
When to use Web Services:
Communicating through a Firewall - When building a distributed application
with hundreds or thousands of users spread over multiple locations, there is
always the problem of communicating between client and server because of
firewalls and proxy servers. Exposing your middle-tier components as
Web Services and invoking them directly from a Windows UI is a very valid
option.
Application Integration - When integrating applications written in various
languages and running on disparate systems, or even applications running
on the same platform that have been written by separate vendors.
Business-to-Business Integration - This is an enabler for B2B integration
which allows one to expose vital business processes to authorized suppliers
and customers. An example would be exposing electronic ordering and
invoicing, allowing customers to send you purchase orders and suppliers
to send you invoices electronically.
Software Reuse - This takes place at multiple levels: code reuse at the source
code level or binary component-based reuse. The limiting factor here is
that you can reuse the code but not the data behind it. Web Services
overcome this limitation. A scenario could be when you are building an
app that aggregates the functionality of several other applications. Each
of these functions could be performed by individual apps, but there is
value in combining the multiple apps to present a unified view in a
Portal or Intranet.
When not to use Web Services:
Single-machine applications - When the apps are running on the same machine
and need to communicate with each other, use a native API. You also have
the option of using component technologies such as COM or .NET
components, as there is very little overhead.
Homogeneous applications on a LAN - If you have Win32 or WinForms apps
that want to communicate with their server counterpart, it is much more
efficient to use DCOM in the case of Win32 apps and .NET Remoting in
the case of .NET apps.
Let's say I have an existing application written using Visual Studio
(VB/InterDev) and this application utilizes Windows COM+ transaction
services. How would you approach migrating this application to .NET?
Can you explain the difference between an ADO.NET Dataset and an ADO
Recordset?
In ADO, the in-memory representation of data is the recordset. In ADO.NET, it
is the dataset. There are important differences between them.
A recordset looks like a single table. If a recordset is to contain data from
multiple database tables, it must use a JOIN query, which assembles the
data from the various database tables into a single result table. In contrast,
a dataset is a collection of one or more tables. The tables within a dataset
are called data tables; specifically, they are DataTable objects. If a dataset
contains data from multiple database tables, it will typically contain
multiple DataTable objects. That is, each DataTable object typically
corresponds to a single database table or view. In this way, a dataset can
mimic the structure of the underlying database. A dataset usually also
contains relationships. A relationship within a dataset is analogous to a
foreign-key relationship in a database -that is, it associates rows of the
tables with each other. For example, if a dataset contains a table about
investors and another table about each investor's stock purchases, it could
also contain a relationship connecting each row of the investor table with
the corresponding rows of the purchase table. Because the dataset can hold
multiple, separate tables and maintain information about relationships
between them, it can hold much richer data structures than a recordset,
including self-relating tables and tables with many-to-many relationships.
In ADO you scan sequentially through the rows of the recordset using the
ADO MoveNext method. In ADO.NET, rows are represented as
collections, so you can loop through a table as you would through any
collection, or access particular rows via ordinal or primary key index.
DataRelation objects maintain information about master and detail records

and provide a method that allows you to get records related to the one you
are working with. For example, starting from the row of the Investor table
for "Nate Sun," you can navigate to the set of rows of the Purchase table
describing his purchases. A cursor is a database element that controls
record navigation, the ability to update data, and the visibility of changes
made to the database by other users. ADO.NET does not have an inherent
cursor object, but instead includes data classes that provide the
functionality of a traditional cursor. For example, the functionality of a
forward-only, read-only cursor is available in the ADO.NET DataReader
object. For more information about cursor functionality, see Data Access
Technologies.
Minimized Open Connections: In ADO.NET you open connections only long
enough to perform a database operation, such as a Select or Update. You
can read rows into a dataset and then work with them without staying
connected to the data source. In ADO the recordset can provide
disconnected access, but ADO is designed primarily for connected access.
There is one significant difference between disconnected processing in
ADO and ADO.NET. In ADO you communicate with the database by
making calls to an OLE DB provider. In ADO.NET you communicate
with the database through a data adapter (an OleDbDataAdapter,
SqlDataAdapter, OdbcDataAdapter, or OracleDataAdapter object), which
makes calls to an OLE DB provider or the APIs provided by the
underlying data source. The important difference is that in ADO.NET the
data adapter allows you to control how the changes to the dataset are
transmitted to the database - by optimizing for performance, performing
data validation checks, or adding any other extra processing. Data
adapters, data connections, data commands, and data readers are the
components that make up a .NET Framework data provider. Microsoft and
third-party providers can make available other .NET Framework data
providers that can be integrated into Visual Studio.
Sharing Data Between Applications. Transmitting an ADO.NET dataset
between applications is much easier than transmitting an ADO
disconnected recordset. To transmit an ADO disconnected recordset from
one component to another, you use COM marshalling. To transmit data in
ADO.NET, you use a dataset, which can transmit an XML stream.
Richer data types. COM marshalling provides a limited set of data types -
those defined by the COM standard. Because the transmission of datasets
in ADO.NET is based on an XML format, there is no restriction on data
types. Thus, the components sharing the dataset can use whatever rich set
of data types they would ordinarily use.
Performance. Transmitting a large ADO recordset or a large ADO.NET
dataset can consume network resources; as the amount of data grows, the

stress placed on the network also rises. Both ADO and ADO.NET let you
minimize which data is transmitted. But ADO.NET offers another
performance advantage, in that ADO.NET does not require data-type
conversions. ADO, which requires COM marshalling to transmit recordsets
among components, does require that ADO data types be converted to
COM data types.
Penetrating Firewalls. A firewall can interfere with two components trying to
transmit disconnected ADO recordsets. Remember, firewalls are typically
configured to allow HTML text to pass, but to prevent system-level
requests (such as COM marshalling) from passing.
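A small C# sketch of the disconnected, collection-based model described above (the connection string and the Investor/Purchase tables are placeholders chosen to match the example in the answer, not a real schema):

using System;
using System.Data;
using System.Data.SqlClient;

class DataSetDemo
{
    static void Main()
    {
        // Placeholder connection string; Fill opens and closes the connection itself.
        string connStr = "Server=.;Database=Sales;Integrated Security=SSPI";
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT InvestorId, Name FROM Investor; SELECT InvestorId, Stock FROM Purchase",
            connStr);

        DataSet ds = new DataSet();
        da.Fill(ds);
        ds.Tables[0].TableName = "Investor";
        ds.Tables[1].TableName = "Purchase";

        // A DataRelation plays the role a JOIN would have played in a single recordset.
        ds.Relations.Add("InvestorPurchases",
            ds.Tables["Investor"].Columns["InvestorId"],
            ds.Tables["Purchase"].Columns["InvestorId"]);

        // Rows are ordinary collections - no MoveNext-style cursor is needed.
        foreach (DataRow investor in ds.Tables["Investor"].Rows)
            foreach (DataRow purchase in investor.GetChildRows("InvestorPurchases"))
                Console.WriteLine("{0} bought {1}", investor["Name"], purchase["Stock"]);
    }
}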
Can you give an example of what might be best suited to place in the
Application_Start and Session_Start subroutines?
The Application_Start event is guaranteed to occur only once throughout the
lifetime of the application. It's a good place to initialize global variables. For
example, you might want to retrieve a list of products from a database table and
place the list in application state or the Cache object. SessionStateModule
exposes both Session_Start and Session_End events.
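A hedged Global.asax sketch (the ProductList cache key and the GetProductList helper are invented names used only for illustration):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once, when the first request hits the application - a good place
        // to load reference data into application state or the Cache.
        Application["ProductList"] = GetProductList();
    }

    protected void Session_Start(object sender, EventArgs e)
    {
        // Runs once per user session - per-user initialization goes here.
        Session["Visits"] = 0;
    }

    private static string[] GetProductList()
    {
        // Placeholder for a database call that returns the product list.
        return new string[] { "Widget", "Gadget" };
    }
}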
If I'm developing an application that must accommodate multiple security levels
through secure login, and my ASP.NET web application is spanned across
three web servers (using round-robin load balancing), what would be the
best approach to maintain login state for the users?
What are ASP.NET Web Forms? How is this technology different than what is
available though ASP?
Web Forms are the heart and soul of ASP.NET. Web Forms are the User Interface
(UI) elements that give your Web applications their look and feel. Web Forms are
similar to Windows Forms in that they provide properties, methods, and events
for the controls that are placed onto them. However, these UI elements render
themselves in the appropriate markup language required by the request, e.g.
HTML. If you use Microsoft Visual Studio .NET, you will also get the familiar
drag-and-drop interface used to create your UI for your Web application.
How does VB.NET/C# achieve polymorphism?
Through inheritance and method overriding: a base class (or interface) declares virtual or
abstract members, derived classes override or implement them, and a call made through a
base-class or interface reference executes the derived type's implementation at run time.
Can you explain what inheritance is and an example of when you might use it?
Inheritance is a fundamental feature of an object oriented system and it is simply the
ability to inherit data and functionality from a parent object. Rather than
developing new objects from scratch, new code can be based on the work of other
programmers, adding only new features that are needed.
How would you implement inheritance using VB.NET/C#?
When we set out to implement a class using inheritance, we must first start with
an existing class from which we will derive our new subclass. This existing class,
or base class, may be part of the .NET system class library framework, it may be
part of some other application or .NET assembly, or we may create it as part of
our existing application. Once we have a base class, we can then implement one

or more subclasses based on that base class. Each of our subclasses will
automatically have all of the methods, properties, and events of that base class -
including the implementation behind each method, property, and event. Our
subclass can add new methods, properties, and events of its own - extending the
original interface with new functionality. Additionally, a subclass can replace the
methods and properties of the base class with its own new implementation -
effectively overriding the original behavior and replacing it with new behaviors.
Essentially inheritance is a way of merging functionality from an existing class
into our new subclass. Inheritance also defines rules for how these methods,
properties, and events can be merged.
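A short C# sketch of those ideas (the Account/SavingsAccount classes are invented purely for illustration): the subclass inherits Deposit, adds a new member, and overrides Describe while still reusing the base implementation.

using System;

class Account
{
    protected decimal balance;
    public void Deposit(decimal amount) { balance += amount; }
    public virtual string Describe() { return "Account, balance " + balance; }
}

class SavingsAccount : Account
{
    // New functionality added by the subclass.
    public void ApplyInterest(decimal rate) { balance += balance * rate; }

    // Overridden behaviour that still calls into the base class.
    public override string Describe() { return "Savings " + base.Describe(); }
}

class InheritanceDemo
{
    static void Main()
    {
        SavingsAccount acct = new SavingsAccount();
        acct.Deposit(100m);          // inherited from Account
        acct.ApplyInterest(0.05m);   // added by SavingsAccount
        Console.WriteLine(acct.Describe());   // prints "Savings Account, balance 105.00"
    }
}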

.NET deployment questions

What do you know about .NET assemblies?
Assemblies are the smallest units of versioning and deployment in a .NET
application. Assemblies are also the building blocks for programs such as Web
services, Windows services, serviced components, and .NET remoting
applications.
What s the difference between private and shared assembly?
A private assembly is used inside an application only and does not have to be
identified by a strong name. A shared assembly can be used by multiple
applications and has to have a strong name.
What s a strong name?
A strong name includes the name of the assembly, version number, culture
identity, and a public key token.
How can you tell the application to look for assemblies at the locations other
than its own install?
Use the <probing> element (or a <codeBase> element) in the XML .config file for a given application.
Explain what a diffgram is, and a good use for one?
A DiffGram is an XML format that is used to identify current and original
versions of data elements. The DataSet uses the DiffGram format to load and
persist its contents, and to serialize its contents for transport across a network
connection. When a DataSet is written as a DiffGram, it populates the DiffGram
with all the necessary information to accurately recreate the contents, though not
the schema, of the DataSet, including column values from both the Original and
Current row versions, row error information, and row order.
Where would you use an IHttpModule, and what are the limitations of any
approach you might take in implementing one?
One of ASP.NET's most useful features is the extensibility of the HTTP pipeline,
the path that data takes between client and server. You can use HTTP modules to extend
your ASP.NET applications by adding pre- and post-processing to each HTTP
request coming into your application. For example, if you wanted custom
authentication facilities for your application, the best technique would be to
intercept the request when it comes in and process the request in a custom HTTP
module.
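A minimal IHttpModule sketch (the module name, header name, and timing logic are illustrative assumptions; the module must also be registered under <httpModules> in web.config before it runs):

using System;
using System.Web;

public class TimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Hook pre- and post-processing for every request in the pipeline.
        app.BeginRequest += new EventHandler(OnBeginRequest);
        app.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        app.Context.Items["start"] = DateTime.Now;          // pre-processing
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        DateTime start = (DateTime)app.Context.Items["start"];
        double ms = (DateTime.Now - start).TotalMilliseconds;
        app.Context.Response.AppendHeader("X-Elapsed-Ms", ms.ToString());   // post-processing
    }

    public void Dispose() { }
}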
What are the disadvantages of viewstate/what are the benefits?

Describe session handling in a webfarm, how does it work and what are the
limits?
How would you get ASP.NET running in Apache web servers - why would
you even do this?
What's MSIL, and why should my developers need an appreciation of it, if at
all?
In what order do the events of an ASPX page execute? As a developer, is it
important to understand these events?
Every Page object (which your .aspx page is) has nine events, most of which you
will not have to worry about in your day to day dealings with ASP.NET. The
three that you will deal with the most are: Page_Init, Page_Load,
Page_PreRender.
Which method do you invoke on the DataAdapter control to load your
generated dataset with data?
System.Data.Common.DataAdapter.Fill(System.Data.DataSet);
If my DataAdapter is sqlDataAdapter and my DataSet is dsUsers then it is called
this way:
sqlDataAdapter.Fill(dsUsers);
Which template must you provide, in order to display data in a Repeater
control?
ItemTemplate
How can you provide an alternating color scheme in a Repeater control?
AlternatingItemTemplate Like the ItemTemplate element, but rendered for every
other row (alternating items) in the Repeater control. You can specify a different
appearance for the AlternatingItemTemplate element by setting its style
properties.
What property must you set, and what method must you call in your code, in
order to bind the data from some data source to the Repeater control?
You must set the DataSource property (and, when binding to a DataSet, the DataMember
property, which gets or sets the specific table in the DataSource to bind to the control),
and call the DataBind method to bind data from a source to a server control. This method
is commonly used after retrieving a data set through a database query.
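A hedged binding sketch (rptUsers, dsUsers, and the Users table are made-up names; rptUsers is assumed to be declared as <asp:Repeater id="rptUsers" runat="server"> in the .aspx):

using System;
using System.Data;
using System.Web.UI;
using System.Web.UI.WebControls;

public class UsersPage : Page
{
    protected Repeater rptUsers;

    protected void Page_Load(object sender, EventArgs e)
    {
        DataSet dsUsers = LoadUsers();     // in real code, filled by a SqlDataAdapter
        rptUsers.DataSource = dsUsers;     // where the data comes from
        rptUsers.DataMember = "Users";     // which table inside the DataSet to use
        rptUsers.DataBind();               // renders the ItemTemplate once per row
    }

    private static DataSet LoadUsers()
    {
        // Placeholder data so the sketch stands on its own.
        DataSet ds = new DataSet();
        DataTable users = ds.Tables.Add("Users");
        users.Columns.Add("Name");
        users.Rows.Add(new object[] { "Alice" });
        return ds;
    }
}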
What base class do all Web Forms inherit from?
System.Web.UI.Page
What method do you use to explicitly kill a user's session?
The Abandon method destroys all the objects stored in a Session object and
releases their resources. If you do not call the Abandon method explicitly, the
server destroys these objects when the session times out.
Syntax: Session.Abandon

How do you turn off cookies for one page in your site?
Use the Cookie.Discard property, which gets or sets the discard flag set by the
server. When true, this property instructs the client application not to save the
cookie on the user's hard disk when a session ends.
Which two properties are on every validation control?
The ControlToValidate and ErrorMessage properties.
How do you create a permanent cookie?
Setting the Expires property to DateTime.MaxValue (or another date far in the
future) means that the cookie effectively never expires.
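A one-method sketch in a page's code-behind (the cookie name and value are examples only):

using System;
using System.Web;
using System.Web.UI;

public class CookieDemoPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // A "permanent" cookie: because Expires is far in the future, the browser
        // saves it to disk instead of discarding it when the session ends.
        HttpCookie cookie = new HttpCookie("UserName", "alice");
        cookie.Expires = DateTime.MaxValue;
        Response.Cookies.Add(cookie);
    }
}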
What tag do you use to add a hyperlink column to the DataGrid?
What is the standard you use to wrap up a call to a Web service?
Which method do you use to redirect the user to another page without
performing a round trip to the client?
Server.Transfer()
What is the transport protocol you use to call a Web service?
SOAP. Transport Protocols: It is essential for the acceptance of Web Services that
they are based on established Internet infrastructure. This in fact imposes the
usage of the HTTP, SMTP, and FTP protocols based on the TCP/IP family of
transports. Messaging Protocol: The format of messages exchanged between Web
Services clients and Web Services should be vendor neutral and should not carry
details about the technology used to implement the service. Also, the message
format should allow for extensions and different bindings to specific transport
protocols. SOAP and ebXML Transport are specifications which fulfill these
requirements. We expect that the W3C XML Protocol Working Group defines a
successor standard.
True or False: A Web service can only be written in .NET.
False.
What does WSDL stand for?
Web Services Description Language
What property do you have to set to tell the grid which page to go to when using
the Pager object?
Where on the Internet would you look for Web services?
UDDI repositories like uddi.microsoft.com, the IBM UDDI node, UDDI registries in
the Google Directory, and enthusiast sites like XMethods.net.
What tags do you need to add within the asp:datagrid tags to bind columns
manually?
The <Columns> tag containing <asp:BoundColumn> tags (with the grid's AutoGenerateColumns property set to false).
Which property on a Combo Box do you set with a column name, prior to
setting the DataSource, to display data in the combo box?
How is a property designated as read-only?
In VB.NET:

Public ReadOnly Property PropertyName As ReturnType
    Get
        ' property implementation goes here
    End Get
End Property

In C#:

public ReturnType PropertyName
{
    get
    {
        // property implementation goes here
    }
    // no set accessor is written
}
Which control would you use if you needed to make sure the values in two
different controls matched?
Use the CompareValidator control to compare the values of two different controls.
True or False: To test a Web service you must create a windows application or
Web application to consume this service?
False.
How many classes can a single .NET DLL contain?
Unlimited.

Windows code security questions

What s the difference between code-based security and role-based security?
Which one is better?
Code-based security is the approach of using permissions and permission sets for a
given piece of code to run. The administrator, for example, can disable running
executables off the Internet or restrict access to the corporate database to only a few
applications. Role-based security most of the time involves the code running with the
privileges of the current user. This way the code cannot, supposedly, do more harm
than mess up a single user account. There is no better, or 100% thumbs-up, approach;
depending on the nature of the deployment, both code-based and role-based security
could be implemented to an extent.
How can you work with permissions from your .NET application?
You can request permission to do something and you can demand certain
permissions from other apps. You can also refuse permissions so that your app is
not inadvertently used to destroy some data.
How can a C# app request minimum permissions?
using System.Security.Permissions;
[assembly: FileDialogPermissionAttribute(SecurityAction.RequestMinimum, Unrestricted = true)]
What s a code group?
A code group is a set of assemblies that share a security context.
What s the difference between authentication and authorization?
Authentication happens first. You verify the user's identity based on credentials.
Authorization is making sure the user only gets access to the resources he is
permitted to use.
What are the authentication modes in ASP.NET?
None, Windows, Forms and Passport.
Are the actual permissions for the application defined at run-time or compile-time?

The CLR computes actual permissions at runtime based on code group membership
and the calling chain of the code.

ASP.NET DataGrid questions

What is a DataGrid?
The DataGrid Web server control is a powerful tool for displaying information
from a data source. It is easy to use; you can display editable data in a
professional-looking grid by setting only a few properties. At the same time, the
grid has a sophisticated object model that provides you with great flexibility in
how you display the data.
What s the difference between the System.Web.UI.WebControls.DataGrid and
System.Windows.Forms.DataGrid ?
The Web UI control does not inherently support master-detail data structures. As
with other Web server controls, it does not support two-way data binding. If you
want to update data, you must write code to do this yourself. You can only edit
one row at a time. It does not inherently support sorting, although it raises events
you can handle in order to sort the grid contents. You can bind the Web Forms
DataGrid to any object that supports the IEnumerable interface. The Web Forms
DataGrid control supports paging. It is easy to customize the appearance and
layout of the Web Forms DataGrid control as compared to the Windows Forms
one.
How do you customize the column content inside the datagrid?
If you want to customize the content of a column, make the column a template
column. Template columns work like item templates in the DataList or Repeater
control, except that you are defining the layout of a column rather than a row.
How do you apply specific formatting to the data inside the cells?
You cannot specify formatting for columns generated when the grid's
AutoGenerateColumns property is set to true, only for bound or template
columns. To format, set the column's DataFormatString property to a string-formatting
expression suitable for the data type of the data you are formatting.
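The same idea sketched in code rather than markup (the page, grid, and Price field are hypothetical assumptions; columns could equally be declared in the .aspx):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class OrdersPage : Page
{
    // Declared in the .aspx as <asp:DataGrid id="grdOrders" runat="server" AutoGenerateColumns="false" />
    protected DataGrid grdOrders;

    protected void Page_Init(object sender, EventArgs e)
    {
        BoundColumn price = new BoundColumn();
        price.DataField = "Price";
        price.HeaderText = "Price";
        price.DataFormatString = "{0:C}";   // currency formatting applied to each cell
        grdOrders.Columns.Add(price);
    }
}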
How do you hide the columns?
One way to have columns appear dynamically is to create them at design time, and
then to hide or show them as needed. You can do this by setting a column's
Visible property.
How do you display an editable drop-down list?
Displaying a drop-down list requires a template column in the grid. Typically, the
ItemTemplate contains a control such as a data-bound Label control to show the
current value of a field in the record. You then add a drop-down list to the
EditItemTemplate. In Visual Studio, you can add a template column in the
Property builder for the grid, and then use standard template editing to remove the
default TextBox control from the EditItemTemplate and drag a DropDownList
control into it instead. Alternatively, you can add the template column in HTML
view. After you have created the template column with the drop-down list in it,
there are two tasks. The first is to populate the list. The second is to preselect the
appropriate item in the list - for example, if a book's genre is set to "fiction",
you often want "fiction" to be preselected when the drop-down list displays.

How do you check whether the row data has been changed?
The definitive way to determine whether a row has been dirtied is to handle the
changed event for the controls in a row. For example, if your grid row contains a
TextBox control, you can respond to the control's TextChanged event. Similarly,
for check boxes, you can respond to a CheckedChanged event. In the handler for
these events, you maintain a list of the rows to be updated. Generally, the best
strategy is to track the primary keys of the affected rows. For example, you can
maintain an ArrayList object that contains the primary keys of the rows to update.
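A hedged sketch of that pattern (grdEmployees, txtName, and the DataKeyField assumption are invented for illustration; the TextBox is assumed to sit in a template column with its OnTextChanged wired to the handler below):

using System;
using System.Collections;
using System.Web.UI;
using System.Web.UI.WebControls;

public class EditGridPage : Page
{
    protected DataGrid grdEmployees;                 // DataKeyField="EmployeeId" in the .aspx
    private ArrayList dirtyKeys = new ArrayList();   // keys collected during this postback

    protected void txtName_TextChanged(object sender, EventArgs e)
    {
        TextBox box = (TextBox)sender;
        DataGridItem row = (DataGridItem)box.NamingContainer;   // the grid row containing the TextBox
        object key = grdEmployees.DataKeys[row.ItemIndex];      // primary key of that row

        if (!dirtyKeys.Contains(key))
            dirtyKeys.Add(key);    // later, an update pass (e.g. in a Save button handler
                                   // that fires after the change events) uses this list
    }
}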

Sunday, December 16, 2007

Life Time of a Server Control


Introduction
• What is a Server Control?
• Control Properties
• Control Methods
• Control Events
• Conclusions

Introduction: If you haven't read my previous article, I would recommend doing so prior to reading this one, as it builds upon the foundations laid out in that article. Because ASP.Net is object-oriented and event-driven, the various events and execution of a Server Control can get a bit confusing. This article takes you through the Lifecycle of a Server Control and explains exactly what is happening at each stage of its existence.



What is a Server Control? In ASP.Net, any class which can render an HTML interface in an ASPX page (i.e. write HTML to the output stream) is a Server Control and derives from System.Web.UI.Control. This includes the Page class, Literal Controls, HTML Controls, Web Controls, User Controls, and Custom Controls. This means that all Server Controls inherit System.Web.UI.Control, and therefore that all Server Controls have certain properties, methods, and events in common. Let's take a look:
Control Properties:
ClientID - The ClientID is the "ID" attribute assigned to the HTML object which this Control represents in the page. It is used for client-side operations such as JavaScript functions. As any Server Control which can maintain state and do PostBacks must have a client side HTML "ID" attribute in order to use the JavaScript state and event management client-side functions, this property automatically supplies one if the developer does not.
Controls - This is a Collection of the Controls contained inside this one. Like a Windows Form, an ASPX page can host multiple Controls, and is, in fact, a Control. In practical terms, a Control is contained inside another control if the HTML for the control is between the beginning and ending tags of another HTML Control. Example:


<form id="Form1" runat="server">
    <asp:Label id="Label1" runat="server">This is Message One</asp:Label>
</form>

In the example above, the Label WebControl is a member of the Controls Collection of the Form HTMLControl.
• EnableViewState - This Boolean value indicates whether or not the Control should "remember" its state between PostBacks.
• ID - This can be a bit confusing, as it looks a lot like ClientID. The ID property is used on the server-side to access the Control programmatically (in the CodeBehind class).
• NamingContainer - This property references the nearest ancestor Control of this Control that implements the INamingContainer Interface. The purpose of the INamingContainer Interface is to make sure that the ID property of all Controls is unique. Any Control contained in a Control which implements INamingContainer will have a unique ID regardless of whether it has the same ID as another Control. The unique ID is derived from the assigned ID and the ID of the parent container.
• Page - This property references the Page object in which this Control resides.
• Parent - This property references the Control which immediately contains this one in its hierarchy of containers.
• Site - This property contains information about the container that hosts the Control when it is rendered on a design surface.
• TemplateSourceDirectory - returns the virtual directory of the Page or User Control which contains the Control.
• UniqueID - Returns the unique, hierarchically-qualified ID of the Control. This is different from ID and ClientID, in that multiple Controls can have the same ID, but not the same UniqueID. When hosted inside a NamingContainer, the UniqueID is a combination of the ID of the NamingContainer and the ID of the Control.
• Visible - This Boolean value indicates whether or not the control will be rendered in the page. When set to False, no HTML for the Control (or any child controls contained therein) will appear in the Page.
Control Methods
• DataBind - Binds data from a Data Source to a Server Control. This method is used most often with templated data-bound Controls.
• Dispose - Frees up resources used by a Control when it is no longer needed.
• Equals (inherited from Object) - Used to determine whether 2 object instances are equal.
• FindControl - Searches the current naming container for a specific Control
• GetHashCode (inherited from Object) - Returns a type-specific hash code for an Object
• GetType (inherited from Object) - Returns a System.Type object with Type metadata regarding that Type
• HasControls - This method returns a Boolean value that indicates whether the Control contains other Controls
• RenderControl - This is the method which writes the HTML for the Control to the output stream.
• ResolveUrl - This method resolves a relative URL to an absolute URL, according to the TemplateSourceDirectory property.
• ToString (inherited from Object) - This method returns a string that represents this Object.
Control Events
Because HTTP is stateless, a Server Control has a very short lifetime. It exists from the time the Page is requested until the Page is sent to the browser, at which time the Page, and its entire contents, are disposed of. During this lifetime, the control must be re-instantiated and started first. Then Events from the client browser can be processed by the control. Finally, all controls in a Page render their HTML, and the Page does its clean-up. At any point in that LifeCycle, the developer may want to add code to do something. Therefore, the various "insertion points" for code are made available via a number of Events which fire during a Control's lifetime:
• Init - This event occurs as the control is being instantiated.
• LoadViewState - Once all controls are instantiated, the ViewState can be loaded for each control. This is handled automatically, and is used to maintain state of controls which can do so. As you should recall from my first article, ViewState is maintained on the client in a hidden form field. The Control reads the hidden "__VIEWSTATE" field's data from the form, and loads the ViewState data from the last Request into the ViewState object (Collection) on the server.
• LoadPostData (if IPostBackDataHandler is implemented for this Control) - The control processes PostBack data, and updates its properties accordingly. The LoadPostData Sub takes 2 arguments: the PostDataKey and the PostCollection. The PostDataKey (string) contains the ID of the Control which caused the PostBack. The PostDataCollection is similar to the Request.Form Collection; it contains all the data posted from the form. This Sub is used to do any post data processing that may be necessary at this point. It returns a Boolean value. If it returns True, the RaisePostDataChangedEvent is fired.
• Load - This Event Handler performs functions that are common to all requests - generally where the "meat" of your code usually goes. The controls have now been initialized. ViewState has been restored, and the state of the controls is now as it was on the client.
• RaisePostDataChangedEvent (if IPostBackDataHandler is implemented for this Control) - This event is raised when the LoadPostData Sub returns True. It is generally used to raise any events from the Control that need to be fired as a result of Post Data changing.
• RaisePostBackEvent (if IPostBackEventHandler is implemented for this Control) - This event is used by Server Controls which process PostBack events coming from the client. Notice that this event occurs just after the RaisePostDataChangedEvent. That is so that any RaisePostDataChangedEvents can be fired first, allowing other Controls to react to them in this Event Handler.
• PreRender - This Event Fires just before the SaveViewState method is called. It can be used to make any changes to the control which will still be needed for the current Request, and must be made after handling PostBack events. At this point, the Control is ready to be rendered (written) to the HTML output stream.
• SaveViewState - This Method writes the new (modified) ViewState value to the output document's hidden "__VIEWSTATE" field.
• Render - This Method writes the HTML output of the Control to the output stream.
• Dispose - Perform any final cleanup prior to unloading.
• Unload - Fires just prior to the Control unloading.
Conclusions: As HTTP is stateless, Server Controls have a very short lifetime. To the user, they see the same HTML interface with each PostBack, and it looks as if they are working with the same Controls with each Page refresh. In truth, however, Server Controls must be rebuilt with each request, and put into their last state prior to processing PostBack data, and performing any new operations.
Understanding the sequence of Events that occur within the LifeCycle of a Server Control is important to writing effective ASP.Net applications.
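A small custom-control sketch that makes the event order visible (the control name and trace messages are invented; page tracing must be enabled, and the control must be registered on a page with an <%@ Register %> directive before it will run):

using System;
using System.Web.UI;

public class LifecycleLogger : Control
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);                         // Init: the control is being instantiated
        Page.Trace.Write("LifecycleLogger", "Init");
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);                         // Load: ViewState and post data have been restored
        Page.Trace.Write("LifecycleLogger", "Load");
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);                    // PreRender: last chance to change state before SaveViewState
        Page.Trace.Write("LifecycleLogger", "PreRender");
    }

    protected override void Render(HtmlTextWriter writer)
    {
        // Render: write the control's HTML to the output stream.
        writer.Write("<span>LifecycleLogger rendered at " + DateTime.Now + "</span>");
    }
}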

Friday, December 14, 2007

Features in SQL Server 2000

1) User-Defined Functions.
User-Defined Functions (UDFs) - one or more Transact-SQL statements that can be used to encapsulate code for reuse. User-defined functions cannot make permanent changes to the data or modify database tables: you can't use INSERT, UPDATE, or DELETE statements against a permanent table inside a UDF. A UDF can change only objects local to that UDF, such as local cursors or variables. This basically means that the function can't perform any changes to a resource outside the function itself.

2) Distributed partitioned views.
Distributed partitioned views allow you to partition tables horizontally across multiple servers.
So, you can scale out one database server to a group of database servers that cooperate to provide the same performance levels as a cluster of database servers.

3) New data types.
There are new data types:
• bigint data type: - This type is an 8-byte integer type.
• sql_variant data type: - This type is a type that allows the storage of data values of different data types.
• table data type: - This type lets an application store temporary results as a table that you can manipulate by using a select statement or even action queries—just as you can manipulate any standard user table.

4) INSTEAD OF and AFTER Triggers.
There are INSTEAD OF and AFTER Triggers in SQL Server 2000. INSTEAD OF triggers are executed instead of the INSERT, UPDATE or DELETE triggering action. “AFTER” triggers are executed after the triggering actions.

5) Cascading Referential Integrity Constraints.
There are new ON DELETE and ON UPDATE clauses in the REFERENCES clause of the CREATE TABLE and ALTER TABLE statements.
The ON DELETE clause controls what actions are taken if you attempt to delete a row to which existing foreign keys point.
The ON UPDATE clause defines the actions that are taken if you attempt to update a candidate key value to which existing foreign keys point.

The ON DELETE and ON UPDATE clauses have two options:
• NO ACTION: - NO ACTION specifies that the delete/update fails with an error.
• CASCADE: - CASCADE specifies that all the rows with foreign keys pointing to the deleted/updated row are also deleted / updated.

6) XML Support.
SQL Server 2000 can use XML to insert, update, and delete values in the database, and database engine can return data as Extensible Markup Language (XML) documents.

7) Indexed Views.
Unlike standard views, in which SQL Server resolves the data-access path dynamically at execution time, the new indexed views feature lets you store views in the database just as you store tables. Indexed views, which are persistent, can significantly improve application performance by eliminating the work that the query processor must perform to resolve the views.


Rules of Normalization
Normalization is used to avoid redundancy of data and inconsistent dependencies within a table. A certain amount of Normalization will often improve performance.
First Normal Form: -
For a table to be in First Normal Form (1NF) each row must be identified, all columns in the table must contain atomic values, and each field must be unique.

Second Normal Form: -
For a table to be in Second Normal Form (2NF), it must already be in 1NF and contain no partial-key functional dependencies. In other words, a table is said to be in 2NF if it is a 1NF table where all of its columns that are not part of the key are dependent upon the whole key - not just part of it.

Third Normal Form: -
A table is considered to be in Third Normal Form (3NF) if it is already in Second Normal Form and all columns that are not part of the primary key are dependent entirely on the primary key. In other words, every column in the table must be dependent upon "the key, the whole key and nothing but the key." Eliminate columns not dependent upon the primary key of the table.

Referential Integrity: - The referential integrity of a database concerns the parent/child relationship between tables. The relationships between tables are classified as one-to-one, one-to-many, many-to-one, and many-to-many. If the child records have no parent records then they are called orphan records.

Indexing
An index is a separate table that lists in order, ascending or descending, the contents of a particular table with pointers to the records in the table. An index increases the speed at which rows are retrieved from a database. The index can consist of one column or can be a composite index composed of many columns. There are two types of indexes.
i) Clustered Indexes: - A clustered index means that the data is sorted and placed in the table in sorted order, sorted on what columns are contained in the index. Since the rows are in sorted order and an entity can exist in sorted order only once, there can be only one clustered index per table. The data rows are actually part of the index. A table should have at least one clustered index unless it is a very small table.
ii) NonClustered Indexes: - In a Nonclustered index, the data rows are not part of the index. A Nonclustered index stores pointers to the data rows in the table and is not as efficient as a clustered index but is much preferable to doing a table scan.
If you create an index on each column of a table, it improves the query performance, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (such as INSERT, UPDATE, DELETE) will become slow, as every time data changes in the table, all the indexes need to be updated. Another disadvantage is that indexes need disk space: the more indexes you have, the more disk space is used.

A View is a virtual table based on one or more tables; it stores no data of its own and exists only as a result set that is created when the view is queried.

A Trigger is a special kind of stored procedure that becomes active only when data is modified in a specified table using one or more data modification operations (Update, Insert, or Delete).

UNION combines the results of two or more queries into a single result set consisting of all the rows belonging to all queries in the union.

Joined Tables
Tables are joined for the purpose of retrieving related data from two or more tables by comparing the data in columns and forming a new table from the rows that match. The types of joins are:
i) Inner Join: An inner join is the usual join operation using a comparison operator. It displays only the rows with a match for both join tables.
ii) Outer Join: An outer join includes the left outer join, right outer join, and full outer join.
The relational operators for an outer join are:
a) Left outer join: Left join returns all the rows from the first table in the JOIN clause with NULLS for the second table’s columns if no matching row was found in the second table.
b) Right outer join: Right join returns all the rows from the second table in the JOIN clause with NULLS for the first table’s columns if no matching row was found in the first table.
c) Full outer join: When a Full Outer Join occurs and a row from either the first table or the second table does not match the selection criteria, the row is selected and the columns of the other tables are set to NULL.
iii) Cross Join: A Cross Join results in the cross product of two tables and returns the same rows as if no WHERE clause had been specified in an old-style, non-ANSI join.

Stored Procedure
A stored procedure is essentially a routine you create that will execute an SQL statement. There are four major reasons why using them can be beneficial.

1. They can dramatically increase security.
If you set up a series of Stored Procedures to handle all of your interaction to the data then this means that you can remove all the user rights on all of your tables and such. For example, say I create a stored procedure for inserting a new employee. I can remove everyone's rights to the actual EMPLOYEES table and require them to only do INSERTS via the stored procedure. I have effectively forced them to always insert data my way.

2. They can assist you in centralizing your code.
If I ever need to change the structure of that EMPLOYEES table, I don't have to worry about any applications crashing when they try to insert something new. Since all interaction is via that stored procedure, I just have to make the updates in that one stored procedure's code and nowhere else.

3. They are executed on the Server's machine.
Because they actually reside on the server's machine, they will use the process resources there as well. Generally your database server will be much more 'beefy' as far as processor and memory resources go than your client machines.

4. They are precompiled.
The database can convert your stored procedure into binary code and execute it as one command rather than parse the SQL statement through an interpreter as if it was text. Execution speeds can be vastly improved by this alone.


Q. What are the types of User-Defined Functions available in SQL Server 2000?
There are three types of UDF in SQL Server 2000:
 Scalar functions: returns one of the scalar data types. Text, ntext, image, cursor or timestamp data types are not supported.
 Inline table-valued functions: returns a variable of data type table whose value is derived from a single SELECT statement.
 Multi-statement table-valued functions: return a table that was built with many TRANSACT-SQL statements.

Q. What is the difference between User-Defined functions and Stored Procedures?
 A stored procedure may or may not return values, whereas a UDF always returns a value.
 The function can't perform any actions that have side effects. This basically means that the function can't perform any changes to a resource outside the function itself. You can't create a procedure that modifies data in a table, performs cursor operations on cursors that aren't local to the procedure, sends email, creates database objects, or generates a result set that is returned to the user.
 SELECT statements that return values to the user aren't allowed. The only allowable SELECT statements assign values to local variables.
 Cursor operations including DECLARE, OPEN, FETCH, CLOSE, and DEALLOCATE can all be performed within the function. FETCH statements in the function can't be used to return data to the user. FETCH statements in functions can be used only to assign values to local variables using the INTO keyword. This limitation is also minor because you can populate a table variable within a cursor and then return the table to the user.
 UDFs can return only one rowset to the user, whereas stored procedures can return multiple rowsets.
 UDFs cannot call stored procedures (except extended procedures), whereas stored procedures can call other procedures.
 UDFs also cannot execute dynamically constructed SQL statements.
 UDFs cannot make use of temporary tables. As an alternative, you are allowed to use table variables within a UDF. Recall however, that temporary tables are somewhat more flexible than table variables. The latter cannot have indexes (other than a primary and unique key); nor can a table variable be populated with an output of a stored procedure.
 RAISERROR statement cannot be used within a UDF. In fact, you can't even check the value of the @@ERROR global variable within a function. If you encounter an error, UDF execution simply stops, and the calling routine fails. You are allowed to write a message to the Windows error log with xp_logevent if you have permission to use this extended procedure.

Q. Difference between a "where" clause and a "having" clause
The HAVING clause is used only with group (aggregate) functions and filters groups after a GROUP BY, whereas the WHERE clause filters individual rows before grouping and cannot contain aggregate functions.

Q. What is the basic difference between a join and a union?
A join selects columns from 2 or more tables. A union selects rows.

Q. What are foreign keys?
These are attributes of one table that have matching values in a primary key in another table, allowing for relationships between tables.

Q. What is a synonym? How is it used?
A synonym is used to reference a table or view by another name. The other name can then be written in the application code pointing to test tables in the development stage and to production entities when the code is migrated. The synonym is linked to the AUTHID that created it.

Q. What is a Cartesian product?
A Cartesian product results from a faulty query, typically one with a missing join condition. It is a row in the results for every combination of rows in the joined tables.

Q. What is denormalization and when would you go for it?
As the name indicates, denormalization is the reverse process of normalization. It's the controlled introduction of redundancy into the database design. It helps improve the query performance as the number of joins could be reduced.

Q. What's the difference between a primary key and a unique key?
Both primary and unique keys enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULL values, but a unique key allows one NULL only.

Q. What are user-defined datatypes and when should you go for them?
User-defined datatypes let you extend the base SQL Server datatypes by providing a descriptive name and format to the database. For example, in your database there is a column called Flight_Num which appears in many tables. In all these tables it should be varchar (8). In this case you could create a user-defined datatype called Flight_num_type of varchar (8) and use it across all your tables.

Q. Define candidate key, alternate key, and composite key.
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.

Q. What are defaults? Is there a column to which a default can't be bound?
A default is a value that will be used by a column, if no value is supplied to that column while inserting data. IDENTITY columns and timestamp columns can't have defaults bound to them.

Q. What is a transaction and what are ACID properties?
A transaction is a logical unit of work in which, all the steps must be performed or none. ACID stands for Atomicity, Consistency, Isolation, and Durability.

Q. Explain different isolation levels
An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. Here are the other isolation levels:

Read Uncommitted: - A transaction can read any data, even if it is being modified by another transaction. This is the least safe isolation level but allows the highest concurrency.

Read Committed: - A transaction cannot read data that is being modified by another transaction that has not committed. This is the default isolation level in Microsoft SQL Server.

Repeatable Read: - Data read by a current transaction cannot be changed by another transaction until the current transaction finishes. Any type of new data can be inserted during a transaction.

Serializable: - Data read by the current transaction cannot be changed by another transaction until the current transaction finishes. No new data can be inserted that would affect the current transaction.

Q. What type of Index will get created with: CREATE INDEX myIndex ON myTable (myColumn)?
Non-clustered index; important thing to note: By default a clustered index gets created on the primary key, unless specified otherwise.

Q. What's the difference between DELETE TABLE and TRUNCATE TABLE commands?
DELETE is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it won't log the deletion of each row; instead it logs the deallocation of the data pages of the table, which makes it faster. (TRUNCATE TABLE can still be rolled back when it is issued inside a transaction.) TRUNCATE TABLE resets the identity value back to its seed, whereas DELETE retains the current identity value. TRUNCATE TABLE cannot activate a trigger.

Q. What are constraints? Explain different types of constraints.
Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules, or defaults. Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY

Q. What is RAID and what are different types of RAID configurations?
RAID stands for Redundant Array of Inexpensive Disks, used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance.

Q. What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.
A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

Q. What is database replication? What are the different types of replication you can set up in SQL Server?
Replication is the process of copying/moving data between databases on the same or different servers. SQL Server supports the following types of replication scenarios:
• Snapshot replication
• Transactional replication (with immediate updating subscribers, with queued updating subscribers)
• Merge replication

Q. What are cursors? Explain different types of cursors. What are the disadvantages of cursors? How can you avoid cursors?
A cursor is a database object used to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time.

Cursor Type --------- Description
ForwardOnly ---------You can only scroll forward through records. This is the default cursor type.
Static ---------A static copy of a set of records that you can use to find data or generate reports. Additions, changes, or deletions by other users are not visible. The Recordset is fully navigable, forward and backward.
Dynamic ---------Additions, changes, and deletions by other users are visible, and all types of movement through the Recordset are allowed.
Keyset ---------Like the dynamic cursor type, except that you can’t see records that other users add. Deletions and other modifications made by other users are still visible.

Disadvantages of cursors: Each time you fetch a row from the cursor, it results in a network roundtrip, whereas a normal SELECT query makes only one roundtrip, however large the resultset is. Cursors are also costly because they require more resources and temporary storage (resulting in more IO operations). Further, there are restrictions on the SELECT statements that can be used with some types of cursors.
Another situation in which developers tend to use cursors: you need to call a stored procedure when a column in a particular row meets a certain condition. You don't have to use cursors for this. This can be achieved using a WHILE loop, as long as there is a unique key to identify each row.

Q. How will you copy the structure of a table without copying the data?
In SQL Server: SELECT * INTO EMPTEMP FROM EMP WHERE 1 = 2 (the WHERE clause that matches no rows copies the column structure without any data). The Oracle equivalent is CREATE TABLE EMPTEMP AS SELECT * FROM EMP WHERE 1 = 2.

Q. What are the different ways of moving data/databases between servers and databases in SQL Server?
There are lots of options available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, logshipping, INSERT...SELECT, SELECT...INTO, creating INSERT scripts to generate data.


Q. How many triggers you can have on a table? How to invoke a trigger on demand?
You can create multiple triggers for each action. But in SQL Server 7.0 there's no way to control the order in which the triggers fire. In SQL Server 2000 you can specify which trigger fires first or fires last using sp_settriggerorder.

Q. What is an extended stored procedure? Can you instantiate a COM object by using T-SQL?
An extended stored procedure is a function within a DLL (written in a programming language like C, C++ using Open Data Services (ODS) API) that can be called from T-SQL, just the way we call normal stored procedures using the EXEC statement. Yes, you can instantiate a COM object from T-SQL by using sp_OACreate stored procedure.

Q. What is a self join? Explain it with an example.
A self join is just like any other join, except that two instances of the same table are joined in the query. Here is an example: an Employees table contains rows for normal employees as well as their managers, so to find out the manager of each employee, you need a self join.

CREATE TABLE emp (empid int, mgrid int, empname char(10) )
INSERT emp SELECT 1,2,'Vyas'
INSERT emp SELECT 2,3,'Mohan'
INSERT emp SELECT 3,NULL,'Shobha'
INSERT emp SELECT 4,2,'Shridhar'
INSERT emp SELECT 5,2,'Sourabh'

SELECT t1.empname [Employee], t2.empname [Manager]
FROM emp t1, emp t2 WHERE t1.mgrid = t2.empid

Here's an advanced query using a LEFT OUTER JOIN that even returns the employees without managers

SELECT t1.empname [Employee], COALESCE(t2.empname, 'No manager') [Manager]
FROM emp t1 LEFT OUTER JOIN emp t2 ON t1.mgrid = t2.empid


SQL Server Limitations
Object ---------Maximum sizes/numbers
Bytes per text, ntext or image column--------- 2 GB-2
Clustered indexes per table--------- 1
Columns per index--------- 16
Columns per foreign key--------- 16
Columns per primary key--------- 16
Columns per base table--------- 1,024
Columns per SELECT statement--------- 4,096
Columns per INSERT statement--------- 1,024
Connections per client--------- Maximum value of configured connections
Database size--------- 1,048,516 TB
Databases per instance of SQL Server--------- 32,767
Files per database--------- 32,767
File size (data) --------- 32 TB
Identifier length (in characters) --------- 128
Locks per connection--------- Max. locks per server
Nested stored procedure levels--------- 32
Nested subqueries--------- 32
Nested trigger levels--------- 32
Nonclustered indexes per table--------- 249
Objects in a database--------- 2,147,483,647
Parameters per stored procedure--------- 2,100
REFERENCES per table--------- 253
Rows per table--------- Limited by available storage
Tables per database--------- Limited by number of objects in a database
Tables per SELECT statement--------- 256
Triggers per table--------- Limited by number of objects in a database
UNIQUE indexes or constraints per table--------- 249 nonclustered and 1 clustered

Thursday, December 13, 2007

Sample C# Questions Online Tests

public static void Main() {
Coordinates c1 = new Coordinates();
Coordinates c2 = new Coordinates();
int x = 30;
c1.X = 30;
c2.X = 30;
Test(ref c1, c2, x);
Console.WriteLine("C1.X=" + c1.X.ToString() + ", C2.X=" + c2.X.ToString() + ", X=" + x.ToString());
Console.Read();
}
public static void Test(ref Coordinates Coord1, Coordinates Coord2, int X) {
Coord1 = new Coordinates();
Coord2 = new Coordinates();
Coord1.X = 0;
Coord2.X = 0;
X = 0;
}


What is the console output for the above sample code?
1 C1.X=30, C2.X=0, X=00
2 C1.X=0, C2.X=0, X=0
3 C1.X=30, C2.X=30, X=30
4 C1.X=0, C2.X=0, X=30
5 C1.X=0, C2.X=30, X=30




Which one of the following describes the OO concept of Aggregation?
1 A system of objects that are not related
2 A system of objects that are built using each other
3 A system of objects that define each other
4 A system of objects that implement each other
5 A system of objects inherited from each other






public interface ILocation {}

struct Point : ILocation {
public int Y;
public int X;

public Point() {
Y = 0;
X = 0;
}
public Point(int x, int y) {
Y = y;
X = x;
}
}

public void MovePoint() {
Point pt = new Point();
pt.X = 300;
pt.Y = 300;
}



Why does the sample code above NOT compile?
1 Structs cannot implement interfaces.
2 You can only initialize a struct variable using the "new" statement.
3 You cannot initialize a struct variable using the "new" statement.
4 Structs can only have a parameterless constructor.
5 Structs cannot have an explicit parameterless constructor.

Application Architecture for .NET

Binding Data to DataGrid Control (ASP.Net)

Bind the DataGrid control to the DataSet. This makes the control automatically display all of the data in rows and columns. The user can add, edit, and delete records using the DataGrid with no more work from you. You can use the DataGrid's properties if you want to restrict access. For example, you can make the DataGrid disallow editing.

Private Const SELECT_STRING As String = _
"SELECT * FROM Contacts ORDER BY LastName, FirstName"
Private Const CONNECT_STRING As String = _
"Data Source=Bender\NETSDK;Initial " & _
"Catalog=Contacts;User Id=sa"

' The DataSet that holds the data.
Private m_DataSet As DataSet

' Load the data.
Private Sub Form1_Load(ByVal sender As Object, ByVal e As _
System.EventArgs) Handles MyBase.Load
Dim data_adapter As SqlDataAdapter

' Create the SqlDataAdapter.
data_adapter = New SqlDataAdapter(SELECT_STRING, _
CONNECT_STRING)

' Map Table to Contacts.
data_adapter.TableMappings.Add("Table", "Contacts")

' Fill the DataSet.
m_DataSet = New DataSet()
data_adapter.Fill(m_DataSet)

' Bind the DataGrid control to the Contacts DataTable.
dgContacts.SetDataBinding(m_DataSet, "Contacts")
End Sub
Now use the SqlDataAdapter's Update method to update the database.

' Save any changes to the data.
Private Sub Form1_Closing(ByVal sender As Object, ByVal e _
As System.ComponentModel.CancelEventArgs) Handles _
MyBase.Closing
If m_DataSet.HasChanges() Then
Dim data_adapter As SqlDataAdapter
Dim command_builder As SqlCommandBuilder

' Create the DataAdapter.
data_adapter = New SqlDataAdapter(SELECT_STRING, _
CONNECT_STRING)

' Map Table to Contacts.
data_adapter.TableMappings.Add("Table", "Contacts")

' Make the CommandBuilder generate the
' insert, update, and delete commands.
command_builder = New _
SqlCommandBuilder(data_adapter)

' Uncomment this code to see the INSERT,
' UPDATE, and DELETE commands.
'Debug.WriteLine("*** INSERT ***")
'Debug.WriteLine(command_builder.GetInsertCommand.CommandText)
'Debug.WriteLine("*** UPDATE ***")
'Debug.WriteLine(command_builder.GetUpdateCommand.CommandText)
'Debug.WriteLine("*** DELETE ***")
'Debug.WriteLine(command_builder.GetDeleteCommand.CommandText)

' Save the changes.
data_adapter.Update(m_DataSet)
End If
End Sub

Wednesday, December 12, 2007

Creating Custom Delegates and Events in C#


Controls used on your forms have events associated with them that you can respond to with event handlers in your application. C# makes it easy to have the same behavior in other parts of your application by creating your own delegates and events, and raising the events based on your program logic.

There are three pieces that are related and must be included to make our custom events work. They are a delegate, an event, and one or more event handlers. In this article we will examine delegates and events, how they relate, and how to use them to create custom event handling for your applications. The code to implement the delegates and events is also shown. Details about the code are included in comments.
Delegates
A delegate is a class that can contain a reference to an event handler function that matches the delegate signature. It provides the object oriented and type-safe functionality of a function pointer. The .NET runtime environment implements the delegate; all you need to do is declare one with the desired signature. It is customary for delegates to have two arguments, an object type named sender and an EventArgs type named e.

When you double click on a control in the forms designer of Visual Studio.NET, you are provided with a stub for an event handler that matches this signature. This signature is not required and you may want a different signature for your custom events. When a delegate is created and added to an event invocation list, the event handler is called when the event is raised. Multiple delegates can be added to a single event invocation list and will be called in the order they were added.
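To make the customary signature concrete, here is a small, self-contained sketch (the Thermostat and Display names are invented for illustration, not taken from the article):

using System;

// A delegate declared with the customary (object sender, EventArgs e) signature.
public delegate void TemperatureChangedHandler(object sender, EventArgs e);

public class Thermostat
{
    // An event based on the delegate type declared above.
    public event TemperatureChangedHandler TemperatureChanged;

    public void CheckTemperature()
    {
        // Raise the event only if at least one handler has been added.
        if (TemperatureChanged != null)
            TemperatureChanged(this, EventArgs.Empty);
    }
}

public class Display
{
    // Any method matching the delegate signature can serve as an event handler.
    public void OnTemperatureChanged(object sender, EventArgs e)
    {
        Console.WriteLine("Temperature changed.");
    }
}

Wiring it up is then simply: thermostat.TemperatureChanged += new TemperatureChangedHandler(display.OnTemperatureChanged);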
Events
Events are notifications, or messages, from one part of an application to another that something interesting has happened. When an event is raised, all the delegates in the invocation list are invoked in the order they were added. Each delegate contains a reference to an event handler, and each event handler is executed. The sender of the event does not know which part of the application will handle the event, or even if it will be handled.

It just sends the notification and is finished with its responsibilities. It is often necessary to provide some information about the event when the notification is sent. This information is normally included in the EventArgs argument to the event handler. You will probably want to develop a class derived from EventArgs to send information for custom events.
Suppose you are creating an application for a bank to manage accounts and want to raise an event when a transaction would cause the balance to fall below some minimum balance. There is no button to click when this happens, and you would not want to depend on a person noticing the balance is low and performing some action to indicate it to the rest of the application.

A low balance happens because of a withdrawal of some amount that reduces the account balance below the minimum required balance. You want the notification to be automatic, so appropriate action can be taken by some other part of the application without user intervention. This is where custom delegates and events are useful. The C# code below will demonstrate how to use delegates and events for this example.
First, create a class for our event arguments that is derived from EventArgs. We will include properties for the account number, current balance, required minimum balance, a message describing the event, and a transaction ID. You can include any information that would be useful for your application.
public class AccountBalanceEventArgs : EventArgs
{
private string acctnum;
public string AccountNumber
{
get
{
return acctnum;
}
}
private decimal balance;
public decimal AccountBalance
{
get
{
return balance;
}
}
private decimal minbal;
public decimal MinimumBalance
{
get
{
return minbal;
}
}
private string msg;
public string Message
{
get
{
return msg;
}
}
private int transID;
public int TransactionID
{
get
{
return transID;
}
}
// AccountBalanceEventArgs constructor
public AccountBalanceEventArgs(string AcctNum, decimal CurrentBalance,
decimal RequiredBalance, string MessageText, int transactionID)
{
acctnum = AcctNum;
balance = CurrentBalance;
minbal = RequiredBalance;
msg = MessageText;
transID = transactionID;
}
}
Now create the delegate. Our delegate will have a return type of void and take an instance of our AccountBalanceEventArgs class as the only argument. The delegate does not have to be declared inside a class. All our event handlers will have the same signature as this delegate. In other words, they will return void and take a single AccountBalanceEventArgs argument.
public delegate void AccountBalanceDelegate(AccountBalanceEventArgs e);
Our next task is to create a class containing one or more methods that will raise the event if the correct conditions exist in the application logic. In this example, an instance of Account will raise the AccountBalanceLow event when a transaction, if completed, would cause the current balance of the account to fall below the required minimum balance. The event is included in the class.
public class Account
{
// Create an event for the Account class
// It has the form public event delegateName eventName
public event AccountBalanceDelegate AccountBalanceLow;
private string acctnum;
private decimal balance;
private decimal minBalance;
// This method could cause the balance to fall below the required minimum.
// We will raise the event if the balance is not high enough to withdraw
// amount without falling below the required minimum balance.
// transID is some extra information about which transaction caused
// the event to be raised, so it will be included in the event arguments.
public void Withdraw(decimal amount, int transID)
{
// if the transaction would reduce the balance below the minimum,
// raise the event
if ((balance - amount) < minBalance)
{
DispatchAccountBalanceLowEvent(transID);
}
else
{
// everything is ok, so reduce the balance and no event is raised
balance -= amount;
}
}
// This method adds an event handler (delegate) to the event invocation list.
// Any method that returns void and takes a single AccountBalanceEventArgs
// argument can subscribe to this event and receive notification messages about an
// AccountBalanceLow event.
public void SubscribeAccountBalanceLowEvent(AccountBalanceDelegate eventHandler)
{
AccountBalanceLow += eventHandler;
}
// This method removes an event handler (delegate) from the event invocation list.
// Any method that has already subscribed to the event can unsubscribe.
public void UnsubscribeAccountBalanceLowEvent(AccountBalanceDelegate eventHandler)
{
AccountBalanceLow -= eventHandler;
}
// This method raises the event, which causes all the delegates in the event
// invocation list to execute their event handlers. The event handlers are executed
// in the order the delegates were added.
private void DispatchAccountBalanceLowEvent(int transaction)
{
// make sure there are some delegates in the invocation list
if (AccountBalanceLow != null)
{
AccountBalanceLow(new AccountBalanceEventArgs(
acctnum, balance, minBalance,
"Withdrawal Failed: Account balance would be below minimum required",
transaction));
}
}
// the rest of the Account class implementation is omitted
}
Our final piece is to create a class with methods to handle the events. Name it whatever is meaningful for your application. Remember, the methods that will be event handlers for our event must have a signature that matches the delegate.
public class EventHandlerClass
{
// This method will be an event handler. It can be named anything you want,
// but it must have the same signature as the delegate AccountBalanceDelegate
// declared above.
public void HandleAccountLowEvent(AccountBalanceEventArgs e)
{
// do something useful here
// e.AccountNumber, e.AccountBalance, e.MinimumBalance,
// e.Message, and e.TransactionID are all available to use
}
// This method will be another event handler. It can be named anything you
// want, but it must have the same signature as the delegate
// AccountBalanceDelegate declared above.
public void HandleAccountLowEvent2(AccountBalanceEventArgs e)
{
// do something useful here
// e.AccountNumber, e.AccountBalance, e.MinimumBalance,
// e.Message, and e.TransactionID are all available to use
}
// the rest of the EventHandlerClass class implementation is omitted.
}
We now have all the code necessary to implement our custom event and have it handled by our event handlers. All we need is to tie everything together. Somewhere in your code, where it makes sense for your application, you would create instances of Account and EventHandlerClass, subscribe to the event notification, make a withdrawal and do something useful if the event is received.
// somewhere in your code create instances of the Account class
// and the EventHandlerClass class...
EventHandlerClass handler = new EventHandlerClass();
Account acct = new Account();
acct.SubscribeAccountBalanceLowEvent(
new AccountBalanceDelegate(handler.HandleAccountLowEvent));
acct.SubscribeAccountBalanceLowEvent(
new AccountBalanceDelegate(handler.HandleAccountLowEvent2));
// if the next line causes the current balance to fall below the minimum
// balance, the event will be raised and handler.HandleAccountLowEvent will be
// called followed by handler.HandleAccountLowEvent2
acct.Withdraw(1000.00M, 1);
acct.UnsubscribeAccountBalanceLowEvent(
new AccountBalanceDelegate(handler.HandleAccountLowEvent2));
// now only handler.HandleAccountLowEvent will be called if the event is raised
acct.Withdraw(1000.00M, 2);
acct.UnsubscribeAccountBalanceLowEvent(
new AccountBalanceDelegate(handler.HandleAccountLowEvent));
// now no event handlers will be called
acct.Withdraw(1000.00M, 3);
If we needed to add another event to our code, most of the work is already finished. For example, if we wanted to add an AccountBalanceHigh event, we could use the same delegate and AccountBalanceEventArgs class.

We would need to declare the AccountBalanceHigh event, add the appropriate subscribe, unsubscribe and dispatch methods, create the event handlers, and raise the event when the balance for an account gets too high. If you look back over the code you can see it would take more time to implement the event handlers than to add the new event.
It is not difficult to implement our own delegates and events that allow us to send a notification that something interesting has happened from one part of our application to another. C# provides all the necessary tools to include this capability with a minimum of effort.

Sunday, December 9, 2007

Features of C# 2.0

The list of .NET 2.0 and C# 2.0 new features is extracted from the appendix of the book Practical .NET2 and C#2. All mentioned features are thoroughly covered in the book.
Contents
• Assembly
• Application localization
• Application build process
• Application configuration
• Application deployment
• CLR
• Delegate
• Threading/Synchronization
• Security
• Reflection/Attribute
• Interoperability
• C# 2.0
• Exceptions
• Collections
• Debugging
• Base classes
• IO
• Windows Forms 2.0
• ADO.NET 2.0
• ADO.NET 2.0: SQL Server data provider (SqlClient)
• XML
• .NET Remoting
• ASP.NET 2.0
• Web Services
Assembly
The use of the AssemblyKeyFile attribute to sign an assembly is to be avoided. It is now preferred that you use the /keycontainer and /keyfile options of csc.exe, or the new project properties of Visual Studio 2005.
The new System.Runtime.CompilerServices.InternalsVisibleToAttribute attribute allows you to specify assemblies which have access to non-public types within the assembly to which you apply the attribute (kind of 'assemblies friendship').
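For illustration, the attribute is typically applied at the assembly level like this (the friend assembly name below is a placeholder):

using System.Runtime.CompilerServices;

// Usually placed in AssemblyInfo.cs of the assembly whose internal members
// should become visible to the named friend assembly.
[assembly: InternalsVisibleTo("MyCompany.MyProject.Tests")]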
The ildasm.exe 2.0 tool offers, by default, the possibility of obtaining statistics in regards to the byte size of each section of an assembly and the display of its metadata. With ildasm.exe 1.x, you needed to use the /adv command line option.
Application localization
The resgen.exe tool can now generate C# or VB.NET code which encapsulates access to resources in a strongly typed manner.
Application build process
The .NET platform is now delivered with a new tool called msbuild.exe. This tool is used to build .NET applications and is used by Visual Studio 2005, but you can use it to launch your own build scripts.
Application configuration
The .NET 2.0 platform features a new, strongly typed management of your configuration parameters. Visual Studio 2005 also contains a configuration parameter editor which generates the code needed to take advantage of this feature.
Application deployment
The new deployment technology named ClickOnce allows a fine management of the security, updates, as well as on-demand installation of applications. Visual Studio 2005 offers some practical facilities to take advantage of this technology.
CLR
A major bug with version 1.x of the CLR which made it possible to modify signed assemblies has been addressed in version 2.
The System.GC class offers two new methods named AddMemoryPressure() and RemoveMemoryPressure() which allow you to give the GC an indication in regards to the amount of unmanaged memory held. Another method CollectionCount(int generation) allows you to know the number of collections applied to the specified generation.
New features have been added to the ngen.exe tool to support assemblies using reflection, and to automate the update of the compiled version of an assembly when one of its dependencies has changed.
The ICLRRuntimeHost interface used from unmanaged code to host the CLR replaces the ICorRuntimeHost interface. It allows access to a new API permitting the CLR to delegate a certain number of core responsibilities such as the loading of assemblies, thread management, or the management of memory allocations. This API is currently only used by the runtime host for SQL Server 2005.
Three new mechanisms named Constrained Execution Region (CER), Critical Finalizer, and Critical Region (CR) allow advanced developers to increase the reliability of applications such as SQL Server 2005 which are likely to deal with a shortage of system resources.
A memory gate mechanism can be used to evaluate, before an operation, if sufficient memory is available.
You can now quickly terminate a process by calling the FailFast() static method which is part of the System.Environment class. This method bypasses certain precautions such as the execution of finalizers or the pending finally blocks.
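A minimal illustration (the message text is arbitrary):

using System;

class FatalShutdownExample
{
    static void Main()
    {
        // Terminates the process immediately; finally blocks and finalizers
        // are bypassed, and the message is written to the application event log.
        Environment.FailFast("Unrecoverable state detected");
    }
}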
Delegate
A delegate can now reference a generic method or a method that is part of a generic type. We then see appearing the notion of generic delegates.
With the new overloads of the Delegate.CreateDelegate(Type, Object, MethodInfo) method, it is now possible to bind a static method and its first argument to a delegate. Calls to the delegate then do not need to supply this first argument, which makes them look like instance method calls.
In addition, the invocation of methods through the use of delegates is now more efficient.
Threading/Synchronization
You can easily pass information to a new thread that you create by using the new ParameterizedThreadStart delegate. Also, new constructors of the Thread class allow you to set the maximum size of the thread stack in bytes.
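A small sketch of both points (the worker method and stack size are illustrative choices):

using System;
using System.Threading;

class ThreadStartExample
{
    static void Worker(object state)
    {
        Console.WriteLine("Received: " + state);
    }

    static void Main()
    {
        // ParameterizedThreadStart lets Start() pass an object to the thread;
        // the second constructor argument caps the thread stack at 256 KB.
        Thread worker = new Thread(new ParameterizedThreadStart(Worker), 256 * 1024);
        worker.Start("some data");
        worker.Join();
    }
}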
The Interlocked class offers new methods and allows you to deal with more types, such as IntPtr or double.
The WaitHandle class offers a new static method named SignalAndWait(). In addition, all classes deriving from WaitHandle offer a new static method named OpenExisting().
The EventWaitHandle class can be used instead of its subclasses AutoResetEvent and ManualResetEvent. In addition, it allows you to name an event and thus share it amongst multiple processes.
The new class Semaphore allows you take advantage of Win32 semaphores from your managed code.
The new SetMaxThreads() method of the ThreadPool class allows you to modify the maximum number of threads within the CLR thread pool from managed code.
The .NET 2.0 framework offers new classes which allow you to capture the execution context of the current thread and propagate it to another thread.
Security
The System.Security.Policy.Gac class allows the representation of a new type of evidence based on the presence of an assembly in the GAC.
The following new permission classes have been added: System.Security.Permissions.KeyContainerPermission, System.Net.NetworkInformation.NetworkInformationPermission, System.Security.Permissions.DataProtectionPermission, System.Net.Mail.SmtpPermission, System.Data.SqlClient.SqlNotificationPermission, System.Security.Permissions.StorePermission, System.Configuration.UserSettingsPermission, System.Transactions.DistributedTransactionPermission, and System.Security.Permissions.GacIdentityPermission.
The IsolatedStorageFile class presents the following new methods: GetUserStoreForApplication(), GetMachineStoreForAssembly(), GetMachineStoreForDomain(), and GetMachineStoreForApplication().
The .NET 2.0 framework allows you to launch a child process within a different security context than the parent process.
The .NET 2.0 framework offers new types within the System.Security.Principal namespace allowing the representation and manipulation of Windows security identifiers.
The .NET 2.0 framework presents new types within the System.Security.AccessControl namespace to manipulate Windows access control settings.
The .NET 2.0 framework offers new hashing methods within the System.Security.Cryptography namespace.
The .NET 2.0 framework offers several classes giving access to the functionality offered by the Windows Data Protection API (DPAPI).
The System.Configuration.Configuration class allows the easy management of the application configuration file. In particular, you can use it to encrypt your configuration data.
The .NET 2.0 framework offers new types within the System.Security.Cryptography.X509Certificates and System.Security.Cryptography.Pkcs namespaces which are specialized for the manipulation of X.509 and CMS/Pkcs7 certificates.
The new namespace named System.Net.Security offers the new classes SslStream and NegotiateStream which allow the use of the SSL, NTLM, and Kerberos security protocols to secure data streams.
Reflection/Attribute
You now have the possibility of loading an assembly in reflection-only mode. Also, the AppDomain class offers a new event named ReflectionOnlyAssemblyResolve triggered when the resolution of an assembly fails in the reflection-only context.
The .NET 2.0 framework introduces the notion of conditional attributes. Such an attribute is taken into consideration by the C# 2.0 compiler only when a certain symbol is defined.
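As a rough illustration (the attribute and symbol names below are invented), a conditional attribute might look like this:

using System;
using System.Diagnostics;

// Because of the Conditional attribute below, applications of this attribute
// are only emitted into metadata when the REVIEW symbol is defined at compile time.
[Conditional("REVIEW")]
[AttributeUsage(AttributeTargets.All)]
public class ReviewCommentAttribute : Attribute
{
    private string comment;
    public ReviewCommentAttribute(string comment) { this.comment = comment; }
}

[ReviewComment("Check thread safety")] // kept only when REVIEW is defined
public class OrderProcessor
{
}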
Interoperability
The notion of function pointers and delegates are now interchangeable using the new GetDelegateForFunctionPointer() and GetFunctionPointerForDelegate() methods of the Marshal class.
The HandleCollector class allows you to supply to the garbage collector an estimate on the number of Windows handles currently held.
The new SafeHandle and CriticalHandle classes allow to harness Windows handles more safely than with the IntPtr class.
The tlbimp.exe and tlbexp.exe tools present a new option named /tlbreference which allow the explicit definition of a type library without having to go through the registry. This allows the creation of compilation environments which are less fragile.
Visual Studio 2005 offers features to take advantage of the reg-free COM technology of Windows XP within a .NET application. This technology allows the use of a COM class without needing to register it into the registry.
Structures related to COM technology such as BINDPTR, ELEMDESC, and STATDATA have been moved from the System.Runtime.InteropServices namespace to the new System.Runtime.InteropServices.ComTypes namespace. This namespace contains new interfaces which redefine certain standard COM interfaces such as IAdviseSink or IConnectionPoint.
The new namespace named System.Runtime.InteropServices contains new interfaces such as _Type, _MemberInfo, or _ConstructorInfo which allow unmanaged code to have access to reflection services. Of course, the related managed classes (Type, MemberInfo, ConstructorInfo...) implement these interfaces.
C# 2.0
Undoubtedly, the highlight feature in .NET 2.0 and C# 2.0 is generics.
C# 2.0 allows the declaration of anonymous methods (which can be seen as closures).
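A brief illustrative snippet (the delegate name and values are arbitrary):

using System;
using System.Collections.Generic;

class AnonymousMethodExample
{
    delegate int Transformer(int x);

    static void Main()
    {
        // An anonymous method assigned directly to a delegate variable.
        Transformer square = delegate(int x) { return x * x; };
        Console.WriteLine(square(5)); // 25

        // Anonymous methods are commonly used for inline predicates.
        List<int> numbers = new List<int>(new int[] { 1, 2, 3, 4, 5 });
        List<int> evens = numbers.FindAll(delegate(int n) { return n % 2 == 0; });
        Console.WriteLine(evens.Count); // 2
    }
}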
C# 2.0 presents a new syntax to define iterators.
The csc.exe compiler offers the following new options /keycontainer, /keyfile, /delaysign, /errorreport and /langversion.
C# 2.0 brings forth the notions of namespace alias qualifier, of global:: qualifier, and of external alias to avoid certain identifier conflicts.
C# 2.0 introduces the new compiler directives #pragma warning disable and #pragma warning restore.
The C# 2.0 compiler is now capable of inferring a delegate type during the creation of a delegate object. This makes source code more readable.
The .NET 2.0 framework introduces the notion of nullable types which can be exploited through a special C# 2.0 syntax.
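For example (illustrative values only):

using System;

class NullableExample
{
    static void Main()
    {
        // int? is shorthand for System.Nullable<int>.
        int? balance = null;
        Console.WriteLine(balance.HasValue);   // False

        balance = 100;
        Console.WriteLine(balance.Value);      // 100

        // The ?? operator supplies a default when the value is null.
        int? missing = null;
        int effective = missing ?? -1;
        Console.WriteLine(effective);          // -1
    }
}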
C# 2.0 now allows you to spread the definition of a type across multiple source files within the same module. This new feature is called partial type.
C# 2.0 allows the assignment of a different visibility to the accessor of a property or indexer.
C# 2.0 allows the definition of static classes.
C# 2.0 now allows the definition of an array field with a fixed number of primitive elements within a structure (a fixed-size buffer).
Visual Studio 2005 intellisense feature now uses the XML information contained within /// comments.
Visual Studio 2005 allows you to build UML-like class diagrams in-sync with your code.
Exceptions
The SecurityException class and Visual Studio 2005 have been improved to allow you to more easily test and debug your mobile code.
The Visual Studio 2005 debugger offers a practical wizard to obtain a complete set of information relating to an exception.
Visual Studio 2005 allows you to be notified when a problematic event known by the CLR occurs. These events sometime provoke managed exceptions.
Collections
The whole set of collection types within the .NET framework have been revised in order to account for generic types. Here is a comparison chart between the System.Collections and System.Collections.Generic namespaces.
System.Collections.Generic --------- System.Collections
Comparer<T> --------- Comparer
Dictionary<K,V> --------- Hashtable
List<T>, LinkedList<T> --------- ArrayList
Queue<T> --------- Queue
SortedDictionary<K,V>, SortedList<K,V> --------- SortedList
Stack<T> --------- Stack
ICollection<T> --------- ICollection
IComparable<T> --------- System.IComparable
IComparer<T> --------- IComparer
IDictionary<K,V> --------- IDictionary
IEnumerable<T> --------- IEnumerable
IEnumerator<T> --------- IEnumerator
IList<T> --------- IList
The System.Array class has no generic equivalent and is still relevant. Indeed, since the beginning of .NET, the collection model proposed by this class supports a certain level of genericity. It presents new methods such as Resize<T>(ref T[] array, int newSize), ConstrainedCopy(...), and AsReadOnly<T>(T[] array).
Debugging
The System.Diagnostics namespace provides new attributes DebuggerDisplayAttribute, DebuggerBrowsable, DebuggerTypeProxyAttribute, and DebuggerVisualizerAttribute which allow you to customize the display of the state of your objects while debugging.
.NET 2.0 allows indicating through attributes the assemblies, modules, or zones of code that you do not wish to debug. This feature is known as Just My Code.
C# 2.0 programmers now have access to the Edit and Continue feature allowing them to modify their code while debugging it.
.NET 2.0 presents the new enumeration named DebuggableAttribute.DebuggingModes which is a set of binary flags on the debugging modes we wish to use.
Base classes
The primitive types (integer, boolean, floating point numbers) now expose a method named TryParse() which allows you to parse a value within a string without raising an exception in the case of failure.
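For instance (illustrative input strings):

using System;

class TryParseExample
{
    static void Main()
    {
        int result;
        // Returns false instead of throwing a FormatException on bad input.
        if (int.TryParse("123", out result))
            Console.WriteLine("Parsed: " + result);

        if (!int.TryParse("not a number", out result))
            Console.WriteLine("Could not parse");
    }
}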
The .NET 2.0 framework offers several implementations derived from the System.StringComparer abstract class which allow you to compare strings in a culture- and case-sensitive manner.
The new System.Diagnostics.Stopwatch class is provided especially to accurately measure elapsed time.
The new DriveInfo class allows the representation and manipulation of volumes.
The .NET 2.0 framework introduces the notion of trace source allowing a better management of traces. Also, the following trace listener classes have been added: ConsoleTraceListener, DelimitedListTraceListener, XmlWriterTraceListener, and WebPageTraceListener.
Several new functionalities have been added to the System.Console class in order to improve data display.
IO
The .NET 2.0 framework offers the new class System.Net.HttpListener which allows you to take advantage of the HTTP.SYS component of Windows XP SP2 and Windows Server 2003 to develop an HTTP server.
In .NET 2.0, the classes that are part of the System.Web.Mail namespace are now obsolete. To send mail, you must use the classes within the System.Net.Mail namespace. This new namespace now contains classes to support the MIME standard.
New methods now allow you to read and write a file in a single call.
New classes are now available to compress/decompress a data stream.
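For instance, the System.IO.Compression namespace provides GZipStream and DeflateStream; a minimal sketch of compressing a byte array might look like this:

using System.IO;
using System.IO.Compression;

class CompressionExample
{
    static byte[] Compress(byte[] data)
    {
        using (MemoryStream output = new MemoryStream())
        {
            // GZipStream wraps another stream and compresses what is written to it.
            using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }
}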
A new unmanaged version System.IO.UnmanagedMemoryStream of the MemoryStream class allows you to avoid copying of data onto the CLR’s object heap and is thus more efficient.
The new System.Net.FtpWebRequest class implements a FTP client.
The new namespace System.Net.NetworkInformation contains types which allow you to query the network interfaces available on a machine in order to know their states and their traffic statistics, and to be notified of state changes.
Web resource caching services are now available in the new System.Net.Cache namespace.
The new System.IO.Ports.SerialPort class allows the use of a serial port in a synchronous or event based manner.
Windows Forms 2.0
Visual Studio 2005 takes advantage of the notion of partial classes in the management of Windows Forms. Hence, it will not mix anymore the generated code with our own code in the same file.
Windows Forms 2.0 offers the BackgroundWorker class which standardizes the development of asynchronous operations within a form.
The appearance (i.e. the visual style) of controls is better managed by Windows Forms 2.0 as it does not need to use the comctl32.dll DLL to obtain a Windows XP style.
Windows Forms 2.0 and Visual Studio 2005 contain the framework and development tools for a quick and easy development of presentation and edition windows for data.
Windows Forms 2.0 presents the new classes BufferedGraphicsContext and BufferedGraphics which allow a fine control on a double buffering mechanism.
The ToolStrip, MenuStrip, StatusStrip, and ContextMenuStrip controls: these controls, respectively, replace the ToolBar, MainMenu, StatusBar, and ContextMenu controls (which are still present for backward compatibility reasons). In addition to nicer visual style, these new controls are particularly easy to manipulate during the design of a window, thanks to a consistent API. New functionality has been added such as the possibility of sharing a render between controls, the support for animated GIFs, opacity, transparency, and the facility of saving the current state (position, size…) in the configuration file. The hierarchy of the classes derived from the class System.Windows.Forms.ToolStripItem constitutes as many elements which can be inserted in this type of control.
The DataGridView and BindingNavigator controls: these controls are part of a new framework to develop data driven forms. This framework is the subject of the Viewing and editing data section a little later in this chapter. Know that it is now preferable to use a DataGridView for the display of any data table or list of objects instead of the Windows Forms 1.0 DataGrid control.
The FlowLayoutPanel and TableLayoutPanel controls: these controls allow the dynamic positioning of the child controls they contain when the user modifies their size. The layout philosophy of the FlowLayoutPanel control is to list the child controls horizontally or vertically in a way where they are moved when the control is resized. This approach is similar to what we see when we resize an HTML document displayed by a browser. The layout philosophy of the TableLayoutPanel control is comparable to the anchoring mechanism where the child controls are resized based on the size of the parent control. However, here the child controls are found in the cells of a table.
The SplitterPanel and SplitContainer controls: the combined use of these controls allows the easy implementation of splitting of a window in a way that it can be resized, as we had with version 1.1 using the Splitter control.
The WebBrowser control: this control allows the insertion of a web browser directly in a Windows Forms form.
The MaskedTextBox control: this control displays a TextBox in which the format of the text to insert is constrained. Several types of masks are offered by default, such as dates or US telephone number. Of course, you can also provide your own masks.
The SoundPlayer and SystemSounds controls: the SoundPlayer class allows you to play sounds in .wav format while the SystemSounds class allows you to retrieve the system sounds associated with the current user of the operating system.
ADO.NET 2.0
ADO.NET 2.0 presents new abstract classes such as DbConnection or DbCommand in the new namespace System.Data.Common which implements the IDbConnection or IDbCommand interfaces. The use of these new classes is now preferred to the use of the interfaces.
ADO.NET 2.0 offers an evolved architecture of abstract factory classes which allow decoupling the data access code from the underlying data provider.
ADO.NET 2.0 presents new features to construct connection strings independently of the underlying data provider.
ADO.NET 2.0 offers a framework allowing the programmatic traversal of a RDBMS schema.
The indexing engine used internally by the framework when you use instances of the DataSet and DataTable classes has been revised in order to be more efficient during the loading and manipulation of data.
Instances of the DataSet and DataTable classes are now serializable into a binary form using the new SerializationFormat RemotingFormat{get;set;} property. You can achieve a gain of 3 to 8 times in relation to the use of XML serialization.
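A rough sketch of what this looks like in code (the method and variable names are illustrative):

using System.Data;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class DataSetBinarySerialization
{
    static byte[] Serialize(DataSet ds)
    {
        // Ask the DataSet to serialize itself in binary rather than XML form.
        ds.RemotingFormat = SerializationFormat.Binary;

        BinaryFormatter formatter = new BinaryFormatter();
        using (MemoryStream stream = new MemoryStream())
        {
            formatter.Serialize(stream, ds);
            return stream.ToArray();
        }
    }
}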
The DataTable class is now less dependent on the DataSet class, as XML features have been added to the DataTable class itself.
The new method DataTable DataView.ToTable() allows the construction of a DataTable containing a copy of a view.
ADO.NET 2.0 now offers a bridge between the connected and disconnected modes which allow the DataSet/DataTable and DataReader classes to work together.
Typed DataSets directly take into account the notion of relationships between tables. Now, thanks to partial types, the generated code is separated from your own code. Finally, the new notion of TableAdapter allows you to create some sort of typed SQL requests directly usable from your code.
ADO.NET 2.0 allows you to apply data updates in a more efficient manner, thanks to batch updates.
ADO.NET 2.0: SQL Server data provider (SqlClient)
You now have the possibility of enumerating SQL Server data sources.
You have more control on connection pooling.
The SqlClient data provider of ADO.NET 2.0 allows the execution of commands in an asynchronous way.
You can harness the bulk copy services of the SQL Server tool bcp.exe using the SqlBulkCopy class.
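A minimal sketch (the connection string and destination table name are placeholders):

using System.Data;
using System.Data.SqlClient;

class BulkCopyExample
{
    static void CopyRows(DataTable sourceTable, string connectionString)
    {
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.Contacts";
            // Streams all rows of the DataTable to the server in bulk,
            // similar to what bcp.exe does from the command line.
            bulkCopy.WriteToServer(sourceTable);
        }
    }
}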
You can obtain statistics about the activity of a connection.
There is a simplified and freely distributed version of SQL Server 2005 which offers several advantages over the previous MSDE and Jet products.
Transaction
The new namespace named System.Transactions (contained in System.Transactions.dll) offers, at the same time, a unified transactional programming model and a new transactional engine which has the advantage of being extremely efficient on certain types of lightweight transactions.
XML
The performance of all classes involved in XML data handling has been significantly improved (by a factor of 2 to 4 in classic use scenarios according to Microsoft).
The new System.Xml.XmlReaderSettings class allows you to specify the type of verification which must be done when using a subclass of XmlReader to read XML data.
It is now possible to partially validate a DOM tree loaded within an instance of XmlDocument.
It is now possible to modify a DOM tree stored in an XmlDocument instance through the XPathNavigator cursor API.
The XslCompiledTransform class replaces the XslTransform class which is now obsolete. Its main advantage is in compiling XSLT programs into MSIL code before applying a transformation. According to Microsoft, this new implementation improves performance by a factor of 3 to 4. Moreover, Visual Studio 2005 can now debug XSLT programs.
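A small illustrative snippet (file names are placeholders):

using System.Xml.Xsl;

class XsltExample
{
    static void Main()
    {
        // Load compiles the stylesheet to MSIL; Transform then applies it
        // to the input document and writes the result to the output file.
        XslCompiledTransform transform = new XslCompiledTransform();
        transform.Load("stylesheet.xslt");
        transform.Transform("input.xml", "output.xml");
    }
}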
Support for the XML DataSet class has been improved. You can now load XSD schemas with names repeated in different namespaces, and load XML data containing multiple schemas. Also, XML load and save methods have been added to the DataTable class.
The 2005 version of SQL Server brings forth new features in regards to the integration of XML data inside a relational database.
XML serialization can now serialize nullable information and generic instances. Also, a new tool named sgen.exe allows the pre-generation of an assembly containing the code to serialize a type.
.NET Remoting
The new IpcChannel channel is dedicated to the communication between different processes on a same machine. Its implementation is based on the notion of Windows named pipe.
If you use a channel of type TCP, you now have the possibility of using the NTLM and Kerberos protocols to authenticate the Windows user under which the client executes, to encrypt the exchanged data and impersonate your requests.
New attributes of the System.Runtime.Serialization namespace allow the management of problems inherent to the evolution of a serializable class.
It is possible to consume an instance of a closed generic type, with the .NET Remoting technology, whether you are in CAO or WKO mode.
ASP.NET 2.0
Visual Studio .NET 2005 is now supplied with a web server which allows the testing and debugging of your web applications during development.
It is now easy to use the HTTP.SYS component to build a web server which hosts ASP.NET without needing to use IIS.
ASP.NET 2.0 presents a new model for the construction of classes representing web pages. This model is based on partial classes, and is different than the one offered in ASP.NET 1.x.
The CodeBehind directive of ASP.NET v1.x is no longer supported.
In ASP.NET 2.0, the model used for dynamic compilation of your web application has significantly improved, and is now based on several new standard folders. In addition, ASP.NET 2.0 offers two new pre-compilation modes: the in-place pre-compilation, and the deployment pre-compilation.
To counter the effects of large viewstates in ASP.NET 1.x, ASP.NET 2.0 stores information in a base64 string, more efficiently, and introduces the notion of control-state.
ASP.NET 2.0 introduces a new technique which allows you to post back a page to another page (cross-page posting).
Certain events have been added to the lifecycle of a page.
ASP.NET 2.0 offers an infrastructure which allows the same request to be processed across multiple threads of a pool. This allows us to avoid running out of threads within the pool when several long requests are executed at the same time.
New events have been added to the HttpApplication class.
The manipulation of configuration files has been simplified because of the Visual Studio 2005 intellisense, a new web interface, a new UI integrated in IIS, and because of new base classes.
ASP.NET 2.0 offers a framework allowing the standard management of events occurring during the life of a web application.
You can now configure ASP.NET 2.0 so that it can detect whether it is possible to store a session identifier in a client-side cookie, or if it should automatically switch over to the URI mode if cookies are not supported.
ASP.NET 2.0 now allows you to supply your own session or session ID management mechanism.
The cache engine of ASP.NET 2.0 offers interesting new features. You can now use the VaryByControl sub-directive in your pages. You can substitute dynamic fragments within your cached pages. You can make your cached data dependent on tables and rows of a SQL Server data source. Finally, you can create your own types of dependencies.
ASP.NET 2.0 offers new server controls allowing declarative binding to a data source.
ASP.NET 2.0 offers a new hierarchy of server-side controls for the presentation and the edition of data. These controls have the peculiarity of being able to use a data source control to read and write data.
ASP.NET 2.0 offers a simplified template syntax.
ASP.NET 2.0 adds the notion of master pages which allows the easy reuse of a page design across all pages of a website.
ASP.NET 2.0 now offers an extensible architecture to allow insertion of navigational controls within your site.
With ASP.NET 2.0, you can use the Forms authentication mode without being forced to use cookies.
ASP.NET 2.0 allows the management of user authentication data, as well as of the roles to which users may belong, through the use of a database. Hence, several new server-side controls have been added to greatly simplify the development of ASP.NET applications which support authentication.
ASP.NET 2.0 presents a new framework allowing the storage and access of users' profiles.
ASP.NET 2.0 offers a framework facilitating the management and maintenance of the overall appearance of a site, thanks to the notions of themes and skins.
ASP.NET 2.0 also offers a framework dedicated to the creation of web portals through the use of what is called WebParts.
ASP.NET 2.0 offers a framework allowing the modification of rendered HTML code if the initiating HTTP request comes from a system with a small screen such as a mobile phone. Concretely, the rendering of each server control is done in a way to use less screen space. This modification is done through the use of adapter objects which are requested automatically and implicitly by ASP.NET during the rendering of the page. The "Inside the ASP.NET Mobile Controls" article on MSDN offers a good starting point on this new ASP.NET 2.0 feature.
Web Services
The proxy classes generated by wsdl.exe now offer a new asynchronous model which allows cancellation.
Create Elegant Code with Anonymous Methods, Iterators, and Partial Classes


Contents
Iterators
Iterator Implementation
Recursive Iterations
Partial Types
Anonymous Methods
Passing Parameters to Anonymous Methods
Anonymous Method Implementation
Generic Anonymous Methods
Anonymous Method Example
Delegate Inference
Property and Index Visibility
Static Classes
Global Namespace Qualifier
Inline Warning
Conclusion
Sidebars
What are Generics?



Fans of the C# language will find much to like in Visual C#® 2005. Visual Studio® 2005 brings a wealth of exciting new features to Visual C# 2005, such as generics, iterators, partial classes, and anonymous methods. While generics is the most talked-about and anticipated feature, especially among C++ developers who are familiar with templates, the other new features are important additions to your Microsoft® .NET development arsenal as well. These features and language additions will improve your overall productivity compared to the first version of C#, leaving you to write cleaner code faster. For some background information on generics, you should take a look at the sidebar "What are Generics?"
Iterators
In C# 1.1, you can iterate over data structures such as arrays and collections using a foreach loop:
string[] cities = {"New York","Paris","London"};
foreach(string city in cities) { Console.WriteLine(city); }
In fact, you can use any custom data collection in the foreach loop, as long as that collection type implements a GetEnumerator method that returns an IEnumerator interface. Usually you do this by implementing the IEnumerable interface:
public interface IEnumerable { IEnumerator GetEnumerator(); }
public interface IEnumerator { object Current{get;}
bool MoveNext();
void Reset();
}
Often, the class that is used to iterate over a collection by implementing IEnumerator is provided as a nested class of the collection type to be iterated. This iterator type maintains the state of the iteration. A nested class is often better as an enumerator because it has access to all the private members of its containing class. This is, of course, the Iterator design pattern, which shields iterating clients from the actual implementation details of the underlying data structure, enabling the use of the same client-side iteration logic over multiple data structures, as shown in Figure 1.

Figure 1 Iterator Design Pattern
In addition, because each iterator maintains separate iteration state, multiple clients can execute separate concurrent iterations. Data structures such as the Array and the Queue support iteration out of the box by implementing IEnumerable. The code generated in the foreach loop simply obtains an IEnumerator object by calling the class's GetEnumerator method and uses it in a while loop to iterate over the collection by continually calling its MoveNext method and current property. You can use IEnumerator directly (without resorting to a foreach statement) if you need explicit iteration over the collection.
But there are some problems with this approach. The first is that if the collection contains value types, obtaining the items requires boxing and unboxing them because IEnumerator.Current returns an Object. This results in potential performance degradation and increased pressure on the managed heap. Even if the collection contains reference types, you still incur the penalty of the down-casting from Object. While unfamiliar to most developers, in C# 1.0 you can actually implement the iterator pattern for each loop without implementing IEnumerator or IEnumerable. The compiler will choose to call the strongly typed version, avoiding the casting and boxing. The result is that even in version 1.0 it's possible not to incur the performance penalty.
To better formulate this solution and to make it easier to implement, the Microsoft .NET Framework 2.0 defines the generic, type-safe IEnumerable<ItemType> and IEnumerator<ItemType> interfaces in the System.Collections.Generic namespace:
public interface IEnumerable<ItemType>
{ IEnumerator<ItemType> GetEnumerator(); }
public interface IEnumerator<ItemType> : IDisposable
{ ItemType Current{get;}
bool MoveNext();
}
Besides making use of generics, the new interfaces are slightly different than their predecessors. Unlike IEnumerator, IEnumerator<ItemType> derives from IDisposable and does not have a Reset method. The code in Figure 2 shows a simple city collection implementing IEnumerable<string>, and Figure 3 shows how the compiler uses that interface when spanning the code of the foreach loop. The implementation in Figure 2 uses a nested class called MyEnumerator, which accepts as a construction parameter a reference back to the collection to be enumerated. MyEnumerator is intimately aware of the implementation details of the city collection, an array in this example. The MyEnumerator class maintains the current iteration state in the m_Current member variable, which is used as an index into the array.
The second and more difficult problem is implementing the iterator. Although that implementation is straightforward for simple cases (as shown in Figure 3), it is challenging with more advanced data structures, such as binary trees, which require recursive traversal and maintaining iteration state through the recursion. Moreover, if you require various iteration options, such as head-to-tail and tail-to-head on a linked list, the code for the linked list will be bloated with various iterator implementations. This is exactly the problem that C# 2.0 iterators were designed to address. Using iterators, you can have the C# compiler generate the implementation of IEnumerator<ItemType> for you. The C# compiler can automatically generate a nested class to maintain the iteration state. You can use iterators on a generic collection or on a type-specific collection. All you need to do is tell the compiler what to yield in each iteration. As with manually providing an iterator, you need to expose a GetEnumerator method, typically by implementing IEnumerable<ItemType> or IEnumerable.
You tell the compiler what to yield using the new C# yield return statement. For example, here is how you use C# iterators in the city collection instead of the manual implementation of Figure 2:
public class CityCollection : IEnumerable<string>
{
string[] m_Cities = {"New York","Paris","London"};
public IEnumerator<string> GetEnumerator()
{
for(int i = 0; i < m_Cities.Length; i++)
yield return m_Cities[i];
}
}
You can also use C# iterators on non-generic collections:
public class CityCollection : IEnumerable
{
string[] m_Cities = {"New York","Paris","London"};
public IEnumerator GetEnumerator()
{
for(int i = 0; i < m_Cities.Length; i++)
yield return m_Cities[i];
}
}
In addition, you can use C# iterators on fully generic collections, as shown in Figure 4. When using a generic collection and iterators, the specific type used for IEnumerable<T> in the foreach loop is known to the compiler from the type used when declaring the collection—a string in this case:
LinkedList<string> list = new LinkedList<string>();
/* Some initialization of list, then */
foreach(string item in list)
{
Trace.WriteLine(item);
}
This is similar to any other derivation from a generic interface.
If for some reason you want to stop the iteration midstream, use the yield break statement. For example, the following iterator will only yield the values 1, 2, and 3:
public IEnumerator<int> GetEnumerator()
{
for(int i = 1;i< 5;i++)
{
yield return i;
if(i > 2)
yield break;
}
}
Your collection can easily expose multiple iterators, each used to traverse the collection differently. For example, to traverse the CityCollection class in reverse order, provide a property of type IEnumerable<string> called Reverse:
public class CityCollection
{
string[] m_Cities = {"New York","Paris","London"};
public IEnumerable<string> Reverse
{
get
{
for(int i=m_Cities.Length-1; i>= 0; i--)
yield return m_Cities[i];
}
}
}
Then use the Reverse property in a foreach loop:
CityCollection collection = new CityCollection();
foreach(string city in collection.Reverse)
{
Trace.WriteLine(city);
}
There are some limitations to where and how you can use the yield return statement. A method or a property that has a yield return statement cannot also contain a return statement because that would improperly break the iteration. You cannot use yield return in an anonymous method, nor can you place a yield return statement inside a try statement with a catch block (and also not inside a catch or a finally block).


Iterator Implementation
The compiler-generated nested class maintains the iteration state. When the iterator is first called in a foreach loop (or in direct iteration code), the compiler-generated code for GetEnumerator creates a new iterator object (an instance of the nested class) with a reset state. Every time the foreach loops and calls the iterator's MoveNext method, it begins execution where the previous yield return statement left off. As long as the foreach loop executes, the iterator maintains its state. However, the iterator object (and its state) does not persist across foreach loops. Consequently, it is safe to call foreach again because you will get a new iterator object to start the new iteration. This is why IEnumerator<ItemType> does not define a Reset method.
But how is the nested iterator class implemented and how does it manage its state? The compiler transforms a standard method into a method that is designed to be called multiple times and that uses a simple state machine to resume execution after the previous yield statement. All you have to do is indicate what and when to yield to the compiler using the yield return statement. The compiler is even smart enough to concatenate multiple yield return statements in the order they appear:
public class CityCollection : IEnumerable
{
public IEnumerator GetEnumerator()
{
yield return "New York";
yield return "Paris";
yield return "London";
}
}
Let's take a look at the GetEnumerator method of the class shown in the following lines of code:
public class MyCollection : IEnumerable
{
public IEnumerator GetEnumerator()
{
//Some iteration code that uses yield return
}
}
When the compiler encounters a class member with a yield return statement such as this, it injects the definition of a nested class called GetEnumerator$__IEnumeratorImpl, as shown in the C# pseudocode in Figure 5. (Remember that all of the features discussed in this article—the names of the compiler-generated classes and fields—are subject to change, in some cases quite drastically. You should not attempt to use reflection to get at those implementation details and expect consistent results.)
The nested class implements the same IEnumerable interface returned from the class member. The compiler replaces the code in the class member with an instantiation of the nested type, assigning to the nested class's member variable a reference back to the collection, similar to the manual implementation shown in Figure 2. The nested class is actually the one providing the implementation of IEnumerator.


Recursive Iterations
Iterators really shine when it comes to iterating recursively over a data structure such as a binary tree or any complex graph of interconnecting nodes. With recursive iteration, it is very difficult to manually implement an iterator, yet using C# iterators it is done with great ease. Consider the binary tree in Figure 6. The full implementation of the tree is part of the source code available with this article.
The binary tree stores items in nodes. Each node holds a value of the generic type T, called Item. Each node has a reference to a node on the left and a reference to a node on the right. Values smaller than Item are stored in the left-side subtree, and larger values are stored in the right-side subtree. The tree also provides an Add method for adding an open-ended array of values of the type T, using the params qualifier:
public void Add(params T[] items);
The tree provides a public property called InOrder of type IEnumerable<T>. InOrder calls the recursive private helper method ScanInOrder, passing to ScanInOrder the root of the tree. ScanInOrder is defined as:
IEnumerable<T> ScanInOrder(Node<T> root);
It returns the implementation of an iterator of the type IEnumerable<T>, which traverses the binary tree in order. The interesting thing about ScanInOrder is the way it uses recursion to iterate over the tree using a foreach loop that accesses the IEnumerable<T> returned from a recursive call. With in-order iteration, every node iterates over its left-side subtree, then over the value in the node itself, then over the right-side subtree. For that, you need three yield return statements. To iterate over the left-side subtree, ScanInOrder uses a foreach loop over the IEnumerable<T> returned from a recursive call that passes the left-side node as a parameter. Once that foreach loop returns, all the left-side subtree nodes have been iterated over and yielded. ScanInOrder then yields the value of the node passed to it as the root of the iteration and performs another recursive call inside a foreach loop, this time on the right-side subtree.
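The actual implementation ships with the article's source code; the following is only a rough sketch of what an in-order iterator along these lines might look like, assuming a Node<T> type that exposes Item, Left, and Right members:

IEnumerable<T> ScanInOrder(Node<T> root)
{
// nothing to yield for an empty subtree
if (root == null)
yield break;

// first yield everything in the left-side subtree...
foreach (T item in ScanInOrder(root.Left))
yield return item;

// ...then the value stored in this node...
yield return root.Item;

// ...and finally everything in the right-side subtree
foreach (T item in ScanInOrder(root.Right))
yield return item;
}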
The InOrder property allows you to write the following foreach loop to iterate over the entire tree:
BinaryTree<int> tree = new BinaryTree<int>();
tree.Add(4,6,2,7,5,3,1);
foreach(int num in tree.InOrder)
{
Trace.WriteLine(num);
}
// Traces 1,2,3,4,5,6,7
You can implement pre-order and post-order iterations in a similar manner by adding additional properties.
While the ability to use iterators recursively is obviously a powerful feature, it should be used with care because there can be serious performance implications. Each call to ScanInOrder requires an instantiation of the compiler-generated iterator, so recursively iterating over a deep tree can create a large number of objects behind the scenes. In a balanced binary tree, there are approximately n iterator instantiations, where n is the number of nodes in the tree, and at any given moment approximately log(n) of those objects are live. In a decently sized tree, many of those objects will survive Generation 0 garbage collection. That said, iterators can still be used to iterate easily over recursive data structures such as trees by using stacks or queues to maintain the list of nodes still to be examined, as sketched below.
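As a sketch of that alternative, the following in-order iterator uses an explicit Stack<T> instead of nested recursive iterators, so only a single compiler-generated iterator object is created per traversal. The m_Root field and the Node<T> member names are assumptions made for illustration, and System.Collections.Generic is assumed to be imported:
public IEnumerable<T> InOrderIterative
{
    get
    {
        Stack<Node<T>> stack = new Stack<Node<T>>();
        Node<T> current = m_Root; //assumed root field of the tree
        while(current != null || stack.Count > 0)
        {
            //Walk as far left as possible, remembering the path
            while(current != null)
            {
                stack.Push(current);
                current = current.LeftNode;
            }
            //Visit the node, then continue with its right-side subtree
            current = stack.Pop();
            yield return current.Item;
            current = current.RightNode;
        }
    }
}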


Partial Types
C# 1.1 requires you to put all the code for a class in a single file. C# 2.0 allows you to split the definition and implementation of a class or a struct across multiple files. You can put one part of a class in one file and another part of the class in a different file, noting the split by using the new partial keyword. For example, you can put the following code in the file MyClass1.cs:
public partial class MyClass
{
public void Method1()
{...}
}
In the file MyClass2.cs, you can insert this code:
public partial class MyClass
{
public void Method2()
{...}
public int Number;
}
In fact, you can have as many parts as you like in any given class. Partial type support is available for classes, structures, and interfaces, but you cannot have a partial enum definition.
Partial types are a very handy feature. Sometimes it is necessary to modify a machine-generated file, such as a Web service client-side wrapper class. However, changes made to the file will be lost if you regenerate the wrapper class. Using a partial class, you can factor those changes into a separate file. ASP.NET 2.0 uses partial classes for the code-beside class (the evolution of code-behind), storing the machine-generated part of the page separately. Windows® Forms uses partial classes to store the visual designer output of the InitializeComponent method as well as the member controls. Partial types also enable two or more developers to work on the same type while both have their files checked out from source control without interfering with each other.
You may be asking yourself, what if the various parts define contradicting aspects of the class? The answer is simple: a class (or a struct) can have two kinds of aspects or qualities: accumulative and non-accumulative. The accumulative aspects are things that each part of the class can choose to add, such as interface derivation, properties, indexers, methods, and member variables.
For example, the following code shows how a part can add interface derivation and implementation:
public partial class MyClass
{}
public partial class MyClass : IMyInterface
{
public void Method1()
{...}
public void Method2()
{...}
}
The non-accumulative aspects are things that all the parts of a type must agree upon. Whether the type is a class or a struct, type visibility (public or internal) and the base class are non-accumulative aspects. For example, the following code does not compile because not all the parts of MyClass concur on the base class:
public class MyBase
{}
public class SomeOtherClass
{}
public partial class MyClass : MyBase
{}
public partial class MyClass : MyBase
{}
//Does not compile
public partial class MyClass : SomeOtherClass
{}
In addition to requiring that all parts agree on the non-accumulative aspects, only a single part can override a virtual or abstract method, and only one part can implement a given interface member.
C# 2.0 supports partial types as follows: when the compiler builds the assembly, it combines the parts of a type from the various files and compiles them into a single type in Microsoft intermediate language (MSIL). The generated MSIL has no record of which part came from which file, just as in C# 1.1 the MSIL has no record of which file was used to define which type. Also worth noting is that partial types cannot span assemblies, and that a type can refuse to have other parts by omitting the partial qualifier from its definition.
Because all the compiler is doing is accumulating parts, a single file can contain multiple parts, even of the same type, although the usefulness of that is questionable.
In C#, developers often name a file after the class it contains and avoid putting multiple classes in the same file. When using partial types, I recommend indicating in the file name that it contains parts of a type, such as MyClassP1.cs and MyClassP2.cs, or employing some other consistent way of externally indicating the content of the source file. For example, the Windows Forms designer stores its portion of the partial class for the form defined in Form1.cs in a separate file named Form1.Designer.cs.
Another side effect of partial types is that when approaching an unfamiliar code base, the parts of the type you maintain could be spread all over the project files. In such cases, my advice is to use the Visual Studio Class View because it displays an accumulative view of all the parts of the type and allows you to navigate through the various parts by clicking on its members. The navigation bar provides this functionality as well.


Anonymous Methods
C# supports delegates for invoking one or multiple methods. Delegates provide operators and methods for adding and removing target methods, and are used extensively throughout the .NET Framework for events, callbacks, asynchronous calls, and multithreading. However, you are sometimes forced to create a class or a method just for the sake of using a delegate. In such cases, there is no need for multiple targets, and the code involved is often relatively short and simple. Anonymous methods, a new feature in C# 2.0, let you define an anonymous (that is, nameless) method called through a delegate.
For example, the following is a conventional SomeMethod method definition and delegate invocation:
class SomeClass
{
delegate void SomeDelegate();
public void InvokeMethod()
{
SomeDelegate del = new SomeDelegate(SomeMethod);
del();
}
void SomeMethod()
{
MessageBox.Show("Hello");
}
}
You can define and implement this with an anonymous method:
class SomeClass
{
delegate void SomeDelegate();
public void InvokeMethod()
{
SomeDelegate del = delegate()
{
MessageBox.Show("Hello");
};
del();
}
}
The anonymous method is defined in-line and not as a member method of any class. Additionally, there is no way to apply method attributes to an anonymous method, nor can the anonymous method define generic types or add generic constraints.
You should note two interesting things about anonymous methods: the overloaded use of the delegate reserved keyword and the delegate assignment. You will see later on how the compiler implements an anonymous method, but it is quite clear from looking at the code that the compiler has to infer the type of the delegate used, instantiate a new delegate object of the inferred type, wrap the new delegate around the anonymous method, and assign it to the delegate used in the definition of the anonymous method (del in the previous example).
Anonymous methods can be used anywhere that a delegate type is expected. You can pass an anonymous method into any method that accepts the appropriate delegate type as a parameter:
class SomeClass
{
delegate void SomeDelegate();
public void SomeMethod()
{
InvokeDelegate(delegate(){MessageBox.Show("Hello");});
}
void InvokeDelegate(SomeDelegate del)
{
del();
}
}
If you need to pass an anonymous method to a method that accepts an abstract Delegate parameter, such as the following
void InvokeDelegate(Delegate del);
first cast the anonymous method to the specific delegate type.
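For example, the body of InvokeDelegate shown here is only illustrative (the article leaves it unspecified), but the call site demonstrates the required cast:
class SomeClass
{
    delegate void SomeDelegate();
    void InvokeDelegate(Delegate del)
    {
        del.DynamicInvoke(); //illustrative body only
    }
    public void SomeMethod()
    {
        //Cast the anonymous method to a concrete delegate type before
        //passing it where only the abstract Delegate type is expected
        InvokeDelegate((SomeDelegate)delegate(){MessageBox.Show("Hello");});
    }
}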
A concrete and useful example for passing an anonymous method as a parameter is launching a new thread without explicitly defining a ThreadStart delegate or a thread method:
public class MyClass
{
public void LaunchThread()
{
Thread workerThread = new Thread(delegate()
{
MessageBox.Show("Hello");
});
workerThread.Start();
}
}
In the previous example, the anonymous method serves as the thread method, causing the message box to be displayed from the new thread.


Passing Parameters to Anonymous Methods
When defining an anonymous method with parameters, you define the parameter types and names after the delegate keyword just as if it were a conventional method. The method signature must match the definition of the delegate to which it is assigned. When invoking the delegate, you pass the parameter's values, just as with a normal delegate invocation:
class SomeClass
{
delegate void SomeDelegate(string str);
public void InvokeMethod()
{
SomeDelegate del = delegate(string str)
{
MessageBox.Show(str);
};
del("Hello");
}
}
If the anonymous method has no parameters, you can use a pair of empty parens after the delegate keyword:
class SomeClass
{
delegate void SomeDelegate();
public void InvokeMethod()
{
SomeDelegate del = delegate()
{
MessageBox.Show("Hello");
};
del();
}
}
However, if you omit the empty parens after the delegate keyword altogether, you are defining a special kind of anonymous method, which could be assigned to any delegate with any signature:
class SomeClass
{
delegate void SomeDelegate(string str);
public void InvokeMethod()
{
SomeDelegate del = delegate
{
MessageBox.Show("Hello");
};
del("Parameter is ignored");
}
}
Obviously, you can only use this syntax if the anonymous method does not rely on any of the parameters, and you would want to use the method code regardless of the delegate signature. Note that you must still provide arguments when invoking the delegate because the compiler generates nameless parameters for the anonymous method, inferred from the delegate signature, as if you wrote the following (in C# pseudocode):
SomeDelegate del = delegate(string)
{
MessageBox.Show("Hello");
};
Additionally, anonymous methods without a parameter list cannot be used with delegates that specify out parameters.
An anonymous method can use any class member variable, and it can also use any local variable defined at the scope of its containing method as if it were its own local variable. This is demonstrated in Figure 7. Once you know how to pass parameters to an anonymous method, you can also easily define anonymous event handling, as shown in Figure 8.
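For instance, the following small example (not the article's Figure 7) shows an anonymous method capturing and updating a local counter defined in its containing method:
class SomeClass
{
    delegate void SomeDelegate();
    public void InvokeMethod()
    {
        int counter = 0; //outer variable captured by the anonymous method
        SomeDelegate del = delegate()
        {
            counter++; //used as if it were the anonymous method's own local
            MessageBox.Show(counter.ToString());
        };
        del(); //shows 1
        del(); //shows 2
    }
}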
Because the += operator merely concatenates the internal invocation list of one delegate to another, you can use the += operator to add an anonymous method as an event handler. Note, however, that you cannot remove an anonymous handler using the -= operator unless the anonymous method was first stored in a delegate and that same delegate was registered with the event; in that case, the -= operator can be used with the same delegate to unregister the handler, as the following sketch shows.
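In this sketch, the MyPublisher class and its MyEvent event are hypothetical names used only for illustration:
class MyPublisher
{
    public event EventHandler MyEvent;
    public void FireEvent()
    {
        if(MyEvent != null)
        {
            MyEvent(this,EventArgs.Empty);
        }
    }
}
class MySubscriber
{
    EventHandler m_Handler; //stored so the handler can be removed later
    public void Subscribe(MyPublisher publisher)
    {
        m_Handler = delegate(object sender,EventArgs e)
        {
            MessageBox.Show("Event raised");
        };
        publisher.MyEvent += m_Handler;
    }
    public void Unsubscribe(MyPublisher publisher)
    {
        //Possible only because the same delegate instance was kept
        publisher.MyEvent -= m_Handler;
    }
}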


Anonymous Method Implementation
The code the compiler generates for anonymous methods largely depends on the kind of parameters or variables the anonymous method uses. For example, does the anonymous method use the local variables of its containing method (called outer variables), or does it use class member variables and method arguments? In each case, the compiler generates different MSIL. If the anonymous method does not use outer variables (that is, it uses only its own arguments or class members), the compiler adds a private method to the class, giving the method a unique name. The name of that method has the following format:
__AnonymousMethod$<unique number>()
As with other compiler-generated members, this is subject to change and most likely will before the final release. The method signature will be that of the delegate to which it is assigned.
The compiler simply converts the anonymous method definition and assignment into a normal instantiation of the inferred delegate type, wrapping the machine-generated private method:
SomeDelegate del = new SomeDelegate(__AnonymousMethod$00000000);
Interestingly enough, the machine-generated private method does not show up in IntelliSense®, nor can you call it explicitly because the dollar sign in its name is an invalid token for a C# method (but a valid MSIL token).
The more challenging scenario is when the anonymous method uses outer variables. In that case, the compiler adds a private nested class with a unique name in the format of:
__LocalsDisplayClass$<unique number>
The nested class has a back reference to the containing class, stored in a field whose compiler-generated name is a valid MSIL member variable name but not a legal C# identifier. The nested class contains public member variables corresponding to every outer variable that the anonymous method uses. The compiler adds to the nested class definition a public method with a unique name, in the format of:
__AnonymousMethod$<unique number>()
The method signature will be that of the delegate to which it is assigned. The compiler replaces the anonymous method definition with code that creates an instance of the nested class and makes the necessary assignments from the outer variables to that instance's member variables. Finally, the compiler creates a new delegate object, wrapping the public method of the nested class instance, and calls that delegate to invoke the method. Figure 9 shows in C# pseudocode the compiler-generated code for the anonymous method definition in Figure 7.
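As a rough approximation (not the article's Figure 9, and with illustrative names), the counter example shown earlier would be transformed along these lines:
class SomeClass
{
    delegate void SomeDelegate();
    //Stands in for the compiler-generated nested class that holds the outer variable
    class LocalsDisplay
    {
        public int counter;
        public void AnonymousMethod()
        {
            counter++;
            MessageBox.Show(counter.ToString());
        }
    }
    public void InvokeMethod()
    {
        LocalsDisplay locals = new LocalsDisplay();
        locals.counter = 0;
        SomeDelegate del = new SomeDelegate(locals.AnonymousMethod);
        del(); //shows 1
        del(); //shows 2
    }
}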


Generic Anonymous Methods
An anonymous method can use generic parameter types, just like any other method. It can use generic types defined at the scope of the class, for example:
class SomeClass<T>
{
delegate void SomeDelegate(T t);
public void InvokeMethod(T t)
{
SomeDelegate del = delegate(T item){...};
del(t);
}
}
Because delegates can define generic parameters, an anonymous method can use generic types defined at the delegate level. You can specify the type to use in the method signature, in which case it has to match the specific type of the delegate to which it is assigned:
class SomeClass
{
delegate void SomeDelegate<T>(T t);
public void InvokeMethod()
{
SomeDelegate<int> del = delegate(int number)
{
MessageBox.Show(number.ToString());
};
del(3);
}
}


Anonymous Method Example
Although at first glance the use of anonymous methods may seem like an alien programming technique, I have found it quite useful because it replaces the need for creating a simple method in cases where only a delegate will suffice. Figure 10 shows a real-life example of the usefulness of anonymous methods—the SafeLabel Windows Forms control.
Windows Forms relies on the underlying Win32® messages. Therefore, it inherits the classic Windows programming requirement that only the thread that created the window can process its messages. Calls on the wrong thread will always trigger an exception under Windows Forms in the .NET Framework 2.0. As a result, when calling a form or a control on a different thread, you must marshal that call to the correct owning thread. Windows Forms has built-in support for solving this predicament by having the Control base class implement the interface ISynchronizeInvoke, defined like the following:
public interface ISynchronizeInvoke
{
bool InvokeRequired {get;}
IAsyncResult BeginInvoke(Delegate method,object[] args);
object EndInvoke(IAsyncResult result);
object Invoke(Delegate method,object[] args);
}
The Invoke method accepts a delegate targeting a method on the owning thread, and it will marshal the call to that thread from the calling thread. Because you may not always know whether you are actually executing on the correct thread, the InvokeRequired property lets you query to see if calling Invoke is required. The problem is that using ISynchronizeInvoke complicates the programming model significantly, and as a result it is often better to encapsulate the interaction with the ISynchronizeInvoke interface in controls and forms that will automatically use ISynchronizeInvoke as required.
For example, instead of a Label control that exposes a Text property, you can define a SafeLabel control that derives from Label, as shown in Figure 10. SafeLabel overrides its base class's Text property. In its get and set accessors, it checks whether Invoke is required; if so, it uses a delegate to access the property. That implementation simply calls the base class implementation of the property, but on the correct thread. Because SafeLabel defines these methods only so that they can be called through a delegate, they are good candidates for anonymous methods. SafeLabel passes delegates that wrap those anonymous methods to the Invoke method, which gives it a thread-safe implementation of the Text property.
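A sketch of such a SafeLabel follows the description above; it is not necessarily identical to Figure 10:
public class SafeLabel : Label
{
    delegate void SetString(string text);
    delegate string GetString();
    public override string Text
    {
        get
        {
            if(InvokeRequired)
            {
                //Marshal the read to the owning thread
                GetString getText = delegate(){return base.Text;};
                return (string)Invoke(getText);
            }
            return base.Text;
        }
        set
        {
            if(InvokeRequired)
            {
                //Marshal the write to the owning thread
                SetString setText = delegate(string text){base.Text = text;};
                Invoke(setText,new object[]{value});
                return;
            }
            base.Text = value;
        }
    }
}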


Delegate Inference
The C# compiler's ability to infer from an anonymous method assignment which delegate type to instantiate is an important capability. In fact, it enables yet another C# 2.0 feature called delegate inference. Delegate inference allows you to make a direct assignment of a method name to a delegate variable, without wrapping it first with a delegate object. For example, take a look at the following C# 1.1 code:
class SomeClass
{
delegate void SomeDelegate();
public void InvokeMethod()
{
SomeDelegate del = new SomeDelegate(SomeMethod);
del();
}
void SomeMethod()
{...}
}
Instead of the previous snippet, you can now write:
class SomeClass
{
delegate void SomeDelegate();
public void InvokeMethod()
{
SomeDelegate del = SomeMethod;
del();
}
void SomeMethod()
{...}
}
When you assign a method name to a delegate, the compiler first infers the delegate's type. Then the compiler verifies that there is a method by that name and that its signature matches that of the inferred delegate type. Finally, the compiler creates a new object of the inferred delegate type, wrapping the method and assigning it to the delegate. The compiler can only infer the delegate type if that type is a specific delegate type—that is, anything other than the abstract type Delegate. Delegate inference is a very useful feature indeed, resulting in concise, elegant code.
I believe that as a matter of routine in C# 2.0, you will use delegate inference rather than the old method of delegate instantiation. For example, here is how you can launch a new thread without explicitly creating a ThreadStart delegate:
public class MyClass
{
void ThreadMethod()
{...}
public void LaunchThread()
{
Thread workerThread = new Thread(ThreadMethod);
workerThread.Start();
}
}
You can use a double stroke of delegate inference when launching an asynchronous call and providing a completion callback method, as shown in Figure 11. First, assign the name of the method to invoke asynchronously to a matching delegate; then call BeginInvoke, providing the completion callback method's name instead of an explicit delegate of type AsyncCallback.
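Here is a small sketch of that pattern. The names are illustrative rather than those in Figure 11, and the AsyncResult class used to harvest the result lives in System.Runtime.Remoting.Messaging:
class Calculator
{
    delegate int BinaryOperation(int number1,int number2);
    int Add(int number1,int number2)
    {
        return number1 + number2;
    }
    void OnAddCompleted(IAsyncResult asyncResult)
    {
        //Retrieve the original delegate in order to harvest the result
        AsyncResult result = (AsyncResult)asyncResult;
        BinaryOperation del = (BinaryOperation)result.AsyncDelegate;
        int sum = del.EndInvoke(asyncResult);
        MessageBox.Show(sum.ToString());
    }
    public void AsyncAdd()
    {
        BinaryOperation del = Add;                //first inference: BinaryOperation
        del.BeginInvoke(2,3,OnAddCompleted,null); //second inference: AsyncCallback
    }
}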


Property and Index Visibility
C# 2.0 allows you to specify different visibility for the get and set accessors of a property or an indexer. For example, it is quite common to want to expose the get as public but the set as protected. To do so, add the protected visibility qualifier to the set keyword. Similarly, you can define the set accessor of an indexer as protected (see Figure 12).
There are a few stipulations when using property visibility. First, the visibility qualifier you apply to the set or the get can only be a strict subset of the visibility of the property itself. In other words, if the property is public, then you can specify internal, protected, protected internal, or private. If the property visibility is protected, you cannot make the get or the set public. In addition, you can specify visibility for either the get or the set, but not both.
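For example, the following property exposes a public get accessor with a protected set accessor:
public class MyClass
{
    string m_Name;
    public string Name
    {
        get
        {
            return m_Name;
        }
        protected set
        {
            m_Name = value;
        }
    }
}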


Static Classes
It is quite common to have classes with only static methods or members (static classes). In such cases there is no point in instantiating objects of these classes. For example, the Monitor class or class factories such as the Activator class in the .NET Framework 1.1 are static classes. Under C# 1.1, if you want to prevent developers from instantiating objects of your class you can provide only a private default constructor. Without any public constructors, no one can instantiate objects of your class:
public class MyClassFactory
{
private MyClassFactory()
{}
static public object CreateObject()
{...}
}
However, it is up to you to enforce the fact that only static members are defined on the class because the C# compiler will still allow you to add instance members, although they could never be used. C# 2.0 adds support for static classes by allowing you to qualify your class as static:
public static class MyClassFactory
{
static public T CreateObject<T>()
{...}
}
The C# 2.0 compiler will not allow you to add a non-static member to a static class, and it will not allow you to create instances of a static class, just as with an abstract class. In addition, you cannot derive from a static class; it's as if the compiler adds both abstract and sealed to the static class definition. Note that you can define static classes but not static structures, and that a static class can still have a static constructor.


Global Namespace Qualifier
It is possible to have a nested namespace with a name that matches some other global namespace. In such cases, the C# 1.1 compiler will have trouble resolving the namespace reference. Consider the following example:
namespace MyApp
{
namespace System
{
class MyClass
{
public void MyMethod()
{
System.Diagnostics.Trace.WriteLine("It Works!");
}
}
}
}
In C# 1.1, the call to the Trace class produces a compilation error. The error occurs because when the compiler tries to resolve the reference to the System namespace, it uses the immediate containing scope, which contains a System namespace but not a Diagnostics namespace. C# 2.0 allows you to use the global namespace qualifier :: to indicate to the compiler that it should start its search at the global scope. You can apply the :: qualifier to both namespaces and types, as shown in Figure 13.
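Applied to the example above, the call compiles once the reference is qualified from the global scope:
namespace MyApp
{
    namespace System
    {
        class MyClass
        {
            public void MyMethod()
            {
                global::System.Diagnostics.Trace.WriteLine("It Works!");
            }
        }
    }
}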


Inline Warning
C# 1.1 allows you to disable specific compiler warnings using project settings or command-line arguments to the compiler. The problem is that such suppression is global, so it also hides occurrences of the warning that you still want to see. C# 2.0 allows you to explicitly suppress and restore compiler warnings using the #pragma warning directive:
// Disable 'field never used' warning
#pragma warning disable 169
public class MyClass
{
int m_Number;
}
#pragma warning restore 169
Disabling warnings in production code is generally discouraged. It is intended only for analysis when trying to isolate a problem, or when you are laying out code and want to get the initial structure in place without polishing it first. In all other cases, avoid suppressing compiler warnings. Note that you cannot programmatically override the project settings, meaning you cannot use the #pragma warning directive to restore a warning that is suppressed globally.


Conclusion
The new features in C# 2.0 presented in this article are dedicated solutions, designed to address specific problems while simplifying the overall programming model. If you care about productivity and quality, then you want to have the compiler generate as much of the implementation as possible, reduce repetitive programming tasks, and make the resulting code concise and readable. The new features give you just that, and I believe they are an indication that C# is coming of age, establishing itself as a great tool for the developer who is an expert in .NET.