Thursday, February 4, 2010

Foreground GC and background GC together in .Net 4.0

What's new in the Garbage Collector in .Net 4.0

With the introduction of background GC (which applies only to Generation 2 and is an improvement over concurrent GC), ephemeral collections (Generation 0 and 1) can now run as foreground GCs while a background collection is in progress.

How it works

While a background GC is in progress, the background GC thread checks at frequent safe points whether a foreground GC has been requested. If it has, the background GC suspends itself and the user threads so that the foreground GC can run. Once the foreground GC is done, the background GC and the user threads resume. Background GC is currently available for workstation GC only.

This means that only unusual circumstances should now lead to long latency times.


Various Garbage Collectors

The CLR provides two garbage collectors:

  1. Workstation GC: designed for use by desktop applications.
  2. Server GC: designed for use by server applications. ASP.Net loads server GC on multiprocessor machines; on single-processor machines it loads workstation GC with concurrent GC on.

We can configure garbage collection in our applications with the following options:

Workstation GC with concurrent GC off

This is designed for high throughput on single processor machines.

How it works

  1. A managed thread is doing allocations.
  2. It runs out of allocations (its allocation budget is exhausted).
  3. It triggers a GC, which runs on this very thread.
  4. The GC calls SuspendEE to suspend the managed threads.
  5. The GC does its work.
  6. The GC calls RestartEE to restart the managed threads.
  7. The managed threads start running again.


Configuration: This mode can be configured by setting the following values in the web.config

<configuration>
  <runtime>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>


Workstation GC with concurrent GC on

This is designed for interactive applications where response time is critical. Concurrent GC allows for shorter pause times.

How it works:

Concurrent GC pauses the application's threads only for short intervals during the overall GC time frame, which helps the application stay responsive. Since Gen0 and Gen1 collections are already very fast, concurrent GC does not apply to these generations; it only makes sense to run Generation 2 collections concurrently.


Server GC

In this case a separate GC thread and a separate heap are created for each CPU. GC happens on these threads instead of on the allocating thread. The flow looks like this:


  • A managed thread is doing allocations.
  • It runs out of allocations on the heap it is allocating on.
  • It signals an event to wake the GC threads to do a GC and waits for it to finish.
  • The GC threads run, finish the GC, and signal an event that says the GC is complete (while the GC is in progress, all managed threads are suspended, just as in workstation GC).
  • The managed threads start running again.


Configuration: This mode can be configured by setting the following values in the web.config

<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>


Important Points:

Concurrent GC is available only for workstation GC. That means server GC is always a blocking GC.

Concurrent GC is only for the full garbage collection. Generation 0 and Generation 1 GCs are always blocking GCs.


Garbage Collection Overview

The garbage collector in .Net reclaims the memory occupied by objects that are no longer in use.

Collection is performed across three generations. All new objects that we create fall into Generation 0. The garbage collector collects unused objects from Generation 0 first, before it moves on to Generation 1 and then Generation 2. Every time the GC runs, it promotes the objects that survived the collection to the next generation, up to Generation 2, as the snippet below illustrates.
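To make promotion visible, here is a minimal sketch using GC.GetGeneration; the commented output assumes the object survives each induced collection (actual results can vary with runtime optimizations):

using System;

class GenerationDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0: freshly allocated

        GC.Collect();                           // force a collection; 'o' survives
        Console.WriteLine(GC.GetGeneration(o)); // 1: promoted after surviving

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // 2: promoted again; stays at 2

        GC.KeepAlive(o);                        // keep 'o' reachable until here
    }
}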

The .Net System.GC class exposes several methods that programmers can use. Here is a brief description of each:

  1. System.GC.Collect: This method forces a garbage collection. It should not be called explicitly to start garbage collection, as doing so adversely affects the performance of the application.
  2. System.GC.WaitForPendingFinalizers: This method suspends the current thread until the finalization thread has emptied the finalization queue. As with GC.Collect, this method should normally not be called.
  3. System.GC.KeepAlive: This method prevents an object from being garbage collected prematurely. Premature collection can happen when your managed code no longer uses the object but unmanaged code still does (see the sketch after this list).
  4. System.GC.SuppressFinalize: This prevents the finalizer from being called for the specified object. Use this method when you implement the dispose pattern.
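A minimal sketch of the GC.KeepAlive scenario from point 3; FileHandler and UseHandle are hypothetical stand-ins for a finalizable interop wrapper and a native call:

using System;

// Hypothetical finalizable wrapper around a native resource
class FileHandler
{
    private IntPtr handle = new IntPtr(42); // stand-in for a real OS handle
    public IntPtr DangerousGetHandle() { return handle; }
    ~FileHandler() { /* would close the native handle here */ }
}

class KeepAliveDemo
{
    static void UseHandle(IntPtr handle)
    {
        // Stand-in for a P/Invoke call that works with the raw handle
        Console.WriteLine("Using handle {0}", handle);
    }

    static void Main()
    {
        var handler = new FileHandler();
        IntPtr rawHandle = handler.DangerousGetHandle();

        // After the line above, no managed code reads 'handler' again, so the
        // GC may collect it and run its finalizer (closing the native handle)
        // while UseHandle is still working with rawHandle.
        UseHandle(rawHandle);

        // GC.KeepAlive extends the lifetime of 'handler' to this point,
        // keeping the handle valid for the whole UseHandle call.
        GC.KeepAlive(handler);
    }
}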

A short note on Finalization

The .Net garbage collection mechanism keeps track of object lifetimes using strong and weak references. However, it does not manage the lifetime of unmanaged resources such as files and network connections; you need to write code to free those yourself. .Net provides the Object.Finalize method, which can be overridden to free unmanaged resources. Whenever a new object that has a Finalize method is allocated on the heap, a pointer to the object is placed in an internal data structure called the finalization queue. When the object is no longer reachable (that is, ready for garbage collection), the GC removes the object from the finalization queue and puts it in another internal data structure called the freachable queue. A special runtime thread empties the freachable queue by executing the Finalize methods.

The next time the garbage collector runs, it sees that the finalized objects are truly garbage (their Finalize methods have been executed) and frees their memory.

It is recommended to avoid the Finalize method unless required, as it delays the garbage collection of those objects until a subsequent GC run.

Use Dispose Pattern

The Dispose pattern defines the way we should implement finalizer functionality on all managed classes that maintain resources that the caller must be allowed to explicitly release. To implement the Dispose pattern, do the following:

  • Create a class that implements IDisposable.
  • Add a private member variable to track whether IDisposable.Dispose has already been called. Clients should be allowed to call the method multiple times without generating an exception. If another method on the class is called after a call to Dispose, you should throw an ObjectDisposedException.
  • Implement a protected virtual void Dispose overload that accepts a single bool parameter. This method contains the common cleanup code and is called either when the client explicitly calls IDisposable.Dispose or when the finalizer runs. The bool parameter indicates whether the cleanup is being performed as a result of a client call to IDisposable.Dispose or as a result of finalization.
  • Implement the IDisposable.Dispose method that accepts no parameters. This method is called by clients to explicitly force the release of resources. Check whether Dispose has been called before; if it has not, call Dispose(true) and then prevent finalization by calling GC.SuppressFinalize(this). Finalization is no longer needed because the client has explicitly forced a release of resources.
  • Create a finalizer by using destructor syntax. In the finalizer, call Dispose(false).

Code

public class MyClass : IDisposable
{
    // Variable to track whether Dispose has been called
    private bool disposed = false;

    // Implement the IDisposable.Dispose() method
    public void Dispose()
    {
        // Check if Dispose has already been called
        if (!disposed)
        {
            // Call the Dispose overload that contains the common cleanup code;
            // pass true to indicate that it is called from Dispose
            Dispose(true);

            // Prevent subsequent finalization of this object. This is not needed
            // because managed and unmanaged resources have been explicitly released
            GC.SuppressFinalize(this);
        }
    }

    // Implement a finalizer by using destructor-style syntax
    ~MyClass()
    {
        // Call the Dispose overload that contains the common cleanup code;
        // pass false to indicate that it is not called from Dispose
        Dispose(false);
    }

    // Implement the Dispose overload that contains the common cleanup functionality
    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Dispose time code . . .
        }

        // Finalize time code . . .

        // Record that cleanup has been performed
        disposed = true;
    }
}

Passing true to the protected Dispose method ensures that Dispose-specific code is called; passing false skips the Dispose-specific code. The Dispose(bool) method can be called directly by your class or indirectly by the client.

If you reference any static variables or methods in your finalize-time Dispose code, make sure you check the Environment.HasShutdownStarted property. If your object is thread safe, be sure to take whatever locks are necessary for cleanup.

Use the HasShutdownStarted property in an object's Dispose method to determine whether the CLR is shutting down or the application domain is unloading. If that is the case, you cannot reliably access any object that has a finalization method and is referenced by a static field.


References:

http://blogs.msdn.com/maoni/archive/2004/09/25/234273.aspx

http://msdn.microsoft.com/en-us/library/ms998549.aspx#scalenetchapt06_topic5

http://msdn.microsoft.com/en-us/library/cc713687%28VS.100%29.aspx

http://blogs.msdn.com/maoni/archive/2008/11/19/so-what-s-new-in-the-clr-4-0-gc.aspx

http://blogs.microsoft.co.il/blogs/sasha/archive/2008/08/25/garbage-collection-notifications-in-net-3-5-sp1.aspx

Sunday, January 17, 2010

Separating Code Concerns using AOP

One of the new assignments I took on is designing guidelines for separating code concerns in .Net applications.

Let's look at an example to understand the problem that needs to be solved.
The following method retrieves products from the database.

public List<Product> GetProducts()
{
    Step 1: Create a connection with the database.
    Step 2: Call a stored procedure to fetch the records.
    Step 3: Return the list of products.
}

As we go on improving our code, we need to perform other activities like:

  1. Authorization: whether the current user has rights to fetch the complete product list
  2. Activity Capture: to log the activity performed by the user
  3. Caching

In order to implement this, we start incorporating the code in the above method as shown below (Example for Caching, Authorization, and Activity Capture):

public List<Product> GetProducts()
{
    Perform Activity Capture                  // Concern: Activity Capture
    if (User is Authorized)                   // Concern: Authorization
    {
        if (Data is in Cache)                 // Concern: Caching
        {
            Return data from cache
        }
        else
        {
            Create a connection with the database
            Call a stored procedure to fetch the records
            Store the result in Cache
            Return the list of products
        }
    }
}

We created "GetProducts" method to retrieve Products but now start adding more concerns to it (marked in red). These concerns should be removed from the body of this method. This type of programming is called Aspect Oriented Programming (AOP).

Unfortunately .Net does not have built-in support for AOP. You would need to use reflection heavily to perform this separation yourself, which would have a negative impact on the run-time performance of the application. There are, however, some very good frameworks that support AOP. In this blog, I will be discussing two approaches to doing AOP in .Net.

PostSharp: This framework uses attribute programming to separate concerns. Let's look at how the above method would be written using PostSharp:

[Authorization]
[Caching]
[ActivityCapture]
public List<Product> GetProducts()
{
    Create a connection with the database
    Call a stored procedure to fetch the records
    Return the list of products
}

The body of the method now contains only the code related to fetching the products; all other concerns are placed as attributes on top of the method. PostSharp allows you to write your own attributes: you specify the code that needs to run before or after the execution of the method body, and PostSharp then performs compile-time weaving to place that code at the appropriate locations.
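As a rough illustration, here is a minimal sketch of what such an attribute could look like, assuming PostSharp's OnMethodBoundaryAspect base class (the exact namespace and argument types vary between PostSharp versions):

using System;
using PostSharp.Aspects; // assumed: PostSharp 2.x aspect namespace

// Illustrative activity-capture aspect: PostSharp weaves OnEntry/OnExit
// around every method the attribute is applied to.
[Serializable]
public class ActivityCaptureAttribute : OnMethodBoundaryAspect
{
    // Runs before the decorated method body
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    // Runs after the decorated method body completes
    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving " + args.Method.Name);
    }
}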


You can download sample code that contains Authorization performed using PostSharp.

For more details on PostSharp visit: http://www.postsharp.org/

Keep looking into this blog for update on this topic.

Sunday, January 3, 2010

Cryptography Library in .Net

During the development of an application framework, I created a library to help perform cryptography functions in applications. Here I provide a brief overview of cryptography and code that performs cryptography functions.

Encryption is a technique by which plain text is converted into a data stream (cipher text) that looks meaningless.
Decryption is the process of converting the cipher text (the encrypted data stream) back to readable plain text.

.Net Cryptography supports symmetric encryption, asymmetric encryption and hashing to convert plain text into cipher text.


Symmetric Encryption

These cryptography algorithms use the same key for encryption and decryption. Algorithms that operate on 1 bit or 1 byte of plaintext at a time are called stream ciphers, whereas algorithms that operate on blocks of bits at a time are called block ciphers.

Where to use Symmetric Encryption
These algorithms should be used to encrypt messages within one application, because the same key is used for encryption and decryption. Using symmetric encryption with third-party applications is not recommended, as you would need to share the encryption key.

Important Symmetric Encryption Algorithms

Data Encryption Standard:

DES is a block cipher that uses a fixed-length 56-bit key to generate cipher text; any 56-bit value can be a key.
Due to the short key length, this algorithm is vulnerable to brute-force attacks.

TripleDES:


Triple DES improves on the DES algorithm by applying DES three times using three different keys, giving an effective key length of 168 bits.

Advanced Encryption Standard (AES) aka Rijndael:


This is a block cipher that supports key lengths of 128, 192, and 256 bits. It is recommended to use a 256-bit key.

What is an Initialization Vector
To ensure that encrypting the same string with the same key produces different output every time, the cipher text of the previous block is combined with the next block (XORed into it, in CBC mode) before encryption. For the first block, an initialization vector (IV) is used instead. It is important to use a random IV every time we perform an encryption operation, as the sketch below does.
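A minimal sketch of symmetric encryption with a per-message random IV, using the built-in AES implementation; the IV is prepended to the output so the decryptor can recover it, and key management is out of scope here:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class AesExample
{
    // Encrypts plaintext with AES; a fresh random IV is generated per call
    // and prepended to the returned cipher text.
    public static byte[] Encrypt(string plainText, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV(); // random IV for every encryption operation

            using (MemoryStream ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length); // prepend the IV
                using (CryptoStream cs = new CryptoStream(
                    ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    byte[] data = Encoding.UTF8.GetBytes(plainText);
                    cs.Write(data, 0, data.Length);
                }
                return ms.ToArray(); // IV followed by cipher text
            }
        }
    }
}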


Hashing

A hash function is a one-way function that takes a variable-length string and converts it to a fixed-length binary sequence. You cannot retrieve the original value from the hash value; the conversion is one way only. However, you can always compare two hash values to check whether they are the same.

Where to use Hashing
Hashing should be used where you need to protect information but do not need the original text back, for example a user's password.
The password can be hashed and stored in the database; during login we calculate the hash of the password entered by the user and compare it with the value stored in the database. If the two match, the user has entered a valid password.

Hashing Algorithms

MD5:
This algorithm produces a 128-bit hash value.

SHA1:
This algorithm produces a 160-bit hash value. Prefer SHA1, because it produces a larger hash value than MD5.

Salt Value
One problem with hashing is that if two users select the same password, the hash values will also be the same. One way to ensure that the hashes of two identical strings are never the same is to add a salt (a unique random value) to the original text before hashing. The salt value can be generated using RNGCryptoServiceProvider. You need this salt value again during comparison, so you have two options: store the salt as part of the hash value so that it can later be extracted, or store it separately. The attached code appends the salt value as part of the hash itself, roughly as sketched below.
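A minimal sketch of that scheme, assuming the salt is prepended to the SHA1 hash so the verifier can extract it (the attached library's actual layout may differ):

using System;
using System.Security.Cryptography;
using System.Text;

static class SaltedHashExample
{
    // Hashes text with a random salt; returns salt + hash so the salt
    // can be recovered at verification time.
    public static byte[] CreateSaltedHash(string text)
    {
        byte[] salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(salt); // cryptographically random salt

        byte[] data = Encoding.UTF8.GetBytes(text);
        byte[] saltedData = new byte[salt.Length + data.Length];
        Buffer.BlockCopy(salt, 0, saltedData, 0, salt.Length);
        Buffer.BlockCopy(data, 0, saltedData, salt.Length, data.Length);

        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(saltedData);
            byte[] result = new byte[salt.Length + hash.Length];
            Buffer.BlockCopy(salt, 0, result, 0, salt.Length);
            Buffer.BlockCopy(hash, 0, result, salt.Length, hash.Length);
            return result; // first 16 bytes are the salt, the rest is the hash
        }
    }
}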


Difference between Hashing and HMAC

During development of the security framework for REST services, I figured out why hashing alone is not sufficient and why we need HMAC. Here is the difference between the two.

Hashing: SHA1 produces a standard 20-byte fixed-length hash. Suppose we send a request to a service along with the hash value of the request parameters for tamper proofing; the service also computes the hash of the request parameters to check whether the information has been tampered with. Things look fine, but there is a problem: an attacker can change the request parameters, recalculate the hash of the changed parameters, and send the request with the new parameters and the new hash value. The service then recalculates the hash from the request parameters and finds it correct, because the hash value itself was replaced. So hashing alone does not provide tamper proofing in this scenario.
Hashing should be used only within one layer of the application, or between layers whose communication is encrypted and highly secured. Storing passwords is such a case: the hash is calculated in the business layer and stored in the database, and the communication between the business layer and the database is generally behind the firewall and secured.

So in the above scenario we have to use HMAC, in which a secret key is shared between the two parties and the HMAC is calculated using that key. What HMAC does, roughly, is take the hash of the shared key plus the message, prepend the key to that hash, and re-hash the result (the real construction also XORs the key with fixed inner and outer padding constants). This makes it cryptographically sound, and it is therefore used for digital signing.

HMAC = hash (sharedkey + hash (sharedkey + message))

The forms authentication ticket and the roles cookie use HMAC for tamper proofing, as sketched below.
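A minimal sketch of computing an HMAC over a message with the built-in HMACSHA1 class; the key would be the secret shared between the two parties:

using System;
using System.Security.Cryptography;
using System.Text;

static class HmacExample
{
    public static string Sign(string message, byte[] sharedKey)
    {
        // HMACSHA1 combines the shared key with the message internally,
        // so only a holder of the key can produce or verify the signature.
        using (var hmac = new HMACSHA1(sharedKey))
        {
            byte[] signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
            return Convert.ToBase64String(signature);
        }
    }
}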


Performing Encryption/Digital Signing in ASP.Net

ASP.Net has machineKey settings in the machine.config file, by which the forms authentication ticket and roles cookie are encrypted and signed.

The default values for the machineKey are:

<pages enableViewStateMac="true" viewStateEncryptionMode="Auto" ... />

<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps"
            validation="SHA1" decryption="Auto" />


When you configure ViewState, the <pages> element is used in conjunction with the <machineKey> element.

The <machineKey> attributes are

  • validationKey: Specifies the HMAC key used for making viewstate tamper proof, signing the forms authentication ticket, and signing the roles cookie.
  • decryptionKey: Specifies the key used to encrypt and decrypt data, such as the forms authentication ticket and the roles cookie.
  • decryption: Specifies the symmetric algorithm used for encryption and decryption. The values can be AES, 3DES, or DES.
  • validation: Specifies the algorithm used to generate the HMAC for making viewstate tamper proof and signing the forms authentication ticket. The values can be SHA1, MD5, AES, and 3DES. The values AES and 3DES are used in ASP.Net 1.1, because the separate decryption attribute was only introduced in 2.0.

Always use SHA1, because it produces a larger hash than MD5.

Forms authentication defaults to SHA1 for tamper proofing (if <forms protection="Validation"> or "All").
When <forms protection="All"> or <forms protection="Encryption">, forms authentication hashes the forms authentication ticket using either MD5 or HMACSHA1.


Performing Cryptography Operations:

In every application there are scenarios where we need to encrypt/ decrypt, hash or digital sign the information. The attached code provides the library for performing these operations. Brief overview of the library is explained below:

As mentioned, Microsoft uses the machineKey to encrypt, decrypt, and digitally sign the authentication cookie and roles cookie, so it is better to use the same keys for the crypto operations we perform in the application.

The first step is to generate the decryption and validation keys. We should consider whether our applications will be deployed in a web farm environment: it is always better to specify explicit values for the decryption and validation keys so that a future web farm deployment will be easier. The attached code contains a console project that shows one way to generate the decryption and validation keys, sketched below.
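A minimal sketch of generating such keys with RNGCryptoServiceProvider and hex-encoding them for the machineKey element; the key sizes here are illustrative (64 bytes for validation, 32 bytes for AES decryption):

using System;
using System.Security.Cryptography;
using System.Text;

static class MachineKeyGenerator
{
    // Generates 'length' random bytes and returns them as a hex string
    // suitable for the validationKey/decryptionKey attributes.
    public static string GenerateKey(int length)
    {
        byte[] buffer = new byte[length];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(buffer);

        var sb = new StringBuilder(length * 2);
        foreach (byte b in buffer)
            sb.AppendFormat("{0:X2}", b);
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", GenerateKey(64));
        Console.WriteLine("decryptionKey=\"{0}\"", GenerateKey(32));
    }
}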

MachineKey Wrapper: The code contains a machineKey wrapper that reads the machineKey settings from the config file.

Encryption/Decryption: The code contains various operations for performing encryption and decryption, including encryption and decryption of XML documents and elements. These methods use the decryption and decryptionKey attributes of the machineKey settings.

Hashing: The code contains methods for creating a hash and then comparing hash values. The salt is randomly generated and stored as part of the hash value. These methods use only the validation attribute of the machineKey settings.

Note: The validationKey attribute is not used for hashing; it is used for HMAC, which is different from hashing as explained above.

Digital Signing (HMAC): The code contains various methods for creating and comparing signatures. The validationKey specified in the machineKey settings of the web.config file is the secret key that HMAC uses to generate the signature.

Cryptography Library in .Net

References:

http://msdn.microsoft.com/en-us/library/ms998288.aspx

http://msdn.microsoft.com/en-us/library/system.security.cryptography.x509certificates.x509certificate2.aspx

http://www.4guysfromrolla.com/webtech/LearnMore/Security.asp

http://msdn.microsoft.com/en-us/library/5e9ft273%28VS.100%29.aspx

http://dev.ionous.net/2009/03/hmac-vs-raw-sha-1.html

WCF Generic Error Handling using IErrorHandler

This is one of a series of WCF blogs I have written. Click here to visit all my WCF blogs.

Error Handling in WCF using IErrorHandler Interface

Source Code: Download

Exceptions are a critical component of a robust system and can be indicators of a variety of situations. For example, a caller may not have provided correct or complete information to a service, a service may have encountered an issue attempting to complete an operation, or a message may be formatted according to an unsupported version.

In this blog, I will talk about the effect exceptions have in WCF and the features WCF provides for communicating and processing exceptions. I will also describe the difference between exceptions and faults, the ways to create faults to send to a caller, and ways to process exceptions on both the service and caller. Finally, I will describe ways to centralize exception processing, catching unexpected exceptions or performing additional processing on exceptions and faults, such as logging.

A WCF service typically wraps calls to underlying business logic libraries, and as would be expected in any managed code, these libraries may raise standard .NET exceptions to their callers. Exceptions are raised up the call stack until either they are handled by a layer or reach the root application’s context, at which point they are typically fatal to the calling application, process, or thread (depending on what type of application is running).

Although unhandled exceptions are not fatal to WCF itself, WCF makes the assumption that they indicate a serious issue in the service’s capability to continue communications with the client. In those cases, WCF will fault the service channel, which means any existing sessions (for example, for security, reliable messaging, or state sharing) will be destroyed. If a session is part of the service call, the client channel will no longer be useful, and the client-side proxy will need to be re-created for the client to continue calling the service.

By default, exceptions that reach the service host that are not derived from FaultException are considered indications of a potentially fatal condition. The exception is replaced by FaultException and the original exception’s details are omitted unless the IncludeExceptionDetailInFaults option is enabled. The FaultException is then serialized as a SOAP fault for communication back to the caller.

The fatal condition created by unhandled exceptions can be prevented by catching exceptions before they reach the service host and throwing a FaultException manually.

Most services that require error handling also need additional information to be passed with the error notification. This information can be transferred to the client as a standard WCF data contract, in the form of a fault. The contractual specification that a particular service operation can result in a specified fault is called a fault contract.

The WCF service can produce a fault that is part of its fault contract by throwing an exception. For a .NET developer, throwing an exception is the most natural way to indicate failure. The service is expected to throw the FaultException<TDetail> generic exception, with TDetail being the actual fault type that is being conveyed to the client. For example, the following service code conveys the ServiceFault fault to the client:

class Service : IService
{
    public void MyMethod()
    {
        ServiceFault fault = new ServiceFault(...);
        throw new FaultException<ServiceFault>(fault);
    }
}

WCF has an excellent built-in extensibility mechanism for converting exceptions to faults. This extensibility point can be consumed through the IErrorHandler interface, which provides two methods: HandleError and ProvideFault. The HandleError method is called on a separate thread after the call has already completed, to possibly log the error and perform other book-keeping operations. The ProvideFault method, on the other hand, is called on the worker thread that is invoking the service call, and accepts the exception that was thrown by the service. It is expected to provide a fault message that will be sent to the client, and thus fits exactly what we are trying to accomplish. At runtime, an implementation of these methods can be hooked up to the ChannelDispatcher on the service side, and automatically get called whenever an unhandled exception escapes the service code.

We will begin with the core of the error handler. A first attempt could be to convert any exception to a FaultException<TDetail> with TDetail as the exception type; for example, if an ArgumentException can be thrown from downstream code, we convert it to the corresponding specific fault. For this we need a mapping mechanism between .NET exceptions and faults.

In the attached code, I have created a class library project that you can use in any of your services. This library contains the following classes:

IExceptionToFaultConverter has a single method, ConvertExceptionToFaultDetail, which is responsible for converting any type of exception to a fault. You implement this interface in your service and write your own logic to convert exceptions to faults.

ErrorHandler implements the IErrorHandler interface and thus the HandleError and ProvideFault methods, along the lines of the sketch below.
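A minimal sketch of such an implementation; the attached library additionally routes the exception through the configured IExceptionToFaultConverter, whereas this version simply wraps the exception message:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Illustrative IErrorHandler implementation.
class ErrorHandler : IErrorHandler
{
    // Called on a separate thread after the call completes; use it for
    // logging and other book-keeping. Returning true marks the exception
    // as handled so the session is not faulted.
    public bool HandleError(Exception error)
    {
        // Log the exception here.
        return true;
    }

    // Called on the worker thread that invoked the service call; converts
    // the unhandled exception into the fault message sent to the client.
    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        var faultException = new FaultException(error.Message);
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}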

ErrorHandlerBehaviourAttribute is used to apply the behavior to your service so that whenever an exception is raised, it is handled in the channel dispatcher.

In the service, we have to implement the IExceptionToFaultConverter interface and also apply the behavior to the service.

The behavior can be specified in the following way at the service level:

[ErrorHandlerBehaviour(
    ExceptionToFaultConverter = typeof(ServiceFaultConverter))]
public class Service : IService
{
    ………….
}

Here we add the service behavior and specify the converter that we implemented above.

Summary

The approach outlined in this article allows service developers to focus on their business logic and call downstream facilities directly. It absolves service developers from the need to worry about letting only permitted faults escape the service boundary, and provides a convenient mechanism for mapping .NET exceptions to well-defined WCF faults.
Click here to download source code

Speed up your Development using Ado.Net Entity Framework


This is one of my posts on the ADO.Net Entity Framework. Click here to see all my posts on Entity Framework.

Relational database systems are really considered the lifeline of every enterprise application and, in many cases, of the enterprise itself. These remarkable systems store information in logical tables containing rows and columns, allowing data access and manipulation through Structured Query Language (SQL) calls and data manipulation languages (DMLs). Relational databases are unique in the enterprise because they form the foundation from which all applications are born. In addition, unlike other software applications, databases are often shared across many functional areas of a business.

What Is ORM?

ORM is an automated way of connecting an object model, sometimes referred to as a domain model, to a relational database by using metadata as the descriptor of the object and data.

Entity Framework

The Entity Framework looks like an interesting technology that is more powerful and advanced than LINQ to SQL. The two technologies have different philosophies, but several features have similar implementations. The EF is more than just an ORM (Object Relational Mapping) tool: it allows developers to query and manipulate data using a conceptual model instead of a physical storage model. It will also become the foundation of new application blocks like Astoria (ADO.NET Data Services), which enables you to expose any data store as web services, and Jasper (Data Access Incubation Projects), which can be used to build dynamic data layers.

It is important to understand that there are many benefits to using EF rather than other data access techniques. These benefits will become more evident as you work with them, but the following are a few of them:

  • EF automates the object-to-table and table-to-object conversion, which simplifies development. This simplified development leads to quicker time to market and reduced development and maintenance costs.
  • Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
  • Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
  • Multiple conceptual models can be mapped to a single storage schema.
  • Language-integrated query support provides compile-time syntax validation for queries against a conceptual model.
  • EF requires less code as compared to embedded SQL, handwritten stored procedures, or any other interface calls with relational databases.
  • EF provides transparent caching of objects on the client (that is, the application tier), thereby improving system performance. A good ORM is a highly optimized solution that will make your application faster and easier to support.

EF Architecture

The ADO.NET Entity Framework is a layered framework that abstracts the relational schema of a database and presents a conceptual model.


Data Source: The bottom layer is the data which can be stored in one or many databases.

Data Providers: The data will be accessed by an ADO.NET data provider. At this moment only SQL Server is supported but in the near future there will be data providers for Oracle, MySQL, DB2, etc.

Entity Data Model (EDM): An EDM is defined by the following three model and mapping files that have corresponding file name extensions:

  • Conceptual schema definition language file (.csdl): defines the conceptual model.
  • Store schema definition language file (.ssdl): defines the storage model, which is also called the logical model.
  • Mapping specification language file (.msl): defines the mapping between the storage and conceptual models.

The Entity Framework uses these XML-based models and mapping files to transform create, read, update, and delete operations against entities and relationships in the conceptual model to equivalent operations in the data source. The EDM even supports mapping entities in the conceptual model to stored procedures in the data source.

Entity Client: EntityClient is an ADO.NET managed provider that supports accessing data described in an Entity Data Model. The mission of EntityClient is to provide a gateway for entity-level queries. Through EntityClient one queries against a conceptual model, not against a specific store implementation of that model. EntityClient does not directly communicate with the data store but it requires a separate, store-specific, provider. EntityClient employs its own language, Entity SQL. An Entity SQL query needs no change in order to work over different store implementations of the same model. That is achieved through EntityClient’s pluggable architecture. Its query pipeline compiles the Entity SQL text to a command tree that is passed to the store provider for native SQL generation.

Entity SQL (ESQL): Entity SQL is a derivative of Transact-SQL, designed to query and manipulate entities defined in the Entity Data Model. It supports inheritance and associations.
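A minimal sketch of querying through EntityClient with Entity SQL; the NorthwindEntities container and Products entity set are hypothetical, and the connection string is assumed to be defined in the config file:

using System;
using System.Data;
using System.Data.EntityClient;

class EntitySqlExample
{
    static void Main()
    {
        // "name=NorthwindEntities" refers to a connection string in the config file
        using (var conn = new EntityConnection("name=NorthwindEntities"))
        {
            conn.Open();
            using (EntityCommand cmd = conn.CreateCommand())
            {
                // Entity SQL targets the conceptual model, not the database schema
                cmd.CommandText =
                    "SELECT p.ProductName FROM NorthwindEntities.Products AS p " +
                    "WHERE p.UnitPrice < @price";
                cmd.Parameters.AddWithValue("price", 10m);

                // EntityCommand readers require SequentialAccess
                using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    while (reader.Read())
                        Console.WriteLine(reader["ProductName"]);
                }
            }
        }
    }
}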

LINQ to Entities: This is a strongly typed query language for querying entities defined in the Entity Data Model.
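The same hypothetical query expressed in LINQ to Entities, which gives compile-time checking of the query against the conceptual model (NorthwindEntities is an assumed ObjectContext generated from the EDM):

using System;
using System.Linq;

class LinqToEntitiesExample
{
    static void Main()
    {
        // NorthwindEntities is the ObjectContext generated from the EDM
        using (var context = new NorthwindEntities())
        {
            var cheapProducts = from p in context.Products
                                where p.UnitPrice < 10
                                select p;

            foreach (var product in cheapProducts)
                Console.WriteLine("{0}: {1}", product.ProductName, product.UnitPrice);
        }
    }
}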

Summary

ORM is the act of connecting object code, whether it is in C#, Java, or any other object-oriented language, to a relational database. This act of mapping is an efficient way to overcome the mismatch that exists between object-oriented development languages and relational databases. Such a mismatch can be classified as an inequality between the native object oriented language operations and functions and those of a relational database. For example, it is impossible to take an object model and save it directly into a database without some manipulation. This occurs because the database doesn’t have the ability to handle inheritance or polymorphism, two basic tenets of object-oriented development. An ORM tool is an excellent solution to overcome the inherent difference between object code and relational databases.

Microsoft Identity Model in Web Farm

The Microsoft Identity Model for claims-based authentication uses SessionSecurityTokenHandler to create the session cookie. This cookie is encrypted using DPAPI, which will not work if you deploy your applications in a web farm or, more generally, on the Microsoft Azure platform, because each machine has its own DPAPI key. To support web farm deployment we need to remove the default cookie transforms used by the session security handler and add our own customized cookie transforms.

We need to provide two cookie transforms:

  1. An encryption transform based on the machine key settings.
  2. An HMAC-SHA1 transform for tamper proofing of the cookie.

In the next few days, my team will provide both cookie transforms and sample code to demonstrate this.
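In the meantime, a known farm-friendly approach is to swap the DPAPI-based transforms for the RSA-based ones that ship with WIF, keyed off the service certificate. A sketch, assuming the WIF 1.0 Microsoft.IdentityModel namespaces and hooked up in Global.asax (verify the types against your WIF version):

using System.Collections.Generic;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Web;
using Microsoft.IdentityModel.Web.Configuration;

public partial class Global
{
    void Application_Start(object sender, System.EventArgs e)
    {
        FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
    }

    static void OnServiceConfigurationCreated(object sender,
        ServiceConfigurationCreatedEventArgs e)
    {
        // Replace the default DPAPI transforms with certificate-based ones,
        // so every machine in the farm can read and verify the cookie.
        var transforms = new List<CookieTransform>
        {
            new DeflateCookieTransform(),
            new RsaEncryptionCookieTransform(e.ServiceConfiguration.ServiceCertificate),
            new RsaSignatureCookieTransform(e.ServiceConfiguration.ServiceCertificate)
        };
        var handler = new SessionSecurityTokenHandler(transforms.AsReadOnly());
        e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(handler);
    }
}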

Thanks,

Ashwani

Custom Code for SAML 1.1 digital verification

Attached is the custom code for Digital verification of SAML 1.1 Token

Source Code

WCF Client Authentication using X509 certificates on SSL

In one of my projects, there were the following requirements:

a. Web service (WCF) clients should be authenticated using X509 certificates.

b. Clients should validate the web services using an X509 certificate (over SSL).

c. All these services should be built using basicHttpBinding and be consumable by .Net 2.0 clients.

I am describing here the complete solution to achieve this and the settings to be done in IIS.

SSL Layer: In order to run web services over SSL, you need to get a certificate from a certificate authority like VeriSign. In a development environment, however, you sometimes do not have a certificate from a valid authority and need to generate a self-signed certificate. Microsoft provides a utility to generate self-signed certificates:

makecert.exe -r -pe -a sha1 -n CN="Ent.com" -sr LocalMachine -ss My -sky exchange -b 01/01/2000 -e 01/01/2036 Ent.cer

The above command creates a self-signed certificate and places it in the Local Computer account under the Personal store.

The configuration in IIS is simple, as shown below:

a. Right-click the web site (say Ent.com, where your WCF services are hosted), go to Properties, and then go to the Directory Security tab.

b. Click the Server Certificate button. This opens the certificate installation wizard. Click "Next", select the option "Assign an existing certificate", and click "Next".

c. Now select the certificate you created ("Ent.com") and click "Next".

d. Select the port number (for SSL the default is 443) and click Finish.

e. If you want to enable 128-bit encryption, edit the certificate details by clicking the "Edit" button on the Directory Security tab and check the "Require 128-bit encryption" checkbox.


Some configuration changes need to be made in the web.config file (the WCF hosting configuration file). These configuration settings are described at the end of the article.


That's it; you are done installing SSL.

Now you can add a Web Reference in your .Net 2.0 project. However, when you try to access the web service from your client application, you will get the following error:


“The underlying connection was closed: Could not establish trust relationship for the
SSL/TLS secure channel.”


This exception is raised by the client during the SSL handshake because the server certificate was not issued by a valid authority.

In order to fix this problem, you need to tell the client to allow your certificate. The following code needs to be written and called before calling the service method.

protected void button1_Click(object sender, EventArgs e)
{
    Testing.Service service1 = new Testing.Service();
    ServicePointManager.ServerCertificateValidationCallback +=
        new System.Net.Security.RemoteCertificateValidationCallback(customCertificateValidation);
    string data = service1.GetData(5, true);
    label1.Text = data;
}

private bool customCertificateValidation(object sender, X509Certificate cert,
    X509Chain chain, System.Net.Security.SslPolicyErrors error)
{
    // Analyze the certificate and then return true.
    return true;
}

The custom certificate validation method allows client applications to decide which server certificates they trust.

Note: Allowing self-signed certificates is not recommended in a production environment.



X509 Client Certificate Authentication:

The next thing to do is client authentication using X509 certificates. In order to do this you need to change the configuration in IIS and also in the web.config file.

IIS settings

a. Right-click the web site (say Ent.com), go to Properties, and then go to the Directory Security tab.

b. Click the Edit button, and in the "Client Certificates" section select the option "Require client certificates".


Client Side Changes

Now when you call the web service you need to provide a client certificate. Here is the code:

protected void button1_Click(object sender, EventArgs e)
{
    Testing.Service service1 = new Testing.Service();

    // As an example, loading the certificate from the file system.
    X509Certificate cert = X509Certificate2.CreateFromCertFile(@"C:\Test1.cer");
    service1.ClientCertificates.Add(cert); // adding the client certificate

    ServicePointManager.ServerCertificateValidationCallback +=
        new System.Net.Security.RemoteCertificateValidationCallback(customCertificateValidation);
    string data = service1.GetData(5, true);
    label1.Text = data;
}

private bool customCertificateValidation(object sender, X509Certificate cert,
    X509Chain chain, System.Net.Security.SslPolicyErrors error)
{
    // Analyze the certificate and then return true.
    return true;
}

The "Test1" certificate used here is a self-signed certificate; in a production environment this certificate should be issued by a valid authority. To make it work in the development environment, you need to add the certificate to the "Trusted Root Certification Authorities" store on your WCF hosting machine.


Web.config changes:

The complete config file is:

<system.serviceModel>
  <services>
    <service name="Service" behaviorConfiguration="ServiceBehavior">
      <endpoint address="https://Ent.com/Services/Service.svc"
                binding="basicHttpBinding"
                bindingConfiguration="Binding"
                contract="IService">
        <identity>
          <dns value="localhost"/>
        </identity>
      </endpoint>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ServiceBehavior">
        <serviceMetadata httpsGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="true"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <bindings>
    <basicHttpBinding>
      <binding name="Binding">
        <security mode="Transport">
          <transport clientCredentialType="Certificate"/>
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
</system.serviceModel>


Description:

  • In order to use SSL you need to specify transport-level security using the <security> tag:

<security mode="Transport"></security>

  • In order to use client certificates for authentication you need to specify clientCredentialType as Certificate:

<security mode="Transport">
  <transport clientCredentialType="Certificate"/>
</security>

However, after changing these settings, when you try to run the application you will get a strange error saying:

Client found response content type of '', but expected 'text/xml'.
The request failed with an empty response.

In order to fix this error you need to comment out the following line in the web.config file:

<!-- <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/> -->

That's all. Now you can consume WCF services from .Net 2.0 clients over SSL with client certificate authentication.


The complete code can be downloaded from here

You can generate the certificate using Microsoft tool as described above.


Ashwani Kumar

Solutions Architect

Globallogic Inc.