The objective of this blog is to outline the patterns and practices most frequently used to design the Data Access Layer of an application.

Designing a Data Access layer

Layered architectures are generally preferred for applications because of code reuse, flexibility, performance and maintainability. In a layered architecture the Data Access Layer is primarily responsible for communicating with the database, whereas the business layer focuses on business logic and rules. The presentation layer concentrates on the UI.

Creating Data Access Layer

1. LINQ to SQL using Entity Framework

Entity Framework is a strategic data access solution for applications that require a conceptual data model with strongly typed data, inheritance and relationships.

LINQ to SQL provides a run-time infrastructure for managing relational data as objects without losing the facility of being able to query data, and use stored procedures. It does this by translating language-integrated queries into SQL for execution by the database, and then translating the results back into objects defined by the application domain. The application can manipulate the objects while LINQ to SQL stays in the background, tracking changes to the data automatically. LINQ does not aim to circumvent a data abstraction layer at the database level, based on stored procedures; in fact it makes their use easier for the programmer.
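To illustrate the query syntax, the sketch below runs the same kind of language-integrated query over an in-memory list; against a LINQ to SQL DataContext the identical query expression would instead be translated to SQL and executed by the database. The Customer class and the data are hypothetical, used only for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity, for illustration only.
public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public static class QueryExample
{
    // Returns the names of customers in the given city, ordered by name.
    // Against a LINQ to SQL DataContext this same query expression would be
    // translated into a parameterised SELECT and executed by the database.
    public static List<string> NamesInCity(IEnumerable<Customer> customers, string city)
    {
        var query = from c in customers
                    where c.City == city
                    orderby c.Name
                    select c.Name;

        return query.ToList();
    }
}
```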

2. Custom Data Abstraction Layer

Using standard ADO.NET components, translate the information between the relational representation in the database and the application's domain-specific object model.

There are two central components of ADO.NET that accomplish this: the DataSet, and the .NET Framework data provider, which is a set of components including the Connection, Command, DataReader, and DataAdapter objects.
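As a minimal sketch of the DataSet side of this pairing, the code below builds a DataSet in memory with the shape a data adapter would produce. The table and column names are made up; in a real data access layer a SqlDataAdapter would fill the DataSet from a Command over an open Connection.

```csharp
using System.Data;

public static class DataSetExample
{
    // Builds a DataSet in memory with the same shape a SqlDataAdapter.Fill
    // call would produce; in a real application the adapter would populate
    // it from a stored procedure executed over a SqlConnection.
    public static DataSet BuildEmployeeDataSet()
    {
        var dataSet = new DataSet("EmployeeData");

        var table = dataSet.Tables.Add("Employee");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));

        table.Rows.Add(1, "Ana");
        table.Rows.Add(2, "Ben");

        return dataSet;
    }
}
```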

Choosing the data representation & transferring format between layers

LINQ to SQL works by mapping the relational database schema to .NET classes. This mapping is provided in the DataContext, which is the main source used to perform all query operations against the database. Classes modelled to map database tables within the data context are known as entity classes. Properties of entity classes map to table columns, and each instance of an entity class represents a row within the database table. These in-memory objects, or entities, are used as the medium for transferring data between layers, while restricting the data context scope to within the class library.

DAO Interface

See the re-usable sample DAO Interface below.


public interface IDataAccessObject
{
    T GetEFContext<T>();

    DataSet GetDataset(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    object ExecuteProcedureGetFirstColumn(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    Task<object> ExecuteProcedureGetFirstColumnAsync(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    int ExecuteProcedureGetRowsAffected(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    Task<int> ExecuteProcedureGetRowsAffectedAsync(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    object ExecuteProcedureGetReturnValue(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    Task<object> ExecuteProcedureGetReturnValueAsync(string storedProcedureName, Collection<SqlParameter> parameters, ConnectionKey databaseConnectionName);

    SqlParameter GetSqlParameter(string parameterName, SqlDbType sqlDbType, int size, object value);

    SqlParameter GetSqlParameter(string parameterName, SqlDbType sqlDbType, object value);

    SqlConnection GetConnection(ConnectionKey databaseConnectionName, bool enableAsync = false);
}


Explaining the above DAO Interface:

GetEFContext accepts a generic type to load the specified connection object. It returns a DbContext object.

GetDataset uses the custom ADO.NET component to return the DataSet resulting from a stored procedure.

ExecuteProcedureGetFirstColumn/ExecuteProcedureGetFirstColumnAsync executes the procedure and returns a scalar. The async method returns a Task<object> and uses async-await to return the output asynchronously.

ExecuteProcedureGetRowsAffected/ExecuteProcedureGetRowsAffectedAsync executes a non-query command on the database and returns the number of rows affected.

ExecuteProcedureGetReturnValue/ExecuteProcedureGetReturnValueAsync executes the procedure and returns the value of a parameter whose direction is Output.

GetSqlParameter initializes and returns a SqlParameter.

GetConnection returns the ADO.NET SqlConnection object.


Applications can use Entity Framework in conjunction with ADO.NET components to establish a connection to the database.

A trade-off needs to be made in terms of the complexity of the transactions and the number of tables and stored procedures to be loaded into the database project if the EF model is DB First.

DB-heavy applications can use both EF with LINQ to SQL and ADO.NET component classes.

EF Code-First logic can be used with POCO classes, which do not require the database components to be loaded first.

POCO Classes- Designing Entities

The choice of the entities to populate within the data context depends on the requirements.

A sample POCO class diagram is shown below.

The sample shows the Building and Employee classes as master classes, and the relationship between them: each Employee is associated with a Building. Similarly, each Building is associated with a Region, and so on.
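The relationships described above can be sketched as plain POCO classes. The property names here are illustrative (not taken from the original diagram); EF Code First would infer the one-to-many relationships from the navigation properties.

```csharp
using System.Collections.Generic;

// Illustrative POCO entities; EF Code First infers the one-to-many
// relationships from the foreign keys and navigation properties below.
public class Region
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Building> Buildings { get; set; } // each Building belongs to a Region
}

public class Building
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int RegionId { get; set; }
    public virtual Region Region { get; set; }
    public virtual ICollection<Employee> Employees { get; set; } // each Employee is associated with a Building
}

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int BuildingId { get; set; }
    public virtual Building Building { get; set; }
}
```

Being plain classes with no EF base type, these entities stay persistence-ignorant and can be shared across layers.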

EF – Code First Migrations allow us to automatically update the DB with any model changes in the POCO classes.

Code First Migrations has two commands to be familiar with:

Add-Migration will scaffold the next migration based on changes you have made to your model.

Update-Database will apply any pending changes to the database.

Managing the Data Context with EF

The data context object holds the information about its entity classes, their tracking information and the mapping information. Loading and persisting the data context object consumes a considerable amount of memory, so it is instantiated and used within method scope and disposed of after use.
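That lifetime policy can be sketched as follows. A stand-in context class is used here so the sketch is self-contained; in a real application it would be the generated DataContext or a DbContext subclass.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for an EF/LINQ to SQL data context, used only to illustrate
// scoping; the real class would derive from DbContext or DataContext.
public class FakeDataContext : IDisposable
{
    public bool Disposed { get; private set; }
    public List<string> Employees { get; } = new List<string> { "Ana", "Ben" };
    public void Dispose() { Disposed = true; }
}

public static class ContextScopeExample
{
    // The context lives only for the duration of the method: it is created,
    // used for the query, and disposed as soon as the using block exits.
    public static int CountEmployees()
    {
        using (var context = new FakeDataContext())
        {
            return context.Employees.Count;
        }
    }
}
```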

Managing database connections

The connection string is required every time a data context is instantiated. It is therefore read at application start and stored in a common global variable that can be accessed throughout the application.

Manipulating Data using LINQ to SQL

There are many ways of manipulating data in LINQ to SQL. There is no restriction in using tables, stored procedures, views or functions.  All are possible, and the choice is purely an implementation decision. One way to implement Create, Read, Update, and Delete (CRUD) operations is to configure the behaviour on the entity class at design time by specifying the stored procedures, or to use base tables directly, as follows.


if (Db.TransitPassCardTypes.Where(g => g.Name.ToLower() == CardTypeModel.Name.Trim().ToLower()).FirstOrDefault() != null)
{
    throw new ArgumentException("A card type with this name already exists", "CardTypeName");
}

var newCardType = new TransitPassCardType();
newCardType.Name = CardTypeModel.Name;
newCardType.UpdatedOn = DateTime.Now;

// persist the new entity
Db.TransitPassCardTypes.InsertOnSubmit(newCardType);
Db.SubmitChanges();





Using Stored Procedures (SPs) & Functions to Query Data

When a stored procedure or a SQL function is added to a data context, it is marked with a FunctionAttribute and its parameters are marked with ParameterAttributes; this identifies the procedure or function as a method within the data context.

LINQ to SQL provides various ways of handling stored procedures: those with known results, those whose results change at execution according to the parameters passed, and those that return data as output parameters.

DB First scenario: functions are automatically created by the wizard. They return an output of type ObjectResult<T>. This is all part of the system-generated code:


public virtual ObjectResult<GetCourseFacilities_Result> GetCourseFacilities(string courseIDs)
{
    var courseIDsParameter = courseIDs != null
        ? new ObjectParameter("CourseIDs", courseIDs)
        : new ObjectParameter("CourseIDs", typeof(string));

    return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<GetCourseFacilities_Result>("GetCourseFacilities", courseIDsParameter);
}


Using an SP to query data:

using (LearnerEntities _dataContext = DataAccessObject.GetEFContext<LearnerEntities>())
{
    courseFacilities_result = _dataContext.GetCourseFacilities(strCourseIds);
}




Handling Transactions

Though it is possible to define our own database transactions to group a set of database operations, it is wiser to use the System.Transactions model, as it takes care of the resources grouped under a single transaction and of operations spanning multiple databases. These operations need to be scoped with a TransactionScope object within the business layer, since a single unit of operation needs to complete as one business process. In the event of an error the entire operation is rolled back, whereas calling Complete commits the operation if it succeeds.


public bool SaveTransitPass(TransitPass oTP)
{
    using (TransactionScope transcope = new TransactionScope())
    {
        SSPDAO dao = new SSPDAO();
        bool status = dao.SaveTransitPass(oTP);

        if (status)
        {
            transcope.Complete(); // commit only on success; otherwise the scope rolls back on dispose
        }

        return status;
    }
}






Handling Concurrency Conflicts

During an update, each field is checked for concurrency conflicts. This checking is controlled by the UpdateCheck property of each field, which can be set to 'Always' or 'WhenChanged'.

Within the data context, the concurrency conflict exceptions are caught as a ChangeConflictException. The ChangeConflicts property of the data context exposes the collection of objects that caused the conflicts while SubmitChanges executes.

try
{
    TP.SubmitChanges();
}
catch (ChangeConflictException cex)
{
    foreach (ObjectChangeConflict conflictObject in TP.ChangeConflicts)
    {
        // expose the necessary information.
    }
}





Features of the Data Context

Object Identity

The data context is intelligent enough to track the entities that were loaded already, so if a request was made for the same object again, it would not result in two entity instances being loaded but only one. This behaviour of maintaining the uniqueness of the identity of the loaded objects in the data context is called Object Identity. This ensures that in-memory objects are not duplicated.
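The behaviour can be approximated with a simple identity map, which is conceptually what the data context maintains internally. This is a simplified sketch of the pattern, not the actual LINQ to SQL implementation.

```csharp
using System;
using System.Collections.Generic;

// Simplified identity map: two requests for the same key yield the same
// in-memory instance, mirroring the data context's object identity behaviour.
public class IdentityMapSketch
{
    private readonly Dictionary<int, object> _loaded = new Dictionary<int, object>();

    // Returns the already-loaded instance for a key if one exists; otherwise
    // invokes the loader (which would hit the database) and caches the result.
    public object GetOrAdd(int key, Func<object> load)
    {
        object entity;
        if (!_loaded.TryGetValue(key, out entity))
        {
            entity = load();
            _loaded[key] = entity;
        }
        return entity;
    }
}
```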


Related data can be eager-loaded as required; the Include method is used to achieve this.


var CardToIssue = (from c in Db.TransitPassCards
                   where c.CardType.Id == CardTypeId
                      && c.AssetStatus.Id == (int)EF.Asset.AssetStatusCode.Active
                      && (c.CardStatus.Id == (int)EF.TransitPass.TransitPassStatusCode.AssignedToBuilding
                          || c.CardStatus.Id == (int)EF.TransitPass.TransitPassStatusCode.Returned)
                      && c.AssetNumber == CardNumber
                   select c).FirstOrDefault();








Object Tracking and Loading

Once an entity is loaded, the data context will by default keep both its old values and its new values, as indicated by the ObjectTrackingEnabled status being set to true. When retrieving entities with one-to-one or one-to-many relationships with other entities, the related entities are also loaded by default at the time of retrieval, as indicated by the DeferredLoadingEnabled status being set to true.


public int SaveChanges(TransitPass.ActionTaken Action)
{
    var UpdateTimeToSet = DateTime.Now;
    foreach (var entry in this.ChangeTracker.Entries<TransitPass.TransitPassCard>())
    {
        //if (entry.State != System.Data.EntityState.Unchanged && entry.State != System.Data.EntityState.Detached)
        //We need to add to history even if entry.State is reported as unchanged, since the ActionTaken version of SaveChanges was called
    }

    return this.SaveChanges();
}

public void SetChangeDetection(bool ToSet)
{
    this.Configuration.AutoDetectChangesEnabled = ToSet;
}




In the above example, change tracking is enabled.


The data context also ensures that changes about to be committed do not conflict with changes already made; this is known as an optimistic offline lock. As well as tracking the changes made since the entities were loaded, it allows changes to be committed in a transactional manner, known as the Unit of Work pattern.


Some ‘best practices’ with LINQ to SQL

Simple optimization techniques can be used within the application to improve LINQ to SQL performance.

Access when needed. 'Heavy' objects such as the data context must be accessed only when needed and disposed of quickly after use.
Keep things simple. To avoid the data context becoming bulky, populate only those entities that are actually required for the current process.
Turn off tracking if it is not required. To avoid unnecessary identity management when you merely wish to retrieve read-only data, set the ObjectTrackingEnabled status to false.
Specify drilldown data. To avoid unnecessary loading of information, use AssociateWith to specify the extent of drilldown to just what is required, and no more.
Analyze the SQL queries executed. With the 'LINQ to SQL Debug Visualizer' that comes in SqlServerQueryVisualizer.dll, you can analyze the queries that are executed at runtime and then optimize them as required. Since it is not integrated with the .NET Framework, it needs to be downloaded and installed.

Data Adapter for Azure Hosted applications

Azure-hosted applications more often than not have components which require them to access data from on-premise databases. The data residing on-premise is mostly business-defined.


Service Bus relay messaging or Service Bus brokered messaging is generally used to transport the on-premise data to Azure.


Designing a clean Data Access Layer repository class is necessary in such cases.

For separation of concerns, follow the process below when creating repository classes.


  • Create an interface which exposes the result object of the method
  • Implement the interface in a BaseRepository/AzureRepository class
  • Mark the implemented method in the BaseRepository as virtual
  • Use Service Bus helper methods to get data from the Service Bus client
  • Create an OnPremiseRepository class that inherits from the AzureRepository
  • Override the virtual method to create the actual method which returns data from the on-premise DB
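The steps above can be sketched as follows. The interface, class and method names here are illustrative; in the real implementation the base method calls the Service Bus and the override calls the on-premise DAL, as in the samples further below.

```csharp
using System.Collections.Generic;

// Step 1: the interface exposes the result object of the method.
public interface ISeatAvailabilityRepository
{
    List<string> GetSeatAvailability(int activityId);
}

// Steps 2-4: the Azure repository implements the interface; the method is
// virtual, and in a real system its body would call a Service Bus helper.
public class AzureSeatRepository : ISeatAvailabilityRepository
{
    public virtual List<string> GetSeatAvailability(int activityId)
    {
        // placeholder for a Service Bus relay call
        return new List<string> { "from-service-bus" };
    }
}

// Steps 5-6: the on-premise repository inherits from the Azure repository and
// overrides the virtual method to read directly from the on-premise database.
public class OnPremiseSeatRepository : AzureSeatRepository
{
    public override List<string> GetSeatAvailability(int activityId)
    {
        // placeholder for an ADO.NET / stored-procedure call
        return new List<string> { "from-on-premise-db" };
    }
}
```

Because callers depend only on the interface, retiring the Azure repository later leaves them untouched.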


In the future, if the on-premise data moves to Azure, the AzureRepository can be removed and the on-premise repository becomes the main repository class.



Azure Repository

public virtual List<Model.LearnerSeatAvailabilityModel> GetActivitySeatAvailabilityForUser(int activityId, string userAlias)
{
    return MS.IT.CFE.Framework.Common.Utilities.Execute<List<Model.LearnerSeatAvailabilityModel>>.Try(3).Do(() =>
    {
        var serviceBusClient
            = ServiceBusClientFactory.CreateChannel<ILearningCentralService>(ConfigurationProvider);

        return serviceBusClient.GetActivitySeatAvailabilityForUser(activityId, userAlias);
    }).RetryFailureAction((ex) =>
    {
        if (ex is FaultException<LDServiceFaultContract>)
        {
            // valid Learner Classic Service error, no need to retry; throw it for the exception filter to handle
            Diagnostic.LogWarning(ex, DiagnosticCode.TransactionService_Error);
            throw ex;
        }

        ServiceBusChannelFactoryCreate.HandleRetry<ILearningCentralService>(DiagnosticCode.ServiceBus_Connectivity, ex);
    }).FailureAction((ex) =>
    {
        Diagnostic.LogError(DiagnosticCode.ServiceBus_Connectivity, ex);
    });
}






public class ActivityOnPremiseRepository : ActivityRepository
{
    public ActivityOnPremiseRepository()
    {
    }

    public override List<LearnerSeatAvailabilityModel> GetActivitySeatAvailabilityForUser(int activityId, string userAlias)
    {
        DataSet dataSet = DataAccessObject.GetDataset("lds.GetLearnerActivitySeatAvailability", GetSelectStoredProcedureNametParameters(activityId, userAlias), ConnectionKey.Elms);

        List<LearnerSeatAvailabilityModel> learnerSeatAvailabilityModelCollection = new List<LearnerSeatAvailabilityModel>();

        if (dataSet != null && dataSet.Tables != null && dataSet.Tables.Count > 0)
        {
            // map the rows of the result table to LearnerSeatAvailabilityModel instances (mapping elided)
        }

        return learnerSeatAvailabilityModelCollection;
    }
}