.NET

Entity Framework & Direct SQL

Entity Framework is a great time saver.

When I think back to the days when I had to manually construct SQL statements, parse the results, control updates, and of course ensure my actual database design was in sync with what my code expected – well it sends a shiver down my spine!

I remember back in the day I even built myself a little utility that would take my database and spit out boilerplate code for basic CRUD operations.

But then EF was released – and although it was a little bit flaky at first (with people tutting and muttering about NHibernate), it was still a lovely little thing.  These days of course I use it all the time.  In fact, as you may have seen in one of my earlier posts, I've even borrowed more than a little of its design for an Azure Table Storage version.

And of course couple EF with decent patterns such as the repository pattern and Dependency Injection and you have a rock solid foundation.

But, (there’s always a but) – EF is sometimes a bit slow when compared with issuing SQL commands directly – especially with batch operations.

For this reason the EF context exposes a Database property, which in turn exposes a number of options for issuing SQL commands directly.

I will show you how to use the two most common ones – SqlQuery and ExecuteSqlCommand.

As I like to decouple as much as possible, and because I like my interfaces to only depend on core libraries I’m going to hide away some of the EF specific stuff.

So, first of all I like to have an IRepository<TEntity> interface and a RepositoryBase<TEntity> base class; that way each repository gets the basic CRUD methods and anything else I might want:

public interface IRepository<TEntity> where TEntity : class
{
    void Delete(object id);
    void Delete(TEntity entity);
    System.Linq.IQueryable<TEntity> GetAll();
    System.Linq.IQueryable<TEntity> GetAll(object filter);
    IPagedResponseViewModel GetPaged(int take, int skip, string orderBy, bool orderByAscending, object filter);
    TEntity GetById(object id);
    TEntity GetFullObject(object id);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Commit();
    void Dispose();
    // Direct SQL stuff
    int ExecuteSqlCommand(string sql, object[] parameters);
    ICollection<TEntity> SqlQuery(string sql, object[] parameters);
    int ExecuteSqlCommand(string sql);
    ICollection<TEntity> SqlQuery(string sql);
}
public abstract class RepositoryBase<TEntity> : IRepository<TEntity> where TEntity : class
    {
        internal DataContext context;
        internal DbSet<TEntity> dbSet;

        public RepositoryBase(DataContext context)
        {
            this.context = context;
            this.dbSet = context.Set<TEntity>();
        }

.....
        public virtual int ExecuteSqlCommand(string sql)
        {
            return context.Database.ExecuteSqlCommand(sql);
        }
        public virtual int ExecuteSqlCommand(string sql, object[] parameters)
        {
            return context.Database.ExecuteSqlCommand(sql, parameters);
        }
        public virtual ICollection<TEntity> SqlQuery(string sql)
        {
            return context.Database.SqlQuery<TEntity>(sql).ToList();
        }
        public virtual ICollection<TEntity> SqlQuery(string sql, object[] parameters)
        {
            return context.Database.SqlQuery<TEntity>(sql, parameters).ToList();
        }
        public virtual void Commit()
        {
            context.SaveChanges();
        }
        public virtual void Dispose()
        {
            context.Dispose();
        }
    }

As you can see all we are really doing is encapsulating the SqlQuery and ExecuteSqlCommand methods by taking in a string or a string and a list of parameters.

First let's look at ExecuteSqlCommand, as this is very straightforward – we simply pass in our SQL string, such as

ExecuteSqlCommand("UPDATE SomeTable SET SomeColumn='Some Value'");

EF issues the command and returns an int indicating the number of affected rows.
If you want, you can pass in parameters like this:

ExecuteSqlCommand("UPDATE SomeTable SET SomeColumn='Some Value' WHERE ID=@p0", new object[]{ 3 });

Now for SqlQuery. You may notice SqlQuery uses TEntity (if you're not familiar with generics: TEntity refers to whatever class we pass in when declaring our repository – therefore anything that refers to TEntity refers to whatever model we want to use). E.g. we would have

public class MyObject
{
    public int Id { get; set; }
    public string SomeProperty { get; set; }
    ....
}

public class MyRepository : RepositoryBase<MyObject>
{
    ...
}

so when we then instantiate the actual repository

var myRepo = new MyRepository(context);

we get all the commands from the base RepositoryBase class referencing our MyObject model.

Anyway, ignoring our repository for a moment, if we were to query Database.SqlQuery directly we'd use

context.Database.SqlQuery<MyObject>("SELECT * FROM MyObjects");

This will result in a list of MyObject – EF actually attempts to convert the results it receives back into the model you pass in.
So all we have done is automate that, because we already know the model from when we instantiated our repository – thus when calling our encapsulated method we just use

SqlQuery("SELECT * FROM MyObjects");

Again, we can pass in parameters just like with ExecuteSqlCommand.

So as you can see this gives you complete flexibility in using EF – as mentioned earlier, if I have to iterate through and update large datasets I sometimes construct my SQL directly and use these methods instead.
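To make that concrete, here's a sketch of the kind of batch job I mean – the Products model and its Discontinued/Price columns are made-up examples, and repo is an instance of our repository:

// EF approach: loads and tracks every row, then issues one UPDATE per entity
foreach (var product in repo.GetAll().Where(p => p.Discontinued))
{
    product.Price = 0;
    repo.Update(product);
}
repo.Commit();

// Direct SQL approach: one statement, one round trip, no change tracking
repo.ExecuteSqlCommand(
    "UPDATE Products SET Price = 0 WHERE Discontinued = @p0",
    new object[] { true });

For a handful of rows the difference is negligible, but over thousands of rows the per-entity change tracking really adds up.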

Note: This post was created in response to a user question on my course about ASP.NET Development Techniques.  My blog viewers can get the course for just $10 by clicking this link

The Frustration of Tutorials and Walkthroughs

I have an issue with how most software developers learn to write software.

Like me, a lot of developers learn on their own by searching for how to do stuff on the internet. This is great, and of course I doubt there is much you CAN'T find with this method.

However, when I started learning new technologies – for example when I first looked at MVC quite a few years back (I started on MVC2) – I found that all the tutorials, including Microsoft's, taught in a way that was really quite bad programming practice.

Now I’m not saying it’s ALL bad, but recently I’ve been looking at various training courses online, and the majority all show the same thing – they teach you how to CODE, but they don’t really teach you how to write software.

Example: one MVC course, which promises to teach you how to be a 'complete' developer, went through the basics – create a project, create a class, here's what a for loop is, here's what a while loop does, etc.
However there was NO mention of SOLID principles, no real explanation of object-oriented programming, of using interfaces and abstractions. Dependency Inversion and a whole multitude of good programming practices were simply ignored.

As a junior developer I found this VERY frustrating.

Of course, now I've been around a bit, I do understand a lot more – but it's actually been quite a painful journey if truth be told. It does however explain why a lot of software houses don't really like to employ freelancers who have only ever worked on their own.

So anyway, to address this woeful shortcoming, I have actually created my own course.

I created it on a site called Udemy.com – if you’ve never used it have a look – it’s pay per course rather than subscription based, but they do often run very good deals – and some of the courses are really very good.

Please, check out my course here

There’s a substantial discount on what will normally be charged, and I’ve had over 1000 students within a few days! So I guess there are some people out there who actually want to learn how to write software properly!

The great thing about Udemy.com is that you get lifetime access to the courses – and that includes UPDATES. For example in my course I'll be adding lectures on Module Injection and splitting your views up using Partial Views – another underused technique.

And I would welcome input please – what do you think most programmers miss by learning from random searching?

Knockout Component Loading with requirejs won’t load html file

Knockout 3.2 introduces component loading – a great feature that allows you to load HTML and JS modules into your code, enabling you to split your code into self-contained modules.

Think of it like PartialViews in MVC but for KnockoutJS.

The first thing you need to do is 'register' your component, e.g. (this is taken from the KnockoutJS documentation):

ko.components.register('like-widget', {
    viewModel: function(params) {
        // Data: value is either null, 'like', or 'dislike'
        this.chosenValue = params.value;
        
        // Behaviors
        this.like = function() { this.chosenValue('like'); }.bind(this);
        this.dislike = function() { this.chosenValue('dislike'); }.bind(this);
    },
    template:
        '<div class="like-or-dislike" data-bind="visible: !chosenValue()">\
            <button data-bind="click: like">Like it</button>\
            <button data-bind="click: dislike">Dislike it</button>\
        </div>\
        <div class="result" data-bind="visible: chosenValue">\
            You <strong data-bind="text: chosenValue"></strong> it\
        </div>'
});


Then your main page would implement the following:
    <ul data-bind="foreach: products">
        <li class="product">
            <strong data-bind="text: name"></strong>
            <like-widget params="value: userRating"></like-widget>
        </li>
    </ul>
with the following JavaScript to load your view model:
    function Product(name, rating) {
        this.name = name;
        this.userRating = ko.observable(rating || null);
    }

    function MyViewModel() {
        this.products = [
            new Product('Garlic bread'),
            new Product('Pain au chocolat'),
            new Product('Seagull spaghetti', 'like') // This one was already 'liked'
        ];
    }

    ko.applyBindings(new MyViewModel());
All pretty cool – except we're embedding our 'module' in the component registration.  What we need is to have it all in separate files.  Again the Knockout documentation shows us how to do this – we use an AMD module loader such as RequireJS, which lets us store everything in separate files, thus:
component-like-widget.js 
define(['knockout'], function(ko) {

    function LikeWidgetViewModel(params) {
        this.chosenValue = params.value;
    }

    LikeWidgetViewModel.prototype.like = function() {
        this.chosenValue('like');
    };

    LikeWidgetViewModel.prototype.dislike = function() {
        this.chosenValue('dislike');
    };

    return LikeWidgetViewModel;

});
component-like-widget.html
<div class="like-or-dislike" data-bind="visible: !chosenValue()">
    <button data-bind="click: like">Like it</button>
    <button data-bind="click: dislike">Dislike it</button>
</div>
<div class="result" data-bind="visible: chosenValue">
    You <strong data-bind="text: chosenValue"></strong> it
</div>

So now that everything is loaded from external files, our component registration needs to change to take account of them:
ko.components.register('like-or-dislike', {
    viewModel: { require: 'files/component-like-widget' },
    template: { require: 'text!files/component-like-widget.html' }
});
And then finally our main HTML page just implements the component much as above (except we're now adding products dynamically as well).

HTML

    <ul data-bind="foreach: products">
        <li class="product">
            <strong data-bind="text: name"></strong>
            <like-or-dislike params="value: userRating"></like-or-dislike>
        </li>
    </ul>
    <button data-bind="click: addProduct">Add a product</button>

script

    function Product(name, rating) {
        this.name = name;
        this.userRating = ko.observable(rating || null);
    }

    function MyViewModel() {
        this.products = ko.observableArray(); // Start empty
    }

    MyViewModel.prototype.addProduct = function() {
        var name = 'Product ' + (this.products().length + 1);
        this.products.push(new Product(name));
    };

    ko.applyBindings(new MyViewModel());

Now, this is pretty much what the Knockout documentation says – it works lovely on their own pages, but could I get it to work in my project?  Nope.  Not at all.

So, time to start digging.  First thing, let's have a look at what is being brought back from the server (I use Fiddler).

The JavaScript file loads fine – but when it comes to the HTML file it doesn't actually try to load it; instead it tries to get a file called 'text'.

Well I guess this makes sense – after all, we've told it to load 'text!files/component-like-widget.html'

So er, what am I missing?

Well, I assumed requirejs automatically knew what to do when it saw the text! prefix.  Well, you know what they say about ASS U ME…

Now this is mainly because I've never really delved deep into requirejs – I just use it, and out of the box it always works fine.  So, just in case anyone else out there is in the same boat, here's the fix.

To make requirejs understand the text! prefix we need the text plugin – a NuGet package aptly named 'Text Plugin for RequireJS' – so install it with NuGet, and voila! The world is back to normal.
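One last footnote: requirejs also has to be able to find the plugin file itself. A minimal config sketch – the paths below are assumptions, so adjust them to wherever the package dropped text.js in your project:

requirejs.config({
    paths: {
        knockout: 'Scripts/knockout-3.2.0',
        text: 'Scripts/text' // the RequireJS text plugin
    }
});

Once requirejs can resolve 'text', the 'text!...' prefix loads the HTML template exactly as the documentation promises.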

Entity Framework and Interface issues

There is, and has been for quite some time, a serious issue with Entity Framework – namely that it doesn't support complex interfaces.

A simple entity based on an interface such as this is fine

public class Root : IRoot
{
    public int RootId { get; set; }
    public string Data { get; set; }
}

However as soon as you start to introduce child objects, for example an
ICollection<Leaf> Leafs
we have an issue, because the interface needs to define the collection as
ICollection<ILeaf> Leafs
and that's where it all goes a bit pear-shaped, because EF can't figure out that Leafs is a navigation property when it's typed against an interface instead of a concrete type.

I have spent many a wasted hour searching for a fix for this; the majority of the articles I found simply state that EF does not support interfaces.

In the past I've worked around it by using a DTO: my DTO class is what EF uses to create my database mappings, but I then need something like AutoMapper, or to roll my own mapper, in order to convert my DTO to a concrete class based on an interface and vice versa.

This also has the issue that although retrieving and adding NEW entities works OK, as soon as you try to update an existing entity with Attach it all starts to fall apart again.
Once again I managed to fix this by simply ensuring that when I'm mapping from my concrete class to the DTO I get the EXISTING DTO from the database, and then loop through all its properties, copying and updating as required.

So recently, in a fit of despair, I sat down and hammered out an alternative solution that is a bit more elegant.

OK, so first we have our standard interface thus:

public interface IRoot
{
    string Data { get; set; }
    System.Collections.Generic.ICollection<ILeaf> Leafs { get; }
    int RootId { get; set; }
}

So we have an interface properly defined, but EF doesn't know how to get the navigation property. So we need to create one alongside the collection that the interface expects, so let's go with

public ICollection<ILeaf> Leafs { get; set; }
public ICollection<Leaf> LeafNavigation { get; set; }

OK, that's great, but now of course these are two separate lists, so we'll add an internal list and connect our two ICollections up to it. We'll also add some constructors for good measure.

public class Root : IRoot
{
    private List<Leaf> _leafs;

    public Root(ICollection<Leaf> leafs)
    {
        this._leafs = leafs as List<Leaf>;
    }

    public Root(List<Leaf> leafs)
    {
        this._leafs = leafs;
    }

    public Root()
    {
        this._leafs = new List<Leaf>();
    }

    public int RootId { get; set; }
    public string Data { get; set; }

    public ICollection<ILeaf> Leafs { get { return _leafs.ConvertAll(l => (ILeaf)l); } }

    public ICollection<Leaf> LeafNavigation { get { return _leafs; } set { _leafs = value.ToList(); } }
}

So now EF is happy – we've implemented the interface, so we can use IRoot rather than Root. But now we have another issue: basically, I can't seem to ADD Leaf entities through IRoot.Leafs – I suspect because the Leafs getter builds a brand new list via ConvertAll, so the Add lands on a throwaway copy rather than the underlying collection (maybe someone can confirm). The code happily lets me write

IRoot root = new Root();
root.Leafs.Add(new Leaf() { Data = "Some data" });

but it just doesn't actually ADD it to the underlying collection (I've tried a number of permutations – like I said, if I have missed something obvious PLEASE let me know!)

Anyway, I managed a simple workaround – basically, let's just add an AddLeaf method to the actual class that takes a Leaf and adds it to the underlying collection. The finished class and interface look like this:

public interface IRoot
{
    void AddLeaf(ILeaf leaf);

    string Data { get; set; }
    System.Collections.Generic.ICollection<ILeaf> Leafs { get; }
    int RootId { get; set; }
}

public class Root : IRoot
{
    private List<Leaf> _leafs;

    public Root(ICollection<Leaf> leafs)
    {
        this._leafs = leafs as List<Leaf>;
    }

    public Root(List<Leaf> leafs)
    {
        this._leafs = leafs;
    }

    public Root()
    {
        this._leafs = new List<Leaf>();
    }

    public void AddLeaf(ILeaf leaf)
    {
        _leafs.Add(leaf as Leaf);
    }

    public int RootId { get; set; }
    public string Data { get; set; }

    public ICollection<ILeaf> Leafs { get { return _leafs.ConvertAll(l => (ILeaf)l); } }

    public ICollection<Leaf> LeafNavigation { get { return _leafs; } set { _leafs = value.ToList(); } }
}

And voila – this works a treat: I can add Leafs and update them, along with the root, and EF just binds everything as normal. Add, Update and Get all work seamlessly.
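To round things off, here's a minimal usage sketch (MyContext and its Roots DbSet are my own assumptions for illustration, not part of the sample):

using (var context = new MyContext())
{
    IRoot root = new Root { Data = "parent" };
    root.AddLeaf(new Leaf { Data = "child" });

    // EF maps the concrete LeafNavigation property; the interface-typed
    // Leafs property is simply ignored by the model builder.
    context.Roots.Add((Root)root);
    context.SaveChanges();
}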

I’ve uploaded this to GitHub – and I hope this helps anyone who may be having similar issues!
https://github.com/squareconnection/EFInterfaces

Azure Table Storage with Repository Pattern

I love Windows Azure.  I love the flexibility it gives.  You can start small at a low cost, and gradually increase storage, CPU and memory as required.

This is especially great for start-ups or if you are piloting a potential new app/service.

With this in mind I was recently engaged to build such a pilot application.  The brief was that, to begin with, the costs had to be as low as possible. In particular we needed cheap data storage, but with the potential that the requirement could grow to terabytes' worth of storage in the future.

These two requirements pretty much ruled out Azure SQL for a backend.  Although Azure SQL is great, it isn’t the cheapest option, and there are finite limits on the sizes of databases you can have.  So if we did go down that route we’d need to look at partitioning, sharding or some other managed way of splitting the data across databases.

Finally, because the majority of the data would be rows of transactions, Azure Table Storage seemed to fit the bill nicely.

And so we started to design/build a new WebAPI with an Azure Table Storage backend.

There is also something else I love – SOLID principles.  Requirements change.  And I like to be able to change one bit of my code without it causing headaches throughout the application.

If I were going to use SQL I would employ a Unit of Work/Repository pattern and Entity Framework for the data persistence layer – and so I decided I wanted to employ the same approach on this project.

I want Table Storage at the back with a WebAPI in front; however, I want the WebAPI to be totally ignorant of the storage mechanism used – after all, it may turn out that I need to change to something else at some point in the future.

So what I need to do is create a Table Storage provider that I can pass any entity to and have it perform the relevant CRUD operations.

I then want to stick a repository and Unit Of Work in front of that provider.  That way if I change to SQL in the future I just have to swap out my repository implementation for one that uses Entity Framework instead.

Finally, I could also just implement the Azure stuff directly within a repository – but by putting it in a provider and referencing that provider I have more flexibility if I want to mix providers – e.g. half SQL, half NoSQL, etc.

One other point before I begin – Azure can store different 'models' all in the same table, so we could store customers AND invoiceLines in the same table!  Therefore we have a PartitionKey that allows us to separate the different models in some way.  You don't HAVE to do it like this – you could still have a table per entity – but my app is going to be multi-tenant and so I want one table per tenant.  The discussions as to why and what is best are quite in depth, and in fact there is a very good discussion on this in the Patterns and Practices library.

For the purposes of this article we are going to have a 'TenantId', and this will be used to define our table name; each entity's name will be the PartitionKey.  In this way, if this were a traditional SQL app, the PartitionKeys would in effect be our table names.  I hope that makes sense!
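If not, here's a concrete sketch of how one tenant's table might end up looking (the names are examples only):

Table: tenant001
    PartitionKey = "Customers"   RowKey = <CustomerId>   <- customer rows
    PartitionKey = "Invoices"    RowKey = <InvoiceId>    <- invoice rows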

Building the Repository

This won't be a complete blow-by-blow walkthrough.  I am going to assume you know the basics – how to create a project, a solution, etc.

OK, so first I need my backend.  And as stated I want to use Windows Azure Table Storage rather than SQL as I want something very flexible, but cheap.  I’m also not too fussed about relationships (at least for now!).

So first of all we need an Azure account, and in particular a Storage account.  There are lots of tutorials on how to do this – but in a nutshell: log in to Windows Azure and click New –> Data Services –> Storage.

Once created you’ll need your access keys.  With the Storage Account selected just click Manage Access Keys.

Next let's create our WebAPI – in Visual Studio create a new project.  As this will be split into a number of projects, first create an Empty Solution.

Now add a new project to the solution.  Select Windows from the list of templates and ‘Class Library’ from the options presented.  Call it ‘Model’ and click OK.

Create a class called Customer and let's put some basic stuff in it:

namespace Model
{
    public class Customer
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
        public string Company { get; set; }
        public string Email { get; set; }
        public string Telephone { get; set; }
        public string VATNumber { get; set; }
    }
}

OK, so now we have the model, we need a way of persisting it.

Using Azure Tables is fairly straightforward.  You create a CloudStorageAccount object, pass it a connection string, then create a CloudTableClient and a CloudTable.  The connection string info comes from the 'Manage Access Keys' step you did in the Azure Portal.

So for example:

string connectionString = "DefaultEndpointsProtocol=http;AccountName=<your storage account>;AccountKey=<your account key>";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("MyTable");
table.CreateIfNotExists();

You then use various methods to persist and read the data.
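For example, a minimal insert and read-back looks something like this (MyEntity here is an assumed class that inherits from TableEntity and passes the keys up to its base constructor):

MyEntity entity = new MyEntity("partition1", "row1") { SomeProperty = "some value" };
TableOperation insert = TableOperation.Insert(entity);
table.Execute(insert);

TableOperation retrieve = TableOperation.Retrieve<MyEntity>("partition1", "row1");
MyEntity fetched = (MyEntity)table.Execute(retrieve).Result;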

However, what I don't want is to have to do this manually for every model.  What I really want to do is use the repository pattern to create a base that I can pass ANY model to and have it figure out what to do with it.

So what we need is an AzureTableStorageProvider that we can wrap all our functionality in, and then expose it via a Repository class.  So the first job is to create our provider.

Create a new Class Library Project called AzureTSProvider, and then create a new class called TableSet.

You also need to add the Azure NuGet packages – so right click the new project, select ‘Manage NuGet Packages’ and search for Windows Azure Storage.  Click Install, accept the Licenses and you’re good to go.

This is going to be a generic class – this allows us to pass in ANY class we construct and act on it regardless.  We do this by adding <TEntity> to the class declaration and stating 'where TEntity : class'.  We also need the new() constraint in the declaration so our generic code can construct a new TEntity when it needs one.

– NOTE – it doesn’t have to be TEntity by the way – you can call it whatever you like!

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace AzureTSProvider
{
    public class TableSet<TEntity>
    where TEntity : class,
        new()
    {
        private List<dynamic> internalList;
        private string partitionKey;
        private string tableName;
        private string connectionString;

        internal CloudTableClient tableClient;
        internal CloudTable table;

        public TableSet(string connectionString, string tableName)
        {
            this.partitionKey = typeof(TEntity).Name;
            this.tableName = tableName;
            this.connectionString = connectionString;

            //pluralise the partition key (because basically it is the 'table' name).
            if (partitionKey.Substring(partitionKey.Length - 1, 1).ToLower() == "y")
                partitionKey = partitionKey.Substring(0, partitionKey.Length - 1) + "ies";

            if (partitionKey.Substring(partitionKey.Length - 1, 1).ToLower() != "s")
                partitionKey = partitionKey + "s";

            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
            tableClient = storageAccount.CreateCloudTableClient();
            table = tableClient.GetTableReference(tableName);
            table.CreateIfNotExists();
        }

        // NOTE: this first cut assumes TEntity inherits from TableEntity
        // (so the casts to/from ITableEntity work). Getting rid of that
        // assumption is exactly what the next section is about.
        public virtual TEntity GetByID(object id)
        {
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, id.ToString()));
            var result = table.ExecuteQuery(query).First();

            return result as TEntity;
        }

        public virtual List<TEntity> GetAll()
        {
            //get everything in this partition - e.g. all customers, because Customer is our partition key
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey));
            var result = table.ExecuteQuery(query).ToList();

            return result.Select(r => r as TEntity).ToList();
        }

        public virtual void Insert(TEntity entity)
        {
            TableOperation insertOperation = TableOperation.Insert(entity as ITableEntity);
            table.Execute(insertOperation);
        }
    }
}

To DTO or not to DTO?

I'm going to fast forward a bit here to explain why I did what I'm going to do next.  Imagine I've hooked this into my repository and Unit of Work, and then a WebAPI controller creates a customer object and tries to persist it through my framework.

I immediately ran into an issue.  You see, in order for table storage to work with our model, the entities we create have to inherit from TableEntity.  I don't want to do this.  Why?  Because I want my API to have no reference to Azure whatsoever.  Nor even my repository.  Everything and anything to do with the persistence of the actual data needs to be neatly encapsulated in my provider classes.

OK, so what we need is a Data Transfer Object (DTO).  The DTO will inherit from TableEntity, and I could either manually copy properties between my entity and the DTO or use something like AutoMapper.

But now my issue is that I have to pass in a DTO along with my entity.  We could just say this is a requirement, but apart from not being very neat, I'm basically having to duplicate every entity I create with a DTO version – all the properties would be identical.

So I decided that the best way forward would be to just create a DTO on the fly using the properties of my passed in entity.

Thank goodness for Dynamic Objects – as these allow me to do just this!

First I create an empty 'base DTO' – a class that inherits from DynamicObject and implements ITableEntity.  This gives us an object that implements everything Azure wants, but because it inherits from DynamicObject it is expandable, allowing us to add more properties to it.  This lives in my provider library – so create a new class called TableEntityDTO and paste in the following:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Collections.Generic;
using System.Dynamic;

namespace AzureTSProvider
{
    public class TableEntityDTO : DynamicObject, ITableEntity
    {
        #region ITableEntity properties
        // Summary:
        //     Gets or sets the entity's current ETag. Set this value to '*' in order to
        //     blindly overwrite an entity as part of an update operation.
        public string ETag { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's partition key.
        public string PartitionKey { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's row key.
        public string RowKey { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's time stamp.
        public DateTimeOffset Timestamp { get; set; }
        #endregion

        // Use this Dictionary to store the table's properties.
        public IDictionary<string, EntityProperty> properties { get; private set; }

        public TableEntityDTO()
        {
            properties=new Dictionary<string,EntityProperty>();
        }

        public TableEntityDTO(string PartitionKey, string RowKey)
        {
            this.PartitionKey = PartitionKey;
            this.RowKey = RowKey;
            properties = new Dictionary<string, EntityProperty>();
        }

        #region override DynamicObject's methods
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            if (!properties.ContainsKey(binder.Name))
                properties.Add(binder.Name, ConvertToEntityProperty(binder.Name, null));
            result = properties[binder.Name];
            return true;
        }

        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            EntityProperty property = ConvertToEntityProperty(binder.Name, value);

            if (properties.ContainsKey(binder.Name))
                properties[binder.Name] = property;
            else
                properties.Add(binder.Name, property);

            return true;
        }

        public bool TrySetMember(string binder, object value)
        {
            EntityProperty property = ConvertToEntityProperty(binder, value);

            if (properties.ContainsKey(binder))
                properties[binder] = property;
            else
                properties.Add(binder, property);

            return true;
        }

        #endregion

        #region ITableEntity implementation

        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            this.properties = properties;
        }

        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            return this.properties;
        }

        #endregion

        /// <summary>
        /// Convert object value to EntityProperty.
        /// </summary>
        private EntityProperty ConvertToEntityProperty(string key, object value)
        {
            if (value == null) return new EntityProperty((string)null);
            if (value.GetType() == typeof(byte[])) 
                return new EntityProperty((byte[])value);
            if (value.GetType() == typeof(bool)) 
                return new EntityProperty((bool)value);
            if (value.GetType() == typeof(DateTimeOffset)) 
                return new EntityProperty((DateTimeOffset)value);
            if (value.GetType() == typeof(DateTime)) 
                return new EntityProperty((DateTime)value);
            if (value.GetType() == typeof(double)) 
                return new EntityProperty((double)value);
            if (value.GetType() == typeof(Guid)) 
                return new EntityProperty((Guid)value);
            if (value.GetType() == typeof(int)) 
                return new EntityProperty((int)value);
            if (value.GetType() == typeof(long)) 
                return new EntityProperty((long)value);
            if (value.GetType() == typeof(string)) 
                return new EntityProperty((string)value);
            throw new Exception(string.Format("The value type {0} is not supported for {1}", value.GetType(), key));
        }

        /// <summary>
        /// Get the CLR type for an EDM type; if the type is not a supported EDM type, throw an exception.
        /// </summary>
        private Type GetType(EdmType edmType)
        {
            switch (edmType)
            {
                case EdmType.Binary : 
                    return typeof(byte[]);
                case EdmType.Boolean : 
                    return typeof(bool);
                case EdmType.DateTime : 
                    return typeof(DateTime);
                case EdmType.Double : 
                    return typeof(double);
                case EdmType.Guid : 
                    return typeof(Guid);
                case EdmType.Int32 : 
                    return typeof(int);
                case EdmType.Int64 : 
                    return typeof(long);
                case EdmType.String : 
                    return typeof(string);
                default: throw new TypeLoadException(string.Format("not supported edmType:{0}" ,edmType));
            }
        }
    }
}

Now, in my TableSet class, I create two methods. The first is CreateDTO, which takes my source entity, creates a dynamic DTO, and copies all the properties from the source entity onto it.

I also have an 'IsId' helper that I use to scan each source entity property to see if it's an ID – we need this for the 'RowKey' that Azure uses as a 'primary key'.  It's fairly simplistic but suits my needs.

Finally I have a StripDTO method that essentially maps our DTO back to the base entity for return queries.

#region object mapping
        dynamic CreateDTO(object a)
        {
            TableEntityDTO dto = new TableEntityDTO();
            object rowKey = null;

            Type t1 = a.GetType();
            Type t2 = dto.GetType();

            //now set all the entity properties
            foreach (System.Reflection.PropertyInfo p in t1.GetProperties())
            {
                dto.TrySetMember(p.Name, p.GetValue(a, null) == null ? "" : p.GetValue(a, null));
                if (IsId(p.Name))
                    rowKey = p.GetValue(a, null);
            }

            if (rowKey == null)
                rowKey = Guid.NewGuid();

            dto.RowKey = rowKey.ToString();
            dto.PartitionKey = partitionKey;

            return dto;
        }

        TEntity StripDTO(Microsoft.WindowsAzure.Storage.Table.DynamicTableEntity a)
        {
            TEntity result = new TEntity();

            Type t1 = result.GetType();
            var dictionary = (IDictionary<string, EntityProperty>)a.Properties;

            foreach (PropertyInfo p1 in t1.GetProperties())//for each property in the entity,
            {
                foreach (var value in dictionary)//see if we have a corresponding property in the DTO
                {
                    if (p1.Name == value.Key)
                    {
                        p1.SetValue(result, GetValue(value.Value));
                    }
                }

            }

            return result;
        }

        private object GetValue(EntityProperty source)
        {
            switch (source.PropertyType)
            {
                case EdmType.Binary:
                    return (object)source.BinaryValue;
                case EdmType.Boolean:
                    return (object)source.BooleanValue;
                case EdmType.DateTime:
                    return (object)source.DateTimeOffsetValue;
                case EdmType.Double:
                    return (object)source.DoubleValue;
                case EdmType.Guid:
                    return (object)source.GuidValue;
                case EdmType.Int32:
                    return (object)source.Int32Value;
                case EdmType.Int64:
                    return (object)source.Int64Value;
                case EdmType.String:
                    return (object)source.StringValue;
                default: throw new TypeLoadException(string.Format("not supported edmType:{0}", source.PropertyType));
            }
        }

        private bool IsId(string candidate)
        {
            var lower = candidate.ToLower();

            // "id" on its own, or anything ending in "id" (e.g. CustomerId)
            return lower == "id" || (lower.Length >= 2 && lower.EndsWith("id"));
        }

        #endregion

Now we update our CRUD methods to use the DTOs:

public virtual TEntity GetByID(object id)
        {
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, id.ToString()));
            var dto = table.ExecuteQuery(query).First();
            TEntity mapped = StripDTO(dto);

            return mapped;
        }

        public virtual List<TEntity> GetAll()
        {
            List<TEntity> mappedList = new List<TEntity>();
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey)); //get all customers - because Customer is our partition key
            var result = table.ExecuteQuery(query).ToList();

            foreach (var item in result)
            {
                mappedList.Add(StripDTO(item));
            }
            return mappedList;
        }

        public virtual void Insert(TEntity entity)
        {
            dynamic mapped = CreateDTO(entity);
            TableOperation insertOperation = TableOperation.Insert(mapped);
            table.Execute(insertOperation);
        }

OK, the final step on the provider side is to create a Context class that our Repository will use.  Create a new class called TSContext and enter the following:

using System;

namespace AzureTSProvider
{
    public abstract class TSContext
    {
        private string tableName { get; set; }
        private string connectionString { get; set; }

        public TSContext(string connectionString, string tableName)
        {
            this.tableName = tableName;
            this.connectionString = connectionString;
        }

        public virtual TableSet<TEntity> Set<TEntity>()
            where TEntity : class, new()
        {
            var set = new TableSet<TEntity>(connectionString, tableName);

            return set;
        }
    }
}

The final step on our journey is to create our abstracted repository base, our actual CustomerRepository, and our UnitOfWork – I'm not going to go into too much detail with these as there are lots of articles that really get into the nitty gritty of them.

Create a new project called DAL.  Add references to our Model and AzureTSProvider projects. Create a class called AzureContext that inherits from our TSContext and paste in:

using AzureTSProvider;
using Model;
using System;

namespace DAL
{
    public class AzureContext : TSContext
    {
        private static string tableName;
        private static string connection;

        public AzureContext(string connectionString, string table)
            : base(connectionString, table)
        {
            tableName = table;
            connection = connectionString;
        }

        public TableSet<Customer> Customers { get; set; }
    }
}

Create a new class called RepositoryBase.

using AzureTSProvider;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DAL
{
    public abstract class RepositoryBase<TEntity> where TEntity : class, new()
    {
        internal AzureContext context;
        internal TableSet<TEntity> dbset;

        public RepositoryBase(AzureContext context)
        {
            this.context = context;
            this.dbset = context.Set<TEntity>();
        }

        public virtual TEntity GetByID(object id)
        {
            return dbset.GetByID(id);
        }

        public virtual List<TEntity> GetAll()
        {
            return dbset.GetAll();
        }

        public virtual void Insert(TEntity entity)
        {
            dbset.Insert(entity);
        }

    }
}

Followed by our CustomerRepository, which inherits from the base class.

using Model;
using System;

namespace DAL
{
    public class CustomerRepository : RepositoryBase<Customer>
    {

        public CustomerRepository(AzureContext context)
            : base(context)
        {
            if (context == null)
                throw new ArgumentNullException("context", "Context cannot be null!");
        }

    }
}

Last but not least is our Unit of Work that ties it all together.  Note I am manually entering my Azure details here.  You'd normally pass them through from your WebAPI or whatever else calls it, read them from a config file, or use Dependency Injection.  The 'tenantId' – which becomes our table name – would come from some other store: for example, when a tenant logs in we'd grab a unique TEXT string for that tenant and pass it through – e.g. the user's email address with all the non-alphanumerics stripped out (table names only accept alphanumerics and cannot start with a number, so GUIDs or ints are no good!).
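As an aside, deriving that string can be as simple as this little sketch (ToTableName is my own illustrative helper, not part of the project, and it assumes the email contains at least one letter or digit):

private static string ToTableName(string email)
{
    // strip anything that isn't a letter or digit (requires System.Linq)
    var cleaned = new string(email.Where(char.IsLetterOrDigit).ToArray());

    // table names must start with a letter, so prefix one if necessary
    return char.IsLetter(cleaned[0]) ? cleaned : "t" + cleaned;
}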

using System;

namespace DAL
{
    public class UnitOfWork: IDisposable
    {
        AzureContext context;

        private CustomerRepository customerRepository;

        public UnitOfWork(string tenantId)
        {
            string connectionString = "DefaultEndpointsProtocol=http;AccountName=<my storage name>;AccountKey=<my account key>";

            this.context = new AzureContext(connectionString, tenantId);
        }

        public CustomerRepository CustomerRepository
        {
            get
            {
                if (customerRepository == null)
                    customerRepository = new CustomerRepository(context);

                return customerRepository;
            }
        }

        public void Dispose()
        {
            this.Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!disposing)
            {
                return;
            }
        }
    }
}


We are now good to go! To use it we just make the following call from our WebAPI or wherever:

UnitOfWork proxy = new UnitOfWork("test001");
Customer customer = new Customer()
{
    Company = "Square Connection Ltd",
    Email = "Bretthargreaves@hotmail.com",
    Name = "Brett Hargreaves",
    Telephone = "12345 12345678",
    VATNumber = "123 456789 GB"
};
proxy.CustomerRepository.Insert(customer);

Obviously all this is just a starting point.  One of the biggest glaring differences between this and a 'normal' repository is that changes are immediately persisted.  What we need to do next is implement some change tracking in our Azure provider and a 'SaveChanges' method that then commits everything in one go.
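Just to give a flavour of the direction (very much a sketch of one possible approach, not code from the project), the TableSet could queue its operations instead of executing them immediately:

// inside TableSet<TEntity>: queue operations, flush them on SaveChanges
private readonly List<TableOperation> pending = new List<TableOperation>();

public virtual void Insert(TEntity entity)
{
    dynamic mapped = CreateDTO(entity);
    pending.Add(TableOperation.Insert(mapped)); // queued, not yet executed
}

public virtual void SaveChanges()
{
    foreach (var operation in pending)
        table.Execute(operation);

    pending.Clear();
}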

But that’s a blog for another day!

Bundling Not Working when deployed

If you deploy an MVC site to a server that DOESN’T have the full MVC package installed (e.g. you just want to copy the binaries with your site) then bundling will not work correctly.

To expand on this: the page simply will not render the CSS or scripts, but if you look at the source HTML the bundle links are written correctly – it's as if the actual 'files' they relate to don't exist.

The fix is to simply ensure that System.Web.Optimization.dll has been copied to your site's bin folder and then add the following to your web.config:

<system.webServer>
  <modules>
    <remove name="BundleModule" />
    <add name="BundleModule" type="System.Web.Optimization.BundleModule" />
  </modules>
</system.webServer>

401.1 error on Windows 2008 when you deploy a Web Site that uses Integrated Authentication

If you deploy an ASP.NET website on Windows 2008 that uses Integrated Authentication AND you are using host headers, then when you try to browse on the server itself you will be given a 401.1 error.

This often leads users to go looking at the local access rights etc. on the server, as it's as if the user just can't read the directory.

In fact this is caused by a ‘loopback’ check that Windows 2008 (and 2003 + SP1) has in order to prevent reflection attacks on your server.

If you access the site remotely it works fine – however this can be a real pain whilst debugging!  It can also cause problems if (for example) you have a second site on the same server that exposes a service the first site requires and you are also accessing it via a FQDN.

The answer to this issue can be found here: 

http://support.microsoft.com/kb/896861

In brief, the fix is:

  1. Set the DisableStrictNameChecking registry entry to 1. For more information about how to do this, see the following Microsoft Knowledge Base article: 281308 – Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name.
  2. Click Start, click Run, type regedit, and then click OK.
  3. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
  4. Right-click MSV1_0, point to New, and then click Multi-String Value.
  5. Type BackConnectionHostNames, and then press ENTER.
  6. Right-click BackConnectionHostNames, and then click Modify.
  7. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK.
  8. Quit Registry Editor, and then restart the IISAdmin service.
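If you'd rather script the change than click through regedit, the same edit can be made with a few lines of C# run as administrator (the host name below is just an example – use your own):

using Microsoft.Win32;

using (var key = Registry.LocalMachine.OpenSubKey(
    @"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0", writable: true))
{
    // REG_MULTI_SZ value listing the host headers served by this machine
    key.SetValue("BackConnectionHostNames",
        new[] { "mysite.local" },
        RegistryValueKind.MultiString);
}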

A quick fix, that can stop hours of scratching your head!

MVC4 / ASP.NET 4.5 on Windows 2008 gives 403.14 – Forbidden

If you manually deploy an ASP.NET MVC application to a Windows 2008 server that has the .NET Framework installed but NOT the actual MVC tools, the server won't know how to handle and route incoming requests.

There are a number of issues, depending on the version of Visual Studio and patch level you are using.

Older versions of VS simply did not put all the required DLLs in the bin folder – specifically System.Web.Razor, System.Web.WebPages, System.Web.Mvc, etc.

So the quick fix (or part of the fix) is to manually add these files to the deployed application.  Later Visual Studio service packs actually addressed this issue.

The next problem is that even with these files the server still doesn't know what to do, and therefore the final piece of the puzzle is a web.config change to tell it.

A lot of posts suggest adding the following under <system.webServer>:

<modules runAllManagedModulesForAllRequests="true" />

This will work; however it is not really recommended, as it handles ALL requests (e.g. html, css, jpg) as opposed to just the cshtml files.

Therefore a better solution is as follows:

<modules>
  <remove name="UrlRoutingModule-4.0" />
  <add name="UrlRoutingModule-4.0" type="System.Web.Routing.UrlRoutingModule" preCondition="" />
  <!-- any other modules you want to run in MVC e.g. FormsAuthentication, Roles etc. -->
</modules>

This is generally considered a far better fix.

Now you just have the usual rights issues to deal with….