Are you a Coder, a Hacker or a Developer?

Coder, Hacker or Developer – which do you think best sums you up?  I’ll wager that most would consider themselves a ‘Developer’.

But in my many years' experience – from learning the basics of coding to my present-day role of Architect and teacher/mentor – I've come to the conclusion that the three titles are actually all part of the career path in software development.

For me the titles do a very good job of defining the roles, but I feel the perceptions associated with each can be misleading.

Coder

Coders code.  To code means to string some computer language together in logical structures to perform some function.  That's it.  And in today's modern Agile world this is an extremely important skill.  Agile is to some extent about breaking down the work into manageable tasks or 'User Stories' – as a User I want to do x.

So a Coder will be assigned a task – write some code that completes a function.

All these functions are eventually strung together to make the whole.  But essentially most tasks come down to coding – hence it's the first step for all developers.

Hacker

To some a hacker is cool.  To Business & Government hackers are evil!

But let’s examine the term.  Hacker.  To ‘hack’ into an existing system and, well, bend it to your will.

Yes, in the wrong hands this skill can cause some real damage – but so can ANY tool in the wrong (or inept) hands.

But to me Hackers represent the next level of coding skills.  Hackers can obviously code, but hackers can also look into the base foundations that all coders use day in day out, and enhance them. Hackers are able to get right there into the code and make it do things you wouldn’t normally be able to do – how?  Because Hackers truly understand the underlying system they are hacking.

This is also an incredibly important skill.  Rarely do business requirements fit any standard mould.  I have heard coders say – 'it can't be done, the system just won't allow that'.  The Hacker will see the challenge.  They'll learn the underlying mechanism and find a way to make it do what they need it to.

So don't ever think 'hacker' is a detrimental term – for me it's the next level in becoming a truly great developer.

Developer

For all their technical ability, Hackers still lack an important skill – Business Acumen.  Developers (in the varied sense) are often viewed as 'techies'.  But a true Developer understands the business needs – and can draw upon their Coder and Hacker skills to find the most effective, efficient solution to the Business Problem.

The true Developer is the most misunderstood role.  A Developer not only understands the underlying possibilities, they can decipher the business speak and apply their knowledge to the problem at hand.

In this sense a 'Developer' is often referred to as a Senior Developer – and it's not about age or years of service (although that often helps!) – it's about having the full skill set.  The most successful developers, the ones who've built billion-dollar companies, are the ones who also understand Marketing, Sales, Finance and Leadership – not necessarily in the depth needed to specialise, but certainly enough to hold their own and talk the talk.

Anyone can learn the basic skills to lay some bricks, or plumb some pipes – but a Developer understands how the whole fits together!

Architect

To me an Architect is the natural progression of a Developer – they take the business skills to the next level and help drive the full Software Development Life Cycle (SDLC).

The next level is about seeing things beyond the current project, to the multitude of projects that businesses have to take on. It’s about having the knowledge to see where IT can help and drive the solution BEFORE being asked.

It's about looking for efficiencies of scale – how sharing code, or better still services, between the different projects can reduce overhead and development time while improving quality.

 

All these ‘levels’ are equally important – but if you want to push your career or technical ability – then you need to understand the different roles available – and focus on what’s needed to take you to the next level.

HTML and CSS For Everyone!

To many non-techies, Web Development seems like a black art.

A core skill for any web developer is HTML and CSS – the ‘languages’ used to build websites and an increasing number of mobile apps.

But what if I were to tell you that these two skills – HTML and CSS – are actually becoming key for EVERYONE in business?

The world today is centered around communications – email, social media, blogs and more. But to be heard above the noise of mass information, yours needs to stand out.

If you regularly send marketing emails, or manage a blog or social media account, then HTML and CSS are must-have skills – only with an understanding of them can you hope to build something appealing and eye-catching.

Because HTML and CSS are often considered 'programming languages', many shy away.

Learning the basic core skills to create stunning material using HTML and CSS is actually very easy – if you have learned how to use Word or Excel – then you can learn this!

Just think what you could do if you mastered the basics: create awesome emails for your customers, tweak your company's or your personal WordPress site, speak on a par with your technical teams, create a stunning personalised resume website!

Business today demands an ever-increasing skill base – and if HTML and CSS aren't at the top of your list of things to learn in 2016 – they should be!

To learn more and get a discount off my own HTML and CSS online course and book, click Web Development Essentials – HTML and CSS.

Visual Studio Code – Why?

All Articles in series
Visual Studio Code Why?
Installing Visual Studio Code on OSX – From Scratch

Visual Studio Code – Why?

I am a Microsoft developer. I have been for over 10 years, since Visual Basic was first released.

This article is the first in a series introducing the latest developer product from Microsoft (still in beta at the time of writing) – Visual Studio Code. This wonderful piece of software runs on Windows, Linux and OSX. As I use an Apple MacBook Pro as my workhorse, the potential of this is like suddenly discovering the wheel – it really is going to be a revolution!

But why? And perhaps more importantly WHY do I use a MacBook when I am a professional Microsoft developer!?

Originally I wasn’t a professional developer – it was a hobby – but it was something I always knew in my heart I wanted to be.

Unfortunately when I started working the opportunities weren’t really around – at least not where I lived in the North of England.

And so I did the next best thing – I became an IT consultant – and I focussed on Microsoft products – Windows 3.1 and Windows NT.

The point is, since then I have been Microsoft through and through. So I often ask myself why I own an iPhone and 4 iPads, and why two years ago I bought a MacBook Pro.

All I can say is the hardware is just gorgeous. The OS to some extent doesn't really bother me that much – which is odd considering my professional focus! But then actually, if I think about it, it DOES make sense, as for the past 10 years my development career has been about Web Development – which at the back end IS Microsoft (.NET) but for the end user is actually just HTML – and therefore OS agnostic.

In 2012 I needed a new laptop. I wanted a nice screen – after all, I spent all day staring at a screen as I wrote code – and the MacBook Pro with Retina display just ticked all the boxes. I also wanted something thin, light and powerful. At the time the ONLY thing on the market was the MacBook. Before then I'd never used a Mac, but now I couldn't imagine NOT using a Mac. I wonder if it's because deep down I've always been a bit artistic – and Apple (at least originally) has always attracted the more artistic types because of the aesthetics of its products.

But this left me with a quandary. I was a Microsoft developer! And so for the past few years I’ve used Windows Virtual Machines running on my Mac. I hardly ever touched OSX at all! It was simply the shell for running my VMs.

Actually this worked really well – as being a developer you often have to build and rebuild your OS as you install various tools, then uninstall them or upgrade them. And using VMs makes this process sooo much easier!

And then there is the plain and simple fact that as a developer I need to understand various diverse platforms – Windows, OS X, Android, iOS. In fact in my current position I am involved with a number of applications where iOS apps talk to a .NET back end. This pretty much sealed the deal – by using a MacBook with Windows VMs on top I really could have my cake and eat it!

So then Visual Studio Code comes along. And now I have a reason to be a Microsoft developer AND an OSX user. Again the question – WHY? Well, first of all, with ASP.NET 5 and Mono I can actually build applications using Microsoft .NET but deploy them to Linux/Unix – which tend to be a bit cheaper to host – not to mention the exciting possibilities Docker is giving us.

Yes, Docker is now available on Windows, and IIS in the latest version of Windows is so much faster and has a much smaller memory overhead (not to mention how much faster it can spin up) – but there's something about the flexibility of being able to deploy to either Windows OR Linux that gets me excited – I can't help but feel that this, combined with Docker, is going to be a game changer in the industry.

And so the potential to build software completely on OSX, Windows or Linux, using the same code base? Well now I'm feeling the sort of excitement lottery winners feel when they see their numbers come up! Sad, I know.

But now here's a problem. I'm a Microsoft developer, an ex-Microsoft OS consultant, and I've just been using OSX as a shell for my Windows VMs. I know practically NOTHING about OSX and, more importantly, the Unix underpinnings that OSX is built on. I've spent the past few months playing with VS Code, and I finally think I'm starting to get to grips with it!

So this series of articles is really about my experience installing, setting up, and building full .NET websites using VS Code, ASP.NET 5 and Mono. All on an OS I don't really know, and using technologies (like Bower, Node.js, npm, Grunt and Gulp) that I've never had to really worry about before (because Visual Studio just DOES IT for you!).

I hope you enjoy the series, and glean some benefit from it!

The first article in the series – installing VS Code and its dependencies – can be found here:
Installing Visual Studio Code on OSX – From Scratch

Installing Visual Studio Code on OSX – from scratch!

All Articles in series:
Visual Studio Code Why?
Installing Visual Studio Code on OSX – From Scratch

Installing Visual Studio Code

This guide will assume a completely clean install of OSX. That means I will cover every single pre-requisite that is required. I will also assume the reader doesn't have much experience with OSX – if you are coming from a Windows .NET development background, installation of many of the pre-requisites is not as straightforward as you'll be used to!

Installing Code

First of all – the easy bit. Installation of the main Visual Studio Code program.

Go to https://code.visualstudio.com – the website will detect whether you are running OSX, Windows or Linux and present you with the relevant link. Simply click the link to download the program.

This will download Visual Studio Code.app.

This is a self-contained program and doesn't require any installation. But let's move it from the default download location – which should be Downloads – to the Applications folder. Simply drag it from the Downloads folder onto the Applications folder in your Dock (if you've created a shortcut for it) or to the Applications folder in Finder.

Now run the program by going into Applications (from Finder or the Dock) and clicking it. The first time you run it you'll be prompted to confirm that you want to open the file, as it was downloaded from the internet. Just click Open.

If you want it to always be in the Dock, right click the icon, choose Options and then 'Keep in Dock'.

Another useful option for launching VS Code is to enable launching from a Terminal window. On OSX a lot can and will be done through the terminal, so it's handy to be able to simply type 'code .' and have it launch.

To set this up, launch Terminal by going to the Applications folder, then the Utilities folder, and clicking 'Terminal'. Again, once it's running it will be handy to create a shortcut to it by right clicking it and choosing Options -> Keep in Dock.

What we need to do is edit a file called .bash_profile – this is like a startup file that you can use to set environment variables and the like. It is also hidden. So to edit it, in the Terminal window type nano ~/.bash_profile

You will be presented with a blank screen (unless you've installed some software that has already put some info in this file).

Enter the following

code () {
if [[ $# = 0 ]]
then
open -a "Visual Studio Code"
else
[[ $1 = /* ]] && F="$1" || F="$PWD/${1#./}"
open -a "Visual Studio Code" --args "$F"
fi
}

Now press CTRL+O and accept the default filename to save the changes. Then press CTRL+X to exit nano.

Normally this script will execute whenever you open a new Terminal window, but to run it now type

source ~/.bash_profile

so now if you type (with the period)

code .

VS Code will launch.

Node.js & NPM

Node.js is a JavaScript runtime execution engine. Using it you can create JavaScript applications. It is also used when building applications in VS Code to automate tasks.

Node also includes a utility called NPM – the Node Package Manager. NPM is like NuGet – we can use it to install and update software.

We will be using NPM a lot, so the next thing we need to do is install Node.

In a browser navigate to

http://nodejs.org

On the main page you will be presented with the current release of Node.js for your OS. Click the link to download the installer, which will be called something like node-v0.12.7.pkg (the actual name will vary depending on the current version). Simply run the installer by double clicking it.

In the dialog that appears click continue, then accept the license agreement, then click the install button.

Once installed you'll be presented with a box telling you to make sure /usr/local/bin is in your path – this is so node can be executed from anywhere.

On a default OSX installation this is already set. You can confirm this by typing echo $PATH in the terminal window.

To confirm everything is working as we need now type the following

node

this will 'launch' node and you'll see a '>' prompt. Now type

console.log('node is working')

If it’s working ‘node is working’ should be output to the window. You’ll also see ‘undefined’ appear – don’t worry about that.

To quit out of node press CTRL+C twice.

Mono

The next piece of software we need is Mono. Mono is an open source implementation of .NET. Basically this means it allows you to run .NET applications on Linux and OSX based systems!

To install Mono in a browser navigate to

http://mono-project.com

Again the front page will have a link to download Mono. Click the link then double click the .pkg file that is downloaded to start the installer.

Click continue, accept the License agreement and click install.

.NET Execution Environment (ASP.NET 5)

Now that we have Mono, we can install the ASP.NET 5 runtime itself. This has to be done via a Terminal window and in stages, so launch Terminal (if it's not already running).

First install the .NET Version Manager (dnvm). To do this in terminal type the following

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

This will download dnvm and automatically update your .bash_profile with the required environment settings.

Now we can use dnvm to install or upgrade the .NET Execution Environment (DNX) for the Core CLR by typing the following

dnvm upgrade -r coreclr

Finally, we must install DNX for Mono by typing

dnvm upgrade -r mono

The final step is to update our .bash_profile again. We need to ensure the dnvm and dnu commands are in our path, and also enable a setting to fix an IOException that we get with Mono and .NET.

So again in terminal edit our profile like we did before with

nano ~/.bash_profile

Make sure you have a reference to dnvm.sh – it will either simply have source dnvm.sh, or a longer, more verbose version.
After that line, add

export MONO_MANAGED_WATCHER=disabled

Save the file by pressing CTRL+O

Then quit out with CTRL+X

Bower

The final piece of software we need to install is called Bower. Bower is another package manager – like NuGet – but specifically for web projects.

We install Bower using NPM (The Node Package Manager). From a terminal window type the following
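npm install -g bower

(The -g switch installs Bower globally, so the bower command is available from any folder.)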

Entity Framework & Direct SQL

Entity Framework is a great time saver.

When I think back to the days when I had to manually construct SQL statements, parse the results, control updates, and of course ensure my actual database design was in sync with what my code expected – well it sends a shiver down my spine!

I remember back in the day I even built myself a little utility that would take my database and spit out boilerplate code for basic CRUD operations.

But then EF was released – and although it was a little bit flakey at first (with people tutting and muttering about NHibernate), it was still a lovely little thing.  These days of course I use it all the time.  In fact as you may have seen in one of my earlier posts I’ve even copied more than a little for an Azure Table Storage version.

And of course couple EF with decent patterns such as the repository pattern and Dependency Injection and you have a rock solid foundation.

But, (there’s always a but) – EF is sometimes a bit slow when compared with issuing SQL commands directly – especially with batch operations.

For this reason the EF context exposes a Database property which in turn exposes a number of options for issuing SQL commands directly.

I will show you how to use the two most common ones – SqlQuery and ExecuteSqlCommand

As I like to decouple as much as possible, and because I like my interfaces to only depend on core libraries I’m going to hide away some of the EF specific stuff.

So, first of all I like to have an IRepository<TEntity> interface and a RepositoryBase<TEntity> base class; that way each repository gets the basic CRUD methods and anything else I might want:

public interface IRepository<TEntity> where TEntity : class
{
    void Delete(object id);
    void Delete(TEntity entity);
    System.Linq.IQueryable<TEntity> GetAll();
    System.Linq.IQueryable<TEntity> GetAll(object filter);
    IPagedResponseViewModel GetPaged(int take, int skip, string orderBy, bool orderByAscending, object filter);
    TEntity GetById(object id);
    TEntity GetFullObject(object id);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Commit();
    void Dispose();
    // Direct SQL Stuff
    int ExecuteSqlCommand(string sql, object[] parameters);
    ICollection<TEntity> SqlQuery(string sql, object[] parameters);
    int ExecuteSqlCommand(string sql);
    ICollection<TEntity> SqlQuery(string sql);
}

public abstract class RepositoryBase<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal DataContext context;
    internal DbSet<TEntity> dbSet;

    public RepositoryBase(DataContext context)
    {
        this.context = context;
        this.dbSet = context.Set<TEntity>();
    }

    .....

    public virtual int ExecuteSqlCommand(string sql)
    {
        return context.Database.ExecuteSqlCommand(sql);
    }
    public virtual int ExecuteSqlCommand(string sql, object[] parameters)
    {
        return context.Database.ExecuteSqlCommand(sql, parameters);
    }
    public virtual ICollection<TEntity> SqlQuery(string sql)
    {
        return context.Database.SqlQuery<TEntity>(sql).ToList();
    }
    public virtual ICollection<TEntity> SqlQuery(string sql, object[] parameters)
    {
        return context.Database.SqlQuery<TEntity>(sql, parameters).ToList();
    }
    public virtual void Commit()
    {
        context.SaveChanges();
    }
    public virtual void Dispose()
    {
        context.Dispose();
    }
}

As you can see all we are really doing is encapsulating the SqlQuery and ExecuteSqlCommand methods by taking in a string or a string and a list of parameters.

First let's look at ExecuteSqlCommand – this is very straightforward: we simply pass in our SQL string, such as

ExecuteSQLCommand("UPDATE SomeTable SET SomeColumn='Some Value'");

EF issues the command and returns an int indicating the number of affected rows.
If you want you can pass in parameters like this

ExecuteSQLCommand("UPDATE SomeTable SET SomeColumn='Some Value' WHERE ID=@p0", new object[]{3});

Now for SqlQuery. You may notice SqlQuery uses TEntity (if you're not familiar with Generics, we use TEntity to refer to whatever class we pass in when instantiating our repository – therefore anything that refers to TEntity refers to whatever object we want to use). For example, we would have

public class MyObject {
    public int Id { get; set; }
    public string someProperty { get; set; }
    ....
}

public class MyRepository : RepositoryBase<MyObject> {
...
}

so when we then instantiate the actual repository

var myrepo = new MyRepository(context);

We get all the commands from the base Repository class referencing our MyObject class model.

Anyway, ignoring our repository for now, if we were to query context.Database.SqlQuery directly we'd use

Database.SQLQuery<MyObject>("SELECT * FROM MyObjects");

this will result in a list of MyObject – EF actually attempts to map the results it receives back to the model you pass in.
So all we have done is automate that, because we already know the model from when we instantiated our repository – thus when calling our encapsulated method we just use

SQLQuery("SELECT ALL FROM MyObjects");

Again we can pass in parameters just like with ExecuteSqlCommand.

So as you can see, this gives you complete flexibility in using EF – as mentioned earlier, if I have to iterate through and update large datasets I sometimes construct my SQL directly and use these methods instead.
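To tie this together, here's a rough usage sketch – it assumes the hypothetical MyObject/MyRepository classes above, and that MyRepository simply passes a DataContext through to the base class:

using (var context = new DataContext())
{
    var repo = new MyRepository(context);

    // Read a filtered set straight into our model type
    var recent = repo.SqlQuery("SELECT * FROM MyObjects WHERE Id > @p0", new object[] { 100 });

    // Update a large batch in a single round trip instead of looping through tracked entities
    var affected = repo.ExecuteSqlCommand(
        "UPDATE MyObjects SET someProperty = @p0 WHERE Id > @p1",
        new object[] { "processed", 100 });
}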

Note : This post was created in response to a user question on my course about ASP.NET Development Techniques.  My Blog viewers can get the course for just $10 by clicking this link 

The Importance of Flexible Application Design

I don’t normally blow my own trumpet, but this week the team I work with won a prestigious ‘Innovation’ award at Capita Plc for an iPad/ASP.NET/WebAPI solution we have built to address a specific business need.

The application essentially allows building surveyors to record information about properties using an iPad in offline or online mode, sync that data up to a backend SQL database via an ASP.NET WebAPI service, and then expose the data through an ASP.NET web portal.  There are also the usual reports and dashboards that managers and the like generally swoon over.

The product itself is a great time saver: it allows surveys to be taken and reported on in a fraction of the time compared to pen and paper, or even an Excel spreadsheet (by hooking costs onto items that are picked by the surveyor).

As good as the solution is from a business perspective, what really impressed the judges was how easily it could be adapted to run any kind of survey you want without having to re-code anything.

Let me explain a bit more.

Version 1 of the solution was built for a specific type of survey, and as such the original developer built the application with certain settings hard coded.  So, for example, in this particular survey they wanted to segregate any one building into Blocks, Levels and Items; therefore, to accommodate these business requirements, the developer created Block, Level and Item entities.

The backend database used these entities, the front end iPad app used these entities.  And it worked fine.

Specific Entities

But then as these things go other areas of the business saw what had been done and naturally wanted the same thing for them.

Business versus Technical Design

The problem was that each area of the business wanted something slightly different.  Some wanted 4 or 5 levels, some wanted only 1 block, and some didn't really want to record costs but rather just snippets of information, such as diary events during the course of a building project.

The original plan was to use v1 as a core but then re-develop both the backend and the iPad app for each client.

Now from a technical design point of view this is great.  We get to independently give each client exactly what they want.

However, from a business perspective this really wasn't very good.  You see, there are 3 major issues with this way forward.

  • Each new client would take 3-4 months to re-develop
  • Multiple codebases – both front and backend for each client
  • no central location for management

Ultimately these 3 issues really come down to one common business problem – cost.

You see, many times the business would want it NOW.  Or at the very least within a couple of weeks – obviously being told 3-4 months PLUS testing is no good.

Secondly, although some commissions were large value, some were only for a few thousand (as it was only a handful of small properties/locations).  Again at 3-4 months the cost just becomes prohibitive.

Third, with multiple code bases and no central management location, looking after all these different implementations would require far more support overhead – and therefore cost.

Start from scratch or accept the negatives?

It is about this time that I got involved with the project.

Immediately I saw the issue but more importantly the solution.

To some this may be obvious, but when you’re a true techie, especially someone who has already invented the wheel once, it’s quite hard to see more flexible alternatives.  After all, what does a techie care about costs?

Now I think I’m quite lazy!  But believe me, this has often been a useful trait.

You see, I'm lazy in that I hate boring repetitive tasks.  I also hate having to re-do the same thing again and again.  Once done I want it finished so I can move onto another challenge.

So to me the solution was to have a generic ‘level’ that could recurse itself.  Each level then has a type associated with it, and a parentId.

In this way a 'level' can be anything you want – a building, a floor, an area of ground, a room, a swing – whatever you want – and you link it all together in a parent–child hierarchy accordingly.  We can then define WHAT it actually is within a template that is also stored as a template entity.
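To illustrate the idea, a minimal sketch of such an entity might look something like this (the names here are purely illustrative, not the actual production schema):

// Illustrative only - a self-referencing 'level' whose meaning is defined by a template
public class SurveyLevel
{
    public int Id { get; set; }
    public int? ParentId { get; set; }            // null for the root of the hierarchy
    public virtual SurveyLevel Parent { get; set; }
    public virtual ICollection<SurveyLevel> Children { get; set; }

    public int LevelTemplateId { get; set; }      // what this level 'is' - building, floor, room, swing...
    public virtual LevelTemplate Template { get; set; }

    public string Name { get; set; }
}

public class LevelTemplate
{
    public int Id { get; set; }
    public string DisplayName { get; set; }       // e.g. "Block", "Level", "Item"
    public string ControlType { get; set; }       // hint for the iPad app/WebUI on how to render it
}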

The iPad app and the WebUI simply interrogated the template to work out what kind of control to use, and thus as the template changed the UI just flowed around it accordingly.

Flexible Entities

So what we can now do is define HOW the app will work within the app itself, without having to rebuild either of the UIs or the backend.  We also get to keep everything in a central location and report across ALL the different survey types.

This is nothing new.  There are LOTS of apps that do this, but it's surprising how often this design pattern can be used.

The Business MUST come first

Now, this of course caused quite a heated 'discussion' between myself and the original developer.  He (quite rightly) pointed out that such a method would complicate the code, make reporting a bit more difficult and increase the amount of required processing power.

However, although this is all true, the fact is that as developers we shouldn't care about such things; it is our job (or at least should be) to bring the best possible value to the business – and if that means we have a harder time of it, then so be it.

Now don’t get me wrong – I’m not suggesting we should embrace creating horribly complex code – quite the opposite – if you also apply SOLID patterns to your solutions then you can easily split up this complexity into nice manageable chunks.

And as for the processing power – well, not really.  Even our 2nd generation iPads quite happily reconstructed the models from a template on the fly – and we had some BIG templates with LOTS of data.

Reporting caused a few problems initially, but again Microsoft provides all the tools, in the form of Analysis and Tabular services, to help out when things get too hairy.

But let’s get back on track.  Eventually I won the discussion 🙂

The resultant solution means new clients can now be accommodated within DAYS, not months.  This has a huge impact on costs and allows smaller projects to make use of our system.  It also gives the business confidence that it can bid for new work, knowing that if they win (and by being able to do the job faster and therefore cheaper, they often win), we can turn around the new system for them within a few days and let them get on with the job as efficiently as possible.

Because let's not forget – software is about adding more value to the business, either with cost savings or a better proposition.

The Frustration of Tutorials and Walkthroughs

I have an issue with how most software developers learn to write software.

Like me, a lot of developers learn on their own by searching for how to do stuff on the internet. This is great and of course I doubt there is much you CAN'T find with this method.

However, when I started learning new technologies – for example when I first started looking at MVC quite a few years back (I started on MVC2) – I found that all the tutorials, including Microsoft's, taught in a way that encouraged really quite bad programming practice.

Now I’m not saying it’s ALL bad, but recently I’ve been looking at various training courses online, and the majority all show the same thing – they teach you how to CODE, but they don’t really teach you how to write software.

Example: one MVC course which promised to teach you how to be a 'complete' developer went through the basics – create a project, create a class, here's what a for loop is, here's what a while loop does, etc.
However there was NO mention of SOLID principles, no real explanation of Object Oriented programming, or of using interfaces and abstractions. Dependency Inversion and a whole multitude of good programming practices were simply ignored.

As a junior developer I found this VERY frustrating.

Of course, now I've been around a bit, I do understand a lot more – but it's actually been quite a painful journey if truth be told. It does however explain why a lot of software houses don't really like to employ freelancers who have only ever worked on their own.

So anyway, to address this woeful shortcoming, I have actually created my own course.

I created it on a site called Udemy.com – if you’ve never used it have a look – it’s pay per course rather than subscription based, but they do often run very good deals – and some of the courses are really very good.

Please check out my course here.

There’s a substantial discount on what will normally be charged, and I’ve had over 1000 students within a few days! So I guess there are some people out there who actually want to learn how to write software properly!

The great thing about Udemy.com is that you get lifetime access to the courses – and that includes UPDATES. For example, in my course I'll be adding lectures on Module Injection and splitting your views up using Partial Views – another underused technique.

And I would welcome your input – what do you think most programmers miss by learning from random searching?

Knockout Component Loading with requirejs won’t load html file

Knockout 3.2 introduces component loading – a great feature that allows you to load HTML and JS modules into your code, thus enabling you to split your code into self-contained modules.

Think of it like PartialViews in MVC but for KnockoutJS.

The first thing you need to do is ‘register’ your component, e.g. (And this is taken from the KnockoutJS documentation)

ko.components.register('like-widget', {
    viewModel: function(params) {
        // Data: value is either null, 'like', or 'dislike'
        this.chosenValue = params.value;
        
        // Behaviors
        this.like = function() { this.chosenValue('like'); }.bind(this);
        this.dislike = function() { this.chosenValue('dislike'); }.bind(this);
    },
    template:
        '<div class="like-or-dislike" data-bind="visible: !chosenValue()">\
            <button data-bind="click: like">Like it</button>\
            <button data-bind="click: dislike">Dislike it</button>\
        </div>\
        <div class="result" data-bind="visible: chosenValue">\
            You <strong data-bind="text: chosenValue"></strong> it\
        </div>'
});

 

Then your main page would implement the following
    <ul data-bind="foreach: products">
        <li class="product">
            <strong data-bind="text: name"></strong>
            <like-widget params="value: userRating"></like-widget>
        </li>
    </ul>
with the following javascript to load your view model
    function Product(name, rating) {
        this.name = name;
        this.userRating = ko.observable(rating || null);
    }

    function MyViewModel() {
        this.products = [
            new Product('Garlic bread'),
            new Product('Pain au chocolat'),
            new Product('Seagull spaghetti', 'like') // This one was already 'liked'
        ];
    }

    ko.applyBindings(new MyViewModel());
All pretty cool – except we're embedding our 'module' in the component registration.  What we need to do is have it all in separate files.  Again the Knockout documentation shows us how to do this – we use an AMD module loader such as RequireJS, so we can store everything in separate files:
component-like-widget.js 
define(['knockout'], function(ko) {

    function LikeWidgetViewModel(params) {
        this.chosenValue = params.value;
    }

    LikeWidgetViewModel.prototype.like = function() {
        this.chosenValue('like');
    };

    LikeWidgetViewModel.prototype.dislike = function() {
        this.chosenValue('dislike');
    };

    return LikeWidgetViewModel;

});
component-like-widget.html
<div class="like-or-dislike" data-bind="visible: !chosenValue()">
    <button data-bind="click: like">Like it</button>
    <button data-bind="click: dislike">Dislike it</button>
</div>
<div class="result" data-bind="visible: chosenValue">
    You <strong data-bind="text: chosenValue"></strong> it
</div>
Both the view model and the template are now loaded from external files.
So now our component registration needs to change to take account of the external files
ko.components.register('like-or-dislike', {
    viewModel: { require: 'files/component-like-widget' },
    template: { require: 'text!files/component-like-widget.html' }
});
 And then finally our main HTML page just implements the component similar to above (except we’re now adding products dynamically as well)

HTML

    <ul data-bind="foreach: products">
        <li class="product">
            <strong data-bind="text: name"></strong>
            <like-or-dislike params="value: userRating"></like-or-dislike>
        </li>
    </ul>
    <button data-bind="click: addProduct">Add a product</button>

script

    function Product(name, rating) {
        this.name = name;
        this.userRating = ko.observable(rating || null);
    }

    function MyViewModel() {
        this.products = ko.observableArray(); // Start empty
    }

    MyViewModel.prototype.addProduct = function() {
        var name = 'Product ' + (this.products().length + 1);
        this.products.push(new Product(name));
    };

    ko.applyBindings(new MyViewModel());

Now, this is pretty much what the Knockout documentation says – it works lovely on their own pages, but could I get it to work in my project?  Nope.  Not at all.

So, time to start digging.  First, let's have a look at what is being brought back from the server (I use Fiddler).

The JavaScript file loads fine – but when it comes to the HTML file it doesn't actually try to load it; instead it tries to get a file called 'text'.

Well I guess this makes sense – after all, we've told it to load 'text!files/component-like-widget.html'.

So er, what am I missing?

Well, I assumed that when requirejs saw the text! prefix it would automatically know how to handle it.  Well, you know what they say about ASS U ME…

Now this is mainly because I've never really delved deep into requirejs – I just use it, and out of the box it has always worked fine.  So, just in case anyone else out there is in the same boat, here's the fix.

To make requirejs understand the text! prefix we need the text plugin – a NuGet package aptly named 'Text Plugin for RequireJS' – so install it with NuGet, and voila! The world is back to normal.

Entity Framework and Interface issues

There is, and has been for quite some time, a serious issue with Entity Framework – namely that it doesn't support complex interfaces.

A simple entity based on an interface such as this is fine

public class Root : IRoot
{
public int RootId { get; set; }
public string Data { get; set; }
}

However, as soon as you start to introduce child objects, for example an
ICollection<Leaf> Leafs
well, now we have an issue, because the interface needs to define the collection as
ICollection<ILeaf> Leafs
and that's where it all goes a bit pear-shaped, because EF can't figure out that Leafs is a navigation property when it's based on an interface instead of a concrete type.

I have spent many a wasted hour searching for a fix for this; the majority of the articles I find simply state that EF does not support interfaces.

In the past I've worked around it by using a DTO – so basically my DTO class is what EF uses to create my database mappings, but I then need to use something like AutoMapper, or roll my own mapper, in order to convert my DTO to a concrete class based on an interface and vice versa.

This also has the issue that although retrieving and adding NEW entities works OK, as soon as you try to update an existing entity with Attach it all starts to fall apart again.
Once again I managed to fix this by ensuring that when I map from my concrete class to the DTO I get the EXISTING DTO from the database, and then loop through all its properties, copying and updating as required.

So recently, in a fit of despair, I sat down and hammered out an alternative solution that is a bit more elegant.

OK, so first we have our standard interface thus:

public interface IRoot
{
    string Data { get; set; }
    System.Collections.Generic.ICollection<ILeaf> Leafs { get; }
    int RootId { get; set; }
}

So we have a properly defined interface, but EF doesn't know how to find the navigation property. So we need to create one alongside the collection that the interface expects – let's go with

public ICollection<ILeaf> Leafs { get; set; }
public ICollection<Leaf> LeafNavigation { get; set; }

OK, that's great, but now of course these are two separate lists, so we'll add an internal list and connect our two ICollections up to it. We'll also add some constructors for good measure.

public class Root : IRoot
{
    private List<Leaf> _leafs;

    public Root(ICollection<Leaf> leafs)
    {
        this._leafs = leafs as List<Leaf>;
    }

    public Root(List<Leaf> leafs)
    {
        this._leafs = leafs;
    }

    public Root()
    {
        this._leafs = new List<Leaf>();
    }

    public int RootId { get; set; }
    public string Data { get; set; }

    public ICollection<ILeaf> Leafs { get { return _leafs.ConvertAll(l => (ILeaf)l); } }

    public ICollection<Leaf> LeafNavigation { get { return _leafs; } set { _leafs = value.ToList(); } }
}

So now EF is happy, and since we've implemented the interface we can use IRoot rather than Root. But now we have another issue – basically I can't seem to ADD Leaf entities to IRoot.Leafs – maybe someone can point out what I did wrong – because although the code lets me add the leaf with

IRoot root = new Root();
root.Leafs.Add(new Leaf() { Data = "Some data" });

it just doesn't actually ADD it to the underlying collection (I've tried a number of permutations – like I said, if I have missed something obvious PLEASE let me know!).

Anyway, I managed a simple workaround – basically let's just add an AddLeaf method to the actual class that takes a Leaf and adds it to the underlying collection, so the finished class and interface look like this.

public interface IRoot
{
    void AddLeaf(ILeaf leaf);

    string Data { get; set; }
    System.Collections.Generic.ICollection<ILeaf> Leafs { get; }
    int RootId { get; set; }
}

public class Root : IRoot
{
    private List<Leaf> _leafs;

    public Root(ICollection<Leaf> leafs)
    {
        this._leafs = leafs as List<Leaf>;
    }

    public Root(List<Leaf> leafs)
    {
        this._leafs = leafs;
    }

    public Root()
    {
        this._leafs = new List<Leaf>();
    }

    public void AddLeaf(ILeaf leaf)
    {
        _leafs.Add(leaf as Leaf);
    }

    public int RootId { get; set; }
    public string Data { get; set; }

    public ICollection<ILeaf> Leafs { get { return _leafs.ConvertAll(l => (ILeaf)l); } }

    public ICollection<Leaf> LeafNavigation { get { return _leafs; } set { _leafs = value.ToList(); } }
}

And voila, this works a treat – I can add Leafs and update them, along with the root, and EF just binds everything as normal. Add, Update and Get all work seamlessly.
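For completeness, here's a quick usage sketch – MyContext is a hypothetical DbContext with a DbSet<Root> called Roots, purely for illustration:

using (var context = new MyContext())
{
    IRoot root = new Root() { Data = "Root data" };
    root.AddLeaf(new Leaf() { Data = "Some data" });

    context.Roots.Add((Root)root);   // EF maps the concrete type and its LeafNavigation property
    context.SaveChanges();
}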

I’ve uploaded this to GitHub – and I hope this helps anyone who may be having similar issues!
https://github.com/squareconnection/EFInterfaces

Azure Table Storage with Repository Pattern

I love Windows Azure.  I love the flexibility it gives.  You can start small at a low cost, and gradually increase storage, CPU and memory as required.

This is especially great for start-ups or if you are piloting a potential new app/service.

With this in mind I was recently engaged to build such a pilot application.  The brief was that, to begin with, the costs had to be as low as possible. In particular we needed cheap data storage, but with the potential that the requirement could grow to terabytes' worth of storage in the future.

These two requirements pretty much ruled out Azure SQL for a backend.  Although Azure SQL is great, it isn’t the cheapest option, and there are finite limits on the sizes of databases you can have.  So if we did go down that route we’d need to look at partitioning, sharding or some other managed way of splitting the data across databases.

Finally, because the majority of the data would be rows of transactions Azure Table Storage seemed to fit the bill nicely.

And so we started to design/build a new WebAPI with an Azure Table Storage backend.

There is also something else I love – SOLID principles.  Requirements change.  And I like to be able to change one bit of my code without it causing headaches throughout the application.

If I were going to use SQL I would employ a Unit of Work/Repository pattern and Entity Framework for the data persistence layer – and so I decided I wanted to take the same approach on this project.

I want Table Storage at the back and a WebAPI at the front; however, I want the WebAPI to be totally ignorant of the storage mechanism used – after all, it may turn out that I need to change to something else at some point in the future.

So what I need to do is create a Table Storage provider that I can pass any entity to and have it perform the relevant CRUD operations.

I then want to stick a repository and Unit Of Work in front of that provider.  That way if I change to SQL in the future I just have to swap out my repository implementation for one that uses Entity Framework instead.

Finally, I could also just implement the Azure stuff directly within a repository – but by putting it in a provider and referencing that provider, I get more flexibility if I ever want to mix providers – e.g. half SQL, half NoSQL, etc.
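As a rough sketch of what that wrapping might look like (purely illustrative – the interface is cut right down, and TableSet is the provider class built later in this post):

using System.Collections.Generic;

public interface IRepository<TEntity> where TEntity : class, new()
{
    TEntity GetById(object id);
    List<TEntity> GetAll();
    void Insert(TEntity entity);
}

// Delegates to the Azure provider so the WebAPI never references the Azure Storage SDK directly
public class AzureTableRepository<TEntity> : IRepository<TEntity> where TEntity : class, new()
{
    private readonly TableSet<TEntity> tableSet;

    public AzureTableRepository(string connectionString, string tenantTableName)
    {
        tableSet = new TableSet<TEntity>(connectionString, tenantTableName);
    }

    public TEntity GetById(object id) { return tableSet.GetByID(id); }
    public List<TEntity> GetAll() { return tableSet.GetAll(); }
    public void Insert(TEntity entity) { tableSet.Insert(entity); }
}

Swapping to SQL later would then just mean writing an Entity Framework implementation of the same IRepository<TEntity> and changing the registration.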

One other point before I begin – Azure can store different 'models' all in the same table, so we could store Customers AND InvoiceLines in the same table!  Therefore we have a PartitionKey that allows us to separate the different models in some way.  You don't HAVE to do it like this – you could still have a table per entity – but my app is going to be multi-tenant, and so I want one table per tenant.  The discussions as to why and what is best are quite in-depth, and in fact there is a very good discussion on it in the Patterns and Practices library.

For the purposes of this article we are going to have a 'TenantId', and this will be used to define our table name, while each entity's name will be the PartitionKey.  In this way, if this were a traditional SQL app, the PartitionKeys would in effect be our table names.  I hope that makes sense!

Building the Repository

This won't be a complete blow-by-blow walkthrough.  I am going to assume you know some basics – how to create a project, a solution etc.

OK, so first I need my backend.  And as stated I want to use Windows Azure Table Storage rather than SQL as I want something very flexible, but cheap.  I’m also not too fussed about relationships (at least for now!).

So first of all we need an Azure account and in particular a Storage account.  There are lots of tutorials on how to do this – but in a nutshell, log in to Windows Azure and click New –> Data Services –> Storage.

Once created you’ll need your access keys.  With the Storage Account selected just click Manage Access Keys.

Next let's create our WebAPI – in Visual Studio create a new project.  As this will be split into a number of projects, first create an Empty Solution.

Now add a new project to the solution.  Select Windows from the list of templates and ‘Class Library’ from the options presented.  Call it ‘Model’ and click OK.

Create a class called Customer and let's create some basic stuff in it.

namespace Model
{
    public class Customer
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
        public string Company { get; set; }
        public string Email { get; set; }
        public string Telephone { get; set; }
        public string VATNumber { get; set; }
    }
}

OK, so now we have the model we need a way of persisting it.

Using Azure Tables is fairly straightforward.  You create a CloudStorageAccount object, pass it a connection string, then create a CloudTableClient and a CloudTable.  The connection string info comes from the 'Manage Access Keys' step you did in the Azure Portal.

So for example:

string connectionString = "DefaultEndpointsProtocol=http;AccountName=<your storage account>;AccountKey=<your account key>";
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = tableClient.GetTableReference("MyTable");
            table.CreateIfNotExists();

You then use various methods to persist and read the data.

However, what I don't want is to have to do this manually for every model.  What I really want to do is use the repository pattern to create a base that I can pass ANY model to and have it figure out what to do with it.

So what we need is an AzureTableStorageProvider that we can wrap all our functionality in, and then expose it via a Repository class.  So the first job is to create our provider.

Create a new Class Library Project called AzureTSProvider, and then create a new class called TableSet.

You also need to add the Azure NuGet packages – so right click the new project, select ‘Manage NuGet Packages’ and search for Windows Azure Storage.  Click Install, accept the Licenses and you’re good to go.

This is going to be a generic class – this allows us to pass in ANY class we construct and act on it regardless.  We do this by adding <TEntity> to the class declaration and stating 'where TEntity : class'.  We also need the new() constraint in the declaration so that our generic code is able to create a new instance of TEntity when it needs one.

– NOTE – it doesn’t have to be TEntity by the way – you can call it whatever you like!

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace AzureTSProvider
{
    public class TableSet<TEntity>
    where TEntity : class,
        new()
    {
        private List<dynamic> internalList;
        private string partitionKey;
        private string tableName;
        private string connectionString;

        internal CloudTableClient tableClient;
        internal CloudTable table;

        public TableSet(string connectionString, string tableName)
        {
            this.partitionKey = typeof(TEntity).Name;
            this.tableName = tableName;
            this.connectionString = connectionString;

            //pluralise the partition key (because basically it is the 'table' name).
            if (partitionKey.Substring(partitionKey.Length - 1, 1).ToLower() == "y")
                partitionKey = partitionKey.Substring(0, partitionKey.Length - 1) + "ies";

            if (partitionKey.Substring(partitionKey.Length - 1, 1).ToLower() != "s")
                partitionKey = partitionKey + "s";

            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
            tableClient = storageAccount.CreateCloudTableClient();
            table = tableClient.GetTableReference(tableName);
            table.CreateIfNotExists();
        }

        public virtual TEntity GetByID(object id)
        {
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, id.ToString()));
            var result = table.ExecuteQuery(query).First();

            return StripDTO(result); // map the returned DynamicTableEntity back to our model (StripDTO is shown later)
        }

        public virtual List<TEntity> GetAll()
        {
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey)); //get all customers - because Customer is our partition key
            var result = table.ExecuteQuery(query).ToList();

            return result.Select(e => StripDTO(e)).ToList(); // again, map each row back to TEntity
        }

        public virtual void Insert(TEntity entity)
        {
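            // Note: TableOperation.Insert expects an ITableEntity, so this naive version only works
            // if TEntity inherits from TableEntity - which is exactly the issue the DTO below addresses.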
            TableOperation insertOperation = TableOperation.Insert(entity);
            table.Execute(insertOperation);
        }
    }
}

To DTO or not to DTO?

I'm going to fast forward a bit here to explain why I did what I'm going to do next.  Imagine I've hooked this into my repository and Unit of Work, and then a WebAPI controller creates a customer object and tries to persist it through my framework.

I immediately ran into an issue.  You see, in order for Table Storage to work with our model, the entities we create have to inherit from TableEntity.  I don't want to do this.  Why?  Because I want my API to have no reference to Azure whatsoever.  Or even my repository.  Everything and anything to do with the persistence of the actual data needs to be neatly encapsulated in my provider classes.

OK, so what we need is a Data Transfer Object (DTO).  The DTO will inherit from TableEntity, and I could either manually copy properties between my entity and the DTO or use something like AutoMapper.

But now my issue is that I have to pass in a DTO along with my entity.  We could just say this is a requirement, but apart from not being very neat, I'm basically having to duplicate every entity I create with a DTO version – all the properties would be identical.

So I decided that the best way forward would be to just create a DTO on the fly using the properties of my passed in entity.

Thank goodness for Dynamic Objects – as these allow me to do just this!

First I create an empty 'base DTO', which is a class that inherits from DynamicObject and ITableEntity.  This gives us an object that implements everything Azure wants, but because it inherits from DynamicObject it is expandable and so allows us to add more properties to it.  This is created in my provider library – so create a new class called TableEntityDTO and paste in the following:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Collections.Generic;
using System.Dynamic;

namespace AzureTSProvider
{
    public class TableEntityDTO   : DynamicObject,ITableEntity
    {
        #region ITableEntity properties
        // Summary:
        //     Gets or sets the entity's current ETag. Set this value to '*' in order to
        //     blindly overwrite an entity as part of an update operation.
        public string ETag { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's partition key.
        public string PartitionKey { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's row key.
        public string RowKey { get; set; }
        //
        // Summary:
        //     Gets or sets the entity's time stamp.
        public DateTimeOffset Timestamp { get; set; }
        #endregion

        // Use this Dictionary store table's properties. 
        public IDictionary<string, EntityProperty> properties { get; private set; }

        public TableEntityDTO()
        {
            properties=new Dictionary<string,EntityProperty>();
        }

        public TableEntityDTO(string PartitionKey, string RowKey)
        {
            this.PartitionKey = PartitionKey;
            this.RowKey = RowKey;
            properties = new Dictionary<string, EntityProperty>();
        }

        #region override DynamicObject's mehtods
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            if (!properties.ContainsKey(binder.Name))
                properties.Add(binder.Name, ConvertToEntityProperty(binder.Name, null));
            result = properties[binder.Name];
            return true;
        }

        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            EntityProperty property = ConvertToEntityProperty(binder.Name, value);

            if (properties.ContainsKey(binder.Name))
                properties[binder.Name] = property;
            else
                properties.Add(binder.Name, property);

            return true;
        }

        public bool TrySetMember(string binder, object value)
        {
            EntityProperty property = ConvertToEntityProperty(binder, value);

            if (properties.ContainsKey(binder))
                properties[binder] = property;
            else
                properties.Add(binder, property);

            return true;
        }

        #endregion

        #region ITableEntity implementation

        public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
        {
            this.properties = properties;
        }

        public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
        {
            return this.properties;
        }

        #endregion

        /// <summary>
        /// Convert object value to EntityProperty.
        /// </summary>
        private EntityProperty ConvertToEntityProperty(string key, object value)
        {
            if (value == null) return new EntityProperty((string)null);
            if (value.GetType() == typeof(byte[])) 
                return new EntityProperty((byte[])value);
            if (value.GetType() == typeof(bool)) 
                return new EntityProperty((bool)value);
            if (value.GetType() == typeof(DateTimeOffset)) 
                return new EntityProperty((DateTimeOffset)value);
            if (value.GetType() == typeof(DateTime)) 
                return new EntityProperty((DateTime)value);
            if (value.GetType() == typeof(double)) 
                return new EntityProperty((double)value);
            if (value.GetType() == typeof(Guid)) 
                return new EntityProperty((Guid)value);
            if (value.GetType() == typeof(int)) 
                return new EntityProperty((int)value);
            if (value.GetType() == typeof(long)) 
                return new EntityProperty((long)value);
            if (value.GetType() == typeof(string)) 
                return new EntityProperty((string)value);
            throw new Exception("This value type" + value.GetType() + " for " + key);
            throw new Exception(string.Format("This value type {0} is not supported for {1}",key));
        }

         /// <summary>
         /// Get the CLR type for the given EdmType; if the type is not supported, throw an exception.
         /// </summary>
        private Type GetType(EdmType edmType)
        {
            switch (edmType)
            {
                case EdmType.Binary : 
                    return typeof(byte[]);
                case EdmType.Boolean : 
                    return typeof(bool);
                case EdmType.DateTime : 
                    return typeof(DateTime);
                case EdmType.Double : 
                    return typeof(double);
                case EdmType.Guid : 
                    return typeof(Guid);
                case EdmType.Int32 : 
                    return typeof(int);
                case EdmType.Int64 : 
                    return typeof(long);
                case EdmType.String : 
                    return typeof(string);
                default: throw new TypeLoadException(string.Format("not supported edmType:{0}" ,edmType));
            }
        }
    }
}
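
Because TableEntityDTO derives from DynamicObject, any property you set on it at runtime goes through TrySetMember and is stored as an EntityProperty, and TryGetMember hands that EntityProperty back when you read it. As a quick illustration (just a sketch of my own – the keys and property names are placeholders, and it assumes the two-argument constructor shown above):

dynamic dto = new TableEntityDTO("Customer", "12345");   // PartitionKey, RowKey

dto.Name = "Acme Ltd";       // routed through TrySetMember and stored as an EntityProperty
dto.Turnover = 100000.0;     // doubles, ints, bools etc. are converted by ConvertToEntityProperty

EntityProperty name = dto.Name;       // TryGetMember returns the stored EntityProperty...
string nameValue = name.StringValue;  // ...so pull the raw value out via its typed accessor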

Now in my TableSet class I create three helper methods. The first is CreateDTO, which takes my source entity, creates a dynamic TableEntityDTO and copies every property from the source entity onto it.

I also have an ‘IsId’ method that I use to scan each source entity property to see if it’s an ID – we’ll need this for the ‘RowKey’, which Azure uses (together with the PartitionKey) as its ‘primary key’.  It’s fairly simplistic but suits my needs.

Finally, I have a StripDTO method that essentially maps the DTO returned by Azure back to the base entity for read queries.

#region object mapping
        dynamic CreateDTO(object a)
        {
            TableEntityDTO dto = new TableEntityDTO();
            object rowKey = null;

            Type t1 = a.GetType();
            Type t2 = dto.GetType();

            //now set all the entity properties
            foreach (System.Reflection.PropertyInfo p in t1.GetProperties())
            {
                dto.TrySetMember(p.Name, p.GetValue(a, null) == null ? "" : p.GetValue(a, null));
                if (IsId(p.Name))
                    rowKey = p.GetValue(a, null);
            }

            if (rowKey == null)
                rowKey = Guid.NewGuid();

            dto.RowKey = rowKey.ToString();
            dto.PartitionKey = partitionKey;

            return dto;
        }

        TEntity StripDTO(Microsoft.WindowsAzure.Storage.Table.DynamicTableEntity a)
        {
            TEntity result = new TEntity();

            Type t1 = result.GetType();
            var dictionary = (IDictionary<string, EntityProperty>)a.Properties;

            foreach (PropertyInfo p1 in t1.GetProperties())//for each property in the entity,
            {
                foreach (var value in dictionary)//see if we have a corresponding property in the DTO
                {
                    if (p1.Name == value.Key)
                    {
                        p1.SetValue(result, GetValue(value.Value));
                    }
                }

            }

            return result;
        }

        private object GetValue(EntityProperty source)
        {
            switch (source.PropertyType)
            {
                case EdmType.Binary:
                    return (object)source.BinaryValue;
                case EdmType.Boolean:
                    return (object)source.BooleanValue;
                case EdmType.DateTime:
                    return (object)source.DateTimeOffsetValue;
                case EdmType.Double:
                    return (object)source.DoubleValue;
                case EdmType.Guid:
                    return (object)source.GuidValue;
                case EdmType.Int32:
                    return (object)source.Int32Value;
                case EdmType.Int64:
                    return (object)source.Int64Value;
                case EdmType.String:
                    return (object)source.StringValue;
                default: throw new TypeLoadException(string.Format("not supported edmType:{0}", source.PropertyType));
            }
        }

        private bool IsId(string candidate)
        {
            bool result = false;

            if (candidate.ToLower() == "id")
                result = true;

            // guard the length check so single-character property names don't throw
            if (candidate.Length >= 2 && candidate.ToLower().EndsWith("id"))
                result = true;

            return result;
        }

        #endregion

Now we update our CRUD methods to use the DTOs:

public virtual TEntity GetByID(object id)
        {
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, id.ToString()));
            var dto = table.ExecuteQuery(query).First();
            TEntity mapped = StripDTO(dto);

            return mapped;
        }

        public virtual List<TEntity> GetAll()
        {
            List<TEntity> mappedList = new List<TEntity>();
            var query = new TableQuery().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey)); //get all customers - because Customer is our partition key
            var result = table.ExecuteQuery(query).ToList();

            foreach (var item in result)
            {
                mappedList.Add(StripDTO(item));
            }
            return mappedList;
        }

        public virtual void Insert(TEntity entity)
        {
            dynamic mapped = CreateDTO(entity);
            TableOperation insertOperation = TableOperation.Insert(mapped);
            table.Execute(insertOperation);
        }
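
I've only covered GetByID, GetAll and Insert here. If you also want to push updates through the provider, a minimal sketch (not part of the code above, but reusing the same table field and CreateDTO helper) could use TableOperation.InsertOrReplace so the whole DTO is simply written back:

        public virtual void Update(TEntity entity)
        {
            // map the entity onto a dynamic DTO exactly as Insert does
            dynamic mapped = CreateDTO(entity);

            // InsertOrReplace overwrites the row if it already exists (or creates it if it doesn't),
            // so no ETag handling is needed for this simple case
            TableOperation updateOperation = TableOperation.InsertOrReplace(mapped);
            table.Execute(updateOperation);
        }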

OK, the final step for the provider side is to create a Context class that our Repository will use.  Create a new class called TSContext and enter the following:

using System;

namespace AzureTSProvider
{
    public abstract class TSContext
    {
        private string tableName { get; set; }
        private string connectionString { get; set; }

        public TSContext(string connectionString, string tableName)
        {
            this.tableName = tableName;
            this.connectionString = connectionString;
        }

        public virtual TableSet<TEntity> Set<TEntity>()
            where TEntity : class, new()
        {
            var set = new TableSet<TEntity>(connectionString, tableName);

            return set;
        }
    }
}

The final step on our journey is to create our abstracted Repository base, our actual CustomerRepository and our UnitOfWork – I’m not going to go into too much detail with these as there are lots of articles that really dig into the nitty gritty of them.

Create a new project called DAL.  Add references to our Model and AzureProvider projects. Create a class called AzureContext that inherits from our TSContext and paste in:

using AzureTSProvider;
using Model;
using System;

namespace DAL
{
    public class AzureContext : TSContext
    {
        private static string tableName;
        private static string connection;

        public AzureContext(string connectionString, string table)
            : base(connectionString, table)
        {
            tableName = table;
            connection = connectionString;
        }

        public TableSet<Customer> Customers { get; set; }
    }
}

Create a new class called RepositoryBase.

using AzureTSProvider;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DAL
{
    public abstract class RepositoryBase<TEntity> where TEntity : class, new()
    {
        internal AzureContext context;
        internal TableSet<TEntity> dbset;

        public RepositoryBase(AzureContext context)
        {
            this.context = context;
            this.dbset = context.Set<TEntity>();
        }

        public virtual TEntity GetByID(object id)
        {
            return dbset.GetByID(id);
        }

        public virtual List<TEntity> GetAll()
        {
            return dbset.GetAll();
        }

        public virtual void Insert(TEntity entity)
        {
            dbset.Insert(entity);
        }

    }
}

Followed by our CustomerRepository, which inherits from the base class.

using Model;
using System;

namespace DAL
{
    public class CustomerRepository : RepositoryBase<Customer>
    {

        public CustomerRepository(AzureContext context)
            : base(context)
        {
            if (context == null)
                throw new ArgumentNullException("Context cannot be null!");
        }

    }
}

Last but not least is our Unit of Work that ties it all together.  Note I am manually entering my Azure details here – you’d normally pass them through from your WebAPI (or whatever else calls it), read them from a config file, or use Dependency Injection.  The ‘tenantId’ – which is passed through as the table name here, so each tenant gets its own table – would come from somewhere else: for example, when a tenant logs in we’d grab a unique text string for that tenant and pass it through.  Azure table names only accept alphanumerics, can’t start with a number and must be 3–63 characters long, so GUIDs or raw ints are no good – something like the user’s email address with all the non-alphanumerics stripped out works nicely (I’ll sketch that just after the code).

using System;

namespace DAL
{
    public class UnitOfWork: IDisposable
    {
        AzureContext context;

        private CustomerRepository customerRepository;

        public UnitOfWork(string tenantId)
        {
            string connectionString = "DefaultEndpointsProtocol=http;AccountName=<my storage name>;AccountKey=<my account key>";

            this.context = new AzureContext(connectionString, tenantId);
        }

        public CustomerRepository CustomerRepository
        {
            get
            {
                if (customerRepository == null)
                    customerRepository = new CustomerRepository(context);

                return customerRepository;
            }
        }

        public void Dispose()
        {
            this.Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!disposing)
            {
                return;
            }
        }
    }
}
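
Going back to that tenantId point, a rough sketch of stripping an email address down to something Azure will accept as a table name might look like this (the helper name is my own, it isn’t part of the provider, and it assumes System.Linq is referenced):

        // hypothetical helper: reduce an email address to letters and digits only,
        // and prefix it if it would otherwise start with a digit
        private static string ToTenantId(string email)
        {
            string stripped = new string(email.Where(char.IsLetterOrDigit).ToArray());

            if (stripped.Length == 0 || char.IsDigit(stripped[0]))
                stripped = "t" + stripped;

            return stripped.ToLower();
        }

        // e.g. ToTenantId("Bretthargreaves@hotmail.com") gives "bretthargreaveshotmailcom"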

We are now good to go! To use it we just make the following call from our WebAPI or wherever:

UnitOfWork proxy = new UnitOfWork("test001");
            Customer customer = new Customer()
            {
                Company = "Square Connection Ltd",
                Email = "Bretthargreaves@hotmail.com",
                Name = "Brett Hargreaves",
                Telephone = "12345 12345678",
                VATNumber = "123 456789 GB"
            };
            proxy.CustomerRepository.Insert(customer);
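
Reading the data back out goes through the same repository methods, so a quick round trip to check the insert worked (again, just a sketch) might be:

var allCustomers = proxy.CustomerRepository.GetAll();   // everything stored under the "test001" tenant
// GetByID works the same way, passing whatever value ended up as the RowKey
// (an 'Id'-style property if the entity has one, otherwise the generated Guid)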

Obviously all this is just a starting point.  One of the most glaring differences between this and a ‘normal’ repository is that changes are persisted immediately.  What we need to do next is implement some change tracking in our Azure provider, plus a ‘SaveChanges’ method that commits everything in one go.

But that’s a blog for another day!