Tuesday, September 9, 2008

Using the repository pattern to achieve persistence ignorance in practice

I recently experimented with migrating a project from Linq2Sql to Linq2NHibernate. It’s a small windows time tracker application that features offline capability.

The original app built a year ago used Linq2Sql’s class designer to create domain classes from existing database tables. Along with the domain classes it created a DataContext class:

public partial class DomainDataContext : System.Data.Linq.DataContext
{
    public System.Data.Linq.Table<Customer> Customers
    {
        get { return this.GetTable<Customer>(); }
    }
    public System.Data.Linq.Table<Project> Projects
    {
        get { return this.GetTable<Project>(); }
    }
}

Table<T> is in fact Microsoft's implementation of the repository pattern. I have two issues with Table<T> as a repository implementation. One, I like my repositories to take the shape of a collection, more in line with what repositories originally were: a facade that lets you access data through a collection metaphor. The method names should be Add, Remove and Clear, as you would expect from a normal collection. In Linq2Sql, Microsoft renamed those to InsertOnSubmit, DeleteOnSubmit and so on.

Issue number two with Table<T> is that the methods Insert/DeleteOnSubmit are not defined in an interface but on Table<T> directly. That means I have to rely on a concrete class. Bad OOD karma! The thing is, these methods are really part of another pattern, Unit of Work. There is a muddy mismatch between the two and a need for a unified way to access data through repositories.

In order to be accessed in a manner closer to real collections, I could let each repository implement ICollection<T>:

public interface Repositories : IDisposable
{
    ICollection<Customer> Customers { get; }
    ICollection<Project> Projects { get; }
}

That's all well and dandy as long as my repositories are simple in-memory collections, or in-memory collections persisted as XML. If I want to switch to repositories backed by Linq2Sql or Linq2NHibernate, trouble arises: each time a repository is queried, the whole table is loaded into RAM and filtered there. Oops. The trouble has to do with the way Linq compiles queries.

Linq is able to choose between running queries in memory or capturing the query in an expression tree and translating it into SQL for execution on the database server. The (not so secret) secret consists of two interfaces: IEnumerable<T> and IQueryable<T>.

If the collection you query against implements IQueryable<T>, the expression is translated to SQL by the Linq2Sql provider. If the collection only implements IEnumerable<T>, the query runs in memory when the GetEnumerator() method is called.
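The difference is easy to see in a small console sketch (illustrative only, no database involved): AsQueryable() wraps an array in an in-memory queryable, so the same filter is captured once as a delegate and once as an expression tree.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4 };

        // IEnumerable<T>: Enumerable.Where takes a compiled delegate;
        // the filter runs in memory when GetEnumerator() is called.
        IEnumerable<int> inMemory = numbers.Where(n => n > 2);

        // IQueryable<T>: Queryable.Where takes an expression tree,
        // which a provider such as Linq2Sql can translate to SQL.
        IQueryable<int> translatable = numbers.AsQueryable().Where(n => n > 2);

        Console.WriteLine(string.Join(",", inMemory));       // 3,4
        Console.WriteLine(translatable.Expression.NodeType); // Call
    }
}
```

The queryable version keeps the whole query as data (a MethodCallExpression), which is exactly what a provider needs in order to translate it instead of executing it.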

When switching from in-memory collections to ORM-backed repositories, I can no longer let my repositories implement just ICollection<T>, since Table<T> and NHibernate's Linq<T> implement IQueryable<T> instead. In other words, I'm forced to change my interface to:

public interface Repositories : IDisposable
{
    IQueryable<Customer> Customers { get; }
    IQueryable<Project> Projects { get; }
}

Only now, I’m back to having repositories that are queryable but do not include any way to add or delete objects.

What I really like is a way to leave my Repositories interface alone while still being able to switch between database persistence, file based persistence, no persistence, pen-and-paper based persistence, coffee based persistence … anyway, you get the point.

What I need is a new interface:

public interface QueryableCollection<T> : IQueryable<T>, ICollection<T> { }

Allowing me to declare my repositories as:

public interface Repositories : IDisposable
{
    QueryableCollection<Customer> Customers { get; }
    QueryableCollection<Project> Projects { get; }
}

That way I can easily swap the persistence mechanism, or even have two different schemes running at the same time.

Here is my repository implementation for NHibernate:

public class NHRepositories : Repositories, ConnectionProvider
{
    private readonly ISession _session;

    public QueryableCollection<Customer> Customers
    {
        get { return new NHRepositoryAdapter<Customer>(_session); }
    }

    public QueryableCollection<Project> Projects
    {
        get { return new NHRepositoryAdapter<Project>(_session); }
    }
}

NHRepositoryAdapter exposes NHibernate's Linq<T> as a QueryableCollection<T>:

internal class NHRepositoryAdapter<T> : QueryableCollection<T>
{
    private readonly ISession _session;

    public NHRepositoryAdapter(ISession session)
    {
        _session = session;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _session.Linq<T>().GetEnumerator();
    }

    // Remaining QueryableCollection<T> members omitted for brevity.
}

To support the in-memory collections, I made an adapter that exposes an IList<T> as a QueryableCollection<T> using Linq's built-in AsQueryable() method:

public class QueryableList<T> : IList<T>, QueryableCollection<T>
{
    private readonly List<T> _list;
    private readonly IQueryable<T> _queryable;

    public QueryableList()
    {
        _list = new List<T>();
        _queryable = _list.AsQueryable();
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _list.GetEnumerator();
    }

    public Expression Expression
    {
        get { return _queryable.Expression; }
    }

    // Remaining IList<T> and IQueryable<T> members delegate to
    // _list and _queryable in the same way.
}
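For reference, here is a complete, runnable sketch of the same idea with the remaining interface members filled in (the IList<T> part is dropped to keep it short):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public interface QueryableCollection<T> : IQueryable<T>, ICollection<T> { }

public class QueryableList<T> : QueryableCollection<T>
{
    private readonly List<T> _list = new List<T>();
    private readonly IQueryable<T> _queryable;

    public QueryableList() { _queryable = _list.AsQueryable(); }

    // IQueryable<T> side: delegate to the wrapped queryable.
    public Expression Expression { get { return _queryable.Expression; } }
    public Type ElementType { get { return _queryable.ElementType; } }
    public IQueryProvider Provider { get { return _queryable.Provider; } }

    // ICollection<T> side: delegate to the list.
    public void Add(T item) { _list.Add(item); }
    public bool Remove(T item) { return _list.Remove(item); }
    public void Clear() { _list.Clear(); }
    public bool Contains(T item) { return _list.Contains(item); }
    public void CopyTo(T[] array, int index) { _list.CopyTo(array, index); }
    public int Count { get { return _list.Count; } }
    public bool IsReadOnly { get { return false; } }

    public IEnumerator<T> GetEnumerator() { return _list.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Demo
{
    static void Main()
    {
        QueryableCollection<string> customers = new QueryableList<string>();
        customers.Add("Acme");          // collection side
        customers.Add("Initech");

        Console.WriteLine(customers.Count);                         // 2
        Console.WriteLine(customers.Count(c => c.StartsWith("A"))); // 1
    }
}
```

Both sides work through the one interface: Add comes from ICollection<T>, while the Count(predicate) call resolves to Queryable.Count and runs through the expression-tree machinery (in memory here, against EnumerableQuery).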


Couldn't I just implement my repositories by inheriting List<T>, implementing IQueryable<T> and then delegating calls to IQueryable<T>'s members to Enumerable.AsQueryable()? That would save the tedious wrapper code. Unfortunately, that results in stack overflow errors when Linq calls the getters for the three properties Expression, Provider and ElementType defined in IQueryable<T>. The reason is that AsQueryable() is an extension method and thus doesn't obey normal inheritance rules: since the object already implements IQueryable<T>, AsQueryable() simply returns it, so calling base.AsQueryable() gives the same result as this.AsQueryable() even though the getters have been overridden in the subclass.
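The short-circuit is easy to demonstrate. AsQueryable() checks whether its argument already implements IQueryable<T> and, if so, hands the very same object back. A minimal sketch of the failed shortcut:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The tempting shortcut: a List<T> subclass that delegates the
// IQueryable<T> members back to AsQueryable().
class ShortcutList<T> : List<T>, IQueryable<T>
{
    // AsQueryable() sees that 'this' already implements IQueryable<T>
    // and returns 'this' unchanged - so each getter calls itself forever.
    public Expression Expression
    {
        get { return this.AsQueryable().Expression; } // StackOverflowException
    }
    public Type ElementType
    {
        get { return this.AsQueryable().ElementType; }
    }
    public IQueryProvider Provider
    {
        get { return this.AsQueryable().Provider; }
    }
}

class Demo
{
    static void Main()
    {
        var list = new ShortcutList<int>();
        // AsQueryable() short-circuits and hands back the same object:
        Console.WriteLine(ReferenceEquals(list, list.AsQueryable())); // True
        // Touching list.Expression here would recurse until the stack blows.
    }
}
```

That identity check is the whole story: the wrapper classes exist precisely because AsQueryable() cannot be asked to wrap something that is already an IQueryable<T>.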

Another concern to air is: does the persistence mechanism really change often enough to justify this abstraction and added complexity? Not always. In this particular app, yes. One of the requirements is smooth operation online as well as offline, and I can achieve that easily using my QueryableCollection interface. When running offline, my repositories use XML as storage. When online and synchronizing, they use NHibernate with a Sql Server database behind.

Another way of achieving offline functionality would be to only let the app talk to a SqlCe 3.5 database via Linq2Sql or Linq2NHibernate and then let ADO.NET Synchronization Services sync it with the master Sql Server database. Then you wouldn't need the abstraction I made, but the complexity would only be relocated to configuring Synchronization Services.

Anyway, this solution gives me maximum flexibility in persistence ignorance. The price is a new interface and two adapter classes for each storage mechanism. It's not feasible in all solutions, but it can be if you need to manage offline/online synchronization manually or store data in several places using the same repository abstraction.

2 Responses to 'Using the repository pattern to achieve persistence ignorance in practice'
  1. Morten Lyhr said,

    ON SEPTEMBER 9TH, 2008 AT 11:40 PM

    Great post Søren!

    But its not persistence ignorence you have achieved, its ORM ignorence.

    Actually I was wondering how to make a “POCO LINQ” repository that was not tied to any specific ORM. I guess you beat me to it :-)

  2. Rasmus Kromann-Larsen said,

    ON OCTOBER 10TH, 2008 AT 11:50 PM

    Nice post.

    I’m about to play around with LINQ2NHibernate myself, in a LINQ-less solution that was recently kicked up to 3.5. I think your post might be the inspiration for my repositories.

    - Rasmus.

Friday, June 27, 2008

Dear Santa, bring us Boo 1.0

I wish the programming language Boo had a greater momentum and larger user group. I’d love to use it for writing production quality enterprise apps, but I don’t dare. To be frank, even though the authors do an excellent job of adding features and fixing bugs, there’s just substantially fewer hands available, compared to the forces behind C# 3.0 and VB.NET 9.0.

The ideas behind Boo are fresh and experimental, and they let us do great things with little effort. My hands ache every time I have to transform some collection into another using 10 lines of C# 2.0 when I could have done it using 2 lines of Boo. Getting lambda expressions and extension methods in C# 3 is a step forward, but Boo is already moving further ahead, giving us extension properties and a built-in abstract macro facility that enables us to write in-language DSLs.

Still, the risk of switching to Boo for real world apps is too big, and the tool support is too small at this time. Boo also needs to let me define my own generic types and methods before our relationship can move to the serious phase.

I wish there was some way I could support the authors of the Boo programming language. Money? Don’t have that much. Programming time? My family will leave me if I spend more pc-time.

Instead, here are a couple of words of appreciation: Boo brings the best from the functional style languages and the CLR. It provides ultimate power while still keeping tight focus on simplicity.

In a perfect world… (sigh)

Saturday, May 3, 2008

Edit and Continue effectively disabled in Visual Studio 2008

I find edit and continue to be a productivity booster and I use it every day. Or rather, I used to use it, before I got into the habit of using LINQ. I also find LINQ to be a productivity booster, because I can express my intent at a higher level of abstraction than before. I rarely write foreach loops anymore, since it's often briefer and more to the point to use one of LINQ's extension methods and lambda expressions.
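A hypothetical example of the kind of replacement I mean (the numbers are made up, not from the real app):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        var hours = new List<double> { 1.5, 8.0, 2.5, 7.25 };

        // Before LINQ: an explicit loop...
        double total1 = 0;
        foreach (var h in hours)
            if (h > 2.0)
                total1 += h;

        // ...replaced by a single expression with a lambda:
        double total2 = hours.Where(h => h > 2.0).Sum();

        Console.WriteLine(total1 == total2); // True
    }
}
```

The second form is exactly the kind of method body that, as described below, trips up edit and continue: the lambda is what the debugger refuses to recompile in place.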

Whenever you have a method that contains one or more lambda expressions, edit and continue stops working. It's not that it's actually disabled in VS: you can go ahead and edit your method while debugging, it just won't allow you to continue. So it's effectively Edit and NO Continue™.

It didn't start as a problem for me, but becoming friends with LINQ and really getting it under my skin means a rough estimate of 75% of my methods contain LINQ code these days. Why don't my two best friends, LINQ and Edit'n'Continue, like each other? I pray the explanation is: it's hard to do, and Microsoft didn't get it ready before they shipped VS 2008.

Service pack 1 maybe?

2 Responses to 'Edit and Continue effectively disabled in Visual Studio 2008'

Subscribe to comments with RSS or TrackBack to 'Edit and Continue effectively disabled in Visual Studio 2008'.

  1. Morten Lyhr said,

    ON JUNE 13TH, 2008 AT 7:48 PM

    I really dont see the point in E&C?

    Why do I have to use my time in the debugger?

    Stay out of the debugger, with unit test and TDD.

    As usual Jeremy D. Miller — The Shade Tree Developer, sums it up nicely.

    Occasionally you’ll see a claim that TDD == 0 debugging. That’s obviously false, but effective usage of TDD drives debugging time down and that’s still good. From my experience, when the unit tests are granular the need for the debugger goes way down. When I do have to fire up the debugger I debug through the unit tests themselves. Cutting down the scope of any particular debugging session helps remarkably. The caveat is that you really must be doing granular unit tests. A lot of debugging usage is often a cue to rethink how you’re unit testing.

    Taken from http://codebetter.com/blogs/jeremy.miller/archive/2006/03/31/142091.aspx

  2. Soren said,

    ON JUNE 20TH, 2008 AT 11:02 AM

    I’m not a debugger lover :) I’d certainly love to use it less and I too think that doing TDD helps in that regard. But even unit tests and the code under test have to be debugged once in a while.

Given that a debugger is sometimes necessary, E&C just makes the ride much smoother. The whole experience is more organic, like molding a sculpture with my hands.

Contrast that with the rigid feeling of writing, compiling, running tests. The pause from the time when you have a thought till the time when its effect becomes observable is very small with E&C.

    The point you are making is against relying overly on debugging, not against E&C. A debugger capable of E&C is preferable over one that isn’t.

Friday, May 2, 2008

Design by C#ntract

Designing by contract is a way of writing the specification into the class or method itself. It holds the promise of making specification and test code more visible to the users of the unit so as to minimize misunderstandings and catch error conditions early.

It's just not something we C# developers are used to being able to do. Not in C#, anyway. Switching to the Boo language, where it can be done safely by the use of macros, or to Spec#, which (like Boo) is still in development, is something few people have the guts to do in production.

So when I read The Wandering Glitch's series about doing Design by Contract in C# using the new functional possibilities, I was thrilled. It goes a long way toward letting us specify pre- and postconditions. Andrew Matthews even does postconditions that reference state from before method entry, as in: age > old(age).

His code also has the benefit that when you pass in an expression that results in an exception you get a string representing the original expression thrown back at you as the exception’s message.

There were a couple of things that I thought could be improved a bit:

  1. The error message that you get back from Expression.ToString() is not nice to look at. A typical string representation of an Expression could be: () => (value(DbcTest.Person+<>c__DisplayClass0).age >= 0)
  2. It seems like overkill to do serialization to capture the old state of simple types like ints and strings.

The first one is easy. With the help of a simple regular expression, we can throw away the ugly part and leave behind the important stuff, so that the exception gives me this message:

Violation of precondition: age >= 0

There’s not a whole lot of code behind this:

public static void Require<T>(this T obj,
    Expression<Func<bool>> booleanExpression)
{
    var compiledPredicate = booleanExpression.Compile();
    if (!compiledPredicate())
        throw new ContractViolationException(
            "Violation of precondition: "
            + booleanExpression.ToNiceString());
}

static readonly Regex noiseRemoverRegex1 =
    new Regex(@"value\([^)]*\)\.", RegexOptions.Compiled);
static readonly Regex noiseRemoverRegex2 =
    new Regex(@".*=>\s\((.*)\)", RegexOptions.Compiled);

private static string ToNiceString(
    this Expression expression)
{
    var output = expression.ToString();
    output = noiseRemoverRegex1.Replace(output, "");
    output = noiseRemoverRegex2.Replace(output, "$1");
    return output;
}

Which can then be used thus:

this.Require(() => age >= 0);

So far, all we have is another way of writing Debug.Assert statements with a little extra oomph.
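If you want to try the cleanup in isolation, here is a self-contained sketch that applies the same two patterns (with their escapes restored) to the sample string from above:

```csharp
using System;
using System.Text.RegularExpressions;

class NoiseDemo
{
    static void Main()
    {
        // The raw Expression.ToString() output quoted earlier in the post:
        var raw = "() => (value(DbcTest.Person+<>c__DisplayClass0).age >= 0)";

        // Step 1: strip the compiler-generated closure-class noise.
        var step1 = Regex.Replace(raw, @"value\([^)]*\)\.", "");
        // step1 is now "() => (age >= 0)"

        // Step 2: unwrap "() => ( ... )" down to the bare condition.
        var clean = Regex.Replace(step1, @".*=>\s\((.*)\)", "$1");

        Console.WriteLine(clean); // age >= 0
    }
}
```

The first pattern removes the value(...) wrapper the C# compiler puts around captured locals; the second keeps only the capture group inside the outermost parentheses.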

Comparing a variable to the previous value

How do we store the old value of a variable for later comparison? Closures are capable of freezing the value of local variables. But what if the variable is a reference type and the value that you want to compare to its old value is a member of that object? If the object is immutable (like strings are), no problem. Then you know the value hasn’t changed, because it can’t change.

But if you're trying to validate old_person => old_person.Age == person.Age, you'll be in trouble, because the value of Age will compare against itself and give a false positive. To overcome that, Matthews uses serialization to make a deep clone of Person and all its members and its members' members and so on. But that has a huge cost. You don't know how big the serialized object graph is going to be, but I'll bet it will include lots more objects than you can compare in a line of code!
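The false positive is easy to reproduce with a bare-bones, hypothetical Person class:

```csharp
using System;

class Person { public int Age; }

class Demo
{
    static void Main()
    {
        var person = new Person { Age = 10 };

        // The "old" value is captured by reference: same object, not a copy.
        Person old_person = person;

        person.Age = 11;

        // Age is compared against itself, so the check always passes:
        Console.WriteLine(old_person.Age == person.Age); // True (false positive)
    }
}
```

Only a copy taken at capture time (a deep clone, or a plain value-type copy as below) can detect that Age actually changed.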

So I opted for a simpler approach that allows me only to compare old values of value types. Value types, unlike reference types, can be captured on block entry:

public class EnsureBlock<T> : IDisposable
    where T : struct
{
    protected const string ViolationTemplate
        = "Violation of contract: {0}";
    private readonly Func<T, bool> _predicate;
    private readonly T _oldT;
    private readonly string _predicateString;

    public EnsureBlock(
        Expression<Func<T, bool>> predicate,
        T oldValue)
    {
        _predicateString = predicate.ToNiceString();
        _predicate = predicate.Compile();
        _oldT = oldValue;
    }

    public void Dispose()
    {
        if (!_predicate(_oldT))
            throw new ContractViolationException(
                string.Format(ViolationTemplate, _predicateString));
    }
}

// Only syntactic sugar:
public static class EnsureBlockExtension
{
    public static EnsureBlock<T> Ensure<T>(
        this object obj,
        Expression<Func<T, bool>> predicate,
        T oldValue)
        where T : struct
    {
        return new EnsureBlock<T>(predicate, oldValue);
    }
}

The little "where T : struct" constraint after the class declaration restricts the captured value to simple types and user-defined structs. Objects cannot be passed in, so the following is allowed by the compiler:

int age = 12;
using (this.Ensure(old_age => age > old_age, age))

The following, however, is not accepted by the compiler because p is a reference type:

var p = new Person();

using (this.Ensure(old_person => p.Age > old_person.Age, p))

So what we really want to do is pass in the p.Age member as the old value:

var p = new Person();

using (this.Ensure(old_age => p.Age > old_age, p.Age))
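To see the whole mechanism run end to end, here is a self-contained sketch with minimal stand-ins for the post's types: ContractViolationException is stubbed, the message formatting is simplified, and the Ensure() extension is replaced by direct construction to keep it short.

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical minimal stand-in for the post's exception type:
public class ContractViolationException : Exception
{
    public ContractViolationException(string message) : base(message) { }
}

public class EnsureBlock<T> : IDisposable where T : struct
{
    private readonly Func<T, bool> _predicate;
    private readonly T _oldValue;

    public EnsureBlock(Expression<Func<T, bool>> predicate, T oldValue)
    {
        _predicate = predicate.Compile();
        _oldValue = oldValue; // value type: copied at block entry
    }

    // The postcondition is checked when the using-block exits:
    public void Dispose()
    {
        if (!_predicate(_oldValue))
            throw new ContractViolationException("Violation of contract");
    }
}

class Demo
{
    static void Main()
    {
        int age = 12;
        using (new EnsureBlock<int>(old_age => age > old_age, age))
        {
            age = 13; // 13 > 12 on exit, so no exception
        }
        Console.WriteLine("ok");

        try
        {
            int years = 5;
            using (new EnsureBlock<int>(old => years > old, years))
            {
                // years left unchanged: 5 > 5 fails on exit
            }
        }
        catch (ContractViolationException)
        {
            Console.WriteLine("violated");
        }
    }
}
```

Note how the closure sees the *current* value of the local (age == 13 at Dispose time), while the old value is frozen as a struct copy in the constructor; that asymmetry is exactly what makes the old/new comparison work for value types.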

It’s definitely not perfect. We still need to compare object references against their old values sometimes. There are two ways to go about that:

  1. Make an overload of Ensure that has a "where T : class" constraint and uses Matthews' code to serialize the object graph.
  2. Walk the expression tree and manually capture all members that are references within the expression. Obviously that's not a trivial thing to do.

Maybe I’ll give it a try some day. If you don’t beat me to it!

One Response to 'Design by C#ntract'

Subscribe to comments with RSS or TrackBack to 'Design by C#ntract'.

  1. Søren on DBC « The Wandering Glitch said,

    ON MAY 5TH, 2008 AT 1:27 PM

    […] than comment on the blog, he went away and did something about it. And it’s pretty good! Go take a look, and then pick up the baton from him. Your challenge is to extract the parmeter objects from the […]

Monday, April 21, 2008

Grammar noise cancellation

Imagine a simple Irony grammar written in C# and capable of parsing the following string into a list of greetings:

mjallo dude howdy dudess hey "gal"

This could be written like so:

class GreetGrammar : Grammar
{
    public GreetGrammar()
    {
        var stringlit = new StringLiteral("stringlit");
        var id = new IdentifierTerminal("id");
        var homergreeting = new NonTerminal("homergreeting", "mjallo" + id);
        var cowboygreeting = new NonTerminal("cowboygreeting", "howdy" + id);
        var greeting = new NonTerminal("greeting",
                               homergreeting |
                               "hey" + stringlit |
                               cowboygreeting);
        var program = new NonTerminal("program", greeting.Star());
        Root = program;
    }
}

There’s quite a few characters of noise in this, compared to a clean EBNF syntax. Let’s enumerate what’s distracting:

  1. Every declaration has the name declared twice: as an identifier and again in a string.
  2. "new NonTerminal" is repeated a lot and hinders readability.
  3. "+" is used as a so-called sequence operator, whereas BNF uses a space.
  4. "var" is distracting.
  5. The *, ? and + operators in EBNF are written as method calls: Star(), Q() and Plus() respectively.

If we switch to the Boo language, then with three little Boo macros, we can get rid of 1 and 2 to render this:

class BooGrammar(Grammar):
  def constructor():
     stringliteral stringlit
     identifier id
     rule cowboygreeting, "howdy" + id
     rule homergreeting, "mjallo" + id
     rule greeting, homergreeting | ("hey" + stringlit) | cowboygreeting
     rule program, greeting.Star()
     Root = program 

That’s a little bit nicer, innit? Note that the var keyword is implicit in Boo, so that’s a free ride.

The only new noise added is the parentheses around "hey" + stringlit. I believe it has something to do with a difference between C# and Boo in operator precedence or operator overloading. If you omit the parentheses, the grammar compiles, but it won't parse the input string properly.

So how about noise 3 and 5? Can we get rid of those as well?

The problem with the sequence operator (3) is that space is not an operator in either C# or Boo, so there’s nothing to overload.

Let's look at the three operators: *, ? and +. Well, in general purpose languages like C# and Boo, * and + are binary operators; in EBNF they are unary. Therefore it's not possible to steal them. And ? belongs to the conditional operator in C# and isn't an operator at all in Boo.

Still, this is a small step towards noise free executable grammars in .NET.

Thursday, April 17, 2008

Looking for an open source text editor component with syntax highlighting

I’m trying to give a little back to an open source project, namely Irony that I blogged about. My idea was this: We’re getting used to intellisense features in mainstream .NET languages such as syntax coloring, error highlighting, code completion, signature tool tips and more. When writing a DSL to be used and understood by business experts and developers, those things are not readily available.

The benefit of writing an add-in for Visual Studio that allows intellisense for DSLs parsed with Irony is big. But writing a managed language service for Visual Studio is no simple task. I want to avoid the complexity of the Visual Studio domain model and keep it simple using a standalone text editor component. So I went searching…

There had to be lots of open source syntax highlighting text editors out there, I thought. And there are. I found a couple of well written candidates: the SharpDevelop IDE TextEditor, the xacc IDE editor and SyntaxBox from the Puzzle Framework. There are also quite a few commercial offerings, but introducing a dependency on a commercial product in an open source project is not the road to popularity.

Both SyntaxBox and SharpDevelop let you extend the built-in syntax schemes with your own. This is done by writing an XML-based grammar. It's possible, and I have done it a couple of times. But having already written my grammar once using Irony, it just doesn't feel right having to do it over again, this time using a different syntax.

Running an Irony-generated parser outputs not only an AST but also a list of tokens. If I could feed that list of tokens to the editor and bypass the built-in lexing mechanism, I'd be laughing now. But it's not that easy.

The problems with the three components I tried fell into three categories:

  • The editor is entangled in references to other parts of its project, making it impossible to reuse in other projects like Irony.
  • The built-in lexer cannot be turned off, so lexing is done twice and two sets of formats for each token have to be merged, resulting in awfully bad performance.
  • The extension points are leaky abstractions that require you to know the inner workings of the text rendering.

Not unsolvable issues, but issues that result in poor maintainability and hacks en masse. Plus it takes a lot more (spare)time than I have.

Do you know of any open source text editor component that supports syntax highlighting AND lets me replace the tokenizer/lexer with something else?

2 Responses to 'Looking for an open source text editor component with syntax highlighting'

Subscribe to comments with RSS or TrackBack to 'Looking for an open source text editor component with syntax highlighting'.

  1. Daniel Grunwald said,

    ON APRIL 18TH, 2008 AT 11:38 AM

    You can supply our own implementation of IHighlightingStrategy for SharpDevelop’s text editor and consume your own tokens in that. The interface isn’t as clean as it should be (at least in SharpDevelop 2.x), but it’s certainly possible. I would like to hear what problems you had exactly with SharpDevelop’s editor.

  2. Soren said,

    ON APRIL 19TH, 2008 AT 8:41 PM


    the #dev editor is a great editor, and I really hope you will help me get this scenario working.

After several failed attempts to make a custom IHighlightingStrategy work, I finally tried cutting to the bone by implementing the simplest possible scenario: not touching LineSegment.Words at all within MarkTokens(). I expected the text to be all black and otherwise work normally. But the editor starts behaving weirdly and it's not possible to enter or edit text, because it gets cut off at a fixed column.

    Try downloading http://skarpt.dk/blog/SDTextEditorTest.zip. It references ICSharpCode.TextEditor.dll version

Looking at the default implementation, I get the feeling that MarkTokens needs to be doing something else, something that is not obvious from the interface definition?

Wednesday, April 16, 2008

No reason to laugh at Irony

Being a DSL geek, I just want to give a shout out to Roman Ivantsov's Irony project. Irony is a LALR parser generator that lets you write your grammar in C#. If you're looking to build your own little domain specific language, Irony is a good alternative to ANTLR and Gold Parser Builder.

A grammar written in C# with Irony looks as much like EBNF notation as possible given the constraints that writing them in C# imposes. For example, this definition of three non terminals in EBNF:

Expr  ::=  n | v | Expr BinOp Expr | UnOp Expr | '(' Expr ')'
BinOp ::=  '+' | '-' | '*' | '/' | '**'
UnOp  ::=  '-'

…translates into this C# code for an Irony grammar:

Expr.Rule = n | v | Expr + BinOp + Expr |
           UnOp + Expr | "(" + Expr + ")";
BinOp.Rule = Symbol("+") | "-" | "*" | "/" | "**";
UnOp.Rule = Symbol("-"); 

(Add to this a line for declaring each of the three non terminal variables - I left them out to prove a point ;-)

The syntax of the C# and the EBNF versions is impressively similar, though there is still some noise left. It's not as pretty as what Gilad Bracha can do in Newspeak, but then again, C# wins by being a language that developers actually use.

Gold Parser Builder and ANTLR may be more mature than Irony, but what I like most about the latter is that the grammar is compiled along with the application that uses it to parse DSL code. That means less switching between Visual Studio and ANTLR or Gold. Less fuss. A shorter path from language design to language test to language use.

The abstract syntax tree (AST) that my Irony-generated parser produces is nice and clean. It can even filter out punctuation characters so that, for instance, parentheses do not make up nodes themselves; only the expression within the parentheses is a node in the tree.

That's it for the extremely brief introduction to Irony. You can check out some more detailed samples by downloading it. Irony is definitely part of my toolbox, so stay tuned for more on Irony.

One Response to 'No reason to laugh at Irony'

Subscribe to comments with RSS or TrackBack to 'No reason to laugh at Irony'.

  1. Soren On Software » Looking for an open source text editor component with syntax highlighting said,

    ON APRIL 17TH, 2008 AT 10:27 PM

    […] trying to give a little back to an open source project, namely Irony that I blogged about. My idea was this: We’re getting used to intellisense features in mainstream .NET languages […]

Bluefield projects


The terms greenfield site and brownfield site have their origin in urban planning. Greenfield means using fresh farm land for building projects. Brownfield means reusing existing industrial sites for new purposes.

A greenfield software project is a fresh-start project that lets you make design decisions without regard to an existing codebase. Productivity is high, as you are not bound by existing code. A brownfield project is the opposite: maintaining legacy code. Productivity is low. Working on a brownfield project often feels like dragging through mud.

Here are two paradoxes:

  • Brownfield projects are more common than greenfield projects, yet most developers treasure greenfield more than brownfield.
  • Computer science schools spend more time teaching greenfield-related techniques than brownfield ones. Meanwhile, a project may start green, but before long it becomes brown. Even code you wrote yourself will seem unintelligible in six months.

Most projects that I participate in have a color somewhere between green and brown. Often there is a legacy app written in VB6 that has to be rewritten in C#. The existing app is considered the blueprint, and all of its functionality must be duplicated in the new app. Plus all the new features the customer expects to be thrown in while we're at it.

These kinds of projects require you to reuse some legacy code while allowing you to make some greenfield decisions.

How to color these projects? Blackfield? Too sinister. Bluefield? Well, the symbolism is not as clear as with green and brown. What's your favorite color?

Wednesday, January 23, 2008

Design by Contract in more than 20 lines

Some time ago, Ayende wrote a post, Boo: Design By Contract in 20 lines. It shows what can be done with very little effort using Boo’s meta programming constructs.

I shamelessly ripped his code in order to take it a bit further. I wanted keywords that resemble those of Spec#, Microsoft's DBC-enabled incubation language: requires, ensures and invariant. There are more features in Spec#, but these are the basic three.

I also wanted requirements that can be placed inside the method, not only as attributes outside of it. So here's the code for download.

Here's a class that makes use of these constructs:

import DesignByContract
import System

[invariant(women > 0)] #Invariant declared outside of class still has access to fields
class Promise:
 eternalLife = false
 women = 8
 beer = "A lot"
 forgiven = true

 def SmellBad():
  women = 0

 [requires(beer == "Unlimited")] #Requirement outside of method scope
 def GetDrunk():
  beer = "none"
  ensures women > 3:   #Ensures statement that works on a block
   women = 4

 [ensures(eternalLife)] # Ensures statement outside of method scope
 def DuplicateCode():
  requires beer == "A lot" #Requires statement inside
  if forgiven:
   eternalLife = true

And finally, here’s a specification, or test if you will:

import Specter.Framework

context "Invariant, requires and ensures":
 promise as duck

  promise = Promise()

 specify { promise.SmellBad() }.Must.Throw()

 specify { promise.GetDrunk() }.Must.Throw()

 specify { promise.DuplicateCode() }.Must.Not.Throw()

The added requires, ensures and invariant statements in the Promise class act as a form of specification for the clients of the class. They communicate (and check at runtime) which conditions are to be met if the output is to be valid. Now, the specification in the last code segment is written using Specter, and it also acts as a specification for the class.

Note: If you haven't tried Specter, which is a really nice DSL on top of NUnit, here's what happens: each context is turned into an NUnit test class and each specify statement is translated into a test method, which can then be run using, for instance, the NUnit GUI or Specter's own console runner. The syntax is way nicer than NUnit's Assert.ThisAndThat(…) and the test methods are automatically named.

Both the internal specification in the form of requires, ensures and invariant statements and the external specification embodied in the specter context (unit test) tell us something about how the system is expected to behave under certain conditions.

When to use which, then? Good question, glad you asked. I haven't thought about it for very long, but perhaps you have an idea? I suppose the main difference is that the external specification is run on demand, like a test, while the internal specification runs every time the method runs, in production as well. The internal specification had better be relatively cheap compared to the external one. So, any thoughts?