Monday, September 07, 2009
An Introduction to Applied Evolutionary Metaheuristics
Sunday, August 09, 2009
The Broken Waterfall
The traditional predictive approach to project management is being rejected in favour of an adaptive, or Agile, approach.
This is not a matter of buzz-words or faddish management techniques; it is a genuine commitment to help clients get the software they actually want - on time and within budget.
The Problem
There is a problem with the delivery of software. The more complex a project, the greater the chance it will be delivered over budget and behind schedule. As a project grows in complexity there comes a point where this potential for failure becomes almost a guarantee. Most experienced project managers understand this and strain every sinew to prevent it, and most experienced programmers have lived through the intense disappointment of seeing their work fail to achieve its initial promise. Yet time and again, despite the best efforts of genuinely talented and motivated people, software projects are delivered late, cost too much and do not function as the client expected. Why is this?
For each failed software project the problem typically turns out to be the plan. Now that may seem trivially obvious. Looking back over a failed project it is easy to suggest that if only the plan had been more precise then the project could have been more controlled and so more successful.
This is not correct.
The problem does not lie in the quality of the planning, the problem lies in the type of plan, specifically the attempt to create an up-front plan that covers the entire project life-cycle. This is not so obvious - how can you run a project without deciding what you need up-front?
To understand why up-front planning impedes the successful delivery of quality software it is first necessary to understand what is meant by a plan in this traditional sense, and then see how this concept can be dispensed with and replaced with a new type of planning mechanism.
What's in a Plan
At the start of a traditional project there is the familiar requirements-capture phase. This typically involves the writing of various specifications, a user specification that outlines the requirements in the language of the client, a functional specification that outlines the requirements in the language of the programmer and then perhaps a fully detailed technical specification that describes the requirements in a pseudo programming language.
Once complete, these detailed specifications provide the basis for all future work. They allow predictions to be made about the project's costs as well as its anticipated schedule. Specification documents also serve a secondary function. They give both the client and the engineers a form of 'contract' that, upon project delivery, allows everybody to compare what was promised with what was actually delivered.
This up-front planning process is often called the 'waterfall' model: a highly structured methodology that steps through requirements-capture, analysis, design, coding and testing in a strict, pre-planned sequence. Progress is generally measured in terms of deliverable artifacts: requirement specifications, design documents, test plans and code reviews.
The Waterfall is Broken
There are good reasons why traditional, up-front planning fails. Unfortunately these reasons tend to make both clients and engineers feel uncomfortable so they are rarely spoken out loud.
Firstly, up-front planning means that the specification documents are written before any software is built. Experts, using all their intellectual powers and experience, attempt to imagine the software and in doing so mentally traverse all of its myriad details. Since no software has yet been built, the hypothetical assertions contained within these documents cannot be tested experimentally. In science, a hypothesis that cannot be tested is called pseudo-science; by the same token, a specification whose assumptions cannot be tested should be considered pseudo-planning.
Secondly, at the start of any reasonably complex project there is always an inescapable knowledge gap. This gap exists between:
- The business knowledge brought by the client
- The technical knowledge brought by the engineers
To begin with these two bodies of knowledge do not mix well as the clients do not really understand the language of software engineering and the engineers do not really understand the language of the client's specific business. This will change as time goes on and eventually the distinct bodies of information will mix and become one shared information landscape. However, at the start of a project when traditional up-front planning occurs, this inevitable knowledge gap leads to two critical and incorrect assumptions:
1. The client knows what they want their new software to do
Many clients come to a project with a good idea of what they want. Perhaps they have spent time and effort working this out; perhaps they have a legacy system that shows them much of what they want and what they do not. However, at the start of a project the client cannot know what they want in sufficient detail to create a complete and precise plan. They can provide a business vision and business constraints, but they cannot state in detail the processes required to deliver that vision because they have not yet absorbed the necessary details of the engineering environment. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are commissioning.
2. The engineers know how to implement the client's business vision
Many engineers come to a project with a good idea of how to build business systems. They will have spent considerable time and effort building other, perhaps similar systems. However at the start of a project engineers cannot know how to implement the precise details of a specific business application because they have not yet absorbed the detailed business knowledge brought by the client. A superficial understanding can be gleaned during the initial planning meetings but this will not produce a sufficient understanding of the software they are being asked to deliver.
Predictive planning fails because an accurate plan requires a genuine, non-superficial understanding of both the client's business knowledge and the engineers' technical knowledge. Traditional specifications are created at the start of a project, before either party has had enough time to come to such an understanding. It takes much effort to synthesize the two bodies of knowledge into a coherent whole, far more than can reasonably be assigned to the requirements-capture phase.
This means that plans created at the start of a project can be no more than partially informed guesswork. Given that the nature of complex systems makes them particularly sensitive to changes in small details, a plan for a complex system created with incomplete knowledge must perforce be a recipe for failure by degrees.
Does this really make up-front planning redundant? Is there a way to make the synthesis of the client and technical knowledge more efficient, perhaps by using advanced planning software? If this could be achieved then perhaps the planners could write effective up-front specifications that lead to accurate long-term costings and schedules.
Unfortunately there is another, more fundamental reason why detailed specifications must fail - regardless of their precision.
A specification is a description that attempts to outline features and functions in a natural language such as English. Yet software is actually written in the very precise syntax of a machine language. Engineers know that only computer code can truly express the details of a software vision; a natural language specification cannot be logically accurate enough. This means that natural language specifications must leave many implementation details open to interpretation, forcing the engineer to skilfully choose from a set of implied options. Yet complex systems are sensitive to precisely these sorts of technical details; different choices will lead to different systems and, as often as not, unfulfilled client expectations.
Therefore, even where a specification guesses correctly, the natural language descriptions will contain subtle choices and hidden contradictions. It is only when the fuzzy language of the specification is transformed into the precise reality of the code that these choices and contradictions become apparent.
This leads to a profound truth about the nature of specifications: Greater precision does not lead to greater control. Instead the greater the precision the more varied and subtle the choices and contradictions become.
Planning For Success
Understanding these fundamental flaws at the heart of traditional software delivery, many forward-looking managers and engineers are now moving towards a new project-control methodology. In contrast to up-front or predictive planning, this new methodology uses repeated bursts of short-term adaptive planning.
Agile Software Development throws out long-term planning and with it the traditional concept of a specification. Instead agile projects start with everybody discussing and sharing a simple vision of the end product. The vision is really no more than a mission statement that, at this early stage, explicitly removes the need for engineers to fully understand the business and for the client to fully understand the technology.
This means that an agile project can get started almost straight away, with the absolute minimum of requirements-capture. Instead of a long, costly and ultimately self-defeating planning phase, the engineers get to work building the first version (iteration) of what will become a rolling beta. Armed with a very short term plan covering just one or two weeks of work, the engineers build the first iteration and deliver it to the client for discussion and criticism. The rolling-beta is still only a sketch, an outline of the most important functions and how they might fit together. Mistakes and incorrect assumptions will have been made, indeed given the knowledge gap they cannot be avoided, but the mistakes are identified and quickly eliminated as the rolling-beta is regularly assessed by the client and engineers in close collaboration.
Once the first iteration is signed off, the process begins again: a new short-term plan is created and work begins on the second iteration. This iterative development continues, and as the knowledge gap closes the requirements, and hence the software, become ever more detailed and coherent.
Embracing Function Creep
As this hands-on process continues the client comes to properly understand the technical environment, what is expensive and what is possible, and as their knowledge grows so they begin to see new possibilities.
Clients changing their minds or adding new features during development is traditionally called function creep and remains the enemy of traditional planners. Yet to suppress this is to deny that clients can learn and modify their expectations as they see their software progressing. Rather than trying to ignore the client's input, the agile iterative process welcomes it as new and valuable knowledge.
Thus the client is encouraged to re-specify their product as it is being written. This is the ultimate guarantee that, in the end, the client will be satisfied. It is hard for a client to be surprised or disappointed with their software if they have played an active part in designing and deciding the goals at each iteration.
Equally, as the iterative process progresses the engineers will also come to a genuine understanding of the business. This allows the engineers to discuss the business processes with the client in a manner that allows a useful exchange of knowledge to take place. Questions to the client can be appropriately framed using the business terminology both the client and the engineers now share. Since the frequent iterations and short-term planning mean that any incorrect business assumptions are quickly discovered, such mistakes can be corrected with the minimum of effort.
Engineers too, once they come to a genuine understanding of the business, can start to usefully contribute to the re-specification of the rolling-beta. New ideas and inspirations, whatever their source, can be welcomed, discussed and possibly incorporated as the software adapts over time.
Job Satisfaction
In summary, an agile software system evolves under the twin constraints of the client's business vision and the engineering environment's technical limitations. As the client and engineers come to a mutual understanding so new ideas bubble up and are incorporated as bad old ideas are identified and discarded. Before starting each iteration everybody discusses, negotiates and quickly reaches an understanding of what is actually required to fulfil the next set of short-term goals.
Thus an agile system organically grows its natural complexity out of a fundamental simplicity. As a result there are fewer surprises, the project risks are minimised and the client is more likely to get software that works.
Tuesday, April 28, 2009
Domain Driven RIA: Managing Deep Object Graphs in Silverlight-3
- The Source Code from Assembla
- The screencast: A detailed walk-through [duration 28 minutes]
Note: If you have no experience with RIA Services then you may prefer to start with my previous demo, A Domain-Driven DB4O Silverlight-3 RIA, which has links to RIA Services documentation and Microsoft presentations to get you started.
Introduction
RIA Services is a Rich Internet Application (RIA) framework that promises to streamline n-tier Line of Business application development. Reading through the RIA documentation and listening to the RIA team's presentations I was struck by two things:
- How potentially useful this framework was.
- How skewed the material was in favour of a data-driven design approach.
Where is the Database?
- Silverlight 3
Handles the client-side application logic and user interface.
- RIA Services
Provides the client<->server interaction and the client-side domain.
- DB4O
A server-side datastore for domain entity de/serialization.
Here is a sneak preview of the software in action.

The Objectives
Using a combination of RIA Services and DB4O I want to test the following:
- Server - When I fetch an instance of the aggregate root class I expect its inner hierarchy to be eagerly fetched.
- Client - I want certain collections to be lazy-loaded and so remain unloaded until they are requested.
- I do not expect to write my own WCF service, nor do I want to write any Data Transfer Objects (DTOs).
- I want to databind my domain entities to Silverlight controls. I expect the controls to correctly display my eagerly fetched data as well as handle lazy-loaded data.
- Finally, I want to prove that new domain entities can be created on the client and efficiently serialized to the server-side datastore as a batched unit of work.
The Domain

I have a small hierarchical domain consisting of a single User aggregate root that bounds a one-to-many inner collection of Holding entities, each of which contains a further collection of Transaction entities.
- User.Holdings[n].Transactions[n]
public abstract class Entity
{
    [Key]
    public Guid Id { get; set; }
}

public partial class User : Entity, IAggregateRoot
{
    public string Name { get; set; }
    public string Password { get; set; }
    private List<Holding> _holdings = new List<Holding>();
    public List<Holding> Holdings
    {
        get { return this._holdings; }
        set { this._holdings = value; }
    }
}

public partial class Holding : Entity
{
    public Guid UserId { get; set; }
    public string Symbol { get; set; }
    private List<Transaction> _transactions = new List<Transaction>();
    public List<Transaction> Transactions
    {
        get { return this._transactions; }
        set { this._transactions = value; }
    }
}

public class Transaction : Entity
{
    public Guid HoldingId { get; set; }
    public TransactionType Type { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}
Domain Loading Strategy
- Server
Fetching a User should eagerly fetch all of its dependent Holdings. Each Holding should eagerly fetch all of its dependent Transactions.
- Client
Fetching a User should eagerly fetch all of its dependent Holdings. However, due to the potential for large numbers of Transactions, each Holding should not fetch any Transactions; instead the Transactions collection must be lazy-loaded.
The Datastore Setup
- DataFile.Name
Specifies the name of the DB4O datastore file held in the App_Data folder.
- DataFile.GenerateSampleData
Determines whether the datastore is reset with newly generated sample data whenever the Cassini web application is re-started (useful for testing).
<appSettings>
    <add key="DataFile.Name" value="DataStore.db4o"/>
    <add key="DataFile.GenerateSampleData" value="true"/>
</appSettings>

public static void ServerOpen()
{
    if (db4oServer != null)
    {
        return;
    }

    var filename = Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), ConfigFileName);

    var generateSampleData = bool.Parse(GenerateSampleData);
    if (generateSampleData && File.Exists(filename))
    {
        File.Delete(filename);
    }
    db4oServer = Db4oFactory.OpenServer(GetConfig(), filename, 0);
    if (generateSampleData)
    {
        SampleData.Generate();
    }
}
In order to create the server-side eager-fetch strategy outlined above, the DB4O datastore requires some configuration. The following GetConfig() method shows the Domain being scanned for types that implement IAggregateRoot, with DB4O instructed to automatically fetch, save and delete the inner dependencies of those types.
private static IConfiguration GetConfig()
{
    var config = Db4oFactory.NewConfiguration();
    config.UpdateDepth(2);
    var types = Assembly.GetExecutingAssembly().GetTypes();
    for (var i = 0; i < types.Length; i++)
    {
        var type = types[i];
        if (type.GetInterface(typeof (IAggregateRoot).Name) == null)
        {
            continue;
        }
        var objectClass = config.ObjectClass(type);
        objectClass.CascadeOnUpdate(true);
        objectClass.CascadeOnActivate(true);
        objectClass.CascadeOnDelete(true);
        objectClass.Indexed(true);
    }
    return config;
}
RIA Services
N-Tier applications are defined by the machine boundary that exists between the client and the server. Getting to grips with RIA Services begins by understanding how it tries to help you write applications that span that machine boundary.
RIA Services discovers your intentions via a combination of Convention and Metadata. For example, I intend to utilize my User class on the client and so I need to be able to fetch User instances from the data store. This implies that somewhere I must write a server-side service method to perform the User fetch.
RIA Services simply asks that I put that User fetch service method in a class that derives from the RIA DomainService class and that I follow some simple naming rules for the method signature. For more information on these conventions see .NET RIA Services Overview for Mix 2009 Preview.
This is what the conventional User fetch method looks like.
[EnableClientAccess]
public class DataStore : DomainService
{
    public IQueryable<User> GetUser(string name, string password)
    {
        using (var db = DataService.GetSession<User>())
        {
            return db.GetList(x => x.Name.Equals(name) && x.Password.Equals(password)).AsQueryable();
        }
    }

    // ... other code
}
The presence of this method stimulates RIA into generating a client-side version of my User class; however, it will only carry over simple properties such as User.Name and User.Password. So what happens if I want to make client-side use of a more complex property such as the User.Holdings collection?
This is achieved in two steps.
- The Holding class must define a UserId property. When a new Holding is instantiated this property must be set to the Id of its parent User.
- The User.Holdings collection must be decorated with the appropriate attributes.
[MetadataType(typeof (UserMetadata))]
public partial class User
{
    internal sealed class UserMetadata
    {
        [Include]
        [Association("User_Holdings", "Id", "UserId")]
        public List<Holding> Holdings { get; set; }
    }
}
You will note there are two attributes being used here. What are they doing?
- [Association]
This attribute is informing RIA that the Holdings collection can be reconstructed on the client by comparing the User.Id to the Holding.UserId. When these match, the Holding belongs to the collection.
- [Include]
This attribute is more mysterious. Perhaps, like me, you might assume it means "Include this property in the generated code". This is not correct. In fact it means "Automatically recreate this collection on the client"; in other words, the client-side collection will be eagerly fetched and made available without any further intervention on your part. This is the behaviour we want for the User.Holdings collection and it gives us our first clue about how we might set up the lazy loading for the Holding.Transactions collection.
Sharing Domain Functions
On the client I want to add a new Holding to my User.Holdings collection. Being a conscientious domain-driven coder I want to ensure that my code follows the Law of Demeter, which means I cannot reach into the Holdings collection directly like this:
User.Holdings.Add(...)
Instead I need to write a method to do this for me:
User.AddHolding(...)
This is easy to write for my server-side domain but if I intend the same features to be available on the client I must tell RIA services about those intentions and so allow it to generate the appropriate client-side code.
- Ensure the class with shared features is partial.
- Put the shared code in a partial class segment stored in a code file called MyClass.shared.cs.
- Decorate the shared methods with the [Shared] attribute.
public partial class User
{
    [Shared]
    public Holding AddHolding(Holding holding)
    {
        this.Holdings.Add(holding);
        return holding;
    }
}
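Later in this post the client code calls Holding.AddTransaction(...), whose definition is not listed. Assuming it simply mirrors AddHolding above, the shared method in a Holding.shared.cs file would look something like this:

// Assumed counterpart to AddHolding (not listed in the post), held in Holding.shared.cs.
public partial class Holding
{
    [Shared]
    public Transaction AddTransaction(Transaction transaction)
    {
        this.Transactions.Add(transaction);
        return transaction;
    }
}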
More Shared Code
When I create a new Holding I would prefer to use a factory method found in my DomainFactory class. This is a useful method so I want it to be available on the client as well as the server. As it happens the Factory class also contains a number of methods I would like to share, so instead of creating a partial class and sharing out individual methods as before I can just share the entire Factory class.
The following code is held in a file DomainFactory.shared.cs
[Shared]
public class DomainFactory
{
    [Shared]
    public static User User(string name, string password)
    {
        return new User
        {
            Id = Guid.NewGuid(),
            Name = name,
            Password = password
        };
    }

    [Shared]
    public static Holding Holding(User user, string symbol)
    {
        return new Holding
        {
            Id = Guid.NewGuid(),
            UserId = user.Id,
            Symbol = symbol
        };
    }

    [Shared]
    public static Transaction Transaction(Holding holding, TransactionType type, int quantity, decimal price)
    {
        return new Transaction
        {
            Id = Guid.NewGuid(),
            HoldingId = holding.Id,
            Type = type,
            Quantity = quantity,
            Price = price
        };
    }
}
Some Client-Side Code
Now that we have informed RIA about our intentions it is time to see some client-side code that shows the resulting RIA-generated client domain in use. This code is taken from the Silverlight application that accompanies the web application.
First of all, here is the code that does some setup and then the initial fetch for the User.
public HomePage()
{
    this.InitializeComponent();
    this._dataStore.Submitted += this.DataStoreSubmitted;
    this._dataStore.Loaded += this.DataStoreLoaded;
    this._dataStore.LoadUser("biofractal", "x", null, "LoadUser");
    this.Holdings.SelectionChanged += this.Holdings_SelectionChanged;
}

private void DataStoreLoaded(object sender, LoadedDataEventArgs e)
{
    var userState = e.UserState;
    if (userState == null)
    {
        return;
    }
    switch (userState.ToString())
    {
        case "LoadUser":
            var user = e.LoadedEntities.First() as User;
            this.User.DataContext = user;
            this.Holdings.ItemsSource = user.Holdings;
            break;
    }
}
The _dataStore variable references an instance of the DataStore class which is derived from the RIA client-side DomainContext class. This class is auto-generated by RIA Services. It is the primary RIA generated artefact.
The DataStore.LoadUser() call invokes the GetUser() service method on the server. This is an asynchronous service call, so the return must be caught in the handler for the DataStore.Loaded event. Here the Silverlight controls can be data-bound to their data sources and, because the User.Holdings collection was decorated with the [Include] attribute, RIA will ensure that it is automatically fetched. Using the Holdings collection as a binding data source will therefore display the correct list of Holdings for the current User without requiring an explicit fetch.
Lazy Loading the Transactions
In contrast to the User.Holdings collection, the Holding.Transactions collection is not automatically loaded when the User is initially fetched. Instead, the client-side domain behaviour requires that the Transactions collection is lazy-loaded on demand. How is this achieved using RIA Services?
As before, the metadata is used to inform RIA of our intentions. The [Association] attribute is again used to decorate the collection definition in a partial class segment held in a distinct code file (Holding.meta.cs). However, this time there is no [Include] attribute.
[MetadataType(typeof (HoldingMetadata))]
public partial class Holding
{
    internal sealed class HoldingMetadata
    {
        [Association("Holding_Transactions", "Id", "HoldingId")]
        public List<Transaction> Transactions { get; set; }
    }
}
As a result, RIA Services will generate the appropriate client-side code for the manipulation of Transactions; however, as there is no [Include] attribute, RIA will not automatically fetch the members of a Transactions collection when its parent Holding is instantiated.
To manually load a list of Transactions it is necessary to write a parameterized server-side service method to perform the datastore lookup.
[EnableClientAccess]
public class DataStore : DomainService
{
    // ... other code

    public IQueryable<Transaction> GetTransactionsForHolding(Guid holdingId)
    {
        using (var db = DataService.GetSession<Transaction>())
        {
            return db.GetList(x => x.HoldingId.Equals(holdingId)).AsQueryable();
        }
    }
}
The GetTransactionsForHolding(...) method is scanned by RIA Services causing it to generate a client-side equivalent method on the DataStore class. This can then be used in client-side code to fetch a set of Transactions belonging to a specified Holding. The code below shows this happening. The call is being made within the SelectionChanged event of the Accordion control.
private void Holdings_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (e.AddedItems.Count == 0)
    {
        return;
    }
    var holding = e.AddedItems[0] as Holding;
    if (holding == null || holding.Transactions.Count > 0)
    {
        return;
    }
    this._dataStore.LoadTransactionsForHolding(holding.Id);
}
When an Accordion item is opened by a user click it fires the SelectionChanged event above. The newly selected Holding is extracted from the Accordion and its Holding.Id is passed into the RIA generated LoadTransactionsForHolding(...) method. This automatically calls the GetTransactionsForHolding(...) service method which returns the appropriate list of Transactions for the specified Holding.Id.
Where do these Transactions go? How is it that simply calling this method automatically fills the correct Holding.Transactions collection and displays that collection in the data-bound Accordion?
The list of Transactions is loaded into a flat list of Transactions generated and maintained by RIA Services. When a Holding.Transactions collection is requested, RIA will dynamically create and return the correct list of Transactions as a consequence of the information specified in the [Association] attribute. This is why each Transaction needs a HoldingId and each Holding a UserId. Finally, because RIA-generated collections are ObservableCollections, changes automatically stimulate any data-bound containers to refresh themselves.
This means that a call to the LoadTransactionsForHolding() method will set off a chain of events that results in the lazy-loading of the selected list of Holding.Transactions and its subsequent display in the newly expanded Accordion item.
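As a rough illustration of this flat-list mechanism (my own toy sketch, not RIA's actual implementation), the [Association] metadata effectively drives a filter over a single shared entity list:

// Toy sketch only: mimics how [Association("Holding_Transactions", "Id", "HoldingId")]
// allows a framework to rebuild Holding.Transactions from one flat entity list.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

public class EntityCacheSketch
{
    // All loaded Transactions live in a single flat, observable list.
    private readonly ObservableCollection<Transaction> _transactions =
        new ObservableCollection<Transaction>();

    public void Load(IEnumerable<Transaction> loaded)
    {
        foreach (var transaction in loaded)
        {
            this._transactions.Add(transaction);
        }
    }

    // Resolve the association: a Transaction belongs to a Holding
    // when Transaction.HoldingId matches Holding.Id.
    public IEnumerable<Transaction> TransactionsFor(Holding holding)
    {
        return this._transactions.Where(x => x.HoldingId == holding.Id);
    }
}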
Creating and Saving Domain Instances
RIA Services makes creating and saving new domain instances particularly easy. Once again the process begins with a statement of intention. This time RIA must be informed of our intention to add new Holdings to the User.Holdings collection and new Transactions to the Holding.Transactions collection. This is achieved via convention, by adding service methods whose signatures follow the convention shown below.
[EnableClientAccess]
public class DataStore : DomainService
{
    // ... other code

    public void CreateHolding(Holding holding)
    {
        using (var db = DataService.GetSession<User>())
        {
            var user = db.GetFirst(x => x.Id.Equals(holding.UserId));
            user.AddHolding(holding);
            db.Save(user);
        }
    }

    public void CreateTransaction(Transaction transaction)
    {
        using (var db = DataService.GetSession<Holding>())
        {
            var holding = db.GetFirst(x => x.Id.Equals(transaction.HoldingId));
            holding.AddTransaction(transaction);
            db.Save(holding);
        }
    }
}
Adding these service methods tells RIA that we intend to add new Holdings and Transactions via client-side code. Without these methods any attempt to add an item will result in a runtime error. For example, if the CreateHolding() method above is commented out and a new Holding is added to the User.Holdings collection via client-side code, the following error is displayed.

Serializing New Entities
The following code shows how to add and save new domain items.
public partial class HomePage : Page
{
    private readonly DataStore _dataStore = new DataStore();
    private ProgressDialog _progressDialog;

    public HomePage()
    {
        this.InitializeComponent();
        this._dataStore.Submitted += this.DataStoreSubmitted;

        // ... other code
    }

    private void ShowProgressDialog(string message)
    {
        this._progressDialog = new ProgressDialog(message);
        this._progressDialog.Show();
    }

    private void DataStoreSubmitted(object sender, SubmittedChangesEventArgs e)
    {
        if (e.EntitiesInError.Count() != 0)
        {
            this._progressDialog.ShowError();
        }
        else
        {
            this._progressDialog.Close();
        }
    }

    private void SubmitChanges_Click(object sender, RoutedEventArgs e)
    {
        this.ShowProgressDialog("Saving Changes...");
        this._dataStore.SubmitChanges();
    }

    private void NewHolding_Click(object sender, RoutedEventArgs e)
    {
        var user = (User) this.User.DataContext;
        if (user == null)
        {
            return;
        }
        user.Holdings.Add(DomainFactory.Holding(user, NewHoldingSymbol.Text));
        this.Holdings.SelectedItem = this.Holdings.Items[this.Holdings.Items.Count - 1];
    }

    private void Buy_Click(object sender, RoutedEventArgs e)
    {
        var holding = ((Button) e.OriginalSource).DataContext as Holding;
        if (holding == null)
        {
            return;
        }
        holding.AddTransaction(DomainFactory.Transaction(holding, TransactionType.Buy, 42, 0.42m));
    }

    private void Sell_Click(object sender, RoutedEventArgs e)
    {
        var holding = ((Button) e.OriginalSource).DataContext as Holding;
        if (holding == null)
        {
            return;
        }
        holding.AddTransaction(DomainFactory.Transaction(holding, TransactionType.Sell, 42, 0.42m));
    }

    // ... other code
}
This code shows how to add new domain items to their correct location in the domain hierarchy using the shared DomainFactory class discussed earlier. These changes are then asynchronously submitted as a batched unit of work to the server, displaying a progress dialog to keep the user informed. The return is trapped so that the progress dialog can be dismissed and any errors displayed.
The Verdict
How did RIA Services and DB4O manage?
- Server - When I fetch an instance of the aggregate root class I expect its inner hierarchy to be eagerly fetched.
The server-side domain de/serialisation behaviour was handled by DB4O. Being an object database, it is simple to create this behaviour using a few lines of initialisation code.
- Client - I want certain collections to be lazy-loaded and so remain unloaded until they are requested.
RIA Services provides a set of attributes that allow both eager and lazy loading to be specified as client-side behaviour and wired up with minimal code.
- I do not expect to write my own WCF service, nor do I want to write any Data Transfer Objects (DTOs).
RIA Services replaces the explicit WCF layer with an implicit data transfer layer via its DomainService class and the data manipulation methods you write to extend it.
- I want to databind my domain entities to Silverlight controls. I expect the controls to correctly display my eagerly fetched data as well as handle lazy-loaded data.
Because RIA Services generates its own observable collections, the Silverlight databinding flows smoothly with little intervention. The lazy loading of new data stimulates the bound Silverlight controls to refresh and so display changes as they occur.
- Finally, I want to prove that new domain entities can be created on the client and efficiently serialized to the server-side datastore as a batched unit of work.
RIA Services implements a Unit of Work pattern that allows only those items that have changed to be batched and serialized to the server-side datastore when required.
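To make that last point concrete, here is a minimal sketch of the Unit of Work idea (illustrative only; RIA's internal change tracking is richer than this): pending changes accumulate and are serialized together in one batch.

// Minimal Unit of Work sketch (not RIA's code): accumulate changed
// entities, then send them to the server as a single batch.
using System;
using System.Collections.Generic;

public class UnitOfWorkSketch
{
    private readonly List<Entity> _pending = new List<Entity>();

    // Called whenever client code creates or modifies an entity.
    public void Register(Entity entity)
    {
        if (!this._pending.Contains(entity))
        {
            this._pending.Add(entity);
        }
    }

    // One round-trip serializes every pending change together.
    public void SubmitChanges(Action<IList<Entity>> sendBatchToServer)
    {
        sendBatchToServer(this._pending);
        this._pending.Clear();
    }
}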
I think that RIA Services plus DB4O performed well in handling the demands of my simple Line of Business Rich Internet Application. I would certainly recommend you try it out for yourself to see what you think. Good Luck.
Tuesday, March 31, 2009
A Domain-Driven, DB4O Silverlight RIA
- Building Amazing Business Centric Applications with Microsoft Silverlight 3
"Come hear how simple it is to build end-to-end data-intensive Silverlight applications with the new set of features in Silverlight 3 and .NET RIA Services. Explore Silverlight improvements that help to enable rapid development for business applications and to make your development process more productive."
- Building Data-Driven Applications in ASP.NET and Silverlight
"Learn how Microsoft is simplifying the traditional n-tier application pattern by bringing together ASP.NET and Silverlight. Learn about patterns for working with data, implementing reusable and independently testable application logic, and application services that readily scale with growing requirements."
- You can define and use your domain objects on the server and then effectively re-use domain logic on the client.
- You can transmit domain objects between the client and server without polluting your domain or requiring an additional DTO transformation layer.


Tuesday, March 17, 2009
Convert dotnet assemblies to Silverlight
- Download Installer
- Download Source
- SVN http://subversion.assembla.com/svn/biofractal/trunk/Blog/Silverlighter

The knowledge I needed to write The Silverlighter was gleaned primarily from the excellent article by David Betz called Reusing .NET Assemblies in Silverlight. This article clearly explains how the dotnet->Silverlight conversion process works and why it is not a hack. It is a fascinating insight into the similarities between dotnet and Silverlight assemblies and is well worth a read.
But before you get carried away (I know I did) and imagine that you are just a click away from re-using your favourite 3rd-party assemblies, a word or two of caution. The following features are not available in Silverlight:
- ArrayList
- Hashtable
- SortedList
- NameValueCollection
- BinarySerializer and the [Serializable] attribute
- Threading.SetData
- TCP Socket related features
Anyway, The Silverlighter app does some cool IL trickery so if you have dotnet dlls you know are fine and just want to use them in Silverlight without any nonsense from Visual Studio then it might be just the thing you need.
Additional Notes
There are a few options available to help tweak the functionality.
Finally, the path to ILdasm.exe is exposed just in case you have it at a different location on your system. If you don't have ILdasm.exe anywhere you can get it by installing the .NET Framework 2.0 Software Development Kit (SDK).
Monday, March 16, 2009
Silverlight and Object Databases
Friday, February 27, 2009
Generic Lists of Anonymous Type
[cross-posted to StormId blog]
Anonymous types can be very useful when you need a few transient classes for use in the middle of a process.
Of course you could just write a class in the usual way but this can quickly clutter up your domain with class definitions that have little meaning beyond the scope of their transient use as part of another process.
For example, I often use anonymous types when I am generating reports from my domain. The snippet below shows me using an anonymous type to store data values that I have collected from my domain.
for (var k = 0; k < optionCount; k++)
{
    var option = options[k];
    var optionTotal = results[option.Id];
    var percent = (questionTotal > 0) ? ((optionTotal / (float) questionTotal) * 100) : 0;
    reportList.Add(new
    {
        Diagnostic = diagnostic.Name,
        Question = question.Text,
        Option = option.Text,
        Count = optionTotal,
        Percent = percent
    });
}
Here I am generating a report on the use of diagnostics (a type of survey). It shows how often each option of each question in each diagnostic has been selected by a user, both count and percent.
You can see that the new anonymous type instance is being added to a list called reportList. This list is strongly typed, as can be seen in this next bit of code where I order the list using LINQ.
reportList = reportList
    .OrderBy(x => x.Diagnostic)
    .ThenBy(x => x.Question)
    .ThenBy(x => x.Percent)
    .ToList();
This is where the problem comes in: how is it possible to create a strongly typed (generic) list for an anonymous type? The answer is to use a generics trick, as the following code snippet shows.
public static List<T> MakeList<T>(T example)
{
    return new List<T>();
}
The MakeList method takes in a parameter of type T and returns a generic list of the same type. Since this method will accept any type, we can pass in an anonymous type instance with no problems. The next snippet shows this happening.
var exampleReportItem = new
{
    Diagnostic = string.Empty,
    Question = string.Empty,
    Option = string.Empty,
    Count = 0,
    Percent = 0f
};
var reportList = MakeList(exampleReportItem);
So here is the context for all these snippets. The following code gathers my report data and stores it in a strongly typed list containing a transient anonymous type.
var exampleReportItem = new
{
    Diagnostic = string.Empty,
    Question = string.Empty,
    Option = string.Empty,
    Count = 0,
    Percent = 0f
};
var reportList = MakeList(exampleReportItem);
for (var i = 0; i < count; i++)
{
    var diagnostic = diagnostics[i];
    var questionCount = diagnostic.Questions.Count;
    for (var j = 0; j < questionCount; j++)
    {
        var question = diagnostic.Questions[j];
        var questionTotal = results[question.Id];
        var options = question.Options;
        var optionCount = options.Count;
        for (var k = 0; k < optionCount; k++)
        {
            var option = options[k];
            var optionTotal = results[option.Id];
            var percent = (questionTotal > 0) ? ((optionTotal / (float) questionTotal) * 100) : 0;
            reportList.Add(new
            {
                Diagnostic = diagnostic.Name,
                Question = question.Text,
                Option = option.Text,
                Count = optionTotal,
                Percent = percent
            });
        }
    }
}
Perhaps you are wondering how the type of the anonymous exampleReportItem is the same as the type of the anonymous object I add to the reportList?
This works because of the way type identities are assigned to anonymous types. If two anonymous types share the same public signature, that is, if their property names and types are the same and appear in the same order (you can't have methods on anonymous types), then the compiler treats them as the same type.
This is how the MakeList method can do its job. The exampleReportItem instance sent to the MakeList method has exactly the same properties as the anonymous type added to the generic reportList. Because they have the same signature they are recognised as the same anonymous type and all is well.
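A quick way to convince yourself of this rule is the following check (my own example, added for illustration):

// Two separately declared anonymous instances share one compiler-generated
// type because their property names, types and order all match.
var a = new { Name = "first", Count = 1 };
var b = new { Name = "second", Count = 2 };
Console.WriteLine(a.GetType() == b.GetType()); // prints True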