
Building a Pub/Sub Message Bus with WCF and MSMQ


In recent years there has been a lot of talk about event-driven architecture as a technique to build more scalable and maintainable systems. I've found this to be a very interesting pattern that makes sense in a number of scenarios, but it's never been very well supported on the Microsoft platform, and many who have attempted it have found it painful. A number of years ago I worked on a system using a pub/sub message bus built on .NET Remoting, MSMQ and HTTP, and it wasn't at all pretty. Everything was difficult and required custom code, from hosting the queue listeners to encoding and decoding messages, dealing with reliability and managing subscriptions.

So it was with some apprehension that I made another attempt to adopt this pattern in my current project. However a lot has changed in the last few years, and I'm pleased to say that my experience was many, many times better than the one I'd been through all those years ago. Before I get on to the solution, I want to make clear that I'm describing just one approach to implementing this pattern, and there are other approaches that may be more appropriate for applications with different requirements. Specifically the application I'm working on is a largely green-field .NET application, so interoperability across platforms was not a consideration (lucky me!).

The solution we ended up with was built with .NET Framework 3.0 and makes extensive use of Windows Communication Foundation (WCF), Microsoft Message Queuing (MSMQ) 4.0 and Internet Information Services (IIS) 7.0, all hosted on Windows Server 2008. Here's what we did.

Defining the Service Contract

The first step was to define the contracts which the publisher would use to notify any subscribers that an interesting event occurred. In our case we had a number of different types of events, but in order to reuse as much code as possible we used a generic service contract:

[ServiceContract]
public interface IEventNotification<TLog>
{
    [OperationContract(IsOneWay = true)]
    void OnEventOccurred(TLog value);
}    

Now for any given event type, we can simply define a data contract to carry the payload (not shown here), and provide a derived service contract type as shown below:

[ServiceContract]
public interface IAccountEventNotification : IEventNotification<AccountEventLog>
{
}
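
The data contract itself isn't shown in this post, but a minimal sketch might look something like the following (the members here are illustrative assumptions, not the real payload):

[DataContract]
public class AccountEventLog
{
    // Hypothetical members - substitute whatever your event needs to carry.
    [DataMember]
    public Guid AccountId { get; set; }

    [DataMember]
    public string EventDescription { get; set; }

    [DataMember]
    public DateTime OccurredAt { get; set; }
}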

Implementing the Publisher

One of the key aspects of a publisher/subscriber pattern is that there should be ultra-loose coupling between the publisher and the subscriber. Critically, the publisher should not know anything about the subscribers, including how many there are or where they live. Originally we tried using MSMQ's PGM multicasting feature to accomplish this - essentially this lets you define a single queue address that will transparently route the same message to multiple destination queues. While this feature does work, it had a couple of limitations that made it inappropriate in our scenario. First, the only way to use multicast queue addressing with WCF is to use the MsmqIntegrationBinding, which is less flexible than the NetMsmqBinding. Second, multicast addressing only works with non-transactional queues, which would have had an unacceptable impact on the reliability of our system.

So we abandoned this option and decided to implement our own lightweight multicasting directly within the publisher code. While technically this breaches the golden rule of the publisher knowing nothing about the subscribers, the information about the subscribers is completely contained in a configuration file. This means we can add, change or remove subscribers before or after deployment with no impact on the application code.

We had already built a component we called the ServiceFactory (no relation to the p&p Web Service Software Factory) which is a simple abstraction for creating local or WCF instances via a configuration lookup. This component isn't publicly available, but you could easily substitute your favourite Dependency Injection framework and achieve similar results. In our case, the web.config for one of our web services may have its dependent services defined as follows:

<serviceFactory>
    <services>
        <add name="EmailUtility" contract="MyProject.IEmailUtility, MyProject" type="MyProject.EmailUtility, MyProject" mode="SameAppDomain" instanceMode="Singleton" enablePolicyInjection="false" />

       
<add name="SubsctiberXAccountEventNotification" contract="MyProject.Contracts.IAccountEventNotification, MyProject.Contracts" mode="Wcf" endpoint="SubsctiberXAccountEventNotification" />
        <add name="SubsctiberYAccountEventNotification" contract="MyProject.Contracts.IAccountEventNotification, MyProject.Contracts" mode="Wcf" endpoint="SubsctiberYAccountEventNotification" />
    </services>
</serviceFactory>

Previously we had used the ServiceFactory for creating individual instances, with code like this:

IEmailUtility email = ServiceFactory.GetService<IEmailUtility>();


As you can see from the configuration above, this would result in a singleton instance of a local class called EmailUtility being returned, but different configuration could result in a WCF proxy being returned instead. It was a simple matter to reuse this same ServiceFactory component to return all configured services matching a specific contract. We used this capability to build the NotificationPublisher class as follows:

public class NotificationPublisher<TInterface, TLog>
    where TInterface : class, IEventNotification<TLog>                    
{
    public static void OnEventOccurred(TLog value)
    {
        List<TInterface> subscribers = ServiceFactory.GetAllServices<TInterface>();

        foreach (TInterface subscriber in subscribers)
        {
            subscriber.OnEventOccurred(value);
        }
    }
}

With this class in place, all that is required to publish an event is to call NotificationPublisher.OnEventOccurred with the appropriate generic parameters. Assuming we are using the IAccountEventNotification interface and the above configuration, this would result in the event being fired over WCF to the services defined by the SubscriberXAccountEventNotification and SubscriberYAccountEventNotification endpoints.
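
In code, publishing an event is then a one-liner (assuming an AccountEventLog instance named accountEventLog):

// Fan the event out to every subscriber configured for this contract.
NotificationPublisher<IAccountEventNotification, AccountEventLog>.OnEventOccurred(accountEventLog);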

Configuring the Publisher

The final missing piece on the publisher side is the WCF configuration. As mentioned previously, we chose to use MSMQ to provide reliable, asynchronous message delivery. Programming with MSMQ used to be quite a painful experience, but with WCF the programming model is no different than for any other transport - all you need to do is configure the right bindings. In our case we chose the NetMsmqBinding, which provides full access to WCF functionality for core MSMQ features (as opposed to the MsmqIntegrationBinding, which provides richer MSMQ support at the cost of more limited WCF functionality).

Here's an example of the client-side WCF configuration.

<system.serviceModel>

    <bindings>
        <netMsmqBinding>
            <binding name="TransactionalMsmqBinding" exactlyOnce="true" deadLetterQueue="System" />
        </netMsmqBinding>
    </bindings>

    <client>
        <endpoint name="SubscriberXAccountEventNotification"
            address="net.msmq://localhost/private/SubscriberX/accounteventnotification.svc"
            binding="netMsmqBinding" bindingConfiguration="TransactionalMsmqBinding"
            contract="MyProject.Contracts.IAccountEventNotification" />

        <endpoint name="SubscriberYAccountEventNotification"
            address="net.msmq://localhost/private/SubscriberY/accounteventnotification.svc"
            binding="netMsmqBinding" bindingConfiguration="TransactionalMsmqBinding"
            contract="MyProject.Contracts.IAccountEventNotification" />
      </client>
</system.serviceModel>

There's nothing too fancy in this - the key thing to note is the exactlyOnce="true" setting which is required for transactional queues. The other thing that may stand out is the unusual net.msmq:// addressing syntax, which is used by the NetMsmqBinding in lieu of the more familiar FormatName addresses. The queues themselves are private queues called "SubscriberX/accounteventnotification.svc" and "SubscriberY/accounteventnotification.svc". Why did I give the queues such silly names? Read on...

Hosting and Configuring the Subscribers

In the past, if building MSMQ clients was annoying, building MSMQ services was a nightmare. You had to build your own host (typically in an NT Service) or make use of the somewhat inflexible MSMQ Triggers functionality. You then had to do a whole lot of work to ensure your service didn't lose messages, and that it wasn't killed by "poison messages", which are messages that will constantly cause your service to fail due to a malformed payload or problems with the service.

Just like on the client side, WCF takes a lot of the hard work away on the service side - but it doesn't directly help with hosting the service and listening to the queue. Luckily this problem is solved beautifully by IIS 7.0 and Windows Activation Services (WAS), which is available on Windows Vista and Windows Server 2008. In a nutshell this enables IIS to listen to MSMQ, TCP and Named Pipes and activate your WCF service, just as IIS 6.0 does for HTTP. If this all sounds great, it is - but be warned that it can be a bit fiddly to set up.

First, you need to set up an "application" in IIS that points to your service, including the .svc file and the web.config file. This is no different to what you'd normally do for an IIS-hosted service exposed over HTTP.

Next, you need to create the message queue - you can do this with the Computer Management console in Vista or Server Manager in Windows Server 2008. The name of the queue must match the application name plus the .svc file name, for example "SubscriberX/accounteventnotification.svc" (this fact is unfortunately not well documented). Make sure you mark the queue as transactional when you create it, as you can't change this later. You'll also need to set permissions on the queue so that the account running the "Net.Msmq Listener" service (NETWORK SERVICE by default) can receive messages, and whatever account is running the client/publisher can send messages (NETWORK SERVICE by default, too).

Finally you'll need to configure IIS and WAS to enable the Net.Msmq listener for the web site, and for the specific application (make sure you've installed the Windows components for WAS and non-HTTP activation before you proceed!). The easiest way to do this is using appcmd.exe which lives in the \System32\InetSrv folder:

  • appcmd set site "Default Web Site" -+bindings.[protocol='net.msmq',bindingInformation='localhost']
  • appcmd set app "Default Web Site/SubscriberX" /enabledProtocols:net.msmq

With the IIS configuration in place, it's time to make sure the service's WCF configuration is correct. As you might expect, this looks pretty similar to the client configuration you saw earlier.

<system.serviceModel>
    <bindings>
        <netMsmqBinding>
            <binding name="TransactionalMsmqBinding" exactlyOnce="true" deadLetterQueue="System" receiveErrorHandling="Move"/>
        </netMsmqBinding>
    </bindings>

    <services>
        <service name="SubscriberX.NotificationService">
            <endpoint contract="MyProject.Contracts.IAccountEventNotification"
                bindingConfiguration="TransactionalMsmqBinding"
                binding="netMsmqBinding"
                address="net.msmq://localhost/private/SubscriberX/accounteventnotification.svc"/>
        </service>
    </services>  
</system.serviceModel>

One thing worth calling out here is the receiveErrorHandling="Move". This innocent-looking attribute probably saved us a month of work, as it tells WCF to move any messages that have repeatedly failed to be processed onto an MSMQ subqueue called "poison" and continue processing the next message, rather than faulting the service. Note that subqueues, as well as the long-awaited ability to transactionally read from a remote queue, are some more new features in MSMQ 4.0 in Vista and Windows Server 2008.

Implementing the Subscribers

The only thing remaining is to implement the subscriber. Most of the code will of course be specific to the business requirements, so I'll only spend time describing the implementation of the service interface. In our system it is very important to make sure that no messages are accidentally lost. Since MSMQ can provide guaranteed delivery it may not be obvious how a message could just vanish. In fact most messages are lost after MSMQ has successfully delivered the message to the service. This can happen if the service receives the message and then fails before the message is successfully processed (possibly due to a bug or configuration problem). The best way of avoiding this problem is to use a transaction that spans receiving the message from the queue and any processing business logic. If anything fails, the transaction will be rolled back - including receiving the message from the queue! If the problem was a temporary glitch, the message may be successfully processed again. If the problem is permanent or caused by a malformed message, the message will be considered to be "poison" after several retries, and as mentioned earlier will be moved to a special "poison" subqueue where it can be dealt with manually by an administrator.

Making all of this work is surprisingly simple, since all of these capabilities are supported by MSMQ (provided you're using transactional queues) and WCF. All that you need to do is decorate your service implementation methods with a couple of attributes that state that your business logic should enlist in the transaction started when the message was pulled off the queue.

public class NotificationService : IAccountEventNotification
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void OnEventOccurred(AccountEventLog value)
    {
        // Business-specific logic
    }
}

Conclusion

While this has been one of the longer blog posts I've done in a while, the solution is extremely powerful and surprisingly simple to implement thanks to some great advances in WCF, MSMQ and IIS. In the past, many people (including myself) have spent months trying to implement pub/sub patterns, often with less-than-spectacular results. But using these new technologies eliminates huge amounts of custom code - in fact the few code and configuration snippets in this post are really all that it takes to make this work.


Windows Media Center - Is it still just for geeks?


I was impressed with Windows Media Center from the very first time I tried it. However when we got our first Media Center PC a number of years ago, it was quite a trying experience getting it all set up. Getting the PC to talk to the cable set top box through the "IR Blaster", getting a decent quality picture from the US-bought PC into the Australian-bought TV via a PAL SCART connector, arguing with the zoom settings to get a 16:9 picture with no letterboxing, getting the sound card to output Dolby Digital over the SPDIF optical connector - all of these problems were ultimately solved, but all were painful enough that it wasn't at all surprising that the only people I knew with Media Centers were geeks.

Still, when it worked, it worked very well - the interface was beautifully simple to use, it had the right set of features, and the ongoing subscription cost of zero was pretty hard to beat. So we stuck with it, upgrading the original box to Vista, successfully getting it working in Australia (despite the TV networks trying their hardest to prevent the availability of an Electronic Program Guide), and eventually replacing it with a newer snazzier box. Over time the experience has gotten steadily better, and lately it's required very little tender loving care to keep it running. It's still not perfect, but one telling fact is that my parents now have one up and running. Granted my parents are geekier than most, but their tolerance for misbehaving technology is much lower than mine.

We recently moved to a new house with two living areas. The Media Center PC is set up in the lounge room, but we've been spending most of our time in the family room (mainly to keep our new kittens away from the new leather couches in the lounge room!). The problem with this arrangement is that in the family room we had to watch shows at the time they were actually scheduled - and after 4 years of not knowing when anything was scheduled this was pretty hard to take. So last week we decided to try out a Media Center Extender - we went with the Linksys DMA2100. If you're unfamiliar with Media Center Extenders, these are essentially small set top boxes that communicate with an existing Media Center PC over a wired or wireless network, giving you access to the same interface and content from a TV in another room. I'd read good things about the Linksys device, but many people warned that you really need a wired network or 802.11n to get decent quality video streaming. We're running 802.11g, but I was prepared to undertake a cabling job if necessary.

But here's the amazing part of the story - pretty well everything about the entire experience was flawless. I would have got everything up and running in about 10 minutes, except unfortunately it required that I upgrade the PC to Vista SP1. That said, it told me exactly what was wrong and what I had to do, and once the upgrade was done the boxes introduced each other and got along fine. The video streaming over 802.11g was a little choppy at first, but some slight rearranging of the router sorted that out. So now we have a Media Center in each room, thanks to a ~$240 box that so far has done everything it promised with practically no fuss. At this rate, we can only hope that Windows Media Center could be an option for people without even the slightest geekiest tendencies before too long.

MSMQ, WCF and IIS: Getting them to play nice (Part 1)


A few weeks ago I posted an article describing how my current team built a publish/subscribe message bus using WCF and MSMQ. At that time we had only deployed the application in a single-server test environment. While there were a few tricks to getting that working, once we tried deploying to a multiple server environment we found a whole lot of new challenges. In the end they were all quite solvable, but it seems that not a lot of people have attempted to use the MSMQ bindings for WCF, hosted in IIS 7 WAS, so there isn't a lot of help out on the web. The best source of information I'd found is an MSDN article, Troubleshooting Queued Messaging (which unfortunately I didn't find until after we'd already solved most of our problems). But even that article is a bit lacking, so I thought I'd share some of the things we learned about getting this all working.

The Scenario

The goal here is to set up reliable, asynchronous communication between a client application and a service, which may be on different machines. We will be using MSMQ as a transport mechanism, as it supports reliable queued communication. MSMQ will be deployed on a third server (typically clustered to eliminate a single point of failure). The client application will use WCF's NetMsmqBinding to send messages to a private queue on the MSMQ server. The service will be hosted in IIS 7, and will use Windows Activation Services (WAS) to listen for new messages on the message queue. This listening is done by a Windows Service called SMSvcHost.exe. When a message arrives, it activates the service within an IIS worker process, and the service will process the message. The overall architecture is shown in the following diagram.

  [Diagram: the client application uses the NetMsmqBinding to send messages to a private queue on the MSMQ server; the Net.Msmq listener (SMSvcHost.exe) on the service server watches the queue and activates the WCF service in an IIS worker process.]

The Basics

Let's start simple by setting everything up on a single server, with no security or transactions to complicate things. This first instalment is a bit of a recap of my earlier post, but I'm including it again here as it will be an important foundation for the more complex steps shown in the next instalments.

Install the necessary Windows components

Before writing any code, make sure you're running Windows Vista or Windows Server 2008, and that you've installed the following components (I've taken the names from Vista's "Windows Features" dialog; Windows Server 2008 has slightly different options but all should be there somewhere).

  1. Microsoft Message Queue (MSMQ) Server > MSMQ Server Core and MSMQ Active Directory Domain Services Integration (needed for Transport Security in Part 2)
  2. Microsoft .NET Framework 3.0 > Windows Communication Foundation Non-HTTP Activation
  3. Internet Information Services > World Wide Web Services
  4. Windows Process Activation Service
  5. Distributed Transaction Coordinator (DTC) - Always installed with Windows Vista, may need to be added for Windows Server 2008

Of course, you'll also want Visual Studio 2005 or 2008 installed so you can write the necessary code.

Define the contract

As with all WCF applications, a great starting point is to define the service contract. The only real trick when building MSMQ services is to ensure that every operation contract is defined with IsOneWay=true. In my example we'll have just one very simple operation, but you could easily add more or use more complicated data contracts.

    [ServiceContract]
    public interface IMsmqContract
    {
        [OperationContract(IsOneWay = true)]
        void SendMessage(string message);
    }

I won't bother with showing any sample client code to call the service, as this is no different from any other WCF client.
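
That said, for completeness, here's a minimal sketch of what a client might look like, using a ChannelFactory against the named endpoint from the client configuration shown below:

// Create a channel from the "MsmqService" endpoint in the config file and send a message.
ChannelFactory<IMsmqContract> factory = new ChannelFactory<IMsmqContract>("MsmqService");
IMsmqContract proxy = factory.CreateChannel();
proxy.SendMessage("Hello over MSMQ!");
((IClientChannel)proxy).Close();
factory.Close();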

Create the Message Queue

Message Queues don't just create themselves, so if you want to build a MSMQ-based application, you'll need to create yourself some queues. The easiest way to do this is from the Computer Management console in Windows Vista, or Server Manager in Windows Server 2008.

In general, message queues can be called whatever you want. However when you are hosting your MSMQ-enabled service in IIS 7 WAS, the queue name must match the URI of your service's .svc file. In this example we'll be hosting the service in an application called MsmqService with an .svc file called MsmqService.svc, so the queue must be called MsmqService/MsmqService.svc. Queues used for WCF services should always be private. While the term "private queue" could imply that the queue cannot be accessed from external machines, this isn't actually true - the only thing that makes a public queue public is that it is published in Active Directory. Since all of our queue paths will be coded into WCF config files, there really isn't any value in publishing the queues to AD.

In this first stage, we won't be using a transactional queue, so make sure you don't click the Transactional checkbox. Transactional queues can add some complexity, but they also provide significantly more reliability so we'll be moving to transactional queues later in the article.

At this time, it's a good idea to configure the security for the queue. You want to make sure that the account running the client is allowed to send messages to the queue, and the account running the service is able to receive messages from the queue. Since the service will be hosted in IIS, by default it will be using the NETWORK SERVICE account.
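
If you'd rather script the queue creation and permissions than click through the console, a System.Messaging sketch along these lines should do the trick (NETWORK SERVICE is just the default account assumed here; substitute your own client and service accounts if they differ):

// Create a non-transactional private queue whose name matches the service URI,
// then grant send rights to the client account and receive rights to the service account.
string path = @".\private$\MsmqService/MsmqService.svc";
if (!MessageQueue.Exists(path))
{
    MessageQueue queue = MessageQueue.Create(path, false); // false = non-transactional
    queue.SetPermissions("NETWORK SERVICE", MessageQueueAccessRights.WriteMessage, AccessControlEntryType.Allow);
    queue.SetPermissions("NETWORK SERVICE", MessageQueueAccessRights.ReceiveMessage, AccessControlEntryType.Allow);
}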

Configure the Client

Now we know the name of the message queue, we can configure the client to send messages to the correct place. First you need to configure a suitable binding. We'll be using the NetMsmqBinding, which is normally the best option when both the client and service are using WCF. For now we will not be using any security or transactions, so we'll need to specify that in the binding (the exactlyOnce="false" attribute means it's non-transactional).

The endpoint is defined in the same way as any WCF client endpoint. One thing to look out for is the address syntax for MSMQ services. Rather than using the format name syntax that you may have used in other MSMQ applications, WCF has a new (and simpler) syntax. The key differences are that all slashes go forwards, and you use "private" instead of "private$". So the address for our locally hosted queue will be net.msmq://localhost/private/MsmqService/MsmqService.svc. Here's the complete config file for the client:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <bindings>
      <netMsmqBinding>
        <binding name="MsmqBindingNonTransactionalNoSecurity" exactlyOnce="false">
          <security mode="None"/>
        </binding>
      </netMsmqBinding>
    </bindings>
    <client>
      <endpoint name="MsmqService"
                address="net.msmq://localhost/private/MsmqService/MsmqService.svc"
                binding="netMsmqBinding" bindingConfiguration="MsmqBindingNonTransactionalNoSecurity"
                contract="MsmqContract.IMsmqContract" />
    </client>
  </system.serviceModel>
</configuration>

Configure the Service

To create the service, start by setting up a new ASP.NET application, hosted in IIS - just as you would for a normal HTTP-based WCF service. This includes creating a .svc file for the service endpoint, and of course a class that implements the service contract. Again, I won't bother showing this code as it's not specific to MSMQ.
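
If you haven't built one of these before, the .svc file itself is just a one-line directive pointing at the implementation class (the class name here matches the configuration below):

<%@ ServiceHost Service="MsmqService.MsmqService" %>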

You'll also need to modify the service's web.config file to include the configuration details for your WCF service. Not surprisingly, this will look very similar to what we configured on the client.

  <system.serviceModel>
    <bindings>
      <netMsmqBinding>
        <binding name="MsmqBindingNonTransactionalNoSecurity" exactlyOnce="false">
          <security mode="None"/>
        </binding>
      </netMsmqBinding>
    </bindings>
    <services>
      <service name="MsmqService.MsmqService">
        <endpoint address="net.msmq://localhost/private/MsmqService/MsmqService.svc"
                binding="netMsmqBinding" bindingConfiguration="MsmqBindingNonTransactionalNoSecurity"
                contract="MsmqContract.IMsmqContract" />
      </service>
    </services>
  </system.serviceModel>

Enable the MSMQ WAS Listener

The last step is to configure IIS 7 to use WAS to listen to the message queue and activate your service when new messages arrive. There are two parts to this: first you need to activate the net.msmq listener for the entire web site, and second you need to enable the protocols needed for your specific application. You can perform both of these steps using either the appcmd.exe tool (located under C:\Windows\System32\Inetsrv) or by editing the C:\Windows\System32\Inetsrv\config\ApplicationHost.config file in a text editor. Let's go with the former, since it's a bit less dangerous.

To enable the net.msmq listener for the entire web site, use the following command. Note that the bindingInformation='localhost' bit is what tells the listener which machine is hosting the queues that it should listen to. This will be important when we want to start listening to remote queues.

appcmd set site "Default Web Site" -+bindings.[protocol='net.msmq',bindingInformation='localhost']

To enable the net.msmq protocol for our specific application, use the following command. Note that you can configure multiple protocols for a single application, should you want it to be activated in more than one way (for example, to allow either MSMQ or HTTP you could say /enabledProtocols:net.msmq,http).

appcmd set app "Default Web Site/MsmqService" /enabledProtocols:net.msmq

Troubleshooting Steps

If all has gone to plan, you should be able to successfully send messages from the client to the service, and have the service process them correctly. However if you're anything like me, this probably won't work first time. Troubleshooting MSMQ issues can be somewhat of an art form, but I've listed a few techniques that I've found to be helpful to resolve issues.

  • Check queue permissions. Make sure that you've correctly set the ACLs on your message queue so that the user accounts running the client and service are able to send and receive respectively.
  • Check the dead letter queues. In many circumstances, MSMQ will send a message to the Dead Letter Queue (or Transactional Dead Letter Queue) if it couldn't successfully be delivered for any reason. Often the details on the dead letter message will explain why it ended up there (for example, you tried sending a non-transactional message to a transactional queue). If you're using MSMQ across multiple machines, make sure you check the Dead Letter Queues on all servers, as messages could end up in different places depending on what caused the delivery failure.
  • Enable Journaling. Sometimes it can be hard to tell whether a message never arrived at all, or if it arrived and subsequently got "lost". If you enable the "journal" feature on a message queue, you'll see a record of every message that passed through. However use this feature sparingly, as you can very easily end up with a huge number of journal messages after a few hours of testing.
  • Shut down the service listener. When troubleshooting, it can be useful to focus on just the client or just the service. For example, if you aren't sure if the client is sending messages properly, you may want to completely disable the service so you can see if the client's messages are arriving on the queue. To do this, you can shut down the IIS service or application pool, or shut down the Net.Msmq Listener Adapter Service. (A quick way to check what's sitting on a queue from code is sketched just after this list.)
  • Make sure the MSMQ storage isn't maxed out. MSMQ is designed to be resistant against all sorts of failures, such as temporary network outages. However it seems that if you reach the limit of MSMQ's allocated storage, messages will not be delivered at all. This has happened to us a few times after large amounts of messages ended up in the Dead Letter Queue, or when journaling has been left on for too long. It's easy enough to increase the storage limit, but normally when you reach the limit during testing the best thing to do is purge all of your queues.
  • Try pinging the service using the browser. When you are working on WCF services exposed through HTTP, you're probably used to hitting the .svc file in a web browser to check that you can receive the metadata correctly and that there are no configuration problems. Unfortunately there isn't any equivalent way to "browse" to an MSMQ service, so simple configuration errors can be very hard to track down. However if you enable the HTTP protocol for your site, you will be able to hit the .svc file in the browser, even if you haven't configured an HTTP endpoint for your service. If you get the standard WCF service page, that means the service is probably configured correctly.
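
As promised above, here's a quick System.Messaging sketch for peeking at a queue from code, which can help you work out whether messages are piling up or being consumed (the queue path is the one used throughout this article):

// Count the messages currently sitting on the queue without consuming them.
MessageQueue queue = new MessageQueue(@".\private$\MsmqService/MsmqService.svc");
Console.WriteLine("Messages waiting: {0}", queue.GetAllMessages().Length);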

That's it for the basics. In Part 2 of this article, we'll look at what's required to get this application deployed on multiple servers, and specifically focus on what you'll need to do for the security configuration.

MSMQ, WCF and IIS: Getting them to play nice (Part 2)


Welcome back! In Part 1 of this tale, we'd successfully configured a WCF client and an IIS-hosted service to communicate via MSMQ on the same machine. But we're only half done. As you may recall, our goal here is to deploy the client, service and queues all on separate machines. We also want to secure the configuration so that only our client is permitted to send messages to the queue.

Before we dive in, a quick disclaimer. As I mentioned in the first article, at the time of writing there is very little information about how to get this scenario working correctly. As such, we had to use a lot of trial and error. While I hope all of the tips in this article are correct and helpful, keep in mind that I don't have the time or resources to test everything in a clean environment. If any of this advice turns out to be incorrect or if I've missed anything important, please let me know and I'll correct the article.

Going Multi-Server With No Security

The next phase of this journey is to get all of the components running on their own servers, as shown in the architecture diagram from part 1. Since getting everything working can be a little fiddly, we're going to continue to take baby steps so we won't be switching to authenticated or transactional messages just yet.

Install the necessary components and applications

To get started, make sure you have your three servers, and configure the necessary Windows components on each as described in part 1. Then deploy the client app on the client app server, create the message queues on the MSMQ server, and deploy the service in IIS on the service app server. Other than the fact that the boxes are different, there really isn't any difference in this step to what you did in the single-server scenario.

Allow anonymous users to send to the queue

The first difference between the single-server and multi-server scenario is that MSMQ will normally reject unauthenticated messages from remote machines. When you set the security mode to "None" (or "Message") in the NetMsmqBinding, all messages will be deemed to be sent from a pretend user called ANONYMOUS LOGON. As such, you need to set the ACLs on the message queue to grant the ANONYMOUS LOGON account "Send Message" permission. This step had me stumped for ages - I had assigned permissions to the actual account that my client app was running under, yet all messages ended up in the Dead Letter Queue with an "Access Denied" message. Hopefully this tip will save you the same pain!

Change the service's BindingInformation to point to the MSMQ Server

Remember how we needed to configure IIS using appcmd.exe to listen to the MSMQ protocol? Well one part of the cryptic command syntax was bindingInformation='localhost'. The meaning of the bindingInformation attribute varies for each protocol, but for net.msmq this refers to the server that hosts the queues that IIS should be listening to. In our multi-server scenario we will be putting our queues on a separate server called msmqserver. First, let's switch off the localhost binding we configured last time. Note that the only difference in syntax for adding and removing a binding is the use of -+ versus --.

appcmd set site "Default Web Site" --bindings.[protocol='net.msmq',bindingInformation='localhost']

Now let's switch on the net.msmq protocol for our remote msmqserver:

appcmd set site "Default Web Site" -+bindings.[protocol='net.msmq',bindingInformation='msmqserver']

If you're not joined to a domain, here's a top tip from the MSMQ Activation Sample on MSDN: "To enable activation in a computer joined to a workgroup, both the activation service and the worker process must be run with a specific user account (must be same for both) and the queue must have ACLs for the specific user account."

Update the queue address in config files

Hopefully the final step will be to modify the queue address in the config files for your client and service applications. The only change should be to replace localhost with msmqserver, leading to a queue URI like net.msmq://msmqserver/private/MsmqService/MsmqService.svc.

Switching on Transport Security

Now that everything is working properly across multiple machines, let's start hardening the solution by turning on Transport Security. The NetMsmqBinding has four security modes: None (which we've been using up until now), Transport (which specifies that we should be using MSMQ's built-in security features), Message (in which WCF provides security at the SOAP level), and Both (which combines Transport and Message modes). In this scenario we will be using Transport security, as it enables administrators to use the standard MSMQ security features. Most importantly it will allow us to lock down the queue so only the account running our client is authorised to send messages to the queue.

Enable MSMQ Active Directory Integration

If you haven't done so already, go to Add/Remove Windows Components and enable MSMQ Active Directory integration. Using AD is the easiest way to get Transport Security working. It is possible to use Transport Security without AD if you use custom certificates, but I won't discuss this approach in this article (mainly because I've never tried it :-).

Configure the WCF bindings

In order to switch on Transport Security, we'll need to configure a new WCF binding. (Alternatively you can just change the configuration of your existing binding, but I like to make sure each binding's name describes what it actually does). Transport is the default security mode for NetMsmqBinding, but in order to avoid confusion I also like to configure this explicitly:

<netMsmqBinding>
  <binding name="MsmqBindingNonTransactionalTransportSecurity" exactlyOnce="false">
    <security mode="Transport"/>
  </binding>
</netMsmqBinding>

This needs to go in both your client and server's configuration files. Of course, you'll also need to update your endpoints to use the new MsmqBindingNonTransactionalTransportSecurity binding configuration.
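
For example, the client endpoint from part 1 only needs its bindingConfiguration attribute (and the new msmqserver address) switched over:

<endpoint name="MsmqService"
          address="net.msmq://msmqserver/private/MsmqService/MsmqService.svc"
          binding="netMsmqBinding" bindingConfiguration="MsmqBindingNonTransactionalTransportSecurity"
          contract="MsmqContract.IMsmqContract" />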

Configure MSMQ Security

This step sounds simple, and in your case it may be. However in my environment I found there were a number of tricks required (probably mainly because I was using non-standard service accounts for my client).

The simple bit is to make sure the account running your client has permission to send to the message queue. If you're running the client app under your own account, this is probably already set. Once you've checked or set the ACLs, try the application and see if it works. If so, congratulations - you're done! If not, keep reading.

One error that we saw when attempting to send to the queue was "An error occurred while sending to the queue: Unrecognized error -1072824273 (0xc00e002f).Ensure that MSMQ is installed and running. If you are sending to a local queue, ensure the queue exists with the required access mode and authorization.". If you get this, it probably means that there is no certificate in Active Directory for your specific user account on a specific server. I'm not sure if you need a certificate for all three servers in the scenario, but it's probably better to be safe than sorry. To register the certificate, do the following:

  1. Log on to the relevant server using the user account that you will be sending messages under
  2. Open the Computer Management (Windows Vista) or Server Manager (Windows Server 2008) console.
  3. Find the Message Queuing node, right-click and choose Properties.
  4. Click on the User Certificate Tab
  5. Click the Register... button, choose the appropriate certificate (normally "DOMAIN\Username, ServerName") and click Register.
  6. If your client is running in IIS 7 under a user account other than NETWORK SERVICE, modify the configuration of your App Pool to load the user profile. This appears to be necessary to allow the client to access the certificate needed to authenticate.

After you've done this, try again - and again, hopefully you're done. One other trick we found may be necessary is to add the machine account of the client app server into the AD domain group called "Windows Authorization Access Group" - this is described in this TechNet article. I'm not positive of the exact situations when this is necessary, but if all else fails I'd suggest giving this a try.

If you're still having troubles getting communication across the boxes, one final thing to check is whether any of the MSMQ traffic is being blocked by a firewall. I'd suggest temporarily turning off all firewalls across the various boxes to see if this is an issue.

Hopefully by now you'll have a secure, multi-server deployment of your WCF client, server and message queues. In the final instalment we will go one step further and switch over to transactional queues to ensure your messages don't ever go walkabout.

MSMQ, WCF and IIS: Getting them to play nice (Part 3)


Previously, in MSMQ, WCF and IIS: Getting them to play nice:

  • In Part 1, we built a client and IIS-hosted service application and got them communicating over MSMQ using WCF's NetMsmqBinding.
  • In Part 2, we deployed the same application across multiple servers, and enabled transport security for MSMQ.

In today's thrilling conclusion, we'll improve the resiliency of the solution by going transactional. Fasten your seat belts!

Going Transactional

Before we get started, let's spend a few minutes discussing the advantages and disadvantages of using transactional message queues. The advantages are all pretty nice:

  1. Messages will be delivered exactly once, and in order.
  2. Messages are persisted to disk, so they won't be lost if a server goes down.
  3. Sending and receiving messages can take place within a transaction. I've found this most useful on the receiving side: if you create a single transaction that encompasses both receiving the message and processing it, and a failure occurs during processing, the entire transaction will be rolled back. This means the message will be returned to the queue, rather than lost.

At this time you're probably thinking, "wow, that all sounds great - why wouldn't anyone want all of those?". The main reason is performance - using transactional message queues is typically many times slower than going with their non-transactional cousins. Also while the prospect of losing messages or getting duplicate messages sounds scary, in reality this would only happen under extremely rare and unfortunate circumstances. So the question here shouldn't really be "do you want the improved reliability that you get from transactional message queues", but rather "can you afford to live without it?".

That said, there are any number of scenarios where transactional message queues are justified - such as storing audit records, processing financial transactions or sending greetings in blog post samples. So let's get started!

Create a Transactional Message Queue

The first thing we need to do is create a shiny new transactional message queue. Even though we already have a non-transactional message queue with the correct name, you can't convert a non-transactional queue to a transactional one. So you'll need to unceremoniously delete the existing queue, and create a new private queue, still called MsmqService/MsmqService.svc. However this time make sure you select the Transactional checkbox.

Now, after all the effort we went through to set the ACLs on the previous queue, make sure you set them correctly on the new queue to avoid more painful permissions problems!

Reconfigure your WCF Bindings

Once again, we'll need to modify the WCF configuration in both the client and service to use a new binding. This time we'll be using the MsmqBindingTransactionalTransportSecurity, which will be defined as follows:

<binding name="MsmqBindingTransactionalTransportSecurity" exactlyOnce="true" receiveErrorHandling="Move">
  <security mode="Transport"/>
</binding>

The exactlyOnce="true" attribute is WCF-speak for using a transactional message queue. The receiveErrorHandling attribute is only needed on the service side (although it won't do any harm on the client side). This tells WCF what to do in the event that it discovers a "poison message". Poison messages are an important concept with transactional message queues. As discussed previously, if an error occurs while processing a transactional message, the transaction will be rolled back and the message will be returned back to its queue - ready to be picked up again by the same service. If the error was caused by a temporary glitch, the message may be processed successfully the next time around. However if the problem was due to a malformed message or a persistent problem with the application, the message is going to fail over and over again. WCF and MSMQ 4.0 have joined forces to provide support for poison message detection and handling. If the same message fails a number of times (3, by default), it will be considered "poison". What happens next depends on the value of the receiveErrorHandling attribute. If you set it to "Move" (my favourite choice!), it will be automatically put onto a sub-queue called "poison" where it can be manually dealt with by someone else.

So with our new binding beautifully configured, make sure you modify the endpoint definitions to refer to the new binding configuration name, and you're ready to move forward.

Add Transaction Attributes to your Service Implementation

If you want to get the advantage of executing the message receiving and processing in a single transaction, you'll need to tell .NET to enlist your code in the existing MSMQ transaction. This can be done in a single line of code, by decorating your service implementation methods with [OperationBehavior(TransactionScopeRequired=true)].

So far my sample service has consisted of a single line of code. While simplicity is normally a good thing in samples, it's not going to give me any opportunities to check the transactional behaviour or poison message handling. In order to make the scenario a bit more interesting, I've added some code that will let me easily create a poison message. My service class now looks like this:

    public class MsmqService : IMsmqContract
    {
        [OperationBehavior(TransactionScopeRequired=true)]
        public void SendMessage(string message)
        {
            if (message == "Bad")
            {
                throw new InvalidOperationException("Bad!");
            }

            Trace.WriteLine(String.Format("Received message at {0} : {1}", DateTime.Now, message));
        }
    }


As I'm sure you can tell, whenever I send the message "Bad", my service will fail. This will cause an exception to be thrown, and the transaction will be aborted. As a result the message will be returned back to the message queue, ready to be picked up again. Since the message has not been changed, it will continue to fail twice more, after which WCF will decide the message is poison and move it to the "poison" sub-queue.

Check DTC Configuration

Our epic journey is almost at an end. In fact if you're still playing along at home, you can try running the application with the transactional queues to see if it's working. If it's failing, one possible cause is problems with your Distributed Transaction Coordinator configuration. Here are a few things to try:

  1. Make sure that the DTC service is installed and running on all servers. If you're running Windows Server 2008, the feature may not be installed by default.
  2. Check your DTC security configuration. Under Windows Vista, launch comexp.msc, then expand Component Services\Computers\My Computer\Distributed Transaction Coordinator\Local DTC. Under Windows Server 2008 this is slightly easier to find, in Server Manager. In both cases, right-click on Local DTC, choose Properties and go into the Security tab. The exact choice of options probably depends on your scenario, but a good start is to switch on "Network DTC Access", "Allow Remote Clients", "Allow Inbound", "Allow Outbound" and "No Authentication Required".
  3. Make sure that you allow DTC traffic through any firewalls. Again, if you run into problems, a good starting point is to temporarily disable all firewalls so you can find out whether that's the source of your problems.

Conclusion

In the last three posts I've documented pretty well everything I've learned over the past few months about getting MSMQ, WCF and IIS 7 playing nice, both on single machines and across multiple machines. Even though it took quite a while to figure all of this out, I still believe the architecture is both extremely flexible and simple to use - the total amount of code in this solution really is tiny. My only real complaint is that there isn't a lot of help available, either in the tools or on the web, to explain why things don't always work first time or how to go about fixing them. Through this post, I'm hoping my team's experiences will make the path a little smoother for you.

Update: By popular demand (OK, one person asked!), source code for the finished project is attached to this post.

My Clean Image Installation List


Today I rebuilt my work laptop for the first time in about 15 months. It was actually running fairly well in most regards, but I'm currently in a gap between projects so I thought it was a good time to clean out all the crap that one accumulates on a machine that's used every day.

I thought it would be interesting to chronicle the list of things I installed in the first few hours after the clean rebuild, and to find out how this compares with other people's critical software list.

I started with a Windows Vista Enterprise SP1 image that came with Office 2007 pre-installed by our IT department. On top of that, here's what I've added today:

  • Live Mesh
  • Windows Live Messenger
  • Windows Live Writer
  • Windows Live Photo Gallery
  • Office Communicator
  • Visual Studio 2008
  • Find As You Type for Internet Explorer
  • Silverlight
  • Flash
  • Adobe Reader
  • Enterprise Library (of course!)

And here's what I'm absolutely not going to install, no matter how many times other installers try to trick me into doing so:

  • Google toolbar
  • Windows Live toolbar
  • Apple QuickTime, iTunes or Safari

BTW, if any of you ever meet a developer from any software company who believes it's acceptable to install icons onto my desktop or QuickLaunch bar without asking permission first, please give them a kick up the arse from me.

No, not that Tom Hollander


I've been maintaining a relatively public online profile since the early days of the web, back to when I started peddling shareware apps for $10 a pop in the Netscape 1.0 era. And while I have never climbed (and probably never will climb) beyond C-list internet celebrity status, for a very long time I was the only Tom Hollander on the web in any meaningful way.

So it caused me some irritation when another Tom Hollander, an actor with credits including Pirates of the Caribbean, came on to the scene a couple of years ago. To make matters worse, pages about him consistently rank higher than mine on search engines, and he's beaten me to having a dedicated Wikipedia article. I have been trumped by a B-lister!

Amusingly, I'm getting more and more messages via my blog from people thinking that I'm that actor guy. Most of them presumably don't read anything on my blog below the title, as the content is a bit of a giveaway. However last week someone apparently did read some of the posts and surmised that despite being an accomplished actor, my real passion must lie with software development:

Is this the blog of the actor Tom Hollander - Gosford Park, Pirates of the Caribbean, Freezing, etc? If so, HI! You're a great actor, but importantly, your computer knowledge and skills are certain to be an asset to you. What attention to detail!

While I know practically nothing about the other Mr Hollander, and definitely don't have any ill wishes for him, I would get some satisfaction if I found out he was receiving messages asking for advice on Enterprise Library and other geeky topics.

Application Architecture for .NET v2 - This time for real!


Those of you who have been paying attention may remember a post I did over a year ago announcing the p&p team's plans to update the excellent but now very dated Application Architecture for .NET guide.

Those of you who were paying even more attention may have noticed that the promised guide does not actually exist. At the time of the post I knew that I was going to be moving back to Australia and out of the patterns & practices team. However I left the guide in the capable hands of Edward Jezierski, who (as it turned out) also left the team not long after me. So with both of us gone, the project unfortunately needed to be put on ice.

However the good news is that J.D. Meier, long-time p&p'er and author of all sorts of guides on topics such as performance and security, has picked up where Ed and I left off. You can find out about J.D's thoughts on the guide at this post on his blog, and follow the progress and provide feedback on the App Arch 2.0 Guidance Project Site on Codeplex. 

So the moral of the story is that I wasn't lying - it's just taken a little longer than expected to show some visible progress! 


Speaking at WDNUG


It’s been a little while since I’ve presented at a public forum – particularly since I missed TechEd Australia after it clashed with my Great Barrier Reef holiday. However this is all going to change with my next live appearance in the centre of the known universe, Wollongong! More specifically I’ll be talking at the Wollongong .NET User Group (WDNUG) next Wednesday, October 15. That should leave plenty of time for you all to book flights from all over the world to attend this monumental event.

I’m doing a talk with the catchy title of Aspect Oriented Architecture meets Service Oriented Architecture using the Policy Injection Application Block and WCF. Here are the official details:

The October WDNUG meeting is on Wednesday 15th October and this month we have Tom Hollander from Microsoft visiting to talk about Aspect Oriented Architecture within a .NET environment.

Abstract:
Service Oriented Architecture provides a way of looking at systems as black boxes, while Aspect Oriented Architecture allows separation of concerns within each of those boxes. In this session Tom will discuss how these approaches have been brought together on a customer project using Windows Communication Foundation and Enterprise Library’s Policy Injection Application Block, resulting in more predictable behaviour with less code.
Bio:
Tom Hollander is a Solution Architect in Microsoft’s Solutions Development Centre in Sydney, responsible for driving the technical design and delivery of complex customer projects. Prior to joining this team, Tom spent over three years in Microsoft’s headquarters in Redmond working as a product manager in the patterns & practices team. In this role Tom helped deliver many patterns & practices deliverables including Enterprise Library, the Guidance Automation Toolkit and Web Service Software Factory. Tom is a frequent blogger on patterns & practices and architecture topics, at http://blogs.msdn.com/tomholl.

Topic: Aspect Oriented Architecture
Date: Wednesday 15th October
Time: 6:30pm
Location: CSC Offices
               Edney Lane
               Mt St Thomas

Please RSVP so we know how many people will be turning up.

The attendance of all readers of my blog is mandatory. Looking forward to seeing you all in sunny Wollongong!

What did the other 37.5% want?


And so another US election is behind us. Despite the fact that I no longer live in the United States, the presidential race was still watched with interest by me and many, many others around the world. And like many others in the US and around the world I am very happy with the outcome, and look forward to a better world with a President Obama in the White House.

There have been a number of articles in the media calling out that the voter turnout was the highest in decades. Given how much was at stake, this didn’t surprise me. However I was shocked to discover that the highest turnout in decades has been estimated to be around 62.5%, or less than 2 out of 3 eligible voters.

For Americans (and most likely people from a lot of other countries) this may seem to be an impressive, or at least respectable figure. However having grown up in Australia, I’m used to seeing voter turnouts averaging 95%. While it’s possible that Australians are fundamentally more interested in politics than Americans, the real reason for such a high figure here is because this is one of a handful of countries that has (and enforces) compulsory voting.

Due to my cultural upbringing, I find this policy completely familiar and uncontroversial. However whenever this topic came up in conversation in the US most people were horrified, considering it a major violation of rights. There are a couple of reasons why I disagree with this view.

Quite a long time ago, I participated in a training course to learn how to conduct door-to-door market research (although as it turned out I never actually did the job!). In the training we learned that whenever you knock on doors, some people may not be at home so obviously you can’t ask their opinion about whatever product you are researching. However rather than just discounting that person’s opinion, you had to come back on a different day and try again. The reason for this is that a sample of people that only includes those that are home at a particular time (like Saturday morning) is not considered statistically representative. Maybe the people who aren’t home were busy playing sport, and these lifestyle choices are likely to significantly shift their opinion on the product in question. Such practices are apparently common in the world of market research. So if these practices are important enough to guide research into views on fizzy drinks, shouldn’t they be used to decide who should be in government? There is no reason to believe that the group of people who feel uninterested or disenfranchised enough to not want to vote would vote in precisely the same ratio as those that do turn up. Governments must represent all of their constituents, so any election that only includes a self-selecting subset is inherently flawed.

Just to ram this point home, I’m sure you’ve all seen self-selecting polls on all sorts of topics that are frequently run by websites and TV news shows. Most of them contain a disclaimer something like this:

DISCLAIMER: These polls are not scientific and reflect the opinions of only those Internet users who have chosen to participate. Poll results cannot be assumed to represent the opinions of Internet users in general, nor the public as a whole.

The disclaimer is hard to argue with. So why do people believe that voluntary elections, which run on the same principle, are an accurate reflection of the public as a whole?

One common argument against compulsory voting is that some people genuinely have no opinion, or are fundamentally opposed to voting on philosophical or religious grounds. To go back to my market research example, even if the researcher keeps coming back until somebody is home, there is no way to force them to give an opinion if they don’t want to. While I find it hard to understand how someone can be completely indifferent to something with such a big impact on their lives, the Australian system deals with this well. “Compulsory voting” is not really an accurate description of the system: it’s compulsory to turn up to the polling booth on election day and get your name ticked off. What you do with your ballot paper after that is really up to you. And while a percentage of people will lodge an “informal vote”, the overwhelming majority vote properly – after all, they’re already at the polling place with a ballot paper in hand.

I’m not expecting to convince everyone that this is the better way, nor am I saying that compulsory voting would have changed the result in the latest US presidential election. However I’m sure that there have been many elections over the years in many countries where the result would have been different had voting been compulsory. And I find it hard to see how things wouldn’t be better if the outcomes of elections were based on what the people really want, regardless of their views on the political process.

Constructors and Inheritance – Why is this still so painful?

Recently my team discovered a limitation in the RelativeDateTimeValidator that ships with the Enterprise Library Validation Application Block. This validator is used to check if a DateTime object occurs within a configured time before or after Now. It’s a useful validator for checking things like birth dates and expiry dates. However we discovered that it assumes the date being validated is in local time – if the date is specified in UTC (which we do in our app) the calculation will be wrong. Fixing the logic was no big deal – it just involved checking if the DateTimeKind is UTC and if so, converting it to a local time before doing the validation. However like many people we didn’t want to modify the original EntLib codebase, so instead I built a new class imaginatively titled RelativeDateTimeValidatorEx that inherits from RelativeDateTimeValidator and corrects this problem. But while the important code change was only a couple of lines, I was forced to manually implement every one of the 14 constructors from the base class (with no code beyond calling each base constructor) to ensure equivalent and compatible functionality. In my case I didn’t need to change the public interface at all. But even in the (probably more common) case where a derived class includes new functionality that warrants new constructors, it’s very common to want to include all of the base class’s original constructors as well.
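In case it’s useful, the fix looked something like the following. This is a minimal sketch: I’m assuming the block’s usual DoValidate override point and its DateTimeUnit enum, and I’ve shown just one of the 14 constructors that needed to be re-declared.

public class RelativeDateTimeValidatorEx : RelativeDateTimeValidator
{
    // One of the 14 constructors that must be repeated just to call the base version
    public RelativeDateTimeValidatorEx(int upperBound, DateTimeUnit upperUnit)
        : base(upperBound, upperUnit)
    {
    }

    protected override void DoValidate(DateTime objectToValidate,
        object currentTarget, string key, ValidationResults validationResults)
    {
        // Normalise UTC dates to local time before the base class compares them with Now
        if (objectToValidate.Kind == DateTimeKind.Utc)
        {
            objectToValidate = objectToValidate.ToLocalTime();
        }

        base.DoValidate(objectToValidate, currentTarget, key, validationResults);
    }
}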

To provide a quick example (but thankfully with fewer constructors), imagine the following base class:

public class BaseClass
{
    public BaseClass(int a) : this(a, "Foo")
    {
    }
    public BaseClass(int a, string b) : this(a, b, DateTime.Now)
    {
    }
    public BaseClass(int a, DateTime c) : this(a, "Foo", c)
    {
    }
    public BaseClass(int a, string b, DateTime c)
    {
        // Shared initialisation code
    }
}

Now if I want to build a new class that provides equivalent functionality and supports a new optional parameter, I’m left with this at a minimum:

public class DerivedClass : BaseClass
{
    public DerivedClass(int a) : base(a)
    {
    }
    public DerivedClass(int a, string b) : base(a, b)
    {
    }
    public DerivedClass(int a, DateTime c) : base(a, c)
    {
    }
    public DerivedClass(int a, string b, DateTime c) : base(a, b, c)
    {
    }
    public DerivedClass(int a, string b, DateTime c, long d) : base(a, b, c)
    {
        // New initialisation code
    }
}

…and if I want to offer more permutations, the number of constructors may grow exponentially.

The good news is that C# 4.0 will finally include support for optional parameters (a very useful feature that VB programmers have enjoyed for years), which will partly mitigate this issue by providing an alternative to writing a crap-load of constructors. However even with optional parameters, it would be nice if you could tell C# that you want your derived class to inherit all of the base class’s constructors.
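For example, under C# 4.0 the derived class above could collapse into a single constructor. This is just a sketch; note that default parameter values must be compile-time constants, so the DateTime.Now default has to be smuggled in through a nullable parameter:

public class DerivedClass : BaseClass
{
    // One constructor replaces the four permutations shown earlier
    public DerivedClass(int a, string b = "Foo", DateTime? c = null, long d = 0)
        : base(a, b, c ?? DateTime.Now)
    {
        // New initialisation code using d
    }
}

Callers who want to skip b but supply c can use named arguments, e.g. new DerivedClass(5, c: someDate).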

On a somewhat related note, it’s often bothered me that there is no way to include constructors as a mandatory part of a class’s contract. This comes up every time you use a plug-in pattern, where you create instances of a class at runtime using Activator.CreateInstance and you require all of your plug-ins to be initialised with the same information. The most logical approach is to require each plug-in to implement the same constructor. While you can make this work by passing the constructor parameters to Activator.CreateInstance, it is completely untypesafe, as the compiler can’t enforce that all your plug-ins have the required signature. This is a common problem for people implementing custom EntLib plug-ins, which receive their configuration properties via a NameValueCollection passed to the constructor – people who forget (or don’t know) to implement this constructor often have a hard time figuring out what is wrong.

The other approach you sometimes see is to create the instances using a default (parameterless) constructor and set the initialisation parameters through a public interface method like Initialize(NameValueCollection properties). But this approach has its own downsides. First, it means your plug-in classes are most likely constructed in an unusable state. Second, it still suffers from the same type safety problems, as there is no guarantee that every plug-in class implements a parameterless constructor.
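To make the problem concrete, here’s a sketch of the status quo, reusing the hypothetical helper names from the snippet further down. It compiles happily regardless of what constructors the plug-in type actually has:

Type plugInType = Type.GetType(plugInTypeNameFromConfigFile);
NameValueCollection properties = LoadPropertiesFromConfigFile();

// Nothing here is compiler-checked: if the type doesn't have a
// (NameValueCollection) constructor, this fails at runtime with a MissingMethodException
PlugIn myConcretePlugIn = (PlugIn)Activator.CreateInstance(plugInType, properties);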

So why isn’t it possible to specify constructors as a part of a class (or interface) contract? And to make it useful, there should be a compiler-verifiable way to create instances of unknown types that calls the specified constructor. I’m no language designer, but I’m thinking of something like this:

public class PlugIn
{
    public virtual PlugIn(NameValueCollection properties)
    {
    }
}
// Create an instance
Type plugInType = Type.GetType(plugInTypeNameFromConfigFile);
NameValueCollection properties = LoadPropertiesFromConfigFile();
PlugIn myConcretePlugIn = PlugIn.New(plugInType, properties); // Compiler-checked

This is probably a bastardisation of the virtual keyword, but you get the idea. Another benefit of this approach is that you could safely new-up instances of generic types in a lot more situations. Currently you can specify a where T : new() constraint for a generic parameter, which specifies that the generic type must have a default constructor. I’ve often thought this was overly restrictive, and that it should be possible to do something like where T : new(int, string). This may still be desirable, but if constructors could be specified as part of a contract then the type constraint alone would allow the code to create new instances. For example, if you had a generic parameter T where T : PlugIn, then your code could safely execute T myPlugIn = new T(properties).
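To illustrate the difference, here’s a quick sketch of what you can and can’t express today:

// Legal today: T is guaranteed to have a default constructor
public static T CreateDefault<T>() where T : new()
{
    return new T();
}

// Not expressible today: there is no "where T : new(NameValueCollection)".
// Until something like it exists, reflection is the only option, with no compile-time checking.
public static T CreatePlugIn<T>(NameValueCollection properties) where T : PlugIn
{
    return (T)Activator.CreateInstance(typeof(T), properties);
}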

Anyway, end of rant. I’m interested in hearing whether any other programming languages have either of the features I’ve proposed, or if there are any good reasons I haven’t thought of which would make these features undesirable or impossible.

Windows Live finally makes sense

Those who have followed my blog for a while should know that I’m not in the habit of using this space to blindly promote Microsoft products – although if I find something genuinely cool or useful I’ve been known to give it a quick plug. This is why I’ve never posted about Windows Live before. It’s not that the various sites and apps were bad – it’s just that they were obviously a random collection of rebranded MSN assets that didn’t make a lot of sense as a unit.

But thankfully this has all changed with the latest incarnation of Windows Live. This has been out for a couple of weeks now, but I’ve only started looking at it properly in the last few days – and so far I’m extremely impressed. While most of the old sites are there in some form, there is now a clear common theme: sharing and discovering information about your social network (which is built from your Messenger contacts). However rather than competing with other social networking sites, Windows Live is able to link to other sites (yes, even non-Microsoft sites such as Twitter, Flickr and StumbleUpon) and aggregate activities from those sites into a single view.

Aside from the new home page that shows all of your network’s activities, the coolest new feature I’ve discovered so far is the new Photos site and the corresponding Windows Live Photo Gallery downloadable app. You can now store all of your photos online in your SkyDrive (with 25 GB of free space!), share them with as many or as few people as you want, and view them online in an awesome new Silverlight-based slideshow view. The offline Photo Gallery app has the expected integration with Live Photos and Flickr, plus some fun new features such as People Tagging and Photosynth stitching.

I’m sure there are much more comprehensive reviews out there, but I wanted to draw it to your attention as so far it doesn’t seem to be getting a lot of web love – and it finally deserves some!

How my team does agile

As you know, I’m a big fan of agile software development. But what exactly does “agile” mean? If you ask a room full of software engineers that question, you’re sure to get as many different answers as there are people. I’m not going to try to tell you what agile is, or what it should be – but a lot of people ask me how our team goes about implementing agile. We don’t practice any particular official brand of agile – like most good teams we’ve combined all of our past experiences to come up with something that works well for our project and team. So I’m definitely not claiming that this is the “right” way to do agile or that it will work for your team. However I’m also not going to accept any arguments that what we’re doing is “wrong” (since it works very well for us) – although constructive suggestions are always welcome!

To provide a bit of context, I’m working with a dedicated software engineering team that is part of Microsoft’s Solutions Development Centre in Sydney. We have been building a large new system for an external customer for the last 12 months or so. It’s largely a greenfield project, but there are a handful of external systems we need to integrate with. The application is in a new business area for the customer so their requirements have been evolving constantly, especially after development began. The project team consists of 2 project managers, 1 solution architect (me), 5-9 developers (the numbers have changed a bit over time), 4-7 testers, a build manager, a release manager (just for the last third of the project), and part-time SMEs focusing on infrastructure, security and database. We also have a full-time product manager from the customer, and around 3 other customer representatives that spend at least a day a week with our team.

The project began with an end-to-end analysis and estimation phase, which lasted a couple of months. While this does not fit in with many people’s idea of agile, I believe it’s a necessary step for commercial projects (and probably worthwhile even for internal projects). The goal of this period was to get as good an understanding as possible of the business requirements (as they were understood at the time) in a relatively compressed time period. During this period the team produced UI wireframes, flowcharts and very high level requirements and design diagrams. Doing this estimation allowed us to provide a reasonably good estimate of the schedule and budget for the overall project (even with the full awareness that many, many things would change throughout the project). It also helped us identify the most important and most risky areas so we could schedule them in the most appropriate iterations.

Once the estimation phase was out of the way we started our development iterations. We settled on four-week iterations: three weeks of development and one week of stabilisation. Here’s how we tend to approach each iteration:

  • During the second half of the previous iteration, the project managers, architect and customer start planning what requirements should be candidates for the next iteration. For a large project like ours, strict stack-ranking of requirements or stories proved impractical. Instead we broke the project into larger “modules” of functionality that each fit roughly into a development iteration and prioritised those. This allows each iteration to have a single theme and vision for the entire team to focus on. Within each module we will prioritise the requirements to ensure we work on the most important features first, and often it will turn out that we don’t get time to complete the bottom-ranked requirements (although they may still be resurrected in future iterations). We also find that new, related requirements tend to emerge as we begin development. As long as they are higher priority than other candidates, we try to schedule them for the current iteration.
  • As the architect, I will spend time in the last week of the previous iteration looking through the candidate requirements for the next iteration and come up with a high-level design. This normally involves a first-cut at a data model, 5-10 pages of documentation (usually with lots of pictures!) and often a “spike” or prototype. The project and product managers will also spend time updating or creating UI wireframes and clarifying the requirements stored in Team Foundation Server. The design documents and wireframes produced during this time are never final, but having a documented starting point has proven to help the development team get on the same page quickly and hit the ground running when the iteration begins.
  • On the first day of the new iteration, the entire team gets together to discuss the proposed requirements in detail. This will typically involve reviewing the UI wireframes and design documents on a projector as well as less formal whiteboard discussions. This is also the time when the development team loves to point out problems in my design and how they intend to fix them :-). Once everyone has a good understanding of the requirements, we’ll start assigning work to the developers. For a while we tried variations of “planning poker”, but we found it rather time consuming and not overly helpful. So we’ve settled on a simpler system where developers will volunteer for requirements until they believe they have at least a week’s worth of work. They will then go away and break it into finer-grained tasks with estimates. We found that three weeks of development is too long to plan for in one go, so we hold small planning sessions at the start of the next two development weeks where we “top up” any developers who have less than a week’s worth of work with new requirements. In essence, the development phase of each iteration is broken up into three one-week mini-iterations, all with a consistent theme.
  • We don’t have any official design period within the iteration, but for the first couple of days the developers will usually spend a lot of time at whiteboards figuring out how best to approach their requirements, with actual development ramping up quickly after that. As soon as a developer (or sometimes several developers) is finished with a requirement, it is marked in TFS as “ready for test”.
  • Testing obviously goes on throughout the iteration, but for the first week the testers typically focus on analysing the requirements and writing test cases. By the second week things are usually in full swing, with the testers developing automated test cases as well as performing manual testing. Any bugs that are discovered are logged in TFS. Bugs are considered “blocking” if they preclude the requirement from being effectively tested, but lower priority “non-blocking” bugs are logged too. Any requirements without blocking bugs are marked as “tested”.
  • The product manager (who represents the customer to our team) gets the final say on whether a requirement is complete. They will go through each of the “tested” requirements and confirm that it meets their needs. If it does, the requirement will be closed. If not, they may raise bugs (if it is not implemented as requested) or new requirements (which generally means “that’s what I asked for, but not what I want” :-).
  • At the end of the third week we hit our “code complete” milestone for the iteration. This doesn’t mean all candidate requirements were completed (in fact we try to have more candidates than we can complete), but it means we have completed all of the highest priority requirements in the time available.
  • The final week is for stabilisation. This means the developers will not start any new requirements, and will work solely on fixing bugs. In order to prevent the bug count from getting unmanageable by the stabilisation week, we also impose a “bug cap” on each developer during the development weeks – once a developer’s personal bug count exceeds the cap, they need to temporarily stop work on new features to fix bugs. However even with the bug cap in place, there are always enough bugs to keep the team busy for the last week. Testing continues during the stabilisation week as well (in fact it’s usually the busiest week in the iteration for the testers). To be completely sure we don’t run out of bugs, we also schedule a “bug bash” event at the start of the stabilisation week. This involves the extended team (usually including people from the customer who are not normally involved with our team day-to-day) spending an hour playing with the application with a goal to discover as many bugs as possible. To make this fun we put on food and drinks, and hand out prizes for the most bugs, the most severe bug and the most obscure bug.
  • At the end of the fourth week, the iteration comes to an end. We strive to exit each iteration with zero P1 and P2 bugs, and an agreed limit to the number of P3 and P4 bugs. This effectively means that everything we have built in that iteration is complete and stable and ready to be deployed. That said, for this project we don’t actually deploy the application to real users after each iteration. As we get ready for our final release there will be a “final stabilisation” iteration, as well as a number of processes that can only be completed at the time of the final release such as end-to-end security and performance tests, deployment and failover testing.

Hopefully this gives you an idea of the rhythm of our iterations. However we also have a daily rhythm within our team room:

  • At 9am we have our daily stand-up meeting. We’re lucky enough to have our entire team working in the same location, so this doesn’t involve any fancy technology like conference calls. The goal of the stand-up meeting is to make sure everybody knows what’s going on in the team. Each team member gets up to 2 minutes to describe what they did yesterday, what they are going to do today, and what impediments (if any) they might have.
  • After the stand-up everyone starts work – although it’s normally quite a noisy affair (well it is when I’m involved!). We have our own dedicated team room so we can make as much noise as we like, and we encourage face-to-face collaboration on problems as much as possible. Many times a day, someone will raise interesting and difficult questions that may require the customer to clarify requirements, the project managers to reprioritise or modify requirements and plans, or the team to make surprise changes to the design and test cases.
  • We encourage our developers to check in at least once a day to prevent painful integration and long periods where the testers can’t make progress. Before anyone in the team can check in (yes, me included!) they must go through a code review and a test review. Any developer or tester can perform these roles for anyone else, and we encourage as many combinations as possible. For the code review, the reviewer and the reviewee will share a computer and go through each of the changed code files and discuss the changes. The reviewer has the right to veto the check-in if they find problems, which can include inadequate unit test coverage. The test review is much the same but normally involves running the application through whatever obscure scenarios the tester can think of to see if it holds up. Once both the code and test reviewer are happy, the developer can check in.
  • We run a continuous integration build after each check-in, where the application will be compiled and all of the unit tests run on the build server. If the build breaks, noisy sirens will alert everyone in the team to the problem, and whoever caused the problem needs to fix it immediately as everyone else is blocked from checking in. A good CI build will result in a much more cheerful “Yippee!” sound.
  • At 4pm we do our daily bug triage. This involves the test lead, one of the project managers, the customer product manager and the architect (me). We will go through each of the bugs raised in the last 24 hours, debate its priority, decide if and when it needs to be fixed, and (if it’s not closed) assign it to the most appropriate developer to fix.
  • Also at 4pm we commence our daily build. In addition to compiling the code and running the unit tests, this involves deploying the application to our test server and running a suite of automated Build Verification Tests. Again, if any of this fails, the appropriate people need to down tools and get it all fixed.
  • Once the build is declared “good”, we will start our nightly run of automated test cases. The next morning the testers will analyse the results of the previous night’s run and raise bugs for any regressions, or sometimes update the automated tests to deal with failures caused by changing requirements.

I hope this gives you some insight into one way of using agile methods to deliver a complex project. As I mentioned at the start of the post, this isn’t textbook agile, and we’ve needed to constantly evolve our process to address various challenges and to suit the personalities and experiences of our team. However I’m very happy to say that this is the most passionate, energetic and productive team I’ve ever worked with, and despite the fact that the requirements continue to evolve at what often seems like a frightening pace, we’ve continued to hit all of our milestones and targets. So I hope some of this will be of use to your team as well, and of course I’d love to hear about what you do differently in your team and how it’s working out for you.

StickyNotes for Visual Studio

Pablo Galiano, one of my friends and colleagues from my patterns & practices days, has just released a very cool extension for Visual Studio 2008 called StickyNotes. As the name suggests, it allows you to attach sticky notes to your code, with a lot more richness and less intrusiveness than regular code comments. You can also choose between personal notes (visible only to the person who created it) and team notes (visible to the entire team).

Unfortunately my team is still using Visual Studio 2005 (we started before VS2008 was released and haven’t had time to upgrade yet!), so I haven’t been able to use this on my project yet, but I have played around with it and it looks very nice. And while this tool isn’t free, at $US9.99 (about what I pay for lunch) it’s practically free. Give it a try!

The Joy of Code Reviews

As I mentioned in my recent post about how my team does agile, one of the core ingredients of our process is that nobody is allowed to check in without having gone through a code review and a test review. No other team that I’ve worked on previously has had such a rigorous process around code reviews – while we’ve had ad-hoc pair programming and occasional code walkthroughs, there were no rules about all code being reviewed before check-in. So when I first joined my new team at the SDC, I was unsure what to expect or if I’d like it. But as you might guess from the title of the post, I’ve become a convert.

First, let me go into a bit more detail about how the process works. A developer prepares a change set which normally consists of one completed requirement, or one or more bug fixes. Once they believe they are ready to check in, they will shout out “Code Review!”, at which time any other developer who can spare some time will volunteer to be the reviewer. In some cases the “reviewee” will seek out a specific “reviewer” if they know them to be best qualified in the relevant technology or component.

A code review will typically take around 15 minutes, but it may be considerably longer or shorter. It takes place at the reviewee’s computer – we normally have our entire team working in the same room, although for a while we did have one developer in another location, in which case we mimicked the “same computer” approach using desktop sharing and a speakerphone. Normally there isn’t any need to walk through the functional requirements or the high-level design in any detail, since the entire team is involved in the planning sessions and generally knows the critical details. However some code reviews may involve a little whiteboard work to communicate any design details needed to provide context for the code.

The review is performed by looking at the list of changed files in Visual Studio’s “Pending Changes” window, going through them one-by-one (sometimes from top to bottom, sometimes in whatever order the reviewee thinks is most logical), and performing a “Compare with Latest” operation on each file. Most of us have Beyond Compare or something similar installed to make this easier, but the Visual Studio text comparer works OK as well. We don’t have a specific checklist of things that need to be reviewed, but some typical areas for discussion include:

  • Quantity and quality of unit test coverage
  • Code readability, method and line length
  • Opportunities for reusing existing functionality, or merging new code with existing functionality
  • Naming conventions
  • Consistent application of patterns
  • Globalisation (appropriate use of culture classes, resource files etc.)
  • Hacks, shortcuts or other “smelly” code

If the reviewer is happy with everything in the change set, it’s ready for a test review (or if that happened first, it’s ready to be checked in). Alternatively, the reviewer can veto the check-in and insist that any uncovered issues are fixed first. In rare cases the reviewer may allow the check-in even if there are known issues, with TFS bugs or tasks created for tracking purposes. This option is most commonly used when the issues are minor and there are other developers waiting for the check-in before they can complete their own tasks.

So why did we choose to impose this additional workload across the development team? Well, it’s certainly not because the developers make a lot of mistakes or need close supervision – the team is as experienced and capable as any I’ve worked with. And in fact it is quite rare for a code reviewer to veto a check-in – I don’t have hard statistics, but it probably only happens one time in ten. Nevertheless I think the process is extremely valuable for the reviewer, the reviewee and the quality of the codebase. First, each developer writes code with full knowledge that it will be scrutinised, so they take extra care to follow established patterns and avoid ugly hacks. Second, it helps “share the wealth” around who understands different parts of the solution. And finally it provides a very personal way for developers to learn from one another, whether it be a new Visual Studio shortcut key, a .NET API they didn’t know existed, or a new architecture, design or testing technique.

One more interesting observation about how this process works in my team: at our “retrospective” meetings that we run at the completion of each iteration, there have been a number of occasions where people have called out that it takes too long to check in code. However I’m not aware of any situations where anyone in the team has suggested abolishing (or even streamlining) the code review or test review processes to achieve this outcome. And having the support and confidence of the team is the ultimate measure of the success of any process.


Just Released: Validation Application Block Hands-On Lab

Reposting Grigori’s announcement in case you missed it:

Here’s a gift for the New Year. We have produced a new hands-on lab on validation with Enterprise Library. It contains 13 exercises that walk you through the capabilities of the Validation Application Block in various application contexts:

  • The first 11 deal with a Windows Forms data processing application that takes the information entered by the user to populate and process business entities. The Validation Application Block is used to validate the created business objects before processing them in gradually more sophisticated ways.
  • Starting with Lab 7, the Windows Forms validation-integration feature is used to directly validate the input for the form's controls.
  • Labs 8 through 11 deal with the extensibility of the application block.
  • Lab 12 shows how to use the ASP.NET validation-integration feature of the application block to validate the ASP.NET control's values, using a Web forms version of the simple data entry application from the previous labs.
  • Finally, for Lab 13, the ASP.NET application works as a front-end for a Windows Communication Foundation (WCF) service, while the WCF validation-integration feature of the application block is used to declaratively validate the service parameters on the server side.

The lab instructions are available as a CHM for easy navigation and as a PDF for printing.

There are two ways you can complete this lab set: you can work through it manually from start to finish, or you can use the provided starter solutions to complete only the labs you want, in whatever order you prefer.

Download Validation HOL

Next up is our new Interception hands-on lab. You may expect to see it released in January.

Thanks to the EntLib team for keeping the good stuff coming!

Enterprise Library 5.0 kick-off! Spend your $100 wisely!

It’s that time again – the kick-off for the next major release of Enterprise Library. I can’t believe we’re already up to version 5.0 – it doesn’t seem like that long ago when we were planning version 1.0.

Anyway, you should know the drill by now. The patterns & practices team needs your input to decide what new features and scenarios go into each new release. Despite the tough economic times, Grigori has been kind enough to give each and every one of you $100 to spend on the new release – please drop by his blog and let him know how much you want to spend on which features.

I’m going to have to think hard as to where I spend my cash. Unfortunately we’re still using .NET 3.0 (and hence EntLib 3.1) on my current project, and most of my wishes were already granted in EntLib 4.x. Still, there’s always room for improvements, and I’ll be sure to come up with something.

What do you want to see in EntLib 5.0?

As you’ve probably noticed, I’ve been on a bit of a blogging vacation of late. Rest assured that I’m still here and I’ll try to get some good posts happening soon. But for now I just wanted to draw your attention to one of Grigori’s posts asking for your help in prioritising the features in the upcoming Enterprise Library 5.0.

There are a lot of cool things in the proposed list including overhauls to the config tool, support for using the Validation Application Block in WPF applications, a bunch of new training materials, and even a Resource Application Block to help with localisation.

But don’t let me (or anyone else) tell you what’s important – go and tell the team what you really want by completing the survey!

@tomhollander just doesn’t get it

I tried. I really did. For five months I’ve chronicled my daily thoughts and activities for all to see. But despite my efforts, this really only confirmed what I suspected right from the start: that Twitter is really pretty pointless.

Now before you all flame me, I realise that some people do get a lot of value out of Twitter. I’m just not one of them. When I first dipped my toes in the water it became clear that Twitter suffers from a typical “chicken and egg” problem – it’s no fun if you don’t have anyone following you, and it’s hard to get people to follow you when it’s no fun. But I stuck at it, eavesdropping on the veterans and occasionally replying when something interesting came up. Over time this approach has brought me a few followers and the odd interesting conversation – but not enough to justify the investment it’s taken to get this far.

Looking at the people who sing Twitter’s praises, I think I know why. These people get value because they have thousands of followers. And they get thousands of followers by tweeting all the time. I’ve got no problem with people doing this, but frankly I’ve got other things to do (and besides, while I am geeky, I’m not that geeky :-).

I’m curious about the much-discussed surge in Twitter usage over the past year – how many people are really finding it valuable, and how many are, like me, giving it a go but yet to find the point. In any event I’m not throwing in the towel just yet – I’ll keep an eye out on what’s happening and may even contribute the odd tweet. But from now on, it’s up to Twitter to prove its worth to me – I’m done with trying to prove my worth to it.

So for at least a little bit longer, @tomhollander is happy to continue the conversation, just as long as it stays interesting and isn’t too much hard work.

Your guided tour of the Microsoft Solutions Development Centre

When I decided to leave the patterns & practices team to move back to Australia, one of my big concerns was whether I would be able to work on teams with the quality and dedication I experienced on projects such as Enterprise Library. It turned out that my fears were unfounded, as I’ve found myself working in the most high-performing team I’ve ever experienced, Microsoft Australia’s Solutions Development Centre (SDC). The SDC is basically a software engineering team available for hire, building complex solutions for external customers. We pride ourselves on excellence in process, technology and outcomes, using the latest Microsoft technologies and agile development practices to drive the best possible outcomes for customers.

About a month ago, the SDC opened its doors to a wider audience in an event we called the SDC Open Day. This consisted of a series of presentations describing all aspects of how we manage our projects, followed by a tour of our facilities with drinks, nibblies and networking. I had originally planned on blogging about this before the event so more of you could attend, but it turned out that we reached our maximum capacity through direct invitations and word of mouth.

The event went off without a hitch, and the feedback from the 150-odd attendees was overwhelmingly positive. However right from the start we wanted to find a way that this event would continue to provide value long after the projectors went dark. So we decided to video the entire event (over two hours of it!) for you to enjoy no matter where you are. The videos have just been posted live on microsoft.com.

If you’re in Australia (or are happy to move here :-), we’d obviously like you to consider how the SDC could help your team become more effective and your organisation deliver critical solutions. However no matter where you are, I’m hoping you’ll find the videos interesting. Whether you’re a developer, tester, project manager or architect, the videos will show you some techniques that our team has found very successful, which you may want to apply to your own projects.

The event was divided up into 20 minute sessions presented by people playing different roles in the SDC team (including both Microsoft staff and the partners that work with us), which you can view individually in any order. There are also some introductory interviews and “vox pops” to add a bit of spice. The main sessions are:

  1. An Introduction to the SDC, presented by Rob Mawston (SDC Lead at Microsoft). Rob provides some background into why the SDC exists and how we approach software development.
  2. A Day In the Life of the SDC, presented by me (Solution Architect at Microsoft). I take the audience through a typical day, describing the key activities performed by the team and each individual role.
  3. How we do: Project Management, presented by Prasadi de Silva (Senior Project Manager at Microsoft). Prasadi discusses how we use agile requirements management and metrics to ensure we successfully deliver what the customer actually needs.
  4. How we do: Development, presented by Corneliu Tusnea (Senior Consultant at Readify). Corneliu describes some of the techniques our developers use to ensure ongoing quality and agility, including unit testing and refactoring.
  5. How we do: Testing, presented by Sarah Richey and Bruce McLeod (Managing Director and Principal Consultant at Devtest). Sarah and Bruce describe why testing is critical to success, and how we use metrics and automation to give us great coverage throughout the project.
  6. How we do: Build and Deployment, presented by Emma Hanna and Simon Waight (Senior Consultant and Lead Consultant at Avanade). Emma and Simon describe how we use daily builds, CI builds and automatic deployments to ensure the solution is always in a known state for use and testing.
  7. A Customer’s Journey, presented by Fiona Boyd (COO at Ticketek Australia). Fiona describes Ticketek’s experience as a customer working with the SDC.

I hope you find the videos interesting. Please feel free to post any questions on anything you see in the videos to the blog and I’ll try to get them answered (if not by me, then by someone!).
