Monthly Archives: July 2007

BizTalk is COM (or update to my post on Orchestration Performance)

So I was answering a question on an internal (to Microsoft) email list I am lucky to be on. Someone asked a question about some code they were having trouble with – the code in question was being called from an Orchestration in BizTalk and they were passing in the message as XmlDocument, converting that to an XmlValidatingReader as a way of validating the document against their schema.

I pointed them to my post on using XLANGMessage instead of XmlDocument for performance.  This actually seems to have solved their problem (even though I didn’t think it would – I was just sort of morally opposed to creating an XmlDocument and then turning it into an XmlReader when they could just go directly to an XmlReader – why do two steps when one step would work?).

Tim Wieman (one of the BizTalk Rangers) pointed out to me something really important that my earlier post doesn’t mention (perhaps I should edit that post as well now that I am writing this). If you look at the docs for XLANGMessage it is pretty clear that when you are passing an XLANGMessage to a .NET component – and you only plan on reading it during your method call – you *should* call Dispose on it before the end of your call. This means you should always put the usage of the XLANGMessage inside of a try block and call Dispose in a finally block (as the above documentation link clearly shows). This was my bad – because of course I am generally thinking about .NET and not about COM. Well – BizTalk isn’t COM based – but XLANGMessage is reference counted.

After a discussion with Tim and Paul Ringseth (one of the original developers of XLANGs) I came to accept this implementation. Why did I reject it at first (I think on the email thread I called it “stupid” ;-)? Well – it just felt unnatural to require *me* as the callee (the method implementor) to be responsible for cleaning up the XLANGMessage object passed in by the caller.

In most APIs – if you were passed an object and then called Dispose on it – you’d be breaking the rules of your interaction with that API (one person I ran this by said “I’d be really pissed if you Disposed of *my* object” – that would be Mark Taparauskas – another really smart guy who doesn’t blog ;-)).

As Paul explained – the message doesn’t really belong to the .NET runtime – it belongs to the MessageBox database – so it is similar to a COM interop scenario. The interesting thing is that the reference is also counted by the Orchestration instance in this case – so if you fail to call Dispose – the reference gets cleaned up at the end of the Orchestration. So depending on how long your Orchestration runs (if it is short) it isn’t a big deal.

But still – the general rule of thumb is: if you are only using an XLANGMessage instance during a method call, make sure to call Dispose on it during that method call.
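That pattern looks roughly like this (a sketch – the helper class, the method name, and what it does with the reader are all invented for illustration; only the try/finally Dispose shape is the point):

```csharp
using System.Xml;
using Microsoft.XLANGs.BaseTypes;

public static class MessageHelper
{
    // Hypothetical helper called from an Orchestration Expression shape.
    // We only read the message during this call, so we Dispose it here.
    public static string ReadRootElementName(XLANGMessage msg)
    {
        try
        {
            // Go straight to an XmlReader -- no intermediate XmlDocument.
            XmlReader reader = (XmlReader)msg[0].RetrieveAs(typeof(XmlReader));
            reader.MoveToContent();
            return reader.Name;
        }
        finally
        {
            // XLANGMessage is reference counted by the engine; releasing it
            // here keeps the reference from hanging around until the end of
            // the Orchestration.
            msg.Dispose();
        }
    }
}
```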

Brings me back to the days when we used to have heated discussions about Reference Counting versus GC. I guess those days are not completely gone yet – even in a .NET product like BizTalk.


Happy and honored to be going to TechEd Hong Kong again this year

I have to say – TechEd Hong Kong has been one of the best conferences I’ve ever been to. The hospitality shown by the people from Microsoft is really second to none. Hong Kong is a beautiful city to visit and the people (not just from Microsoft) were very accommodating.


This year, not only do I get to hang out with my old friend Bob Beauchemin – I also get to bring Shannon with me. I am sure we’ll have a blast.

And – I get to do talks on WF/WCF (one of my current favorite topics), and on BAM (always one of my favorite topics) and its integration in BizTalk Server R2. Plus – I get to do two talks on new features in Orcas and ASP.NET!

Should be a great week in Hong Kong – if you are in that area of the world, sign up!

A site refresh

I wonder if it was even worth it since I think a lot of people just read RSS feeds through readers – but I wanted to play around with using CSS instead of tables to do site layout.

The other cool thing is the search box on the left – powered by Live Search. I actually find it pretty useful, and I wasn’t really impressed with the OOB dasBlog search capabilities.

Comments welcome

Synchronous CallWorkflow sample revisited

A while back I built a sample of how you might make a synchronous call between a Parent Workflow and a Child Workflow. The OOB InvokeWorkflow Activity is asynchronous, and a number of people on the workflow forums were interested in how to make this work, so I coded up a quick and dirty sample which a lot of people have used.

The problem with the sample (and this is a pretty common problem that is very easy to have with Windows Workflow) is that it didn’t work in the face of persistence. If the host instance went away, the workflows could persist – but the CallWorkflowService wouldn’t respond properly because all of its state was kept in local variables. I knew this was a problem a few months ago, but I needed to find a solution (the problem is non-trivial IMO) and needed to find the time to implement it.

You can download the persistence friendly CallWorkflow sample here.

The way I solved the problem is based around the use of DependencyProperty. Dependency properties are one area of WF that is still fairly mysterious to workflow developers. Without writing a whole post on DependencyProperty – there are two basic reasons for the use of DependencyProperty (which is really a simple concept – keeping property values in a collection in the base class rather than having individual field storage for each property): first to make serialization more efficient, second to enable easier data binding between properties on unrelated Activities (known as Activity Binding).
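For anyone who hasn’t seen the pattern, a regular (non-attached) DependencyProperty on a WF 3.x activity looks something like this (the activity and property names here are invented for illustration):

```csharp
using System.Workflow.ComponentModel;

public class SendOrderActivity : Activity
{
    // The value lives in the DependencyObject's central property store,
    // not in a private field on this class -- which is what makes
    // serialization more efficient and enables activity binding.
    public static readonly DependencyProperty OrderIdProperty =
        DependencyProperty.Register("OrderId", typeof(string), typeof(SendOrderActivity));

    public string OrderId
    {
        get { return (string)GetValue(OrderIdProperty); }
        set { SetValue(OrderIdProperty, value); }
    }
}
```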

Another interesting feature is the use of attached DependencyProperties. An attached DependencyProperty can be “attached” to an Activity instance which can be totally unaware of the property. There are a few interesting usages for this – Dennis talks about one here, and I mentioned one in my post about building a custom WorkflowLoaderService.

Attached DependencyProperties were the way I solved this particular problem as well. The CallWorkflowService does keep a local cache of the relationship between a child workflow and the parent workflow, as it needs this information to send a message to the parent workflow when the child workflow is completed or terminated. At the same time it builds its local cache of this data, the CallWorkflowService embeds this information into the child workflow instance by using attached DependencyProperties. The CallWorkflowService defines two attached DependencyProperties by calling DependencyProperty.RegisterAttached. One property is for the InstanceId of the parent workflow, the other is for the queue name on the parent workflow that the results of the child workflow will be sent to.
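Registering the two attached properties looks roughly like this (the property names here are my guesses at what the sample uses – the shape of the RegisterAttached calls is the point):

```csharp
using System;
using System.Workflow.ComponentModel;

public class CallWorkflowService // derives from WorkflowRuntimeService in the sample
{
    // Attached properties: the child workflow's root activity never declares
    // these, but once SetValue is called on the instance, the values are
    // persisted along with it.
    public static readonly DependencyProperty ParentWorkflowIdProperty =
        DependencyProperty.RegisterAttached(
            "ParentWorkflowId", typeof(Guid), typeof(CallWorkflowService));

    public static readonly DependencyProperty ParentQueueNameProperty =
        DependencyProperty.RegisterAttached(
            "ParentQueueName", typeof(string), typeof(CallWorkflowService));
}
```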

By embedding this information into the child workflow instance, when a child workflow persists, the data persists with the workflow instance. When the workflow instance is loaded back into a host – the CallWorkflowService can extract that data from the instance and rebuild its local cache of mappings between child and parent workflows.

Now – to make this implementation work there were a couple of issues I had to work around. The first is getting attached DependencyProperties embedded into the child workflow instances. Now – one way to do this would have been to use DynamicChange. Inside of the CallWorkflowService’s StartWorkflow method where the child workflow instances are created using WorkflowRuntime.CreateWorkflow, I could have used the WorkflowChanges object to make a cloned copy of the child workflow and then embed the DependencyProperty values into the instance using Activity.SetValue (which is really DependencyObject.SetValue). In this sample I went to great lengths to use the most efficient means possible to accomplish my tasks. So instead of using WorkflowChanges, I created a custom WorkflowLoaderService (again for other reasons for doing this see my earlier blog post). By creating a custom WorkflowLoaderService I could embed the DependencyProperty values at workflow creation time – avoiding the overhead of DynamicChange.

The next issue of course was how to communicate between the CallWorkflowService and the WorkflowLoaderService. Since the call to WorkflowRuntime.CreateWorkflow doesn’t allow non-Activity parameters to be passed (and the parameters themselves are not passed to the WorkflowLoaderService) I had to come up with an out-of-band way to pass data to the custom WorkflowLoaderService so it would be able to detect when to place these DependencyProperty values into a child workflow (since not all workflows will be child workflows – some will be parent workflows and some presumably wouldn’t be a child or a parent – just standalone workflows in this host). To pass these two pieces of data (the parent’s InstanceId and the queue name) I went with CallContext. CallContext is a useful way to pass data between two methods in the same logical call stack (see TODO for more information about CallContext).

So the CallWorkflowService puts the two needed values into CallContext, and the custom WorkflowLoaderService picks them up from the CallContext (if they are there) and calls Activity.SetValue to embed the data inside of the workflow instance (again this is to make sure the information is persisted if and when the child workflow were to persist).
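Sketched out, the hand-off looks something like this (assuming the custom service derives from DefaultWorkflowLoaderService; the CallContext keys and the attached property names are invented for illustration):

```csharp
using System;
using System.Runtime.Remoting.Messaging;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime.Hosting;

public class CallWorkflowLoaderService : DefaultWorkflowLoaderService
{
    protected override Activity CreateInstance(Type workflowType)
    {
        Activity root = base.CreateInstance(workflowType);

        // The CallWorkflowService put these into CallContext just before
        // calling WorkflowRuntime.CreateWorkflow. If they are absent, this
        // is not a child workflow and we leave the instance alone.
        object parentId = CallContext.GetData("ParentWorkflowId");
        object queueName = CallContext.GetData("ParentQueueName");
        if (parentId != null && queueName != null)
        {
            // Embed the values at creation time -- no DynamicChange needed.
            root.SetValue(CallWorkflowService.ParentWorkflowIdProperty, (Guid)parentId);
            root.SetValue(CallWorkflowService.ParentQueueNameProperty, (string)queueName);
        }
        return root;
    }
}
```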

So the first half of the problem is accomplished – the parent’s instance id and the queue name become embedded inside of the child workflow using attached DependencyProperties by using CallContext to pass the data between the CallWorkflowService and the custom WorkflowLoaderService. Now I needed to solve the second part of the problem – which is to extract the data from instances which are loaded into a host from the persistence store.

My first plan was to build a custom WorkflowPersistenceService and wrap whatever WorkflowPersistenceService the host was originally going to use (like the SqlWorkflowPersistenceService). The rationale I had for trying to make this work was again to make this sample as efficient as possible. When an Activity instance is reloaded by a WorkflowPersistenceService, the WorkflowPersistenceService has access to the raw Activity instance (not just through the WorkflowInstance object).

The other option isn’t my preferred option because it involved calling WorkflowInstance.GetWorkflowDefinition. I’ve blogged about GetWorkflowDefinition before in terms of what it is good for and what it shouldn’t be used for. The reason it isn’t my preferred option doesn’t have anything to do with those issues – but rather with the expense of calling GetWorkflowDefinition. GetWorkflowDefinition makes a cloned copy of the Workflow. This means the Activity instance is serialized into a memory stream and then deserialized into a new object instance (this is what Activity.Clone does). This can be an expensive operation for a medium to large workflow. Obviously GetWorkflowDefinition is a very useful API and you can use it, but it should be used sparingly.

Of course you might be able to tell I ended up using GetWorkflowDefinition. The problem with my first plan was that you can’t use a container pattern with any of the well-known Workflow Services like the TrackingService, WorkflowPersistenceService, or WorkflowLoaderService. The reason you can’t is because the methods you’d need to call on the contained instances are marked “protected internal”. This means a derived instance can’t call the methods necessary to delegate functionality to the contained instance. The WorkflowRuntime of course can call the methods since they are marked “protected internal” and the WorkflowRuntime is in the same assembly as the base class definitions.

So to solve the second half of the problem the CallWorkflowService subscribes to the WorkflowRuntime.WorkflowLoaded event. When a workflow instance is loaded it checks to see if the two DependencyProperties are found inside of the loaded instance. When the properties are found, that indicates to the CallWorkflowService that this is indeed a child workflow, and it uses this information to rebuild its local cache mapping between the parent and the child.
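The rebuild step looks roughly like this (a sketch – the attached property names and the RestoreMapping helper that updates the local cache are invented for illustration):

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

public partial class CallWorkflowService
{
    // Subscribed via workflowRuntime.WorkflowLoaded += OnWorkflowLoaded;
    void OnWorkflowLoaded(object sender, WorkflowEventArgs e)
    {
        // GetWorkflowDefinition clones the activity tree (an expensive
        // operation), so we pay for it exactly once, at load time.
        Activity root = e.WorkflowInstance.GetWorkflowDefinition();

        object parentId = root.GetValue(ParentWorkflowIdProperty);
        object queueName = root.GetValue(ParentQueueNameProperty);
        if (parentId != null && queueName != null)
        {
            // Both attached properties are present, so this instance is a
            // child workflow: rebuild the child-to-parent cache entry.
            RestoreMapping(e.WorkflowInstance.InstanceId,
                           (Guid)parentId, (string)queueName);
        }
    }
}
```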

When a Workflow completes or terminates the CallWorkflowService looks to see if the instance is one that it is interested in (i.e. if it is a child workflow). If it is a child workflow, then it extracts the parent id and the queue name from the instance. Now – in the case of the WorkflowCompleted event, I didn’t mind using the WorkflowDefinition property on the WorkflowCompleted event argument – because the WorkflowRuntime pre-populates the WorkflowDefinition, so my code isn’t calling GetWorkflowDefinition again (i.e. it isn’t adding additional overhead).

That’s pretty much it – if you missed it earlier – feel free to download the persistence friendly CallWorkflow sample here (I’ve replaced the earlier sample with this one – the earlier sample I named HideChildWorkflow – because it also shows how you can use a composite child activity and hide the designer by using a custom designer). Note – to run this sample you’ll need to create a database named CallWorkflowPersistence and configure the SqlWorkflowPersistence tables and stored procs.