Yossi Dahan [BizTalk]


Wednesday, March 29, 2006

Streaming pipeline components and property promotion

This is one of those things that makes perfect sense, but until you see it in action you just can't be sure…

As part of a prototype I wanted to use data from the message to manipulate the behaviour of a send adapter.

To illustrate what I mean, imagine the common scenario where there's a send port that uses the file adapter and you want to control the name of the file created.

Typically you'd configure the file send port to use the %SourceFileName% macro - and update the ReceivedFileName context property with the name you require.

Now - imagine you can't rely on the receive port to do the promotion (update ReceivedFileName) but have to do it in custom code in the send pipeline for some reason (which is irrelevant for this discussion).

You would rightly want to implement your pipeline component in a streaming fashion (as we all do, all the time….right?!)

So I did, and in fact, other than a couple of really foolish mistakes I made, writing the component was a relatively easy task, and it worked pretty much the first time.

For my implementation I chose to use the XPathMutatorStream.

Normally it is used to update nodes in a streaming fashion; although I did not need to manipulate the message at all, I chose to use it because it has a great built-in mechanism that raises an event when specific locations in the message are read, which was very useful for me.
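To give you an idea of what this looks like, here's a minimal sketch of the Execute method of such a component. The XPath, the element names and the component shell are illustrative assumptions, not my actual code - the point is wrapping the body stream in an XPathMutatorStream and promoting the value from inside the mutator callback:

```csharp
// Sketch only - assumes a class implementing IComponent; the XPath below
// and the "FileName" element are made-up placeholders.
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.BizTalk.Streaming;
using Microsoft.BizTalk.XPath;

public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    Stream original = message.BodyPart.GetOriginalDataStream();

    // Fire the mutator callback whenever this node is read from the stream.
    XPathCollection paths = new XPathCollection();
    paths.Add("/*[local-name()='Order']/*[local-name()='FileName']");

    XPathMutatorStream wrapped = new XPathMutatorStream(
        original,
        paths,
        delegate(int matchIdx, XPathExpression expr,
                 ref string origValue, ref string finalValue)
        {
            // We don't change the message - just promote the value we saw.
            finalValue = origValue;
            message.Context.Promote(
                "ReceivedFileName",
                "http://schemas.microsoft.com/BizTalk/2003/file-properties",
                origValue);
        });

    message.BodyPart.Data = wrapped;
    context.ResourceTracker.AddResource(wrapped);
    return message;
}
```

Note that nothing here reads the stream - the promotion only happens when whatever is downstream pulls the bytes past the matching node, which is exactly the behaviour discussed below.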

Anyway - running this component in the receive pipeline worked like a charm - the property got promoted and the send adapter delivered the message correctly.

Running the same component with the same configuration (yes - with the same message...), but in the send pipeline, however, did not produce the expected results; in the file adapter example, the original filename is used rather than the value from the message.

It appears the context property does not get set.

Well - actually - I suspect it does, but too late.

With the nature of streaming pipeline components, the promotion will only happen when the stream reaches the relevant point in the message - which is only after the adapter (unless you have a non-streaming pipeline component further down the line) has read all the bytes of the stream up to that point.

With most adapters this means it will happen too late, as adapters will usually open whatever connection they need before reading the stream (in an ideal scenario the adapter would not read the stream itself at all but leave it for the ultimate recipient). So they need any information regarding the connection up front, before the stream is read.

In my case it meant that only after the adapter had started writing the stream to the file (and therefore reading it from the component) did the filename property get promoted - way after the adapter accessed it to determine the filename to use.

Friday, March 24, 2006

Pipeline components settings in ports

Recently I've been making heavy use of the much-improved ability to reconfigure pipeline component settings at the port level.

And just in case someone still finds it new, here is something you don't want to miss -

When you create a pipeline in the pipeline designer, you set values for the various properties of any components you use. That's the usual process.

However - you can actually change these settings for a specific port after the pipeline has been deployed (in "Admin mode").

In BizTalk 2004 you could do this by programmatically setting a configuration XML to replace the deployed settings, which usually required writing some code or script (in fact, Jon Flanders has one right here).

In BizTalk 2006 this has become much easier - in the port, right next to the combo box that lets you select the pipeline (which has been improved in its own right and now displays the short name first) you will find a nice little magic button.

Clicking on this button magically opens a property editor that allows you to change every single property of every pipeline component you have in the selected pipeline. Magnificent!

One thing you'd notice if you go there is that the property bag feels a little different from the one you get in the pipeline designer.

First of all - the names of the properties are the names given to the keys in the key-value pairs of the component's property bag (as opposed to the names of the class properties, which are used in the designer).

Secondly - any special instructions added to the property, such as a category or a specific designer (such as the file browser or the schema selector used mainly by the disassembler components), are ignored.

I suspect this is because it is using the property bag directly and not the pipeline component class to interact with the configuration.

So - pay attention next time you're naming a property when writing it to the property bag, and validate the information received from the property bag - don't rely on any property designer logic.
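To make that concrete, here's a hedged sketch of the Load/Save pair of a component's IPersistPropertyBag implementation. "FileNameXPath" is an illustrative key name, not from any real component - the point is that this key, not the .NET property name, is what the per-port property grid shows, and that validation has to happen here because none of the designer attributes apply:

```csharp
// Sketch only - part of a class implementing IPersistPropertyBag.
using Microsoft.BizTalk.Component.Interop;

public string FileNameXPath { get; set; }

public void Load(IPropertyBag propertyBag, int errorLog)
{
    object val = null;
    try
    {
        // This key name is what shows up in the port-level property grid.
        propertyBag.Read("FileNameXPath", out val, 0);
    }
    catch (System.ArgumentException)
    {
        // Property not present in the bag - keep the default value.
    }

    if (val != null)
        FileNameXPath = (string)val; // validate here - no designer will help you
}

public void Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
{
    object val = FileNameXPath;
    propertyBag.Write("FileNameXPath", ref val);
}
```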

Another point I'd like to make is that, although this is an extremely useful feature - especially for development and test, but possibly also when debugging - I personally don't like to rely on it when designing a solution.

In my opinion it may cause confusion when coming back to the solution to add (or fix) logic, as it is not immediately clear which settings take effect. If you don't remember to check whether any properties have been overridden at the port level (maybe by another team member, or the BizTalk admin) you might end up looking at the pipeline designer and not understanding why it behaves differently than expected.

A second point to remember is that if you use a single pipeline for different purposes, you will not easily be able to distinguish between them when looking in HAT, and, by the same principle, you will not be able to configure different tracking options, as it is all one pipeline.

So - in my opinion - if the settings are different by design, the pipelines should be separated by the stabilization phase.

Tuesday, March 21, 2006

Sometimes dynamic send ports

I think you'll all agree that the most common scenario when configuring send ports is to configure the transport details in advance (static send ports).

This is pretty much at the heart of configuring any BizTalk scenario.

Then, for more dynamic cases, we have the ability in orchestrations to use role links to dynamically select which send port(s) should be executed, or we can even use dynamic ports to provide the transport details at runtime.

Surely we've got all our corners covered.

Alas - these dynamic options come with a cost, mainly around performance.

What if we have a messaging-only scenario? Invoking an orchestration just to use dynamic ports is overkill. Also - what if in most cases the address is known, and it is only on rare occasions that the dynamic ability is required? Aren't dynamic ports more expensive than static ones? Should we have two ports - one static and one dynamic?

Well, luckily we don't have to. We can configure our send port to use the frequently used address - say http://some-url.com - and in the send pipeline, after executing whatever logic is required to decide whether the message should be sent to another address (and to which one), promote the new address - say http://some-url2.com - to the system context property "OutboundTransportLocation" under the http://schemas.microsoft.com/BizTalk/2003/system-properties namespace.

At least with the HTTP and FILE adapters this will instruct the adapter to send the message to the provided address (I did not check other adapters, but this should be correct for most adapters that implement dynamic sends).
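The promotion itself is a one-liner. A minimal sketch, assuming you're inside a send pipeline component with the message in hand (the decision logic and the alternate URL are placeholders):

```csharp
// Sketch only - called from a send pipeline component's Execute method.
using Microsoft.BizTalk.Message.Interop;

private void RedirectIfNeeded(IBaseMessage message, bool sendElsewhere)
{
    if (!sendElsewhere)
        return; // leave the port's statically configured address in place

    // Overrides the address the adapter will use for this message.
    message.Context.Promote(
        "OutboundTransportLocation",
        "http://schemas.microsoft.com/BizTalk/2003/system-properties",
        "http://some-url2.com");
}
```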

Things to note:

This has to be done in the send pipeline; if you promote this property in the receive pipeline you will notice it gets overwritten, and by the time the message gets to the send port the transport details of the send port are promoted instead.

When you look at the message flow in HAT, the log will still show the message as going to the address specified in the adapter, although in reality it was sent to the address specified in the message's context. (I guess this is a scenario MS should address, as it kind of makes this whole solution bad practice from a manageability perspective.)

Sunday, March 19, 2006

Failed Message Routing

I really like those tick boxes on the receive and send ports that allow the generation of error reports on failures.

There are quite a few posts about this feature, so I won't go into details, but generally it allows messages that failed in the receive (or send) ports to be published to the message box with additional context properties indicating the error (while not fulfilling any of the "good" subscriptions).

This allows the system to automatically handle error scenarios and basically extends the option we had in BizTalk 2004 to subscribe to NACKs.

I believe the most common use for this is to handle messages with routing failure.
These can generally happen when there was a problem in the receive pipeline (or indeed in the received message) or when a service is not enlisted.
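For reference, the subscription side is just a filter on the error-report context properties. A typical send port filter for picking up these messages would look something like this (the exact property names come from the ErrorReport namespace; check the documentation for the full list):

```
ErrorReport.ErrorType == FailedMessage
```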

The problem with this is that although it allows you to "take ownership" of the problem, in a sense, and handle it as needed, it still logs an error into the event log, which - at least in large organizations with well-implemented monitoring solutions - will trigger a whole mechanism of alerting and handling the error scenario.

This has led a lot of implementations to create a sort of dead-end subscriber - in the form of an orchestration, a pipeline with a consuming pipeline component (which requires configuring transport settings for an adapter that will never be used), or even a custom "consuming" adapter - in order to ensure the message has at least one valid subscriber and to stop errors from being logged. In any case this is an additional, largely pointless object which requires development (and maintenance) and makes for a much less clean architecture.

I wish MS had a setting that allows us to decide whether an error should be logged, ideally letting us set it at runtime through our error handling routines - maybe, in the routing case, logging an error only if there were no subscribers to the published error message.