Yossi Dahan [BizTalk]


Friday, October 07, 2011

New Beginnings

In February this year I joined the ranks of Microsoft UK as a technical pre-sales guy, after working as an independent consultant for about 7 years (and as a developer of sorts for several years before that).

Prior to joining Microsoft I had been working almost exclusively with BizTalk, from the early days of BizTalk Server 2000 all the way through to BizTalk Server 2010, so it made perfect sense for me to join the ‘Application Platform’ team as a technical specialist responsible for BizTalk Server.

One of the (many) reasons that led me to join Microsoft is my belief that the IT world, and with it many businesses, is going through a big shift as cloud technologies mature and become mainstream. It became clear to me that it was time for a change; I had been doing pretty much the same thing for a long time and wanted to do something new, and I could not think of a more interesting space to be in than cloud computing, or a better company in that space than Microsoft. I really wanted to get closer to Windows Azure.

It turns out I played my cards right: shortly after I joined (as an integration technical specialist), the Application Platform team in which I work took ownership of Windows Azure in Microsoft UK, and I was asked to work with our enterprise customers to help them leverage the platform.

From a Microsoft perspective this change of ownership simply reflects the fact that the technology has matured and is now absolutely ‘mainstream’, and of course highlights that it is an integral part of Microsoft’s Application Platform.

From my perspective it is a huge opportunity to work with a fantastic technology and help companies harness the cloud for their success.

And so, with this new start, I decided it was time to start a new blog; as my work with Sabra Ltd and my ramblings at blog.sabratech.co.uk are closely associated with my previous persona, it could otherwise get quite confusing.

It is hard to tell how this new blog of mine will shape up, how often I will get to write and what it will be about; only time will tell. My hope is that, as my role as a technical pre-sales guy sits firmly between business, IT and development, I will get to cover many angles of this cloud thing.

Tuesday, June 21, 2011

BizTalk and the MOS protocol

In a meeting a few weeks ago the question of how to support the MOS protocol with BizTalk came up.

The MOS protocol, used in the media industry, has two flavours -

  • Versions 3.x are implemented as ‘proper’ web services
  • Versions 2.x are implemented as ‘xml over TCP’

As the former is a no-brainer for BizTalk, I wanted to look at what it would take to support the latter -

The protocol (like pretty much any protocol) defines four elements –

  1. Message Format
  2. Message exchange patterns
  3. Profiles/Capabilities
  4. Communication protocol

Message Format

The message format for MOS is XML, and the schema is available from the MOS protocol web site.
I have downloaded the schema and successfully used it from BizTalk with no issues – so creating and parsing MOS messages is pretty much effortless.

One observation I have made, though, is that the authors decided to define one schema with a <mos> root element, under which one can use one of a number of supported elements; this means that, in essence, there is only one MOS message, and it is the child element that defines what the request or response actually is. This, in turn, means a little more effort is required from participants, such as BizTalk Server, to identify received messages.

In BizTalk that would probably mean a custom disassembler that wraps the out-of-the-box XML Disassembler and overwrites the message type with the correct one (based on the child element rather than the root element); this is a fairly simple component to write, so supporting the MOS format is pretty simple.
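To make that concrete, here is a minimal sketch of what such a wrapping disassembler could look like. It is illustrative only: the MOS target namespace shown is a placeholder, the IBaseComponent/IComponentUI/IPersistPropertyBag plumbing every real pipeline component needs is omitted, and it assumes the body stream returned by the XML Disassembler is seekable (in a real component you would wrap it in a ReadOnlySeekableStream).

using System.Xml;
using Microsoft.BizTalk.Component;            // XmlDasmComp
using Microsoft.BizTalk.Component.Interop;    // IDisassemblerComponent, IPipelineContext
using Microsoft.BizTalk.Message.Interop;      // IBaseMessage

// Sketch only: the IBaseComponent / IComponentUI / IPersistPropertyBag plumbing
// required of a real pipeline component is omitted for brevity.
public class MosDisassembler : IDisassemblerComponent
{
    // Placeholder namespace - use the target namespace of the real MOS schema.
    private const string MosNamespace = "http://example.org/mos";
    private const string SystemPropertiesNs =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    // Delegate the actual work to the out-of-the-box XML Disassembler.
    private readonly XmlDasmComp inner = new XmlDasmComp();

    public void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        inner.Disassemble(pContext, pInMsg);
    }

    public IBaseMessage GetNext(IPipelineContext pContext)
    {
        IBaseMessage msg = inner.GetNext(pContext);
        if (msg == null) return null;

        // Peek at the first child of the <mos> root to work out what the message really is.
        // Assumes a seekable stream; wrap it in a ReadOnlySeekableStream otherwise.
        var stream = msg.BodyPart.GetOriginalDataStream();
        long position = stream.Position;
        string childName = null;
        using (var reader = XmlReader.Create(stream, new XmlReaderSettings { CloseInput = false }))
        {
            reader.MoveToContent();   // positions the reader on the <mos> root element
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                {
                    childName = reader.LocalName;
                    break;
                }
            }
        }
        stream.Position = position;   // rewind so downstream components can still read the body

        if (childName != null)
        {
            // Overwrite the message type promoted by the XML Disassembler (namespace#mos)
            // with one based on the child element instead.
            msg.Context.Promote("MessageType", SystemPropertiesNs, MosNamespace + "#" + childName);
        }
        return msg;
    }
}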

Message Exchange Patterns

My understanding of the MOS protocol is that the message exchange pattern is quite a simple request-reply pattern over the same connection (which, I believe, gets terminated after each request), and as such it is very easy to support with BizTalk.

The only complicating matter here is that each request needs to include a contiguously incremented message id.
This can be done using ordered delivery and/or a convoy pattern with a component, either in an orchestration or in a pipeline, backed by a database, that injects the correct message id into each request; technically this should not pose a problem and is not difficult to implement (certainly no more difficult than it would be in any other MOS client or server implementation).
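For illustration, the database-backed counter could be as simple as the sketch below; the MosCounters table (CounterName, LastId) and the class name are assumptions, and the value it returns would be stamped into the outgoing <mos> message by the orchestration or pipeline component sitting behind the ordered-delivery send port.

using System.Data.SqlClient;

// Rough sketch of a database-backed counter handing out contiguous MOS message IDs.
// The MosCounters table (CounterName, LastId) is an assumption - any atomic counter will do.
public static class MosMessageId
{
    public static int GetNext(string connectionString, string counterName)
    {
        const string sql =
            @"UPDATE MosCounters
              SET LastId = LastId + 1
              OUTPUT inserted.LastId
              WHERE CounterName = @name;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@name", counterName);
            connection.Open();
            // The UPDATE is atomic, so concurrent callers still receive unique, sequential IDs.
            return (int)command.ExecuteScalar();
        }
    }
}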

The problem I have with that is more one of principle: requiring ordered delivery, which is the implication of such a requirement, really affects the throughput that can be achieved, as requests have to be serialised, and I wonder whether there is a good enough justification for that in this case.

Rant over; this is the protocol's requirement, and the good news is that it shouldn't be difficult to achieve with BizTalk.

Profiles/Capabilities

The MOS protocol defines sets of messages that need to be supported for each device profile; this doesn't affect the core design of the solution on top of BizTalk, only which messages should be expected and, obviously, coded against.

If the handling of the messages is to be done in BizTalk then typically I would expect an orchestration for each message type in the supported profiles; if BizTalk only relays the messages, then ports with schemas and transformations might be all that's needed.

Communication Protocol

The communication protocol is a simple TCP/IP socket, with the expectation that a server would listen on a couple of ports and, presumably, be able to handle requests from multiple clients, but only one at a time on a single connection, with the conversation ending with every response.

This really is an over-simplified approach, but that also means it is not challenging to implement and, whilst there is no TCP/IP adapter for BizTalk out of the box, there are several approaches one can take -

Community BizTalk TCP/IP Adapter

The open-source adapter on CodePlex can receive and send messages to and from BizTalk over TCP sockets and supports one-way, request-response and even duplex communication, with extensive configuration options.

The adapter was even designed to support ordered delivery which may come in very handy if the message id requirement described above is to be adhered to.

The adapter was initially developed for BizTalk 2006 and then adapted for BizTalk 2009; to make it work with BizTalk 2010 I downloaded the source code and compiled it locally (I had to remove the post-build events that registered the assemblies in the GAC, and so I had to register the assemblies in the GAC myself later on when updating the adapter's code).

I also had to update the check, introduced in the setup project's installer, that confirms the correct version of BizTalk is installed -

[screenshot: updated BizTalk version check in the setup project]

Once that was done I was able to compile and run the installer, which allowed me to add the adapter in the BizTalk Admin Console -

[screenshot: the TCP adapter added in the BizTalk Administration Console]

I then followed the instructions for testing the adapter in the supplied user guide to verify it was working as expected, which it did, and so using the TCP adapter with BizTalk 2010 is pretty straightforward.

There is one issue with the adapter with respect to the MOS protocol, though: the adapter has been designed to expect, quite rightly, some framing around the messages exchanged, indicating the beginning and end of each message, but the MOS protocol defines no framing; it simply assumes that over a single connection there will be one request and up to one response.

Configuring a receive location without framing characters resulted in an error when I tried to enable it: “The Messaging Engine failed to add a receive location "Receive Location1" with URL “…." to the adapter "TCP". Reason: "20: The property '/Config/frameStartDelimiter' is missing from the configuration.".”

Looking to solve this I found I needed to make a few changes. The adapter provides an XSD schema with its configuration options, used by the admin console to ‘render’ the settings pages, and this schema needed to be updated to indicate that the framing elements are optional.
Following from that, there were a few places in the code referring to those values where support for a potential null value needed to be added, to avoid an ‘object reference not set to an instance of an object’ error, which is fair enough.

The biggest effort had to be made in the piece of code that actually reads the message from the buffer, as this loops through the bytes received looking for the framing bytes; still, a ‘quick and dirty’ version of these changes took a few cycles of code-and-test, and within a couple of hours I had a working adapter with support for messages with no framing, able to send and receive MOS messages over TCP sockets.
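For what it's worth, the essence of the 'no framing' change is to stop hunting for delimiter bytes and simply read until the sender signals it has finished, which over a raw socket means closing (or half-closing) its side of the connection. Outside of the adapter's plumbing, that read loop boils down to something like this sketch:

using System.IO;
using System.Net.Sockets;

public static class UnframedTcpReader
{
    // Reads an entire unframed MOS message; the peer signals 'end of message'
    // by closing, or half-closing, its side of the connection.
    public static byte[] ReadToEnd(NetworkStream stream)
    {
        using (var buffer = new MemoryStream())
        {
            var chunk = new byte[8192];
            int read;
            // Read() returns 0 only once the remote side has shut down its half of the socket.
            while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
            {
                buffer.Write(chunk, 0, read);
            }
            return buffer.ToArray();
        }
    }
}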

Custom WCF Channel

As an alternative to using a full-blown BizTalk adapter, and to having to understand the adapter framework APIs, one could write a custom WCF channel to support the simple TCP socket communication required by MOS.

This, I suspect, is not the typical code one writes (is there such a thing?), but someone who's comfortable with socket programming and familiar with the WCF programming model shouldn't find it too difficult, and the good thing is that no BizTalk knowledge is required and the custom channel can be used from any .net application, not just BizTalk.

In my view, if the TCP adapter did not already exist, this would probably have been the best way forward, as it requires less investment and less BizTalk-specific knowledge, and yields a solution that is usable both within and outside BizTalk.

However, as the adapter does exist, and could do the job with minimal effort in adaptation, writing a custom channel is probably not necessary.

Custom LOB adapter SDK implementation

In between the two options mentioned above sits the ability to develop a custom adapter based on the LOB Adapter SDK.
Developing such an adapter does require some knowledge of the framework and of BizTalk, but significantly less than would be required to build a full-blown adapter from scratch; in addition, such an adapter is re-usable from any .net client, and the programming is focused a lot more on the WCF programming model than on anything BizTalk specific.

The main benefit an adapter provides over a custom WCF channel, in my view, when the communication is fairly simple, is the developer experience: the browsing of the ‘target system’, the generation of code artefacts, etc.; when there's a lot to provide through the adapter this can easily be worthwhile.

However, in the case of the MOS protocol, unless one plans to encapsulate a lot of the protocol knowledge in the adapter, which I don't think makes sense in this context, I don't think there's a strong enough justification for developing a custom adapter, as there's nothing really to choose from or generate in Visual Studio.

Taking all of the above into consideration, I would say that in order to support the basic TCP socket communication required for the MOS protocol I would customise and use the community TCP adapter; as an alternative, if that is not desired, I would develop a custom WCF channel.

Conclusion

The MOS protocol is quite a simple protocol to implement (in fact – it might just be a bit too simple) and doing so on top of BizTalk, as expected, is not difficult.

Some investment in adapting the TCP adapter to suit this protocol is required, but this would not be significant, and could be done generically. Everything else is pretty much a day in a BizTalk developer's life.



Tuesday, April 19, 2011

From the phone through the cloud and into BizTalk on my laptop

Last week I sat down to prepare a demo for an ‘application infrastructure’ workshop I’m running next week in which I wanted to demonstrate exposing a BizTalk WCF receive location accessed through the Windows Azure AppFabric Service Bus.

Granted, with the BizTalk Server 2010 Feature Pack released last October, this is merely a case of running the Publish WCF Service Wizard, but I thought it would be a cool demo to run nonetheless.

However – as I was thinking about the scenario for my demo I realised there would have to be a great deal of ‘trust me – I’m not calling the service directly on my computer, I’m really using the cloud’, which is ok, but I wanted to do better.

And then I realised: Microsoft have kindly kitted me out with a cool little phone which runs .net; what if I threw this into the mix?! So I decided to build a scenario where I'd use an app, on my phone, to call a BizTalk instance, on my laptop, through the Windows Azure AppFabric Service Bus. How cool is that?! Well, here's what I needed to do -

It starts off very simply: I created my BizTalk scenario, which in my case consisted of an XML schema, a flat file schema, a map between the two, and a send pipeline with a Flat File Assembler.

Setting up a simple receive port and a send port configured with the map and the pipeline, I could drop an xml file in one folder and get a flat file in the other. I configured the file send adapter to append records, so as I dropped more xml files my flat file grew bigger. Nice start.

Put together, it looks like this -

[diagram: receive port and send port with the map and the flat-file send pipeline]

The next step was to publish my xml schema as a WCF service so that a client could call it directly, and I did this, as you'd expect, by running the BizTalk WCF Service Publishing Wizard. However, as I had the BizTalk Server 2010 Feature Pack installed, it was an updated wizard, with added support for exposing BizTalk through the Windows Azure AppFabric Service Bus.

I ran through the wizard normally, but made sure to tick the box indicating I wish to use the AppFabric Service Bus -

[screenshot: wizard page with the AppFabric Service Bus option ticked]

That meant that later on in the wizard two additional steps were added to capture the Service Bus details. In the first I set the relay binding I wish to use and the URL scheme; normally I would use the netTcpRelayBinding and the sb:// URL scheme, but as I plan to consume this from Silverlight, which sadly does not currently support either, I had to stick with basic HTTP.

I am also asked whether I wish to enable discovery and metadata; in this case I do, so I ticked both boxes, but these are of course optional, and in many cases I would not want to have them enabled (especially if I have security turned off).

[screenshot: wizard page capturing the relay binding, URL scheme, discovery and metadata options]

The next step captures the security configuration for the endpoint, namely the issuer name and key; again, because I'm planning to use a Silverlight client, I can't use client authentication, so I disabled it for both the endpoint and the metadata endpoint. This does mean that in its current state this would not be fit for production use, but it's ok for my demo; for production I would have to ensure other security mechanisms are added to protect my service, which is now effectively enabled for anonymous use.

[screenshot: wizard page capturing the Service Bus security configuration]

With these two extra steps done the wizard completes and the web service is published to the location specified on my local IIS.

It is worth noting, though, that at this point nothing has really happened in Azure. On the flip side, the BizTalk team have done something really nice here: if you look at the receive location you will see that it uses the standard WCF-BasicHttp adapter, and if you look at the config file generated for the local service you will see no mention of the relay binding I selected at all -

[screenshot: the generated receive location using the WCF-BasicHttp adapter]

That also meant there was no special configuration in the binding either -

[screenshot: the receive location's binding configuration, with no relay-specific settings]

However, if you looked a bit deeper and opened the web.config of the generated WCF service, you would find that the wizard added two additional endpoints: one for the relay binding and one for the cloud mex endpoint -

        <endpoint name="RelayEndpoint" address=[cloud url here] 
binding="basicHttpRelayBinding" bindingNamespace=[namespace here]
bindingConfiguration="RelayEndpointConfig" behaviorConfiguration="sharedSecretClientCredentials"
contract="Microsoft.BizTalk.Adapter.Wcf.Runtime.ITwoWayAsyncVoid" />
<endpoint name="MexEndpoint" address=[mex url here]
binding="ws2007HttpRelayBinding" bindingNamespace=[namespace here]
bindingConfiguration="RelayEndpointConfigMex" behaviorConfiguration="sharedSecretClientCredentialsMex" contract="IMetadataExchange" />



These are on top of the usual behaviour of such published services, which is to expose an endpoint derived from the configured receive location; so effectively the wizard had created an on-premise endpoint which I can call directly, as well as a cloud-facing endpoint and the equivalent mex endpoints. Of course these are controlled through the choices made in the wizard, but if you've looked at the receive location generated and wondered where the relay binding is: now you know!
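Just to make the end-to-end flow concrete: once the relay endpoint is registered, any HTTP-capable client, the phone app included (using the asynchronous variants of the same APIs), can POST a SOAP envelope to the cloud-facing address. The sketch below is illustrative only; the address, SOAPAction and body are placeholders, and the real values come from the WSDL/mex of the published service:

using System.IO;
using System.Net;
using System.Text;

public static class RelayClient
{
    // Posts a SOAP envelope to the cloud-facing relay endpoint.
    // The address, SOAPAction and body are placeholders - take the real values
    // from the WSDL / mex endpoint of the published BizTalk service.
    public static string Submit(string relayAddress, string soapAction, string bodyXml)
    {
        string envelope =
            "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<s:Body>" + bodyXml + "</s:Body></s:Envelope>";

        var request = (HttpWebRequest)WebRequest.Create(relayAddress);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers["SOAPAction"] = soapAction;

        byte[] payload = Encoding.UTF8.GetBytes(envelope);
        using (var requestStream = request.GetRequestStream())
        {
            requestStream.Write(payload, 0, payload.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();   // the SOAP response from BizTalk, if any
        }
    }
}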



Another point worth mentioning: for some reason, on my first attempt I kept getting an error when browsing to the local service suggesting that the receive location was not enabled, despite the fact that it was. I checked permissions carefully, and various other things, and eventually concluded something had gone wrong, so I tried again, and got the same result.



In the end I used a different name for the service in the wizard and that did the trick, but I could not consistently re-create the problem so I can't comment on what exactly it was. All I would say is: if you're publishing a service and keep getting the receive-location-disabled error, consider using a different name!



Ok, back to the main story. As I've said, with the service published I thought I could call the cloud endpoint and reach my BizTalk server, but browsing to my cloud service revealed no endpoints (despite my asking to include them in the Atom feed) -



[screenshot: the cloud service's Atom feed showing no endpoints]



The reason is that, for security reasons amongst other things, the onus is on the on-premise endpoint to let the cloud know it is available and to open the bi-directional connection between itself and the Service Bus; as publishing a service does not actually execute anything, publishing alone is not enough.



To get things going I simply browsed to my on-premise service, got the usual test-instructions page, and then refreshed my cloud Atom feed page; this time my service was listed correctly -



[screenshot: the cloud service's Atom feed now listing the service]



and in it were my two cloud-facing endpoints -



[screenshot: the two cloud-facing endpoints listed in the feed]




For production, a way to avoid having to manually browse to the service is to have AppFabric installed and use its AutoStart feature.
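If installing AppFabric is not an option, even a scheduled warm-up call will do the same job as browsing manually; a trivial sketch (the local .svc address is of course specific to your deployment):

using System.Net;

public static class ServiceWarmUp
{
    // Issues the same GET that browsing to the local .svc page would,
    // forcing the service host to spin up and register with the Service Bus relay.
    public static void Activate(string localServiceUrl)   // e.g. the local .svc address of the published service
    {
        using (var client = new WebClient())
        {
            client.DownloadString(localServiceUrl);        // the response content is irrelevant; activation is the point
        }
    }
}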



One last gotcha I stumbled across that's worth noting: if you followed all the steps above, but when browsing to the local service received an error along the lines of "The socket transfer timed out after 00:00:00. You have exceeded the timeout set on your binding. The time allotted to this operation may have been a portion of a longer timeout.", the most likely reason is that your app pool user (which in my case is a dedicated user I usually create to run BizTalk hosts) is unable to access the internet and so cannot contact the Windows Azure Service Bus to create the bi-directional connection; set the right permissions and that problem will go away.
