Yossi Dahan [BizTalk]


Saturday, November 29, 2008

Calling a service without adding a reference in BizTalk 2006 and BizTalk 2006 R2

We’ve been experimenting with calling ASMX web services from orchestrations without having to add a web reference (for the SOAP adapter) or use the generated items (for the R2 WCF adapter).

The idea, in short, is to achieve increased decoupling between systems even in a web service scenario -

Generally, when you add a reference to a service in BizTalk 2006 or in R2 (although there are some clear differences between the two implementations), the schemas for the request and response types are generated for you, as well as an orchestration which defines message and port types using those schemas.

When using the SOAP adapter the types generated are somewhat “special” and they encapsulate a little bit of black magic; luckily the WCF adapter which shipped with R2 is much better in the sense that there’s nothing special about any of these artifacts (which also explains why it is now “Add Generated Items” and not “Add Service Reference” – as this is all it’s doing).

What this means is that if you follow the path BizTalk leads you through, you will get all these artifacts in the same assembly as your orchestration, which means you are now tightly coupled to the web service contract; not the end of the world, but if you want to stay true to the idea behind BizTalk - in which your processes are masked from changes in the other applications - you have to play pretend a little bit.

We thought that if we had the web service schemas in a separate assembly, and our process only used its own representation of the data (which would, ideally, be less than the entire data provided by the mostly generic web service), we could then map between the two in the port rather than in the orchestration; that would mean that if the web service changes, all we need to do (in theory, at least) is re-deploy the assembly with the service's schemas and the map.

 

So – how I went about doing that with the WCF adapter -

Following best practice I had an assembly to hold all of “My” schemas – these are the ones describing entities in my domain.

I then created an orchestration assembly to contain my orchestration, which references the schemas assembly; the orchestration assembly has no other dependencies.

I then created a third assembly to include all the types for the service - I went through the “Add Generated Items” wizard to get all the artifacts, but I only really used the schemas (and not the message or port types); this assembly, like the schemas assembly, has no dependencies.

I then created a fourth assembly to hold the mapping between my schemas and the service's schemas; naturally this assembly references both schema projects but, crucially, nothing references it.

So – at the end of this we get the following -

 

[image: the four assemblies and the references between them]

I then imported the send port bindings generated by the wizard to create the send port; I could have quite happily created it from scratch, as there's nothing special in that port (with the exception of one point, discussed next), so this was really just to save me some time. I then added the two maps I'd created: one mapping the process output format to the service request, the other mapping the service response to the process input format.

Goal achieved – the process knows nothing about the service – all is done externally to the process through port configuration.

But did it work? Almost - running this scenario I received a soap fault from the service complaining about a misunderstood soap action; makes sense, I thought – how would BizTalk know which service operation I wanted?

Well, the WCF adapter has a very nice way to figure out the soap action to use (in my view) – as part of the port configuration there’s a bit of xml that provides mapping between an orchestration send port’s operation name and the required soap action; the setting looks something like this -

 

[image: the BtsActionMapping entry in the WCF send port's SOAP action header setting]

In the generated port type the operation name matches the operation name in the service description (“HelloWorld”, in my example), which, in turn, is mapped through this xml to the relevant soap action; as I did not use the generated types, the operation name did not match – I had simply left it as the default “Operation_1” (naturally…), which meant that when the request came through, the adapter failed to find a matching operation.

Somewhat annoyingly, when the adapter can't resolve the name it assumes that the entire setting should be used as the soap action, and so the whole xml was written to the header. This behaviour exists to let you specify a fixed header to use, but I think the experience could be a bit better: they could have had two different settings, or at least realised that if I've put a BtsActionMapping xml in there I do not intend for it to be used as the header itself(!), and suspended the request when no matching entry was found rather than sending it out incorrectly. Nevertheless, the operation could not be resolved, of course, and the service returned a soap fault.

Fixing the issue was easy and simply meant adding the correct entry to the xml and running the scenario again; this time it completed successfully.
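
For illustration, the corrected entry might look something like this, assuming the service exposes a HelloWorld operation under the default tempuri.org namespace (the action value here is an assumption; take the real one from the service's WSDL) -

<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Operation Name="Operation_1" Action="http://tempuri.org/HelloWorld" />
</BtsActionMapping>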

 

How does that differ using the SOAP adapter?

Using the SOAP adapter the approach was naturally very similar; pretty much the same assemblies, pretty much the same artifacts; there are three key differences though -

For starters, the SOAP adapter requires a proxy; in most scenarios you're using a web port type, which provides the adapter with a proxy, so in most cases you don't have to worry about this at all. I can imagine some are probably not even aware of it, but a send port using the SOAP adapter will have the web service proxy, in the “Web Service” tab of the adapter configuration, set to “Orchestration Web Port”.

Alternatively you can provide a custom proxy class, which is a topic by itself (and you can check it out in Richard Seroter’s post on the topic here), but in most standard cases this is not required.

As I’m not following the “standard” approach I had to create a custom proxy for my send port; I did this by using WSDL.exe and configuring the proxy class in the send port as described in Richard’s post.
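
For reference, generating the proxy class might look something like this (the service URL and file names here are hypothetical) -

wsdl.exe /language:CS /namespace:MyCompany.ServiceProxies /out:HelloWorldProxy.cs http://localhost/HelloService/Service.asmx?WSDL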

In my case, however, unlike Richard's, I did not wish to pre-define the method called in the send port; luckily the configuration allows you to set it to “Specify Later”, which means the method name will be provided per request through the message context (using the SOAP.MethodName property).
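
For example, the method name can be set in a Message Assignment shape in the orchestration; a small sketch, where msgServiceRequest is a hypothetical message name -

// in a Message Assignment shape, on the outgoing request message
msgServiceRequest(SOAP.MethodName) = "HelloWorld";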

Taking the “Specify Later” approach means I don't have to have a send port per method, which is good of course, but pay attention to my note regarding the number of ports in the summary below.

Now that I had the send port and proxy configuration sorted I needed the web service's schemas; I could get those using XSD.exe, adding the output to my service types assembly.
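
One way of doing that (a sketch, assuming the WSDL.exe output above has been compiled into HelloWorldProxy.dll) is to point XSD.exe at the proxy assembly and add the emitted schemas to the service types project -

xsd.exe HelloWorldProxy.dll /outputdir:Schemas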

Last thing – when using the soap adapter you don't generally need an XmlDisassembler in the pipeline; however, if you want BizTalk to be able to run a map it needs a “proper” message type in the context, not the awkward one the SOAP adapter puts there, and so the XmlDisassembler becomes mandatory in this scenario.

Other than that, everything else is pretty much the same.

 

So – to summarise –

Calling a service from a process, without the process knowing ANYTHING about the service implementation, is very easy. The story is slightly better in the WCF adapter case, in my view, but both seem quite reasonable to me.

The only downside to this approach that I could think of so far is that you are likely to end up with as many send ports as you have response output formats -

As far as requests from the orchestrations to the web service are concerned, BizTalk will quite happily pick the right map from a list of configured maps based on the input; so, if process A has one output format and process B has a different output format and they both share the same send port, BizTalk will pick the relevant map to convert either output to the service's request.

On the way back, however, the incoming message (the service’s response) always looks the same, and so BizTalk will have no way of knowing which map to pick from the list.

That means that multiple send ports will have to be created for such cases, so that there's only one map for the service's response. A large number of those may have some impact on the overall performance of the server group, as the number of subscriptions that need to be evaluated increases; what “large” means in this context, and how big the impact is, is not something I could say easily, so I'd suggest doing some benchmarking in your environment if you are concerned.


Tuesday, November 25, 2008

Configuring the Geneva Framework based STS to work with custom UserNamePasswordValidator

It took me a little while (and quite a bit of help from others on this thread) to get to a relatively simple implementation, so I thought I’d summarise the steps I’ve taken –

At the risk of stating the obvious, I would definitely recommend making sure the overall STS scenario works well using Windows authentication before changing it to support custom authentication.

 

Once that's done, change the clientCredentialType on the STS' binding to UserName and establishSecurityContext to false.

      <ws2007HttpBinding>
        <binding name="UserNameAuthentication">
          <security mode="Message">
            <message establishSecurityContext="false" clientCredentialType="UserName"/>
          </security>
        </binding>
      </ws2007HttpBinding>

The equivalent changes need to be made on the client's binding going to the STS. These may not be obvious at first glance – on the client you will have the endpoint representing the RP, using ws2007FederationHttpBinding; inside this binding's configuration you will find the issuer element, which is somewhat similar to an endpoint: it represents the STS' endpoint and as such has a binding (ws2007HttpBinding) and a binding configuration. It is in that binding's configuration that you need to change the credential type; setting it on the wrong binding, as I did initially, will set you back a couple of hours :-)
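
To make this concrete, here is roughly what the client side ends up looking like; the binding and configuration names here are mine -

<ws2007FederationHttpBinding>
  <binding name="RPBinding">
    <security mode="Message">
      <message>
        <!-- the issuer element represents the STS endpoint; its binding
             configuration is where the credential type needs changing -->
        <issuer address="http://localhost:6000/STS" binding="ws2007HttpBinding" bindingConfiguration="StsUserNameBinding"/>
      </message>
    </security>
  </binding>
</ws2007FederationHttpBinding>
<ws2007HttpBinding>
  <binding name="StsUserNameBinding">
    <security mode="Message">
      <message establishSecurityContext="false" clientCredentialType="UserName"/>
    </security>
  </binding>
</ws2007HttpBinding>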

Next, as we’re using username authentication for the STS, a service certificate must be used so that the credentials can be encrypted. This is done through configuration of a service behaviour on the STS service as such:

    <behaviors>
      <serviceBehaviors>
        <behavior name="STSBehaviour">
          <serviceCredentials>
            <serviceCertificate findValue="STS" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName"/>
          </serviceCredentials>
          <serviceMetadata httpGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="true"/> <!-- use for debug only -->
        </behavior>
      </serviceBehaviors>
    </behaviors>

(don’t forget to wire the behaviour to the service...)

As the test certificate I'm using is not valid, I needed to disable validation on the client side; I could not find a way to do this through configuration, as at the client there isn't an endpoint as such for the STS service, just the issuer element in the ws2007FederationHttpBinding, so I've done this in the client code (a temporary measure, for development only!) –

            proxy.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode = X509CertificateValidationMode.None;
            proxy.ClientCredentials.ServiceCertificate.Authentication.RevocationMode = X509RevocationMode.NoCheck;
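
Since the STS now expects UserName credentials, the client also needs to supply them on the same proxy; something like this (the values, of course, are hypothetical) -

            proxy.ClientCredentials.UserName.UserName = "testUser";
            proxy.ClientCredentials.UserName.Password = "testPassword";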

 

The current version of the Geneva Framework, unlike Zermatt before it, does not support the userNameAuthentication element in the serviceCredentials service behaviour. (To be accurate, you can kind of force it to do so, but that's planned to be blocked in the near future, so for all intents and purposes you should not include this element; see more information in the thread mentioned above.)

In order to implement authentication a custom SecurityTokenHandler needs to be added; to do so I created a class that inherits from WindowsUserNameSecurityTokenHandler and overrode the ValidateToken method. Again, several samples of such an implementation exist on the thread, but the idea is to validate the username and password (made available through the SecurityToken parameter of the method) in whatever way you wish and then, ideally, add some claims to the ClaimsIdentityCollection output; this should generally include the identity, authentication method and authentication instant, but you can add whatever you wish.
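
A minimal sketch of such a handler, assuming the CTP's Microsoft.IdentityModel types as referenced in the configuration below (the credential check is hard-coded purely for illustration) -

using System.IdentityModel.Tokens;     // UserNameSecurityToken, SecurityTokenException
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Tokens;  // WindowsUserNameSecurityTokenHandler

public class MyUserNameSecurityTokenHandler : WindowsUserNameSecurityTokenHandler
{
    public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
    {
        UserNameSecurityToken userNameToken = token as UserNameSecurityToken;
        if (userNameToken == null)
            throw new SecurityTokenException("Expected a UserNameSecurityToken.");

        // validate the credentials in whatever way you wish;
        // hard-coded values here for illustration only
        if (userNameToken.UserName != "testUser" || userNameToken.Password != "testPassword")
            throw new SecurityTokenException("Invalid username or password.");

        // return an identity carrying a name claim; ideally you would also add
        // authentication method and authentication instant claims here
        ClaimsIdentity identity = new ClaimsIdentity();
        identity.Claims.Add(new Claim(ClaimTypes.Name, userNameToken.UserName));
        return new ClaimsIdentityCollection(new IClaimsIdentity[] { identity });
    }
}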

To wire the custom handler to the STS service a bit more configuration is required on the STS side -

  <microsoft.identityModel>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler, Microsoft.IdentityModel, Version=0.5.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add type="MyUserNameSecurityTokenHandler, MyUserNameSecurityTokenHandlerAssembly"/>
    </securityTokenHandlers>
  </microsoft.identityModel>

This replaces the built-in WindowsUserNameSecurityTokenHandler with my class, which inherits from it and adds the custom implementation.

Note: I needed to add the definition of this section as such –

<section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=0.5.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>

I hope that makes sense….


Thursday, November 06, 2008

From "Zermatt" to the "Geneva Framework" part II

A couple of days ago I posted about the changes I had to make to allow my custom STS to work with the updated Geneva Framework. There's one more, quite crucial, change that I had to make, which I will try to describe next -

If my understanding is correct (and unfortunately there's every chance in the world that it is not, so if you know otherwise please do comment), the October Geneva SDK has tightened security a little bit around token validation.

I believe that in the previous version of the SDK the RP simply made sure that a token was included with the request and that this token was signed by a party whose certificate exists on the server (and is accessible); the RP did not check which certificate was used to sign the token.

 

As far as I can tell the Geneva Framework SDK now behaves differently - if you execute the same code and configuration you had before (barring the changes necessary to allow the code to compile on the new version, which are mostly name changes) you will get the following error from the RP:

 

"An unsecured or incorrectly secured fault was received from the other party. See
the inner FaultException for the fault code and detail."

 

Basically the client gets a token from the STS and attaches it to the request, but the RP does not recognise the issuer of the token; in order to instruct the RP to accept tokens signed by a particular STS you need to provide it with a list of issuers you accept. This can be done using the following configuration, for example -

 

<microsoft.identityModel>
  <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=0.5.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
    <trustedIssuers>
      <add name="STS" thumbprint="7a0671d475673c1ab131ca1c0c804e4fbd385140"/>
    </trustedIssuers>
  </issuerNameRegistry>
</microsoft.identityModel>

 

This bit of configuration lists all the certificates that are acceptable for signing STS tokens.

 

It is interesting to note that this model is completely extensible - you can define your own IssuerNameRegistry type that would look and behave differently if you have other means of listing trusted issuers; the same can also be done via code, which is the example provided with the SDK - you define a custom IssuerNameRegistry class -

 

namespace ClaimsAwareWebService
{
    public class TrustedIssuerNameRegistry : IssuerNameRegistry
    {
        /// <summary>
        ///  Returns the issuer Name from the security token.
        /// </summary>
        /// <param name="securityToken">The security token that contains the STS's certificates.</param>
        /// <returns>The name of the issuer who signed the security token.</returns>
        public override string GetIssuerName( SecurityToken securityToken )
        {
            X509SecurityToken x509Token = securityToken as X509SecurityToken;
            if ( x509Token != null )
            {
                //Note: This piece of code is for illustrative purposes only. Validating certificates based on
                //subject name is not a good practice.  This code should not be used as is in production.
                if ( String.Equals( x509Token.Certificate.SubjectName.Name, "CN=STS" ) )
                {
                    return x509Token.Certificate.SubjectName.Name;
                }
            }

            throw new SecurityTokenException( "Untrusted issuer." );
        }
    }
}

 

and then when configuring the host for the RP service you provide this as a parameter -

 

FederatedServiceCredentials.ConfigureServiceHost(host, new TrustedIssuerNameRegistry());

 

And while I'm on the subject - as this has sent me going in circles - it appears that the framework is not happy with claim-less tokens, so if you're dumb enough (as I was) to end up not adding any claims (I was adding them based on the requested claims in the incoming request, which, at some point, was empty in my configuration) you will get an error which, after setting the ServiceDebugBehavior, would read "A SamlAssertion requires at least one statement. Ensure that you have added at least one SamlStatement to the SamlAssertion you are creating."

 

I can't decide about this one - does it not make sense to have a scenario in which you just want to get a signed token to indicate that an STS has authenticated the caller, but don't actually need any claims? Not that it's a problem to find at least one claim to add (identity and authentication method are two easy examples), but speaking in principle I'm not yet convinced that not having any specific claim should be an error.


Wednesday, November 05, 2008

Message Creation in BizTalk - solution uploaded

A few weeks ago I published this post about some experiments Randal Van Splunteren and I did around message creation.

Not surprisingly, I was asked to post the solution we used, and so I have uploaded it here

 

Have fun! (let me know if anything's missing or unclear, it's been a while since I ran this...)


Tuesday, November 04, 2008

From "Zermatt" to the "Geneva Framework"

I have already mentioned that Zermatt has been renamed as the "Geneva Framework", which makes total sense.

At PDC Microsoft released a new download for the "Geneva Framework", which I downloaded today to check some of my code against.

While not at all an extensive list, here are the changes I had to make to my code to get it to work with the updated framework -

On the STS:

  • The SecurityTokenService class, which is the base class for any STS implementation, has moved to the main Microsoft.IdentityModel namespace (it formerly lived in its own namespace - Microsoft.IdentityModel.Service)
  • The GetScope method of the SecurityTokenService is now marked as abstract and so has to be implemented (I believe it previously was not abstract, so a base implementation could have been used, either directly or indirectly through an overriding method)
  • ClaimsPrincipal no longer has a 'Current' property; you can create a claims principal from an IPrincipal instance using the CreateFromPrincipal method or from an IIdentity instance using the CreateFromIdentity method.
  • GetOutputSubjects has been renamed to GetOutputClaimsIdentity; the order of the parameters has changed a bit (but they otherwise remain the same) and the return value is now IClaimsIdentity rather than ClaimsIdentityCollection (which, again, makes perfect sense) - see the sketch after this list
  • In the STS service configuration I have changed the bindings from wsHttpBinding to ws2007HttpBinding and the STS contract from IWSTrustFeb2005SyncContract to IWSTrust13SyncContract.
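
To pull the STS-side changes together, here is a minimal sketch of a custom STS against the renamed API; the member signatures are as I understand them in this build, so treat them as indicative rather than definitive -

using Microsoft.IdentityModel;                    // SecurityTokenService now lives here
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;

public class MySecurityTokenService : SecurityTokenService
{
    public MySecurityTokenService(SecurityTokenServiceConfiguration configuration)
        : base(configuration) { }

    // GetScope is now abstract, so it must be implemented
    protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
    {
        return new Scope(request.AppliesTo.Uri.AbsoluteUri);
    }

    // formerly GetOutputSubjects; note the IClaimsIdentity return type
    protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        ClaimsIdentity identity = new ClaimsIdentity();
        identity.Claims.Add(new Claim(ClaimTypes.Name, principal.Identity.Name));
        return identity;
    }
}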

On the RP:

  • ExtensibleServiceCredentials, which is used to configure the RP's host to use the Geneva Framework, is now called FederatedServiceCredentials
  • To get the list of claims in the RP you no longer use something like "(IClaimsIdentity)ClaimsPrincipal.Current.Identity;" but instead check the CurrentPrincipal of the current thread - "IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;" - as shown in the sketch below
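
For example, enumerating the claims inside the RP's service method now looks something like this (a sketch; names as I understand them in this build) -

using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// inside the RP's service method
IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
if (identity != null)
{
    foreach (Claim claim in identity.Claims)
    {
        Console.WriteLine("{0}: {1}", claim.ClaimType, claim.Value);
    }
}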


Sunday, November 02, 2008

Non-Optional Claims in the Geneva Framework

I'm currently doing some work with the Geneva Framework (formerly known as "Zermatt"), which I am very excited about;
with the SOA wave, and now the coming cloud wave, federated identity becomes a crucial component in the enterprise, and it is great to see such a good story for it from Microsoft.

Using the "Zermatt" SDK (I now need to download the updated framework and align with it) I have successfully, and quite simply, managed to create both an active STS scenario and a passive STS scenario, both sharing the same underlying STS code; this was a great experience and I hope to post some more details over the next few days.

I was, however, a little bit surprised by the behaviour of the framework around non-optional claims -

 

In my scenario the RP (=relying party, the service the client actually wants to call) indicates through its configuration that it requires specific (custom) claims, which are not optional -

<security mode="Message">
  <message>
    <claimTypeRequirements>
      <add claimType="http://myCompany/claims/someClaim" isOptional="false"/>
      <add claimType="http://myCompany/claims/someOtherClaim" isOptional="false"/>
    </claimTypeRequirements>
    <issuer address="http://localhost:6000/STS"/>
    <issuerMetadata address="http://localhost:6000/STS/mex"/>
  </message>
</security>

 

When the client adds a web reference to this service it is correctly configured with the STS details and the required claims (not posted here; I will try and describe my scenario in detail in a separate post), and so when it calls the service, WCF ensures it first hits the STS, requesting the claims indicated in the config.

You probably all know that when thinking about any aspect of security in WCF the story is very “tight”, in the sense that you can set up pretty much all the requirements in configuration should you wish to, and you can trust that the service's code will never get executed if these are not met; I believe this is a key design point of WCF - the implementer of the method should not need to worry about how authentication is implemented, nor should you need to change the code if you decide to change your authentication method.

Considering this, I expected the STS to try and provide all the claims it can based on the request message and/or the configuration for the RP, and then I would expect the channel on the RP side (using the "Geneva" Framework) to reject any requests that arrived without all the non-optional claims BEFORE calling the service's code.

When testing my scenario I deliberately set the STS code so that it does not provide the required claims, and was surprised to find out this was not the case.
My service's method was called whether both claims existed or not; I did have, of course, full access to the claims in code, so it was fairly easy to validate the existence of the required claims, but this seemed a little misaligned with the WCF approach to all the other security aspects and, frankly, quite wrong.
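
In the meantime the check has to live in code; a minimal sketch of what I mean, using the claim URI from the configuration above (type and member names as I understand them in this build) -

using System.Linq;
using System.ServiceModel;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// at the top of the service method - reject the call ourselves if the claim is missing
IClaimsIdentity identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
bool hasRequiredClaim = identity != null &&
    identity.Claims.Any(c => c.ClaimType == "http://myCompany/claims/someClaim");
if (!hasRequiredClaim)
    throw new FaultException("A required claim was not presented.");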

I could not find much help online (these are still early days for the framework), and checking with a couple of people they all confirmed both my observation and my expectations; luckily for me, though, I was able to attend PDC, so I made sure to pay a visit to the Identity folks' booth.

I'm happy to say that they, too, confirmed that the expectation is quite valid and that, indeed, they expect this behaviour to change before RTM; hopefully this will happen, which would keep things nice and tidy.
