rickgaribay.net

Space shuttles aren't built for rocket scientists, they're built for astronauts. The goal isn't the ship, it's the moon.


Where's Rick?


GWAB PHX #2, 3/29
Tucson .NET UG, 4/15

VS Live Chicago, 5/5

VS Live Redmond, 8/18


About Me
I am a developer, architect, writer and speaker, passionate about distributed technologies and Application Lifecycle Management. I am currently a VP and Distinguished Engineer at Neudesic.










Friday, June 20, 2014

Building a Simple NodeJS API on Microsoft Azure Websites from Start to Finish

NodeJS is a powerful framework for building IO-centric applications with JavaScript. Although it hasn’t yet reached a major version number (as of this writing, the latest build is 0.10.28), the level of developer, community, and industry support for NodeJS is nothing short of astounding. From Wal-Mart to LinkedIn, NodeJS is powering more and more of the experiences with which you interact every day.

Although there are many options for hosting NodeJS applications, Microsoft has been a supporter of NodeJS from the beginning, making direct investments in the framework and demonstrating a commitment to making NodeJS a first-class citizen on Windows, both on-premises and on Microsoft Azure.

In my new article for CODE Magazine, I provide a lap around NodeJS and Microsoft Azure Websites by showing you a simple but functional API that I recently developed from the ground up. I start by discussing the design of the API, move on to reviewing implementation details, and then proceed to pushing the API live on Microsoft Azure Websites.

You can read the article here as well as on Amazon and at your local newsstand.

http://bit.ly/1nT4K6h

posted @ Friday, June 20, 2014 12:41 PM | Feedback (0) | Filed Under [ Azure NodeJS ]

Friday, May 09, 2014

Visual Studio Live Chicago Recap: Building APIs with NodeJS on Microsoft Azure Websites

My first talk at VS Live Chicago this week (if you’re looking for my IoT talk, please click here) was based on a talk I started doing last year demonstrating fundamental unit testing techniques with NodeJS and Mocha. Since then, the code and the talk have evolved into a real API currently in early alpha at Neudesic.

In this session, we started by looking at the problem (and opportunity) with long, ugly URLs and how most URL minification APIs like bit.ly, tinyurl, etc. solve the problem today.

From there, we looked at why NodeJS is a great choice for building a Web API and proceeded to build the 3 key APIs required to fulfill the most fundamental features you’d expect from a URL shortening API including:

  • Shorten
    • When I submit a long, ugly URL to the create API, I should get back a neurl.
  • Redirect
    • When I submit a neurl to the submit API, my request should be automatically redirected.
  • Hits
    • When I submit a neurl to the hits API, I should get back the number of hits/redirects for that neurl.
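
To make the contract concrete, here is a rough client-side sketch that exercises all three APIs. The host name, routes and payload shapes below are assumptions for illustration, not the actual neurl contract:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class NeurlClientSketch
{
    static async Task Main()
    {
        var host = "https://neurl-sample.azurewebsites.net"; // placeholder host
        var http = new HttpClient();

        // Shorten: submit a long, ugly URL and expect a neurl back (payload shape assumed).
        var created = await http.PostAsync(host + "/api/neurls",
            new StringContent("{ \"url\": \"https://example.com/a/very/long/ugly/path\" }",
                              Encoding.UTF8, "application/json"));
        Console.WriteLine(await created.Content.ReadAsStringAsync());

        // Redirect: requesting the neurl itself should answer with a redirect,
        // so turn off auto-follow to observe the status code.
        var raw = new HttpClient(new HttpClientHandler { AllowAutoRedirect = false });
        var redirect = await raw.GetAsync(host + "/abc123"); // placeholder neurl
        Console.WriteLine((int)redirect.StatusCode);         // expect 301/302

        // Hits: ask how many times the neurl has been followed.
        Console.WriteLine(await http.GetStringAsync(host + "/api/neurls/abc123/hits"));
    }
}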

With the API up and running on my laptop, we proceeded to create an Azure Website and push the Node app via my local Git repository, taking it live. All was not well, unfortunately, as initial testing of the Shorten API returned 500 errors. A quick look at the log dumps using the venerable Kudu console revealed the cause: the environment variable for the MongoDB connection string didn’t exist on the Azure Website deployment, which was quickly remedied by adding the variable to the website from the Azure portal. Yes, this error was fully contrived, but Kudu is so cool.
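
Incidentally, the underlying mechanism is easy to guard against in code: Azure Websites surfaces app settings configured in the portal to the process as environment variables, so a startup check turns a mysterious 500 into an obvious failure. A minimal sketch of that pattern in C# (the variable name MONGO_CONNECTION is hypothetical; the Node app reads the equivalent setting from process.env):

using System;

class ConnectionSettingGuard
{
    static void Main()
    {
        // On Azure Websites, app settings configured in the portal are injected
        // into the process as environment variables. MONGO_CONNECTION is a
        // hypothetical name used only for this sketch.
        var conn = Environment.GetEnvironmentVariable("MONGO_CONNECTION");

        if (string.IsNullOrEmpty(conn))
        {
            // Failing fast with a clear message beats a bare 500 from deep
            // inside the data layer.
            throw new InvalidOperationException(
                "The MONGO_CONNECTION app setting is missing from this deployment.");
        }

        Console.WriteLine("Connection string loaded.");
    }
}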

With the API up and running, we exercised it a bit, verifying that the Redirect and Hits APIs were good to go, and then scaled out the API from one to six instances with just a few clicks.

As the API continues to mature, I’ll update the talk to demonstrate how this level of indirection brought forth by virtualizing the actual URL (as with traditional services and APIs) introduces many opportunities to interact with the person consuming the API (all via URIs!) as they take the journey that starts with the click and ends with the final destination.

Without further ado, the code and more details on the talk can be found below.

Code: https://github.com/rickggaribay/neurl

Abstract: http://bit.ly/1iEEbNV 

Speaking of which, if you haven’t already, why not register for Visual Studio Live Redmond or Washington DC? Early bird discounts are currently available, so join me to see where we can take this API from here! http://bit.ly/vslive14

posted @ Friday, May 09, 2014 5:10 PM | Feedback (0) | Filed Under [ Speaking Events Azure NodeJS ]

Visual Studio Live Chicago Recap: From the Internet of Things to Intelligent Systems - A Developer's Primer

I had the pleasure of presenting at Visual Studio Live! Chicago this week. Here is a recap of my second talk, “From the Internet of Things to Intelligent Systems: A Developer’s Primer” (if you’re looking for a recap of my “Building APIs with NodeJS on Microsoft Azure Websites” talk, you can find it here).

While analysts and industry pundits can’t seem to agree on just how big IoT will be in the next 5 years, one thing they all agree on is that it will be big: estimates range from a bearish 50B internet-connected devices by 2020 to a more moderate 75B and a bullish 200B. But the reality is that IoT isn’t something that’s coming; it’s already here, and the change is happening faster than anyone could have imagined. Microsoft predicts that by 2017, the entire space will represent over $1.7T in market opportunity, spanning from manufacturing and energy to retail, healthcare and transportation.

While it is still very early, it is clear that the monetization opportunities at this level of scale are tremendous. As I discussed in my talk, the real opportunity for organizations across all industries is two-fold. First, the data and analytical insights that telemetry (data voluntarily shared by the devices) provides will change the way companies plan and execute, and the rate at which they adapt and adjust to changing conditions in their physical environments. This brings new meaning to decision support, and no industry will be left untouched in this regard. Second, these insights will lead to intelligent systems that are capable of taking action at a distance, based either on pre-configured rules that interpret real-time device telemetry or on other command and control logic that prompts communication with the device.

As a somewhat trivial but useful example, imagine your coffee maker sending you an SMS asking your permission to approve a descaling job. Another popular example of a product that’s already had significant commercial success is the Nest thermostat. Both are simple examples, built on microcontrollers very similar to the ones I demonstrated, that are already possible today.

Beyond the commercial space, another very real example is a project my team led for our client that involved streaming meter and sensor telemetry from a large downtown metroplex enabling real-time, dynamic pricing, up-to-the-minute views into parking availability and significant cost and efficiency savings by adopting a directed enforcement approach to ticketing.

So, IoT is already everywhere and in many cases, as developers, we’re already behind. For example, what patterns do you use for managing command and control operations? How do you approach addressability? How do you overcome resource constraints on devices ranging in size from drink coasters to postage stamps? How do you scale to hundreds of thousands of devices that are sharing telemetry data every few seconds? What about security?

While 75 minutes is not a ton of time to tackle all of these questions, I walked the audience through the following four scenarios based on the definition of the Command message pattern in the "Service Assisted Communications" paper that Clemens Vasters (@clemensv) at Microsoft published this February:

1. Default Communication Model with Arduino - demonstrates the default communication model whereby the Arduino provides its own API (via a Web Server adapted by zoomcat). Commands are sent from the command source to the device in a point-to-point manner.

2. Brokered Device Communication with Netduino Plus 2 - demonstrates an evolution from the point to point default communication model to a brokered approach to issuing device commands using MQTT. This demo uses the excellent M2MQTT library by WEB MVP Paolo Patierno (@ppatierno) as well as the MQTT plug-in for RabbitMQ (both on-premise and RabbitMQ hosted).
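
For a feel of what the brokered model looks like in code, here is a minimal sketch using the M2Mqtt client against a RabbitMQ broker with the MQTT plug-in enabled. The broker host, client id, topics and payload are assumptions for illustration, not the demo's actual configuration:

using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

class BrokeredDeviceSketch
{
    static void Main()
    {
        // The device talks only to the broker, never directly to the command source.
        var client = new MqttClient("rabbitmq.example.com"); // placeholder broker

        // Commands arrive on the device's own command topic.
        client.MqttMsgPublishReceived += (sender, e) =>
            Console.WriteLine("Command received: " + Encoding.UTF8.GetString(e.Message));

        client.Connect("device-001"); // placeholder client id
        client.Subscribe(new[] { "devices/device-001/commands" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

        // Telemetry flows outbound through the same brokered connection.
        client.Publish("devices/device-001/telemetry",
                       Encoding.UTF8.GetBytes("{ \"temp\": 22.5 }"));
    }
}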

3. Service-Assisted Device-Direct Commands over Azure Service Bus - applies the fundamental service-assisted communications concepts, evolving the brokered example to leverage Azure Service Bus using the Device Direct pattern (as opposed to a Custom Gateway). As with the brokered model, the device communicates with a single endpoint in an outbound manner, but does not require a dedicated socket connection as with MQTT, implicitly addressing occasionally disconnected scenarios, message durability, etc.

In the final, capstone demo, “Service-Assisted Device-Direct Commands on the Azure Device Gateway”, I demonstrated the culmination of work dating back to June 2012 (when Vasters first shared the concept of Service-Assisted Communications), which is now available as a reference architecture and fully functional code base for customers ready to adopt an IoT strategy today.

As setup for the demo, I discussed the Master and Partition roles. The Master role manages the deployment of partitions and the provisioning of devices into partitions using the command line tools that ship with the code base.

In the demo, I provided a look at the instance of Reykjavik deployed on our Neudesic Azure account including the Master and Partition roles. I showed the Azure Service Bus entities for managing the ingress and egress of device messaging for command, notification, telemetry and inquiry traffic (The Device Gateway is currently capable of supporting 1024 partitions with each partition supporting 20K devices today) as well as the storage accounts responsible for device registration and storing partition configuration settings.
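
To put those limits in perspective, that is 1,024 × 20,000 = 20,480,000 devices, or roughly 20 million, per Device Gateway deployment.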

I also discussed the protocols for connecting the device to the gateway (AMQP and HTTP are in the box, and an MQTT adapter is coming very soon) and walked through the Telemetry Pump, which dispatches telemetry messages to the registered telemetry adapter (Table Storage, HDInsight adapters, etc.).

The demo wrapped up with a Reykjavik device sample consisting of a Space Heater emulator that I registered on the Neudesic instance of the Device Gateway to acquire its ingress and egress endpoints. The emulator initializes fan speed and RPM and begins sending telemetry messages to its outbox every 30 seconds (fully configurable).

The beauty of the demo is in its simplicity. Commands are received via the device’s inbox and telemetry is shared via its outbox. The code is simple C# with no heavy frameworks, which is really key to running on devices with highly constrained resources:

void SendTelemetry()
{
    this.lastTelemetrySent = DateTime.UtcNow;

    var tlm = new BrokeredMessage
        {
            Label = "tlm",
            Properties =
            {
                {"From", gatewayId},
                {"Time", DateTime.UtcNow},
                {"tiv", (int) this.telemetryInterval.TotalSeconds},
                {"fsf", this.fanspeedSettingRpmFactor},
                {"fss", this.fanSpeedSetting},
                {"fon", this.fanOn},
                {"tsc", this.temperatureSettingC},
                {"hon", this.heaterOn},
                {"ofr", this.lastObservedFanRpm},
                {"otm", this.lastObservedTemperature}
            }
        };

    tlm.SessionId = Guid.NewGuid().ToString();

    this.sender.SendWithRetryAsync(tlm);
}
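
The command side of the exchange is just as lean. Here is a sketch of how a device might drain its inbox; the queue path, label and property names are assumptions for illustration, not the Reykjavik sample's actual contract:

using System;
using Microsoft.ServiceBus.Messaging;

class CommandReceiverSketch
{
    static void Main()
    {
        // Placeholder connection string and inbox queue path.
        var client = QueueClient.CreateFromConnectionString(
            Environment.GetEnvironmentVariable("SB_CONNECTION"), "device-001-inbox");

        while (true)
        {
            var msg = client.Receive(TimeSpan.FromSeconds(30));
            if (msg == null) continue; // nothing pending; keep polling

            // React to command messages, e.g. a new fan speed setting.
            if (msg.Label == "cmd" && msg.Properties.ContainsKey("fss"))
            {
                Console.WriteLine("New fan speed setting: " + msg.Properties["fss"]);
            }

            msg.Complete(); // remove from the inbox once handled
        }
    }
}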

 

[Screenshot: the telemetry table populated by the Reykjavik Table Storage adapter, shown in the Neudesic Azure Storage Explorer.]

As I discussed, this is an early point in a journey that will continue to evolve over time. But the great thing about this model is that everything I showed is built on Microsoft Azure, so there’s nothing to stop you as a developer from building your own Custom Protocol Adapter, and this is really the key to the thinking and philosophy around the Device Gateway.

It is still very early in this wave, and every organization is going to have different devices, protocols and requirements. So while you’ll see investments in the most common protocols (AMQP, MQTT and CoAP, as you can already see), the goal is to make this super pluggable and fully embrace custom protocol gateways that just plug in.

As with the Protocol Adapters, there’s nothing to stop you from building your own Telemetry Adapter, or from using Azure Service Bus or BizTalk Services to move data on premise, etc.

Still with me? Great. The links to my demo samples and more details on the talk are available here:

Abstract: http://bit.ly/vsl-iot 

Demo Samples: https://github.com/rickggaribay/IoT 

Oh, and if you missed the Chicago show, don’t worry! I’ll be repeating this talk in Redmond and Washington DC, so be sure to register now for early bird discounts: http://bit.ly/vslive14

posted @ Friday, May 09, 2014 4:41 PM | Feedback (0) | Filed Under [ Speaking Events Azure IoT ]

Thursday, April 10, 2014

Speaking on Building APIs with NodeJS on Microsoft Azure Websites Next Tuesday, 4/15

I will be speaking at the Tucson .NET User Group next Tuesday on Building APIs with Node.js on Microsoft Azure Websites. This will be the 3rd time I’ve spoken at this group, but the first time I’m following Scott Hanselman (who spoke last month), definitely a tough act to follow!

You can learn more about the topic here: http://bit.ly/1hEzAJf

posted @ Thursday, April 10, 2014 11:06 PM | Feedback (0) | Filed Under [ Azure NodeJS ]

Wednesday, February 26, 2014

Configuring Custom Domain Names on Windows Azure Websites in 4 Easy Steps

Windows Azure Websites (WAWS) provides a very robust yet easy to use container for hosting your web applications. This doesn’t just pertain to ASP.NET apps: WAWS includes several templates like Drupal, WordPress, Orchard, etc., and also provides very nice first-class support for Node.js web apps/APIs, PHP and Python.

If you are new to WAWS, you may think ‘big deal, this is just another web host’. You would be wrong. There is a TON of value in WAWS that blows your conventional, commodity web hosts away:

  • The free version allows you to host up to 10 sites in a multi-tenant environment and provides a great dashboard, FTP and continuous deployment capabilities including first class support for git (local repos) and github.
  • The shared version adds support for seamlessly scaling your app up to 6 instances/nodes along with enabling Web Jobs which provide worker processes for executing jobs on a schedule, continuously or on-demand.
  • The standard version allows you to dedicate not instances, but full VMs to your application and supports auto-scaling your app based on metrics and triggers.

These are just the big rocks… whether you are a .NET, Node.js, PHP or Python developer, there’s a ton more goodness to WAWS, which you can learn more about here: http://www.windowsazure.com/en-us/documentation/services/web-sites/

When you create your WAWS application, you get both an IP and a URL. The URL takes the form of [your app].azurewebsites.net. This is cool for development, testing and maybe corporate apps, but if you are building publicly visible web apps or APIs, chances are you’ll want your own domain name so that instead of [your app].azurewebsites.net you can point your users to foobaz.com or whatever.

Microsoft has official docs on how to do this here, but I found that there was a lot of detail that might intimidate folks so I thought I’d break it down in 4 simple steps. I’ll assume that you’ve already bought your shiny new domain name from a registrar and that it’s parked at some annoying, ad infested landing page.

Step 1: Ensure your site is configured for shared or standard mode

Free doesn’t support custom domains, which seems pretty reasonable to me. If you started with a website in free mode, simply click on the Scale option, choose either “Shared” or “Standard” mode, and click OK.

Step 2: Copy the IP and WAWS URL

The next step is to make note of your URL and IP address, which you’ll need for the third step in this process. Go to the list of WAWS sites, select the site (but don’t click on it) and click on the “Manage Domains” icon at the bottom of the command bar.

This will bring up a dialog that includes your current domain record ([your app].azurewebsites.net) and your IP.

Step 3: Update the A Record and CNAMEs

Make a note of each and log in to your domain registrar’s console. Look for “DNS Management” and either “Advanced”, “Manage Zones” or “Manage DNS Zone File”; you want to get to whatever console allows you to configure your A record and CNAMEs. I won’t get into a bunch of DNS theory here, but in a nutshell, these records allow requests to your registered domain name to be forwarded to Windows Azure, and specifically to your website’s host name. The result is that your website will resolve to both [your app].azurewebsites.net and foobaz.com (or whatever domain you purchased).

Each registrar will obviously look different, but this is what GoDaddy’s looks like (there are several other entries like ftp, MX records, etc. which can be ignored).

The A record needs to point to the IP address you captured in step 2. Replace whatever value is there with the IP address provided. When someone calls up foobaz.com, your registrar will authoritatively answer that request and then pass it on directly to the IP address you provided.

Now, there are various docs, posts, etc. that will tell you that you can choose between an A record and a CNAME alias, but my experience was that I needed to configure both. If you want to try one or the other, go ahead and do so and skip to Step 4. If it doesn’t work, come back and do both (I had to).

For the CNAME, there are 3 entries you need to make:

  • Point www to [your app].azurewebsites.net – this tells DNS that [your app].azurewebsites.net should be the destination (canonical host) for any DNS queries that begin with www (i.e. www.foobaz.com)
  • Point awverify AND awverify.www to awverify.[your app].azurewebsites.net – this provides a DNS validation mechanism so that WAWS can validate that your domain registrar has been configured to allow WAWS to serve as a canonical domain in the event that a CNAME lookup fails.

Be sure to save your file/settings.

Step 4: Enter your custom domain name in the Manage Domains dialog and check for validity

Pull up the “Domain Settings” for your website again, and this time, enter your new domain name (i.e. foobaz.com). If you want WAWS to respond to both www.foobaz.com and foobaz.com, you’ll want to create both entries. You’ll likely see a red dot indicating that validation and/or the CNAME lookup has failed.

 

Note that DNS can take up to 48 hours to propagate, so as you move to this step, know that if it doesn’t immediately work, wait a few hours to a day and try again (dynamic DNS providers solve this problem by acting as a proxy between your authoritative domain and canonical domains/IPs). It is very likely that you’ve done everything right, but the records have not yet propagated.

 


This is simply WAWS’ way of telling you that the records have not yet propagated. You can happily continue using your WAWS website using the [your app].azurewebsites.net URL. In time, when you come back to the dialog, the verification should succeed and any request for foobaz.com should automatically resolve to your WAWS app.
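
If you’d rather verify propagation yourself than keep refreshing the portal, a quick resolver check tells you when the records have landed. A small sketch using the .NET resolver (the domain names are placeholders):

using System;
using System.Net;

class DnsPropagationCheck
{
    static void Main()
    {
        // Compare what your custom domain resolves to against the WAWS host.
        // Once both return the same address, propagation has reached your resolver.
        var custom = Dns.GetHostAddresses("www.foobaz.com");
        var waws = Dns.GetHostAddresses("yourapp.azurewebsites.net");

        Console.WriteLine("www.foobaz.com -> " + string.Join(", ", (object[])custom));
        Console.WriteLine("yourapp.azurewebsites.net -> " + string.Join(", ", (object[])waws));
    }
}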

If you’ve followed these steps and still have issues after 24-48 hours, feel free to drop a comment or hit me on twitter @rickggaribay and I’ll be happy to help you out.

posted @ Wednesday, February 26, 2014 8:26 PM | Feedback (0) | Filed Under [ Azure Windows Azure ]

Tuesday, January 14, 2014

fatal: could not read Username for 'https://github.com': No such file or directory


If all of a sudden you are unable to clone or push repos and you recently downloaded the latest version of msysgit (AKA Git for Windows) you are not alone.

As discussed here and confirmed here, this appears to be a bug in the 1.8.5.2 build, which consequently is the latest build linked from GitHub, and which likely means that thousands of developers have been impacted since the release of 1.8.5.2.

3 options, from simplest on down:

1. Uninstall 1.8.5.2 and downgrade to a previous version.

2. Switch over to SSH instead.

3. Apply the patch “discussed” here.

Personally, I chose 2 out of sheer laziness and opted out of 3 due to a general lack of motivation, resulting from what appears to be more interest in pedantic redirection and passive aggressive replies from the project's representatives than in communication and a spirit of collaboration (which is supposed to be what OSS is all about).

Hopefully this will be fixed in a new build soon. Hope this helps.

posted @ Tuesday, January 14, 2014 5:05 PM | Feedback (0) | Filed Under [ git ]

Monday, January 13, 2014

Announcing the 2nd Annual Global Windows Azure Bootcamp (GWAB) Phoenix!


I am thrilled to announce the 2nd Annual GWAB which has been confirmed for Saturday, March 29th, 2014!

As of today, we have 119 locations in 48 countries and 111 cities!

For those of you who attended last year, you know what a blast we had writing and deploying code to Azure as part of hands-on labs and our massive scale-out demo, the "Global Render Lab". This exercise showed the power of distributed computing, and we'll be doing something similar this year.

This year, we are taking the immense compute power of Windows Azure and putting it to work in the fight against diabetes. We’re partnering with Dr. Richard D. Smith in the Biological Sciences Division at Pacific Northwest National Lab (PNNL), in collaboration with Minnie M. Sarwal and Tara K. Sigdel at the California Pacific Medical Center Research Institute, with the goal of discovering how our body’s serum protein glycosylation works. We want to know how the high blood sugar levels present in diabetes patients affect the complex sugar production systems required for our health and ability to fight disease. We want to prove the theory that when small changes in this process start occurring, the disease can progress and lead to Type 2 diabetes. The results from this work will not only help us understand the human diabetic state at the molecular level but also lead the way for early detection of diabetes. More details here: http://global.windowsazurebootcamp.com/charity

Our hope is that as our attendees are using this to learn about how to deploy to Windows Azure and how distributed computing works, we can help solve some of the world's problems at the same time.

Agenda and Speakers

Joe and I have been hard at work on the agenda, and I’m pleased to share the final version below:

Start | End | Topic | Speaker
9:00 | 9:15 | Welcome, Introduction | Rick G. Garibay and Joe Guadagno
9:15 | 10:15 | A Lap Around Windows Azure | Rick G. Garibay
10:15 | 11:30 | Big Charity Render Lab! | Proctored by speakers, GWAB leadership
11:30 | 11:45 | Break. Grab lunch, prepare for next session! |
11:45 | 12:45 | A Lap Around Cloud Services | Joe Guadagno
12:45 | 1:45 | A Lap Around Windows Azure Websites | Rick G. Garibay
1:45 | 2:45 | A Lap Around Windows Azure Mobile Services | Brendon Birdoes
2:45 | 3:45 | Hands on Labs | Proctored by speakers, GWAB leadership
3:45 | 4:00 | Break |
4:00 | 5:00 | Windows Azure ALM | Chris Myers
5:00 | 6:00 | Windows Azure Data & HDInsight | Michael Collins

 

The class includes presenters and trainers (Microsoft MVPs, industry experts) with deep, real world experience with Windows Azure, as well as a series of labs so you can practice what you just learned. In fact, I am pleased to announce that the following Microsoft MVPs and community rock stars have already signed on for this year’s event:

Awesome. How much does it cost?
This event is FREE to the attendees. Gratis! Gratuite! Libero!  However, seating is limited so be sure to register and secure your seat today: http://bit.ly/1gCdCZb 

What do I need to bring?
You will need to bring your own computer which can run Visual Studio 2012 or 2013 (i.e. Windows, OSX, Ubuntu with virtualization, etc.) and have it preloaded with the following:

Optionally (or if you will not be doing just .NET labs), the following can also be installed:

More info here: http://www.windowsazure.com/en-us/documentation

Please do the installation upfront as there will be very little time to troubleshoot installations during the day.

You will also need to be signed up for a Windows Azure account. There are many options including a 100% absolutely free 30 day trial. Why not sign up now? http://www.windowsazure.com/en-us/pricing/free-trial/

Is this for beginners?
Yes and no. We will focus on a series of lectures and hands-on labs aimed at level 200, but ad-hoc whiteboarding, deep scenario discussions and Q&A are all part of the fun. Think you already know it all? Great, we still need volunteers, speakers and lab buddies. Drop me a note on Twitter @rickggaribay

And now, for a little fun…

Big thanks to fellow MVPs Maarten Balliauw, Alan Smith, Michael Wood and Magnus Martensson for running this event as our global leaders. Thanks also to Scott Cate at Event Day for providing free registration hosting. We couldn’t do this without them!

posted @ Monday, January 13, 2014 7:24 PM | Feedback (0) | Filed Under [ Events Azure Windows Azure MVP ]

Saturday, December 21, 2013

IoT, Mobility, Hybrid Cloud, ESB Oh My! A Closer Look at the Neudesic Gartner AADI Demo

Earlier this week, my colleague @simonguest posted a great summary of the demo he and I built together for the Gartner AADI Summit in Las Vegas the week before last.

As Simon illustrates, the demo combined two areas we are thinking a lot about these days at Neudesic- enterprise mobility and Internet of Things.

When we talk about enterprise mobility, we're not talking about mobile device management, governance or provisioning. Our focus is on unlocking the assets that live on premises, behind the firewall so that they are safely and securely exposed, composed and consumed by a variety of devices and composite business applications on the web.

In this post, I’ll talk a little bit more about these two areas and dive into the back-end of the demo from a messaging and services perspective to give you a glimpse into the internals of the demo app.  

Exposing On-Premise Assets

The key unlocking event for enterprise mobility is the ability to expose assets that live within an organization’s traditional datacenter. These assets include things like other apps, LOB systems, ERPs and databases. In the most traditional sense, the way this problem has been solved is by determining what assets need to be made available, standing up hardware in a DMZ and specifying which inbound ports allow which kind of traffic. Unfortunately, there’s really nothing innovative about this approach, as it is the way we’ve been solving this problem for nearly 20 years.

Today, there are many enterprise mobility vendors out there offering appliance-based solutions which extend this approach to address things like security, hardware compression, etc., but the agility of these solutions is questionable as they still require CapEx spend, time and resources to provision and manage.

Another kind of on-premise asset you will be hearing more and more about involves sensors and devices that communicate with software and other devices. This Machine to Machine (M2M) communication will be revolutionary in domains such as transportation, retail, logistics, oil and gas, manufacturing, etc., as it will yield new insights that will at first transform business and then become key to survival. One such device is known as the iBeacon.

These devices, about the width and half the length of a stick of gum, run on Bluetooth Low Energy (BLE) and thus require very little power. These beacons transmit a unique identifier along with signal strength, which can then be used to calculate proximity.
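
For a rough sense of the proximity math, a common approach is a log-distance path-loss estimate based on the beacon’s calibrated transmit power (its expected RSSI at one meter, which iBeacons broadcast) and the observed RSSI. A sketch, with the environmental exponent n as an assumption:

using System;

class BeaconProximitySketch
{
    // Log-distance path-loss model: distance = 10 ^ ((txPower - rssi) / (10 * n)).
    // txPower is the calibrated RSSI at 1 meter that the beacon advertises;
    // n is an environmental attenuation factor (2.0 in free space, higher indoors).
    static double EstimateDistanceMeters(double txPower, double rssi, double n = 2.0)
    {
        return Math.Pow(10, (txPower - rssi) / (10 * n));
    }

    static void Main()
    {
        // A beacon calibrated at -59 dBm, observed at -70 dBm, indoors-ish (n = 2.5):
        Console.WriteLine(EstimateDistanceMeters(-59, -70, 2.5)); // ~2.75 meters
    }
}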

You can easily imagine a scenario where a retailer provides a free app that interacts with you in the store as you approach these iBeacons. As you might imagine, the back end of the app would learn your behaviors and preferences and target you with interesting offers to influence your browsing and buying habits, as well as crowdsourcing what ideal retail layouts look like based on the tons of data that could be collected and modeled. Another scenario might involve tracking shipping containers on trucks. Today, 5.9 GHz RFID technology dominates this space, but the amount of infrastructure required to read the tags, compared to what can be accomplished with a handful of iBeacons and a handheld device, is disruptive indeed. To illustrate this in our demo, our iBeacons communicated directly with iOS and Android handsets, with literally no set up or tear down, allowing us to enumerate, update and “check into” the beacons that were placed throughout the conference floor.

For the check in scenario, as you approached an iBeacon, the app would raise an event allowing you to check in to that location. The app then sent a message over the Gartner guest conference WiFi network to a server hosted in our datacenter in Irvine, CA running Neuron ESB. The server sits behind our firewall and no inbound ports are open, because our IT policy rightfully prohibits it. Instead, a single outbound port is opened by Neuron over an Azure Service Bus Relay, which allows for secure, bi-directional messaging for a number of APIs that we implemented, and which I’ll cover in more detail next.

Composing and Consuming

As you can see in the demo sample architecture, we exposed a REST head over Azure Service Bus which provided the external messaging fabric for communicating with the apps running on iOS and Android in the Gartner exhibition hall. Using Neuron’s messaging and processing capabilities, we implemented the following 4 APIs:

 

  • REGISTER BEACON
    PUT /beacons
    { id: "<<GUID>>", name: "Neudesic Booth, Expo Hall"}

  • GET ALL BEACONS
    GET /beacons
    [{ id: "<<GUID>>", name: "Beacon name"}, {id: "<<GUID>>", name: "Beacon name"}] (Array)

  • LOOK UP BEACON
    GET /beacons/:id
    { id: "<<GUID>>", name: "Beacon name"}

  • CHECK IN TO BEACON
    PUT /beacons/:id/checkin
    { userid: "<<facebookid>>" }
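
To make the contract concrete, here is a sketch of the check-in call from a client’s point of view. In the demo the message actually traveled via the Azure Service Bus relay rather than straight HTTP to a public host, and the host name and ids below are placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CheckInClientSketch
{
    static async Task Main()
    {
        var http = new HttpClient();

        // PUT /beacons/:id/checkin with the user's id in the body, per the
        // CHECK IN TO BEACON contract above.
        var body = new StringContent("{ \"userid\": \"someone@example.com\" }",
                                     Encoding.UTF8, "application/json");
        var response = await http.PutAsync(
            "https://gateway.example.com/beacons/00000000-0000-0000-0000-000000000001/checkin",
            body);

        Console.WriteLine((int)response.StatusCode); // 200 on success, 404 for an unknown beacon
    }
}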

[Screenshot: the Android version of the app the weekend before the conference. We used Xamarin to build a consistent UX for iOS as well. The iBeacons were re-registered with the actual locations of the booths on the conference floor within a few seconds.]

 

To create a connection to Azure Service Bus, we simply created a Client Connector within a Service Endpoint in Neuron.

Neuron hosts an instance of the WCF NetMessagingBinding and creates a virtual service endpoint using the URI shown above.
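
For context, here is roughly what sending to that same entity looks like with the plain Service Bus client API; the namespace, queue name and credentials are placeholders, and this is a sketch of the mechanism rather than Neuron’s actual internals:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class ServiceBusSendSketch
{
    static void Main()
    {
        // Placeholder namespace, queue and ACS credentials.
        var factory = MessagingFactory.Create(
            new Uri("sb://yournamespace.servicebus.windows.net/"),
            TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey"));

        var sender = factory.CreateMessageSender("beaconrequests");

        // A labeled message standing in for one hop of the brokered exchange.
        var msg = new BrokeredMessage("{ \"userid\": \"someone@example.com\" }")
        {
            Label = "checkin"
        };
        sender.Send(msg);
    }
}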

In order to route the requests from the app to the right logic, we implemented a simple controller called “Beacon Router” using a Neuron Process. Using a Decision step, we added 4 branches, one for each API.

Each Decision step has a condition which returns true or false. If the condition returns true, the steps within the branch are executed. If not, the Process evaluates each branch until a match is found, or returns a 404 Not Found.

Here is a look at the Check In condition.

We used a simple UriTemplate to define the template for the Check In API. The URI is stored in the ESB message context within the MessageHeader.To property. If the template matches the actual URI, we return true and the steps within the branch execute in sequence.
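
The condition itself boils down to a couple of lines. Here is a sketch of the kind of check the Decision step runs; the host and route are placeholders:

using System;

class CheckInConditionSketch
{
    // True when the incoming To header matches the check-in route.
    static bool IsCheckIn(string toHeader)
    {
        var template = new UriTemplate("/beacons/{id}/checkin");
        var uri = new Uri(toHeader);
        var baseUri = new Uri("https://" + uri.Host);
        return template.Match(baseUri, uri) != null;
    }

    static void Main()
    {
        Console.WriteLine(IsCheckIn("https://gateway.example.com/beacons/42/checkin")); // True
        Console.WriteLine(IsCheckIn("https://gateway.example.com/beacons"));            // False
    }
}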

The next step that fires is the Publish step. We included this step for testing purposes to serve as a Wire Tap on the messages that reach the branch. In this case, this was useful for inspecting the message off the wire by configuring a subscribing party in our Neuron Test Client.

This little trick aside, the Publish step is very powerful in that it allows you to asynchronously publish a message on a topic for other subscribers to pick up. Think of Scatter Gather as a good example.

The next step you’ll notice is a Call Process step. This step simply allows you to call a nested Process, which is helpful in factoring your Process (note that we could have chosen to asynchronously execute a nested process by wrapping it inside a Parallel step).

The Check In process starts with a Push step. This step stores the current message for later use.

This was required because we needed to first determine whether the iBeacon location was valid before allowing a check in. The way Neuron Processes work is that the message is modified by each step, allowing for very powerful enrichment and transformation of the message. However, since there can only be one message in context at any time, the current message is overwritten by the message we crafted for the ODBC step, which follows.

In this case, we used the ODBC step to query SQL Server for the iBeacon GUID. The ODBC step expects a message that conforms to its interface, a simple XML syntax for defining the query:

string sql = String.Empty;
sql += "<Statement type=\"Text\" sql=\"SELECT * FROM [dbo].[Beacon] WHERE BeaconGuid =?\">";
sql += "<Parameters>";
sql += "<Parameter type=\"varchar\" name=\"@BeaconGuid\" value=\"" + beaconId + "\"/>";
sql += "</Parameters>";
sql += "</Statement>";

context.Data.FromXml(sql);

 

Note that the final line above (context.Data.FromXml) replaces the message in the context with this new message. The ODBC step executes the query based on the latest context and returns the result:

<QueryResults>
    <QueryResult>
        <Id>2</Id>
        <BeaconGuid>1.1</BeaconGuid>
        <BeaconName>Neudesic Booth</BeaconName>
    </QueryResult>
</QueryResults>

Next, we used a Decision step to check to see if the ODBC step yielded the iBeacon we looked up:

var xml = context.Data.ToXmlDocument();
var node = xml.GetElementsByTagName("QueryResult");
return node.Count > 0;

If the node count is not zero, we know we have a match. Otherwise, we issue a 404 since the resource could not be found.

Recall that at the beginning of the process execution, the first thing we did was push the message into temporary storage so that we could retrieve it later. Once we determined that the iBeacon exists, we pop the message back into context so that we can grab the iBeacon id from the URI and the user id (in this case, a facebook email address) from the original HTTP message body.

To perform the check in, we simply did an INSERT into a SQL Server table called CheckIn. The Code step includes the following code, which, as before, the ODBC step will consume to perform the unit of work.

Since the iBeacon ID is part of the URI, we use UriTemplate again to determine a match and then extract the {id} parameter from the URI:

System.UriTemplate template = new System.UriTemplate("/beacons/{id}/checkin");
Uri uri = new Uri(context.Data.GetProperty("MessageHeader","To"));
Uri baseUri = new Uri("https://" + uri.Host);
var match = template.Match(baseUri, uri);

var beaconId = match.RelativePathSegments[1];

From there, we do some formatting and grab the user id from the JSON message payload that was provided in the HTTP body:

var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();

Neudesic.TPG.GartnerAADI.Model.CheckIn checkIn =
    serializer.Deserialize<Neudesic.TPG.GartnerAADI.Model.CheckIn>(jsonString);

Finally, we build the INSERT statement and load the new message context:

string sql = String.Empty;

sql += "<Statement type=\"Text\" sql=\"INSERT INTO [dbo].[CheckIn]([UserId],[BeaconGuid],[Date]) VALUES(?,?,?)\">";
sql += "<Parameters>";
sql += "<Parameter type=\"varchar\" name=\"@UserId\" value=\"" + checkIn.UserId + "\"/>";
sql += "<Parameter type=\"varchar\" name=\"@BeaconGuid\" value=\"" + beaconId + "\"/>";
sql += "<Parameter type=\"varchar\" name=\"@Date\" value=\"" + DateTime.Now.ToString() + "\"/>";
sql += "</Parameters>";
sql += "</Statement>";

context.Data.FromXml(sql);

If all goes well, we issue an HTTP 200 back to the client:

context.Data.Http = new HttpMessage();
context.Data.Http.StatusCode = System.Net.HttpStatusCode.OK;
context.Data.FromString("");

Otherwise we own up to the problem being on our end and return a 500 (with some internal logging of course).

The Register API is implemented in much the same way. We check to see if the iBeacon exists and, if so, do an update; otherwise we insert a new record. Thus, this API supports the initial standing up of the beacons as well as managing their names as the beacons moved from development to the expo floor.

For the GET APIs, we again used ODBC steps to query for a single or all beacons and returned a single or an array of beacons respectively as you can see in the BeaconRouter screenshot at the beginning of this walkthrough.

Summary

As you can probably guess, Simon and I had a lot of fun building this demo and the folks at the conference seemed to find it interesting too :-)

There are unlimited possibilities once you unlock the ability to safely and securely transcend network, security and business boundaries behind the firewall. And if agility isn’t enough to compel you to think differently about how you expose and compose your enterprise assets for external consumption by mobile devices today, the proliferation of devices (and I’m not talking about tablets or phones), expected to reach in excess of 250 billion by 2025, will challenge every industry to think differently about its enterprise messaging and mobility strategy.

posted @ Saturday, December 21, 2013 8:49 AM | Feedback (0) | Filed Under [ Neuron ESB Windows Azure Azure Service Bus IoT Mobility ]

Thursday, December 19, 2013

New Webcast: Enabling Rich Messaging Endpoints with Windows Azure BizTalk Services

Yesterday I presented a live webcast for Neudesic on the powerful new capabilities of Windows Azure BizTalk Services for enabling rich, hybrid messaging for modern enterprise applications across on-premise and cloud-hosted business assets.

Windows Azure BizTalk Services (WABS) introduces advanced new PaaS messaging that enables secure, robust communication between your on-premise and cloud assets.

In the video, I show how you can:

  • Integrate Web and mobile clients with on-premise and cloud assets
  • Communicate securely with databases, ERPs and other LOB applications behind the firewall
  • Validate and translate messages to and from various message representations while supporting POX, REST and SOAP
  • Broker, mediate, enrich, compose, and route messages and requests consistently across multiple endpoints

The video is available here: http://bit.ly/1cDORZ1


posted @ Thursday, December 19, 2013 5:36 PM | Feedback (0) |

Sunday, October 20, 2013

WABS BizTalk Adapter Service Installation in Seven Steps

With the proliferation of devices and clouds, businesses and developers are challenged more than ever to both enable employee productivity and take advantage of the cost benefits of cloud computing. The reality, however, is that the vast majority of organizations will continue to invest in assets that reside both within their own data centers and in public clouds like Windows Azure and Amazon Web Services.

Windows Azure BizTalk Services (WABS) is a new PaaS-based messaging and middleware solution that lets you expose rich messaging endpoints across business assets, whether they reside on-premise or in the commercial cloud.

WABS requires an active Windows Azure account, and from there, you can provision your own namespace and start building rich messaging solutions using Visual Studio 2012. You can download everything you need to get started with WABS here: http://www.microsoft.com/en-us/download/details.aspx?id=39087

Once your WABS namespace has been provisioned, you are ready to start developing modern, rich messaging solutions. At this point, you can experiment with sending messages to a new messaging entity in Windows Azure called an EAI Bridge and routing them to various destinations including Azure Service Bus, Blob Storage, FTP, etc. However, if you want to enable connectivity to on-premise assets, including popular database platforms like Microsoft SQL Server and Oracle Database as well as ERP systems such as Oracle E-Business Suite, SAP and Siebel eBusiness Applications, you’ll want to install an optional component called the BizTalk Adapter Service (BAS), which runs on-premise.

The BAS includes a management and runtime component for configuring and enabling integration with your LOB systems. The capabilities are partitioned into a design-time experience, a configuration experience and the runtime. At design time, you configure your LOB Target (i.e. SQL Server, Oracle DB, SAP, etc.) for connecting to your LOB application via a LOB Relay. Built on Windows Azure Service Bus Relay Messaging, the LOB Relay allows you to establish a secure, outbound connection to the WABS Bridge which safely enables bi-directional communication between WABS and your LOB target through the firewall.

More details on the BizTalk Adapter Service (BAS) architecture can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/hh689773.aspx

While the installation experience is fairly straightforward, there are a few gotchas that can make things a bit frustrating. In this post, I’ll walk you through the process for installing and configuring BAS in hopes of getting you up and running in a breeze.

Installing the BizTalk Adapter Service

Before you install BAS, ensure you’ve downloaded and installed the following pre-requisites:

  • WCF LOB Adapter Framework (found on the BizTalk Server 2013 installation media)
  • BizTalk Adapter Pack 2013 (found on the BizTalk Server 2013 installation media)
  • IIS 7+ and WAS (I’ve tested installation on Windows 7 and Windows 8 Enterprise editions)
  • AppFabric 1.1 for Windows Server
  • SQL Server 2008 or 2012 (all editions should be supported including free Express versions)

The installation process will prompt you for key information including the account to run the application pool that will host the management and runtime services and a password for encrypting key settings that will be stored by the management service in SQL Server. Let’s take a look at the process step-by-step.

1. When you unpack the installer, the most common mistake you're likely to make is to double-click it to get started. Instead, open a command prompt as an administrator and run the following command (you’ll need to navigate to the folder in which you unpacked the MSI):


msiexec /i BizTalkAdapterService.msi /l*vx install_log.txt

This command will ensure the MSI runs as Admin and will log results for you in case something goes wrong.

 

2. The first thing the installer will ask you for is credentials for configuring the application pool identity for the BAS Management Service. This service is responsible for configuring LOB Relays and LOB Targets and stores all of the configuration in a repository hosted by SQL Server (Long Live Oslo!). In my case, I’ve created a local service account called svc-bas, but this of course could be a domain account, or you can use the other options.

 


3. Before you continue, be sure that the account you are using to run the MSI is a member of the appropriate SQL Server role(s) unless you plan on using SQL Server Authentication in the next step. The wizard will create a repository called BAService, so it will need the necessary permissions to create the database.

4. Next, specify connection info for the SQL Server database that will host the BAService repository. SQL Express or any flavor of SQL Server 2008 or 2012 is supported.

 


5. Specify a key for encrypting sensitive repository information.


 

6. The installer will then get to work creating the BAService in IIS/AppFabric and the BAService repository in SQL Server.


7. If all is well, you’ll see a successful completion message.

If the wizard fails, it will roll back the install without providing any indication as to why. If this happens, be sure to follow steps 1 and 2 above and carefully review the logs to determine the problem.

After the installation is complete, you’ll notice the BAService has been created in IIS/AppFabric for Windows Server.


The BAService database consists of 4 tables which store information on the configured Azure Service Bus relay endpoints that communicate with the LOB Targets, the operations supported by each target (configured in Visual Studio), and the virtual URIs for addressing the BAService when configuring the entities previously mentioned.

At this point, the LobRelays, LobTargets and Operations tables will be empty.

Once you configure a LOB Target, the BAService will write the configuration data to each table, enabling Azure Service Bus Relay to fuse with the WCF LOB Adapters that ship with the BizTalk Adapter Pack. This combination enables very powerful rich messaging scenarios that support hybrid solutions exposing key business assets across traditional network, security and business boundaries in a simple and secure manner.

posted @ Sunday, October 20, 2013 10:56 AM | Feedback (0) | Filed Under [ BTS Azure WABS Windows Azure BizTalk Services ]

Friday, October 04, 2013

Seven Learnings Over Seven Years as a Microsoft MVP

On Tuesday I learned that I have been awarded the Microsoft MVP Award for Microsoft Integration for 2013.

This is my second Microsoft Integration award and my 7th consecutive MVP Award (previously I was a Connected Systems MVP, a discipline which has since been retired). Last year, I had the honor of being recognized as Microsoft Integration MVP of the Year, and I’m still glowing about it!

Looking back over the last seven years, I still remember my first award in 2007 and how excited and humbled I felt to be recognized among such an incredible community. I can easily say, without hesitation, that every year I am awarded, that feeling never dulls. In fact, it only gets stronger, because the competition, and the channels available to have an impact, only get bigger and bigger.

Over the last seven years, technologies have changed and evolved along with my career, and I’ve had the privilege to both witness (about as close up as you can get without being a blue badge) and participate in the transformational changes Microsoft has undergone during this time. For example, in 2005 I was focused almost exclusively on .NET, ASP.NET ASMX, WCF and BizTalk Server. Today, I am working with newer technologies and platforms like Windows Azure, Azure Service Bus, Neuron ESB (built on top of WCF, BTW), RabbitMQ, Web Sockets and Node.js. .NET and BizTalk still remain foundational, but there are a ton of new technologies that have rounded out my toolbox. Throughout this time, I’ve had the privilege of applying these technologies in numerous industries including transportation, gaming, hospitality and financial services, written a book, published numerous articles and whitepapers, and founded a user group, along with speaking on the local, regional and national circuits.

While I am not suggesting that all of these experiences have been the result of the MVP program, my experiences over the last seven years have been much richer as a result of it. So, to commemorate seven years of being a Microsoft MVP, I’d like to share seven things I’ve learned as an MVP since 2007:

#7 Don’t complain about the Salmon. Seriously guys. It’s Seattle. Do you know what the street value is these days for a good cut of salmon?

#6 The MVP Program isn’t just about products and technology. It’s about people and the wonderful relationships and friendships- both within and outside of Microsoft- that have made me a better technologist, learner and leader that transcend far beyond the program.

#5 Every MVP should hug their MVP Lead at least once a year. These are the people who work tirelessly to keep the program running. Weekly newsletters, connecting you to the right folks inside the PGs and putting up with our grotesquely overinflated sense of self are just some of the things your MVP Lead does that we know about.

#4 Microsoft isn’t perfect. I know, surprising right? But seriously, before I was an MVP, I used to have these lofty expectations of how Microsoft should think about this or that, or how a product or feature should be implemented. At the end of the day, Microsoft is just a (really big) group of really smart people trying to build the right thing for an even bigger group of people, which is much, much harder than building a solution for one client or customer. So before you complain about this or that, think back to the code you wrote just two projects ago and then look in the mirror :-)

#3 The MVP Program is a significant investment. Despite MVPs having to pay for their own travel to Redmond, the cost of subsidizing hotels and providing meals, facilities and transportation for 3000+ MVPs every year is far from trivial. However, this cost pales in comparison to the time that the product teams (including Program Managers, Software Development Engineers, Testers, directors and members of the executive team) take away from their projects, release schedules and a million other things to prepare content and demos for the week and spend quality, face-to-face time with MVPs across over 30 product disciplines.

#2 Microsoft genuinely values its MVPs and listens to what they have to say. Sure, some interactions/SDRs provide less opportunity to influence a feature or scenario depending on how baked the product is, but your input is listened to, documented, reviewed, discussed and considered in shaping the future roadmap. If you have any doubts about this, next time you are in a heated discussion, study the faces of the PMs in the room; the one grimacing could just be the owner of a feature they’ve poured the last year of their career into that you just (knowingly or unknowingly) killed.

#1 It’s not about the perks. MSDN subscriptions, access to internal DLs, invitations to SDRs, Product Group Interactions are really, really awesome, but if that’s the extent of what you are “getting” from the program, you’re doing it wrong. Being an MVP is about representing the community and customers and (politely) challenging your Microsoft peers (who are every bit as smart and passionate as you) to consider what you have to say in an effort to make the product(s) better. The opportunity to shape and influence the roadmaps and feature sets for a $78 billion global company is a tremendous privilege that never, ever gets old.

I’d like to thank Microsoft for another great year. I sincerely hope that I can positively influence the program, products and community in 2014 and give back even a fraction of what I have gained year over year, and with a little bit of luck and hard work, maybe we’ll see a #8 on this list by this time next year!

posted @ Friday, October 04, 2013 11:58 AM | Feedback (0) |

Tuesday, September 10, 2013

On Sprint Duration

I recently had a good internal discussion on 2 week versus 3 week sprints while getting ready to kick off a new project for a client.

This is a debate that's been happening in the agile community for well over a decade, and while there is now broad consensus that 4 week sprints are almost always highly undesirable, the thing to remember is that the quality, caliber and discipline of the team is absolutely essential regardless of sprint duration.


I Just Want to Dance!

The most common argument against two week sprints is that the planning ceremonies occupy too much time, effectively reducing the team's delivery time from 10 days to less than 8. This is true for severely under-performing teams. High performing teams should be able to land the demo and retro in one day and planning in as little as half a day. If the only time a team communicates is during these ceremonies, the time will drag on even further.

Performing teams communicate and practice dress rehearsals of demos, have a backlog groomed and ready to go and, in some cases, might even have stories already decomposed (signaling strong alignment with business prioritization).

Deployment Happens

The other common argument is that deployment time cuts further into delivery time, so that 8 days becomes more like 7 because it takes a day to deploy. This is true of teams that practice cargo cult agile development. If you don't have unit tests and automated builds in place, you _will_ feel pain each and every sprint.

Snake Oil

One solution to this dilemma that sometimes comes up is to start with, say, 3 week sprints and then, when the team has "earned" 2 week sprints, reduce the sprint duration accordingly. This is an anti-pattern for the simple reason that you can't improve what you can't measure, and if you go around changing sprint duration, velocity becomes corrupt. This means that your ability to predict future velocity is severely impeded, which affects budget and staffing and obviously has schedule implications.

Why Two Week Sprints Work

Two week sprints are highly advantageous when you face a high degree of risk and are delivering new innovation, because they provide tighter intervals with which to adapt and adjust. While 3 week sprints can be successful, most often they merely mask the team's lack of efficiency and provide a fig leaf to hide behind; in my experience, in almost every case you will find that productive, heads-down time still turns out to be just under 2 weeks!

When building a team, you should settle for no less than A-Team players who are going to kick ass and take names from day one. If you do the heavy lifting to invest in building the right team, the sprint duration won't matter nearly as much, but if you are not committed to this from day zero, no amount of sprint duration optimization will save you from failure.

posted @ Tuesday, September 10, 2013 5:21 PM | Feedback (0) | Filed Under [ Processs Projects ALM ]

Wednesday, March 20, 2013

Global Windows Azure BootCamp– Phoenix 4/27

The rumors are true. The Global Windows Azure Bootcamp is coming to Phoenix on April 27th, 2013. Registration is now open: https://phxglobalazurebootcamp.eventday.com/

This is a truly global event in which the Phoenix community will come together to share and learn what you can do on Windows Azure.

This one day deep dive class will get you up to speed on developing for and deploying to Windows Azure. The class will be led by myself and fellow MVPs including the one and only Joe Guadagno, Dan Wahlin and your friendly neighborhood Microsoft Regional Director, Scott Cate. You’re guaranteed to learn a ton, and in addition to the talks, you’ll work on some great hands-on labs so you can apply what you learn the same day and take the labs with you! Best of all, if you get stuck, we’ll be there to make you do push-ups, I mean, get unblocked ;-)

AGENDA

We will start at 9:00 with welcome and introductions and get right into an end-to-end overview of Windows Azure. From there, we’ll participate in a massive, coordinated, global deployment to Windows Azure, teaming with over 60 other worldwide locations to see the workload in action (details are super secret, so you have to show up to get your security clearance; this is a boot camp, after all)!

After we’ve done our best to take down a datacenter, we’ll take a break and then take a lap around storage and database options on Windows Azure while we enjoy a catered lunch kindly sponsored by Microsoft. We’ll also have fresh primo coffee, sodas, waters and snacks to help you power through the labs, which will give you real-world exposure to what it’s like to actually apply what you’ve learned, and you’ll take the working applications home with you.

From there we’ll take another break and wrap up the day with a look at how Windows Azure Service Bus changes how you think about messaging and integration when working at cloud scale. We’ll have a Service Bus lab, and from there we’ll likely plan some nefarious after-event activities at one of downtown Chandler’s fine watering holes.

Here are the details:

Activity | Time
Welcome and Introductions (15 mins) | 9:00 – 9:15
Windows Azure Overview (75 minutes) | 9:15 – 10:30
Deploy to the cloud! (45 minutes) | 10:30 – 11:15
Break (15 minutes) | 11:15 – 11:30
Windows Azure Storage and Database (90 minutes) | 11:30 – 1:00
Hands On Labs (2 hours) | 1:00 – 2:00
Break (15 minutes) | 2:00 – 2:15
Windows Azure Service Bus (90 minutes) | 2:15 – 3:45
Wrap Up | 3:45 – 5:00

HOW MUCH DOES BOOTCAMP COST?

This event is FREE to the attendees. Gratis! Gratuite! Libero! We’ll certainly take any good karma you want to send our way, but your attendance and full engagement is all we ask. Be sure to check out the prerequisites to ensure you are ready to rock.

DO I NEED TO BRING ANYTHING?

This is a BYOL event. To get the most of the event, you will want to come to boot camp with your own laptop pre-loaded with Visual Studio, the Azure SDK and all prerequisites. Please see http://globalwindowsazure.azurewebsites.net/?page_id=171 to download and install everything you’ll need to make this a great event.

BUT, I’M COMPLETELY NEW TO THIS AZURE CLOUD THING

This event is for you! We’ll have a mix of content both for experienced developers and those brand spanking new to Windows Azure. Our trainers will be here to answer all of your questions and help you with the labs, so remember, there are no stupid questions.

BUT, I ALREADY KNOW THIS STUFF

Awesome! We’d love to have you as you’ll probably teach us a thing or two and we guarantee you’ll walk away learning a few things too!

LOCATION, LOCATION, LOCATION

Boot camp will be held at Gangplank in Chandler, located at 260 South Arizona Avenue | Chandler, AZ 85225

WHAT’S NEXT?

Seating is limited for this event so please register now at https://phxglobalazurebootcamp.eventday.com/ to guarantee your seat and help us plan for coffee, drinks, snacks and lunch.

posted @ Wednesday, March 20, 2013 3:06 PM | Feedback (0) | Filed Under [ Events Azure Windows Azure MVP ]

Tuesday, February 26, 2013

Introducing the Neuron Azure Service Bus Adapter for Neuron 3.0

Anyone who knows me knows that I’m a messaging nerd. I love messaging so much that I all but gave up web development years ago to focus exclusively on the completely unglamorous space of messaging, integration and middleware. What drives me to this space? Why not spend my time and focus my career on building sexy Web or device apps that are much more fashionable and deliver something tangible that people can see, touch and feel?

These are questions I ponder often, but every time I do, an opportunity presents itself to apply my passion for messaging and integration in new and interesting ways that have a pretty major impact for my clients and the industry as a whole. Recent examples of projects I led and coded on span the Intelligent Transportation and Gaming spaces: developing an automated gate management solution to better secure commercial vehicles for major carriers when they’re off the road; integrating slot machines for a major casino on the Vegas strip with other amenities on the property to create an ambient customer experience; and increasing the safety of our highways by reading license plates and pushing messages to and from the cloud. These are just a few recent examples of the ways in which messaging plays an integral role in building highly compelling and interesting solutions that otherwise wouldn’t be possible. Every day, my amazing team at Neudesic designs and develops solutions on the Microsoft integration platform that have truly game-changing business impacts for our clients.

As hybrid cloud continues to prove itself as the most pragmatic approach for taking advantage of the scale and performance of cloud computing, the need for messaging and integration becomes only more important. Two technologies that fit particularly well in this space are Neuron and Azure Service Bus. I won’t take too much time providing an overview of each here as there are plenty of good write ups out there that do a fine job, but I do want to share some exciting news that I hope you will find interesting if you are building hybrid solutions today and/or working with Azure Service Bus or Neuron.

Over the last year, the Neuron team at Neudesic has been hard at work cranking out what I think is the most significant release since version 1.0, which I started working with back in 2007, and I’m thrilled to share that as of today, Neuron 3.0 is live!

Building on top of an already super solid WCF 4.0 foundation, Neuron 3.0 is a huge release for both Neudesic and our clients, introducing a ton of new features including:

 

  • Full Platform support for Microsoft .NET 4/LINQ, Visual Studio 2010/2012
  • New features in Management and Administration including
    • New User Interface Experience
    • Queue Management
    • Server and Instance Management
    • Dependency Viewers
  • New features in Deployment and Configuration Management including
    • New Neuron ESB Configuration storage
    • Multi Developer support
    • Incremental Deployment
    • Command line Deployment
  • New features in Business Process Designer including
    • Referencing External Assemblies
    • Zoom, Cut, Copy and Paste
    • New Process Steps
      • Duplicate Message Detection
      • For Each loop
      • ODBC
  • New Custom Process Steps including
    • Interface for Controlling UI Properties
    • Folder hierarchy for UI display

  • New features in Neuron Auditing including
    • Microsoft SQL Azure
    • Excluding Body and Custom Properties
    • Failed Message Monitoring
  • New Messaging features including
    • AMQP-Powered Topics with RabbitMQ
    • Improved MSMQ Topic Support
    • Adapters
      • POP3 and Microsoft Exchange Adapters
      • ODBC Adapter enhancements
      • Azure Service Bus Adapter
  • New in Service Broker including
    • REST enhancements
    • REST support for Service Policies
    • WSDL support for hosted SOAP services
  • Many enhancements to UI, bug fixes and improvements to overall user experience.

In version 2.6, I worked with the team to bring Azure Service Bus Relay Messaging in as a first-class capability. Since Neuron is built on .NET and WCF, and the relay service is exposed very nicely using the WCF programming model, adding the relay bindings to Neuron’s Service Endpoint feature was a no-brainer. This immediately provided the ability to bridge or extend the on-premise pub-sub messaging, transformation, mediation, enrichment and security capabilities with Azure Service Bus Relay, enabling new, highly innovative hybrid solutions for my team and our customers.

Between then and this new release, Microsoft released support for queues and topics, also known as Brokered Messaging. These capabilities introduced the ability to model durable, pull-based pub-sub messaging in scenarios where such a brokered mechanism makes sense. To be clear, Brokered Messaging is not a replacement for Relay; in fact, we’ve worked on a number of solutions where the firewall-friendly push messaging capabilities of Relay fit and even complement certain scenarios (notification-first, pull-based pub-sub is a very handy dandy messaging pattern where both are used, and perhaps I’ll write that up some day). Think of them as tools in your hybrid cloud messaging toolbox.

It didn’t take long to see the potential of these additions to Azure Service Bus, and I started having discussions with the Neuron team at Neudesic and the Azure Service Bus team at Microsoft about building an adapter that, like Relay, would bring Brokered Messaging capabilities to Neuron, enabling a complete, rich spectrum of hybrid messaging capabilities.

Luckily, both teams agreed it was a good idea and Neudesic was nice enough to let me write the adapter.

Obviously, as a messaging nerd, I found this an incredibly fun project to work on, and after just a couple of hours I had my first spike up and running on a very early build of Neuron 3.0, demonstrating a message published to Neuron being re-published on an Azure Service Bus topic. Seven major milestones, a number of internal demos, walkthroughs with the Service Bus team and a ton of load and performance testing later, I completed what is now the initial release of the Neuron Azure Service Bus Adapter, which ships with Neuron 3.0!

What follows is a lap around the core functionality of the adapter largely taken from the product documentation that ships with Neuron 3.0. I hope you will find the adapter interesting enough to take a closer look and even if hybrid cloud is not on your mind, there are literally hundreds of reasons to consider Neuron ESB for your messaging needs.

Overview

Windows Azure Service Bus is a Platform as a Service (PaaS) capability from Microsoft that provides a highly robust messaging fabric hosted by Microsoft Windows Azure.

Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB by providing pub-sub messaging capable of traversing firewalls, a taxonomy for projecting entities and very simple orchestration capabilities via rules and actions.

As shown below, Azure Service Bus bridges on-premise messaging capabilities, enabling the development of hybrid cloud applications that integrate with external services and service providers located behind the firewall, and allowing a new, modern breed of compositions to transcend traditional network, security and business boundaries.


Bridging ESBs in Hybrid Clouds – Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB enabling a next generation of hybrid cloud applications that transcend traditional network, security and business boundaries.

There are two services supported by Azure Service Bus:

  • Azure Service Bus Relay: Serves as a push-based relay between two (or more) endpoints. A client and service (or services) establish an outbound, bi-directional socket connection over either TCP or HTTP on the relay, and messages from the client tunnel their way through the relay to the service. In this way, both the client and service are really peers on the same messaging fabric.

  • Azure Service Bus Brokered Messaging: Provides a pull-based durable message broker that supports queues, topics and subscriptions. A party wishing to send messages to Azure Service Bus establishes a TCP or HTTP connection to a queue or topic and pushes messages to the entity. A party wishing to receive messages from Azure Service Bus establishes a TCP or HTTP connection and pulls messages from a queue or subscription.

Neuron ESB 3.0 supports both Azure Service Bus services and this topic focuses on support of Azure Service Bus Brokered Messaging via the Neuron Azure Service Bus Adapter.

For more information on support for Azure Service Bus Relay support, please see “Azure Service Bus Integration” in the “Service Endpoints” topic in the Neuron ESB 3.0 product documentation.

About the Neuron Azure Service Bus Adapter

The Neuron Azure Service Bus Adapter provides full support for the latest capabilities provided by the Windows Azure SDK version 1.7.

Once the Neuron Azure Service Bus adapter is registered and an Adapter Endpoint is created, all configuration is managed through the property grid of the Adapter located on the properties tab of the Adapter Endpoint’s Details Pane:


Neuron Azure Service Bus Adapter – Property Grid – All configuration for the adapter is managed through the property grid. Properties are divided into 3 sections: General, Publish Mode Properties, and Subscribe Mode Properties.

Please note that in order to connect to an Azure Service Bus entity with the Neuron Azure Service Bus adapter, you need to sign up for an Azure account and create an Azure Service Bus namespace with the required entities and ACS configuration. For more information, visit http://azure.com
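For illustration, here is a minimal sketch of what connecting to a namespace with ACS credentials looks like when coding directly against the 1.7 SDK (this is not the adapter’s code, and the namespace, issuer, key and queue names are placeholders):

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class ServiceBusConnectionSketch
{
    static void Main()
    {
        // Placeholder values - substitute your own namespace, issuer and ACS key.
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<your-acs-key>");
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

        // NamespaceManager can verify (or create) the entities the adapter expects.
        var namespaceManager = new NamespaceManager(address, tokenProvider);
        if (!namespaceManager.QueueExists("orders"))
        {
            namespaceManager.CreateQueue("orders");
        }

        // MessagingFactory produces the clients used to send to and receive from entities.
        var factory = MessagingFactory.Create(address, tokenProvider);
        var client = factory.CreateQueueClient("orders");
        Console.WriteLine("Connected to {0}", address);

        client.Close();
        factory.Close();
    }
}
```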

Features

The Neuron Azure Service Bus adapter supports the following Azure Service Bus Brokered Messaging features:

  • Send to Azure Service Bus Queue
  • Send to Azure Service Bus Topic
  • Receive from Azure Service Bus Queue
  • Receive from Azure Service Bus Subscription

In addition, the Neuron Azure Service Bus adapter simplifies the development experience by providing additional capabilities typical in production scenarios without the need to write custom code including:

  • Smart Polling
  • Eventual Consistency
  • Transient Error Detection and Retry

The Neuron Azure Service Bus adapter is installed as part of the core Neuron ESB installation. The adapter is packaged into a single assembly located within the \Adapters folder under the root of the default Neuron ESB installation directory:

  • Neuron.Esb.Adapters.AzureServiceBusAdapter.dll

In addition, the following assembly is required and automatically installed in the root of the folder created for the service instance name:

  • Microsoft.ServiceBus.dll (Azure SDK version 1.7)

To use the adapter, it must first be registered within the Neuron ESB Explorer Adapter Registration Window. Within the Adapter Registration Window, the adapter will appear with the name “Azure Service Bus Adapter”. Once registered, a new Adapter Endpoint can be created and configured with an instance name of your choice:


Neuron ESB Explorer Adapter Registration Window - Property Grid – Before configuring the adapter instance for Publish or Subscribe mode, the adapter must first be registered.

Supported Modes

Once the initial registration is complete, the Neuron Azure Service Bus adapter can be configured in one of two modes: Publish or Subscribe.

Publish

Publish mode allows Neuron ESB to monitor an Azure Service Bus Queue or Subscription by regularly polling, de-queuing all the messages, and publishing those messages to a Neuron ESB Topic. Messages are read synchronously via a one-way MEP.
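To make the flow concrete, here is a conceptual sketch of the Publish-mode pattern written against the raw 1.7 SDK. This is not the adapter’s actual source; `publishToNeuron` is a hypothetical stand-in for the step that re-publishes the message on the configured Neuron ESB topic, and the topic and subscription names are placeholders.

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

static class PublishModeSketch
{
    // Conceptual sketch of the Publish-mode flow - not the adapter's source.
    public static void Pump(MessagingFactory factory, Action<BrokeredMessage> publishToNeuron)
    {
        // ReceiveAndDelete mirrors "Delete After Receive = True"; see the
        // Eventual Consistency topic below for the safer PeekLock variant.
        var client = factory.CreateSubscriptionClient("orders-topic", "neuron-sub", ReceiveMode.ReceiveAndDelete);
        while (true)
        {
            // The timed receive is what makes Smart Polling cheap: the service
            // holds the request open until a message arrives or the wait expires.
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
            if (message == null) continue; // wait elapsed with nothing to deliver

            publishToNeuron(message); // hand off to the bus
        }
    }
}
```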


Receiving Messages from Azure Service Bus – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Configuration

Configuring the Publish mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:

General Properties
  • Azure Service Bus Namespace Name - A registered namespace on Azure Service Bus. For example 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net (for information on how to provision, configure and manage Azure Service Bus namespaces, please see the Azure Service Bus topic on http://azure.com).
  • Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS - For information on how to provision, configure and manage Azure Access Control namespaces, please see the Azure Access Control topic on http://azure.com).
  • Azure ACS Key – The shared key used in conjunction with Azure ACS Issuer Name.
  • Azure Entity Type - Queue or Subscription
  • Azure Channel Type – Default, if outbound TCP port 9354 is open, or HTTP to force communication over HTTP ports 80/443 (in Default mode, the Neuron Azure Service Bus Adapter will try to connect via TCP; if outbound TCP port 9354 is not open, choose HTTP; see the sketch after this list).
  • Retry Count - The number of Service Bus operations retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Minimum Back-Off - The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Maximum Back-Off - The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
Publish Properties
  • Azure Queue Name- The name of the queue that you want to receive messages from (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Azure Subscription Name - The name of the subscription you want to receive messages from (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Delete After Receive – False by default. If set to True, deletes the message from the queue or topic after it is received regardless of whether it is published to Neuron successfully (for more information on this setting, see the “Understanding Eventual Consistency” topic).
  • Wait Duration - Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (for more information on this setting, see the “Understanding Smart Polling” topic).
  • Neuron Publish Topic - The Neuron topic that messages will be published to. Required for Publish mode.
  • Error Reporting – Determines how all errors are reported in the Windows Event Log and Neuron logs: either as Errors, Warnings or Information.
  • Error on Polling – Determines whether polling of the data source continues on error and whether consecutive errors are reported.
  • Audit Message on Failure – Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.
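Regarding the Azure Channel Type property above: when coding directly against the SDK, the rough equivalent is the process-wide connectivity mode setting. A sketch, not adapter code:

```csharp
using Microsoft.ServiceBus;

static class ChannelTypeSketch
{
    // Rough SDK equivalent of the adapter's "Azure Channel Type" property:
    // force all Service Bus communication over HTTP/HTTPS (ports 80/443)
    // for environments where outbound TCP port 9354 is blocked.
    public static void ForceHttp()
    {
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;
    }
}
```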

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Receive” in Publish mode:


Publish Mode General Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Publish mode:


Publish Mode Properties Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Subscribe

Subscribe mode allows Neuron ESB to write messages that are published to Neuron ESB to an Azure Service Bus queue or topic. In this manner, Neuron ESB supports the ability to bridge an Azure Service Bus entity, allowing for on-premise parties to seamlessly communicate with Azure Service Bus. Once Neuron ESB receives a message, it sends the message to an Azure Service Bus Queue or Topic.


Sending Messages to Azure Service Bus – When in Subscribe mode, the adapter supports sending messages published on Neuron ESB to an Azure Service Bus entity.
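Conceptually, the Subscribe-mode forwarding step looks like the following sketch against the raw SDK. The adapter does this for you; the topic name, payload handling and `Source` property here are illustrative placeholders, not the adapter’s implementation.

```csharp
using System.IO;
using System.Text;
using Microsoft.ServiceBus.Messaging;

static class SubscribeModeSketch
{
    // Conceptual sketch of the Subscribe-mode flow - not the adapter's source.
    public static void Forward(MessagingFactory factory, string payload)
    {
        var client = factory.CreateTopicClient("orders-topic");
        var message = new BrokeredMessage(new MemoryStream(Encoding.UTF8.GetBytes(payload)), true)
        {
            ContentType = "text/xml" // mirrors the adapter's default body type mapping
        };
        message.Properties["Source"] = "NeuronESB"; // custom properties ride in the property bag

        client.Send(message);
        client.Close();
    }
}
```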

Configuration

In addition to the General Properties covered under the Publish mode documentation, configuring the Subscribe mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:

Subscribe Properties
  • Adapter Send Mode - Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (for more information on this setting, see the “Choosing Synchronous vs. Asynchronous” topic).
  • Adapter Queue Name - The name of the queue you want to send messages to (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Adapter Topic Name - The name of the topic you want to send messages to (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Send” in Subscribe mode:


Subscribe Mode General Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Subscribe mode:


Subscribe Mode Properties Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

Understanding Transient Error Detection and Retry

When working with services in general and multi-tenant PaaS services in particular, it is important to understand that in order to scale to virtually hundreds of thousands of users/applications, most services like Azure Service Bus, SQL Azure, etc. implement a throttling mechanism to ensure that the service remains available.

This is particularly important when you have a process or application that is sending or receiving a high volume of messages because in these cases, there is a high likelihood that Azure Service Bus will throttle one or several requests. When this happens, a fault/HTTP error code is returned and it is important for your application to be able to detect this fault and attempt to remediate accordingly.

Unfortunately, throttle faults are not the only errors that can occur. As with any service, security, connection and other unforeseen errors (exceptions) can and will occur, so the challenge becomes not only being able to identify the type of fault, but in addition, know what steps should be attempted to remediate.

Per the guidance provided by the Azure Customer Advisory Team (http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/), the Neuron Azure Service Bus adapter uses an exponential back-off based on the values provided for the Retry Count, Minimum Back-Off and Maximum Back-Off properties within the Properties tab for both Publish and Subscribe mode.

Given a value of 3 retries, two seconds and ten seconds respectively, the adapter will automatically determine a value between two and ten and back off exponentially one time for each retry configured:


Exponential Back-Off Configuration – The adapter will automatically detect transient exceptions/faults and retry by implementing an exponential back-off algorithm given a retry count, initial and max back-off configuration.

Taking this example, if the adapter chose an initial back-off of two seconds, then in the event of a transient fault being detected (i.e. throttle, timeout, etc.), the adapter would wait two seconds before trying the operation again (i.e. sending or receiving a message) and exponentially increase the delay until either the transient error disappears or the retry count is exceeded.

In the event that the retry count is exceeded, the Neuron Azure Service Bus adapter will automatically persist a copy of the message in the audit database to ensure that no messages are lost (provided a SQL Server database has been configured).
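Here is a minimal sketch of the back-off scheme as described above; this is not the adapter’s actual implementation, and the transient-fault detection is deliberately simplified.

```csharp
using System;
using System.Threading;

static class BackOffRetrySketch
{
    // Minimal sketch of the back-off scheme described above - not the adapter's
    // code. A starting delay is chosen between the minimum and maximum back-off,
    // then grown exponentially (capped at the maximum) for each configured retry
    // until the operation succeeds or retries run out.
    public static void Execute(Action operation, int retryCount, int minBackOffSeconds, int maxBackOffSeconds)
    {
        var random = new Random();
        double delay = random.Next(minBackOffSeconds, maxBackOffSeconds + 1);

        for (int attempt = 0; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (Exception) // a real implementation retries only *transient* faults
            {
                if (attempt >= retryCount) throw; // exhausted: surface the failure for auditing
                Thread.Sleep(TimeSpan.FromSeconds(delay));
                delay = Math.Min(delay * 2, maxBackOffSeconds); // exponential growth, capped
            }
        }
    }
}
```

With the example values above, a call like `BackOffRetrySketch.Execute(() => client.Send(message), 3, 2, 10)` would wait roughly 2, 4 and 8 seconds between attempts before giving up.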

Understanding Smart Polling

When configuring the Neuron Azure Service Bus Adapter in Publish mode, the adapter can take advantage of a Neuron ESB feature known as Smart Polling.

With Smart Polling, the adapter will connect to an Azure Service Bus queue or subscription and check for messages. If one or more messages are available, all messages will be immediately delivered (see “Understanding Eventual Consistency” for more information on supported read behaviors).

However, if no messages are available, the adapter will open a connection to the Azure Service Bus entity and wait for a specified timeout before attempting to initiate another poll request (essentially resulting in a long-polling behavior). In this manner, Azure Service Bus quotas are honored while ensuring that the adapter issues a receive request only when the configured timeout occurs as opposed to repeatedly polling the Azure Service Bus entity.

Understanding Eventual Consistency

When working with Azure Service Bus, it is important to note that the model for achieving consistency is different from traditional distributed transaction models. For example, when working with modern relational databases, or spanning multiple services that are composed into a logical unit of work (using WS-Atomic Transactions, for example), it is a common expectation that work will either be performed completely or not at all. These transactions have the characteristics of being atomic, consistent, isolated and durable (ACID). However, to achieve this level of consistency, a resource manager is required to coordinate the work being carried out by each service/database that participates in a logical transaction.

Unfortunately, given the virtually unlimited scale of the web and cloud computing, it is impossible to deploy enough resource managers to coordinate the hundreds of thousands, if not millions, of resources required to achieve this level of consistency. Even if it were possible, the cost to the scale and performance demanded by modern cloud-scale applications would make it impractical.

Of course, consistency is still just as important for applications that participate in logical transactions across, or consume, cloud services. An alternative approach is to leverage a basically available, soft state, eventually consistent (BASE) approach to transactions.

Ensuring Eventual Consistency in Publish Mode

Azure Service Bus supports this model for scenarios that require consistency, and the Neuron Azure Service Bus adapter makes taking advantage of this capability simply a matter of leaving the “Delete After Receive” property (available in the Publish Mode Settings) set to False, which is the default.

When set to False, the adapter ensures that a received message is not discarded from the Azure Service Bus entity until it has been successfully published to Neuron ESB. In the event that an error occurs when attempting to publish a message, the message is restored on the Azure Service Bus entity, ensuring that it remains available for a subsequent receive attempt. (Please note that lock durations configured on the entity will affect the behavior of this feature. For more information, please refer to the Azure Service Bus documentation on MSDN: http://msdn.microsoft.com/en-us/library/ee732537.aspx.)
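In SDK terms, this behavior corresponds to receiving in PeekLock mode and calling Complete or Abandon on the message. A sketch of the pattern (not the adapter’s source; `publishToNeuron` is again a hypothetical stand-in):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

static class EventualConsistencySketch
{
    // Sketch of the behavior behind "Delete After Receive = False" - not the
    // adapter's source. In PeekLock mode the message is hidden, not deleted:
    // Complete() removes it only after a successful publish, while Abandon()
    // (or an expired lock) restores it to the entity for a later attempt.
    public static void ReceiveReliably(QueueClient client, Action<BrokeredMessage> publishToNeuron)
    {
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
        if (message == null) return;

        try
        {
            publishToNeuron(message); // hypothetical Neuron ESB publish step
            message.Complete();       // success: now discard from the entity
        }
        catch (Exception)
        {
            message.Abandon();        // failure: unlock so it can be received again
            throw;
        }
    }
}
```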

Choosing Synchronous versus Asynchronous Send

When the Neuron Azure Service Bus adapter is configured in Subscribe mode, you can choose to send messages to an Azure Service Bus queue or topic in either synchronous or asynchronous mode by setting the Adapter Send Mode property to either “Asynchronous” or “Synchronous” in the Subscribe Mode Property group.

If reliability is a top priority and the possibility of message loss cannot be tolerated, it is recommended that you choose Synchronous. In this mode, the adapter will transmit messages to an Azure Service Bus queue or topic at a rate of about 4 or 5 per second. While it is possible to increase this throughput by adding additional adapters in Subscribe mode, as a general rule, use this mode when choosing reliability at the expense of performance/throughput.

By contrast, if performance, low latency or throughput is a top priority, configuring the adapter to send asynchronously will result in significantly higher throughput (by several orders of magnitude). While the send performance in this mode is much higher, in the event of a catastrophic failure (server crash, out-of-memory exception), messages that have left the Neuron ESB process but have not yet been transmitted to Azure Service Bus (i.e. are still in memory) can be lost; the possibility of message loss is much higher than in synchronous mode because of the significantly higher density of in-flight messages.
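The trade-off maps directly to the two send styles available in the 1.7 SDK. A sketch, not the adapter’s implementation:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

static class SendModeSketch
{
    // Synchronous: one blocking round-trip per message. Slow, but the call does
    // not return until Service Bus has accepted the message.
    public static void SendSync(QueueClient client, BrokeredMessage message)
    {
        client.Send(message);
    }

    // Asynchronous (the SDK 1.7 APM pattern): many sends can be in flight at
    // once, which is where the throughput gain comes from - and also why an
    // in-memory backlog can be lost if the process dies before transmission.
    public static void SendAsync(QueueClient client, BrokeredMessage message)
    {
        client.BeginSend(message, ar =>
        {
            try { client.EndSend(ar); }
            catch (Exception ex) { Console.Error.WriteLine("Send failed: {0}", ex.Message); }
        }, null);
    }
}
```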

Other Scenarios

Temporal Decoupling

One of the benefits of any queue-based messaging pattern is that the publisher/producer is decoupled from the subscribers/consumers. As a result, parties interested in a given message can be added and removed without any knowledge of the publisher/producer.

By persisting the message until an interested party receives the message, the sending party is further decoupled from the receiving party because the receiving party need not be available at the time the message was written to the persistence store. Azure Service Bus supports temporal decoupling with both queues and topics because they are durable entities.

As a result, a party that writes new order messages to an Azure Service Bus queue can do so uninhibitedly, even when no consumer is currently listening.

When you configure an instance of the Neuron Azure Service Bus adapter in Publish mode, you can disable the adapter by unchecking the “Enabled” box. Any new messages written to the Azure Service Bus queue or subscription will persist until the adapter is enabled once again.

Competing Consumers

Another messaging pattern, which lets you exploit the performance and scalability benefits of the pull-based pub-sub model, is to scale the number of consumers to the resources available to you, adding consumers until throughput requirements are met.

To take advantage of this pattern with the Neuron Azure Service Bus adapter and Azure Service Bus, simply add additional instances of the Publishing adapter as needed:


Competing Consumers – Adding additional consumers with the Neuron Azure Service Bus adapter is simply a matter of adding additional instances of the Publishing adapter.
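For intuition, here is what competing consumers look like against the raw SDK: several receivers on the same queue, with Service Bus distributing messages across them. A sketch only; each pump below plays the role of one Publish-mode adapter instance, and the queue name is a placeholder.

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

static class CompetingConsumersSketch
{
    // Sketch only: Service Bus hands each message to exactly one of the
    // active receivers, so adding pumps raises aggregate throughput.
    public static void Start(MessagingFactory factory, int consumerCount)
    {
        for (int i = 0; i < consumerCount; i++)
        {
            var client = factory.CreateQueueClient("orders");
            Task.Factory.StartNew(() =>
            {
                while (true)
                {
                    var message = client.Receive();
                    if (message != null) message.Complete(); // process, then settle
                }
            }, TaskCreationOptions.LongRunning);
        }
    }
}
```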

Property Table

 

The following details each property exposed through the Neuron Explorer UI. All properties are required.

General – These properties are used for all modes of the adapter.

  • Azure Service Bus Namespace Name – A registered namespace on Azure Service Bus. For example, 'neudesic' would be the namespace for sb://neudesic.servicebus.windows.net.
  • Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS).
  • Azure ACS Key – The shared key used in conjunction with the Azure ACS Issuer Name.
  • Azure Entity Type – Default: Queue. Queue or Topic.
  • Azure Channel Type – Default: Default. Default if outbound TCP port 9354 is open, or HTTP to force communication over ports 80/443.
  • Retry Count – Default: 5. The number of Service Bus operation retries to attempt in the event of a transient error (see the “Understanding Transient Error Detection and Retry” topic).
  • Minimum Back-Off – Default: 3. The minimum number of seconds to wait before automatically retrying a Service Bus operation when a transient error is encountered (see the “Understanding Transient Error Detection and Retry” topic).
  • Maximum Back-Off – Default: 3. The maximum number of seconds to wait before automatically retrying a Service Bus operation when a transient error is encountered (see the “Understanding Transient Error Detection and Retry” topic).

Publish Properties – These properties are only used when the adapter is in either Request/Response or Publish mode.

  • Azure Queue Name – The name of the queue that you want to receive messages from (appears when “Queue” is the Azure Entity Type in General Properties).
  • Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (appears when “Topic” is the Azure Entity Type in General Properties).
  • Azure Subscription Name – The name of the subscription you want to receive messages from (appears when “Topic” is the Azure Entity Type in General Properties).
  • Delete After Receive – Default: False. If set to True, deletes the message from the queue or subscription after it is received, regardless of whether it is published to Neuron successfully (see the “Understanding Eventual Consistency” topic).
  • Wait Duration – Default: 5. Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (see the “Understanding Smart Polling” topic).
  • Neuron Publish Topic – The Neuron topic that messages will be published to. Required for Publish mode.
  • Error Reporting – Default: Error. Determines how all errors are reported in the Windows Event Log and Neuron logs: either as Errors, Warnings or Information.
  • Error on Polling – Default: Stop Polling On Error. Determines whether polling of the data source continues on error and whether consecutive errors are reported.
  • Audit Message on Failure – Default: False. Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.

Subscribe Properties – These properties are only used when the adapter is in either Solicit/Response or Subscribe mode.

  • Adapter Send Mode – Default: Asynchronous. Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (see the “Choosing Synchronous vs. Asynchronous” topic).
  • Adapter Queue Name – The name of the queue you want to send messages to (appears when “Queue” is the Azure Entity Type in General Properties).
  • Adapter Topic Name – The name of the topic you want to send messages to (appears when “Topic” is the Azure Entity Type in General Properties).

Message Format

Azure Service Bus uses a proprietary message envelope called a Brokered Message as the unit of communication between all messaging entities including queues, topics and subscriptions.

Publish Mode

In Publish mode, the Neuron Azure Service Bus Adapter will automatically map the body of the incoming Brokered Message to the Body property of the Neuron ESBMessage serializing the payload based on the detected encoding type as follows:

 

BrokeredMessage.ContentType | ESBMessage.Header.BodyType
text/plain | text/plain
text/xml | text/xml
application/msbin-1 | application/msbin-1
binary/bytes | binary/bytes
Other | text/xml

Note per the table above that unless otherwise specified, the Neuron Azure Service Bus adapter will assume that the incoming message payload is text/xml.

In addition, any properties stored in the Properties property bag of the BrokeredMessage will be automatically mapped to the ESBMessage property bag, provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration. An exception to this rule is that the adapter will always map the BrokeredMessage.LockToken to the ESBMessage property bag with the same name, regardless of whether “Include Metadata” is checked.
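A small sketch of the content-type mapping described above (illustrative only, not the adapter’s source):

```csharp
using Microsoft.ServiceBus.Messaging;

static class BodyTypeMappingSketch
{
    // Sketch of the mapping table above: known types pass through
    // unchanged, anything else is assumed to be text/xml.
    public static string MapBodyType(BrokeredMessage message)
    {
        switch (message.ContentType)
        {
            case "text/plain":
            case "text/xml":
            case "application/msbin-1":
            case "binary/bytes":
                return message.ContentType;
            default:
                return "text/xml";
        }
    }
}
```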

Subscribe Mode

In Subscribe mode, the Neuron Azure Service Bus Adapter will automatically create a new Brokered Message for each transmission and map the body of an outgoing ESBMessage to the new message body as follows:

 

ESBMessage.Header.BodyType | BrokeredMessage.ContentType
text/plain | text/plain
text/xml | text/xml
application/msbin-1 | application/msbin-1
binary/bytes | binary/bytes
Other | text/xml

In addition, any properties stored in the Property property bag of the ESBMessage will be automatically mapped to the BrokeredMessage property bag provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration.

Brokered Message Limitations

Note that the total payload size for Azure Service Bus messages is 256KB. The Neuron Azure Service Bus adapter will throw a runtime exception if a message of 256KB or larger is sent, and will save the message to the failed-message audit table.
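A defensive pre-send check along these lines can avoid the runtime failure entirely. A sketch under the assumption that you are sending directly with the SDK; note that BrokeredMessage.Size is approximate before transmission, so a real guard should leave headroom for headers and properties:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

static class SizeGuardSketch
{
    const long MaxPayloadBytes = 256 * 1024; // Azure Service Bus cap at the time of writing

    // Reject oversized messages before they fail at the service and land in
    // the failed-message audit table.
    public static void SendChecked(QueueClient client, BrokeredMessage message)
    {
        if (message.Size >= MaxPayloadBytes)
            throw new InvalidOperationException("Payload exceeds the 256KB Brokered Message limit.");
        client.Send(message);
    }
}
```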

Wrapping Up

Thanks for your interest and please don’t hesitate to hit me with questions, comments and feedback. If you see something missing, I’d love to hear from you as we are already starting to think about features for v.Next.

I had a ton of fun writing this adapter and would like to thank the Neuron product team for allowing me to make this small contribution to this incredible release.

This adapter is just a small part of this major release, and I hope this post has piqued your interest in checking out Neuron ESB. Getting up and running is super simple, and you can download the trial bits here: http://products.neudesic.com/

posted @ Tuesday, February 26, 2013 10:47 AM | Feedback (0) | Filed Under [ neudesic ESB Azure Neuron ESB Azure Service Bus ]

Sunday, February 17, 2013

Visual Studio Live! Las Vegas

 

I’ll be speaking at Visual Studio Live!, March 25-29 in Las Vegas, NV: http://bit.ly/VSLPKK14Reg

There are a ton of great sessions from fellow friends, MVPs and Neudesic colleagues so this is sure to be a great show.

I’ll be presenting the following sessions:

SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Just be sure to use this link to register and use code VLSPK14: http://bit.ly/VSLPKK14Reg

Hope to see you there!

posted @ Sunday, February 17, 2013 12:32 PM | Feedback (1) | Filed Under [ Speaking Events Windows Azure Node .NET 4.5 ]
