rickgaribay.net

Space shuttles aren't built for rocket scientists, they're built for astronauts. The goal isn't the ship, it's the moon.

Where's Rick?


GWAB PHX #2, 3/29
Tucson .NET UG, 4/15
VS Live Chicago, 5/5
VS Live Redmond, 8/18
AzureConf Redmond/Web, 9/25
VS Live D.C., 10/6
App Dev Trends Las Vegas, 12/8

About Me
I am a developer, architect, writer and speaker, passionate about distributed technologies and Application Lifecycle Management. I am currently a VP and Distinguished Engineer at Neudesic.

Monday, January 19, 2015

Type 1 Diabetes, Open Source Software and The Internet of (life saving) Things

Some of you may be aware that my 9-year-old daughter was diagnosed with Type 1 Diabetes last year. To say that the time since has been an emotional roller coaster would be a gross understatement, but thankfully she is doing great and adjusting really well.

While huge strides have been made in the treatment of T1D, one of the main things that the disease steals from you and your family is time and peace of mind. Finger pricks, injections and treating lows all take time away from living a regular life. What most people don't know is that with type 1, going low (i.e. blood sugar dropping) is far more dangerous than being high.

Type 1 Diabetes is a Family Disease

When you have type 1 diabetes, your body doesn't produce insulin so you have one choice: Add insulin via injections or die. Pretty much that simple.

The irony is that while we can't live without insulin, given its critical role of delivering energy (sugar) to our cells for normal brain and bodily functions, too much insulin can also kill you, because it is incredibly efficient at delivering the sugar in your bloodstream to hungry cells (or to fat storage if they're well topped off). Without sugar (glucose, to be precise) your body, starting with your brain, literally shuts down. But too much sugar in the bloodstream (which someone with T1D will always have without supplemental insulin) damages organs and over time can lead to impaired eyesight, nerve damage and a host of other issues. So the key is to maintain a balance of external sugars (carbs) and insulin, since your body (specifically, your pancreas) is unable to do so.

The problem is that we make crummy substitutes for doing the important work of the pancreas. The result is a life that feels like you're constantly walking a tightrope, where one miscalculation in the amount of insulin administered could quickly turn into an emergency situation.

Most families are very good at detecting the symptoms of hypoglycemia. Irritability, perspiration, nervous shakes and confusion are common indicators that someone with T1D is "going low", and so far my daughter has been really good about detecting these symptoms by staying in tune with her body and her feelings and being super responsible about her tests. We are extraordinarily fortunate to have an amazing care team including teachers, nurses and faculty at school, dance instructors, piano teachers and relatives who have learned the symptoms and can act in the event my daughter doesn't realize she's going low (sudden confusion is a common symptom).

Maintaining this balance is especially frightening at night, because it is possible to go low without recognizing the symptoms (the person is peacefully sleeping) while the family and care team are also asleep. Far too many children, young adults and grown-ups have sadly died in their sleep because they didn't realize they were going low; by the time they could have medicated (something as simple as drinking a juice box or eating a snack-size pack of Skittles), it was too late.

The cure for parents of children with T1D is to set the alarm clock for 2 or 3 AM (a new routine in my home) and do a test to ensure that if she is low, we can get her back up. 9.9 times out of 10 she is fine, right within range, but we have had a couple of near lows. This is especially important on high-exercise days like dance or running club, because the body consumes more sugar (energy) during times of peak activity, which can gradually bring the sugar surplus down. In the event that someone with T1D is very high in the middle of the night (again, despite the utmost care, unlike the perfect pancreas and liver team, we can and will make mistakes in calculations), he or she must take insulin to bring the blood sugar down to a non-damaging, normal range, and that means a pretty awful, sleepless night with more finger pricks.

As you can imagine, this is not fun for someone living with T1D or their parents or caretakers, but the choice of giving up a full night's sleep and bothering your little one with yet another finger prick is an easy trade-off when the stakes are this high.

Fortunately, about twice a decade, new innovation clears the very high FDA bar to deliver technology that makes living with Type 1 Diabetes a little easier. Examples include the advent of insulin pens with nano-needles (over larger, clunkier and more painful injections), home glucose testing kits (over relying on highly inaccurate urine tests or, worse, monthly visits to your doctor for lab work) and insulin pumps that administer micro doses of insulin and do a far better job of playing pancreas than we humans can.

 

Continuous Glucose (Diabetes Nerdspeak for Blood Sugar) Monitoring

One such recent breakthrough has been bringing continuous glucose monitoring (CGM) to the market. A CGM works by inserting a small needle, exposed from a sensor about the size of a postage stamp (but much thicker), into the subcutaneous tissue and reading the current blood sugar at a given interval. The sensor then transmits the reading to a receiver about the size of a cell phone for display. While finger pricks are still required from time to time for calibration purposes, a CGM is a huge improvement to the quality of life of someone living with T1D and their care team, because it not only provides blood sugar information much more frequently, but given that frequency can also report on the data trend. This is incredibly helpful in knowing whether blood sugar is flat (good), rising (not good) or falling (really not good). For example, if a person with T1D is about to play in a tennis match and is within range but trending down, this data provides the insight to eat an apple or other healthy snack before the game. Likewise, if after a celebratory pizza party blood sugar is still continuing to rise after administering X units of insulin, a corrective dose can be administered without missing it and going high for an afternoon.
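
To make the trend idea concrete, here is a minimal, hypothetical sketch that classifies a trend from the last few readings, assuming one reading every five minutes. Real devices use far more sophisticated smoothing; this only illustrates the flat/rising/falling idea:

    using System;
    using System.Collections.Generic;

    class TrendDemo
    {
        // Hypothetical classification of a CGM trend from recent readings (mg/dL).
        // The 2.0 mg/dL-per-reading threshold is an assumption for illustration.
        static string ClassifyTrend(IList<double> readings)
        {
            if (readings.Count < 2) return "unknown";

            // Average change per reading over the window.
            double deltaPerReading =
                (readings[readings.Count - 1] - readings[0]) / (readings.Count - 1);

            if (deltaPerReading > 2.0) return "rising";    // not good
            if (deltaPerReading < -2.0) return "falling";  // really not good
            return "flat";                                 // good
        }

        static void Main()
        {
            Console.WriteLine(ClassifyTrend(new List<double> { 110, 104, 97 })); // falling
        }
    }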

So while CGMs are an amazing advancement in the treatment of T1D, alone they are not a panacea, because while you benefit from much more information much more frequently, you still have to be awake to benefit from the data. Some devices (such as the Dexcom G4 Platinum) provide the ability to set hypo (low) and hyper (high) alarms, which is a great step forward, but snoozing through an alarm clock is something we've all experienced, so the risk remains. In addition, the information is isolated within the device and visible only to someone near it.

CGM in the Cloud (The Nightscout Project)

This is where the work of the amazing community that is the Nightscout Project comes in. The project is a 100% volunteer group of DIYers, all with day jobs, who together are learning and sharing new ways to expose this critical data in a way that increases visibility and awareness for the entire care team. You can learn more about this great project here, but in a nutshell, the project provides a reference architecture, guidance and code to stream this precious telemetry up to the Microsoft Azure cloud. The idea is that once the data is unshackled from the standalone device, it can be shared openly. Imagine the peace of mind this brings to a parent who is wondering if his or her child got enough carbs before recess and with one click can see a dashboard with live updates 5 minutes apart.
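
To illustrate the idea (this is not the Nightscout uploader's actual code), a sketch of pushing a single reading to a cloud-hosted site might look like the following; the endpoint URL, secret header and payload shape are all assumptions for illustration:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class GlucoseUploader
    {
        // Hypothetical endpoint and wire format; the real Nightscout API may differ.
        // This only sketches the concept of streaming CGM telemetry to the cloud.
        static async Task UploadReadingAsync(int mgdl, string trend)
        {
            long unixMillis = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalMilliseconds;
            string json = "{ \"sgv\": " + mgdl + ", \"direction\": \"" + trend +
                          "\", \"date\": " + unixMillis + " }";

            using (var http = new HttpClient())
            {
                http.DefaultRequestHeaders.Add("api-secret", "<your-hashed-secret>"); // assumption
                var response = await http.PostAsync(
                    "https://yourcgmsite.azurewebsites.net/api/v1/entries", // assumption
                    new StringContent(json, Encoding.UTF8, "application/json"));
                response.EnsureSuccessStatusCode();
            }
        }
    }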

(Screenshot: the dashboard running on Azure Websites.)

Microsoft has committed to provide 1,500 families with six-month "Azure passes" to offset the cost of Nightscout. See more at: http://diatribe.org/tech-giant-microsoft-teams-jdrf-champion-cloud-based-solutions-diabetes

That’s game changing in and of itself. But that’s really only the beginning. Once the data is landing on Azure, you are free to do with it what you want. This is where leveraging APIs for email or SMS comes in. How nice would it be to get automatic updates anytime your child’s blood sugar is trending up or down, or to be able to display the current blood sugar on a smart watch, giving your child back just a little bit of dignity and freedom to live life a little more carefree, like any kid should be able to?
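
As a sketch of the kind of alert rule a parent might bolt on once readings land in the cloud (the thresholds and the SMS gateway endpoint are placeholders, not Nightscout's actual alerting logic; any provider such as Twilio would do):

    using System.Net.Http;
    using System.Threading.Tasks;

    class LowAlert
    {
        // Hypothetical rule: if the latest reading is lowish and falling, text the care team.
        static async Task CheckAndNotifyAsync(int mgdl, string direction)
        {
            bool trendingLow = mgdl < 90 && direction == "falling"; // assumed thresholds
            if (!trendingLow) return;

            using (var http = new HttpClient())
            {
                // Placeholder for an SMS provider call.
                await http.PostAsync("https://sms.example.com/send",
                    new StringContent("Heads up: reading " + mgdl + " mg/dL and falling."));
            }
        }
    }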

Walking the T1D Tightrope: T1D in the Cloud (and Nightscout Installation Workshop)


On Thursday, January 22nd, the JDRF Orange County Chapter, the Nightscout Project and Microsoft are teaming up for a 4-hour educational event to explore the latest state of the art in managing diabetes in the cloud, followed by a hands-on workshop to help several families get up and running with the Nightscout kit. You can learn more and register for the event here.

I will be attending the event to learn all I can and to provide any technical support I can offer in the workshop, though this is the first time I’ll be seeing the build myself. My goal is to bring this event to Phoenix this year, and I’m currently in conversations with JDRF, Microsoft and Nightscout leadership, so stay tuned (I also got a soft commitment for the keynote from a highly prominent A-lister that I promise any nerd won’t want to miss, whether you’ve been touched by T1D or not). I’d like to thank Ali Mazaheri and Katie Ryckman at Microsoft, Michelle Popoff at JDRF and Wes Ton of the Nightscout project for bringing this event to Orange County and allowing me to participate. Last but not least, I’d like to thank Scott Hanselman for connecting me with the Nightscout project and teaching me that true community extends far beyond a platform or technology.

More to Come…

As you can tell, I’m pretty passionate about this project and you’ll be hearing from me a lot more (as well as hopefully seeing some commits from me on GitHub).

We didn’t ask for Type 1 Diabetes, and trust me, I’d give anything for God to take it back, but I feel uniquely blessed to have the opportunity to be a part of this. As the father of the most amazing daughter in the world (who just happens to have T1D), being a Microsoft Azure MVP coupled with my interest in IoT gives me an amazing opportunity to apply my talents and interests for a greater good. Hopefully I can make a small difference.

posted @ Monday, January 19, 2015 6:38 PM | Feedback (0) |

Tuesday, November 11, 2014

Visual Studio Live Washington, D.C. – Xamarin + Azure Mobile Services

Long overdue, here are my slides from VS Live DC last month.

I plan on getting my demos up on GitHub and will update this post when I do.

Secure, federated identity. A durable, reliable and scalable backend. Scalable messaging fabrics that unlock assets both in the cloud and behind the firewall. All of these are table stakes when delivering modern mobile enterprise applications. Whether you are building responsive web apps for devices or targeting iOS, Android, Windows Phone or Windows Store apps, as a mobile developer you must focus on delivering a beautiful and functional user experience if you want your apps to be adopted. What if you could have all of this plus the power of Visual Studio 2013 and Windows Azure? Well, now you can! Instead of reinventing the wheel each and every time you need to target a new device platform, learn how Visual Studio 2013 and Xamarin allow you to target iOS, Android and Windows devices while promoting reuse of code assets across platforms. And when you're done with the front-end work and are ready to wire up your mobile app, come see how Microsoft Azure Mobile Services provides a simplified stack that tackles security, durability, reliability and modern messaging all with just a few lines of code. In this session, you'll learn how you can harness the power of Visual Studio 2013 with the flexibility of Xamarin and the power of Microsoft Azure Mobile Services to tackle all of your cross-platform and back-end chores quickly and easily so you can focus on what your users really care about.
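
To give a flavor of the Mobile Services side, here is a minimal sketch of the client-side calls; the app URL, key and the todo-style table are placeholders rather than demo code from the session:

    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public class TodoItem
    {
        public string Id { get; set; }
        public string Text { get; set; }
    }

    class MobileServicesSketch
    {
        // Hypothetical app URL and key; in Xamarin apps this same code runs on
        // iOS, Android and Windows thanks to the portable client library.
        static readonly MobileServiceClient Client =
            new MobileServiceClient("https://yourapp.azure-mobile.net/", "your-app-key");

        static Task SaveAsync(string text)
        {
            // Insert a row into the backing table with one line of code.
            return Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
        }
    }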


posted @ Tuesday, November 11, 2014 5:32 PM | Feedback (0) | Filed Under [ Azure Mobility Xamarin Mobile ]

Visual Studio Live D.C. – BizTalk Services Bridges & Hybrid Connections

 

Microsoft Azure BizTalk Services (MABS) is a new breed of PaaS middleware (iPaaS) for building a new generation of applications and solutions that extend beyond the traditional business, network and security boundaries of the enterprise. More than just messaging, MABS delivers robust routing, transformation, mediation and enrichment capabilities to your hybrid applications, enabling a whole new generation of connectivity across the enterprise, devices and clouds. In this session, we'll wire up a responsive mobile web app to a back-end application residing behind the firewall using a low-friction, RESTful approach. We'll cover how to map from your application domain to other systems and application APIs, and options for pushing data to multiple destinations including Microsoft Azure, web services and FTP. You will learn how MABS can take the pain out of sharing information from behind the firewall and supports many of the rich mapping capabilities you've come to expect from BizTalk, making integration with 3rd party service providers and assets behind the firewall a breeze.

posted @ Tuesday, November 11, 2014 5:28 PM | Feedback (0) | Filed Under [ Azure Windows Azure BizTalk Services ]

Service Assisted Communications on Microsoft Azure

I had the privilege of presenting at AzureConf in the Channel 9 studios a couple of weeks ago. This is the second time I’ve had the pleasure of presenting at AzureConf (see my 2012 talk on WebSockets here: http://channel9.msdn.com/events/windowsazure/AzureConf2012/A01).

I covered a ton of content in just over 60 minutes, including a command demo with a Fez Spider board and Azure Service Bus, and a sneak peek of an Azure Device Gateway (code-name Reykjavik) deployment on my Azure subscription to demonstrate deployment, telemetry, inquiries and commands with a software emulator.



Direct Link to Channel9 http://bit.ly/1DBVLxk

posted @ Tuesday, November 11, 2014 4:20 PM | Feedback (0) | Filed Under [ Azure Azure Service Bus IoT ]

Monday, August 11, 2014

Visual Studio Live Redmond – 8/18 to 8/21

The Goods...


Thank you Redmond, 1105 Media, Microsoft, fellow speakers and all attendees for a great show. I had a blast!

Code: https://github.com/rickggaribay/IoT
 
Code: https://github.com/rickggaribay/neurl
+++

 

I’m thrilled to be speaking at VS Live Redmond next week. The show starts on Monday August 18th and goes through Thursday the 21st on Microsoft campus in Redmond, WA.

Events in Redmond are always a special treat, as it gives everyone a chance to see the campus, interact with product team members and, as always, meet and hang out with some of the best, most recognized speakers in the industry like Ted Neward, Michael Collier, Brian Noyes, Eric Boyd, Rachel Appel, Miguel Castro, Rocky Lhotka, Andrew Brust- the list goes on.

I’ll be delivering two Azure-focused presentations on the Internet of Things and API development with NodeJS.

Since there is only so much space available for the abstracts themselves, I thought I’d elaborate a bit on what you can expect from each session in this short post. You can find more details about both talks on the VS Live Redmond website or go directly to the abstracts by following the links below.

From the Internet of Things to Intelligent Systems: A Developer's Primer

In this talk, I lay the foundation for IoT and why you, as a developer, should care. I’ll show off a handful of devices ranging from Arduino and Netduino to Fez Spider and demonstrate a number of common patterns currently in the wild, including default, brokered and service-assisted communication. We’ll explore the challenges that exist today in supporting commands, notifications, inquiries and telemetry. I’ll then spend some time giving you an in-depth tour of Reykjavik, Microsoft’s code name for its reference architecture focused on delivering highly scalable messaging fabrics for modern IoT solutions.

We’ll take a look at the reference architecture and how it maps to components on Microsoft Azure. I’ll then demonstrate what a first-class Reykjavik device looks like and show live telemetry and commands for an end-to-end tour of Reykjavik. I’ve been spending a lot of time with Clemens and team over the last several weeks, so this promises to be an inside look at the reference architecture and the general shape of things you're unlikely to find publicly anywhere else.

Learn more about this talk here: http://bit.ly/VSLRIOT or follow the conversation on Twitter #VSLTH04

Building APIs with NodeJS on Microsoft Azure Websites

This is a talk that I’ve been working on for several months now and continues to evolve. As I discuss in my latest article in CODE Magazine, it started off as a spike for teaching myself basic NodeJS and kind of evolved into a little project for work that needed a hosting environment. After exploring various options, Azure Websites made the most sense and this talk focuses on the key features and functionality of a little URL shortening API along with key ALM considerations like IDE, unit testing, continuous integration and deployment.

I’ll walk you through each step I took in building this API from scratch and deploy it live to Azure Websites as well as show you some really cool things you can do with the Kudu console when things go awry (as they almost always do in a live demo :-))

More about this talk here: http://bit.ly/VSLRAPI or follow the conversation on Twitter  #VSLW09

If you plan on attending either of my sessions, please stop by and say hi before or after the talk. I hope to see you there!

posted @ Monday, August 11, 2014 4:38 PM | Feedback (0) | Filed Under [ Speaking Events Azure Azure Service Bus Node IoT NodeJS ]

Friday, June 20, 2014

Building a Simple NodeJS API on Microsoft Azure Websites from Start to Finish

NodeJS is a powerful framework for building IO-centric applications with JavaScript. Although it hasn’t yet reached a major version number (as of this writing, the latest build is 0.10.28), the level of developer, community, and industry support for NodeJS is nothing short of astounding. From Wal-Mart to LinkedIn, NodeJS is powering more and more of the experiences with which you interact every day.

Although there are many options for hosting NodeJS applications, Microsoft has been an early supporter of NodeJS from the beginning by making direct investments in the framework and demonstrating a commitment to making NodeJS a first class citizen on Windows, both on-premises and on Microsoft Azure.

In my new article for CODE Magazine, I provide a lap around NodeJS and Microsoft Azure Websites by showing you a simple but functional API that I recently developed from the ground up. I’ll start by discussing the design of the API, go on to reviewing implementation details, and then proceed to pushing the API live on Microsoft Azure Websites.

You can read the article here as well as on Amazon and at your local news stand.

http://bit.ly/1nT4K6h

posted @ Friday, June 20, 2014 12:41 PM | Feedback (0) | Filed Under [ Azure NodeJS ]

Friday, May 09, 2014

Visual Studio Live Chicago Recap: Building APIs with NodeJS on Microsoft Azure Websites

My first talk at VS Live Chicago this week (if you’re looking for my IoT talk, please click here) was based on a talk I started doing last year demonstrating fundamental unit testing techniques with NodeJS and Mocha. Since then, the code and the talk have evolved into a real API currently in early alpha at Neudesic.

In this session, we started by looking at the problem (and opportunity) with long, ugly URLs and how most URL minification APIs like bit.ly, tinyurl, etc. solve the problem today.

From there, we looked at why NodeJS is a great choice for building a Web API and proceeded to build the 3 key APIs required to fulfill the most fundamental features you’d expect from a URL shortening API (sketched conceptually just after this list):

  • Shorten
    • When I submit a long, ugly URL to the create API, I should get back a neurl.
  • Redirect
    • When I submit a neurl to the submit API, my request should be automatically redirected.
  • Hits
    • When I submit a neurl to the hits API, I should get back the number of hits/redirects for that neurl.
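
Conceptually, the three operations reduce to something like the following sketch. The real implementation is NodeJS backed by MongoDB (linked below); this C# rendition with in-memory storage and a made-up key scheme is only to pin down the semantics:

    using System;
    using System.Collections.Generic;

    // Conceptual sketch of the three neurl operations. The dictionary storage
    // and key scheme are placeholders, not the actual implementation.
    class NeurlSketch
    {
        readonly Dictionary<string, string> urls = new Dictionary<string, string>(); // neurl -> long URL
        readonly Dictionary<string, int> hits = new Dictionary<string, int>();

        public string Shorten(string longUrl)
        {
            string neurl = Guid.NewGuid().ToString("N").Substring(0, 7); // placeholder key scheme
            urls[neurl] = longUrl;
            hits[neurl] = 0;
            return neurl;
        }

        // The web layer would answer with an HTTP 302 pointing at the returned URL.
        public string Redirect(string neurl)
        {
            hits[neurl]++;
            return urls[neurl];
        }

        public int Hits(string neurl)
        {
            return hits[neurl];
        }
    }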

With the API up and running on my laptop, we proceeded to create an Azure Website and push the Node app via my local Git repository, taking it live. All was not well, unfortunately, as initial testing of the Shorten API returned 500 errors. A quick look at the log dumps using the venerable Kudu console revealed the cause: the environment variable for the MongoDB connection string didn’t exist on the Azure Website deployment, which was quickly remedied by adding the variable to the website from the Azure portal. Yes, this error was fully contrived, but Kudu is so cool.

With the API up and running, we exercised it a bit, verifying that the Redirect and Hits APIs were good to go, and then scaled out the API from one to six instances with just a few clicks.

As the API continues to mature, I’ll update the talk to demonstrate how this level of indirection, brought forth by virtualizing the actual URL (as with traditional services and APIs), introduces many opportunities to interact with the person consuming the API (all via URIs!) as they take the journey that starts with the click and ends with the final destination.

Without further ado, the code and more details on the talk can be found below.

Code: https://github.com/rickggaribay/neurl

Abstract: http://bit.ly/1iEEbNV 

Speaking of which, if you haven’t already, why not register for Visual Studio Live Redmond or Washington DC? Early bird discounts are currently available, so join me to see where we can take this API from here! http://bit.ly/vslive14

posted @ Friday, May 09, 2014 5:10 PM | Feedback (0) | Filed Under [ Speaking Events Azure NodeJS ]

Visual Studio Live Chicago Recap: From the Internet of Things to Intelligent Systems - A Developer's Primer

I had the pleasure of presenting at Visual Studio Live! Chicago this week. Here is a recap of my second talk, “From the Internet of Things to Intelligent Systems- A Developer’s Primer” (if you’re looking for a recap of my “Building APIs with NodeJS on Microsoft Azure Websites” talk, you can find it here).

While analysts and industry pundits can’t seem to agree on just how big IoT will be in the next 5 years, one thing they all agree on is that it will be big: estimates range from a bearish 50B internet-connected devices by 2020 to a more moderate 75B and a bullish 200B. But the reality is that IoT isn’t something that’s coming. It’s already here, and this change is happening faster than anyone could have imagined. Microsoft predicts that by 2017, the entire space will represent over $1.7T in market opportunity, spanning from manufacturing and energy to retail, healthcare and transportation.

While it is still very early, it is clear that the monetization opportunities at this level of scale are tremendous. As I discussed in my talk, the real opportunity for organizations across all industries is two-fold. First, the data and analytical insights that telemetry (voluntary data shared by the devices) provides will change the way companies plan and execute, and the rate at which they adapt and adjust to changing conditions in their physical environments. This brings new meaning to decision support, and no industry will be left untouched in this regard. Second, these insights will lead to intelligent systems that are capable of taking action at a distance, based either on pre-configured rules that interpret this real-time device telemetry or on other command and control logic that prompts communication with the device.

As a somewhat trivial but useful example, imagine your coffee maker sending you an SMS asking your permission to approve a descaling job. Another popular example of a product that’s already had significant commercial success is the Nest thermostat. Using microcontrollers very similar to the ones I demonstrated, these are simple examples that are already possible today.

Beyond the commercial space, another very real example is a project my team led for our client that involved streaming meter and sensor telemetry from a large downtown metroplex enabling real-time, dynamic pricing, up-to-the-minute views into parking availability and significant cost and efficiency savings by adopting a directed enforcement approach to ticketing.

So, IoT is already everywhere and in many cases, as developers we’re already behind. For example, what patterns do you use for managing command and control operations? How do you approach addressability? How do you overcome resource constraints on devices ranging in size from drink coasters to postage stamps? How do you scale to hundreds and thousands of devices that are sharing telemetry data every few seconds? What about security?

While 75 minutes is not a ton of time to tackle all of these questions, I walked the audience through the following four scenarios based on the definition of the Command message pattern in the "Service Assisted Communications" paper that Clemens Vasters (@clemensv) at Microsoft published this February:

1. Default Communication Model with Arduino - demonstrates the default communication model, whereby the Arduino provides its own API (via a Web Server adapted by zoomcat). Commands are sent from the command source to the device in a point-to-point manner.

2. Brokered Device Communication with Netduino Plus 2 - demonstrates an evolution from the point-to-point default communication model to a brokered approach to issuing device commands using MQTT. This demo uses the excellent M2Mqtt library by MVP Paolo Patierno (@ppatierno) as well as the MQTT plug-in for RabbitMQ (both on-premises and hosted), along the lines of the sketch just below.
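
To make the brokered model concrete, here is a minimal sketch of a device subscribing for commands and publishing telemetry with M2Mqtt. The broker host, client id and topic names are assumptions for illustration, not the demo's actual configuration:

    using System;
    using System.Text;
    using uPLibrary.Networking.M2Mqtt;
    using uPLibrary.Networking.M2Mqtt.Messages;

    class BrokeredDevice
    {
        static void Main()
        {
            // Hypothetical broker; in the demo this role was played by RabbitMQ's MQTT plug-in.
            var client = new MqttClient("broker.example.com");

            // Commands arrive as publications on a topic the device subscribes to.
            client.MqttMsgPublishReceived += (sender, e) =>
                Console.WriteLine("Command on {0}: {1}", e.Topic, Encoding.UTF8.GetString(e.Message));

            client.Connect("netduino-01");
            client.Subscribe(new[] { "devices/netduino-01/commands" },
                             new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

            // The device publishes telemetry outbound to the same broker.
            client.Publish("devices/netduino-01/telemetry", Encoding.UTF8.GetBytes("{\"temp\":72}"));

            Console.ReadLine(); // keep the subscription alive
        }
    }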

3. Service-Assisted Device-Direct Commands over Azure Service Bus - applies the fundamental service-assisted communications concepts, evolving the brokered example to leverage Azure Service Bus using the Device Direct pattern (as opposed to a Custom Gateway). As with the brokered model, the device communicates with a single endpoint in an outbound manner, but does not require a dedicated socket connection as with MQTT, implicitly addressing occasionally disconnected scenarios, message durability, etc.

In the final, capstone demo, “Service-Assisted Device-Direct Commands on the Azure Device Gateway”, I demonstrated the culmination of work dating back to June 2012 (when Vasters first shared the concept of Service-Assisted Communications), which is now available as a reference architecture and fully functional code base for customers ready to adopt an IoT strategy today.

As a set up for the demo, I discussed the Master and Partition roles. The Master role manages the deployment of partitions and the provisioning of devices into partitions using the command line tools that ship with the code base.

In the demo, I provided a look at the instance of Reykjavik deployed on our Neudesic Azure account, including the Master and Partition roles. I showed the Azure Service Bus entities for managing the ingress and egress of device messaging for command, notification, telemetry and inquiry traffic (the Device Gateway is currently capable of supporting 1024 partitions, with each partition supporting 20K devices today), as well as the storage accounts responsible for device registration and storing partition configuration settings.

I also discussed the protocols for connecting the device to the gateway (AMQP and HTTP are in the box, and an MQTT adapter is coming very soon) and walked through the Telemetry Pump, which dispatches telemetry messages to the registered telemetry adapter (Table Storage, HDInsight adapters, etc.).

The demo wrapped up with a Reykjavik device sample consisting of a Space Heater emulator that I registered on the Neudesic instance of the Device Gateway to acquire its ingress and egress endpoints. The emulator initializes fan speed and RPM, and begins sending telemetry messages to its outbox every 30 seconds (fully configurable).

The beauty of the demo is in its simplicity. Commands are received via the device’s inbox and telemetry is shared via its outbox. The code is simple C# with no heavy frameworks, which is really key to running on devices with highly constrained resources:

    void SendTelemetry()
    {
        this.lastTelemetrySent = DateTime.UtcNow;

        var tlm = new BrokeredMessage
        {
            Label = "tlm",
            Properties =
            {
                {"From", gatewayId},
                {"Time", DateTime.UtcNow},
                {"tiv", (int) this.telemetryInterval.TotalSeconds},
                {"fsf", this.fanspeedSettingRpmFactor},
                {"fss", this.fanSpeedSetting},
                {"fon", this.fanOn},
                {"tsc", this.temperatureSettingC},
                {"hon", this.heaterOn},
                {"ofr", this.lastObservedFanRpm},
                {"otm", this.lastObservedTemperature}
            }
        };

        tlm.SessionId = Guid.NewGuid().ToString();

        this.sender.SendWithRetryAsync(tlm);
    }

 

(Screenshot: the telemetry table populated by the Reykjavik Table Storage adapter, as shown in the Neudesic Azure Storage Explorer.)

As I discussed, this is an early point in a journey that will continue to evolve over time, but the great thing about this model is that everything I showed is built on Microsoft Azure, so there’s nothing to stop you as a developer from building your own Custom Protocol Adapter, and this is really the key to the thinking and philosophy around Device Gateway.

It is still very early in this wave, and every organization is going to have different devices, protocols and requirements. So while you’ll see investments in the most common protocols (AMQP, MQTT and CoAP), the goal is to make this super pluggable and fully embrace custom protocol gateways that just plug in.

As with the Protocol Adapters, there’s nothing to stop you from building your own Telemetry Adapter, or using Azure Service Bus or BizTalk Services to move data on-premises, etc.

Still with me? Great. The links to my demo samples and more details on the talk are available here:

Abstract: http://bit.ly/vsl-iot 

Demo Samples: https://github.com/rickggaribay/IoT 

Oh, and if you missed the Chicago show, don’t worry! I’ll be repeating this talk in Redmond and Washington DC, so be sure to register now for early bird discounts: http://bit.ly/vslive14

posted @ Friday, May 09, 2014 4:41 PM | Feedback (0) | Filed Under [ Speaking Events Azure IoT ]

Thursday, April 10, 2014

Speaking on Building APIs with NodeJS on Microsoft Azure Websites Next Tuesday, 4/15

I will be speaking at the Tucson .NET User Group next Tuesday on Building APIs with Node.js on Microsoft Azure Websites. This will be the 3rd time I’ve spoken at this group, but the first time I’m following Scott Hanselman (who spoke last month)- definitely a tough act to follow!

You can learn more about the topic here: http://bit.ly/1hEzAJf

posted @ Thursday, April 10, 2014 11:06 PM | Feedback (0) | Filed Under [ Azure NodeJS ]

Wednesday, February 26, 2014

Configuring Custom Domain Names on Windows Azure Websites in 4 Easy Steps

Windows Azure Websites (WAWS) provides a very robust, yet easy to use container for hosting your web applications. This doesn’t just pertain to ASP.NET apps: WAWS includes several templates like Drupal, Wordpress, Orchard, etc. and also provides very nice first-class support for Node.js web apps/APIs, PHP and Python.

If you are new to WAWS, you may think ‘big deal, this is just another web host’. You would be wrong. There is a TON of value in WAWS that blows conventional, commodity web hosts away:

  • The free version allows you to host up to 10 sites in a multi-tenant environment and provides a great dashboard, FTP and continuous deployment capabilities including first class support for git (local repos) and github.
  • The shared version adds support for seamlessly scaling your app up to 6 instances/nodes, along with enabling Web Jobs, which provide worker processes for executing jobs on a schedule, continuously or on-demand.
  • The standard version allows you to dedicate not instances, but full VMs to your application and supports auto-scaling your app based on metrics and triggers.

These are just the big rocks… whether you are a .NET, Node.js, PHP or Python developer, there’s a ton more goodness to WAWS, which you can learn more about here: http://www.windowsazure.com/en-us/documentation/services/web-sites/

When you create your WAWS application, you get both an IP and a URL. The URL takes the form of [your app].azurewebsites.net. This is cool for development, testing and maybe corporate apps, but if you are building publicly visible web apps or APIs, chances are you’ll want your own domain name, so that instead of [your app].azurewebsites.net you can point your users to foobaz.com or whatever.

Microsoft has official docs on how to do this here, but I found that there was a lot of detail that might intimidate folks, so I thought I’d break it down into 4 simple steps. I’ll assume that you’ve already bought your shiny new domain name from a registrar and that it’s parked at some annoying, ad-infested landing page.

Step 1: Ensure your site is configured for shared or standard mode

Free doesn’t support custom domains, which seems pretty reasonable to me. If you started with a website in free mode, simply click on the Scale option, choose from “Shared” or “Standard” mode and click OK.

Step 2: Copy the IP and WAWS URL

The next step is to make note of your URL and IP address, which you’ll need for the third step in this process. Go to the list of WAWS sites, select the site (but don’t click on it) and click on the “Manage Domains” icon at the bottom of the command bar.

This will bring up a dialog that includes your current domain record ([your app].azurewebsites.net) and your IP.

Step 3: Update the A Record and CNAMEs

Make a note of each and log in to your domain registrar’s console. You want to look for “DNS Management” and either “Advanced”, “Manage Zones” or “Manage DNS Zone File” - whatever console allows you to configure your A record and CNAMEs. I won’t get into a bunch of DNS theory here, but in a nutshell, these records allow requests for your registered domain name to be forwarded to Windows Azure, and specifically to your website’s host name. The result is that your website will resolve to both [your app].azurewebsites.net and foobaz.com (or whatever domain you purchased).

Each registrar will obviously look different, but GoDaddy’s console, for example, shows several other entries (ftp, MX records, etc.) which can be ignored.

The A record needs to point to the IP address you captured in step 2. Replace whatever value is there with the IP address provided. When someone calls up foobaz.com, your registrar will authoritatively answer that request and then pass it on directly to the IP address you provided.

Now, various docs, posts, etc. will tell you that you can choose between an A record and a CNAME alias, but my experience was that I needed to configure both. If you want to try one or the other, go ahead and do so and skip to Step 4. If it doesn’t work, come back and do both (I had to).

For the CNAME, there are 3 entries you need to make (see the zone-file sketch after this list):

  • Point www to [your app].azurewebsites.net – this tells DNS that [your app].azurewebsites.net should be the destination (canonical host) for any DNS queries that begin with www (i.e. www.foobaz.com)
  • Point awverify AND awverify.www to awverify.[your app].azurewebsites.net – this provides a DNS validation mechanism so that WAWS can validate that your domain registrar has been configured to allow WAWS to serve as a canonical domain in the event that a CNAME look-up fails.
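
Put together, the relevant zone entries look roughly like this. This is a sketch: the host names assume your app is named foobaz, and 203.0.113.10 stands in for the IP you captured in Step 2:

    ; Hypothetical zone entries for foobaz.com (all values are placeholders)
    @             IN  A      203.0.113.10                        ; IP from Step 2
    www           IN  CNAME  foobaz.azurewebsites.net.
    awverify      IN  CNAME  awverify.foobaz.azurewebsites.net.
    awverify.www  IN  CNAME  awverify.foobaz.azurewebsites.net.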

Be sure to save your file/settings.

Step 4: Enter your custom domain name in the Manage Domains dialog and check for validity

Pull up the “Domain Settings” for your website again, and this time enter your new domain name (i.e. foobaz.com). If you want WAWS to respond to both www.foobaz.com and foobaz.com, you’ll want to create both entries. You’ll likely see a red dot indicating that validation and/or the CNAME look-up has failed.

Note that DNS can take up to 48 hours to propagate, so as you move to this step, know that if it doesn’t immediately work, wait a few hours to a day and try again (dynamic DNS providers solve this problem by acting as a proxy between your authoritative domain and canonical domains/IPs). It is very likely that you’ve done everything right, but the records have not yet propagated.

This is simply WAWS’ way of telling you that the records have not yet propagated. You can happily continue using your WAWS website via the [your app].azurewebsites.net URL. In time, when you come back to the dialog, the verification should succeed and any request for foobaz.com will automatically resolve to your WAWS app.

If you’ve followed these steps and still have issues after 24-48 hours, feel free to drop a comment or hit me on twitter @rickggaribay and I’ll be happy to help you out.

posted @ Wednesday, February 26, 2014 8:26 PM | Feedback (0) | Filed Under [ Azure Windows Azure ]

Tuesday, January 14, 2014

fatal: could not read Username for 'https://github.com': No such file or directory

Git logo

If all of a sudden you are unable to clone or push repos and you recently downloaded the latest version of msysgit (AKA Git for Windows), you are not alone.

As discussed here and confirmed here, this appears to be a bug in the 1.8.5.2 build, which is also the latest available build linked from GitHub - which likely means that thousands of developers have been impacted since the release of 1.8.5.2.

3 options, from simplest on down:

1. Uninstall 1.8.5.2 and downgrade to a previous version.

2. Switch over to SSH instead.

3. Apply the patch “discussed” here.

Personally, I chose 2 out of sheer laziness and opted out of 3 due to a general lack of motivation, resulting from what appears to be more interest in pedantic redirection and passive-aggressive replies from the project’s representatives than in communication and a spirit of collaboration (which is supposed to be what OSS is all about).

Hopefully this will be fixed in a new build soon. Hope this helps.

posted @ Tuesday, January 14, 2014 5:05 PM | Feedback (0) | Filed Under [ git ]

Monday, January 13, 2014

Announcing the 2nd Annual Global Windows Azure Bootcamp (GWAB) Phoenix!

Global Windows Azure Bootcamp

I am thrilled to announce that the 2nd Annual GWAB has been confirmed for Saturday, March 29th, 2014!

As of today, we have 119 locations in 48 countries and 111 cities!

For those of you who attended last year, you know what a blast we had writing and deploying code to Azure as part of hands-on labs and our massive scale-out demo, the "Global Render Lab". This exercise showed the power of distributed computing, and we'll be doing something similar this year.

This year, we are taking the immense compute power of Windows Azure and putting it to work in the fight against diabetes. We’re partnering with Dr. Richard D. Smith in the Biological Sciences Division at Pacific Northwest National Laboratory (PNNL), in collaboration with Minnie M. Sarwal and Tara K. Sigdel at the California Pacific Medical Center Research Institute, with the goal of discovering how our body’s serum protein glycosylation works. We want to know how the high blood sugar levels present in diabetes patients affect the complex sugar production systems required for our health and ability to fight disease. We want to prove the theory that when small changes in this process start occurring, the disease can progress and lead to Type 2 diabetes. The results from this work will not only help us understand the human diabetic state at the molecular level but also lead the way for early detection of diabetes. More details here: http://global.windowsazurebootcamp.com/charity

Our hope is that as our attendees are using this to learn about how to deploy to Windows Azure and how distributed computing works, we can help solve some of the world's problems at the same time.

Agenda and Speakers

Joe and I have been hard at work on the agenda and I’m pleased to share the final agenda below:

Start   End     Topic                                          Speaker
9:00    9:15    Welcome, Introduction                          Rick G. Garibay and Joe Guadagno
9:15    10:15   A Lap Around Windows Azure                     Rick G. Garibay
10:15   11:30   Big Charity Render Lab!                        Proctored by speakers, GWAB leadership
11:30   11:45   Break. Grab lunch, prepare for next session!
11:45   12:45   A Lap Around Cloud Services                    Joe Guadagno
12:45   1:45    A Lap Around Windows Azure Websites            Rick G. Garibay
1:45    2:45    A Lap Around Windows Azure Mobile Services     Brendon Birdoes
2:45    3:45    Hands on Labs                                  Proctored by speakers, GWAB leadership
3:45    4:00    Break
4:00    5:00    Windows Azure ALM                              Chris Myers
5:00    6:00    Windows Azure Data & HDInsight                 Michael Collins

 

The class includes presenters and trainers (Microsoft MVPs, industry experts) with deep, real-world experience with Windows Azure, as well as a series of labs so you can practice what you just learned. In fact, I am pleased to announce that the following Microsoft MVPs and community rock stars have already signed on for this year’s event:

Awesome. How much does it cost?
This event is FREE to the attendees. Gratis! Gratuite! Libero!  However, seating is limited so be sure to register and secure your seat today: http://bit.ly/1gCdCZb 

What do I need to bring?
You will need to bring your own computer which can run Visual Studio 2012 or 2013 (i.e. Windows, OSX, Ubuntu with virtualization, etc.) and have it preloaded with the following:

Optionally (or if you will not be doing just .NET labs), the following can also be installed:

More info here: http://www.windowsazure.com/en-us/documentation

Please do the installation upfront as there will be very little time to troubleshoot installations during the day.

You will also need to be signed up for a Windows Azure account. There are many options including a 100% absolutely free 30 day trial. Why not sign up now? http://www.windowsazure.com/en-us/pricing/free-trial/

Is this for beginners?
Yes and no. We will focus on a series of lectures and hands-on labs aimed at level 200, but ad-hoc whiteboarding, deep scenario discussions and Q&A are all part of the fun. Think you already know it all? Great, we still need volunteers, speakers and lab buddies. Drop me a note on Twitter @rickggaribay

And now, for a little fun…

Big thanks to fellow MVPs Maarten Balliauw, Alan Smith, Michael Wood and Magnus Martensson for running this event as our global leaders. Thanks also to Scott Cate at Event Day for providing free registration hosting. We couldn’t do this without them!

posted @ Monday, January 13, 2014 7:24 PM | Feedback (0) | Filed Under [ Events Azure Windows Azure MVP ]

Saturday, December 21, 2013

IoT, Mobility, Hybrid Cloud, ESB Oh My! A Closer Look at the Neudesic Gartner AADI Demo

Earlier this week, my colleague @simonguest posted a great summary of the demo he and I built together for the Gartner AADI Summit in Las Vegas the week before last.

As Simon illustrates, the demo combined two areas we are thinking a lot about these days at Neudesic- enterprise mobility and Internet of Things.

When we talk about enterprise mobility, we're not talking about mobile device management, governance or provisioning. Our focus is on unlocking the assets that live on premises, behind the firewall so that they are safely and securely exposed, composed and consumed by a variety of devices and composite business applications on the web.

In this post, I’ll talk a little bit more about these two areas and dive into the back-end of the demo from a messaging and services perspective to give you a glimpse into the internals of the demo app.  

Exposing On-Premise Assets

The key unlocking event for enterprise mobility is enabling the ability to expose assets that live within an organization’s traditional datacenter. These assets include things like other apps, LOB systems, ERPs and databases. In the most traditional sense, the way this problem has been solved is by determining what assets need to be made available, standing up hardware in a DMZ and specifying which inbound ports allow which kind of traffic. Unfortunately, there’s really nothing innovative about this approach, as it is the way we’ve been solving this problem for nearly 20 years.

Today, there are many enterprise mobility vendors out there offering appliance-based solutions which extend this approach to address things like security, hardware compression, etc., but the agility of these solutions is questionable, as they still require CapEx spend, time and resources to provision and manage.

Another kind of on-premise asset you will be hearing more and more about involves sensors and devices that communicate with software and other devices. This Machine to Machine (M2M) communication will be revolutionary in domains such as transportation, retail, logistics, oil and gas, manufacturing, etc., as it will enable the ability to yield new insights that will at first transform business and then become key to survival. One such device is known as the iBeacon.

These devices, about the width and half the length of a stick of gum run on Bluetooth Low Energy (BLE) and thus require very little power. These beacons transmit a unique identifier along with signal strength which can then be used to calculate proximity.

You can easily imagine a scenario where a retailer provides a free app that interacts with you in the store as you approach these iBeacons. As you might imagine, the back end of the app would learn your behaviors and preferences and target you with interesting offers to influence your browsing and buying habits, as well as crowdsource what ideal retail layouts look like based on the tons of data that could be collected and modeled. Another scenario might involve tracking shipping containers on trucks. Today, 5.9 GHz RFID technology dominates this space, but the amount of infrastructure required to read the tags, compared to what can be accomplished with a handful of iBeacons and a handheld device, is disruptive indeed. To illustrate this in our demo, our iBeacons communicated directly with iOS and Android handsets, with literally no set up or tear down, allowing us to enumerate, update and “check into” the beacons that were placed throughout the conference floor.

For the check-in scenario, as you approached an iBeacon, the app would raise an event allowing you to check in to that location. The app then sent a message over the Gartner guest conference Wi-Fi network to a server hosted in our datacenter in Irvine, CA running Neuron ESB. The server sits behind our firewall, and no inbound ports are open because our IT policy rightfully prohibits it. Instead, a single outbound port is opened by Neuron over an Azure Service Bus Relay, which allows for secure, bi-directional messaging for a number of APIs that we implemented, which I’ll cover in more detail next.

Composing and Consuming

As you can see in the demo sample architecture, we exposed a REST head over Azure Service Bus which provided the external messaging fabric for communicating with the apps running on iOS and Android in the Gartner exhibition hall. Using Neuron’s messaging and processing capabilities, we implemented the following 4 APIs:

 

  • REGISTER BEACON
    PUT /beacons
    { id: "<<GUID>>", name: "Neudesic Booth, Expo Hall"}

  • GET ALL BEACONS
    GET /beacons
    [{ id: "<<GUID>>", name: "Beacon name"}, {id: "<<GUID>>", name: "Beacon name"}] (Array)

  • LOOK UP BEACON
    GET /beacons/:id
    { id: "<<GUID>>", name: "Beacon name"}

  • CHECK IN TO BEACON
    PUT /beacons/:id/checkin
    { userid: "<<facebookid>>" }

(Screenshot: the Android version of the app, the weekend before the conference. We used Xamarin to build a consistent UX for iOS as well. The iBeacons were re-registered with the actual locations of the booths on the conference floor within a few seconds.)

 

To create a connection to Azure Service Bus, we simply created a Client Connector within a Service Endpoint in Neuron.

Neuron hosts an instance of the WCF NetMessagingBinding and creates a virtual service endpoint using the sb:// URI configured for the endpoint.
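
For the curious, the moving parts Neuron manages here correspond roughly to the following WCF sketch; the contract, namespace, entity path and credentials are all placeholders rather than Neuron's internals:

    using System.ServiceModel;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    [ServiceContract]
    interface IBeaconRouter
    {
        [OperationContract(IsOneWay = true)] // NetMessagingBinding requires one-way operations
        void Route(string message);
    }

    class RelaySketch
    {
        // A one-way WCF endpoint bound to Service Bus messaging, roughly what a
        // Client Connector wires up. Namespace, path and key are placeholders.
        static IBeaconRouter CreateChannel()
        {
            var factory = new ChannelFactory<IBeaconRouter>(
                new NetMessagingBinding(),
                new EndpointAddress("sb://yournamespace.servicebus.windows.net/beacons"));

            factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<key>")
            });

            return factory.CreateChannel();
        }
    }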

In order to route the requests from the app to the right logic, we implemented a simple controller called “Beacon Router” using a Neuron Process. Using a Decision step, we added 4 branches, one for each API.

Each Decision step has a condition which returns true or false. If the condition returns true, the steps within the branch are executed. If not, the Process evaluates each branch until a match is found, or returns a 404, Resource Not Found.

Here is a look at the Check In condition.

We used a simple UriTemplate to define the template for the Check In API. The URI is stored in the ESB message context within the MessageHeader.To property. If the template matches the actual URI, we return true and the steps within the branch execute in sequence.

The next step that fires is the Publish step. We included this step for testing purposes to serve as a wire tap on the messages that reach the branch. In this case, this was useful for inspecting the message off the wire by configuring a subscribing party and using our Neuron Test Client.

This little trick aside, the Publish step is very powerful in that it allows you to asynchronously publish a message on a topic for other subscribers to pick up. Think of Scatter Gather as a good example.

The next step you’ll notice is a Call Process step. This step simply allows you to call a nested Process which is helpful in factoring your Process (note that we could have chosen to asynchronously execute a nested process by wrapping it inside a Parallel step).

The Check In process starts with a Push step. This step stores the current message for later use.

This was required because we needed to first determine whether the iBeacon location was valid before allowing a check-in. The way Neuron Processes work, the message is modified by each step, allowing for very powerful enrichment and transformation of the message. However, since there can only be one message in context at any time, the current message would be overwritten by the message we crafted for the ODBC step which follows.

In this case, we used the ODBC step to query SQL Server for the iBeacon GUID. The ODBC step expects a message that conforms to its interface, a simple XML syntax for defining the query:

   1:  string sql = String.Empty; 
   2:   
   3:  sql += "<Statement type=\"Text\" sql=\"SELECT * FROM [dbo].[Beacon] WHERE BeaconGuid =?\">"; 
   4:  sql += "<Parameters>"; 
   5:  sql += "<Parameter type=\"varchar\" name=\"@BeaconGuid\" value=\"" + beaconId + "\"/>"; 
   6:  sql += "</Parameters>"; 
   7:  sql += "</Statement>"; 
   8:   
   9:  context.Data.FromXml(sql);

 

Note line 9 above replaces the message in the context with this new message. The ODBC step executes the query based on the latest context and returns the result:

<QueryResults> 
    <QueryResult> 
        <Id>2</Id> 
        <BeaconGuid>1.1</BeaconGuid> 
        <BeaconName>Neudesic Booth</BeaconName> 
    </QueryResult> 
</QueryResults>

Next, we used a Decision step to check to see if the ODBC step yielded the iBeacon we looked up:

    var xml = context.Data.ToXmlDocument(); 
    var node = xml.GetElementsByTagName("QueryResult"); 
    return node.Count > 0;

If the node count is not zero, we know we have a match. Otherwise, we issue a 404 since the resource could not be found.

Recall that at the beginning of the process execution, the first thing we did was push the message into temporary storage so that we could retrieve it later. Once we determined that the iBeacon exists, we pop the message back into context so that we can grab the iBeacon id from the URI and the user id of the user (in this case, a facebook email address) from the original HTTP message body.

To perform the check-in, we simply did an INSERT into a SQL Server table called CheckIn. The Code step includes the following code, which, as before, the ODBC step will consume to perform the unit of work.

Since the iBeacon ID is part of the URI, we use UriTemplate again to determine a match and then extract the {id} parameter from the URI:

    System.UriTemplate template = new System.UriTemplate("/beacons/{id}/checkin");
    Uri uri = new Uri(context.Data.GetProperty("MessageHeader","To"));
    Uri baseUri = new Uri("https://" + uri.Host);
    var match = template.Match(baseUri, uri);

    var beaconId = match.RelativePathSegments[1];

From there, we do some formatting and grab the user id from the JSON message payload that was provided in the HTTP body:

    var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();

    // jsonString holds the HTTP body captured earlier (its definition is elided in the original)
    Neudesic.TPG.GartnerAADI.Model.CheckIn checkIn =
        serializer.Deserialize<Neudesic.TPG.GartnerAADI.Model.CheckIn>(jsonString);

Finally, we build the INSERT statement and load the new message context:

    string sql = String.Empty;

    sql += "<Statement type=\"Text\" sql=\"INSERT INTO [dbo].[CheckIn]([UserId],[BeaconGuid],[Date]) VALUES(?,?,?)\">";
    sql += "<Parameters>";
    sql += "<Parameter type=\"varchar\" name=\"@UserId\" value=\"" + checkIn.UserId + "\"/>";
    sql += "<Parameter type=\"varchar\" name=\"@BeaconGuid\" value=\"" + beaconId + "\"/>";
    sql += "<Parameter type=\"varchar\" name=\"@Date\" value=\"" + DateTime.Now.ToString() + "\"/>";
    sql += "</Parameters>";
    sql += "</Statement>";

    context.Data.FromXml(sql);

If all goes well, we issue an HTTP 200 back to the client:

    context.Data.Http = new HttpMessage();
    context.Data.Http.StatusCode = System.Net.HttpStatusCode.OK;
    context.Data.FromString("");

Otherwise we own up to the problem being on our end and return a 500 (with some internal logging of course).

The Register API is implemented in much the same way. We check to see if the iBeacon exists and, if so, do an update; otherwise we insert a new record. Thus, this API supports the initial standing up of the beacons as well as managing their names as the beacons moved from development to the expo floor.

For the GET APIs, we again used ODBC steps to query for a single beacon or all beacons, and returned a single beacon or an array of beacons respectively, as described in the Beacon Router section at the beginning of this walkthrough.

Summary

As you can probably guess, Simon and I had a lot of fun building this demo and the folks at the conference seemed to find it interesting too :-)

There are unlimited possibilities once you unlock the ability to safely and securely transcend network, security and business boundaries behind the firewall. And if agility isn’t enough to compel you to think differently about how you expose and compose your enterprise assets for external consumption by mobile devices today, the proliferation of devices- and I’m not talking about tablets or phones- expected to reach in excess of 250 billion by 2025, will challenge every industry to think differently about its enterprise messaging and mobility strategy.

posted @ Saturday, December 21, 2013 8:49 AM | Feedback (0) | Filed Under [ Neuron ESB Windows Azure Azure Service Bus IoT Mobility ]

Thursday, December 19, 2013

New Webcast: Enabling Rich Messaging Endpoints with Windows Azure BizTalk Services

Yesterday I presented a live webcast for Neudesic on the powerful new capabilities of Windows Azure BizTalk Services for enabling rich, hybrid messaging for modern enterprise applications across on-premise and cloud-hosted business assets.

Windows Azure BizTalk Services (WABS) introduces advanced new PaaS messaging that enables secure, robust communication between your on-premise and cloud assets.

In the video, I show how you can:

  • Integrate Web and mobile clients with on-premise and cloud assets
  • Communicate securely with databases, ERPs and other LOB applications behind the firewall
  • Validate and translate messages to and from various message representations while supporting POX, REST and SOAP
  • Broker, mediate, enrich, compose, and route messages and requests consistently across multiple endpoints

The video is available here: http://bit.ly/1cDORZ1


posted @ Thursday, December 19, 2013 5:36 PM | Feedback (0) |

Sunday, October 20, 2013

WABS BizTalk Adapter Service Installation in Seven Steps

With the proliferation of devices and clouds, businesses and developers are challenged more than ever to both enable employee productivity and take advantage of the cost benefits of cloud computing. The reality, however, is that the vast majority of organizations are going to continue to invest in assets that reside both within their own data center and in public clouds like Windows Azure and Amazon Web Services.

Windows Azure BizTalk Services (WABS) is a new PaaS-based messaging and middleware solution that lets you expose rich messaging endpoints across business assets, whether they reside on-premise or in the commercial cloud.

WABS requires an active Windows Azure account, and from there, you can provision your own namespace and start building rich messaging solutions using Visual Studio 2012. You can download everything you need to get started with WABS here: http://www.microsoft.com/en-us/download/details.aspx?id=39087

Once your WABS namespace has been provisioned, you are ready to start developing modern, rich messaging solutions. At this point, you can experiment with sending messages to a new messaging entity in Windows Azure called an EAI Bridge and routing them to various destinations including Azure Service Bus, Blob Storage, FTP, etc. However, if you want to enable support for connectivity to on-premise assets including popular database platforms like Microsoft SQL Server and Oracle Database as well as ERP systems such as Oracle E-Business Suite, SAP and Siebel eBusiness Applications, you'll want to install an optional component called the BizTalk Adapter Service (BAS) which runs on-premise.
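To give a feel for the client side of a bridge, here is a minimal sketch of POSTing a message to a deployed EAI Bridge endpoint. The namespace, bridge path, message body and token value are all placeholders I've invented for illustration; a real client would first acquire a token from Access Control Service (ACS), which secures WABS endpoints, and pass it in the Authorization header:

   // Hypothetical bridge URL; substitute your own namespace and bridge path.
   string bridgeUrl = "https://contoso.biztalk.windows.net/default/OrderBridge";
   // Placeholder; a real client requests a WRAP token from ACS first.
   string acsToken = "[token acquired from ACS]";

   var request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(bridgeUrl);
   request.Method = "POST";
   request.ContentType = "application/xml";
   request.Headers["Authorization"] = "WRAP access_token=\"" + acsToken + "\"";

   byte[] body = System.Text.Encoding.UTF8.GetBytes("<Order><Id>42</Id></Order>");
   request.ContentLength = body.Length;
   using (var stream = request.GetRequestStream())
   {
       stream.Write(body, 0, body.Length);
   }

   using (var response = (System.Net.HttpWebResponse)request.GetResponse())
   {
       Console.WriteLine("Bridge responded with " + response.StatusCode);
   }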

The BAS includes a management and runtime component for configuring and enabling integration with your LOB systems. The capabilities are partitioned into a design-time experience, a configuration experience and the runtime. At design time, you configure your LOB Target (i.e. SQL Server, Oracle DB, SAP, etc.) for connecting to your LOB application via a LOB Relay. Built on Windows Azure Service Bus Relay Messaging, the LOB Relay allows you to establish a secure, outbound connection to the WABS Bridge which safely enables bi-directional communication between WABS and your LOB target through the firewall.

More details on the BizTalk Adapter Service (BAS) architecture can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/hh689773.aspx

While the installation experience is fairly straightforward, there are a few gotchas that can make things a bit frustrating. In this post, I'll walk you through the process for installing and configuring BAS in hopes of getting you up and running in no time.

Installing the BizTalk Adapter Service

Before you install BAS, ensure you’ve downloaded and installed the following pre-requisites:

  • WCF LOB Adapter Framework (found on the BizTalk Server 2013 installation media)
  • BizTalk Adapter Pack 2013 (found on the BizTalk Server 2013 installation media)
  • IIS 7+ and WAS (I’ve tested installation on Windows 7 and Windows 8 Enterprise editions)
  • AppFabric 1.1 for Windows Server
  • SQL Server 2008 or 2012 (all editions should be supported including free Express versions)

The installation process will prompt you for key information including the account to run the application pool that will host the management and runtime services and a password for encrypting key settings that will be stored by the management service in SQL Server. Let’s take a look at the process step-by-step.

1. When you unpack the installer, the most common mistake you're likely to make is to double-click it to get started. Instead, open a command prompt as an administrator and run the following command (you'll need to navigate to the folder in which you unpacked the MSI):


msiexec /i BizTalkAdapterService.msi /l*vx install_log.txt

This command will ensure the MSI runs as Admin and will log results for you in case something goes wrong.


2. The first thing the installer will ask you for is credentials for configuring the application pool identity for the BAS Management Service. This service is responsible for configuring LOB Relays and LOB Targets and stores all of the configuration in a repository hosted by SQL Server (Long Live Oslo!). In my case, I've created a local service account called svc-bas, but this could of course be a domain account, or you can use the other options.

[Screenshot: application pool credentials]

3. Before you continue, be sure that the account you are using to run the MSI is a member of the appropriate SQL Server role(s), unless you plan on using SQL Server Authentication in the next step. The wizard will create a repository called BAService, so the account will need the necessary permissions to create the database.

4. Next, specify connection info for the SQL Server database that will host the BAService repository. SQL Express or any flavor of SQL Server 2008 or 2012 is supported.

[Screenshot: SQL Server connection settings]

5. Specify a key for encrypting sensitive repository information.

[Screenshot: master password for encrypting repository settings]

6. The installer will then get to work creating the BAService in IIS/AppFabric and the BAService repository in SQL Server.

[Screenshot: installer creating the BAService and repository database]

7. If all is well, you’ll see a successful completion message:

[Screenshot: installation completed successfully]

If the wizard fails, it will roll back the install without providing any indication as to why. If this happens, be sure to follow steps 1 and 2 above and carefully review the logs to determine the problem.

After the installation is complete, you’ll notice the BAService has been created in IIS/AppFabric for Windows Server.

[Screenshot: BAService application in IIS/AppFabric]

The BAService database consists of 4 tables which store information about the configured Azure Service Bus relay endpoints that communicate with the LOB Targets, the operations supported by each target (configured in Visual Studio) and the virtual URIs used to address the BAService when configuring these entities:

[Screenshot: the four BAService repository tables]

At this point, the LobRelays, LobTargets and Operations tables will be empty.

Once you configure a LOB Target, the BAService writes the configuration data to each table, fusing Azure Service Bus Relay with the WCF LOB Adapters that ship with the BizTalk Adapter Pack. This combination enables very powerful, rich messaging scenarios: hybrid solutions that expose key business assets across traditional network, security and business boundaries in a simple and secure manner.

posted @ Sunday, October 20, 2013 10:56 AM | Feedback (0) | Filed Under [ BTS Azure WABS Windows Azure BizTalk Services ]
