Sunday, October 20, 2013
With the proliferation of devices and clouds, businesses and developers are challenged more than ever to both enable employee productivity and take advantage of the cost benefits of cloud computing. The reality, however, is that the vast majority of organizations will continue to invest in assets that reside both within their own data centers and in public clouds like Windows Azure and Amazon Web Services.
Windows Azure BizTalk Services (WABS) is a new PaaS-based messaging and middleware solution that lets you expose rich messaging endpoints across business assets, whether they reside on-premise or in the commercial cloud.
WABS requires an active Windows Azure account, and from there, you can provision your own namespace and start building rich messaging solutions using Visual Studio 2012. You can download everything you need to get started with WABS here: http://www.microsoft.com/en-us/download/details.aspx?id=39087
Once your WABS namespace has been provisioned, you are ready to start developing modern, rich messaging solutions. At this point, you can experiment with sending messages to a new messaging entity in Windows Azure called an EAI Bridge and routing them to various destinations including Azure Service Bus, Blob Storage, FTP, etc. However, if you want to enable support for connectivity to on-premise assets including popular database platforms like Microsoft SQL Server and Oracle Database as well as ERP systems such as Oracle E-Business Suite, SAP and Siebel eBusiness Applications, you'll want to install an optional component called the BizTalk Adapter Service (BAS), which runs on-premise.
The BAS includes a management and runtime component for configuring and enabling integration with your LOB systems. The capabilities are partitioned into a design-time experience, a configuration experience and the runtime. At design time, you configure your LOB Target (i.e. SQL Server, Oracle DB, SAP, etc.) for connecting to your LOB application via a LOB Relay. Built on Windows Azure Service Bus Relay Messaging, the LOB Relay allows you to establish a secure, outbound connection to the WABS Bridge which safely enables bi-directional communication between WABS and your LOB target through the firewall.
More details on the BizTalk Adapter Service (BAS) architecture can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/hh689773.aspx
While the installation experience is fairly straightforward, there are a few gotchas that can make things a bit frustrating. In this post, I’ll walk you through the process for installing and configuring BAS in hopes of getting you up and running in a breeze.
Installing the BizTalk Adapter Service
Before you install BAS, ensure you’ve downloaded and installed the following pre-requisites:
- WCF LOB Adapter Framework (found on the BizTalk Server 2013 installation media)
- BizTalk Adapter Pack 2013 (found on the BizTalk Server 2013 installation media)
- IIS 7+ and WAS (I’ve tested installation on Windows 7 and Windows 8 Enterprise editions)
- AppFabric 1.1 for Windows Server
- SQL Server 2008 or 2012 (all editions should be supported including free Express versions)
The installation process will prompt you for key information including the account to run the application pool that will host the management and runtime services and a password for encrypting key settings that will be stored by the management service in SQL Server. Let’s take a look at the process step-by-step.
1. When you unpack the installer, the most common mistake you're likely to make is to double-click it to get started. Instead, open a command prompt as an administrator and run the following command (you'll need to navigate to the folder in which you unpacked the MSI):
msiexec /i BizTalkAdapterService.msi /l*vx install_log.txt
This command will ensure the MSI runs as Admin and will log verbose results for you in case something goes wrong.
2. The first thing the installer will ask you for is credentials for configuring the application pool identity for the BAS Management Service. This service is responsible for configuring LOB Relays and LOB Targets, and stores all of the configuration in a repository hosted by SQL Server (Long Live Oslo!). In my case, I've created a local service account called svc-bas, but this could of course be a domain account, or you can use the other options.
3. Before you continue, be sure that the account you are using to run the MSI is a member of the appropriate SQL Server role(s), unless you plan on using SQL Server Authentication in the next step. The wizard will create a repository called BAService, so the account will need the necessary permissions to create the database.
4. Next, specify connection info for the SQL Server database that will host the BAService repository. SQL Express or any flavor of SQL Server 2008 or 2012 is supported.
5. Specify a key for encrypting sensitive repository information.
6. The installer will then get to work creating the BAService in IIS/AppFabric and the BAService repository in SQL Server.
7. If all is well, you’ll see a successful completion message:
If the wizard fails, it will roll back the install without providing any indication as to why. If this happens, be sure to follow steps 1 and 2 above and carefully review the logs to determine the problem.
After the installation is complete, you’ll notice the BAService has been created in IIS/AppFabric for Windows Server.
The BAService database consists of 4 tables which store information on the configured Azure Service Bus relay endpoints that communicate with the LOB Targets, the operations supported by each target (configured in Visual Studio), and the virtual URIs for addressing the BAService when configuring the entities previously mentioned:
At this point, the LobRelays, LobTargets and Operations tables will be empty.
Once you configure a LOB Target, the BAService will write the configuration data to each table, enabling Azure Service Bus Relay to fuse with the WCF LOB Adapters that ship with the BizTalk Adapter Pack. This combination enables very powerful rich messaging scenarios that support hybrid solutions exposing key business assets across traditional network, security and business boundaries in a simple and secure manner.
Friday, October 04, 2013
On Tuesday I learned that I have been awarded the Microsoft MVP Award for Microsoft Integration for 2013.
This is my second Microsoft Integration award and my seventh consecutive MVP Award (previously I was a Connected Systems MVP, a discipline which has since been retired). Last year, I had the honor of being recognized as Microsoft Integration MVP of the Year, and I'm still glowing about it!
Looking back over the last seven years, I still remember my first award in 2007 and how excited and humbled I felt to be recognized among such an incredible community. I can say without hesitation that every year I am awarded, that feeling never dulls. In fact, it only gets stronger, because the competition, and the channels available to have an impact, keep getting bigger.
Over the last seven years, technologies have changed and evolved along with my career, and I've had the privilege to both witness- about as close up as you can get without being a blue badge- and participate in the transformational changes Microsoft has undergone during this time. For example, in 2005 I was focused almost exclusively on .NET, ASP.NET ASMX, WCF and BizTalk Server. Today, I am working with newer technologies and platforms like Windows Azure, Azure Service Bus, Neuron ESB (built on top of WCF, BTW), RabbitMQ, Web Sockets and Node.js. .NET and BizTalk still remain foundational, but there are a ton of new technologies that have rounded out my toolbox. Throughout this time, I've had the privilege of applying these technologies in numerous industries including transportation, gaming, hospitality and financial services; written a book and published numerous articles and whitepapers; and founded a user group, along with speaking on the local, regional and national circuits.
While I am not suggesting that all of these experiences have been the result of the MVP program, my experiences over the last seven years have been much richer as a result of it. So, to commemorate seven years of being a Microsoft MVP, I’d like to share seven things I’ve learned as an MVP since 2007:
#7 Don’t complain about the Salmon. Seriously guys. It’s Seattle. Do you know what the street value is these days for a good cut of salmon?
#6 The MVP Program isn't just about products and technology. It's about people, and the wonderful relationships and friendships- both within and outside of Microsoft- that have made me a better technologist, learner and leader, and that extend far beyond the program.
#5 Every MVP should hug their MVP Lead at least once a year. These are the people who work tirelessly to keep the program running. Weekly newsletters, connecting you to the right folks inside the PGs and putting up with our grotesquely overinflated sense of self are just some of the things your MVP Lead does that we know about.
#4 Microsoft isn’t perfect. I know, surprising right? But seriously, before I was an MVP, I used to have these lofty expectations of how Microsoft should think about this or that, or how a product or feature should be implemented. At the end of the day, Microsoft is just a (really big) group of really smart people that are trying to build the right thing for an even bigger group of people, which is much, much harder and different than building a solution for one client/customer. So before you complain about this or that, think back to the code you wrote just two projects ago and then look in the mirror :-)
#3 The MVP Program is a significant investment. Despite having to pay for your own travel to Redmond, the cost of subsidizing hotels, providing meals, facilities and transportation for 3000+ MVPs every year is far from trivial. However, this cost pales in comparison to the investment that the product teams- including Program Managers, Software Development Engineers, Testers, directors and members of the executive team- make when they take time away from their projects, release schedules, and a million other things to prepare content and demos for the week and spend quality, face-to-face time with MVPs across over 30 product disciplines.
#2 Microsoft genuinely values its MVPs and listens to what they have to say. Sure, some interactions/SDRs provide less opportunity to influence a feature or scenario depending on how baked the product is, but your input is listened to, documented, reviewed, discussed and considered in shaping the future roadmap. If you have any doubts about this, next time you are in a heated discussion, study the faces of the PMs in the room, and the one grimacing could just be the owner of a feature they’ve poured the last year of their career into that you just (knowingly or unknowingly) killed.
#1 It’s not about the perks. MSDN subscriptions, access to internal DLs, invitations to SDRs, Product Group Interactions are really, really awesome, but if that’s the extent of what you are “getting” from the program, you’re doing it wrong. Being an MVP is about representing the community and customers and (politely) challenging your Microsoft peers (who are every bit as smart and passionate as you) to consider what you have to say in an effort to make the product(s) better. The opportunity to shape and influence the roadmaps and feature sets for a $78 billion global company is a tremendous privilege that never, ever gets old.
I’d like to thank Microsoft for another great year. I sincerely hope that I can positively influence the program, products and community in 2014 and give back even a fraction of what I have gained year over year, and with a little bit of luck and hard work, maybe we’ll see a #8 on this list by this time next year!
Tuesday, September 10, 2013
I recently had a good internal discussion on two-week versus three-week sprints while getting ready to kick off a new project for a client.
This is a debate that's been happening in the agile community for well over a decade, and while there is now broad consensus that four-week sprints are almost always highly undesirable, the thing to remember is that the quality, caliber and discipline of the team is absolutely essential regardless of sprint duration.
I Just Want to Dance!
The most common argument against two-week sprints is that the planning ceremonies occupy too much time, effectively reducing the team's delivery time from 10 days to less than 8. This is only true for severely under-performing teams. High-performing teams should be able to land the demo and retro in one day and planning in as little as half a day. If the only time a team communicates is during these ceremonies, the time will drag on even further.
High-performing teams communicate continually, practice dress rehearsals of their demos, keep a groomed backlog ready to go and, in some cases, might even have stories already decomposed (signaling strong alignment with business prioritization).
The other common argument is that deployment time cuts further into delivery time, so that those 8 days become more like 7 because it takes a day to deploy. This is true of teams that practice cargo cult agile development. If you don't have unit tests and automated builds in place, you _will_ feel pain each and every sprint.
One solution to this dilemma that sometimes comes up is to start with, say, three-week sprints and then, when the team has "earned" two-week sprints, reduce the sprint time accordingly. This is an anti-pattern for the simple reason that you can't improve what you can't measure, and if you go around changing sprint duration, your velocity data becomes corrupt. This means that your ability to predict future velocity is severely impeded, which affects budget and staffing and obviously has schedule implications.
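To make the measurement point concrete, here's a small illustrative sketch (the numbers are invented) showing why raw per-sprint velocity becomes incomparable the moment sprint duration changes:

```python
# Illustrative only: story points completed per sprint for a team that
# switched from 3-week to 2-week sprints mid-project (made-up numbers).
sprints = [
    {"length_weeks": 3, "points": 30},
    {"length_weeks": 3, "points": 33},
    {"length_weeks": 2, "points": 21},
    {"length_weeks": 2, "points": 22},
]

# Raw velocity looks like it dropped from ~31 to ~21 points per sprint...
raw = [s["points"] for s in sprints]

# ...but normalizing to points-per-week shows throughput barely changed,
# which is exactly why sprint-over-sprint comparisons break once the
# duration changes mid-stream.
per_week = [s["points"] / s["length_weeks"] for s in sprints]

print(raw)       # [30, 33, 21, 22]
print(per_week)  # [10.0, 11.0, 10.5, 11.0]
```

Nothing about the team actually changed in this example; only the yardstick did, and every forecast built on the raw numbers is now wrong.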
Why Two Week Sprints Work
Two-week sprints are highly advantageous when you have a high degree of risk and are delivering new innovation, because they provide tighter intervals with which to adapt and adjust. While three-week sprints can be successful, most often they merely mask the team's lack of efficiency and provide a fig leaf to hide behind- in my experience, in almost every case you will find that productive, heads-down time still turns out to be just under two weeks!
When building a team, you should settle for no less than A-Team players who are going to kick ass and take names from day one. If you do the heavy lifting to invest in building the right team, the sprint duration won't matter nearly as much; but if you are not committed to this from day zero, no amount of sprint-duration optimization will save you from failure.
Wednesday, March 20, 2013
The rumors are true. The Global Windows Azure Bootcamp is coming to Phoenix on April 27th, 2013. Registration is now open: https://phxglobalazurebootcamp.eventday.com/
This is a truly global event in which the Phoenix community will come together to share and learn what you can do on Windows Azure.
This one-day deep-dive class will get you up to speed on developing for and deploying to Windows Azure. The class will be led by me and fellow MVPs, including the one and only Joe Guadagno and Dan Wahlin, and your friendly neighborhood Microsoft Regional Director Scott Cate. You're guaranteed to learn a ton, and in addition to the talks, you'll work on some great hands-on labs so you can apply what you learn on the same day and take the labs with you! Best of all, if you get stuck, we'll be there to make you do push-ups, I mean, get unblocked.
We will start at 9:00 with welcome and introductions and get right into an end-to-end overview of Windows Azure. From there, we'll participate in a massive, coordinated, global deployment to Windows Azure, teaming with over 60 other worldwide locations to see the workload in action (details are super secret, so you have to show up to get your security clearance- this is a boot camp after all)!
After we’ve done our best to take down a datacenter, we’ll take a break and take a lap around storage and database options on Windows Azure while we enjoy a catered lunch kindly sponsored by Microsoft. We’ll also have fresh primo coffee, sodas, waters and snacks to help you power through the labs, which will give you real-world exposure to what it’s like to actually apply what you’ve learned, and you can take the working applications home with you.
From there we’ll take another break and wrap up the day with a look at how Windows Azure Service Bus changes how you think about messaging and integration when working at cloud scale. We’ll have a Service Bus lab and from there likely plan some nefarious after event activities at one of downtown Chandler’s fine watering holes.
Here are the details:
Welcome and Introductions (15 mins): 9:00 – 9:15
Windows Azure Overview (75 minutes): 9:15 – 10:30
Deploy to the cloud! (45 minutes): 10:30 – 11:15
Break (15 minutes): 11:15 – 11:30
Windows Azure Storage and Database (90 minutes): 11:30 – 1:00
Hands On Labs (2 hours): 1:00 – 2:00
Break (15 minutes): 2:00 – 2:15
Windows Azure Service Bus (90 minutes): 2:15 – 3:45
3:45 – 5:00
HOW MUCH DOES BOOTCAMP COST?
This event is FREE to the attendees. Gratis! Gratuite! Libero! We’ll certainly take any good karma you want to send our way, but your attendance and full engagement is all we ask. Be sure to check out the prerequisites to ensure you are ready to rock.
DO I NEED TO BRING ANYTHING?
This is a BYOL event. To get the most of the event, you will want to come to boot camp with your own laptop pre-loaded with Visual Studio, the Azure SDK and all prerequisites. Please see http://globalwindowsazure.azurewebsites.net/?page_id=171 to download and install everything you’ll need to make this a great event.
BUT, I’M COMPLETELY NEW TO THIS AZURE CLOUD THING
This event is for you! We’ll have a mix of content both for experienced developers and those brand spanking new to Windows Azure. Our trainers will be here to answer all of your questions and help you with the labs, so remember, there are no stupid questions.
BUT, I ALREADY KNOW THIS STUFF
Awesome! We’d love to have you as you’ll probably teach us a thing or two and we guarantee you’ll walk away learning a few things too!
LOCATION, LOCATION, LOCATION
Boot camp will be held at Gangplank in Chandler, located at 260 South Arizona Avenue, Chandler, AZ 85225.
Seating is limited for this event so please register now at https://phxglobalazurebootcamp.eventday.com/ to guarantee your seat and help us plan for coffee, drinks, snacks and lunch.
Tuesday, February 26, 2013
Anyone who knows me knows that I’m a messaging nerd. I love messaging so much that I all but gave up web development years ago to focus exclusively on the completely unglamorous space of messaging, integration and middleware. What drives me to this space? Why not spend my time and focus my career on building sexy Web or device apps that are much more fashionable, and that would let people walk away with something tangible that they can see, touch and feel?
These are questions I ponder often, but every time I do, an opportunity presents itself to apply my passion for messaging and integration in new and interesting ways that have a pretty major impact for my clients and the industry as a whole. Some recent examples of projects I led and coded on come from the Intelligent Transportation and Gaming spaces: developing an automated gate management solution to better secure commercial vehicles for major carriers when they’re off the road; integrating slot machines for a major casino on the Vegas strip with other amenities on the property to create an ambient customer experience; and increasing the safety of our highways by reading license plates and pushing messages to and from the cloud. These are just a few recent examples of the ways in which messaging plays an integral role in building highly compelling and interesting solutions that otherwise wouldn’t be possible. Every day, my amazing team at Neudesic is involved in designing and developing solutions on the Microsoft integration platform that have truly game-changing business impacts for our clients.
As hybrid cloud continues to prove itself as the most pragmatic approach for taking advantage of the scale and performance of cloud computing, the need for messaging and integration becomes only more important. Two technologies that fit particularly well in this space are Neuron and Azure Service Bus. I won’t take too much time providing an overview of each here as there are plenty of good write ups out there that do a fine job, but I do want to share some exciting news that I hope you will find interesting if you are building hybrid solutions today and/or working with Azure Service Bus or Neuron.
Over the last year, the Neuron team at Neudesic has been hard at work cranking out what I think is the most significant release since version 1.0, which I started working with back in 2007, and I’m thrilled to share that as of today, Neuron 3.0 is live!
Building on top of an already super solid WCF 4.0 foundation, Neuron 3.0 is a huge release for both Neudesic and our clients, introducing a ton of new features including:
- Full Platform support for Microsoft .NET 4/LINQ, Visual Studio 2010/2012
- New features in Management and Administration, including:
  - New User Interface Experience
  - Queue Management
  - Server and Instance Management
  - Dependency Viewers
- New features in Deployment and Configuration Management, including:
  - New Neuron ESB Configuration storage
  - Multi-Developer support
  - Incremental Deployment
  - Command-line Deployment
- New features in the Business Process Designer, including:
  - Referencing External Assemblies
  - Zoom, Cut, Copy and Paste
  - New Process Steps:
    - Duplicate Message Detection
    - For Each loop
  - New Custom Process Steps, including:
    - Interface for Controlling UI Properties
    - Folder hierarchy for UI display
- New features in Neuron Auditing, including:
  - Microsoft SQL Azure
  - Excluding Body and Custom Properties
  - Failed Message Monitoring
- New Messaging features, including:
  - AMQP-powered Topics with RabbitMQ
  - Improved MSMQ Topic Support
  - POP3 and Microsoft Exchange Adapters
  - ODBC Adapter enhancements
  - Azure Service Bus Adapter
- New features in the Service Broker, including:
  - REST enhancements
  - REST support for Service Policies
  - WSDL support for hosted SOAP services
- Many enhancements to the UI, bug fixes and improvements to the overall user experience.
In version 2.6, I worked with the team to bring Azure Service Bus Relay Messaging in as a first-class capability. Since Neuron is built on .NET and WCF, and the relay service is exposed very nicely using the WCF programming model, adding the relay bindings to Neuron’s Service Endpoint feature was a no-brainer. This immediately provided the ability to bridge or extend the on-premise pub-sub messaging, transformation, mediation, enrichment and security capabilities with Azure Service Bus Relay, enabling new, highly innovative hybrid solutions for my team and our customers.
Between then and this new release, Microsoft released support for queues and topics, also known as Brokered Messaging. These capabilities introduced the ability to model durable, pull-based pub-sub messaging in scenarios where such a brokered mechanism makes sense. To be clear, Brokered Messaging is not a replacement for Relay- in fact, we’ve worked on a number of solutions where the firewall-friendly push messaging capabilities of Relay fit and even complement Brokered Messaging in certain scenarios (notification-first pull-based pub-sub is a very handy-dandy messaging pattern where both are used, and perhaps I’ll write that up some day). Think of each as a tool in your hybrid cloud messaging toolbox.
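Since I haven't written that pattern up yet, here's a minimal, purely illustrative Python sketch of the notification-first idea, using a stdlib queue as a stand-in for a durable Service Bus queue and a plain function call as the relay-style push (none of this is actual SDK code):

```python
import queue

# Stand-in for a durable Service Bus queue: the payload always travels here.
work_queue = queue.Queue()

def on_notification(drained):
    # The relay-style push delivers only a cheap "wake up" signal; the
    # receiver then drains the durable queue instead of polling it on a timer.
    while not work_queue.empty():
        drained.append(work_queue.get())

drained = []
for order_id in (1, 2):
    work_queue.put({"order_id": order_id})  # pull-based leg: enqueue payload

on_notification(drained)  # push leg: tell the receiver work is waiting

print([m["order_id"] for m in drained])  # [1, 2]
```

The payoff is that the push leg stays tiny and stateless while the pull leg keeps the durability guarantees, which is why the two services complement rather than replace each other.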
It didn’t take long to see the potential of these additions to Azure Service Bus and I started having discussions with the Neuron team at Neudesic and the Azure Service Bus team at Microsoft about building an adapter that like Relay, would bring Brokered Messaging capabilities to Neuron, enabling a complete, rich spectrum of hybrid messaging capabilities.
Luckily, both teams agreed it was a good idea and Neudesic was nice enough to let me write the adapter.
Obviously, as a messaging nerd, this was an incredibly fun project to work on, and after just a couple of hours I had my first spike up and running on a very early build of Neuron 3.0, which demonstrated taking a message that was published to Neuron and re-publishing it on an Azure Service Bus topic. Seven major milestones, a number of internal demos, walkthroughs with the Service Bus team and a ton of load and performance testing later, I completed what is now the initial release of the Neuron Azure Service Bus Adapter, which ships with Neuron 3.0!
What follows is a lap around the core functionality of the adapter largely taken from the product documentation that ships with Neuron 3.0. I hope you will find the adapter interesting enough to take a closer look and even if hybrid cloud is not on your mind, there are literally hundreds of reasons to consider Neuron ESB for your messaging needs.
Windows Azure Service Bus is a Platform as a Service (PaaS) offering from Microsoft that provides a highly robust messaging fabric hosted in Windows Azure.
Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB by providing pub-sub messaging capable of traversing firewalls, a taxonomy for projecting entities and very simple orchestration capabilities via rules and actions.
As shown below, Azure Service Bus bridges on-premise messaging capabilities, enabling the development of hybrid cloud applications that integrate with external services and with service providers located behind the firewall, allowing a new, modern breed of compositions to transcend traditional network, security and business boundaries.
Bridging ESBs in Hybrid Clouds – Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB enabling a next generation of hybrid cloud applications that transcend traditional network, security and business boundaries.
There are two services supported by Azure Service Bus:
- Azure Service Bus Relay: Serves as a push-based relay between two (or more) endpoints. A client and service (or services) establish an outbound, bi-directional socket connection over either TCP or HTTP on the relay and thus, messages from the client tunnel their way through the relay to the service. In this way, both the client and service are really peers on the same messaging fabric.
- Azure Service Bus Brokered Messaging: Provides a pull-based durable message broker that supports queues, topics and subscriptions. A party wishing to send messages to Azure Service Bus establishes a TCP or HTTP connection to a queue or topic and pushes messages to the entity. A party wishing to receive messages from Azure Service Bus establishes a TCP or HTTP connection and pulls messages from a queue or subscription.
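As a rough illustration of the topic/subscription model Brokered Messaging provides, here is a tiny Python stand-in (my own sketch, not the actual SDK): a message sent to a topic is copied to every subscription, and each subscriber pulls from its own subscription independently.

```python
import queue

class Topic:
    """Toy stand-in for a brokered topic with pull-based subscriptions."""

    def __init__(self):
        self.subscriptions = {}

    def add_subscription(self, name):
        # Each subscription is its own durable-ish buffer of messages.
        self.subscriptions[name] = queue.Queue()
        return self.subscriptions[name]

    def send(self, message):
        # The broker fans the message out to every subscription.
        for sub in self.subscriptions.values():
            sub.put(message)

topic = Topic()
billing = topic.add_subscription("billing")
audit = topic.add_subscription("audit")

topic.send({"order_id": 42})

# Each party pulls from its own subscription at its own pace.
print(billing.get_nowait())  # {'order_id': 42}
print(audit.get_nowait())    # {'order_id': 42}
```

Contrast this with the Relay bullet above: there, the client and service must both be connected for a message to flow; here, the broker holds the message until each receiver comes and asks for it.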
Neuron ESB 3.0 supports both Azure Service Bus services and this topic focuses on support of Azure Service Bus Brokered Messaging via the Neuron Azure Service Bus Adapter.
For more information on support for Azure Service Bus Relay support, please see “Azure Service Bus Integration” in the “Service Endpoints” topic in the Neuron ESB 3.0 product documentation.
The Neuron Azure Service Bus Adapter provides full support for the latest capabilities provided by the Windows Azure SDK version 1.7.
Once the Neuron Azure Service Bus adapter is registered and an Adapter Endpoint is created, all configuration is managed through the property grid of the Adapter located on the properties tab of the Adapter Endpoint’s Details Pane:
Neuron Azure Service Bus Adapter – Property Grid – All configuration for the adapter is managed through the property grid. Properties are divided into 3 sections: General, Publish Mode Properties and Subscribe Mode Properties.
Please note that in order to connect to an Azure Service Bus entity with the Neuron Azure Service Bus adapter, you need to sign up for an Azure account and create an Azure Service Bus namespace with the required entities and ACS configuration. For more information, visit http://azure.com
The Neuron Azure Service Bus adapter supports the following Azure Service Bus Brokered Messaging features:
- Send to Azure Service Bus Queue
- Send to Azure Service Bus Topic
- Receive from Azure Service Bus Queue
- Receive from Azure Service Bus Subscription
In addition, the Neuron Azure Service Bus adapter simplifies the development experience by providing additional capabilities typical in production scenarios without the need to write custom code including:
- Smart Polling
- Eventual Consistency
- Transient Error Detection and Retry
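The adapter's actual retry code isn't shown here, but its Retry Count, Minimum Back-Off and Maximum Back-Off settings (covered in the configuration properties later on) map naturally onto a classic transient-retry loop. Here's a hedged Python sketch of that pattern as I'd describe it, not the adapter's source:

```python
import time

class TransientError(Exception):
    """Stand-in for a transient Service Bus fault (e.g. server busy)."""

def with_retry(operation, retry_count=3, min_backoff=1.0, max_backoff=30.0):
    """Retry an operation on transient errors, doubling the delay between
    attempts and clamping it between min_backoff and max_backoff seconds."""
    for attempt in range(retry_count + 1):
        try:
            return operation()
        except TransientError:
            if attempt == retry_count:
                raise  # retries exhausted: surface the error to the caller
            # Grow the wait with each attempt, bounded by the configured caps.
            delay = min(max_backoff, min_backoff * (2 ** attempt))
            time.sleep(delay)
```

The key design point is that only transient faults are retried; a non-transient failure such as bad credentials or a missing entity should fail fast rather than burn through the back-off schedule.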
The Neuron Azure Service Bus adapter is installed as part of the core Neuron ESB installation. The adapter is packaged into a single assembly located within the \Adapters folder under the root of the default Neuron ESB installation directory:
In addition, the following assembly is required and automatically installed in the root of the folder created for the service instance name:
- Microsoft.ServiceBus.dll (Azure SDK version 1.7)
To use the adapter, it must first be registered within the Neuron ESB Explorer Adapter Registration Window. Within the Adapter Registration Window, the adapter will appear with the name “Azure Service Bus Adapter”. Once registered, a new Adapter Endpoint can be created and configured with an instance name of your choice:
Neuron ESB Explorer Adapter Registration Window - Property Grid – Before configuring the adapter instance for Publish or Subscribe mode, the adapter must first be registered.
Once the initial registration is complete, the Neuron Azure Service Bus adapter can be configured in one of 2 modes: Publish or Subscribe.
Publish mode allows Neuron ESB to monitor an Azure Service Bus Queue or Subscription by regularly polling, de-queuing all the messages, and publishing those messages to a Neuron ESB Topic. Messages are read synchronously via a one-way MEP.
Receiving Messages from Azure Service Bus – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.
Configuring the Publish mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:
- Azure Service Bus Namespace Name - A registered namespace on Azure Service Bus. For example 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net (for information on how to provision, configure and manage Azure Service Bus namespaces, please see the Azure Service Bus topic on http://azure.com).
- Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS - For information on how to provision, configure and manage Azure Access Control namespaces, please see the Azure Access Control topic on http://azure.com).
- Azure ACS Key – The shared key used in conjunction with Azure ACS Issuer Name.
- Azure Entity Type - Queue or Subscription
- Azure Channel Type – Default, if outbound TCP port 9354 is open or HTTP to force communication over HTTP port 80/443 (In Default mode, the Neuron Azure Service Bus Adapter will try to connect via TCP. If outbound TCP port 9354 is not open, choose HTTP).
- Retry Count - The number of Service Bus operations retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
- Minimum Back-Off - The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
- Maximum Back-Off - The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
- Azure Queue Name – The name of the queue that you want to receive messages from (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
- Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
- Azure Subscription Name - The name of the subscription you want to receive messages from (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
- Delete After Receive – False by default. If set to True, deletes the message from the queue or topic after it is received regardless of whether it is published to Neuron successfully (for more information on this setting, see the “Understanding Eventual Consistency” topic).
- Wait Duration - Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (for more information on this setting, see the “Understanding Smart Polling” topic).
- Neuron Publish Topic - The Neuron topic that messages will be published to. Required for Publish mode.
- Error Reporting – Determines how all errors are reported in the Windows Event Log and Neuron Logs. Either as Errors, Warnings or Information.
- Error on Polling – Determines if polling of the data source continues on error and if consecutive errors are reported.
- Audit Message on Failure – Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.
The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Receive” in Publish mode:
Publish Mode General Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.
The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Publish mode:
Publish Mode Properties Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.
Subscribe mode allows Neuron ESB to write messages that are published to Neuron ESB to an Azure Service Bus queue or topic. In this manner, Neuron ESB supports the ability to bridge an Azure Service Bus entity, allowing for on-premise parties to seamlessly communicate with Azure Service Bus. Once Neuron ESB receives a message, it sends the message to an Azure Service Bus Queue or Topic.
Sending Messages to Azure Service Bus – When in Subscribe mode, the adapter supports sending messages published on Neuron ESB to an Azure Service Bus entity.
In addition to the General Properties covered under the Publish mode documentation, configuring the Subscribe mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:
- Adapter Send Mode - Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (for more information on this setting, see the “Choosing Synchronous vs. Asynchronous” topic).
- Adapter Queue Name - The name of the queue you want to send messages to (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
- Adapter Topic Name - The name of the topic you want to send messages to (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Send” in Subscribe mode:
Subscribe Mode General Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.
The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Subscribe mode:
Subscribe Mode Properties Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.
When working with services in general and multi-tenant PaaS services in particular, it is important to understand that in order to scale to virtually hundreds of thousands of users/applications, most services like Azure Service Bus, SQL Azure, etc. implement a throttling mechanism to ensure that the service remains available.
This is particularly important when you have a process or application that is sending or receiving a high volume of messages because in these cases, there is a high likelihood that Azure Service Bus will throttle one or several requests. When this happens, a fault/HTTP error code is returned and it is important for your application to be able to detect this fault and attempt to remediate accordingly.
Unfortunately, throttle faults are not the only errors that can occur. As with any service, security, connection and other unforeseen errors (exceptions) can and will occur, so the challenge becomes not only being able to identify the type of fault, but in addition, know what steps should be attempted to remediate.
Per the guidance provided by the Azure Customer Advisory Team (http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/), the Neuron Azure Service Bus adapter uses an exponential back-off based on the values provided for the Retry Count, Minimum Back-Off and Maximum Back-Off properties within the Properties tab for both Publish and Subscribe mode.
Given a value of 3 retries, two seconds and ten seconds respectively, the adapter will automatically determine a value between two and ten and back off exponentially one time for each retry configured:
Exponential Back-Off Configuration – The adapter will automatically detect transient exceptions/faults and retry by implementing an exponential back-off algorithm given a retry count, initial and max back-off configuration.
Taking this example, as shown in the figure on the right, if the adapter chose an initial back-off of two seconds, in the event of a transient fault being detected (i.e. throttle, timeout, etc.) the adapter would wait two seconds before trying the operation again (i.e. sending or receiving a message) and exponentially increment the starting value until either the transient error disappears or the retry count is exceeded.
In the event that the retry count is exceeded, the Neuron Azure Service Bus adapter will automatically persist a copy of the message in the audit database to ensure that no messages are lost (provided a SQL Server database has been configured).
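The retry behavior described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch (the adapter itself is a .NET component; `TransientError` and `with_retries` are hypothetical names, and the fault type and operation are supplied by the caller):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a throttle or timeout fault raised by the service."""

def with_retries(operation, retry_count=3, min_backoff=2.0, max_backoff=10.0):
    # Pick a starting delay between the configured minimum and maximum
    # back-off values, mirroring the adapter's documented behavior.
    delay = random.uniform(min_backoff, max_backoff)
    for attempt in range(retry_count + 1):
        try:
            return operation()
        except TransientError:
            if attempt == retry_count:
                # Retry count exceeded: surface the fault so the caller
                # can persist the message to the audit database.
                raise
            time.sleep(delay)
            delay *= 2  # back off exponentially before the next attempt
```

With a retry count of 3 and min/max back-off of two and ten seconds, an initial delay of two seconds grows to four, then eight, before the fault is surfaced.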
When configuring the Neuron Azure Service Bus Adapter in Publish mode, the adapter can take advantage of a Neuron ESB feature known as Smart Polling.
With Smart Polling, the adapter will connect to an Azure Service Bus queue or subscription and check for messages. If one or more messages are available, all of them will be delivered immediately (see “Understanding Eventual Consistency” for more information on supported read behaviors).
However, if no messages are available, the adapter will open a connection to the Azure Service Bus entity and wait for a specified timeout before attempting to initiate another poll request (essentially resulting in a long-polling behavior). In this manner, Azure Service Bus quotas are honored while ensuring that the adapter issues a receive request only when the configured timeout occurs as opposed to repeatedly polling the Azure Service Bus entity.
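Smart Polling amounts to a drain-then-long-poll loop. Here is a hedged sketch using Python's in-process queue.Queue as a stand-in for a Service Bus entity (names are illustrative, not the adapter's actual API):

```python
import queue

def smart_poll(entity, wait_duration, publish):
    # Block up to wait_duration seconds waiting for the first message
    # (the long-poll part of the behavior).
    try:
        message = entity.get(timeout=wait_duration)
    except queue.Empty:
        # Timeout elapsed with no messages; the caller initiates the
        # next poll request, so Azure Service Bus quotas are honored.
        return 0
    # At least one message is available: drain and deliver immediately.
    delivered = 0
    while True:
        publish(message)
        delivered += 1
        try:
            message = entity.get_nowait()
        except queue.Empty:
            return delivered
```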
When working with Azure Service Bus, it is important to note that the model for achieving consistency is different from traditional distributed transaction models. For example, when working with modern relational databases, or when spanning multiple services composed into a logical unit of work (using WS-Atomic Transaction, for example), it is a common expectation that work will either be performed completely or not at all. These types of transactions have the characteristics of being atomic, consistent, isolated and durable (ACID). However, to achieve this level of consistency, a resource manager is required to coordinate the work being carried out by each service/database that participates in a logical transaction.
Unfortunately, given the virtually unlimited scale of the web and cloud computing, it is impossible to deploy enough resource managers to account for the hundreds of thousands, if not millions, of resources required to achieve this level of consistency. Even if it were possible, the coordination overhead would make the scale and performance demanded by modern cloud-scale applications unattainable.
Of course, consistency is still just as important for applications that participate in logical transactions spanning cloud services or that consume them. An alternative approach is to adopt a basically available, soft state, eventually consistent (BASE) model for transactions.
Ensuring Eventual Consistency in Publish Mode
Azure Service Bus supports this model for scenarios that require consistency, and the Neuron Azure Service Bus adapter makes taking advantage of this capability simply a matter of leaving the “Delete After Receive” property (available in the Publish Mode Settings) set to False, which is the default.
When set to False, when receiving a message, the adapter will ensure that the message is not discarded from the Azure Service Bus entity until the message has been successfully published to Neuron ESB. In the event that an error occurs when attempting to publish a message, the message will be restored on the Azure Service Bus entity ensuring that it remains available for a subsequent attempt to receive the message (Please note that lock durations configured on the entity will affect the behavior of this feature. For more information, please refer to the Azure Service Bus documentation on MSDN: http://msdn.microsoft.com/en-us/library/ee732537.aspx).
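The difference between the two settings can be illustrated with a toy in-memory entity. This is a conceptual sketch only; the class and method names are hypothetical stand-ins for the adapter's internals:

```python
class InMemoryEntity:
    """Toy stand-in for an Azure Service Bus queue or subscription."""
    def __init__(self, messages):
        self.messages = list(messages)
    def peek_lock(self):
        return self.messages[0]        # message is locked, not removed
    def complete(self, message):
        self.messages.remove(message)  # now safe to discard
    def abandon(self, message):
        pass                           # lock released; message still queued
    def receive_and_delete(self):
        return self.messages.pop(0)    # removed immediately

def receive(entity, publish, delete_after_receive=False):
    if delete_after_receive:
        # The message is gone even if publishing to Neuron fails.
        publish(entity.receive_and_delete())
    else:
        # Default: only discard the message once it has been published
        # to Neuron ESB successfully; otherwise restore it for retry.
        message = entity.peek_lock()
        try:
            publish(message)
        except Exception:
            entity.abandon(message)
            raise
        entity.complete(message)
```

With the default setting, a failed publish leaves the message on the entity for a subsequent receive attempt; with Delete After Receive set to True, the same failure loses the message.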
When the Neuron Azure Service Bus adapter is configured in Subscribe mode, you can choose to send messages to an Azure Service Bus queue or topic in either synchronous or asynchronous mode by setting the Adapter Send Mode property to either “Asynchronous” or “Synchronous” in the Subscribe Mode Property group.
If reliability is a top priority, such that the possibility of message loss cannot be tolerated, it is recommended that you choose Synchronous. In this mode, the adapter will transmit messages to an Azure Service Bus queue or topic at a rate of about four to five per second. While it is possible to increase this throughput by adding additional adapters in Subscribe mode, as a general rule, use this mode when choosing reliability at the expense of performance/throughput.
By contrast, if performance, low latency or throughput is a top priority, configuring the adapter to send asynchronously will result in significantly higher throughput (by several orders of magnitude). While send performance in this mode is much higher, in the event of a catastrophic failure (server crash, out-of-memory exception), messages that have left the Neuron ESB process but have not yet been transmitted to Azure Service Bus (i.e. are still in memory) can be lost. The possibility of message loss is therefore much higher than in synchronous mode because of the significantly higher density of in-flight messages.
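The trade-off between the two send modes can be sketched as follows; AdapterSender and its methods are hypothetical names for illustration, not the adapter's real API:

```python
from collections import deque

class AdapterSender:
    """Sketch contrasting the two send modes (illustrative names only)."""
    def __init__(self, transmit):
        self.transmit = transmit  # performs the actual wire send
        self.pending = deque()    # in-memory buffer used by async mode

    def send_sync(self, message):
        # Synchronous: block until the entity acknowledges the message.
        # Slower, but a crash cannot lose a message the caller believes
        # was sent.
        self.transmit(message)

    def send_async(self, message):
        # Asynchronous: buffer and return immediately for throughput.
        # Messages still sitting in `pending` are lost if the process
        # crashes before a background flush transmits them.
        self.pending.append(message)

    def flush(self):
        while self.pending:
            self.transmit(self.pending.popleft())
```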
One of the benefits of any queue-based messaging pattern is that the publisher/producer is decoupled from the subscribers/consumers. As a result, parties interested in a given message can be added and removed without any knowledge of the publisher/producer.
By persisting the message until an interested party receives it, the sending party is further decoupled from the receiving party because the receiving party need not be available at the time the message was written to the persistence store. Azure Service Bus supports this temporal decoupling with both queues and topics because they are durable entities.
As a result, a party that writes new order messages to an Azure Service Bus queue can do so uninhibited, even when no consumer is currently available, as shown below:
When you configure an instance of the Neuron Azure Service Bus adapter in Publish mode, you can disable the adapter by unchecking the “Enabled” box. Any new messages written to the Azure Service Bus queue or subscription will persist until the adapter is enabled once again.
Another messaging pattern, competing consumers, lets you take advantage of the performance and scalability benefits of the pull-based pub-sub model: you adjust the number of consumers to the resources available to you and keep adding consumers until throughput requirements are met.
To take advantage of this pattern with the Neuron Azure Service Bus adapter and Azure Service Bus, simply add additional instances of the Publishing adapter as needed:
Competing Consumers – Adding additional consumers with the Neuron Azure Service Bus adapter is simply a matter of adding additional instances of the Publishing adapter.
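A sketch of the competing-consumers pattern, using Python threads and an in-process queue as stand-ins for adapter instances and a Service Bus entity (illustrative only):

```python
import queue
import threading

def run_competing_consumers(entity, consumer_count, handle):
    """Drain an entity with several consumers pulling in parallel;
    each message is delivered to exactly one consumer."""
    def consume():
        while True:
            try:
                message = entity.get_nowait()
            except queue.Empty:
                return  # entity drained; this consumer exits
            handle(message)

    workers = [threading.Thread(target=consume) for _ in range(consumer_count)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()
```

Because each message is received by exactly one consumer, adding instances raises aggregate throughput without the producer knowing or caring how many consumers exist.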
The following sections provide details for each property exposed through the Neuron Explorer UI:
These properties are used for all modes of the adapter:
- Azure Service Bus Namespace Name – A registered namespace on Azure Service Bus. For example, 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net
- Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS).
- Azure ACS Key – The shared key used in conjunction with Azure ACS Issuer Name.
- Azure Entity Type – Default Queue. Queue or Topic.
- Azure Channel Type – Default is Default. Default, if outbound TCP port 9354 is open, or HTTP to force communication over HTTP ports 80/443.
- Retry Count – Default 5. The number of Service Bus operation retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
- Minimum Back-Off – Default 3. The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
- Maximum Back-Off – Default 3. The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
These properties are only used when the adapter is in either Request/Response or Publish mode:
- Azure Queue Name – The name of the queue that you want to receive messages from (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
- Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
- Azure Subscription Name – The name of the subscription you want to receive messages from (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
- Delete After Receive – Default False. If set to True, deletes the message from the queue or topic after it is received, regardless of whether it is published to Neuron successfully (for more information on this setting, see the “Understanding Eventual Consistency” topic).
- Wait Duration – Default 5. Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (for more information on this setting, see the “Understanding Smart Polling” topic).
- Neuron Publish Topic – The Neuron topic that messages will be published to. Required for Publish mode.
- Error Reporting – Default Error. Determines how all errors are reported in the Windows Event Log and Neuron Logs: either as Errors, Warnings or Information.
- Error on Polling – Default Stop Polling On Error. Determines if polling of the data source continues on error and if consecutive errors are reported.
- Audit Message on Failure – Default False. Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.
These properties are only used when the adapter is in either Solicit/Response or Subscribe mode:
- Adapter Send Mode – Default Asynchronous. Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (for more information on this setting, see the “Choosing Synchronous vs. Asynchronous” topic).
- Adapter Queue Name – The name of the queue you want to send messages to (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
- Adapter Topic Name – The name of the topic you want to send messages to (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
Azure Service Bus uses a proprietary message envelope called a Brokered Message as the unit of communication between all messaging entities including queues, topics and subscriptions.
In Publish mode, the Neuron Azure Service Bus Adapter will automatically map the body of the incoming Brokered Message to the Body property of the Neuron ESBMessage, serializing the payload based on the detected encoding type as follows:
Note per the table above that unless otherwise specified, the Neuron Azure Service Bus adapter will assume that the incoming message payload is text/xml.
In addition, any properties stored in the Property property bag of the BrokeredMessage will be automatically mapped to the ESBMessage property bag provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration. An exception to this rule is that the adapter will always map the BrokeredMessage.LockToken to the ESBMessage property bag with the same name regardless of whether “Include Metadata” is checked.
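The mapping rules just described can be expressed as a small sketch, using plain dictionaries to stand in for the .NET BrokeredMessage and ESBMessage types (illustrative only; the real adapter operates on .NET objects):

```python
def map_brokered_to_esb(brokered, include_metadata):
    """Map an incoming brokered message's body and properties onto an
    ESB message shape (dicts stand in for the real .NET types)."""
    esb = {"Body": brokered["Body"], "Properties": {}}
    if include_metadata:
        # Copy the whole property bag across when Include Metadata is on.
        esb["Properties"].update(brokered["Properties"])
    # The lock token is always mapped, regardless of Include Metadata.
    esb["Properties"]["LockToken"] = brokered["LockToken"]
    return esb
```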
In Subscribe mode, the Neuron Azure Service Bus Adapter will automatically create a new Brokered Message for each transmission and map the body of an outgoing ESBMessage to the new message body as follows:
In addition, any properties stored in the Property property bag of the ESBMessage will be automatically mapped to the BrokeredMessage property bag provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration.
Note that the maximum total payload size for Azure Service Bus messages is 256 KB. The Neuron Azure Service Bus adapter will throw a runtime exception if a message of 256 KB or larger is sent and will save the message to the failed audit table.
Thanks for your interest and please don’t hesitate to hit me with questions, comments and feedback. If you see something missing, I’d love to hear from you as we are already starting to think about features for v.Next.
I had a ton of fun writing this adapter and would like to thank the Neuron product team for allowing me to make this small contribution to this incredible release.
This adapter is just a small part of this major release and I hope this post has piqued your interest in checking out Neuron ESB. Getting up and running is super simple and you can download the trial bits here: http://products.neudesic.com/
Sunday, February 17, 2013
I’ll be speaking at Visual Studio Live!, March 25-29 in Las Vegas, NV http://bit.ly/VSLPKK14Reg.
There are a ton of great sessions from fellow friends, MVPs and Neudesic colleagues so this is sure to be a great show.
I’ll be presenting the following sessions:
SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Just be sure to use this link to register and use code VLSPK14: http://bit.ly/VSLPKK14Reg
Hope to see you there!
Thursday, January 31, 2013
I recently received news from Lisa Feigenbaum (@lisafeig), Program Manager at Microsoft, that I have been selected as the MVP of the Year for Microsoft Integration.
Needless to say, I'm incredibly honored and humbled to share this recognition with other fellow MVPs that were awarded within their own competency.
As a six-time Microsoft MVP (Connected Systems and most recently Microsoft Integration) I can tell you that being recognized by Microsoft for doing what I love never, ever gets old. There are people in my competency and MVP alumni who throughout the last decade taught me to treat what I do as a craft, motivated me to become a stronger technologist and taught me the value of community: not so much what it can do for me, but what I can do to give back and contribute, and be so much better for it (you know who you are).
So, needless to say, to be honored by fellow MVPs whom I so respect and from whom I have learned and will continue to learn so much: well, I am speechless, so all I can say is THANK YOU!!
Regrettably, for the first time in six years, the summit conflicts with my 10th wedding anniversary, which I will be spending with my wonderful wife in Spain. We will definitely raise a glass of vino tinto for all MVPs and wish everyone a great summit!!
Monday, December 10, 2012
I had the honor of being interviewed by fellow MVP Magnus Martensson (@noopman) for his Magnanimous Software Podcast (love that name).
Other than the dubious task of following really smart guys like Glenn Block and Mads Torgersen in this new series, we had a good chat about Neuron ESB, Azure Service Bus, BizTalk Server 2013, my book and other topics. In addition, Magnus managed to uncover some little-known tidbits about my past.
The interview was a lot of fun and is now available here for your listening pleasure: http://msnetcast.com/0003/rick-garibay-wcf-biztalk-servicebus-book
Links from the show:
Friday, November 16, 2012
I had a total blast delivering my session on WebSockets on Windows Azure at AzureConf on Wednesday and am pleased to share that the recording of my session is now up.
You can view it by clicking below or follow this link to view on Channel 9 along with a number of other fantastic sessions that highlight how MVPs are using Windows Azure today.
Thanks again to Brady, Cory and team for an amazing event!
Thursday, November 08, 2012
I was working on some demos today, moving from a VM to my core host OS, and when I tried to open the solution in VS 2012 that contains a handful of ASP.NET and WCF Service application projects, VS refused to do so, reporting the following error:
The URL X for Web project Y is configured to use IIS as the web server but the URL is currently configured on the IIS Express web server. To open this project, you must edit the 'C:\Users\User\Documents\IISExpress\config\applicationhost.config' file to change the port of the site currently using the URL in IIS Express.
I remembered that I had shut down IIS because I was working on some node.js stuff, so I started all services and still had no luck.
From there, I went into the config file for IISExpress as directed and changed the ports to something other than 80, still to no avail.
The solution was to open the .csproj files, find the UseIIS element and change it to False:
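For reference, in a typical Web Application Project the element lives under ProjectExtensions > VisualStudio > FlavorProperties > WebProjectProperties in the .csproj (the exact location and sibling elements vary by project; this is a sketch, not the full file):

```xml
<ProjectExtensions>
  <VisualStudio>
    <FlavorProperties GUID="...">
      <WebProjectProperties>
        <!-- Was True; setting it to False allowed the solution to load -->
        <UseIIS>False</UseIIS>
      </WebProjectProperties>
    </FlavorProperties>
  </VisualStudio>
</ProjectExtensions>
```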
Of course, you’ll need to reset this though the project properties based on whether you want to use IISExpress or IIS to host your application.
I’m not sure if this is a bug or an environmental issue, but regardless, I can’t imagine why the presence of a web server or lack thereof should ever be a reason to stop a solution from loading projects, ever. If I have time, I may log this to Connect, and if anyone has any insights, please let me know.
Until then, hope this helps should you run into the same issue.
Wednesday, November 07, 2012
I am flattered to share that I’ve been invited to speak at AzureConf one week from today at Channel 9 Studios in Redmond on Wed. 11/14.
AzureConf is a premier live streamed event delivered for the community by the community from Channel 9 Studios on the Microsoft Campus in Redmond, WA.
Brady Gaster and Corey Fowler have been hard at work for several weeks organizing content and logistics, and I can tell you that I am both excited and humbled by the speaker line-up and content.
The event will kick off with a keynote presentation by Scott Guthrie, along with numerous sessions executed by Windows Azure community members including my friend, colleague and fellow (Azure) MVP Michael Collier and esteemed MVPs and Insiders flying in from all over the country/world to join Scott and team at the Channel 9 studios including Magnus Martensson and Eric Boyd, just to name a few.
Streamed live for an online audience on Channel 9, the event will allow you to see how customers, partners and MVPs are making the most of their skills to develop a variety of innovative applications on Windows Azure. The goal of the conference is to be just as valuable to seasoned Azure developers and architects as to those just learning the tremendous power of this exciting platform.
You can learn more about AzureConf by visiting http://www.windowsazureconf.net/. Please be sure to register as capacity for the live streamed event will be limited (however sessions will be available for playback following the conference).
Thank you for your interest and please help spread the word!
Wednesday, October 31, 2012
I’m thrilled to share that I will be presenting with Phoenix MVPs Dan Wahlin and Joe Guadagno at the upcoming Windows 8 Unleashed event in Tempe to celebrate the release of Windows 8 and help jump start the local community to start building Windows 8 apps.
The event is 100% FREE and you are welcome whether you are an experienced .NET developer, or even if you are just getting started.
The detailed agenda is below (I’ll be delivering Session II), but in a nutshell, we’ll spend the morning in lecture and hands-on labs. Halfway through, you’ll get a chance to take what you learned in the morning and compete in a hackathon with a chance to win some great prizes!
Space is limited, so be sure to register right away if you haven’t already done so: http://tempewin8unleashed.eventbrite.com/#
Required Software and Downloads
This is a BYOL event, so please make sure to bring your laptop and get the required software and downloads BEFORE coming to the event. You can find everything you need on the “Windows 8 Unleashed Installs” page.
Session I – Overview
- Metro Style Apps Overview
- Developing Metro Style Apps
- Working with Controls
Session II – Location and Data
- Working with Location
Session III – Metro Principles
- Live Tiles & Toast
- Full, Fill, Snap and Portrait Views
Build an App and Win Money and Prizes!
Our sponsors have stepped up with some awesome prizes. Not only do you get a ton of information for FREE, but you have an opportunity to win some awesome stuff! You can’t afford to miss this!
DAY-OF prizes will be awarded (sponsored by PDSA):
- 1st Place: $500 Gift Card
- 2nd Place: $200 Gift Card
- 3rd Place: $100 Gift Card
“Best Overall App” prizes will be awarded to the best app from ALL Unleashed events:
- $10,000 from Component Art
- Windows 8 Tablet from PDSA, Inc.
In addition, all attendees will receive a FREE eBook from O’Reilly.
Hope to see you all there!
Wednesday, October 17, 2012
I was recently interviewed on SSWUG TV to talk about my new book, Windows Server AppFabric Cookbook.
SSWUG.ORG is one of the largest community-driven sites focused on enterprise technology; more than 570,000 members regularly visit the site for answers to their IT-related questions and to access the information they need to grow as professionals.
As I mention in the interview, everyone who watches the video is eligible for a 22% discount on both the print and e-copy versions, valid through the end of October.
You can watch the video and learn more about the SQL Server Worldwide User Group here: http://www.sswug.org/
Thanks to Stephen and Josh for the opportunity to catch up and talk about the book!
Tuesday, October 09, 2012
My friend, and fellow Microsoft Integration MVP Steef-Jan Wiggers has been kind enough to organize a giveaway for my new book Windows Server AppFabric Cookbook.
You can learn more details about the contest on his blog: http://soa-thoughts.blogspot.nl/2012/10/microsoft-windows-server-appfabric.html
In addition, I’m happy to announce that Neudesic has teamed up with Packt Publishing to give away one e-book each week from 10/22 – 11/12 for a total of 3 winners. Be sure to follow Neudesic on twitter and stay tuned for rules and regulations beginning on Monday, October 15th.
As I shared in my thoughts when I announced the first run of the book, our goal with this book was not to promote the latest shiny object but instead to distill nearly two years of experience into a concise reference that could benefit organizations and the community tasked with building lightweight integration solutions on-premise today. As such, my hope is that what you will find is a trusty field guide rich with simple-to-follow guidance for enhancing your career as a WCF and WF developer, architect or administrator.
I’d like to thank Steef-Jan and the community for the overwhelmingly positive response to the book so far. Hammad and I worked very hard for nearly a year on this project and it is incredibly rewarding to see praise from world renowned thought leaders in the integration space like Steef-Jan, Richard Seroter and others who have been very kind in validating our efforts.
You can learn more about the book on Packt’s website or Amazon.com.
Friday, October 05, 2012
I had the pleasure of being invited to keynote the Visual Studio 2012 Launch at Microsoft Tempe on Monday and would like to thank Microsoft for having me and everyone for attending.
I’ve been really impressed with the VS 2012 bits since working with the previews and it was a great privilege to showcase all the amazing features in the IDE and new version of TFS.
In my role at Neudesic, I spend a lot of time talking to customers about their challenges and how we can help make them more productive. The reality is that there is a tremendous shift happening in the way that IT operates and is consumed today. Compared to just 5 years ago, organizations that want to succeed must leverage technology in a manner that allows them to both grow and innovate as opposed to merely running their legacy assets. The emergence of devices, and corresponding customer expectations to be productive anywhere, on any device has forced new, modern trends and work styles that IT must contend with or they will simply be left out.
Often, I talk to customers about this through an integration lens, but it applies just as equally to Application Lifecycle Management (ALM). I strongly believe that integration is a core strategic enabler: it unlocks an organization’s potential to maximize its investments in Systems of Record, composing existing assets with new capabilities to grow the business in ways that differentiate it from the competition while driving innovation. None of this, however, is possible without the right people, technology, process and tools.
I believe that one of Microsoft’s biggest value propositions is that it provides a single, unified platform for delivering capabilities both on-premise and in the cloud. This single platform approach extends from devices; including desktops, laptops, tablets and smart phones to always available compute resources in the cloud or on-premise. This is compelling for many reasons, not the least of which is the developer economics of working within a platform that embraces both native and web development styles providing a first class tooling experience for each without compromise.
I’ve worked with Visual Studio and TFS since the 2005 release, and while the product has matured and grown significantly with each subsequent release, Visual Studio 2012 and TFS 2012 are really a coming of age. The user experience is exceptionally polished, and while I will admit that I was very vocal on some of the changes from the 2010 aesthetic, the final product is fast, efficient and fluid.
Some of my favorite new features in the new release include local workspaces. With local workspaces (now the default), we can ensure that we don’t lose productivity (or our precious code) when on the go. Detecting changes to files under source control and adding new files, whether within the IDE or using another tool, just works.
And Team Explorer just flies. Asynchrony is no longer an esoteric dark art but a requirement for delivering fast, responsive user experiences, and Team Explorer in VS 2012 is a great example of using lazy loading and asynchronous messaging to enable a great user experience.
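The pattern behind that responsiveness is worth sketching. Below is a minimal, hypothetical illustration (in Python, since the post itself contains no code) of the lazy-load-plus-async idea: the window renders immediately with placeholders, and each section fills in as its data arrives rather than blocking the UI. The section names and delays are invented stand-ins, not anything from the actual Team Explorer implementation.

```python
import asyncio

async def load_section(name: str, delay: float) -> str:
    """Stand-in for a server round trip that fetches one section's data."""
    await asyncio.sleep(delay)
    return f"{name}: loaded"

async def show_team_explorer() -> None:
    # Kick off every section concurrently; none of them blocks the "UI thread".
    sections = [("Pending Changes", 0.01), ("Work Items", 0.02)]
    tasks = [asyncio.create_task(load_section(n, d)) for n, d in sections]

    # The window paints right away, before any data has arrived.
    print("window rendered (placeholders shown)")

    # Each section updates independently, in whatever order it finishes.
    for finished in asyncio.as_completed(tasks):
        print(await finished)

asyncio.run(show_team_explorer())
```

The key design point is that slow operations are started eagerly but awaited lazily, so perceived latency is the time to first paint, not the sum of every fetch.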
I also love the new My Work feature. Distractions are an unfortunate part of every developer’s day, and it seems that just when you are in the zone with all of the right files open and breakpoints set, getting close to completing that feature or fixing that defect, you get that emergency call that takes you away from your work. With My Work, you can simply suspend your current working session and Visual Studio will literally dehydrate all of your context, including files, breakpoints, etc., until you return, leaving your IDE clean and ready for the next fire. When you are done saving the world, you can simply resume your work and the IDE is rehydrated just as you left it. It really is amazing.
The TFS web client is also beautifully redone. All of the metrics that your team values can be added to your team/area’s dashboard as a favorite. While I’ve always been an IDE zealot as far as working with work items, queries, etc., the management experience in the new web client is exceptional. I can prioritize stories, do release planning on the fly, and get a snapshot of how utilized or under-allocated my team is at a glance, or by reviewing the new Taskboard feature, which provides a view into the state of work by team member or for the team as a whole.
As the pace at which the business expects IT to deliver value accelerates, particularly within systems of innovation like web sites and web apps, the tolerance for misunderstood requirements has shrunk significantly. To quote the old adage, “You’ve built exactly what I asked for, but it isn’t at all what I want”: it is very easy to misrepresent and misunderstand requirements, particularly within the presentation layer. Many design technologists minimize this risk upfront by introducing wireframes and storyboards. VS 2012 brings some entry-level storyboarding tools into developers’ hands by integrating with the most popular modeling tool out there: PowerPoint. Supporting mobile, native and web styles, it’s now easier to put together mockups to gain feedback earlier in the development lifecycle, when changes are least expensive.
Another very cool feature is the Feedback Client/Manager. Anyone who has ever worked with me on a project knows that I am a big believer in the importance of feedback, early and often. Whether that means ensuring that _everyone_ knows the status of the build (and owns the outcome) or has visibility into the backlog, current velocity and burndown, identifying meaningful information radiators (Alistair Cockburn’s term) is essential. As I always remind folks: if the first time the customer has seen your feature is in the demo meeting, you suck. Just as you wouldn’t pay a home builder to build your dream house and accept that you can’t visit as the foundation is laid, the frame goes up, and the windows and drywall go in, software is no different. By using the new Feedback Manager, you can invite your stakeholder/product owner/manager to review and explore your feature via an automated email that maintains traceability to your work item/story. From there, the stakeholder can walk through the feature, capture feedback via voice, images, video or text, and punt that actionable feedback right back to you. Of course, I would never suggest this as an alternative to having a conversation, but as teams become increasingly distributed, this is a great capability for ensuring you are getting feedback early and often.
Lastly, as a TDD zealot, I am really digging the improved support for unit testing. Everything from an improved test runner that reports execution time to built-in support for fakes and stubs makes test-first development much more tenable without resorting to third-party tools or frameworks (not necessarily a bad thing, but you never know when an open source project might be abandoned, leaving you and your team in the lurch).
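For readers newer to test doubles, the idea behind stubs is simple and language-agnostic. Here is a minimal hand-rolled sketch (in Python for brevity; the names `ExchangeRateService` and `convert` are invented for illustration, not part of any Microsoft framework): the unit under test is exercised against a canned stand-in so it runs in isolation, with no network or database.

```python
class ExchangeRateService:
    """Production dependency: would normally make a remote call."""
    def rate(self, currency: str) -> float:
        raise NotImplementedError("network call elided in this sketch")

class StubExchangeRateService(ExchangeRateService):
    """Test double: returns canned rates, so tests are fast and deterministic."""
    def __init__(self, rates: dict[str, float]):
        self._rates = rates

    def rate(self, currency: str) -> float:
        return self._rates[currency]

def convert(amount: float, currency: str, service: ExchangeRateService) -> float:
    """Unit under test: the logic we want to verify in isolation."""
    return amount * service.rate(currency)

# Test-first style: drive the logic with a stub instead of the real service.
stub = StubExchangeRateService({"EUR": 0.8})
assert convert(100, "EUR", stub) == 80.0
```

Tooling like the fakes support in VS 2012 automates generating this kind of stand-in from an interface, but the underlying pattern is exactly this: substitute the dependency, assert on the logic.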
Wow. Ok, that’s a ton of stuff and I haven’t even scratched the surface, so I’ll leave you with the slides I shared and a link to download and evaluate Visual Studio 2012 and TFS today: http://www.microsoft.com/visualstudio/eng/downloads