IT as a Service Part 3: Service Enablement and Delivery

This is the third post in a series about ITaaS. The posts walk IT admins through the steps of enabling true ITaaS, starting from the last step. This post will cover the importance of service enablement and service delivery.

By Adam Stockley, Pre-sales Consultant, RES Software

As I explained in my previous post, when describing the configuration of services within the catalog, simply installing an application is not necessarily enough for the full enablement of a service. And what about services that don’t actually require an installation at all in order to work?

Service Enablement

Technology that is deployed to enable services needs careful consideration, but above all, it should be capable of enabling the broadest possible range of services. There are a whole host of factors to weigh when selecting a technology (or technologies) to enable a service for a user. After all, this step is key to ensuring that the service is actually delivered according to end users’ requirements.

It’s worth taking a moment to consider one particular point: every IT department delivers IT services to its users every hour of every day – that’s their job. You may have a mix of manual and automated tools and processes that result in the end user receiving the services needed to do their job. Perhaps you have invested in software deployment technology to automate the delivery of applications to users. Or maybe you have many scripts, written in-house, which perform mundane day-to-day configuration and provisioning tasks. Perhaps you still take on largely manual “heavy lifting,” possibly even desk-side visits, to make sure users get the services they need.

In order to successfully deliver a great service catalog-driven user experience, you need to automate the processes behind the enablement of services. It’s not much of a positive experience if John in sales requests access to a service but has to wait a couple of days because Fred in IT has a backlog in his job queue. This does little to improve the current situation in many businesses, aside from replacing “John calls the helpdesk and requests the service” with “John requests the service from his catalog.” If you want to deliver a consumer-like experience to your users, then you MUST automate the processes that enable those services.

Not to labor the point, but service enablement is about much more than mere application installation. File and print services, directory services, mail services, helpdesk services, and, dare I say it, application access all require multi-step processes to ensure that the service is enabled in such a way that it is fully usable at first access by the user. Permissions might need to be altered, security group memberships amended, user properties updated, files copied, registry tweaks applied, firewalls configured; the list goes on. In fact, application access might not even need an installation. Consider Citrix XenApp published applications, App-V published applications, even web applications. In many cases, the process to enable the application for a user might be something as simple as adding them to a security group in your directory. Of course, there is absolutely a place for end-point management where applications or software need to be installed, but on its own it is by no means a complete solution.
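To make that concrete, here is a minimal sketch of what “enablement” can look like for a single service when no installation is involved. The step functions are placeholders, and the group, share, and registry names are purely illustrative; in a real environment each step would call your directory, file server, or configuration tooling.

```python
# Minimal sketch: enabling a (hypothetical) reporting service is several small
# steps, none of which is an application install. All names are illustrative.

def add_to_security_group(user: str, group: str) -> None:
    # Placeholder: in practice, a directory call adding `user` to `group`.
    print(f"Adding {user} to security group {group}")

def grant_share_permission(user: str, share: str) -> None:
    # Placeholder: amend the ACL on the file share for this user.
    print(f"Granting {user} access to {share}")

def apply_registry_setting(user: str, key: str, value: str) -> None:
    # Placeholder: push a per-user registry value via your configuration tool.
    print(f"Setting {key} = {value} for {user}")

def enable_reporting_service(user: str) -> None:
    """Enable the service end to end so it is usable at first access."""
    add_to_security_group(user, "APP-Reporting-Users")
    grant_share_permission(user, r"\\files\reporting-templates")
    apply_registry_setting(user, r"Software\Example\Reporting\ServerUrl",
                           "https://reporting.internal.example")

if __name__ == "__main__":
    enable_reporting_service("jsmith")
```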

So how do we go about this service enablement? Well, the first consideration is that whatever tool you use to build your processes, it must be able to accept input in the form of parameters from your service catalog – information such as user names and other user- or device-specific details must be capable of being passed from the catalog service to the enablement mechanism. This is a given. You should also, where possible, select a tool that can build workflows for enabling services and that can integrate with your existing automation tools and scripts. There’s no point in re-inventing the wheel, and if you have an acceptable automated process in place then use it! Once again, though, the tool should be capable of injecting user- and device-specific information into your existing processes, because without this, integration is difficult, if not impossible.
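As a rough illustration of that requirement, the sketch below shows an enablement entry point that takes user- and device-specific values from a catalog request and passes them into an existing in-house script rather than replacing it. The payload fields and the script path are assumptions for the example, not any particular product’s interface.

```python
import json
import subprocess

def handle_catalog_request(payload: str) -> int:
    """Accept a service-catalog request (JSON here, for illustration) and feed
    its user- and device-specific values into the automation you already have."""
    request = json.loads(payload)
    user = request["user"]           # e.g. "jsmith"
    device = request["device"]       # e.g. "LDN-LT-0042"
    service = request["service_id"]  # e.g. "color-printer-nyc"

    # Reuse the existing script rather than re-inventing the wheel; the catalog
    # values are injected as plain command-line parameters.
    result = subprocess.run(
        ["./scripts/enable_service.sh", service, user, device],
        capture_output=True, text=True,
    )
    return result.returncode
```

The important property is the direction of flow: the catalog supplies the parameters, and the automation you already trust consumes them unchanged.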

This might seem like a lot of work, and indeed, the service enablement step towards ITaaS is possibly the most complex and involved part of the transformation. Try to use what you already have and fill in the gaps, step-by-step, until you reach the goal of fully automated service enablement.

In our path through the ITaaS model, we now have chargeback enabled, users receiving a service catalog that includes both mandatory and optional services, and fully automated enablement of those services. What we now need to consider is how the user actually interacts with a service: how is it “delivered” to them?

Service Delivery

Many businesses today provide a host of different ways in which their users can access and interact with IT services. Different desktop delivery platforms suit some use cases better than others; consider the widespread use of Citrix tools as remote access or branch access solutions. In fact, it is commonplace for users to move between different desktop delivery platforms throughout the course of the day to access different services in different situations. This can lead to a confused and inconsistent user experience, not to mention huge complexity in managing this kind of multi-desktop, possibly multi-operating system, environment.

Services may be “context” specific in that users should only receive them in certain circumstances, even if they are granted access through the service catalog (for example, a user might be granted access to a color printer in the New York office, but they probably would not need access to that printer when working in London). Security requirements might dictate that a user can only access a particular application or file share from a certain location, or at a specific time of day. I’ve often spoken to IT teams who’ve told me “we want the user to have the same experience everywhere.” But when quizzed, most of them recognize that this is not quite what they mean. What they actually want is an experience that is predictable and relevant to the user depending on their circumstances.
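Sticking with the printer example, a context check can be as small as the sketch below, where the catalog grants the entitlement and the current location decides whether it is applied right now. The service identifier and office names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    user: str
    location: str          # e.g. "New York" or "London"
    entitlements: set      # services granted through the service catalog

def should_map_nyc_printer(ctx: UserContext) -> bool:
    # Entitled via the catalog AND currently in the office where the printer lives.
    return "color-printer-nyc" in ctx.entitlements and ctx.location == "New York"

print(should_map_nyc_printer(UserContext("jsmith", "New York", {"color-printer-nyc"})))  # True
print(should_map_nyc_printer(UserContext("jsmith", "London",   {"color-printer-nyc"})))  # False
```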

The “traditional” approach to managing the user environment has been to use roaming profiles, folder redirection, logon scripts, GPOs, GPPs and so on. The problem with these techniques is that they are typically OS-specific, and they are clunky when used to take user context into account. Worse, even where they can take context into account, they only execute at logon and do not respond to changes in user context (for example, if a user moves from one network to another, or hibernates their laptop, goes home, then resumes). You may very well want to deliver different services as that context changes.

There is one “service” that I have yet to mention, and that is personalization – the bane of many an IT admin’s life. I’ve seen personalization described as “those awkward bits which users demand but IT have no idea how to deliver.” The classic example is wallpaper. A user wants a photo of their kids sitting on their desktop. Nice. But the business is hardly going to fall over if they cannot do it. In fact, a common approach is to ditch any attempt to allow personalization and instead to mandate every element of the user experience. The personalization debate is one that can rage on, but in my experience, happy users lead to a successful project, and there are genuine reasons why personalization should be allowed (consider accessibility requirements for users with a disability, for example). Roaming profiles can solve this to an extent, but the technology is fraught with difficulties: logon times can balloon as profiles grow, and profiles are not compatible across operating systems. Roaming profiles also do not lend themselves to today’s “work anywhere” way of thinking.

So you need a technology that can take into account the context of the user and deliver the services that are appropriate. It should also behave consistently and allow user personalization across multiple operating systems and desktop delivery platforms. This is the realm of User Environment Management (or User Virtualization, depending on who you speak to). Services should be delivered according to a rule-set based upon user context and security requirements. It also helps if the tool recognizes service subscription status from your service catalog as a basis for granting access, because in the long run this will simplify deployment and significantly reduce the time to deploy services – both existing and new.
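As a very rough sketch of the idea (not any particular product’s rule engine), each rule below combines catalog subscription with context conditions, and the whole set can be re-evaluated whenever context changes rather than only at logon. The services, network names, and office hours are assumptions for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Context:
    user: str
    location: str
    network: str
    hour: int
    subscriptions: set = field(default_factory=set)  # from the service catalog

@dataclass
class Rule:
    service: str
    condition: Callable[[Context], bool]

RULES = [
    # Finance share: subscribed, on the corporate network, during office hours.
    Rule("finance-share",
         lambda c: "finance-share" in c.subscriptions
                   and c.network == "corp" and 8 <= c.hour < 19),
    # NYC color printer: subscribed and currently in the New York office.
    Rule("color-printer-nyc",
         lambda c: "color-printer-nyc" in c.subscriptions and c.location == "New York"),
]

def evaluate(ctx: Context) -> List[str]:
    """Return the services to deliver now; call again on every context change."""
    return [rule.service for rule in RULES if rule.condition(ctx)]

ctx = Context("jsmith", "New York", "corp", 10,
              subscriptions={"finance-share", "color-printer-nyc"})
print(evaluate(ctx))    # both services apply
ctx.network = "home"    # context change: the user leaves the corporate network
print(evaluate(ctx))    # the finance share is withheld, the printer remains
```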
There are many players in this space today; you could even argue that the market is well on its way to becoming commoditized. It is important to choose wisely, though, and to select a technology that can deliver all the services you need to handle, you must have an in-depth understanding of the services your users consume today.

For that, a discovery step is critical. Check back for my next post, which will cover the importance of discovery and how it fits into ITaaS.
