ITIL Intermediate CSI - CSI Methods and Techniques Tutorial

CSI Methods and Techniques

Welcome to lesson 4 ‘Continual Service Improvement Methods and Techniques’ which is a part of the ITIL Intermediate CSI Certification Course. This chapter deals with details about the Methods and Techniques in continual service improvement, covering the managerial and supervisory aspects.

Let us begin with the objectives of this lesson.

Objectives

By the end of this ‘Continual Service Improvement Methods and Techniques’ lesson, you will be able to understand:

  • When to use assessments, what to assess and how a gap analysis can provide insight into the areas that have room for improvement

  • How to use benchmarking, service measurement, metrics, service reporting, including balanced scorecard and SWOT, to support CSI

  • How to create a return on investment, establish a business case and measure the benefits achieved

  • How techniques within availability management, capacity management, IT service continuity management and problem management can be used by CSI

Let us go ahead and learn more about Assessments.


Assessments

Assessments are the formal mechanisms for comparing the operational process environment to the performance standards for the purpose of measuring improved process capability and/or to identify potential shortcomings that could be addressed.

The advantage of assessments is they provide an approach to sample particular elements of a process or the process organization which impact the efficiency and the effectiveness of the process.

The performance standard depends on the operational environment, for example:

  • IT service management (ITSM) – ISO/IEC 20000

  • Quality management – ISO 9000

  • Security management – ISO/IEC 27000

Just by conducting a formal assessment, an organization demonstrates a significant level of commitment to improvement. Assessments involve real costs: staff time and management promotion. Organizations need to be more than just involved in an assessment; they need to be committed to improvement.

Let us now move on to our next section which explains the situations when assessment should take place.

When to assess

Assessments can be conducted at any time. A way to think about assessment timing is in line with the improvement Lifecycle:

  • Plan (project initiation) – Assess the targeted processes at the inception of process introduction to form the basis for a process improvement project. Processes can be of many configurations and design which increases the complexity of assessment data collection.

  • Plan (project midstream) – A check during process implementation or improvement activities serves as validation that process project objectives are being met and, most importantly, provide tangible evidence that benefits are being achieved from the investment of time, talent and resources to process initiatives.

  • Do/Check (a process in place) – Upon the conclusion of a process project, it is important to validate the maturation of the process and the process organization through the efforts of the project team. In addition to serving as a decisive conclusion for a project, scheduling periodic reassessments can support overall organizational integration and quality efforts.

Let us now move on to our next section.

What to assess

In the last section we learned when to conduct assessments. This section explains what to assess. The assessment’s scope is one of the key decisions.

The scope should be based on the assessment’s objective and the expected future use of process assessments and assessment reports. Assessments can be targeted broadly at those processes currently implemented or focused specifically where known problems exist within the current process environment.

There are three potential scope levels discussed as follows:

  • Process only – Assessment only of process attributes based on the general principles and guidelines of the process framework which defines the subject process.

  • People, process, and technology – Extend the process assessment to include assessment of the skills, roles, and talents of the managers and practitioners of the process as well as the ability of the process-enabling technology deployed to support the objectives and transaction state of the process.

  • Full assessment – Extend the people, process, and technology assessment to include:

- an assessment of the culture of acceptance within the organization

- the ability of the organization to articulate a process strategy

- the definition of a vision for the process environment as an ‘end state’

- the structure and function of the process organization

- the ability of process governance to ensure that process objectives and goals are met

- the business/IT alignment via a process framework, the effectiveness of process reporting/metrics

- the capability and capacity of decision-making practices to improve processes over time

Let us now move on to our next section which explains How to assess.

How to assess

The assessment can be conducted in the following manner:

Using external resources for assessments

Assessments can be conducted by the sponsoring organization or with the aid of a third party. The advantages of conducting a self-assessment are the reduced cost and the intellectual lift associated with learning how to objectively gauge the relative performance and progress of an organization’s processes.

Of course, the downside is the difficulty associated with remaining objective and impartial during the assessment. The pitfall of a lack of objectivity can be eliminated by using a third party to conduct the assessment.

There are a number of public ‘mini-assessments’ that are available on various websites which provide a general perspective of maturity. However, a more detailed assessment and resulting report can be contracted through a firm specializing in an assessment practice.

Balancing against the obvious increased cost of a third-party assessment is the objectivity and experience of an organization that performs assessments on a regular basis.

Performing self-assessments

Conducting internal assessments can be beneficial compared with external ones.

The advantages of internal assessments are:

- No expensive consultants

- Promotes internal cooperation and communication

However, internal assessments also have disadvantages:

- Lack of objectivity, low acceptance of findings, internal politics, and limited knowledge or skills.

Whether conducted internally or externally, the assessment should be reported using the levels of the maturity model. A best-practice reporting method is to communicate assessment results in a graphical fashion.

Graphs are an excellent tool as they can fulfill multiple communication objectives. For instance, graphs can reflect changes or trends of process maturity over time or reflect the comparison of the current assessment to standards or norms.
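As a minimal sketch of such graphical reporting, the assessment results can be tabulated against a target maturity level. The process names, scores, target and the 1–5 scale below are all hypothetical illustrations, not prescribed values:

```python
# Illustrative sketch of reporting maturity assessment results against a
# target level. Process names, scores and the 1-5 scale are hypothetical.

assessment = {
    "Incident Management":      3.5,
    "Change Management":        2.8,
    "Service Level Management": 1.9,
}
target = 3.0  # hypothetical norm to compare the current assessment against

def maturity_report(scores, target):
    """Return (process, score, gap-to-target) rows, largest gap first."""
    rows = [(name, score, round(target - score, 1))
            for name, score in scores.items()]
    return sorted(rows, key=lambda r: r[2], reverse=True)

for name, score, gap in maturity_report(assessment, target):
    print(f"{name:<26} {score:>4}  gap {gap:+.1f}")
```

The same rows could feed a bar or radar chart to show trends of process maturity over time, as the text describes.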

Let us now move on to our next section which explains the Gap analysis.

Gap analysis

Gap analysis is a business assessment tool enabling an organization to compare where it is currently and where it wants to go in the future. This provides the organization with insight into areas which have room for improvement.

This can be used to determine the gap between ‘What do we want?’ and ‘What do we need?’ for example. The process involves determining, documenting and approving the variance between business requirements and current capabilities.

Gap analysis naturally flows from benchmarking or other assessments such as service or process maturity assessments. Once the general expectation of performance is understood then it is possible to compare that expectation with the level of performance at which the company currently functions. This comparison becomes the gap analysis.

Such analysis can be performed at the strategic, tactical or operational level of an organization. Gap analysis can be conducted from different perspectives such as:

  • Organization (e.g. Human resources)

  • Business direction

  • Business processes

  • Information technology.
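The variance between business requirements and current capabilities described above can be sketched as a simple comparison. The perspectives and the 0–100 capability scores here are hypothetical illustrations:

```python
# A minimal gap-analysis sketch: compare current capability with the
# required (future) capability per perspective. Scores are hypothetical.

current  = {"Organization": 60, "Business processes": 45, "Information technology": 70}
required = {"Organization": 75, "Business processes": 80, "Information technology": 75}

def gap_analysis(current, required):
    """Variance between business requirements and current capabilities."""
    return {area: required[area] - current[area] for area in required}

gaps = gap_analysis(current, required)
# The area with the largest positive gap has the most room for improvement.
priority = max(gaps, key=gaps.get)
```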

Let us now move on to our next section which explains Benchmarking.

Benchmarking

Benchmarking is a process used in management, particularly strategic management, in which organizations evaluate various aspects of their processes in relation to best practice, usually within their own sector.

This then allows organizations to develop plans on how to adopt such best practice, usually with the aim of increasing some aspect of performance. Benchmarking may be a one-time occurrence event but is often treated as a continuous process in which organizations continually seek to challenge their practices. Let us now move on to our next section which explains Benchmarking procedure.

Benchmarking procedure

The first step is to identify your problem areas. Because benchmarking can be applied to any business process or function, a range of research techniques may be required. They include:

  • Informal conversations with customers, employees, or suppliers

  • Focus groups

  • In-depth marketing research

  • Quantitative research

  • Surveys

  • Questionnaires

  • Re-engineering analysis

  • Process mapping

  • Quality control variance reports

  • Financial ratio analysis.

Let us now move on to our next section which explains Benchmarking costs.

Benchmarking costs

Benchmarking is a moderately expensive process, but most organizations find that it more than pays for itself. The three main types of costs are:

  • Visit costs – This includes travel- and accommodation-related expenses for team members who need to travel to the site.

  • Time costs – Members of the benchmarking team will be investing time in researching problems, finding exceptional companies to study, visits and implementation. This will take them away from their regular tasks for part of each day so additional staff might be required.

  • Benchmarking database costs – Organizations that institutionalize benchmarking into their daily procedures find it is useful to create and maintain a database of best practices and the companies associated with each best practice.

Let us now move on to our next section which explains the value of benchmarking.

Value of Benchmarking

Organizations have a growing need to get a clear view of their own qualities and performances with regard to their competitors and in the eye of their customers. It isn’t sufficient anymore to have self-assessment reports on the status of the IT performance; it is important to test and compare it with the view the market has on the performance of the organization.

A positive result of this test and comparison can give a competitive edge to the organization in the market and gives trust to its customers. The results of benchmarking and self-assessments lead to the identification of gaps in terms of people, process, and technology.

A benchmark can be the catalyst for initiating prioritization of where to begin formal process improvement. The results of benchmarking must clearly display the gaps, identify the risks of not closing the gaps, facilitate prioritization of development activities and facilitate communication of this information.

To summarize, a benchmark is a basis for:

  • Profiling quality in the market

  • Boosting self-confidence and pride in employees as well as motivating and tying employees in an organization. This is relevant to today’s staff shortages in the IT industry – IT personnel want to work in a highly efficient, cutting-edge environment

  • Trust from customers that the organization is a good IT service management provider.

Let us now move on to our next section which explains benchmarking as a lever for change.

Benchmarking as a lever for change

Benchmarking as a lever for change means that benchmarking is sometimes the only way to open an organization to new methods, new ideas and new tools to improve its effectiveness.

It helps crack through resistance to change by demonstrating other methods of solving problems than the one currently employed, and demonstrating that they are irrefutably better because they are being used successfully by others.

Let us now move on to our next section which explains Benchmarking as a steering instrument.

Benchmarking as a steering instrument

Benchmarking is an ongoing method of measuring and improving products, services, and practices against the best that can be identified in any industry anywhere. It has been defined as ‘the search for industry best practices which lead to superior performance.’

Let us now move on to our next section which explains Benchmarking categories.

Benchmarking categories

Benchmarking is a great tool to identify improvement opportunities as well as to verify the outcome of improvement activities. Organizations can conduct internal or external benchmark studies. Improving service management can be as simple as: ‘Are we better today than we were yesterday?’

These incremental improvements can be measured using the following types of benchmark:

  • An internal benchmark is where an organization sets a baseline at a certain point in time for the same system or department and measures how it is doing today compared to that baseline. This type of benchmark is often overlooked by organizations (service targets are a form of benchmark)

  • Comparison to industry norms provided by external organizations

  • Direct comparisons with similar organizations

  • Comparison with other systems or departments within the same company.
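The internal benchmark in the first bullet can be sketched as a comparison of today’s measures against a stored baseline. The metric names and figures below are hypothetical illustrations:

```python
# Internal-benchmark sketch: 'Are we better today than we were yesterday?'
# Baseline and current figures are hypothetical monthly measures.

baseline = {"first_contact_resolution_pct": 55.0, "mean_time_to_restore_hrs": 6.2}
current  = {"first_contact_resolution_pct": 63.0, "mean_time_to_restore_hrs": 4.9}

# For the first metric higher is better; for the second, lower is better.
higher_is_better = {"first_contact_resolution_pct": True,
                    "mean_time_to_restore_hrs": False}

def improved(baseline, current, higher_is_better):
    """Per-metric verdict comparing current performance to the baseline."""
    verdict = {}
    for metric, base in baseline.items():
        now = current[metric]
        verdict[metric] = (now > base) if higher_is_better[metric] else (now < base)
    return verdict

results = improved(baseline, current, higher_is_better)
```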

Let us now move on to our next section which explains Benchmarking benefits.

Benchmarking benefits

Benchmarking often reveals quick wins – opportunities for improvement that are relatively easy and inexpensive to implement while providing substantial benefits in terms of process effectiveness, cost reduction, or staff synergy. The costs are clearly repaid through the improvements realized when organizations use benchmarking successfully.

Let us now move on to our next section which explains what to benchmark.

What to benchmark?

To determine what to benchmark a method must be adopted. Differences in benchmarks between organizations are normal. All organizations and service-provider infrastructures are unique, and most are continually changing.

There are also intangible but influential factors that cannot be measured, such as growth, goodwill, image, and culture. Direct comparison with similar organizations is most effective if there is a sufficiently large group of organizations with similar characteristics.

It is important to understand the size and nature of the business area, including the geographical distribution and the extent to which the service is used for business or time-critical activities. Comparison with other groups in the same organization normally allows a detailed examination of the features being compared, so that it can be established whether or not the comparison is of like with like.

When benchmarking one or more services or service management processes, the IT organization has to ascertain which of these the organization should focus on first, if all cannot be implemented simultaneously. Determine which services and supporting processes to compare.

Benchmarking of a service management process is used to find out if a process is cost-effective, responsive to the customer’s needs and effective in comparison with other organizations. Some organizations use benchmarking to decide whether they should change their service provider.

Benchmarking comprises four stages:

  • Planning

  • Analysis

  • Action

  • Review

It is essential in planning for service management to start with an assessment or review of the relevant service management processes. The results of this can provide a baseline for future comparison. The seven-step improvement process can be used as a benchmarking method.

Let us now move on to our next section which explains Benchmarking – Who is involved?

Benchmarking - Who is involved?

Within an organization there will be three parties involved in benchmarking:

  • The customer – that is, the business manager responsible for acquiring IT services to meet business objectives. The customer’s interest in benchmarking would be: ‘How can I improve my performance in procuring services and managing service providers, and in supporting the business through IT services?’

  • The user or consumer – that is anyone who uses IT services to support his or her work. The user’s interest in benchmarking would be: ‘How can I improve my performance by exploiting IT?’

  • The internal service provider – providing IT services to users under service level agreements negotiated with and managed by the customer. The provider’s interest in benchmarking would be: ‘How can we improve our performance in the delivery of IT services which meet the requirements of our customers and which are cost-effective and timely?’

There will also be participation from external parties:

  • External service providers – providing IT services to users under contracts and Service Level Agreements negotiated with and managed by the customer

  • Members of the public – are increasingly becoming direct users of IT services

  • Benchmarking partners – that is, other organizations with whom comparisons are made in order to identify the best practices to be adopted for improvements.

Let us now move on to our next section which explains Benchmarking – Comparison with industry norms.

Comparison with industry norms

Comparison with industry norms is discussed below:

Process maturity comparison

Conducting a process maturity assessment is one way to identify service management improvement opportunities. Often when an organization conducts a maturity assessment they want to know how they compare to the other organizations.

Figure 5.4 reflects average maturity scores for over 100 separate organizations that went through a maturity assessment. As you can see, Service Level Management, which is a key process in support of CSI, is at a fairly low maturity level in the organizations used in this example.

The lack of a mature SLM process that provides for identification of new business requirements, monitoring, and reporting of results can make it difficult to identify service improvement opportunities. A prime target for improvements in this example would be first to mature the SLM practice to help achieve measurable targets to improve services going forward.

Total cost of ownership

The total cost of ownership (TCO), developed by Gartner, has become a key measurement of the effectiveness and the efficiency of services. TCO is defined as all the costs involved in the design, introduction, operation, and improvement of services within an organization from its inception until retirement.

Often, TCO is measured only for hardware components; the TCO of an IT service is even more meaningful. CSI needs to take TCO into perspective when looking at service improvement plans. TCO is often used to benchmark specific IT services against other organizations, e.g. managed service providers.
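Following the definition above, TCO is simply the sum of all lifecycle cost categories from inception until retirement. The categories and amounts in this sketch are hypothetical illustrations:

```python
# TCO sketch: sum all lifecycle cost categories for a service from
# inception until retirement. Categories and amounts are hypothetical.

lifecycle_costs = {
    "design":       40_000,
    "introduction": 25_000,
    "operation":    120_000,  # cumulative over the service's life
    "improvement":  15_000,
    "retirement":    5_000,
}

def total_cost_of_ownership(costs):
    """All the costs involved across the service lifecycle."""
    return sum(costs.values())

tco = total_cost_of_ownership(lifecycle_costs)
```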

Let us now move on to our next section which explains Benchmark approach.

Benchmark approach

Benchmarking will establish the extent of an organization’s existing maturity against best practice and will help in understanding how that organization compares with industry norms. Deciding what the KPIs are going to be and then measuring against them will give solid management information for future improvement and targets.

A benchmark exercise would be used as the first stage in this approach. This could be either one or other of:

  • An internal benchmark – completed internally using resources from within the organization to assess the maturity of the service management processes against a reference framework

  • An external benchmark – this would be completed by an external third-party company. Most of these have their own proprietary models for the assessment of service management process maturity.

The results and recommendations contained in the benchmarking review can then be used to identify and rectify areas of weakness within the IT service management processes.

Let us now move on to our next section which explains Service Measurement.

Service Measurement

Benchmarking activities need to be business-aligned. They can be expensive exercises whether undertaken internally or externally and therefore they need to be focused on where they can deliver the most value. For services, there are three basic measurements that most organizations utilize:

  • Availability of the service

  • Reliability of the service

  • Performance of the service
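The three basic measurements above can be sketched with common formulas. The agreed service time, downtime, interruption count and response times below are hypothetical illustrations:

```python
# Sketch of the three basic service measurements (figures hypothetical).

agreed_service_time = 720.0  # hours in the measurement period
downtime = 7.2               # total hours the service was unavailable
interruptions = 3            # number of service breaks in the period

# Availability (%) = (agreed service time - downtime) / agreed service time
availability_pct = (agreed_service_time - downtime) / agreed_service_time * 100

# Reliability: one common indicator is mean time between failures (uptime
# divided by the number of service breaks)
mtbf = (agreed_service_time - downtime) / interruptions

# Performance: e.g. average transaction response time in seconds
response_times = [0.8, 1.1, 0.9, 1.4]
avg_response = sum(response_times) / len(response_times)
```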

Let us now move on to our next section which explains the Design and development of a service measurement framework.


Design and Develop a Service Measurement Framework

One of the first steps in developing a Service Measurement Framework is to understand the business processes and to identify those that are most critical to the delivery of value to the business. The IT goals and objectives must support the business goals and objectives.

There also needs to be a strong link between the operational, tactical and strategic level goals and objectives, otherwise, an organization will find itself measuring and reporting on performance that may not add any value.

Service measurement is not only looking at the past but also the future – what do we need to be able to do and how can we do things better? The output of any Service Measurement Framework should allow individuals to make operational, tactical or strategic decisions.

For a successful Service Measurement Framework, the following critical elements are required. A Framework that is:

  • Integrated into business planning

  • Focused on business and IT goals and objectives

  • Cost-effective

  • Balanced in its approach to what is measured

  • Able to withstand change.

Define roles and responsibilities - Creating a Service Measurement Framework will require the ability to build upon different metrics and measurements. The end result is a view of the way individual component measurements feed the end-to-end service measurement which should be in support of key performance indicators defined for the service.

This will then be the basis for creating a service scorecard and dashboard. The service scorecard will then be used to populate an overall Balanced Scorecard or IT scorecard. Thus it is important to identify:

  • Who defines the measures and targets?

  • Who monitors and measures?

Let us now move on to our next section which explains the Different levels of measurement and reporting.

Different Levels Of Measurement And Reporting

Starting at the bottom, the technology domain areas will be monitoring and reporting on a component basis. This is valuable as each domain area is responsible for ensuring the servers are operating within defined guidelines and objectives. At this level, measurements will be on component availability, reliability and performance.

The output of these measurements will feed into the overall end-to-end service measurement as well as the Capacity and Availability Plans.

These measurements will also feed into any incremental operations improvements and into a more formal CSI initiative.

This will then be the basis for creating a service scorecard and dashboard. The service scorecard will then be used to populate an overall balanced scorecard or IT scorecard.

Let us now move on to our next section which explains the Service measurement model.

Service Measurement Model

As shown in the figure below, there are multiple levels that need to be considered when developing a Service Measurement Framework.

What gets reported at each level is dependent on the measures that are selected.

Let us now move on to our next section which explains the Service management process measurement model.

Service Management Process Measurement

There are four major levels to report on.

The bottom level contains the activity metrics for a process and these are often volume type metrics such as the number of Request for Changes (RFC) submitted, number of RFCs accepted into the process, number of RFCs by type, the number approved, number successfully implemented, etc.

The next level contains the KPIs associated with each process. The activity metrics should feed into and support the KPIs.

The KPIs will support the next level which is the high-level goals such as improving service quality, reducing IT costs or improving customer satisfaction, etc.

Finally, these will feed into the organization’s Balanced Scorecard or IT scorecard. When first starting out, be careful to not pick too many KPIs to support the high-level goals. Additional KPIs can always be added at a later time.
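The rollup from activity metrics to a process KPI can be sketched as follows, using the RFC volume metrics mentioned above (all counts are hypothetical):

```python
# Activity metrics feeding a process KPI (RFC counts are hypothetical).

activity_metrics = {
    "rfcs_submitted":   120,
    "rfcs_accepted":    110,
    "rfcs_approved":    100,
    "rfcs_implemented":  95,
    "rfcs_successful":   90,
}

# KPI: percentage of implemented changes that were successful, supporting
# a higher-level goal such as 'improve service quality'.
def change_success_rate(metrics):
    return 100.0 * metrics["rfcs_successful"] / metrics["rfcs_implemented"]

kpi = change_success_rate(activity_metrics)
```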

Let us now move on to our next section which explains the service management model.

Service Management Model

The same principles apply when measuring the efficiency and effectiveness of a service Management process.

As the figure shows you will need to define what to measure at the process activity level. These activity measures should be in support of process key performance indicators (KPIs).

The KPIs need to support higher-level goals. In the example below for Change Management, the higher level goal is to improve the service quality. One of the major reasons for service quality issues is the downtime caused by failed changes.

And one of the major reasons for failed changes is often the number of urgent changes an organization implements with no formal process.

Let us now move on to our next section which explains about Creating a measurement framework grid.

Creating A Measurement Framework Grid

It is recommended to create a framework grid that will lay out the high-level goals and define which KPIs will support the goal and also which category the KPI addresses.

KPI categories can be classified as the following:

  • Compliance – are we doing it?

  • Quality – how well are we doing it?

  • Performance – how fast or slow are we doing it?

  • Value – is what we are doing making a difference?

Other considerations are: measurement, target audience, and owner.
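A measurement framework grid like the one described can be sketched as a simple data structure; the goals, KPIs, measurements, audiences and owners below are hypothetical illustrations:

```python
# Sketch of a measurement framework grid: each KPI entry records the
# high-level goal it supports, its category, measurement, audience and
# owner. All entries are hypothetical.

grid = [
    {"goal": "Improve service quality",
     "kpi": "Reduce downtime caused by failed changes by 20%",
     "category": "Quality",
     "measurement": "Outage minutes attributed to failed changes",
     "audience": "IT management",
     "owner": "Change manager"},
    {"goal": "Reduce IT costs",
     "kpi": "10% reduction in cost per incident",
     "category": "Value",
     "measurement": "Support cost / incidents handled",
     "audience": "CIO",
     "owner": "Service desk manager"},
]

def kpis_for_goal(grid, goal):
    """All KPIs in the grid that support a given high-level goal."""
    return [row["kpi"] for row in grid if row["goal"] == goal]
```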

Let us now move on to our next section which explains the Service measurement metrics.

Metrics

It is important to remember that there are three types of metrics that an organization will need to collect to support CSI activities as well as other process activities.

The types of metrics are:

Technology metrics – these metrics are often associated with component- and application-based measures such as performance, availability, etc.

Process metrics – these metrics are captured in the form of CSFs, KPIs and activity metrics for the service management processes. These metrics can help determine the overall health of a process.

Four key questions that KPIs can help answer are around quality, performance, value, and compliance of following the process. CSI would use these metrics as input in identifying improvement opportunities for each process.

Service metrics – these metrics are the results of the end-to-end service. Component/technology metrics are used to compute the service metrics.

Let us now move on to our next section which explains the answer to the question How many CSFs and KPIs?

How many CSFs and KPIs?

Some recommend that no more than two to three KPIs are defined per CSF at any given time, and that a service or process has no more than two to three CSFs associated with it at any given time, while others recommend upwards of four to five.

This may not sound like much, but when considering the number of services and processes, or when using the Balanced Scorecard approach, the upper limit can be staggering! It is recommended that in the early stages of a CSI program only two to three KPIs for each CSF are defined, monitored and reported on.

As the maturity of a service and service management processes increase, additional KPIs can be added. Based on what is important to the business and IT management the KPIs may change over a period of time.

Also, keep in mind that as service management processes are implemented this will often change the KPIs of other processes. As an example, increasing first-contact resolution is a common KPI for Incident Management. This is a good KPI to begin with, but it should change once Problem Management is implemented.

Let us now move on to our next section which explains the Type of KPIs.

Type of KPIs

The next step is to identify the metrics and measurements required to compute the KPI. There are two basic kinds of KPI, qualitative and quantitative.

Here is a qualitative example:

CSF: Improving IT service quality

KPI: 10 percent increase in customer satisfaction rating for handling incidents over the next 6 months.

Metrics required:

  • Original customer satisfaction scores for handling incidents

  • Ending customer satisfaction score for handling incidents.

Measurements:

  • Incident handling survey scores

  • Number of survey scores.

Here is a quantitative example:

CSF: Reducing IT costs

KPI: 10 percent reduction in the costs of handling printer incidents.

Metrics required:

  • The original cost of handling a printer incident

  • The final cost of handling a printer incident

  • Cost of the improvement effort
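Both KPI examples above reduce to simple arithmetic. The survey scores and cost figures in this sketch are hypothetical illustrations of the metrics listed:

```python
# Computing the two example KPIs from their metrics (figures hypothetical).

# Qualitative KPI: percentage increase in customer satisfaction for
# handling incidents over the measurement period.
original_satisfaction = 3.5  # average survey score at the start
ending_satisfaction   = 3.9  # average survey score six months later
satisfaction_increase_pct = (
    (ending_satisfaction - original_satisfaction) / original_satisfaction * 100
)

# Quantitative KPI: percentage reduction in printer-incident handling
# costs, net of what the improvement effort itself cost.
original_cost    = 50_000
final_cost       = 42_000
improvement_cost = 3_000
cost_reduction_pct = (
    (original_cost - final_cost - improvement_cost) / original_cost * 100
)
```

Netting out the cost of the improvement effort matters: without it, the apparent saving would overstate the benefit actually achieved.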

Let us now move on to our next section which explains the Tension metrics.

Tension Metrics

The effort from any support team is a balancing act of three elements:

  • Resources – people and money

  • Features – the product or service and its quality

  • The schedule

The delivered product or service, therefore, represents a balanced trade-off between these three elements. Tension metrics can help create that balance by preventing teams from focusing on just one element – for example, on delivering the product or service on time.

Let us now move on to our next section which explains the Goals and metrics.

Goals and Metrics

Each phase of the service Lifecycle requires very specific contributions from the key roles identified in Service Design, Service Transition and Service Operation, each of which has very specific goals to meet.

Ultimately, the quality of the service will be determined by how well each role meets its goals, and by how well those sometimes conflicting goals are managed along the way. That makes it crucial that organizations find some way of measuring performance – by applying a set of metrics to each goal.

Let us now move on to our next section which talks about Interpreting and using metrics.

Interpreting and using metrics

Results must be examined in the context of the objectives, environment, and any external factors. Therefore after collecting the results, organizations will conduct measurement reviews to determine how well the indicators worked and how the results contribute to the objectives.

Before starting to interpret the metrics and measures it is important to identify if the results that are being shown even make sense. If they do not, then instead of interpreting the results, action should be taken to identify the reasons the results appear the way they do.

As an example, one organization provided data for the Service Desk showing more first-contact resolutions at the Service Desk than incident tickets opened by the Service Desk. This is impossible, and yet the organization was ready to distribute the report.

When this kind of thing happens then some questions need to be asked, such as:

  • How did we collect this data?

  • Who collected the data?

  • What tools were used to collect the data?

  • Who processed the data?

  • What could have led to the incorrect information?

Let us now move on to our next section which talks about Using measurement and metrics.

Using Measurement and Metrics

Another key use of measurement and metrics is for comparison purposes. Measures by themselves may tell the organization very little unless there is a standard or baseline against which to assess the data. Measuring only one particular characteristic of performance in isolation is meaningless unless it is compared with something else that is relevant.

The following comparisons are useful:

  • Comparison against the baseline

  • Comparison against a target or goal

  • Comparison over time such as day to day, week to week, month to month, quarter to quarter, or year to year

  • Comparison between different business units

  • Comparison between different services.

Another comparison that is useful is the comparison with other organizations. In this case, you should be sure to understand that the strategy, goals and objectives of other organizations may not be in alignment with yours so there may be driving factors in the other organization that you don’t have or it could be the other way around.
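
The comparisons listed above can be sketched in code. The following is a minimal illustration (the function name, metric and figures are hypothetical, not from ITIL) of reporting one metric against its baseline and its target:

```python
# Compare a measured value against a baseline and a target, as described
# above. All names and figures are illustrative.

def compare_measure(current: float, baseline: float, target: float) -> dict:
    """Return the comparisons CSI typically reports for one metric."""
    return {
        "vs_baseline_pct": round((current - baseline) / baseline * 100, 1),
        "target_met": current >= target,
    }

# Example: first-contact resolution rate (%), compared week on week
result = compare_measure(current=78.0, baseline=72.0, target=75.0)
print(result)  # {'vs_baseline_pct': 8.3, 'target_met': True}
```

The point of the sketch is that the raw value (78%) means little on its own; the change against the baseline and the verdict against the target are what make it reportable.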

Individual metrics and measures by themselves may tell an organization very little from a strategic or tactical point of view. Some types of metrics and measures are often more activity based than volume based but are valuable from an operational perspective.

Examples of such metrics and measures could be:

  • The services used

  • The mapping of customers to services

  • Frequency of use of each service

  • Times of the day each service is used

  • The way each service is used (internally or externally through the web)

  • The performance of each component used to provide the service

  • The availability of each component used to provide the service.

Let us now move on to our next section which talks about Creating scorecards and reports.

Creating Scorecards and Reports

Reports and scorecards should be linked to overall strategy and goals. Using a Balanced Scorecard approach is one way to manage this alignment.

Creating reports

Service measurement information will be used for three main purposes:

  • to report on the service to interested parties

  • to compare against targets

  • to identify improvement opportunities

Reports must be appropriate and useful for all those who use them. There are typically three distinct audiences for reporting purposes:

  • The business – its real concern is whether IT delivered the promised services, on time and within budget.

  • IT management – management will be interested in the tactical and strategic results that support the business.

  • IT operations/technical managers – these people will be concerned with the tactical and operational metrics which support better planning, coordination, and scheduling of resources. The operational managers will be interested in their technology domain measurements such as component availability and performance.

Let us now move on to our next section which explains the concept of setting targets.

Setting targets

Targets set by management are quantified objectives to be attained. They express the aims of the service or process at any level and provide the basis for the identification of problems and early progress towards solutions and improvement opportunities.

Service measurement targets are often defined in response to business requirements or they may result from new policy or regulatory requirements. Service Level Management through Service Level Agreements will often drive the target that is required.

Unfortunately, many organizations have had targets set with no clear understanding of the IT organization’s capability to meet the target. That is why it is important that Service Level Management not only looks at the business requirements but also IT capability to meet business requirements.

The CSFs and SLRs will give vital information as to what we are trying to achieve and it is important that we keep the targets in mind when measuring and reporting.

Let us now move on to our next section which explains the concept of Balanced Scorecard.

Balanced Scorecard

This is a technique developed by Kaplan and Norton in the early 1990s and involves the definition and implementation of a measurement framework covering four different perspectives:

  • Customer

  • Internal Business

  • Learning and Growth

  • Financial

The four linked perspectives provide a Balanced Scorecard to support strategic activities and objectives and can be used to measure overall IT performance. The Balanced Scorecard is complementary to ITIL. Some of the links to IT include the following:

  • Client perspective – IT as a service provider, primarily documented in Service Level Agreements (SLAs)

  • Internal processes – Operational excellence utilizing Incident, Problem, Change, Configuration and Release management as well as other IT processes; successful delivery of IT projects

  • Learning and growth – Business productivity, the flexibility of IT, investments in software, professional learning, and development

  • Financial – Align IT with the business objectives, manage costs, manage risks, deliver value. IT Financial Management is the process used to allocate costs and calculate ROI.
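
As an illustration of the four perspectives applied to a service desk, the scorecard can be represented as a simple structure. The KPIs and targets below are hypothetical, chosen only to show the shape:

```python
# A Balanced Scorecard sketch for a service desk: four perspectives,
# each with illustrative KPIs and targets (not prescribed by ITIL).
scorecard = {
    "Customer": {"customer satisfaction (%)": 90},
    "Internal Business": {"first-contact resolution (%)": 75},
    "Learning and Growth": {"analysts ITIL-certified (%)": 60},
    "Financial": {"cost per contact (USD)": 12},
}

for perspective, kpis in scorecard.items():
    for kpi, target in kpis.items():
        print(f"{perspective}: {kpi} -> target {target}")
```

The balance comes from reporting all four perspectives together, so that no single view (for example, cost alone) dominates the picture of performance.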

Let us now move on to our next section which explains the example of Balanced Scorecard.

IT Balanced Scorecard

An example of a balanced scorecard for a service desk is shown below.

Many organizations are structured around strategic business units (SBUs) with each business unit focusing on a specific group of products or services offered by the business.

The structure of IT may match the SBU organization or may offer services to the SBU from a common, shared services IT organization or both. This last hybrid approach tends to put the central infrastructure group in the shared services world and the business solutions or application development group in the SBU itself.

This often results in non-productive finger-pointing when things go wrong. The business itself is not interested in this blame-storming exercise but rather in the quality of IT service provision. Therefore, the Balanced Scorecard is best deployed at the SBU level.

Let us now move on to our next section which explains the SWOT Analysis.

SWOT Analysis

SWOT stands for strengths, weaknesses, opportunities, and threats. This section provides guidance on properly conducting a SWOT analysis and using its results, how to select the scope and range of this common assessment tool, and the common mistakes people make when using it. A SWOT analysis provides a quick overview of the strategic situation:

  • Strengths are internal attributes of the organization that are helpful to the achievement of the objective.

  • Weaknesses are internal attributes of the organization that are harmful to the achievement of the objective.

  • Opportunities are external conditions that are helpful to the achievement of the objective.

  • Threats are external conditions that are harmful to the achievement of the objective.

Let us now move on to our next section which explains an example of Sample SWOT Analysis.

Sample SWOT Analysis for CSI

The figure shown here is an example of an analysis performed for CSI:

Let us now move on to our next section which explains the concept of Return on investment.

Return on investment

The ROI challenge needs to take into consideration many factors; usually it is a cost versus benefit analysis.

Cost – the money an organization pays to improve services and service management processes: internal resource costs, tool costs, consulting costs and so on. These costs are usually easy to determine.

Return or benefit - These returns are often hard to define. In order to be able to compute these items it is important to know the following:

What is the cost of downtime?

This would include both the lost productivity of the customers and the loss of revenue.

What is the cost of rework? How many failed changes have to be backed out and reworked?

Let us now move on to our next section which explains an example of Return on investment.

ROI example

XYZ Corporation spent USD 500,000 to establish a formal problem management process, which saved USD 800,000.

The ROI, calculated at the end of the first year of operation, was USD 300,000.
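
The arithmetic behind this example is straightforward: the net return is the benefit minus the cost, and expressing it as a percentage of the cost gives the ROI rate.

```python
# The ROI arithmetic from the example above (figures as in the text).
cost = 500_000      # USD spent establishing the problem management process
benefit = 800_000   # USD saved in the first year of operation

net_return = benefit - cost          # net return in USD
roi_pct = net_return / cost * 100    # ROI as a percentage of the investment

print(net_return, roi_pct)  # 300000 60.0
```

A net return of USD 300,000 on a USD 500,000 investment is a 60% first-year ROI.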

Let us now move on to our next section which explains the Value on Investment example.

Value on Investment example

XYZ Corporation established a formal change management process. The change management process reduced the number of failed changes and improved the ability of XYZ Corp. to respond quickly to changing market conditions and unexpected opportunities.

The VOI was:

  • Enhanced market position

  • Promoted collaboration between business units and IT

  • Freed up resources to work on other projects

Let us now move on to our next section which talks about establishing the business case.

Establishing the business case

The Business Case should articulate the reason for undertaking a service or process improvement initiative. As far as possible, data and evidence should be provided relating to the costs and expected benefits of undertaking process improvement, noting that:

  • Process redesign activities are more complex and therefore more costly than initially expected

  • The organizational change impact is often underestimated

  • A changed process usually requires changed competencies and tools, adding further to the expense.

Examples of business value measures are:

  • Time to market

  • Customer retention

  • Inventory carrying cost

  • Market share.

IT’s contribution can be captured as follows:

  • Gaining agility

  • Managing knowledge

  • Enhancing knowledge

  • Reducing costs

  • Reducing risk.

Let us now move on to our next section which talks about Reporting policy and rules in Service reporting.

Service Reporting

Service reports should be produced to meet identified needs and customer requirements. An ideal approach to building a business-focused service-reporting framework is to take the time to define and agree on the policy and rules with the business and Service Design about how reporting will be implemented and managed.

This includes:

  • Targeted audience(s) and the related business views on the service delivered

  • Agreement on what to measure and what to report on

  • Agreed definitions of all terms and boundaries

  • The basis of all calculations

  • Reporting schedules

  • Access to reports and medium to be used

  • Meetings scheduled to review and discuss reports.

Let us now move on to our next section which explains about CSI and Availability Management.

CSI and Availability Management

The CSI process makes extensive use of methods and practices found in many ITIL processes throughout the Lifecycle of a service. Far from being redundant, the use of the outputs in the form of flows, matrices, statistics or analysis reports provide valuable insight into the service’s design and operation.

There are various analysis techniques to determine what needs to be improved, prioritize and suggest improvements such as:

  • Component Failure Impact Analysis

  • Fault Tree Analysis

  • Service Failure Analysis

  • Technical Observation

Let us now move on to our next section, which explains Component Failure Impact Analysis and CSI.

Component Failure Impact Analysis and CSI

Component Failure Impact Analysis (CFIA) identifies single points of failure, IT services at risk from failure of various Configuration Items (CI) and the alternatives that are available should a CI fail. It should also be used to assess the existence and validity of recovery procedures for the selected CIs.

The same approach can be used for a single IT service by mapping the component CIs against the vital business functions and users supported by each component. When a single point of failure is identified, the information is provided to CSI. This information, combined with business requirements, enable CSI to make recommendations on how to address the failure.
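
The CFIA grid can be sketched as a mapping of services to the CIs they depend on. The data below is hypothetical, and redundancy is modelled simply as the number of instances of each CI: a CI with no alternative instance is a single point of failure for that service.

```python
# A minimal CFIA-style grid (hypothetical CIs and services):
# cfia[service] = {ci: number_of_redundant_instances}
cfia = {
    "Email":   {"Switch-01": 1, "Mail-Server": 2},
    "Payroll": {"Switch-01": 1, "DB-Server": 1},
}

def single_points_of_failure(grid):
    """List, per service, the CIs that have no redundant alternative."""
    spofs = {}
    for service, cis in grid.items():
        spofs[service] = [ci for ci, redundancy in cis.items() if redundancy < 2]
    return spofs

print(single_points_of_failure(cfia))
# {'Email': ['Switch-01'], 'Payroll': ['Switch-01', 'DB-Server']}
```

Here Switch-01 would be flagged for both services, which is exactly the kind of finding CSI combines with business requirements to prioritize an improvement.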

Let us now move on to our next section which explains about Fault Tree Analysis and CSI.

Fault Tree Analysis and CSI

Fault Tree Analysis (FTA) is a technique that is used to determine the chain of events that cause a disruption of IT services. This technique offers detailed models of availability. It makes a representation of a chain of events using Boolean algebra and notation.

Essentially FTA distinguishes between four events: basic events, resulting events, conditional events, and trigger events. When provided to CSI, FTA information indicates which part of the infrastructure, process or service was responsible for the service disruptions.

This information, combined with business requirements, enables CSI to make recommendations about how to address the fault.
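
The Boolean notation that FTA uses can be illustrated with a toy tree. The events and gates below are hypothetical: the service fails if the network fails (OR gate) or if both power feeds fail (AND gate).

```python
# A toy fault-tree evaluation using Boolean logic, as FTA does.
# Events and gate structure are illustrative, not from ITIL.
events = {"network_down": False, "feed_a_down": True, "feed_b_down": True}

power_failure = events["feed_a_down"] and events["feed_b_down"]  # AND gate
service_down = events["network_down"] or power_failure           # OR gate

print(service_down)  # True: both power feeds failed, so the service is down
```

Walking the tree back from the top event shows which branch caused the disruption – here the power AND gate, not the network – which is the insight FTA passes to CSI.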

Let us now move on to our next section which explains about Service Failure Analysis and CSI.

Service Failure Analysis and CSI

Service Failure Analysis (SFA) is a technique designed to provide a structured approach to identify end-to-end availability improvement opportunities that deliver benefits to the user. Many of the activities involved in SFA are closely aligned with those of Problem Management.

In a number of organizations, these activities are performed jointly by Problem and Availability Management. SFA should attempt to identify improvement opportunities that benefit the end user. It is therefore important to take an end-to-end view of the service requirements.

CSI and SFA work hand in hand. SFA identifies the business impact of an outage on a service, system or process. This information, combined with business requirements, enables CSI to make recommendations about how to address improvement opportunities.

Let us now move on to our next section which explains the Technical Observation and CSI.

Technical Observation and CSI

A Technical Observation (TO) is a prearranged gathering of specialist technical support staff from within IT support. They are brought together to focus on specific aspects of IT availability. The TO’s purpose is to monitor events, real-time as they occur, with the specific aim of identifying improvement opportunities within the current IT infrastructure.

The TO is best suited to delivering proactive business and end-user benefits from within the real-time IT environment. Bringing together the specialist technical staff to observe specific activities and events within the IT infrastructure and operational processes creates an environment to identify improvement opportunities.

The TO gathers, processes and analyses information about the situation. Too often the TO has been reactive by nature and is assembled hastily to deal with an emergency. Why wait? If the TO is included as part of the launch of a new service, system or process, many of the issues that would otherwise surface in live operation can be identified and addressed proactively.

Let us now move on to our next section which explains about CSI and Capacity Management.

CSI and Capacity Management

This section provides practical details about how each Capacity Management sub-process mentioned below can be used in various activities of CSI. We will look at all three areas – business capacity, service capacity and component capacity – below:

Business Capacity Management:

A prime objective of the Business Capacity Management sub-process is to ensure that future business requirements for IT services are considered and understood and that sufficient capacity to support the services is planned and implemented in an appropriate timescale.

Service Capacity Management:

A prime objective of the Service Capacity Management sub-process is to identify and understand the IT services, their use of the resource, working patterns, peaks, and troughs, as well as to ensure that the services can and do meet their SLA targets.

In this sub-process, the focus is on managing service performance, as determined by the targets contained in the SLAs or SLRs.

Component Capacity Management:

A prime objective of Component Capacity Management sub-process is to identify and understand the capacity and utilization of each of the components of the IT infrastructure. This ensures the optimum use of the current hardware and software resources in order to achieve and maintain the agreed service levels.

All hardware components and many software components in the IT infrastructure have a finite capacity, which, when exceeded, have the potential to cause performance problems.
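
Component capacity monitoring of this kind can be sketched as a simple threshold check. The components, utilisation figures and the 80% threshold below are all illustrative:

```python
# Flag components whose utilisation exceeds an agreed threshold,
# as Component Capacity Management does. All figures are hypothetical.
THRESHOLD_PCT = 80.0  # illustrative alert threshold

utilisation = {"cpu-node-1": 72.5, "db-disk": 91.0, "wan-link": 65.0}

breaches = {c: u for c, u in utilisation.items() if u > THRESHOLD_PCT}
print(breaches)  # {'db-disk': 91.0}
```

Components flagged this way before they cause incidents are candidates for tuning or upgrade – an input CSI can turn into an improvement recommendation.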

Let us now move on to our next section which explains about Connecting business and service capacity management.

Connecting business and service capacity management

Here we discuss an example of how the three sub-processes of Capacity Management tie together.

Let’s look at the example shown below:

There are three services: A, B, and C, and three departments: Marketing, Sales, and Finance.

  • Service A is used by all three departments

  • Service B is used only by Marketing and Sales.

  • Service C is used only by Finance.
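
The mapping above can be sketched as a simple structure, showing how business capacity (demand per department) connects to service capacity (demand per service):

```python
# The service-to-department mapping from the example above.
usage = {
    "Service A": ["Marketing", "Sales", "Finance"],
    "Service B": ["Marketing", "Sales"],
    "Service C": ["Finance"],
}

# Business Capacity Management view: which services does Finance depend on?
finance_services = [s for s, depts in usage.items() if "Finance" in depts]
print(finance_services)  # ['Service A', 'Service C']
```

Read one way, the mapping tells Service Capacity Management who drives demand for each service; read the other way, it tells Business Capacity Management which services a department's growth plans will affect.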

Let us now move on to our next section which explains The Expanded incident Lifecycle.

The Expanded Incident Lifecycle

The Expanded Incident Lifecycle, shown in the figure below, is a technique to help with the technical analysis of incidents affecting the availability of components and IT services.

The Expanded Incident Lifecycle is made up of two parts: time to restore service (downtime) and time between failures (uptime). The incident lifecycle includes diagnosis as well as repair, restoration and recovery of the service.
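
The two parts of the lifecycle feed a standard availability calculation: uptime corresponds to mean time between failures (MTBF) and downtime to mean time to restore service (MTRS). The figures below are illustrative:

```python
# Availability from the two halves of the expanded incident lifecycle.
# Figures are hypothetical, for illustration only.
mtbf_hours = 200.0  # mean time between failures (uptime)
mtrs_hours = 4.0    # mean time to restore service (downtime)

availability_pct = mtbf_hours / (mtbf_hours + mtrs_hours) * 100
print(round(availability_pct, 2))  # 98.04
```

Shortening any stage of the restore path (diagnosis, repair, restoration, recovery) reduces MTRS and so raises availability – which is why the technique breaks downtime into those stages.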

Let us now move on to our next section which explains about Workload and Demand Management for CSI.

Workload and Demand Management for CSI

Workload Management can be defined as understanding which customers use what service, when they use the service, how they use the service and finally how using the service impacts the performance of a single or multiple systems and/or components that make up a service.

Demand Management is often associated with influencing the end users’ behavior. By influencing the end users’ behavior an organization can change the workload thus improving the performance of components that support IT services.

Using Demand Management can be an effective way of improving services without investing a lot of money. Workload and Demand Management for CSI has many activities such as:

  • Utilization monitoring

  • Response time monitoring

  • Analysis

  • Tuning and optimization

  • Implementation

  • Designing resilience

  • Threshold management and control

  • Demand Management

  • Analysis techniques such as trend analysis, baselining, modelling and trending, and simulation modelling.
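
One of the analysis techniques listed above, trend analysis, can be sketched as a least-squares slope over utilisation history. The weekly figures below are hypothetical:

```python
# A minimal trend-analysis sketch (hypothetical data): fit a least-squares
# line to weekly utilisation figures to see whether demand is growing.
weeks = [1, 2, 3, 4, 5]
util = [60.0, 63.0, 65.0, 70.0, 72.0]  # utilisation (%) per week

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(util) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, util))
    / sum((x - mean_x) ** 2 for x in weeks)
)
print(round(slope, 1))  # utilisation grows by about 3.1 points per week
```

A positive slope like this, projected forward, tells Capacity Management roughly when the component will hit its threshold – turning monitoring data into a planning input for CSI.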

Let us now move on to our next section which explains IT Service Continuity Management used in CSI.

IT Service Continuity Management use in CSI

Any CSI initiative to improve services also needs to be integrated with ITSCM, as any changes to the service requirements, infrastructure and so on need to be taken into account for any changes that may be required to the Continuity Plan.

That is why it is important for all service improvement plans to go through Change Management. Every organization manages its risk, but not always in a way that is visible, repeatable and consistently applied to support decision making.

The task of Risk Management is to ensure that the organization makes cost-effective use of a risk process that has a series of well-defined steps. The aim is to support better decision making through a good understanding of risks and their likely impact.

Management of risk should be carried out in the wider context of safety concerns, security, and business continuity.

Health and safety policy and practice are concerned with ensuring that the workplace is a safe environment.

Security is concerned with protecting the organization’s assets, including information, buildings and so on. Business continuity is concerned with ensuring that the organization could continue to operate in the event of a disaster, such as loss of a service, flood or fire damage.

Risk Management from the business perspective, in the context of working with suppliers, centers on assessing vulnerabilities in supplier arrangements which pose threats to any aspect of the business including:

  • Customer satisfaction

  • Brand image

  • Market share

Let us now move on to our next section which explains about Risk Register and CSI.

Risk Register and CSI

A risk register used with CSI would record factors such as:

  • Identification of risks

  • Target, i.e. the assets under threat

  • Impact of risk, qualitative and quantitative

  • Probability of occurrence

  • Possible mitigating actions or controls

  • Identification of stakeholder who is accountable for the risk

  • Responsibility for implementing selected actions on the controls

  • Evaluation of impact vs. cost of action or control

Let us now move on to our next section which explains about Risk Register.

Risk Register

Risk Management processes need to be considered as cyclical: reviewing the suitability of previous actions and reassessing risks in the light of changing circumstances. The risks are likely to be managed through a risk register such as the example shown here.

Let us now move on to our next section which explains Problem Management and CSI.

Problem Management and CSI

CSI and Problem Management are closely related as one of the goals of Problem Management is to identify and remove errors permanently that impact service from the infrastructure. This directly supports CSI activities of identifying and implementing service improvements.

Problem Management also supports CSI activities through trend analysis and the targeting of preventive action. Problem Management activities are generally conducted within the scope of Service Operation. CSI must take an active role in the proactive aspects of Problem Management to identify and recommend changes that will result in service improvements.

Post Implementation Review: a PIR is conducted for certain changes. CSI, working with Change Management, can require a PIR for all changes driven by CSI.

Let us now move on to our next section.

Knowledge Management and CSI

One of the key domains in support of CSI is Knowledge Management.

Capturing, organizing, assessing the quality of, and using knowledge provides valuable input to CSI activities. An organization has to gather knowledge and analyze the results in order to look for trends in service level achievements and/or the results and outputs of service management processes. This input is used to determine which service improvement plans to work on.

Let us now move on to our next section.

Summary

In this lesson, we covered the following topics:

  • Assessments and Gap analysis

  • Benchmarking, service measurement, metrics, service reporting, balanced scorecard and SWOT analysis.

  • Return on investment and measurement of benefits

  • Techniques within availability management, capacity management, risk management, IT service continuity management and problem management used in CSI.

The next lesson focuses on Organizing for Continual Service Improvement.
