Universal Credit – who’s in charge?

By Charlotte Pell, Vanguard Consulting

Sarah, a human being from Redcar, lost her job. She applied for Universal Credit online and waited for something to happen. Nothing did. When she rang the help line to find out what was going on, the operator said, ‘I’m sorry, it hasn’t registered your claim due to a technical error. Please re-apply by telephone’.

Sarah re-applied by phone, supplying the same information she entered online. Halfway through the application, the operator said, ‘The portal has fallen over. It does this a lot. The data isn’t lost, but I can’t see the application right now. I’ll ring you back tomorrow’.

After a second call to complete the application, a letter invited Sarah to a face-to-face interview and asked her to bring four pieces of supporting evidence. At the interview, Sarah was told the letter wasn’t right: ‘I’m sorry, but “it” won’t let me pay UC until you have submitted a fifth piece of evidence’. Sarah would have to make another appointment.

Unfortunately, there were no delays with Sarah’s bills. They arrived promptly as usual in the post. As a result of the stress and uncertainty, Sarah’s health deteriorated as she lost control of her normally well-managed diabetes and her anxiety returned. She struggled to get her children ready for school and when she did, they turned up ill-prepared and tired. The school alerted social services and Sarah got a visit.

After 23 weeks her first Universal Credit was paid, and Sarah started to rebuild her life.

Some will say that this shouldn’t have happened, that the Department for Work and Pensions (DWP) could have done this or that differently, or that Sarah was unlucky. But this is what does happen, all the time. ‘It’ got in the way of helping Sarah get her money.

What is this ‘it’? I knew what the operator meant. You know what he meant. The same ‘it’ is everywhere. It runs much of the public sector. It spews out the wrong letters to the wrong people. It gets the amounts we are owed wrong. It does things in 10 working days. It doesn’t do things in 10 working days. It generates letters we don’t understand. It is inflexible. It can only deal with one part of our query at a time. It doesn’t listen or understand. It certainly doesn’t care. It can only go in one order and at a certain speed. It doesn’t recognise anything outside its boxes. It loses documents. It tells us we are third in the queue. It is very sorry. It gives us a unique customer reference number. It tells us our session has timed out. It takes a very long time. It makes us angry.

But ‘it’ cannot deal with variety.

It forces people like Sarah into crisis situations.

What can deal with variety? What can deal with people with a variety of needs? What can deal with people who express their needs differently? What can adapt to different speeds and take things in a different order? What can react quickly, listen, reassure and explain? What can make decisions and judgements quickly? What can learn? What can connect with us?

The answer isn’t a what. It’s a who. The answer is a human being. Only human beings can deal with the variety of demand that hits a service like UC. Only a human being can deal with the variety of needs and circumstances involved when people claim benefits or tax credits. But not just any human being – someone who has the authority and expertise to make a decision.

Ask a housing benefits manager how many types of claims they process a year, and the answer will be as many claims as they get. There are no ‘types’. There is just endless variety. The same is true when it comes to changes in claimants’ circumstances – even more variety.

Many housing benefits managers have learnt that to absorb this endless variety, benefit claimants should be seen by a senior benefits adviser as soon as they claim. Claimants arrive, get their claims sorted quickly and go away again. They don’t keep coming back with questions and bits of paper. There is no front and back office split. Instead, there is just one benefits office with one purpose – to help people claim the right amount of benefit at the right time. If claimants’ circumstances change, they are more likely to tell the benefits adviser, because they know the process will be hassle free. The purpose of these offices is no longer to answer phones, send out letters and fill in forms. Managers have learnt that although unit costs might be higher, letting the customer see the expert is much cheaper overall because there are far fewer repeat calls, visits, errors and complaints.

Simple, eh?

And yet government plans to give ‘it’ more power with proposals for the full Universal Credit Digital Service. Big IT companies will tell ministers that, oh yes, of course it will work. Of course the digital service will be flexible and of course it will be easy for claimants to understand. There will be guidance notes, drop-down menus and a help line. The IT will be ‘deliverable within the timescale and on budget’. But as soon as it comes up against real people it will fail. Claimants will ring the help line because they want to speak to a human being. The call centres will struggle to cope with ‘unanticipated demand’. Costs will rise.

Only people can absorb variety. Only people can listen and respond. Only people can make judgements.

Sarah didn’t get the help she needed because of the design of the system. The operators were hamstrung by the IT.

Ministers should end the reign of unthinking scripts and screens. They should choose to put the human being, the expert and the most intelligent machine we have, where it matters: right in front of the customer.

Charlotte Pell


Human by default

By Jeremy Cox

Today’s faulty thinking about digital technology in service organisations is rooted in a dim view of human nature and an over-optimistic view of automation. Managers typically think of staff as ‘resources’ and automation as a means of reducing costs by replacing people with machines and latterly digital technology. Digital is cheaper, faster, more reliable and more modern… it’s just better, isn’t it?

Sadly, expectations are often confounded when technology makes things harder rather than easier for customers and staff, and renders work processes more cumbersome than before they were automated. E-enabled IT helpdesk requests produce multiple rework loops to clarify a problem, where before an expert and user would sort it out in direct conversation. Document imaging and workflow systems in both public and private services such as benefits, insurance claims or mortgage sales create fragmented, inflexible processes that paradoxically slow down the work, multiply failure demand and reduce revenue, all contradicting management’s assumption that automation would have the opposite effect.

Defaulting to digital in services for reasons of cost is a trap. Instead, we must learn to put technology to work for us and our services – explicitly designing it to complement human activity and enhance value creation. Counterintuitively, this turns out to be the route not only to improved service, revenue and morale, but reduced costs as well.

The origins of faulty thinking

Where does the prevailing attitude to automation come from? Can understanding the origins of the digital obsession help us to frame a better approach?

A century ago, industrialisation triggered the emergence of the ‘command-and-control’ management archetype, with Henry Ford, F.W. Taylor and Alfred Sloan of General Motors among notable pioneers. In grappling with the challenges of building large, complex mass-production manufacturing enterprises, they were instrumental in formulating two critical pillars in subsequent thinking about management and automation: the ‘machine view’ of work, and the separation of decision-making from that work.

The ‘machine view’ of work sees industrial processes through a reductionist lens as simple, stable and repeatable. Adam Smith’s observation of ‘the division of labour in pin manufacturing, and the great increase in the quantity of work that results’ is illustrated on the reverse of a British £20 note, and this mechanistic perspective has been subconsciously internalised by most managers. In his 1978 Harvard Business Review article ‘Where Does the Customer Fit in a Service Operation?’, Richard Chase applies the machine view to service, setting out the necessity of isolating customers from the delivery of work. A customer-facing ‘front office’ is used for intake, allowing work to be processed as standardised and automated ‘factory work’ in a ‘back office’ for maximum efficiency.

The separation of decision-making from work makes management responsible for planning, directing and controlling the work of front-line staff who are treated as passive recipients of management orders – a Taylorist construct that has come to seem normal and unremarkable. Managers spend their time in meetings making decisions using aggregated measures and arbitrary targets, planning and orchestrating change top-down. Functional hierarchies keep decision-making removed from the reality of front-line customer-facing work and the experience of real customers.

We default to digital

Both ideas have made the leap from manufacturing to service and dominate management thinking in service organisations to this day. Managers constantly strive to reduce costs by replacing human activity with top-down imposition of standardised automated processes.

Customers are encouraged, and increasingly coerced, to transact online and self-serve. We are ‘channel switched’, through the use of physical menaces (‘you need to check in at the kiosk’), financial penalties (extra charges for transactions with humans) or withdrawal of service (queries can only be submitted online). The initial digital-access-only design for the Universal Credit programme unravels as the reality of dealing with vulnerable people with multiple interrelated issues dawns.

The NHS Connecting for Health programme was a multi-billion pound debacle rooted in the erroneous assumption that technology is ‘just better’ and therefore the point of departure for a transformation programme. Commercial organisations from banks and insurers to utilities and telecoms firms have all introduced technology-driven front-office/back-office designs with digital channels at the front end and IT-driven back offices that produce slow, expensive and poor service.

We have all experienced situations where technology degrades our experience, causing frustration, delay, confusion and repeat demand, in the process discouraging us from coming back to spend more. The machine metaphor of work is simply incompatible with the true nature of service provision. Health and social care, financial and emergency services, social housing organisations, utility companies and charities do not manufacture millions of identical pins – their challenge is to absorb an almost infinite variety of demand and create value for customers for each of whom ‘what matters’ is something quite different.

Standardisation, functionalisation and automation can disastrously undermine an organisation’s ability to meet those challenges. The separation of decision-making from work compounds the ‘machine view’ problem by making managers blind to the reality of the issues customers and staff alike experience.

There is a clear alternative

The hopeful alternative is to reframe our approach to technology: see it as something to be designed to complement rather than replace human activity. Putting technology to work in service organisations means learning to listen to customers and understand what matters to them on an individual basis, and learning to absorb variety. By using that as a starting point for better service design, we can leverage technology to work in our favour. At my squash club I can book courts online, over the phone, and in person at the club reception. Sometimes I have a complex request with multiple overlapping bookings and different people paying for courts, and a conversation works best. At other times I’m happy to do all the work myself online; the system can absorb variety and do what matters for me at different times.

An insurance company client found that re-integrating responsibility and decision-making into front-line work and switching the role of management from controlling to supporting staff led to reductions in fraud. Staff were better able to identify suspicious transactions when dealing with live demand than the previous script-driven automated systems. Similarly, levels of fraud in a benefits system fell when expert staff switched (under their own initiative) from back-office processing to face-to-face support and assessment with customers. In both cases, IT was then used to track background patterns and flag suspicious cases for investigation – an example of value-driven, human-by-design customer processes, with technology configured to complement rather than control.

While Airbnb, Uber, and Amazon are notable examples of technology enabling real benefits to customers and disrupting older market models, to consider them as a justification for ‘digital-by-default’ is to miss the point – instead they reflect the way technology can be exploited to better meet what matters to customers. Airbnb and Uber don’t work without good drivers and hosts. The technology complements human activity rather than replacing it.

Putting technology to work

Our inheritance of outdated and inappropriate mass-production thinking has led to a pessimistic view of human nature and an over-rosy view of automation. ‘Digital-by-default’ thinking prevails in service organisations, but the idea of using technology to put people out of work is ripe for consignment to the dustbin of history. Shifting the perspective to one of ‘human by default, put technology to work’ is a profound and, to many, counterintuitive idea.

‘Putting technology to work’ entails first building service organisations that recognise and respond to what matters to customers. It entails systematically enabling staff to create value for customers. It entails managers and staff learning and improving together to create service systems that absorb variety. Then and only then should technology be pulled to add more value to the delivery of service and truly act as a complement to human activity.

Read similar articles in Edition Three of The Vanguard Periodical: The Vanguard Method and Digital. Ask for your FREE hard copy or PDF.


It’s good to talk

Patrick Hoare, Vanguard Consulting

I like digital services. If I can do it online and it works, I will. My phone and computer remember my personal details and my card details, so it’s quick and easy. And many customers think like me; banking transactions on smartphones overtook those done on computers for the first time in 2015. What’s not to like?

Well, I don’t always want to use online channels. Sometimes I have questions and want to talk them through. Take the example of applying for a credit card. For some customers, maybe the majority, applying online can be good for them and good for the bottom line of the bank. It’s a win for both parties. However, look at the end-to-end customer experience in the following case study and consider the findings that emerge.

The logic of the bank in question was, ‘Online is cheaper, so let’s make customers apply digitally.’ So that’s what it did. Customers who telephoned were referred to the website. What happened? This is what we learned.

Over six months, spurred by a successful, if expensive, marketing campaign, 183,000 customers applied for an account online. Of those, 61,000 (33%) failed the credit-score process, which is not unusual and easy to rationalise – who wants risky customers? That left 122,000, who after filling in the form online had to do – guess what? Print the form, sign it and send it in by post… Of the ‘successful’ 122,000 applicants, 86,500 actually opened an account. On further review (the system couldn’t understand everything about the applicants at the point of application), a further 17,000 of the 122,000 became credit-score fails. The remaining 18,500 (10% of all applicants) were accepted but for some reason never completed the process. Maybe the tortuous digital journey was a factor.
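
The arithmetic of the funnel is easier to follow in code. A minimal sketch in Python reproducing the case-study figures above (the variable names are mine, not the bank’s):

  # Application funnel from the case study; percentages are of total applicants.
  applied = 183_000
  failed_credit_score = 61_000   # rejected at the initial credit score
  later_credit_fails = 17_000    # rejected on further review
  opened_account = 86_500        # actually opened an account

  passed_initial = applied - failed_credit_score                          # 122,000
  never_completed = passed_initial - opened_account - later_credit_fails  # 18,500

  for label, n in [('Failed initial credit score', failed_credit_score),
                   ('Failed on further review', later_credit_fails),
                   ('Accepted but never completed', never_completed),
                   ('Opened an account', opened_account)]:
      print(f"{label}: {n:,} ({n / applied:.0%})")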

Lesson number 1

If your customers are happy to go online, design the channel to be clear, easy to use – and properly digital. In customer terms:

  • Tell me clearly the information you need to make a decision
  • Explain the product’s features
  • Enable me to apply, and get verified, online
  • Make an instant decision
  • State my credit limit
  • Allow me to use the account straight away.

The channel may have been ‘cheaper’, but we were beginning to understand that it was also unfriendly and poorly designed. It had ‘turned off’ 18,500 successful applicants – was it hiding other problems and opportunities? The short answer was yes.

As we saw, 78,000 people who wanted a card, or 43% of the total, were deemed a credit risk, either immediately or after a delay. For many that was just the cost of reducing risk. But the total warranted investigation. We found that the way the digital system was constructed meant that some customers in good credit standing were being rejected. Unaware that special student cards were available, for example, students were being rejected when they applied for a standard card – then rejected again on applying for a student card for having made ‘more than one application within six months’.

If there had been dialogue we’d have understood this earlier – along with other damage done by the confusing and rigid application form. All told, thousands of perfectly creditworthy customers were being turned down, while others dropped out of their own accord.

Lesson number 2

Don’t be lured into believing that computers can replace humans.

I have yet to see a matrix of decision trees that can replace dialogue and understanding. In every instance when they make this mistake, businesses end up losing money.

Still, we had 86,500 customers a year making it through the application system on a cheap channel – good news, surely? Up to a point. What we found was that the non-digital digital application system turned customers off. Whether through lack of clarity or a deliberate marketing ploy, customers had to read the features and benefits very carefully. The payback option, defaulted to the minimum per month, was another problem. So was the time – on average 15.5 days – it took for customers to receive their cards, which had repercussions on card usage. Two months after dispatch, 26% of cards had not been used. Shockingly, 18 months after the start date 87% of accounts had been closed or had never been used.

Lesson number 3

Don’t try to trick customers through ‘smart online marketing’.

Customers aren’t fooled – as we would quickly see if our measures related to purpose from their point of view. The lessons of mis-sold endowments, PPI and packaged bank accounts are crystal clear: we should have learned them by now.

We thought we had a cheap channel of acquisition that ticked all the boxes for ‘going digital’. The reality was that we had a complicated system that put off customers, drove in failure demand and did not lend itself to building long-term customer relationships.

When we quietly abandoned the digital-only policy and started talking to customers, the following things happened. An extra 21,661 applications were accepted, which was a good start. As customers and bank gained a better understanding of each other’s needs, applications were cleaner, with the result that the time taken from application to card issue halved to 7.4 days. With the application still ‘warm’ in the customer’s mind, the time taken to use the card also fell significantly, from 24.8 days to 16.2 days. And more customers used their cards: the proportion failing to use their card in the first two months fell to 12.2%. While it’s difficult to draw an empirical conclusion, the hypothesis is that customers had a better relationship with the bank and consequently had a higher propensity to use the card.

To summarise, we learned three important lessons when we studied what happened when the bank made customers apply digitally:

Lesson 1: If your customers are happy to go online, design the channel to be clear, easy to use – and properly digital.

Lesson 2: Don’t be lured into believing that computers can replace humans.

Lesson 3: Don’t try to trick customers through ‘smart online marketing’.

And most importantly, don’t fall for the ‘digital by default’ narrative. It can work for some customers in some circumstances – but don’t diss dialogue.


Having IT your way

By John Little

It’s a significant and neglected challenge for service leaders: distinguishing problems they actually have to solve from those they think they have or have been persuaded by others to believe they have.

The push to digitise services has been encouraged by government and large IT outsourcers, often in a Whitehall partnership. They promote the notion that digitising services also makes them cheaper, faster and better. Not so. Many outsourced IT providers have little knowledge of what good IT support looks like to meet individual organisations’ operational needs, and frankly often don’t care. They just want a sale.

When implemented, dysfunctional IT architecture and software effectively dictate what service staff can and can’t do to meet a user’s needs. Senior leaders often deny this happens… unless and until they go into the work to see for themselves, at which point, ‘This isn’t what I thought we had bought. It needs to be changed,’ is a frequently heard lament. ‘The vendor said it was “configurable”, so I guess we just need to reconfigure it’.

Few large organisations now write or maintain their own IT software. Like service work in general, IT development has been outsourced to software houses with their own standardised product offerings and profit-focused sales agenda. Very often the only things retained in the IT department are the minor technical assistance function, contract management and a few peripheral activities. They have thus voluntarily down-skilled themselves into product administrators – a self-inflicted helplessness often compounded by effective ‘capture’ of IT support departments by vendors and their products. The effect is to lock in inflexible and wasteful systems sometimes for a decade or more.

As for configuration: whatever vendors promise in sales negotiations, the reality is that after implementation, off-the-shelf software is almost impossible to reconfigure except at a cost that most organisations will not want or be able to afford. Reconfiguring a ‘vanilla’ product often entails persuading other purchasers, whom you don’t know and who don’t know or care about your business, to agree to software changes that will affect all users. Not surprisingly, this doesn’t usually end well.

How to do it wrong

A housing organisation, let’s call it AB Housing, was looking to upgrade its IT in support of a newly completed business improvement project.

AB Housing hired an expert in PRINCE2 project management on a ‘temporary’ basis to help it source and implement appropriate software. The project manager supposedly knew about the PRINCE2 framework but not very much about IT itself, nor about AB’s IT requirements. He struggled to comprehend the business needs as the work progressed but couldn’t let that show. He was an expert, after all, who needed to work with other clients after this project ended.

The PRINCE2 approach to software development appears systematic and orderly. It is based on seven principles, seven themes and seven processes – what could be more reassuring? Just to be sure and to comply with best practice, the project manager insisted on a risk register, which was duly drawn up with a list of imagined/made-up risks, all categorised, named and given a risk number. Needless to say, when the IT project collapsed, none of the things that caused it to fail figured on the risk register. The reason was that the project leaders, who understood neither the work nor the staff who did it, were themselves the source of most of the risk.

The temporary ‘expert’ now believed everything was in place for full project control.

The project launched with a project board, senior responsible owner, project manager, steering committee of potential users (no one spotted the irony here – why not start with actual users?), and a senior external IT expert available on call.

Although no one on the project board had been involved with the business improvement project, its members were expected to sign off on staged implementation proposals on the basis of what they were told the IT would do to deliver the improvement. This brought their individual biases and opinions into play and triggered much debate. The senior responsible owner had not taken part in the business improvement project, being considered too busy and senior, so a second external IT expert was drafted in for another opinion. Benchmarking trips to see systems of other clients were arranged, but no actual users of the system were included.

In parallel, the project management expert was ensuring that the PRINCE2 documentation of ‘milestone review meetings’ was being properly recorded. What could possibly go wrong?

I know you want a happy ending to this story. You will be glad to know there was one.

The matter was resolved by terminating the services of the PRINCE2 and external IT experts. Having adopted a Vanguard Method approach, the senior leadership realised that the initial business improvement project and findings were flawed. They were flawed because they were based on customer focus groups, asking managers what they thought, staff workshops and ‘away days’.

So leaders went back to first principles and established what was actually happening on the ground and why. They then learned what they could do to meet service users’ needs in the most effective, efficient and consequently economical manner. The supporting software-writing expertise was then ‘pulled’ into the organisation to build the real, future solution. The approach used is known as Rapid Application Development (RAD). It is hated by off-the-shelf IT snake-oil salespeople.

The staff who had carried out a rapid ‘check’, and who were therefore aware of just how bad the existing IT systems were, now had knowledge to take forward into a redesign. After two weeks of prototyping, the redesign team decided they needed software code-writers to work alongside them, to learn what an effective IT system should look like and what it should support them in doing. Coders then worked iteratively with the team to create software that followed the logic and flow of the work.

Understanding the need for measures relating to purpose, the code-writers ensured that the data required for capability charts was available as a matter of course rather than as an afterthought.
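
For readers unfamiliar with capability charts: one common form is the XmR (individuals and moving range) chart, whose natural process limits show what a system will predictably deliver. A minimal sketch of the calculation in Python, using illustrative repair times rather than AB Housing’s real data:

  # Capability (XmR individuals) chart: natural process limits are
  # mean ± 2.66 × average moving range.
  def capability_limits(values):
      mean = sum(values) / len(values)
      moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
      avg_mr = sum(moving_ranges) / len(moving_ranges)
      lower = max(mean - 2.66 * avg_mr, 0)  # clamp at zero for time data
      return mean, lower, mean + 2.66 * avg_mr

  # Illustrative end-to-end repair times in days (not real data)
  times = [5, 9, 4, 12, 7, 3, 10, 6, 8, 11]
  mean, lower, upper = capability_limits(times)
  print(f"mean {mean:.1f} days; predictably between {lower:.1f} and {upper:.1f}")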

When it was rolled out to the workforce, the EDIPing of staff was much easier because the workflow-focused IT had been designed in support of the work. Staff could get the measures they needed when they needed them. Because AB Housing owned and controlled the code, it was as cheap as chips to make any amendments that were necessary.

How to do it right

This example describes how Norwich-based Flagship Housing Group built its own bespoke IT system to ensure seamless interaction across the interlinked systems in its repairs and maintenance service. To do so, it decided to bring the service under direct control through its own specially created subsidiary, RFT Services.

This came about because Flagship’s chief executive and two directors took the time to participate, along with a cross section of staff, directly in the ‘check’ or understand phase. It was a professionally life-changing experience for all of them. What they discovered by seeing it for themselves was that performance was very different from what they had been led to believe. With real knowledge of the ‘what and why of performance’, they could see that there was a great opportunity to provide services in a much more effective and efficient manner.

Much of the underperformance established in ‘check’ was caused by the standardised, off-the-shelf, so-called best-practice IT in use by Flagship and its outsourced contractors. There was a compelling case for a complete rethink of the delivery model and specifically the IT systems for repairs and maintenance. This included a significantly greater understanding of the grim state of the logistics support to tradespeople in outsourced contractors.

As the ‘redesign’ phase progressed, the directors realised that having the right IT to support and integrate their requirements in terms of leadership, logistics, operations and tradespeople was absolutely pivotal to achieving their goal of service delivery as near to perfect as possible.

Nothing on the market met their specific software requirements. Like many other organisations, Flagship realised that its own internal software-writing abilities had been eroded over the years. It had bought the line chorused by many senior IT managers that writing your own software is too expensive. The answer to that, of course, is ‘compared to what?’ They have no idea of the waste and associated costs driven in by buying standardised systems. Initially, Flagship needed help from an outside IT supplier to build a system to its specific need. Concurrently, the company began to rebuild its internal IT capability to ensure self-sufficiency and sustainability in the longer term – a smart and counterintuitive move.

At the end of it, Flagship had in place a bespoke IT system that was a key enabler of its ability to deliver service right first time. The system is still being developed in an emergent manner and that will continue. This is all at a fraction of the cost of traditional dysfunctional IT maintenance systems available on the market. There is no requirement to take the unwanted software updates that earn off-the-shelf IT providers such huge undeserved profits.

What does the new IT look like?

When a repair is reported, it is recorded and visible on the system, and then fed into the workload-management facility.

This makes it accessible to those who need it, in the format they need and at the time they need it to deploy resources. It enables them to understand and plan organisational capacity against what tenants actually demand. The tradesperson attends at the time specified by the tenant, not the housing organisation. Resource controllers have full visibility of where tradespeople are and what demands still have to be met, enabling frontline leaders to deploy effectively in their support.

Meanwhile, materials used in the repair are recorded on the system. The visibility of materials usage by trade, location and building archetype allows the Flagship logistics centres to replenish van stocks in a timely manner. As with the initial coding, early logistics learning work was supported by an outside company called Perfect Flow. However, the Flagship logistics staff are now self-supporting. Crucially, their element of the Flagship IT system is fully integrated with the rest of the business, allowing the cost of the repair to be calculated when the job is closed. This is fed directly into the logistics element of the IT system so that business intelligence is acquired, in emergent fashion, on product usage and performance, in turn feeding into more effective and economical purchasing and supplier selection.
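
As a rough illustration of the data flow just described – repair recorded, materials usage captured, cost calculated at job close – here is a hypothetical sketch in Python; the types and field names are my invention, not Flagship’s actual schema:

  # Hypothetical sketch of the repairs data flow described above.
  from dataclasses import dataclass, field

  @dataclass
  class MaterialUse:
      product: str
      quantity: int
      unit_cost: float

  @dataclass
  class RepairJob:
      tenant: str
      description: str
      appointment: str                  # time specified by the tenant
      materials: list = field(default_factory=list)
      labour_cost: float = 0.0

      def close(self):
          # The cost of the repair is calculated when the job is closed;
          # materials usage would feed logistics for van-stock replenishment.
          materials_cost = sum(m.quantity * m.unit_cost for m in self.materials)
          return self.labour_cost + materials_cost

  job = RepairJob('Tenant A', 'leaking kitchen tap', 'Tue 09:00', labour_cost=45.0)
  job.materials.append(MaterialUse('tap washer', 2, 0.35))
  print(f"cost at close: £{job.close():.2f}")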

The prospects for IT in service organisations are bright – so long as it is approached in a user-oriented, purpose-focused manner. The future should not be to line the pockets of outsourcing providers, nor to sustain the UK as a centre of incompetence in IT project management. It should be to ensure IT serves the public and staff in the most effective and economical fashion.

It is essential to know what problem(s) you are actually trying to solve with IT – indeed, whether you have one at all. Just because there is something new and shiny in the marketplace doesn’t mean your organisation has to have it. IT must be fit for purpose – your purpose – not some compromise that ends up cutting you off from customers.

As at Flagship, senior leaders must be actively involved in the entire ‘understand, improve and implement’ cycle. This ensures those with authority can make properly formulated and informed choices regarding what fit-for-purpose IT consists of for their organisation.

Service staff must not be captured and constrained in how they serve customers by the IT system. Standardised IT-based transactions always cost more, in our experience, driving in failure, waste and unanticipated overall cost. Cheap is usually dear. Yet having useful and useable IT is both achievable and necessary. Bespoke can deliver what you need, when you need it, how you need it. As in the case of Flagship, well designed, purpose-focused IT is a significant part of that bright future – provided you do it your way.

John Little


Better outcomes in software development

The Age of Enlightenment was a wave of humanist and scientific thought that spread across Europe during the 18th century, producing great advancements in many fields, particularly in science.

In recent years software development appears to have experienced its own age of enlightenment with the spread of ‘Agile’ methods, which take more adaptive and iterative approaches and require continuous learning among their practitioners. But if that’s true, why does so much of the software and technology that touches our lives still seem to be problematic?

A brief background to Agile

My earliest experience of software development was during my electronic engineering degree in the early 1990s. We were introduced to the programming language Modula-2 to teach us problem-solving through the use of computers. The first chapter of the textbook laid out a method based on six steps:

  1. Define the problem
  2. Design a solution
  3. Refine the solution
  4. Develop a testing strategy
  5. Code and test the program
  6. Complete the documentation

This linear approach is also present in many of the formal or plan-driven approaches to software development that were in common use at that time (e.g. Waterfall or V-model). These methods are sequential: each stage must be completed before progressing to the next.

At that time computers were spreading fast through business, and the increasing pace and need for change triggered a crisis in the software industry. An ‘application delivery lag’ of up to three years between business requirement and working software was not unusual. Increasingly frustrated with the long development times, a group of like-minded software professionals came together in 2000 ‘looking for something more timely and responsive’.

Several emerging concepts and methodologies – iterative development (drawing on Deming and Boehm), the alignment of concurrent activity (influenced by Takeuchi and Nonaka’s 1986 Harvard Business Review article, ‘The New New Product Development Game’) and self-organising teams (as in Sutherland and Schwaber’s ‘scrum’ software development process) – now started to be drawn together. The result was a manifesto drawn up by the group which set out new principles of effective software development in contrast to the sequential and plan-driven approaches.

The document was named ‘The Agile Manifesto’. Agile was not intended to be a methodology but rather a mindset, within which a variety of methods and practices co-exist (Scrum, Kanban, Scrumban, DSDM, BDD/TDD and XP among others).

A number of organisations ascribe their success to Agile. One of the best-known and most influential is Spotify, which so far has managed to keep ahead of much bigger competitors such as Google and Apple. Agile also has critics, however, who point to high-profile failures such as the Department for Work and Pensions’ Universal Credit and less than overwhelming results from the UK Government Digital Service. So what is causing some organisations to succeed and others to fail?

Common traps associated with Agile

The tools trap

When Japanese car manufacturers such as Toyota began to gain global market share at the expense of western competitors, industry experts queued up to visit Japanese production plants to learn the secrets of their success. What they observed was widespread use of technology (robotics, Andon cords), techniques (Kaizen, Kanban, just-in-time) and training (skills such as problem-solving and Six-Sigma analysis). Much of this learning crystallised into practices that fall under the ‘Total Quality Management’ and ‘lean’ banners, which have been deployed not only by vehicle manufacturers but also in a wide variety of sectors and organisations. Yet outcomes have often been disappointing or even detrimental. The problem is that it was not the tools but rather the principles and thinking that were key to Toyota’s success, and these are completely missed when the improvement techniques are used ‘out of the box’.

The thinking trap

In the same way, with no understanding of the thinking behind Agile practices, the focus of teams remains fixed in conventional thinking, so Agile becomes reduced to cards and sticky notes on walls with teams having meetings standing up. In one large financial organisation I worked with, over 400 personnel had been through Agile training, yet a year later not one piece of work had been delivered in an Agile manner.

Even when teams learn to apply Agile principles in practice, they often run up against traditional management philosophies and structures based on the idea of the organisation as a machine to be controlled in all its components (including those completing the work), which is entirely at odds with Agile thinking.

The structure trap

At its heart, software development is a series of interactions and learning processes between knowledge workers, which sits uncomfortably with the way projects and changes are handled in organisations. Traditional managers are unsettled by the lack of a fixed plan and by demands for cross-functional personnel for unpredictable periods, which interfere with corporate resource planning and productivity reporting. They struggle with a perceived lack of structure that leaves them feeling delivery is not in their control. In some organisations this results in an ‘Agile sandwich’, with delivery teams using Agile methods to deliver changes that are then subject to weeks or months of delay to fit in with traditional planning and release schedules. No matter how Agile the code developers, they are always at the mercy of those working to outdated industrial-age management dogma.

The ‘wrong problems’ trap

The final and most destructive trap is to have teams working Agilely on the wrong problems.  In their article on improving software project management, Brendan O’Donovan and Peter Middleton show how software projects are often subject to highly unsystematic and partial selection criteria, carrying the danger that the organisation is merely getting faster at delivering the wrong thing, expensively replicating poor existing processes in ‘IT concrete’. The key to effective change lies in understanding service capability as a system and from the users’ perspective.

In a large financial organisation I worked with, an application development team was working to address a backlog of errors and bugs that were affecting an automated financial platform. Although the organisation was in the process of implementing Agile, the leader decided he first needed to better understand what was really happening in his organisation, using the Vanguard Method to do so (although, to be clear, the Vanguard Method is not one of Agile’s many flavours).

The team quickly learnt that it was much slower at resolving coding issues than it had imagined. The average end-to-end time from starting work to implementation of the solution was four months, but predictably could be up to 11 months. That was bad enough. Four further findings added to the shock:

  1. The fixes were small and relatively simple (typically no more than 10 lines of code, limited to one module and with no impact on external system interfaces)…
  2. … yet each issue had a massive effect on operational teams, which had to take manual action, often complicated and time-consuming, to correct the impact on a supposedly automated system. In one case, operations calculated that correcting errors in customer accounts was costing it the equivalent of the work of five full-time employees.
  3. Issues had often been identified long before the development team got to them, typically sitting in a backlog for two years. The problem therefore affected the operational team for far longer than the developers were working on it (see the measurement sketch after this list).
  4. Perhaps most worrying of all, the controls and governance structure was showing the team’s performance as ‘green’ – ie it was meeting performance requirements with no outstanding issues.
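
Finding 3 has a measurement implication worth making concrete: the end-to-end time that matters runs from when an issue is first identified, not from when developers pick it up. A minimal sketch of that measure in Python, with illustrative dates rather than real ones:

  # Sketch: measure end-to-end time from identification, not from work start.
  from datetime import date

  issues = [
      # (identified, work started, solution implemented) - illustrative dates
      (date(2013, 3, 1), date(2015, 2, 10), date(2015, 6, 12)),
      (date(2014, 1, 20), date(2015, 11, 2), date(2016, 3, 1)),
  ]

  for identified, started, implemented in issues:
      developer_view = (implemented - started).days       # what the team measured
      operational_view = (implemented - identified).days  # what operations lived with
      print(f"developer view: {developer_view} days; "
            f"operational view: {operational_view} days")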

Rather than simply using the Agile techniques they had been trained in, the leader and team decided to work to principles drawn from systems theory, in effect to determine whether the resulting outcomes were beneficial or not in a wider context. In essence they were putting scientific method to use by testing their hypothesis that ‘the new principles will improve the team’s effectiveness and therefore performance’.

Within months they had learnt how to put the new principles into practice, with effective solutions taking an average of four weeks, and predictably up to eight weeks, to implement. The change of focus and decision-making resulted in a new approach to governance and collective learning, leading to further improvements. A year on, effective solutions were predictably being implemented in less than a week, with simple scenarios being resolved in two days, as the application team and its leaders progressively identified constraints obstructing the work and took action to remove the ‘waste’. Having eliminated the backlog, the team progressed to working closely with the operational team to proactively prevent backlogs from occurring.

All the improvement occurred without the use of Agile or related operating models. At the same time, some of the practices emerging from the systems principles had parallels with Agile.

We were left with a working hypothesis that better outcomes in software development are dependent not so much on using Agile practices as on the effective application of systems principles. A critical consequence of using a systems approach, as opposed to a software perspective, is that business change and IT efforts are focused on issues that demonstrably affect outcomes for customers.

Releasing the new principles into production

Unlike new software code released into production, principles cannot simply be ‘installed’ in groups of human beings. The Italian astronomer and physicist Galileo is one of a number of historical figures who have suffered persecution for espousing new thinking that challenges the common dogma held by the hierarchy. Galileo was ordered to turn himself in to the Holy Office for trial for maintaining the belief that the earth revolves around the sun, then deemed heretical by the Catholic Church. Standard practice demanded that the accused be imprisoned and secluded during the trial.

The development of new principles, or a new manifesto, requires a change of thinking and behaving. In conceiving the Vanguard Method, one of John Seddon’s founding tenets was that effective change cannot be imposed on people but has to be enabled through study of what is happening in the work and why. Helping people to experience for themselves why change is required may take more effort than other approaches, but it is the only one that works sustainably. If training programmes for techniques such as Agile do not challenge managers’ traditional frame of reference, they will continue to manage rather than lead teams, nullifying Agile’s potential benefits. (Which is why new employees often find it easier to tune in to the Agile mindset and practices than do staff with long experience of older methods.)

Even if people do get to understand for themselves how and why Agile principles should be applied, this still leaves them short of a method to identify the constraints affecting the performance of Agile teams. This is why in the earlier example the Vanguard Method was key to the remarkable improvements in cycle times that were eventually achieved.

So what?

Any organisation adopting Agile thus faces a choice: either treat it as the latest fad or accept that it needs to be set in a systems perspective. Do managers stay in the comfortable framework of current dogma? Or do they move into the enlightenment that comes from understanding the organisation as a system and dealing with change accordingly? The latter demands a change of thinking not only in those developing code, but in managers throughout the organisation, regardless of functional speciality or seniority. From my observation, successfully applying Agile requires a solid investment of leadership in:

  • propagating a manifesto of principles grounded in systems theory
  • enabling a learning organisation that is able to put the principles into practice
  • using intervention theory to enable change in the organisation.

Naturally if you have any data or observations that enable me to improve on my hypothesis I would be very grateful to hear about them.

Richard Moir