Agile Is About People Too

I’m not the first to say this, but my experience shows that people are still being forgotten in “Agile” adoptions.

Invariably in software development, we are writing software to create products to help solve other people’s problems. People writing software for people. The tools we use to write & distribute that software augment the work we do, not replace it.

For the purposes of this post, let’s blow the dust off the Agile Manifesto and step through each of its ideas:


Individuals and interactions over processes and tools

The top two definitions of “individual” (which I believe reflect the context in which the Agile Manifesto uses the word):

  • a single human being, as distinguished from a group
  • a person

From the principles behind the Agile Manifesto, I take “interactions” to be conversations & collaboration between those human beings, or persons.


Working software over comprehensive documentation

How is the decision made which software gets written?

How is software defined as “working”?

How does the software get written? (I’ve heard of self-modifying code, but that suggests the code has already been written; see SMC on Wikipedia.)


Customer collaboration over contract negotiation

What does a Customer look like in your organisation?


Responding to change over following a plan

In order to even recognise the need to respond to change (especially from the real world), I argue you need to have the power of judgement. Can machines make judgements?


I’m hoping the answers you arrived at recognised the need for human beings: a person, people…

Go & take a look at the principles behind the manifesto to see why we do the Agile practices that we do.

Without people, we wouldn’t have software. People are complex systems & this complexity is amplified when we’re expected to work together.


“No matter how it looks at first, it’s always a people problem”

Jerry Weinberg, The Secrets of Consulting.


Keep your people happy, people!


Further Research:

The Human Side Of Agile – Gil Broza

Agile People of Agile Process – Mike Roberts

The Happy Secret To Better Work (TED talk) – Shawn Achor


  • Rick Robinson

    Hi Duncan,

    Thanks for linking to my article about the ability of computers and machines to process “reasoning” as we would understand it. I think your question “Can machines make judgements?” absolutely goes to the heart of the debate we need to have about the role of technology in our lives.

    Computers can certainly make *choices* based on data that is available to them; but that is a very different thing than a “judgement”: judgements are made based on values; and values emerge from our experience of life.

    Computers don’t yet have values; and so they can’t take judgements. Today, that places a fundamental limit on the role that we should allow them to play in our lives and society, in my opinion.

    Will that ever change? Possibly: Steve Grand (an engineer) and Richard Powers (a novelist) are just two of the people who have explored what might happen if computers in the form of robots are ever able to experience the world in a way that allows them to form their own sense of the value of their existence.

    If that ever happens, then it’s possible that technological entities will be able to make what we would call “judgements” based on the values that they discover for themselves.

    But those values would not be our values; they would be based on a fundamentally different experience of “life” than ours. And there is therefore no guarantee at all that the judgements resulting from those values would be in our interest.

    Sorry for such a long comment; but your brief question is one of the most important that I can think of at the moment, and I think it merits a full and considered answer!

    Best regards,


    • DuncanNisbet

Hi Rick, thanks so much for stopping by & adding your voice to the conversation, I really appreciate it!

      This is an area I have only just started exploring, predominantly through the works of Dave Snowden (Cynefin & sense making) & Harry Collins (The Shape Of Actions).

      With regards to the machines having values not being our values, but those discovered for themselves, how would they even start having values if we humans didn’t give them even the concept of values? (Likely I’m showing my ignorance here)

      Thanks for the 2 references – it’s always good to have more sources of information. Just checking I have the right people:

      Steve Grand

      Richard Powers
      (Pygmalion & Echomaker in particular)


      • Rick Robinson

        Hi Duncan,

        Yes, those are the right people; Richard Powers’ book “Galatea 2.2” was the one I was thinking of in particular.

        >> how would they even start having values if we humans didn’t give them even the concept of values?

I’m absolutely no authority on that subject; but I find the idea that Steve Grand describes in “Growing Up with Lucy” interesting: that if we build artificial systems that are able to experience the world by moving about in it with some autonomy, then those experiences will lead values to emerge. Steve also builds artificial systems that replicate known features of biological systems – e.g. hormones – which *might* support that emergence, e.g. through providing a partial mechanism for emotion. I thought these ideas had echoes of Robert Pirsig’s exploration of “value” in Zen and the Art of Motorcycle Maintenance.

        I really don’t know if those ideas are on the right track or not; but it seems to me that our own sense of values must somehow have evolved out of our experience and evolution.

Thanks also for the references to Snowden and Collins – I’ll have a look at them,



        • DuncanNisbet

          Thanks for the confirmation Rick – sounds like Steve Grand has some great ideas.

Zen & the Art of Motorcycle Maintenance is in my pile of books to read – one day I will read it!