101 Ways is a product-focused technology consultancy.
At 101 Ways we work on the basis of selecting the best individuals for a specific project. We build teams of highly skilled people who share our philosophy and values, and we actively manage and support our people through the life of a project. Follow this blog to learn about agile adoption, agile leadership, agile planning, agile project management and much more.
I’ve suffered with anxiety throughout most of my life,
but it wasn’t until a decade ago that I even knew that’s what it was. And oddly, I have Peter Andre to thank for that. While it would be fair to assume that another chart comeback post-Insania / A Whole New World remake is enough to cause nightmares, it wasn’t that.
Once upon a time I worked for ITV as a Senior Developer. While there, I was watching a live recording of GMTV, where Peter Andre was ensconced on the sofa talking about his mental health battles. I watched, fascinated, as he described his experience with panic attacks; a light bulb switched on in my head when I realised that I did too.
Finally, I had a name for what I was feeling.
Watching a man talk so openly about his feelings back then felt monumental. Nowadays, anxiety has become the zeitgeist of mental health and while there is an abundance of think pieces on the topic, men remain the minority authors or speakers. This is particularly worrying when the statistics reveal that:
12.5% of men are suffering with mental health problems;
Men are three times more likely than women to become alcohol or drug dependent;
Men are less likely to access psychological therapies than women, representing only 36% of referrals to IAPT (Improving Access to Psychological Therapies); and
76% of all suicides are men.
While I now knew that what I had was anxiety, talking about it still wasn’t easy; it’s far simpler when you’re a slick-haired, six-packed celebrity who earns millions from sharing the minutiae of your life with the general public.
At the end of 2018, I began struggling again when I moved from contracting to consulting and was required to be onsite at four different clients. For around six months, I was waking up every day not knowing where I was required to be, or when, which was incredibly stressful. The nature of our job is dealing with dysfunction, and the mental overload of context-switching became completely overwhelming. It was almost impossible to fully focus and commit to one thing as I was always forward-planning and moving on to the next. 101 Ways HQ became a safe haven, and somewhere I could ‘hide’.
I was worried about admitting that I couldn’t cope, so I tried to deal with it by myself, but it began impacting not only my work life, but my home life too. I became really restless and lost sleep, which meant I was constantly on edge. I have two wonderful, boisterous boys aged 6 and 8 and it pains me to admit that I would be less patient with them and have a shorter fuse.
The feeling of tension in my chest was a daily occurrence, until it got to the point where I felt trapped and didn’t know what to do. It took a lot of courage, but eventually I spoke to senior management about my concerns. First and foremost, I needed to know whether feeling the way I did was normal, or whether it was just me who was unable to manage. Luckily, our CEO, Kelly, was really supportive and gentle with me, explaining that it was a symptom of what happens when you’re lacking stability. Sharing the load, and importantly just feeling able to talk about it with both colleagues and family, made me feel better. But I knew it wasn’t as simple as that, and I also needed to carve out long-term strategies for dealing with my anxiety when (not if) it rears its head again.
After speaking to a therapist, I realised that there were things I could do to feel in control again. It was clear that I needed to redress my work / life balance and maintaining an exercise routine became an incredibly important part of that. I prioritised going to the gym or running in the mornings and attending jiu jitsu classes twice a week in the evening. Jiu jitsu specifically helped me focus on one thing for a couple of hours; everything else that is churning around my brain just melts away.
At work I was able to cut the number of clients down to a manageable load, and if there were particular areas – such as the commercial, rather than technical, side of projects – where I needed a helping hand, I felt able to reach out to colleagues with more expertise, who would support me and vice versa.
Anxiety can affect anyone; but having it isn’t the whole problem, it’s not being able to share your feelings about it, especially for men. It’s not weak to ask for help – no-one expects you to do it alone. I hope that sharing my story helps break that taboo because as another celeb (Bob Hoskins) famously quipped, it really is ‘good to talk’.
Fair warning, this is going to get a little complicated and I will try and explain it in layman’s terms, although there will be an equation or two involved – but nothing more complicated than middle school algebra … I hope.
Most encryption on the internet ultimately rests on the difficulty of finding the factors of a really large non-prime number. Unlike multiplying numbers together to get a really large number, splitting a really large number back into its integer factors takes a very long time. The best method we have for doing this is incredibly slow. In my previous article I mentioned that the largest factored RSA number was 768 bits long and it took 15,000 CPU years to factor (two years of real time, on many hundreds of computers). This takes too much time and electricity to be useful.
Enter Shor’s algorithm, quantum computers, and the threat these pose to effectively breaking encryption. The algorithm is based on two aspects of quantum theory, ‘quantum superposition’ and ‘interference’. Peter Shor is an American professor of applied mathematics at MIT and is the inventor of this quantum algorithm from his work in the 90s in the field of quantum computation.
RSA encryption is based on a mechanism of obscuring information with a large number such that the only way to unobscure it is by finding the factors of this large number. Our current best method effectively guesses a number and if it isn’t a factor, then it tries another until it finally finds a guess that works. Even with optimisations during the process, it is extremely slow.
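For the curious, here’s roughly what that guess-and-check approach looks like as a toy Python sketch (my own illustration – real factoring records use far cleverer sieves, but they all still scale badly as the number grows):

```python
import math

def trial_division(n):
    """Naively factor n by trying candidate divisors in turn.

    If a guess doesn't divide n, move on to the next one -- the
    simplest version of the guessing process described above.
    """
    for guess in range(2, math.isqrt(n) + 1):
        if n % guess == 0:
            return guess, n // guess
    return None  # no factor found: n is prime

print(trial_division(15))    # (3, 5)
print(trial_division(3233))  # (53, 61) -- a toy RSA-style modulus
```

Every known classical method is, at heart, a variation on this theme, which is why factoring even a 768-bit number took thousands of CPU years.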
Ultimately though, all encryption is based on the hope that the factoring process takes so long that people won’t bother and to date this has largely been the case.
Shor’s algorithm is based on making a bad guess at a factor and then using the algorithm to turn that bad guess into a much better guess. Classical computers can also run Shor’s algorithm, but they take a very long time to complete it; a quantum computer can run it in an astonishingly small amount of time.
Fundamentally, it can be broken down into two parts:
The mathematical part – making the guesswork more accurate; and
The physics part – speeding up the process.
The TL;DR here is roughly as follows:
Make a guess, g, at a number that shares factors with the RSA encrypted number, N
Shor’s algorithm says that a much better guess would be g^(P/2) ± 1, where P is the number of times you’d have to multiply g by itself such that
g^P = m * N + 1, where m is some whole number (so m * N is a multiple of N)
We can find P very quickly by using quantum superpositions and interference so that all the wrong superpositions of P destructively interfere with each other and you’re left with the right value
Working this back we can then use Euclid’s algorithm to find the real factors
Then we break the encryption!
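To make those steps concrete, here’s a toy, entirely classical Python version of the whole recipe (my own sketch – the period-finding function is the slow part that a quantum computer would replace):

```python
import math
import random

def find_period(g, N):
    """Classically find the smallest P where g^P leaves remainder 1 mod N.

    This brute-force loop is the slow step; Shor's quantum trick finds
    P exponentially faster.
    """
    value, P = g % N, 1
    while value != 1:
        value = (value * g) % N
        P += 1
    return P

def shor_classical(N):
    """Run the TL;DR steps above, entirely classically, on a toy N."""
    while True:
        g = random.randrange(2, N)
        shared = math.gcd(g, N)
        if shared > 1:            # lucky guess: g already shares a factor
            return shared, N // shared
        P = find_period(g, N)
        if P % 2 == 1:            # we need an even P so g^(P/2) is whole
            continue
        factor = math.gcd(g ** (P // 2) - 1, N)
        if 1 < factor < N:
            return factor, N // factor

print(shor_classical(15))   # (3, 5) or (5, 3)
```

On a 768-bit N, find_period would run for longer than the age of the universe, which is exactly the gap the quantum version closes.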
The Maths Part
(NB: The following section uses * to symbolise multiplication – while this may offend purist mathematicians, for the purposes of this article it is used due to its familiarity to non-mathematicians)
Let’s start with a big number N, which you’ll need to find the factors of. First step, make a guess, g, which is a number less than N. A guess that happens to share a factor with N is fine too, because Euclid’s algorithm – which I won’t go into now – would find that shared factor directly and give us the real factors straight away.
We can now use a trick based on the following, to turn the bad guess into something more accurate:
Take any pair of whole numbers that don’t share a factor. Multiply one of them by itself enough times and you’ll eventually end up with some whole number multiple of the other number, plus 1
Factor A * Factor B → A * A * A * A …(enough times) = some multiple * B + 1
Written more succinctly, this is:
A^P = m * B + 1, for some power P and some multiple m. The important part here is that we eventually end up with a situation where we have a remainder of 1.
Let’s take a look at a few examples.
If we take 7 and 15 as A and B, then:
7^2 = 3 * 15 + 4
7^3 = 22 * 15 + 13
7^4 = 160 * 15 + 1 ← We have a good match!
Or looking at 42 and 13:
42^2 = 135 * 13 + 9
42^3 = 5699 * 13 + 1 ← And another match
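These examples are easy to check with a couple of lines of Python – the three-argument pow(A, P, B) multiplies A by itself P times, keeping only the remainder after dividing by B:

```python
# Raise the power until the remainder after dividing by B hits 1,
# then print the match in the same form as the examples above.
for A, B in [(7, 15), (42, 13)]:
    P = 1
    while pow(A, P, B) != 1:
        P += 1
    print(f"{A}^{P} = {A ** P // B} * {B} + 1")
# Prints 7^4 = 160 * 15 + 1 and 42^3 = 5699 * 13 + 1
```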
Working this forward, for our big number N and some bad guess g, we are guaranteed that:
g^P = m * N + 1
Being clever with our mathematics here, we can also write this as:
g^P – 1 = m * N
Or be even more clever, by rearranging the algebra like this:
(g^(P/2) + 1) * (g^(P/2) – 1) = m * N
Now we have an equation which roughly looks like something * something = m * N, that is to say, the unknown factors. And even better, these two parts are in the format that Shor’s algorithm prescribes, which is to say: take a guess g, multiply it by itself P/2 times and then add or subtract 1:
g → g^(P/2) ± 1
In this equation, we now have a situation where each part can be a multiple of the actual factors we’re looking for and Euclid will come to the rescue so that we can find the real factors. Once we have them, we’ll have broken the encryption!
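Sticking with the earlier 7-and-15 example, here’s that final step in Python (math.gcd is Euclid’s algorithm):

```python
import math

# From the earlier example: 7^4 = 160 * 15 + 1, so P = 4.
N, g, P = 15, 7, 4
a = g ** (P // 2) + 1   # 7^2 + 1 = 50
b = g ** (P // 2) - 1   # 7^2 - 1 = 48
print(math.gcd(a, N), math.gcd(b, N))   # 5 3 -- the real factors of 15
```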
The Physics Part
(Note: The notation for a superposition is |something> where the something is a value, set of values or a function)
Now for the hard part, how to find P (i.e. the number of times we need to multiply our bad guess by itself to get: m * N + 1).
Unlike a normal computation – which gives one answer for a given input – a quantum computation can simultaneously calculate a whole load of possible answers for a given input by using a quantum superposition. Even better, all of those possible answers are whittled down to a single correct answer by destructive quantum interference (i.e. just like waves can destructively interfere with each other to cancel out). Let me try and explain one step at a time.
In general it can be very difficult to try and put anything into a quantum form where all the wrong answers destructively interfere, but that’s exactly what Shor’s algorithm does for the problem of finding P.
Just to recap, we’ve made a bad guess, g, and we’re trying to find P, such that g^P = m * N + 1. A P that does that very likely gives us terms (g^(P/2) ± 1) that share factors with N.
Next we need to build a quantum mechanical computer program that takes a number, x, as input and then raises our guess to the power of x. The program then needs to take that number and calculate how much bigger than a multiple of N it is, let’s call that the remainder.
|x> → qf(g^x) → |x, g^x> → qf(?>m*N) → |x, +r>, where qf is a quantum function, and the remainder, r, we’re looking for should eventually be 1.
As it’s a quantum computer we’re working on, we can send in a superposition of numbers instead of a single number to speed up the process. For example:
|1> + |2> + |3> + … → qf(g^x) → |1, g^1> + |2, g^2> + |3, g^3> + …
And then a superposition of how much bigger those powers are than a multiple of N:
→ qf(?>m*N) → |1, +19> + |2, +37> + |3, +23> + …
If we try to measure the superposition at this point we will run into trouble (we’re looking at the cat in the box) because the quantum state will collapse and return a random (and not necessarily the correct) answer. Instead, we need to get all the non-P answers to destructively interfere and cancel out, leaving us with only one possible answer, the true P.
Luckily, there is another mathematical trick that will allow us to do just that. So, again, let’s recap. If we knew what P was, we could raise our guess, g, to the power of P and get 1 more than a multiple of N:
g^P = m * N + 1
If we take our guess to a random power, say 42, then it’s probably going to be some other number more than a multiple of N:
g^42 = m * N + 7
Now here’s the interesting part, if we raise our guess to the power of that random number (42) plus P then it will be the same remainder with a different multiple:
g^(42+P) = m2 * N + 7
Notice that the remainder is always the same no matter what multiple of P we add to our random number x:
g^x = m * N + r
g^(x+P) = m2 * N + r
g^(x+2P) = m3 * N + r
g^(x+3P) = m4 * N + r
Effectively, P has a repeating property to it such that if we take our guess and raise it to the power of a random number and then add multiple Ps to it then the remainder stays the same.
g^x or g^(x+P) or g^(x+2P) or g^(x+3P) or … ⇒ +r (r is always the same)
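This repeating property is easy to check classically for a small example. With N = 15 and g = 7 the period P turns out to be 4, and the remainder is the same however many whole Ps we add to the power:

```python
# Remainders of g^x, g^(x+P), g^(x+2P), ... are all identical.
N, g, P, x = 15, 7, 4, 3
remainders = [pow(g, x + k * P, N) for k in range(4)]
print(remainders)   # [13, 13, 13, 13] -- same remainder every time
```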
This repeating pattern isn’t something you can figure out by taking our guess to just one power. Rather it’s a structural relationship between different powers and we can take advantage of it because quantum computations can take advantage of superpositions of different possible powers.
If we take the superposition of all possible powers and just measure the amount more than a multiple of N part (the remainder, +r), then we are left with a superposition of just the powers that could have resulted in the same remainder.
The key part here is that the powers left in this superposition are exactly P apart from each other! They repeat with a period of P, or to be more specific, with a frequency of 1/P. If we can find the frequency, we can find P and break the encryption. Luckily, we have a tool for finding frequencies: the Fourier transform.
Fourier transforms are a way to take an input signal, say an audio signal, and produce a graph of all the frequencies that the signal is made up of. This is broadly how noise-cancelling headphones work: they analyse the frequencies in the surrounding environment and cancel them out using wave interference. Very clever stuff.
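To illustrate the idea, here’s a toy classical discrete Fourier transform in Python picking the period out of a repeating signal. It’s only a sketch of the principle: the quantum Fourier transform does the equivalent job on our superposition of powers.

```python
import cmath

# A signal that 'fires' at every multiple of P = 4, sampled 32 times.
Nsamples, P = 32, 4
signal = [1.0 if x % P == 0 else 0.0 for x in range(Nsamples)]

# Plain discrete Fourier transform: the size of each frequency component.
spectrum = [abs(sum(signal[x] * cmath.exp(-2j * cmath.pi * k * x / Nsamples)
                    for x in range(Nsamples)))
            for k in range(Nsamples)]

# The strong peaks sit at multiples of Nsamples / P, i.e. at frequency 1/P.
peaks = [k for k, size in enumerate(spectrum) if size > max(spectrum) / 2]
print(peaks)                 # [0, 8, 16, 24]
print(Nsamples // peaks[1])  # 4 -- the period P, recovered
```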
To solve our problem here, there is a quantum version of the Fourier transform, which we can apply to our superposition and find P. In a nutshell, it causes all the possible values that aren’t correct to destructively interfere so that we are left with the correct frequency 1/P.
Now that we have 1/P, we can invert it to get the value of P (i.e. 1/(1/P)). As long as P is an even number, we can put it back into our expression g^(P/2) ± 1, and as long as neither term is a multiple of N, we are guaranteed that each shares a factor with N. We can then use Euclid’s algorithm to find the real factors of N and finally break the encryption!
We have quantum computers today, so what’s stopping us from breaking encryption right now? In a nutshell, the size of quantum memory in these computers. This is still an emerging field of research and although some quantum computers exist, they have nowhere near enough quantum memory yet to be able to make the calculations necessary to break encryption. Some estimates say that around 5096 qubits are required to break the 768-bit RSA encryption in the example given above. Currently, we’re in the 10s of qubits – a long way off.
Saying that, a lot of time and money is being poured into this field and developments are gathering pace. What is certain, however, is that quantum computers WILL eventually break today’s encryption, so the roll-out of better encryption is essential in combating this and keeping our private information truly private.
I’m on one of our client sites to meet Sonny Lam: Data Explorer, Scrum Master and lounge wear extraordinaire. While I like my interviewees to feel relaxed, I’ll admit this is the first time I’ve ever questioned someone wearing slippers, and Pokémon ones at that. But whatever gets the job done, right?
I had promised to bring custard creams, but during the commute across London, it had unfortunately slipped my mind. I quickly throw some apologies in with my greetings / footwear compliments before sitting down to talk all things data.
…And just in case I almost lost you at that last line, I can promise there won’t be any mention of (whispers it) GDPR. Instead, I’ll be giving Sonny the Spanish Inquisition about using a data-led approach to delivering value using Scrum, turning chaos into calm and then consistency of delivery.
Hi Sonny, it’s clearly been an exhilarating time for the client, and a lot of change has happened since its inception three years ago.
Yep! In the last year alone, the company has experienced astronomical growth – going from ~300 people through three restructures, to ~3,000 people now.
The data warehouse was originally built for the finance team; a single, specific use case and it was done quickly, an MVP. However, as other teams heard about its potential and value, they wanted to utilise the data for their own objectives. Consequently, the warehouse had things added to it haphazardly.
As the company was expanding into new territories, more data was not only being acquired, but extracted for new things and the data warehouse was really struggling. It was being maintained by a small group of people and wasn’t built for scale because it lacked appropriate design, investment, support and infrastructure. Changes were being made directly in the live environment so bugs weren’t caught in time and had a knock-on effect that weren’t obvious until much later. It led to a lot of pain.
Why do you think this happened, and how did you come to be involved?
At a micro level, the data team grew accustomed to just delivering tickets; they needed to understand how to ‘deliver value’. At a macro level, more leadership and management was required to understand how to effectively engage the team and therefore capitalise on what it could offer. The business as a whole needed to be informed and educated about the team’s capabilities and limitations; it was simply seen as an under-performing data service centre. The analysts and data team didn’t push back as they felt there was no point, but even if they did, they didn’t really know how to.
In February 2018, 101 Ways placed a product team onsite to work on the main product platform. Three months later, a second team was brought in, specifically for data with me as the Scrum Master, with additional data engineers, QA and tech lead following shortly after.
It sounds like you were dropped in the deep end and experienced a baptism of fire. Where do you start with something like that?
It was sink or swim; I spent my first week observing and absorbing absolutely everything. I arranged a one-to-one with every member of the data team to understand them on a personal level in all aspects. I did a lot of walk-and-talk meetings, and spent a lot of time passive smoking with staff on cigarette breaks! I then took all of that honest feedback (and my newly blackened lungs), compiled it into a strategy and presented to the team their ‘Agile journey towards AWESOME’.
Awesome? Is that the technical term? Tell me more.
Yes! [Laughs] It’s a way of inspiring the team that there is so much more and building from there. I wanted to show the team that someone was on their side supporting and guiding them forward. That was the most important message to drum in, because it was clear the team needed more love!
My job was to give whoever needed it, the courage to say ‘no’ and ‘why’, but also ‘here are other options to achieve the same thing’. In a large organisation that is growing at speed, no-one says ‘no’, especially if they’ve exclusively always said ‘yes’. I became a daily human shield when they had to challenge poor requirements with actual value.
Where did that journey begin?
With a vision via a slide presentation! [Grins and shows me the below]
Doing it this way, I could:
Empower the team – inspire them to give more as an individual and even more as a team;
Show them the strengths they already have;
Show them the pain they currently suffer from;
Introduce the team foundations to build upon;
Show them the vision (the ‘what’); and
Go through the ideas to get them there (the ‘how’).
As a result, the team knew (at least on paper) what the ‘journey towards awesome’ would look like. To get there, it was about having someone with specific experience and balance; someone who was always willing to do the right thing and who invested in the long term over the short term, which is hard.
Going into a difficult environment and pressing the ‘restart’ button isn’t always easy, or even possible – what challenges did you face in terms of changing both people and process?
There had been a lot of growth, with new people joining and with each restructure, you’d lose people with essential historic knowledge and therefore lack any sort of delivery consistency. Habits are hard to break – especially bad ones. You can’t expect to change everything straight away, it’s about identifying which practices are valuable and finding ways to turn those into habits.
I started small, with team lunches and sharing chocolate goodies, so that we got used to coming together as a team. [Clearly a snack fiend, I feel another pang of guilt at my earlier empty-handedness]. It may seem basic or unnecessary, but actually encouraging the team to share nice moments helped them bond and fostered a good atmosphere. To see the positive outcome of this investment took about three months, but building a new culture is always a work in progress.
Why did this situation call for a new approach?
It became clear that I needed to establish a ‘why’ within the team. There was no purpose; the analysts were there and were grinding away, but they didn’t really know why or how they were adding value for the wider business. Even senior management and other teams I asked, regurgitated what they thought was the company mission, but I felt there needed to be more. This lack of clarity had filtered down to the teams and left them wondering what their role was, or worse, not caring. So I created a team mission statement to align the data team and communicated their new vision of why they were there.
Afterwards, whenever the team did something, it wasn’t just for the sake of it. Either they were solving problems or improving the working processes, but always with a focus on value.
Traditional Scrum tactics have been tried and tested in complex environments before with success, what was the problem here?
Quite simply, it wasn’t working. Domain knowledge was lost as people left and a huge effort was required to get new hires integrated in a complex situation. Story points used up incredible amounts of time and resources unnecessarily. An aspect of being lean in Agile is about removing waste so there is nothing left but value. If something is not valuable, it is wasteful. The lack of team maturity due to all the changes was leading to an incredible amount of waste and so I felt there had to be another way, another option.
That’s when you brought in the ‘no estimates’ approach?
Yes, there are many ways of doing no estimates, but at base level it’s about breaking things down, in a similar way to story mapping. I used the Tim Ferriss DiSS framework:
Deconstruct – Break it down (to fit your context);
Identify – What takes 20% effort and gives 80% value?
Select – Group together all the priorities that make sense or can be delivered together;
Sequence – Arrange and order it for delivery (again fit it to your context).
I had the data team break down the tickets into single, achievable pieces of work, deliverable within three days. This immediately revealed dependencies affecting delivery outside the team, whether there were any unknowns and showed us where to dig deeper if required. Doing this gave the team the balance and flexibility to focus on value – and delivering it as a priority – rather than churning out big chunks of work where not all of it was necessary.
From there, we would load up the next sprint conservatively and measure the throughput data in terms of lead time (customer request to delivery), cycle time (the time work begins to delivery) and my own customised measurement of local lead time (when we’ve prioritised and committed to it in the sprint to delivery).
What was the benefit of doing it this way, including the customisation?
It helped manage stakeholder expectations; you need to understand where they’re coming from as well as speak their language. Here, the leadership team and stakeholders wanted to know ‘How many sprints will X take?’ and measuring local lead time gave us the answer.
It took time, but eventually the data team understood that poorly crafted tickets were not good enough. We needed a certain level of refinement in order to achieve ‘awesome’ and I gave them the courage to only accept awesome.
It can be hard to go against the grain, especially when you have opinions from those that had ‘been there and done it another way’. How did you build trust in the alternative and in your leadership?
By engaging constantly, supporting teams, their members and showing the value it created, even when it wasn’t my role to. Being ahead of the game in terms of requirements; I gave people what they needed before they asked for it. Whether that’s professional support or office biscuits and some extra time off to do important personal things, it doesn’t matter. You’re showing integrity.
I believe in going above and beyond as a leader because it fosters a mentality of ‘all in it together’. If I do it for them, they will (hopefully) want to do it for me and others.
When priorities were being delivered – in some cases ahead of time and to the level expected – the trust grew and the pressure from senior stakeholders decreased somewhat. Leading with data allowed us to provide a more informed approach to decisions, commitment and expectations.
When you go into the office and feel the team’s energy, see their hunger for challenge and increased engagement, that is when you know you are on the right path.
Presumably this reduction in pressure translated to the data?
Of course – when I started in July 2018, out of 37 tickets, only three were delivered by a team of four people (~8% of what was committed to). Within a month, over 80% were delivered when the focus shifted to achievable value, and it’s been maintained at 60% or over ever since, even with so many team and organisational changes.
Every sprint has a context. Month-to-month during 2018, we were expanding into new markets: Italy, Brazil, Spain, USA etc. While the statistics were not perfect, as it’s knowledge work, it gave a good indication of what happens during each sprint and the variables affecting it. It’s about using data to understand which decisions have impacted delivery, how and why. Things like recruitment losses, additional, urgent tickets, holiday periods, sickness, lack of expertise and knowledge. We were able to see that losing a senior engineer cost us five-to-six tickets.
What did that mean in terms of actual sprint times?
We could measure the cost of bad decisions. Using the data, we could show senior stakeholders that if they wanted to do ‘X’ without due consideration, it would cost us ‘Y’ amount of days, which would put other delivery at risk. For example, we could ‘show’ that building a bespoke dashboard without clear requirements and refinement had cost the team 11 days of effort, and where most of that effort was focused, when the initial perception from the top was that it would be a ‘quick’ thing to build (less than four days).
Prior to the client’s partnership with 101 Ways, a user story could take two months (54 working days / four sprints) to deliver and this was, wrongly, accepted as the norm. With a data-led approach, we improved to single digit working days to deliver value. It allowed me to communicate in the same language as senior members of the business and show the rewards of investment in a balanced process and more importantly, changing the mindset. If they gave us what we needed to be awesome, we could deliver what they wanted and more quickly.
While we couldn’t measure feelings or morale, each statistical improvement also backed up all the good vibes going on at team level.
You’re moving on to a new role, what would you like your legacy to be?
In my opinion the mission of any Scrum Master is to make yourself redundant, but ultimately I hope that I delivered value and made a positive impact. More importantly, I left happy and safe in the knowledge that the data team knew the difference between ‘ok’ and ‘awesome’. Job done.
And with that, Sonny and Snorlax head off to their next meeting. Let’s hope there’s biscuits.
My mother calls me several times during the day and when I answer it’s usually to tell me about the latest Duchess of Sussex gossip or something silly my brother has done.
And yet every time I have to say the same thing:
“Mum, I’m at work, I can’t talk now”.
I guess it’s funny for our parents to think of their children in full-time work. Of course, my mother is very proud of
me and #humblebrags to her friends about my latest achievements, but ultimately she still sees me as her little girl.
Working at 101 Ways is great. The company invests in us as leaders; we have management meetings every six-to-eight weeks which allows time for reflection and reviewing the past month or so’s successes and pitfalls. This way we’re all able to understand what we should continue doing and what we need to improve and how. But as much as I love being a part of a great leadership team, being one of the youngest in the room does not do wonders for my anxiety, which I talked about in a previous post.
I can’t deny that the feeling of being ‘young and inexperienced’ in comparison to my peers (and therefore undeserving) plays on my mind regularly. I’m in a management position and while I don’t have decades of talent experience under my belt, I do have the skills. It is this that I try to remember each day, surrounded by amazing people who have already achieved so much in their careers.
How does this play out for me? For one, I often feel like the only thing I am successful at is deceiving people. My anxiety leaves me wondering when (not if) the team will find out that I am not that special, coupled with an overarching fear that I cannot live up to others’ expectations of me and will eventually let them down. And when I do, I’ll be sent packing with a large dose of shame and confirmation that I was right to think I was never really up to the job in the first place. Yet, with almost a decade of working in the recruitment industry under my belt, including two years at 101 Ways, this has never happened. Nor will it, because logically – and factually – it’s simply not true.
It’s a daily battle to remind myself that I’m supposed to be where I am, so I wanted to share my way of coping in the hope that it might help others in the same position.
I can imagine that in this 24/7 digital world, it may come across as slightly old fashioned advice, but I promise the proof is in the pudding. Get out a pen and paper (remember those things?) and write down your professional achievements, no matter how small and no matter whether you were fully or partially responsible for them. Everything counts.
While I find it difficult to shout about them, here are the things I’m proud of and were stepping stones to where I am today:
Managed a team of four by 27;
Received an award for being Top Performer / Biller in 2016;
Successfully changed career direction from being an external recruiter to an internal one;
Been part of a team that helped grow 101 Ways from two to over 100 people across London and Amsterdam in two years;
Implemented new processes to boost transparency when it comes to hiring;
Trained as a Scrum Master; and
Helped build an awesome Talent team at 101 Ways, which I am incredibly proud of.
So that’s me. Now it’s your turn. Just remember, you deserve to be where you are and don’t let your anxiety – or anyone – tell you otherwise.
Moore’s law is over. As it stands today, we’re pushing up against the laws of nature, the physical limits of how you can construct a transistor – atomic limits.
Currently, Intel’s best fabrication processes produce transistors at 14nm, and Nvidia has gone slightly smaller at 12nm. Although these are expected to shrink to around 5-7nm over the next few years, that seems to be about as far as we can go. Beyond that point, transistors become so small that quantum effects prevent them from working properly.
What does this mean for the software created to run on these processors? Computer scientists have known this day would arrive for some time and have been preparing by taking advantage of Massively Parallel Processing and distributed computing. The emergence of cloud computing as an affordable technique meant that software engineers could take advantage of huge amounts of distributed computing power, allowing their software to run at global scale.
But there remains a problem. In recent years, I’ve seen the term ‘premature optimisation’ used as a mantra for not optimising at all. This has led to complacency among certain rank-and-file engineers, and as a child of the home computing revolution of the early 80s, this doesn’t sit well with me. Optimisation was part of the game back then, when you had to scrimp and scrape for every byte – it’s amazing what you can do with 64KB.
Let’s take a look at the node.js / npm landscape as a modern example of the point I’m trying to make. I came across a hilarious article recently that truly illustrates one of the problems at the heart of node.
‘Code bloat’ has taken on a new meaning in this paradigm. Even the simplest “Hello World!” app takes up a huge 1.5MB of space! This can’t be right. I know I’m going to sound like an old codger here, but in the early days of web development we imposed limits of 90KB – including imagery – for an entire web page, otherwise we couldn’t guarantee a good experience on dial-up. (Gen-Z: Yes, we used a telephone connection to dial into an Internet Service Provider – you could even hear the dial tone and the sound of the data transferring).
So why is this a problem and what exactly is the paradox here? Well, I believe we’re losing a valuable skill by not choosing to optimise when the opportunity arises. The paradox is that huge amounts of creativity can come through constraint.
To illustrate this, I have three examples: the modern demoscene, the Oculus Rift, and Banksy.
The idea of creating ‘demos’ that push the limits of a particular device has been around since the early 80s, and arguably even earlier. The idea was to ‘crack’ the copy protection on a game and share it with your friends (usually by passing around a cassette tape), with an added introduction screen showing off your skills as a coder. These intro screens became more and more complex as coders grew familiar with the intricacies of the processor and other chips in the computer. Eventually, these demos took on a life of their own, no longer limited to the intro / loading screen of a game but becoming full audio-visual experiences in their own right.
The most prolific ‘scene’ of the 80s was the Commodore 64 scene. Believe it or not, it is still alive today. Coders are still finding ways to do crazy things with 64KB of RAM and a 1MHz 6510 processor. There are demos that show off video streaming – something that wasn’t deemed possible until the mid-nineties with more powerful multimedia PCs. Meanwhile, other demos show that the perceived limits of the hardware can be extended by some clever use of ‘undocumented opcodes’ (or ‘illegal opcodes’, as they were colloquially known). Techniques like ‘Any Given Screen Position’, ‘Flexible Line Interpretation’, and ‘Multiplexed Sprites’ wouldn’t have been invented had there not been a hard limit on the VIC chip and the 6510.
Physical constraints of the hardware forced coders to think again about what was possible and to get creative with solutions.
For a more modern example, look at some of the advances made by the Oculus team. Video latency is one of the major limiting factors to getting VR to a place where your brain doesn’t baulk and induce motion sickness.
John Carmack (CTO at Oculus – and a personal hero of mine) has regularly spoken about solving the problem of latency across the net through innovations in routing, networking software, and switches (and of course hard infrastructure) – yet video latency was never really an issue until now. Good VR needs to push beyond these limits to overcome the aforementioned problems, so Oculus has rightly invested a great deal of time in clever optimisations and techniques that can trick the brain into full immersion. After all, Carmack is the king of optimisation.
Finally, I want to talk about Banksy. His artwork leaves an indelible mark on all who see it and always conveys an important message, but what you don’t see is the amount of planning that goes into each piece.
The constraint for Banksy is the time available to carry out what many consider to be an illegal activity. There are only minutes, possibly even seconds, available to put up the work, and this constraint drives unseen innovation behind the scenes to make sure it is executed perfectly. The detail in some of the pieces is exquisite, so the stencilling must be equally so. The effort that goes into planning the execution (sometimes using ‘workmen’ disguises and other mechanisms) involves a certain amount of genius too.
The takeaway? Optimisation is a skill, and it’s a fundamental one at that. It shouldn’t be thought of simply as the ‘right thing to do’, but also a great way to bolster and support innovation through constrained creativity. As engineers, it’s therefore incumbent on us to make an effort to be better.
Remember how Beyoncé could help you network? Well according to professional community builder and speaker at our last WTF event Samantha Hepburn, so can the slogans of global sports brands.
In terms of excuses, Sam has heard them all, from the clichéd (and mistaken!) ‘I have nothing to offer’ to the more nuanced fears that come with being introverted or anxious. As Sam explained, she is dyslexic and at school she was dubbed the ‘most likely to not succeed’. Rather than accepting the damning (and clearly wrong) judgement of her peers and teachers, she went to work in hospitality, learned how to code and talked her way to where she is now: a founder, consultant and community manager.
So, how did she get so far you ask? With a simple motto shared with Richard Branson: say yes now and panic later.
Sam truly believes that everyone has a super power; you just need to know how to use it, especially when networking. So while our other speaker, Alicia Teagle, talked about how to get over the fear and actually go to events, Sam focused on the very real WHAT-THE-HELL-DO-I-DO-ONCE-I’M-THERE terror.
“At the end of the day, we all look at other people and tell ourselves we couldn’t do what they do. But it’s not true, we’re all human and we have all the ability to network.”
While Sam sadly isn’t available to hire to give prompts through an earpiece (we asked), we wrote down her top tips for not only surviving, but succeeding at networking events so you could emulate her magic:
Think about the ‘why’ – Whether it’s discovering a new industry or meeting new people, be curious and learn about others’ experiences, so that you can understand what you want from it.
Be a friend – If you go with someone, remember what it’s like for those going alone and make an effort to strike up conversation with those standing on their own.
Forget status – It doesn’t matter whether you’re talking to a CEO or an intern, ask quality questions and listen to the answers. You will find out so much more about a company that way and get two different, yet equally valid perspectives.
You’re not trying to close a sale – You’re not there to pitch your credentials as the next Sheryl Sandberg to their Mark Zuckerberg. It’s about building a rapport with people and nurturing that relationship for longer-term (and mutual) benefit.
You’re born with it – There’s no ‘maybe’ about it: Sam says you already know how to speak; utilise it. Whether you’re at school, university or work, you’re always networking – at an event, it’s just a professional friendship.
It’s all about the follow-up – Make sure you have some accountability and remember that people are busy; they don’t have time to chase you. After an event, reach out and build on any connections you made. Simple things like remembering something interesting they told you is enough to nurture a connection.
Get comfortable being uncomfortable – It is hard, but identify the approachable people and start from there. If someone isn’t open, it’s not personal – move on.
For a few years now, the development approach has been that engineers are responsible for the quality of their code and the solution. Meanwhile, QA staff are responsible for testing the quality of the product.
‘QA in Dev’ is an approach aimed at making all engineers responsible for the quality of the product as well as the code. What follows is my experience of taking this approach; I learned a lot, and our focus was always on trying to make things better.
Why take this approach?
The benefits of this are potentially a better product, with a shorter delivery time. I say potentially, because, like any process, when done wrong it could make things worse (slower and lower quality), so it’s important to stay on track.
a) Faster overall delivery times
b) Improved quality
Because the QA tasks are part of the ‘dev’ delivery, reviewers of pull requests etc. should expect to see those tasks (i.e. a ticket cannot be progressed without the QA tasks being complete). This reduces the chance of the QA task being given less time (or removed altogether) due to delivery pressure etc. It also means the QA task is included in the ticket estimate, which further decreases the chance of QA tasks being reduced.
Because the engineers are now writing unit tests, integration tests and end to end / UI tests, we can truly develop a test ‘pyramid’. With most tests as early as possible (i.e. unit tests), the developer knows exactly what is covered by unit tests and can therefore reduce the amount of UI tests required (although some overlap is a good thing).
This also encourages engineers to ‘shift tests left’ as much as possible; the more that is covered in unit tests, the faster the feedback for regression failures will be and the slower (and more fragile) UI tests have less work to do. For example: if a screen written as a React component has validation on text fields (phone number / name etc.), and those fields have their validation well-covered with unit tests, there’s less (potentially no) need to cover the form validation in UI tests, since the person writing those tests has full understanding of the entire ‘test stack’ and can see it’s already covered. UI tests can then focus on what’s not covered in unit / integration tests, such as user journeys through the application.
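As a concrete sketch of this shift-left idea: the validation rules, function names and the UK-style phone format below are illustrative assumptions invented for this example, not taken from a real project. The point is that when validation logic lives in plain functions rather than inside a component, it can be pinned down by fast unit tests:

```javascript
// Validation logic kept as plain functions, independent of any React
// component, so it can be covered by millisecond-fast unit tests rather
// than slow, fragile UI tests. The rules below are purely illustrative.
function isValidPhoneNumber(input) {
  // Hypothetical UK-style rule: optional +44 or leading 0, then 9-10 digits.
  return /^(\+44|0)\d{9,10}$/.test(input.trim());
}

function isValidName(input) {
  // Hypothetical rule: starts with a letter, then letters, spaces,
  // hyphens or apostrophes, two characters minimum.
  return /^[A-Za-z][A-Za-z' -]{1,}$/.test(input.trim());
}

// Unit-level checks catch regressions long before any e2e suite starts.
console.assert(isValidPhoneNumber('07700900123') === true);
console.assert(isValidPhoneNumber('+447700900123') === true);
console.assert(isValidPhoneNumber('banana') === false);
console.assert(isValidName("O'Brien") === true);
console.assert(isValidName('X') === false);
```

With the rules covered at this level, the UI tests only need to confirm the form is wired up to these functions, not re-test every rule.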
How do you bring QA into Dev?
Like most things in development, designing a model that brings QA into Dev is an ongoing journey. It’s important to continually re-evaluate and adjust the approach, rather than aiming for a final solution that gets everything right first time. Trying to do that almost certainly results in an inflexible model that is followed as ‘law’ rather than adapted and driven with common sense, and it will lose ‘buy-in’ from engineers (which is essential for this to work).
With that in mind, here are some points to note following my experience so far:
Be realistic and upfront: it’s unlikely that adding QA to Dev will reduce the work an engineer has to do for a ticket – the trade-off is (hopefully) a reduced chance of a ticket returning to development because of a bug found in UAT or Beta.
What works, works. What doesn’t work, doesn’t work.
Develop a model that’s followed because it’s ‘right’ and has tangible benefit, not because it’s ‘law’.
As far as possible, always include all engineers in any decision regarding the model.
Expect the model to change: it will need to adapt to changes in project / company priorities, timescales, team members, tech leads, new philosophies etc.
Model changes are not ‘done’ until the engineers have seen it and provided input / discussed. Encourage (justified) opinions.
Make small changes to current practice; develop a QA mindset within development before introducing too much at once.
Begin at the beginning: ensure good code / solution reviews are happening, ensure TDD is being followed, add integration tests, then e2e UI tests and so on.
Initially the model will change a lot:
Feedback will highlight areas that aren’t providing benefit, or could be designed better etc.
New test areas will be added (integration / e2e etc…)
The model will change less frequently as time goes on, but never let it become stale – always review periodically.
For many engineers this is different to the normal process flow, so out of habit – even with the best intentions – people will forget, cut corners, and race ahead. Accept that it’s just part of the journey.
Depending on your technical ability, get involved in working on engineering tickets. With development work I would recommend always ‘pairing’ initially to get to know the culture, but if you’re able, be as much a ‘developer’ on the team as possible. It helps if you personally understand what the model is like to work with.
History of the model
Following is a brief history of the process model that is evolving in our current project (I’m using ‘me’ and ‘I’ to highlight the role of the ‘Quality Assistant’ here).
After a couple of initial meetings with the team to explain what the goal is and get feedback on ideas / past experiences / opinions, I proposed this:
The negative scenarios in the first step were intended to be written with no ‘solution’ in mind (i.e. before any development work had begun, just an understanding of the requirement).
We decided to meet once a week to feedback on how it’s going and refine the process, as well as learn more about QA in general through discussions about testing practices, carrying out exercises and team ‘bug-hunts’ on the product we had so far.
Over time, we made the pull request review a more formal test opportunity and added more ‘customer-focused’ testing during the demo to the BA:
We found that negative test scenarios weren’t necessarily what we ended up testing, so we changed the focus to exploratory tests. However, we found that individuals regularly forgot to plan these after picking up a ticket (before the work), so we moved that task to be a team one during sprint planning and recorded on the tickets:
We then included e2e UI tests as part of the delivery (code, unit test, UI test if relevant). I wrote the setup for this (using wdio + cucumber etc.) to get us started, but let the engineers write the tests required to get us up to speed with our current product. As far as possible, I avoided writing any tests here, but advised and reviewed the automation code that was being written, with the occasional meeting to discuss ideas / concepts / good BDD practice, among other things.
During discussions we realised we had not been following the initial step of creating exploratory test scenarios for a few sprints (breaking habits is hard!). However, by now we were at the point where:
Our unit tests had a very high coverage thanks to our ‘shift tests left’ approach;
The unit tests were well-written, well-reviewed and we had a high level of confidence in them; and
Our reviews were rigorous and we also believed in what we were merging into Master.
At this stage much of our effort and focus was aimed at producing good unit tests and following good practices (TDD, testable code etc.). Further, we were reliably delivering e2e UI tests with relevant tickets, running all tests as part of our CI process (and with ‘git hooks’ during commit and push) and the manual and ‘mob’ exploratory testing was still being done.
All of this created the feeling that it would be more advantageous to continue focusing on unit tests and reviews (including pulling the code and running it locally where possible), rather than spending time on the initial exploratory test planning. We therefore changed the plan a little to reflect this:
It is worth noting that the final process described above couldn’t have been the first approach – this is the product of a journey, not just developing the process, but also developing us as a team to this point. It’s not the only solution, simply one that is working well for now – if our involvement in the project had continued, this process would have to continue to grow and adapt.
My experience to this point shows that the process must continually be subject to change to remain relevant to priorities / skill sets / team members. It should therefore, never be considered ‘finished’.
As part of our new ‘6 Common Elements of Change’ series, I’ll be taking a deep dive into the concepts of Vision, Principles, Practices, Structure, Technology and Leadership previously touched upon by Kelly Waters, exploring in particular how they play a key role in helping people understand the link between the change and the company’s success or survival.
Why is vision needed?
Every startup or organisation needs a ‘vision’, and will have one at the beginning of its journey, but it is especially important at any point of transformation. It needs to detail what the company is trying to achieve long-term and why – for example, a publisher moving from print to digital, or from advertising to other revenue models. Without a vision for such change, it becomes less achievable, as it cannot be seen in the mind’s eye of the people working towards it.
Examples of successful, transformative vision statements can be seen in TED’s evolution in 2002 from a for-profit conference company to a not-for-profit that would ‘Spread ideas worth spreading’, or Patagonia’s: ‘Build the best product, cause no unnecessary harm, use business to inspire and implement solutions to the environmental crisis’.
The challenge of miscommunication
Outlining company vision and new goals is always very important, but it means little if there is a later miscommunication or complete failure to translate it down the chain. While it may not be deliberate – or even known about – miscommunication around vision can be fatal.
So, how does it happen? Sometimes a vision for change will be discussed among management and leadership, but isn’t subsequently shared with key stakeholders, teams or new starters. Or – as is common in startups – a vision may be articulated at the beginning of the journey, with assumptions made that everyone is aware of it and nothing more needs to be done. However, during periods of growth or transformation, sustaining communication about the company’s vision (and its changing status) can be forgotten, meaning that new joiners aren’t aware of it.
Other times, a revised vision will be communicated frequently, but not well enough. What might be impactful for engineers may not be right for brand people, and vice versa. While the whole organisation has to follow the same core vision, it is important that it is adapted and related to each area of the organisation, with each area told how it can help achieve it in its own way.
The impact of miscommunicating a vision for change
When there is no vision for change shared, or its core message is unclear, people begin moving in different directions to each other. Confusion reigns and staff begin working on what they believe to be right and what might, conversely, conflict with the stated company goals. As I’ve seen happen, if they are then pulled up for doing so or don’t gain any benefit from it, people quickly become disillusioned and disengaged from the product, putting its development at risk.
A lack of vision or miscommunication also makes management look inconsistent or, worse, uninterested. This leads to people failing to buy into the culture, being unable to connect the dots, and failing to recognise where they fit into the bigger picture – and therefore how they’re involved in delivering that vision.
On a wider level, a mis-communicated or non-communicated vision impacts both retention and recruitment. Failing to define a transformative vision results in higher employee turnover, as people don’t know why the company is doing what it is now doing, or how they can help, and so become disenchanted with the company.
In my 25 years of experience, I’ve seen this happen a number of times. On one occasion, a company was building what seemed to be a like-for-like platform. While you can assume there was a reason behind this, it was never made clear what that reason was nor what the benefit would be, which made staff restless. It would have been simple to articulate if it was about cost-savings, optimisation or resilience etc, but the development teams were told to, “Just do it”. People became disillusioned, lost faith in the leadership and some even voted with their feet and resigned.
When combined with goals, objectives, strategic decisions and intent, vision has been shown to be a key factor in improving overall organisational performance. This improvement is a direct consequence of people understanding a vision for change and their role in bringing it to fruition, at both an individual and team level. Time therefore needs to be spent thinking not only about the vision itself, but about how it should be adapted for a particular audience.
What does good communication look like?
A number of years ago, I worked with a client who was operating post-merger and despite facing huge transformation efforts, navigated it successfully by sharing their vision widely and repeatedly. The company had a large programme of work, with several brands and their respective teams under one umbrella, so needed to refine and unify its systems.
Leadership began the process first by communicating what they wanted the company to move towards and how, but also – and this is key – why. The plan and reasons behind the vision were written on the walls of offices, referenced at the start of all presentations and status emails, and in town halls / broadcasts. The approach was essentially ‘the more information, the better’ and so the company vision was disseminated via several different channels and regularly.
The overall aim of the transformation was cost-savings through optimisation – anything that wasn’t creating value needed to be standardised and upgraded. Investment was made in retraining and repurposing people so they could use the core system and be deployed across any of the brands, while still retaining the individual brand identities and their products for the customers’ benefit.
Sharing a vision for change the right way
It is ultimately the responsibility of leadership to redevelop a clear vision and purpose for an organisation’s future and – together with management – ensure that it is appropriately translated to each department or team, and linked to their specific purpose.
The way to do this properly and effectively is three-fold. First, if it’s been lost, reconsider what the vision should be and why – if the original one worked, great. If it disappeared during the period of change, perhaps it wasn’t resilient or relevant enough. Although a vision statement can (and should) be adaptable to accommodate growth and new ventures, the core message and values need to be ubiquitous, enduring, collaborative and manageable.
Second – it’s important for everyone to understand what will make the vision successful, and how they will know when it has been achieved. Once the intent has been agreed, leadership needs to find a way to make it definable and, most importantly, measurable. If you intend to become the market leader in X, you need to ascertain what that means and how you can assess progress towards it, i.e. is it a higher turnover than your top three competitors year-on-year, or a bigger market share? If the latter, your vision has been successful when the pre-defined percentage has been achieved.
Finally, while it may sound simple, the statement needs to be shared in written form: intranets, websites, strategy documents, roadmaps, newsletters etc. and verbally with staff during all hands, team days and one-to-one reviews. And do it often – time spent sharing at the right points will save time and expense incurred from failing to do so.
Communicate it again with new starters during the onboarding process – though ideally, a vision will be shared at the beginning of any recruitment process, especially if people are being brought in to assist the transformation. That way, it’s easier to ensure the right people are attracted by the company’s future ideals, and to filter out those who don’t truly share them.
Remember, achieving a clear and well-communicated vision may sound easy, but it isn’t – and it could be the thing that makes or breaks your chances of success.
Whether you’re inheriting a mature team, have new people joining an existing one, or are fortunate enough to build a new team from scratch, you’ll want to make sure that it is healthy, happy, aligned, and motivated.
As people come from various different backgrounds and experiences, it is extremely important to ensure that we are all on the same page.
How might you do this?
Something that has always worked for me is facilitating a Ways of Working Workshop (WoWW) – a mix between a Retrospective and a Futurespective that you have probably experienced in agile delivery environments.
The WoWW is designed to identify all the great things that we love about working in teams, acknowledge all the things we don’t, and come up with a team promise – or manifesto of sorts – to which we can hold ourselves accountable.
We can then hold team retros and discuss how we are performing as a team, evaluate against the team promise, make improvements, and support our teammates where needed.
How do you run a Ways of Working Workshop?
Allocate at least half a day for the whole team, and book a large meeting room with a whiteboard – don’t forget the whiteboard markers and eraser… and the drinks and snacks!
1) Make time and focus
It’s important to remove as many distractions as possible; phones set to do not disturb, laptops left on desks, and tell others outside of the team where you are so if there is an emergency they know where to find you.
2) Set the scene
When getting everyone together, it is essential to be clear regarding what the session is about and what you are expecting to get out of it.
In this instance, it’s to gain a collective understanding of what makes great teams based on people’s experiences and draft a team manifesto which can be referred back to, so the team can hold themselves accountable and in check.
If this is one of the first times that the team have come together as a whole, an ice-breaker might be an excellent way to start. For example:
Rock, Paper, Scissors: Break off into pairs and compete in a ‘best of three’. Whoever wins matches with another winner and plays again. The loser follows and cheers on their winning opponent until there are only two people left in the final face off.
The Agile Penny Game: This game does two things; firstly it teaches the importance of batching work, which works well for agile teams. Secondly, it promotes teamwork which is the point of the Ways of Working Workshop.
3) The Retrospective
Hold an open discussion around what people have enjoyed and disliked when working in teams. Ask the team if they are happy to shout things out or write on post-its then discuss after. Remember to be led by the room; you are facilitating, not driving.
4) The Futurespective
Once we have a common understanding of our experiences in great and not so great teams, it’s time to discuss how we want to work together; what behaviours, attitudes and values we want to carry as a team.
We want to consider this as a future goal – an aspirational way of working. It takes time to form new bonds, and for teams to appreciate and build trust in each other.
Having a set of promises or a manifesto can help foster the right behaviours to get you through the four phases of team formation: forming, storming, norming, and performing.
5) The Team Promise(s)/Manifesto
At the end of the Futurespective section of the workshop, you should have a clearly defined set of statements which the whole team has agreed and signed up to. Capture these, save them, print them out, and have them visible in the office where you are working.
Run team retrospectives where you hold each other accountable and in check. These manifestos are not commandments; they’re owned and created by the team and the team can evolve them as they mature, as new members join, and as the team continually improves.
Here is an example of one of my team’s manifestos:
Suggested ways of working
So you and your teams can try a Ways of Working Workshop, here’s what I would recommend:
Remember: communication and collaboration with each other, and the client and their teams, is paramount.
Work as teams; ensure everyone contributes and is heard. For larger teams, break into smaller groups for activities, and have each other’s backs.
Be mindful of time – if scheduling or attending meetings, ensure you understand the agenda, timings and availability of participants.
Be open to new thinking and accept there is no one way of doing something.
Recognise when energy levels are flagging and you or others need a break.
Manage your emotions and monitor the team’s emotions. Make sure everyone is there supporting each other.
Stick to one conversation at a time!
Build and select, rather than agree to disagree – you are trying to build on your good ideas, not tear down bad ideas. If there are disagreements in approaches then look to spike, evaluate, gather data and decide as a group.