Shared posts

29 Jul 15:29

Looking back on five years of web components

by Joe Gregorio

Over 5 years ago I wrote No more JS frameworks and just recently Jon Udell asked for an update.

I have been blogging bits and pieces over the years but Jon’s query has given me a good excuse to roll all of that up into a single document.

For the last five years my team and I have been using web components to build our web UIs. Around the time I wrote the Zero Framework Manifesto we moved all of our development over to Polymer.

Why Polymer?

We started with Polymer 0.5 as it was the closest thing to web components that was available. At the time I wrote the Zero Framework Manifesto all of the specifications that made up web components were still just proposed standards, and only Chrome had implemented any of them natively. We closely followed Polymer, migrating all of our apps to Polymer 0.8 and finally to Polymer 1.0 when it was released. This gave us a good taste of what building web components was like and verified that building HTML elements was a productive way to do web development.

How

One of the questions that comes up regularly when talking about zero frameworks is how can you expect to stitch together an application without a framework? The short answer is ‘the same way you stitch together native elements’, but I think it’s interesting and instructional to look at those ways of stitching elements together individually.

There are six surfaces, or points of contact, between elements that you can use when stitching elements together, whether they are native or custom elements.

Before we go further, a couple of notes on terminology and scope. For scope, realize that we are only talking about the DOM; we aren’t talking about composing JS modules or strategies for composing CSS. For the terminology clarification, when talking about DOM I’m referring to the DOM Interface for an element, not the element markup. Note that there is a subtle difference between the markup element and the DOM Interface to such an element.

For example, <img data-foo="5" src="https://example.com/image.png"/> may be the markup for an image. The corresponding DOM Interface has an attribute of src with a value of https://example.com/image.png but the corresponding DOM Interface doesn’t have a data-foo attribute, instead all data-* attributes are available via the dataset attribute on the DOM Interface. In the terminology of the WhatWG Living Standard, this is the distinction between content attributes vs IDL attributes, and I’ll only be referring to IDL attributes.

With the preliminaries out of the way let’s get into the six surfaces that can be used to stitch together an application.

Attributes and Methods

The first two surfaces, and probably the most obvious, are attributes and methods. If you are interacting with an element it’s usually either reading and writing attribute values:

element.children

or calling element methods:

document.querySelector('#foo');

Technically these are the same thing, as they are both just properties with different types. Native elements have their set of defined attributes and methods, and depending on which element a custom element is derived from it will also have that base element’s attributes and methods along with the custom ones it defines.

Events

The next two surfaces are events. Events are actually two surfaces because an element can listen for events,

ele.addEventListener('some-event', function(e) { /* */ });

and an element can dispatch its own events:

var e = new CustomEvent('some-event', {detail: details});
this.dispatchEvent(e);

DOM Position

The final two surfaces are position in the DOM tree, and again I’m counting this as two surfaces because each element has a parent and can be a parent to another element. Yeah, an element has siblings too, but that would bring the total count of surfaces to seven and ruin my nice round even six.

<button>
  <img src="">
</button>

Combinations are powerful

Let’s look at a relatively simple but powerful example, the ‘sort-stuff’ element. This is a custom element that allows the user to sort elements. All children of ‘sort-stuff’ with an attribute of ‘data-key’ are used for sorting the children of the element pointed to by the sort-stuff’s ‘target’ attribute. See below for an example usage:

 <sort-stuff target='#sortable'>
   <button data-key=one>Sort on One</button>
   <button data-key=two>Sort on Two</button>
 </sort-stuff>
 <ul id=sortable>
   <li data-one=c data-two=x>Item 3</li>
   <li data-one=a data-two=z>Item 1</li>
   <li data-one=d data-two=w>Item 4</li>
   <li data-one=b data-two=y>Item 2</li>
   <li data-one=e data-two=v>Item 5</li>
 </ul>

If the user presses the “Sort on One” button then the children of #sortable are sorted in alphabetical order of their data-one attributes. If the user presses the “Sort on Two” button then the children of #sortable are sorted in alphabetical order of their data-two attributes.

Here is the definition of the ‘sort-stuff’ element:
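A sketch of what such a definition could look like, using the v1 custom elements API — this is illustrative, not the original implementation, and the sorting logic is pulled out into a plain function:

```javascript
// Sort the children of `target` by the given data-* key.
// Pulled out as a plain function so it can run outside the browser.
function sortChildrenBy(target, key) {
  Array.from(target.children)
    .sort((a, b) => a.dataset[key].localeCompare(b.dataset[key]))
    .forEach((ele) => target.appendChild(ele));
}

// The element wiring (browser only): listen for clicks on children
// that carry a data-key attribute, then sort the element pointed to
// by our 'target' attribute.
if (typeof HTMLElement !== 'undefined') {
  class SortStuff extends HTMLElement {
    connectedCallback() {
      this.addEventListener('click', (e) => {
        const key = e.target.dataset.key;
        if (!key) return;
        const target = document.querySelector(this.getAttribute('target'));
        if (target) sortChildrenBy(target, key);
      });
    }
  }
  customElements.define('sort-stuff', SortStuff);
}
```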



And here is a running example of the code above (a live demo in the original post: the two sort buttons followed by the sortable list).

Note the surfaces that were used in constructing this functionality:

  1. sort-stuff has an attribute 'target' that selects the element to sort.
  2. The target children have data attributes that elements are sorted on.
  3. sort-stuff registers for 'click' events from its children.
  4. sort-stuff children have data attributes that determine how the target children will be sorted.

In addition you could imagine adding a custom event ‘sorted’ that ‘sort-stuff’ could generate each time it sorts.

Why not Polymer?

After having used Polymer for so many years, we looked at the direction of Polymer 2.0, and now 3.0, and decided that it may not be the direction we want to take.

There are a few reasons we moved away from Polymer. Polymer started out as, and continues to be, a platform for experimentation with proposed standards, which is great: the team is able to give concrete feedback to standards committees and let people see how those proposed standards could be used in development. The downside to adopting nascent standards is that sometimes they don’t become standards at all. For example, HTML Imports was a part of Polymer 1.0 that had a major impact on how you wrote your elements, and when HTML Imports failed to become a standard you had a choice of either a major migration to ES modules or carrying around a polyfill for HTML Imports for the remainder of that web app’s life. You can see the same thing happening today with Polymer 3.0 and CSS mixins.

There are also implementation decisions I don’t completely agree with in Polymer, for example the default use of Shadow DOM. Shadow DOM allows for the encapsulation of the children of a custom element so they don’t participate in things like querySelector() and normal CSS styling. But there are several problems with that. The first is that when using Shadow DOM you lose the ability to do global styling changes: if you suddenly decide to add a “dark mode” to your app, you will need to go and modify each element’s CSS. Shadow DOM was also supposed to be faster, but since each element contains a copy of the CSS there are performance implications, though there is work underway to address that. Shadow DOM seems like a solution searching for a problem. Polymer defaults to using Shadow DOM while offering a way to opt out and use Light DOM for your elements; I believe the default should lie in the other direction.

Finally, Polymer’s data binding has some mis-features. It offers two-way data binding, which is never a good idea; every instance of two-way data binding is just a bug waiting to happen. The data binding also has a lot of magic to it: in theory you just update your model and Polymer will re-render your template at some point in the future with the updated values. The “at some point in the future” is because updates happen asynchronously, which in theory makes them more efficient by batching, but in reality you spend a lot of development time updating your model, not getting updated DOM, and scratching your head until you remember either to call the function that forces a synchronous render, or that you updated a deep part of your model that Polymer can’t observe, so you need to change your code to use the set() method and give the path to the part of the model you just updated. The async rendering and observing of data are fine for simple applications, but for more complex applications they lead to wasted developer time debugging situations where a simpler data binding model would suffice.

It is interesting to note that the Polymer team also produces the lit-html library which is simply a library for templating that uses template literals and HTML Templates to make the rendering more efficient, and it has none of the issues I just pointed out in Polymer.

What comes after Polymer?

This is where I started with a very concrete and data-driven minimalist approach: first determining which base elements we really needed, then which library features we would need as we built up those elements, and finally which features we would need as we built full-fledged apps from those base elements. I was completely open to the idea that maybe I was just being naive about the need for async rendering or Shadow DOM, and I’d let the process of building real-world applications inform which features were really needed.

The first step was to determine which base elements we really needed. The library of iron-* and paper-* elements that Polymer provides is large and the idea of writing our own version of each was formidable, so instead I looked back over the previous years of code we’d written in Polymer to determine which elements we really did need. If we’d started this process today I would probably just have gone with Elix or another pure web components library of elements, but none of them existed at the time we started this process.

The first thing I did was scan each project and record every Polymer element used in every project. If I was going to replace Polymer, at least I should know how many elements I was signing up to rewrite. That initial list was surprising in a couple of ways. The first was how short it was:

Polymer/Iron elements Used
iron-ajax
iron-autogrow-textarea
iron-collapse
iron-flex-layout
iron-icon
iron-pages
iron-resizable-behavior
iron-scroll-threshold
iron-selector
paper-autocomplete
paper-button
paper-checkbox
paper-dialog
paper-dialog-scrollable
paper-drawer-panel
paper-dropdown-menu
paper-fab
paper-header-panel
paper-icon-button
paper-input
paper-item
paper-listbox
paper-menu
paper-menu-button
paper-radio-button
paper-radio-group
paper-spinner
paper-tabs
paper-toast
paper-toggle-button
paper-toolbar
paper-tooltip

After four years of development I expected the list to be much larger.

The second surprise was how many of the elements in that list really shouldn’t be elements at all. Some could be replaced with native elements given some better styling, such as button for paper-button. Others could be replaced with CSS or a non-element solution, such as iron-ajax, which shouldn’t be an element at all and should be replaced with the fetch() function. After doing that analysis, the number of elements that actually needed to be re-implemented from Polymer fell to a very small number.

In the table below the ‘Native’ column is for places where we could use native elements and just have a good default styling for them. The ‘Use Instead’ column is what we could use in place of a custom element. Here you will notice a large number of elements that can be replaced with CSS. Finally the last column, ‘Replacement Element’, is the name of the element we made to replace the Polymer element:

Polymer                 | Native        | Use Instead              | Replacement Element
iron-ajax               |               | Use fetch()              |
iron-collapse           |               |                          | collapse-sk
iron-flex-layout        |               | Use CSS Flexbox/Grid     |
iron-icon               |               |                          | *-icon-sk
iron-pages              |               |                          | tabs-panel-sk
iron-resizable-behavior |               | Use CSS Flexbox/Grid     |
iron-scroll-threshold   |               | Shouldn’t be an element  |
iron-selector           |               |                          | select-sk/multi-select-sk
paper-autocomplete      |               | No replacement yet.      |
paper-button            | button        |                          |
paper-checkbox          |               |                          | checkbox-sk
paper-dialog            |               |                          | dialog-sk
paper-dialog-scrollable |               | Use CSS                  |
paper-drawer-panel      |               | Use CSS Flexbox/Grid     |
paper-dropdown-menu     |               |                          | nav-sk
paper-fab               | button        |                          |
paper-header-panel      |               | Use CSS Flexbox/Grid     |
paper-icon-button       | button        |                          | button + *-icon-sk
paper-input             | input         |                          |
paper-item              |               |                          | nav-sk
paper-listbox           | option/select |                          |
paper-menu              |               |                          | nav-sk
paper-menu-button       |               |                          | nav-sk
paper-radio-button      |               |                          | radio-sk
paper-radio-group       | **            |                          |
paper-spinner           |               |                          | spinner-sk
paper-tabs              |               |                          | tabs-sk
paper-toast             |               |                          | toast-sk
paper-toggle-button     |               |                          | checkbox-sk
paper-toolbar           |               | Use CSS Flexbox/Grid     |
paper-tooltip           |               | Use title attribute      |

** - For radio-sk elements just set a common name like you would for a native radio button.

That set of minimal custom elements has now been launched as elements-sk.

Now that we have our base list of elements let’s think about the rest of the tools and techniques we are going to need.

To get a better feel for this let’s start by looking at what a web framework “normally” provides. The “normally” is in quotes because not all frameworks provide all of these features, but most frameworks provide a majority of them:

  • Framework
    • Model
    • Tooling and structure
    • Elements
    • Templating
    • State Management

All good things, but why do they have to be bundled together like a TV dinner? Let’s break each of those aspects of a framework out into its own standalone thing, and then we can pick and choose from the various implementations when we start developing an application. We call this style of development “a la carte” web development.

Instead of picking a monolithic solution like a web framework, you just pick the pieces you need. Below I outline specific criteria that need to be met for some components to participate in “a la carte” web development.

A la carte

“A la carte” web development does away with the framework: just use the browser for the model, and for the rest of the pieces pick and choose the ones that work for you. In a la carte development each bullet point is a separate piece of software:

Tooling and structure
Defines a directory structure for how a project is put together and provides tooling, such as JS transpiling, CSS prefixing, etc., for projects that conform to that directory structure. Expects ES modules, with the extension that webpack, rollup, and similar tools presume, i.e. the ability to import other types of files; see webpack loaders.
Elements
A library of v1 custom elements written as ES6 modules. These elements must be provided as ES6 modules with the extension that webpack, rollup, and similar tools presume, i.e. the ability to import other types of files; see webpack loaders. The elements should also be “neat”, i.e. just HTML, CSS, and JS.
Templating
Any templating library you like, as long as it works with v1 custom elements.
State Management
Any state management library you like, if you even need one.

The assumptions needed for all of this to work together are fairly minimal:

  1. ES6 modules, with the extension that webpack, rollup, and similar tools presume, i.e. the ability to import other types of files; see webpack loaders.
  2. The base elements are “neat”, i.e. they are JS, CSS, and HTML only. No additional libraries are used, such as a templating library. Note that sets of “neat” elements also conform to #1, i.e. they are provided as webpack/rollup-compatible ES6 modules.
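
To make “neat” concrete, here is a sketch of what such an element might look like: plain JS rendering plain HTML into the Light DOM, with no library dependencies. The element name and markup are illustrative, not taken from elements-sk.

```javascript
// A pure function from state to markup; trivially testable.
function toggleMarkup(on) {
  return `<button>${on ? 'on' : 'off'}</button>`;
}

// The element itself: just JS and HTML, rendered into the Light DOM.
// Browser-only, so guarded for clarity.
if (typeof HTMLElement !== 'undefined') {
  class ToggleSk extends HTMLElement {
    connectedCallback() {
      this.render();
      this.addEventListener('click', () => {
        this.toggleAttribute('on');
        this.render();
      });
    }
    render() {
      this.innerHTML = toggleMarkup(this.hasAttribute('on'));
    }
  }
  customElements.define('toggle-sk', ToggleSk);
}
```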

Obviously there are other guidelines that could be added as advisory. For example, the Google Developers Guide - Custom Elements Best Practices should be followed when creating sets of custom elements, except for the admonition to use Shadow DOM, which I would avoid for now unless you really need it.

Such code will natively run in browsers that support custom elements v1. To get it to run in a wider range of browsers you will need to add polyfills and, depending on the target browser version, compile the JS back to an older version of ES, and run a prefixer on the CSS. The wider the target set of browsers and the older the versions you are targeting the more processing you will need to do, but the original code doesn’t need to change, and all those extra processing steps are only incurred by projects that need it.

Concrete

Now that we have our development system, we’ve started to publish some of those pieces.

We published pulito, a stake in the ground for what a “tooling and structure” component looks like. You will note that it isn’t very complex, nothing more than an opinionated webpack config file. Similarly we published our set of “neat” custom elements elements-sk.
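
For a sense of scale, an opinionated webpack config of this kind might amount to little more than the following sketch; the entry points and loaders shown are illustrative, not pulito’s actual contents:

```javascript
// A sketch of a minimal, opinionated webpack config; the entry points
// and loaders are illustrative, not pulito's actual contents.
module.exports = {
  entry: { index: './pages/index.js' },
  output: { filename: '[name]-bundle.js' },
  module: {
    rules: [
      // Allow importing CSS and HTML from JS — the "extension" to ES
      // modules that webpack/rollup-style tooling presumes.
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
      { test: /\.html$/, use: ['html-loader'] },
    ],
  },
};
```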

Our current stack looks like:

Tooling and structure
pulito
Elements
elements-sk
Templating
lit-html

We have used Redux in an experimental app that never shipped and haven’t needed any state management libraries in the other applications we’ve ported over, so our ‘state management’ library is still an open question.

Example

What is it like to use this stack? Let’s start from an empty directory and build a web app:

$ npm init
$ npm add pulito

We are starting from scratch so use the project skeleton that pulito provides:

$ unzip node_modules/pulito/skeleton.zip
$ npm install

We can now run the dev server and see our running skeleton application:

$ make serve

Now let’s add in elements-sk and add a set of tabs to the UI.

$ npm add elements-sk

Now add imports to pages/index.js to bring in the elements we need:

import 'elements-sk/tabs-sk'
import 'elements-sk/tabs-panel-sk'
import '../modules/example-element'

And then use those elements on pages/index.html:

<body>
  <tabs-sk>
    <button class=selected>Some Tab</button>
    <button>Another Tab</button>
  </tabs-sk>
  <tabs-panel-sk>
    <div>
      <p> This is Some Tab contents.</p>
    </div>
    <div>
      This is the contents for Another Tab.
    </div>
  </tabs-panel-sk>
  <example-element active></example-element>
</body>

Now restart the dev server and see the updated page:

$ make serve

Why is this better?

Web frameworks usually make all these choices for you; you don’t get to choose, even if you don’t need the functionality. For example, state management might not be needed, so why are you ‘paying’ for it, where ‘paying’ means learning about that aspect of the web framework, and possibly even having to serve the code that implements state management even if you never use it? With “a la carte” development you only include what you use.

An extra benefit comes when it is time to upgrade. How much time have you lost to massive upgrades from v1 to v2 of a web framework? With ‘a la carte’ development the upgrades don’t have to be monolithic: if you’ve chosen a templating library and want to upgrade to the next version, you only need to update your templates, not touch every aspect of your application.

Finally, ‘a la carte’ web development provides no “model” but the browser. Of all the things that frameworks provide, the “model” is the most problematic. Instead of just using the browser as it is, many frameworks have their own model of the browser: how the DOM works, how events work, etc. I have gone into depth on the issues previously, but they can be summarized as lost effort (learning something that doesn’t translate) and a barrier to reuse. What should replace it? Just use the browser: it already has a model for how to combine elements together, and now that custom elements v1 gives you the ability to create your own elements, you have all you need.

One of the most important aspects of ‘a la carte’ web development is that it decouples all the components, allowing them to evolve and adapt to user needs on a much faster cycle than the normal web framework release cycle allows. Just because we’ve published pulito and elements-sk doesn’t mean we believe they are the best solutions. I’d love to have a slew of options to choose from for tooling, base element sets, templating, and state management. I’d like to see Rollup-based tools that take the place of pulito, and a whole swarm of “neat” custom element sets with varying levels of customizability and breadth.

What we’ve learned

We continue to learn as we build larger applications.

lit-html is very fast, and all the applications we’ve ported over have been smaller and faster after the port. It is rather pleasant to call the render() function and know that the element has been rendered, without getting tripped up by async rendering. We haven’t found the need for async rendering either, but that’s not surprising. Consider the cases where async rendering would make a big difference, i.e. where batching up renders and doing them asynchronously would be a big performance win: that would have to be an element with a large number of properties, where each property change alters the DOM and thus requires a large number of calls to render(). In all the development we’ve done that situation has never arisen; elements always have a small number of attributes and properties. If an element takes in a large amount of data to display, that’s usually done by passing in a small number of complex objects as properties on the element, which results in a small number of renders.

We haven’t found the need for Shadow DOM. In fact, I’ve come to think of the Light DOM children of elements as part of their public API that goes along with the attributes, properties, and events that make up the ‘normal’ programming surface of an element.

We’ve also learned that there’s a difference between creating base elements and creating the higher-level elements you build up as your application grows. You are not creating bullet-proof re-usable elements at every step of development; the same level of detail and re-usability aren’t needed as you move up the stack. If an element looks like it could be re-used across applications, then we may tighten up the surface of the element and add more options to cover more use cases, but that’s done on an as-needed basis, not for every element. Just because you are using the web component APIs to build an application doesn’t mean that every element you build needs to be as general-purpose and bullet-proof as the low-level elements. You can use HTML Templates without using any other web component technology. The same goes for template literals, and for each of the separate technologies that make up the web components group of APIs.
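
For example, template literals alone can act as a tiny templating layer with no framework at all; this is a sketch, and the helper names are illustrative:

```javascript
// Build markup with plain template literals; no library involved.
const item = (label) => `<li>${label}</li>`;
const list = (labels) => `<ul>${labels.map(item).join('')}</ul>`;

// list(['a', 'b']) → '<ul><li>a</li><li>b</li></ul>'
```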

27 Jul 05:24

Incrementalism

First: a story. Alice and Bob are sent to an earth-like planet and given the task of finding its highest point. Unfortunately, they are initially given only stone-age era technology to work with. The planet is foggy and visibility is only 20 feet or so. Alice and Bob adopt different approaches:

  • Bob just follows the local slope he can see with his eyes until reaching the top of the nearest hill. Maybe he wanders randomly a little ways away… but not too far, because what if he couldn’t find his way back to the locally-highest hill he’d just discovered?
  • Alice spends a long time reinventing radar technology, then builds a rocket to launch several satellites to perform a mapping of the planet and provide GPS guidance. She also builds an airplane, because it turns out the landing site is an island, and the tallest point is on the other side of a huge ocean. With all this technology in hand, she then flies directly to the highest mountain.

To an outside observer who didn’t know that rockets, radar, GPS, or airplanes were possible, it would look a lot like Alice was just screwing around, not accomplishing much of anything. “Alice, what the hell are you even doing?” Bob would say. “You’ve just got a pile of metal parts… but I got to the top of a very tall hill over there… in fact I’ve made 200 upwards steps in the past month.”

But then, quite rapidly, Alice’s progress would jump well past Bob. She’d hop on a plane and fly across the ocean directly to the foot of the tallest mountain, then keep hiking up and up and up…

Over long enough time scales, the “shortest path” to any goal usually involves doing things that have no obvious connection to that goal. It involves doing things whose progress cannot be measured in any simple way that would be obvious to someone without expertise. And over these longer timespans, the incrementalist approach of making local improvements according to obvious metrics is a colossal failure. (“You can’t get to the moon by piling up chairs”)

Anyway, none of this is too surprising, but now let’s complicate things a little. Suppose Alice and Bob are put in the same situation and given the same goal, but are told that “whoever gets to the highest point in the next month will be rewarded with additional people, supplies, etc.” If we imagine iterating this experiment, always rewarding the person who makes the most measurable progress, Bob might amass an army of thousands of people, all wandering around climbing little nearby hills! Meanwhile, Alice gets almost no resources even though she could have made exponential progress and surged past Bob 10,000x. Perhaps Alice is even tempted to abandon her ideas about how things can be made much better long term.

The focus on incremental, measurable progress is often dangerous to innovation and bigger progress.

The software industry

The software industry is swimming in incrementalism. Everywhere you look in software, there are things that can be incrementally improved, and people getting paid to make these improvements. That one library you use, it’s missing a feature. That bit of code, it could be rewritten to be faster. That app, it needs an extra widget or feature. That web standard could use a few additional extensions to it. Etc, etc, etc. There are people working on all these things, because obvious progress is rewarded. Billions of dollars are poured into the industry to capitalize on whatever little incremental improvements can capture some temporary market share. And this process plays out over and over again.

In academia, there’s a different sort of incrementalism. The pressure to produce “publishable” results in the short term (due to the tenure system and other factors) leads researchers to be quite conservative in various ways. They don’t work on anything too crazy that has a high chance of failure. They stay largely within the extremely narrow confines of what they already know about. They do modest extensions to existing research.

Real progress doesn’t look like progress at first. It looks like people screwing around. Playing. Trying different things. Building a rocket and a radar system instead of just walking up the nearest hill. Sometimes the best thing to do is to give smart people a very very long leash and plenty of resources, to be patient, and not worry about progress.

05 Jul 01:00

Book: Good Strategy/Bad Strategy

by Cate

I loved Good Strategy/Bad Strategy (Amazon) and learned so much from it. What really stood out to me was the depth required in defining strategy, and the way of thinking that takes that depth, and constructs a long term trajectory built on proximate objectives – the next steps that seem totally possible from where we are now. This was the kind of book I was recommending even before I finished it, I definitely think it’s worth the time. I’ve included many quotes below, all emphasis is mine.

Early in the book he gives the example of Admiral Nelson, who split his ships into two columns, destroying 2/3 of the opposing fleet with no losses to his own.

Good strategy almost always looks this simple and obvious and does not take a thick deck of PowerPoint slides to explain. It does not pop out of some “strategic management” tool, matrix, chart, triangle, or fill-in-the-blanks scheme. Instead, a talented leader identifies the one or two critical issues in the situation—the pivot points that can multiply the effectiveness of effort—and then focuses and concentrates action and resources on them.

He is damning on the subprime crisis, and specifically about Lehman Brothers taking on more risk without mitigating it.

Being ambitious is not a strategy.

These two examples in the opening nailed – for me – the idea that strategy is unrelated to ambition and charisma. It’s not how motivating the presentation is, or how grandiose the claims… it’s about what the strategy actually is, the reality it exists in, and the effects that unfold.

A good strategy does more than urge us forward towards a goal or a vision. A good strategy honestly acknowledges the challenges being faced and provides an approach to overcoming them. And the greater the challenge, the more a good strategy focuses and coordinates efforts to achieve a powerful competitive punch or problem solving effect.

Damn!

Unfortunately, good strategy is the exception, not the rule. And the problem is growing. More and more organizational leaders say they have a strategy, but they do not. Instead they espouse what I call bad strategy. Bad strategy tends to skip over pesky details such as problems. It ignores the power of choice and focus, trying instead to accommodate a multitude of conflicting demands and interests. Like a quarterback whose only advice to teammates is “Let’s win,” bad strategy covers up its failure to guide by embracing the language of broad goals, ambition, vision and values. Each of these elements is, of course, an important part of human life. But, by themselves, they are not substitutes for the hard work of strategy.


The section on bad strategy was gripping and recognisable – such a clear articulation, so pointed, so damning. I was taking pictures and sending it to friends, as it so clearly articulated things we have complained about.

The definition of bad strategy is on point.

Bad strategy is long on goals and short on policy or action. It assumes that goals are all you need. It puts forward strategic objectives that are incoherent and, sometimes, totally impracticable. It uses high-sounding words and phrases to hide these failings.

As is the failure mode. This is such a good articulation of something I have been calling “failing managers blame”.

When a leader characterizes the challenge as underperformance, it sets the stage for bad strategy. Underperformance is a result. The true challenges are the reason for the underperformance.

Why we fail to create strategy:

The essential difficulty in creating strategy is not logical; it is choice itself. Strategy does not eliminate scarcity and its consequence—the necessity of choice. Strategy is scarcity’s child and to have a strategy, rather than vague aspirations, is to choose one path and eschew others. There is difficult psychological, political, and organizational work in saying “no” to whole worlds of hopes, dreams, and aspirations.

I was particularly fascinated by the distinction between leadership and strategy, and by charisma as a driver of bad strategy – I’m sure we all have examples of bad (but charismatic) leaders, while some of the most strategic leaders had no charisma. Whilst leaders have to get people through the change that strategy entails (charisma is helpful here), the strategy itself is figuring out what purposes are worthwhile and possible to accomplish. This has to be grounded in reality, not wishful thinking.

I do not know whether meditation and other onward journeys perfect the human soul. But I do know that believing that rays come out of your head and change the physical world, and that by thinking only of success you can become a success, are forms of psychosis and cannot be recommended as approaches to management or strategy. All analysis starts with the consideration of what may happen, including unwelcome events.

Good strategy is about reducing ambiguity such that people can actually deliver.

Phyllis’s insight that “the engineers can’t work without a specification” applies to most organized human effort. Like the Surveyor design teams, every organization faces a situation where the full complexity and ambiguity of the situation is daunting. An important duty of any leader is to absorb a large part of that complexity and ambiguity, passing on to the organization a simpler problem — one that is solvable. Many leaders fail badly at this responsibility, announcing ambitious goals without resolving a good chunk of ambiguity about the specific obstacles to be overcome. To take responsibility is more than a willingness to accept the blame. It is setting proximate objectives and handing the organization a problem it can actually solve.

I found this piece on timeframes helpful. I am now somewhat obsessed with proximate objectives – such a helpful description of a way of thinking.

Many writers on strategy seem to suggest that the more dynamic the situation, the further ahead a leader must look. This is illogical. The more dynamic the situation, the poorer your foresight will be. Therefore, the more uncertain and dynamic the situation, the more proximate a strategic objective must be. The proximate objective is guided by forecasts of the future, but the more uncertain the future, the more its essential logic is that of “taking a strong position and creating options,” not of looking far ahead.

Gilbreth’s building techniques are used as an example of “business process transformation” or “re-engineering”. I love this articulation – it really ties into my thoughts on the judicious application of process, and the need for empathy.

Whatever it is called, the underlying principle is that improvements come from re-examining the details of how work is done, not just from cost controls or incentives.

The same issues that arise in improving work processes also arise in the improvement of products, except that observing buyers is more difficult than examining one’s own systems. Companies that excel at product development and improvement carefully study the attitudes, decisions, and feelings of buyers. They develop a special empathy for customers and anticipate problems before they occur.

This definition of “culture” is not an interpretation I have thought of or heard before, but was immediately helpful in the way I consider and approach things.

We use the word “culture” to mark the elements of social behavior and meaning that are stable and strongly resist change.

The importance of context – this is so critical, and explains why so many leaders who move to a new context fail – because they don’t acknowledge the context, and just try and do the same again.

A good strategy is a hypothesis of what will work based on functional knowledge and your knowledge of your own business – this is a crucial insight. Many people find success in one area, and then fail in the next because they apply the same strategy in a different context. Good strategy is only good in context.

Treating strategy like a problem in deduction assumes that anything worth knowing is already known—that only computation is required.

There was a whole section on why we have to question our ideas and consider more than one, which I think is really important – the first idea often seems like the one that will work, but I think the first idea is often the one that is just the easiest to contemplate.

Thus, when we do come up with an idea, we tend to spend most of our effort justifying it rather than questioning it. That seems to be human nature, even in experienced executives. To put it simply, our minds dodge the painful work of questioning and letting go of our first early judgements, and we are not conscious of the dodge.

Finally, the section on keeping your head and herd mentality was really helpful – the example of reinforcement in financial markets where optimism begets optimism and problems beget panic is a good but extreme example – human emotions are contagious, and this happens in less measurable ways elsewhere, too.

I really recommend it – I learned a lot. It gave me tools, and also confidence to call the strategy I already do what it is.

04 Jul 19:43

The Cost of Fixing Things

by Cate
Fall in Bruges

In September, I disappeared in Seoul and caused everyone who cares about me to think I was having some kind of breakdown. I deactivated my Twitter account, and refused to engage with anyone other than my closest friends. I got to the point where I felt I had to drop everything, and then I came back and chose things that could return, one by one. Some things still haven’t made it back. Maybe they never will.

What took me to that point was three team turnarounds in three years. The final one, with a fractured shoulder, whilst buying and renovating a house (also a turnaround). But that is the big story – what took me to that point was a thousand choices, made at various decision points, that consistently put my own well-being last. What took me to that point was some deep-seated need to act as if I was some highly-optimized, resilient robot rather than a physically hurt human being with her own needs and life.

It was hard to untangle this, because the ways in which I am good at the turnaround are directly related to the ways in which I am bad at being a human in the world. I focus on the important – I let things that are not important go (but life is made up of unimportant things and it’s hard if none of them are “done”). I stop dysfunction like some kind of human shock absorber – I am afraid to let people into my own dysfunction, to the point of being willing to shut them out entirely. I have high standards – the standards I would hold other people to are nothing compared to the standards I have for myself. I see it as my job to live in the space of ambiguity and create clarity for other people – I don’t prioritize resolving ambiguity for myself. I am very driven by values – sometimes the values I hold conflict with what I need as a human.

“Show me your heart like transparent Glass Catfish” ~Seoul Aquarium

In this space, when people expressed concern it was met first with bewilderment, then resentment. Bewilderment, because this was – as I understood it – what I had been asked to do. It was always going to be terrible for me, the real surprise was how badly my shoulder was injured and that renovating a house was extremely terrible too. Resentment, when that concern came as feedback, to which I wanted to respond, “I did what you needed, I’m sorry it didn’t look pretty, too.”

In a distributed environment, no-one needs to know how you really are.

Around the time I disappeared in Seoul, I was winding up on the third turnaround team, handing it back to the proper person. I was deeply burnt out, and my then-boss hadn’t decided what team I would go to, resulting in me drifting around without a clear place to go, unsure of what I could take on – my life in general feeling on hold around medical appointments and waiting.

At home, I found a therapist, finally unpacked and started living out of closets rather than boxes, did the work of building a life in the city I had spent the best part of a year calling home but didn’t feel like home yet, prioritized medical appointments above everything else (with some help from my mom). At work I covered a month of parental leave for one of my peers, and the engineer leading a huge project (the new editor) asked me to come help him. I joked to my peer leading that part of the organization that he had brought me to her like a cat with a dead animal offering. I “joked”. It felt true.

“I’ve found out that life and soul are the most essential elements in art.” ~Arario Museum in Space, Seoul

We rolled out the new editor. I moved to another team, reporting to the CEO again – I was grateful to him for resolving the drifting, but felt like I was doing what everyone else wanted me to do – although how could it be any other way, when I didn’t know what I wanted myself? I kept going to therapy, got to the place where I could confront some of my less appealing characteristics, spent time with friends, finally shared pictures of the house. Had moments where I could contemplate feeling okay again, even if that was definitively, absolutely, not right now. Always contingent on things outside of my control.

Today I feel okay, even happy. Things are not perfect, but I have a sense of direction and purpose, some kind of stability – some internal, some external. Various things came together, and it started to feel like enough to go on. I started to feel like enough.

Only the most perceptive people notice when you disappear

Raccoon ~Seoul

This was not supposed to be a story about burnout, this was supposed to be the things I learned working through it and being able to see the other side. But it feels dishonest to write about how to make teams more functional without some level of insight into what that process has done to me. It feels futile to talk about working through burnout, without some insight into the context that burnout was within. Only the most perceptive people notice when you disappear, especially if the achievements keep accumulating, because it’s easy to assume you’re just busy instead. Not everyone can be present when you’re a shadow; it’s simpler and less confronting to say “let me know if you need anything” and disappear instead.

When I think about burnout, I always come back to the Maslach Burnout Inventory (there is a book, but it’s more succinctly summarized in this article). It is a helpful framework for thinking about burnout, in particular the five causes of burnout that are not overwork. They are: lack of control, insufficient reward, lack of community, absence of fairness, and conflict in values.

Lack of Control

Lack of control was a huge factor for me: on a personal level (the healthcare system and builders), as a human at work (what is my job now?), and in a work context (these things are not working well, but out of my remit to fix). This is really what triggered my disappearance in Seoul, when I realized going with the flow was leaving me completely miserable, and even (in a certain context) triggering an existential panic where I wasn’t sure if I existed at all. It was a topic that came up again and again in therapy.

Owl, Tokyo

Letting go of everything allowed me to focus on things within my control. The relationships I was confident were good, the appointments and calls I could make to move things forward, the remit I had at work. I refused to engage with the ambiguous or bad, and demonstrated to myself that most things continued without them. As I let them back in, I was very deliberate in giving them an appropriate place in the hierarchy of importance, and any supporting structure needed to be manageable.

I learned more is within my control than I thought, and that I need to accept and manage the impact of things outside of my control. The result of this is that I feel more centred and less blown about by uncertainty or ineptitude. I change what I can change, influence what I can influence, and when neither is an option, I aim to contain it and move on.

Insufficient Reward

One way to look at the situation I was in – drifting – was that the reward for doing a good job was the ambiguity, because the only decision that had been made was that I wouldn’t go back to my previous team. I understood (and agreed to) this, but it definitely left me living in a space of uncertainty that got harder and harder to manage over time. I felt less confident – did my boss really value me? Would other people think I had been demoted? Would what I ended up being given be something I even wanted?

In response to this, I searched for validation elsewhere. Focused on shipping things: internal blog posts, progress reports, external articles. Hoarded compliments. And with people I trust, admitted that I felt terrible and straight up asked for the validation I needed. These things helped in the moment, although fundamentally they needed to come with a change in mindset too – one of looking for information that supports a positive hypothesis, rather than a negative one.

Lack of Community

It is a truth universally acknowledged that leadership positions are lonely. In many ways, I got myself into this situation by so badly wanting my peers to be a team and being prepared to do things in service of that. At the lowest point, though, I did feel disconnected from them in terms of tempo – they were busy and focused and I was drifting around. They had direction and I was lost.

This was a time to lean on the community I had worked hard to build. When I left our group to move to the new team, one of the most meaningful things was the support and enthusiasm of my peers in seeing it as a positive move – even as I wasn’t sure – and, as I left the channels, one of them vetoing my departure from our backchannel and peer support call.

“If all relationships were to reach equilibrium then this building would dissolve” ~Arario Museum in Space, Seoul

other people assumed I felt the most confident at a point where I felt the least confident

It was also a time to build community. On my new team, and with other groups of people who it’s in our remit to help. Peers in other parts of the business, all engineering team leads, everyone involved in our hiring processes. This work is just beginning, but I am genuinely excited for it.

Last week I was at a leadership offsite where we had an intense development week. A coaching exercise with a colleague I don’t normally have much interaction with surfaced that other people assumed I felt the most confident at a point where I felt the least confident. This is one of the dissonances that can arise when people don’t see each other, and I think in the absence of other cues, can make it easy to assume someone is busy and not reach out. I’m not totally sure what to do with this, but I can at least model the behavior I want, and make more effort to check in.

Absence of Fairness

There was one situation in particular that really got to me – a lot of my time was wasted, I was denied any kind of input, and a situation was forced onto the team that I felt negated much of the effort I had made. It felt like a situation where “assuming best intent” and trying to be helpful – usually a good thing and a strength – became, in the wrong context, an attack vector.

I’m not confident in what I’m taking from this, yet. On a concrete level, the importance of documenting and being direct. I think it’s easy to assume that “other people notice” but if they don’t this can lead to a cycle of frustration. Usually little things are just that – little things. However sometimes they are a product of something much bigger and much more problematic. If no-one flags the little things, the patterns take much longer to surface.

On a meta level, it’s reminded me to ask, “how much of this is my problem?” and accept that sometimes the best we can do is manage the impact of failure, because we do not have the power to prevent it.

Conflict in Values

“Inframince” ~Arario Museum in Space, Seoul

This came up particularly when people’s stated values differed from their lived values – creating a compound effect. This is a concept that has come up a lot in coaching for me – for every turnaround project – the question “what values is this hitting for you?”

so much of good management seems to be about being a decent human being

I am personally very values driven; so much of good management seems to be about being a decent human being. Of course, being decent is rarely the easiest path in the immediate frame, and often a lot of work. This is the kind of dissonance that will escalate a disagreement to existential crisis for me.

Again, it is a strength: values scale much better than people or process, and creating values on teams is part of how I have been effective, and able to hand things off. However, the downside is clear and intense. I think this is true for a lot of effective people who burn out – we are good because we care, but the downside is that care is for a reason – often values – and we struggle when those values are violated. It can seem like the path to success is to be more self-serving and care less, but that just creates the situations that we claim we don’t want. If we want things to be different, we have to make them different where we can, but in a way that isn’t self-destructive and doesn’t require changing the core of who we are.

That is not a concrete takeaway, so concretely: I seek to support people rather than systems, make sure my work aligns with and communicates clear values, and ask questions and seek clarity on things that are open to interpretation or are potentially problematic.

Work Overload

“Live without dead time” ~Museum of Art, Seoul

Of course, in all of this, working a lot was a factor. I worked long hours and regularly over weekends (even if “just” travelling so as to avoid missing a weekday). In many ways overload was a multiplying factor, though; I used working to avoid things I didn’t want to deal with (like the building site or the medical system), and the fact that I had worked so hard compounded the existential problems of reward, fairness and values.

The first thing I changed here was working to make the time I did take off better. Moving to a place where I could have a separate office (I work from home), and organizing my living space and containing the mess such that I could have a place to relax without seeing a physical todo list in the form of things not yet done or tidy. The better my physical space has been, the better I have felt. The first time I had a weekend where I didn’t have any domestic stuff I needed to do was a milestone.

Within that, I made more effort to stop work by 7pm, and then be deliberate in spending my time on what would make me feel better. E.g. making an active choice between the gym and bringing some sense of order. When I needed to work a weekend, I made a point to balance that with other things I needed – like working in a coffee shop for some human contact, and breaking up delivery points with things for me. And also making sure I didn’t make the exact same mistake the following week and have to work a subsequent weekend.

The second thing was a resolution to take statutory holidays. These are not super meaningful to me – as an atheist, I don’t celebrate religious holidays, and in a distributed environment there are always other people working. However coming to see them as like weekends – arbitrary days that we have agreed as a society not to work – has been helpful. Yes, I could take a three day weekend any time when flights to Paris are cheaper, but I can take that three day weekend and the arbitrary one too (and using the arbitrary one to play video games is completely reasonable).

Similarly, I started taking time off for medical stuff. This wasn’t always possible (it’s unfortunate if one is in hospital on a day that is supposed to be release day, for example), but overwhelmingly has been. If I have to go to the UK to see the doctor, I take the entire period, rather than trying to work around flights and transit and appointments, to do what is going to be best for me. This was a bit of a culture shock for me: at the Conglomerate, when people were sick they “worked from home”, but in an environment where people already work from home people actually take sick days. Including me.

Finally, I think it’s always worth taking time where there is opportunity. I took an extended break between ending my last job and starting this one (I fulfilled a life goal and went to Tuvalu). I made two weeks of space between the first team and the second, even though I had some work to do, I was free of responsibility and had two amazing long weekends (one in London, and one in Paris). Winding up on the second team made space for the disappearance in Seoul – where I had many positive experiences (including meeting a raccoon!) even though I didn’t feel particularly positive in myself.

The Other Side

The TL;DR of this is perhaps that I have spent a lot of time lately confronting the shadow side of my strengths – the personal cost of the professional “success”, and how that manifests as burnout. It’s hard to overstate how confronting this has been, how difficult, and it’s still far from done.

I know, though, it’s something I am far from alone in. Burnout is the epidemic of millennials, and the epidemic of tech, particularly in those of us who genuinely and deeply take on the work of inclusion, of trying to make the functional environments we have never, or rarely, experienced ourselves. A while ago I wrote that the third shift of inclusion work is to heal ourselves and more than ever I believe this is true. Broken leaders cannot create functional environments – especially if we have power, we owe it to the people we work with to do the work on ourselves that makes us safe and reasonable people for others to show up to.

“Forgive Yourself” ~Sign in Tulum, Mexico

Thanks to my colleagues who engaged so openly in our leadership training, which helped me break out the other side of this, my boss who looked out for me at the worst point, and the amazing community in our engineering managers slack, who started the conversation that made me realize I was ready to write this, and inspired me to do so.