<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Merrick Christensen's Articles</title>
        <link>https://www.merrickchristensen.com</link>
        <description></description>
        <lastBuildDate>Sat, 21 Feb 2026 21:06:50 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <copyright>All rights reserved 2026, Merrick Christensen</copyright>
        <item>
            <title><![CDATA[AI is the New Medium]]></title>
            <link>https://www.merrickchristensen.com/articles/ai-is-the-new-medium</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/ai-is-the-new-medium</guid>
            <pubDate>Sun, 09 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[AI isn't just a tool for creating content within the mediums we already know. AI is the new medium.]]></description>
            <content:encoded><![CDATA[
Like the rest of the world, I've been tinkering with AI for the last few years. For me, it started in 2021 when I gained access to OpenAI's GPT-3. I was on my own "bachelor's trip" at a cabin near Bear Lake in northern Utah, geeking out over how amazing these APIs were. A friend of mine had been curating data for an app he was developing—one that paired groups of friends with activities—and I was experimenting with using AI to seed and refine this data. I was amazed to see a machine generate data that somewhat resembled what my friend had manually curated for months.

Then came image models—Stable Diffusion 1.5 was my entry point. I learned how to fine-tune them using LoRA adapters and even created my own derivative checkpoints. One of my favorite projects was training models to generate images of my kids, allowing me to create custom coloring pages featuring their faces. That process led me to fine-tune LLaMA models as well, which, after quantization, I realized I could do entirely on my MacBook Pro. I experimented with training models on the writings of some of my favorite thinkers, like George MacDonald and C.S. Lewis. Then came voice models—having my kids sing "Go Bananas, Go Go Bananas" and hearing it come out in Joe Biden's voice was absolutely hilarious.

<audio controls>
  <source
    src="/assets/audio/articles/ai-is-a-new-medium/bananas.mp3"
    type="audio/mpeg"
  />
</audio>

After my brother passed away in 2022, I almost trained models on his texts, voice, and pictures so that I could continue to experience some version of him. However, I knew that, at best, I'd only be creating a distorted caricature of him to avoid my own grief, a caricature that couldn't hold a candle to the real him, with his completely unpredictable humor and uniqueness. Frankly, I don't know if there is a model that is both uncensored and completely loving & accepting enough to serve as a foundation for a caricature of that man. He was a human paradox. No human fits neatly into the dimensions of personality we strive to force them into. People can be wild and calming, irreverent and safe, prideful and utterly human, and most of all, unpredictable. The people in our lives are unmistakably "other"; what we know of them is a fragment of the infinitude of who they are, and it's pitiful to think that can be compressed into a set of numbers. Still, I suppose it's only natural to reach for the tools we have to relieve our grief—after all, people have been doing it with photos and videos for years.

Another area where AI had a dark side for me was how mindlessly I started using it. I had it generate bedtime stories for my kids instead of making up my own. I pasted in code snippets, tweaking AI-generated solutions until something worked; only then did I bother putting in the effort to understand why I had a problem in the first place. I copied errors from my software without even reading them. It became so useful that I noticed my brain starting to under-function. I used to tell personalized, creative stories to my kids before bed. I used to write code with monochrome syntax highlighting to focus purely on the structure. I used to design software in my head and in my unit tests before implementing it. I always made sure I knew the "why" behind every bug. (That habit, thankfully, never died.) I realized that over-functioning with AI was a slippery slope to under-functioning myself.

Recently, I've been reflecting on how I use AI, and a book helped crystallize my thoughts: Brave New Words by Sal Khan. The book opens with Sal's daughter co-authoring a story with AI. This subtle change, where the AI cooperates with the human who asks it to take part in the writing, is a meaningful improvement over having AI write the whole story, as I'd done for my kids. Generalizing this idea, having AI collaborate with you rather than work on your behalf, leads to a more enriching use of the tools. GPT Tasks have helped me take this "flip the script" approach to another level.

I've started using GPT Tasks in this way. Every morning, AI asks me a systems design, programming, or computer science question. It then reviews my response, points out errors in my thinking, and highlights learning opportunities. This gives me the chance to articulate my understanding and immediately receive feedback from an infinitely patient tutor. Already, I feel like I've learned more from this approach than from using AI as a crutch to hurry through debugging.

Another example is using AI for writing feedback. Instead of having it draft documents for me, I write the initial draft and then ask AI, "I've written the following document for purpose X. What considerations have I missed? What are the best counterarguments?" This allows me to reap the cognitive benefits of writing—mental clarity and deep thinking—while refining my message and considering alternative perspectives. Besides, writing with my own voice and personality makes my work worth reading, unlike the sterile, alienatingly perfect output AI often produces.

They say that whenever a new medium emerges, people first use it to replicate the old ones:

- Film → Theater: Early films were static, staged-like plays with a single camera angle.
- TV → Radio: Early television shows were essentially filmed radio programs.
- Web → Print: Early websites mimicked newspapers, with rigid layouts and no interactivity.
- Streaming → Cable: Early streaming platforms copied TV schedules and weekly episode releases.

In this same vein, I was using AI to mimic exactly what I had been doing before—just through a new medium. I created bedtime stories, so I had AI create bedtime stories. I made silly Photoshop edits of my kids, so I had AI generate silly images of them. I wrote code and fixed errors, so I had AI write code and fix errors. I was treating AI as a tool to bridge into familiar territories rather than recognizing it as something entirely new.

I'm starting to see AI not just as a tool, but as a new medium—one that unlocks entirely new types of experiences. Instead of simply answering my questions, AI can help me ask better ones. Instead of generating my writing, it can challenge and refine my thoughts. In this role, AI becomes less like a crutch and more like a mentor—one that pushes me to grow rather than shrink. Before, AI was a bit like a pet genie on a chain—immensely powerful, tempting me to let it do more and more while I become smaller in its shadow. The real danger isn't that AI is too powerful, but that it can make us less powerful if we use it passively. If we let it generate for us rather than provoke us, if we let it replace curiosity instead of fueling it, we risk shrinking into lazy consumers. AI can either diminish us or stretch us into something greater than we were before. The choice is ours: do we hold the chain and wither, or do we show self-discipline, put in the work, and use this genie to help us thrive?

AI isn't just a tool for creating content within the mediums we already know. AI is the new medium. Try flipping the script and see what happens.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Webflow Design Language]]></title>
            <link>https://www.merrickchristensen.com/articles/webflow-design-language</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/webflow-design-language</guid>
            <pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Learn about the structurally edited language that powers sites built with Webflow.]]></description>
            <content:encoded><![CDATA[
I finally got the chance to talk about the core of what I've worked on at Webflow for the past 7 years. We evolved a programming language that is:

- Projectionally edited (modify the code by directly modifying the result). Direct manipulation is at the heart of No Code.
- Foreign function interfacing (FFI) to interoperate with JavaScript. This lets Webflow provide code that our customers can visually consume, for example, date formatting functions or other JavaScript values.
- Host-extensible type system with typed holes and inference, which enables our team to provide unique custom visual editing interfaces with bindings out of the box. Full support for type aliases provides tailored editing experiences for the same underlying type. A good way to visualize this is a phone number editing experience being different than that of an email, but they're both ultimately strings.
- Pure evaluation, which enables many of our interactive live evaluation and debugging experiences.
- Holes, which enable us to support live editing, even if values are missing or corrupt.
- Replaying computations without executing effects again, and so much more.

Webflow Design Language is the DNA of a Webflow site and its design decisions facilitate our visual editing experience. Read about it on our engineering blog [here](https://webflow.com/blog/webflow-design-language).
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Using Webflow with Netlify]]></title>
            <link>https://www.merrickchristensen.com/articles/using-webflow-and-netlify</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/using-webflow-and-netlify</guid>
            <pubDate>Sat, 10 Oct 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Configure Netlify to send particular routes to Webflow so that you can selectively serve pages that are designed and hosted on Webflow.]]></description>
            <content:encoded><![CDATA[
Webflow is hard to beat as a solution for the marketing site for your
application. Netlify is a great CI/CD & hosting option for your front-end
application. This article briefly shows how to configure Netlify to send
particular routes to Webflow so that you can selectively serve pages that are
designed and hosted on Webflow.

We want some URLs served directly by Netlify for our application and other URLs
served by Webflow for marketing.

`merrickchristensen.com`
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;👈
Hosted on Webflow

`merrickchristensen.com/app` &nbsp;&nbsp;👈 Hosted on Netlify

## 1. Point Your Desired Domain to Netlify

Set your custom domain up with Netlify by following
[Netlify's official custom domain instructions](https://docs.netlify.com/domains-https/custom-domains/).

## 2. Setup a Subdomain For Webflow

Connect a custom subdomain to Webflow by following
[Webflow's official subdomain setup instructions](https://university.webflow.com/lesson/connect-a-subdomain).
As part of this setup you'll be asked to create a `CNAME` record that points to
Webflow. In my case, I use Netlify as my name server, so I set up a
[CNAME record in Netlify](https://docs.netlify.com/domains-https/netlify-dns/dns-records/#add-a-new-record).
Don't worry, your subdomain won't be what your customers see, so the actual
subdomain doesn't matter much.

![Netlify CNAME Settings Example](/assets/images/articles/using-webflow-and-netlify/netlify-domain.png)

> Netlify Subdomain Settings

![Webflow Subdomain Added to Project](/assets/images/articles/using-webflow-and-netlify/webflow-domain.png)

If you go to the subdomain directly, it should be publicly available and served
by Webflow. All good! Let's look at sharing a domain between the two.

## 3. Send Traffic For Desired Pages To Webflow Using Netlify Proxies

Using Netlify [redirects](https://docs.netlify.com/routing/redirects/) &
specifically,
[proxies](https://docs.netlify.com/routing/redirects/rewrites-proxies/#proxy-to-another-service)
we can selectively proxy certain URL paths to our Webflow site. The key to
making Netlify proxy requests is to use the `200` status code. The syntax for a
proxy directive, which would go in your Netlify `_redirects` file, is:

```
<path> <full-url-to-proxy> <status-code>
```

So to send the traffic of our homepage to our Webflow site we can add the
following directive.

```
/ https://marketing-site.merrickchristensen.com 200
```

This will send all traffic for the root page of the domain hosted by Netlify,
`merrickchristensen.com`, to our Webflow site.

We can route other pages too, and we can take advantage of Netlify's `:splat`
syntax to support Webflow CMS URLs.

```
/  https://marketing-site.merrickchristensen.com/  200
/contact-us  https://marketing-site.merrickchristensen.com/contact-us 200
/blog/:splat https://marketing-site.merrickchristensen.com/blog/:splat  200
        ^                                                          ^
        |                                                          |
        :splat captures that part of the URL and can be used in the redirect;
        this is primarily useful for supporting CMS paths in Webflow.
```

All of your other URLs will be handled by Netlify by default.

## Trade Offs

In addition to enabling really fast websites, Webflow has really fast hosting.
Webflow serves pages from a CDN, which means your customers get your site
delivered by a server near them and see it sooner. Unfortunately, the proxy
setup goes through Netlify first, which adds one extra hop from Netlify to
Webflow and therefore some latency. To be honest, for most use cases this
latency is minimal and negligible. In my limited testing, proxying through
Netlify added roughly 50-100 milliseconds to the request.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[JSON Lisp]]></title>
            <link>https://www.merrickchristensen.com/articles/json-lisp</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/json-lisp</guid>
            <pubDate>Mon, 03 Aug 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Learn about how programming languages work as we design & implement a little Lisp-like language called JSON Lisp.]]></description>
            <content:encoded><![CDATA[
Lisp is a programming language that is often used didactically in higher
education for its simplicity and minimal syntax. Sounds fancy, right? I wouldn't
know to be honest, and I didn't make it all the way through the third grade to
have you judge me.

Anyways, in a few paragraphs, you'll be able to read & write Lisp! By the end of
this article we'll have implemented Lisp style evaluation with our programs
provided as JSON, or JSON Lisp™, to be precise.

# Lisp Crash Course

If you're familiar with Lisp feel free to skip this crash course introduction
and jump straight to JSON Lisp where stuff gets weird.

## Symbolic Expressions (S-expressions)

Lisp uses something called S-expressions for its syntax. An
[expression](<https://en.wikipedia.org/wiki/Expression_(computer_science)>)
is an entity that may be evaluated to determine its value. In Lisp, everything
is an expression. Let's take a look at some Lisp Expressions that are also valid
JavaScript expressions.

```lisp
; semicolon is how you introduce a comment in lisp
; (the // in JavaScript)


4 ; Value: 4
"hello" ; Value: "hello"
true ; Value: #t (that means true)
```

These are called primitive values. Primitive values and expressions aren't the
only concepts shared with JavaScript, though; Lisp also has function calls!
However, the syntax for them is different. They are wrapped in parentheses,
which makes the order of evaluation unmistakable. Arguments are separated by a
space instead of a comma.

```lisp
; --- This is the name of the function.
; |
; |
; ˅
(+ 1 2) ; Value: 3
;   ˄ ˄
;   | |
;   | |
;   | |
;   ------ These are the arguments to the "+" function,
;       arguments are separated by a space.
```

It's a little weird to see `+` before `1` & `2`, instead of in between them.
That's called [prefix notation](https://en.wikipedia.org/wiki/Polish_notation).
It's also a bit odd to see an operator like `+` referred to as a function. Here
are a bunch of JavaScript-next-to-Lisp examples to make this more concrete.

### Operators

_JavaScript_

Operators are built in with
[infix notation](https://en.wikipedia.org/wiki/Infix_notation#:~:text=Infix%20notation%20is%20the%20notation,plus%20sign%20in%202%20%2B%202.)
and you only need to provide parentheses to opt out of the built-in
[order of operations](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence).

```jsx
3 * 3 + 3; // 12
3 * (3 + 3); // 18
```

_Lisp_

Operators are just function calls. Parentheses are always provided, which means
no special syntax is required to override built-in operator precedence; the
developer always makes the order explicit.

```lisp
(+ (* 3 3) 3) ; Value: 12
(* 3 (+ 3 3)) ; Value: 18
```

### Methods

_JavaScript_

In JavaScript, some operations are provided as functions on the object's
prototype. The value on the left becomes `this` in the implementation of
`toUpperCase`. Don't be deceived by the special method syntax, they're also
functions you can reference directly.

```jsx
"hello".toUpperCase(); // 'HELLO'
String.prototype.toUpperCase.call(["h", "i"]); // 'H,I'
// An Array? Yes. How? Coercion. Why? Life is pain.
```

_Lisp_

Same situation in Lisp as we had for operators, function on the left, arguments
on the right.

```lisp
(string-upcase "hello") ; Value: "HELLO"
```

### Function Calls

_JavaScript_

Function calls have the name of the function on the left, some parentheses to
indicate calling that function, and arguments provided inside the parentheses.

```jsx
parseInt("3"); // 3
```

_Lisp_

Same situation as operators and methods. Function on the left, arguments on the
right.

```lisp
(parse-integer "3") ; Value: 3
```

Notice how in JavaScript, special syntax is used for operators, method calls &
function calls? In Lisp, they all use the same syntax! Suddenly it checks out
why the brainiacs use it.

## Lisp Evaluation

Lisp evaluates a program with a strategy called
[Applicative order](https://en.wikipedia.org/wiki/Evaluation_strategy#:~:text=Applicative%20order%20evaluation%20is%20an,before%20the%20function%20is%20applied.).
Applicative order means that, in Lisp, innermost arguments are evaluated
_before_ they are provided to their function.

```lisp
(+ 1 (+ 2 3))
;  ˄     ˄
;  |     |
;  |     |
;  |     - Value: 5
;  |
;  Value: 1
;
; (+ 1 5)
; ; Value 6, All done!
```

All the Lisp examples we've seen thus far have been evaluated this way!

> You'll notice Applicative order seems a great deal like what you're used to in
> JavaScript! JavaScript uses a similar strategy called
> [Call by sharing](https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing)
> which is very similar to Applicative order but differs in some important ways.
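
Since JavaScript also evaluates arguments fully, left to right, before applying a function, we can watch this order happen. A small illustrative sketch (the `trace` helper and `log` are mine, purely for instrumentation):

```jsx
// Record the order in which sub-expressions are evaluated.
const log = [];
const trace = (label, value) => {
  log.push(label);
  return value;
};

const add = (x, y) => trace("apply +", x + y);

// Mirrors the Lisp expression (+ 1 (+ 2 3)).
const result = add(trace("1", 1), add(trace("2", 2), trace("3", 3)));

// Every argument was evaluated before either + was applied:
// log is ["1", "2", "3", "apply +", "apply +"] and result is 6.
```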

## Special Forms

Unfortunately, there are some exceptions to this simple evaluation strategy.
Since arguments are evaluated before they are provided to their function in
Lisp, certain types of constructs can't be evaluated this way. Let's explore a
hypothetical broken `if-else` function in Lisp to clarify this.

_Lisp_

```lisp
; Given Applicative Order, arguments are evaluated before applying `if`.
(if-else (> 2 1) (+ 5 5) (+ 2 2)) ; Evaluate the arguments first.
;           |       |       |
;           ˅       ˅       ˅
;(if-else  true    10       4)
;                           ˄
;                           |
;                           |
;                           - Cripes! We executed the else branch
;                             even though we shouldn't have!
; Value: 10
```

Since the arguments were evaluated before they were provided to the `if-else`
function, we end up evaluating the `else` branch even when we shouldn't. Here is
the same problem expressed in JavaScript:

_JavaScript_

```jsx
// By the time ifElse gets the arguments, it's too
// late. As they say, "Young Herc was mortal now."
ifElse(2 > 1, 5 + 5, 2 + 2);
```
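
One user-land workaround, sketched here with a hypothetical `ifElse`, is to delay each branch yourself by wrapping it in a zero-argument function (a thunk) that `ifElse` calls only when needed:

```jsx
// Each branch is wrapped in a thunk, so nothing runs until
// ifElse decides which branch to call.
const ifElse = (predicate, consequent, alternative) =>
  predicate ? consequent() : alternative();

ifElse(2 > 1, () => 5 + 5, () => 2 + 2); // 10, and 2 + 2 never runs
```

This works, but it pushes the burden onto every caller to remember the wrapping.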

To solve this problem Lisp keeps the same syntax but treats the evaluation of
those entities differently. This handful of specially interpreted expressions is
called
[special forms](https://courses.cs.northwestern.edu/325/readings/special-forms.php#:~:text=What%20are%20special%20forms%3F,%2C%20if%20any%2C%20are%20evaluated.).
Special forms look like normal function calls; however, the interpreter treats
them specially.

_Lisp_

```lisp
(if (> 2 1) (+ 5 5) (+ 2 2))
;      ˄
;      |
;      Lisp understands `if` is different,
;      it evaluates the conditional first
;      to determine what to do next.
;
;(if true (+ 5 5) (+ 2 2))
;            ˄
;            |
;            Now this will be evaluated but
;            (+ 2 2) will never be evaluated.
; Value: 10
```

See how Lisp knows that `if` is different? Special even? Worthy of a noble
treatment indeed!

Alright, that's everything we need to know to start our JSON Lisp journey.

# Lisp in JSON

Let's migrate our Lisp examples into a JSON syntax.

Our basic primitive values, like `4`, `"Hello"` and `true`, hold up just fine as
values in JSON, so we don't need to do anything there. Things start to get
interesting when we look at expressing function calls in JSON, which, if you
remember from our Lisp crash course above, buys us operators and methods too.

## Function Calls

To represent function calls in JSON let's take a look at our addition example.

_Lisp_

```lisp
(+ 1 2)
```

Unfortunately for us, the elegant parentheses, `()`, are invalid symbols in
JSON. Fortunately, their rectilinear cousins, `[]`, are totally valid, so let's
swap those. While we're at it, since whitespace is insignificant and will be
thrown out by most JSON deserializers/parsers, we'll use `,` as our separator.
These two modifications alone give us a mechanism for expressing the above
program in JSON.

_JSON Lisp_

```json
["+", 1, 2]
```

Let's look at our other examples in light of these rules.

## Example JSON Lisp Programs

_JSON Lisp Operators_

```json
["*", 3, ["+", 3, 3]]
```

_JSON Lisp Methods_

```json
["uppercase", "hello"]
```

_JSON Lisp Function Calls_

```json
["alert", ["uppercase", "Life is dope."]]
```

_JSON Lisp Special Forms_

```json
["if", [">", 2, 1], ["+", 5, 5], ["+", 2, 2]]
//         ˄             ˄           ˄
//         |             |           |
//         |             |           |
//     condition      if true     otherwise
```

Turns out that Lisp's simple S-expression syntax makes it pretty easy to model
in JSON.

# JSON Lisp Implementation

Our JSON Lisp implementation will be done in TypeScript.

> TypeScript, so hot right now. - Mugatu

## Lexical Analysis & Parsing

First things first, programming language implementations typically need a
mechanism for taking the source program, a string of text, and turning it into a
structure that the interpreter or compiler can work with. This is called a
[parse tree](https://en.wikipedia.org/wiki/Parse_tree#:~:text=A%20parse%20tree%20or%20parsing,to%20some%20context%2Dfree%20grammar.).
The parse tree is often subsequently converted into an
[abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree) for
further analysis; however, our implementation will interpret the parse tree
directly. So what do we need for the lexer and parser? Get ready.

```jsx
export const parse = JSON.parse;
```

Yup, `JSON.parse` takes care of our lexing and parsing. It converts values
nicely and even gives useful syntax errors. Out of the gate, our JSON Lisp has a
great head start: implementations in other host languages get a JSON parser for
free.

> The growth of modern streaming architectures & streaming JSON parsers raises
> an interesting possibility I'd love to explore further (or hear about if you
> explore it) around streaming evaluation, or the ability to evaluate the
> program as it is downloaded and parsed. This would make JSON Lisp or something
> like it a really interesting general-purpose format for primarily server
> rendered architectures with persistent views that emit instructions to the
> client using Web Sockets like
> [Phoenix Live View](https://github.com/phoenixframework/phoenix_live_view).

Ok, since we have a type system at our disposal we may as well add some nice
type annotations for JSON.

```jsx
type Json =
  | null
  | boolean
  | number
  | string
  | Json[]
  | { [prop: string]: Json };

export const parse = (source: string): Json => JSON.parse(source);
```

## Evaluation

To evaluate a JSON Lisp expression we'll provide a function called `evaluate`,
which will take in a JSON Lisp `expression` (the return value of `parse` above)
and an `environment`. The `environment` is the context the program will be run
in. The `environment` will contain the implementations of our built-in operators
and provide us a place for storing user-land function definitions and values.

You can think of the `environment` as the global namespace in a browser window.
There are some built-in functions you can call that are implemented in the host
language like `alert`. It also contains functions that you put there.

So our `Environment` type will be a map of identifiers (global names) to either
`Json` (user-land implementations) or to JavaScript `Function`s (host language
provided functions).

```jsx
// The environment is a map of identifiers to values.
type Environment = {
  [name: string]: Json | Function,
  //                 ˄       ˄
  //                 |       |
  //                 |       |
  //                user    host
};

export const evaluate = (expression: Json, environment: Environment) => {
  // TODO Write the evaluator
};
```

### Applicative Order Evaluation

In Applicative Order evaluation, each expression is evaluated before providing
it to its function. For our base case, we'll evaluate each of the arguments
along with the function recursively as we walk down the expression tree, and
we'll apply our evaluated procedures on the way up (in Lisp, and among the OG
programmers of 20+ years ago, the function is often called the procedure).

When we encounter a string, we'll look up its
[function](https://en.wikipedia.org/wiki/Subroutine) implementation in the
environment.

```jsx
export const evaluate = (expression: Json, environment: Environment) => {
  // When we encounter an expression, `[ ]`
  if (Array.isArray(expression)) {
    // Evaluate each of the sub-expressions
    const result = expression.map((expression) =>
      evaluate(expression, environment)
    );

    // If there is nothing to apply, we have a value.
    if (result.length === 1) {
      return result[0];
    } else {
      // Retrieve the procedure from the environment
      const procedure = result[0];
      // Apply it with the evaluated arguments
      return procedure(...result.slice(1));
    }
  } else {
    // Look up strings in the environment as references.
    if (
      typeof expression === "string" &&
      environment.hasOwnProperty(expression)
    ) {
      return environment[expression];
    }

    // Return values.
    return expression;
  }
};
```

This is enough to implement our `operator` examples! In fact, with this small
implementation we can implement the 1.1 exercises from MIT's famous
[Structure & Interpretation of Computer Programs](https://mitpress.mit.edu/sites/default/files/sicp/index.html).

```jsx
const add = (...args) => args.reduce((x, y) => x + y);
const subtract = (...args) =>
  (args.length === 1 ? [0, args[0]] : args).reduce((x, y) => x - y);
const division = (...args) =>
  (args.length === 1 ? [1, args[0]] : args).reduce((x, y) => x / y);
const multiplication = (...args) => args.reduce((x, y) => x * y, 1);

// We provide the environment that references read from here.
const defaultEnvironment = {
  "+": add,
  "-": subtract,
  "/": division,
  "*": multiplication,
};

expect(evaluate(10, defaultEnvironment)).toEqual(10);
expect(evaluate(["+", 5, 3, 4], defaultEnvironment)).toEqual(12);
expect(evaluate(["-", 9, 1], defaultEnvironment)).toEqual(8);
expect(evaluate(["/", 6, 2], defaultEnvironment)).toEqual(3);
expect(evaluate(["+", ["*", 2, 4], ["-", 4, 6]], defaultEnvironment)).toEqual(
  6
);
```

### Other Functions

With that, the pattern for adding `functions` to call is to provide their
implementations on the environment and then reference them by name in the
leftmost position of our JSON Lisp S-Expression.

For example, we could add uppercase to our environment:

```jsx
evaluate(["uppercase", "Hello world!"], {
  uppercase: (str) => str.toUpperCase(),
});
```

> Using strings to double as identifiers creates a weird class of bugs: what if
> I mean the string `"uppercase"` instead of the function `uppercase`? To solve
> this, a special form could be introduced for reading identifiers or expressing
> strings, e.g. `["Text", "Hello world!"]`.

### Special Forms

As we explored above, certain constructs can't be implemented as functions using
strict applicative order evaluation, for example, `if`. Let's explore adding a
special form for handling `if`.

```jsx
["if", true, ["+", 2, 2], 0];
//      ˄         ˄       ˄
//      |         |       |
//      |         |       Else branch
//      |         Run this only if the predicate is true
//      Evaluate the predicate first
// returns 4
```

The way we'll go about this is checking to see if the expression reflects one of
our special forms before we evaluate. If it does, we'll evaluate it specially;
if it doesn't, we'll evaluate and apply as usual!

In the case of `if`, we'll first evaluate its predicate. Depending on the value
returned, we'll then evaluate the correct branch.

```jsx
export const evaluate = (expression: Json, environment: Environment) => {
  if (Array.isArray(expression)) {
    const procedure = expression[0];

    // Look for special forms based on the first array entry.
    switch (procedure) {
      // Check if we have a special form!
      case "if": {
        // Retrieve the predicate
        const predicate = expression[1];
        // Evaluate the predicate in the environment
        if (evaluate(predicate, environment)) {
          // If it is true evaluate the first branch
          return evaluate(expression[2], environment);
        } else {
          // If it is false evaluate the else branch, if one is
          // provided. (Check length, not truthiness, so a falsy
          // else branch like 0 still evaluates.)
          if (expression.length > 3) {
            return evaluate(expression[3], environment);
          } else {
            return null;
          }
        }
      }
      // Time for some applicative order evaluation Ma!
      default: {
        const result = expression.map((expression) =>
          evaluate(expression, environment)
        );

        if (result.length === 1) {
          return result[0];
        } else {
          const procedure = result[0];
          return procedure(...result.slice(1));
        }
      }
    }
  } else {
    // Nothing changed here.
    if (
      typeof expression === "string" &&
      environment.hasOwnProperty(expression)
    ) {
      return environment[expression];
    }
    return expression;
  }
};
```

Sweet! Now we've got support for conditional branching in our JSON Lisp!

```jsx
const environment = { "+": (x, y) => x + y };

expect(evaluate(["if", true, ["+", 1, 1]], environment)).toEqual(2);
expect(evaluate(["if", false, 1, ["+", 1, 2]], environment)).toEqual(3);
// If an else branch isn't provided return a null value.
expect(evaluate(["if", false, 1], environment)).toEqual(null);
```

With that, the pattern for adding special forms is simply to add additional
branches to the `switch` statement.
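To make that pattern concrete, here's a condensed sketch of the evaluator with
one additional special form: a short-circuiting `and`. The `and` form (and the
condensed shape of the evaluator) is a hypothetical illustration, not part of
the implementation above.

```jsx
// Condensed evaluator plus one new special form: a
// short-circuiting `and`. This form is an illustration,
// not part of the article's implementation.
const evaluate = (expression, environment = {}) => {
  if (Array.isArray(expression)) {
    switch (expression[0]) {
      case "if": {
        return evaluate(expression[1], environment)
          ? evaluate(expression[2], environment)
          : expression.length > 3
            ? evaluate(expression[3], environment)
            : null;
      }
      // New special form: evaluate operands left to right
      // and stop at the first falsy value.
      case "and": {
        let result = true;
        for (const operand of expression.slice(1)) {
          result = evaluate(operand, environment);
          if (!result) return result;
        }
        return result;
      }
      // Applicative order evaluation, as before.
      default: {
        const [procedure, ...args] = expression.map((entry) =>
          evaluate(entry, environment)
        );
        return procedure(...args);
      }
    }
  }
  if (
    typeof expression === "string" &&
    environment.hasOwnProperty(expression)
  ) {
    return environment[expression];
  }
  return expression;
};

const environment = { "+": (x, y) => x + y };
evaluate(["and", true, ["+", 1, 1]], environment); // → 2
evaluate(["and", false, ["+", 1, 1]], environment); // → false
```

Like `if`, `and` can't be an ordinary function under applicative order
evaluation, because it must be able to skip evaluating the operands after the
first falsy one.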

## Summary

At this point, our JSON Lisp implementation is in a place where we can add more
special forms and provide new function implementations. I hope that this toy
implementation illuminated some dark corners about how programming languages
work. At the least, I hope you had fun!

You can play with [JSON Lisp here](https://github.com/iammerrick/json-lisp) by
cloning the repository and running `yarn test`.

Some suggestions for taking this a step further:

1. Implement `define` for creating new abstractions on the environment,
   variables, and functions! Can prototypal inheritance be used to mimic a
   stack for function variable scoping?
2. Implement additional special forms!
3. Someone please explore streaming evaluation and tell me how it goes!
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Lazy Evaluation in JavaScript]]></title>
            <link>https://www.merrickchristensen.com/articles/lazy-evaluation-in-javascript</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/lazy-evaluation-in-javascript</guid>
            <pubDate>Sat, 25 Jul 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[JavaScript's call by sharing and getter semantics allow us to implement lazy evaluation for field access.]]></description>
            <content:encoded><![CDATA[
Sometimes evaluating an argument for a function is expensive! So expensive, that
you only want to pay that fiddler to evaluate the value when the function you
are calling _uses_ the value.

Let's look at this problem abstractly first:

```jsx
// The problem
const code = (props) => {
  // Arguments are sometimes only conditionally used!
  if (isExpensiveUseful(props.cheap)) {
    return doSomethingWithExpensiveArg(props.expensive);
  }

  return expensiveIsntEvenUsed;
};

const expensive = getExpensiveValue();
const cheap = {
  /*..*/
};

// Dangit! I had to compute expensive even though the `code`
// may not use it!
code({ cheap, expensive });
```

Do you see the problem? `code` requires `expensive`, but it only uses
`expensive` if `isExpensiveUseful` is truthy! This isn't ideal because we have
to compute `expensive` every time, even if `code` doesn't use it.

## The Obligatorily Mentioned Boring Thunk

> Jump to the section on Self Overwriting Lazy Getters to skip the obligatorily
> mentioned boring option.

If we have access to the function that only conditionally uses the expensive
argument and we have access to update its other callers we can simply change the
interface to make `expensive` a function that returns the `expensive` value.
This allows the consumer of `expensive` to determine if we should pay the cost
of computing it.

```jsx
// Solution One - The Thunk, Explicit Deferred Evaluation
const code = (props) => {
  if (isExpensiveUseful(props.cheap)) {
    // We only pay the cost of `getExpensive`
    // (and therefore whatever functions it calls)
    // if we actually use it! Nice!
    return doSomethingWithExpensiveArg(props.getExpensive());
  }

  return expensiveIsntEvenUsed;
};

const payTheCostLaterIfYouWantThunk = () => {
  return getExpensiveValue();
};

const cheap = {
  /*..*/
};

// It's all on you now code, do us right.
code({
  cheap,
  getExpensive: payTheCostLaterIfYouWantThunk,
});
```

This updates the function's interface to make the conditional use of `expensive`
explicit and allows `code` to decide when to evaluate it. This is a great option
if you have access to update the code and all of its callers. It also requires
confidence that callers invoke the thunk only when the value is actually needed.
Unfortunately, the function may pass the thunk down to other functions that
conditionally want the expensive value, and you'd need to update every consumer
along the way for this strategy to pay off.

## Self Overwriting Lazy Getters

An obscure and far less boring technique is to provide a getter that overwrites
itself on the first retrieval. This allows us to avoid paying the cost of
evaluating `expensive` if it isn't used without updating the function that
conditionally uses it.

```jsx
const code = (props) => {
  if (isExpensiveUseful(props.cheap)) {
    // Hey, it's being accessed, so it is time
    // to pay the fiddler!
    return doSomethingWithExpensiveArg(props.expensive);
  }

  return expensiveIsntEvenUsed;
};

const cheap = {
  /*..*/
};

code({
  cheap,
  // This code will only be run when someone
  // accesses `expensive`.
  get expensive() {
    // Don't forget to overwrite the value so you
    // only pay the cost once!
    delete this.expensive;
    // `this` is the object that contains `cheap` and
    // our `expensive` getter. Upon first retrieval we
    // overwrite the getter with its value.
    return (this.expensive = getExpensiveValue());
  },
});
```

First, we make `expensive` an "implicit thunk" by providing a
[getter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get).
The getter is only run when someone accesses that field. Here comes the
important part: the getter then overwrites itself with the computed value!
`this` in the getter refers to the object that contains the getter, so we are
overwriting the getter, from within the getter, with the computed value. This
prevents `getExpensiveValue` from being run several times if the value is
accessed several times.
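To see the self-overwriting behavior in action, here's a minimal,
self-contained demonstration; the counter and the names here are purely
illustrative.

```jsx
// Count how many times the expensive computation actually runs.
let computations = 0;
const getExpensiveValue = () => {
  computations += 1;
  return "expensive result";
};

const props = {
  get expensive() {
    // Remove the getter, then replace it with a plain data
    // property holding the computed value.
    delete this.expensive;
    return (this.expensive = getExpensiveValue());
  },
};

props.expensive; // first access runs getExpensiveValue
props.expensive; // now just a plain property read
console.log(computations); // → 1
```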

## Concrete Example

Consider the following API that accepts any DOM element and returns a tree of
the same shape with each element's bounding client `rect` and, if that
particular element has any classes attached, its computed `styles`.

```jsx
const getVisualDetails = (el) => {
  return {
    children: Array.from(el.children || []).map((child) =>
      getVisualDetails(child)
    ),
    styles:
      (el.classList || []).length === 0 ? null : window.getComputedStyle(el),
    rect: el.getBoundingClientRect(),
  };
};
```

Let's explore some sample usage.

```jsx
// Here is what usage looks like!
const styles = getVisualDetails(document.body);
// Should return the styles of the first element
// with classes in the document.
console.log(styles.children[0].styles);

// Should return the styles of the second element
// in the first child with classes.
console.log(styles.children[0].children[1].styles);

// Should return the layout of the second element
console.log(styles.children[1].rect);
```

If we wrap it in a timing function and run it in the console, we can get a
rough pulse on how expensive it is.

```jsx
const time = () => {
  // Time Before Computation
  const then = performance.now();
  const styles = getVisualDetails(document.body);
  styles.children[0].styles;
  styles.children[0].children[1].styles;
  styles.children[1].rect;
  // Subtract time before against time now.
  console.log("This computation took: ", performance.now() - then, "ms");
};

time();
```

My results running this function on [github.com](https://github.com):

```
This computation took:  5.984999999782303 ms
This computation took:  3.9449999985663453 ms
This computation took:  4.269999999451102 ms
```

## Make It Lazy, Make It Fast

Let's explore how we can use our new "Lazy Getter" technique to only compute
what is used.

```jsx
const getVisualDetails = (el) => {
  return {
    // We don't even walk the whole DOM in one go! Just one layer
    // of depth at a time, as needed!
    get children() {
      delete this.children;
      return (this.children = Array.from(el.children || []).map((child) =>
        getVisualDetails(child)
      ));
    },
    // Only compute styles on demand if they're retrieved
    get styles() {
      delete this.styles;
      return (this.styles =
        (el.classList || []).length === 0 ? null : window.getComputedStyle(el));
    },
    // Only compute rect on demand if it is retrieved
    get rect() {
      delete this.rect;
      return (this.rect = el.getBoundingClientRect());
    },
  };
};
```

When we first call our lazy implementation:

```jsx
const styles = getVisualDetails(document.body);
```

Very little work is done! We just create an object with some locked and loaded
properties that are ready to do the real work if and when needed. Additionally,
we only do the work one layer at a time, so when `children` is accessed, that
just returns a list of locked and loaded objects.

The actual heavy lifting of walking the DOM, computing styles, and measuring
layout is all done lazily, as needed.

Here are my results running the lazy version on [github](https://github.com):

```
This computation took:  0.43000000005122274 ms
This computation took:  0.1250000004802132 ms
This computation took:  0.28000000020256266 ms
```

As we expected, the lazy version is many times faster. And less memory
intensive! That is because we avoid computing or storing anything that isn't
used, while the initial implementation computes the entire tree upfront even
though only a handful of paths are accessed. Yikes!

## Memoized Getters

An alternative to self-overwriting lazy getters is to define a memoized
property that caches its result after its first call.

Lodash provides a handy utility called
[`once`](https://lodash.com/docs/4.17.15#once) which runs a function once and
caches the value. Subsequent calls return the value initially computed.

Here is an implementation of `once`:

```jsx
const once = (fn) => {
  // Track whether `fn` has run, rather than checking the cache
  // against null, so a computed `null` is cached too.
  let called = false;
  let cache = null;
  return () => {
    if (!called) {
      called = true;
      cache = fn();
    }
    return cache;
  };
};
```

Now we can
[define a property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty)
on our object that calls our memoized getter.

```jsx
const args = {
  cheap,
};

Object.defineProperty(args, "expensive", {
  get: once(getExpensiveValue),
});
```

Here is what our `getVisualDetails` implementation looks like with `once` at our
disposal.

```jsx
const once = (fn) => {
  let called = false;
  let cache = null;
  // A flag, rather than a null check, so a computed `null`
  // (like `styles` below) is cached too.
  return () => {
    if (!called) {
      called = true;
      cache = fn();
    }
    return cache;
  };
};

const getVisualDetails = (el) => {
  const api = {};

  Object.defineProperty(api, "children", {
    get: once(() =>
      Array.from(el.children || []).map((child) => getVisualDetails(child))
    ),
  });

  Object.defineProperty(api, "styles", {
    get: once(() =>
      (el.classList || []).length === 0 ? null : window.getComputedStyle(el)
    ),
  });

  Object.defineProperty(api, "rect", {
    get: once(() => el.getBoundingClientRect()),
  });

  return api;
};
```

The important thing here is that, regardless of implementation, we want to
compute the expensive value _once_ and lazily, _as needed_. If you're wondering
how our memoized version performs, it's pretty comparable to our
self-overwriting implementation.

```
This computation took:  0.5100000003039895 ms
This computation took:  0.14000000010128133 ms
This computation took:  0.15499999972234946 ms
```

## Summary

JavaScript is a wonderfully (and sometimes terribly) flexible language. Its
[call by sharing](https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing)
and
[getter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get)
semantics allow us to implement lazy evaluation for field access.

An exercise left for the reader is to implement lazy evaluation using
[ES2015 Proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy)s.
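If you'd like a starting point for that exercise, here's one possible sketch:
a `Proxy` whose `get` trap computes fields from a map of thunks and caches
each result on first access. The `lazyObject` helper and its names are
hypothetical, one way among many to approach it.

```jsx
// One possible sketch: a Proxy whose `get` trap lazily computes
// fields from a map of thunks, caching each result on first access.
const lazyObject = (thunks) => {
  const cache = {};
  return new Proxy(cache, {
    get(target, key) {
      // Compute on first access, then serve from the cache.
      if (!(key in target) && key in thunks) {
        target[key] = thunks[key]();
      }
      return target[key];
    },
  });
};

let computations = 0;
const props = lazyObject({
  expensive: () => {
    computations += 1;
    return "expensive result";
  },
});

props.expensive; // computed now
props.expensive; // served from the cache
console.log(computations); // → 1
```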

This technique of
[lazy evaluation](https://en.wikipedia.org/wiki/Lazy_evaluation) is sometimes
referred to as
[call by need](https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_need).
To explore this concept further I'd recommend an exploration of Haskell where
the entire language is lazily evaluated.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[No Code is Eating Software]]></title>
            <link>https://www.merrickchristensen.com/articles/no-code-is-eating-software</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/no-code-is-eating-software</guid>
            <pubDate>Fri, 26 Jun 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Marc Andreessen was right, software is eating the world! Today more people can create software than ever and this trend will only continue as No Code eats software.]]></description>
            <content:encoded><![CDATA[
In 2011, Marc Andreessen famously wrote that
[software is eating the world](https://a16z.com/2011/08/20/why-software-is-eating-the-world/).
His article has proven prescient and this trend shows no signs of slowing.
Unfortunately, the power to take part in the digital economy was limited to the
tiny percentage of people who could create software by hand with code. The No
Code movement is about building bridges to creators and empowering them to
create software without code. Already we see rich visual development tools for
[creating websites & user interfaces](https://webflow.com),
[connecting systems](https://zapier.com) &
[managing data](https://airtable.com). There is going to be an explosion of
specialized tools to distribute tasks that are currently confined to the work of
engineers. The component mental model will be distributed across product
organizations as the source of truth for Design Systems is moved from front-end
developers to designers using tools like [Modulz](https://www.modulz.app/) &
[Plasmic](https://www.plasmic.app/). We’re going to see the data world shaken up
with tools for [data processing](https://luna-lang.org) &
[AI model training](https://teachablemachine.withgoogle.com/). The cost of
creating in-house tools is going to drop with the expanding breadth of
functionality landing in
[cloud providers](https://aws.amazon.com/blogs/aws/introducing-amazon-honeycode-build-web-mobile-apps-without-writing-code/)
& [new visual tools](https://retool.com). It’s only a matter of time before
accessible general purpose visual development unleashes the masses. Yes, Marc
Andreessen was right, software is eating the world! Today more people can create
software than ever and this trend will only continue as No Code eats software.

> Originally authored & published for a [UI.dev](https://ui.dev/) newsletter.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Lessons from Lean Manufacturing]]></title>
            <link>https://www.merrickchristensen.com/articles/lessons-from-lean-manufacturing</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/lessons-from-lean-manufacturing</guid>
            <pubDate>Sat, 25 Jan 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[Continuous improvement is better than delayed perfection.]]></description>
            <content:encoded><![CDATA[
[The Phoenix Project](https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592)
is a narrative about an organization struggling to evolve to compete in the
digital economy. It was written for a general, business-oriented audience to
help a few vital concepts sink in at all layers of an organization.

The book uses the lens of a factory to transfer the lessons of the industrial
revolution &
[lean manufacturing](https://en.wikipedia.org/wiki/Lean_manufacturing) to the
digital era.

## The Three Ways

### Flow Efficiency Over Resource Efficiency

Thinking of your business as a manufacturing line where value flows from
business needs to the customer can help you reduce bottlenecks and misaligned
prioritization between organizations.

Flow efficiency is optimizing for end-to-end value delivery: how fast are you
able to get work from ideation to end-user value? Resource efficiency, on the
other hand, is thinking in terms of how utilized your current resources are. It
seems the human default is to think in terms of resource efficiency. "If I'm
busy & working my hardest and so is everyone else... we must be doing all that
we can to reach our desired result!"

The insight of optimizing for flow efficiency is that it teaches you to think in
terms of the entire system.

Interestingly, this concept is not foreign to engineers. We all know that your
app is only as fast as its slowest bottleneck. Sure, your code may be efficient,
but as long as you're connecting to that old legacy database and running that
expensive query every request, your ship is sunk. And you're not going to make
it faster by ensuring your CPU & memory usage are always peaked (resource
efficiency). In fact, throwing more machines at the problem might ultimately
cause more coordination overhead and flood the constrained resource, making
matters worse.

In the world of software & machines, this concept is intuitive. If you zoom out
a little, to the software makers, you'll find this lens is just as useful in
optimizing how you make the software itself. Optimize for flow efficiency,
end-to-end value delivery.

- Remove constraints in your system
- Reduce work in progress (it creates coordination & context switching overhead)

> Another lens is thinking of your business as a flywheel, if any part of the
> flywheel isn't forwarding the momentum the whole thing comes to a screeching
> halt! This metaphor is useful over the assembly line because it shows that
> outputs of the business feedback in as inputs to the business. Farnam Street
> had a wonderful podcast with
> [Jim Collins, Keeping the Flywheel in Motion](https://fs.blog/jim-collins/) on
> this subject.

### Shorten Feedback Loops

Flow efficiency teaches us to optimize for delivering value from the business to
the user. The second insight is to shorten feedback loops from the user to
business. This sort of thinking was popularized by the
[Lean Startup methodology](https://en.wikipedia.org/wiki/Lean_startup).

Mechanisms for shortening feedback loops are sometimes battle-tested best
practices. At other times they look more like creative & novel experiments. If
we think of our system fractally, zooming in to our constraints & subsystems we
can employ different strategies based on the culprit.

For example, we can drastically shorten feedback loops about our business idea
by deferring the expensive cost of building an entire project and first
conducting market research. Perhaps offering preorders or a marketing site that
gathers emails.

We can shorten the feedback loop of specific user experience decisions by first
creating high fidelity prototypes and testing them before paying expensive
development costs.

Within the product development subsystem itself, we do all sorts of things to
shorten feedback loops. Here are some of my favorites, listed in order of
optimizing feedback loops from the individual developer experience out to the
broader team.

- [Editor integrated automated code formatting](https://prettier.io/) that
  reveals syntax errors I would have otherwise waited for the linter or compiler
  to reveal.
- [Linter](https://eslint.org/) to reveal syntax and other statically analyzable
  errors I would have otherwise waited till runtime to discover.
- [Type System](https://flow.org/) or a
  [Typed Language](https://www.typescriptlang.org/) to reveal statically
  analyzable semantic errors I would have otherwise waited till runtime or
  integration time to discover.
- [Write tests](https://jestjs.io/) that give you immediate feedback as you
  author your code you otherwise would need to exercise by hand.
- [Write end to end tests](https://www.cypress.io/) that exercise the end user's
  experience that you otherwise would need to wait for a quality assurance team
  member, or worse, your end-user to discover.
- [Continuous Integration](https://en.wikipedia.org/wiki/Continuous_integration)
  shortens the feedback loop I would otherwise need to wait to manually deploy,
  integrate with the latest code, update dependency versions, etc.
- [Production Monitoring](https://www.bugsnag.com/) means I get insight into my
  code failing without waiting for users to contact support, and for support to
  find their way to me.

The book advocates use of a
[Kanban board](https://en.wikipedia.org/wiki/Kanban_board) for identifying back
pressure & discovering constraints in your system. If many tasks spend a lot of
time in a specific column, you've got a constraint to address. Generally, I've
found Kanban boards to be very effective if everybody on the team is invested in
its data integrity & onboard with using it as a tool for improving the flow of
work.

In many ways, there is a meta "feedback loop" for improving the improvement of
the system. Individual team members know how to shorten loops at their
workstation better than the team does. Individuals know how to shorten feedback
loops between each other better than the team. The team knows how to improve
their interactions & process better than the department. The department leaders,
better than the CEO. Ad Infinitum. In the same way that we ought to shorten
feedback loops from the end-user to business, perhaps we would benefit to
shorten meta feedback loops about the system itself from individual contributors
back to the business.

### Learning Culture

The last thing the book advocates is creating a culture where constant
experimentation & feedback are encouraged. Intentionally put tension into the
system to reinforce good habits & drive improvement. Continuous, never-ending
improvement, [kaizen](https://en.wikipedia.org/wiki/Kaizen).

- Safe retrospectives
- Intentionally creating chaos to exercise crisis response processes
- [Distributing technical architecture to mitigate the risk of technical decisions](/articles/cost-of-consensus/)

This third way is particularly poignant to me as a tempering force against the
first two ways. Often well-intentioned managers trying to improve flow
efficiency & shorten feedback loops institute measures to better understand
their system. Unfortunately, in the words of
[Goodhart](https://en.wikipedia.org/wiki/Goodhart%27s_law),

> "Any statistical regularity will tend to collapse once pressure is placed upon
> it for control purposes."

Or said far more idiomatically, by Marilyn Strathern,

> "When a measure becomes a target, it ceases to be a good measure."

If people are externally measured & punished, the culture will become an
authoritarian one of hiding mistakes, blame & fear. Their measures will look
great but their results won't. A learning culture is a safe culture.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Art of Abstraction]]></title>
            <link>https://www.merrickchristensen.com/articles/abstraction</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/abstraction</guid>
            <pubDate>Fri, 29 Nov 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[Monorepos & packages seem to be all the rage. However, simply relocating code to a package doesn’t make it more valuable. In fact, it can make it more expensive & introduce unexpected risks! The real value comes from good abstractions.]]></description>
            <content:encoded><![CDATA[
Monorepos & packages seem to be all the rage. However, simply relocating code to
a package doesn’t make it more valuable. In fact, it can make it more expensive
& introduce unexpected risks! The real value comes from **good** abstractions.
Packages are a set of tools to author, encapsulate & distribute such
abstractions. Here are some thoughts for you to consider when designing an
abstraction.

> Fundamentally, computer science is a science of abstraction — creating the
> right model for thinking about a problem and devising the appropriate
> mechanizable techniques to solve it. -
> [Jeffry Ullman, Computer Science: The Mechanization of Abstraction](http://infolab.stanford.edu/~ullman/focs/ch01.pdf)

When creating a package it is important to
[consider the complexity we want the package to encapsulate](https://www.facebook.com/notes/kent-beck/one-bite-at-a-time-partitioning-complexity/1716882961677894/).
One of my favorite mental tools for reasoning about abstraction is the
abstraction graph introduced by Cheng Lou in his talk
["The Spectrum of Abstraction”](https://youtu.be/mVVNJKv9esE?t=304), here is the
TL;DR:

[![A React Motion Abstraction Graph](/assets/images/articles/abstraction/spectrum-of-abstraction.png)](/assets/images/articles/abstraction/spectrum-of-abstraction.png)

A few definitions:

1. Usefulness - Concrete use case for an abstraction.
2. Power - How many downstream use cases an abstraction powers.
3. Indirection Cost - Cognitive load is increased if an abstraction fails to
   encapsulate its complexity. For example, if `react-motion` required us to “go
   up” the chain and implement something with React directly in order to be
   “useful”. Then we paid the cognitive (and code size) cost of `react-motion` &
   the cost of `react` in order to fulfill our use case.

The value of an abstraction should exceed the cost of indirection. This value
comes in the form of abstracting complexity and/or isolating risk.

The interface for an abstraction should strive for the
[minimum API surface](https://2014.jsconf.eu/speakers/sebastian-markbage-minimal-api-surface-area-learning-patterns-instead-of-frameworks.html)
intrinsically required to make it useful.

An abstraction should not
[“leak”](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/)
the complexity of whatever it is responsible for encapsulating. Else we are
doomed to incur the overhead of the abstraction and inherit the cost of the
thing we were trying to abstract in the first place.

_Abstraction is a tool for collective thought_. It results in more cohesive APIs
& tooling across projects because people are thinking & collaborating about
things with a
[similar language](http://www.cs.virginia.edu/~evans/cs655/readings/steele.pdf).

Abstraction is a double edged sword. The right abstraction can be immensely
valuable. The wrong abstraction can be extremely expensive. One is unlikely to
create a useful abstraction over something they don’t understand, and given the
high impact potential of abstraction we should be prudent in our wielding of
this power. A great deal of the risk in an abstraction comes from its conceptual
overhead.

> “Don’t repeat yourself is the introductory step to the real principle; don’t
> repeat concepts.” - Jimmy Koppel

I’d recommend
[Hammock Driven Development](https://www.youtube.com/watch?v=f84n5oFoZBc) as
part of your exercise in identifying abstractions.

As a concrete example of a wonderful abstraction that abides by the above
principles I offer React. React abstracts the complexity of keeping the DOM up
to date efficiently & securely. React’s value also shines in its
[drastic reduction of API surface area](https://2014.jsconf.eu/speakers/sebastian-markbage-minimal-api-surface-area-learning-patterns-instead-of-frameworks.html)
compared to working with the DOM directly: you no longer have to think about
the intricacies of the browser! Because you’ve taken on the constraints of
working with the abstraction, new opportunities are afforded, such as rendering
to a
different platform. React additionally gives us new concrete language like
“Component” & “Element” to collaborate, as well as the ability to define our own
language in terms of this collective language. React is truly a delightful
abstraction for this and many other reasons.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Cost of Consensus]]></title>
            <link>https://www.merrickchristensen.com/articles/cost-of-consensus</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/cost-of-consensus</guid>
            <pubDate>Sat, 24 Aug 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[Alignment is Precious. The cost of alignment increases proportional to the number of agents that need to be aligned.]]></description>
            <content:encoded><![CDATA[
### TL;DR

You have two knobs at your disposal when managing the cost of alignment:

1. How effective your team is at getting aligned.
2. The number of people you are required to align.

Though it is an uncomfortable reality, I don't think any team's alignment
effectiveness is able to overcome the sheer volume of connections as a team
scales. Consequently, I'd like to encourage more thinking & discussion about how
to reduce the number of people required to be aligned in the first place.

When you need to make a decision, limit its scope and prove it out in a safe
way. Don't pay the cost of consensus eagerly until you have to. Weigh the value
of consensus against the probability of reaching it. When you do, a whole bunch
of things that make you itch for uniformity show their true colors as minutiae,
and you find tolerance is a viable strategy. Build consensus for the things that
truly matter and cherish alignment when you have it.

# The Cost of Consensus

Authentic alignment is precious. When a group of people has agreed on a given
course of action, their collaboration will produce something more than just the
sum of the parts. Beyond that, each member of the team will be engaged because
they understand the who, why, what, where and when of each item at hand to
accomplish their shared purpose. Everyone has felt what it is like to be on a
team that is truly aligned & consequently it is no wonder organizations
everywhere pursue this kind of alignment.

## Alignment Is Expensive

The cost of alignment grows with the number of agents that need to be aligned,
and faster than the headcount itself. On a small team, alignment is so cheap
that it is taken for granted. For example, suppose Bob is on a team with Susan.
Susan & Bob have a conversation about their next steps and they are effectively
aligned as a side effect of their planning.

> Hey Susan, I'm thinking we should use React for your next project, what do you
> think? - Bob

> Hey Bob, sounds great! - Susan

Susan & Bob align, effectively for free. But throw in just one more person,
Scuba Steve.

> Hey all! I'm thinking we should use React for this next project, what do you
> think? - Bob

> I was talking to Steve and he wanted to use Vue. - Susan

Steve waits a few days because he is on his scuba trip & he is kind of nervous
to confront Bob about his desire to use Vue.

> Yeah, I'd rather try out Vue, you ok with that? - Scuba for Life, Steve

> No, I'm not OK with that. - Bob

> Sounds like we need to meet. - Susan

This makes sense: each person needs to be aligned with every other person, and
consequently the cost of alignment grows roughly as:

`connections = (number of people * (number of people - 1) / 2)`

`cost of alignment = connections * alignment effectiveness`

This is the number of connections in a group multiplied by a score of how
effective that group is at achieving alignment.
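That back-of-the-napkin model can be sketched in a couple of functions. As with
the formulas above, treating effectiveness as a rough per-connection cost
factor is an assumption of this sketch, not a precise measurement.

```jsx
// The number of pairwise connections in a group of n people.
const connections = (people) => (people * (people - 1)) / 2;

// A rough cost model: every connection must be managed, scaled
// by a per-connection factor for how costly alignment is.
const costOfAlignment = (people, factor) => connections(people) * factor;

connections(2); // → 1   Bob & Susan
connections(3); // → 3   add Scuba Steve
connections(10); // → 45
```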

What exactly is the cost? The cost is a rough proxy for the amount of time spent
doing "meta work" managing the connections before doing the "work" itself. The
cost is the time spent creating presentations to convince everyone involved, the
energy spent addressing questions, adapting to feedback, compromising &
convincing until everyone is aligned. It is the time that you aren't getting
user feedback because you're debating in the realm of imagination. The "cost of
alignment" refers to the very real financial & opportunity cost that causes
great people to look elsewhere. It makes your business ripe for disruption. Do
you find yourself in a growing business and feeling like you get _less_ done
even though you have _more_ people? Or shocked at how rapidly your
side-projects move along while your team is still trying to figure out which
framework to use?
_That_ is the cost of alignment.

<LineChart initNumberOfPeople={10} initEffectiveness={0.5} />

## What Is Alignment Effectiveness

Alignment Effectiveness is simply how effective your team is at getting aligned.
Are people radically candid with each other? How clear is the goal of the team
in the first place? Are people safe & comfortable with healthy disagreement? How
willing are people to disagree and commit? Are the incentives of the
organization such that playing as a team is more rewarded than winning as a hero
all-star contributor?

What about communication mechanisms? Are conversations had in the open so anyone
can follow up and see why a decision was made? Are meetings recorded and run
effectively in their own right?

Certainly, improving your team's ability to reach consensus is one angle of
driving down the cost of consensus.

<div
  style={{
    display: "grid",
    gridTemplateColumns: "1fr 1fr",
    gridTemplateRows: "1fr auto",
    justifyItems: "center",
    padding: "16px 0",
  }}
>
  <LineChart initNumberOfPeople={10} initEffectiveness={0.9} />
  <LineChart initNumberOfPeople={10} initEffectiveness={0.2} />
  <div>Effective Alignment</div>
  <div>Poor Alignment</div>
</div>

In the above examples, the team with Effective Alignment can scale to a
reasonable number of people and keep the Cost of Alignment at bay. However, the
team with poor alignment immediately struggles as more people are added! Who
knows, maybe their goals aren't clear? Maybe they need to invest more time in
nurturing their connections? Maybe they hired a diva who refuses to collaborate?
Regardless, I just wish I could hug each one of them because that is a miserable
place to be.

Worse yet, as each team tries to scale, the growing number of people creates an
insurmountable cost, even with the Alignment Effectiveness of the former charts
held constant:

<div
  style={{
    display: "grid",
    gridTemplateColumns: "1fr 1fr",
    gridTemplateRows: "1fr auto",
    justifyItems: "center",
    padding: "16px 0",
  }}
>
  <LineChart initNumberOfPeople={40} initEffectiveness={0.9} />
  <LineChart initNumberOfPeople={40} initEffectiveness={0.2} />
  <div>Effective Alignment</div>
  <div>Poor Alignment</div>
</div>

Oh no! Even the team with great alignment skills has to pay a high cost to keep
their team aligned as they scale. No matter how effective each individual is at
reaching alignment there is a growing cost in the limitations of human
communication & the number of people attempting to communicate.

Eventually, the number of people passes our biological limit for coordinating
effectively as a cohesive group (150 is the commonly used value), and subgroups
are forced to emerge. The number at which this occurs is referred to as
[Dunbar's number](https://en.wikipedia.org/wiki/Dunbar%27s_number). Dunbar
theorized:

> "this limit is a direct function of relative neocortex size, and that this in
> turn limits group size [...] the limit imposed by neocortical processing
> capacity is simply on the number of individuals with whom a stable
> inter-personal relationship can be maintained"

So, what is a growing company supposed to do? Just stop growing?! No, of course
not. I shudder to think of the wonderful human accomplishments that never would
have occurred if each organization decided to stop growing at this point. I
think it is natural to reach for processes & tools to improve Alignment
Effectiveness. I consider Agile, Kanban, Scrum & other planning methodologies as
tools people use for trying to improve this very measure.

Unfortunately, these tools can come at the cost of autonomy and mastery and
yield a shallow sense of alignment where many people disengage. Furthermore, the
overhead of conforming to the process is at risk of exceeding the cost of
alignment. Often the overhead is simply additional cost (process for process's
sake) and doesn't improve alignment at all. I think encouraging teams to use
these tools should they fit and creating an "interface" for reporting to the
organization is a reasonable middle ground.

Some of my favorite tools for improving alignment:

- RFC Processes
- Communication Interfaces like Changelogs, Slack Channel Types &
  [Status Updates](https://www.fictiv.com/blog/posts/using-the-on-track-off-track-framework-to-drive-results)

I'd like to propose some additional ways of thinking about this problem beyond
improving a team's Alignment Effectiveness.

# Small Teams

Given that we are _not_ going to be talking about improving the Alignment
Effectiveness variable, that leaves us with one additional variable to work
with: `number of people`. What tools do we have at our disposal to keep the cost
of consensus down by reducing the number of people that need to be aligned?

The answer here is obvious in the ideal, but extremely difficult in practice. Have
small teams! Jeff Bezos famously referred to this as the
["two pizza rule"](http://blog.idonethis.com/two-pizza-team/),

> Bezos believes that no matter how large your company gets, individual teams
> shouldn’t be larger than what two pizzas can feed.

When I read this, I knew I could never work at Amazon because I eat an entire
pizza by myself so I'd languish in a life of perpetual isolation.

<div
  style={{
    display: "grid",
    gridTemplateColumns: "1fr 1fr",
    gridTemplateRows: "1fr auto",
    justifyItems: "center",
    padding: "16px 0",
  }}
>
  <LineChart initNumberOfPeople={8} initEffectiveness={0.9} />
  <LineChart initNumberOfPeople={8} initEffectiveness={0.2} />
  <div>Effective Alignment</div>
  <div>Poor Alignment</div>
</div>

By limiting team size to 8 people, the cost of alignment is much more
manageable. This sounds great, but of course restructuring your organization
into small teams doesn't mean that those teams suddenly don't need to have
connections between them! The technical architecture has to facilitate this kind
of strategy.

There is an adage referred to as Conway's law that states:

> organizations which design systems ... are constrained to produce designs
> which are copies of the communication structures of these organizations.

If this is true, if systems are merely a reflection of the communication
structures that create them, is the relationship bidirectional? Meaning, can a
system backpropagate & change the communication structures of the organization
that creates it? I think so! Let's explore the tip of the iceberg of a few ideas
to get your gears turning on how to scale your organization with technology.

## Event Sourcing

An event-sourced architecture allows subsystems to communicate over a
sequential, replayable log of events. Subsystems communicate their `writes` by
putting an event in the log. All systems derive and manage their internal state
based on their processing of the log. With an architecture like this, each team
controls its infrastructure, databases, caches, interfaces & deployments.

Subsystems should not ask each other for state; each should derive the view of
state it needs for its app from the log, creating what is effectively a "copy"
of the state. While this costs you in terms of (eventual) consistency, it buys
you resilience and scalability.
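As a minimal, hypothetical sketch of what "deriving a view from the log" might
look like (the event names & shapes here are invented for illustration):

```javascript
// A shared, sequential event log. Sub-systems append to it via their
// writes; every sub-system can replay it from the beginning.
const eventLog = [
  { type: "USER_REGISTERED", id: 1, name: "Susan" },
  { type: "USER_REGISTERED", id: 2, name: "Bob" },
  { type: "USER_RENAMED", id: 2, name: "Scuba Steve" },
];

// One team's sub-system folds the log into only the view it needs.
// The result is a disposable "copy" that can be rebuilt at any time.
const deriveUsers = (log) =>
  log.reduce((users, event) => {
    switch (event.type) {
      case "USER_REGISTERED":
      case "USER_RENAMED":
        return { ...users, [event.id]: event.name };
      default:
        return users;
    }
  }, {});

deriveUsers(eventLog); // => { 1: "Susan", 2: "Scuba Steve" }
```

If this sub-system has an outage, it self-heals by catching up with the log;
no other system had to answer its questions in the meantime.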

There is then a "coordinator" that stitches all of the sub-systems together in a
loosely coupled way, on the backend that looks like routing/load balancing:

```bash
my-fancy-app.com/some-sub-app => <service-owned-by-some-sub-app-team>
my-fancy-app.com/another-sub-app => <service-owned-by-another-sub-app-team>
```

And on the front-end, there might be a lightweight coordinator like this:

```javascript
Router([
  {
    path: "/some-app",
    init: (el, eventLog, dispatch) =>
      import("https://my-fancy-app.com/some-sub-app").then((SomeApp) =>
        SomeApp.init(el, eventLog, dispatch)
      ),
  },
  {
    path: "/another-app",
    init: (el, eventLog, dispatch) =>
      import("https://my-fancy-app.com/another-sub-app").then((AnotherApp) =>
        AnotherApp.init(el, eventLog, dispatch)
      ),
  },
]).run();
```

In this hypothetical example, each sub-app is responsible for communicating
writes via `dispatch` and deriving state from `eventLog`. Each is also given an
`el` to render into, and each needs to return a function that can be used to
"tear it down". Both apps have a clearly defined interface; how they are
implemented is owned by `some-sub-app-team` & `another-sub-app-team`. Those
teams each interview customers, deploy updates & make technical decisions
autonomously within their "domain".
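A hypothetical sub-app honoring that interface might look like the following
sketch (all names are illustrative, not a prescribed API):

```javascript
const SomeApp = {
  // The coordinator supplies an element to render into, the shared
  // event log to derive state from & a dispatch function for writes.
  init(el, eventLog, dispatch) {
    const render = () => {
      el.textContent = `Events so far: ${eventLog.length}`;
    };

    // Writes go onto the log via dispatch, never directly to
    // another sub-system.
    const onClick = () => dispatch({ type: "SOME_APP_CLICKED" });
    el.addEventListener("click", onClick);
    render();

    // Return a teardown function the coordinator calls when
    // routing away from this sub-app.
    return () => el.removeEventListener("click", onClick);
  },
};
```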

The "coordinator" implements the "shell" of the UI, such as the navigation. To
avoid too much redundant work, sub-systems can publish embedded interfaces for
each other to use. The team that owns the "coordinator" can also provide a
`Design System` for sub-teams to use; unfortunately though, these design
systems create a wide "link" for consensus cost.

This is a completely didactic & hypothetical interface. What an `App` looks like
in your system needs to be clearly defined for your team. And of course, whether
an architecture like this is a good idea depends a lot on the business domain you
are serving.

Here are some great resources for learning more about a loosely coupled event
based architecture:

- [Rich Hickey: Deconstructing the Database](https://www.youtube.com/watch?v=Cym4TZwTCNU)
- [The Log: What every software engineer should know about real-time data's unifying abstraction](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying)
- [Using logs to build a solid data infrastructure (or: why dual writes are a bad idea)](https://www.confluent.io/blog/using-logs-to-build-a-solid-data-infrastructure-or-why-dual-writes-are-a-bad-idea/)
- [Domain Driven Design](https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215)

### Benefits

- Teams are _very_ encapsulated & therefore _very_ autonomous.
- The system has some great resiliency because subsystems can keep operating
  when other systems have an outage. Systems can self-heal by catching up with
  the log. Since there are copies of the state stored across various systems
  your data has some great redundancy.

### Trade-offs

- Teams are so encapsulated that there is a great deal of redundant work. I
  think tools like [gRPC](https://grpc.io/docs/reference/) help drive down this
  cost but again, the more you share, the more links you introduce back into the
  system, the higher the cost of consensus rises.
- A large investment in infrastructure is required, however, tools like
  Kubernetes & Kafka are improving the landscape.
- There is a performance cost, which I think is most notable on clients, where
  sub-systems share a runtime environment. Duplicating state on the client is
  just wasteful, and shipping multiple runtime frameworks ultimately makes your
  user pay for reducing the cost of consensus. This cost can be mitigated by
  aligning your teams and user personas.
- Analytics are harder to get right & cross compare, you'll likely need to
  create an "interface" for teams on this front.
- When you create good boundaries to reduce the risk of allowing teams to
  innovate autonomously, you have implicitly put boundaries on the positive
  scope of the impact they'll be able to make.

  I don't know of a way to mitigate negative risk exclusively without also
  compressing your upper end. If Julie can only contribute to this sub app, she
  can only impact that sub app, regardless of the direction of that impact.

I have so many architectural ideas to explore here. Especially as technology
like HTTP2, Web Workers, WASM, Portals & SharedArrayBuffers become more widely
available.

> A humble bow of admiration to the people at Pluralsight, it was there that I
> encountered many of these ideas in practice.

## Better Tools

### Technology Agnostic Design Systems

I hope that tooling will allow design systems to be created in a
framework-agnostic way and then "compile" to whatever technology the sub-system
is using. This preserves the sub-system team's autonomy in their technical
choices without the overhead of maintaining their own implementation of the
design system. It also reduces the design team's friction in making changes
across such a distributed & encapsulated system. Heck, Designers should be able
to push new versions of presentational components directly to the package
registry and sub systems should be able to import & use the latest version in
the technology of their choice.

### Better Client Build Tools

I hope that WebAssembly continues to thrive. That
[dynamic linking](https://webassembly.org/docs/dynamic-linking/) &
[garbage collection integration](https://github.com/WebAssembly/gc) make it a
viable compilation target for polyglot teams.

I hope HTTP2 adoption continues and that our build tools can take advantage of
this reality, requiring less compile time awareness of various sub-systems to
create performant builds.

I hope new standards like [Portals](https://web.dev/hands-on-portals) &
[Realms](https://github.com/tc39/proposal-realms) will make truly encapsulated
sub systems on the client a viable reality without paying iFrame overhead.

### Polyglot Typed RPC Systems

I hope that tools like gRPC will reduce the cost of communicating across
encapsulated sub systems by preserving type information across languages,
reducing performance overhead and having consistent interoperation when
communicating across network boundaries.

### Containers

Containers have created a wonderful encapsulation model and I hope tools like
Kubernetes will continue to thrive.

## Other Ideas

Here are a few high-level ideas that can mechanistically reduce the cost of
consensus and allow you to scale your organization as a set of small teams.

- Mono Repos, Modules & Interfaces
- Actor architectures
- Abstraction as a common language
- The Levels of Process, from implicit people-oriented processes to automated
  processes
- Relying on automation to reflect team boundaries, for example, disallowing
  importing from particular directories without going through a particular
  interface. Tools like Prettier leverage automation to drive down the cost of
  consensus.
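For example, the import-boundary idea above could be sketched with ESLint's
`no-restricted-imports` rule. The directory layout here is hypothetical; the
point is that a machine, not a meeting, enforces the team boundary:

```javascript
// .eslintrc.js (hypothetical layout): fail linting when code reaches
// into another team's sub-app instead of its public interface.
const boundaryRules = {
  rules: {
    "no-restricted-imports": [
      "error",
      {
        patterns: [
          {
            // Block deep imports into any sub-app's internals.
            group: ["sub-apps/*/internal/*"],
            message:
              "Import the sub-app's public interface instead of its internals.",
          },
        ],
      },
    ],
  },
};

module.exports = boundaryRules;
```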

Lastly, if you'd like to play with different values for the visualization used
throughout this article, here you go:

<LineChart initNumberOfPeople={8} initEffectiveness={0.2} controls />

#### A Note On Top Down "Alignment"

Alignment is so valuable that it is tempting for leaders to try and force
alignment using top-down mandates. Given the cost of consensus is so high, I can
understand this impulse but the cost of imposed alignment is much higher. People
disengage, they can't bring their best self to work, they don't surface
important feedback, they look out for themselves. This course of action creates
a specter of alignment at best and it looks so little like authentic alignment
that I don't think it belongs in this article at all.

I'd rather fight the good fight for authentic alignment than fall for any
artificial versions of it.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Headless User Interface Components]]></title>
            <link>https://www.merrickchristensen.com/articles/headless-user-interface-components</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/headless-user-interface-components</guid>
            <pubDate>Fri, 22 Jun 2018 00:00:00 GMT</pubDate>
            <description><![CDATA[A headless component is a component that offers maximum visual flexibility by providing no interface.]]></description>
            <content:encoded><![CDATA[
A headless user interface component is a component that offers maximum visual
flexibility by providing no interface. "Wait a second, are you advocating a
user interface pattern that doesn't have a user interface?"

Yes. That is exactly what I'm advocating.

# Coin Flip Component

Suppose you had a requirement to implement a coin flip feature that performs
some logic when rendered to emulate a coin flip! 50% of the time the
component should render "Heads" and 50% of the time it should render "Tails".
You say to your product manager, "Oof that will take years of research!", and
you get to work.

```jsx
const CoinFlip = () =>
  Math.random() < 0.5 ? <div>Heads</div> : <div>Tails</div>;
```

Turns out emulating coin flips is way easier than you thought so you proudly
share the results. You get a response, "This is great! Could you please update
it to show these cool coin images?". No problem!

```jsx
const CoinFlip = () =>
  Math.random() < 0.5 ? (
    <div>
      <img src="/heads.svg" alt="Heads" />
    </div>
  ) : (
    <div>
      <img src="/tails.svg" alt="Tails" />
    </div>
  );
```

Soon, they'd like to use your `<CoinFlip />` component in the marketing material
to show people how cool your new feature is. "We'd like to put it in the blog post,
but we need the labels "Heads" & "Tails" back, for SEO and stuff." Oh man, I
guess we'll add a flag for the marketing site?

```jsx
const CoinFlip = (
  // We'll default to false to avoid breaking the applications
  // current usage.
  { showLabels = false }
) =>
  Math.random() < 0.5 ? (
    <div>
      <img src="/heads.svg" alt="Heads" />

      {/* Add these labels for the marketing site. */}
      {showLabels && <span>Heads</span>}
    </div>
  ) : (
    <div>
      <img src="/tails.svg" alt="Tails" />

      {/* Add these labels for the marketing site. */}
      {showLabels && <span>Tails</span>}
    </div>
  );
```

Later, a requirement emerges. "We were wondering if you could add a button to
`<CoinFlip />`, but only in the application, to rerun the odds?". Things are
starting to get ugly, I can't even look Kent C. Dodds in the eyes anymore:

```jsx
const flip = () => ({
  flipResults: Math.random(),
});

class CoinFlip extends React.Component {
  static defaultProps = {
    showLabels: false,
    // We don't repurpose `showLabels`, we aren't animals, after all.
    showButton: false,
  };

  state = flip();

  handleClick = () => {
    this.setState(flip);
  };

  render() {
    return (
      // Use fragments so people take me seriously.
      <>
        {this.props.showButton && (
          <button onClick={this.handleClick}>Reflip</button>
        )}
        {this.state.flipResults < 0.5 ? (
          <div>
            <img src="/heads.svg" alt="Heads" />
            {this.props.showLabels && <span>Heads</span>}
          </div>
        ) : (
          <div>
            <img src="/tails.svg" alt="Tails" />
            {this.props.showLabels && <span>Tails</span>}
          </div>
        )}
      </>
    );
  }
}
```

Soon a co-worker reaches out to you. "Hey, your `<CoinFlip />` feature is rad!
We just got assigned the new `<DiceRoll />` feature and we'd like to reuse your
code!". The new dice feature:

1.  Wants to "re-run" the odds `onClick`.
2.  Wants to be displayed in the application and marketing site as well.
3.  Has a totally different interface.
4.  Has different odds.

You now have two options, replying "Sorry, not much to share here." or adding
`DiceRoll` complexity into `CoinFlip` as you watch the bones of your component
break under the weight of its responsibility. (Is there a market for brooding
programmer poets? I'd love to pursue that craft.)

# Enter Headless Components

Headless user interface components separate the logic & behavior of a component
from its visual representation. This pattern works great when the logic of a
component is sufficiently complex and decoupled from its visual representation.
A headless implementation of `<CoinFlip/>` as a
[function as child component](/articles/function-as-child-components/) or render
prop would look like so:

```jsx
const flip = () => ({
  flipResults: Math.random(),
});

class CoinFlip extends React.Component {
  state = flip();

  handleClick = () => {
    this.setState(flip);
  };

  render() {
    return this.props.children({
      rerun: this.handleClick,
      isHeads: this.state.flipResults < 0.5,
    });
  }
}
```

This component is headless because it doesn't render anything; it expects its
various consumers to do the presentation work while it tackles the logic
lifting. So the application code would look like so:

```jsx
<CoinFlip>
  {({ rerun, isHeads }) => (
    <>
      <button onClick={rerun}>Reflip</button>
      {isHeads ? (
        <div>
          <img src="/heads.svg" alt="Heads" />
        </div>
      ) : (
        <div>
          <img src="/tails.svg" alt="Tails" />
        </div>
      )}
    </>
  )}
</CoinFlip>
```

The marketing website code:

```jsx
<CoinFlip>
  {({ isHeads }) => (
    <>
      {isHeads ? (
        <div>
          <img src="/heads.svg" alt="Heads" />
          <span>Heads</span>
        </div>
      ) : (
        <div>
          <img src="/tails.svg" alt="Tails" />
          <span>Tails</span>
        </div>
      )}
    </>
  )}
</CoinFlip>
```

Isn't this great! We've completely untangled the logic from the presentation!
This gives us so much visual flexibility! I know what you're thinking...

> You mindless sack of idiot! Isn't that just a render prop?!

This headless component happens to be implemented as a render prop, yes! It
could just as well be implemented as a higher order component. _Looks over my
shoulder, in a hushed low tone._ It could have even been implemented as a `View`
and a `Controller`. Or a `ViewModel` and a `View`. The point here is about
separating the "mechanism" of flipping coins and the "interface" to that
mechanism.

## What about `<DiceRoll />`?

The neat thing about this separation is how easy it is to generalize our
headless component to support our co-worker's new `<DiceRoll />` feature. Hold my
Diet Coke™:

```jsx
const run = () => ({
  random: Math.random(),
});

class Probability extends React.Component {
  state = run();

  handleClick = () => {
    this.setState(run);
  };

  render() {
    return this.props.children({
      rerun: this.handleClick,

      // By taking in a threshold property we can support
      // different odds!
      result: this.state.random < this.props.threshold,
    });
  }
}
```

With this headless component we can swap out the implementation of
`<CoinFlip />` without any changes to its consumers:

```jsx
const CoinFlip = ({ children }) => (
  <Probability threshold={0.5}>
    {({ rerun, result }) =>
      children({
        isHeads: result,
        rerun,
      })
    }
  </Probability>
);
```

Now our co-worker can share the mechanism of our `<Probability />` emulator!

```jsx
const DiceRoll = ({ children }) => (
  // Six Sided Dice
  <Probability threshold={1 / 6}>
    {({ rerun, result }) => (
      <div>
        {/* She was able to use a different event! */}
        <span onMouseOver={rerun}>Roll the dice!</span>
        {/* Totally different interface! */}
        {result ? (
          <div>Big winner!</div>
        ) : (
          <div>You win some, you lose most.</div>
        )}
      </div>
    )}
  </Probability>
);
```

Pretty neat, eh?

# Rule of Separation - Unix Philosophy

This is one expression of a general underlying principle, one that has been
around for a very long time! Rule 4 of the "Basics of Unix Philosophy" is:

> Rule of Separation: Separate policy from mechanism; separate interfaces from
> engines. - Eric S. Raymond

I'd like to extract a portion of that book and replace the word "policy" with
"interface".

> _Interface_ and mechanism tend to mutate on different timescales, with
> _interfaces_ changing much faster than mechanism. Fashions in the look and
> feel of GUI toolkits may come and go, but raster operations and compositing
> are forever.

> Thus, hardwiring _interfaces_ and mechanisms together has two bad effects: It
> makes _interfaces_ rigid and harder to change in response to user
> requirements, and it means that trying to change _interfaces_ has a strong
> tendency to destabilize the mechanism.

> On the other hand, by separating the two we make it possible to experiment
> with new _interfaces_ without breaking mechanism. We also make it much easier
> to write good tests for the mechanism (_interfaces_, because _they_ age so
> quickly, often do not justify the investment).

I love the great insights here! This also gives us some insight as to when it is
useful to use the headless component pattern.

1.  How long will this component live for? Is it worth deliberately preserving
    the mechanism aside from the interface? Perhaps to use this mechanism in
    another project with a different look and feel?
2.  How frequently is our interface bound to change? Will the same mechanism
    have multiple interfaces?

There is an indirection cost paid when you separate "mechanism" and "policy".
You need to be sure that the benefits of separation merit the expense of
indirection. I think this is largely where a lot of the MV\* patterns of the
past went wrong: they started with the axiom that _everything_ should be
separated this way, when in reality, mechanism and policy are often deeply
coupled or the benefits of separation don't outweigh its cost.

# Open Source Headless Components & Non-Trivial References

For a truly exemplary non-trivial headless component, check out a project by my
friend [Kent C. Dodds](https://kentcdodds.com/) over at Paypal called
[downshift](https://github.com/paypal/downshift). In fact, it is downshift that
ultimately inspired this post. Without providing any user interface, downshift
offers sophisticated autocomplete/dropdown/select experiences that are
accessible. Take a look at all the ways it can be used
[here](http://downshift.netlify.com/?selectedKind=Examples&selectedStory=basic&full=0&addons=1&stories=1&panelRight=0).

I sincerely hope that more projects like downshift emerge over time. I can't
count how many times I've wanted to use a particular open source UI component
but couldn't because it wasn't "themeable" or "skinnable" in the way that met
design requirements. Headless components circumvent this problem entirely with a
"bring your own interface" requirement.

In a world where design systems and user interface libraries are headless, your
interfaces can have a high-end custom feel _and_ the durability & accessibility
of a great open source library. You spend your time implementing the only part
that you needed to, the part that is truly unique, the look and feel specific to
your application.

I could go on about the benefits from internationalization to E2E test
integration but I'd recommend you try it out for yourself.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Stateful Semantic Diffing]]></title>
            <link>https://www.merrickchristensen.com/articles/stateful-semantic-diffing</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/stateful-semantic-diffing</guid>
            <pubDate>Fri, 23 Dec 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[A journey trying to build an intelligent code assistant.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for offering little to no value from the
moment it was written.

I'm trying to infer programmer intention by observing semantic changes as they
edit files. My first attempt was to parse the syntax trees to determine the
changes made by the programmer. I first used a generic diff implementation but
quickly realized I would need something more semantically aware in order to infer
any serious meaning about the changes intended by the programmer. I started
reading about change detection algorithms which meant looking up a lot of
mathematical symbols I'm not accustomed to.

As seems to always be the case with "great ideas that nobody has done" I've run
into a lot of unforeseen issues. For example:

```javascript
const name = "Merrick";
console.log(name);
```

Say a programmer changes this variable name:

```javascript
const me = "Merrick";
console.log(name);
```

The code assistant should note the change as it occurs, character by character.
As we diff the two trees we might see the following events:

The cursor jumps down to the end of name, and hits backspace:

```javascript
{ node_type: 'Identifier', type: 'change', name: 'nam' }
```

And another:

```javascript
{ node_type: 'Identifier', type: 'change', name: 'na' }
```

And another:

```javascript
{ node_type: 'Identifier', type: 'change', name: 'n' }
```

And one more:

```javascript
{ node_type: 'Identifier', type: 'change', name: '' }
```

But wait, we can't have an empty identifier, that won't parse... So, we need to
wait until we are parseable again. The developer types "m":

```javascript
{ node_type: 'Identifier', type: 'change', name: 'm' }
```

And one last event:

```javascript
{ node_type: 'Identifier', type: 'change', name: 'me' }
```

Ok, now the code assistant should suggest that you update the use of `name`
found in `console.log`, but this poses a really challenging issue: connecting
"me" to "name". The variable was "name" a long time ago, so how do we know to
suggest renaming `name` to `me` at this point? Do we need to persist the scope
some place, so that we can adjust references as we receive changes? Then `name`
references are updated to `nam`, `na`, `n`, (parse failure), `m`, `me`. And
after every `Identifier` change event we suggest updating? How do we know
"name" is the good state for reference? How do we avoid pointing variables in
between "name" and "me", meaning if there were also a variable "nam", how do we
avoid accidentally pointing `nam` to `me`? I suppose by checking whether there
is a corresponding `VariableDeclarator` for `nam` I could avoid destructive
suggestions.
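One naive way to coalesce that burst of events into a single suggestion (a
sketch only; it sidesteps the scope-tracking questions above, and every name
here is hypothetical) is to remember the identifier's name before the edit
burst began and treat the latest parseable value as the rename target:

```javascript
// Fold a stream of character-by-character Identifier change events
// into one rename suggestion: from the pre-burst name to the latest
// parseable name.
const coalesceRename = (initialName, events) => {
  const latest = events
    .filter((e) => e.node_type === "Identifier" && e.type === "change")
    .reduce((_, e) => e.name, initialName);

  return latest === initialName ? null : { from: initialName, to: latest };
};

coalesceRename("name", [
  { node_type: "Identifier", type: "change", name: "nam" },
  { node_type: "Identifier", type: "change", name: "na" },
  { node_type: "Identifier", type: "change", name: "n" },
  { node_type: "Identifier", type: "change", name: "m" },
  { node_type: "Identifier", type: "change", name: "me" },
]); // => { from: "name", to: "me" }
```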

Conclusion for the day: My mind is tired. I anticipated this would be extremely
difficult but in my initial excitement I definitely believed it would be easier
than this. AI would be a long term goal; I'm just trying to solve the problem of
determining programmer intention using stateful change observation at this
point. I've been battling a lot of shame and self-confidence issues as I've
faced friction. It's hard not to feel stupid, or that I should have gone to
school. I can't help but feel inadequate as it takes me 2 hours to read 6 pages
to try and comprehend them while looking up mathematical symbols on Wikipedia.

## Resources Used

- [List of Mathematical Symbols](https://en.wikipedia.org/wiki/List_of_mathematical_symbols)
- [Change Detection in Hierarchically Structured Information](http://ilpubs.stanford.edu:8090/115/1/1995-46.pdf)
- [Fine-grained and Accurate Source Code Differencing](https://hal.archives-ouvertes.fr/hal-01054552/document)
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Function as Child Components]]></title>
            <link>https://www.merrickchristensen.com/articles/function-as-child-components</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/function-as-child-components</guid>
            <pubDate>Sat, 30 Jul 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[Function as Child Components, a useful technique for higher order components.]]></description>
            <content:encoded><![CDATA[
I recently ran a Twitter poll regarding Higher Order Components and Function as
Child Components; the
[results](https://twitter.com/iammerrick/status/735254529474629633) were
surprising to me.

If you don’t know what the “Function as Child” pattern is, this article is my
attempt to:

1.  Teach you what it is.
2.  Convince you of why it is useful.
3.  Get some fetching hearts, or retweets or likes or newsletters or something,
    I don’t know. I just want to feel appreciated, you know?

## What are Function as Child Components?

“Function as Child Component”s are components that receive a function as their
child. The pattern is simply implemented and enforced thanks to React’s property
types.

```jsx
class MyComponent extends React.Component {
  render() {
    return <div>{this.props.children("Scuba Steve")}</div>;
  }
}

MyComponent.propTypes = {
  children: React.PropTypes.func.isRequired,
};
```

That is it! By using a Function as Child Component we decouple our parent
component and our child component, letting the composer decide what & how to
apply parameters to the child component. For example:

```jsx
<MyComponent>{(name) => <div>{name}</div>}</MyComponent>
```

And somebody else, using the same component could decide to apply the name
differently, perhaps to an attribute:

```jsx
<MyComponent>
  {(name) => (
    <img src="/scuba-steves-picture.jpg" alt={name} />
  )}
</MyComponent>
```

What is really neat here is that MyComponent, the Function as Child Component,
can manage state on behalf of the components it is composed with, without
making demands on how that state is leveraged by its children. Let's move on to
a more realistic example.

### The Ratio Component

The Ratio Component will use the current device width, listen for resize events
and call into its children with a width, height, and some information about
whether or not it has computed the size yet.

First, we start out with a Function as Child Component snippet. This is common
across all Function as Child Components, and it just lets consumers know we are
expecting a function as our child, not React nodes.

```jsx
class Ratio extends React.Component {
  render() {
    return this.props.children();
  }
}

Ratio.propTypes = {
  children: React.PropTypes.func.isRequired,
};
```

Next, let's design our API. We want a ratio provided in terms of the X and Y
axes, which we will combine with the current width to compute a height. Let's
set up some internal state to manage the width and height and whether or not we
have even calculated them yet, along with some propTypes and defaultProps to be
good citizens for people using our component.

```jsx
class Ratio extends React.Component {

  constructor() {
    super(...arguments);
    this.state = {
      hasComputed: false,
      width: 0,
      height: 0,
    };
  }

  render() {
    return this.props.children();
  }
}

Ratio.propTypes = {
  x: React.PropTypes.number.isRequired,
  y: React.PropTypes.number.isRequired,
  children: React.PropTypes.func.isRequired,
};

Ratio.defaultProps = {
  x: 3,
  y: 4
};
```

Alright, so we aren’t doing anything interesting yet. Let’s add some event
listeners and actually calculate the width (also accommodating for when our
ratio changes):

```jsx
class Ratio extends React.Component {
  constructor() {
    super(...arguments);
    this.handleResize = this.handleResize.bind(this);
    this.state = {
      hasComputed: false,
      width: 0,
      height: 0,
    };
  }

  getComputedDimensions({ x, y }) {
    const { width } = this.container.getBoundingClientRect();
    return {
      width,
      height: width * (y / x),
    };
  }

  componentWillReceiveProps(next) {
    this.setState(this.getComputedDimensions(next));
  }

  componentDidMount() {
    this.setState({
      ...this.getComputedDimensions(this.props),
      hasComputed: true,
    });
    window.addEventListener("resize", this.handleResize, false);
  }

  componentWillUnmount() {
    window.removeEventListener("resize", this.handleResize, false);
  }

  handleResize() {
    this.setState(
      {
        hasComputed: false,
      },
      () => {
        this.setState({
          hasComputed: true,
          ...this.getComputedDimensions(this.props),
        });
      }
    );
  }

  render() {
    return (
      <div ref={(ref) => (this.container = ref)}>
        {this.props.children(
          this.state.width,
          this.state.height,
          this.state.hasComputed
        )}
      </div>
    );
  }
}

Ratio.propTypes = {
  x: React.PropTypes.number.isRequired,
  y: React.PropTypes.number.isRequired,
  children: React.PropTypes.func.isRequired,
};

Ratio.defaultProps = {
  x: 3,
  y: 4,
};
```

Alright, so I did a lot there. We added some event listeners for resize events
and actually computed the width and height using the provided ratio. Neat, so
we’ve got a width and height in our internal state; how can we share it with
other components?

This is one of those things that is hard to understand because it is so simple
that when you see it you think, “That can’t be all there is to it.” but this
_is_ all there is to it.

#### Children is literally just a JavaScript function.

That means in order to pass the calculated width and height down we just provide
them as parameters:

```jsx
render() {
    return (
      <div ref={(ref) => (this.container = ref)}>
        {this.props.children(this.state.width, this.state.height, this.state.hasComputed)}
      </div>
    );
}
```

Now anyone can use the ratio component to provide a full width and properly
computed height in whatever way they would like! For example, someone could use
the Ratio component for setting the ratio on an img:

```jsx
<Ratio>
  {(width, height, hasComputed) =>
    hasComputed ? (
      <img src="/scuba-steve-image.png" width={width} height={height} />
    ) : null
  }
</Ratio>
```

Meanwhile, in another file, someone has decided to use it for setting CSS
properties.

```jsx
<Ratio>
  {(width, height, hasComputed) => (
    <div style={{ width, height }}>Hello world!</div>
  )}
</Ratio>
```

And in another app, someone is using it to conditionally render different
children based on the computed height:

```jsx
<Ratio>
  {(width, height, hasComputed) =>
    hasComputed && height > TOO_TALL ? <TallThing /> : <NotSoTallThing />
  }
</Ratio>
```

### Strengths

1.  The developer composing the components owns how these properties are passed
    around and used.
2.  The author of the Function as Child Component doesn’t enforce how its values
    are leveraged, allowing for very flexible use.
3.  Consumers don’t need to create another component to decide how to apply
    properties passed in from a “Higher Order Component”. Higher Order
    Components typically enforce property names on the components they are
    composed with. To work around this, many providers of “Higher Order
    Components” supply a selector function which allows consumers to choose
    their property names (think redux `connect`’s select function). This isn’t
    a problem with Function as Child Components.
4.  Doesn’t pollute the “props” namespace. This allows you to use a “Ratio”
    component and a “Pinch to Zoom” component together, regardless of the fact
    that they are both calculating width. Higher Order Components carry an
    implicit contract they impose on the components they are composed with;
    unfortunately, this can mean colliding prop names that make some Higher
    Order Components impossible to compose with others.
5.  Higher Order Components create a layer of indirection in your development
    tools and in the components themselves. For example, constants set on a
    component become inaccessible once it is wrapped in a Higher Order
    Component:

```jsx
MyComponent.SomeConstant = "SCUBA";
```

Then wrapped by a Higher Order Component,

```jsx
export default connect(....)(MyComponent);
```

RIP your constant. It is no longer accessible unless the Higher Order Component
provides a function to access the underlying component class. Sad.

#### Summary

Most of the time, when you think “I need a Higher Order Component for this
shared functionality!”, I hope I have convinced you that a Function as Child
Component is a better alternative for abstracting your UI concerns. In my
experience, it nearly always is, with the exception of when your child component
is truly coupled to the Higher Order Component it is composed with.

#### An Unfortunate Truth About Higher Order Components

As an ancillary point, I believe that Higher Order Components are improperly
named though it is probably too late to try and change their name. A higher
order function is a function that does at least one of the following:

1.  Takes n functions as arguments.
2.  Returns a function as a result.
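For instance, in plain JavaScript:

```javascript
// A higher order function: it takes a function as an argument and returns a
// new function as its result.
function withLogging(fn) {
  return function (...args) {
    console.log("calling with", args);
    return fn(...args);
  };
}

const add = (a, b) => a + b;
const loggedAdd = withLogging(add);
loggedAdd(2, 3); // logs its arguments, then returns 5
```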

Indeed, Higher Order Components do something similar to this, namely, take a
Component as an argument and return a Component, but I think it is easier to
think of a Higher Order Component as a factory function: a function that
dynamically creates a component to allow for runtime composition of your
components. However, they are **unaware** of your React state and props at
composition time!

Function as Child Components allow for similar composition of your components,
with the benefit of having access to state, props and context when making
composition decisions. Since Function as Child Components:

1.  Take a function as an argument.
2.  Render the result of said function.

I can’t help but feel they should have gotten the title “Higher Order
Components” since it is a lot like higher order functions only using the
component composition technique instead of functional composition. Oh well, for
now, we will keep calling them “Function as Child Components” which is just
wordy and gross sounding.

### Examples

1.  [Pinch to Zoom - Function as Child Component](https://gist.github.com/iammerrick/c4bbac856222d65d3a11dad1c42bdcca)
2.  [react-motion](https://github.com/chenglou/react-motion) This project
    introduced me to this concept after being a long time Higher Order Component
    convert.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Single State Tree + Flux]]></title>
            <link>https://www.merrickchristensen.com/articles/single-state-tree</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/single-state-tree</guid>
            <pubDate>Sun, 30 Aug 2015 00:00:00 GMT</pubDate>
            <description><![CDATA[A brief Flux explanation as a precursor to single state tree goodness.]]></description>
            <content:encoded><![CDATA[
When people ask me how I build user interfaces these days they often stare back
at me with wide-eyed shock when I tell them I manage all my state in a single
tree using Flux to populate it.

> Wait a minute, you store everything in a single tree? Like one giant variable?

Yup. And it's awesome.

> Really?

Totes McGoats

## What is the "single state tree" anyways?

In order for me to make my audacious claims about why I think this approach is
awesome, let me first explain the approach. It's really quite simple: all of
your application's state is managed in a single tree. The tree is populated
using a concept called "actions".

### What are actions?

Actions are serialized representations of events that get dispatched to your
"store"; your store's job is to manage that single state tree. Some examples of
actions:

The _results_ of an HTTP request:

```javascript
{
  type: 'FETCH_COMMENTS_SUCCESS',
  payload: {
    comments: [...]
  }
}
```

A user interaction:

```javascript
{
  type: "SAVE_COMMENT"
}
```

Or any other type of state changing mechanism.

### What is a store?

Your store receives each action and figures out what state to derive from it.
For example, the system makes a request to get the latest comments and then
dispatches an action; the store then says:

> Ahhh, `FETCH_COMMENTS_SUCCESS`, I know precisely what to do with you, I save
> your `comments` into cache and notify my observers that state has changed.

So what does this process look like for a typical "Download and Show Data"
requirement? Well, first we have what we call an "action creator", which is
really just a function that creates an action object (a serialized
representation of a state change) and dispatches it to the store. So it all
starts with an action creator:

```javascript
function getComments(dispatch) {
  $.ajax({
    url: "/post/single-state-tree/comments",
  }).then((comments) => {
    var action = {
      type: "FETCH_COMMENTS_SUCCESS",
      payload: {
        comments,
      },
    };

    dispatch(action);
  });
}
```

We then have a store with a `dispatch` function that manages all state changes.

```javascript
function dispatch(action) {
  if (action.type === 'FETCH_COMMENTS_SUCCESS') {
    // Aww yes my good sir, I know precisely what to do with you.

    // Set next state...
    state = Object.assign({}, state, { comments: action.payload.comments });
  }
}
```

These two are then combined accordingly to dispatch the action.

```javascript
getComments(dispatch);
```

Typically `dispatch` is associated with a store; I'll get into that in just a
bit. Anyways, subscribers to state changes are then notified, "Hey Miss, your
state looks different!"

### What is a single state tree?

A single state tree is a single tree that represents all of your application's
state. This approach is different from traditional Flux in that you have a
singular store which manages all of your state in one mega tree. First, though,
let's recap some of the benefits of Flux.

## Benefits of Flux

1.  Using actions to represent state changes provides a serializable data
    format for all of your system's state changes.

- Ever wondered how your application got into a particular state? Wonder no
  more, we now have a frame by frame replay of every state change your system
  has ever experienced.
- This frame by frame replay is shareable, imagine a bug report with a
  downloadable "reproduce" file. You simply _play_ the "reproduce" file and get
  the same bug.
- This frame by frame replay is analyzable. Imagine recording all your user
  tests and using analysis tools to get deeper insights into how people interact
  with your system. Also, optimization opportunities anyone?

2.  All of your state changes are passed through a single mechanism.

- A developer can walk into a system and know every potential state changing
  piece of code there is by looking at your action constants.
- Developers can author "middleware", or functions that each action is passed
  through. This enables things like logging each state change, having different
  types of actions such as thunks or promises, implementing analytics; the
  possibilities are only as limited as our creativity.
- Developer tooling can hook into every state change, holy awesome.

3.  Synchronous UI

- UI is rendered synchronously, every time. No more "this state change caused
  that state change, which caused another state change." Just: here is the
  current state, what does the UI look like with this state? Just think about
  it, your UI as a pure function... ever heard of UI so easy to test? Me
  neither.

4.  Decouples actions from state changes. This means you can have one action
    make multiple state changes. For example, HTTP responses are decoupled from
    state changes, which is useful for endpoints that contain nested entities.
    Say you download all of the comments and they come back with nested user
    objects; with Flux you can simply populate the user section of the tree,
    and the next time someone asks for one of those users, you can skip the
    request because you know you have sufficient state.

5.  The questions "What is my state?" & "When does my state change?" are
    answered simply instead of being littered throughout your application code.
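The "your UI as a pure function" idea above can be sketched like this, where
`view` is a hypothetical stand-in for a render function, not React itself:

```javascript
// A view as a pure function of state: same state in, same markup out,
// testable without a DOM or a browser.
function view(state) {
  return "<h1>" + state.comments.length + " comments</h1>";
}

console.log(view({ comments: ["a", "b", "c"] })); // <h1>3 comments</h1>
```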

Ok, that all sounds great, but at what cost? I hear you gasping...

> But, but, but this approach seems unwieldy and infeasible.

Does it, dear friend? It's not. There are tools to rein in the hard parts. Here
is an example of my tool of choice for leveraging this technique, called Redux.

# Redux

Redux has a remarkably simple interface, and its job is solving precisely what
this article is all about: a single state tree managed by actions. Let's start
out by making a store; in Redux this function is called `createStore`.

### Creating a Store

```javascript
import { createStore } from "redux";

let store = createStore(handler);
```

When we call `createStore` we give it our action handler, or our _reducer_. The
reducer's job is to take the current state and the action and return the next
state. Following our example above, let's write a simple reducer.

```javascript
function reducer(state = {}, action) {
  if (action.type === "FETCH_COMMENTS_SUCCESS") {
    return Object.assign({}, state, {
      comments: action.payload.comments,
    });
  }
  return state;
}
let store = createStore(reducer);
```

Writing a reducer like this can get a little unwieldy because you are managing
the structure of your tree yourself. Thankfully, Redux offers a little utility
function called `combineReducers`, whose job is to take multiple reducers and
manage the tree structure for you. It can be as nested as you like, but we'll
demonstrate a flat tree below.

```javascript
let reducerTree = {
  comments: (state = {}, action) => {
    if (action.type === 'FETCH_COMMENTS_SUCCESS') {
      return action.payload.comments;
    }
    return state;
  },
  users: ...,
  posts: ...
};

// combineReducers is also imported from "redux"
let store = createStore(combineReducers(reducerTree));
```

Now our comments reducer is just the reducer for comments, our users reducer is
just the reducer for users, etc. Our state tree would look like this:

```javascript
{
  comments: {},
  users: {},
  posts: {}
}
```

Notice how it matches the reducerTree we provided to combineReducers?

Since we have our reducer wired up, when we call `dispatch` on the store with
that particular action, the store's state tree will change to a copied state
tree with the comments included. But how do we access this state tree? Well,
it's pretty simple really: we call `getState`.

```javascript
let state = store.getState();
```

This can be called at any point in time to retrieve your application's current
state, but you typically call it in a `subscribe` callback, which lets you
listen for changes to the state and do something accordingly, you know, like
render your interface again.

```javascript
store.subscribe(() => {
  let state = store.getState();
  render(ui, state);
});
```

This gives you simple, _synchronous_ renders, where each render is not a mix of
changes over time but a singular snapshot of state at a given point in time.

### Benefits of Single State Tree

1.  Improved developer tooling. Since all of your state exists in one location,
    other parts of your application can be reloaded _without_ blasting your
    state. Gone are the days of reloading your entire application and clicking
    different things to get your application into the state you are working on
    before testing your changes. Instead, you simply reload that one file and
    your application's state stays intact. You can do this kind of hot
    reloading with different tools in the ecosystem.
2.  Shared cache across your system. When one section of your application
    downloads a user, it's there for the other pieces of your application that
    use that user, no HTTP request required.
3.  Your entire application's state can be viewed in one structure (and shared
    as one structure); see the benefits of Flux above.
4.  The majority of your application's state management is pure functions,
    hello testability.
5.  Your application's state can be simply bootstrapped from the server, hello
    server-side rendering.
6.  State changes are predictable.
7.  Undo & Redo are practically free.
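A rough sketch of why Undo comes practically free: since every change flows
through one dispatch, a store can snapshot the tree before each action. This is
a toy illustration, not Redux's actual implementation:

```javascript
// A toy store that keeps past states; undo is just popping the history.
function createUndoableStore(reducer, initialState) {
  let state = initialState;
  const past = [];
  return {
    dispatch(action) {
      past.push(state); // snapshot before each change
      state = reducer(state, action);
    },
    undo() {
      if (past.length > 0) state = past.pop();
    },
    getState() {
      return state;
    },
  };
}

const counterReducer = (state, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

const store = createUndoableStore(counterReducer, 0);
store.dispatch({ type: "INCREMENT" });
store.dispatch({ type: "INCREMENT" });
store.undo();
console.log(store.getState()); // 1
```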

## Further Reading

- [Redux Documentation](http://rackt.github.io/redux/index.html)
- [React Hot Reloader](https://github.com/gaearon/react-hot-loader)
- [Redux Developer Tools](https://github.com/gaearon/redux-devtools)
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[React + Angular DI]]></title>
            <link>https://www.merrickchristensen.com/articles/react-angular-di</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/react-angular-di</guid>
            <pubDate>Sat, 20 Dec 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[Dependency injection with Angular 2's dependency injector, React and react-router.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; because it was a terrible (but fun)
idea.

I have been deliberating on the topic of writing testable React code without
singletons and service locator “injection” for some time. My
[last article](/articles/react-dependency-injection.html) proposed a very
straightforward approach and discussed why I am chasing these goals. Well, I am
very happy to say that I feel I’ve settled on an approach, and an unlikely one
at that. I believe this approach offers the following:

1.  Code isn’t “coupled” to the dependency injector, it can be run without it
    using vanilla constructor injection.
2.  Code is testable without module system level mocking.

Link to the discussed project is
[here](https://github.com/iammerrick/ng-react-router).

## di.js

No front-end web framework has pioneered testability like Angular.js. At the
forefront of this effort is the dependency injector. Angular 2 has a little
known, _fantastic_, project in the works called
[di.js](https://github.com/angular/di.js/).

di.js gives us the testability of a dependency injector but the ease of use of a
module system.

## React

React has pioneered the virtual DOM and the most fantastic component
composability I’ve seen to date. React dominates the UI.

## di.js + React

Let’s take a look at pairing up di.js & React. While we’re at it, let’s use the
fantastic react-router library.

#### bootstrap.js

The first thing we need is a bootstrap file; think of the main() function in
Java or C. This just gets the app up and running.

```javascript
var di = require("di");
var Router = require("./Router");
var React = require("react");

// Make the injector
var injector = new di.Injector([]);

// Grab the Router
var router = injector.get(Router);

// Get it up and running and render it into the DOM
router.run((Handler) => {
  React.render(<Handler />, document.body);
});
```

#### Router.js

Now we create the Router and inject the corresponding Routes. Notice we
annotate the Router to inform di.js what to inject; the cool thing is, though,
we could just inject our own Routes manually at test time. Rad, eh?

```javascript
var ReactRouter = require("react-router");
var di = require("di");
var Routes = require("./routes");

var Router = function (Routes) {
  return ReactRouter.create({
    routes: Routes,
  });
};

// Inject the Routes
di.annotate(Router, new di.Inject(Routes));

module.exports = Router;
```

#### Routes.js

Same story here, just define the Routes and inject AppHandler to deal with the
base route.

```javascript
var { Route } = require("react-router");
var AppHandler = require("./AppHandler");
var di = require("di");
var React = require("react");

var Routes = function (AppHandler) {
  return <Route handler={AppHandler} />;
};

di.annotate(Routes, new di.Inject(AppHandler));

module.exports = Routes;
```

#### AppHandler.js

Notice, we can inject child components to use in our component.

```javascript
var React = require("react");
var ChildComponent = require("./ChildComponent");
var di = require("di");

var AppHandler = function (ChildComponent) {
  return React.createClass({
    render() {
      return (
        <div>
          <h1>Hello world!</h1>
          <ChildComponent />
        </div>
      );
    },
  });
};

di.annotate(AppHandler, new di.Inject(ChildComponent));

module.exports = AppHandler;
```

#### ChildComponent.js

```javascript
var React = require("react");
var di = require("di");
var AppActions = require("./AppActions");

var ChildComponent = function (AppActions) {
  return React.createClass({
    handleClick() {
      AppActions.alertInExcitement();
    },
    render() {
      return (
        <div onClick={this.handleClick}>I am a child component. Click me!</div>
      );
    },
  });
};
di.annotate(ChildComponent, new di.Inject(AppActions));
module.exports = ChildComponent;
```

#### AppActions.js

One of my favorite pieces here: AppActions has no dependencies, so we can just
write a plain ES6 class.

```javascript
class AppActions {
  alertInExcitement() {
    alert("I am so excited!");
  }
}

module.exports = AppActions;
```

Take a look at AppActions, could a file be easier to test? It is literally just
a class. (Aside from calling alert, gross.) Notice that even the code that does
have dependencies can still be constructed by hand, neat right?
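To make “constructed by hand” concrete, here is the same factory idea stripped
of React and di.js, with hypothetical names, showing how a test can hand a mock
straight to the factory:

```javascript
// A component factory that takes its dependency as an argument, so a test can
// supply a mock by hand: no injector, no module-level mocking.
var ChildThing = function (actions) {
  return {
    handleClick: function () {
      actions.alertInExcitement();
    },
  };
};

// In a test, construct it with a hand-built mock:
var calls = 0;
var child = ChildThing({ alertInExcitement: function () { calls++; } });
child.handleClick();
console.log(calls); // 1
```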

## Thoughts

I am going to travel this path a little further and perhaps write an update.
di.js could probably use some utility functions for non-AtScript code, and the
annotations in long form are a little tedious, but other than that I am very
satisfied and excited about this technique.

## Credits

1.  A conversation a few years ago with Igor Minar, Angular’s tech lead, that
    convinced me that “service locator” mocking was a bad idea; and that a
    dependency injector is in fact a very useful tool.
2.  Ryan Florence for
    [create-container](https://github.com/rpflorence/create-container) and
    making react-router injector friendly
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[React Dependency Injection]]></title>
            <link>https://www.merrickchristensen.com/articles/react-dependency-injection</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/react-dependency-injection</guid>
            <pubDate>Sat, 15 Nov 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[Constructor Dependency Injection with React.js]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; because I think writing object oriented
React is the wrong approach for a number of reasons.

## Update June 5, 2016

React has removed automatically exporting factory functions from createClass.
This means, components can no longer be called as functions that accept their
properties. In order to get the same effect, you must call React.createFactory
before calling your component as a function.

This:

```javascript
var MyComponent = React.createClass({
  render() {
    return <h1>Hello {this.props.name}</h1>;
  },
});

MyComponent({
  name: "Merrick",
});
```

Becomes this:

```javascript
const MyComponent = React.createFactory(
  class extends React.Component {
    render() {
      return <h1>Hello {this.props.name}</h1>;
    }
  }
);

MyComponent({
  name: "Merrick",
});
```

To be clear, the recommended use in your application's code is to leverage JSX.
In order to follow along with this article, without the overhead of JSX,
components should be wrapped in createFactory. (Which JSX effectively does for
you by compiling to React.createElement). See
[this deprecation notice](https://gist.github.com/sebmarkbage/d7bce729f38730399d28)
for more information.

# React Dependency Injection

I love React.js. I find it to be a powerful tool for creating UI and revel in
its [immediate mode](http://en.wikipedia.org/wiki/Direct_mode) rendering model.
Unfortunately, however, the dominant approach for writing React applications is
to use singletons at the module level. Here is an example of what I mean:

```javascript
var count = 0;

module.exports = {
  increment() {
    return count++;
  },

  getCount() {
    return count;
  },
};
```

The trouble with this approach is that it is difficult to test. You typically
need to add some sort of reset functionality to your module, like this:

```javascript
var count = 0;

module.exports = {
  increment() {
    return count++;
  },

  getCount() {
    return count;
  },

  reset() {
    count = 0;
  },
};
```

That way, in your unit tests, you can reset the corresponding state to its
original place. This can get very complex and tedious depending on how much
state your module holds and how complex it is. Because of this, people tend to
just throw away the entire module and re-evaluate it each time. That way, each
test gets the benefit of fresh state, and you don't have to write reset
methods. This is the way Facebook's [Jest](https://facebook.github.io/jest/)
works, as does my own library,
[Squire.js](https://github.com/iammerrick/Squire.js/). This is problematic for
a few reasons.

1.  It's slower; you are re-evaluating the module several times.
2.  The module still has the same state for an entire test file; multiple it()
    blocks would still need to reset state or require() the module in each it()
    block. Slow and difficult to test.
3.  `require` is a
    [service locator](http://en.wikipedia.org/wiki/Service_locator_pattern), not
    a [dependency injector](http://en.wikipedia.org/wiki/Dependency_injection).
    Using it for both conflates its uses, violating the
    [single responsibility principle](http://en.wikipedia.org/wiki/Single_responsibility_principle).
4.  In tests, uses of `instanceof` can break because you could be getting a
    different instance for each `require()`.
5.  Code is now encouraged to be written in the form of singletons, which is
    problematic for its
    [own reasons](http://stackoverflow.com/questions/137975/what-is-so-bad-about-singletons).
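The state leakage described in point 2 is easy to demonstrate with the counter
from above, inlined here so the sketch runs standalone:

```javascript
// Inlined version of the counter module: module-level state is a singleton
// shared by everything that requires the module.
var count = 0;
var counter = {
  increment: function () { return count++; },
  getCount: function () { return count; },
};

// "Test 1" exercises increment:
counter.increment();

// "Test 2" expects a fresh counter, but the state leaked from "Test 1":
console.log(counter.getCount()); // 1, not 0
```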

# A Different Approach

I wanted to offer a different approach that seems to solve the requirements I
have which are:

1.  No singletons, re-evaluation or reset methods required for tests.
2.  Code must remain easy to understand.

## React Properties

The method for passing anything to a React Element is to use properties,
properties are passed in as the first argument of a React Element. For example:

```javascript
var MyComponent = React.createClass({
  render() {
    return <h1>Hello {this.props.name}</h1>;
  },
});

MyComponent({
  name: "Merrick",
});
```

This would render the following HTML:

```html
<h1>Hello Merrick</h1>
```

Properties are effectively the technique one uses to pass things to a component
that are not child elements. The neat thing is you can even set default
properties. Check this out:

```javascript
var MyComponent = React.createClass({
  getDefaultProps() {
    return {
      name: "Scuba Steve",
    };
  },
  render() {
    return <h1>Hello {this.props.name}</h1>;
  },
});

MyComponent();
```

This would render what you would expect:

```html
<h1>Hello Scuba Steve</h1>
```

Have you made the connection yet? We can use React Properties to inject
dependencies to our components. Check this out:

```javascript
var MyComponentViewModel = require("./MyComponentViewModel");
var HTTP = require("http");

var MyComponent = React.createClass({
  getDefaultProps() {
    return {
      model: new MyComponentViewModel(new HTTP()),
    };
  },

  getInitialState() {
    return this.props.model.getState();
  },

  render() {
    return <h1>Hello {this.state.name}</h1>;
  },
});

MyComponent();
```

Now, the code is just as tractable as it is using the singleton approach: you
can see right where the dependencies exist on the file system. But here is
where it gets cool... We can pass in a different view model under test, like
this:

```javascript
var MyComponentViewModel = require("./MyComponentViewModel");
var mockHTTP = {
  get: function () {
    // Would probably return a promise.
    return {
      name: "Async Name",
    };
  },
};

MyComponent({
  model: new MyComponentViewModel(mockHTTP),
});
```

With this approach we get the following benefits:

1.  We can specify default implementations of dependencies.
2.  We can inject different dependencies if we would like (for example under
    test).
3.  The code is just as tractable as it is using modules as singletons.
4.  We are coding to an interface not an implementation.
5.  No more re-evaluation of code for tests or global state.

## Bonus

A neat side-effect of this approach is that we can use propTypes to validate our
dependencies honor a specific interface. This encourages us to code to an
interface, not an implementation.

```javascript
var MyComponent = React.createClass({
  propTypes: {
    model: React.PropTypes.shape({
      getState: React.PropTypes.func,
    }),
  },

  getDefaultProps() {
    return {
      model: new MyComponentViewModel(new HTTP()),
    };
  },

  getInitialState() {
    return this.props.model.getState();
  },

  render() {
    return <h1>Hello {this.state.name}</h1>;
  },
});
```

This will validate that our model has a `getState` method! How cool is that!?
Now we are really coding to an interface and we get that validated by React's
type system.

# Real Talk

This is a new idea and I'm not positive how great it is. I would love to hear
criticism and feedback in all its forms, preferably via
[twitter](http://twitter.com/iammerrick) or email.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Swift Notes]]></title>
            <link>https://www.merrickchristensen.com/articles/swift</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/swift</guid>
            <pubDate>Tue, 03 Jun 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[My notes trying to develop a mental model for Apple's Swift Language]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for offering little to no value from the
moment it was written.

- Type inference with sane interpolation (which takes expressions, yay)! Makes
  for a less nasty NSLog.

- Dictionary & Array literal syntax. Legit. More akin to PHP than JS by syntax.

- let & var seem to hold for reference semantics not value semantics. Think
  Scala's val & var.

- TODO: Does for in take advantage of the NSFastEnumeration protocol? In which
  case, could users provide iterables?

- Expression support for control flow! Pattern matching!

- Ranges. Appear to be lazily computed as expected.

- Nice overloading of `...` to represent variadic functions.

- First class functions!

- The closure block syntax is a little odd overloading the in keyword to
  separate the interface from the implementation.

- Single-statement closures have an implicit return.

- Classes look freakishly like ES6/Typescript classes.

- Explicit override for subclasses.

- Weird implicit "newValue" in setters. You can explicitly set the name if you
  like though.

- willSet/didSet I imagine this was done to keep binding logic out of
  getters/setters.

- The existential `?` operator is just great; reminds me of CoffeeScript.

- Named parameters only exist in methods? Seems like a choice for Objective-C
  interoperability; feels odd that functions don't have the same restriction.

- structs == value classes

- enums + Pattern Matching reminds me a lot of Scala's case classes.

- extensions look a lot like C#'s so far as I can tell; I'm guessing this will
  be used to replace Objective-C Categories. Extending built-ins FTW!

- Protocols now restrict consumers to consuming only methods in the protocol.
  Awesome!
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Extensible Web Summit]]></title>
            <link>https://www.merrickchristensen.com/articles/extensible-web-summit</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/extensible-web-summit</guid>
            <pubDate>Sat, 05 Apr 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[Developers, standards bodies, and browser implementors all working together to progress the web.]]></description>
            <content:encoded><![CDATA[
As I sat down in a half-empty room at Adobe headquarters and began to look
around, it didn't take long for me to realize how truly privileged I was to be
there. I felt humbled. Actually, I felt like an impostor, as if I had
infiltrated a secret meeting of sages deciding the fate of something of massive
importance. In attendance were members of TC39, the W3C TAG, Angular, Polymer,
Ember &
[Sir Tim Berners-Lee](http://en.wikipedia.org/wiki/Tim_Berners-Lee). In fact, I
requested a picture with him for posterity but found myself feeling like a
[humiliated rejected Bieber fan](https://www.youtube.com/watch?v=FZQtRxsN_JU)
when he declined my request and asked me to focus on the content. Touché, Mr.
Berners-Lee. Below is a summary of that very content. ;-)

The event was kicked off by Daniel Appelquist, who I believe was the organizer.
It was run [barcamp](http://en.wikipedia.org/wiki/BarCamp) style; however,
before the barcamp sessions there was a series of lightning talks.

![Final Sessions](/assets/images/articles/extensible-web-summit/final-sessions.jpg)

# Lightning Talks

## Yehuda Katz

Yehuda noted that the "don't break the web" requirement means that browser
vendors don't have the liberty to "ship and iterate" in the traditional sense.
The [Extensible Web Manifesto](http://extensiblewebmanifesto.org/) is about
providing primitives that allow developers to iterate in user space and
eventually merge that progression back into the platform. This enables iteration
and mitigates the burden of backward compatibility. It also empowers users to
progress the platform. He noted Polymer in particular as providing high-level
APIs in userland iteration, meanwhile leveraging that information to drive the
platform forward.

## Jake Archibald

Jake started off by lamenting the unfortunate lack of features found in the
web platform such as push notifications, offline first, background updates,
payments, alarms, geofencing, etc. He then went on to explain a new technology,
[Service Workers](https://github.com/slightlyoff/ServiceWorker), that solves
many of these problems.

## Angelina Fabbro

Angelina gave a quick overview of
[Web Components](http://www.w3.org/TR/components-intro/). Namely,

- [Shadow DOM](http://www.w3.org/TR/shadow-dom/)
- [Templates](http://www.w3.org/TR/html-templates/)
- [HTML Imports](http://www.w3.org/TR/html-imports/)
- [Custom Elements](http://www.w3.org/TR/custom-elements/).

## Tab Atkins

Tab opened by conceding that CSS is by far the least extensible piece of the web
platform. He then went over different ideas to open up CSS for developer
empowerment. My favorite part was the optimistic and empowering close, "The
future of CSS is open for business".

- [CSS Variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_variables)
- [CSS Aliases](http://tabatkins.github.io/specs/css-aliases/)

## Domenic Denicola

Domenic shared his story about the
[Promises/A+](http://promises-aplus.github.io/promises-spec/) specification in
which a community born specification is now a platform-wide primitive for
thinking about asynchrony. He is now working on a
[streams](https://github.com/whatwg/streams) specification. I owe Domenic a
special bit of gratitude for convincing me to come to this meeting in the first
place, thank you, Domenic.

## Jeni Tennison

Jeni shared her work on a
[specification for packaging on the web](https://github.com/w3ctag/packaging-on-the-web).
Today we optimize our applications using concatenation, however, you can't
concatenate ES6 modules due to module scoping. There is also a large amount of
work for mitigating requests, sprites or Base64 encoding images, CSS & JS
concatenation etc. Packages are an answer to this problem.

## Alex Russell

Alex posed the question: how does progress happen? He contended that progress
happens outside of the W3C; the W3C stamps that progress and it becomes a
standard. Progress happens with:

- Evidence-based consensus, e.g. polyfills.
- Engineers having personal experience with a problem and fixing it.

Progress starts with changing minds and ends with changing behavior. Delivering
meaningful progress on the web requires that we ship and use evidence to iterate
on the platform.

## Anne van Kesteren

Anne told a story about how, roughly a decade ago, the HTML parser was standardized.
Prior to that time browsers implemented HTML in a variety of ways which meant
new specifications and standards were unreasonably difficult to agree upon and
implement. Once they got behind a standard strategy of parsing HTML a great deal
of progress was enabled. That work is leveraged as the underlying primitive for
many of the features we are working on today. The point of the story, so far as
I understood it, is that progress on the web platform requires a sort of
"archaeology" in which you unearth the underlying primitives in our existing
systems to propose meaningful ways of moving them forward. He noted a lot of
things need some archaeological work done, namely styling form controls and
content editing.

All of the above happened in about thirty minutes; it was wonderful. Afterward,
we split up into barcamp sessions. I attended:

1.  [Service Workers](https://github.com/slightlyoff/ServiceWorker)
2.  [Web Components](http://www.w3.org/TR/components-intro/)
3.  Bed Rock - Unearthing Primitives - A crazy awesome wacky idea of using
    MessageChannel's to provide low-level APIs.
4.  [Packaging on the Web](https://github.com/w3ctag/packaging-on-the-web)
5.  Remote Debugging - Source Maps - I bailed out to go listen to
    [Misko Hevery](https://twitter.com/mhevery) discuss directive semantics in
    Dart with [Justin Fagnani](https://github.com/justinfagnani).

I considered posting my notes about the discussions and topics covered in these
sessions but I think that content is best explored in the meeting minutes found
on the event's [lanyrd page](http://lanyrd.com/2014/extensible-web-summit/).

![Discussing Web Packaging](/assets/images/articles/extensible-web-summit/packaging.jpg)
![Web Components](/assets/images/articles/extensible-web-summit/web-components.jpg)
![Drinks](/assets/images/articles/extensible-web-summit/drinks.jpg)
![Discussing Sessions](/assets/images/articles/extensible-web-summit/proposing-sessions.jpg)
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Grunt.js Workflow]]></title>
            <link>https://www.merrickchristensen.com/articles/gruntjs-workflow</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/gruntjs-workflow</guid>
            <pubDate>Fri, 01 Nov 2013 00:00:00 GMT</pubDate>
            <description><![CDATA[Completely refactor your workflow with Grunt.js. A task-based command line tool written in JavaScript.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for irrelevance. Webpack & NPM Scripts
have made Grunt useless to me.

In this article I'm going to show you how to leverage Grunt.js to completely
refactor your workflow. Follow in the footsteps of some of the most
[prolific open source projects](https://github.com/jquery/jquery/commits/master/grunt.js)
in the world and leave the grunt work to Grunt.js.

## What is Grunt.js?

Grunt.js is a fantastic task-based command line tool written in JavaScript on
top of the wonderful Node.js platform. You can leverage Grunt.js to script away
all of your grunt work. Tools and procedures that you historically ran and
configured yourself, you can now abstract behind a convention based command line
interface with a consistent means of configuration. You can write your most
complicated tasks _once_ and leverage them in all of your projects using project
specific configuration.

What are these tasks I keep referencing? Well, really that's up to you, but a few
of the most common ones are
[concatenating files](https://github.com/gruntjs/grunt-contrib-concat),
[linting](https://github.com/gruntjs/grunt-contrib-jshint) and
[testing](https://github.com/karma-runner/grunt-karma) your code, and
[minification](https://github.com/gruntjs/grunt-contrib-uglify). Grunt.js
doesn't limit you to JavaScript specific tasks either, because Grunt.js is built
on top of Node.js you can leverage all the power of Node in your tasks. Even if
your tool isn't implemented in JavaScript we can defer tasks to child processes,
using their command line interface or even a web service. For example, the
[grunt-contrib-compass](https://github.com/gruntjs/grunt-contrib-compass) task,
which allows you to compile your SCSS/SASS stylesheets using the excellent
[Compass framework](http://compass-style.org/) (a ruby program), is implemented
using the Compass provided command line interface. The point is that just about
any tool can be abstracted behind a Grunt task.
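
To sketch what deferring to a child process can look like, here is a minimal Gruntfile fragment (the task name and shell command are illustrative assumptions, not taken from a real plugin):

```javascript
// Sketch: a Grunt task that defers to an external CLI tool via Node's
// child_process module. The "stylesheets" name and "compass compile"
// command are made-up examples.
var exec = require("child_process").exec;

module.exports = function (grunt) {
  grunt.registerTask("stylesheets", "Compile SCSS via an external CLI.", function () {
    var done = this.async(); // mark this task as asynchronous
    exec("compass compile", function (error, stdout) {
      grunt.log.write(stdout);
      done(!error); // fail the task if the command failed
    });
  });
};
```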

## Why should I use Grunt.js?

### Consistency

Grunt.js provides a consistent interface for configuring and using any task.

### Utility

Grunt.js allows you to run your tasks when monitored files change or manually
using the command line. It also allows you to aggregate tasks using aliases,
this is a very powerful feature which allows you to abstract the details of a
more generic task. For example, one might abstract linting, testing,
concatenating, and minification behind a task named "build".
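
Such an alias can be sketched in a Gruntfile like this (the aggregated task names are illustrative):

```javascript
// Inside a Gruntfile: "build" aggregates several more granular tasks.
// Running "grunt build" executes each of them in order.
grunt.registerTask("build", ["jshint", "concat", "uglify"]);
```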

### Community

Grunt.js has a strong and rapidly growing community. The barrier to entry for
using and customizing Grunt is so minimal that there is already a whole bunch
of great tasks available today. Take a look at the bountiful
[plugin landscape](http://gruntjs.com/plugins).

### Power & Flexibility

Grunt.js provides all the power of Node.js for your tasks. It also provides some
very fundamental and powerful abstractions for common needs, like working with
the file system or logging to the standard out (stdout).

## Who is using it?

Lots of excellent projects are leveraging Grunt.js. To name a few:

- [angular.js](https://github.com/angular/angular.js/commits/master/Gruntfile.js)
- [jQuery](https://github.com/jquery/jquery/commits/master/grunt.js)
- [Many, many, many more…](https://github.com/search?q=grunt.js)

## A Grunt.js Workflow

The project structure we are going to use in this example is straightforward.
We have a folder called "src", which holds our application's source code, and a
folder called "test", which holds our application's Mocha-driven test suite.

![Before Grunt.js File System](/assets/images/articles/gruntjs-workflow/file-system-pre-grunt.png "Before Grunt.js File System")

First things first, let's get Grunt installed and integrated into the project so
we can start adding useful tasks.

### Installation

Let's install the Grunt CLI globally so we can access the "grunt" command (I
assume you already have [Node.js](http://nodejs.org/) installed). The job of
the grunt command is to load and run the version of Grunt you have installed
locally to your project, irrespective of its version. If you have some projects
using Grunt 0.4 and others using Grunt 0.3, the grunt-cli will select and run
the proper Grunt installation. To install it, run the following command:

```bash
npm install grunt-cli -g
```

You can check and make sure Grunt is installed correctly by asking the program
for its version.

```bash
grunt --version
```

### Project Integration

Time to integrate Grunt with our project. First, let's create a package.json
file so we can quickly track and install our application's dependencies. What
dependencies? Well, we will need to add Grunt.js as one. Soon we will add our
third-party Grunt tasks here too!

Having a package.json file is not required to use Grunt; however, it makes
managing our third-party tasks a lot easier! You will see what I mean in a
little bit. For now, we can create a package.json at the root of our project
with the following contents.

#### package.json

```json
{
  "name": "Example",
  "version": "0.0.1",
  "private": true,
  "devDependencies": {
    "grunt": "latest"
  }
}
```

If you already have a package.json file and want to add Grunt to your project,
you can do that by running the following command:

```bash
npm install grunt --save-dev
```

Terrific, now it is time to create our Gruntfile! A Gruntfile is a JavaScript
file that Grunt leverages to understand your project's tasks and configuration.
When you run "grunt" from the command line, Grunt will recurse upward until it
finds your Gruntfile. This functionality allows you to run Grunt from any
subdirectory of your project.

#### Gruntfile.js

This is the basis of a Gruntfile: a wrapper function that takes in "grunt" as
an argument. This allows us to register tasks and configuration with grunt (and
leverage Grunt's APIs) before Grunt actually runs any tasks. Think of this as
an entry point of sorts for Grunt.

```javascript
module.exports = function (grunt) {
  // set up grunt
};
```

Well done, you now have Grunt installed. Go ahead and run "grunt" from your
command line at the root of your project. You will see a warning that it can't
find a default task. Very well then, let's go about adding some tasks!

```bash
$ grunt
Warning: Task "default" not found. Use --force to continue.

Aborted due to warnings.
```

### Automate The Grunt Work

Wouldn't it be great if we could run our JavaScript code through JSHint and get
all sorts of great feedback on making our code more consistent and less buggy?
Fortunately, there is a ready-made task for code linting with JSHint. Let's
modify our Gruntfile to get this working!

#### Gruntfile.js

First install the JSHint task by running this command:

```bash
npm install grunt-contrib-jshint --save-dev
```

```javascript
/*global module:false*/
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      src: [
        "Gruntfile.js",
        "src/app/**/*.js",
        "src/config.js",
        "tests/app/**/*.js",
      ],
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        boss: true,
        eqnull: true,
        browser: true,
        globals: {
          require: true,
          define: true,
          requirejs: true,
          describe: true,
          expect: true,
          it: true,
        },
      },
    },
  });

  // Load JSHint task
  grunt.loadNpmTasks("grunt-contrib-jshint");

  // Default task.
  grunt.registerTask("default", "jshint");
};
```

Here we configure the "jshint" task, telling it which files to lint using
[minimatch](https://github.com/isaacs/minimatch)-style globbing. We also
configure [JSHint](http://www.jshint.com/) to our liking, allowing the globals
that are OK and making sure it only complains when we violate our chosen coding
standards. We then create an alias called "default" and tell it to run the
"jshint" task.

Go ahead and run grunt again…

```bash
$ grunt

Running "jshint:src" (jshint) task
Lint free.

Done, without errors.
```

We can also run the jshint task directly!

```bash
$ grunt jshint

Running "jshint:src" (jshint) task
Lint free.

Done, without errors.
```

Wouldn't it be great if we could lint our code automatically every time one of
our linted files changes? That's easy enough thanks to the grunt "watch" task.
Let's install and add some "watch" configuration to our grunt.initConfig call.

First we install it:

```bash
npm install grunt-contrib-watch --save-dev
```

Then we configure and load it:

```javascript
grunt.initConfig({
  // …
  watch: {
    files: "<%= jshint.src %>",
    tasks: ["jshint"],
  },
  // …
});

grunt.loadNpmTasks("grunt-contrib-watch");
```

This tells the "watch" task to run the "jshint" task every time one of the
files specified in the configuration changes! The sharp reader will notice the
use of [lodash templates](http://lodash.com/docs#template) to reference the
"jshint" configuration from "watch".
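
To see roughly what that template expansion does, here is a toy resolver (a deliberate simplification, not Grunt's actual implementation): the dotted path inside the delimiters is looked up in the configuration object.

```javascript
// Toy sketch of "<%= … %>" config template resolution.
function resolveTemplate(template, config) {
  return template.replace(/<%=\s*([\w.]+)\s*%>/g, function (match, path) {
    // Walk the dotted path ("jshint.src") through the config object.
    return path.split(".").reduce(function (object, key) {
      return object[key];
    }, config);
  });
}

var config = { jshint: { src: ["src/app/**/*.js"] } };
var resolved = resolveTemplate("<%= jshint.src %>", config);
// resolved is "src/app/**/*.js" (the array coerced into a string)
```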

While code linting is a great tool for managing code quality, it hardly
provides the confidence we need to ship our application to production. Let's
set up Grunt to run our [Mocha](http://visionmedia.github.com/mocha/) specs in
PhantomJS; that should give us a bit more confidence before shipping.

Before we set out to write our own Grunt task for Mocha, let's check NPM and make
sure nobody else has implemented one already.

```bash
$ npm search gruntplugin mocha

NAME                  DESCRIPTION
grunt-mocha           Grunt task for running Mocha specs
grunt-mocha-test      A grunt task for running server side mocha tests
grunt-simple-mocha    A simple wrapper for running tests with Mocha.
istanbul-grunt-mocha  Grunt task for running Mocha specs, writing istanbul code
```

What do you know!? Looks like someone has already created a Grunt task to run
Mocha specs! Let's go ahead and leverage it!

```bash
$ npm install grunt-mocha -D
```

This command installs [grunt-mocha](https://github.com/kmiyashiro/grunt-mocha)
and saves it as a development dependency in our package.json. This is useful
when someone else clones our repository; instead of tracing down our
dependencies manually, they can simply run "npm install" and NPM will download
the correct version of [grunt-mocha](https://github.com/kmiyashiro/grunt-mocha)
for them!

Now that grunt-mocha is installed we need to load it into our project using the
grunt.loadNpmTasks method, which allows us to load in grunt tasks from NPM
installed dependencies.

#### Gruntfile.js

```javascript
// …
module.exports = function(grunt) {

  grunt.initConfig(…);

  grunt.loadNpmTasks('grunt-mocha');
};
```

Now that Grunt is loading in grunt-mocha, let's configure it to run our tests.

```javascript
grunt.initConfig({
  // …
  mocha: {
    all: ["tests/index.html"],
  },
  // …
});
```

The tests/index.html file is a basic Mocha test runner, much like the one you
see if you run "mocha init" from the command line. Now we can execute our mocha
tests in PhantomJS from Grunt.

```bash
$ grunt mocha

Running "mocha:all" (mocha) task
Testing index.html.OK
>> 1 assertions passed (0.04s)
```

Pretty cool, eh? With 3 lines of configuration and a method call we are running
our tests in PhantomJS. Let's add the mocha task to our watch configuration so
each time a watched file changes we execute our tests as well as lint our code.

```javascript
grunt.initConfig({
// …
  watch: {
    files: '<%= jshint.src %>',
    tasks: ['jshint', 'mocha']
  },
// …
});
```

Now when we execute "grunt watch" and change a file we get immediate feedback on
our tests and code quality.

```bash
$ grunt watch
Running "watch" task
Waiting...OK
>> File "Gruntfile.js" renamed.

Running "jshint:src" (jshint) task
Lint free.

Running "mocha:all" (mocha) task
Testing index.html.OK
>> 1 assertions passed (0.04s)
```

Pretty sweet, huh!?

## Custom Tasks

Obviously Grunt.js isn't limited to code quality tasks. To demonstrate how
generic a Grunt.js task can be, let's write a custom task to compliment us every
time we run grunt.

Let's start out by registering a task called "compliment".

```javascript
module.exports = function (grunt) {
  // …
  grunt.registerTask("compliment", function () {
    grunt.log.writeln("You are so awesome!");
  });
  // …
};
```

With a call to registerTask and a simple callback function, we have a custom
Grunt task. We can run it from the command line directly, just like we ran
jshint or mocha.

```bash
$ grunt compliment

Running "compliment" task
You are so awesome!

Done, without errors.
```

Well, thank you Grunt. I think you are awesome too.

Wouldn't it be great if users could customize their compliments using their
grunt configuration and have a random compliment each time? Simple enough…

```javascript
grunt.initConfig({
  //...
  compliment: [
    "You are so awesome!",
    "You remind me of Brad Pitt, only you have a better body.",
    "You are a funny, funny kid.",
  ],
});

grunt.registerTask("compliment", "Treat yo' self!", function () {
  var defaults = ["No one cares to customize me."];

  // Notice we use the grunt object to retrieve configuration.
  var compliments = grunt.config("compliment") || defaults;
  var index = Math.floor(Math.random() * compliments.length);

  grunt.log.writeln(compliments[index]);
});
```

Observant readers will note that I added a description argument; this is used
when consumers run grunt -h…

```bash
$ grunt -h

…
  compliment  Treat yo' self!
…
```

We pull the compliments array out of the configuration using the grunt.config
function, then select a random compliment and echo it to the user. Let's add
this to our default task, just before jshint and mocha.

```javascript
// …
grunt.registerTask("default", ["compliment", "jshint", "mocha"]);
```

Now when we run "grunt" we get a fresh compliment. My day is already starting to
look up.

```bash
$ grunt

Running "compliment" task
You are a funny, funny kid.

Running "jshint:src" (jshint) task
Lint free.

Running "mocha:all" (mocha) task
Testing index.html.OK
>> 1 assertions passed (0.04s)

Done, without errors.
```

If you are interested in writing Grunt tasks others can use, see
[grunt-compliment](https://github.com/iammerrick/grunt-compliment) where I
refactored this "compliment" task and published it to NPM.

## Wrapping it up...

Grunt is a powerful tool and I've only scratched the surface in this article.
Try to leverage it in your next project and if it doesn't completely refactor
your workflow [email me](mailto:merrick.christensen@gmail.com) and tell me why.
Leave the grunt work to Grunt.js so your mind and time can be liberated for the
hard problems.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Scenario vs. Problem Solving]]></title>
            <link>https://www.merrickchristensen.com/articles/scenario-vs-problem-solving</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/scenario-vs-problem-solving</guid>
            <pubDate>Sun, 25 Aug 2013 00:00:00 GMT</pubDate>
            <description><![CDATA[Scenario solving versus problem solving and its consequences for the artifact produced.]]></description>
            <content:encoded><![CDATA[
## What is Scenario Solving?

A scenario is typically just one narrow manifestation of a much broader problem.
Scenario solving is taking one narrow segment of a problem and solving it
without considering the scope or context of the problem itself.

## What is Problem Solving?

Problem-solving is considering the entire scope of the problem and implementing
a solution to it. A problem can be thought of as an aggregate of scenarios;
solving the problem resolves the majority of those scenarios at once.

## The Difference

Scenario solving results in a fragmented user experience, while problem-solving
results in useful primitives that can be composed to build features
significantly faster and more efficiently.
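
To make the contrast concrete, here is a hypothetical example (every name is invented): a scenario-level helper hard-codes one screen's needs, while a problem-level primitive composes across features.

```javascript
// Scenario solving: a one-off helper serving exactly one screen.
function formatNameForSettingsPage(user) {
  return user.lastName + ", " + user.firstName;
}

// Problem solving: a small primitive reused wherever names appear.
function formatName(user, separator) {
  return [user.lastName, user.firstName].join(separator);
}

var user = { firstName: "Scuba", lastName: "Steve" };
var settingsName = formatName(user, ", "); // same output as the one-off
var exportName = formatName(user, "|"); // a new feature, for free
```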
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Angular Cheat Sheet]]></title>
            <link>https://www.merrickchristensen.com/articles/angular-cheat-sheet</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/angular-cheat-sheet</guid>
            <pubDate>Sat, 18 May 2013 00:00:00 GMT</pubDate>
            <description><![CDATA[The arcane bits of Angular in the form of a simple cheat sheet.]]></description>
            <content:encoded><![CDATA[
### Update August 11, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; because I've not ensured its relevance
with the latest in Angular.js. I brought it back by
[request](https://twitter.com/ghettosoak/status/1027118531131916289).

<div dangerouslySetInnerHTML={{__html: `<h2>Directives</h2>
<h3>Restrict</h3>

<table>
  <thead>
    <tr>
      <th>Character</th>
      <th>Declaration Style</th>
      <th>Example</th>
    </tr>
  </thead>
  <tr>
    <td>E</td>
    <td>element</td>
    <td>&lt;hello to="world"&gt;&lt;/hello&gt;</td>
  </tr>
  <tr>
    <td>A</td>
    <td>attribute</td>
    <td>&lt;div hello="world"&gt;&lt;/div&gt;</td>
  </tr>
  <tr>
    <td>C</td>
    <td>class</td>
    <td>&lt;div class="hello:world"&gt;&lt;/div&gt;</td>
  </tr>
  <tr>
    <td>M</td>
    <td>comment</td>
    <td>&lt;!--directive:hello World --&gt;</td>
  </tr>
</table>

<h3>Scope</h3>

<table>
  <thead>
    <tr>
      <th>Scope Type</th>
      <th>Syntax</th>
      <th>Description</th>
    </tr>
  </thead>
  <tr>
    <td>existing scope</td>
    <td>scope: false (default)</td>
    <td>The existing scope for the directive's DOM element.</td>
  </tr>
  <tr>
    <td>new scope</td>
    <td>scope: true</td>
    <td>A new scope that inherits prototypically from your enclosing controller's scope. This scope will be shared with any other directive on your DOM element that requests this kind of scope and can be used to communicate with them.</td>
  </tr>
  <tr>
    <td>isolate scope</td>
    <td>scope: { attributeName: 'BINDING_STRATEGY' } or { attributeAlias: 'BINDING_STRATEGY' + 'attributeName' }</td>
    <td>An isolate scope that inherits no properties from the parent, you can however access the parent scope using $parent.</td>
  </tr>
</table>

<h3>Scope Binding Strategies</h3>

<table>
  <thead>
    <tr>
      <th>Symbol</th>
      <th>Meaning</th>
    </tr>
  </thead>
  <tr>
    <td>@</td>
    <td>
      Pass this attribute as a string. You can bind values to the parent scope
      using {{ interpolation }}.
    </td>
  </tr>
  <tr>
    <td>=</td>
    <td>Data bind this property to the directive's parent scope.</td>
  </tr>
  <tr>
    <td>&amp;</td>
    <td>
      Pass in a function from the parent scope to be called later. Used to pass
      around lazily evaluated angular expressions.
    </td>
  </tr>
</table>

<h3>Require</h3>

<table>
  <thead>
    <tr>
      <th>Option</th>
      <th>Usage</th>
    </tr>
  </thead>
  <tr>
    <td>directiveName</td>
    <td>A camel-cased name that specifies which directive the controller should come from. If our &lt;dialog-title&gt; directive needs to find a controller on its parent &lt;dialog&gt; we would write "dialog".</td>
  </tr>
  <tr>
    <td>^</td>
    <td>By default, Angular gets the controller from the named directive on the same element. Adding this symbol says to walk up the DOM to find the directive. For the dialog example we would need to add this symbol: "^dialog". This means: look up until you find the parent dialog directive and give me that controller.</td>
  </tr>
  <tr>
    <td>?</td>
    <td>Makes the required controller optional, otherwise Angular would throw an exception if it couldn't find it.</td>
  </tr>
</table>`}} />
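
As a concrete, hypothetical example ("hello" and its bindings are invented names), a directive definition combining the restrict, scope, and binding-strategy options from the tables above might look like:

```javascript
// A directive definition object using an isolate scope and the three
// binding strategies from the cheat sheet.
function helloDirective() {
  return {
    restrict: "EA", // usable as an element or an attribute
    scope: {
      to: "@", // pass the attribute through as a string
      model: "=", // two-way data bind to the parent scope
      onGreet: "&", // lazily evaluated parent-scope expression
    },
    template: "<h1>Hello {{ to }}</h1>",
  };
}

var definition = helloDirective();
```

It would be registered with something like `angular.module("app").directive("hello", helloDirective)`.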
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Source Modification With r.js]]></title>
            <link>https://www.merrickchristensen.com/articles/build-angular-with-requirejs</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/build-angular-with-requirejs</guid>
            <pubDate>Fri, 10 May 2013 00:00:00 GMT</pubDate>
            <description><![CDATA[An overview of source modification with r.js. Using ngmin and Angular.js as an example.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for irrelevance. I have migrated from
r.js to Webpack.

One of the great principles of [Require.js](http://requirejs.org/) is that you
shouldn't need a build step during development, you should simply be able to
refresh the browser and see your changes reflected. This does not mean, however,
that you should have to compromise on performance. Oftentimes it makes
sense to modify your application's source at build time for performance reasons.
This includes minification, annotation, inlining of text dependencies, or even
transpiling one source format to another. Require.js offers the
[r.js](http://requirejs.org/docs/optimization.html) build tool to optimize your
code for production as well as a robust plugin system. This article will focus
on the former, using
[Angular.js dependency injection annotations](/articles/javascript-dependency-injection.html)
and [ngmin](https://github.com/btford/ngmin) as an example.

## The Problem

In Angular.js, a dependency injection system resolves dependencies and provides
them at run time, matching each one by the name of the corresponding constructor
function argument. For example:

```javascript
angular.module("people").controller("MyCtrl", function ($scope, $http) {
  // $scope and $http were resolved by name and provided here.
});
```

> Please see the
> [JavaScript Dependency Injection](/articles/javascript-dependency-injection.html)
> article for a more detailed explanation.

This approach becomes problematic at the minification phase of a project because
when the argument names are mangled they can no longer be properly mapped to
dependencies.

```javascript
angular.module("people").controller("MyCtrl", function (a, b) {
  // WTF is a or b?
});
```

For the above reason Angular.js provides a build safe approach for declaring
dependencies which involves using strings to annotate dependencies.

```javascript
angular.module("people").controller("MyCtrl", [
  "$scope",
  "$http",
  function (a, b) {
    // Ok so a is $scope and b is $http.
  },
]);
```

This certainly works and is more akin to how we declare AMD dependencies, but
doing these annotations means we duplicate our dependency declarations once in
the array annotations and again in the function arguments. Worse, since we don't
have the ability to use something like the excellent
[CommonJS sugar](http://requirejs.org/docs/whyamd.html#sugar) Require.js
provides, we are forced to maintain two disparate lists of dependencies and
match them up using order instead of variable declarations.

Wouldn't it be great if we could use a tool to perform these annotations for us?
Enter [ngmin](https://github.com/btford/ngmin).

### ngmin

[ngmin](https://github.com/btford/ngmin) is a preprocessor which parses your
code for injectable constructor functions and annotates them automatically,
making your Angular.js code "build safe".

```bash
ngmin somefile.js somefile.annotate.js
```

This command would output "somefile.annotate.js" which would be an annotated
version of somefile.js.

> As a side note, ngmin also offers a
> [grunt task](https://github.com/btford/grunt-ngmin) and a
> [Rails Asset Pipeline plugin](http://rubygems.org/gems/ngmin-rails).

Using ngmin is all well and good, but it adds a step of complexity to every
build we perform. A developer needs to run a concatenator (or dependency
tracer), ngmin, and then the minifier, all of this before or after other
application-specific build tools. To make things worse, order matters in many of
these cases, so running different tasks in parallel becomes difficult.

Enter [r.js](http://requirejs.org/docs/optimization.html).

### r.js

r.js is the de facto build tool for AMD-driven projects and thanks to its
extensible callbacks we can perform source modification using things like ngmin.
This way developers will run "r.js" causing concatenation, annotation and
minification to be taken care of by a single system. This helps reduce
complexity in a build system by decreasing the number of cognitive steps to one
instead of three.

## Solution

r.js offers an excellent build hook, "onBuildRead", which is invoked for each
module; the return value of this hook is used for the built file prior to
minification. For performance reasons r.js will only invoke this on your bundled
modules by default. I recommend setting "normalizeDirDefines" to "all" which
means these modifications will be run on all files, not just your bundled
modules. The reason I make this recommendation is because I believe you should
run your unit tests after the build process and since unit tests are executed
against individual modules you will need your source modifications to run
against those as well. It is important to remember that tools like UglifyJS,
r.js or ngmin aren't flawless.

```javascript
({
  dir: "javascripts-built",
  baseUrl: "javascripts",
  modules: [
    {
      name: "MyApplication",
    },
  ],
  normalizeDirDefines: "all",
  onBuildRead: function (moduleName, path, contents) {
    return require("ngmin").annotate(contents);
  },
});
```

Now all of "MyApplication" and its child modules will be run through ngmin, and
minified afterwards. This means we can unit test those children as well. The
combination of "onBuildRead" and "normalizeDirDefines" empowers us to perform
testable source modification at build time.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[MV* Frameworks and Libraries]]></title>
            <link>https://www.merrickchristensen.com/articles/mvstar-libraries-and-frameworks</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/mvstar-libraries-and-frameworks</guid>
            <pubDate>Sun, 25 Nov 2012 00:00:00 GMT</pubDate>
            <description><![CDATA[JavaScript can get messy quickly, we tame this spaghetti monster using libraries and frameworks that implement battle tested design patterns for more structure and maintainability.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; because I've since moved to different
paradigms completely such as [Single State Atom](/articles/single-state-tree/),
[GraphQL](https://graphql.org/) & [React](https://reactjs.org/). While MV\* was
a dream on the backend it really broke down for me on the client.

So, we just explored [DOM Libraries](/articles/learn-js/dom-libraries.html) in
our last article, which do a whole world of good when it comes to abstracting
browser bugs and leveling the playing field. Certainly tools like
[jQuery](http://jquery.com) are a powerful weapon in our arsenal for conquering
the JavaScript landscape. It can be so powerful, in fact, people can unknowingly
abuse it. Turning innocent interfaces into crimes against animation, and
bludgeoning the separation of concerns with a round house kick to the teeth of
maintainability. Let me show you the problem as it unfolds on so many websites
and applications today, then I will walk you through some of the solutions we
have at our disposal.

## The DOM! The Trap! The Humanity!

A lot of times in our application we have the need for more interesting and
complex user interface components. Historically speaking we often called these
"progressive enhancements", which in fact they often are. Sometimes they are
less enhancements than requirements for our user interface to make any
sense. When you get a powerful tool like jQuery into your hands your inclination
is to take a boring DOM structure and spice it up with a little jQuery romance.
Let me show you what I mean...

### \$.fn.abuse

jQuery has a wonderful plugin system which allows you to add methods to the
jQuery object to provide more functionality to your selected elements. The
trouble is in the lack of discipline and structure developers employ to
construct these plugins. Oftentimes plugins will do so many things all in one
giant callback. Traditional Object Oriented programming patterns are thrown to
the dogs while developers struggle to take a document composed of HTML and make
it something incredible like users are growing to expect.

Take for example this element, we really want to make it do something cool.

```html
<div class="some-element" title="WTHeck Man?"><h2 class="title"></h2></div>
```

So we implement our plugin, whatever it may be.

```javascript
(function ($) {
  $.fn.abuse = function () {
    // Set some markup from DOM stored state
    this.find(".title").html(this.attr("title"));

    // Callback from AJAX to see if the user should receive their highfive.
    var highFive = function (response) {
      if (response.highfive === true) {
        $("<div>High five holmes!</div>").appendTo(document.body);
      }
    };

    // Check the time...
    var time = Date.now();

    // Inform the server the time which informs us to highfive or not.
    $.ajax({
      url: "stupid/request",
      data: {
        time: time,
      },
      type: "POST",
      dataType: "json",
    }).then(highFive);
  };
})(jQuery);
```

Then we select our element and instantiate our plugin. If you are wondering how
this works, \$.fn is simply a pointer to jQuery's prototype. (If you don't know
what that means read up on prototypal inheritance.
[This](http://yehudakatz.com/2011/08/12/understanding-prototypes-in-javascript/)
is a great article to start with.) And "this" is the element you are calling the
plugin on.

```javascript
$(".some-element").abuse();
```

This approach is riddled with problems. Look at all the different types of work
we did in the context of our abuse plugin. What if we needed to change the way
all of our AJAX responses get processed? Why would the view need to know
anything about the server in the first place? Why does this plugin have the
power to reach outside of its own element? What if the server wanted to
schedule the plugin to high five in a few seconds? Should we put all that logic
in our plugin too?

As with all types of programming, separating our concerns is fundamental to
maintainability. This is as true in JavaScript as it is anywhere else; imagine
if the next implementation of your web services included all of your database
queries, logging, routing, caching, etc. in a single class. Yikes!

Obviously the \$.fn.abuse plugin could be refactored into smaller functions but
that still isn't enough. Even the jQuery UI project has tools to help write
robust and separated components beyond simple plugins... Look into the
[Widget Factory](http://api.jqueryui.com/jQuery.widget/) which provides an
inheritance model and improved structure. It is a serious problem that grows
with your project. Eventually plugins are fighting over DOM elements and sharing
state, and your pre-JavaScript markup looks nothing like your inspectable
source. It's a tearful road to travel.

This is not to say interacting with the document to create nifty new user
interface components is _bad_, I am just trying to point out that we need more!
We need a layer to interact with our data, one that can handle validation and
syncing to the server. We need a view that is agnostic to concerns except
presentation. We need a way to keep our views and our models in sync so we can
stop the madness of trying to synchronize them ourselves. I am having a difficult
time articulating how and why a project grows unmaintainable when you construct
everything using the DOM. If you've worked on a project of any scale odds are
you're already nodding your head saying, "Oh man, I know about this spaghetti
mess already. You are preaching to the choir son, how do I tame it?"

## The Structure! The Convention! The Beauty!

If you haven't already please review some of the structural patterns
[Model-View-Controller](http://st-www.cs.illinois.edu/users/smarch/st-docs/mvc.html)
and the work done by [Addy Osmani](http://addyosmani.com/blog). JavaScript MV\*
or Model-View-? frameworks provide much needed structure and convention to web
application development.

This article is not to explain MVC or other structural patterns. It's not to
condemn using jQuery or other DOM manipulation libraries either. I am simply
proposing those libraries have a purpose and we should keep them focused on
doing what they do best, DOM manipulation and browser normalization! It is
simply to spread the good news, these patterns fit in the JavaScript landscape
just like they do in C++, Java, Objective-C or any other language! All those
great patterns and principles we learned building native UIs are applicable in
building JavaScript UIs as well! We can take our abuse plugin and make some sense
of it... This example leverages [Backbone.js](http://backbonejs.org). Ultimately
we should refactor the DOM construction to a template library, but we will save
that for another article.

```javascript
// Create a model to store our data and application logic
var State = Backbone.Model.extend({
  // Defaults as a function so each instance gets a fresh timestamp.
  defaults: function () {
    return {
      time: Date.now(),
      highfive: false,
    };
  },

  url: "stupid/request",
});

// Create a view to *declaratively* represent our state.
var SomeView = Backbone.View.extend({
  initialize: function () {
    this.model.on("change:highfive", this.highFive, this);
  },

  render: function () {
    this.$(".title").html(this.$el.attr("title"));
  },

  highFive: function () {
    $("<div>High five holmes!</div>").appendTo(document.body);
  },
});

// Instantiate our view.
new SomeView({
  el: ".some-element",
});
```

See how we are able to take this horrible example and refactor it so we have a
separation of concerns? SomeView is now responsible for the DOM manipulation,
and State is responsible for all of our data management. There are so many wins
to this kind of approach. Keeping them in sync is a breeze, maintainability and
testability go way up because our code is separated into focused pieces. Reusing
and extending components becomes easy. When leveraging an approach like this,
constructing all of your markup in a browser becomes feasible, even sensible. We
can write and maintain complex user interfaces that provide desktop class
interaction. We can reuse code in new ways because it isn't coupled to a
particular DOM structure, we can connect our data with different views. We can
do all sorts of great things you didn't think JavaScript could!
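The core mechanism behind that "keeping them in sync is a breeze" claim is the
observer pattern: the view subscribes to model changes rather than the model
knowing anything about the DOM. Stripped of Backbone entirely, the idea boils
down to something like this (a minimal sketch for illustration, not Backbone's
actual implementation):

```javascript
// A tiny observable model (illustrative only, not Backbone's API).
function Model(attributes) {
  this.attributes = attributes || {};
  this.listeners = {};
}

Model.prototype.on = function (event, callback) {
  (this.listeners[event] = this.listeners[event] || []).push(callback);
};

Model.prototype.set = function (key, value) {
  this.attributes[key] = value;
  // Notify anyone listening for a change to this attribute.
  (this.listeners["change:" + key] || []).forEach(function (callback) {
    callback(value);
  });
};

var state = new Model({ highfive: false });
var rendered = [];

// The "view" reacts to data changes; it never talks to the server.
state.on("change:highfive", function (value) {
  if (value) {
    rendered.push("High five holmes!");
  }
});

state.set("highfive", true);
console.log(rendered); // ["High five holmes!"]
```

Backbone's Model and View layer syncing, validation, and richer eventing on top
of this same core idea.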

The JavaScript landscape has a robust collection of libraries for you to pick
from. Almost all of these libraries build on top of existing DOM libraries like
jQuery or work nicely with them. Nearly every one of these libraries provides
some notion of a Model and a View. Though their concepts and approaches are
different they all provide great ideas and tools towards writing more
maintainable web applications.

### A Summary of Libraries

1.  [Backbone.js](http://backbonejs.org/) The most prolific MV\* library for
    JavaScript, lightweight and un-opinionated. Provides a great starting point
    for little cost. There is a large ecosystem of frameworks built on top of
    Backbone.js:

    - [MarionetteJS](http://marionettejs.com/) A composite application library
      for Backbone.js
    - [Chaplin](https://github.com/chaplinjs/chaplin) An application
      architecture for Backbone.js
    - [Thorax](http://walmartlabs.github.com/thorax/) An opinionated Backbone
      application framework.

2.  [Ember.js](http://emberjs.com/) An excellent full featured MVC framework for
    creating ambitious web applications.
3.  [AngularJS](http://angularjs.org/) An awesome structural framework from
    Google providing an end-to-end solution for building web applications.
    Beautiful dependency injection and testability.
4.  [CanJS](http://canjs.us/) Similar to Backbone.js, formerly known as
    JavaScriptMVC.
5.  [Knockout](http://knockoutjs.com/) An MVVM framework for JavaScript. Used
    heavily by the C# community.
6.  [Spine](http://spinejs.com/) Similar to Backbone.js though leans closer
    towards MVC, primarily used with CoffeeScript.
7.  [Batman.js](http://batmanjs.org/) A full stack framework from Shopify, lots
    of useful convention.
8.  [Meteor](http://meteor.com/) A promising new platform to construct rich
    JavaScript applications. Not exactly an MV\* framework but worth a mention.

### A Note

There are a plethora of MV\* frameworks for you to pick from.
[TodoMVC](http://todomvc.com/) is a great project to help you select the right
one.

## Coming Soon

Next we will talk about template libraries, giving us the power to move all our
application logic to the client by constructing our markup in the web browser.
After MV\* libraries it's the next logical step, right?
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Introducing StyleManager]]></title>
            <link>https://www.merrickchristensen.com/articles/introducing-stylemanager</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/introducing-stylemanager</guid>
            <pubDate>Sun, 11 Nov 2012 00:00:00 GMT</pubDate>
            <description><![CDATA[Manage your styles with JavaScript. A low level tool to manage CSS in the browser. Support for loading StyleSheets as AMD dependencies included.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; because, while the idea was on point,
[CSS Modules](https://github.com/css-modules/css-modules) and other modern tools
have displaced it.

Introducing StyleManager, a JavaScript library to manage styles in the browser.
StyleManager works by creating style tags in the browser and providing an API to
interact with them. StyleManager is intended to be used as a build
target or leveraged using the provided AMD plugin.
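Under the hood the technique is straightforward: create a style element and
append CSS text to it at runtime. A minimal sketch of the idea (illustrative
only, not StyleManager's actual API):

```javascript
// Inject a block of CSS into the document at runtime (illustrative sketch).
function injectStyles(css) {
  var tag = document.createElement("style");
  tag.appendChild(document.createTextNode(css));
  document.head.appendChild(tag);
  return tag; // Keep a reference so the styles can be removed later.
}
```

StyleManager layers an API on top of this primitive: registering named blocks
of CSS, removing them, and loading them as AMD dependencies.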

One of the biggest problems in web application development is CSS. While CSS is
strides ahead of markup-driven styles, it is still under-featured for
desktop-class applications. The language is rigid and difficult to maintain. In
fact, your only real defense for maintaining CSS properly is a lot of discipline and
perhaps a CSS pre-processor. As an application grows, the separation of your
stylesheets and your features becomes blurry. The trouble is that a great deal
of JavaScript driven components simply don't work without their corresponding
styles and due to the separation of concerns we still load our stylesheets
entirely separately from their corresponding features. We lazily load features in
JavaScript, but load their styles up front. We specify our JavaScript
dependencies, but not their corresponding stylesheets that are required to
function properly. Why?

StyleManager aims to empower developers to control that fine line between shared
styles and feature specific styles. It is my contention that feature specific
styles should be loaded just like the feature's other dependencies. StyleManager
is also a great build target for those who would like to compile their
stylesheets and templates to a JavaScript consumable target.

## Get It

[StyleManager on Github](https://github.com/iammerrick/StyleManager)

## An Example

Below you will find two examples, one using the StyleManager AMD css! plugin and
the other using StyleManager as a build target.

### AMD Plugin

```javascript
define(["css!dialog.css", "ui/View"], function (styles, View) {
  // The contents of dialog.css will be added to the page
  // by the time the module definition function is called
  // you can interact with the stylesheets using the
  // styles parameter
  return View.extend({
    render: function () {
      // ...
    },
  });
});
```

### StyleManager - As a build target.

StyleManager is intended to be leveraged as a build target or using the AMD
plugin. You could grab the contents of some-feature.css as it corresponds to
some-feature.handlebars and precompile both accordingly.

```javascript
define(["handlebars", "StyleManager"], function (Handlebars, StyleManager) {
  var sm = new StyleManager("some-feature.css");

  sm.register(
    "feature",
    "/*some-feature.css contents*/ .dialog { position: absolute; }"
  );

  var template = Handlebars.precompile(
    "/* some-feature.handlebars contents */"
  );

  return template;
});
```
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[JavaScript Dependency Injection]]></title>
            <link>https://www.merrickchristensen.com/articles/javascript-dependency-injection</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/javascript-dependency-injection</guid>
            <pubDate>Wed, 03 Oct 2012 00:00:00 GMT</pubDate>
            <description><![CDATA[A quick explanation of how the AngularJS dependency injector works, and how you could write your own simplified JavaScript dependency injection library.]]></description>
            <content:encoded><![CDATA[
[Inversion of control](http://en.wikipedia.org/wiki/Inversion_of_control) and
more specifically
[dependency injection](http://en.wikipedia.org/wiki/Dependency_injection) have
been growing in popularity in the JavaScript landscape thanks to projects like
[Require.js](http://requirejs.org/) and [AngularJS](http://angularjs.org/). This
article is a brief introduction to dependency injection and how it fits into
JavaScript. It will also demystify the elegant way AngularJS implements
dependency injection.

## Dependency Injection In JavaScript

Dependency injection facilitates better testing by allowing us to mock
dependencies in testing environments so that we only test one thing at a time.
It also enables us to write more maintainable code by decoupling our objects
from their implementations.

With dependency injection, your dependencies are given to your object instead of
your object creating or explicitly referencing them. This means the dependency
injector can provide a different dependency based on the context of the
situation. For example, in your tests, it might pass a fake version of your
services API that doesn't make requests but returns static objects instead,
while in production it provides the actual services API.
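That testing scenario can be sketched in a few lines; the names here
(`WelcomeView`, `fetchUser`, the two API objects) are invented for
illustration:

```javascript
// Hand-rolled dependency injection: the consumer receives its dependency
// and never knows which implementation it got.
function WelcomeView(api) {
  return api.fetchUser(42);
}

var realApi = {
  fetchUser: function (id) {
    // In production this would make a request to the services API.
    return "user-" + id + " (from the server)";
  },
};

var fakeApi = {
  fetchUser: function (id) {
    // In tests we return a static object, no network involved.
    return "user-" + id + " (static fixture)";
  },
};

console.log(WelcomeView(fakeApi)); // user-42 (static fixture)
console.log(WelcomeView(realApi)); // user-42 (from the server)
```

A dependency injector automates exactly this handoff, deciding which
implementation to pass based on context.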

Another example could be to pass [ZeptoJS](http://zeptojs.com/) to your view
objects when the device is running [Webkit](http://www.webkit.org/) instead of
[jQuery](http://jquery.com/) to improve performance.

The main benefits experienced by adopting dependency injection are as follows:

1.  Code tends to be more maintainable.
2.  APIs are more elegant and abstract.
3.  Code is easier to test.
4.  The code is more modular and reusable.
5.  Cures cancer. (Not entirely true.)

Holding dependencies to an API-based contract becomes a natural process. Coding
to interfaces is nothing new; the server-side world has been battle testing this
idea for a long time to the extent that the languages themselves implement the
concept of interfaces. In JavaScript, we have to force ourselves to do this.
Fortunately, dependency injection and module systems are a welcome friend.

Now that you have some idea of what dependency injection is, let's take a look
at how to build a simple implementation of a dependency injector using
[AngularJS style dependency injection](http://docs.angularjs.org/guide/di) as a
reference implementation. This implementation is purely for didactic purposes.

## AngularJS Style Injection

AngularJS is one of the only front-end JavaScript frameworks that fully adopts
dependency injection right down to the core of the framework. To a lot of
developers, the way dependency injection is implemented in AngularJS looks
like complete magic.

When creating controllers in AngularJS, the arguments are dependency names that
will be injected into your controller. The argument names are the key here, they
are leveraged to map a dependency name to an actual dependency. Yeah, the word
"key" was used on purpose, you will see why.

```javascript
/* Injected */
var WelcomeController = function (Greeter) {
  /** I want a different Greeter injected dynamically. **/
  Greeter.greet();
};
```

## Basic Requirements

Let's explore some of the requirements to make this function work as expected.

1.  The dependency container needs to know that this function wants to be
    processed. In the AngularJS world that is done through the Application
    object and the declarative HTML bindings. In our world, we will explicitly
    ask our injector to process a function.

2.  It needs to know what a Greeter is before it can inject one.

### Requirement 1: Making the injector aware.

To make our dependency injector aware of our WelcomeController we will simply
tell our injector we want a function processed. It's important to know AngularJS
ultimately does this same thing just using less obvious mechanisms whether that
be the Application object or the HTML declarations.

```javascript
var Injector = {
  process: function (target) {
    // Time to process
  },
};

Injector.process(WelcomeController);
```

Ok, now that the Injector has the opportunity to process the WelcomeController
we can figure out what dependencies the function wants, and execute it with the
proper dependencies. This process is called dependency resolution. Before we can
do that we need a way to register dependencies with our Injector object...

### Requirement 2: Registering dependencies

We need to be able to tell the dependency injector what a `Greeter` is before it
can provide one. Any dependency injector worth its bits will allow you to
describe _how_ it is provided. Whether that means being instantiated as a new
object or returning a singleton. Most injection frameworks even have mechanisms
to provide a constructor some configuration and register multiple dependencies
by the same name. Since our dependency injector is just a simplified way to show
how AngularJS does dependency mapping using parameter names, we won't worry
about any of that.
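For a flavor of what "any of that" looks like, here is a sketch of the
singleton-versus-factory distinction a fuller injector might offer (the
`container` object and its method names are invented for illustration):

```javascript
// Illustrative only: two registration styles found in richer injectors.
var container = {
  singletons: {},
  factories: {},

  registerSingleton: function (name, instance) {
    this.singletons[name] = instance;
  },

  registerFactory: function (name, factory) {
    this.factories[name] = factory;
  },

  resolve: function (name) {
    if (name in this.singletons) {
      return this.singletons[name]; // Same object every time.
    }
    if (name in this.factories) {
      return this.factories[name](); // Fresh object every time.
    }
  },
};

container.registerSingleton("config", { env: "production" });
container.registerFactory("greeter", function () {
  return { greet: function () {} };
});

console.log(container.resolve("config") === container.resolve("config")); // true
console.log(container.resolve("greeter") === container.resolve("greeter")); // false
```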

Without further excuses, our simple register function:

```javascript
Injector.dependencies = {};

Injector.register = function (name, dependency) {
  this.dependencies[name] = dependency;
};
```

All we do is store our dependency by name so the injector knows what to provide
when certain dependencies are requested. Let's go ahead and register an
implementation of Greeter.

```javascript
var RobotGreeter = {
  greet: function () {
    return "Domo Arigato";
  },
};

Injector.register("Greeter", RobotGreeter);
```

Now our injector knows what to provide when `Greeter` is specified as a
dependency.

## Moving Forward

The building blocks are in place; it's time for the sweet part of this article.
The reason I wanted to post this article in the first place, the nutrients, the
punch line, the hook: calling toString() with some sweet reflection. This is
where the magic is. In JavaScript, we don't have to execute a function
immediately. The trick is to call toString on your function, which returns the
function as a string. This gives us a chance to preprocess our functions as
strings and turn them back into functions using the Function constructor, or
just execute them with the proper parameters after doing some reflection. The
latter is exactly what we will do here.

### toString Returns Winning

```javascript
var WelcomeController = function (Greeter) {
  Greeter.greet();
};

// Returns the function as a string.
var processable = WelcomeController.toString();
```
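Concretely, `processable` now holds the source text of the function (engines
may format it slightly differently):

```javascript
var WelcomeController = function (Greeter) {
  Greeter.greet();
};

var processable = WelcomeController.toString();
console.log(processable);
// Prints something close to:
// function (Greeter) {
//   Greeter.greet();
// }
```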

You can try it on your console!

![Function toString Example](/assets/images/articles/javascript-dependency-injection/processable.png "Function toString Example")

Now that we have the WelcomeController as a string we can do some reflection to
figure out which dependencies to inject.

### Dependency Checking

It's time to implement the process method of our Injector. First, let's take a
look at
[injector.js](https://github.com/angular/angular.js/blob/master/src/auto/injector.js)
from Angular. You'll notice the reflection starts on
[line 54](https://github.com/angular/angular.js/blob/master/src/auto/injector.js#L54)
and leverages a few regular expressions to parse the function. Let's take a look
at the regular expression, shall we?

```javascript
var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
```

The `FN_ARGS` regular expression is used to select everything inside the
parentheses of a function definition. In other words, the parameters of a
function; in our case, the dependency list.

```javascript
var args = WelcomeController.toString().match(FN_ARGS)[1];
console.log(args); // Returns Greeter
```

Pretty neat, right? We have now parsed out the WelcomeController's dependency
list in our Injector _prior_ to executing the WelcomeController function!
Suppose the WelcomeController had multiple dependencies. That isn't terribly
problematic either: we can split the argument list on commas and trim the
whitespace!

```javascript
var MultipleDependenciesController = function (Greeter, OtherDependency) {
  // Implementation of MultipleDependenciesController
};

var args = MultipleDependenciesController.toString()
  .match(FN_ARGS)[1]
  .split(",")
  .map(function (name) {
    return name.trim();
  });

console.log(args); // Returns ['Greeter', 'OtherDependency']
```

The rest is pretty straightforward, we just grab the requested dependency by
name from our `dependencies` cache and call the target function passing the
requested dependencies as arguments. Let's implement the function that maps our
array of dependency names to their dependencies:

```javascript
Injector.getDependencies = function (arr) {
  var self = this;
  return arr.map(function (dependencyName) {
    return self.dependencies[dependencyName];
  });
};
```

The `getDependencies` method takes the array of dependency names and maps it to
a corresponding array of actual dependencies. If this map function is foreign to
you check out the
[Array.prototype.map documentation](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/map).

Now that we have implemented our dependency resolver we can head back over to
our `process` method and execute the target function with its proper
dependencies.

```javascript
target.apply(target, this.getDependencies(args));
```

Pretty awesome, right?

### Injector.js

```javascript
var Injector = {
  dependencies: {},

  process: function (target) {
    var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
    var text = target.toString();
    var args = text
      .match(FN_ARGS)[1]
      .split(",")
      .map(function (name) {
        return name.trim();
      });

    target.apply(target, this.getDependencies(args));
  },

  getDependencies: function (arr) {
    var self = this;
    return arr.map(function (value) {
      return self.dependencies[value];
    });
  },

  register: function (name, dependency) {
    this.dependencies[name] = dependency;
  },
};
```
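Putting it all together, usage looks like this (the Injector is repeated here,
with the dependency names trimmed of whitespace, so the snippet stands alone):

```javascript
var Injector = {
  dependencies: {},

  process: function (target) {
    var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
    var text = target.toString();
    // Trim each name so "a, b" resolves the same as "a,b".
    var args = text
      .match(FN_ARGS)[1]
      .split(",")
      .map(function (name) {
        return name.trim();
      });

    target.apply(target, this.getDependencies(args));
  },

  getDependencies: function (arr) {
    var self = this;
    return arr.map(function (value) {
      return self.dependencies[value];
    });
  },

  register: function (name, dependency) {
    this.dependencies[name] = dependency;
  },
};

// Register an implementation, then let the injector resolve it by name.
Injector.register("Greeter", {
  greet: function () {
    return "Domo Arigato";
  },
});

Injector.process(function (Greeter) {
  console.log(Greeter.greet()); // Domo Arigato
});
```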

### Example & Excuses

You can see the functioning injector we created in this
[example](https://jsfiddle.net/nMK6j/) on jsFiddle.

<iframe
  style={{ width: "100%", height: 500 }}
  src="https://jsfiddle.net/nMK6j/1/embedded/"
  allowFullScreen="allowfullscreen"
  frameBorder="0"
></iframe>

This contrived example is not something you would use in an actual codebase; it
was simply created to demonstrate the rich functionality JavaScript provides and
to explain how AngularJS provides dependency injection. If this interests you, I
highly recommend reviewing their code further. It's important to note this
approach is not novel. Other projects use toString to preprocess code; for
example [Require.js](https://requirejs.org) uses a similar approach to parse and
transpile CommonJS style modules to AMD style modules.

I hope you found this article enlightening and continue to explore dependency
injection and how it applies to the client side world.

I really think there is something special brewing here.
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Learn JavaScript - DOM Libraries]]></title>
            <link>https://www.merrickchristensen.com/articles/dom-libraries</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/dom-libraries</guid>
            <pubDate>Sun, 16 Sep 2012 00:00:00 GMT</pubDate>
            <description><![CDATA[Each browser's DOM is unique. Either use an abstraction layer, or perish.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for irrelevance. These days, evergreen
browsers have drastically improved the situation and we are targeting
higher-level abstractions.

The primary focus of this post is to discuss DOM libraries and why they exist.
To fully answer the question, "What is a DOM library and why do I need one?", we
will first need to uncover some of JavaScript's dark and beautiful secrets and
put them out on the table.

In the land of JavaScript we have many players, each of which implements its
platform slightly, and in some cases drastically, differently than the others.
For example, we have standard web browsers like Chrome, Firefox, and Internet
Explorer. We also have server platforms such as Node.js and Rhino, mobile
platforms like Boot2Gecko, and even more targeted platforms with the ability to
program robots. The most dominant and most used JavaScript platform is the web
browser, but unfortunately even browsers are not implemented the same. The sole
purpose of most DOM libraries is to mitigate and abstract these differences so
you can program against a consistent interface. In other words, write your code
once and have it work across multiple platforms.

This problem isn't new; nearly all developers are familiar with the pain of
supporting multiple platforms. Whether you are leveraging macros in the C
language to compile to OS-specific code or using Java for the "comfort" of a
virtual machine, the problem is relatively the same. JavaScript is no
different.

We have plenty of platforms to support, and despite the great efforts of groups
trying to set standards, differences and bugs still get pushed out into the
market. Innocent consumers use these platforms to the great pain of many
developers who are forced to support them until the offending platform dies.
Rest in pieces, IE6. However, out of all the fragmentation and differences that
exist, few things have caused more headaches, late nights, and tears than the
Document Object Model (DOM).

## What is the DOM?

"The Document Object Model is an API for HTML and XML documents. It provides a
structural representation of the document, enabling you to modify its content
and visual presentation. Essentially, it connects web pages to scripts or
programming languages." - Mozilla Developer Network

The Document Object Model is standardized by the
[W3C](http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html) and it defines
an interface that abstracts HTML and XML documents as objects. The DOM provides
us with a document structure, a tree model, an event architecture,
DocumentFragments, Elements, and even more process-heavy behaviors such as DOM
traversal.

HyperText Markup Language, or HTML, is the web's markup language, and it is
specified in terms of the DOM. The HTML DOM includes things like the
`className` property on HTML elements and APIs like `document.body`.

If you would like to learn more about the DOM, I can't recommend the
[DOM Section](https://developer.mozilla.org/en-US/docs/DOM) on the
[Mozilla Developer Network](https://developer.mozilla.org/en-US/) enough.

## So what is the problem?

Remember our talks about fragmentation? Yeah, that applies here. Each browser
has a subtly different implementation of the DOM. Honestly, in some cases, they
aren't terribly subtle at all. In fact, nearly every DOM method is broken in
some way or another in some browser. Since the DOM is what connects web pages to
programming languages, our program will have to handle all the different use
cases to support each browser.

### Example 1. getElementsByTagName

This is a very commonly used DOM method that grabs all elements with a given
tag name. Given an HTML document that looks like this:

```html
<ul>
  <li>Name: Merrick</li>
  <li id="length">Height: 6' Tall</li>
</ul>
```

If I call this:

```javascript
var els = document.getElementsByTagName("li");
```

I now have a collection of `<li>` tags selected. Now let's count the number of
elements that were selected; we use the `length` property for that.

```javascript
console.log(els.length); // Returns 2
```

Works great! Unfortunately, in Internet Explorer the `length` property will get
overwritten because our example above includes an element with an
`id="length"`. :(
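A defensive workaround from that era was to count by indexed access instead of trusting `length`. This is an illustrative sketch, demonstrated against a plain array-like object since the shadowing bug itself only reproduces in old IE:

```javascript
// Count entries by walking numeric indexes, never reading `length`,
// since an element with id="length" can shadow that property in old IE.
function countNodes(nodeList) {
  var count = 0;
  // `!= null` catches both undefined and null past the last entry.
  while (nodeList[count] != null) {
    count++;
  }
  return count;
}

// A plain array-like object standing in for the broken NodeList,
// where `length` has been shadowed by the id="length" element.
var fakeNodeList = {
  0: "<li>Name: Merrick</li>",
  1: "<li id='length'>Height: 6' Tall</li>",
  length: "<li id='length'>Height: 6' Tall</li>",
};
console.log(countNodes(fakeNodeList)); // → 2
```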

### Example 2. querySelectorAll

The `querySelectorAll` method finds DOM elements using CSS selectors.
Unfortunately, the method doesn't even exist in IE 8's quirks mode, and id
selectors don't match at all in XML documents.

These two DOM methods I've just shown also happen to be among the most popular,
and both contain very serious problems when used in different browsers. When
working with other DOM features, be prepared to find other obvious
inconsistencies, especially with events and AJAX requests. To solve this
problem we need an abstraction layer: something our program can talk to that
mitigates these issues and handles the browser-specific bugs for us.
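The shape of such an abstraction layer can be sketched in a few lines. This is a hypothetical `select` helper, not code from any real library, shown against stub "documents" so the feature-detection branching is visible:

```javascript
// A minimal abstraction sketch: calling code always uses select(),
// and the per-browser branching stays hidden inside it.
function select(selector, doc) {
  if (doc.querySelectorAll) {
    return doc.querySelectorAll(selector);
  }
  // Fallback for engines without querySelectorAll; a real library
  // would parse the selector and walk the DOM tree here.
  return doc.getElementsByTagName(selector);
}

// Stub documents standing in for a modern and a legacy browser.
var modernDoc = {
  querySelectorAll: function (s) { return ["via querySelectorAll: " + s]; },
  getElementsByTagName: function (s) { return ["via getElementsByTagName: " + s]; },
};
var legacyDoc = {
  getElementsByTagName: function (s) { return ["via getElementsByTagName: " + s]; },
};

console.log(select("li", modernDoc)[0]); // → "via querySelectorAll: li"
console.log(select("li", legacyDoc)[0]); // → "via getElementsByTagName: li"
```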

## Introducing, DOM Libraries

DOM libraries are the solution to this problem. They offer a consistent API for
interacting with the DOM that works cross-browser. It's safe to say that this
comes as a very welcome relief! Let's revisit the examples above: using the
popular library jQuery, we can select elements and output the length across all
browsers with great ease.

```javascript
console.log($("li").length); // Returns 2 everywhere!
```

Most DOM libraries include abstractions for AJAX requests, DOM selection,
traversal, and manipulation (like CSS and attributes), as well as event
implementations like click.

_DOM libraries are a key tool for fixing cross-browser incompatibilities and
bugs._

### A Summary of Libraries

1.  [jQuery](http://jquery.com) The most popular DOM library and a very safe
    choice with a strong community.
2.  [Dojo Toolkit](http://dojotoolkit.org/) An excellent, well supported
    library that offers a lot of excellent utility even beyond the DOM.
3.  [YUI](http://yuilibrary.com/) Yahoo's DOM library.
4.  [Prototype](http://prototypejs.org/) One of the founding fathers of the
    JavaScript library movement.
5.  [MooTools](http://mootools.net/) A beautifully designed API similar to
    Prototype.
6.  [Zepto](http://zeptojs.com/) A WebKit-specific subset implementation of the
    jQuery API, useful when smaller downloads are important and you don't need
    to support other browsers.
7.  [Ender](http://ender.no.de/) The idea here is that you compose your own
    library from a series of micro libraries with a binding layer.

There are many, many more DOM libraries to choose from, but the important thing
to remember is that they are all trying to solve the same problem: providing
you with a consistent, cross-browser API for common web development tasks like
DOM manipulation, AJAX requests, and event management.

## Moving Forward - MV\* Frameworks

Next we will talk about the spaghetti code that can occur when you lean too
heavily on a DOM library like jQuery, and a whole new set of tooling to help
solve that problem. Move on to the next article,
[MV\* Libraries and Frameworks](/articles/learn-js/mvstar-libraries-and-frameworks.html).
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
        <item>
            <title><![CDATA[Learn JavaScript]]></title>
            <link>https://www.merrickchristensen.com/articles/learn-js</link>
            <guid isPermaLink="false">https://www.merrickchristensen.com/articles/learn-js</guid>
            <pubDate>Sun, 16 Sep 2012 00:00:00 GMT</pubDate>
            <description><![CDATA[An introduction to a new series of posts that will attempt to explain the JavaScript landscape.]]></description>
            <content:encoded><![CDATA[
### Update June 7, 2018 - Hall of Shame

This article is a Hall of Shamer&trade; for irrelevance.

Hello there! Hopefully you are here because you want to learn JavaScript. Odds
are you have come here from another language and want to punch all of us
hipsters in the neck for naming our projects after things that do not even sound
vaguely related to what they actually do, you know...
[Backbone.js](http://backbonejs.org/), [Express](http://expressjs.com/),
[Mustache](http://mustache.github.com/), etc. I imagine diving into JavaScript,
with its plethora of transpilers, absurd number of frameworks and libraries,
awkward language decisions, and multiple environments, is a daunting task.

My aim in this series of posts is to give you an idea of the different types of
problems we face as JavaScript developers, why we have so many libraries and
frameworks, and also to give you a brief overview of some of the options that
exist. It is my hope that when this is all over you can walk away with an
understanding of some of the projects you may have already heard of, projects
such as [CoffeeScript](http://coffeescript.org/).

Alright, as far as introductions go, that's all I got. Now head over to the
first post, [The DOM Problem](dom-libraries.html).
]]></content:encoded>
            <author>Merrick Christensen</author>
        </item>
    </channel>
</rss>