What Audio Engineering Could Learn from Software Development

Woe recorded two songs a few weeks ago. They’re planned for release later this year, medium TBD. Like the last album but unlike the two before that, it was not recorded by me and will be mixed elsewhere, which makes its chances of sounding good significantly higher than if I had been completely in charge, and the likelihood of my emerging with some sanity greater still. It’s cool.

Something that’s not cool is how primitive aspects of the modern digital recording process still feel. It’s not like I had any illusions about this – I still read about things and record my own demos, so I haven’t completely given up – but I became hyperaware of it as we started this mix process and talked about ways to collaborate on some additional tracks and work. It got me thinking about how so many collaboration and communication problems could be solved by tools and processes that are popular in the software development world. Here are a few.

PROBLEM: Version control of mixes

In my current workflow, I include the current date in a project’s filename and I never clean files unassociated with a project until I’m completely done editing. It’s a bit of a drag, since those filenames don’t include details about what changed or why, they’re just dates, which are sort of useful, but not really. It’s even worse when I’m writing over the course of days or weeks or even months, since I’ll invariably clean my project’s unassociated files, breaking old versions in the process. It’d be so great if I could keep track of a project and its audio dependencies as well as log distinct changes.

SOLUTION: Git LFS?

Git Large File Storage, or Git LFS, might do the trick. It’s like git, only for large files. Go figure. I have no idea how well it works, nor do I have an answer for how the audio world should handle things like merging branches, but for basic versioning, hanging onto audio dependencies, and having a place for notes about changes, this could be cool.

PROBLEM: Plugin Dependency Management

I’ve switched computers a few times over the years and I don’t always reinstall all the same software. This is especially true for my plugins, which I had a habit of buying greedily and forgetting after a mix or two. It’s rare that I need to open an old project, but without fail, if a few years have passed, I’ll get an error about missing plugins. It would be amazing if there was a single file that contained a list of all the plugins used in a project, the vendors who created them, and the last versions encountered at the time that the file was saved.

SOLUTION: Bundler/NPM/Cargo/etc for Recording?

This is something that so many languages have solved. Ruby has Bundler, JavaScript has npm, Rust has Cargo. (Incidentally, Cargo was built with involvement from Yehuda Katz, who also worked on Bundler. Yarn, an alternative to npm that I really like, is another project that benefits from his involvement.) In each case, there’s one aspect that might be difficult for the recording world to replicate: a central repository of open-source libraries from which the dependency manager can draw. But I’d settle for just a list of what I need, how much of it is missing, and where I can get it. Baby steps.
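
To make the idea concrete, here’s a purely hypothetical sketch, written as a couple of TypeScript interfaces, of the shape such a per-project plugin manifest could take. None of these names exist anywhere; it’s just the list I wish my DAW would write out:

// Hypothetical shape of a per-project plugin lockfile. No DAW supports this; it's just the idea.
interface PluginLockEntry {
  name: string;                           // plugin name as the DAW reports it
  vendor: string;                         // who makes it
  version: string;                        // last version seen when the project was saved
  format: 'VST' | 'VST3' | 'AU' | 'AAX';  // which flavor was loaded
  url?: string;                           // where to get it, if known
}

interface PluginLockfile {
  savedAt: string;                        // ISO 8601 timestamp of the last save
  plugins: PluginLockEntry[];
}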

PROBLEM: Shared Projects, Unshared Plugins

This one kills me. I’m working on a project and I want to send it to my friend to help with the mix or contribute some extra tracks, but I can’t because I’m using a bunch of plugins they don’t have. Even if we have the same DAW (let’s not even get into the fact that there isn’t a universal project format…), it’s going to be hard to share without coordinating very carefully. I know this is one of the reasons that UAD and Waves plugins are popular, but for the rest of us, it sucks.

SOLUTION: https://en.wikipedia.org/wiki/Adapter_pattern

The Adapter Pattern is a coding practice in which we create a trustworthy interface for something foreign and unreliable. It would work like this:

I open my DAW and I insert a generic reverb wrapper plugin. The reverb wrapper asks me to choose my reverb, so I choose some Nebula nonsense. I tell the wrapper, “this Nebula variable is input, this one is output, this is wet level, this is dry level, this is delay time,” etc. After that, I control my plugin entirely through the wrapper, not through Nebula. As I work, the wrapper keeps a record of the average output level, which will be important later. The next week, I send my project to my friend. She doesn’t have Nebula, but that’s ok because the wrapper will say, “Feed me a reverb. It must have a distinct input, output, wet, dry, and delay time.” She chooses some other plugin that satisfies those requirements. Because the wrapper has been keeping track of the average output level, it does its best to tune her plugin’s settings to match mine. We’re obviously going to have different results, but it should at least get us started and allow us to communicate, which is better than where we are now.
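
Sketched as code – and this is purely hypothetical, since no such plugin API exists as far as I know – the wrapper boils down to an adapter interface like this:

// The generic parameters every reverb is expected to expose.
interface ReverbParams {
  input: number;
  output: number;
  wet: number;
  dry: number;
  delayTime: number;
}

// The adapter: a trustworthy, generic interface in front of whatever plugin is actually loaded.
interface ReverbAdapter {
  // Map a generic parameter onto whatever the underlying plugin calls it.
  bind(param: keyof ReverbParams, pluginControlName: string): void;
  // All tweaking goes through the wrapper, never through the plugin directly.
  set(params: Partial<ReverbParams>): void;
  // Stats the wrapper records so a different plugin can be tuned to roughly match.
  averageOutputLevel(): number;
}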

There are a lot of problems with this. There are so few plugins whose settings would sync up nicely that I could see it being really frustrating. But maybe not. Or maybe some basic qualities would map over; maybe you’d have the shared settings and dial the rest to taste. Maybe something something machine learning could help find the settings in plugin B that are sonically closest to those of plugin A. I dunno, I’m just a guy who complains about technology.

I doubt that any of these will ever be explored, other than me maybe starting to use Git LFS to track versions of songs. They’re still fun to think about. As technology advances and we find better ways of sharing information, I’m confident that we’ll solve all of these problems in ways that are far more appropriate for the audio engineering world. Until then, it’s fun to daydream.

TypeScript Makes React Better

After I started work on the Proteus Client (Boston Biomotion) in the summer of 2016, it was clear that I’d be working fast, refactoring constantly, and experimenting with a lot of code. In an effort to bring some order to my work, I decided to go all in on TypeScript. I merged the PR that migrated all my js(x) to ts(x) on October 6 of that year. It was, without a doubt, the best gamble on technology that I can ever remember making. React’s simplicity and its preference for straightforward, clear, safe patterns make it a perfect partner for TypeScript. I’m always finding new ways to get more out of them and cannot begin to imagine working on a large project without the safety net of the compiler. Below, I’ll share a few of my favorite patterns.

These all focus on a particular type of pain point that I run into when maintaining a large project: consistency and safety of objects coming from and going to disparate places. In other words: “How can I be sure that I have the thing that I think I have?” Most engineers in dynamic languages use a combination of tests, duck-type checks, trust, and a hell of a lot of grep and manual debugging. They keep a ton of stuff in their head and hope that everyone knows how everything works or is willing to trace things out when they don’t work right. I offer some examples of how we can make our lives easier by leveraging the compiler.

Better reducers, mapStateToProps, and component store access

At the beginning of the year, I wrote this post about TypeScript and Redux. It laid out my pattern for ensuring safety and consistency between the output of each reducer, the output of each mapStateToProps function, and the data accessed from within each component. In the eleven months (how has it been so long!?) since writing this, I’ve stuck with this pattern and truly love it. No change there. Read that if you haven’t already.

Better Redux actions

An omission in the aforementioned writeup was how to handle action creators. This was skipped because, at the time, I didn’t have a healthy pattern for it. You can see the evidence of this in one of my code snippets from that post:

// A reducer
function crucialObject(currentState = { firstKey: 'none', secondKey: 'none' }, action) {
  switch (action.type) {
    case 'FIRST_VALUE': {
      return { firstKey: 'new', secondKey: action.newValue };
    }
    case 'SECOND_VALUE': {
      return Object.assign({}, currentState, { secondKey: action.newValue });
    }
    default:
      return currentState;
  }
}

Note the implicit any of action. Note the trust that action.newValue would just… be there… and be what we expect it to be. Gross. In reality, this did not scale at all. My reducers grew to be frightening, messy places where data might or might not be there, where I couldn’t be sure what keys were supposed to be present, and where I couldn’t tell which action was responsible for which key.

There are a few libraries that try to solve this problem. I felt like they complicated what should be a pretty straightforward issue. The pattern I settled on is nearly identical to the one outlined here. I differ in my preference for a slightly more manual approach within my reducers. While that post’s author likes a type that joins all possible actions, like this:

export type ActionTypes =
    | IncrementAction
    | DecrementAction
    | OtherAction;

function counterReducer(s: State, action: ActionTypes) {
  switch (action.type) {
    case Actions.TypeKeys.INC:
      return { counter: s.counter + action.by };
    case Actions.TypeKeys.DEC:
      return { counter: s.counter - action.by };
    default:
      return s;
  }
}

…I name things a bit differently and just tell the compiler what each action is.

export interface Increment {
  type: ActionTypes.INCREMENT;
  by: number;
}

export interface Decrement {
  type: ActionTypes.DECREMENT;
  count: number; // to illustrate that sometimes, your actions might end up with weird, inconsistent keys
}

function counterReducer(s: CounterState, action: { type: string }) : CounterState {
  switch (action.type) {
    case ActionTypes.INCREMENT: {
      const typedAction = action as Increment;
      return { counter: s.counter + typedAction.by };
    }
    case ActionTypes.DECREMENT: {
      const typedAction = action as Decrement;
      return { counter: s.counter - typedAction.count };
    }
    default:
      return s;
  }
}

I like doing it this way because I think it makes it easier to quickly see what you’re working with in each case statement. (The braces around each case give each typedAction its own block scope, so the compiler doesn’t complain about redeclaring the constant.) It also makes things easy if you have an action without a payload, since you can be a little lazy and not define an interface for it. It takes a little more discipline, but it’s worth it.

Either way, the result is the same: your actions will be consistent when they are created and read.
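
For completeness, the action creators themselves end up tiny. Here’s a minimal sketch, assuming ActionTypes is an enum with INCREMENT and DECREMENT members:

// Typed action creators: each one can only produce a well-formed action.
export function increment(by: number): Increment {
  return { type: ActionTypes.INCREMENT, by };
}

export function decrement(count: number): Decrement {
  return { type: ActionTypes.DECREMENT, count };
}

Because each creator’s return type is one of the interfaces above, the compiler refuses to let a malformed action escape into the store.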

The Empty object and isEmpty

A tricky issue I ran into when first getting into this was dealing with empty objects. Say a user reducer either returns a PersistedUser or nothing. How would I represent that? You can’t return undefined from a reducer, and this:

function user(currentState: PersistedUser | {}, action: { type: string} ): PersistedUser | {} {
  ...
}

…is no good because almost anything is assignable to {}, so PersistedUser | {} gives you essentially no type safety.

I settled on a simple pattern that I feel like I picked up in another language, but I can’t remember where. I define a simple interface that I call Empty:

export interface Empty {
  empty: true;
}

An Empty represents an object that is deliberately, explicitly blank. It might be a guest user, or a way to demonstrate that there is not a connection to the robot, or any number of processes that have not yet occurred. I define types like this:

export type UserState = PersistedUser | Empty;

And then export a very simple isEmpty function:

export function isEmpty(config: any) : config is Empty {
  return config !== null && config !== undefined && config.empty !== undefined;
}

By defining types that are either Empty or something else, I’m forced to always prove to the compiler that I’m acting on the right object. There are times when I use Empty even where I could leave something undefined, since optional values are too easy to cheat with !. Instead, I deal with the isEmpty case and move on.
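
To make it concrete, here’s a minimal sketch of a reducer that leans on Empty. The action names and the LogInSuccess interface are made up for illustration, but the shape matches the counter reducer from earlier:

// Hypothetical action carrying a user payload.
interface LogInSuccess {
  type: 'LOG_IN_SUCCESS';
  user: PersistedUser;
}

const EMPTY: Empty = { empty: true };

function user(currentState: UserState = EMPTY, action: { type: string }): UserState {
  switch (action.type) {
    case 'LOG_IN_SUCCESS': {
      const typedAction = action as LogInSuccess;
      return typedAction.user;
    }
    case 'LOG_OUT':
      return EMPTY;
    default:
      return currentState;
  }
}

Anything consuming UserState now has to call isEmpty before touching PersistedUser fields, which is exactly the point.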

A Better isEmpty() with Generics

An issue I ran into last week involved a refactor that allowed some bad code to slip through my isEmpty function. I started with this interface and type:

export interface Device {
  connected: boolean;
  // ...some other things
}

export type DeviceState = Device | Empty;

I used it in components like this:

interface StateProps {
  device: DeviceState;
}

class DeviceAwareComponent extends Component<StateProps, object> {
  render() {
    if (isEmpty(this.props.device)) {
      return (something)
    }

    // go on with rendering happy path
  }
}

That was all well and good until I refactored that interface. I was left with this:

export interface ProteusState {
  status: ProteusConnectionStatus;
  device: DeviceState;
  errorMessage?: string;
}

export type DeviceState = Device | Empty;

// and back in the component

interface StateProps {
  proteus: ProteusState;
}

class DeviceAwareComponent extends Component<StateProps, object> {
  render() {
    // here's the problem
    if (isEmpty(this.props.proteus)) {
      return (something)
    }

    // go on with rendering happy path
  }
}

As you might notice, I forgot to change my isEmpty call to look at this.props.proteus.device. As far as my function was concerned, everything was fine. It had no awareness of whether it was possible for this.props.proteus to be Empty, so it let it through, even when the device was in an invalid state. This was a pretty big problem and I needed a safer way of handling it.

My solution was to enhance the behavior of isEmpty with an optional generic that I can use to identify the expected interface of the object if it is not empty. By doing this, the compiler will do an extra check to ensure that what I think I’m passing is what is actually being passed. The code looks like this:

export function isEmpty<T = any>(config?: T | Empty) : config is Empty {
  return config !== null && config !== undefined && (config as any).empty !== undefined;
}

I can then modify the broken function call above and the compiler will bark at me immediately.

  // This fails to compile! The object passed does not match the generic given to the function.
  if (isEmpty<DeviceState>(this.props.proteus)) {
    return (something)
  }

I’m now using this everywhere; I no longer call isEmpty without a clear sense of what the object should be if it is not Empty. Had this been in place ahead of time, it would have kept my bug from sneaking into my commit!

Bonus: better testing with rosie and generics

This isn’t specific to React, but with a little extra typing, TypeScript dramatically improves the experience of writing JavaScript tests when using rosie to create factories.

On the backend, I’m still trying to wean myself off of Ruby on Rails. My API is built with Grape and my responses are Grape Entities, but ActiveRecord remains my greatest addiction.

One of ActiveRecord’s greatest assets is the way it seamlessly maps your database columns to methods, creating getters and setters, and then offers these interfaces to factory_bot. There is immediate, guaranteed consistency throughout the stack, because the strongly typed database acts as a single source of truth for what is and isn’t permissible. Naturally, Ruby being Ruby, it’s not perfect. If you remove a column from your database, it’s on you to grep through your code and remove references to it, but ActiveRecord models are so easy to test via factories that it’s usually easy enough to get things passing.

This consistency is what gets me. My experience with testing in JavaScript always required a lot of diligence. In pre-TypeScript days, if I had an implied interface for a PersistedUser, it was on me to ensure that my factory (if I had one) matched the actual implementation in production. After I started working with TypeScript, it was a little bit better because I could manually build factories using exported interface defs, but it was missing the fluidity of factory_bot and its integration with ActiveRecord.

I started working with Rosie a few months ago. With an interface inspired by factory_bot, it felt awfully familiar, except not: its TypeScript definitions made heavy use of any and its use of a shared global state made it hard to improve. I ended up reworking the definitions to allow the use of generics to let the compiler know what interfaces you’re defining or building. You can see examples here. We’re left with something that feels remarkably like the Ruby version.

In practice, you’d do something like this:

interface PersistedUser {
  id: number;
  createdAt: number;
  updatedAt: number;
  name: string;
  age?: number;
  occupation?: string;
}

Factory.define<PersistedUser>('PersistedUser').attrs({
  id: 0,
  createdAt: () => moment().unix(),
  updatedAt: () => moment().unix(),
  name: () => `${Faker.name.firstName()} ${Faker.name.lastName()}`
}).sequence('id');

// elsewhere...

const user: PersistedUser = Factory.build<PersistedUser>('PersistedUser', { name: 'Chris Grigg' });

In the above example, assuming you have the right dependencies imported, everything will compile correctly. If occupation suddenly becomes a required key in PersistedUser, the factory definition will complain. If the createdAt or updatedAt values went from unix timestamps to ISO 8601 strings, it would complain. If I supply the wrong kind of object to the second argument of build (maybe if I say { name: 666 }), the compiler will reject it. Same goes for adding a key that doesn’t exist.

Tests are worthless if they do not accurately match the code being tested. Without TypeScript and Rosie, we put the burden of maintaining parity on the user or a separate validation framework, which is a real drag. Introducing this change is a holy grail for me and has improved my test coverage dramatically.

Wrapup

So there you have it: some of my favorite non-trivial uses for TypeScript. These patterns let you build and refactor with significantly more speed and confidence than you could in vanilla JavaScript. The next time someone tells you that they don’t need a compiler because it’s easy enough to keep variable types in their head, share some of this with them and see how it compares to their process.

WiFi on Ubuntu 16.04 with wpa_supplicant trouble

I’m working on the first draft of the production version of the small server that acts as the brain for Boston Biomotion’s Proteus. Today, I hit a snag with the wifi that I want to document.

The unit we’re working with right now is the Intel NUC NUC6i3SYH. It uses the Intel WiFi 8260. We need to specify different wifi configurations for in-office and away, so I tried using wpa_supplicant in roam mode to specify our local and mobile settings with IP addresses. It kept hanging with a strange error and it took me way too many hours to get to the bottom of it.

The instructions I found everywhere were as follows:

Modify your /etc/network/interfaces like this:

allow-hotplug wlp1s0
iface wlp1s0 inet manual
  wpa-roam path-to-your-wpa_supplicant.conf

iface network-defined-in-supplicant.conf inet dhcp

# and so on...

This would fail to start at boot. The command sudo ifup wlp1s0 would fail with an error mentioning p2p-dev-wlp1s0 and, more importantly, wpa_bin daemon failed to start. The p2p-dev errors seemed more descriptive and pressing, so I troubleshot that for a while and came up empty.

I also found that if I started the service manually,

wpa_supplicant -B -i wlp1s0 -c path-to-config.conf

…it would start, but it did not have my roaming conf present so it was useless.

I finally started digging into /etc/wpa_supplicant/functions.sh, which contains functions invoked when someone calls ifup, among others.

I first modified the script’s call to enable more verbose output by appending -dd to the command, which wasn’t that useful, and then thought I saw somewhere that -D was another logging level. Strangely, adding just -D made everything work! It only took a second to notice that this specifies the “driver backend.”

Scanning a few more lines down, I noticed if statements that set different -D options. I traced execution to the path actually invoked and found that it looked like this:

WPA_SUP_OPTIONS="$WPA_SUP_OPTIONS -D nl80211,wext"

The docs for -D said that when it’s given no option, it defaults to wext. I modified the line to explicitly call wext without nl80211. My ifup command started working immediately. You can read up on the difference between wext and nl80211; either way, you can specify the driver to use from within your /etc/network/interfaces file. It looks like this:

allow-hotplug wlp1s0
iface wlp1s0 inet manual
  wpa-driver wext
  wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf

So, TLDR, add wpa-driver wext to your interface config file to use a legacy driver if the modern one is incompatible with your hardware.

Hope this saves someone else some time.

TypeScript and Redux

About halfway through last year, I started working with React and Redux. Our client app is extremely busy, full of data streaming in, going back out, and being passed around. Since I am the only full-time engineer and on a tight schedule, I moved to TypeScript in an effort to get something to look over my shoulder while I worked. The migration was not pleasant, but the learning curve was practically non-existent and the benefits were immediately clear.

What was not immediately clear was how I could get the most out of TypeScript. TypeScript’s type inference is one of its greatest strengths, but it can bite you in the ass because it will allow you to treat everything as a generic any. This might be handy sometimes, but in general, you do yourself a disservice when you choose to leave objects untyped when stricter options exist.

Nowhere is this more evident and troublesome than when dealing with the output of reducers and keys from the Redux store. Left alone, state keys mapped to props will behave as any. If there is one area of an app where I think strong types are crucial, it is the state that is shared throughout the app. In a perfect world, at a minimum, I want the following guarantees and behavior:

  • Each key of my state has known, predictable values
  • If the output of a reducer changes, state keys that depend on old output will become immediately apparent
  • If the shape of my state changes, invalid reliance upon old truths will become immediately apparent
  • The definition of my state’s shape should be centrally managed; I should not have to cast types in components

It took a little work and thought but I ended up with an approach that achieves all of this. It requires a little more boilerplate than you might find appealing but it is definitely worth the clarity, stability, and refactoring oversight that it provides.

Given the following:

// A reducer
function crucialObject(currentState = { firstKey: 'none', secondKey: 'none' }, action) {
  switch (action.type) {
    case 'FIRST_VALUE': {
      return { firstKey: 'new', secondKey: action.newValue };
    }
    case 'SECOND_VALUE': {
      return Object.assign({}, currentState, { secondKey: action.newValue });
    }
    default:
      return currentState;
  }
}

// A root reducer that will be fed to a store

const rootReducer = combineReducers({ crucialObject });

// A container/component with a mapStateToProps function that will be used with connect

const mapStateToProps = (state) => {
  return { crucialObject: state.crucialObject };
};

// And, in that same component, an expectation of what keys will exist on crucialObject
class MyClass extends Component<any, any> {
  render() {
    const myVal = this.props.crucialObject.firstKey;
  }
}

We want a healthy dependency between these pieces of code. If the reducer starts spitting out objects that don’t match what our component expects, we need to know! We can accomplish this by defining a few interfaces and then wiring them together carefully.

First, the reducer. We need to define the shape of its output, which is simple enough.

interface CrucialObject {
  firstKey: string;
  secondKey: string;
}

function crucialObject(currentState = { firstKey: 'none', secondKey: 'none' }, action) : CrucialObject {
  // the rest is unchanged
}

Next, we want to tell subscribers of the store that if they call upon state.crucialObject, they will get a CrucialObject. We can do this with another interface. I tend to define this in the same file as my reducer.

interface CrucialValueReducer {
  crucialObject: CrucialObject
}

We need to define all of the keys and values that exist on our root reducer. Easy again! In the same file where I define rootReducer, I also define a State interface. This interface extends the interfaces that are provided by my reducer files.

interface State extends CrucialValueReducer {};

// we don't need to do anything with this here, but I find it is good practice to keep these two together since a change to one will require a change to the other
const rootReducer = combineReducers({ crucialObject });

Next, in my component, I need to tell the compiler what it should expect of the state parameter in mapStateToProps.

const mapStateToProps = (state: State) => {
  return { crucialObject: state.crucialObject };
};

This is a great change. Now, the compiler knows that State contains the combined interfaces of all of my reducers. If I change the name of my state key, remove it entirely, or change its output, mapStateToProps will bark at me and I will have to fix it. A good example would be to start with this:

const mapStateToProps = (state: State) => {
  return { safeToProceed: state.crucialObject.secondKey === 'new' };
};

If, in my reducer, I get rid of secondKey, the compiler will flag the above code as invalid. Awesome!

We have two things left. First, we want to guarantee that mapStateToProps returns an object with the complete shape that our component needs. This is no problem with, you guessed it, another interface.

interface ComponentStateProps {
  crucialObject: CrucialObject;
}

const mapStateToProps = (state: State) : ComponentStateProps => {
  return { crucialObject: state.crucialObject };
};

This is crucial. If we omit it, we might make a change that the compiler sees as valid but that our component is not expecting. As written, we’re in good shape. If we write the code below, though, the output of mapStateToProps will not match the promise we will momentarily make to our component.

interface ComponentStateProps {
  crucialObject: CrucialObject;
}

// The compiler will catch this because our object does not match the return signature
const mapStateToProps = (state: State) : ComponentStateProps => {
  return { crucialObject: state.crucialObject.firstKey };
};

Finally, we can reuse the ComponentStateProps so references within the body of the component can be matched to our interface.

class MyClass extends Component<ComponentStateProps, any> {
  // The body is unchanged, but if we call upon `this.props`, we'll see the injected state keys.
  // If we changed our State Props interface, dependencies in here will become immediately clear
}
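
The final wiring step is unchanged from untyped Redux; for completeness, a minimal sketch using react-redux’s connect:

import { connect } from 'react-redux';

// connect injects the result of mapStateToProps as props. Because mapStateToProps
// is annotated to return ComponentStateProps and MyClass declares ComponentStateProps
// as its props interface, the compiler checks both sides of the handoff.
export default connect(mapStateToProps)(MyClass);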

And there we have it: dependencies are clearly defined and will be highlighted by the compiler if broken. If we need to add another reducer, we can continue extending our State interface:

interface State extends CrucialValueReducer, CurrentUserReducer {};

We are now covered from multiple angles.

  • By identifying an object as a State, I know what data is available.
  • By matching the output of mapStateToProps to my component’s input, I can be confident that the state will move safely from Redux to the view

I find it weird that in all my reading about TypeScript and Redux, I could find dozens of examples of adding types for reducer inputs but almost nothing about outputs and the sanctity of state. Maybe there’s a simpler way of handling it? If so, I’d love to know. Regardless, TypeScript is a joy to work with and this helps you get more out of it in a busy app. This is one case where you have to give it a few hints up front, but then it will keep its eyes open for you forever.

A Tale of Two Libraries

Work has kept me busy this year, but the past few months were slightly more of a challenge than usual when I was thrust into the world of front-end development. All of our front-end engineers were gone, updates were needed, I like learning new things, so that was that. We have two React apps at work, one slightly older that uses Flux, another a bit newer using Redux. They were each configured by different people and demonstrated many different ways of doing things. After a few weeks of writing production code and quite a few frustrated IMs to friends, I reached a point where I was feeling good about my time with JS, React, and front-end development as a whole.

This is not a story about that, though.

At some point after I reached that “maybe-I-don’t-totally-suck-at-this” phase, I started helping a friend with a project that needed a front-end app. It seemed like it would be a good fit for React, so I thought this would be a nice opportunity to start with a fresh project and see the best practices of people out in the JavaScript community.

I had it in my head that configuring Webpack, React, Redux, etc. from scratch was a pain in the ass. Since the setup portion was the area where I felt least confident, I thought I would do some research and find the best React starter repo to use as a template. After all, a popular project would probably reflect the current state of the art and include all kinds of helpful things that I and my former co-workers might not have known about!

I did some research. I wanted something that already had React 15 and the modern versions of React-Router and Redux. I wanted it to preferably already have a test framework set up. It had to use webpack, naturally, have SASS support, and hot reloading. These requirements didn’t seem too demanding, but finding the right library proved a little tough. There’s a lot of old stuff out there, a lot of libraries using old versions or missing pieces. Finally, I settled on an extremely active React starter project that had all the right versions, the right config, the right number of Github stars, the right number of contributors and people responding to issues. Sweet! I cloned, I copied into a new repo, I got to work.

Things were weird right away. The sprawl was insane. Unreal. Every conceivable tactic to split this project into extra files had been employed. Each route was split into pieces, some routes had dedicated components, others had dedicated containers, others… those are just routes, you have to find the rest of those pieces. The webpack config was split and split and split. Always quick to assume that I’m just a primitive Ruby engineer who doesn’t understand the ways of these sophisticated front-end professionals, I put my head down, refactored it a bit to make it more sensible to me, and pushed through.

It worked for a little while. Sure, every time I had to add a new route or component, I felt like I was jumping through flaming hoops that were also screaming at me and dancing around and covered in spikes, but… I just had to adapt to their modular approach! This was a “fractal” file organization, broken up by features. (Forget the fact that today’s unshared, feature-specific code and assets are tomorrow’s reusable time-saver and that this reeked of premature optimization.) I spent more and more time. I wrote some cool code. Things started coming together.

Somewhere along the line, I simplified the routes, components, and containers to look more like one of my apps at work. It felt less sophisticated but no matter how much I read, I couldn’t find any evidence that anyone else was actually using this library’s approach to file organization. Oh yeah, also, my huge production apps at work looked nothing like this and they were doing fine. Oh, and no repo that I had ever looked at was organized like this. I again considered that maybe there was something wrong with this library’s approach.

It came time to do some styling, so I asked Lauren (my wife, she’s good at that – she also taught me JavaScript in the first place) if she could help us out. I got node and everything installed on her laptop, and we realized… where do we put stylesheets? Where do we put assets? How do we require libraries? We had to add a webpack loader, where did the config go?

Those flaming hoops that we had to jump through to make things work? There were no hoops at this point, just fire. We had to find a way to just jump through the fire and make things work.

After a lot of stress, we got everything working. The app was styled, everything looked good!

Finally – finally! – things seemed pretty stable. I thought everything could be left alone for a little while.

But then I read about this cool new library: webpack-dashboard. It made my webpack config look all… pretty, ‘n stuff! I wanted to use it. I started following the documentation aaaaand…

Everything broke. Everything fucking broke. I could not figure out where to even start making the simple changes required to make this thing work.

That was the end! Fuck this shit! I decided to find another library to use as a baseline for the project. I’d rip my code out, configure the stuff I didn’t want to configure before, and know that I had something predictable.

A few weeks after I started the new project, create-react-app was released. It was super barebones – no Redux, no Router, no tests, no SASS – but fuck it! I’d figure out the rest!

I cloned it, I ejected (cause I’m a control freak who likes to see more of his dependencies), and guess what jumped out at me immediately?

It was so. fucking. simple. There were so few files. It was so predictable. There were a few clearly configurable options, but it was all totally declarative, easy to expand, easy to reason about. I quickly wired up everything I needed, ported over my code, and had it working. Since then, there have probably been a dozen little tweaks I had to make to my webpack or test config and it never fucking surprises me or breaks. I keep an eye on the project’s repo to see what changes they’re making and if I see something I like, I follow their lead.

WHAT’S MY FUCKING POINT, THEN?

I’ve got a few.

My first point is that complicated code is not necessarily healthy code. Libraries that aim to be all things to all people run the risk of becoming amalgamations of many organizations’ infrastructures, not examples of how any one organization would ever handle their infrastructure! The first React starter library I used was more of a tech demo (“LOOK AT ALL THE CRAZY SHIT YOU CAN DO WITH REACT!”) than a reasonable approach to project organization.

My second point is that even if you are an outsider to a community, a little scepticism can be healthy. I try to approach languages and frameworks openly, with the belief that their practices evolved naturally and their patterns must work for them or else they wouldn’t have made it that far. This is especially true when it comes to something like JavaScript, where there is a rapidly evolving, complex ecosystem full of brilliant people. Who am I to tell them how to do things? I am not an experienced front-end professional like these people! Yeah, well, those are nice ideas and all but they betray the fact that I am not new to code, organizing projects, or the patterns and practices that lead to maintainable, reasonable production-ready code. You, reader (JK nobody reads this), might be in a similar situation, and to you I say: trust yourself. Don’t immediately shit on other ways of doing things but if something strikes you as weird – too engineered, too sprawling, oddly-named, whatever – you’re probably right. You don’t have to be a fucking JavaScript expert to see that.

Finally, remember that at the end of the day, doing things “right” by the standards of a community (of fucking strangers, who gives a shit?) should be secondary to feeling productive and shipping. Don’t feel obligated to change tech or put up with patterns that just do not seem to be sticking if it’s getting in the way of your work. I should have punted on the first library right away.

And that’s that. I’ve been happy as a clam since then, coding and shipping. My project is getting pretty intense and I’m eyeing TypeScript as a way to make things more reliable. I branched and started the conversion but noticed a lot of weirdness around the different approaches to obtaining type defs, so once I spend some time configuring my project for global typings and… WAIT A MINUTE. NOT THIS AGAIN. I’M WAITING FOR 2.0.
