The Software Crisis

We developed methods of building nested layers of abstractions, hiding information at multiple levels. We took the problem of constructing software and morphed it into towering layers. We integrated these layers into the software required to use our computers, and the software that drives our lives.

From The Software Crisis.


One pivotal distinction between senior and junior engineers is how they manage abstractions, both in building them and understanding them.

After all, “We can solve any problem by introducing an extra level of indirection.” The temptation to solve the problem at hand with another wrapper object or another if block is irresistible. The abstraction layer can also be a comfort zone that many unintentionally trap themselves in; it is more rewarding to build cathedrals within their own layers than to reach down and out (yes, this has been a critique of Web engineering, and of myself at the beginning of my career).
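To make that concrete, here is a contrived JavaScript sketch (the names are mine, purely illustrative) of how each “fix” tends to add another layer of indirection rather than address the underlying problem:

```javascript
// The original, direct version.
function fetchUser(id) {
  return fetch(`/api/users/${id}`).then((res) => res.json());
}

// Layer 1: a wrapper object, "in case we swap the backend later."
class UserRepository {
  getUser(id) {
    return fetchUser(id);
  }
}

// Layer 2: an if block to special-case one caller's needs.
class CachedUserRepository extends UserRepository {
  constructor() {
    super();
    this.cache = new Map();
  }
  getUser(id, { skipCache = false } = {}) {
    if (skipCache || !this.cache.has(id)) {
      this.cache.set(id, super.getUser(id));
    }
    return this.cache.get(id);
  }
}
```

Each layer is individually defensible, which is exactly why the temptation is so hard to resist.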


How would you like to survive in this era of crisis?

The economics of the old indieweb

This post is a response to We can have a different Web by Molly White. I enjoy her takes and her colorful metaphors. Go read that post and support her if you can.

I too have a few things to say, and they didn’t fit into a Mastodon post. So I guess this is the topic to take over to a self-hosted blog.

While I too have had an indieweb presence since 2000, what is lacking is neither technology nor advocacy. Molly correctly pointed out that the (almost) forever backward-compatible Web means the technologies that existed back then are still available today. There are surely enough warnings about the dangers of walled gardens and the toxicity of algorithms. Yet I feel that the barrier is simply economics: what people want out of the time and effort they spend.

People put themselves on the Web to connect. Many may want to be influencers, but a lot more are just here to find their crowds. The relentless network effect means you’ll need to meet people where they are, and to do that you need to go to one of these “five websites.” It just doesn’t make sense for most people to build a cozy cabin on the indieweb with no visitors.

Just take myself as an example. I enjoy the company of my tech folks on Mastodon, but I still have to regretfully log on to other social platforms so as not to lose touch with my other friends. Every time I do, I feel that I am risking my mental health by exposing myself to algorithmic toxicity in exchange for staying connected. It must be worse for people without anywhere else to escape to.


So what were the economics that enabled the old indieweb?

People used to be able to host content on Geocities, which was ad-supported.

People used to be able to find their crowds in much smaller self-hosted forums, web rings, and human-curated web directories.

What made the business model of Geocities unsustainable, or made the self-hosted forums die out? What gave rise to the walled garden content websites? What made algorithm-curated content win?

I don’t have a clear answer to any of these questions. I am not going to start a new web ring or a web directory. I just know that we’ll need to tilt the balance to make the indieweb work again.


Allow me to end this post with something I’ve said before:

When I look at the history of the commercialization of the internet & web (no, Al Gore didn’t invent the internet), it always reminds me that proprietary information services (like CompuServe) existed before that, and will likely continue to exist afterward.

We must remind ourselves that open systems, like democratic forms of governance, are the outliers of human history, not the norm, no matter how precious we feel they are.

Celebrating CSS: Testing the Versatile Yet Elusive

I love CSS, even though I know there are a lot of haters out there.

The problem with CSS is not the cascade. The problem with CSS is that it is hard to test. We know how to test JavaScript: create a bunch of test cases and assert the outputs. But we know too little about how to validate CSS outputs (the layout) effectively. It is frustrating to the point that people end up demanding “simpler” abstractions on top of it to prevent mistakes.
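For contrast, here is a minimal sketch of what testing JavaScript looks like (using Node’s built-in assert module; the function under test is a made-up example):

```javascript
import assert from "node:assert";

// A made-up function under test.
function add(a, b) {
  return a + b;
}

// Deterministic input, deterministic output: trivial to assert.
assert.strictEqual(add(2, 3), 5);
```

There is no equally direct way to assert that a stylesheet produced the layout you intended.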

You can read back the layout in JavaScript and assert on the dimensions, but not everything is exposed (like ::before or ::after), and some states are hard to reach (like :hover or dark mode).
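A minimal sketch of that approach, and of where it runs out of road (the selector and threshold are hypothetical):

```javascript
// Read the layout back and assert on it.
const el = document.querySelector(".card"); // hypothetical element
const rect = el.getBoundingClientRect();
console.assert(rect.width <= 320, "card should not overflow its column");

// Pseudo-elements expose their computed styles...
const before = getComputedStyle(el, "::before");
console.assert(before.content !== "none", "decoration should render");
// ...but there is no getBoundingClientRect() for ::before itself,
// and no direct way to force :hover from a script.
```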

Or, you can create a screenshot tool to diff the pixels. You would still face the issues above, in addition to maintaining a complex test suite across multiple browsers and multiple OSes. Oh, and different versions too, including underlying text-rendering changes.
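For what it is worth, modern tooling makes the basic loop short to write; here is a sketch using @playwright/test (the URL and baseline name are illustrative):

```javascript
import { test, expect } from "@playwright/test";

test("homepage layout is unchanged", async ({ page }) => {
  await page.goto("https://example.com/"); // illustrative URL
  // Compares the page against a stored baseline image and
  // fails when pixels differ beyond the configured threshold.
  await expect(page).toHaveScreenshot("homepage.png");
});
```

The short test body is deceptive: the baseline images are what you end up maintaining, one set per browser, OS, and version.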

Regretfully, none of these is easier than hiring QA engineers to manually click through the pages.

In a way, CSS is a victim of its own success: it is so versatile and adaptive that you can achieve a lot with a few lines of code, and cause regressions in the same few lines if you are not careful.

Maybe this is a problem of declarative languages in general. I don’t know how to write test cases for a spreadsheet either, and there are no well-known or built-in tools for that, more than half a century after its invention.

Cracking this problem may be the thing that makes CSS popular.