The A.I. Dilemma

When people talk about LLMs having their “Netscape Moment”, this is the thing I ponder the most.

Tristan Harris, whose early work, timewellspent.io, I have covered before, and Aza Raskin discuss, quote, “how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.”

The Internet has already died once, and it, along with what remains of human creativity, may very well die again.

Towards a modern Web stack, by Ian ‘Hixie’ Hickson

This is not the first time people have talked about designing the Web Platform for applications by sidestepping HTML. It is, nonetheless, the first time such an idea has been laid out in a detailed proposal.

What is surprising is that this came from one of the editors of HTML5. In the proposal linked in the footnote, he asserts that the unfulfilled promises of HTML can only be delivered by revamping the stack altogether, ditching the old HTML/CSS/JavaScript trio.

Overall it is pretty complete, except that it does not address the fact that WASM is bytecode. Unlike HTML/CSS, whose declarative form browsers are free to parse and render while the content is still downloading, WASM widgets and apps will show a loading screen, much like Java applets did.
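To make that concrete, here is a minimal sketch of what a WASM-first page has to do before it can show anything. The app.wasm module, its exported main function, and the #loading placeholder are hypothetical names; only the WebAssembly.instantiateStreaming() call itself is a real browser API.

    // Even with streaming compilation, nothing useful can be painted until the
    // module has been fetched, compiled, instantiated, and its entry point run.
    const loading = document.getElementById("loading")!;
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("app.wasm"),   // hypothetical bundle name
      {},                  // imports the module expects, if any
    );
    (instance.exports.main as () => void)();  // assumed exported entry point
    loading.remove();                         // only now can the loading screen go away
    // An HTML/CSS document, by contrast, starts rendering as soon as the first
    // bytes arrive; the browser never needs the whole file up front.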

This issue alone isn’t going to be a showstopper, though. HTML/CSS will live on as the best document format.

It would be interesting to see whether the browser vendors (his employer included) will act on these ideas.

ChatGPT

Note: I promise, nothing in this post is generated by ChatGPT.

Mental calculation and calculators

I grew up in an East Asian culture that values learning mental calculation. I remember how adults thought of calculators, and what children were told to practice at Kumon.

I can’t do mental calculations, and I still admire those who do. They, however, don’t always seem to end up with STEM degrees or do better in personal finance.

I reach for the calculator on my phone, on my watch, and in Alfred every day.

Writing and input methods

I also grew up in a place that appreciates penmanship.

Yet I can’t seem to write Chinese characters by hand, because I can’t remember their strokes. Since leaving school, I have relied almost exclusively on typing characters on screens. Thankfully, East Asian input methods are ubiquitous on every device; handwriting recognition, in fact, came later.

Writing and ChatGPT 

Essay writing is also an appreciated skill, and arguably a global one, not limited to East Asia.

Users of the language model can generate essays with the right prompts, and supplement them with their edits.

There is no dispute over the result of a given calculation or character. 16 times 4 is always 64. The same goes for typing Chinese characters: the outputs are the same CJK code points in Unicode. Of course, one has to understand the math well enough to enter the right calculation, or read well enough to pick the right character among the homophones.
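As a trivial illustration of that determinism (the specific character below is just an example):

    // The same calculation or the same committed character always yields the
    // same result, no matter which device or input method produced it.
    console.log(16 * 4 === 64);                   // true, every time
    console.log("中".codePointAt(0) === 0x4e2d);  // true: always U+4E2D
    // A prompt to a language model, by contrast, has no single canonical output.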

It is also a learned skill to give ChatGPT the right prompt and supplement the output with the right edits. We, humans, are only a few months into understanding what it feels like to learn such a skill. But even for users who excel at that skill, the output isn’t indisputable. The vast space of possible human utterance and expression means that there will be bad, effortless ChatGPT-generated essays, and there will be essays made better because of ChatGPT.

Where would that lead us, you ask?

ChatGPT and half-lies

As of today, the output of a ChatGPT-assisted essay still relies heavily on a human to fact-check it. The language model does not employ the same editorial standards as Wikipedia, let alone academic papers. It does not know whether the materials it was trained on were trustworthy either.

As with Copilot, the copyright of the generated content is a subject of scrutiny too. Just last week, I had to reject a pull request at work because it contained a function generated by ChatGPT without a copyright notice.

What if it is by design, not a problem?

ChatGPT and half-truth

We may be witnessing the end of the human-driven internet, where the majority of the texts are written by humans.

We have already seen the shift in which almost all content is now curated by algorithms, backed by many machine learning models. There are tons of reflections on the effects of that (among them, the near-destruction of democracies) that are not worth repeating here.

User-generated content may soon be drowned out by machine-generated content. Diverse perspectives will soon be crowded out by those of the entities with the biggest wallets. Intelligence and misinformation campaigns will become even more effortless.

On the other hand, the next great work of fantasy literature may be generated rather than written. Its fictional universe could be as glamorous as Asimov’s.

In conclusion

This is a paradigm shift. Just as calculators and input methods relieve the mental burden for many, ChatGPT is going to change how humans construct essays, and even thoughts themselves.

I am looking forward to seeing what kind of creativity ChatGPT will unleash on humanity, while carefully observing the harm it may cause.

I will miss the days when life was simpler, and you didn’t have to worry about the quality of the text you encountered on the web, at least not this much.

I hope future generations will continue to enjoy the ability to reflect on their thoughts through their writing, as I am doing with my blog right now.


This post is a reprise of what Evan Puschak said in his Nerdwriter video, and of a toot from ronnywang, together with my take on the subject. Any mistakes are mine and mine alone.