A new take on Asian IME for English audiences

East Asian input methods are not hard to understand, but for English speakers, we can do better than a general Wikipedia explanation by working through a concrete example. This post is my attempt to do that for an English audience.

It is written for my own amusement, but I hope you will like it. My recommendation is to read it once without tapping on the footnotes, then skim through it again with them.


Spoken languages are humans conveying ideas by making sounds. Due to anatomy, a human can only make a limited number of sounds, and not every sound humans can make is used in a given spoken language.

There are around 44 sounds in spoken English; linguists call them phonemes. English is usually written in the Latin alphabet, which has only 26 letters, so combinations of letters are used to represent the 44 sounds. Linguists call these letter combinations graphemes. Each English phoneme is represented by many graphemes (too many for some, absent a spelling reform). Humans are taught to pick the right graphemes to write down the exact words they intend to speak. It’s called spelling.

Now, imagine there is a language1 written using a different script. Instead of the Latin alphabet, it is traditionally written with graphemes composed of distinct drawn shapes2. Linguists call these shapes logograms. This particular imaginary script comes with tens of thousands of logograms, which relate to the phonemes of the imaginary language much as graphemes do in English — just as humans are taught to spell English words correctly, they are taught to pick the right logogram combinations to write down the exact words they intend to speak.

Most humans are born with ten fingers. Modern computer keyboards come with around 78 keys, designed for those ten fingers to operate. That is enough for the 26 letters of the Latin alphabet, but nowhere near enough for the tens of thousands of logograms of the imaginary language. Something would have to be done about that.

Thankfully, since it is a human language, the number of sounds in the imaginary language would still be of a manageable magnitude. Before the introduction of modern computers, local linguists would have already identified these phonemes. They would have gone further and invented a set of symbols for these sounds. These symbols — phonetic symbols, they called them — would “spell” a syllable with just one to four symbols3. Unlike in English, since the symbols were constructed rather than naturally developed, each symbol combination represents only one syllable, and each syllable is written by only one symbol combination, systematically.
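
To illustrate with real Bopomofo spellings (a tiny sample; the pinyin on the right is only for the English reader’s reference), a short Python sketch can even assert the one-to-one property:

```python
# A few real Bopomofo spellings, one to four symbols per syllable.
# The pinyin values on the right are only for the English reader's
# reference; this is a tiny illustrative sample, not a full table.
SPELLINGS = {
    "ㄓ": "zhi",        # one symbol
    "ㄓㄨ": "zhu",       # two symbols
    "ㄓㄨㄤ": "zhuang",  # three symbols
}

# Unlike English spelling, the mapping is one-to-one in both directions:
# no two symbol combinations spell the same syllable, and vice versa.
assert len(set(SPELLINGS.values())) == len(SPELLINGS)
```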

Aside: other linguists disagreed and used Latin letters to “spell” the phonemes of the same language4. The principle is the same, though.

Aside: Local linguists of another imaginary language took a different route and invented symbols that directly represent each sound. It’s “less” systematic, but it gets the job done too5.

Since the number of symbols is limited, they could then be arranged on a modern computer keyboard. Computers would then be loaded with a program6 that lets humans pick the right logogram for each symbol combination as they type.

Aside: For spelling the language with Latin letters, it is even easier — you don’t even need to arrange a different set of symbols on the keyboard.

When computers were dumb and limited in capacity, these programs would only be implemented with a simple mapping table, mapping each symbol combination to its candidate logograms. This would be quite cumbersome: many logograms share the same sound, and since people often type the same words, they would have to pick from the same candidate lists again and again, and humans really hate to repeat themselves.
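
To make this concrete, here is a minimal sketch of such a mapping-table program in Python. The spellings and homophones are real Bopomofo/Mandarin examples, but a real table would hold thousands of entries:

```python
# A minimal sketch of the early, "dumb" IME: one phonetic spelling
# maps to a fixed list of candidate logograms, and the human must
# pick one every single time. Entries are a tiny real sample.
CANDIDATES = {
    "ㄓㄨㄤ": ["莊", "妝", "裝", "樁"],  # homophones read "zhuang"
    "ㄓㄨㄥ": ["中", "鐘", "忠", "終"],  # homophones read "zhong"
}

def type_syllable(spelling: str) -> list[str]:
    """Look up a spelling; the human then picks from the candidates."""
    return CANDIDATES.get(spelling, [])

# Typing the very common 中 means scanning this list for it -- and
# scanning it again every next time the word comes up.
print(type_syllable("ㄓㄨㄥ"))  # ['中', '鐘', '忠', '終']
```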

Aside: A different school of programs for the same purpose would instead decompose logograms into visual symbols by their shapes, rather than the sounds they represent. Their mapping table would map visual symbol combinations to logograms7. This is very helpful for typing a logogram without knowing its sound, or without regard to the spoken language being written. Some argue it is easier to type with too, given that arbitrarily many symbol combinations can be designed for the program, versus a fixed number of phonemes.
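
For contrast, a sketch of the shape-based table. The 明 = 日 + 月 decomposition follows Cangjie; everything else here is illustrative. Because the code space is far larger than the set of phonemes, most codes identify a single character and no candidate menu is needed:

```python
# Shape-based lookup: the key describes what a character looks like,
# not how it sounds. Following Cangjie, 明 ("bright") decomposes into
# 日 (sun) and 月 (moon), coded here as "AB". Tiny illustrative sample.
SHAPE_TABLE = {
    "A": "日",
    "B": "月",
    "AB": "明",  # 日 + 月
}

def type_by_shape(code: str) -> str:
    return SHAPE_TABLE.get(code, "?")

print(type_by_shape("AB"))  # 明 -- one code, one character, no picking
```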

As computers became more powerful, a new class of programs developed. Instead of mapping symbol combinations to logograms (the shapes that make up the words), these programs would map sequences of symbol combinations to whole words8. That would need a bigger mapping table for sure, and the table would also require constant curation, because human thought endlessly evolves and coins new words (by comparison, new logograms are rarely added).
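
Here is a sketch of that word-level table together with a deliberately naive greedy conversion. The entries and frequency numbers are made up for illustration; real smart IMEs score whole sentences statistically rather than greedily:

```python
# The "smart" generation: the table maps *sequences* of spellings to
# whole words, each with a frequency score learned from text.
# Entries and scores are illustrative.
WORD_TABLE = {
    ("ㄓㄨㄥ", "ㄨㄣˊ"): [("中文", 8000), ("中雯", 3)],  # "Chinese (language)"
    ("ㄓㄨㄥ",): [("中", 9000), ("鐘", 700)],
    ("ㄨㄣˊ",): [("文", 5000), ("聞", 900)],
}

def convert(syllables: list[str]) -> str:
    """Greedy longest match: prefer the longest word the table knows,
    then the most frequent candidate for it."""
    output, i = [], 0
    while i < len(syllables):
        for length in range(len(syllables) - i, 0, -1):
            key = tuple(syllables[i:i + length])
            if key in WORD_TABLE:
                word, _freq = max(WORD_TABLE[key], key=lambda wf: wf[1])
                output.append(word)
                i += length
                break
        else:  # no entry at all; pass the spelling through
            output.append(syllables[i])
            i += 1
    return "".join(output)

print(convert(["ㄓㄨㄥ", "ㄨㄣˊ"]))  # 中文 -- one pick for the whole word
```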

Thankfully, computers are also powerful enough to manage these tasks. Maintaining and developing these bigger mapping tables is also helped by the fact that computers have since been connected across the planet9 (and its lower orbits10, to be exact), so it would not be hard for a computer to find a large body of text written in the imaginary language (a “text corpus”, as linguists and computer scientists call it) waiting to be extracted and processed11.
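
As an illustration of the corpus step, counting word frequencies over a toy, pre-segmented corpus is enough to produce a first ranking signal for the table; the sentences below are made up, and real systems also model the context around each word:

```python
# Count how often each word appears in a corpus that is already
# split into words, so the IME can rank candidates by frequency.
from collections import Counter

def build_frequencies(segmented_corpus: list[list[str]]) -> Counter:
    counts = Counter()
    for sentence in segmented_corpus:
        counts.update(sentence)
    return counts

# Toy two-sentence corpus: "Chinese is interesting", "I study Chinese".
corpus = [["中文", "很", "有趣"], ["我", "學", "中文"]]
print(build_frequencies(corpus)["中文"])  # 2
```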

Thus, through the ingenuity of these humble programs built upon linguistic knowledge, our imaginary language would be allowed to thrive in the Information Age, expressed in logograms the same way it has been written for thousands of years, and perhaps will be for thousands of years to come.


If you like this post, you may also like my no-longer-updated JSZhuyin and its interactive tutorial. I would imagine you already frequent the many linguistics videos on YouTube, and are a fan of the movie Arrival, like me.


  1. Mandarin Chinese is the imaginary language in question. ↩︎
  2. Mandarin Chinese is traditionally written in Chinese characters, a logographic script shared among East Asian languages. Among these languages, the use of Chinese characters survives only in Chinese, Japanese, and Korean, abbreviated as CJK in the information-processing field. ↩︎
  3. This is how the Bopomofo phonetic symbol system spells Mandarin syllables; it was invented in the 1900s. Hangul, invented in 1443 and thus predating modern linguistics, works more or less the same way for spelling Korean. ↩︎
  4. The Pinyin system is designed to spell Mandarin with Latin letters. ↩︎
  5. Japanese Kana is one such system. ↩︎
  6. The programs are Input Method programs, or IMEs, the subject of our discussion here. ↩︎
  7. An example of this kind of IME is the Cangjie input method, which codes Chinese characters with 24 invented “radical” symbols. ↩︎
  8. These newer IMEs are often dubbed “smart” or “intelligent” IMEs. As the later paragraphs suggest, all IMEs are “smart” nowadays. ↩︎
  9. The Internet and the World Wide Web, if you haven’t heard of them. ↩︎
  10. There is Internet on the International Space Station, usable by astronauts. One got sued over it (and was vindicated). ↩︎
  11. This study of distilling human language using computers is called Natural Language Processing. ↩︎

Status of IDN ccTLDs

For some reason, work has taken me to investigate the current usage of internationalized country code top-level domains (IDN ccTLDs), something I first came across almost two decades ago.

I remember it being a big thing promoted by the NICs. As a web engineer, I also found it an interesting technical endeavor (with Punycode and so on) and spent my own effort to make sure the <IDN>.tw site I managed at the time also resolved on <IDN>.台灣, given that, per NIC rules, registering a second-level ccTLD domain auto-registers you for the matching IDN ccTLD name. Edit: I misremembered this.
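
As a refresher on what Punycode does, here is a quick sketch using Python’s built-in idna codec (which implements the older IDNA 2003 rules; the third-party idna package covers IDNA 2008). The hostname is the TWNIC one mentioned below:

```python
# Convert an internationalized domain name to the ASCII form that
# actually goes on the wire. Python's built-in "idna" codec handles
# the whole hostname, label by label.
hostname = "twnic.台灣"
print(hostname.encode("idna").decode("ascii"))  # twnic.xn--kpry57d
```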

Fast forward to today: I struggled to find a live website that resolves on an IDN ccTLD hostname. I no longer handle that <IDN>.台灣 website, and my successor broke it (probably because I failed to document my work). The university websites I knew of at the time have all stopped resolving on their IDN ccTLD hostnames. Hell, even the TWNIC website doesn’t resolve on twnic.台灣!

Eventually, through the wonder of Wikipedia, I found one website that does resolve: уміц.укр, the Ukrainian Network Information Centre. That is good enough for me, even though it won’t connect over HTTPS.

Ukrainians never disappoint.

How to Kill a Decentralised Network (such as the Fediverse)


This piece is a good history lesson on why XMPP failed to gain momentum, and a cautionary tale of how companies can commandeer open, “public good” protocols.

My experience with XMPP was limited: in an unpublished project, I wired Gmail to MSN Messenger through the protocol. I know enough to see XMPP as a precursor of things that have since been re-invented (my other favorite in that category is NNTP, and even e-mail to a certain extent).

From the post, it sounds like the danger arises from an intentional commercial fork of a protocol, designed to compete with the original. I wonder if there are things to learn (like governance models) from protocols that have so far survived the risk of fragmentation.

Something to dig further.