Use Promise, and what to watch out for if you don’t

If you do know Promise, consider the following code: do you know the order of the resulting logs? (Answered below.)

var p = new Promise(function(resolve, reject) {
  console.log(1);
  resolve();
});
p.then(function() {
  console.log(2);
});
console.log(3);
setTimeout(function() {
  console.log(4);
});
p.then(function() {
  console.log(5);
});
console.log(6);
setTimeout(function() {
  console.log(7);
});
console.log(8);

The Promise interface

The Promise interface is one of the few generic interfaces that graduated from being part of a JavaScript library to being a Web platform API (the other being the JSON interface). You already know about it if you have heard of Q, Future, or jQuery.Deferred. They are similar, if not identical, things under different names.

Promise offers better asynchronous control to JavaScript code. It offers a chainable interface, where you can chain your failure and success callbacks to run when the Promise instance is “rejected” or “resolved”. Any asynchronous call can easily be wrapped into a Promise instance by invoking the actual asynchronous interface inside the callback function passed to the constructor (which itself runs synchronously).
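As a minimal sketch of that wrapping pattern: `readConfig` below is a hypothetical callback-style function standing in for any real asynchronous API, and `readConfigAsync` wraps it into a Promise.

```javascript
// Hypothetical callback-style API, standing in for a real asynchronous call.
function readConfig(callback) {
  setTimeout(function() {
    callback(null, { debug: true });
  }, 0);
}

// Wrap the callback API: call the actual asynchronous interface inside
// the function passed to the Promise constructor, and route its outcome
// to resolve() or reject().
function readConfigAsync() {
  return new Promise(function(resolve, reject) {
    readConfig(function(err, config) {
      if (err) {
        reject(err);
      } else {
        resolve(config);
      }
    });
  });
}

readConfigAsync().then(function(config) {
  console.log(config.debug); // the resolved value flows down the chain
});
```

From here on, failure handling is just another link in the chain instead of an extra argument threaded through every call.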

The ability to chain calls might not be reason enough for the switch; what I find indispensable is Promise.all(): it manages all the Promise instances passed to it on your behalf and “resolves” the returned promise once every one of them is resolved. It’s great if you want to run multiple asynchronous actions in parallel (loading files, querying databases) and do your thing only after everything has returned. (The other utility is Promise.race(), though I have not found a use case for it myself yet.)
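A quick sketch of Promise.all(), assuming a hypothetical loadFile() loader:

```javascript
// Hypothetical asynchronous loader for illustration.
function loadFile(name) {
  return new Promise(function(resolve) {
    setTimeout(function() {
      resolve(name + ' loaded');
    }, 0);
  });
}

// Promise.all() resolves only after every passed promise has resolved;
// the results array preserves the order the promises were passed in,
// regardless of which one finished first.
Promise.all([loadFile('a.json'), loadFile('b.json')])
  .then(function(results) {
    console.log(results); // ['a.json loaded', 'b.json loaded']
  });
```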

Keep in mind there is one caveat: compared to EventTarget callbacks (i.e. event handlers), this in all Promise callbacks is always window. You should wrap your own function with bind() if you need a specific context.
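A small sketch of the caveat, with a hypothetical `logger` object: passing its method into .then() directly would lose the object as context, so bind() it first.

```javascript
var logger = {
  prefix: '[app]',
  print: function(message) {
    // `this` must be the logger object for this to work.
    return this.prefix + ' ' + message;
  }
};

// Without bind(), `this` inside print() would not be `logger`
// when the Promise machinery invokes the callback.
Promise.resolve('ready')
  .then(logger.print.bind(logger))
  .then(function(line) {
    console.log(line); // '[app] ready'
  });
```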

The not-so-great alternatives

Before the Promise interface assumed its throne in the Kingdom of Asynchronous Control, there were a few alternatives.

One is the DOMRequest interface. It feels “webby” because it inherits from the infamous EventTarget interface. If you have ever added an event listener to an HTML element, you have already worked with EventTarget. A lot of JavaScript developers (or jQuery developers) don’t work with the EventTarget interface directly because they use jQuery, which absorbs the verboseness of the interface (and the differences between browser implementations). DOMRequest, being an asynchronous control interface that simply dispatches success and error events, is inherently verbose and thus unpopular. For example, you may find yourself fighting with the DOMRequest interface if you want to do things with IndexedDB.
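To illustrate the success/error event pattern these request objects use, here is a sketch with a hypothetical `FakeRequest` stand-in (real DOMRequests can only be created by native code), plus the usual trick of wrapping such a request into a Promise:

```javascript
// Hypothetical stand-in for a DOMRequest-style object: it asynchronously
// sets .result and fires its onsuccess handler, like IndexedDB requests do.
function FakeRequest() {
  var self = this;
  setTimeout(function() {
    self.result = 42;
    if (typeof self.onsuccess === 'function') {
      self.onsuccess({ target: self });
    }
  }, 0);
}

// The verbose part: every request needs both handlers assigned by hand.
// Wrapping it once in a Promise hides that verbosity from callers.
function promisify(request) {
  return new Promise(function(resolve, reject) {
    request.onsuccess = function(evt) { resolve(evt.target.result); };
    request.onerror = function(evt) { reject(evt.target.error); };
  });
}

promisify(new FakeRequest()).then(function(result) {
  console.log(result); // 42
});
```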

Another terrible issue with DOMRequest is that its use is entirely reserved for native code, i.e. you cannot call new DOMRequest() and return the instance from a method of your own JavaScript library. (Likewise, your JavaScript objects cannot inherit from EventTarget either, which is why people turned to EventEmitter, or hopelessly dispatch custom events on the window object. It also means that to mock APIs that inherit from EventTarget and/or return DOMRequests, you must mock those interfaces too.)

Unfortunately, given that the B2G project (Firefox OS) was launched back in 2011, many of its Web API methods return DOMRequest, and new methods on these APIs will continue to return DOMRequest for consistency.

The other alternative would be rolling your own implementation of generic asynchronous code. In the Gaia codebase (the front-end system UI and preloaded web apps for B2G), there are tons of examples, because just like many other places in Mozilla, we are infected with Not-Invented-Here syndrome. The practice shoots us in the foot because what is thought to be easily done is actually hard to do right. For example, suppose you have the following function:

function loadSomething(id, callback) {
  if (isThere(id)) {
    getSomething(id, callback);

    return;
  }

  var xhr = new XMLHttpRequest();
  // ...
  xhr.onloadend = function() {
    registerSomething(id, xhr.response);
    callback(xhr.response);
  };
  // ...
}

To the naïve eye there is nothing wrong with it, but if you look closely you will realize this function does not invoke the callback asynchronously every time. If I want to use it:

loadSomething(id, function(data) {
  console.log(1, data);
}); 
console.log(2);

The timing of 1 is non-deterministic; it might log before 2, or after. This creates Schrödinger bugs and races that are hard to reproduce and fix.

You might think a simple solution to the problem above would be to wrap the synchronous getSomething() call in setTimeout(). That does solve the problem, but it comes with issues of its own, not to mention that it further contributes to the complexity of the code. Wrapping the entire function in a Promise, instead, guarantees the callbacks run asynchronously even when you have the data cached.
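A sketch of that rewrite: the cache and `fetchSomething()` below are hypothetical stand-ins for the isThere()/getSomething()/XHR details stripped from the example above.

```javascript
var cache = {};

// Hypothetical asynchronous fetch, standing in for the XHR details.
function fetchSomething(id) {
  return new Promise(function(resolve) {
    setTimeout(function() {
      resolve('data for ' + id);
    }, 0);
  });
}

function loadSomething(id) {
  if (id in cache) {
    // Even on a cache hit, Promise.resolve() guarantees that the
    // .then() callback runs asynchronously, so callers always see
    // the same ordering whether the data was cached or not.
    return Promise.resolve(cache[id]);
  }
  return fetchSomething(id).then(function(data) {
    cache[id] = data;
    return data;
  });
}

loadSomething('a').then(function(data) {
  console.log(1, data);
});
console.log(2); // always logs before 1, cached or not
```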

(Keep in mind that the example above has lots of detail stripped; good luck finding the same pattern when someone else hides it in a 500-line function among 10 callbacks.)

Not-Invented-Here syndrome contributes to other issues too, as in every other software project: more code means more bugs, and more overhead for other engineers to pick things up.

Conclusion

In the B2G project, we want to figure out what’s needed for the Web to be considered a trustworthy application platform. The focus has been enabling hardware access for web applications (though sadly many of those APIs were then restricted to packaged apps because of their proprietary nature and security model), yet I think we should be putting more focus on advancing common JavaScript interfaces like Promise. I can’t say for sure that every innovation nowadays is a valid solution to the problem it addresses. However, as the saying goes, the first step toward fixing a problem is to admit there is one. Without advances in this area, the browser as an application runtime will be left as-is, filled with legacies from its document-reader era and forcing developers to load common libraries to shim it. It would be “a Web with kilobytes of jquery.js overhead,” as one smart man once told me.

(That’s one reason I kept mentioning EventTarget vs. EventEmitter in this post: contrary to Promise vs. DOMRequest, the EventEmitter use case has not yet been fulfilled by the platform implementations.)


The answer to the question at the beginning is: 1, 3, 6, 8, 2, 5, 4, 7. Since all the callbacks are asynchronous except (1), only (1) happens before (3), (6), and (8). Promise callbacks (2) and (5) run asynchronously, but they are dispatched as microtasks and therefore return before the setTimeout callbacks (4) and (7).

The Fourth Nuclear Power Plant Issue and the Arrogance of STEM People

A nuclear power plant is not an atomic bomb

A nuclear power plant cannot go off in a nuclear explosion. Taken from the Ministry of Economic Affairs Facebook fan page.

Yes, I too know that a nuclear power plant is not an atomic bomb: it will not go off in a nuclear explosion, flattening everything for kilometers around and sending up a mushroom cloud. I did earn a physics degree, and even though my grades were poor and I have since changed careers, I still have that much common sense.

Still, I specifically want to write in opposition to the construction of the fourth nuclear power plant. I am not against nuclear power: to put it flippantly, if a civilization cannot effectively harness nuclear energy, how will it ever develop warp drives and interstellar travel? And if nuclear power can replace fossil-fuel generation, it can help slow carbon emissions. But I believe that supporting a nuclear power plant as public infrastructure purely out of faith in science and engineering is nothing but the arrogance of STEM people. A nuclear power plant, and this world, does not run smoothly merely by mastering cold physical laws; the rule of law and effective governance matter far more than physics and engineering. I am terrified that the government we voted into office, which cannot ensure food safety, judicial independence, or equality of economic opportunity, claims it can properly run the governance of a nuclear power plant, even though I know the science and engineering that make nuclear plants work have been understood for half a century.

The era we live in is a complex one, and knowledge is unevenly distributed across fields and regions. Compared to theoretical physics, political philosophy and governance may still be stuck in the Enlightenment, and their practical implementation lags even further behind. Gaps in science and engineering take only decades to spread and level out across global civilization, but governance takes centuries, and at some levels can only mature through the fermentation of local culture. Last year, But translated the investigation report on the Fukushima nuclear power plant accident. The report can be read as a study of how and why governance failed in another country. Citing this document in no way implies that if even Japan could not do it right, Taiwan never could; but as a voter I find it worth reading closely, while asking myself whether the government officials we elect and the institutions we establish can prevent the same problems from happening here.


Back to the “a nuclear power plant is not an atomic bomb” advertisement. Looking at it the other way, the reason a clarification like “a nuclear power plant is not an atomic bomb” exists at all is probably that some anti-nuclear groups have been campaigning with anti-intellectual, fear-mongering fallacies along the lines of “a nuclear power plant is an atomic bomb.” The same groups often resort to mysticism to oppose nuclear power, with claims like “nuclear power is so complex that humanity can never master it.” If a social movement’s arguments stay at this level, then next time don’t blame the government for humiliating people with propaganda like “this proposal is really complicated” or “it cannot be explained clearly in a few simple sentences.”


If, after reading this humble piece, you would like to understand more of the reasons, beyond science and engineering, why Taiwan should not continue building the fourth nuclear power plant, please see the position paper of the National Nuclear Abolition Action Platform (全國廢核行動平台).


Edit (4/24): Removed the comparison between theoretical physics and political philosophy/governance. My original intent was to point out the helplessness of the scientific method in the face of the complex knowledge outside the natural sciences, not the literal “backwardness” of that knowledge. Removing the sentence does not affect this post’s argument (that public infrastructure requires knowledge beyond the natural sciences); quite the opposite: governance and political philosophy are immensely important to the construction of a nuclear power plant. My apologies to any readers who found it disrespectful.

Personal Privacy and Security of Data on the Network: Cookies and HTTPS Encryption

Back in February, at the invitation of Professor 莊庭瑞 (Tyng-Ruey Chuang), I wrote a piece for the spring issue of the Taiwan Association for Human Rights quarterly, themed “Civil Rights in the Internet Age,” introducing cookies and HTTPS encryption, the controversies around them, and the tools individuals can use. Everyone invited, other than humble me, was a top pick (laughs); if there is anything wrong with the content, please let me know.

The published text (PDF) was abridged for length; the full text is posted here.

Continue reading