
Monday, October 25, 2010

JavaScript Coercion Demystified

This post is another complementary one for my front-trends slides, about the performance and safety behind sth == null rather than the classic sth === null || sth === undefined.
I have already discussed this in my JSLint: The Bad Part post, but I have never gone deeper into the topic.

Falsy Values

In JavaScript, and not only JavaScript, we have so-called falsy values. These are: 0, null, undefined, false, "", and NaN. Please note the empty string must be truly empty: unlike PHP, for example, "0" is considered truthy, and this is why we need to explicitly enforce a number cast, especially if we are dealing with an input text value.
In many other languages we may consider even objects such as arrays or lists falsy:

<?php
if (array())
    echo 'never';
?>

# python
if []:
    print 'never'


The above examples will not work in JavaScript, since an Array is still an instanceof Object.
Another language with falsy values is lower-level C, where 0, for example, is considered false inside an if statement.
Falsy values are important to understand, especially if we would like to understand coercion.
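
Just to put the falsy list and the "0" gotcha into code, here is a minimal snippet, nothing more than a console sanity check:

// the six falsy values: every Boolean cast here is false
var falsy = [0, null, undefined, false, "", NaN];
for (var i = 0; i < falsy.length; ++i) {
    alert(Boolean(falsy[i])); // false, six times
}

// "0" is a non empty string, hence truthy, differently from PHP
alert(Boolean("0")); // true

// an empty Array is still an Object, hence truthy as well
alert(Boolean([])); // true

// enforcing a number cast, e.g. for an input text value
alert(Boolean(Number("0"))); // false, as expected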

About Coercion

In the JavaScript world, coercion is considered a sort of evil, unexpected implicit cast, while in my opinion it's simply a feature, provided we understand it and know how to use it.
Coercion happens only via the == (eqeq) operator, and the thing is that it is properly implemented across browsers, in accordance with the ECMAScript 3 standard.
This is the scary list newcomers have probably never read, taken directly from the even newer ECMAScript 5 specification, just to show that coercion will most likely always be there, and nothing about it is evil.

The comparison x == y, where x and y are values, produces true or false. Such a comparison is performed as follows:


  1. If Type(x) is the same as Type(y), then

    1. If Type(x) is Undefined, return true: undefined == undefined

    2. If Type(x) is Null, return true: null == null

    3. If Type(x) is Number, then

      1. If x is NaN, return false: NaN != NaN

      2. If y is NaN, return false: NaN != NaN

      3. If x is the same Number value as y, return true: 2 == 2

      4. If x is +0 and y is −0, return true: 0 == 0

      5. If x is −0 and y is +0, return true: 0 == 0

      6. Return false: 2 != 1


    4. If Type(x) is String, then return true if x and y are exactly the same sequence of characters (same length and same characters in corresponding positions). Otherwise, return false: "a" == "a" but "a" != "b" and "a" != "aa"

    5. If Type(x) is Boolean, return true if x and y are both true or both false. Otherwise, return false: true == true and false == false but true != false and false != true

    6. Return true if x and y refer to the same object. Otherwise, return false: var o = {}; o == o but o != {} and {} != {} and [] != [] ... etc etc, all objects are eqeq only if they are the same object


  2. If x is null and y is undefined, return true: null == undefined

  3. If x is undefined and y is null, return true: undefined == null

  4. If Type(x) is Number and Type(y) is String, return the result of the comparison x == ToNumber(y): 2 == "2"

  5. If Type(x) is String and Type(y) is Number, return the result of the comparison ToNumber(x) == y: "2" == 2

  6. If Type(x) is Boolean, return the result of the comparison ToNumber(x) == y: false == 0 and true == 1 but true != 2

  7. If Type(y) is Boolean, return the result of the comparison x == ToNumber(y)

  8. If Type(x) is either String or Number and Type(y) is Object, return the result of the comparison x == ToPrimitive(y): ToPrimitive means implicit valueOf call or toString if toString is defined and valueOf is not
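
Translating a few of those steps into code, these are the kind of checks anyone can verify in a console:

// step 1.3: NaN is never equal to anything, itself included
alert(NaN == NaN);        // false

// steps 2 and 3: null and undefined are eqeq each other, and nothing else
alert(null == undefined); // true
alert(null == 0);         // false

// steps 4 and 5: Number vs String means ToNumber on the String side
alert(2 == "2");          // true

// steps 6 and 7: Booleans pass through ToNumber first
alert(false == 0);        // true
alert(true == 1);         // true
alert(true == 2);         // false, since ToNumber(true) is 1

// step 1.6: objects are eqeq only when they are the same reference
alert([] == []);          // false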


Regarding the last point, this is the object coercion we are all scared of ...

var one = {
    valueOf: function () {
        return 1;
    },
    toString: function () {
        return "2";
    }
};

alert(one == 1);   // true
alert(one == "2"); // false

If we remove the valueOf method, we will implicitly call toString instead, so that one == "2" becomes true or, more generally, {} == "[object Object]".
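
In other words, dropping valueOf from the previous example we fall back to toString:

var two = {
    toString: function () {
        return "2";
    }
};

alert(two == "2"); // true, toString used as ToPrimitive
alert(two == 2);   // true as well, "2" then goes through ToNumber

// note the parentheses, needed to avoid a block statement
alert(({}) == "[object Object]"); // true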

null == undefined And null == null, Nothing Else!

99% of the time we write a check such as:

function something(arg) {
    if (arg === undefined) {
    }
}

We are asking the engine to resolve the undefined variable through every outer scope, up to the global one, where it may even have been redefined.
Even worse, we may end up with the silliest check ever:

function something(arg) {
    if (arg === undefined || arg === null) {
    }
}

which shows plainly how little we know JavaScript, being the exact equivalent of:

function something(arg) {
    if (arg == null) {
    }
}

with these nice-to-have differences:

  • null cannot be redefined, being NOT a variable

  • null does NOT require scope resolution, nor a lookup up to the global scope

The only case where the == null check is not enough is when, for that particular case, we want to treat null and undefined as different values .... now think how many times you do this ... and ask yourself why ...
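
And if we truly need to distinguish the two, it's still a single extra check away; otherwise arg == null covers both, as in this quick sketch:

function something(arg) {
    if (arg == null) {
        // arg is either null or undefined, nothing else
        return "no value";
    }
    return "got: " + arg;
}

alert(something());     // "no value", undefined
alert(something(null)); // "no value"
alert(something(0));    // "got: 0", falsy but not null-ish
alert(something(""));   // "got: ", the empty string passes as well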

Performance

Once again I point you to this simple benchmark page: if you click on null VS undefined or null VS eqeq, or one of the lines shown under the header, you can see that the == null check is usually faster and safer, while providing even more control than the ! (not) operator.
The only way to reach better performance when we want to compare against an undefined value in a safer way is to declare an undefined variable locally, without assigning any value, so that minifiers can shrink its name while the check stays safe.

// whatever nested scope, where "a" is some array already around ...
for (var undefined, i = 0; i < 10; ++i) {
    a[i] === undefined && (a[i] = "not anymore");
}
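
For those who cannot reach the benchmark page, this is roughly the kind of loop it runs ( the time helper below is just an illustrative stand-in, not the benchMark function used in the page ):

// timing "=== undefined" against "== null" over many iterations
function time(label, fn) {
    var i = 100000,
        t = new Date();
    while (i--) fn();
    alert(label + ": " + (new Date() - t) + "ms");
}

var arg; // declared, never assigned: a safe local undefined

time("eqeqeq undefined", function () {
    return arg === undefined;
});
time("eqeq null", function () {
    return arg == null;
});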


There Is NO WTF In JavaScript Coercion!

Coercion in JavaScript is well described and perfectly respected cross browser, being something extremely simple to implement in whatever engine. The rules are there, and if we know what kind of data we are dealing with, we can always decide to be even safer and faster.
Of course, if we are not aware of them, the strict equality operator === is absolutely the way to go, but, for example, how many times have you written something like this?

if (typeof variable === "string") ...

typeof is an operator we cannot overwrite or change, and since typeof always returns a string there is no reason at all to use the strict eqeqeq operator: String === String behaves exactly the same as String == String by specs.
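
Since typeof cannot lie about its returned type, the eqeq version below behaves exactly like the strict one:

function isString(value) {
    // typeof always returns a string, so == here is as safe as ===
    return typeof value == "string";
}

alert(isString("hello"));          // true
alert(isString(123));              // false
alert(isString(new String("hi"))); // false, it's an object wrapper
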
Moreover, as said before, coercion can absolutely be intended in some cases; check what we can do in other languages, for example:

# Python 3
class WTF:
    def __eq__(self, value):
        return value == None

# Python 2
class WTF():
    def __eq__(self, value):
        return value == None

# in both cases ...
if WTF() == None:
    """WTF!!!"""

While this is a C# example:

using System;

namespace wtf {

    class MainClass {

        public static void Main (string[] args) {
            WTF self = new WTF();
            Console.WriteLine (
                self ?
                    "true" : "false"
            );
        }
    }

    class WTF {
        static public implicit operator bool(WTF self) {
            return false;
        }
    }

}

Can we consider these latest cases some sort of coercion as well? It's simply operator overloading, virtually the same thing JavaScript engines implement behind the scenes in order to honour the specification steps when an eqeq is encountered.
Have fun with coercion ;-)

Sunday, October 24, 2010

The Layer ... Of The Layer ... Of The Layer ...

When I read tweets like this one I cannot avoid a quick comment, but the reason I am posting is simply to explain that every time we write a web page/application, we are dealing with at least 4 different layers.
Moreover, this post is complementary to a few slides I presented at front-trends, especially regarding the "avoid classic OOP emulation when not necessary" point.

Layer #1: JavaScript Libraries

We all know the DOM is a mess, and this is most likely the reason we choose a JS library rather than dealing directly with all the possible problems we can run into when we develop an (x)HTML page.
Even if many developers don't care, I keep saying that every millisecond gained in this first layer, the page itself, is important.
Moreover, if we have a good understanding of the JavaScript programming language, we can easily realize that all these "Java Pretending Style Frameworks" emulating classic inheritance and OOP are neither easier to maintain nor faster for what we need on mobile devices, netbooks included.
The "easier to maintain" fuss associated with "Java style JavaScript", a sentence that does not make sense in itself, is only a Java developer point of view.
Well written JavaScript, without any "wannabe another language" pragmas, is truly much easier to understand, write, modify, or fix, while the great magic behind this or that framework/library could become our first enemy when something goes wrong and we would like to understand and debug that magic, because we have a problem and a strict deadline that may not match a bug's lifecycle.
Finally, as easily demonstrated via this test page, we can all see how much more it costs to simply initialize a new instance of a constructor, compared with the proper native JavaScript way, in that case made easier by this essential script, developed following TDD and tested here cross browser.
Anyway, common sense first and fast production quality should always be kept in mind when we choose one approach rather than another. Here frameworks usually play quite a good role, the one of bringing the same functionality cross browser.
But what is a browser?

Layer #2: The Browser

As libraries are considered an abstract way to reach the same goal in all browsers, browsers are simply abstract applications able to bring the web cross platform.
This is where browser speed may vary, according to the platform, and where every technique able to speed up rendering ( the DOM+CSS engine, such as Gecko, Trident, and others ) and JavaScript ( the engine apart, such as V8, JavaScriptCore, SpiderMonkey ) is more than welcome. These guys implement every sort of trick to make the page and the code that fast, even if they have to deal with different operating systems. And guess what an operating system is?

Layer #3: The Operating System

We are lucky when the browser deals directly with the operating system graphics API, since many other middle layers could be part of this stack ( Flash or third-party plugins, for example ).
You cannot expect that Linux, Mac, and Windows, just to mention a few desktop-related ones ( there are more choices in the mobile world ), magically display and provide browser functionality via the same API. We would need something like a jOSQuery library here to make it happen ... but even worse, every operating system may have another abstract layer able to use, for example, Hardware Acceleration.

Layer #4: The Hardware

OpenGL ES 2.0 is simply another abstraction able to transform API calls into specific hardware driver calls, which means that, starting back from the DOM and the WebGL or CSS3 used with HW support, things have been modified, translated, and re-created at least a couple of times.
In a few words, if we ask for too much in the first abstract layer, the first being the slowest one, nothing can be that fast.

As Summary

We, as web or scripting language developers, rarely think that performance on the highest level can be that important, but unfortunately that highest level is also the slowest one, so, especially if we would like to reach the best frame rate via canvas, WebGL, or CSS3 animations, it's highly recommended to be sure that the strategy/code we are using is the best one for our requirements.
For example, if we spend just one millisecond more creating each object we need for a single frame, we can easily drop from 30fps, a decent visual framerate, to 29 or less, where things start to look slower to our eyes ...
Finally, kudos to Opera Mini and its growing market share. I am pretty sure it will soon become the IE of mobile platforms, making developers' lives easier, being a portable browser fallback for whatever website or application, hopefully without all the IE problems we all know.

Thursday, October 21, 2010

Front Trends 2010 - My Talk

My talk is over; there was probably too much stuff to talk about and it was hard to make a clear point, but I am willing to better explain myself by posting here about the main points.

Slides without me trying to show stuff online probably do not make much sense, but here they are: ft2010 WebReflection Slides

The benchmark I showed, which should run in any browser ( at least the A-grade ones ), is here. During the talk I tried to explain what each test means. You can grab the benchMark function from the source; it's simple, but it roughly did the job.

The "problematic parent" example I showed is here.

Tests for my essential Class are here.

Talk Summary


I tried to explain that sometimes we should take care of performance techniques, according to the goal and the target.
I showed how things can go slow on whatever netbook, regardless of hardware acceleration.
What I have probably not been able to explain is that it's up to our common sense to decide when we should avoid common good practices or not, and I won't link here the IE9+iPhone/iPad canvas experiment since I would like to talk with the openstreetmap.org guys first and eventually create a proper GitHub project for their slickviewer, mobile version.

Thanks everybody for listening, all the best.

Sunday, October 17, 2010

Pre Authorization Meta Tag Proposal

Under the HTML5 flag, browsers are bringing to our desktops and devices exciting features such as GeoLocation, File, and many others such as camera, microphone, system access, etc ...

The Problem

While this is good from the possibilities point of view, the activation approach has historically sucked. Flash, with its video or microphone activation, shows a dialog that asks the user to authorize media access, while browsers lazily ask the same via JavaScript. The real life scenario we all know is definitely different when a page is used through a screen reader, and this article and video about the Twitter UX should be enough to open our eyes: something is wrong.

Solutions

If we have ever used an Android device, or downloaded applications from whatever mobile store, we should be familiar with the installation warnings and confirmations we are supposed to read and accept in order to grant the application access to anything it needs to work properly.
This simply means that once we accept, we won't be bothered anymore and the application can work as expected.
It's all about privileges, and in my opinion it would be nice to have a similar approach in our browsers as well, also to bring web applications even closer to native ones.


Proposal

Following the "don't break the WEB" approach, all we could do is put a meta tag, as we do for viewports, specifying all those Technologies/APIs we would like to use in our webpage. This is an example:

<meta name="grant" content="GeoLocation,Camera" />
<meta name="grant" content="System" />

The proposal is to ask, at the very beginning and only once, whether the webpage may access these functionalities, so that the user can decide up front what should be available and what should not.
The "before" part is important because, in this Ajax era, it's extremely easy to lose focus at runtime due to whatever activation request, and this is just wrong.
The list of granted APIs should be reachable via the userAgent through a specific method such as:

navigator.hasGranted("GeoLocation")

or similar, so that we can eventually decide via JavaScript whether we would like to ask again at runtime, as we do now, or simply provide an alternative solution or message, maybe showing an "ask again to activate" button, remembering to put the focus back in the right context.
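
To make the idea more concrete, this is how such a check could look in a page ( navigator.hasGranted does not exist anywhere yet, it is just part of this proposal, and showMap / showActivationButton are placeholder functions ):

// purely hypothetical: hasGranted is part of this proposal only
if (navigator.hasGranted && navigator.hasGranted("GeoLocation")) {
    // granted upfront via the meta tag, no runtime dialog needed
    navigator.geolocation.getCurrentPosition(showMap);
} else {
    // not granted, or proposal not implemented: show an explicit
    // "ask again to activate" button and put the focus back afterwards
    showActivationButton();
}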

Alternative

Web developers could implement a similar concept by asking, with the very first script, access to one or more APIs and then putting the page focus back. With GeoLocation, as an example, the user would choose their preferences immediately, without surprises in the middle of a session, or later.

// As Soon As Possible
var HAS_GEO_LOCATION = false;
try {
    navigator.geolocation.getCurrentPosition(function (result) {
        // so that other scripts could check if already available
        HAS_GEO_LOCATION = true;
        // notify the event in any case
        var evt = document.createEvent("Event");
        evt.initEvent("geolocationready", 1, 1);
        evt.data = result;
        // implicit window context
        dispatchEvent(evt);
    });
} catch(e) {}

A usage example could be something like:

// script loaded after a while ...

if (HAS_GEO_LOCATION) {
    doFreakingCoolStuffWithGeo();
} else {
    addEventListener(
        "geolocationready",
        doFreakingCoolStuffWithGeo,
        false
    );
}


We can eventually specify the error callback as well, but actually that case is already covered by the HAS_GEO_LOCATION value: if there is an error, there is no reason to insist, we simply assume there is no geolocation and go with the fallback, if any.
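
Should we want it anyway, the error callback is simply the second argument of getCurrentPosition, so the snippet above could be extended like this, still setting HAS_GEO_LOCATION only on success:

navigator.geolocation.getCurrentPosition(
    function (result) {
        HAS_GEO_LOCATION = true;
        var evt = document.createEvent("Event");
        evt.initEvent("geolocationready", 1, 1);
        evt.data = result;
        dispatchEvent(evt);
    },
    function (error) {
        // denied, unavailable, or timed out: no reason to insist,
        // HAS_GEO_LOCATION stays false and the fallback, if any, kicks in
    }
);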

What do you think?

Friday, October 15, 2010

Technical Reviews: Bestsellers!

Just a quick one about two technical reviews out of two I have recently done for @stoyanstefanov and @cjno, for these completely different books: JavaScript Patterns and Test-Driven JavaScript Development.

Right now both are Top 10 Bestsellers and trust me: other JavaScript Jedis have been involved, you won't regret these reads! ;-)



Tuesday, October 05, 2010

JavaScriptCore via Terminal

Just a quick one, maybe only for a Mac newcomer like me: I found it truly annoying that I already have Python, Ruby, and even PHP available everywhere on my command line, but not JavaScript.

What The Fuck

Even Windows has run .js files natively for ages, so I wonder why on earth, after downloading the whole XCode SDK, "my JavaScript" was not available there for all my needs.

OK, OK, node.js is already in /bin, linked and working properly, but now I also have the system default JavaScript engine that comes automatically with WebKit, or the "IE for Mac" aka Safari.

How to link jsc to bin folder

A title that produces zero results on Google could hopefully be better addressed via this blog, and this is how I solved it:

sudo ln -F /System/Library/Frameworks/JavaScriptCore.framework/Versions/Current/Resources/jsc /usr/bin

The Current folder is a link to the latest version; all those threads about the ..../A/... folder are not automatically updated if A becomes B, for example.
So, now I can type jsc wherever I am and use JavaScript power whenever I want, typing just quit() whenever I need to exit.
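
As a quick sanity check, any plain script using the jsc shell globals ( print, quit, and friends ) will do; for instance, assuming a hello.js file like this, launched via jsc hello.js:

// hello.js - print() is provided by the jsc shell, not by browsers
print("JavaScriptCore says: " + [1, 2, 3].join(" + ") + " = " + (1 + 2 + 3));
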
I hope this helps, it took a while for me to sort it out.

Saturday, October 02, 2010

Apple UX Fails with Mac Mini

Update
Following Daniel's suggestion (first comment), I grabbed a wired keyboard and a wired mouse from a colleague and was able to finish the initial procedure. Happy to be a Mac Mini user now, it works like a charm!


As tweeted already, apparently there's no way I can buy a dishwasher. Last time I almost came back home with a pretty cool Samsung Blu-ray player ... but I bought nothing; this time I made a mistake: I bought a Mac Mini.
My new Mac Mini has been switched on for no longer than 10 minutes, and right now I am still unable to use it ... and I would like to tell you the story ...

Nothing On The Package

It's clean and small, with no requirements or dependencies specified anywhere. The package being obviously closed, I have not been able to RTFM.
The only thing I took care of was the absence of an HDMI cable ... not a big deal, I bought one and this, at least, works like a charm!

Pretty Cool And Useless Gadgets

Wireless keyboard and wireless trackpad: these gadgets are a must have for my room configuration, a 32" LED Sony Bravia screen with Full HD capability and a comfortable (and cheap) sofa about 1 meter away from the screen. The plan was perfect, the result still a disaster.

Totally Stuck On First Run



As you can see from this picture, there is nothing I can do. Trust me, both the keyboard and the trackpad are switched on, and the lovely King of Operating Systems is unable to recognize them and, for this reason, unable to let me use the considerable amount of money I spent a few minutes before.

Epic Fail

All Apple gadgets and products come ready and easy to use; this is key for this company and the reason I gave up trying to avoid its products ... they are simply great.
I could never have expected such a problem using all official Apple/Mac stuff, and whether this is a known issue or a marketing strategy (I should buy a mouse now, uh? I won't!), it's a massive hole in the overall UX excellence we think of when we talk about Apple.
Please put massive cubic labels on each Mac Mini saying: without our mouse, you can't do anything!

wru against wru: version 1 ready

This shot was taken on 13th September 1923, when W.H. Murphy demonstrated the efficiency of his bulletproof vest, the one he later sold to the NY Police Department.

The above image has historically been used for different topics, and the current one is "how much we trust what we sell".

Do You Trust Your UT Framework?

I wasn't kidding that much when I wrote about "test the testing framework" in my previous wru post. The overall code coverage of Unit Test frameworks is poor, especially those with all the magic behind the scenes, magic that does not come for free.

Use The Framework Itself To Test The Framework

This is a common technique that may result in a reliability deadlock. If we trust our UT framework and we use it to test itself, the moment we have a problem with the framework we'll never know ... or even worse, it will be too late.

Don't Trust "Magic" Too Much

If the framework is simple and does not pollute each test with any sort of crap, we can surely use it to test the framework itself without problems, while if the framework elaborates and transforms our tests, it may become impossible to test it via the framework itself due to scope and possibly context conflicts for each executed test.
This may produce a lot of false positives for something that is theoretically in place to make our code more robust.

The wru KISS approach

With or without async.js, wru does few things, and now it's proved that it does these things properly. I don't want to spend any extra time on a library that I should trust 100%, and if I know that everything this library should do works as expected, I can simply "forget to maintain it" (improve it, eventually) and use it feeling safer.

99.9% Of Code And Cases Coverage

Loaded on top of whatever test, wru uses few native calls, and the assumption is that these already work as expected (according to the ES3 or ES5 specs).
The new wru test against wru covers all possible test combinations, where setup and teardown could fail on wru itself or for each test, and assert should simply work within a test life-cycle. This means we can assert inside setups and teardowns as well, simply because these calls are fundamental for a single test and, if present, must be part of the test result. A problem in a setup could compromise the whole test, while a problem in a teardown could compromise other tests. Being sure that nothing went wrong is up to us but, at least, we can do it via wru.

Can you do the same with your UT Framework? Do you have the same coverage? Have fun with wru ;)