A quick overview of the freaking awesome You Don't Know JS 1st Edition (book series).
This is meant to be a quick overview of everything contained within the YDKJS book series. It can be used to quickly review certain sections or the entire series, and it's a great way for more experienced devs to get the benefits of the series without having to dig through everything. (If you're a serious front-end or web developer, you should probably read the whole series... unless you know everything already!)
BIG THANKS to Kyle Simpson (@getify), author of the You Don't Know JS 1st Edition book series. You gave me a Promise... to save me from callback hell.
- Scope & Closures
- this & Object Prototypes
- Types & Grammar
- Async & Performance
- ES6 & Beyond
- Up & Going - intro level stuff (put at the end...just because)
Chapter 1: What is Scope?
Scope is the set of rules that determines where and how a variable (identifier) can be looked-up. This look-up may be for the purposes of assigning to the variable, which is an LHS (left-hand-side) reference, or it may be for the purposes of retrieving its value, which is an RHS (right-hand-side) reference.
LHS references result from assignment operations. Scope-related assignments can occur either with the `=` operator or by passing arguments to (assign to) function parameters.
The JavaScript Engine first compiles code before it executes, and in so doing, it splits up statements like `var a = 2;` into two separate steps:

- First, `var a` to declare it in that Scope. This is performed at the beginning, before code execution.
- Later, `a = 2` to look up the variable (LHS reference) and assign to it if found.
Both LHS and RHS reference look-ups start at the currently executing Scope, and if need be (that is, they don't find what they're looking for there), they work their way up the nested Scope, one scope (floor) at a time, looking for the identifier, until they get to the global (top floor) and stop, and either find it, or don't.
Unfulfilled RHS references result in `ReferenceError`s being thrown. Unfulfilled LHS references result in an automatic, implicitly-created global of that name (if not in "Strict Mode"), or a `ReferenceError` (if in "Strict Mode").
Chapter 2: Lexical Scope
Lexical scope means that scope is defined by author-time decisions of where functions are declared. The lexing phase of compilation is essentially able to know where and how all identifiers are declared, and thus predict how they will be looked-up during execution.
Two mechanisms in JavaScript can "cheat" lexical scope: `eval(..)` and `with`. The former can modify existing lexical scope (at runtime) by evaluating a string of "code" which has one or more declarations in it. The latter essentially creates a whole new lexical scope (again, at runtime) by treating an object reference as a "scope" and that object's properties as scoped identifiers.
The downside to these mechanisms is that it defeats the Engine's ability to perform compile-time optimizations regarding scope look-up, because the Engine has to assume pessimistically that such optimizations will be invalid. Code will run slower as a result of using either feature. Don't use them.
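To illustrate what that cheating looks like, a minimal sketch (non-strict mode assumed) of `eval(..)` modifying an existing lexical scope at runtime:

```js
function foo(str, a) {
  eval(str);         // cheating: the evaluated string declares `b` inside foo()'s scope
  console.log(a, b); // 1 3
}

foo("var b = 3;", 1);
```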
Chapter 3: Function vs. Block Scope
Functions are the most common unit of scope in JavaScript. Variables and functions that are declared inside another function are essentially "hidden" from any of the enclosing "scopes", which is an intentional design principle of good software.
But functions are by no means the only unit of scope. Block-scope refers to the idea that variables and functions can belong to an arbitrary block (generally, any `{ .. }` pair) of code, rather than only to the enclosing function.
Starting with ES3, the `try/catch` structure has block-scope in the `catch` clause.
In ES6, the `let` keyword (a cousin to the `var` keyword) is introduced to allow declarations of variables in any arbitrary block of code. `if (..) { let a = 2; }` will declare a variable `a` that essentially hijacks the scope of the `if`'s `{ .. }` block and attaches itself there.

Though some seem to believe so, block scope should not be taken as an outright replacement of `var` function scope. Both functionalities co-exist, and developers can and should use both function-scope and block-scope techniques where respectively appropriate to produce better, more readable/maintainable code.
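A short illustrative sketch contrasting the two (the names here are hypothetical):

```js
function process(data) {
  var result = data.length; // `var` scopes `result` to the whole function

  if (result > 0) {
    let message = "found " + result + " items"; // `let` scopes `message` to this { .. } block only
    console.log(message);
  }

  // console.log(message); // ReferenceError: message is not defined
  return result;
}

process([1, 2, 3]); // logs "found 3 items"
```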
NOTE of interest: Principle of Least Privilege
Chapter 4: Hoisting
We can be tempted to look at `var a = 2;` as one statement, but the JavaScript Engine does not see it that way. It sees `var a` and `a = 2` as two separate statements, the first one a compiler-phase task, and the second one an execution-phase task.
What this leads to is that all declarations in a scope, regardless of where they appear, are processed first before the code itself is executed. You can visualize this as declarations (variables and functions) being "moved" to the top of their respective scopes, which we call "hoisting".
Declarations themselves are hoisted, but assignments, even assignments of function expressions, are not hoisted.
Be careful about duplicate declarations, especially mixed between normal var declarations and function declarations -- peril awaits if you do!
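A quick sketch of what hoisting does and does not do (names are illustrative):

```js
foo(); // works, because the function declaration is hoisted in its entirety

function foo() {
  console.log(a); // undefined -- the declaration `var a` is hoisted, the assignment `a = 2` is not
  var a = 2;
  console.log(a); // 2
}

bar(); // TypeError -- `var bar` is hoisted (so no ReferenceError), but the function
       // expression assignment has not happened yet, so `bar` is still undefined here

var bar = function () {
  console.log("never reached in this run");
};
```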
Chapter 5: Scope Closure
Closure seems to the un-enlightened like a mystical world set apart inside of JavaScript which only the few bravest souls can reach. But it's actually just a standard and almost obvious fact of how we write code in a lexically scoped environment, where functions are values and can be passed around at will.
Closure is when a function can remember and access its lexical scope even when it's invoked outside its lexical scope.
Closures can trip us up, for instance with loops, if we're not careful to recognize them and how they work. But they are also an immensely powerful tool, enabling patterns like modules in their various forms.
Modules require two key characteristics: 1) an outer wrapping function being invoked, to create the enclosing scope 2) the return value of the wrapping function must include reference to at least one inner function that then has closure over the private inner scope of the wrapper.
Now we can see closures all around our existing code, and we have the ability to recognize and leverage them to our own benefit!
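A minimal sketch of the module pattern described above (the `CoolModule`/`counter` names are just illustrative):

```js
function CoolModule() {
  var count = 0; // private state, hidden inside the wrapping function's scope

  function increment() {
    count++;
    return count;
  }

  // the returned object exposes an inner function that has closure over `count`
  return {
    increment: increment
  };
}

var counter = CoolModule();       // 1) the outer wrapping function is invoked
console.log(counter.increment()); // 1 -- 2) the inner function still "remembers" the private scope
console.log(counter.increment()); // 2
```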
Quick Snippet: To be clear, JavaScript does not, in fact, have dynamic scope. It has lexical scope. Plain and simple. But the `this` mechanism is kind of like dynamic scope.

The key contrast: lexical scope is write-time, whereas dynamic scope (and `this`!) are runtime. Lexical scope cares where a function was declared, but dynamic scope cares where a function was called from.

Finally: `this` cares how a function was called, which shows how closely related the `this` mechanism is to the idea of dynamic scoping. To dig more into `this`, read the title "this & Object Prototypes".
Quick Snippet: Let me add one last quick note on the performance of `try/catch`, and/or to address the question, "why not just use an IIFE to create the scope?"

Firstly, the performance of `try/catch` is slower, but there's no reasonable assumption that it has to be that way, or even that it always will be that way. Since the official TC39-approved ES6 transpiler uses `try/catch`, the Traceur team has asked Chrome to improve the performance of `try/catch`, and they are obviously motivated to do so.

Secondly, IIFE is not a fair apples-to-apples comparison with `try/catch`, because a function wrapped around any arbitrary code changes the meaning, inside of that code, of `this`, `return`, `break`, and `continue`. IIFE is not a suitable general substitute. It could only be used manually in certain cases.

The question really becomes: do you want block-scoping or not? If you do, these tools provide you that option. If not, keep using `var` and go on about your coding!
Quick Snippet: Though this title does not address the `this` mechanism in any detail, there's one ES6 topic which relates `this` to lexical scope in an important way, which we will quickly examine.

ES6 adds a special syntactic form of function declaration called the "arrow function". It looks like this:
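(A minimal illustrative sketch; the `foo` name is hypothetical.)

```js
var foo = a => {
  console.log(a);
};

foo(2); // 2
```

Aside from the shorter syntax, the important detail is that an arrow function adopts the `this` of its enclosing scope rather than getting its own binding, which is exactly how it relates to lexical scope.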
Chapter 1: `this` Or That?
`this` binding is a constant source of confusion for the JavaScript developer who does not take the time to learn how the mechanism actually works. Guesses, trial-and-error, and blind copy-n-paste from Stack Overflow answers are not an effective or proper way to leverage this important `this` mechanism.
To learn `this`, you first have to learn what `this` is not, despite any assumptions or misconceptions that may lead you down those paths. `this` is neither a reference to the function itself, nor is it a reference to the function's lexical scope.

`this` is actually a binding that is made when a function is invoked, and what it references is determined entirely by the call-site where the function is called.
Chapter 2: `this` All Makes Sense Now!
Determining the `this` binding for an executing function requires finding the direct call-site of that function. Once examined, four rules can be applied to the call-site, in this order of precedence:

1. Called with `new`? Use the newly constructed object.
2. Called with `call` or `apply` (or `bind`)? Use the specified object.
3. Called with a context object owning the call? Use that context object.
4. Default: `undefined` in `strict mode`, global object otherwise.
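A brief sketch of the four rules applied at different call-sites (the names are just illustrative):

```js
function hello() {
  console.log(this.name);
}

var obj = { name: "obj", hello: hello };

new hello();     // 1) `this` is the newly constructed object (logs undefined -- it has no `name` yet)
hello.call(obj); // 2) explicit binding -- logs "obj"
obj.hello();     // 3) implicit binding via the owning context object -- logs "obj"
hello();         // 4) default binding -- `this` is the global object here (it would be `undefined` in strict mode)
```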
Be careful of accidental/unintentional invoking of the default binding rule. In cases where you want to "safely" ignore a `this` binding, a "DMZ" object like `ø = Object.create(null)` is a good placeholder value that protects the `global` object from unintended side-effects.
Instead of the four standard binding rules, ES6 arrow-functions use lexical scoping for `this` binding, which means they adopt the `this` binding (whatever it is) from their enclosing function call. They are essentially a syntactic replacement of `self = this` in pre-ES6 coding.
Chapter 3: Objects
Objects in JS have both a literal form (such as `var a = { .. }`) and a constructed form (such as `var a = new Array(..)`). The literal form is almost always preferred, but the constructed form offers, in some cases, more creation options.
Many people mistakenly claim "everything in JavaScript is an object", but this is incorrect. Objects are one of the 6 (or 7, depending on your perspective) primitive types. Objects have sub-types, including `function`, and also can be behavior-specialized, like `[object Array]` as the internal label representing the array object sub-type.
Objects are collections of key/value pairs. The values can be accessed as properties, via `.propName` or `["propName"]` syntax. Whenever a property is accessed, the engine actually invokes the internal default `[[Get]]` operation (and `[[Put]]` for setting values), which not only looks for the property directly on the object, but which will traverse the `[[Prototype]]` chain (see Chapter 5) if not found.
Properties have certain characteristics that can be controlled through property descriptors, such as `writable` and `configurable`. In addition, objects can have their mutability (and that of their properties) controlled to various levels of immutability using `Object.preventExtensions(..)`, `Object.seal(..)`, and `Object.freeze(..)`.
Properties don't have to contain values -- they can be "accessor properties" as well, with getters/setters. They can also be either enumerable or not, which controls whether they show up in `for..in` loop iterations, for instance.
You can also iterate over the values in data structures (arrays, objects, etc.) using the ES6 `for..of` syntax, which looks for either a built-in or custom `@@iterator` object consisting of a `next()` method to advance through the data values one at a time.
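A short sketch of a property descriptor and ES6 `for..of` iteration (illustrative names):

```js
var obj = {};

Object.defineProperty(obj, "answer", {
  value: 42,
  writable: false,   // re-assignment is silently ignored (TypeError in strict mode)
  enumerable: false, // hidden from for..in enumeration
  configurable: true
});

console.log(obj.answer); // 42
obj.answer = 99;
console.log(obj.answer); // still 42

// ES6 for..of consumes an iterator; arrays supply a built-in @@iterator
for (var v of [1, 2, 3]) {
  console.log(v); // 1 2 3
}
```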
Chapter 4: Mixing (Up) "Class" Objects
Classes are a design pattern. Many languages provide syntax which enables natural class-oriented software design. JS also has a similar syntax, but it behaves very differently from what you're used to with classes in those other languages.
Classes mean copies.
When traditional classes are instantiated, a copy of behavior from class to instance occurs. When classes are inherited, a copy of behavior from parent to child also occurs.
Polymorphism (having different functions at multiple levels of an inheritance chain with the same name) may seem like it implies a referential relative link from child back to parent, but it's still just a result of copy behavior.
JavaScript does not automatically create copies (as classes imply) between objects.
The mixin pattern (both explicit and implicit) is often used to sort of emulate class copy behavior, but this usually leads to ugly and brittle syntax like explicit pseudo-polymorphism (`OtherObj.methodName.call(this, ...)`), which often results in harder to understand and maintain code.
Explicit mixins are also not exactly the same as class copy, since objects (and functions!) only have shared references duplicated, not the objects/functions duplicated themselves. Not paying attention to such nuance is the source of a variety of gotchas.
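A rough sketch of an explicit mixin with pseudo-polymorphism, showing why the pattern gets brittle (the `mixin` helper and `Vehicle`/`Car` names are just illustrative):

```js
function mixin(source, target) {
  for (var key in source) {
    if (!(key in target)) {
      target[key] = source[key]; // copies references only, not the functions/objects themselves
    }
  }
  return target;
}

var Vehicle = {
  drive: function () {
    console.log("driving");
  }
};

var Car = mixin(Vehicle, {
  drive: function () {
    Vehicle.drive.call(this); // explicit pseudo-polymorphism: manually "borrow" the shadowed method
    console.log("...in a car");
  }
});

Car.drive(); // "driving" then "...in a car"
```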
In general, faking classes in JS often sets more landmines for future coding than solving present real problems.
Chapter 5: Prototypes
When attempting a property access on an object that doesn't have that property, the object's internal `[[Prototype]]` linkage defines where the `[[Get]]` operation (see Chapter 3) should look next. This cascading linkage from object to object essentially defines a "prototype chain" (somewhat similar to a nested scope chain) of objects to traverse for property resolution.
All normal objects have the built-in `Object.prototype` as the top of the prototype chain (like the global scope in scope look-up), where property resolution will stop if not found anywhere prior in the chain. `toString()`, `valueOf()`, and several other common utilities exist on this `Object.prototype` object, explaining how all objects in the language are able to access them.
The most common way to get two objects linked to each other is using the `new` keyword with a function call, which, among its four steps (see Chapter 2), creates a new object linked to another object.
The "another object" that the new object is linked to happens to be the object referenced by the arbitrarily named .prototype
property of the function called with new
. Functions called with new
are often called "constructors", despite the fact that they are not actually instantiating a class as constructors do in traditional class-oriented languages.
While these JavaScript mechanisms can seem to resemble "class instantiation" and "class inheritance" from traditional class-oriented languages, the key distinction is that in JavaScript, no copies are made. Rather, objects end up linked to each other via an internal `[[Prototype]]` chain.
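A minimal sketch of that linkage (illustrative names):

```js
function Foo(name) {
  this.name = name;
}

Foo.prototype.sayName = function () {
  console.log(this.name);
};

var a = new Foo("a");

a.sayName(); // "a" -- found via the [[Prototype]] link to Foo.prototype, not copied onto `a`
console.log(Object.getPrototypeOf(a) === Foo.prototype); // true
```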
For a variety of reasons, not the least of which is terminology precedent, "inheritance" (and "prototypal inheritance") and all the other OO terms just do not make sense when considering how JavaScript actually works (not just applied to our forced mental models).
Instead, "delegation" is a more appropriate term, because these relationships are not copies but delegation links.
Classes and inheritance are a design pattern you can choose, or not choose, in your software architecture. Most developers take for granted that classes are the only (proper) way to organize code, but here we've seen there's another less-commonly talked about pattern that's actually quite powerful: behavior delegation.
Behavior delegation suggests objects as peers of each other, which delegate amongst themselves, rather than parent and child class relationships. JavaScript's [[Prototype]]
mechanism is, by its very designed nature, a behavior delegation mechanism. That means we can either choose to struggle to implement class mechanics on top of JS (see Chapters 4 and 5), or we can just embrace the natural state of [[Prototype]]
as a delegation mechanism.
When you design code with objects only, not only does it simplify the syntax you use, but it can actually lead to simpler code architecture design.
OLOO (objects-linked-to-other-objects) is a code style which creates and relates objects directly without the abstraction of classes. OLOO quite naturally implements `[[Prototype]]`-based behavior delegation.
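A minimal OLOO-style sketch (the `Task`/`XYZ` names are just illustrative):

```js
var Task = {
  setID: function (ID) {
    this.id = ID;
  },
  outputID: function () {
    console.log(this.id);
  }
};

// link XYZ directly to Task -- no constructors, no `new`
var XYZ = Object.create(Task);

XYZ.prepareTask = function (ID, label) {
  this.setID(ID); // delegates up the [[Prototype]] chain to Task.setID(..)
  this.label = label;
};

XYZ.prepareTask(1, "build UI");
XYZ.outputID(); // 1
```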
`class` does a very good job of pretending to fix the problems with the class/inheritance design pattern in JS. But it actually does the opposite: it hides many of the problems, and introduces other subtle but dangerous ones.

`class` contributes to the ongoing confusion of "class" in JavaScript which has plagued the language for nearly two decades. In some respects, it asks more questions than it answers, and it feels in totality like a very unnatural fit on top of the elegant simplicity of the `[[Prototype]]` mechanism.
Bottom line: if ES6 `class` makes it harder to robustly leverage `[[Prototype]]`, and hides the most important nature of the JS object mechanism -- the live delegation links between objects -- shouldn't we see `class` as creating more troubles than it solves, and just relegate it to an anti-pattern?
I can't really answer that question for you. But I hope this book has fully explored the issue at a deeper level than you've ever gone before, and has given you the information you need to answer it yourself.
README author's note: I still use "classes", but as long as you understand the underlying code, the sugar API goes down a bit sweeter :) ...most likely ES12 or some very future version will "fix" the underlying code.
Chapter 1: Types
JavaScript has seven built-in types: `null`, `undefined`, `boolean`, `number`, `string`, `object`, `symbol`. They can be identified by the `typeof` operator.
Variables don't have types, but the values in them do. These types define intrinsic behavior of the values.
Many developers will assume "undefined" and "undeclared" are roughly the same thing, but in JavaScript, they're quite different. `undefined` is a value that a declared variable can hold. "Undeclared" means a variable has never been declared.

JavaScript unfortunately kind of conflates these two terms, not only in its error messages ("ReferenceError: a is not defined") but also in the return values of `typeof`, which is `"undefined"` for both cases.
However, the safety guard (preventing an error) on `typeof` when used against an undeclared variable can be helpful in certain cases.
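A short sketch of the safety guard (the `DEBUG` flag here is a hypothetical global):

```js
var a;

console.log(typeof a); // "undefined" -- declared, currently holding undefined
console.log(typeof b); // "undefined" -- never declared, yet no ReferenceError is thrown

// the safety guard in action: feature-checking a possibly-undeclared global
if (typeof DEBUG !== "undefined") {
  console.log("debug mode on");
}

// referencing the undeclared variable directly would throw instead:
// console.log(b); // ReferenceError: b is not defined
```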
Chapter 2: Values
In JavaScript, `array`s are simply numerically indexed collections of any value-type. `string`s are somewhat "`array`-like", but they have distinct behaviors and care must be taken if you want to treat them as `array`s. Numbers in JavaScript include both "integers" and floating-point values.
Several special values are defined within the primitive types.
The `null` type has just one value: `null`, and likewise the `undefined` type has just the `undefined` value. `undefined` is basically the default value in any variable or property if no other value is present. The `void` operator lets you create the `undefined` value from any other value.
`number`s include several special values, like `NaN` (supposedly "Not a Number", but really more appropriately "invalid number"); `+Infinity` and `-Infinity`; and `-0`.
Simple scalar primitives (`string`s, `number`s, etc.) are assigned/passed by value-copy, but compound values (`object`s, etc.) are assigned/passed by reference-copy. References are not like references/pointers in other languages -- they're never pointed at other variables/references, only at the underlying values.
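A quick sketch of value-copy versus reference-copy:

```js
var a = 2;
var b = a; // value-copy: `b` gets its own copy of 2
b++;
console.log(a, b); // 2 3

var c = [1, 2, 3];
var d = c; // reference-copy: both references point at the same underlying array
d.push(4);
console.log(c); // [1, 2, 3, 4]

d = [5, 6]; // re-pointing `d` does not affect `c` -- references point at values, never at other variables
console.log(c); // [1, 2, 3, 4]
```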
Chapter 3: Natives
JavaScript provides object wrappers around primitive values, known as natives (`String`, `Number`, `Boolean`, etc.). These object wrappers give the values access to behaviors appropriate for each object subtype (`String#trim()` and `Array#concat(..)`).
If you have a simple scalar primitive value like `"abc"` and you access its `length` property or some `String.prototype` method, JS automatically "boxes" the value (wraps it in its respective object wrapper) so that the property/method accesses can be fulfilled.
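A tiny sketch of that automatic boxing:

```js
var s = "abc";

// `s` is a primitive string, but property/method access triggers automatic boxing
console.log(s.length);        // 3
console.log(s.toUpperCase()); // "ABC"

// roughly what the engine does for you behind the scenes:
console.log(new String(s).toUpperCase()); // "ABC"
```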
Chapter 4: Coercion
In this chapter, we turned our attention to how JavaScript type conversions happen, called coercion, which can be characterized as either explicit or implicit.
Coercion gets a bad rap, but it's actually quite useful in many cases. An important task for the responsible JS developer is to take the time to learn all the ins and outs of coercion to decide which parts will help improve their code, and which parts they really should avoid.
Explicit coercion is code which is obvious that the intent is to convert a value from one type to another. The benefit is improvement in readability and maintainability of code by reducing confusion.
Implicit coercion is coercion that is "hidden" as a side-effect of some other operation, where it's not as obvious that the type conversion will occur. While it may seem that implicit coercion is the opposite of explicit and is thus bad (and indeed, many think so!), actually implicit coercion is also about improving the readability of code.
Especially for implicit, coercion must be used responsibly and consciously. Know why you're writing the code you're writing, and how it works. Strive to write code that others will easily be able to learn from and understand as well.
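A brief sketch contrasting the two forms:

```js
var num = 42;
var str = "3.14";

// explicit coercion: the conversion intent is obvious from the code
var a = String(num); // "42"
var b = Number(str); // 3.14

// implicit coercion: the conversion happens as a side effect of another operation
var c = num + "";    // "42" -- number implicitly coerced to string
var d = str * 1;     // 3.14 -- string implicitly coerced to number

console.log(typeof a, typeof b, typeof c, typeof d); // string number string number
```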
Chapter 5: Grammar
JavaScript grammar has plenty of nuance that we as developers should spend a little more time paying closer attention to than we typically do. A little bit of effort goes a long way to solidifying your deeper knowledge of the language.
Statements and expressions have analogs in English language -- statements are like sentences and expressions are like phrases. Expressions can be pure/self-contained, or they can have side effects.
The JavaScript grammar layers semantic usage rules (aka context) on top of the pure syntax. For example, `{ }` pairs used in various places in your program can mean statement blocks, `object` literals, (ES6) destructuring assignments, or (ES6) named function arguments.
JavaScript operators all have well-defined rules for precedence (which ones bind first before others) and associativity (how multiple operator expressions are implicitly grouped). Once you learn these rules, it's up to you to decide if precedence/associativity are too implicit for their own good, or if they will aid in writing shorter, clearer code.
ASI (Automatic Semicolon Insertion) is a parser-error-correction mechanism built into the JS engine, which allows it under certain circumstances to insert an assumed `;` in places where it is required, was omitted, and where insertion fixes the parser error. The debate rages over whether this behavior implies that most `;` are optional (and can/should be omitted for cleaner code) or whether it means that omitting them is making mistakes that the JS engine merely cleans up for you.
JavaScript has several types of errors, but it's less known that it has two classifications for errors: "early" (compiler thrown, uncatchable) and "runtime" (`try..catch`able). All syntax errors are obviously early errors that stop the program before it runs, but there are others, too.
Function arguments have an interesting relationship to their formal declared named parameters. Specifically, the `arguments` array has a number of gotchas of leaky abstraction behavior if you're not careful. Avoid `arguments` if you can, but if you must use it, by all means avoid using the positional slot in `arguments` at the same time as using a named parameter for that same argument.
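A small sketch of the leaky linkage gotcha (non-strict versus strict mode):

```js
function foo(a) {
  a = 42;
  console.log(arguments[0]); // 42 -- in non-strict mode, the named parameter and arguments[0] are linked
}

function bar(a) {
  "use strict";
  a = 42;
  console.log(arguments[0]); // 2 -- no linkage in strict mode
}

foo(2);
bar(2);
```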
The `finally` clause attached to a `try` (or `try..catch`) offers some very interesting quirks in terms of execution processing order. Some of these quirks can be helpful, but it's possible to create lots of confusion, especially if combined with labeled blocks. As always, use `finally` to make code better and clearer, not more clever or confusing.
The `switch` offers some nice shorthand for `if..else if..` statements, but beware of many common simplifying assumptions about its behavior. There are several quirks that can trip you up if you're not careful, but there are also some neat hidden tricks that `switch` has up its sleeve!
We know and can rely upon the fact that the JS language itself has one standard and is predictably implemented by all the modern browsers/engines. This is a very good thing!
But JavaScript rarely runs in isolation. It runs in an environment mixed in with code from third-party libraries, and sometimes it even runs in engines/environments that differ from those found in browsers.
Paying close attention to these issues improves the reliability and robustness of your code.
Chapter 1: Asynchrony: Now & Later
A JavaScript program is (practically) always broken up into two or more chunks, where the first chunk runs now and the next chunk runs later, in response to an event. Even though the program is executed chunk-by-chunk, all of them share the same access to the program scope and state, so each modification to state is made on top of the previous state.
Whenever there are events to run, the event loop runs until the queue is empty. Each iteration of the event loop is a "tick." User interaction, IO, and timers enqueue events on the event queue.
At any given moment, only one event can be processed from the queue at a time. While an event is executing, it can directly or indirectly cause one or more subsequent events.
Concurrency is when two or more chains of events interleave over time, such that from a high-level perspective, they appear to be running simultaneously (even though at any given moment only one event is being processed).
It's often necessary to do some form of interaction coordination between these concurrent "processes" (as distinct from operating system processes), for instance to ensure ordering or to prevent "race conditions." These "processes" can also cooperate by breaking themselves into smaller chunks to allow other "process" interleaving.
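A tiny sketch of "now" and "later" chunks sharing the same program state:

```js
var data = { count: 0 }; // this chunk runs now

console.log("now:", data.count); // 0

// this chunk runs later, on a future tick of the event loop,
// but it still shares access to the same scope and state
setTimeout(function () {
  data.count++;
  console.log("later:", data.count); // 1
}, 0);

console.log("still now"); // printed before "later: 1"
```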
Chapter 2: Callbacks
Callbacks are the fundamental unit of asynchrony in JS. But they're not enough for the evolving landscape of async programming as JS matures.
First, our brains plan things out in sequential, blocking, single-threaded semantic ways, but callbacks express asynchronous flow in a rather nonlinear, nonsequential way, which makes reasoning properly about such code much harder. Code that is hard to reason about is bad code that leads to bad bugs.
We need a way to express asynchrony in a more synchronous, sequential, blocking manner, just like our brains do.
Second, and more importantly, callbacks suffer from inversion of control in that they implicitly give control over to another party (often a third-party utility not in your control!) to invoke the continuation of your program. This control transfer leads us to a troubling list of trust issues, such as whether the callback is called more times than we expect.
Inventing ad hoc logic to solve these trust issues is possible, but it's more difficult than it should be, and it produces clunkier and harder to maintain code, as well as code that is likely insufficiently protected from these hazards until you get visibly bitten by the bugs.
We need a generalized solution to all of the trust issues, one that can be reused for as many callbacks as we create without all the extra boilerplate overhead.
We need something better than callbacks. They've served us well to this point, but the future of JavaScript demands more sophisticated and capable async patterns. The subsequent chapters in this book will dive into those emerging evolutions.
Chapter 3: Promises
Promises are awesome. Use them. They solve the inversion of control issues that plague us with callbacks-only code.
They don't get rid of callbacks, they just redirect the orchestration of those callbacks to a trustable intermediary mechanism that sits between us and another utility.
Promise chains also begin to address (though certainly not perfectly) a better way of expressing async flow in sequential fashion, which helps our brains plan and maintain async JS code better. We'll see an even better solution to that problem in the next chapter!
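A minimal sketch of a Promise chain (the `fetchNumber` helper is just an illustrative stand-in for some async task):

```js
function fetchNumber() {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve(21);
    }, 100);
  });
}

fetchNumber()
  .then(function (n) {
    return n * 2; // the value returned from a then(..) step flows to the next step
  })
  .then(function (n) {
    console.log(n); // 42
  })
  .catch(function (err) {
    console.error(err); // a single place to observe failures anywhere in the chain
  });
```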
Chapter 4: Generators
Generators are a new ES6 function type that does not run-to-completion like normal functions. Instead, the generator can be paused in mid-completion (entirely preserving its state), and it can later be resumed from where it left off.
This pause/resume interchange is cooperative rather than preemptive, which means that the generator has the sole capability to pause itself, using the `yield` keyword, and yet the iterator that controls the generator has the sole capability (via `next(..)`) to resume the generator.
The `yield`/`next(..)` duality is not just a control mechanism, it's actually a two-way message passing mechanism. A `yield ..` expression essentially pauses waiting for a value, and the next `next(..)` call passes a value (or implicit `undefined`) back to that paused `yield` expression.
The key benefit of generators related to async flow control is that the code inside a generator expresses a sequence of steps for the task in a naturally sync/sequential fashion. The trick is that we essentially hide potential asynchrony behind the `yield` keyword -- moving the asynchrony to the code where the generator's iterator is controlled.
In other words, generators preserve a sequential, synchronous, blocking code pattern for async code, which lets our brains reason about the code much more naturally, addressing one of the two key drawbacks of callback-based async.
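A small sketch of the two-way `yield`/`next(..)` message passing (names are illustrative):

```js
function* foo(x) {
  // pauses here; whatever is passed to the next next(..) call becomes the result of this yield
  var y = x * (yield "hello");
  return y;
}

var it = foo(6);

var res = it.next();    // start the generator; it runs until the first yield
console.log(res.value); // "hello"

res = it.next(7);       // resume, sending 7 back into the paused yield expression
console.log(res.value); // 42
```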
Chapter 5: Program Performance
The first four chapters of this book are based on the premise that async coding patterns give you the ability to write more performant code, which is generally a very important improvement. But async behavior only gets you so far, because it's still fundamentally bound to a single event loop thread.
So in this chapter we've covered several program-level mechanisms for improving performance even further.
Web Workers let you run a JS file (aka program) in a separate thread using async events to message between the threads. They're wonderful for offloading long-running or resource-intensive tasks to a different thread, leaving the main UI thread more responsive.
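A minimal sketch of the messaging pattern (browser environment assumed; the "worker.js" file name and its contents, shown in comments, are hypothetical):

```js
// main.js -- spin up a Worker in a separate thread
var worker = new Worker("worker.js");

worker.addEventListener("message", function (evt) {
  console.log("result from worker:", evt.data);
});

worker.postMessage({ numbers: [1, 2, 3, 4] });

// worker.js -- runs off the main UI thread:
// self.addEventListener("message", function (evt) {
//   var sum = evt.data.numbers.reduce(function (a, b) { return a + b; }, 0);
//   self.postMessage(sum);
// });
```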
SIMD proposes to map CPU-level parallel math operations to JavaScript APIs for high-performance data-parallel operations, like number processing on large data sets.
Finally, asm.js describes a small subset of JavaScript that avoids the hard-to-optimize parts of JS (like garbage collection and coercion) and lets the JS engine recognize and run such code through aggressive optimizations. asm.js could be hand authored, but that's extremely tedious and error prone, akin to hand authoring assembly language (hence the name). Instead, the main intent is that asm.js would be a good target for cross-compilation from other highly optimized program languages -- for example, Emscripten (https://github.com/kripken/emscripten/wiki) transpiling C/C++ to JavaScript.
While not covered explicitly in this chapter, there are even more radical ideas under very early discussion for JavaScript, including approximations of direct threaded functionality (not just hidden behind data structure APIs). Whether that happens explicitly, or we just see more parallelism creep into JS behind the scenes, the future of more optimized program-level performance in JS looks really promising.
Chapter 6: Benchmarking & Tuning
Effectively benchmarking performance of a piece of code, especially to compare it to another option for that same code to see which approach is faster, requires careful attention to detail.
Rather than rolling your own statistically valid benchmarking logic, just use the Benchmark.js library, which does that for you. But be careful about how you author tests, because it's far too easy to construct a test that seems valid but that's actually flawed -- even tiny differences can skew the results to be completely unreliable.
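A sketch of typical Benchmark.js usage (assuming the `benchmark` npm package; the two test cases are just illustrative):

```js
var Benchmark = require("benchmark");

var suite = new Benchmark.Suite();

suite
  .add("String#indexOf", function () {
    "Hello World!".indexOf("o") > -1;
  })
  .add("RegExp#test", function () {
    /o/.test("Hello World!");
  })
  .on("cycle", function (event) {
    console.log(String(event.target)); // statistically sampled ops/sec for each case
  })
  .on("complete", function () {
    console.log("Fastest is " + this.filter("fastest").map("name"));
  })
  .run({ async: true });
```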
It's important to get as many test results from as many different environments as possible to eliminate hardware/device bias. jsPerf.com is a fantastic website for crowdsourcing performance benchmark test runs.
Many common performance tests unfortunately obsess about irrelevant microperformance details like `x++` versus `++x`. Writing good tests means understanding how to focus on big picture concerns, like optimizing on the critical path, and avoiding falling into traps like different JS engines' implementation details.
Tail call optimization (TCO) is a required optimization as of ES6 that will make some recursive patterns practical in JS where they would have been impossible otherwise. TCO allows a function call in the tail position of another function to execute without needing any extra resources, which means the engine no longer needs to place arbitrary restrictions on call stack depth for recursive algorithms.
asynquence is a simple abstraction -- a sequence is a series of (async) steps -- on top of Promises, aimed at making working with various asynchronous patterns much easier, without any compromise in capability.
There are other goodies in the asynquence core API and its contrib plug-ins beyond what we saw in this appendix, but we'll leave that as an exercise for the reader to go check the rest of the capabilities out.
You've now seen the essence and spirit of asynquence. The key take away is that a sequence is comprised of steps, and those steps can be any of dozens of different variations on Promises, or they can be a generator-run, or... The choice is up to you, you have all the freedom to weave together whatever async flow control logic is appropriate for your tasks. No more library switching to catch different async patterns.
If these asynquence snippets have made sense to you, you're now pretty well up to speed on the library; it doesn't take that much to learn, actually!
If you're still a little fuzzy on how it works (or why!), you'll want to spend a little more time examining the previous examples and playing around with asynquence yourself, before going on to the next appendix. Appendix B will push asynquence into several more advanced and powerful async patterns.
Promises and generators provide the foundational building blocks upon which we can build much more sophisticated and capable asynchrony.
asynquence has utilities for implementing iterable sequences, reactive sequences (aka "Observables"), concurrent coroutines, and even CSP goroutines.
Those patterns, combined with the continuation-callback and Promise capabilities, gives asynquence a powerful mix of different asynchronous functionalities, all integrated in one clean async flow control abstraction: the sequence.
Chapter 1: ES? Now & Future
ES6 (some may try to call it ES2015) is just landing as of the time of this writing, and it has lots of new stuff you need to learn!
But it's even more important to shift your mindset to align with the new way that JavaScript is going to evolve. It's not just waiting around for years for some official document to get a vote of approval, as many have done in the past.
Now, JavaScript features land in browsers as they become ready, and it's up to you whether you'll get on the train early or whether you'll be playing costly catch-up games years from now.
Whatever labels that future JavaScript adopts, it's going to move a lot quicker than it ever has before. Transpilers and shims/polyfills are important tools to keep you on the forefront of where the language is headed.
If there's any narrative important to understand about the new reality for JavaScript, it's that all JS developers are strongly implored to move from the trailing edge of the curve to the leading edge. And learning ES6 is where that all starts!
Chapter 2: Syntax
ES6 adds a heap of new syntax forms to JavaScript, so there's plenty to learn!
Most of these are designed to ease the pain points of common programming idioms, such as setting default values to function parameters and gathering the "rest" of the parameters into an array. Destructuring is a powerful tool for more concisely expressing assignments of values from arrays and nested objects.
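A quick sketch of those forms (names are illustrative):

```js
// default parameter values and "rest" gathering
function foo(x = 10, ...rest) {
  return x + rest.length;
}

console.log(foo());        // 10
console.log(foo(1, 2, 3)); // 3 -- x is 1, plus two gathered rest args

// destructuring from an array and a nested object
var [a, b] = [1, 2];
var { c, d: { e } } = { c: 3, d: { e: 4 } };

console.log(a, b, c, e); // 1 2 3 4
```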
While features like `=>` arrow functions appear to also be all about shorter and nicer-looking syntax, they actually have very specific behaviors that you should intentionally use only in appropriate situations.
Expanded Unicode support, new tricks for regular expressions, and even a new primitive `symbol` type round out the syntactic evolution of ES6.
Chapter 3: Organization
ES6 introduces several new features that aid in code organization:
- Iterators provide sequential access to data or operations. They can be consumed by new language features like `for..of` and `...`.
- Generators are locally pause/resume capable functions controlled by an iterator. They can be used to programmatically (and interactively, through `yield`/`next(..)` message passing) generate values to be consumed via iteration.
- Modules allow private encapsulation of implementation details with a publicly exported API. Module definitions are file-based, singleton instances, and statically resolved at compile time.
- Classes provide cleaner syntax around prototype-based coding. The addition of `super` also solves tricky issues with relative references in the `[[Prototype]]` chain.
These new tools should be your first stop when trying to improve the architecture of your JS projects by embracing ES6.
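To illustrate the class/`super` point above, a minimal sketch (the `Widget`/`Button` names are just illustrative):

```js
class Widget {
  constructor(width) {
    this.width = width;
  }
  render() {
    return "widget(" + this.width + ")";
  }
}

class Button extends Widget {
  render() {
    // `super` gives a relative reference up the [[Prototype]] chain
    return super.render() + " as button";
  }
}

console.log(new Button(100).render()); // "widget(100) as button"
```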
Chapter 4: Async Flow Control
As JavaScript continues to mature and grow in its widespread adoption, asynchronous programming is more and more of a central concern. Callbacks are not fully sufficient for these tasks, and totally fall down the more sophisticated the need.
Thankfully, ES6 adds Promises to address one of the major shortcomings of callbacks: lack of trust in predictable behavior. Promises represent the future completion value from a potentially async task, normalizing behavior across sync and async boundaries.
But it's the combination of Promises with generators that fully realizes the benefits of rearranging our async flow control code to de-emphasize and abstract away that ugly callback soup (aka "hell").
Right now, we can manage these interactions with the aid of various async libraries' runners, but JavaScript is eventually going to support this interaction pattern with dedicated syntax alone!
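A deliberately minimal sketch of such a "runner" (real library runners are more robust; the `run` helper and `getAnswer` task here are hypothetical):

```js
function run(gen) {
  var it = gen();
  return Promise.resolve().then(function step(value) {
    var res = it.next(value);
    return res.done ? res.value : Promise.resolve(res.value).then(step);
  });
}

function getAnswer() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(42); }, 100);
  });
}

run(function* () {
  // the async steps read like synchronous, sequential code
  var answer = yield getAnswer();
  console.log(answer); // 42
});
```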
Chapter 5: Collections
ES6 defines a number of useful collections that make working with data in structured ways more efficient and effective.
TypedArrays provide "view"s of binary data buffers that align with various integer types, like 8-bit unsigned integers and 32-bit floats. The array access to binary data makes operations much easier to express and maintain, which enables you to more easily work with complex data like video, audio, canvas data, and so on.
Maps are key-value pairs where the key can be an object instead of just a string/primitive. Sets are unique lists of values (of any type).
WeakMaps are maps where the key (object) is weakly held, so that GC is free to collect the entry if it's the last reference to an object. WeakSets are sets where the value is weakly held, again so that GC can remove the entry if it's the last reference to that object.
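A quick sketch of the new collections:

```js
var key = { id: 1 };

var m = new Map();
m.set(key, "metadata");  // objects (not just strings/primitives) can be keys
console.log(m.get(key)); // "metadata"

var s = new Set([1, 2, 2, 3]);
console.log(s.size);     // 3 -- duplicates are ignored

var wm = new WeakMap();
wm.set(key, "weakly held");
// if `key` later becomes otherwise unreachable, GC is free to collect this entry
```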
Chapter 6: API Additions
ES6 adds many extra API helpers on the various built-in native objects:
- `Array` adds `of(..)` and `from(..)` static functions, as well as prototype functions like `copyWithin(..)` and `fill(..)`.
- `Object` adds static functions like `is(..)` and `assign(..)`.
- `Math` adds static functions like `acosh(..)` and `clz32(..)`.
- `Number` adds static properties like `Number.EPSILON`, as well as static functions like `Number.isFinite(..)`.
- `String` adds static functions like `String.fromCodePoint(..)` and `String.raw(..)`, as well as prototype functions like `repeat(..)` and `includes(..)`.
Most of these additions can be polyfilled (see ES6 Shim), and were inspired by utilities in common JS libraries/frameworks.
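A few of these helpers in action:

```js
console.log(Array.of(1, 2, 3));  // [1, 2, 3]
console.log(Array.from("abc"));  // ["a", "b", "c"]

console.log(Object.is(NaN, NaN));                   // true (unlike NaN === NaN)
console.log(Object.assign({}, { a: 1 }, { b: 2 })); // { a: 1, b: 2 }

console.log(Number.isFinite("42")); // false -- no coercion, unlike the global isFinite(..)
console.log(Number.EPSILON > 0);    // true

console.log("ab".repeat(3));          // "ababab"
console.log("hello".includes("ell")); // true
```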
Chapter 7: Meta Programming
Meta programming is when you turn the logic of your program to focus on itself (or its runtime environment), either to inspect its own structure or to modify it. The primary value of meta programming is to extend the normal mechanisms of the language to provide additional capabilities.
Prior to ES6, JavaScript already had quite a bit of meta programming capability, but ES6 significantly ramps that up with several new features.
From function name inferences for anonymous functions to meta properties that give you information about things like how a constructor was invoked, you can inspect the program structure while it runs more than ever before. Well Known Symbols let you override intrinsic behaviors, such as coercion of an object to a primitive value. Proxies can intercept and customize various low-level operations on objects, and `Reflect` provides utilities to emulate them.
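A small sketch of a Proxy trap mirrored with `Reflect`, plus a Well Known Symbol overriding coercion (illustrative names):

```js
var target = { a: 1 };

var proxy = new Proxy(target, {
  get: function (obj, prop, receiver) {
    console.log("get:", prop);
    return Reflect.get(obj, prop, receiver); // Reflect mirrors the default behavior being intercepted
  }
});

console.log(proxy.a); // logs "get: a", then 1

var temperature = {
  [Symbol.toPrimitive](hint) {
    // overrides object-to-primitive coercion
    return hint === "number" ? 37 : "37 C";
  }
};

console.log(temperature * 2);  // 74
console.log(`${temperature}`); // "37 C"
```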
Feature testing, even for subtle semantic behaviors like Tail Call Optimization, shifts the meta programming focus from your program to the JS engine capabilities itself. By knowing more about what the environment can do, your programs can adjust themselves to the best fit as they run.
Should you meta program? My advice is: first focus on learning how the core mechanics of the language really work. But once you fully know what JS itself can do, it's time to start leveraging these powerful meta programming capabilities to push the language further!
Chapter 8: Beyond ES6
If all the other books in this series essentially propose this challenge, "you (may) not know JS (as much as you thought)," this book has instead suggested, "you don't know JS anymore." The book has covered a ton of new stuff added to the language in ES6. It's an exciting collection of new language features and paradigms that will forever improve our JS programs.
But JS is not done with ES6! Not even close. There's already quite a few features in various stages of development for the "beyond ES6" timeframe. In this chapter, we briefly looked at some of the most likely candidates to land in JS very soon.
`async function`s are powerful syntactic sugar on top of the generators + promises pattern (see Chapter 4). `Object.observe(..)` adds direct native support for observing object change events, which is critical for implementing data binding. The `**` exponentiation operator, `...` for object properties, and `Array#includes(..)` are all simple but helpful improvements to existing mechanisms. Finally, SIMD ushers in a new era in the evolution of high performance JS.
Cliché as it sounds, the future of JS is really bright! The challenge of this series, and indeed of this book, is incumbent on every reader now. What are you waiting for? It's time to get learning and exploring!
Up & Going - intro level stuff
Omitted because it's intro level material; if you'd like to take a look, follow the link!
Make sure to go over and check out the series, and if you feel so inclined you can support Kyle's work on his Patreon account. Also visit http://getify.me to sign up for 1-on-1 training with Kyle.
- Read online (free!): ["Up & Going"](up & going/README.md#you-dont-know-js-up--going), Published: Buy Now, ebook format is free!
- Read online (free!): ["Scope & Closures"](scope & closures/README.md#you-dont-know-js-scope--closures), Published: Buy Now
- Read online (free!): ["this & Object Prototypes"](this & object prototypes/README.md#you-dont-know-js-this--object-prototypes), Published: Buy Now
- Read online (free!): ["Types & Grammar"](types & grammar/README.md#you-dont-know-js-types--grammar), Published: Buy Now
- Read online (free!): ["Async & Performance"](async & performance/README.md#you-dont-know-js-async--performance), Published: Buy Now
- Read online (free!): ["ES6 & Beyond"](es6 & beyond/README.md#you-dont-know-js-es6--beyond) (in production)