Let us first go over the code below.
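The original snippet is not included here; a minimal stand-in that the section's ideas apply to might look like this (the names `multiplyBy2`, `inputNumber`, and `output` are illustrative):

```javascript
function multiplyBy2(inputNumber) {
  // inputNumber and result live in this call's local memory
  const result = inputNumber * 2;
  return result;
}

// Running the function creates a brand-new execution context for this call
const output = multiplyBy2(4);
console.log(output); // 8
```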
LOCAL EXECUTION CONTEXT
Running a function (also called calling or invoking it) is what creates a local execution context: every time you execute a function, a new execution context is created. An execution context comprises two things:
- The thread of execution (we go through the code in the function line by line)
- A local memory (variable environment) where anything defined in the function is stored.
When we start running a function, a local execution context for it is created automatically. Once the function finishes executing, that context is simply popped off the call stack.
But what exactly is a call stack? It is the structure JavaScript uses to keep track of which function is currently running: calling a function pushes its execution context onto the top of the stack, and returning from it pops that context off, resuming whatever sits underneath (ultimately the global execution context).
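A small sketch of the push/pop behaviour (the functions `sayHello` and `greet` are made up for illustration):

```javascript
function greet() {
  // While this line runs, the stack is: global → sayHello → greet
  return "hello from greet";
}

function sayHello() {
  // Calling greet() pushes a new execution context on top of sayHello's
  const message = greet();
  // By this line, greet's context has been popped off; only its
  // returned value survives, stored in message
  return message;
}

sayHello(); // pushes sayHello, then greet; both pop off as they return
```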
When our function gets called, a live store of data (local memory/variable environment) is created for that function's execution context. When the function finishes executing, its local memory is deleted, except for the returned value. But what if our functions could hold on to live data/state between executions? That would give our function definitions an associated cache of persistent memory.
This is exactly what closures give us: a closure is said to be "closed over" its variable environment. When a function is defined, it gets a hidden [[scope]] property that references the local memory/variable environment in which it was defined.
Let us see an example now.
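The original example is not shown here; a sketch in the spirit of the text, assuming `incrementCounter` closes over a `counter` variable defined in an enclosing function (`outer`, `counter`, and `myNewFunction` are illustrative names):

```javascript
function outer() {
  let counter = 0; // lives in outer's local memory (variable environment)

  function incrementCounter() {
    // counter is not in incrementCounter's own local memory;
    // it is found via the hidden [[scope]] property
    counter++;
    return counter;
  }

  return incrementCounter;
}

// outer's execution context is popped off the call stack here,
// but counter survives, closed over by the returned function
const myNewFunction = outer();
console.log(myNewFunction()); // 1
console.log(myNewFunction()); // 2
```

Note that each call to `outer()` creates a fresh variable environment, so a second returned function would start counting from 1 again with its own independent `counter`.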
Whenever we call the incrementCounter function above, it always looks first in its immediate local memory (variable environment), and then in the hidden [[scope]] property, before looking any further up the scope chain. In other words, it always looks in its lexical scope first; if it does not find the variable there, it goes up the scope chain until it finds it. Our lexical scope (the live data available where our function was defined) is what determines which variables are available, and in what order of priority, when the function executes, not where the function is called.