Let’s examine the incoming part of our interface first. In fact, let’s pay very close attention to the subtle details, as this process is so automatic for us that it has become effectively invisible.
One level where some of the work is done without us even noticing is where the raw perceptions are massaged before they are even interpreted. For instance, the optic nerve of each eye causes a blind spot well within our field of view. But unless we try to do something like looking at stars with that particular region of our retinas, we never notice the insensitive spots because our brains fill them in for us.
Another level is where interpretation work is performed. For example, take a look at this page. Recognizing the letters is unavoidable, is it not? It might even be impossible to look at a letter “e” without immediately recognizing it as such, rather than seeing it as some arbitrary scribble on a piece of paper.
So how could we describe the process by which we go from “some differences in perceived color”, to “scribble on a piece of paper”, to “this particular sentence and what it means”?
Assuming we have raw data coming from our interface, what we typically do is scan it for differences in value. In the case of this piece of paper, the differences in value could be described in terms of “darker here”, or “lighter over there”. By grouping similar values together, we draw distinctions such as “the inside of this figure is darker than its surroundings”. If these distinctions follow the same path of recognition as other distinctions we have previously learned to draw, we recognize the new distinction as being equal to the previous one. We perform this pervasively in everything we do. No wonder brains are good at Pattern Matching.
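The scanning-and-grouping step can be sketched in code. Here is a minimal Python sketch, assuming a hypothetical one-dimensional “retina” of brightness values and an invented threshold between “darker” and “lighter”:

```python
# A hypothetical one-dimensional "retina": brightness values from 0 (dark) to 255 (light).
retina = [12, 15, 9, 200, 210, 205, 14, 11]

def draw_distinctions(values, threshold=128):
    """Scan for differences in value, grouping similar neighbors into distinctions."""
    distinctions = []
    for value in values:
        label = "darker" if value < threshold else "lighter"
        if distinctions and distinctions[-1][0] == label:
            distinctions[-1][1].append(value)  # same value group as before
        else:
            distinctions.append((label, [value]))  # a new distinction begins
    return distinctions

print(draw_distinctions(retina))
# three distinctions: a darker run, a lighter run, another darker run
```

The threshold stands in for whatever criterion we use to decide that two perceived values are “similar enough” to belong to the same distinction.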
Note how the way in which a distinction is drawn could be arranged to serve as its own description as well. In fact, it would be very convenient for the way in which we recognize a letter “e” to be the lookup key attached to a hash bucket labelled “letter e”, for example.
Pay special attention to the implications: whether the hash bucket is labelled “letter e” or not is absolutely inconsequential. The only thing that matters in our recognition game is that similar enough distinctions end up in similar enough hash buckets.
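In code, the point that only bucket coincidence matters might look like the following Python sketch. The feature names and the labels are invented stand-ins for “how we draw the distinction”; any label would do, as long as similar scribbles reach the same bucket:

```python
def distinguish(scribble):
    # Reduce a scribble to the distinctions drawn while scanning it.
    # These features are hypothetical; they merely play the role of a lookup key.
    return (scribble["closed_loop"], scribble["tail_direction"])

buckets = {}
buckets[distinguish({"closed_loop": True, "tail_direction": "right"})] = "letter e"

# A similar enough scribble draws the same distinctions, so it finds the same bucket.
seen = {"closed_loop": True, "tail_direction": "right"}
print(buckets[distinguish(seen)])  # recognized, regardless of what the label says
```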
> a = b ⇒ a `hash` = b `hash`
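This is the familiar equality/hash invariant: equal objects must hash alike, so that they land in the same bucket. A minimal Python illustration, using a hypothetical Point class:

```python
class Point:
    """Two points are equal when their coordinates are; their hashes must then agree."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Derived from the very same values that equality compares,
        # so a = b implies a hash = b hash.
        return hash((self.x, self.y))

a, b = Point(2, 3), Point(2, 3)
assert a == b and hash(a) == hash(b)
```

Note that the converse is not required: two objects with equal hashes need not be equal, which is exactly what makes collisions possible.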
So what if we had hash buckets of limited resolution? Then, eventually, we would have label collisions and things would get messy. This is where Intentions come in.
Depending on what we mean to do, we may end up looking at only one of the possible interpretations in the hash bucket, thus disambiguating the meaning of the observed label.
Or perhaps our purposes will call for looking at things with more resolution so that previously colliding observations will end up in a different hash bucket entirely. One way or the other, collisions happen and we deal with them either by refining the precision of our point of view, or by switching our point of view entirely.
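Both ways of dealing with a collision can be sketched in Python. In this hypothetical setup, a coarse distinction makes “e” and “o” collide; we either choose among the colliding interpretations according to an intention, or look with more resolution so the collision disappears:

```python
# A coarse distinction: only "does the scribble have a closed loop?".
coarse = lambda scribble: scribble["closed_loop"]
coarse_buckets = {True: ["letter e", "letter o"]}  # a collision: two interpretations

def recognize(scribble, intention):
    # Disambiguate by intention: pick the first interpretation we mean to see.
    candidates = coarse_buckets[coarse(scribble)]
    return next(c for c in candidates if intention(c))

print(recognize({"closed_loop": True}, lambda label: label.endswith("e")))

# Alternatively, refine the point of view: with a finer key, nothing collides.
fine = lambda scribble: (scribble["closed_loop"], scribble["tail_direction"])
fine_buckets = {
    (True, "right"): "letter e",
    (True, None): "letter o",
}
print(fine_buckets[fine({"closed_loop": True, "tail_direction": "right"})])
```

The first approach keeps the coarse buckets and resolves collisions at lookup time; the second pays for a more precise key up front so that lookups need no disambiguation at all.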
> In general, a `hash` ~= a `identityHash`, depending on the intentions at play for values of a `hash`.
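Python offers a loose analogy to this distinction: `hash(x)` follows equality, while `id(x)` plays roughly the role of an identity hash. Two separately built objects can be equal by value, and hash alike, without being the same object:

```python
# Two distinct objects with equal contents.
# tuple() over a non-empty list always builds a fresh tuple.
a = tuple([1, 2, 3])
b = tuple([1, 2, 3])

assert a == b              # the same distinction by value...
assert hash(a) == hash(b)  # ...so their value hashes agree,
print(a is b)              # but identity is a different intention entirely: False
```

Whether we care about `hash(a) == hash(b)` or about `a is b` depends on the intention with which we are looking at the objects.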
To summarize, based on the raw data coming from the interface, we create distinctions inside our first distinction according to the intentions with which we care about perceived differences in value. We will refer to the entity that executes this mapping process as our eyes.