The hardest problem in computer science is, of course, naming.

Not just naming variables or new technologies. Oh no. We can’t even agree on names for basic concepts.

A thousand overlapping vernaculars

Did you know that the C specification makes frequent reference to “objects”? Not in the OO sense as you might think — a C “object” is defined as a “region of data storage in the execution environment, the contents of which can represent values”. The spec then goes on to discuss, for example, “objects of type char”.

“Method” is a pretty universal term, but you may encounter a C++ programmer who only knows it as “member function”. Conversely, Java may not have functions at all, depending on who you ask. “Procedure” and “subroutine” aren’t used much any more, but a procedure is a completely different construct from a function in Pascal.

Even within the same language, we get sloppy: see if you can catch a Python programmer using “property” to refer to a regular “attribute”, when property is a special kind of attribute.
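
If you want the difference spelled out, here’s a tiny sketch (the names are all made up):

class Critter:
    def __init__(self):
        self.name = "slime"        # a plain attribute: a value stored on the instance

    @property
    def loud_name(self):           # a property: computed by a method on every access
        return self.name.upper()

c = Critter()
print(c.name)        # "slime": ordinary attribute lookup
print(c.loud_name)   # "SLIME": looks like an attribute, but runs code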

There’s a difference between “argument” and “parameter” (a parameter is the name in a function’s definition; an argument is the value supplied at the call site), but no one cares what it is, so we all just use “argument”, which abbreviates more easily. I use the word “signature” a lot, but I rarely see anyone else use it, and sometimes I wonder if anyone understands what I mean.

A float is single-precision in C and double-precision in Python. I reflexively tense up whenever someone says “word” unqualified, because it could mean any of three or four different sizes.

Part of the problem here is that we’re not actually doing computer science. We’re doing programming, with a wide variety (hundreds!) of imperfect languages with different combinations of features and restrictions. There are only so many words to go around, so the same names get used for vaguely similar features across many languages, and native speakers naturally attach their mother tongue’s baggage to the jargon it uses. Someone who got started with JavaScript would have a very different idea of what a “class” is than someone who got started with Ruby. People come to Python or JavaScript and exclaim that they “don’t have real closures” because of a quirk of name binding.

Most of the time, this is fine. Sometimes, it’s incredibly confusing. Here are my (least?) favorite lexical clashes. (That was one too!)

Arrays, vectors, and lists

In C, an array is a contiguous block of storage, in which you can put some fixed number of values of the same type. int[5] describes enough space to store five ints, all snuggled right next to each other. There’s no such thing as a “vector”. “List” would likely be interpreted as a linked list, in which each value is stored separately and has a pointer to the next one.

C++ introduced vector, an array that automatically expands to fit an arbitrary number of values. There’s also a standard list type, which is a doubly-linked list. (The exact implementations may be anything, but the types require certain properties that make an array and a linked list the most obvious choices.) But wait! C++11 introduced the initializer_list, which is actually an array.

Lisp dialects are of course nothing but lists, but under the hood, these tend to be implemented as linked lists — which is no doubt why Lisp originally handled lists in terms of heads and tails (very easy to do with linked lists), rather than random access (very easy to do with contiguous arrays). Haskell works similarly, and additionally has a Data.Array module which offers fast random access.

Perl (5)’s sequence type is the array, though “type” is a little misleading here, because it’s really one of Perl’s handful of shapes of variables. Perl also has a distinct thing called a “list”, but it’s a transient context that only exists while evaluating an expression, and is not a type of value. It’s weird and I can’t really explain it within a single paragraph.

Meanwhile, in Python, list is the fundamental sequence type, but it has similar properties to a C++ vector and (in CPython) is implemented with a C array. The standard library also offers the rarely-used array type, which packs numbers into C arrays to save space — a source of occasional confusion for new Python programmers coming from C, who think “array” is the thing they want. Oh, and there’s the built-in bytearray type, a mutable sequence of bytes, which is different from an array that stores bytes.
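
For the curious, here are all three side by side; this is just a sketch, nothing project-specific:

from array import array

items = [1, 2, 3]                 # list: the everyday sequence type; holds any objects
packed = array("i", [1, 2, 3])    # array.array: numbers packed into a real C array
raw = bytearray(b"\x01\x02\x03")  # bytearray: a mutable sequence of bytes

items.append("anything")          # fine; a list doesn't care what you put in it
packed.append(4)                  # fine, but only ints will fit in an "i" array
raw[0] = 255                      # mutable in place; values are limited to 0-255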

JavaScript has an Array type, but it’s (semantically) built on top of the only data structure in JavaScript, which is a hash table with string (!) keys. There’s also a family of ArrayBuffer types for storing numbers in C arrays, much like Python’s array module.

PHP’s sole data structure is called array, but it’s really an ordered hash table with string (!) keys. It also has a thing called list, but it’s not a type, just quirky syntax for doing destructuring assignment. People coming from PHP to other languages are occasionally frustrated that hash tables lose their order.

Lua likewise has only a single data structure, but is more upfront in calling its structure a “table”; there’s nothing in the language called “array”, “vector”, or “list”.

While I’m at it, the names for mapping types are all over the place:

  • C++: map (actually a binary tree; C++11’s unordered_map is a hash table)
  • JavaScript: object (!) (though it’s not a generic mapping, since the keys must be strings; there’s now a Map type)
  • Lua: table
  • PHP: array (!) (string keys only)
  • Perl: hash (another “shape”, somewhat misleading since a “hash” is also a different thing, and again string keys only), though the documentation likes to say “associative array” a lot
  • Python: dict
  • Rust: map, though it exists as two separate types, BTreeMap and HashMap

Pointers, references, and aliases

C has pointers, which are storage addresses. This is pretty easy for C to do, since it’s all about operating on one big array of storage (more or less). A pointer is just an index into that storage.

C++ inherited pointers from C, but chastises you for using them. As an alternative it introduced “references”, which are exactly like pointers, except you can leave off the *. This added a very strange new capability that didn’t exist in C: two regular ol’ local variables could refer to the same storage, so that a = 5; could also change the value of b.

And so all programming conversation was doomed forever, but more on that in a moment.

Rust has things called references, and uses the C++ reference syntax for their types, but they’re really “borrowed pointers” (i.e., pointers, but opaque and subject to compile-time lifetime constraints). It also has lesser-used “raw pointers”, which use C’s pointer syntax.

Perl has things called references. Two different kinds of things, in fact. The ones people generally refer to are “hard references”, which are pretty much like C pointers, except the “address” is supposed to be opaque and can’t be arbitrarily operated on. The others are “soft references”, where you use the contents of a variable as the name of another variable using much the same syntax as hard references, but this is forbidden by use strict so doesn’t see much use (and can be done other ways anyway). Perl also has things called aliases, which work like C++ references — but they don’t work on local variables, and they’re not really a type, just explicit manipulation of the symbol table. (Cool fact: Perl functions receive their arguments as aliases! It’s easy not to notice, because most people immediately assign the arguments to readable names.)

PHP has things called references, but despite PHP’s prominent Perl influence, it borrowed its references from C++. C++ declares references as part of the type, but PHP has no variable declaration whatsoever, so a variable becomes a reference if it’s involved in one of a handful of specific operations with a & involved. The variable is then permanently “infected” with reference-ness.

Python, Ruby, JavaScript, Lua, Java, and probably several hundred other high-level languages have nothing called pointers, references, or aliases. This causes endless confusion when trying to explain the language semantics to someone with a C or C++ background, because we want to say things like “this references that” or “this points to that” which can lead them to think that there are literal references and pointers available for them to twiddle. For this reason (and my Perl background), I like to call C++’s reference behavior “aliasing”, which more clearly describes what it does and frees up the word “refer” to be used in its generic English sense.

Pass by value, pass by reference

Speaking of references. I’ve explained this before for Python, but here’s the quick(ish) version. I maintain that this dichotomy makes no sense in almost all languages, because the very question hinges on C’s idea of what a value is, and it’s a relatively rare attitude outside of the C family.

The fundamental issue is that C has syntax to imply structure, but the semantics are all about bytes. A struct looks and sounds like a container, a thing with a lid on it: it’s wrapped in braces, and you have to use . to look inside it. But C just sees a blob of bytes, not much different from an int, except that it lets you look at a few of those bytes at a time. If you put one struct inside another, C will dump the inner’s structure into the outer. If you assign one struct to another, C will dutifully copy all the bytes over, same as it would for a double. The boundary is illusory. In effect, the only “true” container C has — the only form of containment that doesn’t spill its contents all over the place — is the pointer!

If you pass a struct to or from a function, C will copy the whole thing, as with any other form of assignment. If you want a function to modify a struct, you have to pass in a pointer to it, so the function can modify the original storage and not a local copy. If you want to pass a very large struct to a function, you should still use a pointer, or you’ll waste a lot of time uselessly copying data around just to throw it away.

This is so-called “pass by value”, but it’s really about the underlying storage, not any semantic notion of “value”. Pass by copy, if you will. It’s similar to how forgetting to quote a variable in a shell script will cause it to be split on whitespace, or how passing an array to a function in Perl will copy all the elements. It’s nonsense. The semantics, the diagrams of boxes that we draw, and even the very syntax all imply that there’s something being bundled up, but then you turn your back for a second and the language scatters your data to the wind.

C++ added references to make this sort of thing more transparent, just in case C was too easy to understand. Now you can appear to pass a struct “by value”, but if the function is declared as taking references for arguments, it can still freely modify your data. The function’s argument becomes an alias for whatever you pass in, so even an atomic type like an int can be overwritten wholesale. This is “pass by reference”, perhaps better named “pass by alias”.

The way Java, Python, Ruby, Lua, JavaScript, and countless other languages work is to have containers act as a single unit. When you have a variable containing a structure, and you assign that variable to another variable, no copying is done. Rather, both variables now refer to— err, point to— err…

And here’s the major issue with the terminology. Someone who’s asking whether X language is by-value or by-reference has likely ingrained the C model, and takes for granted that this is how programming fundamentally works. If I say “refer”, they might think there are C++ references (aliases) involved somewhere. If I say “point”, they might think the language has some form of indirection like a C pointer. In most cases, the languages have neither, but there are only so many ways to express this concept in English.

Semantically, those languages act like values exist in their own right, and variables are merely names. Assignment gives another name to a value. It’s tempting to explain a = b as “now a points to b” or “now they refer to the same object”, but that introduces an indirection, implies an intermediate layer that doesn’t exist in the language. a and b both name the same value.

Function calls are a form of assignment, so the arguments inside a function name the same values that the caller passed in. You can modify them in-place, if they’re mutable, and the caller will see the changes, because it’s the same value. You can’t just reassign the variable: the variable is not an alias, and assigning to it merely makes it a name for something else instead. This exists so far outside the dichotomy that it doesn’t even have a consistent name, though I’ve seen it called pass by object, pass by identity, and pass by sharing.
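
Here’s roughly what that looks like in Python (the names are hypothetical; the behavior is the point):

def fill(inventory):
    inventory.append("sword")    # mutates the same list the caller passed in

def replace(inventory):
    inventory = ["shield"]       # rebinds the local name only; the caller never sees this

items = []
fill(items)
replace(items)
print(items)                     # ['sword']: the append was visible, the rebinding was not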

It’s entirely possible to have these passing styles in higher-level languages — as I mentioned, PHP can pass by alias using its C++-style references. But pass by alias really exists as a response to pass by copy, and pass by copy exists because there’s not really any alternative in a fairly low-level language like C.

Anything you can do with pass by copy, you can do with pass by sharing followed by an explicit copy. Most things you can do with pass by alias, you can also do with pass by sharing, as long as you’re mutating the same value via its own interface. The exceptions are attempts to rebind the name itself, and most of those are only for the sake of returning multiple values, which you can do directly in most higher-level languages.

Loose typing

Okay, so, this is really up for interpretation, but I’m pretty sure “loose typing” is not actually a thing. At least, I’ve never seen a particularly concrete definition for it, which is kind of ironic. To recap:

  • Strong typing means that values do not implicitly change type to fit operations performed on them. Rust is strongly typed: comparing an i32 with an i64 is an error.

  • Weak typing means that values can implicitly change type to fit operations performed on them. JavaScript is weakly typed: 5 + "3" will implicitly convert the string to a number and produce 8. (Haha, just kidding, it produces "53".) Also, C is weakly typed: you can just straight up assign "3" to an int and get some hilarious nonsense.

  • Static typing means that names (variables) have associated types that are known before the program runs. Java is statically typed: Java code is 70% type names by volume.

  • Dynamic typing means that names are not given types ahead of time. Ruby is dynamically typed: types are figured out on the fly while the program runs.

Strong–weak forms a spectrum, and static–dynamic forms a spectrum. Languages may have both strong and weak elements, or both static and dynamic elements, though usually one is more prominent. For example, while Go is considered statically-typed, interface{} acts much like dynamic typing. Conversely, you could argue that Python is statically-typed and every variable is of type object, but good luck with that.

Crucially, because strong–weak concerns values and static–dynamic concerns names, all four combinations exist. Haskell is strong and static. C is weak and static. Python is strong and dynamic. Shell is weak and dynamic.
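
Python makes a tidy demonstration of that split, if you want to see it for yourself:

x = 5          # the name x has no declared type
x = "five"     # so it can name a str a moment later (dynamic)

try:
    print(5 + "3")       # but values don't convert themselves to make this work (strong)
except TypeError as exc:
    print(exc)           # unsupported operand type(s) for +: 'int' and 'str'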

So, then, what exactly is “loose typing”? You’d think it would mean the same as “weak typing”, but I’ve seen a lot of people refer to Python as “loosely typed”, even though Python is mostly strong. (Stronger than C!)

Given that I rarely see the phrase used in a non-derogatory context, my best guess is that “loosely typed” really means “doesn’t have C++’s type system”. Which is kind of funny, given how flimsy C++’s type system is. What type is a pointer to a T? It’s not T*, because that might be a null pointer (which is not a pointer to T) or complete garbage (which is unlikely to be a pointer to a T) or uninitialized (also unlikely to be a pointer to a T). What’s the point of static typing if your variables don’t actually have to contain the type they’re declared as?

Caching

This one is more anecdotal, as it’s not even a language feature.

Caching is storing the results of some computations so you don’t have to compute them again later. It’s an optimization, trading memory in exchange for speed.

I think a crucial property of a cache is that if the cache is emptied or destroyed or unavailable for any reason, everything still works, just more slowly.

And yet I’ve seen a number of programmers use “cache” to refer to any form of storing a value to use later. I find this very confusing, since that’s all programming is.

A fabulous example is a handy Python utility that shows up in a number of projects. I know it by the name reify, which is how it’s spelled in Pyramid, where I first saw it. It does lazy initialization of an object attribute, for example:

from pyramid.decorator import reify   # or a local copy of the decorator

class Monster:
    def think(self):
        # do something smart
        ...

    @reify
    def inventory(self):
        return []

Here, monster.inventory doesn’t actually exist until you try to read it, at which point the function is called — once — and the list it returns becomes the attribute. It’s completely transparent, and once the value is created, it’s a normal attribute with no indirection cost. You can add items to it, and you’ll see the same list every time. Hence, “to make real”: the attribute isn’t real until you summon it into being by observing it.

This is nice for objects that deal with several related but interconnected ideas (which are thus difficult to split into multiple objects). If part of the object takes time or space to set up, you can slap @reify on it, and the end user won’t have to pay the cost if they don’t use that functionality.

It wasn’t on PyPI as a separate package for the longest time, probably because it can be implemented in a dozen lines. When I said it “shows up in a number of projects”, I meant “a number of projects have copy/pasted it from each other”.
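
The whole trick is a non-data descriptor. Something in this spirit, though this is only a sketch of the idea and not Pyramid’s exact code:

class reify:
    """Compute an attribute once, then get out of the way."""
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = self.func(instance)
        # Stash the result on the instance itself.  Because this descriptor
        # defines no __set__, the instance attribute now shadows it entirely,
        # so every later read is a plain, zero-cost attribute lookup.
        instance.__dict__[self.func.__name__] = value
        return value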

It finally showed up a couple years ago, under the name… cached-property. The docs even prominently show how to “invalidate” the “cache” — by mucking with object internals.

The problem I have here is that virtually every use of this decorator I have ever seen is not a cache. The example above is silly, of course, but it immediately demonstrates the problem: “invalidating” monster.inventory would irrevocably lose the only copy of the monster’s inventory. Real uses of @reify tend to produce database connections and other kinds of mutable storage, where “invalidating” would be similarly destructive. This isn’t data you can just whip up again if need be.

It’s possible to use @reify to create a cache, but it’s also possible to use dict to create a cache, so I don’t find that very compelling.

I did try to make my case for renaming the project early on — especially as the maintainer wanted to add this to the standard library! — but no one else liked reify and the conversation degenerated into bikeshedding over an alternative name. Naming really is the hardest problem in computer science.

Bonus: cool terminology we should use more

I love that the Git changelog refers to commands as having “learned” new things:

“git remote” learned “get-url” subcommand to show the URL for a given remote name used for fetching and pushing.

Relatedly, I love to see “spelled” used to explain how to write code (especially a single construct or brief expression). Indexing is spelled a[b], etc.

A function “signature” is just its interface: the arguments it takes, their names, their types, the return type, and exceptions that may be thrown. Generally “signature” only refers to the parts that can be expressed directly in the language (and that affect call semantics), so a Python programmer likely wouldn’t consider exceptions to be part of a function signature, and a C++ programmer would likewise ignore argument names.

I realize I said “semantic” a bunch of times in this post, but it doesn’t see much use outside HTML — a lot of programmers seem to get really preoccupied with what the physical hardware is doing. “Semantic” refers to what code means, as opposed to how it works in practice.

And my favorite, which I wish I had more excuses to use: a “nybble” is four bits.

July is themeless.

  • art: Daily Pokémon remain sporadic. Some other doodling.

  • blog: I wrote a thing about technicalities in communities. I’m also working on three more posts concurrently, which is great because I still have to publish four more by the end of the week, whoops!

  • book: I wrote a teeny bit of actual text, then spent much more time fucking around with Sphinx and comparing it to Pandoc and waffling over whether I want to write in rST or TeX.

Mostly writing last week, and mostly writing this week. I guess July’s theme ended up being, um, writing.

Apropos of nothing, I’d like to tell you a story. I’ve touched on this before, but this is the full version. It’s the story of a hypothetical small-to-medium Internet community.

Stop me if you've heard this one

You create a little community for a thing you like. You give it a phpBB forum or something.

You want people to be nice, so you make a couple rules. No swearing. No spamming. Don’t use all caps.

You invite your friends, and they invite their friends, and all is well and good. There are a few squabbles now and then, but they get resolved without too much trouble, and everyone more or less gets along.

One day, a new person shows up, and starts linking to their website in almost every thread. Their website mostly consists of very mean-spirited articles written about several well-known and well-liked people in the group. When people ask them to stop, they lash out with harsh insults.

So you ban them.

There is immediate protest from a number of people, most of whom you strangely don’t recognize. The person didn’t break any of the rules — how dare you ban them? They never swore. They never used all caps. They never even spammed, because technically spam is unwanted and automated, and this was a real person linking their website which is related to the thing the community is about.

You can’t think of a good counter-argument for this, so you unban them. You also add a new rule, prohibiting linking to websites.

Now the majority of the community is affected, because they can’t link their own work any more. This won’t work. You repeal the previous rule, and instead make one that limits the number of website links to one per day.

The original jerk responds by linking their website once a day, and then making other posts that link to that first post they made. They continue to be abrasive towards everyone else, but they never swear, and you’re just not sure what to do about that.

A few other people start posting, seemingly just to make fun of the rest of you, but likewise never break any of your rules.

A preposterous arms race follows, with the rules becoming increasingly nitpicky as you try to distinguish overt antagonism from mundane and innocent behavior.

After a while, you notice that many of your friends no longer come around. And there seem to be a lot more jerks than there were before. You don’t understand why. Your rules are reasonable, and you enforced them fairly, right?

But it's not really a swear word

I’ve noticed that people really like to write rules that sound objective. Seems like a good enough idea, right? Lets everyone know exactly what the line is.

The trick is that human behavior, and especially human language, are very… squishy. We gauge each other based on a lot of unspoken context: our prior relationship, how both of us seem to be feeling, whether or not we skipped lunch today. When the same comment or action can mean radically different things in different circumstances, it’s hard to draw a fine distinction between what’s acceptable behavior and what’s not.

And rules are written in human language, which makes them just as squishy. Who decides what “swearing” is? If all caps aren’t allowed, how about 90%? Who decides what’s a slur? What, precisely, constitutes harassment? These things sound straightforward and concrete, but they can still be nitpicked to death.

We give people the benefit of the doubt and assume they’ll try to respect what we clearly mean, but there’s nothing guaranteeing that.

Have you ever tried to politely decline a request or invitation, and been asked why not? Then the other party starts trying to weasel around your reason, and now you’re somehow part of a debate about what you want? I’ve seen it happen with mundane social interactions, with freelance workers, and of course, with small online communities.

This isn’t to say that hunting for technicalities is a sign of aggressive malice; it’s human nature. We want to do a thing, we’re told we can’t because of X, and so we see X as an obstacle to overcome. Language is subjective, so it’s the easiest avenue of attack.

Fixing this in rules is a hard problem. The obvious approach is to add increasingly specific details, though then you risk catching innocent behaviors, and you can end up stuck in an almost comical game of cat-and-mouse where you keep trying to find ways to edit your own rules so you’re allowed to punish someone you’ve already passed judgment on.

I think we forget that even real laws are somewhat subjective, often hinging on intent. There are entire separate crimes for homicide, depending on whether it was intentional or accidental or due to clear neglect. These things get decided by a judge or a jury and become case law, the somewhat murky extra rules that aren’t part of formal law but are binding nonetheless.

(In an awkward twist, a lot of communities — especially very large platforms! — don’t explain their reasoning for punishing any particular behavior. That somewhat protects them from being “but technically”-ed, but it also means there’s no case law, and no one else can quite be sure what’s expected behavior.)

That’s why I mostly now make quasirules like “don’t be a dick” or “keep your vitriol to your own blog”. The general expectation is still clear, and it’s obvious that I reserve the right to judge individual cases — which, in the case of a small community, is going to happen anyway. Let’s face it: small communities are monarchies, not democracies.

I do have another reason for this, which is based on another observation I’ve made of small communities. I’ve joined a few where I didn’t bother reading the rules, made some conversation, never bothered anyone, and then later discovered that I’d pretty clearly violated a rule. But no one ever pointed it out, and perhaps no one even noticed, because I wasn’t being a dick.

So I concluded that, for a smaller community, the people who need the rules are likely to be people who you don’t want around in the first place. And “don’t be a dick” covers that just as well.

Evaporative cooling

There are some nice people in the world. I mean nice people, the sort I couldn’t describe myself as. People who are friends with everyone, who are somehow never involved in any argument, who seem content to spend their time drawing pictures of bumblebees on flowers that make everyone happy.

Those people are great to have around. You want to hold onto them as much as you can.

But people only have so much tolerance for jerkiness, and really nice people often have less tolerance than the rest of us.

The trouble with not ejecting a jerk — whether their shenanigans are deliberate or incidental — is that you allow the average jerkiness of the community to rise slightly. The higher it goes, the more likely it is that those really nice people will come around less often, or stop coming around at all. That, in turn, makes the average jerkiness rise even more, which teaches the original jerk that their behavior is acceptable and makes your community more appealing to other jerks. Meanwhile, more people at the nice end of the scale are drifting away.

And this goes for a community of any size, though it may take more jerks to significantly affect a very large platform.

It’s still hard to give someone the boot, though, because it just feels like a really harsh thing to do to someone, especially for an abstract reason like “preserving the feel of the community”. And a jerk is more likely to make a fuss about being made to leave, which makes it feel like a huge issue — whereas nice people generally leave very quietly, and you may not even notice until several of them have been gone for a while.

There’s a human tendency to measure peace as though it were the inverse of volume: the louder people get, the less peaceful it is. We then try to optimize for the least arguing. I’m sure you’ve seen this happen before: someone in a group points out that the group is doing something destructive, that causes an argument, and then onlookers blame the person who pointed out the problem for causing the argument to happen. You can probably think of some pretty high-profile examples in some current events.

(You may relatedly enjoy the tale of the missing stair.)

Have you ever watched one of those TV shows where a dude comes in to berate restaurant owners for all the ridiculous things they’ve been doing? One of the most common defenses is: “well, no one complained”.

In the age of the Internet, where it seems like everyone is always complaining about something, it’s easy to forget that by and large people don’t complain. Sure, they might complain on their Twitter or to their friends or whatever, but chances are, they won’t complain to you. Consider: either you’re aware of the problem and have failed to solve it, or you’re clueless for not noticing. Either way, complaining won’t help anything; it’ll just cause conflict, making them that person who “caused” an argument by pointing out the obvious.

Gamification

Some people are aware of the technicality game on some level, and decide to play it — deliberately. Maybe to get their way; maybe just for fun.

These are people who think “it’d be a shame if something happened to it” is just the way people talk. Layered thick with multiple levels of irony, cloaked in jokes and misdirection, up to its eyeballs in plausible deniability, but crystal clear to the right audience.

It’s a game that offers them a massive advantage, because even if you both know you’re playing it, they have much more experience. Oh, and chances are they don’t even truly care about whether they’re banned or not, so they have nothing to lose — whereas you’re stuck with an existential crisis, questioning everything you believe about free speech and community management, while your nicest peers sneak out the back door.

I remember a time when someone in a community I helped run decided they didn’t like me. They started making subtle jabs, and eventually built up to saying the most biting and personal things they could think to say. Those things weren’t true, but they didn’t know that, and they phrased everything in such a way that their friends could rationalize them as not really trying to be cruel. And they had quite a lot of friends in the community, which put me in a pretty awkward position. How do I justify banning them, if a significant number of people are sure they’re innocent? Am I fucking crazy for seeing this glaring pattern when no one else does?

I did eventually ban them, but it contributed to a complete schism where most of the more grating people left to form their own clubhouse. Win/win?

Or let’s say, hypothetically, that some miscreant constructs a fake tweet screenshot. It’s shared by a high-profile person and spreads like wildfire.

Should either of them be punished? Which one, and why? The faker probably regarded it as a harmless joke; if not for the sharer, it would’ve remained one. Yet the sharer’s only crime was being popular. Did the sharer know it was fake? Was the sharer trying to inflict harm, draw attention to troubling behavior, or share something that made them laugh? Are the faker and the sharer the same person? If you can’t be sure either way, does it matter?

What if, instead of the thing you may be thinking about, the forgery depicted Donald Trump plagiarizing Barack Obama’s tweet congratulating Michelle Obama for her speech? Does that change any of the answers?

This is really difficult in extremely large groups, where you most want to avoid doling out arbitrary punishment, yet where people who play this game can inflict the most damage. The people who make and enforce the rules may not even be part of the group any more, and certainly can’t form an impression of every individual person in the group, so how can anything be enforced consistently? How do you account for intention, sarcasm, irony, self-deprecating humor? How do you explain this clearly without subjecting yourself to an endless deluge of technicalities? You could refuse to explain yourself at all, of course, but then you leave yourself open for people to offer their own explanations: you’re a tyrant who bans anyone who contradicts you, or you hated them for demographic reasons, or you’re just plain irrational and do zany cruel things to people around you on a whim.

I don't have any good answers

I’m not sure there are any. Corralling people is a tricky problem. We still barely know how to do it in meatspace groups of half a dozen, let alone digital groups numbering in the hundreds of millions.

Our current approaches kinda suck, though.

July is themeless.

I’m doing better!

  • art: Daily Pokémon continue, perhaps a bit too sporadically to be called “daily” but whatever. Also a sunset painting that came out really cool, damn. And this painting of a new Sun/Moon Pokémon. And a lot of other doodling.

    I’ve been trying out a bunch of Krita brushes, and I’m not quite happy with any of them, but I’m getting enough of a feel for what I like to start making my own.

  • twitter: I finally automated @leafeon_brands, my blocklist of advertisers. It now automatically blocks ads shown to my primary account.

  • blog: I wrote some stuff about color, which somehow took way longer than I’d expected — I was hoping for a day or two, and it feels like it took most of the week. That wraps up my June posts, so, er, I really gotta get moving on July.

  • book: I had a book idea that seems to have a lot more staying power than the last one or two, and I did a bunch of research and planning for it. I’ll talk about it later, when I have something to show.

  • flora: I was dragged into fixing shipping for the Floraverse store, which thankfully mostly worked itself out before it became too much of a nightmare.

I’m working on two posts at the moment, so I should be able to catch up soon. As soon as I can, I want to find some blocks of time to experiment with art, work on Runed Awakening and this book idea, and spruce up veekun.

I’ve been trying to paint more lately, which means I have to actually think about color. Like an artist, I mean. I’m okay at thinking about color as a huge nerd, but I’m still figuring out how to adapt that.

While I work on that, here is some stuff about color from the huge nerd perspective, which may or may not be useful or correct.

Hue

Hues are what we usually think of as “colors”, independent of how light or dim or pale they are: general categories like purple and orange and green.

Strictly speaking, a hue is a specific wavelength of light. I think it’s really weird to think about light as coming in a bunch of wavelengths, so I try not to think about the precise physical mechanism too much. Instead, here’s a rainbow.

rainbow spectrum

These are all the hues the human eye can see. (Well, the ones this image and its colorspace and your screen can express, anyway.) They form a nice spectrum, which wraps around so the two red ends touch.

(And here is the first weird implication of the physical interpretation: purple is not a real color, in the sense that there is no single wavelength of light that we see as purple. The actual spectrum runs from red to blue; when we see red and blue simultaneously, we interpret it as purple.)

The spectrum is divided by three sharp lines: yellow, cyan, and magenta. The areas between those lines are largely dominated by red, green, and blue. These are the two sets of primary colors, those hues from which any others can be mixed.

Red, green, and blue (RGB) make up the additive primary colors, so named because they add light on top of black. LCD screens work exactly this way: each pixel is made up of three small red, green, and blue rectangles. It’s also how the human eye works, which is fascinating but again a bit too physical.

Cyan, magenta, and yellow are the subtractive primary colors, which subtract light from white. This is how ink, paint, and other materials work. When you look at an object, you’re seeing the colors it reflects, which are the colors it doesn’t absorb. A red ink reflects red light, which means it absorbs green and blue light. Cyan ink only absorbs red, and yellow ink only absorbs blue; if you mix them, you’ll get ink that absorbs both red and blue, and thus will appear green. A pure black is often included to make CMYK; mixing all three colors would technically get you black, but it might be a bit muddy and would definitely use three times as much ink.
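
You can model this crudely by treating an ink as the fraction of red, green, and blue light it reflects, and mixing as multiplication; this is a toy model, not how real pigment behaves:

def mix(ink_a, ink_b):
    # light has to survive both inks, so multiply the reflectances channel by channel
    return tuple(a * b for a, b in zip(ink_a, ink_b))

cyan = (0, 1, 1)           # absorbs red
yellow = (1, 1, 0)         # absorbs blue
print(mix(cyan, yellow))   # (0, 1, 0): only green light makes it back out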

The great kindergarten lie

Okay, you probably knew all that. What confused me for the longest time was how no one ever mentioned the glaring contradiction with what every kid is taught in grade school art class: that the primary colors are red, blue, and yellow. Where did those come from, and where did they go?

I don’t have a canonical answer for that, but it does make some sense. Here’s a comparison: the first spectrum is a full rainbow, just like the one above. The second is the spectrum you get if you use red, blue, and yellow as primary colors.

a full spectrum of hues, labeled with color names that are roughly evenly distributed
a spectrum of hues made from red, blue, and yellow

The color names come from xkcd’s color survey, which asked a massive number of visitors to give freeform names to a variety of colors. One of the results was a map of names for all the fully-saturated colors, providing a rough consensus for how English speakers refer to them.

The first wheel is what you get if you start with red, green, and blue — but since we’re talking about art class here, it’s really what you get if you start with cyan, magenta, and yellow. The color names are spaced fairly evenly, save for blue and green, which almost entirely consume the bottom half.

The second wheel is what you get if you start with red, blue, and yellow. Red has replaced magenta, and blue has replaced cyan, so neither color appears on the wheel — red and blue are composites in the subtractive model, and you can’t make primary colors like cyan or magenta out of composite colors.

Look what this has done to the distribution of names. Pink and purple have shrunk considerably. Green is half its original size and somewhat duller. Red, orange, and yellow now consume a full half of the wheel.

There’s a really obvious advantage here, if you’re a painter: people are orange.

Yes, yes, we subdivide orange into a lot of more specific colors like “peach” and “brown”, but peach is just pale orange, and brown is just dark orange. Everyone, of every race, is approximately orange. Sunburn makes you redder; fear and sickness make you yellower.

People really like to paint other people, so it makes perfect sense to choose primary colors that easily mix to make people colors.

Meanwhile, cyan and magenta? When will you ever use those? Nothing in nature remotely resembles either of those colors. The true color wheel is incredibly, unnaturally bright. The reduced color wheel is much more subdued, with only one color that stands out as bright: yellow, the color of sunlight.

You may have noticed that I even cheated a little bit. The blue in the second wheel isn’t the same as the blue from the first wheel; it’s halfway between cyan and blue, a tertiary color I like to call azure. True pure blue is just as unnatural as true cyan; azure is closer to the color of the sky, which is reflected as the color of water.

People are orange. Sunlight is yellow. Dirt and rocks and wood are orange. Skies and oceans are blue. Blush and blood and sunburn are red. Sunsets are largely red and orange. Shadows are blue, the opposite of yellow. Plants are green, but in sun or shade they easily skew more blue or yellow.

All of these colors are much easier to mix if you start with red, blue, and yellow. It may not match how color actually works, but it’s a useful approximation for humans. (Anyway, where will you find dyes that are cyan or magenta? Blue is hard enough.)

I’ve actually done some painting since I first thought about this, and would you believe they sell paints in colors other than bright red, blue, and yellow? You can just pick whatever starting colors you want and the whole notion of “primary” goes a bit out the window. So maybe this is all a bit moot.

More on color names

The way we name colors fascinates me.

A “basic color term” is a single, unambiguous, very common name for a group of colors. English has eleven: red, orange, yellow, green, blue, purple, black, white, gray, pink, and brown.

Of these, orange is the only tertiary hue; brown is the only name for a specifically low-saturation color; pink and grey are the only names for specifically light shades. I can understand grey — it’s handy to have a midpoint between black and white — but the other exceptions are quite interesting.

Looking at the first color wheel again, “blue” and “green” together consume almost half of the spectrum. That seems reasonable, since they’re both primary colors, but “red” is relatively small; large chunks of it have been eaten up by its neighbors.

Orange is a tertiary color in either RGB or CMYK: it’s a mix of red and yellow, a primary and secondary color. Yet we ended up with a distinct name for it. I could understand if this were to give white folks’ skin tones their own category, similar to the reasons for the RBY art class model, but we don’t generally refer to white skin as “orange”. So where did this color come from?

Sometimes I imagine a parallel universe where we have common names for other tertiary colors. How much richer would the blue/green side of the color wheel be if “chartreuse” or “azure” were basic color terms? Can you even imagine treating those as distinct colors, not just variants of green or blue? That’s exactly how we treat orange, even though it’s just a variant of red.

I can’t speak to whether our vocabulary truly influences how we perceive or think (and that often-cited BBC report seems to have no real source). But for what it’s worth, I’ve been trying to think of “azure” as distinct for a few years now, and I’ve had a much easier time dealing with blues in art and design. Giving the cyan end of blue a distinct and common name has given me an anchor, something to arrange thoughts around.

Come to think of it, yellow is an interesting case as well. A decent chunk of the spectrum was ultimately called “yellow” in the xkcd map; here’s that chunk zoomed in a bit.

full range of xkcd yellows

How much of this range would you really call yellow, rather than green (or chartreuse!) or orange? Yellow is a remarkably specific color: mixing it even slightly with one of its neighbors loses some of its yellowness, and darkening it moves it swiftly towards brown.

I wonder why this is. When we see a yellowish-orange, are we inclined to think of it as orange because it looks like orange under yellow sunlight? Is it because yellow is between red and green, and the red and green receptors in the human eye pick up on colors that are very close together?


Most human languages develop their color terms in a similar order, with a split between blue and green often coming relatively late in a language’s development. Of particular interest to me is that orange and pink are listed as a common step towards the end — I’m really curious as to whether that happens universally and independently, or it’s just influence from Western color terms.

I’d love to see a list of the basic color terms in various languages, but such a thing is proving elusive. There’s a neat map of how many colors exist in various languages, but it doesn’t mention what the colors are. It’s easy enough to find a list of colors in various languages, like this one, but I have no idea whether they’re basic in each language. Note also that this chart only has columns for English’s eleven basic colors, even though Russian and several other languages have a twelfth basic term for azure. The page even mentions this, but doesn’t include a column for it, which seems ludicrous in an “omniglot” table.

The only language I know many color words in is Japanese, so I went delving into some of its color history. It turns out to be a fascinating example, because you can see how the color names developed right in the spelling of the words.

See, Japanese has a couple different types of words that function like adjectives. Many of the most common ones end in -i, like kawaii, and can be used like verbs — we would translate kawaii as “cute”, but it can function just as well as “to be cute”. I’m under the impression that -i adjectives trace back to Old Japanese, and new ones aren’t created any more.

That’s really interesting, because to my knowledge, only five Japanese color names are in this form: kuroi (black), shiroi (white), akai (red), aoi (blue), and kiiroi (yellow). So these are, necessarily, the first colors the language could describe. If you compare to the chart showing progression of color terms, this is the bottom cell in column IV: white, red, yellow, green/blue, and black.

A great many color names are compounds with iro, “color” — for example, chairo (brown) is cha (tea) + iro. Of the five basic terms above, kiiroi is almost of that form, but unusually still has the -i suffix. (You might think that shiroi contains iro, but shi is a single character distinct from i. kiiroi is actually written with the kanji for iro.) It’s possible, then, that yellow was the latest of these five words — and that would give Old Japanese words for white, red/yellow, green/blue, and black, matching the most common progression.

Skipping ahead some centuries, I was surprised to learn that midori, the word for green, was only promoted to a basic color fairly recently. It’s existed for a long time and originally referred to “greenery”, but it was considered to be a shade of blue (ao) until the Allied occupation after World War II, when teaching guidelines started to mention a blue/green distinction. (I would love to read more details about this, if you have any; the West’s coming in and adding a new color is a fascinating phenomenon, and I wonder what other substantial changes were made to education.)

Japanese still has a number of compound words that use ao (blue!) to mean what we would consider green: aoshingou is a green traffic light, aoao means “lush” in a natural sense, aonisai is a greenhorn (presumably from the color of unripe fruit), aojiru is a drink made from leafy vegetables, and so on.

This brings us to at least six basic colors, the fairly universal ones: black, white, red, yellow, blue, and green. What others does Japanese have?

From here, it’s a little harder to tell. I’m not exactly fluent and definitely not a native speaker, and resources aimed at native English speakers are more likely to list colors familiar to English speakers. (I mean, until this week, I never knew just how common it was for aoi to mean green, even though midori as a basic color is only about as old as my parents.)

I do know two curious standouts: pinku (pink) and orenji (orange), both English loanwords. I can’t be sure that they’re truly basic color terms, but they sure do come up a lot. The thing is, Japanese already has names for these colors: momoiro (the color of peach — flowers, not the fruit!) and daidaiiro (the color of, um, an orange). Why adopt loanwords for concepts that already exist?

I strongly suspect, but cannot remotely qualify, that pink and orange weren’t basic colors until Western culture introduced the idea that they could be — and so the language adopted the idea and the words simultaneously. (A similar thing happened with grey, natively haiiro and borrowed as guree, but in my limited experience even the loanword doesn’t seem to be very common.)

Based on the shape of the words and my own unqualified guesses of what counts as “basic”, the progression of basic colors in Japanese seems to be:

  1. black, white, red (+ yellow), blue (+ green) — Old Japanese
  2. yellow — later Old Japanese
  3. brown — sometime in the past millennium
  4. green — after WWII
  5. pink, orange — last few decades?

And in an effort to put a teeny bit more actual research into this, I searched the Leeds Japanese word frequency list (drawn from websites, so modern Japanese) for some color words. Here’s the rank of each. Word frequency is generally such that the actual frequency of a word is inversely proportional to its rank — so a word in rank 100 is twice as common as a word in rank 200. The five -i colors are split into both noun and adjective forms, so I’ve included an adjusted rank that you would see if they were counted as a single word, using ab / (a + b).

  • white: 1010 ≈ 1959 (as a noun) + 2083 (as an adjective)
  • red: 1198 ≈ 2101 (n) + 2790 (adj)
  • black: 1253 ≈ 2017 (n) + 3313 (adj)
  • blue: 1619 ≈ 2846 (n) + 3757 (adj)
  • green: 2710
  • yellow: 3316 ≈ 6088 (n) + 7284 (adj)
  • orange: 4732 (orenji), n/a (daidaiiro)
  • pink: 4887 (pinku), n/a (momoiro)
  • purple: 6502 (murasaki)
  • grey: 8472 (guree), 10848 (haiiro)
  • brown: 10622 (chairo)
  • gold: 12818 (kin’iro)
  • silver: n/a (gin’iro)
  • navy: n/a (kon)

“n/a” doesn’t mean the word is never used, only that it wasn’t in the top 15,000.
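
If the adjusted-rank formula looks opaque: assuming frequency is inversely proportional to rank, the combined frequency of the noun and adjective forms is 1/a + 1/b, which works out to a rank of ab / (a + b). A quick sanity check against the numbers above:

def adjusted_rank(a, b):
    # 1/a + 1/b == (a + b) / (a * b), so the combined rank is the reciprocal of that
    return a * b / (a + b)

print(round(adjusted_rank(1959, 2083)))   # 1010, the figure given for white
print(round(adjusted_rank(2101, 2790)))   # 1198, the figure given for red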

I’m not sure where the cutoff is for “basic” color terms, but it’s interesting to see where the gaps lie. I’m especially surprised that yellow is so far down, and that purple (which I hadn’t even mentioned here) is as high as it is. Also, green is above yellow, despite having been a basic color for less than a century! Go, green.

For comparison, in American English:

  • black: 254
  • white: 302
  • red: 598
  • blue: 845
  • green: 893
  • yellow: 1675
  • brown: 1782
  • golden: 1835
  • grey: 1949
  • pink: 2512
  • orange: 3171
  • purple: 3931
  • silver: n/a
  • navy: n/a

Don’t read too much into the actual ranks; the languages and corpuses are both very different.

Color models

There are numerous ways to arrange and identify colors, much as there are numerous ways to identify points in 3D space. There are also benefits and drawbacks to each model, but I’m often most interested in how much sense the model makes to me as a squishy human.

RGB is the most familiar to anyone who does things with computers — it splits a color into its red, green, and blue channels, and measures the amount of each from “none” to “maximum”. (HTML sets this range as 0 to 255, but you could just as well call it 0 to 1, or -4 to 7600.)

RGB has a couple of interesting problems. Most notably, it’s kind of difficult to read and write by hand. You can sort of get used to how it works, though I’m still not particularly great at it. I keep in mind these rules:

  1. The largest channel is roughly how bright the color is.

    This follows pretty easily from the definition of RGB: it’s colored light added on top of black. The maximum amount of every color makes white, so less than the maximum must be darker, and of course none of any color stays black.

  2. The smallest channel is how pale (desaturated) the color is.

    Mixing equal amounts of red, green, and blue will produce grey. So if the smallest channel is green, you can imagine “splitting” the color between a grey (green, green, green), and the leftovers (red - green, 0, blue - green). Mixing grey with a color will of course make it paler — less saturated, closer to grey — so the bigger the smallest channel, the greyer the color.

  3. Whatever’s left over tells you the hue.

It might be time for an illustration. Consider the color (50%, 62.5%, 75%). The brightness is “capped” at 75%, the largest channel; the desaturation is 50%, the smallest channel. Here’s what that looks like.

illustration of the color (50%, 62.5%, 75%) split into three chunks of 50%, 25%, and 25%

Cutting out the grey and the darkness leaves a chunk in the middle of actual differences between the colors. Note that I’ve normalized it to (0%, 50%, 100%), which is the percentage of that small middle range. Removing the smallest and largest channels will always leave you with a middle chunk where at least one channel is 0% and at least one channel is 100%. (Or it’s grey, and there is no middle chunk.)

The odd one out is green at 50%, so the hue of this color is halfway between cyan (green + blue) and blue. That hue is… azure! So this color is a slightly darkened and fairly dull azure. (The actual amount of “greyness” is the smallest relative to the largest, so in this case it’s about ⅔ grey, or about ⅓ saturated.) Here’s that color.

a slightly darkened, fairly dull azure
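
If it helps, here’s that same arithmetic written out as code, assuming channels in the 0 to 1 range:

def split_rgb(r, g, b):
    brightness = max(r, g, b)    # rule 1: the largest channel caps the brightness
    greyness = min(r, g, b)      # rule 2: the smallest channel is the grey part
    span = brightness - greyness
    if span == 0:
        return brightness, greyness, None    # pure grey; there's no hue to speak of
    # rule 3: normalize whatever's left over to find the hue
    hue_chunk = tuple((c - greyness) / span for c in (r, g, b))
    return brightness, greyness, hue_chunk

print(split_rgb(0.50, 0.625, 0.75))    # (0.75, 0.5, (0.0, 0.5, 1.0))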

This is a bit of a pain to do in your head all the time, so why not do it directly?

HSV is what you get when you directly represent colors as hue, saturation, and value. It’s often depicted as a cylinder, with hue represented as an angle around the color wheel: 0° for red, 120° for green, and 240° for blue. Saturation ranges from grey to a fully-saturated color, and value ranges from black to, er, the color. The azure above is (210°, ⅓, ¾) in HSV — 210° is halfway between 180° (cyan) and 240° (blue), ⅓ is the saturation measurement mentioned before, and ¾ is the largest channel.

It’s that hand-waved value bit that gives me trouble. I don’t really know how to intuitively explain what value is, which makes it hard to modify value to make the changes I want. I feel like I should have a better grasp of this after a year and a half of drawing, but alas.

I prefer HSL, which uses hue, saturation, and lightness. Lightness ranges from black to white, with the unperturbed color in the middle. Here’s lightness versus value for the azure color. (Its lightness is ⅝, the average of the smallest and largest channels.)

comparison of lightness and value for the azure color
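colorsys covers this model too, though it calls it HLS and puts lightness before saturation:

```python
import colorsys

h, l, s = colorsys.rgb_to_hls(0.50, 0.625, 0.75)
print(h * 360, l, s)   # 210.0, 0.625, 0.333... — lightness is the ⅝ mentioned above
```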

The lightness just makes more sense to me. I can understand shifting a color towards white or black, and the color in the middle of that bar feels related to the azure I started with. Value looks almost arbitrary; I don’t know where the color at the far end comes from, and it just doesn’t seem to have anything to do with the original azure.

I’d hoped Wikipedia could clarify this for me. It tells me value is the same thing as brightness, but the mathematical definition on that page matches the definition of intensity from the little-used HSI model. I looked up lightness instead, and the first sentence says it’s also known as value. So lightness is value is brightness is intensity, but also they’re all completely different.

Wikipedia also says that HSV is sometimes known as HSB (where the “B” is for “brightness”), but I swear I’ve only ever seen HSB used as a synonym for HSL. I don’t know anything any more.

Oh, and in case you weren’t confused enough, the definition of “saturation” is different in HSV and HSL. Good luck!
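For a concrete taste of that difference, here's one arbitrary blue run through colorsys in both models — same color, two different saturation numbers:

```python
import colorsys

r, g, b = 0.25, 0.25, 0.75
print(colorsys.rgb_to_hsv(r, g, b)[1])   # ≈ 0.667 — HSV saturation
print(colorsys.rgb_to_hls(r, g, b)[2])   # 0.5   — HSL saturation, for the very same color
```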

Wikipedia does have some very nice illustrations of HSV and HSL, though, including depictions of them as a cone and double cone.

(Incidentally, you can use HSL directly in CSS now — there are hsl() and hsla() CSS3 functions which evaluate to colors. Combining these with Sass’s scale-color() function makes it fairly easy to come up with decent colors by hand, without having to go back and forth with an image editor. And I can even sort of read them later!)

An annoying problem with all of these models is that the idea of “lightness” is never quite consistent. Even in HSL, a yellow will appear much brighter than a blue with the same saturation and lightness. You may even have noticed in the RGB split diagram that I used dark red and green text, but light blue — the pure blue is so dark that a darker blue on top is hard to read! Yet all three colors have the same lightness in HSL, and the same value in HSV.

Clearly neither of these definitions of lightness or brightness or whatever is really working. There’s a thing called luminance, which is a weighted sum of the red, green, and blue channels that puts green as a whopping ten times brighter than blue. It tends to reflect how bright colors actually appear.
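For the record, the usual weights for sRGB's primaries (the Rec. 709 coefficients) look like this — strictly speaking they apply to linear RGB, a distinction that comes up in the next section:

```python
def luminance(r, g, b):
    """Relative luminance of a linear-light sRGB color, using the Rec. 709 weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance(0, 1, 0))   # 0.7152 — pure green
print(luminance(0, 0, 1))   # 0.0722 — pure blue, roughly a tenth as bright
```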

Unfortunately, luminance and related values are only used in fairly obscure color models, like YUV and Lab. I don’t mean “obscure” in the sense that nobody uses them, but rather that they’re very specialized and not often seen outside their particular niches: YUV is very common in video encoding, and Lab is useful for serious photo editing.

Lab is pretty interesting, since it’s intended to resemble how human vision works. It’s designed around the opponent process theory, which states that humans see color in three pairs of opposites: black/white, red/green, and yellow/blue. The idea is that we perceive color as somewhere along these axes, so a redder color necessarily appears less green — put another way, while it’s possible to see “yellowish green”, there’s no such thing as a “yellowish blue”.

(I wonder if that explains our affection for orange: we effectively perceive yellow as a fourth distinct primary color.)

Lab runs with this idea, making its three channels be lightness (but not the HSL lightness!), a (green to red), and b (blue to yellow). The neutral points for a and b are at zero, with green/blue extending in the negative direction and red/yellow extending in the positive direction.

Lab can express a whole bunch of colors beyond RGB, meaning they can’t be shown on a monitor, or even represented in most image formats. And you now have four primary colors in opposing pairs. That all makes it pretty weird, and I’ve actually never used it myself, but I vaguely aspire to do so someday.

I think those are all of the major ones. There’s also XYZ, which I think is some kind of master color model. Of course there’s CMYK, which is used for printing, but it’s effectively just the inverse of RGB.

With that out of the way, now we can get to the hard part!

Colorspaces

I called RGB a color model: a way to break colors into component parts.

Unfortunately, RGB alone can’t actually describe a color. You can tell me you have a color (0%, 50%, 100%), but what does that mean? 100% of what? What is “the most blue”? More importantly, how do you build a monitor that can display “the most blue” the same way as other monitors? Without some kind of absolute reference point, this is meaningless.

A color space is a color model plus enough information to map the model to absolute real-world colors. There are a lot of these. I’m looking at Krita’s list of built-in colorspaces and there are at least a hundred, most of them RGB.

I admit I’m bad at colorspaces and have basically done my best to not ever have to think about them, because they’re a big tangled mess and hard to reason about.

For example! The effective default RGB colorspace — the one almost everything will assume you’re using unless told otherwise — is sRGB, specifically designed to be this kind of global default. Okay, great.

Now, sRGB has gamma built in. Gamma correction means slapping an exponent on color values to skew them towards or away from black. The color is assumed to be in the range 0–1, so any positive power will produce output from 0–1 as well. An exponent greater than 1 will skew towards black (because you’re multiplying a number less than 1 by itself), whereas an exponent less than 1 will skew away from black.
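For a rough feel for the skew — sRGB behaves approximately like an overall gamma of 2.2, even though its exact formula (coming up below) is a bit more involved:

```python
print(0.5 ** 2.2)        # ≈ 0.218 — an exponent above 1 pushes the midpoint towards black
print(0.5 ** (1 / 2.2))  # ≈ 0.730 — an exponent below 1 pushes it away from black
```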

What this means is that halfway between black and white in sRGB isn’t (50%, 50%, 50%), but around (73%, 73%, 73%). Here’s a great example, borrowed from this post (with numbers out of 255):

alternating black and white lines alongside gray squares of 128 and 187

Which one looks more like the alternating bands of black and white lines? Surely the one you pick is the color that’s actually halfway between black and white.

And yet, in most software that displays or edits images, interpolating white and black will give you a 50% gray — much darker than the original looked. A quick test is to scale that image down by half and see whether the result looks closer to the top square or the bottom square. (Firefox, Chrome, and GIMP get it wrong; Krita gets it right.)

The right thing to do here is convert an image to a linear colorspace before modifying it, then convert it back for display. In a linear colorspace, halfway between white and black is still 50%, but it looks like the 73% grey. This is great fun: it involves a piecewise function and an exponent of 2.4.
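Concretely, here's a sketch of the standard sRGB transfer functions — decoding is a tiny linear segment near black glued onto that 2.4-exponent curve, and encoding is its inverse. It also confirms the numbers above: a physically-halfway grey encodes to roughly 73%, and averaging black and white correctly means doing the averaging in linear space.

```python
def srgb_to_linear(c):
    """Decode one sRGB channel (0-1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel (0-1) back to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

print(linear_to_srgb(0.5))   # ≈ 0.735 — linear 50% grey is the "73%" from before
print(srgb_to_linear(0.5))   # ≈ 0.214 — sRGB 50% grey is much darker than it sounds

# The right way to average black and white: decode, average, re-encode
mid = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(mid)                   # ≈ 0.735, not 0.5
```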

It’s really difficult to reason about this, for much the same reason that it’s hard to grasp text encoding problems in languages with only one string type. Ultimately you still have an RGB triplet at every stage, and it’s very easy to lose track of what kind of RGB that is. Then there’s the fact that most images don’t specify a colorspace in the first place, so you can’t be entirely sure whether it’s sRGB, linear sRGB, or something else entirely; monitors can have their own color profiles; you may or may not be using a program that respects an embedded color profile; and so on. How can you ever tell what you’re actually looking at and whether it’s correct? I can barely keep track of what I mean by “50% grey”.

And then… what about transparency? Should a 50% transparent white atop solid black look like 50% grey, or 73% grey? Krita seems to leave it to the colorspace: sRGB gives the former, but linear sRGB gives the latter. Does this mean I should paint in a linear colorspace? I don’t know! (Maybe I’ll give it a try and see what happens.)

Something I genuinely can’t answer is what effect this has on HSV and HSL, which are defined in terms of RGB. Is there such a thing as linear HSL? Does anyone ever talk about this? Would it make lightness more sensible?

There is a good reason for the gamma skew, at least: the human eye is better at distinguishing dark colors than light ones. I was surprised to learn that, but of course, it’s been hidden from me by sRGB, which is deliberately skewed to dedicate more of its range to darker colors. In a linear colorspace, a gradient from white to black would have a lot of indistinguishable light colors, but appear to have severe banding among the darks.

several different black to white gradients

All three of these are regular black-to-white gradients drawn in 8-bit color (i.e., channels range from 0 to 255). The top one is the naïve result if you draw such a gradient in sRGB: the midpoint is the too-dark 50% grey. The middle one is that same gradient, but drawn in a linear colorspace. Obviously, a lot of dark colors are “missing”, in the sense that we could see them but there’s no way to express them in linear color. The bottom gradient makes this more clear: it’s a gradient of all the greys expressible in linear sRGB.
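As a quick sanity check (reusing the conversion functions from the sketch above), you can count how many of the 256 8-bit linear grey levels fall in the darker half of the sRGB range:

```python
# How many 8-bit linear codes encode to less than sRGB 50%?
dark_codes = sum(1 for i in range(256) if linear_to_srgb(i / 255) < 0.5)
print(dark_codes)   # 55 — only about a fifth of the codes cover all the darker greys
```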

This is the first time I’ve ever delved so deeply into exactly how sRGB works, and I admit it’s kind of blowing my mind a bit. Straightforward linear color is so much lighter, and this huge bias gives us a lot more to work with. Also, 73% being the midpoint certainly explains a few things about my problems with understanding brightness of colors.

There are other RGB colorspaces, of course, and I suppose they all make for an equivalent CMYK colorspace. YUV and Lab are families of colorspaces, though I think most people talking about Lab specifically mean CIELAB (or “L*a*b*”), and there aren’t really any competitors. HSL and HSV are defined in terms of RGB, and image data is rarely stored directly as either, so there aren’t really HSL or HSV colorspaces.

I think that exhausts all the things I know.

Real world color is also a lie

Just in case you thought these problems were somehow unique to computers. Surprise! Modelling color is hard because color is hard.

I’m sure you’ve seen the checker shadow illusion, possibly one of the most effective optical illusions, where the presence of a shadow makes a gray square look radically different than a nearby square of the same color.

Our eyes are very good at stripping away ambient light effects to tell what color something “really” is. Have you ever been outside in bright summer weather for a while, then come inside and found everything starkly blue? That’s lingering compensation for the yellow sunlight, which shifts everything slightly yellow — and the opposite of yellow is blue.

Or, here, I like this. I’m sure there are more drastic examples floating around, but this is the best I could come up with. Here are some Pikachu I found via GIS.

photo of Pikachu plushes on a shelf

My question for you is: what color is Pikachu?

Would you believe… orange?

photo of Pikachu plushes on a shelf, overlaid with color swatches; the Pikachu in the background are orange

In each box, the bottom color is what I color-dropped, and the top color is the same hue with 100% saturation and 50% lightness. It’s the same spot, on the same plush, right next to each other — but the one in the background is orange, not yellow. At best, it’s brown.

What we see as “yellow in shadow” and interpret to be “yellow, but darker” turns out to be another color entirely. (The grey whistles are, likewise, slightly blue.)

Did you know that mirrors are green? You can see it in a mirror tunnel: the image gets slightly greener as it goes through the mirror over and over.

Distant mountains and other objects, of course, look bluer.

This all makes painting rather complicated, since it’s not actually about painting things the color that they “are”, but painting them in such a way that a human viewer will interpret them appropriately.

I, er, don’t know enough to really get very deep here. I really should, seeing as I keep trying to paint things, but I don’t have a great handle on it yet. I’ll have to defer to Mel’s color tutorial. (warning: big)

Blending modes

You know, those things in Photoshop.

I’ve always found these remarkably unintuitive. Most of them have names that don’t remotely describe what they do, the math doesn’t necessarily translate to useful understanding, and they’re incredibly poorly-documented. So I went hunting for some precise definitions, even if I had to read GIMP’s or Krita’s source code.

In the following, A is a starting image, and B is something being drawn on top with the given blending mode. (In the case of layers, B is the layer with the mode, and A is everything underneath.) Generally, the same operation is done on each of the RGB channels independently. Everything is scaled to 0–1, and results are generally clamped to that range.

I believe all of these treat layer alpha the same way: linear interpolation between A and the combination of A and B. If B has alpha t, and the blending mode is a function f, then the result is t × f(A, B) + (1 - t) × A.

If A and B themselves have alpha, the result is a little more complicated, and probably not that interesting. It tends to work how you’d expect. (If you’re really curious, look at the definition of BLEND() in GIMP’s developer docs.)
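Here's a minimal sketch of that structure for a single channel, with Multiply and Screen (both defined in the list below) standing in as the blend functions. The helper names are mine, and real editors do this per pixel across whole images, but the shape is the same.

```python
def multiply(a, b):
    return a * b

def screen(a, b):
    return 1 - (1 - a) * (1 - b)

def blend(a, b, mode, alpha=1.0):
    """Blend one channel: B over A with the given mode, at B's layer opacity."""
    blended = min(max(mode(a, b), 0.0), 1.0)   # clamp the raw result to 0-1
    return alpha * blended + (1 - alpha) * a   # fade between plain A and the result

print(blend(0.6, 0.5, multiply))             # 0.3  — Multiply only ever darkens
print(blend(0.6, 0.5, screen))               # 0.8  — Screen only ever lightens
print(blend(0.6, 0.5, multiply, alpha=0.5))  # 0.45 — halfway back towards plain A
```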

  • Normal: B. No blending is done; new pixels replace old pixels.

  • Multiply: A × B. As the name suggests, the channels are multiplied together. This is very common in digital painting for slapping on a basic shadow or tinting a whole image.

    I think the name has always thrown me off just a bit because “Multiply” sounds like it should make things bigger and thus brighter — but because we’re dealing with values from 0 to 1, Multiply can only ever make colors darker.

    Multiplying with black produces black. Multiplying with white leaves the other color unchanged. Multiplying with a gray is equivalent to blending with black. Multiplying a color with itself squares the color, which is similar to applying gamma correction.

    Multiply is commutative — if you swap A and B, you get the same result.

  • Screen: 1 - (1 - A)(1 - B). This is sort of an inverse of Multiply; it multiplies darkness rather than lightness. It’s defined as inverting both colors, multiplying, and inverting the result. Accordingly, Screen can only make colors lighter, and is also commutative. All the properties of Multiply apply to Screen, just inverted.

  • Hard Light: Equivalent to Multiply if B is dark (i.e., less than 0.5), or Screen if B is light. There’s an additional factor of 2 included to compensate for how the range of B is split in half: Hard Light with B = 0.4 is equivalent to Multiply with B = 0.8, since 0.4 is 0.8 of the way to 0.5. Right.

    This seems like a possibly useful way to apply basic highlights and shadows with a single layer? I may give it a try.

    The math is commutative, but since B is checked and A is not, Hard Light is itself not commutative.

  • Soft Light: Like Hard Light, but softer. No, really. There are several different versions of this, and they’re all a bit of a mess, not very helpful for understanding what’s going on.

    If you graphed the effect various values of B had on a color, you’d have a straight line from 0 up to 1 (at B = 0.5), and then it would abruptly change to a straight line back down to 0. Soft Light just seeks to get rid of that crease. Here’s Hard Light compared with GIMP’s Soft Light, where A is a black to white gradient from bottom to top, and B is a black to white gradient from left to right.

    graphs of combinations of all grays with Hard Light versus Soft Light

    You can clearly see the crease in the middle of Hard Light, where B = 0.5 and it transitions from Multiply to Screen.

  • Overlay: Equivalent to either Hard Light or Soft Light, depending on who you ask. In GIMP, it’s Soft Light; in Krita, it’s Hard Light except the check is done on A rather than B. Given the ambiguity, I think I’d rather just stick with Hard Light or Soft Light explicitly.

  • Difference: abs(A - B). Does what it says on the tin. I don’t know why you would use this? Difference with black causes no change; Difference with white inverts the colors. Commutative.

  • Addition and Subtract: A + B and A - B. I didn’t think much of these until I discovered that Krita has a built-in brush that uses Addition mode. It’s essentially just a soft spraypaint brush, but because it uses Addition, painting over the same area with a dark color will gradually turn the center white, while the fainter edges remain dark. The result is a fiery glow effect, which is pretty cool. I used it manually as a layer mode for a similar effect, to make a field of sparkles. I don’t know if there are more general applications.

    Addition is commutative, of course, but Subtract is not.

  • Divide: A ÷ B. Apparently this is the same as changing the white point to B. Accordingly, the result will blow out towards white very quickly as B gets darker.

  • Dodge and Burn: A ÷ (1 - B) and 1 - (1 - A) ÷ B. Inverses in the same way as Multiply and Screen. Similar to Divide, but with B inverted — so Dodge changes the white point to 1 - B, with similar caveats as Divide. I’ve never seen either of these effects not look horrendously gaudy, but I think photographers manage to use them, somehow.

  • Darken Only and Lighten Only: min(A, B) and max(A, B). Commutative.

  • Linear Light: (2 × B + A) - 1. I think this is the same as Sai’s “Lumi and Shade” mode, which is very popular, at least in this house. It works very well for simple lighting effects, and shares the Soft/Hard Light property that darker colors darken and lighter colors lighten, but I don’t have a great grasp of it yet and don’t know quite how to explain what it does. (There’s a small code sketch of it after this list.) So I made another graph:

    graph of Linear Light, with a diagonal band of shading going from upper left to bottom right

    Super weird! Half the graph is solid black or white; you have to stay in that sweet zone in the middle to get reasonable results.

    This is actually a combination of two other modes, Linear Dodge and Linear Burn, combined in much the same way as Hard Light. I’ve never encountered them used on their own, though.

  • Hue, Saturation, Value: Work like you might expect: convert A to HSV and replace its hue, saturation, or value with B’s.

  • Color: Uses HSL, unlike the above three. Combines B’s hue and saturation with A’s lightness.

  • Grain Extract and Grain Merge: A - B + 0.5 and A + B - 0.5. These are clearly related to film grain, somehow, but their exact use eludes me.

    I did find this example post where someone combines a photo with a blurred copy using Grain Extract and Grain Merge. Grain Extract picked out areas of sharp contrast, and Grain Merge emphasized them, which seems relevant enough to film grain. I might give these a try sometime.
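And since a couple of these modes are just other modes glued together, here's the promised sketch of Hard Light and Linear Light, built on the multiply and screen helpers from the earlier snippet (again, just an illustration, not any particular editor's implementation):

```python
def hard_light(a, b):
    # Multiply for dark B, Screen for light B, with B's half-range stretched back to 0-1
    if b <= 0.5:
        return multiply(a, 2 * b)
    return screen(a, 2 * b - 1)

def linear_light(a, b):
    # Linear Burn below B = 0.5, Linear Dodge above; both collapse to the same line
    return a + 2 * b - 1   # callers should clamp this to 0-1

print(hard_light(0.6, 0.25))    # 0.3 — same as Multiply with B = 0.5
print(hard_light(0.6, 0.75))    # 0.8 — same as Screen with B = 0.5
print(linear_light(0.6, 0.5))   # 0.6 — B = 0.5 leaves A untouched
```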

Those are all the modes in GIMP (except Dissolve, which isn’t a real blend mode; also, GIMP doesn’t have Linear Light). Photoshop has a handful more. Krita has a preposterous number of other modes, no, really, it is absolutely ridiculous, you cannot even imagine.

I may be out of things

There’s plenty more to say about color, both technically and design-wise — contrast and harmony, color blindness, relativity, dithering, etc. I don’t know if I can say any of it with any particular confidence, though, so perhaps it’s best I stop here.

I hope some of this was instructive, or at least interesting!

I was feeling pretty run down at the end of June. I think I wore myself out a little bit. DUMP 2, then Under Construction, then DUMP 3 (which I missed), and all the while fretting over Under Construction.

I took this past SGDQ week “off” and spent it mostly doodling. I’m a bit better now! I’m a post behind for June, a third of the way into July; don’t worry, I’ll catch up.

July doesn’t have a theme. I’ve got some stuff to do, and I’ll do it.

  • art: The 30-minute daily Pokémon continue, though not quite so “daily” for a bit there. I also made a quick birthday gift for a friend, spent a preposterous amount of time painting a hypothetical evolution, and drew an Extyrannomon for Extyrannomon. Plus a lot of doodling.

    Oh, and I put together an art more good chart for the first half of this year.

  • zdoom: My experiment with embedding Lua is a little cleaner — you can now embed a Lua script in a map and call it from a linedef (switch, etc.), making it slightly more of a real proof-of-concept. I also did some research into how to serialize the entire state of the interpreter, for the sake of quicksaves.

  • gamedev: I did a little more work on rainblob, the tiny PICO-8 platformer I started a month or so ago. It now supports multiple “rooms” and has a couple simple intro puzzles. I also wrote about 20% of a Tetris clone using pentominoes while watching SGDQ’s Tetris runs.

  • veekun: I gathered all of Bulbapedia’s Sugimori art to replace the rather low-res and incomplete collection veekun has at the moment. Not up yet, though. I looked into the current state of extracting skeletal animations, yet again, and did not find any traces of success. Alas.

Back to work this week, then!
