Writing good code: how to reduce the cognitive load of your code – Christian M. Mackeprang

Low bug count, good performance, easy modification. Good code is high-impact, and is perhaps the main reason behind the existence of the proverbial 10x developer. And yet, despite its importance, it often eludes new developers. Literature on the subject usually amounts to disconnected collections of tips. How can a new developer be expected to memorize all of that? “Code Complete”, the best-known reference on the subject, is 960 pages long!

I believe it’s possible to construct a simple mental framework that works with any language or library and that leads to good-quality code by default. There are five main concepts I will talk about here. Keep them in mind and writing good code should be a breeze.

Keep your personal quirks out of it

You read some article that blows your mind with new tricks. Now you are going to write clever code and all your peers will be impressed.

The problem is that people just want to fix their bugs and move on. Your clever trick is often nothing more than a distraction. As I talked about in “Applying neuroscience to software development”, when people have to digest your piece of code, their “mental stack” fills up and it becomes hard to make progress.
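To make the point concrete, here is a contrived Python sketch (the numbers and names are invented purely for illustration): both snippets compute the same sum, but the “clever” one forces the reader to simulate it in their head, while the plain loop can be skimmed.

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# "Clever": sum of squares of the even numbers, crammed into one expression.
clever_total = reduce(lambda acc, n: acc + (n * n if n % 2 == 0 else 0), numbers, 0)

# Plain: the same computation, readable at a glance.
plain_total = 0
for n in numbers:
    if n % 2 == 0:
        plain_total += n * n

assert clever_total == plain_total == 56
```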

Source: Writing good code: how to reduce the cognitive load of your code – Christian M. Mackeprang

 

Why Facts Don’t Change Our Minds – The New Yorker

Why Facts Don’t Change Our Minds. New discoveries about the human mind show the limitations of reason.

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

Source: Why Facts Don’t Change Our Minds – The New Yorker

 

Gregory Szorc’s Digital Home | Better Compression with Zstandard

I think I first heard about the Zstandard compression algorithm at a Mercurial developer sprint in 2015. At one end of a large table a few people were uttering expletives out of sheer excitement. At developer gatherings, that’s the universal signal that something is awesome. Long story short, a Facebook engineer shared a link to the RealTime Data Compression blog operated by Yann Collet (then known as the author of LZ4 – a compression algorithm known for its insane speeds) and people were completely nerding out over the excellent articles and the data within showing the beginnings of a new general-purpose lossless compression algorithm named Zstandard. It promised better-than-deflate/zlib compression ratios and performance on both compression and decompression. This being a Mercurial meeting, many of us were intrigued because zlib is used by Mercurial for various functionality (including on-disk storage and compression over the wire protocol) and zlib operations frequently appear as performance hot spots.
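For a rough sense of what that zlib-versus-zstd comparison looks like in code, here is a minimal sketch using Python’s standard zlib module and the third-party zstandard bindings; the sample data and compression levels are arbitrary, chosen only for illustration.

```python
import zlib
import zstandard  # third-party: pip install zstandard

data = b"the quick brown fox jumps over the lazy dog " * 1000

# deflate/zlib at its default-ish level
zlib_out = zlib.compress(data, level=6)

# zstd at its default level
cctx = zstandard.ZstdCompressor(level=3)
zstd_out = cctx.compress(data)

# verify the zstd output round-trips losslessly
dctx = zstandard.ZstdDecompressor()
assert dctx.decompress(zstd_out) == data

print(f"original: {len(data)} bytes, zlib: {len(zlib_out)}, zstd: {len(zstd_out)}")
```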

Source: Gregory Szorc’s Digital Home | Better Compression with Zstandard