E.W. Dijkstra: The Humble Programmer

As a result of a long sequence of coincidences I entered the programming profession officially on the first spring morning of 1952 and as far as I have been able to trace, I was the first Dutchman to do so in my country. In retrospect the most amazing thing was the slowness with which, at least in my part of the world, the programming profession emerged, a slowness which is now hard to believe. But I am grateful for two vivid recollections from that period that establish that slowness beyond any doubt.

After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Centre in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become….., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on van Wijngaarden’s office door, asking him whether I could “speak to him for a moment”; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could. One moral of the above story is, of course, that we must be very careful when we give advice to younger people; sometimes they follow it!

Source: E.W. Dijkstra Archive: The Humble Programmer (EWD 340)

 

Recognizing Bad Advice

Audio: https://api.soundcloud.com/tracks/266803044

A two-part interview with the founders of Remix and Le Tote.

In this episode of Startup School Radio, Kat Manalac talks with Remix founder Sam Hashemi and Le Tote cofounders Brett Northart and Rakesh Tondon.

All three guests in this episode are solving problems that aren’t directly theirs. Sam feels the impact of his public transit planning platform as a commuter, yet he doesn’t plan the routes himself. Brett and Rakesh worked in investment banking and now run Le Tote, the “Netflix of Women’s Clothes”. User feedback and mentorship have been integral to the success of both startups; however, parsing advice about solving another person’s problem brings its own set of challenges.

 

Deep Reinforcement Learning

Humans excel at solving a wide variety of challenging problems, from low-level motor control through to high-level cognitive tasks. Our goal at DeepMind is to create artificial agents that can achieve a similar level of performance and generality. Like a human, our agents learn for themselves to achieve successful strategies that lead to the greatest long-term rewards. This paradigm of learning by trial-and-error, solely from rewards or punishments, is known as reinforcement learning (RL). Also like a human, our agents construct and learn their own knowledge directly from raw inputs, such as vision, without any hand-engineered features or domain heuristics. This is achieved by deep learning of neural networks. At DeepMind we have pioneered the combination of these approaches – deep reinforcement learning – to create the first artificial agents to achieve human-level performance across many challenging domains.
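
The interaction pattern behind this trial-and-error paradigm can be sketched in a few lines. The TypeScript below is only an illustrative skeleton, not DeepMind's code; the Environment and Agent interfaces are hypothetical stand-ins. The point is simply that the agent receives nothing but a raw observation and a scalar reward, and must discover for itself which actions lead to the greatest long-term reward.

```typescript
// Minimal sketch of the trial-and-error loop described above.
// Environment and Agent are hypothetical interfaces, not DeepMind's code:
// the agent sees only a raw observation and a scalar reward at each step.

interface Environment {
  reset(): number[];                                   // initial raw observation (e.g. pixels)
  step(action: number): { obs: number[]; reward: number; done: boolean };
}

interface Agent {
  act(obs: number[]): number;                          // choose an action from the observation
  learn(obs: number[], action: number, reward: number,
        nextObs: number[], done: boolean): void;       // update from the outcome
}

// One episode of interaction: act, observe the reward, learn, repeat.
function runEpisode(env: Environment, agent: Agent): number {
  let obs = env.reset();
  let totalReward = 0;
  let done = false;
  while (!done) {
    const action = agent.act(obs);
    const result = env.step(action);
    agent.learn(obs, action, result.reward, result.obs, result.done);
    totalReward += result.reward;
    obs = result.obs;
    done = result.done;
  }
  return totalReward;                                  // the quantity the agent tries to maximize
}
```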

 

Our agents must continually make value judgements so as to select good actions over bad. This knowledge is represented by a Q-network that estimates the total reward that an agent can expect to receive after taking a particular action. Two years ago we introduced the first widely successful algorithm for deep reinforcement learning. The key idea was to use deep neural networks to represent the Q-network, and to train this Q-network to predict total reward. Previous attempts to combine RL with neural networks had largely failed due to unstable learning. To address these instabilities, our Deep Q-Networks (DQN) algorithm stores all of the agent’s experiences and then randomly samples and replays these experiences to provide diverse and decorrelated training data. We applied DQN to learn to play games on the Atari 2600 console. At each time-step the agent observes the raw pixels on the screen and a reward signal corresponding to the game score, and then selects a joystick direction. In our Nature paper we trained separate DQN agents for 50 different Atari games, without any prior knowledge of the game rules.
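
As a rough sketch of the experience-replay idea described above (not DeepMind's implementation), the TypeScript below stores transitions in a buffer, samples random minibatches to decorrelate the training data, and regresses a Q-network toward the one-step target reward + gamma * max Q(next state, action). The QNetwork interface is a hypothetical stand-in for the deep convolutional network trained on raw pixels.

```typescript
// A rough sketch of DQN-style experience replay (not DeepMind's code).
// QNetwork is a hypothetical interface standing in for the deep neural network
// that, in the real system, maps raw Atari pixels to an estimated total reward per action.

type Transition = {
  state: number[];
  action: number;
  reward: number;
  nextState: number[];
  done: boolean;
};

interface QNetwork {
  predict(state: number[]): number[];                             // estimated total reward for each action
  update(state: number[], action: number, target: number): void;  // nudge Q(state, action) toward target
}

class ReplayBuffer {
  private buffer: Transition[] = [];
  constructor(private capacity: number) {}

  add(t: Transition): void {
    if (this.buffer.length >= this.capacity) this.buffer.shift(); // discard the oldest experience
    this.buffer.push(t);
  }

  sample(batchSize: number): Transition[] {
    // Uniform random sampling gives diverse, decorrelated training data,
    // which is what made learning stable compared with earlier attempts.
    const batch: Transition[] = [];
    for (let i = 0; i < batchSize; i++) {
      batch.push(this.buffer[Math.floor(Math.random() * this.buffer.length)]);
    }
    return batch;
  }
}

// One training step: replay a random minibatch and regress the Q-network
// toward the one-step target  reward + gamma * max_a' Q(nextState, a').
function trainStep(q: QNetwork, replay: ReplayBuffer, batchSize: number, gamma: number): void {
  for (const t of replay.sample(batchSize)) {
    const target = t.done
      ? t.reward
      : t.reward + gamma * Math.max(...q.predict(t.nextState));
    q.update(t.state, t.action, target);
  }
}
```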

Source: Google DeepMind

 

‘We’re in a Bubble’ – Sam Altman

A lot of people have been saying we’re in a tech bubble for quite some time. Someday they’ll be right, but in the meantime, I thought it’d be fun to look back at some articles from the last 10 years:

2007, Coding Horror — Welcome to Dot-Com Bubble 2.0. “You might argue that the new bubble has been in effect since mid-2006, but the signs are absolutely unmistakable now.”

2008, Gigaom — Is Linkedin worth $1B? “The valuation of $1 billion – not as insane as the [$15 billion] valuation placed by Microsoft on Facebook – was jaw dropping.”

2009, Wall Street Journal — The Bursting of the Silicon Valley Bubble (2009 Edition). “Some think that this round of Silicon Valley blowups might be more damaging than the last.”

2010, Daily Beast — Facebook’s $56 Billion Valuation and More Signs of the Tech Apocalypse.  “One analyst predicts Facebook will easily be worth $200 billion by 2015. Right on! And by 2020 it could be the first company with a $1 zillion market value, so buy-buy-buy, everybody!”

and, famously, Signal v. Noise, Facebook is not worth $33,000,000,000. “But the bullshit monopoly-money valuation merry-go-round has to stop.”

Source: ‘We’re in a Bubble’ – Sam Altman

 

k-Nearest Neighbors from Scratch by David Lettier

Using JavaScript, we implement the k-Nearest Neighbors algorithm from the bottom up.

Demo and Codebase

If you would like to play with the k-Nearest Neighbors algorithm in your browser, try out the visually interactive demo. All of the code for the demo is hosted on GitHub. Stars are always appreciated.

The Scenario

Say you have a garden that is host to many different kinds of plants. Each plant’s location in the garden is based on two of its features: the west-to-east direction of the garden corresponds to the diameter of the plant’s flower, while the south-to-north direction corresponds to the length of the plant’s leaf. Each plant in the garden has been carefully labeled with a small tag stuck in the dirt near its base.
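
To make the scenario concrete, here is a small k-nearest-neighbors sketch. The demo itself is written in JavaScript (see the linked repository); the TypeScript version and the plant data below are illustrative only, but the mechanics are the same: measure the distance from the query point to every labeled plant, take the k closest, and let their labels vote.

```typescript
// A minimal k-nearest-neighbors sketch of the garden scenario above.
// The labels and measurements are made up for illustration; the demo's
// actual JavaScript code lives in the linked GitHub repository.

type Plant = { flowerDiameter: number; leafLength: number; label: string };
type Point = { flowerDiameter: number; leafLength: number };

function distance(a: Plant, b: Point): number {
  // Euclidean distance in the two-feature "garden" space.
  return Math.hypot(a.flowerDiameter - b.flowerDiameter, a.leafLength - b.leafLength);
}

function classify(garden: Plant[], query: Point, k: number): string {
  // Find the k labeled plants closest to the query point ...
  const nearest = [...garden]
    .sort((p1, p2) => distance(p1, query) - distance(p2, query))
    .slice(0, k);

  // ... and take a majority vote over their labels.
  const votes = new Map<string, number>();
  for (const p of nearest) votes.set(p.label, (votes.get(p.label) ?? 0) + 1);
  return Array.from(votes.entries()).sort((a, b) => b[1] - a[1])[0][0];
}

// Usage: classify an unlabeled plant by its two features.
const garden: Plant[] = [
  { flowerDiameter: 2.1, leafLength: 5.0, label: "daisy" },
  { flowerDiameter: 2.4, leafLength: 4.7, label: "daisy" },
  { flowerDiameter: 7.8, leafLength: 1.2, label: "sunflower" },
  { flowerDiameter: 8.2, leafLength: 1.5, label: "sunflower" },
];
console.log(classify(garden, { flowerDiameter: 2.2, leafLength: 4.9 }, 3)); // "daisy"
```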

Source: k-Nearest Neighbors from Scratch by David Lettier