Scale Up with Parallel Query
Version 9.6 adds support for parallelizing some query operations, enabling several or all of a server's cores to be used to return query results faster. This release includes parallel sequential (table) scans, aggregation, and joins. Depending on the details of the workload and the cores available, parallelism can speed up big-data queries by as much as 32 times.
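As a sketch of what this looks like in practice (the table name is hypothetical; `max_parallel_workers_per_gather` is the 9.6 configuration knob that gates parallel plans):

```sql
-- Illustrative only: `big_table` stands in for any large table.
-- In 9.6, parallel plans are enabled per session (or in postgresql.conf)
-- via max_parallel_workers_per_gather.
SET max_parallel_workers_per_gather = 4;

EXPLAIN
SELECT count(*) FROM big_table;
-- A parallel plan shows a Gather node driving several
-- "Parallel Seq Scan" workers, with Partial and Finalize
-- Aggregate steps combining the per-worker counts.
```

Whether the planner actually chooses a parallel plan depends on table size and cost settings, so small tables may still get a plain sequential scan.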
“I migrated our entire genomics data platform – all 25 billion legacy MySQL rows of it – to a single Postgres database, leveraging the row compression abilities of the JSONB datatype, and the excellent GIN, BRIN, and B-tree indexing modes. Now with version 9.6, I expect to harness the parallel query functionality to allow even greater scalability for queries against our rather large tables,” said Mike Sofen, Chief Database Architect, Synthetic Genomics.
Brandon Rohrer: How do Convolutional Neural Networks work?
I’m pleased to say that Postgres-BDR is on its way to PostgreSQL 9.6, and even better, it works without a patched PostgreSQL.
BDR has always been an extension, but on 9.4 it required a heavily patched PostgreSQL, one that isn’t fully on-disk-format compatible with stock community PostgreSQL 9.4. The goal all along has been to allow it to run as an extension on an unmodified PostgreSQL … and now we’re there.
The years of effort we at 2ndQuadrant have put into getting the series of patches from BDR into PostgreSQL core have paid off. As of PostgreSQL 9.6, the only major patch that Postgres-BDR on 9.4 carries that PostgreSQL core doesn’t is the sequence access method patch that powers global sequences.
This means that Postgres-BDR on 9.6 will not support global sequences, at least not the same way they exist in 9.4. The 9.6 version will incorporate a different approach to handling sequences on distributed systems, and in the process address some issues that arose when using global sequences in production.
The MIT License, Line by Line
171 words every programmer should understand
The MIT License is the most popular open-source software license. Here’s one read of it, line by line.
Read the License
If you’re involved in open-source software and haven’t taken the time to read the license from top to bottom—it’s only 171 words—you need to do so now. Especially if licenses aren’t your day-to-day. Make a mental note of anything that seems off or unclear, and keep trucking. I’ll repeat every word again, in chunks and in order, with context and commentary. But it’s important to have the whole in mind.
The MIT License (MIT)
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software.
Learning how to write performant code is hard. Here are a few simple laws that I hope will convey the core of the matter. I’m calling them…
Crista’s Five Laws of Performant Software
- Programming language << Programmers’ awareness of performance. The programming language doesn’t matter as much as the programmers’ awareness of the implementation of that language and its libraries. These days, all mainstream programming languages and their standard libraries are pretty well optimized, and can be used to write performant code in a large range of application domains. They can also be used to write horribly performing code. For better or for worse, the high-level languages provide a large surface area of candy features and libraries that are really awesome to use… until you realize they require huge amounts of memory, or have super-linear behavior with the size of the input. It is critical that people ask “how does this magic actually work?”, go search for the answer, and figure out the best way of scaling things if the convenient candy is not as good as needed. There is usually another, better-performing way of doing it, even in high-level programming languages. (The main reason C/C++ programmers don’t run into this as often is that there is an appalling lack of candy in the C/C++ ecosystem… performance isn’t hidden – nothing is!)
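The point about convenient features hiding super-linear costs can be made concrete with a small Python timing sketch (the container sizes here are arbitrary): the same `in` operator is a linear scan on a list but an average-constant-time hash lookup on a set, so awareness of the implementation, not the language, decides the performance.

```python
import timeit

# The convenient `in` operator behaves very differently depending on the
# container behind it: on a list each membership test scans elements one
# by one (O(n)); on a set it is a hash lookup (O(1) on average).
n = 100_000
haystack_list = list(range(n))
haystack_set = set(haystack_list)

# Worst case for the list: the needle is absent, so all n items are scanned.
slow = timeit.timeit(lambda: -1 in haystack_list, number=100)
fast = timeit.timeit(lambda: -1 in haystack_set, number=100)

print(f"list membership: {slow:.4f}s, set membership: {fast:.4f}s")
assert fast < slow  # the set lookup wins by orders of magnitude at this size
```

Doubling `n` roughly doubles the list timing while leaving the set timing flat, which is exactly the kind of behavior worth asking “how does this magic actually work?” about before the input grows.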
Getting things done – http://jvns.ca/blog/2016/09/19/getting-things-done/
A conversation with Aston Motes, Dropbox’s first employee.
Craig: What were you doing before Dropbox and how did you get involved?
Aston: I’ll go back to when I got to MIT. So, I fully expected to be a professional software engineer and to work at a big company after graduation. I assumed I would go to Microsoft or Google or maybe Amazon. Those were great jobs and a lot of my peers were going to those companies. They seemed like pretty fun places to be. You know, they had free drinks in their refrigerators! And that was kind of my ideal situation after college.
Halfway through MIT I started reading the essays of this guy named Paul Graham [PG] and learned that there was this whole other world of Silicon Valley, the startup world. Obviously, all those companies that I was just talking about came up in this culture of the venture capital backed startup, but that startup world wasn’t visible to me at the time.
I actually went to a talk PG gave at MIT. And I just became an acolyte of his, based on his blog posts. They shifted my perspective, and I eventually decided I really wanted to start a startup rather than work at one of those big companies. The only problem was, I didn’t have any great startup ideas. The closest my friends and I got to a real startup was a book exchange website we built for MIT students called bookX (now defunct). We thought it might be a business, but it turned out it wasn’t.
Source: Employee #1: Dropbox · The Macro
Critics say Massive Open Online Courses, or MOOCs, are over-hyped. But defenders say they are reaching people in unexpected ways.
The world’s most popular online course is a general introduction to the art of learning, taught jointly by an educator and a neuroscientist.
“Learning How To Learn,” which was created by Barbara Oakley, an electrical engineer, and Terry Sejnowski, a neuroscientist, has been ranked as the leading class by enrollment in a survey of the 50 largest online courses released earlier this month by the Online Course Report website.
The course is “aimed at a broad audience of learners who wanted to improve their learning performance based on what we know about how brains learn,” said Dr. Sejnowski, the director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in La Jolla, Calif.
A time traveler from 1915 arriving in 1965 would have been astonished by the scientific theories and engineering technologies invented during that half century. One can only speculate, but it seems likely that few of the major advances that emerged during those 50 years were even remotely foreseeable in 1915: Life scientists discovered DNA, the genetic code, transcription, and examples of its regulation, yielding, among other insights, the central dogma of biology. Astronomers and astrophysicists found other galaxies and the signatures of the big bang. Groundbreaking inventions included the transistor, photolithography, and the printed circuit, as well as microwave and satellite communications and the practices of building computers, writing software, and storing data. Atomic scientists developed NMR and nuclear power. The theory of information appeared, as well as the formulation of finite state machines, universal computers, and a theory of formal grammars. Physicists extended the classical models with the theories of relativity, quantum mechanics, and quantum fields, while launching the standard model of elementary particles and conceiving the earliest versions of string theory.
Some of these advances emerged from academia and some from the great industrial research laboratories where pure thinking was valued along with better products. Would a visitor from 1965, having traveled the 50 years to 2015, be equally dazzled?
Maybe not. The pace of technological development might have surprised most futurists, but the trajectory was at least partly foreseeable. This is not to deny that our time traveler would find the Internet, new medical imaging devices, advances in molecular biology and gene editing, the verification of gravity waves, and other inventions and discoveries remarkable, nor to deny that these developments often …