The Startup Zeitgeist · The Macro

The Startup Zeitgeist http://macro.ycombinator.com/articles/2016/05/the-startup-zeitgeist/



Reading applications to Y Combinator is like having access to a crystal ball. Twice per year — once in the winter and once in the spring — thousands of men and women apply to Y Combinator. Each of these bright minds has his or her own vision of the future of technology. They pitch ideas related …

 

Our brain uses statistics to calculate confidence, make decisions | Neuroscientist News

The brain produces feelings of confidence that inform decisions the same way statistics pulls patterns out of noisy data.

The directions, which came via cell phone, were a little garbled, but as you understood them: “Turn left at the 3rd light and go straight; the restaurant will be on your right side.” Ten minutes ago you made the turn. Still no restaurant in sight. How far will you be willing to drive in the same direction?

Research suggests that it depends on your initial level of confidence after getting the directions. Did you hear them right? Did you turn at the 3rd light? Could you have driven past the restaurant? Is it possible the directions are incorrect?

Human brains are constantly processing data to make statistical assessments that translate into the feeling we call confidence, according to a study published in Neuron. This feeling of confidence is central to decision making, and, despite ample evidence of human fallibility, the subjective feeling relies on objective calculations.

“The feeling ultimately relies on the same statistical computations a computer would make,” says Professor Adam Kepecs, a neuroscientist at Cold Spring Harbor Laboratory and lead author of the new study. “People often focus on the situations where confidence is divorced from reality,” he says. “But if confidence were always error-prone, what would be its function? If we didn’t have the ability to optimally assess confidence, we’d routinely find ourselves driving around for hours in this scenario.”

For a statistician, calculating confidence involves looking at a set of data, perhaps a sampling of marbles pulled from a bag, and drawing a conclusion about the entire bag based on that sample. “The feeling of confidence and the objective calculation are related intuitively,” says Kepecs. “But how much so?”

In experiments with human subjects, Kepecs and colleagues therefore tried to control for different factors that can vary from person to person. The aim was to establish what evidence contributed to each decision. In this way they could compare people’s reports of confidence with the optimal statistical answer. “If we can quantify the evidence that informs a person’s decision, then we can ask how well a statistical algorithm performs on the same evidence,” says Kepecs.
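To make the comparison concrete, here is a minimal sketch (not code from the study) of how a statistical algorithm might turn the marble-sampling evidence above into a confidence value, assuming a simple Beta-Binomial model; the counts are made up for illustration:

from scipy.stats import beta

# Hypothetical evidence: 7 of 10 marbles drawn from the bag were red.
red_drawn, total_drawn = 7, 10

# With a uniform Beta(1, 1) prior, the posterior over the fraction of
# red marbles in the bag is Beta(1 + red, 1 + non-red).
posterior = beta(1 + red_drawn, 1 + (total_drawn - red_drawn))

# "Confidence" that the bag is mostly red is the posterior probability
# that the red fraction exceeds one half.
confidence = 1 - posterior.cdf(0.5)
print("P(bag is mostly red | sample) = {:.2f}".format(confidence))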

Source: Our brain uses statistics to calculate confidence, make decisions | Neuroscientist News

 

Preparing for the Future of Artificial Intelligence | whitehouse.gov

Today, we’re announcing a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Source: Preparing for the Future of Artificial Intelligence | whitehouse.gov

 

uvloop: Blazing fast Python networking — magicstack

TL;DR

asyncio is an asynchronous I/O framework shipping with the Python Standard Library. In this blog post, we introduce uvloop: a full, drop-in replacement for the asyncio event loop. uvloop is written in Cython and built on top of libuv.

uvloop makes asyncio fast. In fact, it is at least 2x faster than nodejs, gevent, and any other Python asynchronous framework. The performance of uvloop-based asyncio is close to that of Go programs.

asyncio & uvloop

The asyncio module, introduced by PEP 3156, is a collection of network transports, protocols, and streams abstractions, with a pluggable event loop. The event loop is the heart of asyncio. It provides APIs for:

  • scheduling calls,
  • transmitting data over the network,
  • performing DNS queries,
  • handling OS signals,
  • creating servers and connections through convenient abstractions,
  • working with subprocesses asynchronously.

As of this moment, uvloop is only available on *nix platforms and Python 3.5.

uvloop is a drop-in replacement for the built-in asyncio event loop. You can install uvloop with pip:

$ pip install uvloop

Using uvloop in your asyncio code is as easy as:

import asyncio
import uvloop

# Install uvloop's event loop policy so asyncio creates uvloop event loops
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

The above snippet makes any asyncio.get_event_loop() call return an instance of uvloop.
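For context, here is a minimal sketch of what that looks like in practice: a toy TCP echo server. The port and handler are illustrative assumptions, not taken from the post:

import asyncio
import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

async def echo(reader, writer):
    # Read a chunk from the client and send it straight back
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()

# get_event_loop() now returns a uvloop-backed loop
loop = asyncio.get_event_loop()
server = loop.run_until_complete(asyncio.start_server(echo, '127.0.0.1', 8888))
try:
    loop.run_forever()
finally:
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()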

Source: uvloop: Blazing fast Python networking — magicstack

 

Deep Language Modeling for Question Answering using Keras

Introduction

Question answering has received more focus as large search engines have basically mastered general information retrieval and are starting to cover more edge cases. Question answering happens to be one of those edge cases, because it can involve a lot of syntactic nuance that doesn’t get captured by standard information retrieval models, like LDA or LSI. Hypothetically, deep learning models would be better suited to this type of task because of their ability to capture higher-order syntax. Two papers, “Applying deep learning to answer selection: a study and an open task” (Feng et al., 2015) and “LSTM-based deep learning models for non-factoid answer selection” (Tan et al., 2016), are recent examples that have applied deep learning to question-answering tasks with good results.

Feng et al. used an in-house Java framework for their work, and Tan et al. built their model entirely in Theano. Personally, I am a lot lazier than they are, and I don’t understand CNNs very well, so I would like to use an existing framework to build one of their models and see whether I can get similar results. Keras is a really popular framework that supports everything we might need to put the model together.
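As a rough illustration of the kind of model those papers describe, here is a minimal Keras sketch of an LSTM answer-selection scorer: a shared embedding and LSTM encode the question and a candidate answer, and cosine similarity scores the pair. All sizes and names below are illustrative assumptions, not values from the papers or the repository (the papers also train with a margin ranking loss over positive and negative answers; plain MSE here just keeps the sketch short):

from keras.layers import Input, Embedding, LSTM, Dot
from keras.models import Model

# Illustrative sizes only
vocab_size, embed_dim, max_len, hidden = 20000, 100, 40, 128

# Shared embedding and LSTM encoder applied to both question and answer
embed = Embedding(vocab_size, embed_dim)
encode = LSTM(hidden)

question = Input(shape=(max_len,), name="question")
answer = Input(shape=(max_len,), name="answer")

q_vec = encode(embed(question))
a_vec = encode(embed(answer))

# Cosine similarity between the two encodings scores the match
score = Dot(axes=1, normalize=True)([q_vec, a_vec])

model = Model(inputs=[question, answer], outputs=score)
model.compile(optimizer="adam", loss="mse")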

The Github repository for this project can be found here.

Source: Deep Language Modeling for Question Answering using Keras

 

anomaly_detection/Anomaly Detection Post.ipynb at master · fastforwardlabs/anomaly_detection · GitHub

Anomaly Detection & Probabilistic Programming

This post will present a short survey on popular methods in anomaly detection. After exploring some of the goals and limitations of these methods, we will suggest that probabilistic programming provides an easy way to formulate more robust anomaly detection models.
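As a hint of what that formulation looks like, here is a minimal sketch (not taken from the notebook) using PyMC3: fit a simple Gaussian model to the data, then flag points that are implausible under the fitted posterior. The data and the four-sigma threshold are illustrative assumptions:

import numpy as np
import pymc3 as pm

# Toy data: mostly well-behaved readings plus a few extreme values
data = np.concatenate([np.random.normal(0.0, 1.0, 500), [8.0, -7.5, 9.2]])

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=5.0)
    pm.Normal("obs", mu=mu, sd=sigma, observed=data)
    trace = pm.sample(1000, tune=1000)

# Flag points more than four posterior-mean standard deviations from the mean
mu_hat, sigma_hat = trace["mu"].mean(), trace["sigma"].mean()
anomalies = data[np.abs(data - mu_hat) > 4 * sigma_hat]
print(anomalies)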

Source: anomaly_detection/Anomaly Detection Post.ipynb at master · fastforwardlabs/anomaly_detection · GitHub

 

The Increasing Problem With the Misinformed

The Increasing Problem with the Misinformed https://www.baekdal.com/analysis/the-increasing-problem-with-the-misinformed



When discussing the future of newspapers, we have a tendency to focus only on the publishing side. We talk about the changes in formats, the new reader behaviors, the platforms, the devices, and the strange new world of distributed digital distribution, which are not just forcing us to do things in new ways, but also atomize the very core of the newspaper.

 

How To Make Fossils Productive Again | Peter Bailis

Does the Database Community Have an Identity Crisis? http://www.bailis.org/blog/how-to-make-fossils-productive-again/



At NorCal Database Day 2016, I served on a panel titled “40+ Years of Database Research: Do We Have an Identity Crisis?” What follows is a loose transcript of my talk, which I enjoyed writing and delivering and which I hope you enjoy reading.