

Pushing Python Performance With Parallelization

TL;DR: For certain types of programs, you can take advantage of idiosyncrasies in the Python interpreter and the host operating system to create real shared memory between processes and get some pretty good parallelization.

Premature optimization is the root of all evil. As a developer, you've probably heard this before, and what it basically means is that you shouldn't waste time optimizing code unless it's already doing what you want it to do. We also live in an era of seemingly unlimited resources with AWS/Google Compute, and often the easiest way to get higher throughput in your programs or services is just to pay for more instances. But sometimes it's fun to see what sort of performance we can get on a simple laptop (and save some cash at the same time).

So anyway ... I've been working on this thing, and it took too damn long to run, and I needed to run it lots and lots of times ... so, it was time to optimize. Basic optimization has two main steps: 1) P…
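The preview cuts off there, but the TL;DR's central trick can be sketched. Here is a minimal illustration, assuming a POSIX system where multiprocessing starts workers with fork(): the parent builds a large read-only structure once, and each forked worker reads it through copy-on-write pages instead of receiving a pickled copy. All names below are illustrative, not from the original post.

```python
# Sketch only: assumes a POSIX OS where multiprocessing can use fork(),
# so child processes inherit the parent's memory copy-on-write.
import multiprocessing as mp

BIG_TABLE = {}  # large read-only structure, built once in the parent

def build_table():
    # Stand-in for an expensive load (think gigabytes of precomputed data).
    global BIG_TABLE
    BIG_TABLE = {i: i * i for i in range(1_000_000)}

def worker(key):
    # A forked child reads the parent's BIG_TABLE directly; nothing is
    # serialized or copied between processes as long as pages stay clean.
    return BIG_TABLE[key]

if __name__ == "__main__":
    build_table()
    # "fork" is the default start method on Linux; requesting it explicitly
    # makes the sketch fail loudly on platforms that only support "spawn".
    ctx = mp.get_context("fork")
    with ctx.Pool(processes=4) as pool:
        print(sum(pool.map(worker, range(1000))))
```

One caveat worth noting: CPython's reference counting writes to object headers, so even "read-only" access can gradually dirty copy-on-write pages; keeping shared data in flat structures (arrays, bytes buffers) sidesteps most of that, which is presumably part of the "idiosyncrasies" the post goes on to explore.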
Recent posts

On Language and Getting from Here to There

The Atlantic recently published an interesting article regarding the difference in "efficiency" between languages. The basic idea is that some languages, such as Mandarin Chinese, are very efficient in conveying information and possess linguistic shortcuts such as eliminating gender and tenses and collapsing "he" and "she" into a single pronoun, whereas other languages like German are quite verbose and precise, having different articles (i.e., "the") for different gendered nouns along with plentiful verb conjugations. They also touch very briefly on the idea of directional complements, which I've always found to be a fascinating difference between English and the Asian languages I've studied. Basically, directional complements are words attached to verbs that show the direction of movement of the actors. This sounds like an obvious thing, but it's actually a common pain point for English speakers learning Asian languages and vice versa…

AlphaGo and the AI Revolution: Is Natural Language Understanding Next?

Here's a guest post I wrote for a talented friend's linguistics blog, about AlphaGo and natural language understanding. Does the rise of AlphaGo mean that human-level machine translation is just around the corner? Or is language another beast altogether? Enjoy! http://saramariahasbun.com/2016/03/15/alphago-and-the-ai-revolution-is-natural-language-understanding-next/

Dumb AI Will Kill Us Before Superintelligent AI Has A Chance

Humans have been fascinated with the idea of a robot revolution since we first conceived of robots. Long before Skynet and the Terminator were imagined in the early 80's (?!?), Asimov wrote of the Three Laws of Robotics, warning us of the potential for intelligent robots to do us harm. Recently, with the rapid expansion of deep learning neural nets in the last few years, we hear more and more about how an artificial "superintelligence" (i.e., beyond human intelligence level) could wipe us out with as little difficulty as a human stepping on an ant. While I do think that superintelligence could potentially be an apocalyptic threat to humanity if it were ever to be created without proper safety built in, I am actually way more concerned about what I like to call "dumb AI", which honestly can kill us right now if we're not careful. What's "dumb AI"? Well, artificial intelligence, or AI, is a buzzword that spans many different technologies. The medi…

Canterbury Tales Neural Network

I trained another RNN (multi-layer recurrent neural network), this time to generate poems in Middle English in the style of Canterbury Tales. It started generating interesting stuff after just a few minutes of training, but I let it finish anyway. I noticed in one of the samples that it had generated a title, so I seeded it with that title, and sure enough, it closed the poem eventually and started a new one. Kinda cool. The numbers are line numbers (you can see they're obviously not accurate), and it generates footnotes, too, since they were in the source text. The indentation and spacing are all generated by the neural net, too.

AUCCIATES TALE,

  This walmeth have,' quod Melibee, 'by see                   760
  For it was grave, and of my voys I may,
  To hir, with-outen fond to wedden she                       285
  How that a man unto pituk than;                            1160
  Til that were I his messaille aboghnis.
  For, in the sovereyntes ful m…

A neural network that writes Scalia dissents

I trained a recurrent neural network (https://github.com/karpathy/char-rnn) on a bunch of Justice Scalia's dissents from the past few years. It spits out some amusing stuff, depending on the starter text and how "adventurous" you want the output. Since it's character-based and not word-based, it makes a bunch of spelling errors (unlike Justice Scalia), but is also able to create new words (just like Justice Scalia!). Here are some samples.

*** Starter text: "Justice SCALIA", random level: 0.8. Never would have expected this from a strict constructionist (check the first sentence). This one brings in same-sex marriage, constitutional interpretation, and the typical contempt-ridden air quotes.

"Justice SCALIA, dissenting. The Constitution is an opinion, and so views that "[t]he Court tait the structure relations (interneline) rejectly and weands is not categorical, while all this one inference to do be not applying a nample between the…

Stuttering in Korea

I had given up on English. It's my native language, but I figured after 30 some-odd years of disfluent speech, it was time to try something else. So I signed up for language classes in Korean, rationalizing that if I was going to try to teach myself how to speak, I might as well learn a new language along the way. This might seem completely insane, but when the prevailing theme of your conscious thoughts for multiple decades is some variant of "Why can't I say what I want to say?", you come up with lots of crazy ideas. For background, I've been a person who stutters for my entire life. I wrote about it on this blog a few years ago, so I think it's time for a followup. I've learned a lot since then, about myself and about stuttering, but in this post I simply want to give some insight into what it's actually like to stutter, and how my speech has changed over time. After the last stuttering post, the predominant reaction I got from friends was ei…