Import Madness

Sing, goddess, the rage of George and the ImportError,

and its devastation, which put pains thousandfold upon his programs,

hurled in their multitudes to the house of Hades strong ideas

of systems, but gave their code to be the delicate feasting

of dogs, of all birds, and the will of Guido was accomplished

since that time when first there stood in division of conflict

Brett Cannon’s son the lord of modules: the brilliant import keyword…

Backstory Before Things Get Weird

This post is about how an ImportError led me to a very strange place.

I was writing a simple Python program. It was one of my first attempts at Python 3.

I tried to import some code and got an ImportError. Normally I solve ImportErrors by shuffling files around until the error goes away. But this time none of my shuffling solved the problem. So I found myself actually reading the
official documentation for the Python import system. Somehow I’d spent over five years writing Python code professionally
without ever reading more than snippets of those particular docs.

What I learned there changed me.

Yes, I answered my simple question, which had something to do with when I should use dots in an import:

# most of the time, don't use dots at all:
from spam import eggs

# If Python has trouble finding spam, and spam.py is in the same directory
# (i.e. the same package) as the code doing the import:
from .spam import eggs

# when spam.py is in the *enclosing* package, i.e. one level up:
from ..spam import eggs

# SyntaxError! At least in Python 3, you can only use the dots with
# the `from a import b` syntax:
import .spam

But more importantly I realized that this whole time I had never really understood what the word module means in Python.

According to the official Python tutorial, a “module” is a file containing Python definitions and statements.

In other words, spam.py is a module.

But it’s not quite that simple. In my running Python program, if I import requests, then what is type(requests)?

It’s module.

That means module is a type of object in a running Python program. And requests in my running program is derived from requests.py, but it’s not the same thing.
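
You can check this for yourself in an interactive session (assuming requests is installed; any importable module behaves the same way):

>>> import requests
>>> type(requests)
<class 'module'>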

So what is the module class in Python and how is babby module formed?

Modules and the Python Import System

Modules are created automatically in Python when you import. It turns out that the import keyword in Python is syntactic sugar for a somewhat more complicated process. When you import requests, Python actually does two things:

1) Calls an internal function: __import__('requests') to create, load, and initialize the requests module object

2) Binds the local variable requests to that module
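
In code, that desugaring looks roughly like this (again assuming requests is installed):

# approximately what `import requests` expands to:
requests = __import__('requests')   # create, load, and initialize (or fetch from cache)
print(type(requests))               # <class 'module'>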

And how exactly does __import__() create, load, and initialize a module?

Well, it’s complicated. I’m not going to go into full detail, but there’s a great video where Brett Cannon, the main maintainer of the Python import system, painstakingly walks through the whole shebang.

But in a nutshell, importing in Python has 5 steps:

1. See if the module has already been imported

Python maintains a cache of modules that have already been imported. The cache is a dictionary held at sys.modules.

If you try to import requests, __import__ will first check if there’s a module in sys.modules named “requests”. If there is, Python just gives you the module object in the cache and does not do any more work.
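
You can watch the cache at work:

import sys
import json                # the full find/load/execute dance (unless json is already cached)
import json as json_again  # definitely a cache hit now: just a dictionary lookup

print('json' in sys.modules)              # True
print(json_again is sys.modules['json'])  # True: same object, no repeated work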

If the module isn’t cached (usually because it hasn’t been imported yet, but also maybe because someone did something nefarious…) then:

2. Find the source code using sys.path

sys.path is a list in every running Python program that tells the interpreter where it should look for modules when
it’s asked to import them. Here’s an excerpt from my current sys.path:

# the directory our code is running in:
'',
# where my Python executable lives:
'/Users/rogueleaderr/miniconda3/lib/python3.5',
# the place where `pip install` puts stuff:
'/Users/rogueleaderr/miniconda3/lib/python3.5/site-packages'

When I import requests Python goes and looks in those directories for requests.py. If it can’t find it, I’m in for an ImportError. I’d estimate that the large majority of real-life ImportErrors happen because the source code you’re
trying to import isn’t in a directory that’s on sys.path. Move your module or add the directory to sys.path and you’ll have a better day.
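
For example (the directory and module name here are hypothetical; substitute your own):

import sys

# hypothetical: my_helpers.py lives in a directory the finder doesn't know about yet
sys.path.append("/Users/rogueleaderr/projects/utilities")

import my_helpers  # now Python can find utilities/my_helpers.py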

In Python 3, you can do some pretty crazy stuff to tell Python to look in esoteric places for code. But that’s a topic for another day!

3. Make a Module object

Python has a builtin type called ModuleType. Once __import__ has found your source code, it’ll create a new ModuleType instance and attach your module.py’s source code to it.
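
You can instantiate one yourself; ModuleType is exposed through the standard library's types module:

import types

spam = types.ModuleType("spam")  # the same class the import machinery uses
print(type(spam))                # <class 'module'>
print(spam.__name__)             # 'spam'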

Then, the exciting part:

4. Execute the module source code!

__import__ will create a new namespace, i.e. scope, i.e. the __dict__ attribute attached to most Python objects.
And then it will actually exec your code inside of that namespace.

Any variables or functions that get defined during that execution are captured in that namespace. And the namespace is
attached to the newly created module, which is itself then returned into the importing scope.
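
Here’s a stripped-down sketch of that exec step, with some made-up source code standing in for a module file:

# a tiny stand-in for the contents of a module file
source = """
greeting = "hello"
def shout():
    return greeting.upper()
"""

namespace = {}
exec(source, namespace)  # run the source inside a fresh namespace

# the definitions are captured in the namespace, just like module attributes
print(namespace["greeting"])  # hello
print(namespace["shout"]())   # HELLO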

5. Cache the module inside sys.modules

If we try to import requests again, we’ll get the same module object back. Steps 2-5 will not be repeated.
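
And because the cache is just a dict, you can plant a module in it yourself and the import system will happily hand it back (a contrived example with a made-up module name):

import sys
import types

handmade = types.ModuleType("handmade")  # never written to disk
handmade.answer = 42
sys.modules["handmade"] = handmade       # plant it in the cache

import handmade as cached  # step 1 finds it; steps 2-5 never run
print(cached.answer)       # 42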

Okay! This is a pretty cool system. It lets us write many pretty Python programs.

But, if we’re feeling demented, it also lets us write some pretty dang awful Python programs.

Where it gets weird

I learned how to fix my immediate import problem. That wasn’t enough.

Gizmo gets wet

With these new import powers in hand, I immediately started thinking about how I could use them for evil, rather than good. Because, as we know:

Good is dumb
(c. Five Finger Tees)

So far, the worst idea I’ve had for how to misuse the Python import system is to
implement a mergesort algorithm using just the import keyword. At first I didn’t know if it was possible. But, spoiler alert, it is!

It doesn’t actually take much code. It just takes the stubbornness to figure out how to subvert a lot of very well-intentioned, normally helpful machinery in the import system.

We can do this. Here’s how:

Remember that when we import a module, Python executes all the source code.

So imagine I start up Python and define a function:

>>> def say_beep():
...     print("beep!.........beep!")
...
>>> say_beep()

This will print out some beeps.

Now imagine instead I write the same lines of code as above into a file called say_beep.py. Then I open my interpreter and run

>>> import say_beep

What happens? The exact same thing: Python prints out some beeps.

If I create a module that contains the same source code as the body of a function, then importing the module will produce the same result as calling the function.

Well, what if I need to return something from my function body? Simple:

# make_beeper.py

beeper = lambda: print("say beep")

# main.py

from make_beeper import beeper
beeper()

Anything that gets defined in the module is available in the module’s namespace after it’s imported. So from a import b
is structurally the same as b = f(), if I structure my module correctly.

Okay, what about passing arguments? Well, that gets a bit harder. The trick is that Python source code is just a long string, so we
can modify the source of a module before we import it:

# with_args.py

a = None
b = None
result = a + b

# main.py

src = ""
with open("with_args.py") as f:
    for line in f:
        src += line

a = "10"
b = "21"

src = src.replace("a = None", f"a = {a}")
src = src.replace("b = None", f"b = {b}")

with open("with_args.py", "w") as f:
    f.write(src)

from with_args import result

print(result)  # it's 31!

Now this certainly isn’t pretty. But where we’re going, nothing is pretty. Buckle up!

How to mergesort

Okay…how can we apply these ideas to implement mergesort?

First, let’s quickly review what mergesort is: it’s a recursive sorting algorithm with n log n worst-case computational complexity
(meaning it’s pretty darn good, especially compared to bad sorting algorithms like bubble sort that have n^2 complexity.)

It works by taking a list, splitting it in half, and then splitting the halves in half until we’re left with individual elements.

Then we merge adjacent elements by interleaving them in sorted order. Take a look at this diagram:

(diagram of mergesort: a list split in half repeatedly, then merged back together in sorted order)

Or read the Wikipedia article for more details.

Some rules

  1. No built-in sorting functionality. Python’s built-in sort (Timsort) is itself a derivative of mergesort,
    so just putting result = sorted(lst) into a module and importing it isn’t very sporting.
  2. No user-defined functions at all.
  3. All the source code has to live inside one module file, which we will fittingly call madness.py

The code

Well, here’s the code: (Walk-through below, if you don’t feel like reading 100 lines of bizarre Python)

"""
# This is the algorithm we'll use:

import sys
import re
import inspect
import os
import importlib
import time

input_list = []
sublist = input_list
is_leaf = len(sublist) < 2
if is_leaf:
    sorted_sublist = sublist
else:
    split_point = len(sublist) // 2
    left_portion = sublist[:split_point]
    right_portion = sublist[split_point:]

    # get a reference to the code we're currently running
    current_module = sys.modules[__name__]

    # get its source code using stdlib's `inspect` library
    module_source = inspect.getsource(current_module)

    # "pass an argument" by modifying the module's source
    new_list_slug = 'input_list = ' + str(left_portion)
    adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug,
                             module_source, flags=re.MULTILINE)

    # make a new module from the modified source
    left_path = "left.py"
    with open(left_path, "w") as f:
        f.write(adjusted_source)

    # invalidate caches; force Python to do the full import again
    importlib.invalidate_caches()
    if "left" in sys.modules:
        del sys.modules['left']

    # "call" the function to "return" a sorted sublist
    from left import sorted_sublist as left_sorted

    # clean up by deleting the new module
    if os.path.isfile(left_path):
        os.remove(left_path)

    new_list_slug = 'input_list = ' + str(right_portion)
    adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug,
                             module_source, flags=re.MULTILINE)
    right_path = "right.py"
    with open(right_path, "w") as f:
        f.write(adjusted_source)

    importlib.invalidate_caches()

    if "right" in sys.modules:
       del sys.modules['right']
    from right import sorted_sublist as right_sorted

    if os.path.isfile(right_path):
        os.remove(right_path)

    # merge
    merged_list = []
    while (left_sorted or right_sorted):
        if not left_sorted:
            bigger = right_sorted.pop()
        elif not right_sorted:
            bigger = left_sorted.pop()
        elif left_sorted[-1] >= right_sorted[-1]:
            bigger = left_sorted.pop()
        else:
            bigger = right_sorted.pop()
        merged_list.append(bigger)
    # there's probably a better way to do this that doesn't
    # require .reverse(), but appending to the head of a
    # list is expensive in Python
    merged_list.reverse()
    sorted_sublist = merged_list

# not entirely sure why we need this line, but things
# don't work without it!
sys.modules[__name__].sorted_sublist = sorted_sublist
"""

import random
import os
import time

random.seed(1001)

list_to_sort = [int(1000*random.random()) for i in range(100)]
print("unsorted: {}".format(list_to_sort))

mergesort = __doc__
adjusted_source = mergesort.replace('input_list = []',
                                    'input_list = {}'.format(list_to_sort))

with open("merge_sort.py", "w") as f:
    f.write(adjusted_source)

from merge_sort import sorted_sublist as sorted_list

os.remove("merge_sort.py")
finished_time = time.time()

print("original sorted: {}".format(sorted(list_to_sort)))
print("import sorted: {}".format(sorted_list))

assert sorted_list == sorted(list_to_sort)

That’s all we need.

Breaking it down

Madness itself

The body of madness.py is compact. All it does is generate a random list of numbers, grab our template implementation of merge sort from its own docstring (how’s that for self-documenting code?), jam in our random list, and kick off the algorithm by running

from merge_sort import sorted_sublist as sorted_list

The mergesort implementation

This is the fun part.

First, here is a “normal” implementation of merge_sort as a function:

def merge_sort(input_list):
    if len(input_list) < 2:  # it's a leaf
        return input_list
    else:
        # split
        split_point = len(input_list) // 2
        left_portion, right_portion = input_list[:split_point], input_list[split_point:]

        # recursion
        left_sorted = merge_sort(left_portion)
        right_sorted = merge_sort(right_portion)

        # merge
        merged_list = []
        while left_sorted or right_sorted:
            if not left_sorted:
                bigger = right_sorted.pop()
            elif not right_sorted:
                bigger = left_sorted.pop()
            elif left_sorted[-1] >= right_sorted[-1]:
                bigger = left_sorted.pop()
            else:
                bigger = right_sorted.pop()
            merged_list.append(bigger)
        merged_list.reverse()
        return merged_list

It has three phases:

  1. Split the list in half
  2. Call merge_sort recursively until the list is split down to individual elements
  3. Merge the sublists we’re working on at this stage into a single sorted sublist by interleaving the elements in sorted order

But since our rule says that we can’t use functions, we need to replace this recursive function with import.

That means replacing this:

left_sorted = merge_sort(left_portion)

With this:

# get a reference to the code we're currently running
current_module = sys.modules[__name__]
# get its source code using stdlib's `inspect` library
module_source = inspect.getsource(current_module)

# "pass an argument" by modifying the module's source
new_list_slug = 'input_list = ' + str(left_portion)
adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug, module_source, flags=re.MULTILINE)

# make a new module from the modified source
left_path = "left.py"
with open(left_path, "w") as f:
    f.write(adjusted_source)

# invalidate caches
importlib.invalidate_caches()
if "left" in sys.modules:
    del sys.modules['left']

# "call" the function to "return" a sorted sublist
from left import sorted_sublist as left_sorted

# clean up by deleting the new module
if os.path.isfile(left_path):
    os.remove(left_path)

# not entirely sure why we need this line, but things
# don't work without it! Might be to keep the sorted sublist
# alive once this import goes out of scope?
sys.modules[__name__].sorted_sublist = sorted_sublist

And that’s really it.

We just use the tools we learned about to simulate calling functions with arguments and returning values. And we add a few lines to trick Python into not caching modules and instead doing the full import process when we import a module with the same name as one that’s already been imported. (If our merge sort execution tree has multiple levels, we’re going to have a lot of different left.py’s).
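
In isolation, that cache-busting trick looks like this (assuming left.py, which defines sorted_sublist, has just been rewritten on disk):

import importlib
import sys

importlib.invalidate_caches()    # tell the finders their directory listings are stale
sys.modules.pop("left", None)    # evict any previously imported `left`

from left import sorted_sublist  # forces the full find/load/execute process again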

And that’s how you abuse the Python import system to implement mergesort.

Many paths to the top of the mountain, but the view is a singleton.

It’s pretty mindblowing (to me at least) that this approach works at all. But on the other hand, why shouldn’t it?

There’s a pretty neat idea in computer science called the Church-Turing thesis. It states that any effectively computable function can be computed by a universal Turing machine. The thesis is usually trotted out to explain why there’s nothing you can compute with a universal Turing machine that you can’t compute using lambda calculus, and therefore there’s no program you can write in C that you can’t, in principle, write in Lisp.

But here’s a corollary: since you can, if you really want to, implement a Turing tape by writing files to the file system one bit at a time and importing the results, you can use the Python import system to simulate a Turing machine. That implies that,
in principle, any computation that can be performed by a digital computer can be performed (assuming infinite space, time, and patience) using the Python import system.

The only real question is how annoying a computation will be to implement, and in this case Python’s extreme runtime dynamism makes this particular implementation surprisingly easy.

The Python community spends a lot of time advocating for good methodology and “idiomatic” coding styles. They have a good reason: if you’re writing software that’s intended to be used, some methods are almost always better than their alternatives.

But if you’re writing programs to learn, sometimes it’s helpful to remember that there are many different models of computation under the sun. And especially in the era when “deep learning” (i.e. graph-structured computations that simulate differentiable functions) is really starting to shine,
it’s extra important to remember that sometimes taking a completely different (and even wildly “inefficient”) approach to
a computational problem can lead to startling success.

It’s also nice to remember that Python itself started out as (and in a sense still is!) a ridiculously inefficient
and roundabout way to execute C code.

Abstractions really matter. In the words of Alfred North Whitehead,

Civilization advances by extending the number of important operations which we can perform without thinking about them

My “import sort” is certainly not a useful abstraction. But I hope that learning about it will lead you to some good ones!

Nota Bene

In case it’s not obvious: you should never use these techniques in any code that you intend to actually use for anything.

But the general idea of modifying Python source code at import time has at least one useful
(if not necessarily advisable) use case: macros.

Python has a library called macropy that implements Lisp-style syntactic macros in Python
by modifying your source code at import time.

I’ve never actually used macropy, but it’s pretty cool to know that Python makes the simple things easy and the insane things possible.

Finally, as bad as this mergesort implementation is, it allowed me to run a fun little experiment. We know that mergesort has good computational complexity compared to naive sorting algorithms. But how long does a list have to be before a standard implementation of bubble sort runs slower than my awful import-based implementation of mergesort? It turns out that a list only has to be about 50k items long before “import sort” is faster than bubble sort.
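
For reference, a standard bubble sort is just a pair of nested loops, which is where the n^2 comes from (a minimal sketch):

def bubble_sort(lst):
    lst = list(lst)  # work on a copy
    for i in range(len(lst)):
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return lst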

Computational complexity is a powerful thing!

All the code for this post is on Github

A* Interview #19: Glyph Lefkowitz, Creator of The Twisted Networking Framework

According to his LinkedIn bio, today’s guest’s name is “Glyph Not Looking Don’t Contact Lefkowitz” and he created a powerful, event-driven networking library called Twisted. In this episode, he tells us how he got started, what his most painful lessons have been, and how trying to measure the merit of a programmer is a fundamentally flawed exercise.

To learn more about the A* interviews or to find more episodes, look here.

For the impatient or textually inclined, here’s a summary of our conversation.

What are you working on right now? Why are you working on it?

  • Working on “Mimic” for Rackspace.
  • Mimic allows testing of cloud infrastructure provisioning.
  • Saves you a lot of time and money confirming that your deployment will work as expected.
  • Also working on Twisted, where he has handed stewardship to an “egalitarian process” he created; he still contributes when he can.
  • The Twisted project was a really early innovator in being a heavily-process-driven, 100% code review, 100% test coverage, 100% docstring project (started back in 2004).

What are your main non-technical interests? Who’s your favorite musical artist?

How old were you when you started programming? What motivated you to start? How have your motivations changed since then?

  • Very young. Father was a programmer who taught him a bit of APL. Wanted to be a writer and build a text-based RPG like Zork so he taught himself to program…slowly. Spent a couple years building elaborate Hypercard stacks. After a few years, learned about if-statements (and a year later about variables.)
  • Eventually jumped into C++. Went to computer camp, learned Java and LISP, and filled his high school notebook margins with programs.
  • Really loved the interactivity of games; realized that he has attention deficit disorder and computers were the only thing that could hold his attention.

What’s the most painful technical or career lesson you’ve had to learn? How did you learn it?

  • Building a Gmail-before-Gmail product called “Quotient” at a company called Divmod. All the changes they tried to make increased complexity and decreased maintainability, making things move slower and slower. So the most painful lesson was that if you don’t have a process in place that considers the “people factors,” you’ll end up with an unmaintainable mess.
  • You have to write stuff down, in a place that people will see it. And you need a process for testing and maintaining quality standards. Need to build that into your project early on.
  • Read Extreme Programming Explained: Embrace Change.

What is your favorite programming language? What is your favorite tool (e.g. emacs, Postgres)? Why?

  • Favorite in terms of design – Smalltalk. Few moving pieces, hangs together conceptually. But it has no tools; it’s strange technology from the future that you can’t really do much practical with.
  • Python compromises with reality. But has commandline interactive interpreter. Has a simple C implementation people can hack on. Garbage collection semantics that sort of make sense. The community values readability and there is a great community.
  • Favorite tool is pip. Lets you do the most important thing, which is split up responsibility into different modules. Also Twisted’s test runner, “Trial”. And tmux is great for managing terminal sessions.

How should we measure how “good” a technologist is? What are the key “virtues” of a technologist or of a solution?

  • 10x is a toxic myth.
  • We should stop trying to measure goodness because the idea of “merit” leads to bad things in the community.
  • The only good metrics measure teams and they look at outcomes (how many features did you ship) rather than the product.
  • When evaluating, worry about how people will fit into a team rather than how “good” they are.
  • If there’s one defining characteristic, it’s a desire for consistent self-improvement.

How structured is your problem solving approach (e.g. do you use TDD? Do you always pseudocode before coding)? How much of your problem solving comes from intuition/flashes of insight vs. conscious thought?

  • Used to be all about flashes of insight. Has ADHD “hyper focus”; can stay focused super hard on one task, until he loses it and then can’t focus. He’s a 10x programmer in a short window of time, then nothing for a long time.
  • Then over-corrected to 100% TDD all the time.
  • Then heard a talk by John Cleese of Monty Python about scriptwriting and the idea of “closed vs open mode” – open is creative, flowing, no fear of failure. Closed is tight, structured, about getting things done. The split should be about 50/50.
  • Don’t bash your head against problems. Try switching methods.

Any sites or projects you want to plug? (Feel free to plug your own business.)

If you could give one piece of advice to yourself at age 15, what would it be?

  • Learn abstract programming concepts earlier.

And that’s all for today

Follow me on Twitter and subscribe to my YouTube channel for future episodes!

“Unresolved identifier” in Swift when importing Frameworks using Cocoapods

The latest Cocoapods (0.36) has a nifty feature: it allows you to import pods written in Swift (such as the networking library Alamofire).

It does this by asking you to insert a little line into your Podfile:

use_frameworks!

If you, like me, are new to iOS development it might not be obvious to you what that line does. It converts all of your pods from being static libraries into being frameworks.

I haven’t gotten around to reading the absurdly long and dense Framework Programming Guide, but as best I understand, that means that whereas once your pods were snippets of code that were compiled and rolled directly into your project binary, they are now instead separate folders of code that your app is merely aware of, sitting over there in their own special place, somewhere.

One consequence of this transformation from libraries into frameworks is that all of your references to Objective-C Cocoapods classes inside of your Swift code will mysteriously stop working. Of course they won’t tell you why they’ve stopped working (that would be silly!). Your Swift files will just suddenly fail to compile, complaining that all your references to these Cocoapod classes are now “unresolved identifiers”.

The solution to this problem isn’t obvious (at least to me), but it is easy. So let me save you the couple of hours it took me to figure it out.

Solution:

Normally when you’re importing Objective-C code into Swift, you do so by including the header of the file containing that code in the “Bridging Header” for your project. And that is indeed how you include code from a static library (which your pods used to be.)

But it is not how you import Objective-C code from a Framework. To do that you simply type…

import Framework

…inside your Swift file that’s using the Objective-C class (where “Framework” is the name of the actual Framework containing the class.)

That’s it. Good luck!

If you earn $23,500/year, the Apple Watch pays for itself.

Apple Pay? More like Apple Pays For Itself! Amiright?

Okay you can stop reading now.

Or you can continue and learn how I’ve actually calculated that number using science (i.e. estimation + numerology.)

The numbers

SERIOUSLY THOUGH, I watched Apple’s livestream on Monday where they revealed slightly more detail about “the watch”, a.k.a. the Apple Watch, a.k.a. a beautifully manufactured, very elegant looking and shiny gizmo that costs a minimum of $349 and in my expert opinion doesn’t do very much.

The Pretty

Okay sure it does stuff. It lets you send scribbles of trollfaces to your friends and then use Siri to text them an audio recording of your voice to let them know you’ve scribbled them a trollface. And then you can scan your watch to pass through security at the airport, saving you the several seconds that you’ll then need to spend removing your watch to go through the body scanner.

It struck me that almost everything one can currently do with the Apple Watch, one can already do with the iPhone that you must carry at all times if you want your watch to work. Now looking at what a device can currently do or currently costs is a bad way to predict the future. But it’s a pretty good way to predict the present. And at present, the Watch mostly just saves you from having to pull out your phone.

So I was left wondering “exactly how valuable is that?” Well…turns out it’s more valuable than I expected.

People actually spend rather a lot of time pulling out their phones. By some estimates, we do it 150 times per day on average, which tallies up to ~76 hours per year spent simply removing our phones from our pockets and putting them back in. The Apple Watch offers to give us some of that time back. Not all of it (since many notifications still need the phone to fully address), but some of it.

And how much is that time worth? That depends on how much your individual time is worth but by my estimation, it’s about ~$500/year if you earn a typical Slicing Valley salary. So if we assume that watch has a 3 year replacement cycle (i.e. more like a Macbook than an iPhone), the watch will pay for itself about 4 times over in time it saves the people who might be reading this post. And the minimum you can make before the economic argument disappears is…$23,500. Below that, the hourly value of your time isn’t enough to buy an (entry-level) Apple Watch even if you could cash in every minute you saved.

And there, of course, is the rub(…ber watch band!) This calculation is based on a lot of (plausible) assumptions about how intensively people use their phones and on one big (much less plausible) assumption that the right way to value “time saved” is in comparison to an hourly wage. With something like the Apple Watch, the time saved will come in many, many scattered five-second increments. Most people, whether salaried or hourly, don’t actually have the ability to convert free time into money by simply working more. And those who do will mostly not be able to convert 720 five-second intervals into a full hour of productive work.

Or maybe they will, as soon as someone invents a Mechanical Turk app for the Watch that lets us monetize our milliseconds. Guess we’ll have to watch and see.

Don’t believe my numbers? Have a look at my calculations!

Glance at me over on twitter @rogueleaderr for more mumblings.

$1 Billion and Change

While getting $0.71 of change at a coffee shop today, I started to wonder just how much time is actually consumed by the act of “getting change”.

So I calculated it, and the answer is that about $1 billion worth of time is consumed in the USA every year waiting for change.

That’s actually a bit smaller than I expected. And I calculate the total number of cash retail purchases at ~48 billion/year, so assuming $0.50 of change per purchase, it would cost consumers $24 billion per year to say “keep the change.”

I guess I’ll keep waiting for payment practices to change!

June 13, 2014 at 8:57:15 PM

  1. retail sales = ($400 × 10^9) × 12 = $4.80×10^12
  2. avg purchase = $30
  3. percent cash = 30%
  4. total cash purchases = retail sales / avg purchase × percent cash = 4.8×10^10
  5. seconds per change = 4 seconds
  6. time spent = total cash purchases × seconds per change = 1.92×10^11 seconds
  7. total_hours = time spent in hours = 53,333,333 hours
  8. avg_wage = $20/hour
  9. cost = total_hours × avg_wage = $1.07×10^9
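
Here’s the same back-of-the-envelope estimate as a few lines of Python:

# reproduce the estimate above
retail_sales = 400e9 * 12                  # ~$4.8 trillion/year in US retail sales
avg_purchase = 30                          # dollars per purchase
percent_cash = 0.30                        # share of purchases paid in cash
cash_purchases = retail_sales / avg_purchase * percent_cash  # ~48 billion/year
seconds_per_change = 4
total_hours = cash_purchases * seconds_per_change / 3600     # ~53.3 million hours
avg_wage = 20                              # dollars/hour
print(total_hours * avg_wage)              # ~1.07e9, i.e. about $1 billion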

A* Interview #18(ish): Strongly-Typed Emacs Acolyte, Bodil Stokke

Follow Bodil on Twitter.

Check out some of her conference talks.

Use an Emacs starter kit to smooth the learning curve.

For an introduction to type theory, click here.

Or if you want to crack a turtle with types, read Types and Programming Languages.

Learn about the A* Series and see more interviews here.

(Embedded YouTube video: Zm3xjks7)

A* Interview #16: StackOverflow’s all-time Python champ, Alex Martelli

Read some of Alex’s favorite monosyllabic books, Cod and Salt.

Listen to the Art of the Fugue on LinerNotes.

Check out Alex’s personal website.

And at his suggestion, sign up for Fitocracy and get yourself in shape!

Learn about the A* Series and see more interviews here.

(Embedded YouTube video: R6XSEfNqgZ4)

A* Interview #15: Jeff Lindsay, Creator of Flynn, Co-Founder of Hacker Dojo, Executive Producer of “Indie Game: The Movie”

Check out Flynn, an open source Heroku. And Docker, a container-based deployment system. And use Jeff’s new employer DigitalOcean for some cheap, fast cloud servers.

For more Jeff, follow him on Twitter or check out his personal site.

Read about (one of) Jeff’s favorite musicians Animals as Leaders here.

Watch Indie Game: The Movie here or on Netflix. Buy Kevin Kelly’s What Technology Wants here.

Learn about the A* Series and see more interviews here.

(Embedded YouTube video: Gpqs-tU29kU)

A* Interview #14: Rob Ashton, Hacker / Soon-To-Be-Erstwhile Peripatetic

Follow Rob on Twitter or on Github or on his personal site.

Read about (one of) Rob’s favorite musicians Mark Knopfler here

Learn about the A* Series and see more interviews here.

(Embedded YouTube video: XKFoNNavh2I)