Thursday, May 30, 2013

Belly Button Biodiversity: The End Game

In the previous installment of this saga, I admitted that my predictions had completely failed, and I outlined the debugging process I began.  Then the semester happened, so I didn't get to work on it again until last week.

It turns out that there were several problems, but the algorithm is now calibrating and validating!  Before I proceed, I should explain how I am using these words:

  • Calibrate: Generate fake data from the same model the analysis is based on.  Run the analysis on fake data and generate predictive distributions.  Check whether the predictive distributions are correct.
  • Validate: Using real data, generate a rarefied sample.  Run the analysis on the sample and generate predictive distributions.  Check whether the predictive distributions are correct.

If the analysis calibrates, but fails to validate, that suggests that there is some difference between the model and reality that is causing a problem.  And that turned out to be the case.

Here are the problems I discovered, and what I had to do to fix them:

The prior distribution of prevalences

For the prior I used a Dirichlet distribution with all parameters set to 1.  I neglected to consider the "concentration parameter," which represents the prior belief about how uniform or concentrated the prevalences are.  As the concentration parameter approaches 0, prevalences tend to be close to 1 or 0; that is, there tends to be one dominant species and many species with small prevalences.  As the concentration parameter gets larger, all species tend to have the same prevalence.  It turns out that a concentration parameter near 0.1 yields a distribution of prevalences that resembles real data.
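
Here's a quick way to see the effect (a sketch that uses numpy directly rather than the thinkbayes wrapper; the number of species and the concentration values are just illustrative):

    import numpy as np

    # Draw prevalences from a symmetric Dirichlet and look at the largest few.
    # With conc=0.1 one species tends to dominate; with conc=10 the prevalences
    # are all close to 1/20.
    rng = np.random.default_rng(0)
    for conc in [0.1, 1.0, 10.0]:
        prevalences = np.sort(rng.dirichlet([conc] * 20))[::-1]
        print(conc, np.round(prevalences[:5], 3))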

The prior distribution of n

With a smaller concentration parameter, there are more species with small prevalences, so I had to increase the range of n (the number of species).  The prior distribution for n is uniform up to an upper bound, where I choose the upper bound to be big enough to avoid cutting off non-negligible probability.  I had to increase this upper bound to 1000, which slows the analysis down a little, but it still takes only a few seconds per subject (on my not-very-fast computer).
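
In code, this prior is nothing fancy (a sketch with a plain dictionary; the actual analysis uses the thinkbayes machinery):

    # Uniform prior over the number of species, up to an upper bound chosen
    # large enough that it doesn't cut off non-negligible probability.
    upper_bound = 1000
    prior_n = {n: 1.0 / upper_bound for n in range(1, upper_bound + 1)}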

Up to this point I hadn't discovered any real errors; it was just a matter of choosing appropriate prior distributions, which is ordinary work for Bayesian analysis.

But it turns out there were two legitimate errors.

Bias due to the definition of "unseen"

I was consistently underestimating the prevalence of unseen species because of a bias that underlies the definition of "unseen."  To see the problem, consider a simple scenario where there are two species, A and B, with equal prevalence.  If I only collect one sample, I get A or B with equal probability.

Suppose I am trying to estimate the prevalence of A.  If my sample is A, the posterior marginal distribution for the prevalence of A is Beta(2, 1), which has mean 2/3.  If the sample is B, the posterior is Beta(1, 2), which has mean 1/3.  So the expected posterior mean is the average of 2/3 and 1/3, which is 1/2.  That is the actual prevalence of A, so this analysis is unbiased.

But now suppose I am trying to estimate the prevalence of the unseen species.  If I draw A, the unseen species is B and the posterior mean is 1/3.  If I draw B, the unseen species is A and the posterior mean is 1/3.  So either way I believe that the prevalence of the unseen species is 1/3, but it is actually 1/2.  Since I did not specify in advance which species is unseen, the result is biased.
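
Here's a tiny simulation that reproduces the example (a sketch; the posterior means are hard-coded from the Beta distributions above):

    import numpy as np

    # True prevalences are (0.5, 0.5) and we take a single read.  The posterior
    # mean for a species named in advance averages out to 1/2, but the posterior
    # mean for whichever species happens to be unseen is always 1/3.
    rng = np.random.default_rng(1)
    est_A, est_unseen = [], []
    for _ in range(10000):
        read = rng.choice(['A', 'B'])
        if read == 'A':
            est_A.append(2/3)        # posterior for A is Beta(2, 1)
            est_unseen.append(1/3)   # unseen species is B, Beta(1, 2)
        else:
            est_A.append(1/3)        # posterior for A is Beta(1, 2)
            est_unseen.append(1/3)   # unseen species is A, Beta(1, 2)
    print(np.mean(est_A))        # close to 0.5: unbiased
    print(np.mean(est_unseen))   # exactly 1/3: biased low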

This seems obvious in retrospect.  So that's embarrassing (the price I pay for this experiment in Open Science), but it is easy to fix:

a) The posterior distribution I generate has the right relative prevalences for the seen species (based on the data) and the right relative prevalences for the unseen species (all the same), but the total prevalence for the unseen species (which I call q) is too low.

b) Fortunately, there is only one part of the analysis where this bias is a problem: when I draw a sample from the posterior distribution.  To fix it, I can draw a value of q from the correct posterior distribution (just by running a forward simulation) and then unbias the posterior distribution with the selected value of q.

Here's the code that generates q:


    def RandomQ(self, n):
        
        # generate random prevalences
        dirichlet = thinkbayes.Dirichlet(n, conc=self.conc)
        prevalences = dirichlet.Random()

        # generate a simulated sample
        pmf = thinkbayes.MakePmfFromItems(enumerate(prevalences))
        cdf = pmf.MakeCdf()
        sample = cdf.Sample(self.num_reads)
        seen = set(sample)

        # add up the prevalence of unseen species
        q = 0
        for species, prev in enumerate(prevalences):
            if species not in seen:
                q += prev

        return q


n is the hypothetical number of species.  conc is the concentration parameter.  RandomQ creates a Dirichlet distribution, draws a set of prevalences from it, then draws a simulated sample with the appropriate number of reads, and adds up the total prevalence of the species that don't appear in the sample.

And here's the code that unbiases the posterior:


    def Unbias(self, n, m, q_desired):
        
        params = self.params[:n].copy()
        
        x = sum(params[:m])
        y = sum(params[m:])
        a = x + y

        g = q_desired * a / y
        f = (a - g * y) / x
        params[:m] *= f
        params[m:] *= g

        return params


n is the hypothetical number of species; m is the number seen in the actual data.

x is the total prevalence of the seen species; y is the total prevalence of the unseen species.  f and g are the factors we have to multiply by so that the corrected prevalence of unseen species is q_desired.
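
The method returns the adjusted parameters.  As a sanity check, here's the same arithmetic on made-up numbers, showing that the rescaling preserves the total of the parameters and hits q_desired (the parameter values are arbitrary):

    import numpy as np

    params = np.array([5.0, 3.0, 1.0, 1.0])   # made-up Dirichlet parameters
    m, q_desired = 2, 0.4                      # first m species were seen

    x, y = params[:m].sum(), params[m:].sum()
    a = x + y
    g = q_desired * a / y
    f = (a - g * y) / x
    params[:m] *= f
    params[m:] *= g

    print(params.sum())          # still 10.0, the original total a
    print(params[m:].sum() / a)  # 0.4, equal to q_desired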

After fixing this error, I find that the analysis calibrates nicely.


From each predictive distribution I generate credible intervals with ideal percentages 10, 20, ... 90, and then count how often the actual value falls in each interval.
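
Here's roughly what that check looks like in code (a sketch: the function name and the use of percentile-based intervals over posterior samples are illustrative, not the actual implementation):

    import numpy as np

    def check_calibration(actual_values, sample_lists, percentages=range(10, 100, 10)):
        """For each run we have the actual value and a list of samples from the
        predictive distribution.  Count how often the actual value falls in the
        central credible interval for each ideal percentage."""
        coverage = {}
        for p in percentages:
            low, high = (100 - p) / 2, (100 + p) / 2
            hits = 0
            for actual, samples in zip(actual_values, sample_lists):
                lo, hi = np.percentile(samples, [low, high])
                hits += lo <= actual <= hi
            coverage[p] = 100.0 * hits / len(actual_values)
        return coverage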

For example, the blue line is the calibration curve for n, the number of species.  After 100 runs, the 10% credible interval contains the actual value 9.5% of the time.  The 50% credible interval contains the actual value 51.5% of the time.  And the 90% credible interval contains the actual value 88% of the time.  These results show that the distribution I am computing for n really is the posterior distribution for n.

The results are similar for q, the prevalence of unseen species, and l, the predicted number of new species seen after additional sampling.

To check whether the analysis validates, I used the dataset collected by the Belly Button Biodiversity project.  For each subject with more than 400 reads, I chose a random subset of 100 reads, ran the analysis, and checked the predictive distributions for q and l.  I can't check the predictive distribution of n, because I don't know the actual value.

Sadly, the analysis does not validate with the collected data.  The reason is:

The data do not fit the model

The data deviate substantially from the model that underlies the analysis.  To see this, I tried this experiment:

a) Use the data to estimate the parameters of the model.
b) Generate fake samples from the model.
c) Compare the fake samples to the real data.

Here's a typical result:


The blue line is the CDF of prevalences, in order by rank.  The top-ranked species makes up about 27% of the sample.  The top 10 species make up about 75%, and the top 100 species make up about 90%.

The green lines show CDFs from 10 fake samples.  The model is a good match for the data for the first 10-20 species, but then it deviates substantially.  The prevalence of rare species is higher in the data than in the model.
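
Here's a sketch of that comparison (using numpy and matplotlib directly rather than the actual code, and taking n and the concentration parameter as given):

    import numpy as np
    import matplotlib.pyplot as plt

    def rank_cdf(counts):
        """Cumulative share of reads, ordered from most to least common."""
        fracs = np.sort(np.asarray(counts, dtype=float))[::-1] / np.sum(counts)
        return np.cumsum(fracs)

    def compare_to_model(real_counts, n, conc=0.1, num_fakes=10, seed=2):
        rng = np.random.default_rng(seed)
        num_reads = int(np.sum(real_counts))
        plt.plot(rank_cdf(real_counts), color='blue', label='data')
        for _ in range(num_fakes):
            prevalences = rng.dirichlet([conc] * n)
            fake = rng.multinomial(num_reads, prevalences)
            plt.plot(rank_cdf(fake[fake > 0]), color='green', alpha=0.4)
        plt.xlabel('species rank')
        plt.ylabel('cumulative prevalence')
        plt.legend()
        plt.show()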

The problem is that the real data seem to come from a mixture of two distributions, one for dominant species and one for rare species.  Among the dominant species the concentration parameter is near 0.1.  For the rare species, it is much higher; that is, the rare species all have about the same prevalence.

There are two possible explanations: this effect might be real or it might be an artifact of errors in identifying reads.  If it's real, I would have to extend my model to account for it.  If it is due to errors, it might be possible to clean the data.

I have heard from biologists that when a large sample yields only a single read of a particular species, that read is likely to be in error; that is, the identified species might not actually be present.

So I explored a simple error model with the following features:

1) If a species appears only once after r reads, the probability that the read is bogus is p = (1 - alpha/r), where alpha is a parameter.

2) If a species appears k times after r reads, the probability that all k reads are bogus is p^k.

To clean the data, I compute the probability that each observed species is bogus, and then delete it with the computed probability.
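
Here's a sketch of the cleaning step (the function name and data layout are assumptions; counts maps each observed species to its number of reads):

    import numpy as np

    def clean_sample(counts, alpha=50, seed=3):
        """Delete each observed species with the probability that all of its
        reads are bogus: (1 - alpha/r) ** k, where r is the total number of
        reads and k is the species' count."""
        rng = np.random.default_rng(seed)
        r = sum(counts.values())
        p = max(0.0, 1.0 - alpha / r)
        cleaned = {}
        for species, k in counts.items():
            if rng.random() >= p ** k:     # keep with probability 1 - p**k
                cleaned[species] = k
        return cleaned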


With cleaned data (alpha=50), the model fits very nicely.  And since the model fits, and the analysis calibrates, we expect the analysis to validate.  And it does.


For n there is no validation curve because we don't know the actual values.

For q the validation curve is a little off because we only have a lower bound for the prevalence of unseen species, so the actual values used for validation are too low.

But for l the validation curve is quite good, and that's what we are actually trying to predict, after all.

At this point the analysis depends on two free parameters, the concentration parameter and the cleaning parameter, alpha, which controls how much of the data gets discarded as erroneous.

So the next step is to check whether these parameters cross-validate.  That is, if we tune the parameters based on a training set, how well do those values do on a test set?

Another next step is to improve the error model.  I chose something very simple, and it does a nice job of getting the data to conform to the analysis model, but it is not well motivated.  If I can get more information about where the errors are coming from, I could take a Bayesian approach (what else?) and compute the probability that each datum is legit or bogus.

Or if the data are legit and the prevalences are drawn from a mixture of Dirichlet distributions with different concentrations, I will have to extend the analysis accordingly.

Summary

There were four good reasons my predictions failed:

1) The prior distribution of prevalences had the wrong concentration parameter.

2) The prior distribution of n was too narrow.

3) I neglected an implicit bias due to the definition of "unseen species."

4) The data deviate from the model the analysis is based on.  If we "clean" the data, they fit the model and the analysis validates, but the cleaning process is a bit of a hack.

I was able to solve these problems, but I had to introduce two free parameters, so my algorithm is not as versatile as I hoped.  However, it seems like it should be possible to choose these parameters automatically, which would be an improvement.

And now I have to stop, incorporate these corrections into Think Bayes, and then finish the manuscript!

Tuesday, May 28, 2013

Python Epistemology at PyCon Taiwan

This weekend I gave a talk entitled "Python Epistemology" for PyCon Taiwan 2013.  I would have loved to be in Taipei for the talk, but sadly I was in an empty room in front of a teleconference screen.

Python Epistemology: PyCon Taiwan 2013

As I explained, the title is more grandiose than accurate.  In general, epistemology is the theory of knowledge: how we know what we think we know, etc.  This talk is mostly about what Python has taught me about programming, and how programming in Python has changed the way I learn and the way I think.

About programming, I wrote:


Programming is not about translating a well-known solution into code, it is about discovering solutions by writing code, and then creating the language to express them.


I gave an example using the Counter data structure to check for anagrams:


from collections import Counter

def is_anagram(word1, word2):
   return Counter(word1) == Counter(word2)



This is a nice solution because it is concise and demonstrably correct, but I suggested that one limitation is that it does not extend easily to handle "The Scrabble Problem": given a set of tiles, check to see whether you can spell a given word.

We can define a new class, called Multiset, that extends Counter and provides is_subset:


class Multiset(Counter):
   """A set with repeated elements."""

   def is_subset(self, other):

       for char, count in self.items():
           if other[char] < count:
               return False
       return True



Now we can write can_spell concisely:

def can_spell(word, tiles):
   return Multiset(word).is_subset(Multiset(tiles))



I summarized by quoting Paul Graham:


"... you don't just write your program down toward the language, you also build the language up toward your program.

"In the end your program will look as if the language had been designed for it. And ... you end up with code which is clear, small, and efficient."

Paul Graham, "Programming Bottom Up," 1993.

In the second half of the talk, I suggested that Python and other modern programming languages provide a new approach to solving problems.  Traditionally, we tend to think and explore using natural language, do analysis and solve problems using mathematical notation, and then translate solutions from math notation into programming languages.

In some sense, we are always doing two translations, from natural language to math and from math to a computer program.  With the previous generation of programming languages, this process was probably necessary (for reasons I explained), but I claim that it is less necessary now, and that it might be possible and advantageous to skip the intermediate mathematics and do analysis and problem-solving directly in programming languages.

After the talk, I got two interesting questions by email.  Yung-Cheng Lin suggested that although programming languages are more precise than natural language, they might not be sufficiently precise to replace mathematical notation, and he asked if I think that using programming to teach mathematical concepts might cause misunderstandings for students.

I replied:

I understand what you mean when you say that programming languages are less rigorous than mathematical notation.  I think many people have the same impression, but I wonder if it is a real difference or a bias we have.

I would argue that programming languages and math notation are similar in the sense that they are both formal languages designed by people to express particular ideas concisely and precisely.

There are some kinds of work that are easier to do in math notation, like algebraic manipulation, but other kinds of work that are easier in programming languages, like specifying computations, especially computations with state.

You asked if there is a danger that students might misunderstand mathematical ideas if they come to them through programming, rather than mathematical instruction.  I'm sure it's possible, but I don't think the danger is specific to the programming approach.

And on the other side, I think a computational approach to mathematical topics creates opportunities for deeper understanding by running experiments, and (as I said in the talk) by getting your ideas out of your head and into a program so that, by debugging the program, you are also debugging your own understanding.

In response to some of my comments about pseudocode, A. T. Cheng wrote:

When we do algorithms or pseudocodes in the traditional way, we used to figure out the time complexity at the same time. But the Python examples you showed us, it seems not so easy to learn the time complexity in the first place. So, does it mean that when we think Python, we don't really care about the time complexity that much?

I replied:

You are right that it can be more difficult to analyze a Python program; you have to know a lot about how the Python data structures are implemented.  And there are some gotchas; for example, it takes constant time to add elements to the end of a list, but linear time to add elements in the beginning or the middle.

It would be better if Python made these performance characteristics part of the interface, but they are not.  In fact, some implementations have changed over time; for example, the += operator on lists used to create a new list.  Now it is equivalent to extend.
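
A quick experiment shows the difference (rough timings; the exact numbers depend on your machine):

import timeit

# Appending at the end of a list takes (amortized) constant time per element;
# inserting at the front is linear, so the second loop is dramatically slower.
n = 50000
print(timeit.timeit('lst.append(0)', setup='lst = []', number=n))
print(timeit.timeit('lst.insert(0, 0)', setup='lst = []', number=n))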

Thanks to both of my correspondents for these questions (and for permission to quote them).  And thanks to the organizers of PyCon Taiwan, especially Albert Chun-Chieh Huang, for inviting me to speak.  I really enjoyed it.

Thursday, May 9, 2013

The Red Line problem

I've just added a new chapter to Think Bayes; it is a case study based on a class project two of my students worked on this semester.  It presents "The Red Line Problem," which is the problem of predicting the time until the next train arrives, based on the number of passengers on the platform.

Here's the introduction:

In Boston, the Red Line is a subway that runs north-south from Cambridge to Boston.  When I was working in Cambridge I took the Red Line from Kendall Square to South Station and caught the commuter rail to Needham.  During rush hour Red Line trains run every 7-8 minutes, on average.
When I arrived at the station, I could estimate the time until the next train based on the number of passengers on the platform.  If there were only a few people, I inferred that I just missed a train and expected to wait about 7 minutes.  If there were more passengers, I expected the train to arrive sooner.  But if there were a large number of passengers, I suspected that trains were not running on schedule, so I would go back to the street level and get a taxi. 
While I was waiting for trains, I thought about how Bayesian estimation could help predict my wait time and decide when I should give up and take a taxi.  This chapter presents the analysis I came up with.


Sadly, this problem has been overtaken by history: the Red Line now provides real-time estimates for the arrival of the next train.  But I think the analysis is interesting, and still applies for subway systems that don't provide estimates.

One interesting tidbit:

As it turns out, the average time between trains, as seen by a random passenger, is substantially higher than the true average. 
Why? Because a passenger is more likely to arrive during a large interval than a small one. Consider a simple example: suppose that the time between trains is either 5 minutes or 10 minutes with equal probability. In that case the average time between trains is 7.5 minutes.
But a passenger is more likely to arrive during a 10 minute gap than a 5 minute gap; in fact, twice as likely. If we surveyed arriving passengers, we would find that 2/3 of them arrived during a 10 minute gap, and only 1/3 during a 5 minute gap. So the average time between trains, as seen by an arriving passenger, is 8.33 minutes.
 
This kind of observer bias appears in many contexts. Students think that classes are bigger than they are, because more of them are in the big classes. Airline passengers think that planes are fuller than they are, because more of them are on full flights. 
In each case, values from the actual distribution are oversampled in proportion to their value. In the Red Line example, a gap that is twice as big is twice as likely to be observed.
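
The arithmetic in that example is easy to check:

# Gaps of 5 and 10 minutes with equal probability; a passenger lands in a
# gap with probability proportional to its length (observer bias).
gaps = {5: 0.5, 10: 0.5}

true_mean = sum(gap * p for gap, p in gaps.items())               # 7.5
biased = {gap: gap * p / true_mean for gap, p in gaps.items()}    # {5: 1/3, 10: 2/3}
observed_mean = sum(gap * p for gap, p in biased.items())         # 8.33

print(true_mean, observed_mean)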


The data for the Red Line are close to this example.  The actual time between trains is 7.6 minutes (based on 45 trains that arrived at Kendall Square between 4pm and 6pm so far this week).  The average gap as seen by random passengers is 8.3 minutes.

Interestingly, the Red Line schedule reports that trains run every 9 minutes during peak times. This is close to the average seen by passengers, but higher than the true average. I wonder if they are deliberately reporting the mean as seen by passengers in order to forestall complaints.

You can read the rest of the chapter here.  One of the figures there didn't render very well.  Here is a prettier version:


This figure shows the predictive distribution of wait times if you arrive and find 15 passengers on the platform.  Since we don't know the passenger arrival rate, we have to estimate it.  Each possible arrival rate yields one of the light blue lines; the dark blue line is the weighted mixture of the light blue lines.

So in this scenario, you expect the next train in 5 minutes or less, with 80% confidence.
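
The mixture step works like this (a sketch with plain dictionaries rather than the Pmf objects the chapter actually uses):

def mixture_pmf(predictive_pmfs, posterior_probs):
    """Weight each predictive distribution over wait time by the posterior
    probability of its passenger arrival rate, then add them up."""
    mix = {}
    for pmf, weight in zip(predictive_pmfs, posterior_probs):
        for wait, prob in pmf.items():
            mix[wait] = mix.get(wait, 0.0) + weight * prob
    return mix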

UPDATE 10 May 2013: I got the following note from developer@mbta.com, confirming that their reported gap between trains is deliberately conservative:

Thank you for writing to let us know about the Red Line case study in your book, and thank you for your question. You’re right that the scheduled time between trains listed on the subway schedule card for rush hour is greater than what you observed at Kendall Square. It’s meant as a slightly conservative simplification of the actual frequency of trains, which varies by time throughout rush hour – to provide maximum capacity during the very peak of rush hour when ridership is normally highest – as well as by location along the Red Line during those different times, since when trains begin to leave more frequently from Alewife it takes time for that frequency to “travel” down the line. So yes it is meant to be slightly conservative for that reason. We hope this information answers your question.

Sincerely,
developer@mbta.com




Monday, May 6, 2013

Software engineering practices for graduate students

Recently I was talking with an Olin student who will start graduate school in the fall, and I suggested a few things I wish I had done in grad school.  And then I thought I should write them down.  So here is my list of Software Engineering Practices All Graduate Students Should Adopt:

Version control

Every keystroke you type should be under version control from the time you initiate a project until you retire it.  Here are the reasons:

1) Everything you do will be backed up.  But instead of being organized by date (which is what most backup systems do), your backups are organized by revision.  So, for example, if you break something, you can roll back to an earlier working revision.

2) When you are collaborating with other people, you can share repositories.  Version control systems are well designed for managing this kind of collaboration.  If you are emailing documents back and forth, you are doing it wrong.

3) At various stages of the project, you can save a tagged copy of the repo.  For example, when you submit a paper for publication, make a tagged copy.  You can keep working on the trunk, and when you get reviewer comments (or a question 5 years later) you have something to refer back to.

I use Subversion (SVN) primarily, so I keep many of my projects on Google Code (if they are open source) or on my own SVN server.  But these days it seems like all the cool kids are using Git and keeping their repositories on GitHub.

Either way, find a version control system you like, learn how to use it, and find someplace to host your repository.


Build automation

This goes hand in hand with version control.  If someone checks out your repository, they should be able to rebuild your project by running a single command.  That means that everything someone needs to replicate your results should be in the repo, and you should have scripts that process the data, generate figures and tables, and integrate them into your papers, slides, and other documents.

One simple tool for automating the build is Make.  Every directory in your project should contain a Makefile.  The top-level directory should contain the Makefile that runs all the others.

If you use GUI-based tools to process data, it might not be easy to automate your build.  But it will be worth it.  The night before your paper is due, you will find a bug somewhere in your data flow.  If you've done things right, you should be able to rebuild the paper with just five keystrokes (m-a-k-e, and Enter).

Also, put a README in the top-level directory that documents the directory structure and the build process.  If your build depends on other software, include it in the repo if practical; otherwise provide a list of required packages.

Or, if your software environment is not easy to replicate, put your whole development environment in a virtual machine and ship the VM.


Agile development

For many people, the most challenging part of grad school is time management.  If you are an undergraduate taking 4-5 classes, you can do deadline-driven scheduling; that is, you can work on whatever task is due next and you will probably get everything done on time.

In grad school, you have more responsibility for how you spend your time and fewer deadlines to guide you.  It is easy to lose track of what you are doing, waste time doing things that are not important (see Yak Shaving), and neglect the things that move you toward the goal of graduation.

One of the purposes of agile development tools is to help people decide what to do next.  They provide several features that apply to grad school as well as software development:

1) They encourage planners to divide large tasks into smaller tasks that have a clearly-defined end condition.

2) They maintain a priority-ranking of tasks so that when you complete one you can start work on the next, or one of the next few.

3) They provide mechanisms for collaborating with a team and for getting feedback from an adviser.

4) They involve planning on at least two time scales.  On a daily basis you decide what to work on by selecting tasks from the backlog.  On a weekly (or longer) basis, you create and reorder tasks, and decide which ones you should work on during the next cycle.

If you use GitHub or Google Code for version control, you get an issue tracker as part of the deal.  You can use issue trackers for agile planning, but there are other tools, like Pivotal Tracker, that have more of the agile methodology built in.  I suggest you start with Pivotal Tracker because it has excellent documentation, but you might have to try out a few tools to find one you like.


Do these things -- Version Control, Build Automation, and Agile Development -- and you will get through grad school in less than the average time, with less than the average drama.