Tuesday, March 19, 2013

Thoughts on Program Verification

Inspiration:
    I just finished the programming assignment in HW3 of my Pattern Recognition class. There was a significant bug in my implementation of the log-likelihood calculation for the 1D Triangle Distribution, which caused the parameter estimation to diverge.
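    For concreteness, here is a minimal sketch of what that computation typically looks like, assuming the usual parameterization with support [a, b] and mode c (a < c < b); the function name and interface are illustrative, not the ones from my assignment. A common source of divergence is a point outside the support, whose density is 0 and whose log-density is -inf, silently poisoning the subsequent parameter update:

```python
import numpy as np

def triangle_loglik(x, a, c, b):
    """Total log-likelihood of samples x under a triangular distribution
    on [a, b] with mode c (assumes a < c < b).  Points outside the support
    get density 0, hence log-density -inf."""
    x = np.asarray(x, dtype=float)
    pdf = np.zeros_like(x)
    left = (x >= a) & (x < c)
    right = (x >= c) & (x <= b)
    pdf[left] = 2.0 * (x[left] - a) / ((b - a) * (c - a))
    pdf[right] = 2.0 * (b - x[right]) / ((b - a) * (b - c))
    with np.errstate(divide="ignore"):   # log(0) -> -inf rather than a warning
        return np.sum(np.log(pdf))
```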
    So I'm wondering whether it's feasible to maintain correctness even for data science programs like this (or, viewed another way, scientific simulations). In this domain, the key challenge is that such divergence can be caused either by a bug or by the use of a wrong statistical model. In the latter case, the result cannot be called an error in the program.
    Thus, my focus is on how to distinguish these two cases. If we can achieve this, I could debug the program without second-guessing my choice of model, or, conversely, weigh the pros and cons of the model itself without worrying about potential bugs in the implementation.
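    One cheap way to rule out implementation bugs independently of the modeling question is to cross-check the hand-written likelihood against an independent reference implementation. The sketch below (my own helper names, not part of the assignment) compares the triangle_loglik function from the earlier sketch against SciPy's triang distribution, which uses a different parameterization. Agreement suggests the divergence comes from the model choice; disagreement points to a bug in the implementation:

```python
import numpy as np
from scipy import stats

def check_against_reference(x, a, c, b, tol=1e-8):
    """Cross-check the hand-written log-likelihood against SciPy's
    triangular distribution on the same data and parameters."""
    mine = triangle_loglik(x, a, c, b)            # hand-written version from above
    shape = (c - a) / (b - a)                     # SciPy's 'c' is the mode rescaled to [0, 1]
    ref = np.sum(stats.triang.logpdf(x, shape, loc=a, scale=b - a))
    return abs(mine - ref) < tol

# Example: synthetic data drawn from the same triangular distribution
rng = np.random.default_rng(0)
x = rng.triangular(0.0, 0.3, 1.0, size=200)       # left=0, mode=0.3, right=1
print(check_against_reference(x, a=0.0, c=0.3, b=1.0))   # True if the two agree
```

    Of course this only shifts trust to the reference implementation, but it is exactly the kind of check that separates "my code is wrong" from "my model is wrong".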
