Wow. Our experiment is off to a great start—let's see if we can finish it off sooner than expected.
If you really want to understand the shortcomings of the Oregon Medicaid study, you should be reading Austin Frakt and Aaron Carroll over at The Incidental Economist. Frakt has one final post today in which he goes ultrawonky and calculates just how underpowered the study was when it came to detecting statistically significant changes in the diabetes markers. It's way over my head, so I'll just pass along the headline result: the study was underpowered by at least a factor of 23. That is, the researchers would have needed a sample size at least 23 times larger than they had in order to reliably detect the effects they were looking for.
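For the curious, the kind of calculation Frakt is doing rests on a textbook formula: the sample size needed per group grows with the square of the inverse of the effect you're trying to detect. Here's a minimal sketch of that two-sample power calculation — the effect sizes below are illustrative assumptions, not the study's actual figures:

```python
# Standard two-sample power calculation (normal approximation).
# The effect sizes used here are illustrative, NOT the Oregon study's numbers.
from math import ceil
from statistics import NormalDist

def required_n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a mean difference
    `delta` (with outcome standard deviation `sigma`) in a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
    z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Required n scales with 1/delta^2: halving the detectable effect
# quadruples the needed sample.
n_medium = required_n_per_group(delta=0.2)  # modest effect
n_small = required_n_per_group(delta=0.1)   # half that effect
print(ceil(n_medium), ceil(n_small))  # roughly 393 vs. 1570 per group
```

Because of that inverse-square relationship, being underpowered by a factor of 23 in sample size means the true effects could be nearly five times smaller than the smallest effect the study could reliably detect.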
The full writeup is here. Bottom line: this study was just too small. The fact that it didn't find statistically significant results doesn't really tell us anything, good or bad, about the effect of Medicaid on health outcomes.