Always Be Testing: Detecting Novelty in A/B Tests
In the Obama campaign's digital fundraising operation, we constantly A/B tested all aspects of our emails. Sometimes we would stumble upon surprising results that gave us statistically significant increases in our donation rates -- and then, just as quickly, those effects would wear off. From special characters in subject lines to ugly yellow highlighting on entire paragraphs of an email, we found plenty of tactics that displayed classic "novelty effect" patterns.
This two-and-a-half-hour workshop will demonstrate how to detect those novelty effects through multi-stage A/B testing, long-term testing, and re-testing -- and how to avoid falling into the "best practices" trap of thinking you're done after one successful round of tests.
Additional Supporting Materials
- What are some tactics the Obama campaign learned through A/B testing that increased donation rates in the short term, but ultimately lost their effectiveness?
- How can campaigns and organizations use A/B testing to detect the novelty effects of their tactics?
- What are some industry "best practices" that originally emerged from A/B testing, but got so widely adopted that they are now ineffective?
- How do you know when you've discovered a tactic that *won't* lose its novelty?
- How can campaigns, companies, and organizations with small lists still get useful information from long-term and multi-stage A/B testing?
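The re-testing idea behind these questions can be sketched with a simple two-proportion z-test: run the same variant-vs-control comparison at two points in time and see whether the early lift survives. The donation counts below are hypothetical, purely to illustrate what a classic novelty pattern looks like in the numbers.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts.

    conv_a/n_a: conversions and sends for the variant;
    conv_b/n_b: conversions and sends for the control.
    Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: the variant wins big in week 1,
# then converges toward the control in a week-4 re-test --
# the novelty-effect pattern described above.
z1, p1 = z_test_two_proportions(520, 10_000, 400, 10_000)  # week 1
z4, p4 = z_test_two_proportions(415, 10_000, 400, 10_000)  # week 4 re-test

print(f"week 1:  z={z1:.2f}, p={p1:.5f}")
print(f"week 4:  z={z4:.2f}, p={p4:.3f}")
```

If the week-1 result is significant but the re-test is not, that's a signal the tactic's edge was novelty rather than a durable improvement; smaller lists can get at the same question by pooling several re-test waves.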
- Amelia Showalter, Amelia Showalter LLC