
Always Be Testing: Detecting Novelty in A/B Tests

In the Obama campaign's digital fundraising operation, we constantly A/B tested all aspects of our emails. Sometimes we would stumble upon surprising results that gave us statistically significant increases in our donation rates -- and then, just as quickly, those effects would wear off. From special characters in subject lines to ugly yellow highlighting on entire paragraphs of an email, we found plenty of tactics that displayed classic "novelty effect" patterns.

This two-and-a-half-hour workshop will demonstrate how to detect those novelty effects through multi-stage A/B testing, long-term testing, and re-testing -- and how to avoid the "best practices" trap of assuming you're done after one successful round of tests.
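As a rough illustration of the multi-stage approach described above, the same treatment-vs-control comparison can be re-run weeks later and checked with a standard two-proportion z-test: a lift that is significant in stage one but vanishes on the re-test is a novelty-effect candidate. The numbers below are invented for illustration, not campaign data.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts.

    Returns (z, p_value) for H0: the two underlying rates are equal.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Stage 1: a novel treatment (e.g. an unusual subject line) beats control
z1, p1 = two_proportion_z_test(540, 10_000, 450, 10_000)
print(f"stage 1: z={z1:.2f}, p={p1:.4f}")   # significant lift

# Stage 2: re-test of the same treatment weeks later -- the lift has faded
z2, p2 = two_proportion_z_test(460, 10_000, 450, 10_000)
print(f"stage 2: z={z2:.2f}, p={p2:.4f}")   # no longer significant
```

A significant stage-one result followed by a null re-test doesn't prove novelty decay on its own (seasonality and list changes also move rates), which is why the workshop pairs re-testing with long-term holdout testing.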

Additional Supporting Materials


  1. What are some tactics the Obama campaign learned through A/B testing that increased donation rates in the short term, but ultimately lost their effectiveness?
  2. How can campaigns and organizations use A/B testing to detect the novelty effects of their tactics?
  3. What are some industry "best practices" that originally emerged from A/B testing, but got so widely adopted that they are now ineffective?
  4. How do you know when you've discovered a tactic that *won't* lose its novelty?
  5. How can campaigns, companies, and organizations with small lists still get useful information from long-term and multi-stage A/B testing?



Amelia Showalter, Digital and Quantitative Consultant, Amelia Showalter LLC
