Misleading information is nothing new, yet its impact seems only to grow. We investigate this phenomenon in the context of social bots: software agents that mimic humans and are designed to interact with them while supporting specific agendas. This work explores the effect of social bots on the spread of misinformation on Facebook during the Fall of 2016 and prototypes a tool for their detection. Using a dataset of about two million user comments discussing the posts of public pages for nine verified news outlets, we first annotate a large dataset for social bots. We then develop and evaluate commercially implementable bot detection software for public pages, achieving an overall F1 score of 0.71. Applying this software, we found that only a small percentage (0.06%) of the commenting user population were social bots. However, their activity was extremely disproportionate: they produced 3.5% of all comments, a per-user rate more than fifty times higher than average. Finally, we observe that one might commonly encounter social bot comments at a rate of about one in ten on posts from mainstream outlets with reliable content. In light of these findings, and to support page owners and their communities, we release prototype code and software to help moderate social bots on Facebook.
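The "fifty times higher" figure follows directly from the two shares reported above: bots are 0.06% of commenting users but produce 3.5% of comments, so their per-capita rate is the ratio of the two. A minimal sketch of that arithmetic:

```python
# Sketch of the disproportionality arithmetic from the abstract:
# 0.06% of commenting users are bots, yet they produce 3.5% of comments,
# so their per-user commenting rate is the ratio of those two shares.
bot_user_share = 0.06 / 100      # fraction of users identified as bots
bot_comment_share = 3.5 / 100    # fraction of comments produced by bots

rate_multiplier = bot_comment_share / bot_user_share
print(f"Bots comment at roughly {rate_multiplier:.0f}x the average user rate")
# → prints: Bots comment at roughly 58x the average user rate
```

The ratio works out to about 58, consistent with the "more than fifty times higher" claim.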