Online manipulation of information has become prevalent in recent years as disinformation campaigns seek to polarize political topics. While we are aware that disinformation campaigns exist, detecting their online presence remains difficult. Researchers have proposed detecting disinformation campaigns on Twitter by looking for specific coordination patterns (e.g., sharing the same hashtag within a short time frame). The problem with this approach, however, is that while the proposed coordination patterns may have been unique to the studied disinformation campaigns, they have not been thoroughly validated against non-random samples or across a diverse set of campaigns. We examine the usefulness of these coordination patterns for distinguishing the activity of a disinformation campaign from other Twitter activity. We do this by testing the proposed coordination patterns on a large-scale dataset of ten state-attributed campaigns and various benign Twitter communities likely to coordinate and share information among themselves. We show that such patterns have significant limitations. First, coordination in Twitter communities is not uncommon, especially when the online world is reacting to real-world news (e.g., the US impeachment trials). Second, we found that political bodies increased their coordinated activity during the COVID-19 pandemic. Such a surge in coordination worsens the trade-off between usability and detection rate. To correctly identify most of the benign activity during the pandemic, the classifier must miss nearly 25% of disinformation activity. Conversely, if the classifier achieves a high detection rate for malicious coordination, it misclassifies 46% of legitimate coordinated activity. Through this meta-analysis, we show that although coordination patterns could be useful for detecting disinformation activity, further analysis is needed to determine the community's intention.
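As a toy illustration (not the paper's actual detector), the kind of coordination pattern described above, users co-sharing the same hashtag within a short time window, could be sketched as follows. The function name, the data layout, and the 300-second window are assumptions for this example only:

```python
from collections import defaultdict

def coordinated_pairs(posts, window=300):
    """Return user pairs that shared the same hashtag within `window` seconds.

    posts: iterable of (user, hashtag, unix_timestamp) tuples.
    This is an illustrative sketch; window size and data format are assumed.
    """
    # bucket posts by hashtag
    by_tag = defaultdict(list)
    for user, tag, ts in posts:
        by_tag[tag].append((ts, user))

    pairs = set()
    for tag, events in by_tag.items():
        events.sort()  # time order within each hashtag
        start = 0
        for i, (ts, user) in enumerate(events):
            # advance the window's left edge past posts older than `window`
            while ts - events[start][0] > window:
                start += 1
            # every earlier post still in the window forms a coordinated pair
            for j in range(start, i):
                other = events[j][1]
                if other != user:
                    pairs.add(frozenset((user, other)))
    return pairs

posts = [
    ("alice", "#vote", 0),
    ("bob",   "#vote", 120),  # within 300s of alice -> coordinated pair
    ("carol", "#vote", 900),  # too late to pair with alice or bob
    ("dave",  "#news", 50),   # different hashtag, no pair
]
print(coordinated_pairs(posts))  # {frozenset({'alice', 'bob'})}
```

As the abstract argues, benign communities reacting to breaking news would also trigger such a pattern, which is exactly why pattern matching alone cannot establish malicious intent.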