Large-scale automated fact-checking is a challenging task that has only recently been studied systematically. Large, noisy document collections such as the web or news articles make the task more difficult. We examine the performance of a three-stage automated fact-checking system using various evidence retrieval and selection methods. We demonstrate that hybrid passage retrieval, combining sparse and dense representations, leads to much higher evidence recall in a noisy setting. We also propose two sentence selection approaches: an embedding-based selection using a dense retrieval model, and a sequence-labeling approach for context-aware selection. The embedding-based selection achieves very high recall across two different datasets, while the sequence-labeling model achieves higher precision and improves verification accuracy compared to context-agnostic sentence selection approaches. Using the same three-stage architecture, we built Quin, a large-scale fact-checking system for the COVID-19 pandemic.
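The hybrid retrieval mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy term-overlap lexical score (standing in for a sparse scorer like BM25), the precomputed passage vectors, the `alpha` fusion weight, and the min-max score normalization are all illustrative assumptions.

```python
import math
from collections import Counter

def sparse_score(query: str, passage: str) -> float:
    # Toy lexical overlap score; a real system would use BM25 or similar.
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return float(sum(min(q[t], p[t]) for t in q))

def dense_score(q_vec, p_vec) -> float:
    # Cosine similarity between (assumed precomputed) dense embeddings.
    dot = sum(a * b for a, b in zip(q_vec, p_vec))
    nq = math.sqrt(sum(a * a for a in q_vec))
    np_ = math.sqrt(sum(b * b for b in p_vec))
    return dot / (nq * np_) if nq and np_ else 0.0

def hybrid_rank(query, q_vec, passages, alpha=0.5):
    # Linear fusion of min-max-normalized sparse and dense scores
    # (one common way to combine the two signals).
    sparse = [sparse_score(query, p["text"]) for p in passages]
    dense = [dense_score(q_vec, p["vec"]) for p in passages]

    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

    s, d = norm(sparse), norm(dense)
    fused = [alpha * si + (1 - alpha) * di for si, di in zip(s, d)]
    order = sorted(range(len(passages)), key=lambda i: -fused[i])
    return [passages[i]["text"] for i in order]
```

The intuition is that lexical matching catches exact terms (names, numbers) while dense similarity catches paraphrases, so fusing the two improves recall over either signal alone in noisy collections.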