# Design a web crawler

*Note: This document links directly to relevant areas found in the [system design topics](https://github.com/donnemartin/system-design-primer-interview#index-of-system-design-topics-1) to avoid duplication. Refer to the linked content for general talking points, tradeoffs, and alternatives.*

## Step 1: Outline use cases and constraints

> Gather requirements and scope the problem.
> Ask questions to clarify use cases and constraints.
> Discuss assumptions.

Without an interviewer to address clarifying questions, we'll define some use cases and constraints.

### Use cases

#### We'll scope the problem to handle only the following use cases

* **Service** crawls a list of urls:
    * Generates reverse index of words to pages containing the search terms
    * Generates titles and snippets for pages
        * Titles and snippets are static; they do not change based on the search query
* **User** inputs a search term and sees a list of relevant pages with titles and snippets the crawler generated
    * Only sketch high level components and interactions for this use case, no need to go into depth
* **Service** has high availability

#### Out of scope

* Search analytics
* Personalized search results
* Page rank

### Constraints and assumptions

#### State assumptions

* Traffic is not evenly distributed
    * Some searches are very popular, while others are only executed once
* Support only anonymous users
* Generating search results should be fast
* The web crawler should not get stuck in an infinite loop
    * We get stuck in an infinite loop if the graph contains a cycle
* 1 billion links to crawl
    * Pages need to be crawled regularly to ensure freshness
    * Average refresh rate of about once per week, more frequent for popular sites
        * 4 billion links crawled each month
    * Average stored size per web page: 500 KB
        * For simplicity, count changes the same as new pages
* 100 billion searches per month

Exercise the use of more traditional systems - don't use existing systems such as [solr](http://lucene.apache.org/solr/) or [nutch](http://nutch.apache.org/).

#### Calculate usage

**Clarify with your interviewer if you should run back-of-the-envelope usage calculations.**

* 2 PB of stored page content per month
    * 500 KB per page * 4 billion links crawled per month
    * 72 PB of stored page content in 3 years
* 1,600 write requests per second
* 40,000 search requests per second

Handy conversion guide:

* 2.5 million seconds per month
* 1 request per second = 2.5 million requests per month
* 40 requests per second = 100 million requests per month
* 400 requests per second = 1 billion requests per month
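
A quick sanity check of the estimates above, sketched in Python (nothing here beyond the arithmetic already stated):

```
SECONDS_PER_MONTH = 2.5 * 10**6      # rounded, per the conversion guide
KB = 10**3
PB = 10**15

links_crawled_per_month = 4 * 10**9
searches_per_month = 100 * 10**9
avg_page_size_bytes = 500 * KB

print(links_crawled_per_month * avg_page_size_bytes / PB)       # 2.0 PB stored per month
print(links_crawled_per_month * avg_page_size_bytes / PB * 36)  # 72.0 PB stored in 3 years
print(links_crawled_per_month / SECONDS_PER_MONTH)              # 1,600 write requests per second
print(searches_per_month / SECONDS_PER_MONTH)                   # 40,000 search requests per second
```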

## Step 2: Create a high level design

> Outline a high level design with all important components.

## Step 3: Design core components

> Dive into details for each core component.

### Use case: Service crawls a list of urls

We'll assume we have an initial list of `links_to_crawl`, ranked based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content, such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.

We'll use a table `crawled_links` to store processed links and their page signatures.

We could store `links_to_crawl` and `crawled_links` in a key-value **NoSQL Database**. For the ranked links in `links_to_crawl`, we could use [Redis](https://redis.io/) with sorted sets to maintain a ranking of page links. We should discuss the [use cases and tradeoffs between choosing SQL or NoSQL](https://github.com/donnemartin/system-design-primer-interview#sql-or-nosql).
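
A minimal sketch of the ranked `links_to_crawl` structure, assuming we go with Redis sorted sets; the key name and the [redis-py](https://github.com/redis/redis-py) client calls are illustrative, not a committed part of the design:

```
import redis

client = redis.Redis(host='localhost', port=6379)

def add_link_to_crawl(url, priority):
    """Add or update a link with its priority score."""
    client.zadd('links_to_crawl', {url: priority})

def extract_max_priority_link():
    """Pop the highest priority link, or return None if none remain."""
    results = client.zpopmax('links_to_crawl')
    return results[0][0].decode() if results else None

def reduce_priority(url, amount=1):
    """Lower a link's priority, for example after seeing similar content."""
    client.zincrby('links_to_crawl', -amount, url)
```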

* The **Crawler Service** processes each page link by doing the following in a loop:
    * Takes the top ranked page link to crawl
    * Checks `crawled_links` in the **NoSQL Database** for an entry with a similar page signature
        * If we have a similar page, reduces the priority of the page link
            * This prevents us from getting into a cycle
            * Continue
        * Else, crawls the link
            * Adds a job to the **Reverse Index Service** queue to generate a [reverse index](https://en.wikipedia.org/wiki/Search_engine_indexing) (a small sketch of what this produces follows the list)
            * Adds a job to the **Document Service** queue to generate a static title and snippet
            * Generates the page signature
            * Removes the link from `links_to_crawl` in the **NoSQL Database**
            * Inserts the page link and signature to `crawled_links` in the **NoSQL Database**

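As a rough illustration of what the **Reverse Index Service** job produces, here is an in-memory toy index mapping words to the urls that contain them (ignoring sharding, stemming, and ranking):

```
from collections import defaultdict

def build_reverse_index(pages):
    """Map each word to the set of urls whose contents contain it."""
    index = defaultdict(set)
    for url, contents in pages.items():
        for word in contents.lower().split():
            index[word].add(url)
    return index

index = build_reverse_index({
    'https://foo.com': 'hello world',
    'https://bar.com': 'hello there',
})
index['hello']  # {'https://foo.com', 'https://bar.com'}
```
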
**Clarify with your interviewer how much code you are expected to write**.

`PagesDataStore` is an abstraction within the **Crawler Service** that uses the **NoSQL Database**:

```
class PagesDataStore(object):

    def __init__(self, db):
        self.db = db
        ...

    def add_link_to_crawl(self, url):
        """Add the given link to `links_to_crawl`."""
        ...

    def remove_link_to_crawl(self, url):
        """Remove the given link from `links_to_crawl`."""
        ...

    def reduce_priority_link_to_crawl(self, url):
        """Reduce the priority of a link in `links_to_crawl` to avoid cycles."""
        ...

    def extract_max_priority_page(self):
        """Return the highest priority link in `links_to_crawl`."""
        ...

    def insert_crawled_link(self, url, signature):
        """Add the given link to `crawled_links`."""
        ...

    def crawled_similar(self, signature):
        """Determine if we've already crawled a page matching the given signature."""
        ...
```

`Page` is an abstraction within the **Crawler Service** that encapsulates a page, its contents, child urls, and signature:

```
class Page(object):

    def __init__(self, url, contents, child_urls, signature):
        self.url = url
        self.contents = contents
        self.child_urls = child_urls
        self.signature = signature
```

`Crawler` is the main class within the **Crawler Service**, composed of `Page` and `PagesDataStore`.

```
class Crawler(object):

    def __init__(self, data_store, reverse_index_queue, doc_index_queue):
        self.data_store = data_store
        self.reverse_index_queue = reverse_index_queue
        self.doc_index_queue = doc_index_queue

    def create_signature(self, page):
        """Create signature based on url and contents."""
        ...

    def crawl_page(self, page):
        for url in page.child_urls:
            self.data_store.add_link_to_crawl(url)
        page.signature = self.create_signature(page)
        self.data_store.remove_link_to_crawl(page.url)
        self.data_store.insert_crawled_link(page.url, page.signature)

    def crawl(self):
        while True:
            page = self.data_store.extract_max_priority_page()
            if page is None:
                break
            if self.data_store.crawled_similar(page.signature):
                self.data_store.reduce_priority_link_to_crawl(page.url)
            else:
                self.crawl_page(page)
```

### Handling duplicates

We need to be careful the web crawler doesn't get stuck in an infinite loop, which happens when the graph contains a cycle.

**Clarify with your interviewer how much code you are expected to write**.

We'll want to remove duplicate urls:

* For smaller lists we could use something like `sort | uniq`
* With 1 billion links to crawl, we could use **MapReduce** to output only entries that have a frequency of 1

```
from mrjob.job import MRJob


class RemoveDuplicateUrls(MRJob):

    def mapper(self, _, line):
        yield line, 1

    def reducer(self, key, values):
        total = sum(values)
        if total == 1:
            yield key, total


if __name__ == '__main__':
    RemoveDuplicateUrls.run()
```

Detecting duplicate content is more complex. We could generate a signature based on the contents of the page and compare the signatures of two pages for similarity. Some potential algorithms are the [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index) and [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
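
A minimal sketch of near-duplicate detection using the Jaccard index over word [shingles](https://en.wikipedia.org/wiki/W-shingling); the shingle size and 0.9 threshold are arbitrary illustrations, and a production crawler would likely compress the shingle sets with something like MinHash rather than store them whole:

```
def shingles(contents, size=3):
    """Return the set of word n-grams for the given page contents."""
    words = contents.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def jaccard_index(shingles_a, shingles_b):
    """Size of the intersection divided by the size of the union, in [0, 1]."""
    if not shingles_a and not shingles_b:
        return 1.0
    return len(shingles_a & shingles_b) / len(shingles_a | shingles_b)

def is_similar(contents_a, contents_b, threshold=0.9):
    """Treat two pages as duplicates if their shingle sets mostly overlap."""
    return jaccard_index(shingles(contents_a), shingles(contents_b)) >= threshold
```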

### Determining when to update the crawl results

Pages need to be crawled regularly to ensure freshness. Crawl results could have a `timestamp` field that indicates the last time a page was crawled. After a default time period, say one week, all pages should be refreshed. Frequently updated or more popular sites could be refreshed at shorter intervals.

Although we won't dive into details on analytics, we could do some data mining to determine the mean time before a particular page is updated, and use that statistic to determine how often to re-crawl the page.

We might also choose to support a `robots.txt` file that gives webmasters control over which pages are crawled and how often.
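
A minimal sketch of the freshness check and of honoring `robots.txt` with Python's standard `urllib.robotparser`; the one-week default and the popularity multiplier are assumptions carried over from the constraints above:

```
from datetime import datetime, timedelta
from urllib import robotparser
from urllib.parse import urlsplit

DEFAULT_REFRESH = timedelta(weeks=1)

def needs_recrawl(last_crawled_at, popularity_factor=1.0, now=None):
    """Return True if the page is stale; popular pages (factor > 1.0) refresh sooner."""
    now = now or datetime.utcnow()
    return now - last_crawled_at > DEFAULT_REFRESH / popularity_factor

def allowed_to_crawl(url, user_agent='our-crawler'):
    """Check the site's robots.txt before fetching the page."""
    parts = urlsplit(url)
    parser = robotparser.RobotFileParser()
    parser.set_url('{}://{}/robots.txt'.format(parts.scheme, parts.netloc))
    parser.read()
    return parser.can_fetch(user_agent, url)
```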

### Use case: User inputs a search term and sees a list of relevant pages with titles and snippets

* The **Client** sends a request to the **Web Server**, running as a [reverse proxy](https://github.com/donnemartin/system-design-primer-interview#reverse-proxy-web-server)
* The **Web Server** forwards the request to the **Query API** server
* The **Query API** server does the following:
    * Parses the query (a small sketch follows the list)
        * Removes markup
        * Breaks up the text into terms
        * Fixes typos
        * Normalizes capitalization
        * Converts the query to use boolean operations
    * Uses the **Reverse Index Service** to find documents matching the query
        * The **Reverse Index Service** ranks the matching results and returns the top ones
    * Uses the **Document Service** to return titles and snippets

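A minimal sketch of the query parsing steps; typo correction and boolean conversion are substantial problems on their own, so they are only stubbed, and the function names are illustrative:

```
import re

def parse_query(query):
    """Normalize a raw search query into a list of terms."""
    text = re.sub(r'<[^>]+>', '', query)         # remove markup
    text = text.lower()                          # normalize capitalization
    terms = re.findall(r'[a-z0-9]+', text)       # break up the text into terms
    return [fix_typo(term) for term in terms]    # fix typos (stubbed below)

def fix_typo(term):
    """Placeholder for spelling correction, e.g. edit distance against a dictionary."""
    return term

parse_query('<b>Hello</b>, World!')  # ['hello', 'world']
```
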
We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer-interview#representational-state-transfer-rest):

```
$ curl https://search.com/api/v1/search?query=hello+world
```

Response:

```
[
    {
        "title": "foo's title",
        "snippet": "foo's snippet",
        "link": "https://foo.com"
    },
    {
        "title": "bar's title",
        "snippet": "bar's snippet",
        "link": "https://bar.com"
    },
    {
        "title": "baz's title",
        "snippet": "baz's snippet",
        "link": "https://baz.com"
    }
]
```

For internal communications, we could use [Remote Procedure Calls](https://github.com/donnemartin/system-design-primer-interview#remote-procedure-call-rpc).

## Step 4: Scale the design

> Identify and address bottlenecks, given the constraints.

**Important: Do not simply jump right into the final design from the initial design!**

State you would 1) **Benchmark/Load Test**, 2) **Profile** for bottlenecks, 3) address bottlenecks while evaluating alternatives and trade-offs, and 4) repeat. See [Design a system that scales to millions of users on AWS]() as a sample on how to iteratively scale the initial design.

It's important to discuss what bottlenecks you might encounter with the initial design and how you might address each of them. For example, what issues are addressed by adding a **Load Balancer** with multiple **Web Servers**? **CDN**? **Master-Slave Replicas**? What are the alternatives and **Trade-Offs** for each?

We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.

*To avoid repeating discussions*, refer to the following [system design topics](https://github.com/donnemartin/system-design-primer-interview#) for main talking points, tradeoffs, and alternatives:

* [DNS](https://github.com/donnemartin/system-design-primer-interview#domain-name-system)
* [Load balancer](https://github.com/donnemartin/system-design-primer-interview#load-balancer)
* [Horizontal scaling](https://github.com/donnemartin/system-design-primer-interview#horizontal-scaling)
* [Web server (reverse proxy)](https://github.com/donnemartin/system-design-primer-interview#reverse-proxy-web-server)
* [API server (application layer)](https://github.com/donnemartin/system-design-primer-interview#application-layer)
* [Cache](https://github.com/donnemartin/system-design-primer-interview#cache)
* [NoSQL](https://github.com/donnemartin/system-design-primer-interview#nosql)
* [Consistency patterns](https://github.com/donnemartin/system-design-primer-interview#consistency-patterns)
* [Availability patterns](https://github.com/donnemartin/system-design-primer-interview#availability-patterns)

Some searches are very popular, while others are only executed once. Popular queries can be served from a **Memory Cache** such as Redis or Memcached to reduce response times and to avoid overloading the **Reverse Index Service** and **Document Service**. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.<sup><a href=https://github.com/donnemartin/system-design-primer-interview#latency-numbers-every-programmer-should-know>1</a></sup>
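
A minimal cache-aside sketch for popular query results, assuming Redis; the key scheme and 60-second TTL are illustrative, and `query_backend` is a placeholder standing in for the **Reverse Index Service** and **Document Service** calls described above:

```
import json

import redis

cache = redis.Redis(host='localhost', port=6379)
CACHE_TTL_SECONDS = 60

def search(query, query_backend):
    """Serve popular queries from the cache, falling back to the query services."""
    key = 'search:' + query.strip().lower()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    results = query_backend(query)  # placeholder callable for the services above
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(results))
    return results
```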

Below are a few other optimizations to the **Crawler Service**:

* To handle the data size and request load, the **Reverse Index Service** and **Document Service** will likely need to make heavy use of sharding and replication
* DNS lookup can be a bottleneck; the **Crawler Service** can keep its own DNS lookup cache that is refreshed periodically
* The **Crawler Service** can improve performance and reduce memory usage by keeping many open connections at a time, referred to as [connection pooling](https://en.wikipedia.org/wiki/Connection_pool) (a small sketch follows this list)
    * Switching to [UDP](https://github.com/donnemartin/system-design-primer-interview#user-datagram-protocol-udp) could also boost performance
* Web crawling is bandwidth intensive, so ensure there is enough bandwidth to sustain high throughput

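A minimal sketch of connection re-use with [Requests](https://requests.readthedocs.io/) keep-alive pooling plus a simple in-process DNS cache; the pool sizes, cache size, and timeout are illustrative only, and the two pieces are independent examples rather than a single integrated client:

```
import socket
from functools import lru_cache

import requests
from requests.adapters import HTTPAdapter

# Re-use TCP connections across requests to the same hosts.
session = requests.Session()
adapter = HTTPAdapter(pool_connections=100, pool_maxsize=100)
session.mount('http://', adapter)
session.mount('https://', adapter)

def fetch(url):
    """Fetch a page over a pooled, keep-alive connection."""
    return session.get(url, timeout=10)

@lru_cache(maxsize=10000)
def resolve(hostname):
    """Cache DNS lookups in-process; clear the cache periodically to refresh."""
    return socket.gethostbyname(hostname)
```
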
## Additional talking points

> Additional topics to dive into, depending on the problem scope and time remaining.

### SQL scaling patterns

* [Read replicas](https://github.com/donnemartin/system-design-primer-interview#master-slave)
* [Federation](https://github.com/donnemartin/system-design-primer-interview#federation)
* [Sharding](https://github.com/donnemartin/system-design-primer-interview#sharding)
* [Denormalization](https://github.com/donnemartin/system-design-primer-interview#denormalization)
* [SQL Tuning](https://github.com/donnemartin/system-design-primer-interview#sql-tuning)

### NoSQL

* [Key-value store](https://github.com/donnemartin/system-design-primer-interview#)
* [Document store](https://github.com/donnemartin/system-design-primer-interview#)
* [Wide column store](https://github.com/donnemartin/system-design-primer-interview#)
* [Graph database](https://github.com/donnemartin/system-design-primer-interview#)
* [SQL vs NoSQL](https://github.com/donnemartin/system-design-primer-interview#)

### Caching

* Where to cache
    * [Client caching](https://github.com/donnemartin/system-design-primer-interview#client-caching)
    * [CDN caching](https://github.com/donnemartin/system-design-primer-interview#cdn-caching)
    * [Web server caching](https://github.com/donnemartin/system-design-primer-interview#web-server-caching)
    * [Database caching](https://github.com/donnemartin/system-design-primer-interview#database-caching)
    * [Application caching](https://github.com/donnemartin/system-design-primer-interview#application-caching)
* What to cache
    * [Caching at the database query level](https://github.com/donnemartin/system-design-primer-interview#caching-at-the-database-query-level)
    * [Caching at the object level](https://github.com/donnemartin/system-design-primer-interview#caching-at-the-object-level)
* When to update the cache
    * [Cache-aside](https://github.com/donnemartin/system-design-primer-interview#cache-aside)
    * [Write-through](https://github.com/donnemartin/system-design-primer-interview#write-through)
    * [Write-behind (write-back)](https://github.com/donnemartin/system-design-primer-interview#write-behind-write-back)
    * [Refresh ahead](https://github.com/donnemartin/system-design-primer-interview#refresh-ahead)

### Asynchronism and microservices

* [Message queues](https://github.com/donnemartin/system-design-primer-interview#)
* [Task queues](https://github.com/donnemartin/system-design-primer-interview#)
* [Back pressure](https://github.com/donnemartin/system-design-primer-interview#)
* [Microservices](https://github.com/donnemartin/system-design-primer-interview#)

### Communications

* Discuss tradeoffs:
    * External communication with clients - [HTTP APIs following REST](https://github.com/donnemartin/system-design-primer-interview#representational-state-transfer-rest)
    * Internal communications - [RPC](https://github.com/donnemartin/system-design-primer-interview#remote-procedure-call-rpc)
* [Service discovery](https://github.com/donnemartin/system-design-primer-interview#service-discovery)

### Security

Refer to the [security section](https://github.com/donnemartin/system-design-primer-interview#security).

### Latency numbers

See [Latency numbers every programmer should know](https://github.com/donnemartin/system-design-primer-interview#latency-numbers-every-programmer-should-know).

### Ongoing

* Continue benchmarking and monitoring your system to address bottlenecks as they come up
* Scaling is an iterative process