Commit cc11a9b

Fix Mint exercise bugs and typos (donnemartin#409)
1 parent 6d700ab commit cc11a9b

File tree

1 file changed: +8 -8 lines changed


solutions/system_design/mint/README.md

Lines changed: 8 additions & 8 deletions
@@ -202,7 +202,7 @@ For sellers not initially seeded in the map, we could use a crowdsourcing effort
 ```python
 class Categorizer(object):
 
-    def __init__(self, seller_category_map, self.seller_category_crowd_overrides_map):
+    def __init__(self, seller_category_map, seller_category_crowd_overrides_map):
         self.seller_category_map = seller_category_map
         self.seller_category_crowd_overrides_map = \
             seller_category_crowd_overrides_map
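
Note: the removed line was a Python `SyntaxError`, since `self.` cannot appear in a parameter list. A minimal sanity check of the corrected constructor (the empty dicts are placeholder arguments, not part of the exercise):

```python
class Categorizer(object):

    def __init__(self, seller_category_map, seller_category_crowd_overrides_map):
        self.seller_category_map = seller_category_map
        self.seller_category_crowd_overrides_map = \
            seller_category_crowd_overrides_map


# Placeholder arguments, only to show the corrected signature is valid:
categorizer = Categorizer(seller_category_map={},
                          seller_category_crowd_overrides_map={})
```
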
@@ -223,7 +223,7 @@ Transaction implementation:
 class Transaction(object):
 
     def __init__(self, created_at, seller, amount):
-        self.timestamp = timestamp
+        self.created_at = created_at
         self.seller = seller
         self.amount = amount
 ```
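
Note: the removed line assigned `self.timestamp = timestamp`, but `timestamp` is not a parameter of `__init__`, so constructing a `Transaction` raised a `NameError`. A quick check of the corrected class (the sample values below are illustrative, not from the exercise):

```python
from datetime import datetime


class Transaction(object):

    def __init__(self, created_at, seller, amount):
        self.created_at = created_at
        self.seller = seller
        self.amount = amount


# Illustrative values only:
transaction = Transaction(datetime(2016, 1, 1), 'Exxon', 10.00)
assert transaction.created_at == datetime(2016, 1, 1)
```
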
@@ -241,10 +241,10 @@ class Budget(object):
 
     def create_budget_template(self):
         return {
-            'DefaultCategories.HOUSING': income * .4,
-            'DefaultCategories.FOOD': income * .2,
-            'DefaultCategories.GAS': income * .1,
-            'DefaultCategories.SHOPPING': income * .2
+            DefaultCategories.HOUSING: self.income * .4,
+            DefaultCategories.FOOD: self.income * .2,
+            DefaultCategories.GAS: self.income * .1,
+            DefaultCategories.SHOPPING: self.income * .2,
             ...
         }
 
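Note: the old entries quoted the category references, creating string keys like `'DefaultCategories.HOUSING'` instead of `DefaultCategories` members, and `income` was an undefined name (the instance attribute is `self.income`). A minimal runnable sketch, assuming `DefaultCategories` is an `Enum` and that `Budget` stores `income`, as elsewhere in the exercise; the `...` in the template above elides additional categories:

```python
from enum import Enum


class DefaultCategories(Enum):
    # Illustrative members; the exercise defines the full set.
    HOUSING = 0
    FOOD = 1
    GAS = 2
    SHOPPING = 3


class Budget(object):

    def __init__(self, income):
        self.income = income

    def create_budget_template(self):
        return {
            DefaultCategories.HOUSING: self.income * .4,
            DefaultCategories.FOOD: self.income * .2,
            DefaultCategories.GAS: self.income * .1,
            DefaultCategories.SHOPPING: self.income * .2,
        }


budget = Budget(income=1000)
assert budget.create_budget_template()[DefaultCategories.FOOD] == 200
```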

@@ -373,9 +373,9 @@ Instead of keeping the `monthly_spending` aggregate table in the **SQL Database*
 
 We might only want to store a month of `transactions` data in the database, while storing the rest in a data warehouse or in an **Object Store**. An **Object Store** such as Amazon S3 can comfortably handle the constraint of 250 GB of new content per month.
 
-To address the 2,000 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
+To address the 200 *average* read requests per second (higher at peak), traffic for popular content should be handled by the **Memory Cache** instead of the database. The **Memory Cache** is also useful for handling the unevenly distributed traffic and traffic spikes. The **SQL Read Replicas** should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
 
-200 *average* transaction writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**. We might need to employ additional SQL scaling patterns:
+2,000 *average* transaction writes per second (higher at peak) might be tough for a single **SQL Write Master-Slave**. We might need to employ additional SQL scaling patterns:
 
 * [Federation](https://github.com/donnemartin/system-design-primer#federation)
 * [Sharding](https://github.com/donnemartin/system-design-primer#sharding)
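
Note: for a sense of what the sharding pattern could look like here, below is a minimal sketch that routes `transactions` writes across SQL shards by hashing the user id. The shard count, the `connections` list, and the table columns are assumptions for illustration, not part of the exercise:

```python
import zlib

NUM_SHARDS = 4  # Illustrative; sized to spread 2,000 writes/sec.


def shard_for_user(user_id, num_shards=NUM_SHARDS):
    """Deterministically map a user to a shard index.

    Uses a stable hash (crc32) rather than Python's per-process
    randomized hash() so the mapping survives restarts.
    """
    return zlib.crc32(str(user_id).encode('utf-8')) % num_shards


def insert_transaction(connections, user_id, created_at, seller, amount):
    """Write a transaction to the shard that owns this user's data."""
    conn = connections[shard_for_user(user_id)]
    conn.execute(
        'INSERT INTO transactions (created_at, seller, amount, user_id) '
        'VALUES (?, ?, ?, ?)',
        (created_at, seller, amount, user_id))
```

Keeping each user's rows on a single shard means a user's monthly spending aggregates never require cross-shard joins.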
