
Commit 1957a8c

add information about reducing data fetches
1 parent d52fc04 commit 1957a8c

File tree

1 file changed: +16 -0 lines changed


README.md

Lines changed: 16 additions & 0 deletions
@@ -74,6 +74,22 @@ const result = await worker.db.exec(`select * from table where id = ?`, [123]);

## Debugging data fetching

If your query is fetching a lot of data and you're not sure why, try this:

1. Look at the output of `explain query plan select ......`

   - `SCAN TABLE t1` means the table t1 will have to be downloaded pretty much fully
   - `SEARCH TABLE t1 USING INDEX i1 (a=?)` means direct index lookups to find a row, then table lookups by rowid
   - `SEARCH TABLE t1 USING COVERING INDEX i1 (a=?)` means a direct index lookup _without_ a table lookup. This is the fastest.

   You want every column in your WHERE clause that significantly reduces the number of results to be part of an index, with the columns that reduce the result count the most coming first.

   Another useful technique is to create an index containing exactly the columns you filter by followed by the columns you select, which SQLite can read as a COVERING INDEX sequentially (no random access at all!). For example, `create index i1 on tbl (filteredby1, filteredby2, selected1, selected2, selected3)` is perfect for a query that filters by `filteredby1` and `filteredby2` and selects only the three columns at the end of the index. A sketch of checking the query plan from the worker follows this list.

2. You can look at the `dbstat` virtual table to find out exactly what is stored in the pages SQLite is reading. For example, if you see `[xhr of size 1 KiB @ 1484048 KiB]` in your logs, that means it's reading page 1484048. You can get the full log of read pages with `worker.getResetAccessedPages()` (see the second sketch below). Check the content of a page with `select * from dbstat where pageno = 1484048`. Do this in an SQLite3 shell, not in the browser, because the `dbstat` vtable reads the whole database.
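For step 1, here's a minimal sketch of what checking the plan looks like through the worker from the setup example above. It assumes `explain query plan` can be run through `worker.db.exec` like any other query; the table `tbl`, its columns, and the bind values are only the illustrative names from the index example.

```js
// Sketch: inspect the query plan before running an expensive query.
// `worker` is the object from the setup example; `tbl`, its columns and the
// bind values are only illustrative.
const plan = await worker.db.exec(
  `explain query plan
     select selected1, selected2, selected3
     from tbl
     where filteredby1 = ? and filteredby2 = ?`,
  ["some value", 123]
);
console.log(plan);
// You want to see something like
//   SEARCH TABLE tbl USING COVERING INDEX i1 (filteredby1=? AND filteredby2=?)
// rather than a plain SCAN TABLE tbl, which downloads most of the table.

// The matching covering index is created once when building the database file,
// e.g. in a local sqlite3 shell before uploading it:
//   create index i1 on tbl (filteredby1, filteredby2, selected1, selected2, selected3);
```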
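And for step 2, a sketch of grabbing the page log from the worker and then checking a page in a local shell. It assumes `worker.getResetAccessedPages()` both returns the pages read so far and resets the log (as the name suggests); the query in the middle is just an example.

```js
// Sketch: find out which database pages one query actually fetched.
// Assumes worker.getResetAccessedPages() returns the pages accessed so far
// and resets the log (as the name suggests).
await worker.getResetAccessedPages(); // discard pages logged before this point

await worker.db.exec(
  `select selected1 from tbl where filteredby1 = ?`, // example query
  ["some value"]
);

const pages = await worker.getResetAccessedPages(); // pages read by the query above
console.log(pages);

// Then inspect what one of those pages contains, in a local sqlite3 shell
// (not in the browser, because the dbstat vtable reads the whole database):
//   select * from dbstat where pageno = 1484048;
```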
## Is this production ready?

It works fine, but I'm not making any effort to support older or weird browsers. If the browser doesn't support WebAssembly and Web Workers, this won't work. There's also no cache eviction, so the more data is fetched, the more RAM it will use. Most of the complicated work is done by SQLite, which is well tested, but the virtual file system part doesn't have any tests.
