# Batch Insert Records With createMany

As part of its suite of CRUD functionality, [Prisma has a `createMany`
function](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#createmany)
that allows you to insert many records at once into your target database.
This performs one large insert statement, which is generally faster than an
equivalent series of individual insert statements.

```javascript
const createResult = await prisma.books.createMany({
  data: [
    { isbn: '123', title: 'The Goldfinch' },
    { isbn: '345', title: 'Piranesi' },
    { isbn: '987', title: 'The Fifth Season' },
  ],
  skipDuplicates: true
})
```

With the `skipDuplicates` option, any inserts that would result in a duplicate
record (`isbn` is my unique key in this example) will be skipped.

The result of the query will include a `count` key to let you know how many
records were actually inserted.
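
For example, with the insert above (assuming none of those ISBNs exist yet):

```javascript
// createMany resolves to an object with a single `count` field
console.log(createResult.count) // => 3; rerun it and skipDuplicates drops this to 0
```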

If I'm bulk inserting a _ton_ of data, I like to chunk it up so that I'm not
creating queries that are too big. For a recent script, I found that `1000`
records per batch was a good chunk size.

```javascript
import chunk from 'lodash/chunk'

const chunkedBatchInsert = async (records) => {
  // insert in batches of 1000 so that no single query gets too big
  for (const batch of chunk(records, 1000)) {
    await prisma.books.createMany({
      data: batch,
      skipDuplicates: true
    })
  }
}
```
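
One thing this chunked version drops is the overall insert count. If you still
want it, a small variation (`chunkedBatchInsertWithCount` is just an
illustrative name) can accumulate the `count` from each batch:

```javascript
const chunkedBatchInsertWithCount = async (records) => {
  let inserted = 0

  for (const batch of chunk(records, 1000)) {
    // each createMany call reports how many rows it actually wrote
    const { count } = await prisma.books.createMany({
      data: batch,
      skipDuplicates: true
    })
    inserted += count
  }

  // total records inserted across all batches
  return inserted
}
```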