Options Objects
The options object modifies Massive's internal SQL statement generation, either by applying additional clauses to the query itself or by enabling different result processing behaviors.
Options can be passed as the final argument to most query and persistence functions; the exceptions are count and countDoc, which take no useful options.
// a stream returning the name, start date, lower-cased
// name, and pass + failure total for active tests
// 21-30, omitting rows from any descendant tables
const stream = await db.tests.find({
  is_active: true
}, {
  fields: ['name', 'started_at'],
  exprs: {
    lowername: 'lower(name)',
    total: 'passes + failures'
  },
  offset: 20,
  limit: 10,
  only: true,
  stream: true
});
Universal Options
These are generally applicable to all query types, although stream is principally useful with query functions.
Key | Type | Description |
---|---|---|
only | Boolean | true to prevent the query from affecting or querying descendant tables. |
build | Boolean | true to return the query text and parameters without executing anything. |
document | Boolean | true to invoke document table handling. |
single | Boolean | true to return only the first result as an object (cf. findOne). Rarely useful externally. |
stream | Boolean | true to return results as a stream instead of an array. Streamed results cannot be decomposed. |
decompose | Object | A schema for transforming the results into an object graph. Not compatible with stream. |
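The build option can be handy for debugging the generated SQL. A minimal sketch, reusing the tests table from the example above; the exact shape of the returned object (statement text plus parameter values) is an assumption to verify in your own environment.
// a sketch: ask for the generated statement instead of executing it
const query = await db.tests.find({
  is_active: true
}, {
  build: true
});

console.log(query); // inspect the generated SQL and parameters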
Streaming Results
Streaming can improve performance with large result sets which would otherwise have to be collected in memory before becoming available to your program all at once. With stream, you get something back right away (which can also be a big deal for slow queries!), but the connection remains open and in use until the stream closes.
Some constraints apply when using stream with script files: named parameters are not supported, and some operations may require explicit type casts ($1::INT).
const stream = await db.tests.find(
  {priority: 'low'},
  {stream: true}
);

const tests = [];

stream.on('readable', () => {
  let row;

  // read() returns null once the internal buffer is drained, so guard
  // against pushing a trailing null into the results
  while ((row = stream.read()) !== null) {
    tests.push(row);
  }
});

stream.on('end', () => {
  // do something with tests here
});
Data Retrieval Options
Some of these options, such as fields and exprs, are generally applicable to all find and search methods; others, such as offset and pageLength, are only useful when multiple records will be returned.
Key | Type | Description |
---|---|---|
distinct | Boolean | true to remove duplicate records from the resultset. Mostly useful in conjunction with fields and exprs. |
fields | Array or Object | Name columns to include in the resultset, or map aliases to names. Values will be quoted; use exprs to invoke functions or operate on columns. See Fields and Exprs below. |
exprs | Object | Map aliases to raw SQL expressions to include in the resultset. Do not supply user input in exprs unless you understand the risk of SQL injection! |
order | Array | An array of order objects. |
offset | Number | Skip this many records. |
limit | Number | Only retrieve this many records. |
pageLength | Number | Number of results to return with keyset pagination. Requires order . |
lock | Object | Prevent records from being modified outside the current transaction. See the Locking section below. |
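distinct pairs naturally with a restricted fields list. A brief sketch, again assuming the example tests table:
// a sketch: retrieve unique test names only
const names = await db.tests.find({}, {
  distinct: true,
  fields: ['name']
});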
Fields and Exprs
These options modify and restrict the SELECT list, or the list of columns and expressions in a data retrieval query. Both fields and exprs are incompatible with the use of join. Use SQL or standalone resultset decomposition in such cases.
The simplest form of fields is an array of the column names to be included. This version can also be used with persistence queries to restrict the returned results, which can be useful when column security settings render some fields off-limits.
fields: ['id', 'name', 'description']
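The same restriction applies to the rows returned from a persistence call. A sketch, assuming the tests table from earlier examples:
// a sketch: limit the columns returned by an update
const updated = await db.tests.update({
  id: 1
}, {
  is_active: false
}, {
  fields: ['id', 'is_active']
});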
To rename properties in the result set, use the object form. Map desired aliases to the original field names. If there are fields you want to include without renaming, you can supply a truthy value for * instead of repeating identical alias-name pairs.
fields: {
  '*': true, // include all unspecified fields as well
  new_name: 'name',
  new_description: 'description'
}
exprs is always an object, mapping aliases to raw SQL expressions such as math, string concatenation, or function calls. Values in exprs are interpolated, unescaped, into the emitted SQL query. Do not pass user input to Massive directly through exprs or similar options, or you run the risk of SQL injection attacks.
exprs: {
  hypotenuse: 'sqrt(pow(a, 2) + pow(b, 2))'
}
If exprs is specified without fields, only the expressions are evaluated. To return both ordinary column values and expressions, supply both fields and exprs.
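For example, to return both ordinary columns and a computed value in the same resultset (the column names here are illustrative):
fields: ['id', 'name'],
exprs: {
  hypotenuse: 'sqrt(pow(a, 2) + pow(b, 2))'
}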
Ordering Results
The order option sets an array of order objects which are used to build an ORDER BY clause. Each order object must contain a field or an expr; all other properties are optional.
Key | Type | Description |
---|---|---|
field | String | The name of the column being sorted on. Casts and JSON traversal are supported as in criteria objects. |
expr | String | A raw SQL expression. Will not be escaped or quoted and is potentially vulnerable to SQL injection attacks. |
direction | String | The sort direction, ASC (default) or DESC. |
nulls | String | Sort null values FIRST or LAST. |
last | Any | If using keyset pagination, the final value for this field on the previous page. |
direction and nulls are case-insensitive.
// active tests, ordered first by most recent start date
// (nulls first), then by pass + failure total,
// deferring to the Postgres default null positioning
// (last when ascending)
const tests = await db.tests.find({
  is_active: true
}, {
  order: [{
    field: 'started_at',
    direction: 'desc',
    nulls: 'first'
  }, {
    expr: 'passes + failures',
    direction: 'asc'
  }]
});
Keyset Pagination
When query results are meant to be displayed to a user, it's often useful to retrieve and display them one page at a time. This is easily accomplished by setting limit to the page length and offset to the page length times the current page (counting from zero). However, as result sets grow larger, this method starts to perform poorly, since the first n rows must be retrieved and discarded each time.
Keyset pagination offers a trade: consistent performance, but you don't know how many pages there are and can't reliably sort by columns containing null values. It does require a slightly different user interface metaphor which avoids numbering and jumping to arbitrary pages, but the performance gains can be worth it. For a detailed technical breakdown, see Markus Winand's post on the topic.
Although enabling keyset pagination is a matter of a single field, it does require some setup:
- You must not specify offset or limit. Massive will return a rejected promise if you do.
- You must have an order array. Massive will return a rejected promise if you do not. For additional performance benefits, make sure you have an index covering any columns you are filtering and sorting, if practical.
- The order array must guarantee deterministic ordering of records; the easiest way to ensure this is to sort on the primary key or a unique column last. Failure may result in apparent duplication or omission as records change position between pages.
- The order array must use consistent directionality: if one attribute is being sorted in descending order, all attributes must be sorted in descending order. Inconsistent directionality means inconsistent results.
Once these prerequisites are satisfied, set the pageLength option to the number of records you want back per page.
To retrieve subsequent pages, inspect the last record on the current page. When you make the query for the next page, set last on each element of the order array to the value of the corresponding attribute in the final record of the previous page.
const next25 = await db.issues.find({}, {
  order: [{
    field: 'test_id',
    last: 1500
  }, {
    field: 'issue_id',
    last: 10256
  }],
  pageLength: 25
});
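Deriving the last values for the following page is mechanical. A sketch, continuing from the next25 result above:
// a sketch: take the last values from the final record of the current page
const lastRow = next25[next25.length - 1];

const following25 = await db.issues.find({}, {
  order: [{
    field: 'test_id',
    last: lastRow.test_id
  }, {
    field: 'issue_id',
    last: lastRow.issue_id
  }],
  pageLength: 25
});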
Locking
In transactional contexts, it is sometimes necessary to prevent other statements or transactions from modifying or deleting rows until the locking transaction finishes its processing and either commits or rolls back. This is accomplished by using the lock option with a retrieval method.
The lock option is superfluous outside a transaction, since any locks taken out by individual statements are released as soon as the statement completes.
Key | Type | Description |
---|---|---|
strength | String | One of UPDATE, SHARE, NO KEY UPDATE, or KEY SHARE. |
lockedRows | String | Optional; NOWAIT raises an error if a desired row cannot be locked immediately, while SKIP LOCKED ignores such rows. The default behavior is to wait for the lock. |
- UPDATE locks prevent other transactions from locking, modifying, or deleting locked rows until this transaction commits or rolls back.
- SHARE locks prevent other transactions from modifying or deleting locked rows, but allow other SHARE and KEY SHARE locks to be taken out.
- NO KEY UPDATE locks are a special case of UPDATE locks which allow KEY SHARE locks to be taken out.
- KEY SHARE locks are a special case of SHARE locks which allow both shared locks and NO KEY UPDATE locks to be taken out.
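A sketch of taking row locks inside a transaction, assuming the tests table from earlier examples:
// a sketch: lock active tests for the duration of the transaction;
// concurrently running transactions skip rows that are already locked
await db.withTransaction(async tx => {
  const tests = await tx.tests.find({is_active: true}, {
    lock: {
      strength: 'UPDATE',
      lockedRows: 'SKIP LOCKED'
    }
  });

  // modify and persist the locked rows here; the locks are released
  // when the transaction commits or rolls back
});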
Insert Options
Key | Type | Description |
---|---|---|
deepInsert | Boolean | true to enable deep insert. |
onConflict | Object | Upsert or ignore if a conflict is encountered. See the onConflict section below. |
onConflict
Inserts normally fail if adding a new record would violate a unique or exclusion constraint (e.g. a primary key). Starting with PostgreSQL 9.5, one of two fallback behaviors can be defined should such a conflict occur: do nothing, or change the existing record based on the new values, popularly called an "upsert".
By default, onConflict with an action of update will overwrite all fields of an existing record. This may be undesirable in some circumstances, particularly when upserting multiple records. To prevent the upsert from overwriting a particular field or fields, supply those field names to exclude.
Key | Type | Description |
---|---|---|
target | Array or String | A field or fields subject to a unique or exclusion constraint. |
targetExpr | String | An expression matching a unique or exclusion constraint. Expressions are not escaped! Do not pass user input directly! |
action | String | ignore or update. |
exclude | Array | Prevent the named columns from being modified by an update action. |
// if the user 'me' already exists, its email is updated
// instead of a duplicate record being added; however,
// even if updated, the original joined_at value is
// preserved.
const user = await db.users.insert({
  username: 'me',
  email: 'me@me.com'
}, {
  onConflict: {
    target: 'username',
    action: 'update',
    exclude: ['joined_at']
  }
});
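The ignore action is the other fallback. A sketch using the same users table: if 'me' already exists, the insert is skipped instead of raising a unique violation.
// a sketch: do nothing if a conflicting record already exists
const maybeUser = await db.users.insert({
  username: 'me',
  email: 'me@me.com'
}, {
  onConflict: {
    target: 'username',
    action: 'ignore'
  }
});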
UpdateDoc Options
Key | Type | Description |
---|---|---|
body | String | Target a field other than the default "body". |
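A sketch of targeting another json or jsonb column with updateDoc; the reports table and its metadata column here are hypothetical.
// a sketch: merge changes into a column named metadata instead of the
// default body column (table and column names are illustrative)
const report = await db.reports.updateDoc(1, {
  reviewed: true
}, {
  body: 'metadata'
});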
Tasks and Transactions
db.withConnection may take a single option:
Key | Type | Description |
---|---|---|
tag | String | A tag which will be visible in |
db.withTransaction adds a second:
Key | Type | Description |
---|---|---|
mode | TransactionMode | Set the isolation level, readonly mode, and/or deferrable mode. |
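TransactionMode comes from pg-promise, which Massive uses internally. A sketch, assuming pg-promise can be required directly in your project:
// a sketch: run the transaction as serializable and read-only.
// TransactionMode and isolationLevel come from pg-promise's txMode
// namespace; requiring pg-promise here is an assumption about your
// dependency setup.
const {TransactionMode, isolationLevel} = require('pg-promise').txMode;

const mode = new TransactionMode({
  tiLevel: isolationLevel.serializable,
  readOnly: true
});

await db.withTransaction(async tx => {
  // statements issued through tx run inside the configured transaction
  const tests = await tx.tests.find({is_active: true});
}, {mode});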