Options Objects

The options object modifies query behavior, either by applying additional clauses to the query itself or by changing how Massive handles results.

Options can be passed to most query and persistence functions as the final argument.

db.tests.find({
  is_active: true
}, {
  fields: ['name', 'started_at'],
  exprs: {
    lowername: 'lower(name)',
    total: 'passes + failures'
  },
  offset: 20,
  limit: 10,
  only: true,
  stream: true
}).then(stream => {
  // a stream returning the name, start date, lower-
  // cased name, and pass + failure total for active
  // tests 21-30, omitting rows from any descendant
  // tables
});

Filtering and Shaping Results

These options affect data retrieval queries. Some, such as fields and exprs, are generally applicable to all such calls; others, such as limit and pageLength, are only useful when multiple records will be returned.

The fields option may be used with persistence queries to restrict the returned results. This can be useful in the case of column security where the Postgres user does not have permissions on the full table.
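
As a sketch of that usage (the users table and its columns here are hypothetical), an update can restrict what comes back:

db.users.update({
  id: 123
}, {
  last_login: new Date()
}, {
  fields: ['id', 'username']
}).then(users => {
  // the changed rows come back with only id and username,
  // regardless of what other columns the table defines
});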

  • fields: Specify an array of column names to include in the resultset. The names will be quoted; use exprs to invoke functions or operate on columns.
  • exprs: Specify a map of aliases to expressions to include in the resultset. Do not send user input directly into exprs unless you understand the risk of SQL injection!
  • limit: Set the number of rows to take.
  • offset: Set the number of rows to skip.
  • order: An array of order objects.
  • pageLength: Number of results to return with keyset pagination. Requires order.

N.B. The exprs option and the corresponding expr key in order objects interpolate values into the emitted SQL. Take care with raw strings, and ensure that user input is never passed directly in through these options, or you risk opening yourself up to SQL injection attacks.

Ordering Results

The order option sets an array of order objects which are used to build a SQL ORDER BY clause. An order object must contain a field or an expr; all other properties are optional.

  • field: The name of the column being sorted on. May be a JSON path if sorting by an element nested in a JSON field or document table body.
  • expr: A raw SQL expression. Will not be escaped or quoted and is potentially vulnerable to SQL injection.
  • direction: The sort direction, ASC (default) or DESC.
  • nulls: Sort null field values first or last.
  • type: Define a cast type for values. Useful with JSON fields.
  • last: If using keyset pagination, this attribute's value from the final record on the previous page.

db.tests.find({
  is_active: true
}, {
  order: [{
    field: 'started_at',
    direction: 'desc',
    nulls: 'first'
  }, {
    expr: 'passes + failures',
    direction: 'asc'
  }]
}).then(tests => {
  // all tests, ordered first by most recent start
  // date (nulls first), then by pass + failure
  // total deferring to the Postgres default null
  // positioning (last when ascending)
});
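
Ordering can also reach into JSON via the field and type keys above. A sketch, assuming a hypothetical reports table with a JSONB data column:

db.reports.find({}, {
  order: [{
    field: 'data.metrics.score',
    type: 'numeric',
    direction: 'desc'
  }]
}).then(reports => {
  // reports sorted by the nested score, cast to numeric
  // so it sorts by magnitude rather than as text
});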

Keyset Pagination

When query results are meant to be displayed to a user, it's often useful to retrieve and display them one page at a time. This is easily accomplished by setting limit to the page length and offset to the page length times the current page (counting from zero). However, as result sets grow larger, this method starts to perform poorly, since the first n rows must be retrieved and discarded each time.
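
A sketch of the limit/offset approach, assuming the tests table has an id primary key:

const pageSize = 25;
const page = 3; // zero-indexed

db.tests.find({
  is_active: true
}, {
  order: [{field: 'id'}],
  limit: pageSize,
  offset: pageSize * page
}).then(tests => {
  // the 25 tests on the fourth page, but Postgres still
  // had to walk past the 75 rows before them
});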

Keyset pagination offers a trade: consistent performance, but you don't know how many pages there are and can't reliably sort by columns containing null values. It does require a slightly different user interface metaphor which avoids numbering and jumping to arbitrary pages, but the performance gains can be worth it. For a detailed technical breakdown, see Markus Winand's post on the topic.

Although enabling keyset pagination is a matter of a single field, it does require some setup:

  • You may not specify offset or limit. Massive will return a rejected promise if you do.
  • You must have an order array. Massive will return a rejected promise if you do not. For additional performance benefits, make sure you have an index covering any columns you are filtering and sorting, if practical.
  • The order array must guarantee deterministic ordering of records; the easiest way to ensure this is to sort on the primary key or a unique column last. Failing to do so may result in records appearing on multiple pages or missing from the results entirely.
  • The order array must use consistent directionality: if one attribute is being sorted in descending order, all attributes must be sorted in descending order. Inconsistent directionality means inconsistent results.

Once these prerequisites are satisfied, set the pageLength option to the number of records you want back per page.
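
The first page needs nothing beyond order and pageLength; this sketch uses the issues table from the example below:

db.issues.find(
  {},
  {
    order: [{
      field: 'test_id'
    }, {
      field: 'issue_id'
    }],
    pageLength: 25
  }
).then(first25Results => ...);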

To retrieve subsequent pages, inspect the last record on the current page. When you make the query for the next page, add a last key to each element of the order array containing that attribute's value for the final record.

db.issues.find(
  {},
  {
    order: [{
      field: 'test_id',
      last: 1500
    }, {
      field: 'issue_id',
      last: 10256
    }],
    pageLength: 25
  }
).then(next25Results => ...);

Persisting Data

These options modify data persistence calls.

  • onConflictIgnore (insert): If the inserted data would violate a unique constraint, do nothing.
  • onConflictUpdate (insert): Specify an array of columns which, if duplicated by an incoming record, will trigger an upsert (see below).
  • deepInsert (insert): Specify true to turn on deep insert.
  • body (updateDoc): Specify in order to override the default body field affected by updateDoc.
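
A sketch of onConflictIgnore, reusing the users table from the upsert example below:

db.users.insert({
  username: 'me',
  email: 'me@me.com'
}, {
  onConflictIgnore: true
}).then(user => {
  // if the username 'me' is already taken, nothing is
  // inserted and the existing record is left untouched
});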

onConflictUpdate

onConflictUpdate requires PostgreSQL 9.5 or greater. The fields specified in the option value must match a unique or exclusion constraint (such as a primary key).

db.users.insert({
  username: 'me',
  email: 'me@me.com'
}, {
  onConflictUpdate: ['username']
}).then(user => {
  // if the user 'me' already exists, its email is
  // updated instead of a duplicate record being added
});

Table Inheritance

By default, queries against tables having descendant tables affect and/or return records from those descendant tables. Use the only option to prevent this; it has no effect if the target table has no descendants.

  • only (any query): Set to true to restrict the query to the table specified, if any others inherit from it.
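
Since only applies to any query type, a delete can be limited to the parent table alone. A sketch with a hypothetical cities table that has descendants:

db.cities.destroy({
  population: 0
}, {
  only: true
}).then(cities => {
  // matching rows are removed from cities itself, but
  // rows in its descendant tables are left in place
});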

Results Processing

Results processing options are generally applicable to all query types, although stream is principally useful with query functions.

  • build: Set to true to return the query text and parameters without executing anything.
  • document: Set to true to invoke document table handling.
  • single: Set to true to return the first result as an object instead of a results array.
  • stream: Set to true to return results as a stream instead of an array. Streamed results cannot be decomposed.
  • decompose: Provide a schema to transform the results into an object graph. Not compatible with stream.
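
For example, single collapses the usual results array into its first element (using the tests table from earlier examples):

db.tests.find({
  is_active: true
}, {
  single: true
}).then(test => {
  // test is the first matching record as a plain object
  // rather than an array of results
});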

Streaming Results

Using the stream option can improve performance with large result sets which would otherwise have to be collected in memory before becoming available to your program all at once. With streaming, you get something back right away (which can also be a big deal for slow queries!), but the connection remains open and in use until the stream closes.

Some constraints apply when using stream with script files: named parameters are not supported, and some operations may require explicit type casts ($1::INT).

db.tests.find(
  {priority: 'low'},
  {stream: true}
).then(stream => {
  const tests = [];

  stream.on('readable', () => {
    tests.push(stream.read());
  });

  stream.on('end', () => {
    // do something with tests here
  });
});

Tasks and Transactions

db.withConnection may take a single option:

  • tag: A tag which will be visible in pg-monitor.

db.withTransaction likewise takes a single option:

  • mode: A TransactionMode object defining a new isolation level, readonly mode, and/or deferrable mode.
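
A sketch of setting the mode; this assumes pg-promise's txMode namespace and its object-style TransactionMode constructor, since Massive uses pg-promise as its driver:

const {TransactionMode, isolationLevel} = require('pg-promise').txMode;

db.withTransaction(tx => {
  // every query in here runs in a read-only serializable
  // transaction
  return tx.tests.find({is_active: true});
}, {
  mode: new TransactionMode({
    tiLevel: isolationLevel.serializable,
    readOnly: true
  })
});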

In a transaction context, any find query can lock the rows it returns for update or for share by adding the appropriate key to the options object:

  • forShare: Set to true to acquire a shared lock on the returned rows: other transactions must wait until this transaction is committed or rolled back before modifying or locking the rows, but can read them without waiting.
  • forUpdate: Set to true to acquire an exclusive lock on the returned rows: other transactions must wait until this transaction is committed or rolled back before reading the rows, in addition to modifying or locking them as with a shared lock.
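
A sketch of locking rows for update inside a transaction, using the tests table from earlier examples:

db.withTransaction(async tx => {
  // no other transaction can modify or lock these rows
  // until this transaction commits or rolls back
  await tx.tests.find({is_active: true}, {forUpdate: true});

  return tx.tests.update({is_active: true}, {is_active: false});
});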