Dataset Functions

These functions process datasets into new datasets, create datasets from scratch in various ways, and extract metadata from datasets.

view()

While this expression function processes a dataset into another dataset, it uses Jython scripting under the hood, and is therefore described with the Scripting functions.

toTransient()

Produces a dataset that serializes just the column names and types, discarding any rows. Applies both to native serialization and to the XML serialization that is used in Vision Clients and in the Designer for Vision resources.

toTransient(dataset) returns Dataset in an expression.

system.dataset.TransientDataset(...) returns Dataset in a script.

dataset Any instance of Ignition's Dataset interface.

The scripting form is actually the class object itself, and functions as a normal Jython constructor. The overloads are the same as Ignition’s BasicDataset constructors.
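
For example, a Vision binding (hypothetical property path) could ensure only the column names and types of a large dataset are serialized to the client:

toTransient({Root Container.Table.data})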

nonTransient()

Produces a dataset from another dataset in an expression using Ignition’s single-argument BasicDataset constructor. Wrap it around an expression function that normally returns a TransientDataset instance to ensure the content is serializable.

nonTransient(dataset) returns Dataset in an expression.

dataset Any instance of Ignition's Dataset interface.

There is no scripting form, as scripts can directly access BasicDataset constructors or use the system.dataset.* simplified forms.
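
For example, wrapping the recorder() function described below (hypothetical tag reference) keeps its accumulated rows serializable:

nonTransient(recorder(1000, 60, 'temperature', {[default]Furnace/Temp}))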

recorder()

Produces a transient dataset containing rows of sample values, assembled at regular intervals.

recorder(poll, limit, dataset OR colName, colValue...) returns Dataset

poll Milliseconds between samples of the given values.
limit Number of rows to accumulate at the given pace. After this many rows are present, the oldest row is discarded to accommodate each new sample.
dataset Optional nested dataset supplying column names and values to record. If it has precisely two columns, named "name" and "value" respectively, result column names and values are taken from its rows. Otherwise, its column names and first-row values are used.
colName Column name to use for the following value. Expected to be constant.
colValue Value to be sampled and stored in the row accumulator. Must be paired with the colName argument.

Usage Notes

Any number of single datasets and/or name/value pairs may be strung together to produce the output rows.

Available in all scopes, but will not function in scopes that do not provide a re-triggerable InteractionListener. Expression tags should be set to “Event Driven” execution.

The accumulated recording is held in a state variable within this function. Editing the binding replaces the function, discarding this state and therefore the recording. Supply a custom property or tag reference to the poll argument if you wish to stop and start recordings without losing any data. Similarly, use nested dataset(s) with your values if you wish to add or remove columns on the fly.

Execution pace is limited by the platform, and may not achieve the precise interval requested between rows. Also, when changing the poll rate, the next sample will occur at the end of the new rate’s delay.
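
A sketch of a typical binding, using a custom property for the poll rate as suggested above, with two hypothetical tag references as the sampled values:

recorder({view.custom.pollRate}, 100, 'temperature', {[default]Furnace/Temp}, 'pressure', {[default]Furnace/Press})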

alias()

alias(dataset, columnPrefix) returns Dataset

dataset Any instance of Ignition's Dataset interface.
columnPrefix Any string, but typically an identifier ending in an underscore or with a dot delimiter (full stop) appended.

Returns the same dataset content and column types, but with the given prefix prepended to each column name. Ideal for use with the various JOIN operations below to avoid column name clashes.
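
For example, if the source dataset (hypothetical property reference) has columns id and name, the result has columns o.id and o.name:

alias({view.custom.orders}, 'o.')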

columnsOf()

Given a dataset, returns an ordered map of its column names to their column type class names (as strings). The latter are shortened to standard abbreviations where applicable.

columnsOf(dataset) returns Map

dataset Any instance of Ignition's Dataset interface.

Note that the ordering is lost when assigned to a Perspective property. Also, dataset column names may not be acceptable as object keys in Perspective maps. In either case, nest this function as the source for an outer operation, or pass it through asPairs() before property assignment.
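
For example, nesting it in asPairs() as suggested above keeps the result usable after assignment to a Perspective property (hypothetical source):

asPairs(columnsOf({view.custom.sourceData}))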

crossJoin()

Produces a dataset with all of the columns of the left source dataset and all of the columns of the right source dataset. Every row in the left source dataset is replicated with the rows of the right source dataset, in that order.

crossJoin(datasetLeft, datasetRight) returns Dataset

datasetLeft Any instance of Ignition's Dataset interface.
datasetRight Any instance of Ignition's Dataset interface.

Be aware that this operation can be a disruptive memory hog when given large datasets.

Typically used with alias() to avoid column name clashes, something like so:

crossJoin(alias(leftDS, 'left.'), alias(rightDS, 'right.'))

innerJoin()

Produces a dataset with all of the columns of the left source dataset and all of the columns of the right source dataset. Rows in the left source dataset are replicated with the rows of the right source dataset where their corresponding key values match, and in that order.

Rows in either dataset that have no match in the other dataset are omitted from the result.

innerJoin(datasetLeft, datasetRight, leftKeyExpr, rightKeyExpr [, ...]) returns Dataset

datasetLeft Any instance of Ignition's Dataset interface.
datasetRight Any instance of Ignition's Dataset interface.
leftKeyExpr A nested expression computed while looping over the left-hand source dataset to obtain a key value for matching. In this expression, it() and idx() point at the left dataset's loop. Expression functions that need to be able to retrigger (like polling, or tag or property references) are not allowed.
rightKeyExpr A nested expression computed while looping over the right-hand source dataset to obtain a key value for matching. In this expression, it() and idx() point at the right dataset's loop. Expression functions that need to be able to retrigger (like polling, or tag or property references) are not allowed.

Multiple key expressions may be given, in pairs. Functionally, the right source dataset is processed into groups with its key or keys, and then the left source dataset is processed, picking out the corresponding groups as it goes.

Be aware that this operation can be a disruptive memory hog when given large datasets.
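
A hedged sketch joining two hypothetical datasets on a shared customer ID, using alias() to avoid name clashes; the key expressions assume it() exposes the current row for subscripting by column name:

innerJoin(
    alias({view.custom.orders}, 'o.'),
    alias({view.custom.customers}, 'c.'),
    it()['o.customerId'],
    it()['c.id']
)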

leftJoin()

Produces a dataset with all of the columns of the left source dataset and all of the columns of the right source dataset. Rows in the left source dataset are replicated with the rows of the right source dataset where their corresponding key values match, and in that order.

Rows in the left source dataset that have no match in the right source dataset are passed to the result with nulls for the right source columns.

Rows in the right source dataset that have no match in the left source dataset are omitted from the result.

leftJoin(datasetLeft, datasetRight, leftKeyExpr, rightKeyExpr [, ...]) returns Dataset

datasetLeft Any instance of Ignition's Dataset interface.
datasetRight Any instance of Ignition's Dataset interface.
leftKeyExpr A nested expression computed while looping over the left-hand source dataset to obtain a key value for matching. In this expression, it() and idx() point at the left dataset's loop. Expression functions that need to be able to retrigger (like polling, or tag or property references) are not allowed.
rightKeyExpr A nested expression computed while looping over the right-hand source dataset to obtain a key value for matching. In this expression, it() and idx() point at the right dataset's loop. Expression functions that need to be able to retrigger (like polling, or tag or property references) are not allowed.

Except for the substitution of nulls when no right-hand source row matches, this function is identical to innerJoin().

selectStar()

Adds columns to a dataset with row-by-row computation of the contents of the new columns.

selectStar(dataset, columnInfo, expr [, ...]) returns Dataset

dataset Any instance of Ignition's Dataset interface.
columnInfo A sample dataset, an ordered map, or a list of string pairs that define the new column names and datatypes that will be provided. If a dataset, its column names and types are extracted. Otherwise, the map or list must provide pairs of name and type. The number of new columns declared here must match the number of nested expressions following this argument.
expr A nested expression yielding any object compatible with the target column type, typically using it() and/or idx() to operate upon the source data element-by-element. Expression functions that need to be able to retrigger (like polling, or tag or property references) are not allowed.

Multiple expressions are required when columnInfo declares multiple new column names and types.

This performs the equivalent of the SQL statement:

SELECT *,
    (expr0)::colType0 AS colName0,
    (expr1)::colType1 AS colName1,
    ...
FROM dataset
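
A hedged expression sketch of the same idea, adding one computed column to a hypothetical source dataset; it assumes the toolkit's asMap() helper is available to build the name/type map, and that it() exposes the current row for subscripting by column name:

selectStar(
    {view.custom.rawData},
    asMap('total', 'Double'),
    it()['price'] * it()['qty']
)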

unionAll()

Assembles an output dataset from scratch, using the given column names and types (internally via a DatasetBuilder), performing a UNION ALL with each row source.

unionAll(columnInfo, rowSource [, ...]) returns Dataset

columnInfo A sample dataset, an ordered map, or a list of string pairs that define the new column names and datatypes that will be provided. If a dataset, its column names and types are extracted. Otherwise, the map or list must provide pairs of name and type. The number of new columns declared here must match the values per row in the given row sources.
rowSource A list containing nested lists (each one a row of values), nested datasets (whose rows are added to the output without regard to column names), or nested mapping objects (whose row values are extracted by column name).

Multiple row sources can contain rows in any supported format.
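
A hedged sketch that builds a two-column dataset from literal rows, assuming the toolkit's asMap() and asList() helpers are available for the columnInfo map and the nested row lists:

unionAll(
    asMap('machine', 'String', 'rate', 'Double'),
    asList(
        asList('Press 1', 42.5),
        asList('Press 2', 38.0)
    )
)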

parsePath()

Given a string path, splits it into its components for assembly into a historical path. Always returns columns for histprov, drv, prov, tag, alm, and prop, leaving cells empty as appropriate.

The function applies the TagPathParser first; if either a source or a property is identified, it populates the tag and prop columns appropriately, then either prov or both histprov and drv.

Otherwise, QualifiedPathUtils is used to properly parse a QualifiedPath string. If path component IDs other than those above are included, the result dataset will have extra columns.

When multiple arguments are supplied, an output row is produced for each path string.

parsePath(path [, ...]) returns Dataset

path Any string that can be parsed as a tag path or a qualified path.
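
For example, each of these hypothetical path strings yields one output row, with its provider and tag components split into the appropriate columns:

parsePath('[MyHistorian/my-gateway:default]Folder/SomeTag', '[default]Folder/SomeTag')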

orderBy()

This is the scripting companion to the same-named expression function, described in Iteration. Critical differences from the expression function:

  • Accepts only datasets, and returns a dataset.

  • keys may be string constants or integers.

  • keys may be lambdas or other callables that accept a single-row dataset and return a comparable value or composite comparable value.

system.dataset.orderBy(source, key...) returns Dataset in a script.

source Any instance of Ignition's Dataset interface.
key A string column name, an integer column index, or a callable that accepts a single-row dataset and returns a comparable value. Note that system.dataset.descending(), system.dataset.naturalOrder(), and system.dataset.naturalCasedOrder() all return compatible Java callables.

Multiple keys may be defined, and will be used in order when comparing rows of the source dataset.

When only one key is supplied, and it is a string or integer, this is functionally equivalent to Ignition’s native system.dataset.sort().

Unlike system.dataset.sort(), this function can use multiple keys, and within those multiple keys, descending and natural ordering modifiers may be applied separately per key.

When using only string or integer keys, or in combination with the modifiers below (no user-defined lambdas or callables), these functions perform all ordering purely within Java for speed.

descending()

This is the scripting companion to the same-named expression function, described in Iteration. Usable as a key modifier within system.dataset.orderBy() above. Critical difference from the expression function:

  • Accepts an indirect key as described above for orderBy, not an actual value from the dataset content.

Example:

newDS = system.dataset.orderBy(sourceDS, system.dataset.descending('col1'), 'col2')

The above sorts descending on 'col1', then ascending on 'col2'.

system.dataset.descending(key) returns Callable in a script.

key A string column name, an integer column index, or a callable that accepts a single-row dataset and returns a comparable value. Note that system.dataset.naturalOrder() and system.dataset.naturalCasedOrder() both return callables of this type.

naturalOrder()

Accepts a string, integer, or callable key, and compares the resulting values using “natural” ordering, aka “Alphanumeric” ordering. Numbers embedded in strings are broken out and compared numerically for those parts. This function uses case-insensitive comparisons of the non-numeric parts. The comparable obtained from the key will be stringified if not already a string.

system.dataset.naturalOrder(key) returns Callable in a script.

key A string column name, an integer column index, or a callable that accepts a single-row dataset and returns a comparable value.

naturalCasedOrder()

Accepts a string, integer, or callable key, and compares the resulting values using “natural” ordering, aka “Alphanumeric” ordering. Numbers embedded in strings are broken out and compared numerically for those parts. This function uses case-sensitive comparisons of the non-numeric parts. The comparable obtained from the key will be stringified if not already a string.

system.dataset.naturalCasedOrder(key) returns Callable in a script.

key A string column name, an integer column index, or a callable that accepts a single-row dataset and returns a comparable value.