The Time Series Database Cache module supplies dataset manipulation and database query functions that are optimized for use with large and/or high-resolution database tables containing time-stamped data.
As of version 1.9 of the NoteChart Module, its EasyNoteChart component automatically uses these caches for DB pens and corresponding histograms, unless explicitly disabled in the component's properties. The timeSeriesCache() expression function remains usable with Ignition's Classic Chart component, the regular NoteChart component, the Notes datasets in either NoteChart component, or any other component that accepts time-series datasets.
Requests for data are organized by datasource, table, timestamp column, and optional WHERE clause into individual caches, and the behavior of each combination can be customized through the gateway web interface. The defaults are suitable for light- to medium-duty databases. When a request is first received, any data still present from a previous request that fits the given criteria is immediately returned from the local cache. The fraction of the request that wasn't immediately satisfied is passed on to the background query engine in the gateway.
For a given cache combination, requests for various time spans and value columns (via expression function, script function, or compatible module) are combined from all clients. The requests are then sorted and divided by time span into large bulk historical requests and/or small realtime requests, and queued to separate execution managers. The default realtime span is five seconds before to ten seconds after now(). Bulk requests are further subdivided by request priority, though less important columns will be included in more important time spans. Warning: due to the design of the split between bulk and realtime spans, rows with timestamps beyond the realtime window will not be returned.
Scripted requests must be repeated at regular intervals at least until all missing rows arrive. Continuing regular requests after that produces no new database activity but holds the corresponding data in the cache. The timeSeriesCache() expression function automatically continues requests to maintain its data as long as its containing window is open. Cached data is discarded when it hasn't been requested for several minutes, or when the last requestor releases its handle.
Bulk queries are LIMITed to a configurable chunk size per query and the arriving chunks are delivered via push notifications from the gateway to all interested clients. Queries for the realtime window are also delivered by push notifications. Timing for both bulk and realtime queries is configurable as well.
Cache data transferred from the gateway to a Vision client is tracked in detail so that overlapping requests from that client do not cause the same data to be sent repeatedly. Each Vision client maintains its own cache with its own timeouts, so a client may still have data in its cache after the gateway has discarded it.
Given multiple datasets of time series data, return a dataset with rows from the first dataset that do not exist in any of the other datasets.
Combine multiple datasets of time series data into a single dataset. Duplicate rows are discarded and the final dataset contains all of the unique timestamps from all of the supplied datasets. The final column list is taken from the first dataset given. Optionally filter the rows by starting vs. ending timestamps. Optionally insert null rows where timestamps are discontinuous.
Given a single dataset, return a new dataset with the rows in reverse order.
Query a table or view for time series data within a timestamp range, immediately returning any cached data when called, and automatically updating as data is delivered from the data source. Delivered data is held in a TransientSeriesFragment, whose data is omitted during serialization so it won't take up space in your project.
Given a single dataset, return a new dataset with the same data, but in a non-serializable form.
Given two or more datasets of time series data, return the rows in the first dataset that are not in any of the other datasets. Rows must have completely identical column values to be considered duplicates.
Create a new dataset from a template dataset, using its column names and data types, but omitting the rows. An efficient alternative to system.dataset.deleteRows(ds, range(ds.rowCount)).
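For illustration, this is the idiom the function replaces; `ds` stands for any existing dataset, and building a full index list just to delete every row is wasteful on large datasets:

    # Inefficient idiom: enumerate every row index just to delete them all.
    # 'ds' is assumed to be an existing dataset (e.g., from a query or property).
    empty = system.dataset.deleteRows(ds, range(ds.rowCount))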
Combine multiple datasets of time series data into a single dataset. Duplicate rows are discarded and the final dataset contains all of the unique timestamps from all of the supplied datasets. The final column list is taken from the first dataset given. Optionally filter the rows by starting vs. ending timestamps. Optionally insert null rows where timestamps are discontinuous.
Given a single dataset, return a new dataset with the rows in reverse order.
Given a single dataset, return a new dataset with the same data, but in a non-serializable form.
Specify a table or view containing time series data that is to be cached. Optionally include a WHERE clause to limit the data in the specific cache. Returns a numeric handle for use with system.db.getSeriesCache().
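A minimal registration sketch, reusing the placeholder names from the preload example at the end of this page; 'mysource', 'mytable', and 't_stamp' stand in for your own datasource, table, and timestamp column:

    # Register (or attach to) a cache for mytable keyed on its t_stamp column.
    # The returned numeric handle is what system.db.getSeriesCache() consumes.
    handle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')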
Query a registered time series cache for data within a timestamp range, immediately returning any cached data. Missing data is retrieved from the data source in the background. Call at short intervals to obtain additional data after it arrives.
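A sketch of the polling pattern described above, assuming it is wired to a timer event that fires every second or so; the value column names and the six-hour span are placeholders:

    # Each call returns immediately with whatever rows are currently cached;
    # the gateway keeps filling in missing spans in the background.
    def pollChartData(handle):
        endts = system.date.now()
        begints = system.date.addHours(endts, -6)
        return system.db.getSeriesCache(handle, begints, endts, 'valCol1', 'valCol2')

Assigning the returned dataset to a chart's data property on each call keeps the display current as the missing rows arrive.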
Invalidate a cache's data for a specific time span for all consumers, causing a re-query in the gateway when next requested (if not immediately). This allows an application to run UPDATE queries and then have the new data appear in the corresponding cache.
Dispose of a registered time series cache. Registered caches share data for common table/view and WHERE clause combinations. Cached data is released immediately when the last handle is released.
This composite type describes a span of time based on an inclusive start time and an exclusive end time. Either or both timestamps may be null, representing "unbounded" for that direction. For example, abutting spans such as [08:00, 09:00) and [09:00, 10:00) do not overlap, because each span excludes its end time. A variety of comparison operations and composite operations are supplied as methods.
This composite type extends the DateSpan data type to include prioritization.
This composite type describes an ordered sequence of non-overlapping time spans, in the form of a list of DateSpan objects. The earliest DateSpan may have a null start timestamp, and the latest DateSpan may have a null end timestamp, each indicating "unbounded" in that direction. A variety of comparison operations, composite operations, and modification operations are supplied as methods. Prioritization is maintained.
This composite type extends the DateSpans type to include a dataset column name as a property. It is used to carry data requested, data present, and/or data missing information for a single column.
This composite type holds a collection of ColumnSpans objects. It is used to carry data requested, data present, and/or data missing information within the cache engine and result Datasets.
This composite type extends Ignition's Dataset type to include cache metadata that is likely to be useful to the end-user. In particular, the cache supplies the status of the data successfully delivered as a List of ColumnSpans objects.
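Since this section doesn't show the accessor, the sketch below uses a hypothetical getDeliveredSpans() method purely for illustration; check the data type reference for the actual property name:

    # The dataset returned by a cache query carries delivery-status metadata.
    handle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')
    endts = system.date.now()
    fragment = system.db.getSeriesCache(handle, system.date.addHours(endts, -1), endts, 'valCol1')
    # getDeliveredSpans() is a HYPOTHETICAL accessor name for the List of
    # ColumnSpans objects; the real name on SeriesFragment may differ.
    for colSpans in fragment.getDeliveredSpans():
        print colSpans  # one ColumnSpans per requested value column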
This composite type extends Ignition's built-in BasicDataset type to omit its data during serialization.
This composite type extends the SeriesFragment type to omit its data during serialization.
Caching of very high-resolution data, or allowing extremely long timespans, will consume substantial Java heap memory. Pathologically large queries that would cause DB timeouts in clients may cause Java heap allocation errors instead. Be sure to set client launch properties to allow high memory usage. Similar considerations apply to the memory allowance in the gateway.
When using caching to optimize identical realtime charts on multiple clients, consider using a Gateway Timer Event script to keep the desired timespan cached in the gateway. Client charts then open with their data already cached, even if all clients have been closed for a while. Add a project script module like the following, and call its preload() function every couple of seconds:
preloadHandle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')

def preload():
    global preloadHandle
    endts = system.date.now()
    begints = system.date.addHours(endts, -6)
    try:
        # Discard the returned dataset; the request alone keeps the span cached.
        system.db.getSeriesCache(preloadHandle, begints, endts, 'valCol1', 'valCol2', 'valCol3')
    except:
        # Deal with an expired cache handle and try again.
        preloadHandle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')
        system.db.getSeriesCache(preloadHandle, begints, endts, 'valCol1', 'valCol2', 'valCol3')