Data Processing and Instrument Data Standardization

enum class fluxEngine::DataType

A scalar data type.

This enumeration lists the supported scalar data types that may be used as input for HSI data, as well as the data types in which results may be returned.

Values:

enumerator UInt8

8-bit Unsigned Integer

enumerator UInt16

16-bit Unsigned Integer

enumerator UInt32

32-bit Unsigned Integer

enumerator UInt64

64-bit Unsigned Integer

enumerator Int8

8-bit Signed Integer

enumerator Int16

16-bit Signed Integer

enumerator Int32

32-bit Signed Integer

enumerator Int64

64-bit Signed Integer

enumerator Float32

32-bit Single Precision IEEE 754 Floating Point

enumerator Float64

64-bit Double Precision IEEE 754 Floating Point

enum class fluxEngine::ValueType

The value type of a given input.

Determines the form of the data that is supplied by the user.

Values:

enumerator Intensity

Intensities.

The data supplied by the user are raw intensities. If the model is set to process reflectances and/or absorbances, reference data must be provided before processing can occur.

enumerator Reflectance

Reflectances.

The data supplied by the user are reflectances.

struct ReferenceInfo

Information about references.

This information structure must be supplied when creating a processing context. It specifies the input value type of the processing context, as well as any references.

There are three primary ways to handle referencing of data:

  • The source in the model is set to raw intensities, and raw intensities are supplied by the user for the input data of the model while processing. In that case, any references provided will be ignored.

  • The source in the model is set to reflectances or absorbances, and the user provides reflectances for the input data of the model while processing. In that case, any references provided will be ignored.

  • The source in the model is set to reflectances or absorbances, and the user provides raw intensities for the input data of the model while processing. In that case, a white reference must be provided to automatically reference the input data, and optionally a dark reference may be provided.

When referencing input data, if only a white reference is provided, reflectances are calculated with the following formula:

reflectance = intensity / white

If a dark reference is also present, reflectances are calculated with the following formula:

reflectance = (intensity - dark) / (white - dark)

Public Members

ValueType valueType = {ValueType::Intensity}

The value type of the input data.

void const *whiteReference = {nullptr}

The white reference data.

This must be a tensor, contiguous in memory, that contains the white reference data that will be used in conjunction with the input data.

Since it is advantageous to average multiple reference measurements, this tensor has to have an additional dimension to denote a list of input frames.

  • For HSI cubes in BIP order, this means the dimensionality of this tensor has to be (N, height, width, bands).

  • For HSI cubes in BIL order, the dimensionality of the tensor has to be (N, height, bands, width).

  • For HSI cubes in BSQ order, the dimensionality of the tensor has to be (N, bands, height, width).

  • For PushBroom frames in LambdaX order, the dimensionality of the tensor has to be (N, width, bands).

  • For PushBroom frames in LambdaY order, the dimensionality of the tensor has to be (N, bands, width).

The number of averages, N, may be 1, indicating that no average is to be calculated.

std::vector<std::int64_t> whiteReferenceDimensions

The dimensions of the white reference.

This must have the correct number of entries depending on the type of processing context that is being created: when creating a processing context for HSI cubes it must contain four entries (see the documentation for ReferenceInfo::whiteReference for further details); when creating a processing context for PushBroom frames it must contain three entries.

void const *darkReference = {nullptr}

The dark reference data.

std::vector<std::int64_t> darkReferenceDimensions

The dimensions of the dark reference.
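
The following sketch shows how this structure might be filled in for a PushBroom processing context with LambdaY storage order. The dimensions, variable names, and the use of float reference data are assumptions for illustration; only the members documented above are used, and the scalar type actually required for the reference data should be checked against the full documentation.

// Hypothetical example: 10 averaged reference frames, 224 bands, 640 spatial pixels,
// stored contiguously in (N, bands, width) order as required for LambdaY frames.
std::vector<float> whiteData(10 * 224 * 640);   // filled with white reference measurements
std::vector<float> darkData(10 * 224 * 640);    // filled with dark reference measurements

fluxEngine::ReferenceInfo referenceInfo;
referenceInfo.valueType = fluxEngine::ValueType::Intensity;
referenceInfo.whiteReference = whiteData.data();
referenceInfo.whiteReferenceDimensions = {10, 224, 640};
referenceInfo.darkReference = darkData.data();
referenceInfo.darkReferenceDimensions = {10, 224, 640};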

enum class fluxEngine::HSICube_StorageOrder

Hyperspectral data cube storage order.

Hyperspectral cubes consist of three dimensions, and the storage order defines how these dimensions are mapped into linear memory.

The introductory documentation also contains a visual depiction of the various storage orders of HSI cubes.

Values:

enumerator BIP

Band Interleaved by Pixel Storage Order.

In this storage order all wavelengths of each pixel are next to each other in memory. This means that the linear memory address of an element may be calculated by the following formula (assuming the cube is contiguous in memory, see the various overloads of ProcessingContext::setSourceData() for more complicated cases):

(y * width + x) * band_count + band_index
A cube stored in this storage order can be considered a row-major tensor of order 3 indexed as (y, x, band).

enumerator BIL

Band Interleaved by Line Storage Order.

In this storage order all pixels of a line are next to each other in memory, and wavelengths are grouped by line. This means that the linear memory address of an element may be calculated by the following formula (assuming the cube is contiguous in memory, see the various overloads of ProcessingContext::setSourceData() for more complicated cases):

(y * band_count + band_index) * width + x
A cube stored in this storage order can be considered a row-major tensor of order 3 indexed as (y, band, x).

enumerator BSQ

Band Sequential Storage Order.

In this storage order all pixels of an individual band are next to each other in memory, and wavelengths are grouped by image. This means that the linear memory address of an element may be calculated by the following formula (assuming the cube is contiguous in memory, see the various overloads of ProcessingContext::setSourceData() for more complicated cases):

(band_index * height + y) * width + x
A cube stored in this storage order can be considered a row-major tensor of order 3 indexed as (band, y, x).
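
For illustration, the following helper functions compute the linear element index for each storage order directly from the formulas above. They assume a cube that is contiguous in memory; the function names are not part of the fluxEngine API.

// Linear index of the element at spatial position (x, y) and band band_index.
std::int64_t indexBIP(std::int64_t x, std::int64_t y, std::int64_t band_index,
                      std::int64_t width, std::int64_t band_count)
{
    return (y * width + x) * band_count + band_index;
}

std::int64_t indexBIL(std::int64_t x, std::int64_t y, std::int64_t band_index,
                      std::int64_t width, std::int64_t band_count)
{
    return (y * band_count + band_index) * width + x;
}

std::int64_t indexBSQ(std::int64_t x, std::int64_t y, std::int64_t band_index,
                      std::int64_t width, std::int64_t height)
{
    return (band_index * height + y) * width + x;
}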

enum class fluxEngine::PushBroomFrame_StorageOrder

Hyperspectral PushBroom frame storage order.

A PushBroom camera is a hyperspectral camera that uses a 2D sensor to image a single line, where the optics project different wavelengths of the incoming light onto one of the sensor dimensions. The other sensor dimension is used to spatially resolve the line that is being imaged.

There are two possible orientations of the optics: the wavelengths could be mapped onto the x- or the y-direction of the camera sensor. This enumeration allows the user to select which of these storage orders is actually used.

Values:

enumerator LambdaX

Wavelengths are in X-direction.

The y direction of the frame contains the spatial information.

enumerator LambdaY

Wavelengths are in Y-direction.

The x direction of the frame contains the spatial information.

enum class fluxEngine::OutputStorageType

The storage type of data at a given output sink.

When extracting data from fluxEngine, the data at a given output sink may be stored in different formats. This enumeration describes the possible formats the data is stored in. Please refer to the introductory information for a more detailed introduction on how data is returned from processing, and what kind of forms it may take.

Values:

enumerator Tensor

Tensor data.

This is the most common case, where data at the end of processing is available as a tensor. For HSI data, tensors will typically be of order 3, having a y dimension, an x dimension, and an additional dimension for e.g. spectral (wavelength) information.
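
As a rough sketch of how tensor sink data might be read after a processing step, assuming an order-3 Float32 result and assuming that the structure returned by ProcessingContext::outputSinkData() exposes the raw data via .data and the per-dimension sizes via .sizes (as in the object list examples further below); both assumptions should be verified against the introductory documentation and outputSinkTensorStructure():

auto sinkData = context.outputSinkData(sinkIndex);
// Assumed layout: row-major (y, x, channel) tensor of 32-bit floats
auto values = static_cast<float const*>(sinkData.data);
std::int64_t height = sinkData.sizes[0];
std::int64_t width = sinkData.sizes[1];
std::int64_t channels = sinkData.sizes[2];
float first = values[(0 * width + 0) * channels + 0];   // element at y = 0, x = 0, channel 0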

enumerator ObjectList

Object list.

A list of objects that is stored as an array of OutputObject objects.

struct OutputObject : private fluxEngine_C_v1_OutputObject

An object that is output.

If an output sink is configured to output object data, the data returned for that sink will be an array of this structure, containing the information related to each object.

This class is just a thin wrapper around the original C structure with the same binary data layout, but it adds a number of helper methods to extract the information stored for each object.

After a processing step has completed, for every output sink that returns an object list, the following code can be used to extract the objects:

auto sinkData = context.outputSinkData(sinkIndex);
auto beginPointer = static_cast<OutputObject const*>(sinkData.data);
auto endPointer = beginPointer + sinkData.sizes[0];
for (auto it = beginPointer; it != endPointer; ++it) {
    // it points to an object that was detected
}

Public Functions

inline std::int64_t boundingBoxX() const noexcept

Get the object’s bounding box: x coordinate of the left boundary.

Returns

The object’s bounding box: x coordinate of the left boundary

inline std::int64_t boundingBoxY() const noexcept

Get the object’s bounding box: y coordinate of the top boundary.

For PushBroom frames this will indicate the starting frame of the object since the last reset.

Returns

The object’s bounding box: y coordinate of the top boundary

inline std::int64_t boundingBoxWidth() const noexcept

Get the object’s bounding box: total width.

Returns

The object’s bounding box: total width

inline std::int64_t boundingBoxHeight() const noexcept

Get the object’s bounding box: total height.

Returns

The object’s bounding box: total height

inline double gravityCenterX() const noexcept

Get the object’s center of gravity: x coordinate.

The following condition will always hold:

boundingBoxX() <= gravityCenterX() &&
    gravityCenterX() < (boundingBoxX() + boundingBoxWidth())

Returns

The object’s center of gravity: x coordinate

inline double gravityCenterY() const noexcept

Get the object’s center of gravity: y coordinate.

The following condition will always hold:

boundingBoxY() <= gravityCenterY() &&
    gravityCenterY() < (boundingBoxY() + boundingBoxHeight())

Returns

The object’s center of gravity: y coordinate

inline std::int64_t area() const noexcept

Get the object’s area in pixels.

Returns

The object’s area in pixels

inline std::int8_t const *mask() const noexcept

A pointer to the object’s mask.

This may not be present, in which case this will be nullptr. If this is present this will point to a 2D matrix (row-major storage order, contiguous in memory) that has the size of the bounding box specified in this object, where a value of 0 indicates that a given pixel belongs to the object, and a value of -1 indicates that it does not belong to the object.

A helper method isObjectPresentAt() exists to determine if a given pixel is part of the object.

Returns

A pointer to the object’s mask

inline bool isObjectPresentAt(std::int64_t x, std::int64_t y) const noexcept

Determine if a given pixel is part of the detected object.

For a given set of coordinates for a pixel that are measured relative to the top-left corner of the bounding box, this method will determine if that pixel belongs to the object or not.

If the object does not contain any mask information, this will always return false.

Parameters
  • x – The x coordinate relative to the top-left corner of the bounding box of the object to check

  • y – The y coordinate relative to the top-left corner of the bounding box of the object to check

Returns

Whether a pixel is part of the object
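
A short usage sketch, assuming obj refers to an OutputObject obtained as in the extraction loop shown above:

// Count the pixels inside the bounding box that belong to the object.
// Coordinates passed to isObjectPresentAt() are relative to the top-left
// corner of the bounding box.
std::int64_t pixels = 0;
for (std::int64_t y = 0; y < obj.boundingBoxHeight(); ++y) {
    for (std::int64_t x = 0; x < obj.boundingBoxWidth(); ++x) {
        if (obj.isObjectPresentAt(x, y))
            ++pixels;
    }
}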

inline bool isObjectPresentAtAbsolute(std::int64_t x, std::int64_t y) const noexcept

Determine if a given pixel is part of the detected object.

For a given set of coordinates for a pixel that are measured absolutely, i.e. relative to the start of the cube, this method will determine if that pixel belongs to the object or not. For sequences of PushBroom frames that are being processed, the y coordinate is measured in the number of frames since the last reset via ProcessingContext::resetState().

If the object does not contain any mask information, this will always return false.

Parameters
  • x – The absolute x coordinate

  • y – The absolute y coordinate

Returns

Whether a pixel is part of the object

inline bool hasPrimaryClass() const noexcept

Does the object have a primary class?

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

Whether the object has a primary class

inline std::int16_t primaryClassValue() const noexcept

Get the primary class of the object.

If hasPrimaryClass() is false, the return value here will have no meaning and might be uninitialized memory.

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

The primary class of the object

inline std::optional<std::int16_t> primaryClass() const noexcept

Get the primary class of the object.

This method only exists when compiling with a C++17 compiler. It returns a std::optional result, which will be std::nullopt if the object was not subject to a classifier, and a valid integer if it was.

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

The primary class of the object, or std::nullopt if the object has no primary class
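
A usage sketch (requires C++17), again assuming obj refers to an OutputObject from the extraction loop above:

if (auto primaryClass = obj.primaryClass()) {
    if (*primaryClass >= 0) {
        // a classifier assigned the class *primaryClass to this object
    } else {
        // a classifier was run, but no class could be assigned
    }
} else {
    // the object was not subject to a classifier
}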

inline void const *additionalData() const noexcept

Get additional data for the object.

This is an array of scalar values whose size is fixed when the processing context is created; it contains additional data that is passed along with the object.

Returns

Additional data for the object

struct TensorData : private fluxEngine_C_v1_Tensor

Tensor data.

This structure describes tensor data that is returned within extended object data from fluxEngine. It wraps the underlying C structure (and has the same binary layout), but provides friendly C++ accessors to the actual data of the tensor as well as its properties.

It provides access to the base pointer of the tensor, the tensor order, its dimensions, its strides, the data type of the tensor, as well as accessors to individual elements of the tensor. It performs safety checks to ensure that accesses are correct.

This may also wrap a “null pointer”, i.e. a tensor that wasn’t actually present. To check whether a TensorData object actually wraps a valid tensor, it may be used in an if clause:

if (tensorData)
    do_something(tensorData);

Public Functions

inline TensorData()

Default constructor.

This creates an empty object that doesn’t wrap any tensor data.

inline TensorData(fluxEngine_C_v1_Tensor tensor)

Wrapping constructor.

Wraps an underlying C structure.

Parameters

tensor – The underlying C API structure to wrap

inline void const *rawBasePointer() const noexcept

Access the raw base pointer of the tensor.

This may be nullptr.

Returns

The raw base pointer of the tensor

template<typename T>
inline T const *basePointer() const

Access the base pointer, cast to a scalar type.

Get a pointer to the first element of the tensor, cast to a specific scalar type. The type will be checked against the type in the tensor, and if the type doesn’t match, an exception will be thrown.

This will never return nullptr, but will throw an exception if no actual tensor is wrapped.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Returns

The pointer to the first element in the tensor

template<typename T>
inline T at()

Get the single value of a scalar tensor.

If the tensor is of order 0, return the single value stored in it.

If the tensor order does not match, or the supplied scalar type does not match, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Returns

The single scalar value of the tensor

template<typename T>
inline T at(int64_t i0)

Get a value of a tensor of order 1.

If the tensor is of order 1, access an element of that vector given by the supplied index.

If the tensor order does not match, or the supplied scalar type does not match, or the supplied index is out of range, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Parameters

i0 – The index to address the tensor

Returns

The value stored in the vector at that index.
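
A usage sketch, assuming tensorData wraps a valid order-1 tensor and assuming its elements are stored as Float64 (at() will throw if the actual data type differs):

if (tensorData && tensorData.order() == 1) {
    for (std::int64_t i = 0; i < tensorData.dimension(0); ++i) {
        double value = tensorData.at<double>(i);
        // use value ...
    }
}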

template<typename T>
inline T at(int64_t i0, int64_t i1)

Get a value of a tensor of order 2.

If the tensor is of order 2, access an element of that tensor given by the supplied indices.

If the tensor order does not match, or the supplied scalar type does not match, or any of the supplied indices is out of range, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Parameters
  • i0 – The first index to address the tensor

  • i1 – The second index to address the tensor

Returns

The value of the tensor stored at those indices.

template<typename T>
inline T at(int64_t i0, int64_t i1, int64_t i2)

Get a value of a tensor of order 3.

If the tensor is of order 3, access an element of that tensor given by the supplied indices.

If the tensor order does not match, or the supplied scalar type does not match, or any of the supplied indices is out of range, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Parameters
  • i0 – The first index to address the tensor

  • i1 – The second index to address the tensor

  • i2 – The third index to address the tensor

Returns

The value of the tensor stored at those indices.

template<typename T>
inline T at(int64_t i0, int64_t i1, int64_t i2, int64_t i3)

Get a value of a tensor of order 4.

If the tensor is of order 4, access an element of that tensor given by the supplied indices.

If the tensor order does not match, or the supplied scalar type does not match, or any of the supplied indices is out of range, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Parameters
  • i0 – The first index to address the tensor

  • i1 – The second index to address the tensor

  • i2 – The third index to address the tensor

  • i3 – The fourth index to address the tensor

Returns

The value of the tensor stored at those indices.

template<typename T>
inline T at(int64_t i0, int64_t i1, int64_t i2, int64_t i3, int64_t i4)

Get a value of a tensor of order 5.

If the tensor is of order 5, access an element of that tensor given by the supplied indices.

If the tensor order does not match, or the supplied scalar type does not match, or any of the supplied indices is out of range, an exception will be thrown.

Template Parameters

T – The scalar type to access the tensor as. This must be std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t, std::int8_t, std::int16_t, std::int32_t, std::int64_t, float or double.

Parameters
  • i0 – The first index to address the tensor

  • i1 – The second index to address the tensor

  • i2 – The third index to address the tensor

  • i3 – The fourth index to address the tensor

  • i4 – The fifth index to address the tensor

Returns

The value of the tensor stored at those indices.

inline std::vector<int64_t> dimensions() const

Get the dimensions of the tensor as a vector.

If no valid tensor is wrapped, this will return an empty vector. However, an empty vector may also be returned if the tensor is a pure scalar value (order 0).

Returns

The dimensions of the tensor as a vector

inline int64_t dimension(int which) const

Get a specific dimension of the tensor.

If the specified dimension is out of range, this will throw an exception.

Parameters

which – Which dimension to consider

Returns

The size of that specific dimension

inline std::vector<int64_t> strides() const

Get the strides of the tensor as a vector.

If no valid tensor is wrapped, this will return an empty vector. However, an empty vector may also be returned if the tensor is a pure scalar value (order 0).

Returns

The strides of the tensor as a vector

inline int64_t stride(int which) const

Get a specific stride of the tensor.

If the specified dimension is out of range, this will throw an exception.

Parameters

which – Which dimension to consider

Returns

The stride of that specific dimension

inline int64_t const *rawDimensions() const noexcept

Get the raw dimensions of the tensor.

This will return a pointer to the first element of the underlying C array storing the raw dimensions of this tensor. Only the first order() entries will contain valid values.

This method has the advantage over TensorData::dimensions() that it doesn’t allocate on the heap to return the information.

Returns

The raw dimensions of the tensor

inline int64_t const *rawStrides() const noexcept

Get the raw strides of the tensor.

This will return a pointer to the first element of the underlying C array storing the raw strides of this tensor. Only the first order() entries will contain valid values.

This method has the advantage over TensorData::strides() that it doesn’t allocate on the heap to return the information.

Returns

The raw strides of the tensor

inline int order() const noexcept

Get the order of the tensor.

If no valid tensor is wrapped, this will return -1; otherwise it will return the order of the tensor, which will be between 0 and 5 (inclusive).

Returns

The order of the tensor

inline DataType dataType() const noexcept

Get the data type of the tensor.

If no valid tensor is wrapped the returned value will have no meaning.

Returns

The data type of the tensor

inline explicit operator bool() const noexcept

Bool conversion operator.

This operator exists to allow the user to check whether a valid tensor is stored in this object.

Returns

Whether a valid tensor is stored in the object

struct OutputExtendedObject : private fluxEngine_C_v1_OutputExtendedObject

An object that is output (extended output)

If an output sink is configured to output object data, the data returned for that sink will be an array of this structure, containing the information related to each object. This structure will be used if extended object use has been requested, see ProcessingContext::setUseExtendedObjects().

This class is just a thin wrapper around the original C structure with the same binary data layout, but it adds a number of helper methods to extract the information stored for each object.

After a processing step has completed, for every output sink that returns an object list, the following code can be used to extract the objects:

auto sinkData = context.outputSinkData(sinkIndex);
auto beginPointer = static_cast<OutputExtendedObject const*>(sinkData.data);
auto endPointer = beginPointer + sinkData.sizes[0];
for (auto it = beginPointer; it != endPointer; ++it) {
    // it points to an object that was detected
}

Public Functions

inline std::int64_t boundingBoxX() const noexcept

Get the object’s bounding box: x coordinate of the left boundary.

Returns

The object’s bounding box: x coordinate of the left boundary

inline std::int64_t boundingBoxY() const noexcept

Get the object’s bounding box: y coordinate of the top boundary.

For PushBroom frames this will indicate the starting frame of the object since the last reset.

Returns

The object’s bounding box: y coordinate of the top boundary

inline std::int64_t boundingBoxWidth() const noexcept

Get the object’s bounding box: total width.

Returns

The object’s bounding box: total width

inline std::int64_t boundingBoxHeight() const noexcept

Get the object’s bounding box: total height.

Returns

The object’s bounding box: total height

inline double gravityCenterX() const noexcept

Get the object’s center of gravity: x coordinate.

The following condition will always hold:

boundingBoxX() <= gravityCenterX() &&
    gravityCenterX() < (boundingBoxX() + boundingBoxWidth())

Returns

The object’s center of gravity: x coordinate

inline double gravityCenterY() const noexcept

Get the object’s center of gravity: y coordinate.

The following condition will always hold:

boundingBoxY() <= gravityCenterY() &&
    gravityCenterY() < (boundingBoxY() + boundingBoxHeight())

Returns

The object’s center of gravity: y coordinate

inline std::int64_t area() const noexcept

Get the object’s area in pixels.

Returns

The object’s area in pixels

inline TensorData mask() const noexcept

The object’s mask.

This may not be present, in which case the returned TensorData will not wrap a valid tensor (it converts to false). If it is present, it describes a 2D matrix (row-major storage order, contiguous in memory) that has the size of the bounding box specified in this object, where a value of 0 indicates that a given pixel belongs to the object, and a value of -1 indicates that it does not belong to the object.

A helper method isObjectPresentAt() exists to determine if a given pixel is part of the object.

Returns

The object’s mask

inline bool isObjectPresentAt(std::int64_t x, std::int64_t y) const noexcept

Determine if a given pixel is part of the detected object.

For a given set of coordinates for a pixel that are measured relative to the top-left corner of the bounding box, this method will determine if that pixel belongs to the object or not.

If the object does not contain any mask information, this will always return false.

Parameters
  • x – The x coordinate relative to the top-left corner of the bounding box of the object to check

  • y – The y coordinate relative to the top-left corner of the bounding box of the object to check

Returns

Whether a pixel is part of the object

inline bool isObjectPresentAtAbsolute(std::int64_t x, std::int64_t y) const noexcept

Determine if a given pixel is part of the detected object.

For a given set of coordinates for a pixel that are measured absolutely, i.e. relative to the start of the cube, this method will determine if that pixel belongs to the object or not. For sequences of PushBroom frames that are being processed, the y coordinate is measured in the number of frames since the last reset via ProcessingContext::resetState().

If the object does not contain any mask information, this will always return false.

Parameters
  • x – The absolute x coordinate

  • y – The absolute y coordinate

Returns

Whether a pixel is part of the object

inline bool hasPrimaryClass() const noexcept

Does the object have a primary class?

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

Whether the object has a primary class

inline std::int16_t primaryClassValue() const noexcept

Get the primary class of the object.

If hasPrimaryClass() is false, the return value here will have no meaning and might be uninitialized memory.

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

The primary class of the object

inline std::optional<std::int16_t> primaryClass() const noexcept

Get the primary class of the object.

This method only exists when compiling with a C++17 compiler. It returns a std::optional result, which will be std::nullopt if the object was not subject to a classifier, and a valid integer if it was.

Note that the presence of a primary class is only an indication of whether the object was subject to a classifier in the processing chain. It does not mean that the classifier has actually found a class that matches the object. The primary class of an object may be present, but negative, indicating that a classifier was run, but that no class could be assigned to the object.

Returns

The primary class of the object, or std::nullopt if the object has no primary class

inline TensorData additionalData() const noexcept

Get additional data for the object.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

Additional data for the object

inline TensorData statisticsMean() const noexcept

Get the (statistics) mean values stored with the object.

This is a tensor of scalar values (possibly only a single scalar value) that contains mean values calculated for a given object, depending on the model used. The tensor structure and the amount of means calculated depends on the model.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The (statistics) mean values
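
A hedged sketch of reading these values, assuming extendedObject refers to an OutputExtendedObject from the extraction loop above and assuming the model stores the means as an order-1 Float64 tensor (at() will throw if the actual structure differs):

auto means = extendedObject.statisticsMean();
if (means && means.order() == 1) {
    for (std::int64_t i = 0; i < means.dimension(0); ++i) {
        double mean = means.at<double>(i);
        // use the per-band mean ...
    }
}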

inline TensorData statisticsStandardDeviation() const noexcept

Get the (statistics) standard deviations stored with the object.

This is a tensor of scalar values (possibly only a single scalar value) that contains the standard deviation corresponding to the mean values. It will have the same tensor structure as the mean values described by statisticsMean().

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The (statistics) standard deviations

inline TensorData statisticsMinimum() const noexcept

Get the (statistics) minimum values stored with the object.

This is a tensor of scalar values (possibly only a single scalar value) that contains the minimum values that were calculated on the same per-object data as the mean values provided. It will have the same tensor structure as the mean values described by statisticsMean().

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The (statistics) minimum values

inline TensorData statisticsMaximum() const noexcept

Get the (statistics) maximum values stored with the object.

This is a tensor of scalar values (possibly only a single scalar value) that contains the maximum values that were calculated on the same per-object data as the mean values provided. It will have the same tensor structure as the mean values described by statisticsMean().

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The (statistics) maximum values

inline TensorData statisticsMinimumX() const noexcept

Get the x positions corresponding to the minimum values returned by statisticsMinimum().

For each minimum value provided by statisticsMinimum() this will contain the x position (relative to the left border of the object) of where the minimum was found. It will have the same tensor structure as the mean values described by statisticsMean(), but always be of the data type DataType::Int64.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The x positions of the (statistics) minimum values

inline TensorData statisticsMinimumY() const noexcept

Get the y positions corresponding to the minimum values returned by statisticsMinimum().

For each minimum value provided by statisticsMinimum() this will contain the y position (relative to the top border of the object) of where the minimum was found. It will have the same tensor structure as the mean values described by statisticsMean(), but always be of the data type DataType::Int64.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The y positions of the (statistics) minimum values

inline TensorData statisticsMaximumX() const noexcept

Get the x positions corresponding to the maximum values returned by statisticsMaximum().

For each maximum value provided by statisticsMaximum() this will contain the x position (relative to the left border of the object) of where the maximum was found. It will have the same tensor structure as the mean values described by statisticsMean(), but always be of the data type DataType::Int64.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The x positions of the (statistics) maximum values

inline TensorData statisticsMaximumY() const noexcept

Get the y positions corresponding to the maximum values returned by statisticsMaximum().

For each maximum value provided by statisticsMaximum() this will contain the y position (relative to the top border of the object) of where the maximum was found. It will have the same tensor structure as the mean values described by statisticsMean(), but always be of the data type DataType::Int64.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The y positions of the (statistics) maximum values

inline TensorData qualityValues() const noexcept

Get the quality values stored in the object.

This will typically be a tensor of order 1 (vector) of an integer data type. The specific data type will depend on the model the user has created.

This may not always be present, and it is up to the user to check the return value before using it.

Returns

The quality values stored in the object

class ProcessingContext

Processing Context.

This class wraps a processing context, the main interface to processing data with fluxEngine.

The default constructor does not actually create a valid processing context; the user must use one of the other constructors to actually load a model.

Data Processing

These classes and methods of the ProcessingContext class are generic methods related to data processing, as well as methods that are not tied to a specific device.

ProcessingContext() = default

Default constructor.

The object created by this constructor is not a valid processing context.

inline ProcessingContext(Model &model, HSICube_Tag, HSICube_StorageOrder storageOrder, DataType dataType, std::int64_t maxHeight, std::int64_t height, std::int64_t maxWidth, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo)

Create a new processing context for HSI cubes.

This method creates a new processing context that may be used to process HSI data cubes. The context may be used to process multiple cubes, as long as they have the same structure.

The cube will be processed as a whole, and depending on the complexity of the model a lot of temporary storage may be required to store all the intermediate processing results.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The maximum spatial dimensions that will be processed with this context

The user may choose to process cubes of the same size, or cubes of varying sizes. In the case of cubes that all have the same size, the user should specify the same value for both maxHeight and height, and for maxWidth and width, respectively. In the case the cube sizes vary, the user should specify -1 for both height and width, and specify the size of the largest cube they will ever want to process in maxHeight and maxWidth.

Larger values for maxHeight and maxWidth will lead to more RAM being required to fully process the data.

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • maxHeight – The maximum height of a cube that will be processed using this processing context

  • height – Specify -1 here to leave the cube height dynamic (which might not be as efficient at runtime for some models), or the same value as maxHeight to fix the height and indicate it will always be the same for every cube that is being processed.

  • maxWidth – The maximum width of a cube that will be processed using this processing context

  • width – Specify -1 here to leave the cube width dynamic (which might not be as efficient at runtime for some models), or the same value as maxWidth to fix the width and indicate it will always be the same for every cube that is being processed.

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details
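
A construction sketch for cubes that always have the same size, stored in BIP order with 16-bit unsigned intensities. The model variable, the cube dimensions, the wavelength list, and the spelling of the tag argument (written here as HSICube_Tag{}) are assumptions for illustration; the sketch also assumes the model processes raw intensities, so no references are supplied.

// model: a previously loaded fluxEngine::Model
std::vector<double> wavelengths = { /* 224 wavelength values in nanometers */ };

fluxEngine::ReferenceInfo referenceInfo;
referenceInfo.valueType = fluxEngine::ValueType::Intensity;

fluxEngine::ProcessingContext context(model, fluxEngine::HSICube_Tag{},
    fluxEngine::HSICube_StorageOrder::BIP, fluxEngine::DataType::UInt16,
    /* maxHeight */ 512, /* height */ 512,
    /* maxWidth */ 640, /* width */ 640,
    wavelengths, referenceInfo);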

inline ProcessingContext(Model &model, HSICube_Tag, HSICube_StorageOrder storageOrder, DataType dataType, std::int64_t maxHeight, std::int64_t height, std::int64_t maxWidth, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo, CalibrationInfo const &calibrationInfo)

Create a new processing context for HSI cubes.

This method creates a new processing context that may be used to process HSI data cubes. The context may be used to process multiple cubes, as long as they have the same structure.

This overload allows the user to specify calibration information for the source data.

The cube will be processed as a whole, and depending on the complexity of the model a lot of temporary storage may be required to store all the intermediate processing results.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The maximum spatial dimensions that will be processed with this context

The user may choose to process cubes of the same size, or cubes of varying sizes. In the case of cubes that all have the same size, the user should specify the same value for both maxHeight and height, and for maxWidth and width, respectively. In the case the cube sizes vary, the user should specify -1 for both height and width, and specify the size of the largest cube they will ever want to process in maxHeight and maxWidth.

Larger values for maxHeight and maxWidth will lead to more RAM being required to fully process the data.

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • maxHeight – The maximum height of a cube that will be processed using this processing context

  • height – Specify -1 here to leave the cube height dynamic (which might not be as efficient at runtime for some models), or the same value as maxHeight to fix the height and indicate it will always be the same for every cube that is being processed.

  • maxWidth – The maximum width of a cube that will be processed using this processing context

  • width – Specify -1 here to leave the cube width dynamic (which might not be as efficient at runtime for some models), or the same value as maxWidth to fix the width and indicate it will always be the same for every cube that is being processed.

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

  • calibrationInfo – The calibration information of the source data

inline ProcessingContext(Model &model, ProcessingQueueSet &processingQueueSet, HSICube_Tag, HSICube_StorageOrder storageOrder, DataType dataType, std::int64_t maxHeight, std::int64_t height, std::int64_t maxWidth, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo)

Create a new processing context for HSI cubes.

This method creates a new processing context that may be used to process HSI data cubes. The context may be used to process multiple cubes, as long as they have the same structure.

This overload allows the user to specify a processing queue set. If an invalid set is specified, the default processing queue set of the handle will be used.

The cube will be processed as a whole, and depending on the complexity of the model a lot of temporary storage may be required to store all the intermediate processing results.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The maximum spatial dimensions that will be processed with this context

The user may choose to process cubes of the same size, or cubes of varying sizes. In the case of cubes that all have the same size, the user should specify the same value for both maxHeight and height, and for maxWidth and width, respectively. In the case the cube sizes vary, the user should specify -1 for both height and width, and specify the size of the largest cube they will ever want to process in maxHeight and maxWidth.

Larger values for maxHeight and maxWidth will lead to more RAM being required to fully process the data.

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • processingQueueSet – The processing queue set to use

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • maxHeight – The maximum height of a cube that will be processed using this processing context

  • height – Specify -1 here to leave the cube height dynamic (which might not be as efficient at runtime for some models), or the same value as maxHeight to fix the height and indicate it will always be the same for every cube that is being processed.

  • maxWidth – The maximum width of a cube that will be processed using this processing context

  • width – Specify -1 here to leave the cube width dynamic (which might not be as efficient at runtime for some models), or the same value as maxWidth to fix the width and indicate it will always be the same for every cube that is being processed.

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

inline ProcessingContext(Model &model, ProcessingQueueSet &processingQueueSet, HSICube_Tag, HSICube_StorageOrder storageOrder, DataType dataType, std::int64_t maxHeight, std::int64_t height, std::int64_t maxWidth, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo, CalibrationInfo const &calibrationInfo)

Create a new processing context for HSI cubes.

This method creates a new processing context that may be used to process HSI data cubes. The context may be used to process multiple cubes, as long as they have the same structure.

This overload allows the user to specify a processing queue set. If an invalid set is specified, the default processing queue set of the handle will be used. It also allows the user to specify the calibration information of the source data.

The cube will be processed as a whole, and depending on the complexity of the model a lot of temporary storage may be required to store all the intermediate processing results.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The maximum spatial dimensions that will be processed with this context

The user may choose to process cubes of the same size, or cubes of varying sizes. In the case of cubes that all have the same size, the user should specify the same value for both maxHeight and height, and for maxWidth and width, respectively. In the case the cube sizes vary, the user should specify -1 for both height and width, and specify the size of the largest cube they will ever want to process in maxHeight and maxWidth.

Larger values for maxHeight and maxWidth will lead to more RAM being required to fully process the data.

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • processingQueueSet – The processing queue set to use

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • maxHeight – The maximum height of a cube that will be processed using this processing context

  • height – Specify -1 here to leave the cube height dynamic (which might not be as efficient at runtime for some models), or the same value as maxHeight to fix the height and indicate it will always be the same for every cube that is being processed.

  • maxWidth – The maximum width of a cube that will be processed using this processing context

  • width – Specify -1 here to leave the cube width dynamic (which might not be as efficient at runtime for some models), or the same value as maxWidth to fix the width and indicate it will always be the same for every cube that is being processed.

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

  • calibrationInfo – The calibration information of the source data

inline ProcessingContext(Model &model, PushBroomFrame_Tag, PushBroomFrame_StorageOrder storageOrder, DataType dataType, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo)

Create a new processing context for PushBroom frames.

This constructor creates a new processing context that may be used to sequentially process PushBroom frames. Each consecutive frame is considered to be part of a stream of lines that in principle could be used to construct a cube if concatenated.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The exact spatial dimension that will be processed with this context; as PushBroom frames should be able to be concatenated, the size of the frame may not be variable, but the number of frames being processed may vary

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • width – The spatial dimension of each PushBroom frame that is supplied (if the storage order indicates that wavelengths are across the x direction of the frame, this indicates the size of the frame in the y direction, and vice-versa)

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details
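
A construction sketch for PushBroom frames in LambdaY order with 16-bit unsigned intensities. The model variable, the frame width, the wavelength list, and the spelling of the tag argument (written here as PushBroomFrame_Tag{}) are assumptions for illustration; the sketch assumes the model processes raw intensities, so no references are supplied.

// model: a previously loaded fluxEngine::Model
std::vector<double> wavelengths = { /* 224 wavelength values in nanometers */ };

fluxEngine::ReferenceInfo referenceInfo;
referenceInfo.valueType = fluxEngine::ValueType::Intensity;

fluxEngine::ProcessingContext context(model, fluxEngine::PushBroomFrame_Tag{},
    fluxEngine::PushBroomFrame_StorageOrder::LambdaY, fluxEngine::DataType::UInt16,
    /* width */ 640, wavelengths, referenceInfo);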

inline ProcessingContext(Model &model, PushBroomFrame_Tag, PushBroomFrame_StorageOrder storageOrder, DataType dataType, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo, CalibrationInfo const &calibrationInfo)

Create a new processing context for PushBroom frames.

This constructor creates a new processing context that may be used to sequentially process PushBroom frames. Each consecutive frame is considered to be part of a stream of lines that in principle could be used to construct a cube if concatenated.

This overload allows the user to specify calibration information for the source data.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The exact spatial dimension that will be processed with this context; as PushBroom frames should be able to be concatenated, the size of the frame may not be variable, but the number of frames being processed may vary

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • width – The spatial dimension of each PushBroom frame that is supplied (if the storage order indicates that wavelengths are across the x direction of the frame, this indicates the size of the frame in the y direction, and vice-versa)

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

  • calibrationInfo – The calibration information of the source data

inline ProcessingContext(Model &model, ProcessingQueueSet &processingQueueSet, PushBroomFrame_Tag, PushBroomFrame_StorageOrder storageOrder, DataType dataType, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo)

Create a new processing context for PushBroom frames.

This constructor creates a new processing context that may be used to sequentially process PushBroom frames. Each consecutive frame is considered to be part of a stream of lines that in principle could be used to construct a cube if concatenated.

This overload allows the user to specify a processing queue set. If an invalid set is specified, the default processing queue set of the handle will be used.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The exact spatial dimension that will be processed with this context; as PushBroom frames should be able to be concatenated, the size of the frame may not be variable, but the number of frames being processed may vary

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • processingQueueSet – The processing queue set to use

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • width – The spatial dimension of each PushBroom frame that is supplied (if the storage order indicates that wavelengths are across the x direction of the frame, this indicates the size of the frame in the y direction, and vice-versa)

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

inline ProcessingContext(Model &model, ProcessingQueueSet &processingQueueSet, PushBroomFrame_Tag, PushBroomFrame_StorageOrder storageOrder, DataType dataType, std::int64_t width, std::vector<double> const &wavelengths, ReferenceInfo const &referenceInfo, CalibrationInfo const &calibrationInfo)

Create a new processing context for PushBroom frames.

This constructor creates a new processing context that may be used to sequentially process PushBroom frames. Each consecutive frame is considered to be part of a stream of lines that in principle could be used to construct a cube if concatenated.

This overload allows the user to specify a processing queue set. If an invalid set is specified, the default processing queue set of the handle will be used. It also allows the user to specify the calibration information of the source data.

The following information must be known in advance to properly set up a fluxEngine processing context that can be used to process this type of HSI data:

  • The scalar data type

  • The storage order of the data in memory

  • The wavelengths

  • The exact spatial dimension that will be processed with this context; as PushBroom frames should be able to be concatenated, the size of the frame may not be variable, but the number of frames being processed may vary

If creating the processing context fails, one of the following exceptions may be thrown:

Parameters
  • model – The model to use to process data

  • processingQueueSet – The processing queue set to use

  • storageOrder – The storage order the input data will have when it is supplied to the processing context

  • dataType – The scalar data type of the input data when it is supplied to the processing context

  • width – The spatial dimension of each PushBroom frame that is supplied (if the storage order indicates that wavelengths are across the x direction of the frame, this indicates the size of the frame in the y direction, and vice-versa)

  • wavelengths – The list of wavelengths of the input cubes being processed, in the unit of nanometers

  • referenceInfo – How the data should be referenced, see the ReferenceInfo structure for more details

  • calibrationInfo – The calibration information of the source data

inline ProcessingContext(ProcessingContext &&other) noexcept

Move constructor.

Parameters

other – The processing context to move into the newly created object

inline ProcessingContext &operator=(ProcessingContext &&other) noexcept

Move assignment operator.

Parameters

other – The processing context to move into this variable

Returns

A reference to this

inline ~ProcessingContext() noexcept

Destructor.

Destroys the processing context.

inline void setUseExtendedObjects(bool value)

Alter a processing context: set use of extended objects.

Objects may be returned in two different manners: either via the OutputObject structure (default) or the OutputExtendedObject structure. This method controls which variant is to be used.

This setting may only be changed while processing is currently not active on this context. Changing this setting may take a bit of processing time. Ideally it should be done immediately after the context has been created and then not changed anymore.

Parameters

value – Whether extended objects are to be returned (true) or standard objects (false).
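
Typical usage, assuming context is a freshly created processing context as in the constructor examples above:

context.setUseExtendedObjects(true);   // object list sinks now return OutputExtendedObject entries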

inline int numOutputSinks() const

Obtain the number of output sinks in the model.

To retrieve data that has been processed via fluxEngine the designer of the model must add output sinks to the places where data is to be extracted.

This function returns the number of output sinks within the given model. This may be used to iterate over the output sinks and determine their data structure given the input data structure that is supplied.

If an error occurs, this method may throw one of the following exceptions:

Returns

The number of output sinks in the model

inline int findOutputSink(int outputId) const

Find the output sink with a given output id.

If there is exactly one output sink in the model with a given output id, this will return the index of that sink. If there are no output sinks with that id, or that output id is used multiple times, this will return an error.

If an error occurs, this method may throw one of the following exceptions:

Parameters

outputId – The output id of the output sink to look for

Returns

The index of the output sink with a given output id

inline OutputSinkMetaInfo outputSinkMetaInfo(int sinkIndex) const

Obtain meta information about a given output sink.

For a given output sink index that ranges between 0 and one less than the value returned by numOutputSinks() this method will return meta information about the output sink.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

The meta information about the given output sink

inline std::vector<OutputSinkMetaInfo> outputSinkMetaInfos() const

Obtain a list of meta information about all output sinks.

This convenience method returns a list of meta information for all output sinks in the model. The index of the vector is the corresponding sink index.

If an error occurs, this method may throw one of the following exceptions:

Returns

A list of meta information about all output sinks
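A minimal introspection sketch, assuming a valid context is at hand: enumerate the sinks, then look up one sink by the output id that was assigned to it in the model (output id 1 is an assumption here):

void inspectSinks(fluxEngine::ProcessingContext const &context)
{
    int const sinkCount = context.numOutputSinks();
    for (int i = 0; i < sinkCount; ++i) {
        fluxEngine::OutputSinkMetaInfo meta = context.outputSinkMetaInfo(i);
        // Inspect 'meta' as described by the OutputSinkMetaInfo documentation.
        (void) meta;
    }

    // Locate the sink that was given output id 1 in the model (throws if there is
    // no such sink, or if that id is used more than once).
    int const sinkIndex = context.findOutputSink(1);
    (void) sinkIndex;
}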

inline OutputSinkTensorStructure outputSinkTensorStructure(int sinkIndex) const

Obtain information about the tensor structure of a given output sink.

For an output sink with data of tensor type (see OutputStorageType::Tensor for details), this function will return the tensor structure of the data that will be returned via the output sink.

If the storage type does not match, this function will throw an exception.

Please see the documentation for outputSinkData() for more information on how to process tensor data.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

The tensor structure of the given output sink

inline OutputSinkObjectListStructure outputSinkObjectListStructure(int sinkIndex) const

Obtain information about the object structure of a given output sink.

For any output sink with data of object list type (see OutputStorageType::ObjectList for details), this method will return information about the object list that is returned.

If the storage type does not match, this function will throw an exception.

Please see the documentation for outputSinkData() for more information on how to process object list data.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

Information about the object structure of a given output sink

inline OutputSinkObjectListStatisticsStructure outputSinkObjectListStatisticsStructure(int sinkIndex) const

Obtain information about the statistics data structure of the objects returned by a given output sink.

For any output sink with data of object list type (see OutputStorageType::ObjectList for details), this method will return information about the structure of the statistics data associated with each object, if present.

If the storage type does not match, this function will throw an exception.

Please see the documentation for outputSinkData() for more information on how to process object list data.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

Information about the structure of the statistics data returned for each object.

inline OutputSinkObjectListQualityStructure outputSinkObjectListQualityStructure(int sinkIndex) const

Obtain information about the quality data structure of the objects returned by a given output sink.

For any output sink with data of object list type (see OutputStorageType::ObjectList for details), this method will return information about the structure of the quality data associated with each object, if present.

If the storage type does not match, this function will throw an exception.

Please see the documentation for outputSinkData() for more information on how to process object list data.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

Information about the structure of the quality data returned for each object.

inline OutputSinkInfo outputSinkInfo(int sinkIndex) const

Obtain output sink information for a given sink index.

This convenience wrapper is only available when compiling with a C++17 compiler. It provides a convenient method of obtaining all the metadata of an output sink in form of a single structure.

See the documentation of the OutputSinkInfo structure for further details.

If an error occurs, this method may throw one of the following exceptions:

Parameters

sinkIndex – The index of the output sink to introspect

Returns

The information and structure of the output sink

inline std::vector<OutputSinkInfo> outputSinkInfos() const

Obtain information about all output sinks.

This convenience wrapper is only available when compiling with a C++17 compiler. It provides a convenient method of obtaining all information about all output sinks as a vector. The index of the vector is the output sink index.

See the documentation of the OutputSinkInfo structure for further details.

Returns

A vector of information structures for every output sink in the model

inline void setSourceData(HSICube_Tag, std::int64_t height, std::int64_t width, void const *data)

Set the next input data to be processed (HSI cube)

Set the next input data that should be processed by fluxEngine. The user must supply a pointer to a memory region that contains the input data stored contiguously in memory. (For non-contiguously stored data the user may use the alternative overload of this method that also accepts strides.)

The user must ensure that the memory region that contains the input data is not altered while ProcessingContext::processNext() is active. (It may be altered after setting it here and before calling it though, as long as the dimensions don’t change.)

If the input cube size was fixed during the creation of the processing context, the height and width parameters must match the height and width specified during creation of the context, or an error will be thrown.

If the input cube size was variable during the creation of the processing context, the height and width parameters must be smaller than or equal to the maximum size specified during the creation of the context.

The storage order of the cube that has been specified during the creation of the processing context will be used. This means that the height and width parameters may refer to different dimensions of the cube depending on the storage order:

  • For a BIP cube, the cube will be indexed via (y, x, band), meaning the height parameter refers to dimension 0, the width parameter to dimension 1, and the wavelength count supplied during creation of the context to dimension 2 of the cube.

  • For a BIL cube, the cube will be indexed via (y, band, x), meaning the height parameter refers to dimension 0, the width parameter to dimension 2, and the wavelength count supplied during creation of the context to dimension 1 of the cube.

  • For a BSQ cube, the cube will be indexed via (band, y, x), meaning the height parameter refers to dimension 1, the width parameter to dimension 2, and the wavelength count supplied during creation of the context to dimension 0 of the cube.

Between calls to ProcessingContext::processNext() this method may be used to change the source data region that is to be used during the next processing call.

If the processing context was not set up to process HSI cubes (e.g. because it was set up to process PushBroom frames), an exception will be thrown.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • height – The height of the HSI cube to process

  • width – The width of the HSI cube to process

  • data – A pointer to a region of memory that contains the HSI cube stored contiguously, and must be of size width * height * band_count * scalar_size in bytes.
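For example, a sketch for a context that was created for contiguous BIP cubes with a UInt16 scalar type, where cube points to height * width * band_count scalars (the tag qualification shown is an assumption):

void processCube(fluxEngine::ProcessingContext &context,
                 std::int64_t height, std::int64_t width,
                 std::uint16_t const *cube)   // contiguous BIP data, UInt16 scalars
{
    context.setSourceData(fluxEngine::ProcessingContext::HSICube_Tag{},   // assumed qualification
                          height, width, cube);
    context.processNext();   // 'cube' must stay valid and unmodified during this call
}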

inline void setSourceData(HSICube_Tag, std::int64_t height, std::int64_t width, std::int64_t stride1, std::int64_t stride2, void const *data)

Set the next input data to be processed (HSI cube, non-contiguous)

This is an extended version of the standard overload of this method. Please see the documentation of that method for details that do not pertain to the strides.

It is possible to lay out cubes non-contiguously in memory. For example, take a 2x2x2 cube with the following 8 elements:

cube(0, 0, 0) = 0
cube(0, 0, 1) = 1
cube(0, 1, 0) = 2
cube(0, 1, 1) = 3
cube(1, 0, 0) = 4
cube(1, 0, 1) = 5
cube(1, 1, 0) = 6
cube(1, 1, 1) = 7
When laid out contiguously in memory, the cube will have the following structure:
 dimension 2
(increment by 1)   dimension 0
  +---+          (increment by 4)
  |   |       +---------------+
  |   |       |               |
  |   v       |               v
+---+---+---+---+---+---+---+---+
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+---+---+---+---+---+---+---+---+
  |       ^
  |       |
  +-------+
 dimension 1
(increment by 2)
To increment dimension 2 (the inner-most dimension) the element pointer must be incremented by 1. To increment dimension 1 (the middle dimension) the element pointer must be incremented by 2. To increment dimension 0 (the outer-most dimension) the element pointer has to be incremented by 4.

For cubes that reside contiguously in memory the increments here are always given by the dimensions of the cube. For example, a contiguous cube of dimensions (A, B, C) will have a stride structure of (B * C, C, 1).

However, it is possible that the cube is not contiguous in memory. In the above example, the stride structure for the contiguous cube was (4, 2, 1) due to the size of the cube - but if the stride structure is chosen as (9, 3, 1) the memory layout of the cube would look differently:

 dimension 2
(increment by 1)   dimension 0
  +---+          (increment by 9)
  |   |       +-----------------------------------+
  |   |       |                                   |
  |   v       |                                   v
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
| 0 | 1 | _ | 2 | 3 | _ | _ | _ | _ | 4 | 5 | _ | 6 | 7 |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
  |           ^
  |           |
  +-----------+
 dimension 1
(increment by 3)
For the HSI cube that is passed to this method, it will have the stride structure (stride1, stride2, 1).

As an example, if the cube is contiguous in memory (and the other overload of setSourceData() could have been used instead), the following stride structure is assumed:

  • For contiguous BIP cubes (dimensions (y, x, band)) stride1 would be width * band_count, stride2 would be band_count.

  • For contiguous BIL cubes (dimensions (y, band, x)) stride1 would be band_count * width, stride2 would be width.

  • For contiguous BSQ cubes (dimensions (band, y, x)) stride1 would be height * width, stride2 would be width.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • height – The height of the HSI cube to process

  • width – The width of the HSI cube to process

  • stride1 – The number of scalar elements to skip to increment the left-most dimension of the cube by 1

  • stride2 – The number of scalar elements to skip to increment the middle dimension of the cube by 1

  • data – A pointer to a region of memory that contains the HSI cube, and must be of size height * stride1 * scalar_size (BIP and BIL storage orders) or band_count * stride1 * scalar_size (BSQ storage order) in bytes
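As an illustration of how the strides can be used, the following sketch passes a rectangular region of interest of a larger, contiguous BIP cube to the context without copying; all names are illustrative and the tag qualification is an assumption:

void processRoi(fluxEngine::ProcessingContext &context,
                std::uint16_t const *fullCube,      // contiguous BIP cube (fullHeight, fullWidth, bandCount)
                std::int64_t fullWidth, std::int64_t bandCount,
                std::int64_t y0, std::int64_t x0,   // top-left corner of the ROI
                std::int64_t roiHeight, std::int64_t roiWidth)
{
    std::int64_t const stride1 = fullWidth * bandCount;   // advance one row (y) of the full cube
    std::int64_t const stride2 = bandCount;               // advance one pixel (x) of the full cube
    std::uint16_t const *first = fullCube + y0 * stride1 + x0 * stride2;

    context.setSourceData(fluxEngine::ProcessingContext::HSICube_Tag{},   // assumed qualification
                          roiHeight, roiWidth, stride1, stride2, first);
    context.processNext();
}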

inline void setSourceData(PushBroomFrame_Tag, void const *data)

Set the next input data to be processed (PushBroom frame)

Set the next input data that should be processed by fluxEngine. The user must supply a pointer to a memory region that contains the input data stored contiguously in memory. (For non-contiguously stored data the user may use the alternative overload of this method that also accepts a stride.)

The user must ensure that the memory region that contains the input data is not altered while ProcessingContext::processNext() is active. (It may be altered after setting it here and before calling it though, as long as the dimensions don’t change.)

The input PushBroom frame size was fixed during the creation of the processing context, and the size of the supplied frame must match the width and band count specified at that time.

The storage order of the cube that has been specified during the creation of the processing context will be used. The supplied frame must be a 2D image with the following dimensions:

  • For LambdaX storage order, the width of the image must be equal to the wavelength count specified during the creation of the processing context, while the height of the image must be equal to the specified spatial width.

  • For LambdaY storage order, the height of the image must be equal to the wavelength count specified during the creation of the processing context, while the width of the image must be equal to the specified spatial width.

Between calls to ProcessingContext::processNext() this function may be used to change the source data region that is to be used during the next processing call.

If the processing context was not set up to process PushBroom frames (e.g. because it was set up to process HSI cubes), an exception will be thrown.

If an error occurs, this method may throw one of the following exceptions:

Parameters

data – A pointer to a region of memory that contains the PushBroom frame stored contiguously, and must be of size width * band_count * scalar_size in bytes.
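A per-frame sketch, assuming a context created for UInt16 frames and a frame buffer whose dimensions match the width, band count and storage order fixed at context creation (tag qualification assumed):

void processFrame(fluxEngine::ProcessingContext &context,
                  std::uint16_t const *frame)   // contiguous frame, width * band_count scalars
{
    context.setSourceData(fluxEngine::ProcessingContext::PushBroomFrame_Tag{},   // assumed qualification
                          frame);
    context.processNext();
}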

inline void setSourceData(PushBroomFrame_Tag, std::int64_t stride, void const *data, std::int64_t sequenceId = -1)

Set the next input data to be processed (PushBroom frame, non-contiguous)

This is an extended version of the standard overload of this method. Please see the documentation of that method for details that do not pertain to the strides.

An image may be laid out non-contiguously in memory. For example, take a 2x2 image with the following data:

image(y = 0, x = 0) = 0
image(y = 0, x = 1) = 1
image(y = 1, x = 0) = 2
image(y = 1, x = 1) = 3
This will have the following contiguous representation in memory:
 dimension 1
(increment by 1)
  +---+
  |   |
  |   |
  |   v
+---+---+---+---+
| 0 | 1 | 2 | 3 |
+---+---+---+---+
  |       ^
  |       |
  +-------+
 dimension 0
(increment by 2)
However, the memory may also be stored non-contiguously. For example, if 3 scalar elements are to be skipped whenever the y dimension of the image is incremented, the layout would look like this:
 dimension 1
(increment by 1)
  +---+
  |   |
  |   |
  |   v
+---+---+---+---+---+
| 0 | 1 | _ | 2 | 3 |
+---+---+---+---+---+
  |           ^
  |           |
  +-----------+
 dimension 0
(increment by 3)
This overload also allows the user to specify the sequence id to use for the PushBroom frame. The sequence id is a number that has to increase between individual frames that are being processed. In an ideal world the sequence id will be incremented by 1 between each frame that is being processed. If the sequence id increases by more than one, the processing logic assumes that frames have been skipped (because they were lost, for example), and will act accordingly.

If the sequence id increases by a large amount between frames (on the order of 100) the processing logic may implicitly reset any internal state that it keeps between frames.

Supplying a sequence id that is lower than a previous sequence id may lead to undefined results.

If a negative number is specified for the sequence id (the default if the parameter is not specified), it is assumed that the user wants to have the same behavior as the simple overload and the last used sequence id plus 1 will be used as the sequence id for the data that was supplied here.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • stride – The number of scalar elements to skip to get to the next line within the PushBroom frame

  • data – A pointer to a region of memory that contains the PushBroom frame, and must be of size stride * band_count * scalar_size (LambdaY case) or stride * width * scalar_size (LambdaX case) in bytes.

  • sequenceId – The sequence id to use for this frame, or a negative value (the default) to continue with the previously used sequence id plus 1
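A sketch that forwards the camera’s own, monotonically increasing frame counter as the sequence id, so that dropped frames are reflected in the processing state; names and the tag qualification are assumptions:

void processFrame(fluxEngine::ProcessingContext &context,
                  std::uint16_t const *frame,
                  std::int64_t lineStride,     // scalars between consecutive rows of the frame
                  std::int64_t frameNumber)    // monotonically increasing camera frame counter
{
    context.setSourceData(fluxEngine::ProcessingContext::PushBroomFrame_Tag{},   // assumed qualification
                          lineStride, frame, frameNumber);
    context.processNext();
}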

inline void processNext()

Process the next piece of data.

Processes the next piece of data. The source data must have been set via one of the overloads of setSourceData().

This method will return once processing of the current data has completed or an error has occurred. The current thread will be used as the thread 0 for parallelization purposes.

If an error occurs, this method may throw one of the following exceptions:

inline void resetState()

Reset the state of the processing context.

This method may only be called in between calls of processNext().

When the data to be processed is in the form of entire HSI cubes, that is, the context was created via the overload of the constructor that indicates HSI cubes should be processed, this method will have no effect. (Unless processing was aborted via abort(), in which case this must be called to clean up the state.)

When the data to be processed is in the form of consecutive PushBroom frames, that is, the context was created via the overload of the constructor that indicates PushBroom frames should be processed, this method will reset the internal state and make the context appear as if it had been freshly created. This means that any operation that remembers past state to gain spatial information in the y direction will be reset to the beginning. This affects mostly object-based operations.

This would typically be called when a system with a PushBroom camera is started up again after a pause, and the previously processed data has no direct relation to the data to be processed from this point onwards.

If an error occurs, this method may throw one of the following exceptions:

inline void abort()

Abort processing.

This method may be called from a different thread while processing is currently active. It will signal the processing context to abort processing. This method will return immediately, but the processing context is likely still active. Use the wait() method to wait until the processing context is no longer active.

After a call to this method the processing context needs to be reset via the method resetState() before it may be used again.

If an error occurs, this method may throw one of the following exceptions:

inline void wait()

Wait until processing or an abort is complete.

This method may be called from a different thread while processing is currently active. It will wait until the processing context is not in use anymore, either because processing has completed in the mean time, or an abort was requested and the abort has completed.

Note that ProcessingContext::processNext() already blocks and this method must only be used from different threads that also want to wait for the processing of a specific context to complete.

If an error occurs, this method may throw one of the following exceptions:
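A simplified sketch of aborting from a second thread; it assumes that processNext() reports the abort via an exception in the processing thread, and it glosses over the race where abort() could run before processing has actually started:

#include <thread>

void abortProcessing(fluxEngine::ProcessingContext &context)
{
    std::thread worker([&context] {
        try {
            context.processNext();            // blocks until done or aborted
        } catch (...) {
            // An aborted run is assumed to surface as an exception here.
        }
    });

    context.abort();       // request the abort; returns immediately
    context.wait();        // block until the context is no longer active
    worker.join();

    context.resetState();  // required after an abort before processing again
}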

inline OutputSinkData outputSinkData(int sinkIndex) const

Get the resulting output sink data of a given processing context.

See the documentation of the OutputSinkData structure for a detailed discussion on how to interpret the resulting data.

A memory region returned by this method will be invalidated the next time the user performs data processing again, resets the state of the context, or destroys the context.

Parameters

sinkIndex – The index of the output sink to obtain the output data from

Returns

The output data of the chosen output sink from the last processing operation that was performed
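Putting the pieces together, a single processing step might look like the sketch below; how the result is interpreted depends on the sink’s storage type and is described by the OutputSinkData documentation:

void processAndFetch(fluxEngine::ProcessingContext &context)
{
    context.processNext();
    fluxEngine::OutputSinkData result = context.outputSinkData(0);
    // Consume (or copy) everything referenced by 'result' now: it is invalidated by
    // the next processNext() or resetState() call, or by destroying the context.
    (void) result;
}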

inline explicit operator bool() const noexcept

Boolean conversion operator.

This allows the user to easily check if a variable of this type currently holds a valid processing context. For example:

if (context) {
    // the context is valid
}

Instrument Data Standardization And Processing

These methods of the ProcessingContext class are for use in conjunction with instrument devices.

inline void setSourceData(BufferInfo const &deviceBuffer, int64_t overrideSequenceId = -1)

Set the next input data to be processed from instrument device buffer.

This method allows the user to specify the sequence id to use for the input data. The sequence id is a number that has to increase between individual buffers that are being processed. In an ideal world the sequence id will be incremented by 1 between each buffer that is being processed. If the sequence id increases by more than one, the processing logic assumes that buffers have been skipped (because they were lost, for example), and will act accordingly. (This only affects processing when state is tracked between invocations, such as when processing data from pushbroom cameras, which allows filters such as object detectors to work.)

If the sequence id increases by a large amount between buffers (on the order of 100) the processing logic may implicitly reset any internal state that it keeps between buffers.

Supplying a sequence id that is lower than a previous sequence id may lead to undefined results.

If a negative number is specified for the sequence id, the frame number stored in the buffer will be used instead.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • deviceBuffer – The instrument device buffer to use. It must not yet have been returned by the user

  • overrideSequenceId – The sequence id to use, or -1 if the buffer number stored in the buffer is to be used instead
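A sketch of a driver loop; acquireNextBuffer() and releaseBuffer() are hypothetical placeholders for whatever buffer retrieval mechanism your instrument integration provides, and only the setSourceData()/processNext() calls are taken from this API:

// Hypothetical placeholders, not part of this API:
fluxEngine::BufferInfo acquireNextBuffer(fluxEngine::InstrumentDevice *device);
void releaseBuffer(fluxEngine::InstrumentDevice *device, fluxEngine::BufferInfo const &buffer);

void acquisitionLoop(fluxEngine::InstrumentDevice *device,
                     fluxEngine::ProcessingContext &context,
                     bool const &keepRunning)
{
    while (keepRunning) {
        fluxEngine::BufferInfo buffer = acquireNextBuffer(device);   // placeholder
        context.setSourceData(buffer);   // sequence id taken from the buffer itself
        context.processNext();
        // Only return the buffer to the driver after processing has finished.
        releaseBuffer(device, buffer);   // placeholder
    }
}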

inline void setSourceData(PersistentBufferInfo const &persistentBuffer, int64_t overrideSequenceId = -1)

Set the next input data to be processed from persistent buffer.

This method allows the user to specify the sequence id to use for the input data. The sequence id is a number that has to increase between individual buffers that are being processed. In an ideal world the sequence id will be incremented by 1 between each buffer that is being processed. If the sequence id increases by more than one, the processing logic assumes that buffers have been skipped (because they were lost, for example), and will act accordingly. (This only affects processing when state is tracked between invocations, such as when processing data from pushbroom cameras, which allows filters such as object detectors to work.)

If the sequence id increases by a large amount between buffers (on the order of 100) the processing logic may implicitly reset any internal state that it keeps between buffers.

Supplying a sequence id that is lower than a previous sequence id may lead to undefined results.

If a negative number is specified for the sequence id, the frame number stored in the buffer will be used instead.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • persistentBuffer – The persistent buffer to use

  • overrideSequenceId – The sequence id to use, or -1 if the buffer number stored in the buffer is to be used instead

inline void setSourceData(MeasurementList const &measurementList, int index)

Set processing context source data from a specific measurement.

Sets the source data of a processing context from a given measurement.

This method will typically be called on processing contexts that were created with the ProcessingContext::createMeasurementProcessingContext() method, but it is possible to use it with contexts created via the standard constructors of ProcessingContext that take a ProcessingContext::HSICube as their first argument if the measurement is a HSI cube and has the correct structure (same scalar type, same wavelengths, same value type). Note that since all HSI cube measurements are stored in BIP storage order in fluxEngine (even if they were stored in another storage order on disk before they were loaded), contexts created for other storage orders will not be compatible with this method. In that case the user must provide the cube data manually via the setSourceData() overload that accepts a ProcessingContext::HSICube as its first argument (and transpose the data outside of fluxEngine beforehand).

Object lifetime note: it is safe to free the measurement list after calling this method, even if processing has not started yet. The data of the measurement will remain in memory (though no copy will be created) until any overload of ProcessingContext::setSourceData() is called again successfully, ProcessingContext::resetState() is called, or the processing context is destroyed.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • measurementList – The measurement list

  • index – Indicates which measurement in the list should be used. The first measurement is at index 0.
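A sketch that processes every measurement of a loaded list with a context created via createMeasurementProcessingContext(); obtaining the number of measurements is assumed to be possible through the MeasurementList class and is passed in explicitly here:

void processMeasurements(fluxEngine::ProcessingContext &context,
                         fluxEngine::MeasurementList const &measurementList,
                         int measurementCount)   // assumed to be known from the list
{
    for (int i = 0; i < measurementCount; ++i) {
        context.setSourceData(measurementList, i);
        context.processNext();
        // Consume context.outputSinkData(...) here before the next iteration.
    }
}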

static inline ProcessingContext createInstrumentPreviewContext(InstrumentDevice *device)

Create a processing context for previewing instrument data.

When processing data from an instrument the raw buffer data will often not be in a form that is very useful to perform processing with. For example, PushBroom HSI cameras can have multiple different orientations for the spectrograph, making the interpretation of the data camera-dependent. Furthermore, some cameras may return data in a packed buffer scalar type (see BufferScalarType for further details) that may not be easily interpretable by the user.

When recording data, or using data for processing in a model, a set of preprocessing steps will be taken automatically to ensure that the data is normalized. These steps may carry some processing cost though, and if the user simply wants to obtain data to display as a preview, this simpler preview context might be useful.

Instrument drivers that don’t provide information about how to perform the minimally necessary normalization steps will fall back on the preprocessing steps required for data recording here, with some default settings, such as no wavelength normalization.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

  • For a monochrome polarization camera this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the various polarization directions for that camera, regardless of the exact construction (mosaic imager, beam-split multi-camera with various polarization filters, etc.).

Note that corrections might not have been applied to the data at this point. For example, the spectral dimension of a HSI camera (be it PushBroom or imager) may not correspond to actual physical wavelengths yet. Also, any software-based post-processing, such as software binning, has not been applied to this data.

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters

device – The instrument device

Returns

The preview processing context
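A preview sketch; buffer acquisition and release are again hypothetical placeholders, and the normalized result is read from the virtual output sink 0 as described above:

// Hypothetical placeholders, not part of this API:
fluxEngine::BufferInfo acquireNextBuffer(fluxEngine::InstrumentDevice *device);
void releaseBuffer(fluxEngine::InstrumentDevice *device, fluxEngine::BufferInfo const &buffer);

void previewOnce(fluxEngine::InstrumentDevice *device)
{
    fluxEngine::ProcessingContext preview =
        fluxEngine::ProcessingContext::createInstrumentPreviewContext(device);

    fluxEngine::BufferInfo buffer = acquireNextBuffer(device);   // placeholder
    preview.setSourceData(buffer);
    preview.processNext();

    fluxEngine::OutputSinkData normalized = preview.outputSinkData(0);
    // Display or copy the normalized tensor before the next processNext() call.
    (void) normalized;

    releaseBuffer(device, buffer);                               // placeholder
}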

static inline ProcessingContext createInstrumentPreviewContext(InstrumentDevice *device, ProcessingQueueSet &processingQueueSet)

Create a processing context for previewing instrument data.

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

When processing data from an instrument the raw buffer data will often not be in a form that is very useful to perform processing with. For example, PushBroom HSI cameras can have multiple different orientations for the spectrograph, making the interpretation of the data camera-dependent. Furthermore, some cameras may return data in a packed buffer scalar type (see BufferScalarType for further details) that may not be easily interpretable by the user.

When recording data, or using data for processing in a model, a set of preprocessing steps will be taken automatically to ensure that the data is normalized. These steps may carry some processing cost though, and if the user simply wants to obtain data to display as a preview, this simpler preview context might be useful.

Instrument drivers that don’t provide information about how to perform the minimally necessary normalization steps will fall back on the preprocessing steps required for data recording here, with some default settings, such as no wavelength normalization.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

  • For a monochrome polarization camera this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the various polarization directions for that camera, regardless of the exact construction (mosaic imager, beam-split multi-camera with various polarization filters, etc.).

Note that corrections might not have been applied to the data at this point. For example, the spectral dimension of a HSI camera (be it PushBroom or imager) may not correspond to actual physical wavelengths yet. Also, any software-based post-processing, such as software binning, has not been applied to this data.

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • processingQueueSet – The processing queue set to use

Returns

The preview processing context

static inline HSIRecordingResult createInstrumentHSIRecordingContext(InstrumentDevice *device, ValueType valueType, InstrumentParameters const &instrumentParameters, std::vector<double> const &targetWavelengths = {})

Create a processing context (HSI and spectrometer data recording)

Creates a processing context that may be used to record HSI data from an instrument; this includes spectrometers.

Processing contexts of this type may be used to record data from a spectrometer or HSI camera.

The user may request normalization to a regularized wavelength grid (see the targetWavelengths parameter); otherwise the instrument’s raw wavelengths will be returned. For example, a HSI camera that has a typical spectral range from 400 to 1000 nanometers might actually have wavelengths of the form 400.21, 402.35, etc. If a regularized wavelength grid is specified, all values will be interpolated before the data is returned to the user.

If a wavelength grid is provided, the wavelengths field of the result will contain the user-requested wavelength grid. If no wavelength grid is provided (an empty vector is passed) the wavelengths field of the result will contain the unregularized wavelengths of the instrument device itself.

The user may optionally provide a white and dark reference to reference the data or normalize the references to be stored next to the intensity data. This method accepts a structure that contains pointers to BufferContainer objects that contain the actual raw reference data. There is also an overload that supports reference data provided in raw form by the user.

The user must specify what value type they want the data in. The following options exist:

  • The user requests data in intensities (using the ValueType::Intensity value type), and provides no white reference: the data will be provided in intensities and the user can only store the intensity data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in intensities (using the ValueType::Intensity value type), but nevertheless provides a white reference: the recording data itself will still be in intensities, but a normalized white reference will be returned that may be saved next to the measurement data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in reflectances or absorbances, but provides no white reference: if the instrument returns pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) this will succeed. If the device only provides its data in intensities (most devices), but no white reference is provided, context creation will fail.

  • The user requests data in reflectances or absorbances, and provides a white reference measurement: the data returned by the processing context will be of the value type the user selected, and referencing will occur before the data is returned to the user.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

The processing context only uses the device parameter to obtain the required information to create the context; the context is independent of the device. However, it will be associated with the fluxEngine handle of the device, and it will require that the data provided is in the format that the device currently produces. This means that reconnecting to the same device and applying the same settings allows the user to reuse the processing context. Also, if the data returned by the device is structurally the same (because it has the same buffer dimensions) but does not match the context semantically (for example, the user selected a different ROI in the spectral dimension, but of the same size), the context will still process the data, even though the result will not be sensible.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • valueType – The requested value type of the data that is to be returned.

  • instrumentParameters – The instrument parameters, mainly the previously measured reference data.

  • targetWavelengths – Optional: a list of wavelengths to regularize the wavelengths to. Supply an empty vector in case the wavelengths of the instrument are to be used and the data should not be normalized in this manner.

Returns

A result structure containing the processing context as well as the normalized references.
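A sketch, assuming the instrument parameters already carry the previously measured white (and optionally dark) reference buffers, that requests reflectances regularized onto a 5 nm grid from 400 to 1000 nm:

fluxEngine::HSIRecordingResult makeRecordingContext(
    fluxEngine::InstrumentDevice *device,
    fluxEngine::InstrumentParameters const &instrumentParameters)
{
    std::vector<double> grid;
    for (double wavelength = 400.0; wavelength <= 1000.0; wavelength += 5.0)
        grid.push_back(wavelength);   // requested regular wavelength grid, in nanometers

    // The result carries the processing context, the normalized references, and
    // the wavelengths that the returned data will use.
    return fluxEngine::ProcessingContext::createInstrumentHSIRecordingContext(
        device, fluxEngine::ValueType::Reflectance, instrumentParameters, grid);
}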

static inline HSIRecordingResult createInstrumentHSIRecordingContext(InstrumentDevice *device, ProcessingQueueSet &processingQueueSet, ValueType valueType, InstrumentParameters const &instrumentParameters, std::vector<double> const &targetWavelengths = {})

Create a processing context (HSI and spectrometer data recording)

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

Creates a processing context that may be used to record HSI data from an instrument; this includes spectrometers.

Processing contexts of this type may be used to record data from a spectrometer or HSI camera.

The user may request normalization to a regularized wavelength grid (see the targetWavelengths parameter); otherwise the instrument’s raw wavelengths will be returned. For example, a HSI camera that has a typical spectral range from 400 to 1000 nanometers might actually have wavelengths of the form 400.21, 402.35, etc. If a regularized wavelength grid is specified, all values will be interpolated before the data is returned to the user.

If a wavelength grid is provided, the wavelengths field of the result will contain the user-requested wavelength grid. If no wavelength grid is provided (an empty vector is passed) the wavelengths field of the result will contain the unregularized wavelengths of the instrument device itself.

The user may optionally provide a white and dark reference to reference the data or normalize the references to be stored next to the intensity data. This method accepts a structure that contains pointers to BufferContainer objects that contain the actual raw reference data. There is also an overload that supports reference data provided in raw form by the user.

The user must specify what value type they want the data in. The following options exist:

  • The user requests data in intensities (using the ValueType::Intensity value type), and provides no white reference: the data will be provided in intensities and the user can only store the intensity data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in intensities (using the ValueType::Intensity value type), but nevertheless provides a white reference: the recording data itself will still be in intensities, but a normalized white reference will be returned that may be saved next to the measurement data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in reflectances or absorbances, but provides no white reference: if the instrument returns pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) this will succeed. If the device only provides its data in intensities (most devices), but no white reference is provided, context creation will fail.

  • The user requests data in reflectances or absorbances, and provides a white reference measurement: the data returned by the processing context will be of the value type the user selected, and referencing will occur before the data is returned to the user.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

The processing context only uses the device parameter to obtain the required information to create the context; the context is independent of the device. However, it will be associated with the fluxEngine handle of the device, and it will require that the data provided is in the format that the device currently produces. This means that reconnecting to the same device and applying the same settings allows the user to reuse the processing context. Also, if the data returned by the device is structurally the same (because it has the same buffer dimensions) but does not match the context semantically (for example, the user selected a different ROI in the spectral dimension, but of the same size), the context will still process the data, even though the result will not be sensible.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • processingQueueSet – The processing queue set to use

  • valueType – The requested value type of the data that is to be returned.

  • instrumentParameters – The instrument parameters, mainly the previously measured reference data.

  • targetWavelengths – Optional: a list of wavelengths to regularize the wavelengths to. Supply an empty vector in case the wavelengths of the instrument are to be used and the data should not be normalized in this manner.

Returns

A result structure containing the processing context as well as the normalized references.

static inline HSIRecordingResult createInstrumentHSIRecordingContext(InstrumentDevice *device, ValueType valueType, InstrumentParametersEx const &instrumentParameters, std::vector<double> const &targetWavelengths = {})

Create a processing context (HSI and spectrometer data recording)

Creates a processing context that may be used to record HSI data from an instrument; this includes spectrometers.

Processing contexts of this type may be used to record data from a spectrometer or HSI camera.

The user may request normalization to a regularized wavelength grid (see the targetWavelengths parameter); otherwise the instrument’s raw wavelengths will be returned. For example, a HSI camera that has a typical spectral range from 400 to 1000 nanometers might actually have wavelengths of the form 400.21, 402.35, etc. If a regularized wavelength grid is specified, all values will be interpolated before the data is returned to the user.

If a wavelength grid is provided, the wavelengths field of the result will contain the user-requested wavelength grid. If no wavelength grid is provided (an empty vector is passed) the wavelengths field of the result will contain the unregularized wavelengths of the instrument device itself.

The user may optionally provide a white and dark reference to reference the data or normalize the references to be stored next to the intensity data. This method accepts a structure that the user can fill with pointers to the raw data of the measured references. There is also an overload that supports reference data in the form of BufferContainer objects.

The user must specify what value type they want the data in. The following options exist:

  • The user requests data in intensities (using the ValueType::Intensity value type), and provides no white reference: the data will be provided in intensities and the user can only store the intensity data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in intensities (using the ValueType::Intensity value type), but nevertheless provides a white reference: the recording data itself will still be in intensities, but a normalized white reference will be returned that may be saved next to the measurement data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in reflectances or absorbances, but provides no white reference: if the instrument returns pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) this will succeed. If the device only provides its data in intensities (most devices), but no white reference is provided, context creation will fail.

  • The user requests data in reflectances or absorbances, and provides a white reference measurement: the data returned by the processing context will be of the value type the user selected, and referencing will occur before the data is returned to the user.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

The processing context only uses the device parameter to obtain the required information to create the context; the context is independent of the device. However, it will be associated with the fluxEngine handle of the device, and it will require that the data provided is in the format that the device currently produces. This means that reconnecting to the same device and applying the same settings allows the user to reuse the processing context. Also, if the data returned by the device is structurally the same (because it has the same buffer dimensions) but does not match the context semantically (for example, the user selected a different ROI in the spectral dimension, but of the same size), the context will still process the data, even though the result will not be sensible.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • valueType – The requested value type of the data that is to be returned.

  • instrumentParameters – The instrument parameters, mainly the previously measured reference data.

  • targetWavelengths – Optional: a list of wavelengths to regularize the wavelengths to. Supply an empty vector in case the wavelengths of the instrument are to be used and the data should not be normalized in this manner.

Returns

A result structure containing the processing context as well as the normalized references.

static inline HSIRecordingResult createInstrumentHSIRecordingContext(InstrumentDevice *device, ProcessingQueueSet &processingQueueSet, ValueType valueType, InstrumentParametersEx const &instrumentParameters, std::vector<double> const &targetWavelengths = {})

Create a processing context (HSI and spectrometer data recording)

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

Creates a processing context that may be used to record HSI data from an instrument; this includes spectrometers.

Processing contexts of this type may be used to record data from a spectrometer or HSI camera.

The user may request normalization to a regularized wavelength grid (see the targetWavelengths parameter); otherwise the instrument’s raw wavelengths will be returned. For example, a HSI camera that has a typical spectral range from 400 to 1000 nanometers might actually have wavelengths of the form 400.21, 402.35, etc. If a regularized wavelength grid is specified, all values will be interpolated before the data is returned to the user.

If a wavelength grid is provided, the wavelengths field of the result will contain the user-requested wavelength grid. If no wavelength grid is provided (an empty vector is passed) the wavelengths field of the result will contain the unregularized wavelengths of the instrument device itself.

The user may optionally provide a white and dark reference to reference the data or normalize the references to be stored next to the intensity data. This method accepts a structure that the user can fill with pointers to the raw data of the measured references. There is also an overload that supports reference data in the form of BufferContainer objects.

The user must specify what value type they want the data in. The following options exist:

  • The user requests data in intensities (using the ValueType::Intensity value type), and provides no white reference: the data will be provided in intensities and the user can only store the intensity data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in intensities (using the ValueType::Intensity value type), but nevertheless provides a white reference: the recording data itself will still be in intensities, but a normalized white reference will be returned that may be saved next to the measurement data.

    In case of instruments that can only return pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) attempting to create such a context will result in an error.

  • The user requests data in reflectances or absorbances, but provides no white reference: if the instrument returns pre-referenced data (such as virtual devices that return reflectances, or devices that perform the referencing already in hardware) this will succeed. If the device only provides its data in intensities (most devices), but no white reference is provided, context creation will fail.

  • The user requests data in reflectances or absorbances, and provides a white reference measurement: the data returned by the processing context will be of the value type the user selected, and referencing will occur before the data is returned to the user.

The normalized data of the resulting processing context can be obtained by querying the output sink with index 0. (There is no actual output sink with that index, as the processing context does not have an associated model, but the result of the preprocessing steps will be returned in this manner.)

How the data is normalized will depend on the type of instrument:

  • For a spectrometer this will result in a single vector of intensities. (Tensor of order 1.)

  • For a HSI PushBroom camera this will result in a tensor of order 3, with the first dimension always being 1 (because the y direction only has a single entry), the second dimension being the spatial dimension, and the final dimension being the spectral dimension, regardless of spectrograph orientation.

  • For HSI imager cameras this will result in a tensor of order 3, with the first dimension corresponding to the y dimension, the second to the x dimension, and the third to the spectral dimension, regardless of how the cube has been obtained by the camera (mosaic pattern, filter wheel, etc.).

Data must be supplied to this processing context via the overload of ProcessingContext::setSourceData() that takes a BufferInfo or a PersistentBufferInfo.

The processing context only uses the device parameter to obtain the required information to create the context; the context is independent of the device. However, it will be associated with the fluxEngine handle of the device, and it will require that the data provided is in the format that the device currently produces. This means that reconnecting to the same device and applying the same settings allows the user to reuse the processing context. Also, if the data returned by the device is structurally the same (because it has the same buffer dimensions) but does not match the context semantically (for example, the user selected a different ROI in the spectral dimension, but of the same size), the context will still process the data, even though the result will not be sensible.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • processingQueueSet – The processing queue set to use

  • valueType – The requested value type of the data that is to be returned.

  • instrumentParameters – The instrument parameters, mainly the previously measured reference data.

  • targetWavelengths – Optional: a list of wavelengths to regularize the wavelengths to. Supply an empty vector in case the wavelengths of the instrument are to be used and the data should not be normalized in this manner.

Returns

A result structure containing the processing context as well as the normalized references.

static inline ProcessingContext createInstrumentProcessingContext(InstrumentDevice *device, Model &model, InstrumentParameters const &parameters)

Create a processing context (instrument device data processing)

Create a processing context that may be used to directly process data obtained from an instrument with a model.

The model must be of a compatible type to the data returned from the instrument.

The handle associated with the device and the model must be the same.

If the instrument provides data in intensities and the model requires referenced data (the common case) the user must provide a white reference, otherwise context creation will fail.

If the instrument provides data in intensities and the model requires intensity data, any white reference will be ignored.

If the instrument provides pre-referenced data (because it is a virtual instrument returning reflectances, or referencing is performed in hardware) the model must require referenced data (such as reflectances or absorbances), otherwise the context creation will fail.

For HSI cameras and spectrometers: the wavelengths will automatically be regularized onto the grid specified in the model.

When the user wants to specify a white reference, this method accepts a structure containing pointers to BufferContainer objects that hold the actual raw reference data. There is also an overload that supports reference data provided in raw form by the user.
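
As a rough sketch (assuming device, model, and BufferContainer objects whiteBuffers and darkBuffers holding previously recorded reference frames already exist), the parameters might be filled in as follows:

ProcessingContext::InstrumentParameters parameters;
parameters.whiteReference = &whiteBuffers; // BufferContainer with the white reference frames
parameters.darkReference = &darkBuffers;   // optional; currently ignored without a white reference
auto context = ProcessingContext::createInstrumentProcessingContext(device, model, parameters);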

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • model – The model to process the data with

  • parameters – The instrument parameters, mainly the previously measured reference data.

Returns

The processing context that will process data from the instrument with the given model

static inline ProcessingContext createInstrumentProcessingContext(InstrumentDevice *device, ProcessingQueueSet &processingQueueSet, Model &model, InstrumentParameters const &parameters)

Create a processing context (instrument device data processing)

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

Create a processing context that may be used to directly process data obtained from an instrument with a model.

The model must be of a compatible type to the data returned from the instrument.

The handle associated with the device and the model must be the same.

If the instrument provides data in intensities and the model requires referenced data (the common case) the user must provide a white reference, otherwise context creation will fail.

If the instrument provides data in intensities and the model requires intensity data, any white reference will be ignored.

If the instrument provides pre-referenced data (because it is a virtual instrument returning reflectances, or referencing is performed in hardware) the model must require referenced data (such as reflectances or absorbances), otherwise the context creation will fail.

For HSI cameras and spectrometers: the wavelengths will automatically be regularized onto the grid specified in the model.

When the user wants to specify a white reference, this method accepts a structure containing pointers to BufferContainer objects that hold the actual raw reference data. There is also an overload that supports reference data provided in raw form by the user.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • processingQueueSet – The processing queue set to use

  • model – The model to process the data with

  • parameters – The instrument parameters, mainly the previously measured reference data.

Returns

The processing context that will process data from the instrument with the given model

static inline ProcessingContext createInstrumentProcessingContext(InstrumentDevice *device, Model &model, InstrumentParametersEx const &parameters)

Create a processing context (instrument device data processing)

Create a processing context that may be used to directly process data obtained from an instrument with a model.

The model must be of a compatible type to the data returned from the instrument.

The handle associated with the device and the model must be the same.

If the instrument provides data in intensities and the model requires referenced data (the common case) the user must provide a white reference, otherwise context creation will fail.

If the instrument provides data in intensities and the model requires intensity data, any white reference will be ignored.

If the instrument provides pre-referenced data (because it is a virtual instrument returning reflectances, or referencing is performed in hardware) the model must require referenced data (such as reflectances or absorbances), otherwise the context creation will fail.

For HSI cameras and spectrometers: the wavelengths will automatically be regularized onto the grid specified in the model.

This method accepts a structure that the user can fill with pointers to the raw data of the measured references when they want to specify a white reference. There is also an overload that supports reference data in the form of BufferContainer objects.
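
As a rough sketch, assuming the instrument returns buffers of order 2 with example dimensions (dim0, dim1), that whiteData points to N contiguous raw reference frames of the instrument's scalar type, and that the strides are given in elements for a contiguous layout (an assumption, not confirmed by this documentation):

std::int64_t const N = 10;     // number of white reference frames (example value)
std::int64_t const dim0 = 320; // first dimension of the device buffer (example value)
std::int64_t const dim1 = 256; // second dimension of the device buffer (example value)

ProcessingContext::InstrumentParametersEx parameters;
parameters.referenceOrder = 3; // order of the device buffer (2) plus one for the averaging dimension
parameters.whiteReference = whiteData;
parameters.whiteReferenceDimensions = {N, dim0, dim1};
parameters.whiteReferenceStrides = {dim0 * dim1, dim1, 1}; // contiguous layout, element strides assumed
auto context = ProcessingContext::createInstrumentProcessingContext(device, model, parameters);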

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • model – The model to process the data with

  • parameters – The instrument parameters, mainly the previously measured reference data.

Returns

The processing context that will process data from the instrument with the given model

static inline ProcessingContext createInstrumentProcessingContext(InstrumentDevice *device, ProcessingQueueSet &processingQueueSet, Model &model, InstrumentParametersEx const &parameters)

Create a processing context (instrument device data processing)

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

Create a processing context that may be used to directly process data obtained from an instrument with a model.

The model must be of a compatible type to the data returned from the instrument.

The handle associated with the device and the model must be the same.

If the instrument provides data in intensities and the model requires referenced data (the common case) the user must provide a white reference, otherwise context creation will fail.

If the instrument provides data in intensities and the model requires intensity data, any white reference will be ignored.

If the instrument provides pre-referenced data (because it is a virtual instrument returning reflectances, or referencing is performed in hardware) the model must require referenced data (such as reflectances or absorbances), otherwise the context creation will fail.

For HSI cameras and spectrometers: the wavelengths will automatically be regularized onto the grid specified in the model.

This method accepts a structure that the user can fill with pointers to the raw data of the measured references when they want to specify a white reference. There is also an overload that supports reference data in the form of BufferContainer objects.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Parameters
  • device – The instrument device

  • processingQueueSet – The processing queue set to use

  • model – The model to process the data with

  • parameters – The instrument parameters, mainly the previously measured reference data.

Returns

The processing context that will process data from the instrument with the given model

static inline ProcessingContext createMeasurementProcessingContext(MeasurementList const &measurementList, int index, Model &model, uint64_t flags)

Create a processing context from a measurement.

This will create a processing context from a measurement so that the measurement (and other measurements of the same structure) can be processed with a given model.

This is useful if the user wants to load a measurement from disk, and then immediately process the data. The following example shows how this may be used to process data:

ProcessingContext context;
MeasurementList measurements;
measurements = loadMeasurementList(handle, format, fileName);
context = ProcessingContext::createMeasurementProcessingContext(measurements, 0, model, 0);
context.setSourceData(measurements, 0);
context.processNext();
After this code the user may extract the data from the output sinks in the model via the ProcessingContext::outputSinkData() method.

Note that if the measurement is a HSI cube, the context created by this function will be equivalent to a context created by the standard constructors of the ProcessingContext class if the structure of the measurement had been specified manually. For this reason the user may also use the ProcessingContext::setSourceData() overloads that take a ProcessingContext::HSICube as first parameter to set the source data for a processing context created by this method, not just the overloads that take a MeasurementList.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • measurementList – The measurement list

  • index – Indicates which measurement in the list should be used to initialize the processing context. If -1 is specified it will be assumed that the context should be capable of processing all measurements in the list, and an error will occur if the list contains mutually incompatible measurements. The first measurement is at index 0.

  • model – The model to process the data with

  • flags – Flags that influence the context creation. This may be 0, or an arbitrary bitwise OR of flags from the MeasurementProcessingContextFlag enumeration.

Returns

The associated processing context

static inline ProcessingContext createMeasurementProcessingContext(MeasurementList const &measurementList, int index, Model &model, ProcessingQueueSet &processingQueueSet, uint64_t flags)

Create a processing context from a measurement.

This will create a processing context from a measurement so that the measurement (and other measurements of the same structure) can be processed with a given model.

This overload allows the user to specify a processing queue set. If an invalid set is specified the default processing queue set of the handle will be used.

This is useful if the user wants to load a measurement from disk, and then immediately process the data. The following example shows how this may be used to process data:

ProcessingContext context;
MeasurementList measurements;
measurements = loadMeasurementList(handle, format, fileName);
context = ProcessingContext::createMeasurementProcessingContext(measurements, 0, model, 0);
context.setSourceData(measurements, 0);
context.processNext();
After this code the user may extract the data from the output sinks in the model via the ProcessingContext::outputSinkData() method.

Note that if the measurement is a HSI cube, the context created by this function will be equivalent to a context created by the standard constructors of the ProcessingContext class if the structure of the measurement had been specified manually. For this reason the user may also use the ProcessingContext::setSourceData() overloads that take a ProcessingContext::HSICube as first parameter to set the source data for a processing context created by this method, not just the overloads that take a MeasurementList.

If an error occurs, this method may throw one of the following exceptions:

Parameters
  • measurementList – The measurement list

  • index – Indicates which measurement in the list should be used to initialize the processing context. If -1 is specified it will be assumed that the context should be capable of processing all measurements in the list, and an error will occur if the list contains mutually incompatible measurements. The first measurement is at index 0.

  • model – The model to process the data with

  • processingQueueSet – The processing queue set to use

  • flags – Flags that influence the context creation. This may be 0, or an arbitrary bitwise OR of flags from the MeasurementProcessingContextFlag enumeration.

Returns

The associated processing context

struct HSICube_Tag

Tag structure to differentiate constructors.

This structure allows the user to indicate they want to call a constructor that will create a processing context for entire HSI cubes.

struct HSIRecordingResult

HSI Recording result.

This structure is the return value of the methods that create processing contexts for recording HSI data from a camera. It contains the resulting processing context, as well as further information describing the recording:

  • The wavelengths of the normalized data that is being returned.

  • If intensity data is requested, and references were specified during the creation of the processing context, the normalized reference data will also be returned. (Most notably, a white reference cube will be present; see the sketch below.)
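
A minimal sketch of how such a result might be consumed; the nesting of HSIRecordingResult under ProcessingContext and the function name are assumptions for illustration only:

void inspectRecordingResult(ProcessingContext::HSIRecordingResult const &result)
{
    // result.context is the processing context that will be driven with device data
    std::vector<double> const &wavelengths = result.wavelengths;
    if (result.whiteReference) {
        // a normalized white reference is present and may be saved alongside the recording
        auto dims = result.whiteReference.dimensions(); // only the first order() entries are valid
        (void)dims;
    }
    (void)wavelengths;
}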

Public Members

ProcessingContext context

The resulting processing context.

std::vector<double> wavelengths

The list of wavelengths of the recorded data.

This contains the wavelengths associated with the last dimension of the data being recorded.

ReferenceMeasurement whiteReference

Normalized white reference.

If a white reference was supplied during the creation of a recording processing context, and the user has requested intensity data, this will contain the normalized white reference that the user may save in addition to the data they will record.

ReferenceMeasurement darkReference

Normalized dark reference.

If a dark reference was supplied during the creation of a recording processing context, and the user has requested intensity data, this will contain the normalized dark reference that the user may save in addition to the data they will record.

CalibrationInfo calibrationInfo

The calibration information.

Calibration information for any recordings created with this context.

struct InstrumentParameters

Instrument parameters.

This structure describes common instrument parameters that may be supplied while creating a processing context for recording instrument data or processing it via a fluxEngine/fluxRuntime model.

Currently this exists to allow the user to supply a previously measured white and dark reference measurement.

This structure allows the user to supply the references in form of BufferContainer objects. There is also a second structure, ProcessingContext::InstrumentParametersEx, that allows the user to specify the references as raw data directly.

The pointers to the fields in this structure only need to remain valid for the duration of the call that creates the processing context; the data in the buffers will be copied during context creation.

Public Members

BufferContainer *whiteReference = {}

The white reference buffer.

Supply NULL here to indicate that no white reference is present.

BufferContainer *darkReference = {}

The dark reference buffer.

Supply NULL here to indicate that no dark reference is present.

Note that in the absence of a white reference a dark reference is currently ignored. (This may change in later versions of fluxEngine.)

std::uint64_t flags = {}

Additional context creation flags.

This must be either 0 or a bitwise OR of one or more of the following flags:

  • DeviceProcessingContext_CreationFlag_AdditionalPreview

struct InstrumentParametersEx

Instrument parameters (explicit version)

This structure describes common instrument parameters that may be supplied while creating a processing context for recording instrument data or processing it via a fluxEngine/fluxRuntime model.

Currently this exists to allow the user to supply a previously measured white and dark reference measurement.

This structure allows the user to supply the references in the form of raw data. The data must have the same scalar type as the buffers the instrument currently returns. The dimensions must be of the form (N, dims...), where N is the number of reference measurements and dims... are exactly the dimensions of the buffers the instrument currently returns.

Public Members

int referenceOrder = {}

The order of the reference tensors.

This must be exactly one more than the order of the buffer that the device returns.

If no references are set at all (both the whiteReference and darkReference fields are NULL), this may be 0 instead.

void const *whiteReference = {}

The white reference data.

A pointer to the start of the raw data containing the white reference.

If no white reference is provided by the user, this must be set to NULL.

std::array<int64_t, 5> whiteReferenceDimensions = {}

The dimensions of the white reference.

Only the first referenceOrder entries will be considered.

If no white reference is provided by the user (the whiteReference field is set to NULL), the values here will be ignored completely.

std::array<int64_t, 5> whiteReferenceStrides = {}

The strides of the white reference.

Only the first referenceOrder entries will be considered.

If no white reference is provided by the user (the whiteReference field is set to NULL), the values here will be ignored completely.

void const *darkReference = {}

The dark reference data.

A pointer to the start of the raw data containing the dark reference.

If no dark reference is provided by the user, this must be set to NULL.

std::array<int64_t, 5> darkReferenceDimensions = {}

The dimensions of the dark reference.

Only the first referenceOrder entries will be considered.

If no dark reference is provided by the user (the darkReference field is set to NULL), the values here will be ignored completely.

std::array<int64_t, 5> darkReferenceStrides = {}

The strides of the dark reference.

Only the first referenceOrder entries will be considered.

If no dark reference is provided by the user (the darkReference field is set to NULL), the values here will be ignored completely.

std::uint64_t flags = {}

Additional context creation flags.

This must be either 0 or a bitwise OR of one or more of the following flags:

  • DeviceProcessingContext_CreationFlag_AdditionalPreview

struct OutputSinkData

Output sink data.

This structure is returned by outputSinkData() and contains a pointer to the actual data of the output sink.

Public Functions

inline explicit operator TensorData() const

Get a TensorData view on the output sink data.

Obtain a TensorData view of the output sink data for more comfortable access. Note that the view has the same data lifetime guarantees as the output sink, i.e. as soon as ProcessingContext::processNext() is called again, or the processing context is destroyed, the data will no longer be valid.

This requires the output sink data to be of tensor type, which cannot be checked by looking at the output sink data alone.
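
For example (a minimal sketch, assuming sinkData was obtained from ProcessingContext::outputSinkData() for a sink of tensor type):

auto sinkData = context.outputSinkData(sinkId);
auto view = static_cast<TensorData>(sinkData); // explicit conversion; only valid until the next processNext() call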

Public Members

void const *data

The actual data that is being returned.

If the total size is 0 (see the sizes member) then this field might also be nullptr.

This is to be interpreted in different ways, depending on the kind of data the output sink returns:

  • For tensor data this is a tensor of the scalar type of the output sink that is contiguous in memory. The order of the tensor can be obtained from the tensor structure of the output sink (and will not change); the actual sizes might change and will be present in the sizes member. To obtain the pointer to the first element of the tensor one may use (assuming the data type of the tensor is a signed 16bit integer for this example):

    auto sinkData = context.outputSinkData(sinkId);
    auto p = static_cast<std::int16_t const*>(sinkData.data);
    

  • For object list data this is a pointer to the first element of a vector of OutputObject elements. It may be accessed in the following manner:

    auto sinkData = context.outputSinkData(sinkId);
    auto beginPointer = static_cast<OutputObject const*>(sinkData.data);
    

  • For object list data, if extended objects have been enabled, this is a pointer to the first element of a vector of OutputExtendedObject elements. It may be accessed in the following manner:

    auto sinkData = context.outputSinkData(sinkId);
    auto beginPointer = static_cast<OutputExtendedObject const*>(sinkData.data);
    

DataType dataType

The data type of the tensor being returned.

If the data is not a tensor the value here will have no meaning.

int order

The order of the tensor data being returned.

If the data is not a tensor this will be set to 1.

std::int64_t sizes[5]

The actual sizes of the data that is being returned.

  • For tensor data this will contain the actual sizes of the tensor being returned. The order of the tensor may be obtained from the tensor structure of the output sink. For example, for a tensor of order 3, sizes[0], sizes[1] and sizes[2] will contain the actual dimensions of the data being returned; all other entries in the sizes field must be ignored by the user and may contain arbitrary data (see the element-count sketch after this list).

  • For object list data only the first element is relevant and contains the actual number of objects being returned. All other elements must be ignored.
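
For instance, for tensor data the total number of elements can be computed from the order and the actual sizes (a small sketch):

auto sinkData = context.outputSinkData(sinkId);
std::int64_t elementCount = 1;
for (int i = 0; i < sinkData.order; ++i)
    elementCount *= sinkData.sizes[i]; // only the first `order` entries are meaningful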

std::int64_t strides[5]

The strides of the data that is being returned.

If the data is not a tensor this will be set to zero.

struct OutputSinkInfo

Output sink information.

This convenience data structure is returned by outputSinkInfo() and outputSinkInfos() when compiling with a C++17 compiler. In that case it contains the meta information about an output sink together with its structure, combined in this single structure.

Public Members

OutputSinkMetaInfo metaInfo

The meta information of the output sink.

std::variant<std::monostate, OutputSinkTensorStructure, OutputSinkObjectListStructure> structure

The structure of the output sink.

This uses std::variant to contain one of std::monostate, OutputSinkTensorStructure, or OutputSinkObjectListStructure, depending on the storage type of the output sink.
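
With a C++17 compiler the structure variant might be inspected as follows (a minimal sketch; that the structure types are nested inside ProcessingContext, and that outputSinkInfo() takes the sink id, are assumptions):

auto info = context.outputSinkInfo(sinkId);
if (auto const *tensor = std::get_if<ProcessingContext::OutputSinkTensorStructure>(&info.structure)) {
    // tensor->maxSizes holds the maximum dimensions; its size gives the tensor order
    auto order = tensor->maxSizes.size();
    (void)order;
}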

struct OutputSinkMetaInfo

Meta Information about an Output Sink.

This structure contains generic information about a given output sink. It is returned by outputSinkMetaInfo() and outputSinkMetaInfos() methods, and when compiling with a C++17 compiler is part of the OutputSinkInfo structure.

Public Members

int outputId = {}

The output id of the output sink.

This was configured during model creation in fluxTrainer.

OutputStorageType storageType = {}

The storage type of the output sink.

This determines what kind of data the output sink returns, depending on where in the processing chain it is connected.

std::string name

The name of the output sink.

This is encoded as UTF-8

std::int64_t inputDelay = {}

The input delay of the output sink.

This is only relevant when processing sequences of PushBroom frames; please see the advanced topics chapter in the documentation for further details.

struct OutputSinkObjectListQualityStructure

Information about the quality data structure of the object list of an output sink.

This is returned by the outputSinkObjectListStatisticsStructure() method.

Public Members

bool present = {}

Whether quality data is present.

DataType dataType = {}

The data type of the quality data.

This will be an integer data type that depends on the model that is being processed.

std::int64_t count = {}

The number of quality entries present.

Quality data will always be a one-dimensional array.

struct OutputSinkObjectListStatisticsStructure

Information about the statistics data structure of the object list of an output sink.

This is returned by the outputSinkObjectListStatisticsStructure() method.

Public Members

bool present = {}

Whether statistics data is present.

DataType dataType = {}

The data type of all statistics quantities.

This will be either DataType::Float32 or DataType::Float64. The position data of the minima and maxima will always have a data type DataType::Int64, regardless of this value.

int order = {}

The tensor order of the statistics quantities.

This will typically be 1, but not necessarily so.

std::int64_t dimensions[5] = {}

The dimensions of the statistics quantities.

Only the dimensions up to order will have a meaning.

struct OutputSinkObjectListStructure

Information about the object list structure of an output sink.

This is returned by the outputSinkObjectListStructure() method.

Public Members

std::int64_t maxObjectCount = {}

The maximum number of objects this output sink could return.

Note that this number is typically much larger than what will be returned in practice, as this is calculated to capture an absolute limit, taking into account unrealistic geometries.

std::int64_t additionalDataCount = {}

The number of entries in the additional object data vector.

Each object may carry additional per-object data with it that has been connected to a second input of the output sink.

Additional object data will always be a vector of scalar values, whose size is always fixed.

This may be zero if there is no additional object data present.

DataType additionalDataType = {}

The scalar data type of additional object data.

If additionalDataCount is zero this field does not have any meaning.

struct OutputSinkTensorStructure

Information about the tensor structure of an output sink.

This is returned by the outputSinkTensorStructure() method.

Public Members

std::vector<std::int64_t> maxSizes

The maximum sizes.

Indicates the maximum dimensions of the tensor data returned by the output sink. The order of the tensor is given by the number of entries in this vector.

std::vector<std::int64_t> fixedSizes

The fixed sizes.

This will always have the same number of entries as maxSizes, indicating the order of the tensor. Any entry here that has a positive value indicates that that dimension will always be of that size when the output sink data is read. Otherwise a value of -1 indicates that the size is dynamic and may change at runtime.

One exception here is when processing PushBroom frames and the output sink is not placed after any object operation: in that case the first entry here will always be 1 (and the other two entries will be positive as well), indicating that data will be present for each PushBroom frame. However, if the output sink has an input delay (see OutputSinkMetaInfo::inputDelay), the output sink may return no data at all until that input delay has passed.

DataType dataType = {}

The scalar data type of the tensor returned here.

This corresponds to the data type configured in the output sink during model creation.

struct PushBroomFrame_Tag

Tag structure to differentiate constructors.

This structure allows the user to indicate they want to call a constructor that will create a processing context for a sequence of PushBroom frames.

class ReferenceMeasurement

Resulting Reference Measurement (Recording Processing Contexts)

This class describes a reference measurement result in a normalized form (for HSI cameras this would be in the form of a HSI cube, for example) that was obtained while creating a recording processing context. It will be stored within a HSIRecordingResult structure to return the normalized references that can be used in conjunction with the data the user records. (These could be used later to initialize an offline processing context.)

When data in referenced form (not intensities) is requested, no reference measurements will be returned while creating such a recording context.

This class acts somewhat like a shared pointer and provides an explicit boolean conversion operator that allows the user to determine whether it holds an actual measurement or is empty.

Public Functions

inline void const *data() const

Get the pointer to the normalized reference data.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The pointer to the normalized reference data

inline DataType dataType() const

Get the scalar type of the reference data.

This will be the same scalar type as the recording result.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The scalar type of the reference data

inline int order() const

Get the tensor order of the reference data.

This will be 3 for all current types of HSI cameras, resulting in a HSI cube in BIP storage order (y, x, wavelengths).

For spectrometers this will be 1 (the dimension for the wavelengths).

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The tensor order of the reference data

inline std::array<int64_t, 5> dimensions() const

Get the dimensions of the tensor.

Only the first order() dimensions will contain valid values. The user must ignore other values in this array and must not rely on them.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The dimensions of the tensor

inline std::array<int64_t, 5> strides() const

Get the strides of the tensor.

Only the first order() strides will contain valid values. The user must ignore other values in this array and must not rely on them.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The strides of the tensor

inline CalibrationInfo calibrationInfo() const

Get the calibration info.

Note that this may return an empty CalibrationInfo structure if no calibration information was stored with the reference.

If an error occurs, an exception will be thrown. The following exceptions may occur:

Returns

The calibration info of the reference measurement

inline operator bool() const noexcept

Explicit boolean conversion operator.

Allow the user to check whether the reference is actually present by performing a check such as:

ReferenceMeasurement m = ...;
if (m) {
}

inline operator TensorData() const

Get a TensorData view on the reference measurement.

Obtain a TensorData view of the reference measurement data for more comfortable access. Note that the view does not retain a reference to this object, so if this object is destroyed, the view will no longer refer to valid data.

Returns

A TensorData view on the reference measurement

enum fluxEngine::MeasurementProcessingContextFlag

Flags that influence measurement processing context creation.

This enumeration contains flags that may be passed to the ProcessingContext::createMeasurementProcessingContext() method that will influence the behavior of that method.

Note that since this enumeration is meant to be used as a bitmask, it is not a scoped enumeration.

Values:

enumerator MeasurementProcessingContextFlag_VariableSpatialSize

Variable spatial size.

When creating a processing context from a measurement that contains something with spatial dimensions (a HSI cube, an image, etc.), this will indicate that the context should accept inputs of sizes that may differ from the measurement that was used to create the context. This is useful if the measurement is only there to provide the same structure (type of measurement, scalar data type, references), but that differently-sized measurements should be accepted for data processing with this context.

Note that the size of the measurement will be used as the maximum size that may be specified, so it is up to the user to create the context from the largest possible measurement.

This is in contrast to the default which assumes that the inputs of the processing context will always have the exact same (fixed) size as the measurement.
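
For example (a minimal sketch, assuming measurements and model already exist), the flag might be passed as:

auto context = ProcessingContext::createMeasurementProcessingContext(
    measurements, 0, model, MeasurementProcessingContextFlag_VariableSpatialSize);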

enumerator MeasurementProcessingContextFlag_IgnoreReferences

Ignore references.

Ignore any white and/or dark references when creating the processing context. This is only relevant if the value type of the measurement is ValueType::Intensity, as references will be ignored for all other value types anyway.