C++ API: Introduction¶
This chapter describes how to get started with fluxEngine’s C++ API and how to process hyperspectral data with it. See C++ Reference Documentation for a list of all API classes and functions of the C++ API.
Requirements¶
The C++ wrapper library requires a compiler that supports C++11. If C++17 is available, additional wrappers provide a more convenient API.
Setting up the Build Environment¶
The C++ API of fluxEngine is a header-only wrapper around the low-level C API. The steps required to set up a build environment are the same as for the C API, with one additional directory containing the header files of the C++ wrapper library.
fluxEngine is distributed in the form of an archive with the following relevant subdirectories:
bin: all of the runtime library files (Windows only)
lib: the library files required for linking fluxEngine.
On Windows systems this contains the file fluxEngineC1.def, from which the user can create an import library for their own compiler. Import libraries are also provided for both Microsoft's Visual C/C++ compiler and GNU GCC (MinGW): fluxEngineC1.lib is the import library for MSVC, and libfluxEngineC1.dll.a is the import library for MinGW.
On Linux systems this contains two libraries, as well as the dependencies of fluxEngine: libfluxEngineC1.so.0, the actual implementation of the public API of fluxEngine, which is loaded at runtime; and libfluxEngineC1.so (without the .0 suffix), a stub import library that may be used while linking. The stub is not required at runtime, but ensures that linking your executable still works even if the GCC version on the system is older than the GCC used to compile fluxEngine.
On macOS systems this contains libfluxEngineC1.dylib as well as the dependency libraries of fluxEngine.
include: contains all the header files required to compile C programs
c++/include: contains the C++ wrapper library around the low-level C functions
One needs to add the lib directory to the library search path, link against the fluxEngineC1 library (e.g. via -lfluxEngineC1 on Linux, macOS, and MinGW), and add both the include and c++/include directories to the header file search path.
Building with CMake¶
The following CMake snippet allows the user to generate working binaries that are linked against fluxEngine across all platforms:
set(FLUXENGINE_DIR "" CACHE PATH "The directory where fluxEngine is installed")
if(FLUXENGINE_DIR STREQUAL "")
message(FATAL_ERROR "Please specify the FLUXENGINE_DIR setting.")
endif()
if(NOT EXISTS "${FLUXENGINE_DIR}/include/fluxEngine/fluxEngine.h")
message(FATAL_ERROR "FLUXENGINE_DIR does not point to the directory fluxEngine is located in.")
endif()
target_include_directories(target_name PRIVATE "${FLUXENGINE_DIR}/c++/include" "${FLUXENGINE_DIR}/include")
target_link_directories(target_name PRIVATE "${FLUXENGINE_DIR}/lib")
target_link_libraries(target_name fluxEngineC1)
Replace target_name with the name of your CMake target.
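For reference, the snippet above might be embedded in a minimal CMakeLists.txt as follows; the project name fluxengine_example, the target name, and the source file main.cpp are placeholders to be adapted to your project:

```cmake
cmake_minimum_required(VERSION 3.13) # target_link_directories requires CMake 3.13+
project(fluxengine_example CXX)

set(CMAKE_CXX_STANDARD 11)

add_executable(fluxengine_example main.cpp)

set(FLUXENGINE_DIR "" CACHE PATH "The directory where fluxEngine is installed")
if(FLUXENGINE_DIR STREQUAL "")
    message(FATAL_ERROR "Please specify the FLUXENGINE_DIR setting.")
endif()
if(NOT EXISTS "${FLUXENGINE_DIR}/include/fluxEngine/fluxEngine.h")
    message(FATAL_ERROR "FLUXENGINE_DIR does not point to the directory fluxEngine is located in.")
endif()

target_include_directories(fluxengine_example PRIVATE
    "${FLUXENGINE_DIR}/c++/include" "${FLUXENGINE_DIR}/include")
target_link_directories(fluxengine_example PRIVATE "${FLUXENGINE_DIR}/lib")
target_link_libraries(fluxengine_example fluxEngineC1)
```

The project would then typically be configured with something like cmake -DFLUXENGINE_DIR=/path/to/fluxEngine .. from a build directory.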
General API Design, Error Handling¶
The C++ API follows these principles:
Objects are constructed directly via their constructors. For example, to load a model, one uses a constructor of the Model class.
Objects have move semantics: they cannot be copied (the copy constructor is implicitly deleted), but they can be moved into other variables of the same type. They behave similarly to std::unique_ptr in that regard.
Functions and methods that perform an action will typically return void.
Functions and methods that query something will typically return a simple value or structure.
Errors are reported by throwing exceptions.
Error Handling¶
API calls that result in an error will throw an exception. The exception will be one of the following:
std::invalid_argument if an invalid argument was passed to the method in question
std::bad_alloc if an allocation failed (most likely due to being out of memory)
std::out_of_range if an index was supplied that is not within the valid range
fluxEngine::Error, or a subclass thereof (itself a subclass of std::runtime_error), if an error occurred that does not fall into the previous categories
If a user-supplied callback is provided, and that callback throws any exception, that exception is propagated back out to the user.
All exceptions that are fluxEngine::Error or a subclass have additional fields with more information about the error:
Calling errorCode() on the exception object will return an error code of type fluxEngine::ErrorCode that allows the user to further identify the type of error that occurred.
Calling osErrorCode() on the exception object will return an operating system error code if the operation failed due to a system call failing (for example: a file could not be opened).
See the documentation of fluxEngine::Error and fluxEngine::ErrorCode for further details.
Complex Return Values¶
If a method returns multiple pieces of information, these are returned in the form of a convenience structure. For example, the method fluxEngine::Model::groupInfo() will return a fluxEngine::GroupInfo structure that contains multiple fields describing a given group.
Classes, Constructors, Move-Only Semantics¶
The major classes, Handle, Model, and ProcessingContext, all work in a similar manner. The default constructor, used for example when a variable is merely being declared, performs no action and creates an invalid object that may later be replaced with a valid one. Furthermore, it is possible to use any object of the aforementioned types in an if clause to check whether the variable currently holds a valid object. For example:
fluxEngine::Handle h;
// h may not be used at this point, is not valid
h = functionThatReturnsAValidHandle();
// h is now valid and may be used
h = {};
// h is now invalid again
if (h) {
// this code will never be executed
}
In order to construct an actual object, one must use any non-default constructor. For example, to create a handle one would typically use the following code:
fluxEngine::Handle handle(licenseData, licenseDataSize);
Objects behave similarly to std::unique_ptr, in that they can be moved but cannot be copied:
fluxEngine::Handle handle(licenseData, licenseDataSize);
// The following works (and now handle is invalid,
// and h2 is valid)
fluxEngine::Handle h2 = std::move(handle);
// The following is a compiler error (no copies allowed)
fluxEngine::Handle h3 = h2;
Warning
It is currently only possible to create a single handle due to limitations that may be removed in a later version. If a given handle is to be replaced, the variable containing the handle must be cleared first before constructing the new handle. For example:
// (Assuming the variable handle contains an already
// valid handle.)
// Will not work (because two handles would exist at
// the same time)
handle = Handle(licenseData, licenseDataSize);
// Will work (first the old handle is erased, then
// the new handle is created)
handle = {};
handle = Handle(licenseData, licenseDataSize);
Initializing the Library¶
To initialize the library a license file is required. The user must read that license file into memory and supply fluxEngine with it.
The following code demonstrates how to properly initialize fluxEngine and how to tear it down again.
// Get the data of the license file from somewhere
std::vector<std::byte> myLicenseData = ...;
try {
fluxEngine::Handle handle(myLicenseData);
// handle is now valid, may be used
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
// Handle has left the current scope, is now no longer valid
Setting up processing threads¶
fluxEngine supports parallel processing, but it has to be set up at the very beginning. This is done via the createProcessingThreads method.
The following example code demonstrates how to perform processing with 4 threads, assuming a handle has already been created:
try {
handle.createProcessingThreads(4);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Note
This will only create 3 (not 4!) background threads that will
help with data processing. The thread that calls
fluxEngine::ProcessingContext::processNext()
will be
considered the first thread (with index 0) that participates
in parallel processing.
Note
Modern processors support Hyper-Threading (Intel) or SMT (AMD) to provide more logical cores than are physically available. It is generally not recommended to use more threads than there are physical cores, as workloads such as fluxEngine will typically slow down when using more cores than are physically available in a system.
Note
When running fluxEngine with very small amounts of data (in the extreme case, cubes that have only one pixel), parallelization will not improve performance. In cases where cubes consisting of only one pixel are processed, it is recommended not to parallelize at all and to skip this step.
Note
Only one fluxEngine operation may be performed per handle at a time; executing multiple processing contexts from different threads will cause them to run sequentially.
Since it is currently only possible to create a single handle for fluxEngine, this means only one operation can be active at a time; the limitation to a single handle will be lifted in a later version of fluxEngine.
Loading a model¶
After exporting a runtime model from fluxTrainer as a .fluxmdl file, it may be loaded into fluxEngine. fluxEngine supports loading the model directly from disk, as well as parsing a model in memory if the byte sequence of the model has already been loaded by the user.
The following example shows how to load a model:
try {
#if defined(_WIN32)
/* Windows: use wide filenames to be able to cope with
* characters not in the current code page
*/
fluxEngine::Model model(handle, fluxEngine::Model::FromFile,
L"sample_model.fluxmdl");
#else
fluxEngine::Model model(handle, fluxEngine::Model::FromFile,
"sample_model.fluxmdl");
#endif
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Extracting information from a model¶
It is possible to obtain information about the groups that were defined in a model, including their names and colors. The following snippet will print the names of all groups and their colors:
try {
auto groupInfos = model.groupInfos();
for (auto const& groupInfo : groupInfos) {
std::cout << "Group with name " << groupInfo.name << " has color rgb("
<< int(groupInfo.colorRedComponentValue()) << ", "
<< int(groupInfo.colorGreenComponentValue()) << ", "
<< int(groupInfo.colorBlueComponentValue()) << ")\n";
}
std::cout << std::flush;
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Creating a processing context¶
In order to process data with fluxEngine, a processing context must be created. A processing context knows about:
The model that is to be processed
How processing will be parallelized (it takes that information from the current handle) – changing parallelization settings will invalidate a given context
What kind of data is to be processed each time (full HSI cubes or individual PushBroom frames)
The size of data that is to be processed. fluxTrainer models are designed to be camera-independent (to an extent), and thus do not know about the actual spatial dimensions of the data that is to be processed. But once processing is to occur, the spatial dimensions have to be known
The input wavelengths of the data being processed. While a model built in fluxTrainer specifies the wavelengths that will be used during processing, cameras of the same model don't map exactly the same wavelengths onto the same pixels, due to production tolerances. For this reason cameras come with calibration information that tells the user what the precise wavelengths of the camera are. The user must specify the actual wavelengths of the input data, so that fluxEngine can interpolate them onto the wavelength range given in the model
Any white (and dark) reference data that is applicable to processing
There are two types of processing contexts that can be created: one for HSI cubes, one for PushBroom frames.
HSI Cube Processing Contexts¶
To process entire HSI cubes the user must use the constructor of ProcessingContext that takes a ProcessingContext::HSICube argument. It has the following parameters:
The model that is to be processed
The storage order of the cube (BSQ, BIL, or BIP)
The scalar data type of the cube (e.g. 8 bit unsigned integer)
The spatial dimensions of the cube that is to be processed
The wavelengths of the cube
Whether the input data is in intensities or reflectances
An optional set of white reference measurements
An optional set of dark reference measurements
There are two ways to specify the spatial dimensions of a given cube. The first is to fix them at this point, allowing only cubes of exactly this size to be processed with the processing context. The alternative is to leave them variable but specify a maximum size. This has the advantage that the user can process differently sized cubes with the same context, but the major disadvantage that if a white reference is used, it will be averaged along all variable axes, meaning that any spatial information of the reference data along those axes is averaged out. (It is also possible to fix only one of the spatial dimensions.)
For referencing it is typically useful to average multiple measurements to reduce the effect of noise. For this reason, any references that are provided have to be tensors of 4th order, with an additional initial dimension for the averages. For example, a cube in BSQ storage order has the dimension structure (λ, y, x), so the references must have the dimension structure (N, λ, y, x), where N may be any positive number, indicating the number of measurements that are to be averaged. A cube in BIP storage order would have a dimension structure of (y, x, λ), leading to a reference tensor structure of (N, y, x, λ).
Note
It is possible to supply only a single cube as a reference measurement, in which case N would be 1. The structure of the data is then effectively only a tensor of third order, but the additional dimension still has to be specified.
Note
References supplied here must always be contiguous in memory, and may never have any non-trivial strides.
Reference cubes must always have the same storage order as the cubes that are to be processed.
The first example here shows how to create a processing context without any references, assuming that the input data is already in reflectances, with a 32-bit floating point data type and fixed spatial dimensions:
using namespace fluxEngine;
try {
std::int64_t width = 1024, height = 2150;
std::vector<double> wavelengths{{ 900, 901.5, 903, ... }};
ReferenceInfo referenceInfo;
referenceInfo.valueType = ValueType::Reflectance;
ProcessingContext context(model, ProcessingContext::HSICube,
HSICube_StorageOrder::BSQ, DataType::Float32,
height, height, width, width, wavelengths,
&referenceInfo);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Alternatively, to create a processing context that uses a white reference cube, and where the y dimension has a variable size, the following code could be used:
using namespace fluxEngine;
try {
std::int64_t width = 1024, height = 2150;
std::vector<double> wavelengths{{ 900, 901.5, 903, ... }};
auto wavelengthCount = static_cast<std::int64_t>(wavelengths.size());
ReferenceInfo referenceInfo;
referenceInfo.valueType = ValueType::Intensity;
/* get cube data from somewhere
* (this pointer only has to be valid until the call to
* the constructor has completed, then the user may
* discard the data)
*/
uint8_t* whiteReferenceData = ...;
referenceInfo.whiteReference = whiteReferenceData;
/* Just a single cube, with a height of just 40 in y direction,
* but the width being the same as the fixed width that is set
* here.
*/
referenceInfo.whiteReferenceDimensions = { 1, wavelengthCount, 40, width };
ProcessingContext context(model, ProcessingContext::HSICube,
HSICube_StorageOrder::BSQ, DataType::UInt8,
height, -1, width, width, wavelengths,
&referenceInfo);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Note
The maximum size specified here also determines how much RAM is allocated in fluxEngine internally. Specifying an absurdly large number will cause fluxEngine to exhaust system memory.
Note
The white and dark references may have different spatial dimensions if those dimensions are specified as variable. In the above example, if a dark reference were to be specified, it would have to have the same width and number of bands (because those are both fixed), but it could have a different height.
PushBroom Frame Processing Contexts¶
To process individual PushBroom frames the user must use the constructor of ProcessingContext that takes a ProcessingContext::PushBroomFrame argument. It has the following parameters:
The model that is to be processed
The storage order of the PushBroom frame
The scalar data type of each PushBroom frame
The spatial width of each PushBroom frame (the actual width of each image if LambdaY storage order is used, or the height of each image if LambdaX storage order is used)
The wavelengths of each PushBroom frame
Whether the input data is in intensities or reflectances
An optional set of white reference measurements
An optional set of dark reference measurements
As PushBroom processing can be thought of as a means to incrementally build up an entire cube (while processing data on each line individually), the spatial width must be fixed and cannot be variable. (The number of frames processed, i.e. the number of calls to ProcessingContext::processNext(), is variable though.)
As it is often useful to average multiple reference measurements to reduce noise, the white and dark references must be supplied as tensors of third order, with a dimension structure of (N, x, λ) or (N, λ, x), depending on the storage order.
The following example shows how to set up a processing context without any references, assuming the input data is already in reflectances, stored as 32-bit floating point numbers:
using namespace fluxEngine;
try {
int64_t width = 320;
std::vector<double> wavelengths{{ 900, 901.5, 903, ... }};
ReferenceInfo referenceInfo;
referenceInfo.valueType = ValueType::Reflectance;
ProcessingContext context(model, ProcessingContext::PushBroomFrame,
PushBroomFrame_StorageOrder::LambdaY, DataType::Float32,
width, wavelengths, &referenceInfo);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Alternatively, if both white and dark reference measurements are to be supplied, and the data consists of unsigned 8-bit integers, one could use the following code:
using namespace fluxEngine;
try {
int64_t width = 640;
std::vector<double> wavelengths{{ 900, 901.5, 903, ... }};
auto wavelengthCount = static_cast<std::int64_t>(wavelengths.size());
ReferenceInfo referenceInfo;
referenceInfo.valueType = ValueType::Intensity;
/* Get the reference data from somewhere; in this case the white
* reference contains 5 measurements that are to be averaged,
* and the dark reference contains 10. Since this is in LambdaY
* storage order, the references are effectively cubes of BIL
* storage order themselves. (For PushBroom frames with LambdaX
* storage order the references would be cubes in BIP storage
* order.)
*/
std::uint8_t* whiteReferenceCubeData = ...;
std::uint8_t* darkReferenceCubeData = ...;
referenceInfo.whiteReference = whiteReferenceCubeData;
referenceInfo.whiteReferenceDimensions = { 5, wavelengthCount, width };
referenceInfo.darkReference = darkReferenceCubeData;
referenceInfo.darkReferenceDimensions = { 10, wavelengthCount, width };
ProcessingContext context(model, ProcessingContext::PushBroomFrame,
PushBroomFrame_StorageOrder::LambdaY, DataType::UInt8,
width, wavelengths, &referenceInfo);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Processing Data¶
Once a processing context has been set up, the user may use it to process data. This happens in two steps:
Set the data pointer for the source data that is to be processed
Process the data
The first step has to be performed at least once before processing happens. If the user always writes the data to the same memory location, it can be skipped for subsequent processing steps.
HSI Cubes Source Data¶
In order to process entire HSI cubes, a processing context for HSI cubes has to have been set up. (See the previous section.)
To set the source data for an HSI cube, use the method ProcessingContext::setSourceData() with a ProcessingContext::HSICube parameter. The following example shows how to use it:
using namespace fluxEngine;
try {
std::int64_t width = 1024, height = 2150;
void const* cubeData = ...;
context.setSourceData(ProcessingContext::HSICube, height, width, cubeData);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
The width and height parameters must correspond to the spatial dimensions of the cube. If the spatial dimensions were fixed during creation of the processing context, the dimensions specified here must match.
This method assumes that the cube is contiguous in memory. For non-trivial stride structures please use the overload of the setSourceData() method that takes additional stride information. Refer to the reference documentation for further details.
PushBroom Source Data¶
PushBroom source data is essentially a 2D image that must be provided to fluxEngine. As all of the dimensions of the image were fixed during the creation of the processing context, the user need only specify the data pointer via the overload of the ProcessingContext::setSourceData() method that takes a ProcessingContext::PushBroomFrame parameter. The following example shows its usage:
using namespace fluxEngine;
try {
void const* frameData = ...;
context.setSourceData(ProcessingContext::PushBroomFrame, frameData);
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
This method assumes that the frame is contiguous in memory. For non-trivial stride structures please use the overload of the setSourceData() method that takes an additional stride argument. Refer to the reference documentation for further details.
Performing Processing¶
Once the data pointer has been set, the user may process the data with
the ProcessingContext::processNext()
method. This may be called in
the following manner:
try {
context.processNext();
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
After processing has completed, the user may obtain the results (see the next section).
To process another set of data, the user may do one of the following:
The user may simply overwrite the data in the memory area specified when the source data was set, and call ProcessingContext::processNext() again. The data in the specified source memory area must remain unchanged only during processing; between calls to ProcessingContext::processNext() the user is completely free to alter it.
The user may update the source data pointer and then call ProcessingContext::processNext() again, if the new data that is to be processed is located in a different memory region.
PushBroom Resets¶
As PushBroom cameras can be thought of as incrementally building up a cube line by line, at some point the user may want to indicate that the current cube is complete and a new cube starts. In that case the processing context has to be reset, so that all stateful operations, such as object detection but also kernel-based operations, are reset as well.
To achieve this, the method ProcessingContext::resetState() exists. Its usage is simple:
try {
context.resetState();
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Note
There is no requirement to actually perform such a reset. If fluxEngine is used to process PushBroom frames obtained from a camera above a conveyor belt in a continuous process, for example, it is possible to simply process all incoming frames in a loop and never call the reset method. In that situation the reset method would be called, though, if the conveyor belt is stopped and has to be started up again. Depending on the specific application, that could also mean that the processing context has to be created again, for example because references have to be measured again, and a simple state reset is not sufficient.
Obtaining Results¶
After processing has completed via the
ProcessingContext::processNext()
method, fluxEngine provides a
means for the user to obtain the results of that operation.
When designing the model in fluxTrainer that is to be used here, Output Sinks should be added to the model wherever processing results are to be obtained later.
Note
If a model contains no output sinks, it can be processed with fluxEngine, but the user will have no possibility of extracting any kind of result from it.
fluxEngine provides a means to introspect a model to obtain information about the output sinks it contains. The following two identifiers for output sinks have to be distinguished:
The output sink index: this is just a number starting at 0 and ending at one below the number of output sinks, and may be used to specify the output sink for the purposes of the fluxEngine API. The ordering of output sinks according to this index is non-obvious. Loading the same .fluxmdl file will lead to the same order, but saving models with the same configuration (but constructed separately) can lead to different orders of output sinks.
The output id: a user-assignable id in fluxTrainer that can be used to mark output sinks for a specific purpose. The output id may not be unique (but should be), and is purely there for informational purposes.
For each output sink in the model the user can obtain the output id of that sink. There is also a method ProcessingContext::findOutputSink() that can locate an output sink if the output id of that sink is unique; it will return the index of the sink with that output id.
To obtain information about all output sinks in the context, the method
ProcessingContext::outputSinkMetaInfos()
exists, which returns a
vector of simple structs that contain information about each output
sink. The index of the vector is also the output sink index.
try {
auto sinkMetaInfos = context.outputSinkMetaInfos();
for (std::size_t i = 0; i < sinkMetaInfos.size(); ++i) {
int sinkIndex = static_cast<int>(i);
std::cout << "Output sink with index " << sinkIndex << " has name "
<< sinkMetaInfos[i].name << std::endl;
}
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
The ProcessingContext::OutputSinkMetaInfo structure contains the following information:
The output id of the output sink
The name of the output sink as a UTF-8 string (this is the name the user specified when creating the model in fluxTrainer)
Storage type: what kind of data the output sink will return (the current options are either tensor data or detected objects)
The output delay of the output sink (only relevant when PushBroom data is being processed; see Output Delay for PushBroom Cameras for a more detailed discussion)
Output sinks store either tensor data or detected objects, depending on the configuration of the output sink, and where it sits in the processing chain.
Tensor Data¶
Tensor data within fluxEngine always has a well-defined storage order, as most algorithms that work on hyperspectral data are at their most efficient in this memory layout. While fluxEngine supports input data of arbitrary storage order, it will be converted to the internal storage order at the beginning of processing. The output data will always have the following structure:
When processing entire HSI cubes it will effectively return data in BIP storage order, meaning the dimension structure will be (y, x, λ) (for data that still has wavelength information) or (y, x, P), where P is a generic dimension, if the data has already passed through dimensionality reduction filters such as PCA.
When processing PushBroom frames it will effectively return data in LambdaX storage order, with an additional dimension of size 1 at the beginning. In that case it will either be (1, x, λ) or (1, x, P).
Pure per-object data always has a tensor structure of order 2, in the form of (N, λ) or (N, P), where N describes the number of objects detected in this processing iteration. Important: objects themselves are returned as a structure (see below); per-object data is data that is the output of filters such as the Per-Object Averaging or Per-Object Counting filter. Also note that output sinks can combine objects and per-object data, in which case the per-object data will be returned as part of the object structure.
For PushBroom data it is recommended to always combine per-object data with objects before interpreting it, as the output delays of the two nodes may differ, and when combining the data the user does not have to keep track of the relative delays themselves.
To obtain the tensor structure of a given output sink, the method
ProcessingContext::outputSinkTensorStructure()
is available. It
returns the following information:
The scalar data type of the tensor data (this is the same as the data type configured in the output sink)
The order of the tensor, which will be 2 or 3 (see above)
The maximum sizes of the tensor that can be returned here
The fixed sizes of the tensor that will be returned here. If the tensors returned here are always of the same size, the values here will be the same as the maximum sizes. Any dimension that is not always the same will have a value of -1 instead. If all of the sizes of the tensor returned here are fixed, the tensor returned will always be of the same size. (There is one notable exception: if the output sink has a non-zero output delay of m, the first m processing iterations will produce a tensor that does not contain any data.)
Please refer to the documentation of
ProcessingContext::OutputSinkTensorStructure
for more information
on how this information is returned.
Using ProcessingContext::outputSinkData() it is possible to obtain the tensor data after a successful processing step. It will also return information about the scalar data type, the order, and the stride structure of the resulting tensor, even if that information is in principle reconstructible from the data obtained via the ProcessingContext::outputSinkTensorStructure() method.
For example, if we know that a given sink with index sinkIndex has signed 16-bit integer data that spans the entire cube being processed (when processing HSI cubes), the following code could be used to obtain the results:
/* obtained from previous introspection */
int sinkIndex = ...;
std::int64_t cube_width = ..., cube_height = ...;
try {
auto data = context.outputSinkData(sinkIndex);
auto classificationData = static_cast<int16_t const*>(data.data);
/* Classification results have an inner dimension of 1, so the
* actual sizes should be (cube_height, cube_width, 1)
*/
assert(data.sizes[0] == cube_height);
assert(data.sizes[1] == cube_width);
assert(data.sizes[2] == 1);
int64_t strideY = data.strides[0];
int64_t strideX = data.strides[1];
for (std::int64_t y = 0; y < cube_height; ++y) {
for (std::int64_t x = 0; x < cube_width; ++x) {
std::int64_t index = y * strideY + x * strideX;
std::cout << "Classification result for pixel (" << x << ", " << y << ") = "
<< classificationData[index] << "\n";
}
}
std::cout.flush();
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}
Object Data¶
Objects will be returned as an array of C structures,
fluxEngine::OutputObject
, which contain information about
objects that were detected in the model.
The method ProcessingContext::outputSinkObjectListStructure() will return the following information about output sinks that return object results:
The maximum number of objects that can be returned in a single iteration.
Whether per-object data was output using the output sink in the model, and if so, how large it is (per-object data is always a vector, i.e. a tensor of order 1)
The scalar type of per-object data (if any)
Please refer to the documentation of
ProcessingContext::OutputSinkObjectListStructure
for more
information on how this information is returned.
Using ProcessingContext::outputSinkData() it is possible to obtain the object data after a successful processing step. It may be called in the following manner, assuming sinkIndex is the index of a sink that is known to return objects:
/* obtained from previous introspection */
int sinkIndex = ...;
try {
auto data = context.outputSinkData(sinkIndex);
auto objects = static_cast<fluxEngine::OutputObject const*>(data.data);
std::int64_t objectCount = data.sizes[0];
for (int64_t i = 0; i < objectCount; ++i) {
std::cout << "Object: bbox topleft ["
<< objects[i].boundingBoxX() << ", " << objects[i].boundingBoxY() << "] -- bottomright ["
<< (objects[i].boundingBoxX() + objects[i].boundingBoxWidth() - 1) << ", "
<< (objects[i].boundingBoxY() + objects[i].boundingBoxHeight() - 1) << "], area "
<< objects[i].area() << "\n";
}
std::cout.flush();
} catch (std::exception& e) {
std::cerr << "An error occurred: " << e.what() << std::endl;
exit(1);
}