Device Support

fluxEngine comes with a framework for integrating various types of devices. Currently the following device types are supported:

  • Light control devices: control a light source by switching it on and off and/or changing its intensity.

  • Instrument devices: devices such as spectrometers, cameras, etc. to acquire data that should later be processed.

Due to the abstraction layer integrated into the framework, fluxEngine already supports a variety of devices.

In general drivers written for fluxEngine are portable across all operating systems and platforms fluxEngine supports, but may face restrictions if they require a transport layer that is only available on specific operating systems and/or platforms.

Supported Devices

HSI PushBroom Cameras

fluxEngine supports the following hyperspectral PushBroom cameras:

Vendor          Model                      Transport                        Windows  Linux  macOS
--------------  -------------------------  -------------------------------  -------  -----  -----
HORIBA          H116                       USB3 (Proprietary)               Yes      No     No
hsi.technology  hsi 1D-VIS                 USB3 (Proprietary)               Yes      Yes    Yes
                hsi 1D-VIS ST              USB3 (Proprietary)               Yes      Yes    No
                hsi 1D-VIS SB              GigE Vision                      Yes      Yes    Yes
inno-spec       BlueEye                    USB3 (Proprietary)               Yes      Yes    No
                BlueEye Scientific         USB3 or Ethernet (Proprietary)   Yes      Yes    No
                GreenEye                   GigE Vision                      Yes      Yes    Yes
                RedEye 1.7                 GigE Vision                      Yes      Yes    Yes
                RedEye 1.9                 Ethernet & Serial (Proprietary)  Yes      Yes    No
                RedEye 2.2                 Ethernet & Serial (Proprietary)  Yes      Yes    No
                RedEye 1.7 (older models)  Ethernet & Serial (Proprietary)  Yes      Yes    No
Resonon         Pika L                     USB3 Vision                      Yes      Yes    No
                Pika XC2                   USB3 Vision                      Yes      Yes    No
                Pika IR (prev. NIR320)     GigE Vision                      Yes      Yes    Yes
                Pika IR+ (prev. NIR640)    GigE Vision                      Yes      Yes    Yes
Specim          FX10                       CameraLink [1]                   Yes      Yes    No
                FX10e                      GigE Vision                      Yes      Yes    Yes
                FX17                       CameraLink [1]                   Yes      Yes    No
                FX17e                      GigE Vision                      Yes      Yes    Yes
                FX50                       GigE Vision                      Yes      Yes    Yes

Footnotes

Spectrometers

Vendor          Model       Transport  Windows  Linux  macOS
--------------  ----------  ---------  -------  -----  -----
hsi.technology  hsi 1C-NIR  USB HID    Yes      Yes    Yes

Driver Isolation Framework

Drivers in fluxEngine’s framework are dynamic libraries (*.dll on Windows, *.so on Linux, and *.dylib on macOS) that are loaded at runtime to provide the required functionality to access a device.

Instead of loading the drivers directly into the current process, fluxEngine starts a separate executable (fluxDriverIsolation.exe on Windows, fluxDriverIsolation on Linux/macOS) that communicates with the current process and loads the drivers there. This has the following advantages:

  • If the driver crashes, the functionality of fluxEngine itself is not affected.

  • The driver may modify the global state of the isolation process it’s running in without affecting the main program.

The fluxDriverIsolation executable must be placed so that it can find the DLLs it requires to start.

Isolation Executable for the Python API

On all operating systems the fluxEngine Python module ships the fluxDriverIsolation binary next to the module (together with the additional libraries it depends on), and automatically uses that executable. The user typically does not have to take care of the program path themselves.

Isolation Executable on Windows Systems

By default fluxEngine assumes fluxDriverIsolation.exe to be in the same directory as the main executable, whatever that may be. Note that the main executable is the program written by the user calling fluxEngine!

fluxDriverIsolation.exe has some DLLs as dependencies (in general the same DLLs as fluxEngine itself), which either have to reside in the same directory as fluxDriverIsolation.exe, or the PATH environment variable has to be set so that the DLLs can be found.

Typically fluxDriverIsolation.exe will be in the same directory as the user’s main executable, together with fluxEngine’s set of DLLs.

In the fluxEngine API package fluxDriverIsolation.exe can be found in the bin\ subdirectory together with fluxEngine and the corresponding DLLs.

Typically the following directory structure is recommended on Windows systems:

(Directory) program_directory\
    (Directory) bin\
        (Executable) user_program
        (Executable) fluxDriverIsolation.exe
        (Library) fluxEngineC1.dll
        (Library) ....dll (dependencies)

Isolation Executable on Linux Systems

By default fluxEngine assumes fluxDriverIsolation to be in the path ../libexec/fluxDriverIsolation relative to the path of the main executable. Note that the main executable is the program written by the user calling fluxEngine!

fluxDriverIsolation has some shared library dependencies (*.so, in general the same libraries that fluxEngine itself requires), which have to reside either in a directory ../lib relative to where fluxDriverIsolation is stored, or in the same directory as fluxDriverIsolation. Otherwise the LD_LIBRARY_PATH environment variable has to be set so that the dependencies can be found. (If the dependency libraries are located in one of the directly supported paths the environment variable does not need to be set, as fluxDriverIsolation and all of its dependencies use the rpath feature.)

Typically the following directory structure is recommended on Linux systems:

(Directory) program_directory/
    (Directory) bin/
        (Executable) user_program
    (Directory) libexec/
        (Executable) fluxDriverIsolation
    (Directory) lib/
        (Library) libfluxEngineC1.so
        (Library) lib....so (dependencies)

The fluxDriverIsolation executable is found in the libexec/ subdirectory of the fluxEngine API package; the dependency libraries (as well as fluxEngine itself) are found in the lib/ subdirectory.

Isolation Executable on macOS Systems

By default fluxEngine assumes fluxDriverIsolation to be in the same directory as the main executable, and that the main executable is part of an app bundle. Note that the main executable is the program written by the user calling fluxEngine!

fluxDriverIsolation has some dynamic library dependencies (*.dylib, in general the same libraries that fluxEngine itself requires), which have to reside in a directory ../Frameworks relative to the path of the main executable. (This assumes an app bundle structure with Contents/MacOS/ containing the executables and Contents/Frameworks/ containing the libraries.) Otherwise the DYLD_LIBRARY_PATH environment variable has to be set so that the dependencies can be found. (If the dependency libraries are located in one of the directly supported paths the environment variable does not need to be set, as fluxDriverIsolation and all of its dependencies use the rpath feature.)

Typically the following directory structure is recommended on macOS systems:

(Directory) program.app/
    (Directory) Contents/
        (Directory) MacOS/
            (Executable) program
            (Executable) fluxDriverIsolation
        (Directory) Frameworks/
            (Library) libfluxEngineC1.dylib
            (Library) ....dylib (dependencies)

Custom Location for fluxDriverIsolation

If the user chooses to deviate from the recommended directory layout they must tell fluxEngine where to find the fluxDriverIsolation executable. This can be done via the following calls:
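
For illustration, such a call might look as follows from Python; the method name setIsolationExecutablePath is purely hypothetical here, as the actual call names are listed in the reference documentation:

    import fluxEngine

    # Handle creation details are omitted; assume "handle" is a valid
    # fluxEngine handle. Tell fluxEngine where the isolation executable
    # lives ("setIsolationExecutablePath" is an assumed name used only
    # for this sketch).
    handle.setIsolationExecutablePath(
        r"C:\myapp\tools\fluxDriverIsolation.exe")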

Driver Directory Structure

Drivers are searched for in ../drivers relative to the main executable by default. The search path can be overridden by calling one of the following functions / methods:
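
As a similarly hypothetical sketch (the actual function names are listed in the reference documentation), overriding the driver search path from Python might look like this:

    # Override the default ../drivers search path before enumerating
    # devices ("setDriverBaseDirectory" is an assumed name, not the
    # documented API).
    handle.setDriverBaseDirectory(r"C:\myapp\drivers")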

The contents of the driver base directory must have the following structure:

(Directory) drivers/ (The base directory)
    (Directory) instrument/ [2]
        (Directory) DriverName/
            (Library) DriverName.dll (Windows)
            (Library) DriverName.so (Linux)
            (Library) DriverName.dylib (macOS)
            (Library) Dependencies of the driver
    (Directory) light_control/
        (Directory) DriverName/
            (Library) DriverName.dll (Windows)
            (Library) DriverName.so (Linux)
            (Library) DriverName.dylib (macOS)
            (Library) Dependencies of the driver

The directory itself contains subdirectories according to the type of the driver (see also fluxEngine_C_v1_DriverType, fluxEngine::DriverType and fluxEngine.DriverType). In those subdirectories each driver has its own directory with the same name as the driver library. For example, the virtual PushBroom driver has the DLL name VirtualHyperCamera.dll on Windows, and the directory containing that driver must hence be named VirtualHyperCamera, i.e. the driver library filename without the extension. Note that on filesystems that are case-sensitive (mostly on Linux, but also on some macOS systems) the directory name must have the same capitalization as the driver filename.

Footnotes

Enumeration Process

In order to connect to a device the user must first ask fluxEngine to enumerate all available devices and drivers. This can be done via the following functions:

The user can restrict the enumeration process to a specific type of driver, for example only instrument drivers. The user must specify a timeout that dictates how long the enumeration process will look for devices.

There are two types of devices that can be enumerated:

  • When devices are connected via USB or using a discoverable Ethernet protocol (such as GigE Vision) it is possible to directly obtain a list of all connected devices, including some meta-information about these devices. Multiple devices that are connected to the same computer will all be listed.

  • Some devices can only be probed – that is the user must attempt to connect to the device via a fixed address, a serial port, or something similar, and can only then determine if the device is actually connected. In that case the driver will always enumerate a single device, and that device will have parameters that the user may specify to indicate the port to which the device is connected.

    Such enumeration results show up even if no device is actually connected to the system, because the driver has no way of knowing whether such a device is present until it attempts a connection – and connections are not attempted during the enumeration process.

After the enumeration process has completed the user will have a list of devices that were found. In addition the user will also have access to a list of drivers that were found, so that if the device they were looking for has not been found they can check to see if the driver was loaded correctly.

Each enumerated device will have the following information associated with it:

  • The driver it was enumerated from. Specifically the driver name (which is the file name of the driver library without the extension, and is case-sensitive within fluxEngine) and the driver type, which together identify the driver uniquely.

  • An id for the device. The device’s id is guaranteed to remain stable for a reasonable amount of time after enumeration so that it may be used for connecting to that device. It is not guaranteed to be stable across system reboots, software upgrades, or even unplugging and plugging the device back in.

  • The manufacturer / vendor of the device.

  • The model name of the device.

  • Optionally a serial number of the device. (Sometimes this can’t be obtained in the enumeration step even if the device has a serial number.)

  • Information about required and/or optional connection settings to be used when connecting to that device.
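
To make the flow concrete, here is a minimal enumeration sketch in Python. The function and attribute spellings (enumerateDevicesAndDrivers, device.id, and so on) are assumptions of this sketch; the exact names are given in the reference documentation:

    import fluxEngine

    # Assume "handle" is a valid fluxEngine handle (license setup omitted).
    # Enumerate only instrument drivers, searching for up to 5 seconds
    # (function name and signature assumed).
    devices, drivers = fluxEngine.enumerateDevicesAndDrivers(
        handle, fluxEngine.DriverType.Instrument, timeout=5000)

    # If the expected device is missing, "drivers" can be inspected to
    # check whether its driver was loaded correctly.
    for device in devices:
        print(device.driverName, device.manufacturer, device.model,
              device.serialNumber, device.id)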

Device Connection

After having enumerated the available devices and having chosen a specific device that was found, the user may now connect to that device. To connect to the device, please use one of the following methods:

The connect method requires the following inputs by the user:

  • The identity of the device to connect to. This consists of the driver name (that is the name of the driver library without its extension, for example VirtualHyperCamera), the driver type, as well as the id of the device from the enumeration process.

  • Values for connection settings if the driver requires these.

    For example, when connecting to a camera, some drivers may require a calibration file to be able to properly use the camera, while other drivers may optionally allow a calibration file to be used instead of the calibration data that is stored on the camera.

If the connection attempt is successful, the user will obtain a handle to a Device Group. A device group is a tree-like structure: it consists of a Primary Device, and each device may have subdevices.

The reason for this is that sometimes a device provides multiple functionalities. An instrument device may also have an integrated light source, in which case it is both an instrument as well as a light control device. To properly handle this, after connecting to that instrument the device group would have a primary device, the instrument itself, as well as a single subdevice, a light control device.

If the user is only interested in the primary functionality it is safe to ignore any subdevices and only deal with the primary device.
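
Continuing the enumeration sketch from above, a connection could then look roughly as follows. The method name connectDevice and its keyword arguments are assumptions of this sketch; the connection parameter "CalibrationFile" is the example used later in this chapter:

    # Connect to a device found during enumeration; all connection
    # settings are passed as strings (names assumed).
    deviceGroup = fluxEngine.connectDevice(
        handle,
        driverName="VirtualHyperCamera",   # driver library name w/o extension
        driverType=fluxEngine.DriverType.Instrument,
        deviceId=device.id,                # id from the enumeration step
        connectionParameters={
            "CalibrationFile": r"C:\file.cal",
        },
    )

    # Most applications only need the primary device of the group;
    # any subdevices may safely be ignored.
    instrument = deviceGroup.primaryDevice()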

Note

There are currently no device drivers that actually provide subdevices, but that is likely to change in the future.

That said, even future drivers that will support subdevices will typically only have one or two such subdevices.

There are two types of operations that are specific to the device group:

  • Disconnecting: the disconnect operation acts on the entire device group. After disconnecting the device group the user may not access any device in that group anymore.

  • Notification handling: drivers may provide notifications for the user. The user may either regularly poll to see if a new notification has arrived, or may obtain a handle to integrate the notification into a custom event loop.

All other operations that a driver may perform are specific to the individual devices of the device group.

Device and Connection Parameters

When connecting to a device certain information (such as the path to a calibration file) may need to be passed to the driver for the connection process to succeed.

When connected to a device the user may want to update settings of the device – for light control devices they may want to update the light’s intensity, for instrument devices they may want to update the exposure time.

In both cases the specifics of these settings are driver-dependent. To accommodate this fluxEngine supports the concept of parameters. Certain objects may have a list of parameters grouped in a so-called parameter information structure (ParameterInfo) that describes what kind of settings a specific driver provides. That information structure exists for both connection parameters of the device, as well as settings of the device after having been connected.

Parameters have a unique name that identifies them toward the driver. That name is case-sensitive. A list of parameter names may be obtained from the parameter information structure.

Each parameter has a specific type. The following types exist:

  • String – a simple string. This is mostly used for read-only parameters that the user may read to obtain information about the device.

  • File – a string that represents a file path. This is used during the connection process, for example to specify a calibration file.

  • Integer – an integer value

  • Float – a floating point value

  • Boolean – a boolean value (true or false)

  • Enumeration – an integer value that is selected from a list of valid values that also have names associated with them. This is used any time there are multiple choices for a given option. For example, an instrument may support multiple trigger sources.

  • Command – an action that may be executed. This exists to allow drivers to provide arbitrary functionality the user may trigger. A command may never be present in connection parameters, but devices can have such a command.

    The most common example for a Command type parameter is a parameter that can be used to perform a software trigger on an instrument device. This will typically be named TriggerSoftware.

Parameters also have an access mode that determines what a user can do with a specific parameter:

  • Not available – the parameter is currently not available. This may be the case if the parameter’s availability depends on another parameter being set to a specific value. In that case the user can’t read from or write to the parameter.

  • Read-only – the parameter can only be read. This may be permanent (a meta information parameter that shows the current firmware version of the device can never be written to), or it could be dependent on the value of another parameter.

  • Read-write – the parameter can be read and written. This is the case for most types of settings.

  • Write-only – the parameter can only be written to. This is typically only the case for some Command type parameters, where executing the command is considered writing to it.

Setting Connection Parameters

Prior to connecting to a device there is no state where a parameter could be set. Instead the user must provide the connect method with all of the values they want to set in a standardized form: a map of the parameter name to its value as a string.

For example, if a camera requires a calibration file, and the connection parameter name is "CalibrationFile", then the user must provide a key-value mapping to the connect method with the key "CalibrationFile" and the value being the string representation of the file path to the calibration file, e.g. "C:\\file.cal".

In another example, if another device requires a setting to indicate a numeric port number, and the parameter name is "Port", and the user wants to connect to the port 42, the user should specify the value "42" (as a string) for the "Port" parameter during device connection.

See also the reference documentation for more specific details:

Getting and Setting Device Parameters

When connected to a device the user may use standard methods to read and write device parameters. The type of the parameter (which can be queried through a ParameterInfo structure, but also directly, given the name of a parameter) dictates the data type the user must use to change the value. For non-Command parameters the user may always use a string, as long as it is encoded in the same manner as for connection parameters.

While the device only has a single namespace for parameters – all device parameters are uniquely identified solely by their name – there are three different lists of parameters the user may obtain, for different purposes:

  • The Parameter parameter list describes device settings that influence the device’s behavior. This could be the light intensity of a light control device, or the exposure time of a camera or spectrometer.

  • The MetaInfo parameter list contains read-only parameters that the user may use to determine more information about a specific device, such as the firmware version, etc. The information in these parameters will typically never change while the device is connected.

  • The Status parameter list contains read-only parameters that give the user feedback about the current device’s status, such as the current temperature.

To obtain a parameter information structure of a given device, please use:

To get the current value of a parameter from a device, the following functions and methods are available:

To set the value of a parameter, the following functions and methods are available:
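
As a hedged sketch, reading and writing a parameter from Python might look like this; the method spellings (parameterInfo, getParameter, setParameter) and the list selector are assumptions of this sketch, while ExposureTime is one of the standardized parameter names mentioned later in this chapter:

    # Obtain the parameter information structure for the device
    # settings ("Parameter") list.
    info = instrument.parameterInfo(fluxEngine.ParameterList.Parameter)
    print(info.parameterNames())

    # Read the current exposure time; parameter names are case-sensitive.
    exposure = instrument.getParameter("ExposureTime")

    # Write a new value; for non-Command parameters a string encoded in
    # the same manner as for connection parameters is always accepted.
    instrument.setParameter("ExposureTime", "10000")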

Command Parameters

Command parameters are special in that they are not read or written like other parameters, but executed. The main function to trigger a command is:

There are two possible implementations of a command parameter:

  • The command execution operation is synchronous and returns only after the command has succeeded or has failed. In that case the user need not do anything beyond calling the aforementioned method.

  • The command begins execution in the background, returns immediately, and lets the user asynchronously query whether the command has completed on the device.

In the second case there are further methods to determine if the command has since completed:

Calling one of these methods on a command that is synchronous will always indicate that the command has completed, so the user may always use that method in a loop after executing a command, to be on the safe side.
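
For example, a software trigger could be executed and polled for completion as follows; the method names executeCommand and isCommandDone are assumptions of this sketch, while TriggerSoftware is the typical parameter name mentioned above:

    import time

    # Start the command; depending on the driver this either blocks
    # until completion or returns immediately (names assumed).
    instrument.executeCommand("TriggerSoftware")

    # Safe for both variants: for a synchronous command the completion
    # check will succeed immediately.
    while not instrument.isCommandDone("TriggerSoftware"):
        time.sleep(0.001)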

Automatic Stop of Instrument Acquisition

Certain parameters of instrument devices cannot be changed while the device is currently acquiring data. For example, the region of interest of a camera cannot be changed while data is being returned. In case the user does change such a parameter, fluxEngine will automatically stop the acquisition on the driver side, and then change the parameter.

In that case the instrument device will be in a state that indicates the acquisition has been forced to stop:

If the device is in that state after the user has changed a parameter, the user must immediately stop acquisition on their side (via the normal method to stop acquisition). They may then re-start acquisition. Note, however, that the size of the buffer returned by the instrument may have changed due to the parameter change.

Instrument Buffers and Shared Memory

Data acquisition from instrument devices works via shared memory. The user must pre-allocate a number of buffers in shared memory that are used to transfer the data from the driver into fluxEngine. The following diagram visualizes how acquisition works in fluxEngine:

(Figure: The handling of buffers and queues in the shared memory segment during data acquisition.)

There are two queues: the queue that is used to transmit buffers from the driver to fluxEngine (“queue”) and the queue that is used to return the buffers to the driver so that it may fill them with new data (“return queue”).

Any time an instrument device driver receives new data from the instrument it will try to obtain a buffer from the return queue and fill that with data. Once all data has arrived it will send the buffer along the main queue to the application.

If no buffers are available in the return queue, the driver will drop that piece of data, hoping that the next time data arrives from the device a buffer will be available to put it in.

In order to obtain data from a device the user must call the retrieveBuffer() method (see the reference documentation for each language for the precise name) to obtain a buffer with data from the main queue. The buffer is now owned by the application (the driver will not touch it) and the application may now use it to process data. Once the application doesn’t need the buffer anymore it should return it via the returnBuffer() method to put it back into the return queue so that the driver may re-use the buffer for data that is returned later.

If the user never returns any buffers to the return queue, this will at some point starve the driver and all subsequent data will be dropped.

In general the user should return buffers as soon as possible. If the data within the buffers is needed at a later point in time, please take a look at persistent buffers and buffer containers. (See below.)
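
Putting this together, a typical acquisition loop has the following shape. retrieveBuffer() and returnBuffer() are the methods referenced above; the start/stop method names and the timeout semantics are assumptions of this sketch:

    instrument.startAcquisition()
    try:
        while acquiring:                   # user-defined loop condition
            # Wait up to 1 second for the next buffer from the main queue.
            buffer = instrument.retrieveBuffer(timeout=1000)
            if buffer is None:
                continue                   # timed out; no new data yet
            try:
                handle_data(buffer)        # user-defined processing
            finally:
                # Return the buffer as soon as possible so the driver can
                # reuse it; withheld buffers starve the return queue.
                instrument.returnBuffer(buffer)
    finally:
        instrument.stopAcquisition()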

Total Buffer Counts

While the total number of buffers in shared memory must be specified once by the user during setup of the device, the number of buffers used for a specific acquisition process may be restricted to a lower number.

For example, if the user sets up the shared memory segment to contain 50 buffers, but tells the start acquisition method to only use 5 buffers, only 5 buffers will be used during that specific acquisition process; the other 45 buffers will not take part.

Latency vs. Dropped Buffers

The more buffers the user specifies for an acquisition, the less likely it becomes that a buffer will be dropped during the acquisition process. On the other hand, this may drastically increase the latency between the recording of a buffer by the driver and its actual processing by the user application.

As a rule of thumb the following should be considered:

  • When performing a recording the number of buffers should be set to a higher number so that data loss is mitigated as much as possible.

  • When processing data live it is typically better to have a lower latency and hence use fewer buffers, even if data gets dropped.

Note that whether the application is able to process the data fast enough not to stall the driver during acquisition depends strongly on the specific type of data processing, the amount of data per buffer, the number of buffers per second, and finally the power of the system the application is running on.

Persistent Buffers and Buffer Containers

If the user wants to keep the data of a specific buffer around longer, but doesn’t want to stall the driver during acquisition, fluxEngine provides so-called “Persistent Buffers” that can be allocated for this purpose.

The user may either:

  • Pre-allocate a persistent buffer for the instrument’s current buffer structure and later have fluxEngine copy the data from the device buffer into the persistent buffer

  • Allocate a new persistent buffer on the fly from a given device buffer

A persistent buffer, once data is copied into it, contains the same information as a device buffer, but lives solely in the memory of the application.

Additionally, in case the user wants to record a sequence of buffers – for example to create a white reference measurement, or simply to record data – the user may also allocate a buffer container. A buffer container is an object that can hold the data of a number of buffers (of the same structure). The user may allocate such a buffer container, indicating its maximum capacity at allocation time. They may then add buffers to that container until it is full, and later retrieve the data either as a whole, or individually.
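
As a hedged sketch, recording a white reference into a buffer container might look as follows; the allocation and accessor names (createBufferContainer, count, add) are assumptions of this sketch:

    # Allocate a container holding up to 50 buffers of the instrument's
    # current buffer structure.
    container = instrument.createBufferContainer(capacity=50)

    while container.count() < 50:
        buffer = instrument.retrieveBuffer(timeout=1000)
        if buffer is None:
            continue
        try:
            # Adding copies the data into application memory, so the
            # device buffer can be returned to the driver immediately.
            container.add(buffer)
        finally:
            instrument.returnBuffer(buffer)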

PushBroom HSI Cameras

The most common instruments used with fluxEngine are hyperspectral PushBroom cameras. To work with a PushBroom camera one must know the following:

  • Whether spatial pixels are in x direction and wavelengths/bands in y direction of the sensor, or vice-versa

  • What wavelengths are associated with which sensor pixels. This will vary from camera to camera, as there are manufacturing tolerances. So one camera in the VIS range may have wavelengths between 399.2 nm and 1001.5 nm, while another camera of the same model might range from 400.1 nm to 1002.7 nm, for example.

There are two ways in which this required information can be provided:

  • The spectrograph orientation (and hence which axis is which) will often be fixed for a specific camera model, and may hence be hard-coded into the fluxEngine driver for that camera.

  • The specific wavelength calibration may be provided either as data that may be read from the camera itself during connection (if the camera supports that), or via a calibration file that the user has to provide during the connection process. In some cases the user may provide a calibration file to use instead of the data placed on the device itself, in which case specifying a calibration file would not be mandatory for that driver.

It will depend on the camera model whether a calibration file should be specified.

In addition to this most basic information, all optics will show distortions from the optimum, called smile and keystone. Smile means that the wavelength assignment for each spatial column/row on the sensor is not exactly identical, but varies – a homogeneous image of a single wavelength will often be returned by the device in the form of a parabola, hence the name smile. Keystone means that the same spatial pixel is not projected onto an exact row/column on the sensor, but rather onto a slightly tilted (and potentially curved) line. Both of these effects may be corrected. There are three methods of dealing with these effects:

  • If they are very small they may be ignored. There are other sources of inaccuracies in cameras, such as noise, so if the optics are extremely precise, these effects may be small enough not to be a consideration. However, the more precise the question a given measurement is meant to answer, the more these effects play a role.

  • Some camera models correct for these effects already on-camera, and the data seen by fluxEngine (even that in the raw buffer) is already corrected against such effects.

  • fluxEngine may also perform these corrections itself if the driver asks it to. In that case these corrections will be performed when recording data or processing instrument data through a model. The information required for the correction may again be stored either on the camera, or be part of a calibration file that the user has to specify.

Note that even after smile is corrected, the mapping between sensor image rows/columns and wavelengths is still not going to be standardized – correcting smile only means that the mapping is identical regardless of the column/row of the spatial pixel being looked at.

Data Standardization and Processing

fluxEngine provides a means to standardize the data obtained from instrument devices. The buffers that are returned from retrieveBuffer() are the raw data obtained from the device itself. That data may have a layout related to the specific model of the device. To properly process data it must be standardized in some manner.

For PushBroom cameras this will typically consist of the following steps:

  • Perform image corrections that were not done on the camera itself

  • Normalize the storage order (fluxEngine uses BIP storage order internally for all data processing)

  • Associate wavelength information with the data

  • Optionally calculate reflectance information from the data in conjunction with a white reference

  • Optionally standardize the data to a well-defined wavelength grid

The precise operations required to properly standardize the data are specific to each instrument, and the fluxEngine driver will provide fluxEngine with the required information to actually perform them.

In order to standardize the data the user may create a processing context with the instrument’s buffers as the input data. There are three types of processing contexts that may be created for device data: preview, recording and data processing contexts. Models are not required for preview and recording contexts.

In general the user will allocate the context before the start of acquisition (but after setting the appropriate device parameters), and process the data using the context in a loop that obtains the buffers from the instrument device.
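
A hedged outline of that structure for a preview context (the creation function name is an assumption of this sketch; models are only needed for data processing contexts):

    # Create the context after device parameters are set, but before
    # acquisition starts (function name assumed).
    context = fluxEngine.createInstrumentPreviewContext(handle, instrument)

    instrument.startAcquisition()
    try:
        while previewing:                  # user-defined loop condition
            buffer = instrument.retrieveBuffer(timeout=1000)
            if buffer is None:
                continue
            try:
                # Standardize the raw device data via the context.
                line = context.process(buffer)
            finally:
                instrument.returnBuffer(buffer)
            show_preview(line)             # user-defined display
    finally:
        instrument.stopAcquisition()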

Preview Contexts

Preview contexts exist to enable a quick preview of the data that may be shown to the user, without taking up too much processing time.

For PushBroom cameras this will consist mainly of:

  • Normalizing the storage order to BIP

  • Potentially averaging over all wavelengths

The resulting data of a preview processing context will have the following structure for PushBroom cameras:

  • First dimension: y, fixed size 1 (single line)

  • Second dimension: x, fixed size depending on the ROI of the camera

  • Third dimension: uncorrected bands (the number of elements depends on the camera), but typically just fixed size of 1 (indicating that all uncorrected bands were averaged)

These lines may be displayed sequentially in a waterfall view to show an image of what the camera currently sees.
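
Assembling such a waterfall view is plain array handling; a minimal NumPy sketch, independent of any fluxEngine API, could look like this:

    import numpy as np

    # Each preview buffer of a PushBroom camera yields one line of shape
    # (1, width, 1); keep the most recent "height" lines as a 2-D image.
    height, width = 500, 640   # display height and camera ROI width (examples)
    waterfall = np.zeros((height, width), dtype=np.float32)

    def push_line(line):
        """Scroll the waterfall up one row and append the newest line."""
        waterfall[:-1] = waterfall[1:]
        waterfall[-1] = np.asarray(line).reshape(-1)[:width]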

For Imager cameras this will consist of:

  • Averaging over any mosaic pattern (or potentially decoding it into colors if not too expensive computationally)

The resulting data of a preview processing context will have the following structure for imager cameras:

  • First dimension: y, fixed size depending on the camera

  • Second dimension: x, fixed size depending on the camera

  • Third dimension: either fixed size 1 (monochrome) or 3 (RGB colors)

Data from a preview context should never be used to actually perform any further processing, as this exists only to show a simple preview image to the user.

Recording Contexts

Recording contexts exist to actually record data.

In the case of HSI cameras they will return information about the wavelengths associated with the resulting data. In addition the storage order will always be standardized to BIP. The user may choose whether to calculate reflectances or just record the intensity values returned by the device. Additionally the user may optionally specify a wavelength grid that will be used to interpolate the data from the device onto, if they want to store the data in a device-independent manner. (Each HSI camera will have slightly different wavelengths associated with the pixels of its data, as there are some manufacturing tolerances.) Corrections specified by the driver will be applied to the data obtained from the device.

Data recorded from HSI cameras may be saved in ENVI cubes by the user.
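
Whether fluxEngine itself offers a helper for this is left to the reference documentation; as a standalone illustration, a minimal BIP ENVI writer in Python could look as follows (the header fields shown are part of the standard ENVI format):

    import numpy as np

    def save_envi_bip(basename, cube, wavelengths):
        """Save a float32 cube of shape (lines, samples, bands) as a raw
        ENVI data file plus a minimal header."""
        lines, samples, bands = cube.shape
        cube.astype(np.float32).tofile(basename + ".raw")
        with open(basename + ".hdr", "w") as f:
            f.write("ENVI\n")
            f.write(f"samples = {samples}\n")
            f.write(f"lines = {lines}\n")
            f.write(f"bands = {bands}\n")
            f.write("header offset = 0\n")
            f.write("file type = ENVI Standard\n")
            f.write("data type = 4\n")     # 4 = 32-bit float
            f.write("interleave = bip\n")  # fluxEngine standardizes to BIP
            f.write("byte order = 0\n")    # little-endian
            f.write("wavelength = { "
                    + ", ".join(f"{w:g}" for w in wavelengths) + " }\n")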

Processing Contexts

In addition to providing a quick preview or recording data, the user may ask fluxEngine to standardize the data and then immediately process it against a model that has been loaded.

This allows the user to obtain the results of a model live, and to either store the results of the model (instead of the data itself), or perform actions based on those results in real time.

This is useful when using fluxEngine in an integrated application.

Device Parameter Standardization

Specific common settings in HSI cameras will often have different names and semantics depending on the camera model. There is the GenICam standard, especially the Standard Feature Naming Convention, but almost none of the supported HSI cameras actually follow that standard.

In addition, HSI cameras will often not use the entire sensor area. Instead, the optics will project light only onto a part of the sensor. That area is called the active area.

To standardize the experience with cameras as much as possible, all fluxEngine drivers:

  • Automatically restrict the area that can be retrieved from the device to the active area. (This means that even if the active area varies from device to device due to manufacturing tolerances, a user of fluxEngine will always just retrieve that area from the device.)

    This means that OffsetX = 0 and OffsetY = 0 refer to the top-left corner of the active area, not of the full sensor.

  • Ensure that the region of interest (ROI) can be set in arbitrary increments that make sense for the given sensor. For a PushBroom sensor this means that the ROI may be set in increments of 1 for most devices, even if the device natively doesn’t support that. In that case the closest possible ROI will be selected on the device and the rest of the ROI will be applied in software. For devices with a mosaic pattern the ROI must obviously be selected in increments of the pattern size, but any additional restrictions of the device will be eliminated using software ROI.

  • Automatically cross-update offset and sizes of the region of interest. This means that if the user updates the x offset, for example, and the resulting ROI would be too large, the width will automatically be reduced. (In contrast, most cameras, as well as the GenICam standard, will disallow the changing of the offset in that case, until the width is reduced beforehand.)

    This exists primarily to make it easier for the user to update the ROI, because they don’t have to think about the order in which these settings have to be applied (the cross-update rule is sketched after this list).

  • If the camera supports hardware binning, the OffsetX, OffsetY, Width and Height settings that are shown to the user will appear as if they were following the GenICam Standard Feature Naming Convention. For example, if the active area of a sensor has a size of 500 by 500, when the hardware binning is disabled (both vertical and horizontal binning are set to 1) the maximum width will be 500 and the maximum height will be 500. When the user activates horizontal binning and sets it to 4 in that case, the maximum width reduces to 125 (500 divided by 4), but that still covers the same area.

    This is done regardless of how the camera does this internally. Among the cameras that are currently supported by fluxEngine, and across various firmware versions of these cameras, there are 5 (!) different ways in which the cameras actually allow the user to specify the ROI when binning is enabled. Regardless of the camera, fluxEngine provides a standardized view of the ROI and hardware binning settings.

  • The drivers attempt to assign standardized names to various parameters, such as ExposureTime, AcquisitionFrameRate, etc., regardless of how the camera calls them internally.
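
To illustrate the cross-update rule mentioned in the list above, here is a small self-contained Python sketch of the clamping logic (this is not fluxEngine API, just the rule itself):

    def set_offset_x(roi, new_offset, active_width):
        """Apply the cross-update rule: changing the offset clamps the
        width so the ROI stays within the active area."""
        roi["OffsetX"] = new_offset
        roi["Width"] = min(roi["Width"], active_width - new_offset)
        return roi

    # With a 500 pixel wide active area, moving the offset to 400
    # automatically reduces the width instead of rejecting the change:
    print(set_offset_x({"OffsetX": 0, "Width": 500}, 400, 500))
    # -> {'OffsetX': 400, 'Width': 100}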

Apart from the restriction to the active area, which is always done by all drivers, some older drivers do not yet completely support all of the aforementioned parameter standardizations. This is an area that is constantly being improved upon.

What is not abstracted away is the spectrograph orientation. While fluxEngine uses BIP storage order after data has been standardized, the instrument device parameters still affect the dimensions in the same orientation that the device was constructed in. For a PushBroom camera with the wavelength axis in y direction, this means that the OffsetY and Height parameters dictate which bands are selected, while for a PushBroom camera with the wavelength axis in x direction the OffsetX and Width parameters are used instead. This is done so that, when the user views the raw buffer from the device as-is during calibration, the ROI parameters affect the frame in a manner that is directly reflected in the image that is shown.