Module: file_system

Filesystem-related nodes.

These nodes perform file-system related operations, such as loading and saving data to/from files on disk, and in some cases from network storage. Typically, the data packets that these nodes produce (in the case of loader/importer nodes) or accept (in the case of saver/exporter nodes) represent whole recordings, i.e., they are not streaming chunks. A small subset of nodes in this category works with streaming data. Some nodes perform auxiliary file system tasks, such as manipulating directories.

ExportCSV

Export data into a CSV file.

This node accepts a multi-channel time series and writes it to a CSV (comma-separated values) file. The result can be read by numerous analysis packages, including Excel, MATLAB, and SPSS. This node works both for saving the output of an offline processing pipeline and for writing streaming data to disk. This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, such as frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples). The file is laid out as one row per sample, with the per-channel values of each sample separated by commas. Optionally, this node can write the names of the channels into the first row (as a header, which is supported by some software), and it can also append the time-stamp of each sample as an additional column at the end of each row.
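
The row-per-sample layout described above can be sketched in plain Python; the data, channel names, and filename here are illustrative, not the node's internals:

```python
import csv

# Illustrative data: 2 channels, 4 samples (one row per sample).
channel_names = ["C3", "C4"]
samples = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
timestamps = [0.00, 0.01, 0.02, 0.03]

with open("untitled.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Optional header row with channel names (plus the time-stamp column).
    writer.writerow(channel_names + ["timestamp"])
    for row, ts in zip(samples, timestamps):
        # Optional time-stamp appended as the last column of each row.
        writer.writerow(row + [ts])
```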

Version 1.0.1

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    File name to record to (csv).

    • verbose name: Filename
    • default value: untitled.csv
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • col_axis
Axis along which to select for the columns. Use the space axis to select channels, the instance axis to select trials or segments, the feature axis when the data happens to contain features (e.g., after feature extraction), or the time or frequency axis as appropriate.

    • verbose name: Columns Select Axis
    • default value: space
    • port type: EnumPort
    • value type: object (can be None)
  • column_header
    Include specified labels as columns headers. If this is set, the first row in the file will have the specified label names. Some analysis programs can interpret this as the header row of the table (basically as column labels).

    • verbose name: Include Column Header
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • row_axis
Axis along which to select for the rows. Use the space axis to select channels, the instance axis to select trials or segments, the feature axis when the data happens to contain features (e.g., after feature extraction), or the time or frequency axis as appropriate. Select instance-fields to drop columns from the instance data recarray.

    • verbose name: Rows Select Axis
    • default value: frequency
    • port type: EnumPort
    • value type: object (can be None)
  • row_header
    Include specified labels as row headers. If this is set, the first column in the file will have the label names. Some analysis programs can interpret this as the header column of the table (basically as row labels).

    • verbose name: Include Row Header
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • axis_description
If both column and row headers are used, this text description of the row-by-column items (e.g., Frequency (Hz) by Channel) will appear in cell (1,1) of the array.

    • verbose name: Axis Description
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportEDF

Export data to a file in EDF, EDF+, or BDF format.

The EDF family of file formats is quite well supported across vendors, and can be used to store raw or processed continuous (non-segmented) time series data. However, EDF is a lossy format in that the data is quantized to a relatively low number of bits per sample, and the types of meta-data per channel, per recording, or per event are quite limited. Some alternatives are .SET (for processed data) and .XDF (for raw data) formats.
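
To see why the quantization matters, the amplitude resolution implied by a given sample width can be computed directly; the physical range used here is an illustrative choice, not a property of the node:

```python
# EDF stores each sample as a 16-bit integer (BDF: 24-bit), so the smallest
# representable amplitude step depends on the channel's physical range.
def quantization_step(phys_min, phys_max, bits):
    """Smallest amplitude difference representable at the given sample width."""
    return (phys_max - phys_min) / (2 ** bits)

# Example: a channel spanning +/-200 microvolts.
edf_step = quantization_step(-200.0, 200.0, 16)  # ~0.0061 uV per step
bdf_step = quantization_step(-200.0, 200.0, 24)  # ~0.000024 uV per step
```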

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    Name of the file to export (edf).

    • verbose name: Filename
    • default value: untitled.edf
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • file_type
    Among other differences, EDF+ uses 16 bits/sample and BDF+ uses 24 bits/sample.

    • verbose name: File Type
    • default value: EDF+
    • port type: EnumPort
    • value type: object (can be None)
  • more_markers
    Write more than one marker (annotation) per sec. This will work, but the resulting EDF may not be correctly read by all EDF import libraries. (Recommend using XDF instead where possible.)

    • verbose name: Write >1 Marker/sec
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportH5

Export data to a file in HDF5 format.

Since the HDF5 format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (which would be required when saving the output of an online processing pipeline). The HDF5 file format is quite widespread and can be opened by many data analysis packages, including MATLAB. However, with any of those systems you will still have to find your way through the data that you imported, which reflects the way data is organized in NeuroPype. The data is organized as a packet with one or more chunks (if there is more than one stream in the data), each of which has meta-data properties, as well as an n-dimensional array that has n axes of various types (e.g., time, space, instance, feature, frequency, and so on). These axis types have type-specific meta-data, such as the sampling rate and the time points of the individual samples in the case of the time axis.
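
As a rough sketch of what browsing such a file looks like from Python, h5py can walk the group hierarchy; the group, dataset, and attribute names below are purely illustrative, not NeuroPype's actual on-disk schema:

```python
import h5py

# Create a small HDF5 file with a nested group carrying axis-style meta-data.
with h5py.File("example.h5", "w") as f:
    chunk = f.create_group("chunks/eeg")
    chunk.attrs["srate"] = 250.0  # e.g., time-axis sampling rate
    chunk.create_dataset("block", data=[[1.0, 2.0], [3.0, 4.0]])

def walk(name, obj):
    # visititems calls this for every group and dataset in the file.
    print(name, dict(obj.attrs))

with h5py.File("example.h5", "r") as f:
    f.visititems(walk)
```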

More Info...

Version 0.5.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    Name of the file to export (h5).

    • verbose name: Filename
    • default value: untitled.h5
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • rename_on_write
    Rename file on write. Useful when files are used as hand-shaking artifacts between pipelines, since it ensures that the file is complete by the time it shows up. Only supported on local file system at this point.

    • verbose name: Rename On Write
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportJSON

Export data to a file in JSON format.

Since the JSON format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (this would be required when trying to save the output of an online processing pipeline). JSON is a human-readable format that can also be loaded in most programming languages.
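
A minimal round-trip sketch in Python; the filename and keys are illustrative, not the node's actual output structure:

```python
import json

# Anything JSON-serializable survives a dump/load round trip intact.
record = {"srate": 250.0, "channels": ["C3", "C4"], "n_samples": 1000}

with open("untitled.json", "w") as f:
    json.dump(record, f)

with open("untitled.json") as f:
    loaded = json.load(f)  # readable from most other languages, too
```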

More Info...

Version 0.5.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    Name of the file to export (json).

    • verbose name: Filename
    • default value: untitled.json
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportMAT

Export data to a file in MAT format.

Since the MAT format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (which would be required when saving the output of an online processing pipeline). The MAT file format can be opened natively with MATLAB, Octave, and Python, among others. However, with any of those systems you will still have to find your way through the data that you imported, which reflects the way data is organized in NeuroPype. The data is organized as a packet with one or more chunks (if there is more than one stream in the data), each of which has meta-data properties, as well as an n-dimensional array that has n axes of various types (e.g., time, space, instance, feature, frequency, and so on). These axis types have type-specific meta-data, such as the sampling rate and the time points of the individual samples in the case of the time axis.
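
For illustration, a MAT file can be round-tripped from Python with SciPy; the variable names below are made up, not NeuroPype's actual MAT layout:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Write two illustrative variables into a MAT file.
savemat("untitled.mat", {"block": np.eye(3), "srate": 250.0})

# loadmat returns a dict of numpy arrays (scalars come back as 2-D arrays).
contents = loadmat("untitled.mat")
```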

More Info...

Version 0.5.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    Name of the file to export (mat).

    • verbose name: Filename
    • default value: untitled.mat
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportSET

Export non-streaming continuous data to an EEGLAB set file.

This node accepts a packet that is assumed to hold the entire recording to save (i.e., non-streaming data), and writes it to a .set file. The node further assumes that the data is a 2-D array with a space axis (channels) and a time axis (samples). Note that this node currently cannot save segmented ("epoched") data, which is characterized by the presence of an instance axis in addition to time and space. The packet may also optionally contain a second stream holding markers, which will be written out as event markers into the set file. Note: some processing nodes will insert axis types that this node cannot handle, such as feature axes (in the case of feature-extraction nodes). If your data has such axes, you can first replace those axes with a space axis (using the Override Axis node), or fold additional axes into the space or time axis (using the Fold Into Axis node). EEGLAB is quite a versatile environment for working with EEG data, and is a commonly used target for data processed with NeuroPype for further investigation.
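
Folding an extra axis into the space axis, as suggested above, amounts to a reshape. A minimal NumPy sketch outside of NeuroPype, with illustrative array shapes:

```python
import numpy as np

# Data with an extra feature axis: (channels, features, samples).
data = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# Folding the feature axis into the space axis yields a 2-D
# (channels*features, samples) array that a channels-by-time exporter accepts.
folded = data.reshape(2 * 3, 4)
```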

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    Name of the file to export (set).

    • verbose name: Filename
    • default value: untitled.set
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • boundary_events
    Add boundary events where there are a significant number of dropped samples.

    • verbose name: Add Boundary Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • max_dropped
    Maximum dropped samples allowed before a boundary condition event is inserted.

    • verbose name: Max Allowed Dropped Samples
    • default value: 2
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportStructure

Save a custom structure (dictionary) to a file.

This file format is native to NeuroPype. Implementation note: the file format is based on Python's pickle system. Since unpickling can execute arbitrary code, files obtained from the internet may carry malicious payloads or otherwise have been tampered with, and should be treated with caution.
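
A minimal pickle round-trip in plain Python, illustrating both the mechanism and the caveat; the filename and contents are illustrative:

```python
import pickle

# Only unpickle files from sources you trust: pickle.load can execute
# arbitrary code embedded in a maliciously crafted file.
struct = {"model": "csp", "weights": [0.1, 0.2, 0.3]}

with open("untitled.pkl", "wb") as f:
    pickle.dump(struct, f, protocol=3)  # protocol 3 for broad compatibility

with open("untitled.pkl", "rb") as f:
    restored = pickle.load(f)
```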

Version 1.0.1

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Data to save.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: IN
  • filename
    Name of the file where the packets will be saved.

    • verbose name: Filename
    • default value: None
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • encoding
    Encoding to use for saving the file. Note that some protocols may not be able to export all data structures.

    • verbose name: Encoding
    • default value: pickle
    • port type: EnumPort
    • value type: object (can be None)
  • protocol_ver
    Pickle protocol version. This is the internal protocol version to use. Older software may not be able to read files created with the latest version, but version 3 is supported by all NeuroPype releases.

    • verbose name: Protocol Ver
    • default value: 3
    • port type: IntPort
    • value type: int (can be None)
  • save_if_empty
    Save empty structs.

    • verbose name: Save If Empty
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • allow_pickle_fallback
    Allow falling back to pickle for some objects that aren't JSON or msgpack-serializable.

    • verbose name: Allow Pickle Fallback
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ExportYAML

Export data to a file in YAML format.

Since the YAML format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (this would be required when trying to save the output of an online processing pipeline). YAML is a human-readable format that can also be loaded in most programming languages.

More Info...

Version 0.5.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    Name of the file to export (yaml).

    • verbose name: Filename
    • default value: untitled.yaml
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

GetCurrentWorkingDirectory

Returns the current working directory of the application as a string.
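
The Python equivalent of this node's output is simply:

```python
import os

# The process's current working directory, returned as a string.
cwd = os.getcwd()
```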

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Current working directory.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT

ImportAxon

Import data from an Axon dataset.

Import pCLAMP and AxoScope files (ABF versions 1 and 2), developed by Molecular Devices (Axon Instruments). The file extension is .abf. This node was tested on publicly available sample data files. If this node does not work with your data, please contact us and send a minimal example of a problematic data set.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original int format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
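  The interplay between the segments and time_bounds filters described above can be illustrated with a small sketch. The segment spans and selection logic here are hypothetical (this is not the importer's actual code), but they show the stated rule that segments falling outside time_bounds are ignored:

  ```python
  def select_segments(segment_spans, segments=None, time_bounds=None):
      """Return indices of segments that survive both filters.

      segment_spans: list of (start, stop) times, one per segment (hypothetical).
      segments: 0-based indices to import, or None for all.
      time_bounds: [start, stop] time limits, or None for no limit.
      """
      selected = range(len(segment_spans)) if segments is None else segments
      if time_bounds is None:
          return list(selected)
      lo, hi = time_bounds
      # Segments that fall entirely outside time_bounds are ignored.
      return [i for i in selected
              if segment_spans[i][1] > lo and segment_spans[i][0] < hi]

  spans = [(0, 10), (10, 20), (20, 30)]
  print(select_segments(spans, segments=[0, 2], time_bounds=[5, 25]))  # → [0, 2]
  ```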

ImportBCI2000

Import data from a BCI2000 dataset.

This node supports file formats generated by the BCI2000 software for brain-computer interfacing. Currently only single-file importing is supported.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording data file. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original int format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportBrainvision

Import data from a BrainVision .VHDR dataset.

Software that writes this format includes the BrainVision Recorder and the BrainVision Analyzer by Brain Products GmbH; however, the format is also supported by other vendors due to its relatively easy-to-implement textual format. Also see the ImportVHDR node.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original int format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportCSV

Import EEG data from a CSV file.

The assumed format is that the first record in the CSV file contains the column header names, including the EEG channel labels. The nominal rate is typically not stored in the CSV file, so it will be extrapolated from the timestamps, under the assumption that there are no gaps. If the calculated nominal rate is incorrect (due to inconsistent intervals between timestamps), use Dejitter Timestamps to correct it. It is assumed that all columns except the timestamp and marker columns are EEG channel data; if some are not, these can be excluded with the Exclude Columns parameter (alternatively, use the Select Range node to filter out the columns that do not contain EEG data). Note that the timestamps in the CSV are expected to be in seconds. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, you can chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first stream is called 'eeg' and holds the EEG/EXG data, and the packet contains a single chunk for this stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a second stream named 'markers' is also included in the packet, which has a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the CSV file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (csv). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • timestamp_column
    The name or number (counting from 0) of the column which holds the timestamps.

    • verbose name: Timestamp Column
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • marker_column
    The name or number (counting from 0) of the column which holds the event markers.

    • verbose name: Marker Column
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • instance_column_name
    Name of a column which holds the instance payload data, if any.

    • verbose name: Instance Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • exclude_columns
    List of columns which should be excluded. Can be names or numbers (counting from 0). Example: [2,7,8,'Subject ID', 'Gender'].

    • verbose name: Exclude Columns
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • include_columns
    List of columns which should be included. Can be names or numbers (counting from 0). If not given, all columns will be included.

    • verbose name: Include Columns
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • data_stream_name
    Name that shall be used for the emitted data stream.

    • verbose name: Data Stream Name
    • default value: eeg
    • port type: StringPort
    • value type: str (can be None)
  • eeg_channel_names
    EEG channel labels.

    • verbose name: Eeg Channel Names
    • default value: None
    • port type: DataPort
    • value type: list (can be None)
    • data direction: OUT
  • timestamp_column_name
    Name of a column which holds the timestamps. (Deprecated. Use timestamp_column instead.)

    • verbose name: Timestamp Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • marker_column_name
    Name of a column which holds the event marker names.

    • verbose name: Marker Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • emit_each_tick
    Emit data on every tick.

    • verbose name: Emit Each Tick
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
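  A minimal CSV in the layout this node assumes (a header row with channel labels, timestamps in seconds, and an optional marker column) could be produced like this. The column names and values are illustrative only, not mandated by the node:

  ```python
  import csv
  import io

  # Two EEG channels sampled at a nominal 10 Hz, plus a timestamp column
  # (in seconds) and a marker column (empty when no event occurred).
  rows = [
      {"timestamp": 0.0, "Fp1": 1.2, "Fp2": -0.7, "marker": ""},
      {"timestamp": 0.1, "Fp1": 0.9, "Fp2": -0.3, "marker": "stimulus"},
      {"timestamp": 0.2, "Fp1": 1.1, "Fp2": -0.5, "marker": ""},
  ]
  buf = io.StringIO()
  writer = csv.DictWriter(buf, fieldnames=["timestamp", "Fp1", "Fp2", "marker"])
  writer.writeheader()   # the first record holds the column header names
  writer.writerows(rows)
  print(buf.getvalue())
  ```

  When importing such a file, timestamp_column would be set to 'timestamp' and marker_column to 'marker'; the remaining columns are then treated as EEG channel data.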

ImportEDF

Import data from an edf+, bdf, or gdf file.

This node can import EEG, MEG, EOG, ECG, EMG, ECOG, and fNIRS data stored in the source file. The node imports the respective data, which is assumed to be continuous (non-epoched), optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, you can chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first streams are named after their respective data modality, i.e., eeg, meg, eog, ecg, emg, ecog, or fnirs, depending on which are available. The packet contains a single chunk for each stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a final stream named 'markers' is also included in the packet, which has a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the source file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (edf, bdf, or gdf). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportFile

Import data from a supported file format.

This node acts as a meta-importer that can import from any file format supported by NeuroPype. The node determines the format based on the file extension. Note that some limitations apply; in particular, file formats that come as a folder of files (Tucker-Davis) cannot currently be imported directly from a cloud storage location, and need to be downloaded before they can be processed.

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import, in any supported format. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/). The filename can also contain wildcard characters, in which case the files will be concatenated in alphabetical order.

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored. Not currently supported for all file formats.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored. Not currently supported for all file formats.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed. Not currently supported for all file formats.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original int format. Not currently supported for all file formats.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk. Not currently supported for all file formats.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • retain_streams
    List of streams to retain. The order doesn't actually matter (it's always data streams first, marker streams second). Warning: currently only supported for multi-modal file formats (XDF).

    • verbose name: Retain Streams
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • use_streamnames
    Use the stream names in the file to name streams. If enabled, the streams loaded will be named as in the file. Otherwise, the streams use canonical names based on the content type, such as eeg or markers. Warning: currently only supported for multi-modal file formats (XDF). If retain_streams is specified, then this will always be True.

    • verbose name: Use Streamnames
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
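  The wildcard behavior described for the filename port (matches concatenated in alphabetical order) corresponds to the following ordering rule in plain Python. This is a sketch of the rule only, not the node's implementation:

  ```python
  import glob

  def files_in_import_order(pattern):
      """Expand a wildcard filename into the order in which the matching
      files would be concatenated: alphabetical order of the matches."""
      return sorted(glob.glob(pattern))

  # Hypothetical example: files_in_import_order("session_*.xdf") might
  # return ['session_01.xdf', 'session_02.xdf', ...]
  ```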

ImportH5

Import data from an h5 file.

This node imports data in HDF5 format, assuming the NeuroPype Baryon file format, which is supported, among others, by the ExportH5 node. This node can import any data that can be represented by the NeuroPype Packet data structure, including multiple streams with any number of (possibly) annotated axes as well as arbitrary meta-data.

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (h5). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportNSX

Import data from a Blackrock dataset.

This node loads time-series data of multiple sampling rates, digital events, and neural event data from a Blackrock dataset. The file extensions supported by this node are .NEV and .NS1 through .NS6, which hold data at different sampling rates. Note: this routine handles files conforming to specifications 2.1, 2.2, and 2.3.
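
Since a Blackrock recording is spread across several files that share a base name (.nev plus .ns1 through .ns6, and possibly .sif/.ccf sidecars), resolving the filename parameter can be pictured as below. This is only an illustrative sketch; the helper names and behavior are assumptions, not the node's actual implementation:

```python
from pathlib import Path

# Extensions that the filename parameter treats as part of the dataset,
# per the port description (they are stripped when parsing the base name).
_IGNORED = {".nev", ".sif", ".ccf"} | {f".ns{i}" for i in range(1, 7)}

def blackrock_basename(filename: str) -> str:
    """Return the dataset base name with any Blackrock extension stripped."""
    p = Path(filename)
    if p.suffix.lower() in _IGNORED:
        p = p.with_suffix("")
    return str(p)

def candidate_files(filename: str) -> list:
    """Names of the component files that could belong to this recording."""
    base = blackrock_basename(filename)
    return [base + ext for ext in sorted(_IGNORED)]
```

A caller would then check which of the candidate files actually exist on disk.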

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Base filename of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/). Any .nsX, .nev, .sif, or .ccf extension is ignored when parsing this parameter.

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signals to voltage (float). Set to False to keep the data in the original integer format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
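
The interplay of the segments and time_bounds ports above amounts to an intersection: a segment is imported only if its index is selected and it overlaps the time range. A rough sketch of that selection logic (the data layout and function name are illustrative assumptions, not the node's internals):

```python
def select_segments(segment_times, indices=None, time_bounds=None):
    """Pick segments by 0-based index and by overlap with time_bounds.

    segment_times: list of (start, end) tuples, in seconds.
    indices: 0-based segment indices to keep, or None for all.
    time_bounds: (t0, t1) limits in seconds, or None for no limit.
    """
    kept = []
    for i, (start, end) in enumerate(segment_times):
        if indices is not None and i not in indices:
            continue  # segment was not requested
        if time_bounds is not None:
            t0, t1 = time_bounds
            if end <= t0 or start >= t1:
                continue  # segment lies entirely outside the time range
        kept.append(i)
    return kept
```

For example, with three 10-second segments, requesting indices [0, 1] with time_bounds (12, 25) keeps only segment 1, since segment 0 ends before the range begins.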

ImportNeuralynx

Import data from a Neuralynx dataset.

This node loads time-series data of multiple sampling rates, digital events, and neural event data from a Neuralynx dataset. The file extensions supported are .NCS, .NEV, .NSE and .NTT.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • dirname
    Name of the directory containing the Neuralynx data files (.ncs). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Dirname
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signals to voltage (float). Set to False to keep the data in the original integer format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
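
The signal_autoscale option shared by these importers corresponds to converting raw integer ADC counts into floating-point volts using a per-channel gain. A generic sketch of that conversion (the function name and gain values are made up for illustration; actual gains come from the file header):

```python
import numpy as np

def autoscale_to_volts(raw, gain_uV_per_count):
    """Convert int ADC counts to float64 volts using a per-channel gain.

    raw: int array of shape (channels, samples).
    gain_uV_per_count: microvolts per count, one value per channel.
    """
    gain = np.asarray(gain_uV_per_count, dtype=np.float64)
    # Broadcast the per-channel gain across samples, then scale uV -> V.
    return raw.astype(np.float64) * gain[:, None] * 1e-6
```

Disabling auto-scaling keeps the smaller integer representation, which roughly halves memory use for int16 data at the cost of deferring the unit conversion.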

ImportNeuroExplorer

Import data from a NeuroExplorer dataset.

This node loads analog signals, digital events, and neural event data from a NeuroExplorer dataset. The canonical file extension for this format is .NEX.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signals to voltage (float). Set to False to keep the data in the original integer format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportPLX

Import data from a Plexon dataset.

This node loads analog signals, digital events, and neural event data from a Plexon dataset. The canonical file extension for this format is .PLX. Note that modern Plexon systems may store data in the newer PL2 format, which is NOT supported by this node.
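
Because .PL2 files are silently easy to confuse with .PLX ones, a caller may want to check the extension before handing a path to this node. A minimal sketch (the helper name and error messages are assumptions, not part of the node):

```python
from pathlib import Path

def check_plexon_extension(filename: str) -> None:
    """Raise a clear error for the unsupported .pl2 format."""
    suffix = Path(filename).suffix.lower()
    if suffix == ".pl2":
        raise ValueError(
            "PL2 files are not supported by ImportPLX; "
            "convert the recording to PLX first.")
    if suffix != ".plx":
        raise ValueError(f"unexpected Plexon extension: {suffix!r}")
```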

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signals to voltage (float). Set to False to keep the data in the original integer format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportSET

Import data from an EEGLAB set file.

This format is a commonly used interchange file format for EEG, for which converters from most other EEG file formats exist. This node will import the respective data, which is assumed to be continuous (non-epoched) EEG/EXG, optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, you can chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first stream is called 'eeg' and holds the EEG/EXG data, and the packet contains a single chunk for this stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a second stream named 'markers' is also included in the packet; it has a vector of numeric data (one value per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the .set file. The numeric data can be overridden based on the event-type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
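
The packet layout described above can be pictured with plain numpy arrays. The dict structure below is only an illustration of the shapes and fields involved, not the actual Packet/chunk API, and the marker payloads are made-up examples:

```python
import numpy as np

n_channels, n_samples, n_markers = 4, 1000, 3

# 'eeg' stream: a single chunk holding a 2d array with a space axis
# (channels) and a time axis (samples).
eeg = np.zeros((n_channels, n_samples))

# 'markers' stream: one numeric value per marker (initially all NaN), plus
# an instance axis whose entries each carry a timestamp and the event-type
# string payload from the .set file.
marker_values = np.full(n_markers, np.nan)
marker_instances = [
    {"timestamp": 0.5, "payload": "stimulus/left"},
    {"timestamp": 1.2, "payload": "stimulus/right"},
    {"timestamp": 2.0, "payload": "response"},
]

packet = {
    "eeg": {"data": eeg, "axes": ("space", "time")},
    "markers": {"data": marker_values, "instances": marker_instances},
}
```

A downstream step such as Assign Targets would then replace the NaN values with numeric target labels derived from the payload strings.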

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (set). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportSpike2

Import data from a Spike2 dataset.

This node loads analog signals, digital events, and neural event data from a CED Spike2 dataset. The canonical file extension for this format is .SMR.

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the selected segments will likewise be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signals to voltage (float). Set to False to keep the data in the original integer format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportStructure

Import a structure (dictionary) from disk.

Implementation notes: the file format is based on Python's pickle system. Since pickle is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., foreign MS Word documents.

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output data.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: OUT
  • filename
    Name of the file where the structure was saved.

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • encoding
    Encoding that was used to save the file. Note that some encodings may not be able to represent all data structures.

    • verbose name: Encoding
    • default value: pickle
    • port type: EnumPort
    • value type: object (can be None)
  • allow_pickle_fallback
    Allow falling back to loading pickled data for some objects that aren't JSON- or msgpack-serializable.

    • verbose name: Allow Pickle Fallback
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportTDT

Import data from a TDT dataset.

This node loads analog signals, digital events, and neural event data from a Tucker-Davis TTank dataset. Note that the path name passed to this node is the name of the directory containing the files:
  • TSQ: timestamp index of the data
  • TBK: channel info
  • TEV: spike, event, and signal data (old format version)
  • SEV: signal data (new format version)

More Info...

Version 0.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • dirname
    Name of the directory containing the TTank data set. Select the TTank data folder that contains the Block- folders. If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Dirname
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. segments that fall outside time_bounds will be ignored. time_bounds that fall outside the segments will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into 'analogsignals' chunk. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original int format.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into 'events' chunk.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportVHDR

Import data from a BrainProducts .vhdr file.

This is an easy-to-parse text/binary-based format that is also supported by some vendors other than Brain Products. This node will import the respective data, which is assumed to be continuous (non-epoched) EEG/EXG, optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first stream is called 'eeg' and holds the EEG/EXG data, and the packet contains a single chunk for this stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a second stream named 'markers' is also included in the packet, which has a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances and whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the marker (.vmrk) file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
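The marker layout described above (a numeric value per marker, initially NaN, paired with a timestamped string payload, later overridden by Assign Targets) can be sketched in plain Python. All names here are illustrative stand-ins, not the actual NeuroPype API:

```python
import math

# Hypothetical illustration of the 'markers' stream layout: one entry per
# marker, with a timestamp, a string payload, and a numeric value that
# starts out as NaN.
markers = [
    {"timestamp": 1.25, "payload": "stimulus/left",  "value": math.nan},
    {"timestamp": 2.50, "payload": "stimulus/right", "value": math.nan},
    {"timestamp": 3.75, "payload": "response",       "value": math.nan},
]

def assign_targets(markers, mapping):
    """Mimic what Assign Targets does: override the numeric value of each
    marker whose payload appears in the mapping; others stay NaN."""
    for m in markers:
        if m["payload"] in mapping:
            m["value"] = mapping[m["payload"]]
    return markers

assign_targets(markers, {"stimulus/left": 0, "stimulus/right": 1})
print([m["value"] for m in markers])  # the 'response' marker stays NaN
```

After this step, segmentation and supervised learning nodes can use the numeric values as target labels.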

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (vhdr). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)

ImportXDF

Import data from an xdf file.

The XDF file format (see "more..." link below) can store one or more streams of multi-channel time series data, such as EEG, EXG, eye tracking, motion capture, audio, and video, as well as marker data, and can be recorded to using, e.g., the Lab Streaming Layer. This node can import any subset of streams from an XDF file, and supports the minimum necessary per-stream meta-data. It is important to note that XDF files oftentimes have more streams in them than what one wants to analyze, and in such cases one can use the retain_streams option to restrict the imported subset, to prevent NeuroPype from processing the wrong streams. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the output packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). The contents of an XDF file are time-synched, and this node supports some options for processing the respective time stamps to ensure good data alignment in the presence of clock drift and jitter. Technically, the packet generated by this node is formatted as follows: the imported streams are named based on their content type (e.g., 'eeg', 'audio'), and when multiple streams of the same type are present, the names for that type are instead numbered as in 'eeg-1', 'eeg-2', and so on. The packet generated by this import node contains a single chunk for each stream with a 2d array with a space axis (channels) and a time axis (samples).
If the data had markers (type 'markers'), the imported chunk for each stream is formatted as a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances and whose entries are associated each with a timestamp and a string payload that is the respective event/marker type from the .xdf file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
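The stream-naming rule described above (canonical type names, numbered only when a type occurs more than once) can be sketched as follows; this is an illustrative re-implementation, not NeuroPype's actual code:

```python
from collections import Counter

def canonical_stream_names(content_types):
    """Name streams by content type; number them ('eeg-1', 'eeg-2', ...)
    only when the same type occurs more than once."""
    totals = Counter(content_types)
    seen = Counter()
    names = []
    for ctype in content_types:
        if totals[ctype] > 1:
            seen[ctype] += 1
            names.append(f"{ctype}-{seen[ctype]}")
        else:
            names.append(ctype)
    return names

print(canonical_stream_names(["eeg", "eeg", "markers", "audio"]))
# → ['eeg-1', 'eeg-2', 'markers', 'audio']
```

Note that enabling use_streamnames bypasses this scheme and keeps the names stored in the file instead.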

More Info...

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (xdf or xdfz). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose diagnostics (currently unsupported).

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • retain_streams
    List of streams to retain. The order doesn't actually matter (it's always data streams first, marker streams second).

    • verbose name: Retain Streams
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • handle_clock_sync
    Enable clock synchronization. Needed if data were recorded across multiple computers.

    • verbose name: Handle Clock Sync
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • handle_jitter_removal
    Enable jitter removal for regularly sampled streams. This removes jitter under the assumption that the sampling rate of the data was constant (unless the sampling rate of a stream is explicitly marked as irregular).

    • verbose name: Handle Jitter Removal
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • handle_clock_resets
    Handle clock resets. Whether the importer should check for potential resets of the clock of a stream (e.g. computer restart during recording, or hot-swap).

    • verbose name: Handle Clock Resets
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • reorder_timestamps
    If the file contains a stream with irregular sampling rate and timestamps that are out of order, reorder the samples so that the timestamps are monotonically increasing.

    • verbose name: Reorder Timestamps
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • use_streamnames
    Use the stream names in the file to name streams. If enabled, the streams loaded will be named as in the file. Otherwise, the streams use canonical names based on the content type, such as eeg or markers.

    • verbose name: Use Streamnames
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • max_marker_len
    Optionally a max length on the event markers. Markers longer than this will be substituted by a placeholder string in which XXXX is a string key into the chunk's .props['long_markers'] field, which is basically a string table. This is only useful if long markers slow down or otherwise throw off downstream processing.

    • verbose name: Max Marker Len
    • default value: None
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
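The jitter-removal option above assumes a constant nominal sampling rate; a common way to realize this (shown here as an illustrative sketch, not NeuroPype's actual implementation) is to regress the recorded timestamps onto the sample indices and replace them with the fitted, perfectly regular timestamps:

```python
def dejitter_timestamps(timestamps):
    """Least-squares fit of timestamps against sample index; returns
    regularized timestamps under a constant-rate assumption."""
    n = len(timestamps)
    xs = range(n)
    mean_x = (n - 1) / 2.0
    mean_t = sum(timestamps) / n
    cov = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, timestamps))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var            # effective sampling interval
    intercept = mean_t - slope * mean_x
    return [intercept + slope * x for x in xs]

# Timestamps with jitter around a 100 Hz (0.01 s) grid:
noisy = [0.000, 0.011, 0.019, 0.031, 0.040]
print(dejitter_timestamps(noisy))
```

This is why the option must not be applied to streams with an irregular sampling rate, where timestamp variation carries real information.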

PathDirectory

Extract the directory part of a file path string.

If the file path uses BIDS-style {} placeholder notation, the root path (the part before any {}) will be returned.
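A minimal sketch of that rule (illustrative only, not the node's actual code): cut off everything from the first { onward, otherwise take the ordinary directory part.

```python
import os

def path_directory(path, trim_placeholders=True):
    """Return the directory part of a path; for BIDS-style paths with
    {placeholders}, return the root path before the first placeholder."""
    if trim_placeholders and "{" in path:
        return path.split("{", 1)[0].rstrip("/\\")
    return os.path.dirname(path)

print(path_directory("/data/study/{subject}/{session}/eeg.xdf"))
# → /data/study
print(path_directory("/data/study/sub-01/eeg.xdf"))
# → /data/study/sub-01
```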

Version 1.1.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • trim_placeholders
    Trim off named placeholders in path. This will trim paths containing placeholders like {subject}/{session}.

    • verbose name: Trim Placeholders
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)

PathExtension

Extract the file extension part of a file path string.

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT

PathFilename

Extract the filename part of a file path string.

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT

PathIterator

Iterate over a list of path names.

This node accepts a list of path names, possibly given as a wildcard expression like /parent/child*, or as a comma-separated list of paths, or as the name of a study manifest file. On each successive update this node will then output one pathname at a time, until the list is exhausted. Using a comma-separated list of wildcard expressions is not supported. Terminate the search string with a / to indicate that the returned results should be paths only; terminate with . (or .ext) to indicate the results should be files only. Instead of a *, a "capture name" can be assigned in curly braces, e.g., {Subject}. The current pathname emitted by this node can then be wired into a subsequent node as the current path/file, and thereby multiple files can be imported, processed, and then exported in succession. Using capture names allows extracting meta-data from the raw file path, which is returned by the path iterator in its curmeta output. Such meta-data can, for example, be attached to segments extracted from the data in later processing, using the Set Instance Details node.
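The capture-name mechanism described above can be sketched in plain Python (an illustrative re-implementation, not NeuroPype's actual code): a pattern like /data/{Subject}/rec.xdf matches the same paths as /data/*/rec.xdf, and the matched fragment comes back as metadata.

```python
import re

def match_captures(pattern, path):
    """Match a path against a pattern with {Name} capture groups and
    return the captured metadata dict, or None if the path doesn't match."""
    regex = ""
    pos = 0
    for m in re.finditer(r"\{(\w+)\}", pattern):
        regex += re.escape(pattern[pos:m.start()])
        regex += f"(?P<{m.group(1)}>[^/]+)"  # capture like a single *
        pos = m.end()
    regex += re.escape(pattern[pos:])
    hit = re.fullmatch(regex, path)
    return hit.groupdict() if hit else None

print(match_captures("/data/{Subject}/{Session}/rec.xdf",
                     "/data/sub-01/ses-02/rec.xdf"))
# → {'Subject': 'sub-01', 'Session': 'ses-02'}
```

The returned dictionary corresponds to what this node emits on its curmeta output for the current path.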

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • curpath
    Current pathname.

    • verbose name: Curpath
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT
  • curmeta
    Current metadata.

    • verbose name: Curmeta
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: OUT
  • paths
    Paths to iterate over. This can be a wildcard ("glob") expression, such as /myfolder/*, or point to a study manifest file (e.g., for an ESS study, or top-level BIDS json or tsv file), or be a comma-separated list of paths, or can be a Neuroinformatics query. Also, instead of a *, a "capture name" in curly braces can be given, e.g., {Session}. This will match the same as a *, but the resulting content will be returned in the secondary output of the path iterator, under curmeta, in the form of a dictionary holding the matched values for all used capture names.

    • verbose name: Paths To Iterate Over
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • filter
    Filter conditions. This is an optional expression of filter conditions that can be used to narrow down the files emitted by this node. The conditions are given in Python syntax, can use any meta-data properties as if they were Python variables, and should evaluate to True. Example: age>42 and gender=='male'. For the list of Python constructs allowed, see NeuroPype documentation of its sandboxed Python expression grammar.

    • verbose name: Filter
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • iter_reorder
    Iterate over items in a re-ordered fashion. 0 is forward, 1 is reverse, 2 is from-the-middle-out, 3 is out from the center left, and so forth.

    • verbose name: Iteration Reordering
    • default value: 0
    • port type: IntPort
    • value type: int (can be None)
  • verbose
    Verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PlayBackREC

Play back the content of a previously recorded rec file.

This node will output the same sequence of packets that was received by RecordToREC when the data was originally recorded. Note that a REC file may contain either one packet (if the output of an offline processing pipeline was stored), or multiple successive packets (if the output of a streaming/online processing pipeline was stored), and consequently it will output either one or multiple packets over the course of successive scheduler ticks. Note that the playback runs at whatever tick rate is globally set for NeuroPype (default 25Hz) and does not recreate the millisecond-exact timing of arrival of the original data (unless the pipeline ran at precisely the same tick rate without hitches during the recording). Implementation notes: the file format is based on Python's pickle system. Since pickle is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., foreign MS Word documents.
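Since the format is pickle-based, a file of successive packets can in principle be read back one object at a time until end-of-file. The sketch below illustrates that general pattern with plain pickled objects; the actual REC layout is internal to NeuroPype, and the usual pickle security caveat applies (only load files you trust):

```python
import io
import pickle

def read_all(fileobj):
    """Yield successive pickled objects from a stream until EOF."""
    while True:
        try:
            yield pickle.load(fileobj)
        except EOFError:
            return

# Round-trip demo with an in-memory stream standing in for a file:
buf = io.BytesIO()
for packet in [{"tick": 1}, {"tick": 2}, {"tick": 3}]:
    pickle.dump(packet, buf, protocol=3)
buf.seek(0)
print(list(read_all(buf)))  # → [{'tick': 1}, {'tick': 2}, {'tick': 3}]
```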

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file from which the packets will be played back.

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)

RecordToCSV

Record data into a csv file.

This node accepts a multi-channel time series and writes it to a csv (comma-separated values) file. The result can be read with numerous analysis packages, including Excel, MATLAB, and SPSS. This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, like frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples). The way the file is laid out is as one row per sample, where the values of a sample, which are the channels, are separated by commas. Optionally this node can write the names of each channel into the first row (as a header, which is supported by some software), and it can also optionally append the time-stamp of each sample as an additional column at the end of each row. This node is designed for writing streaming data to disk, chunk by chunk. For saving an entire file to disk as CSV (when working with "offline" data), use ExportCSV instead.
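The row layout described above (one row per sample, an optional channel-name header row, and an optional trailing timestamp column) can be sketched with Python's csv module; the data and names here are made up for illustration:

```python
import csv
import io

def write_csv(out, channel_names, samples, timestamps=None,
              header=True, timestamp_label="timestamp"):
    """Write samples-by-channels data as one CSV row per sample, with an
    optional header row and an optional trailing timestamp column."""
    w = csv.writer(out)
    if header:
        cols = list(channel_names)
        if timestamps is not None:
            cols.append(timestamp_label)
        w.writerow(cols)
    for i, row in enumerate(samples):
        row = list(row)
        if timestamps is not None:
            row.append(timestamps[i])  # one extra value per row
        w.writerow(row)

buf = io.StringIO()
write_csv(buf, ["Fz", "Cz"], [[1.0, 2.0], [3.0, 4.0]], [0.00, 0.01])
print(buf.getvalue())
```

In streaming use, the same writer would be called once per incoming chunk with the header emitted only for the first one.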

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    File name to record to (csv).

    • verbose name: Filename
    • default value: untitled.csv
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • column_header
    Include channel labels as column headers. If this is set, the first row in the file will have the channel names. Some analysis programs can interpret this as the header row of the table (basically as column labels).

    • verbose name: Include Column Labels
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • time_stamps
    Append a column for timestamps. If set, an extra column of data will be appended (i.e., one extra value at the end of each row, which holds the time-stamp for the respective sample, if any).

    • verbose name: Time Stamps
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • absolute_instance_times
    Write absolute instance times. If disabled, the time axis for each instance will be relative to the time-locking event.

    • verbose name: Absolute Instance Times
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload Parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
    Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • timestamp_label
    Label for the time-stamp column (if included).

    • verbose name: Timestamp Label
    • default value: timestamp
    • port type: StringPort
    • value type: str (can be None)

RecordToREC

Record data packets to a rec file.

This file format is native to NeuroPype, and can store any data that is computable by it. This node supports writing both the result of offline and online processing pipelines to a file. The resulting file can subsequently be played back using the Play back REC node. REC is a niche file format that is mostly meant for testing, debugging, or recreating NeuroPype data streams, but is not supported by any other software -- for general-purpose data interchange, consider using the HDF5 format. Implementation notes: the file format is based on Python's pickle system. Since pickle is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., foreign MS Word documents.
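As noted above, the format is based on Python's pickle system, and the protocol_ver port corresponds to pickle's protocol argument. The sketch below uses a plain dict as a stand-in for a NeuroPype packet (the real Packet type and REC layout are internal to NeuroPype):

```python
import pickle

# Illustrative stand-in for a packet; the real Packet type is
# NeuroPype-internal.
packet = {"stream": "eeg", "samples": [0.1, 0.2, 0.3]}

# Protocol 3 keeps the file readable by all NeuroPype releases, per the
# protocol_ver description; newer protocols may not be.
blob = pickle.dumps(packet, protocol=3)
restored = pickle.loads(blob)
print(restored == packet)  # → True
```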

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    Name of the file where the packets will be saved.

    • verbose name: Filename
    • default value: None
    • port type: StringPort
    • value type: str (can be None)
  • protocol_ver
    Pickle protocol version. This is the internal protocol version to use. Older software may not be able to read files created with the latest version, but version 3 is supported by all NeuroPype releases.

    • verbose name: Protocol Ver
    • default value: 3
    • port type: IntPort
    • value type: int (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Produce verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select which kind of cloud storage service data should be uploaded to. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for a storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
    Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)

RecordToXDF

Record data into an XDF file.

This node accepts a multi-channel time series and streams it to a file, incrementally. The result can be read with MATLAB, Python, C++, or any other framework that can parse XDF. This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, like frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples).
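
The vectorization and segment handling described above can be sketched with NumPy: extra axes (e.g., frequency or feature) are collapsed into a flat channel axis, and subsequent instances are concatenated along time so the output appears contiguous (channels by samples). Both helper names are hypothetical illustrations, not the node's actual implementation.

```python
import numpy as np

def flatten_to_timeseries(data, axis_names):
    """Collapse all non-time axes of an array (e.g., channel x frequency
    x time) into one flat channel axis, yielding a 2D channels-by-samples
    array, as the writer nodes describe."""
    t = axis_names.index("time")
    moved = np.moveaxis(data, t, -1)      # put time last
    return moved.reshape(-1, moved.shape[-1])

def concat_segments(segments):
    # Subsequent instances (segments) are concatenated along time, so
    # the data written to the file appears contiguous and non-segmented.
    return np.concatenate(segments, axis=-1)
```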

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • data
    Data to record.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    File name to record to (xdf).

    • verbose name: Filename
    • default value: untitled.xdf
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path for the output root folder.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • allow_double
    Allow double-precision sample values. If set to False, double-precision values will be written as single-precision data (except time stamps, which are always double precision).

    • verbose name: Allow Double Precision
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_partsize
    Part size for streaming cloud uploads. When streaming data to the cloud, parts of this size (in MB) will be buffered up in memory and then uploaded.

    • verbose name: Cloud Partsize
    • default value: 30
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select which kind of cloud storage service data should be uploaded to. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for a storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
    Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • close_on_marker
    Close when encountering this marker. When this node encounters this marker string, the recording is closed at the next opportunity. Note that this may not complete immediately, especially if there is still outstanding data to be written. For this reason, it is recommended to keep the program running for some time after sending this marker, especially when running in the cloud.

    • verbose name: Close On Marker
    • default value: close-recording
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • session_notes
    Session notes. These notes will be written into the file header.

    • verbose name: Session Notes
    • default value:
    • port type: StringPort
    • value type: str (can be None)

SkipIfExists

Skip this file path if the checked path already exists.

This node can be used in offline processing when iterating over paths, e.g., with a FileIterator node. The node can accept and modify the file path that would be wired into an import node, based on whether a second path (the 'checked path') already exists. In that context, the checked path would often be the output file path that the pipeline ultimately writes to (e.g., what is wired into the final export node).
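
The node's core behavior can be sketched in a few lines: the input path is passed through unchanged unless the checked path already exists, in which case it is replaced by None so that downstream import and processing are skipped. The function name and signature below are hypothetical, chosen to mirror the ports listed under this node.

```python
import os

def skip_if_exists(path, checked_path, enable_check=True):
    """Pass the input path through, unless the checked path (typically
    the pipeline's final output file) already exists; then return None
    so the iteration for this file is skipped."""
    if enable_check and checked_path and os.path.exists(checked_path):
        return None
    return path
```

This makes it cheap to re-run a batch pipeline over a folder of recordings: files whose outputs already exist are skipped rather than reprocessed.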

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • path
    File path.

    • verbose name: Path
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • checked_path
    Path to check.

    • verbose name: Checked Path
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: IN
  • enable_check
    Enable check. If set, the check is in effect. Can be disabled so that files that exist are not skipped.

    • verbose name: Enable Check
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)

WaitForFiles

Wait for file(s) matching certain criteria to exist in the filesystem before continuing operation.

Passes data through if the files are found, None otherwise. Also passes the files-found status (true/false) through a port, which can then be wired into the update port of a node further down the pipeline (triggering its execution, for example). The matching criteria use the same format as PathIterator (see that node's docs).
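
The waiting behavior amounts to a polling loop over a wildcard pattern, checked every check_interval seconds until num_required_files matches exist or the timeout elapses. The sketch below illustrates this for local paths only (cloud storage checks would go through the provider's API); the function name is hypothetical.

```python
import glob
import time

def wait_for_files(pattern, num_required=1, timeout=600, interval=2):
    """Poll a wildcard pattern until at least num_required matching
    files exist or `timeout` seconds elapse. Returns True/False like
    the files_found port; a timeout of 0 checks once without waiting."""
    deadline = time.monotonic() + timeout
    while True:
        if len(glob.glob(pattern)) >= num_required:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```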

Version 1.0.0

Ports/Properties

  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • files_found
    True if all required files have been found, False otherwise. Can be wired into another node to trigger its execution.

    • verbose name: Files Found
    • default value: None
    • port type: DataPort
    • value type: bool (can be None)
    • data direction: OUT
  • path_to_check
    Path to check. Accepts the same format as PathIterator.

    • verbose name: Path To Check
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • num_required_files
    If path_to_check uses wildcards, this is the minimum number of files matching the path that must be present before operation will continue.

    • verbose name: Num Required Files
    • default value: 1
    • port type: IntPort
    • value type: int (can be None)
  • wait_for_files_timeout
    Wait for the input file(s) to appear for up to this many seconds before timing out. A value of 0 means the pipeline will not wait for the file (essentially skipping this node).

    • verbose name: Wait For Files Timeout
    • default value: 600
    • port type: IntPort
    • value type: int (can be None)
  • check_interval
    How often to check for the expected files.

    • verbose name: Check Interval
    • default value: 2
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select which kind of cloud storage service data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: object (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for a storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)