Module: file_system

Filesystem-related nodes.

These nodes perform file-system related operations, such as loading and saving data to/from files on disk, and in some cases from network storage. Typically, the data packets that these nodes produce (in the case of loader/importer nodes) or accept (in the case of saver/exporter nodes) represent whole recordings, i.e., they are not streaming chunks. A small subset of nodes in this category works with streaming data. Some nodes perform auxiliary file system tasks, such as manipulating directories.

ExportCSV

Export data into a CSV file.

This node accepts a multi-channel time series and writes it to a CSV (comma-separated values) file. The result can be read with numerous analysis packages, including Excel, MATLAB(tm), and SPSS(tm). This node is only for saving the output of an offline processing pipeline (it expects the entire dataset in a single packet). For continuously writing data to disk (i.e., from an online/streaming pipeline), use RecordToCSV. This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, like frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples). The file is laid out as one row per sample, where the values of a sample (the channels) are separated by commas. Optionally, this node can write the names of each channel into the first row (as a header, which is supported by some software), and it can also append the time-stamp of each sample as an additional column at the end of each row.
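The layout described above (one row per sample, channels as columns, an optional header row, and an optional trailing time-stamp column) can be sketched with Python's standard csv module. The channel names, values, and timestamps below are made-up illustrations, not output of this node:

```python
import csv
import io

# Hypothetical 3-channel, 2-sample recording at 250 Hz.
channels = ["Fz", "Cz", "Pz"]
samples = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1]]
timestamps = [0.000, 0.004]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(channels + ["timestamp"])   # optional header row
for row, ts in zip(samples, timestamps):    # one row per sample
    writer.writerow(row + [ts])             # time-stamp appended as last column

print(buf.getvalue())
```

This produces a table with the header `Fz,Cz,Pz,timestamp` followed by one comma-separated line per sample, which is what downstream packages interpret as channels-by-samples data.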

Version 1.3.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • done
    Flag to indicate that the node is finished processing if wired into the PipelineDone node.

    • verbose name: Done
    • default value: False
    • port type: DataPort
    • value type: bool (can be None)
    • data direction: OUT
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.csv
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • file_exists
    What to do if a file with the name filename (and located in output_root) already exists. If overwrite, the existing file will be replaced. If skip, this node will perform no export. If error, the pipeline execution will stop. If rename, the original existing file will be renamed with "_n" appended to the filename, where n is an incremental number (e.g., _1), and a new file will be written with the specified filename. Note that this option currently only works on the local filesystem.

    • verbose name: File Exists
    • default value: overwrite
    • port type: EnumPort
    • value type: str (can be None)
  • col_axis
    Axis along which to select for the columns. Use the space axis when you want to select channels, the instance axis to select trials or segments, the feature axis when the data happens to contain features (e.g., after feature extraction), or time and frequency axes.

    • verbose name: Columns Select Axis
    • default value: space
    • port type: ComboPort
    • value type: str (can be None)
  • column_header
    Include specified labels as columns headers. If this is set, the first row in the file will have the specified label names. Some analysis programs can interpret this as the header row of the table (basically as column labels).

    • verbose name: Include Column Header
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • row_axis
    Axis along which to select for the rows. Use the space axis when you want to select channels, the instance axis to select trials or segments, the feature axis when the data happens to contain features (e.g., after feature extraction), or time and frequency axes.

    • verbose name: Rows Select Axis
    • default value: frequency
    • port type: ComboPort
    • value type: str (can be None)
  • row_header
    Include specified labels as row headers. If this is set, the first column in the file will have the label names. Some analysis programs can interpret this as the header column of the table (basically as row labels).

    • verbose name: Include Row Header
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • axis_description
    If both column and row headers are used, this text description for the row-by-column items (e.g., Frequency (Hz) by Channel) will appear in cell (1,1) of the array.

    • verbose name: Axis Description
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportEDF

Export data to a file in EDF, EDF+, or BDF format.

The EDF family of file formats is quite well supported across vendors, and can be used to store raw or processed continuous (non-segmented) time series data. However, EDF is a lossy format in that the data is quantized to a relatively low number of bits per sample, and the types of meta-data per channel, per recording, or per event are quite limited. Some alternatives are .SET (for processed data) and .XDF (for raw data) formats.
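To make the "lossy" aspect concrete: EDF stores each sample as a 16-bit integer mapped linearly between a channel's physical minimum and maximum. The sketch below (with made-up physical limits of ±200 µV) shows the round-trip error this introduces; it illustrates the principle only, not ExportEDF's internal code:

```python
# EDF-style 16-bit quantization: physical value -> integer code -> physical value.
PHYS_MIN, PHYS_MAX, LEVELS = -200.0, 200.0, 2 ** 16  # hypothetical channel limits

def edf_round_trip(value):
    """Quantize one sample to a 16-bit code the way EDF does, then map back."""
    span = PHYS_MAX - PHYS_MIN
    code = round((value - PHYS_MIN) / span * (LEVELS - 1))
    return code / (LEVELS - 1) * span + PHYS_MIN

for v in (-13.37, 0.0, 42.123456):
    err = abs(v - edf_round_trip(v))
    # the quantization step is 400/65535 uV; error is bounded by half a step
    assert err <= 0.5 * (PHYS_MAX - PHYS_MIN) / (LEVELS - 1)
```

With a ±200 µV range the quantization step is about 0.006 µV, which is negligible for typical EEG but can matter for wide-range or high-precision signals, which is one reason .SET or .XDF may be preferable.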

More Info...

Version 1.0.1

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.edf
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • file_type
    Among other differences, EDF+ uses 16 bits/sample and BDF+ uses 24 bits/sample.

    • verbose name: File Type
    • default value: EDF+
    • port type: EnumPort
    • value type: str (can be None)
  • more_markers
    Write more than one marker (annotation) per sec. This will work, but the resulting EDF may not be correctly read by all EDF import libraries. (Recommend using XDF instead where possible.)

    • verbose name: Write >1 Marker/sec
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportH5

Export data to a file in HDF5 format.

Since the HDF5 format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (which would be required when trying to save the output of an online processing pipeline). The HDF5 file format is quite widespread and can be opened by many data analysis packages, including MATLAB(tm). However, with any of those systems you will still have to find your way through the data that you imported, which reflects the way data is organized in NeuroPype. The organization is as a packet with one or more chunks (if there is more than one stream in the data), each of which has meta-data properties, as well as an n-dimensional array that has n axes of various types (e.g., time, space, instance, feature, frequency, and so on). These axis types have type-specific meta-data, such as the sampling rate in the time axis, as well as all the time points for the individual samples in the data.
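The organization described above can be pictured as a nested hierarchy. The sketch below uses plain Python dicts with illustrative key names (not the exact HDF5 group or attribute names) to show what "finding your way through the data" amounts to after import:

```python
# Illustrative structure only: real group/attribute names in the file may differ.
packet = {
    "eeg": {  # one chunk per stream in the data
        "props": {"modality": "EEG"},
        "block": {
            "shape": (2, 3),  # space x time
            "axes": [
                {"type": "space", "channels": ["Cz", "Pz"]},
                {"type": "time", "sampling_rate": 250.0,
                 "times": [0.0, 0.004, 0.008]},  # per-sample time points
            ],
        },
    },
}

# Walking the hierarchy: stream name -> block -> axes and their metadata.
for name, chunk in packet.items():
    axis_types = [ax["type"] for ax in chunk["block"]["axes"]]
    print(name, axis_types)  # -> eeg ['space', 'time']
```

The per-axis metadata (channel labels on the space axis, sampling rate and time points on the time axis) is what lets an importer reconstruct the signal without guessing.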

More Info...

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.h5
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • rename_on_write
    Rename file on write. Useful when writing to disk is used as a means of transferring data between pipelines, since it ensures that the file only shows up once fully written. Only supported on the local file system.

    • verbose name: Rename On Write
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • chunk_data
    Set this to True to auto-chunk data on disk, or set it to a tuple to determine the chunk size; e.g., (100, 100) will store data in scattered chunks of size (100, 100). Can also take a dictionary to set the chunk size per stream.

    • verbose name: Chunk Data
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • compression

    • verbose name: Compression
    • default value: None
    • port type: EnumPort
    • value type: str (can be None)
  • rdcc_nbytes
    Total size of the raw data chunk cache in bytes. The default size is 1024**2 (1 MB) per dataset.

    • verbose name: Rdcc Nbytes
    • default value: 1048576
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportJSON

Export data to a file in JSON format.

Since the JSON format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (this would be required when trying to save the output of an online processing pipeline). JSON is a human-readable format that can also be loaded in most programming languages.
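Because the file is plain JSON, it can be read back with any language's standard JSON support. This minimal Python sketch uses a made-up structure, not the exact layout ExportJSON produces:

```python
import json

# Hypothetical exported content; the real file mirrors NeuroPype's
# packet/chunk organization.
exported = '{"chunks": {"eeg": {"sampling_rate": 250.0, "channels": ["Cz", "Pz"]}}}'

data = json.loads(exported)
print(data["chunks"]["eeg"]["channels"])  # -> ['Cz', 'Pz']
```

The same file could be opened equally easily in MATLAB, R, or JavaScript, which is the main appeal of JSON over binary formats for sharing results.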

More Info...

Version 0.5.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.json
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportMAT

Export data to a file in MAT format.

Since the MAT format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (which would be required when trying to save the output of an online processing pipeline). The MAT file format can be opened natively with MATLAB(tm), Octave, and Python, among others. However, with any of those systems you will still have to find your way through the data that you imported, which reflects the way data is organized in NeuroPype. The organization is as a packet with one or more chunks (if there is more than one stream in the data), each of which has meta-data properties, as well as an n-dimensional array that has n axes of various types (e.g., time, space, instance, feature, frequency, and so on). These axis types have type-specific meta-data, such as the sampling rate in the time axis, as well as all the time points for the individual samples in the data.
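Opening such a file from Python is typically done with SciPy. This round-trip sketch uses made-up variable names rather than ExportMAT's actual layout, and simply shows that a MAT file written from Python opens natively:

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "untitled.mat")
    # stand-in for a file written by ExportMAT (hypothetical variable names)
    savemat(path, {"eeg": np.zeros((2, 3)), "srate": 250.0})

    mat = loadmat(path)  # returns a dict mapping variable names to arrays
    assert mat["eeg"].shape == (2, 3)
    assert float(mat["srate"]) == 250.0
```

Note that loadmat wraps scalars as 2-D arrays, which is why the sampling rate is unwrapped with float() above; MATLAB and Octave users can load the same file with a plain `load` command.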

More Info...

Version 0.5.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.mat
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportSET

Export non-streaming continuous data to an EEGLAB set file.

This node accepts a packet that is assumed to hold the entire recording to save (i.e., non-streaming data), and writes it to a .set file. This node further assumes that the data is a 2d array with a space axis (channels) and a time axis (samples). Note that this node currently cannot save segmented ("epoched") data, which is characterized by the presence of an instance axis in addition to time and space. The packet may also optionally contain a second stream holding markers, which will be written out as event markers into the set file. If this node is used after ICA has been performed, it will back-project the IC activations to the channel time series and store the ICA streams as fields in the EEG structure. Note: some processing nodes will insert axis types that this node cannot handle, such as feature axes (in the case of feature-extraction nodes). If your data has such axes, you can first replace those axes with a space axis (using the Override Axis node), or fold additional axes into the space or time axis (using the Fold Into Axis node). EEGLAB is a quite versatile environment for working with EEG data, and is a commonly-used target for data processed with NeuroPype for additional investigation.

More Info...

Version 1.3.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.set
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • file_version
    Specify the MAT-File version of the SET file to export. Version 7.3 MAT-files use an HDF5-based format.

    • verbose name: File Version
    • default value: >=7.3
    • port type: EnumPort
    • value type: str (can be None)
  • boundary_events
    Add boundary events where there are a significant number of dropped samples.

    • verbose name: Add Boundary Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • max_dropped
    Maximum dropped samples allowed before a boundary condition event is inserted.

    • verbose name: Max Allowed Dropped Samples
    • default value: 2
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which data should be written. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider to which the file will be written. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportStructure

Save a dictionary to disk.

This node is typically used after CreateStructure, which allows you to create a dictionary that contains data of different types (i.e., Packets, numpy arrays, etc.). It is intended primarily for saving data that will be reloaded back into NeuroPype, such as a machine learning model or calibration data. (For example, you can wire the model port of a machine learning node, such as LinearDiscriminantAnalysis, to CreateStructure, and then wire the output of CreateStructure to this node to save the model to disk.) Use the ImportStructure node to load the data back into NeuroPype. If you load the data outside of NeuroPype (i.e., using Python and pickle directly), you will need to load NeuroPype as a library to work with any Packets that are in the data structure. Likewise, if you have numpy arrays stored in the structure, you will need numpy loaded. This node can also save to JSON or Msgpack instead of Pickle, though not all data types are supported (Packets are supported, but pickle is a much faster format for saving and loading complex data structures). Note that pickle is a very flexible format used by Python, but it can also be unsafe (e.g., one should not open untrusted pickle files from the internet, as these can execute malicious code). Therefore, if publicly sharing data, you should use JSON instead, or export to HDF5 using the ExportH5 node.
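A minimal round-trip in plain Python looks like this, using a dict of basic types (a real export containing Packets would additionally require NeuroPype to be importable) and protocol 4, which matches the node's default:

```python
import os
import pickle
import tempfile

# Hypothetical structure such as CreateStructure might assemble.
structure = {"model_weights": [0.1, 0.2, 0.3], "labels": ["left", "right"]}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "untitled.pkl")
    with open(path, "wb") as f:
        pickle.dump(structure, f, protocol=4)  # matches the node's default
    with open(path, "rb") as f:
        loaded = pickle.load(f)

assert loaded == structure  # faithful round-trip for basic Python types
```

As noted above, only unpickle files from sources you trust; pickle.load can execute arbitrary code embedded in a malicious file.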

Version 1.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Data to save.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: IN
  • filename
    File name to export data to. Can be the full path, or a relative path or filename if `output_root` is specified.

    • verbose name: Filename
    • default value: untitled.pkl
    • port type: StringPort
    • value type: str (can be None)
  • output_root
    Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • encoding
    Encoding to use for saving the file. Note that some protocols may not be able to export all data structures.

    • verbose name: Encoding
    • default value: pickle
    • port type: EnumPort
    • value type: str (can be None)
  • protocol_ver
    Pickle protocol version. This is the internal protocol version to use. Older software may not be able to read files created with the latest version, but version 4 is supported by all NeuroPype releases.

    • verbose name: Protocol Ver
    • default value: 4
    • port type: IntPort
    • value type: int (can be None)
  • save_if_empty
    Save empty structs.

    • verbose name: Save If Empty
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • allow_pickle_fallback
    Allow falling back to pickle for some objects that aren't JSON or msgpack-serializable.

    • verbose name: Allow Pickle Fallback
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which the data should be uploaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportText

Export a string to a file in plaintext format.

This can be used to write to various kinds of plaintext files.

Version 0.6.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input string.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: IN
  • filename
File name to export data to. Can be a full path, or a path/filename relative to `output_root` if that is specified.

    • verbose name: Filename
    • default value: untitled.txt
    • port type: StringPort
    • value type: str (can be None)
  • output_root
Path to the output folder; if specified, the filename will be interpreted relative to this folder. Alternatively, this can be left empty and the full path given in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • file_exists
What to do if a file with the name filename (and located in output_root) already exists. If overwrite, the existing file will be replaced. If skip, this node will perform no export. If error, the pipeline execution will stop. If rename, the existing file will be renamed with "_n" appended to the filename, where n is an incremental number (e.g., _1), and a new file will be written under the specified filename. Note that this option currently only works on the local filesystem.

    • verbose name: File Exists
    • default value: overwrite
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which the data should be uploaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ExportYAML

Export data to a file in YAML format.

Since the YAML format is very flexible, this node can store any data that is computable by NeuroPype. Note that this node should be used with offline processing pipelines, as it cannot stream data to disk incrementally (which would be required to save the output of an online processing pipeline). YAML is a human-readable format that can also be loaded in most programming languages.
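
As a sketch of reloading such a file outside NeuroPype, a YAML document can be round-tripped with the third-party PyYAML package (assumed available here; the record contents are hypothetical):

```python
import yaml  # third-party PyYAML package (assumption: installed)

# Round-trip a plain data structure through YAML text, the way a file
# written by this node could be reloaded in another program (sketch only).
record = {"subject": "S01", "scores": [0.91, 0.87], "valid": True}
text = yaml.safe_dump(record)    # human-readable YAML text
restored = yaml.safe_load(text)  # parse it back into Python objects
```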

More Info...

Version 0.5.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: object (can be None)
    • data direction: IN
  • filename
File name to export data to. Can be a full path, or a path/filename relative to `output_root` if that is specified.

    • verbose name: Filename
    • default value: untitled.yml
    • port type: StringPort
    • value type: str (can be None)
  • output_root
Path to the output folder; if specified, the filename will be interpreted relative to this folder. Alternatively, this can be left empty and the full path given in `filename`.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service to which the data should be uploaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to write to (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

GetCurrentWorkingDirectory

Returns the current working directory of the application as a string.
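
The returned value corresponds to what Python itself reports for the process (a sketch, not the node's implementation):

```python
import os

# The current working directory of the running application, as a string.
cwd = os.getcwd()
```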

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Current working directory.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

GetIdentifierFromPath

Extract subject/session identifiers from the source filename.

Searches through the filename, stored in chunk.props['source_url'], using a provided format string, and extracts the {subject} and/or {session} substrings for the file. The resulting {subject} and {session} values are stored in the chunk props.
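
The search described above can be sketched with a regular expression built from the format string (a simplified, hypothetical stand-in for the node's matching logic):

```python
import re

def extract_ids(path, fmt):
    """Hypothetical sketch: turn a format string containing {subject} and/or
    {session} placeholders into a named-group regular expression and search
    the source path with it, returning the extracted identifiers."""
    pattern = re.escape(fmt)
    pattern = pattern.replace(re.escape("{subject}"), r"(?P<subject>[^/]+)")
    pattern = pattern.replace(re.escape("{session}"), r"(?P<session>[^/]+)")
    match = re.search(pattern, path)
    return match.groupdict() if match else {}
```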

Version 0.9.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Data to process.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: INOUT
  • format
The format string used to search the filename. Your search string should contain {subject} and/or {session}, e.g., /parent/directory/subject_{subject}/{session}/. See Python Format String Syntax for more info.

    • verbose name: Format/search String
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportAxon

Import data from an Axon dataset.

Import pCLAMP and AxoScope files (ABF versions 1 and 2), developed by Molecular Devices/Axon Instruments. The file extension is .abf. This node was tested on publicly available sample data files. If this node does not work with your data, please contact us and send a minimal example of a problematic data set.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the segments will be ignored.

    • verbose name: Segments
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, and portions of time_bounds that fall outside the segments will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which the data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
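
The interplay of the segments and time_bounds options described above can be sketched as follows (a hypothetical helper, not the node's implementation; seg_times stands for per-segment start/end times in seconds):

```python
def select_segments(seg_times, segments=None, time_bounds=None):
    """Sketch of how segments and time_bounds interact: keep only the
    requested segment indices, then drop any segment whose (start, end)
    interval lies entirely outside time_bounds."""
    indices = range(len(seg_times)) if segments is None else segments
    kept = []
    for i in indices:
        start, end = seg_times[i]
        if time_bounds is not None:
            lo, hi = time_bounds
            if end < lo or start > hi:
                continue  # segment falls outside time_bounds: ignored
        kept.append(i)
    return kept
```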

ImportBCI2000

Import data from a BCI2000 dataset.

This node supports file formats generated by the BCI2000 software for brain-computer interfacing. Currently only single-file importing is supported.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which the data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportBrainvision

Import data from a BrainVision .VHDR dataset.

Some software that writes this format includes BrainVision Recorder and BrainVision Analyzer by Brain Products GmbH; however, the format is also supported by other vendors due to its relatively easy-to-implement textual layout. Also see the ImportVHDR node.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into 'spiketrains' chunk. Set to False to save loading time if spikes/waveforms are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into 'waveforms' chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which the data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportCGX

Import data from a CGX File.

This is a vendor-specific format for continuous EEG/EXG data, written by, among other devices, the CGX Patch device. Like all import nodes, this node outputs the entire data in a single large packet on the first update. In the case of the Patch device, the data contains two-channel EEG, two-channel EDA, two-channel PPG, temperature, and three-axis accelerometer data. A marker stream is not present.

More Info...

Version 0.8.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • device
    Type of device that generated the recording. This determines the default channel labels.

    • verbose name: Device
    • default value: Patch v1.9+
    • port type: EnumPort
    • value type: str (can be None)
  • convert_eeg_to
Unit in which the EEG data will be returned. NeuroPype's native unit for EEG data is uV; the V mode is mainly for easier comparison with other implementations.

    • verbose name: Convert Eeg To
    • default value: uV
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which the data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportCSV

Import EEG data from a CSV file.

The assumed format is that the first record in the CSV file contains the column header names, including the EEG channel labels. The nominal rate is typically not stored in the CSV file, so it will be extrapolated from the timestamps, under the assumption that there are no gaps. If the calculated nominal rate is incorrect (due to inconsistent intervals between timestamps), use the DejitterTimestamps node to correct it. All columns except the timestamp and marker columns are assumed to contain EEG channel data; if some do not, they can be excluded with the Exclude Columns parameter (alternatively, use the SelectRange node after this one to filter out channels (columns) that do not contain EEG data). Note that timestamps are expected to be in seconds.

The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on prerecorded, non-streaming data (consequently, the stream in the packet has its Flags.is_streaming set to False). However, you can simulate "online" or "streaming" processing by chaining the StreamData node after this one (or any import node), which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time).

Technically, the packet generated by this node is formatted as follows: the first stream is called eeg (this can be changed using the data stream name parameter) and holds the EEG/signal data as a 2d array with a space axis (channels) and a time axis (samples). If the data has markers and a marker column was specified, a second stream named markers is included, which has a single instance axis with a .data['Marker'] array containing the markers and a .times array with the corresponding event timestamps (marker names that are numeric values are converted to strings).

It is also possible to import a CSV file containing markers only, by importing a 2-column CSV with the event name and timestamp, and filling in the timestamp column and marker column parameters respectively. In that case, the packet will have a markers stream only. This could be merged into a signal stream from another source using the MergeStreams node. Alternatively, it is also possible to import a CSV file containing data for use in statistical analyses (e.g., a spreadsheet whose headers are independent and dependent variables). If the dependent variable columns parameter is specified, the packet will have a feature axis containing each dependent variable (e.g., computed metrics), plus an instance axis of all the rows with the other columns set as individual .data fields (e.g., .data['subject_id'], .data['age'], .data['date'], .data['condition'], etc.) that can be used as factors for statistics. Typically each row would either be an individual trial, in the case of a single subject, or session means for each subject when importing group analysis data. A file without a header row can be imported by setting the no_header_row parameter to True.
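
The expected layout, and the way a nominal rate can be extrapolated from the timestamps, can be sketched with the standard csv module (hypothetical data; the real node does considerably more):

```python
import csv
import io

# A minimal CSV in the expected layout: a header row with a timestamp
# column and EEG channel labels, then one row per sample (hypothetical).
text = ("timestamp,Fp1,Fp2\n"
        "0.000,1.0,2.0\n"
        "0.004,1.1,2.1\n"
        "0.008,1.2,2.2\n")
rows = list(csv.DictReader(io.StringIO(text)))
channels = [name for name in rows[0] if name != "timestamp"]
times = [float(r["timestamp"]) for r in rows]
# Nominal rate extrapolated from the timestamps, assuming no gaps:
nominal_rate = (len(times) - 1) / (times[-1] - times[0])
```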

More Info...

Version 1.4.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • eeg_channel_names
    EEG channel labels

    • verbose name: Eeg Channel Names
    • default value: None
    • port type: DataPort
    • value type: list (can be None)
    • data direction: OUT
  • filename
    Name of the recording file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • data_stream_name
    Name to be used for the emitted data stream. If importing event markers only, this property will be ignored and the stream will automatically be called markers.

    • verbose name: Data Stream Name
    • default value: eeg
    • port type: StringPort
    • value type: str (can be None)
  • modality
    Modality of the incoming data. This will be stored in the stream props as props[Origin.modality].

    • verbose name: Modality
    • default value: EEG
    • port type: EnumPort
    • value type: str (can be None)
  • quantity
    The name describing the quantity of the data, i.e., voltage for EEG, intensity for NIRS, etc. This will be stored in the stream props as props[Metadata.quantity].

    • verbose name: Quantity
    • default value: voltage
    • port type: EnumPort
    • value type: str (can be None)
  • unit
    Unit for the incoming data, if known. This will be stored in the stream props as props[Metadata.unit]. If unknown, leave blank and place the FixSignalUnit node after this one to detect and store the unit.

    • verbose name: Unit
    • default value:
    • port type: ComboPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • timestamp_column
    The name or number (counting from 0) of the column which holds the timestamps.

    • verbose name: Timestamp Column
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • timestamp_units
    Unit of the timestamps. Values in the specified unit will be converted to seconds (used internally in NeuroPype).

    • verbose name: Timestamp Units
    • default value: seconds
    • port type: EnumPort
    • value type: str (can be None)
  • marker_column
    The name or number (counting from 0) of the column which holds the event markers.

    • verbose name: Marker Column
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • instance_column_name
    Name of a column which holds the instance payload data, if any.

    • verbose name: Instance Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • exclude_columns
    List of columns which should be excluded. Can be names or numbers (counting from 0). Example: [2,7,8,'Subject ID', 'Gender'].

    • verbose name: Exclude Columns
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • include_columns
    List of columns which should be included. Can be names or numbers (counting from 0). If not given, all columns will be included.

    • verbose name: Include Columns
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • no_header_row
    If True, the first row of the CSV file will be treated as data and not as a header row.

    • verbose name: No Header Row
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • ignore_empty_rows
    Ignore empty rows on import.

    • verbose name: Ignore Empty Rows
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • delimiter
    A one-character string used to separate fields. It defaults to ','.

    • verbose name: Delimiter
    • default value: ,
    • port type: StringPort
    • value type: str (can be None)
  • dependent_variable_columns
    Special case to import statistics-related data instead of signal data. List of column names or numbers (counting from 0) that are dependent variables (DVs); these will become separate elements along a feature axis, with their data values stored in the block array. This format is recommended for spreadsheet-like data used in statistical analyses, where the columns are a mix of independent variables (IVs) (e.g., subject ID, age, condition) and DVs (e.g., alpha power, heart rate, workload metric). Each row becomes an element on an instance axis, with the remaining columns (the IVs) each set as fields (attributes) within the instance axis. This is similar to a dataframe, with the ability to use instance axis fields (e.g., condition) as test factors in statistics nodes (e.g., T-Test, Z-Test). By default (an empty list), all columns are treated as DVs, becoming separate elements along a feature axis with their data values stored as string types in the data array. However, if this list is non-empty but none of the listed names or numbers are found in the CSV headers, all columns are treated as IVs on an instance axis and no feature axis will exist.

    • verbose name: Dependent Variable Columns
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • timestamp_column_name
    Name of a column which holds the timestamps. (Deprecated. Use timestamp_column instead.)

    • verbose name: Timestamp Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • marker_column_name
    Name of a column which holds the event marker names.

    • verbose name: Marker Column Name
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • emit_each_tick
    Emit data on every tick.

    • verbose name: Emit Each Tick
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportEDF

Import data from an edf+, bdf, or gdf file.

This node can import EEG, MEG, EOG, ECG, EMG, ECOG, and fNIRS data stored in the source file. The node imports the respective data, which is assumed to be continuous (non-epoched), optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first streams are named after their respective data modality, i.e., eeg, meg, eog, ecg, emg, ecog, or fnirs, depending on which are available. The packet contains a single chunk for each stream, holding a 2d array with a space axis (channels) and a time axis (samples). If the data has markers, a last stream named 'markers' is also included in the packet, which has a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
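
The marker handling described above (numeric marker data initialized to NaN and later overridden from the event strings) can be illustrated schematically; the mapping below is hypothetical, and in NeuroPype this step is performed by the Assign Targets node:

```python
import math

# Event/marker strings as they might come out of an EDF+ annotations track.
marker_names = ['left', 'right', 'left', 'boundary']

# On import, the numeric data of the markers stream is all NaN:
marker_data = [math.nan] * len(marker_names)

# Assign Targets (sketched): map marker strings to numeric target values;
# unmapped markers stay NaN and are skipped during segmentation.
targets = {'left': 0, 'right': 1}
marker_data = [targets.get(name, math.nan) for name in marker_names]
```
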

More Info...

Version 1.1.2

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • stim_channel
    The name of the stim channel in the file, in cases where it cannot be automatically deduced. If the file has more than one stim channel, specify as a comma-separated list. Enclose any channel names with spaces in single quotes.

    • verbose name: Stim Channel
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • exclude_channels
    List of channel names to exclude when importing. Typically we recommend using the Select Range node after import to filter channels; however, EDF files may contain channels with different sampling rates, and channels with a lower sampling rate are upsampled to the highest sampling rate among the imported channels. Excluding unwanted channels at higher sampling rates during import avoids this.

    • verbose name: Exclude Channels
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportFile

Import data from a supported file format.

This node acts as a meta-importer that can import from any file format supported by NeuroPype. The node determines the format based on the file extension. Note that some limitations apply; in particular, file formats that come as a folder of files (Tucker-Davis) currently cannot be imported directly from a cloud storage location and need to be downloaded before they can be processed. Some file formats have additional options that can be set on import, so we recommend looking at the documentation for the specific file format you are importing; if you need to change any of those defaults, you may want to use the specific Import node for that file format. This node can also take a set of files of the same format and concatenate them into a single packet (see the documentation on the filename parameter), provided all files have the same characteristics (e.g., same channels, sampling rate, etc.).
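
The dispatch-by-extension and wildcard behavior can be sketched as follows (the importer names are placeholders for the format-specific import nodes; this is not the node's actual lookup table):

```python
import glob
import os

# Hypothetical mapping from file extension to a format-specific importer.
IMPORTERS = {'.csv': 'ImportCSV', '.edf': 'ImportEDF', '.h5': 'ImportH5'}

def pick_importer(filename):
    ext = os.path.splitext(filename)[1].lower()
    if ext not in IMPORTERS:
        raise ValueError(f"unsupported file format: {ext}")
    return IMPORTERS[ext]

def matching_files(pattern):
    # Wildcard patterns expand to all matching files, which are then
    # concatenated in alphabetical order into a single packet.
    return sorted(glob.glob(pattern))
```
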

Version 1.2.5

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/). The filename can also contain wildcard characters, in which case all matching files will be imported and concatenated in alphabetical order into a single packet. (To import multiple files as separate datasets, use the PathIterator node before this node.)

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc.), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds will be ignored, as will portions of time_bounds that fall outside the segments. Not currently supported for all file formats. Deprecated; use SelectRange after import, or the Import node for the specific file format if segments is supported.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data. Segments that fall outside time_bounds will be ignored, as will portions of time_bounds that fall outside the segments. Not currently supported for all file formats. Deprecated; use SelectRange after import to trim data, or the Import node for the specific file format if time_bounds is supported.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed. Deprecated; use the Import node for the specific file format if load_signals is supported.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int). Deprecated; use the Import node for the specific file format if signal_autoscale is supported.

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream. Deprecated, as event markers are loaded into a markers stream by default. If you don't want the markers stream, use the ExtractStreams node to drop it after import, or use the Import node for the specific file format if load_events is supported.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • retain_streams
    List of streams to retain. The order doesn't actually matter (it's always data streams first, marker streams second). Only supported for multi-modal file formats (XDF). Deprecated; use ImportXDF for XDF files or the ExtractStreams node after import.

    • verbose name: Retain Streams
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • use_streamnames
    Use the stream names in the file to name streams. If enabled, the streams loaded will be named as in the file. Otherwise, the streams use canonical names based on the content type, such as eeg or markers. Currently only supported for multi-modal file formats (XDF). If retain_streams is specified, then this will always be True.

    • verbose name: Use Streamnames
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • stim_channel
    The name of the stim channel in the file, in cases where it cannot be automatically deduced. If the file has more than one stim channel, specify as a comma-separated list. Enclose any channel names with spaces in single quotes. This property is specific to the following file formats: EDF, VHDR. Deprecated; use ImportEDF or ImportVHDR instead.

    • verbose name: Stim Channel
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • allow_insecure_filetypes
    Whether to allow passing in file types that may pose security risks. This applies to, e.g., pickle files.

    • verbose name: Allow Insecure Filetypes
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • use_caching
    Enable caching. Will use cached imported data, if available, instead of reimporting.

    • verbose name: Use Caching
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • exclude_channels
    List of channel names to exclude when importing EDF files (this is ignored for any other format). Typically we recommend using the Select Range node after import to filter channels; however, EDF files may contain channels with different sampling rates, and channels with a lower sampling rate are upsampled to the highest sampling rate among the imported channels. Excluding unwanted channels at higher sampling rates during import avoids this. Deprecated; use ImportEDF instead.

    • verbose name: Exclude Channels
    • default value: []
    • port type: ListPort
    • value type: list (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportH5

Import data from an h5 file.

This node imports data in HDF5 format, assuming the NeuroPype Baryon file format, which is supported by, among others, the ExportH5 node. This node can import any data that can be represented by the NeuroPype Packet data structure, including multiple streams with any number of (possibly annotated) axes as well as arbitrary meta-data. Note that many other file formats besides NeuroPype's H5 format also happen to use HDF5; this node will not recognize those.
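
Since many unrelated file formats also use HDF5 as a container, it can be useful to at least confirm that a file is HDF5 before attempting an import; the sketch below checks only the generic HDF5 signature, not NeuroPype's Baryon layout:

```python
# HDF5 files begin with this fixed 8-byte signature (unless a user block
# is present, in which case the signature sits at a later offset).
HDF5_MAGIC = b'\x89HDF\r\n\x1a\n'

def looks_like_hdf5(first_bytes):
    """True if a file's leading bytes carry the HDF5 signature.
    Call as: looks_like_hdf5(open(path, 'rb').read(8))"""
    return first_bytes.startswith(HDF5_MAGIC)
```
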

More Info...

Version 1.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • rdcc_nbytes
    Total size of the raw data chunk cache in bytes. The default size is 1024**2 (1 MB) per dataset.

    • verbose name: Rdcc Nbytes
    • default value: 1048576
    • port type: IntPort
    • value type: int (can be None)
  • rdcc_nslots
    Number of chunk slots in the raw data chunk cache for this file.

    • verbose name: Rdcc Nslots
    • default value: 521
    • port type: IntPort
    • value type: int (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportNSX

Import data from a Blackrock dataset.

This node loads time-series data of multiple sampling rates, digital events, and neural event data from a Blackrock dataset. The file extensions supported by this node are .NEV and .NS1 through .NS6, which hold data at different sampling rates. Note: this routine handles files according to specifications 2.1, 2.2, and 2.3.
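
The base-filename convention (any recognized Blackrock extension is ignored, see the filename parameter) can be sketched; the sibling list below illustrates how one recording fans out over several files:

```python
import re

# Recognized Blackrock extensions, stripped when parsing the filename.
_BLACKROCK_EXT = re.compile(r'\.(ns[1-6]|nev|sif|ccf)$', re.IGNORECASE)

def base_name(filename):
    return _BLACKROCK_EXT.sub('', filename)

def candidate_files(filename):
    # Files that may belong to the same recording as the given file.
    base = base_name(filename)
    return [base + ext for ext in
            ('.nev', '.ns1', '.ns2', '.ns3', '.ns4', '.ns5', '.ns6')]
```
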

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Base filename of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/). Any .nsX, .nev, .sif, or .ccf extension is ignored when parsing this parameter.

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into a waveforms chunk. Set to False to save loading time, e.g., if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportNeuralynx

Import data from a Neuralynx dataset.

This node loads time-series data of multiple sampling rates, digital events, and neural event data from a Neuralynx dataset. The file extensions supported are .NCS, .NEV, .NSE and .NTT.
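
A Neuralynx recording is typically a directory of per-channel and per-unit files; grouping a directory listing by extension gives a rough picture of which files feed which output stream (the stream names follow the port descriptions below):

```python
import os
from collections import defaultdict

# Which Neuralynx file extension feeds which output stream:
# .ncs = continuous signals, .nev = events, .nse/.ntt = spike data.
STREAM_FOR_EXT = {'.ncs': 'analogsignals', '.nev': 'markers',
                  '.nse': 'spiketrains', '.ntt': 'spiketrains'}

def group_by_stream(filenames):
    groups = defaultdict(list)
    for name in filenames:
        ext = os.path.splitext(name)[1].lower()
        if ext in STREAM_FOR_EXT:  # unrelated files are skipped
            groups[STREAM_FOR_EXT[ext]].append(name)
    return dict(groups)
```
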

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • dirname
    Name of the directory containing the Neuralynx data files (.ncs). If a relative path is given, a directory of this name will be looked up in the standard data directories (these include /resources and Examples/).

    • verbose name: Dirname
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into waveforms chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
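
The interaction between the segments and time_bounds ports above amounts to an intersection: a sample is imported only if it falls inside one of the requested segments and within the time range. A minimal sketch of this rule, assuming segments can be described by (start, end) time spans (the helper function and spans are illustrative, not part of the Neuropype API):

```python
# Illustrative selection rule: a sample at time t is kept only if it
# lies inside one of the requested segments AND within time_bounds.
def keep_sample(t, segment_spans, time_bounds=None):
    in_segment = any(lo <= t <= hi for (lo, hi) in segment_spans)
    in_bounds = time_bounds is None or time_bounds[0] <= t <= time_bounds[1]
    return in_segment and in_bounds

spans = [(0.0, 5.0), (10.0, 15.0)]      # two hypothetical recording segments
keep_sample(3.0, spans, (0.0, 12.0))    # True: in segment 0 and in bounds
keep_sample(14.0, spans, (0.0, 12.0))   # False: in segment 1 but past bounds
```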

ImportNeuroExplorer

Import data from a NeuroExplorer dataset.

This node loads analog signals, digital events, and neural event data from a NeuroExplorer dataset. The canonical file extension for this format is .NEX.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into waveforms chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportNirs

Import data from a .nirs (fNIRS) file.

This node imports data stored in the .nirs format, a format for fNIRS data used by the Homer2 fNIRS software package and adopted by various fNIRS hardware manufacturers and developers. .nirs files are MATLAB files with a specific structure as defined in the Homer2 software manual (see link). Presently this node only converts .nirs files saved in MATLAB version 7 format (saved with the -v7 flag). Channel names are constructed from the source and detector indices and the wavelength, in the format S<source_number>-D<detector_number>-<wavelength> (e.g., S01-D01-760). Source positions are stored in the positions_source field of the Space axis, and detector positions in the positions field. Wavelengths are stored in the wavelengths field. The signal stream will be named nirs (and the modality NIRS will be stored in the stream's Origin.modality property). .nirs files often store only the onset of a stim event, with the duration recorded externally. When importing such a file into Neuropype, a start marker is inserted at the stim onset and a matching end marker is inserted at the end (computed as the stim onset plus the duration of the stim). Certain device manufacturers store additional data in files other than the .nirs file, or add certain metadata fields to the .nirs file. This node supports some of these, such as accelerometer data for NirX devices saved in the aux field. Other channels besides the above that have the same number of timestamps will be grouped together in separate streams (e.g., physio).
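
The channel naming scheme above can be sketched in a few lines of Python (the helper name is illustrative, not part of the Neuropype API):

```python
# Build a .nirs-style channel name from source/detector indices and
# wavelength, e.g. S01-D01-760 (indices zero-padded to two digits).
def nirs_channel_name(source, detector, wavelength):
    return f"S{source:02d}-D{detector:02d}-{wavelength:g}"

nirs_channel_name(1, 1, 760)   # 'S01-D01-760'
nirs_channel_name(2, 13, 850)  # 'S02-D13-850'
```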

More Info...

Version 1.2.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (.nirs). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • device
    Import data in .nirs format created by a specific device. Some devices have certain particularities or files which provide some additional data that is not included in the .nirs file. If importing data from a device listed here, the importer will try to account for that where possible. Select 'generic' for the standard .nirs format without any extra files or device-specific configuration.

    • verbose name: Device
    • default value: generic
    • port type: ComboPort
    • value type: str (can be None)
  • stim_conditions
    The stim conditions for the dataset to be converted into markers. These should take the form of a dictionary in the following format: {'condition_index': ('condition_name', duration), ...}, where condition_index is a string representing the column number of the condition in the stim matrix (starting with '1', not '0'), 'condition_name' is a string descriptor of the stim, and 'duration' is a decimal number giving the duration in seconds. E.g.: { '1': ('left', 10.0), '2': ('right', 10.0), ... }. Matching 'start' and 'end' markers will be inserted into the data in the format condition_name-duration-start and condition_name-duration-end, e.g., 'left-10.0-start', 'left-10.0-end'. If no conditions are provided, then only a stim onset marker will be added matching the condition index (e.g., '1').

    • verbose name: Stim Conditions
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • orientation
    The orientation of the coordinate system used for the source and detector positions. 'right-handed' defines x as right, while 'left-handed' defines either y or z as right, and 'z-up' and 'y-up' define the vertical axis.

    • verbose name: Orientation
    • default value: right-handed z-up
    • port type: EnumPort
    • value type: str (can be None)
  • position_data_field
    If the 3D positions data (unit, source positions, detector positions) are in a different field than 'SD', specify the field name here. For example, 'SD' might store the 2D positions, while the 3D positions are stored in another field.

    • verbose name: Position Data Field
    • default value: SD
    • port type: StringPort
    • value type: str (can be None)
  • accelerometer_data
    If the device stored accelerometer data, specify the name of the top-level variable where this is stored (e.g., 'aux'). Compatible with NirX devices (compatibility with other models may vary).

    • verbose name: Accelerometer Data
    • default value: None
    • port type: StringPort
    • value type: str (can be None)
  • one_based_naming
    Use 1-based naming for sources and detectors (instead of 0-based).

    • verbose name: One Based Naming
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • skip_coregistration
    Skip co-registration of the source and detector positions to Neuropype's internal coordinate system. Set to True if you want to do the coregistration later using the CoregisterExistingLocations node (e.g., if co-registration depends on landmarks which are defined separately).

    • verbose name: Skip Coregistration
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • sort_by
    Sorting order for channels. If set to wavelength-first, the node outputs first all channels for the lowest wavelength, then all channels for the next highest wavelength etc. If set to link-first, then the node emits first all wavelengths for a given source-detector pair, then all wavelengths for the next source-detector pair, etc.

    • verbose name: Sort By
    • default value: wavelength-first
    • port type: EnumPort
    • value type: str (can be None)
  • verbose
    Print info about the imported file.

    • verbose name: Verbose
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
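
As a worked example of the stim_conditions format described above, the following sketch shows how a conditions dictionary maps to the start/end marker labels that get inserted into the data (the helper function is illustrative, not a Neuropype API):

```python
# Each entry maps a 1-based column index in the stim matrix to a
# (condition_name, duration_in_seconds) pair.
stim_conditions = {
    '1': ('left', 10.0),
    '2': ('right', 10.0),
}

def marker_labels(name, duration):
    # Labels follow the pattern <condition_name>-<duration>-start / -end
    return f"{name}-{duration}-start", f"{name}-{duration}-end"

marker_labels(*stim_conditions['1'])  # ('left-10.0-start', 'left-10.0-end')
```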

ImportPLX

Import data from a Plexon dataset.

This node loads analog signals, digital events, and neural event data from a Plexon dataset. The canonical file extension for this format is .PLX. Note that modern Plexon systems may store data in the newer PL2 format, which is NOT supported by this node.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into waveforms chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportSET

Import data from an EEGLAB set file.

This format is a commonly-used interchange file format for EEG, for which converters from most other EEG file formats exist. This node will import the respective data, which is assumed to be continuous (non-epoched) EEG/EXG, optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first stream is called 'eeg' and holds the EEG/EXG data, and the packet contains a single chunk for this stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a second stream named 'markers' is also included in the packet, which has a vector of numeric data (one value per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the .set file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.

More Info...

Version 1.2.3

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportSnirf

Import data from a .snirf (fNIRS) file.

This node imports data from a .snirf (fNIRS) file, including any events and auxiliary data. Channel names are constructed from the source and detector indices and the wavelength, in the format S<source_number>-D<detector_number>-<wavelength> (e.g., S01-D01-760.0). Source positions are stored in the positions_source field of the Space axis, and detector positions in the positions field. Wavelengths are stored in the wavelengths field. The signal stream will be named nirs (and the modality NIRS will be stored in the stream's Origin.modality property). For events, a start marker will be created with the timestamp of each stim event found, and a matching end marker will be created k seconds later (where k is the specified duration). The resulting markers will be in the format <event_label>-<event_duration>-start with a matching <event_label>-<event_duration>-end (e.g., 1-10.0-start). Optionally, the event durations can be omitted from the resulting markers using the 'durations in labels' parameter. If the file contains auxiliary data (e.g., accelerometer and gyroscope data saved by NirX devices), these will be imported into their own streams (named accel and gyro, respectively). Other channels besides the above that have the same number of timestamps will be grouped together in separate streams (e.g., physio).
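
The start/end marker expansion described above can be sketched as follows (the function and its names are illustrative, not the SNIRF schema or the Neuropype API):

```python
# Expand one stim event into a start marker at its onset and an end
# marker `duration` seconds later; durations_in_labels toggles whether
# the duration appears in the label.
def expand_event(label, onset, duration, durations_in_labels=True):
    stem = f"{label}-{duration}-" if durations_in_labels else f"{label}-"
    return [(onset, stem + "start"), (onset + duration, stem + "end")]

expand_event('1', 2.5, 10.0)
# [(2.5, '1-10.0-start'), (12.5, '1-10.0-end')]
expand_event('1', 2.5, 10.0, durations_in_labels=False)
# [(2.5, '1-start'), (12.5, '1-end')]
```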

More Info...

Version 1.3.2

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the file to import (.snirf). If a relative path is given, a file of this name will be looked up in the standard data directories (these include cpe/resources and examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • device
    Import data in .snirf format created by a specific device. Some devices have certain particularities or files which provide some additional data that is not included in the .snirf file. If importing data from a device listed here, the importer will try to account for that where possible. Select 'generic' for the standard .snirf format without any extra files or device-specific configuration.

    • verbose name: Device
    • default value: generic
    • port type: ComboPort
    • value type: str (can be None)
  • orientation
    The orientation of the coordinate system used for the source and detector positions. 'right-handed' defines x as right, while 'left-handed' defines either y or z as right, and 'z-up' and 'y-up' define the vertical axis.

    • verbose name: Orientation
    • default value: right-handed z-up
    • port type: EnumPort
    • value type: str (can be None)
  • aux_field
    If the device stored accelerometer data or any other auxiliary data in the .snirf file, specify the prefix of the top-level field(s) where this is stored. This is "aux" by convention. For example, NirX headsets store x/y/z accelerometer data in the fields "aux1", "aux2", "aux3".

    • verbose name: Aux Field
    • default value: aux
    • port type: StringPort
    • value type: str (can be None)
  • sort_by
    Sorting order for channels. If set to wavelength-first, the node outputs first all channels for the lowest wavelength, then all channels for the next-higher wavelength, etc. If set to link-first, the node emits first all wavelengths for a given source-detector pair, then all wavelengths for the next source-detector pair, etc.

    • verbose name: Sort By
    • default value: wavelength-first
    • port type: EnumPort
    • value type: str (can be None)
  • durations_in_labels
    Whether to include the retrieved event durations in the output event marker label names.

    • verbose name: Durations In Labels
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • verbose
    Print info about the imported file.

    • verbose name: Verbose
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
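To illustrate the two channel orderings of the sort_by property above, here is a minimal sketch in plain Python (the 'S#-D# <wavelength>nm' channel-name format is an assumption for illustration, not the node's actual naming):

```python
def order_channels(pairs, wavelengths, sort_by="wavelength-first"):
    """Sketch of the two sort orders described above.
    `pairs` are (source, detector) index tuples."""
    if sort_by == "wavelength-first":
        # all channels for the lowest wavelength, then the next wavelength, etc.
        combos = [(p, w) for w in sorted(wavelengths) for p in pairs]
    else:  # link-first
        # all wavelengths for one source-detector pair, then the next pair, etc.
        combos = [(p, w) for p in pairs for w in sorted(wavelengths)]
    return [f"S{s}-D{d} {w}nm" for (s, d), w in combos]

order_channels([(1, 1), (1, 2)], [760, 850])
# ['S1-D1 760nm', 'S1-D2 760nm', 'S1-D1 850nm', 'S1-D2 850nm']
```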

ImportSpike2

Import data from a Spike2 dataset.

This node loads analog signals, digital events, and neural event data from a CED Spike2 dataset. The canonical file extension for this format is .SMR.

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into waveforms chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
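The interaction between the segments and time_bounds properties above can be sketched as follows (a hypothetical illustration of the stated rules, not the node's actual implementation):

```python
def select_data(segments_avail, segments=None, time_bounds=None):
    """Sketch of the filtering rules: segments not in `segments` are
    dropped, segments entirely outside `time_bounds` are ignored, and
    the rest are clipped to `time_bounds`. `segments_avail` maps a
    0-based segment index to its (start, stop) time in seconds."""
    keep = {}
    for idx, (start, stop) in segments_avail.items():
        if segments is not None and idx not in segments:
            continue  # not in the requested segment list
        if time_bounds is not None:
            lo, hi = time_bounds
            if stop < lo or start > hi:
                continue  # segment falls entirely outside time_bounds
            start, stop = max(start, lo), min(stop, hi)
        keep[idx] = (start, stop)
    return keep

select_data({0: (0, 10), 1: (10, 20), 2: (20, 30)},
            segments=[0, 2], time_bounds=[5, 25])
# {0: (5, 10), 2: (20, 25)}
```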

ImportStructure

Import a dictionary from disk.

Typically used to import a pickle file created with the ExportStructure node, but can also be used to load any JSON (or Msgpack) file. The data will be converted to a dictionary, which can then be passed into another node (for example, to populate the property of another node that accepts a dictionary). Note: Since the Python pickle format is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., MS Word documents. See also the documentation for the ExportStructure node.
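As a rough local-file sketch of what this node does (cloud handling and Msgpack support omitted; this is not the node's actual implementation):

```python
import json
import pickle

def import_structure(filename, encoding="pickle"):
    """Load a structure saved as pickle or JSON into a dict-like object.
    Only unpickle files from sources you trust (see the caution above)."""
    if encoding == "json":
        with open(filename, "r", encoding="utf-8") as f:
            return json.load(f)
    with open(filename, "rb") as f:
        return pickle.load(f)
```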

Version 1.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output data.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • input_root
    Path to the input folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in filename.

    • verbose name: Input Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • encoding
    Encoding in which the file was saved. If the file was created with ExportStructure, the format is pickle by default.

    • verbose name: Encoding
    • default value: pickle
    • port type: EnumPort
    • value type: str (can be None)
  • allow_pickle_fallback
    Allow falling back to loading pickled data for some objects that aren't JSON- or msgpack-serializable.

    • verbose name: Allow Pickle Fallback
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportTDT

Import data from a TDT dataset.

This node loads analog signals, digital events, and neural event data from a Tucker-Davis TTank dataset. Note that the path name passed to this node is the name of the directory containing the files:

  • TSQ: timestamp index of the data
  • TBK: channel info
  • TEV: data (spike + event + signal) for the old format version
  • SEV: signals, for the new format version

More Info...

Version 0.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • dirname
    Name of the directory containing the TTank data set. Select the TTank data folder that contains the Block- folders. If a relative path is given, a directory of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Dirname
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_spiketrains
    Load spiketrains into spiketrains stream. Set to False to save loading time if spikes/spiketrains are not needed.

    • verbose name: Load Spiketrains
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_waveforms
    Load waveforms of spiking events into waveforms chunk. Set to False to save loading time e.g. if spikes and/or waveforms will be re-extracted later.

    • verbose name: Load Waveforms
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportText

Import a plaintext file into a string.

Like all import nodes, this node can also read transparently from a cloud location, if the respective parameters are set.
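For local files, the described behavior (including the strip_trailing_newlines option below) boils down to a few lines of standard Python; this is an illustrative sketch, not the node's actual implementation:

```python
def import_text(filename, strip_trailing_newlines=False):
    """Read a plaintext file into a single string; optionally strip
    newline characters from the end of the text."""
    with open(filename, "r", encoding="utf-8") as f:
        text = f.read()
    if strip_trailing_newlines:
        text = text.rstrip("\r\n")
    return text
```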

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output string.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT
  • filename
    Name of the text file to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • strip_trailing_newlines
    Strip trailing newline characters from the end of the imported text.

    • verbose name: Strip Trailing Newlines
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportVHDR

Import data from a BrainProducts .vhdr file.

This is an easy-to-parse text/binary-based format that is also supported by some vendors other than Brain Products. This node will import the respective data, which is assumed to be continuous (non-epoched) EEG/EXG, optionally with event markers. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). Technically, the packet generated by this node is formatted as follows: the first stream is called 'eeg' and holds the EEG/EXG data, and the packet contains a single chunk for this stream with a 2d array with a space axis (channels) and a time axis (samples). If the data had markers, a second stream named 'markers' is also included in the packet, which has a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the .vmrk marker file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
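The .vhdr header itself is INI-style text, so a minimal sketch of reading it with Python's configparser could look as follows (the banner line and field names follow the BrainVision convention; this is not the node's actual parser):

```python
from configparser import ConfigParser

def parse_vhdr(text):
    """Parse the INI-style .vhdr header. The leading banner line is not
    INI syntax, so drop everything before the first section header."""
    body = text[text.index("["):]
    cfg = ConfigParser()
    cfg.read_string(body)
    return cfg

hdr = parse_vhdr("""Brain Vision Data Exchange Header File Version 1.0
[Common Infos]
DataFile=rec.eeg
MarkerFile=rec.vmrk
NumberOfChannels=2
SamplingInterval=2000
""")
# SamplingInterval is in microseconds, so the rate in Hz is:
rate = 1e6 / float(hdr["Common Infos"]["SamplingInterval"])  # 500.0
```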

Version 1.1.1

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • segments
    List of segment indices to import (0-based). For data sets with multiple segments (blocks of trials, recording discontinuities, etc), list the segments to be imported. Set to None (default) to import all segments. Segments that fall outside time_bounds, if specified, will be ignored.

    • verbose name: Segments To Import
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • time_bounds
    Time limits for data import. Set to None (default) to import all available data (depending on segments). Segments specified in segments that fall outside time_bounds will be ignored. Samples with times within time_bounds but which fall outside segments, if specified, will be ignored.

    • verbose name: Import Time Range
    • default value: None
    • port type: ListPort
    • value type: list (can be None)
  • load_signals
    Load analog signals into an analogsignals stream. Set to False to save loading time if analog signals are not needed.

    • verbose name: Load Signals
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • signal_autoscale
    Auto-scale analog signal to Voltage (float). Set to False to keep data in original format (int).

    • verbose name: Signal Autoscale
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • load_events
    Load events into a markers stream.

    • verbose name: Load Events
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • stim_channel
    The name of the stim (events) channel in the file, in cases where it cannot be automatically deduced. If the file has more than one stim channel, specify as a comma-separated list. Enclose any channel names with spaces in single quotes. This property is only needed in special circumstances (when you receive an error message on import, informing you to use it).

    • verbose name: Stim Channel
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

ImportXDF

Import data from an xdf file.

The XDF file format (see "More Info..." link below) can store one or more streams of multi-channel time series data, such as EEG, EXG, eye tracking, motion capture, audio, and video, as well as marker data, and can be recorded to using, e.g., the Lab Streaming Layer. This node can import any subset of streams from an XDF file, and supports the minimum necessary per-stream meta-data. Note that XDF files often contain more streams than one wants to analyze; in such cases, the retain_streams option can be used to restrict the imported subset, to prevent NeuroPype from processing the wrong streams. The node outputs the entire data in a single large packet on the first update, so any processing applied to the result will be "offline" or "batch" style processing on data that isn't streaming (consequently, the output packet is flagged as non-streaming). However, if you intend to simulate online processing, it is possible to chain a Stream Data node after this one, which will take the imported recording and play it back in a streaming fashion (that is, in small chunks, one at a time). The contents of an XDF file are time-synced, and this node supports some options for processing the respective time stamps to ensure good data alignment in the presence of clock drift and jitter. Technically, the packet generated by this node is formatted as follows: the imported streams are named based on their content type (e.g., 'eeg', 'audio'), and when multiple streams of the same type are present, the names for that type are instead numbered as in 'eeg-1', 'eeg-2', and so on. The packet contains a single chunk for each stream with a 2d array with a space axis (channels) and a time axis (samples).
If the data had markers (type 'markers'), the imported chunk for each such stream is formatted as a vector of numeric data (one per marker, initially all NaN) and a single instance axis that indexes the marker instances, whose entries are each associated with a timestamp and a string payload that is the respective event/marker type from the .xdf file. The numeric data can be overridden based on the event type string using the Assign Targets node, which is required for segmentation and supervised machine learning.
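The stream-naming rule described above (canonical names by content type, numbered when a type repeats) can be sketched as:

```python
from collections import Counter

def canonical_stream_names(types):
    """Name streams by content type; append '-1', '-2', ... when
    multiple streams share the same type."""
    totals = Counter(types)
    seen = Counter()
    names = []
    for t in types:
        if totals[t] == 1:
            names.append(t)
        else:
            seen[t] += 1
            names.append(f"{t}-{seen[t]}")
    return names

canonical_stream_names(["eeg", "eeg", "markers"])
# ['eeg-1', 'eeg-2', 'markers']
```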

More Info...

Version 1.4.3

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Name of the recording dataset to import. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retain_streams
    List of streams to retain. The order doesn't actually matter (it's always data streams first, marker streams second).

    • verbose name: Retain Streams
    • default value: None
    • port type: Port
    • value type: object (can be None)
  • handle_clock_sync
    Enable clock synchronization. Needed if data were recorded across multiple computers.

    • verbose name: Handle Clock Sync
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • handle_jitter_removal
    Enable jitter removal for regularly sampled streams. This removes jitter under the assumption that the sampling rate of the data was constant (unless the sampling rate of a stream is explicitly marked as irregular).

    • verbose name: Handle Jitter Removal
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • handle_clock_resets
    Handle clock resets. Whether the importer should check for potential resets of the clock of a stream (e.g. computer restart during recording, or hot-swap).

    • verbose name: Handle Clock Resets
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • reorder_timestamps
    If the file contains a stream with irregular sampling rate and timestamps that are out of order, reorder the samples so that the timestamps are monotonically increasing.

    • verbose name: Reorder Timestamps
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • use_streamnames
    Use the stream names in the file to name streams. If enabled, the streams loaded will be named as in the file. Otherwise, the streams use canonical names based on the content type, such as eeg or markers.

    • verbose name: Use Streamnames
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • max_marker_len
    Optionally, a maximum length for the event markers. Markers longer than this will be substituted by a placeholder string, where XXXX is a string key into the chunk's .props['long_markers'] field, which is essentially a string table. This is only useful if long markers slow down or otherwise throw off downstream processing.

    • verbose name: Max Marker Len
    • default value: None
    • port type: IntPort
    • value type: int (can be None)
  • use_caching
    Enable caching. Will use cached imported data, if available, instead of reimporting.

    • verbose name: Use Caching
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • verbose
    Print verbose diagnostics.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service that data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will be mapped to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on the storage provider (use default if omitted). You can override this to choose a non-default account name on some storage providers (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file will be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathAbsolute

Make the given file path absolute if it is not already.

This will also expand the tilde into the user's home directory.
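In Python terms, the behavior roughly corresponds to the following sketch (an illustrative approximation, not the node's implementation):

```python
import os.path

def make_absolute(path: str) -> str:
    """Expand '~' to the user's home directory and absolutize relative paths."""
    return os.path.abspath(os.path.expanduser(path))
```

Here expanduser handles the tilde, and abspath resolves relative paths against the current working directory.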

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathDirectory

Extract the directory part of a file path string.

If the file path contains BIDS-style placeholders using {}, the root path (the part before any {}) will be returned.
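A minimal sketch of this behavior, assuming the root is everything before the first { (illustrative only, not the node's actual code):

```python
import os.path

def path_directory(path: str, trim_placeholders: bool = True) -> str:
    """Return the directory part of a path. For BIDS-style paths containing
    {} placeholders, return the root path before any placeholder."""
    if trim_placeholders and "{" in path:
        return path[: path.index("{")].rstrip("/")
    return os.path.dirname(path)
```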

Version 1.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • trim_placeholders
    Trim off named placeholders in path. This will trim paths containing placeholders like {subject}/{session}.

    • verbose name: Trim Placeholders
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathExists

Check if the given path (file or directory) exists.

Note that this node does not currently support cloud storage.

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • pathname
    File path.

    • verbose name: Pathname
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: IN
  • result
    Result.

    • verbose name: Result
    • default value: None
    • port type: DataPort
    • value type: bool (can be None)
    • data direction: OUT
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathExtension

Extract the file extension part of a file path string.

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathFilename

Extract the filename part of a file path string, including the file extension.
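In Python terms, PathFilename and the related PathExtension node roughly correspond to the following (hypothetical example path; a sketch, not the nodes' implementation):

```python
import os.path

path = "/data/session1/recording.xdf"
filename = os.path.basename(path)       # filename incl. extension (PathFilename)
extension = os.path.splitext(path)[1]   # extension part (PathExtension)
```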

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    File path.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

PathIterator

Iterate over a list of path names.

This node accepts a list of path names, possibly given as a wildcard expression like /parent/child_*, or as a comma-separated list of paths, or as the name of a study manifest file. On each successive update this node will then output one pathname at a time, until the list is exhausted. Using a comma-separated list of wildcard expressions is not supported. Terminate the search string with a / to indicate that the returned results should be paths only; terminate with *.* (or *.ext) to indicate that the results should be files only. Instead of a *, a "capture name" can be assigned in curly braces, e.g., {Subject}. You can also use ** to include matching paths from all subfolders (i.e., myfolder/**/*.xdf will include xdf files in all subfolders, whereas myfolder/*.xdf will not). The current pathname emitted by this node can then be wired into a subsequent node as the current path/file, and thereby multiple files can be imported, processed, and then exported in succession. Using capture names allows extracting meta-data from the raw file path, which is returned by the path iterator in its curmeta output. Such meta-data can, for example, be attached to segments extracted from the data in later processing, using the Set Instance Details node. This node also has an allpaths port which outputs a list of all matching paths at once. This can be wired into a node to process all filenames at once (i.e., create a table of paths), or into a ForEach node to loop over part of a pipeline for each file. (Note that you can also wire the this port of PathIterator into a ForEach node, which will, with each loop, emit the next item from PathIterator and pass it to the ForEach node, which will then execute the loop for that item.)

Version 1.2.4

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • curpath
    Current pathname.

    • verbose name: Curpath
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT
  • curmeta
    Current metadata.

    • verbose name: Curmeta
    • default value: None
    • port type: DataPort
    • value type: dict (can be None)
    • data direction: OUT
  • allpaths
All matching paths.

    • verbose name: Allpaths
    • default value: None
    • port type: DataPort
    • value type: list (can be None)
    • data direction: OUT
  • paths
Paths to iterate over. This can be a wildcard ("glob") expression, such as /myfolder/*, or point to a study manifest file (e.g., for an ESS study, or a top-level BIDS json or tsv file), or be a comma-separated list of paths, or a Neuroinformatics query. Also, instead of a *, a "capture name" in curly braces can be given, e.g., {Session}. This will match the same as a *, but the matched content will be returned in the secondary output of the path iterator, under curmeta, in the form of a dictionary holding the matched values for all used capture names.

    • verbose name: Paths To Iterate Over
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • filter
    Filter conditions. This is an optional expression of filter conditions that can be used to narrow down the files emitted by this node. The conditions are given in Python syntax, can use any meta-data properties as if they were Python variables, and should evaluate to True. Example: age>42 and gender=='male'. For the list of Python constructs allowed, see NeuroPype documentation of its sandboxed Python expression grammar.

    • verbose name: Filter
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • sort_by
Sort the located paths (based on the paths and filter parameters) in alphabetical (default), chronological, or given order. Alphabetical sorting is case-insensitive. If chronological is selected, the timestamp in each filename, not the file's created or modified date on the file system, will be used. Only filenames containing a timestamp in the format YYYY-MM-DD_HH-MM-SS will be sorted (others will still be processed, but not sorted). If order is selected and a comma-separated list of files is given, the original order of the list is preserved.

    • verbose name: Sort By
    • default value: alphabetical
    • port type: EnumPort
    • value type: str (can be None)
  • iter_reorder
Iterate over items in a re-ordered fashion. A value of 0 is forward order, 1 is reverse, 2 is from-the-middle-out, 3 is out from the center left, and so forth. This will be applied whether "sort by" is alphabetical or chronological.

    • verbose name: Iteration Reordering
    • default value: 0
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
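To picture how the filter port narrows down files, the condition can be thought of as a Python expression evaluated against each file's captured meta-data. The stand-in below uses plain eval(), which is unsafe and is NOT what NeuroPype does (it uses a sandboxed expression grammar); it is for illustration only:

```python
def passes_filter(meta: dict, condition: str) -> bool:
    """Evaluate a filter such as "age>42 and gender=='male'" against a file's
    metadata dict, treating metadata properties as variables. Illustrative and
    unsafe stand-in for NeuroPype's sandboxed expression grammar."""
    return bool(eval(condition, {"__builtins__": {}}, dict(meta)))
```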

PlayBackREC

Play back the content of a previously recorded rec file.

This node will output the same sequence of packets that was received by RecordToREC when the data was originally recorded. Note that a REC file may contain either one packet (if the output of an offline processing pipeline was stored) or multiple successive packets (if the output of a streaming/online processing pipeline was stored), and consequently this node will output either one or multiple packets over the course of successive scheduler ticks. Note that the playback runs at whatever tickrate is globally set for NeuroPype (default 25 Hz) and does not recreate the millisecond-exact arrival timing of the original data (unless the pipeline ran at precisely the same tickrate without hitches during the recording). Implementation notes: the file format is based on Python's pickle system. Since pickle is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., foreign MS Word documents.

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Output signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: OUT
  • filename
    Filename of the REC data file to import and play back. If a relative path is given, a file of this name will be looked up in the standard data directories (/resources and Examples/).

    • verbose name: Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

RecordToCSV

Record data into a csv file.

This node accepts a multi-channel time series and writes it to a csv (comma-separated values) file. The result can be read with numerous analysis packages, including Excel, MATLAB(tm), and SPSS(tm). This node continuously appends data to a file as it receives it and therefore is useful with online/streaming pipelines (as opposed to ExportCSV). This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, like frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples). The way the file is laid out is as one row per sample, where the values of a sample, which are the channels, are separated by commas. Optionally this node can write the names of each channel into the first row (as a header, which is supported by some software), and it can also optionally append the time-stamp of each sample as an additional column at the end of each row. This node is designed for writing streaming data to disk, chunk by chunk. For saving an entire file to disk as CSV (when working with "offline" data), use ExportCSV instead.
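The file layout described above (one row per sample, channel values separated by commas, optional header row, optional trailing timestamp column) can be sketched with Python's csv module; the channel names and values below are made up:

```python
import csv
import io

channel_names = ["Fz", "Cz", "Pz"]          # hypothetical channel labels
samples = [([0.1, 0.2, 0.3], 12.0),         # (channel values, timestamp)
           ([0.4, 0.5, 0.6], 12.004)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(channel_names + ["timestamp"])  # optional header row
for values, ts in samples:
    writer.writerow(values + [ts])              # one row per sample, timestamp last

print(buf.getvalue())
```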

Version 1.0.1

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
Filename to record data to. Can be the full path, or a relative path or filename if output_root is specified.

    • verbose name: Filename
    • default value: untitled.csv
    • port type: StringPort
    • value type: str (can be None)
  • output_root
Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in filename.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • column_header
Include channel labels as column headers. If this is set, the first row in the file will have the channel names. Some analysis programs can interpret this as the header row of the table (basically as column labels).

    • verbose name: Include Column Labels
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • time_stamps
    Append a column for timestamps. If set, an extra column of data will be appended (i.e., one extra value at the end of each row, which holds the time-stamp for the respective sample, if any).

    • verbose name: Time Stamps
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • absolute_instance_times
    Write absolute instance times. If disabled, the time axis for each instance will be relative to the time-locking event.

    • verbose name: Absolute Instance Times
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • timestamp_label
    Label for the time-stamp column (if included).

    • verbose name: Timestamp Label
    • default value: timestamp
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload Parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

RecordToREC

Record data packets to a rec file.

This file format is native to NeuroPype and can store any data that is computable by it. This node supports writing the result of both offline and online processing pipelines to a file. The resulting file can subsequently be played back using the PlayBackREC node. REC is a niche file format that is mostly meant for testing, debugging, or recreating NeuroPype data streams, but is not supported by any other software -- for general-purpose data interchange, consider using the HDF5 format. Implementation notes: the file format is based on Python's pickle system. Since pickle is very flexible, files obtained from the internet may be infected by viruses or could otherwise have been tampered with, and should be treated with the same caution as, e.g., foreign MS Word documents.

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Input signal.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • filename
Filename to record packets to. Can be the full path, or a relative path or filename if output_root is specified.

    • verbose name: Filename
    • default value: untitled.rec
    • port type: StringPort
    • value type: str (can be None)
  • output_root
Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in filename.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • protocol_ver
    Pickle protocol version. This is the internal protocol version to use. Older software may not be able to read files created with the latest version, but version 3 is supported by all NeuroPype releases.

    • verbose name: Protocol Ver
    • default value: 3
    • port type: IntPort
    • value type: int (can be None)
  • verbose
    Produce verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload Parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

RecordToXDF

Record data into a xdf file.

This node accepts a multi-channel time series and streams it to a file, incrementally. The result can be read with MATLAB(tm), Python, C++, or any other framework that can parse XDF. This node will interpret your data as a multi-channel time series, even if it is not: if your packets have additional axes, like frequency, feature, and so on, these will be automatically vectorized into a flat list of channels. Also, if you send data with multiple instances (segments) into this node, subsequent instances will be concatenated along time, so the data written to the file will appear contiguous and non-segmented (channels by samples).
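The vectorization of extra axes described above can be pictured as an array reshape (hypothetical shapes, not the node's internals):

```python
import numpy as np

# Hypothetical packet block: 4 channels x 5 frequency bins x 100 time samples.
data = np.zeros((4, 5, 100))

# Vectorizing the non-time axes yields a flat list of 4*5 = 20 "channels":
flat = data.reshape(-1, data.shape[-1])
print(flat.shape)  # (20, 100)
```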

Version 1.5.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • data
    Data to record.

    • verbose name: Data
    • default value: None
    • port type: DataPort
    • value type: Packet (can be None)
    • data direction: IN
  • output_path
    The full path of the output file.

    • verbose name: Output Path
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: OUT
  • filename
Filename to export data to. Can be the full path, or a relative path or filename if output_root is specified.

    • verbose name: Filename
    • default value: untitled.xdf
    • port type: StringPort
    • value type: str (can be None)
  • output_root
Path to the output folder; if specified, the filename will be relative to this folder. Alternatively, this can be left empty and the full path may be specified in filename.

    • verbose name: Output Root
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • allow_double
Allow double-precision sample values. If set to False, double-precision values will be written as single-precision data (except time stamps, which are always double precision).

    • verbose name: Allow Double Precision
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • cloud_partsize
    Part size for streaming cloud uploads. When streaming data to the cloud, parts of this size (in MB) will be buffered up in memory and then uploaded.

    • verbose name: Cloud Partsize
    • default value: 30
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
Cloud storage host to use (if any). You can override this option to select the kind of cloud storage service from which data should be downloaded. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be written to. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • retrievable
    Upload parts so they can be retrieved individually.

    • verbose name: Retrievable Parts
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • delete_parts
Delete parts after all data is recorded and uploaded. Only applicable if Retrievable Parts is set to True.

    • verbose name: Delete Parts After Completion
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • preserve_original_metadata
This preserves the original metadata from the LSL stream, included in the tag, as-is, without any modification or manipulation of channel names. This option ensures that the field in the XDF file exactly matches the LSL stream (acting as 'ground truth').

    • verbose name: Preserve Original Metadata
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • close_on_marker
    Close when encountering this marker. When this node encounters this marker string, the recording is closed at the next opportunity. Note that this may not complete immediately, especially if there is still outstanding data to be written. For this reason, it is recommended to keep the program running for some time after sending this marker, especially when running in the cloud.

    • verbose name: Close On Marker
    • default value: close-recording
    • port type: StringPort
    • value type: str (can be None)
  • verbose
    Print verbose output.

    • verbose name: Verbose
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • session_notes
    Session notes. These notes will be written into the file header.

    • verbose name: Session Notes
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

SkipIfExists

Skip this file path if the checked path already exists.

This node can be used in offline processing when iterating over paths, e.g., with a FileIterator node. The node can accept and modify the file path that would be wired into an import node, based on whether a second path (the 'checked path') already exists. In that context, the checked path would often be the output file path that the pipeline ultimately writes to (e.g., what is wired into the final export node).
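The node's behavior can be sketched as a simple pass-through that suppresses the path when the checked path exists (an illustrative approximation, not the node's code):

```python
import os.path

def skip_if_exists(path, checked_path, enable_check=True):
    """Pass the input path through unless the checked path already exists,
    in which case emit nothing (represented here by None)."""
    if enable_check and os.path.exists(checked_path):
        return None  # downstream processing for this file is skipped
    return path
```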

Version 1.0.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • path
    File path.

    • verbose name: Path
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: INOUT
  • checked_path
    Path to check.

    • verbose name: Checked Path
    • default value: None
    • port type: DataPort
    • value type: str (can be None)
    • data direction: IN
  • enable_check
    Enable checking and skipping (default). Can be disabled (e.g., via a ParameterPort) so that existing files are not skipped.

    • verbose name: Enable Check
    • default value: True
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

UploadFileToCloud

Upload a local file to your Neuroscale cloud storage, or another cloud storage service.

The upload is only triggered when the source_filename port receives a string value representing the file path of the file to upload. You can place this node after a node that records to disk, such as RecordToXDF, and wire the output_path port of that node to the source_filename port of this node; when RecordToXDF closes the XDF file it sets its output_path port to the path of the recorded file, which is then passed on to this node, triggering the upload. The source_filename port can also be set by other means (such as from an external source via a ParameterPort). If using Neuroscale, you can obtain your storage credentials from the Storage tab of your Neuroscale dashboard. If uploading to another cloud storage service, enter the credentials used to access storage on that service (e.g., account name, bucket name, and secret key for AWS S3). If a pipeline with this node is run on Neuroscale, the account name, bucket name, and credentials fields can be left blank, and the defaults associated with the Neuroscale account on which the pipeline is run will be used. Optionally, you can delete the local file after uploading it.
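The documented fallback rules for the destination path (see the destination_folder and destination_filename ports below) can be sketched as follows; the helper name is hypothetical and the actual upload mechanics are not shown:

```python
import os

def make_destination_key(source_filename, destination_folder="", destination_filename=""):
    """Build the cloud object key for the uploaded file, mirroring the
    documented fallbacks: if destination_folder is omitted, the source path
    (relative to the bucket root) is used; if destination_filename is
    omitted, the source file's name is used. Hypothetical helper, not the
    node's actual API."""
    name = destination_filename or os.path.basename(source_filename)
    if destination_folder:
        return destination_folder.rstrip("/") + "/" + name
    # fall back to the source path, relative to the root of the bucket
    folder = os.path.dirname(source_filename).lstrip("/")
    return folder + "/" + name if folder else name
```

For an S3-backed upload, the resulting key could then be passed to a client library such as boto3 (e.g., `client.upload_file(source, bucket, key)`), using the account, bucket, and credential settings described below.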

Version 0.9.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • source_filename
    Full path of the local file to upload.

    • verbose name: Source Filename
    • default value: None
    • port type: StringPort
    • value type: str (can be None)
  • destination_folder
    Folder path on cloud storage. If omitted, the source_filename path will be used, copied relative to the root of the specified cloud storage bucket.

    • verbose name: Destination Folder
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • destination_filename
    Name of the file on cloud storage. If omitted, the source_filename will be used.

    • verbose name: Destination Filename
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_host
    Cloud storage host to upload the file to. If Neuroscale is selected, the storage provider will be the backend storage provider associated with your Neuroscale account (usually S3).

    • verbose name: Cloud Host
    • default value: Neuroscale
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider. If you have a Neuroscale account, this is the "Account Name" field in your Neuroscale storage credentials (see Credentials button in the Storage tab of your Neuroscale dashboard).

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket name on storage provider. This is the bucket or container on the cloud storage provider that the file would be written to. If you have a Neuroscale account, this is the "Container Name" field in your Neuroscale storage credentials (see Credentials button in the Storage tab of your Neuroscale dashboard).

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential/token ("secret key") on storage provider. If you have a Neuroscale account, this is the "Credentials" field in your Neuroscale storage credentials (see Credentials button in the Storage tab of your Neuroscale dashboard).

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • delete_local_file
    If True, the local file will be deleted after it is uploaded to cloud storage.

    • verbose name: Delete Local File
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)

WaitForFiles

Wait for file(s) matching certain criteria to exist in the filesystem before continuing operation.

Passes data through if the files are found, and None otherwise. Also passes the files-found status (true/false) through a port, which can be wired into the update port of a node further down the pipeline (triggering its execution, for example). The matching criteria use the same format as PathIterator (see that node's docs).
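The node's polling behavior can be sketched with a simple glob-based loop (names are illustrative, not the node's API, and this sketch assumes a local filesystem path rather than cloud storage):

```python
import glob
import time

def wait_for_files(pattern, num_required=1, timeout=600, check_interval=2):
    """Poll the filesystem until at least num_required files match pattern,
    or until timeout (in seconds) elapses; returns (found, matches).
    A timeout of 0 checks once and does not wait, matching the documented
    behavior of wait_for_files_timeout. Illustrative sketch only."""
    deadline = time.monotonic() + timeout
    while True:
        matches = glob.glob(pattern)
        if len(matches) >= num_required:
            return True, matches
        if time.monotonic() >= deadline:
            return False, matches
        time.sleep(check_interval)
```

The boolean result corresponds to the files_found port, which downstream nodes can use to trigger their execution.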

Version 1.1.0

Ports/Properties

  • metadata
    User-definable meta-data associated with the node. Usually reserved for technical purposes.

    • verbose name: Metadata
    • default value: {}
    • port type: DictPort
    • value type: dict (can be None)
  • files_found
    True if all required files have been found, False otherwise. Can be wired into another node to trigger its execution.

    • verbose name: Files Found
    • default value: False
    • port type: DataPort
    • value type: bool (can be None)
    • data direction: OUT
  • path_to_check
    Path to check. Accepts the same format as PathIterator node.

    • verbose name: Path To Check
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • num_required_files
    When the path_to_check uses wildcards, this is the minimum number of files matching the path that need to be present before operation will continue.

    • verbose name: Num Required Files
    • default value: 1
    • port type: IntPort
    • value type: int (can be None)
  • wait_for_files_timeout
    Wait for the input file(s) to appear for up to this many seconds before timing out. A value of 0 means the pipeline will not wait for the file (essentially skipping this node).

    • verbose name: Wait For Files Timeout
    • default value: 600
    • port type: IntPort
    • value type: int (can be None)
  • check_interval
    How often (in seconds) to check for the expected files.

    • verbose name: Check Interval
    • default value: 2
    • port type: IntPort
    • value type: int (can be None)
  • cloud_host
    Cloud storage host to use (if any). You can override this option to select which kind of cloud storage service data should be downloaded from. On some environments (e.g., on NeuroScale), the value Default will map to the default storage provider on that environment.

    • verbose name: Cloud Host
    • default value: Default
    • port type: EnumPort
    • value type: str (can be None)
  • cloud_account
    Cloud account name on storage provider (use default if omitted). You can override this to choose a non-default account name for some storage provider (e.g., Azure or S3). On some environments (e.g., on NeuroScale), this value will be default-initialized to your account.

    • verbose name: Cloud Account
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_bucket
    Cloud bucket to read from (use default if omitted). This is the bucket or container on the cloud storage provider that the file would be read from. On some environments (e.g., on NeuroScale), this value will be default-initialized to a bucket that has been created for you.

    • verbose name: Cloud Bucket
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • cloud_credentials
    Secure credential to access cloud data (use default if omitted). These are the security credentials (e.g., password or access token) for the cloud storage provider. On some environments (e.g., on NeuroScale), this value will be default-initialized to the right credentials for you.

    • verbose name: Cloud Credentials
    • default value:
    • port type: StringPort
    • value type: str (can be None)
  • set_breakpoint
    Set a breakpoint on this node. If this is enabled, your debugger (if one is attached) will trigger a breakpoint.

    • verbose name: Set Breakpoint (Debug Only)
    • default value: False
    • port type: BoolPort
    • value type: bool (can be None)