Tools

We provide a set of tools to encourage reproducibility and consistency of results reported in the field of automated seizure detection.

Library for measuring performance of seizure detection algorithms

We built a library that provides different scoring methodologies to compare reference binary annotations (the ground-truth annotations of the neurologist) with hypothesis binary annotations (produced by a machine learning pipeline). These scoring methodologies provide a count of correctly identified events (true positives), missed events (false negatives), and wrongly marked events (false positives).

In more detail, we measure performance at the level of:

  • Samples: a performance metric that treats every labelled sample independently.
  • Events (e.g. an epileptic seizure): classifies each event in the reference and the hypothesis based on the overlap between the two.

Both methods are illustrated in the following figures:

Illustration of sample-based scoring.
Illustration of event-based scoring.
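
The difference between the two methods can be made concrete with a small sketch. The following Python/NumPy example is a minimal illustration of the idea behind both scoring modes, not the library's actual implementation or API; in particular, the event rule shown here (any overlap between a reference and a hypothesis event counts as a detection) is a simplification of overlap-based event matching.

    import numpy as np


    def sample_scoring(ref, hyp):
        """Count TP/FP/FN by comparing the two binary masks sample by sample."""
        ref = np.asarray(ref, dtype=bool)
        hyp = np.asarray(hyp, dtype=bool)
        tp = int(np.sum(ref & hyp))   # samples marked as seizure in both
        fp = int(np.sum(~ref & hyp))  # samples only the hypothesis marks
        fn = int(np.sum(ref & ~hyp))  # samples only the reference marks
        return tp, fp, fn


    def _events(mask):
        """Extract (start, stop) index pairs of contiguous seizure runs in a binary mask."""
        mask = np.asarray(mask, dtype=int)
        edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
        return list(zip(edges[::2], edges[1::2]))  # stop index is exclusive


    def event_scoring(ref, hyp):
        """Count TP/FP/FN at the event level: any overlap between a reference and a
        hypothesis event counts as a detection (a simplified overlap rule)."""
        ref_events = _events(ref)
        hyp_events = _events(hyp)
        tp = sum(
            any(hs < rstop and rstart < he for hs, he in hyp_events)
            for rstart, rstop in ref_events
        )
        fn = len(ref_events) - tp
        fp = sum(
            not any(rs < hstop and hstart < re for rs, re in ref_events)
            for hstart, hstop in hyp_events
        )
        return tp, fp, fn


    # Toy example: 20 one-second samples, one reference seizure and two detections.
    ref = np.zeros(20, dtype=bool); ref[5:12] = True
    hyp = np.zeros(20, dtype=bool); hyp[6:9] = True; hyp[15:17] = True

    print(sample_scoring(ref, hyp))  # (3, 2, 4): counted sample by sample
    print(event_scoring(ref, hyp))   # (1, 1, 0): one detected event, one false detection

On the toy masks at the end of the sketch, the same pair of annotations yields 3 true-positive samples but only 1 true-positive event, which is why the two methods can rank algorithms differently.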

Seizure validation framework

This library provides scripts to work with the framework for the validation of EEG-based automated seizure detection algorithms.

The library provides code to:

  1. Convert EDF files from most open scalp EEG datasets of people with epilepsy to a standardized format.
  2. Convert seizure annotations from these datasets to a standardized format.
  3. Evaluate the performance of seizure detection algorithms.
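
As an illustration of step 2, the sketch below writes dataset-specific seizure annotations (onset/offset pairs in seconds) to a tab-separated events file. The file name, column names (onset, duration, eventType) and label values used here are assumptions made for this example and are not necessarily the standardized format adopted by the framework.

    import csv
    from pathlib import Path

    # Hypothetical annotations for one recording: (onset, offset) in seconds,
    # as they might be extracted from a dataset-specific annotation file.
    seizures = [(1254.0, 1310.5), (4820.0, 4875.0)]

    # Assumed BIDS-like file name, used here only for the example.
    out_path = Path("sub-01_run-01_events.tsv")

    with out_path.open("w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        # Assumed column layout: one row per annotated seizure.
        writer.writerow(["onset", "duration", "eventType"])
        for onset, offset in seizures:
            writer.writerow([f"{onset:.2f}", f"{offset - onset:.2f}", "sz"])

    # A recording without seizures could be represented by a single background
    # row spanning the whole recording (again, an assumption for this sketch).

Once both the reference and the algorithm output are expressed in one such common format, step 3 reduces to loading the two sets of events and applying the sample-based or event-based scoring described above.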