


Astrocyte is an artificial-intelligence application dedicated to training neural networks on 2D images for various applications. Through a highly flexible graphical user interface, users can bring in their own image samples and train neural networks to perform classification, object detection, segmentation and noise reduction. Astrocyte allows visualizing and interpreting models for performance/accuracy, as well as exporting these models to files for later use at runtime in Teledyne DALSA’s Sapera and Sherlock platforms.

Key Features

  • Graphical User Interface for rapid application development
  • Training on user PC for data privacy
  • Multiple AI architectures for a wide range of applications
  • Import training samples from local or remote locations
  • Access to hyper-parameters for highly flexible training
  • Graphic view of training progress with possibility of cancelling/resuming
  • Visualize model performance with numerical metrics and heatmaps
  • Export model files to Sapera Processing (and Sherlock soon)
  • Pre-trained models for reduced training effort (fewer samples)
  • Supports Microsoft Windows; Linux support to come with the embedded version

Deep Learning Architectures 

Astrocyte supports the following deep learning architectures.


Classification

Classification involves predicting which class an item belongs to. Some classifiers are binary, resulting in a yes/no decision. Others are multi-class and can categorize an item into one of several categories. Classification is used to solve problems like defect identification, character recognition, presence detection, food sorting, etc. Astrocyte supports the following classification neural networks: Resnet-18, Resnet-50, Resnet-101.

Anomaly Detection

Anomaly Detection is the identification of rare occurrences, items or events of concern due to their differing characteristics from the majority of the processed data. Anomaly Detection is a binary classifier dedicated to identifying good and bad samples. Unlike regular classification, Anomaly Detection can train on unbalanced datasets (i.e. a large number of good samples and a small number of bad samples). Anomaly Detection is used in any application involving the identification of defects on a surface or in a scene.
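Astrocyte's internal scoring is not exposed here, but the unbalanced-dataset idea can be illustrated with a simple sketch: fit a threshold from good samples only, then flag anything that scores far outside that distribution. The function names and the 3-sigma rule below are illustrative assumptions, not Astrocyte's method.

```python
import statistics

def fit_threshold(good_scores, k=3.0):
    """Fit an anomaly threshold from 'good' samples only (illustrative).

    Mirrors the unbalanced-dataset case: only normal samples are needed
    at fit time; anything scoring far from them is flagged as anomalous.
    """
    mu = statistics.mean(good_scores)
    sigma = statistics.stdev(good_scores)
    return mu + k * sigma

def is_anomaly(score, threshold):
    return score > threshold

# Example: reconstruction errors measured on known-good samples
good = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]
thr = fit_threshold(good)
print(is_anomaly(0.11, thr))  # normal sample -> False
print(is_anomaly(0.90, thr))  # defective sample -> True
```

In practice the score would come from a trained network rather than a hand-picked statistic, but the asymmetry is the same: the model learns what "good" looks like and rejects deviations.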

Object Detection

Object Detection involves localizing one or more objects of interest in an image. It combines the two tasks of localizing and classifying objects into one single execution. The output of Object Detection includes a bounding box and a class label for each of the objects of interest. Object Detection is used to solve problems like presence detection, object tracking, defect localization and sorting, etc. Astrocyte supports the following object detection neural networks: SSD300, SSD512 and SSDLite.
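Predicted bounding boxes are typically compared to ground-truth boxes with intersection over union (IoU), one of the metrics listed under Model Validation below. A minimal sketch, assuming boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the overlapping region
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 (perfect overlap)
```

An IoU of 1.0 means a perfect match; detection benchmarks commonly count a prediction as correct above a chosen IoU threshold such as 0.5.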


Segmentation

Image segmentation involves dividing an input image into segments to simplify image analysis. Segments represent objects or parts of objects and are composed of groups of pixels. Image segmentation sorts pixels into larger components, eliminating the need to consider individual pixels as units of observation. Image segmentation is a critical process in computer vision and is used for defect sorting/qualification, food sorting, shape analysis, etc. Astrocyte supports the following segmentation neural networks: DeepLabV3-Resnet-50, DeepLabV3-Resnet-101, Unet.
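The "pixels into larger components" idea can be sketched independently of any network: given a binary mask (such as a segmentation output), 4-connected flood fill groups foreground pixels into labeled segments that can then be counted or measured. This is a conceptual illustration, not Astrocyte's implementation.

```python
from collections import deque

def connected_components(mask):
    """Group foreground pixels (1s) of a binary mask into 4-connected segments."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and labels[y][x] == 0:
                count += 1                       # start a new segment
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:                     # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels

mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
count, labels = connected_components(mask)
print(count)  # 2 separate segments
```

Each labeled segment can then be treated as one object for downstream analysis such as defect sorting or shape measurement.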

Noise Reduction

Image denoising aims to reconstruct a high-quality image from its degraded observation. It represents an important building block in real applications such as digital photography, medical image analysis, remote sensing, surveillance and digital entertainment. Astrocyte supports the following noise reduction neural networks: Residual Channel Attention Network (RCAN).

Astrocyte Graphical User Interface

Creating Dataset

Importing image samples

  • Import from different storage locations (local, remote, cloud)
  • Secure data import with credentials and encryption.
  • File selection based on folder layout, prefix/suffix and regular expressions.
  • Image file formats: PNG, JPG, BMP, GIF and TIFF.
  • Automatic (random) or manual distribution of images into training and validation datasets.
  • Adjustable image size for optimizing memory usage.
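Two of the import steps above — file selection by pattern and random distribution into training and validation sets — can be sketched with the standard library. The helper names and the 80/20 ratio are illustrative assumptions, not Astrocyte's actual interface.

```python
import random
import re

def select_files(filenames, pattern):
    """Keep only the files whose name matches a regular expression."""
    rx = re.compile(pattern)
    return [f for f in filenames if rx.search(f)]

def split_dataset(files, train_ratio=0.8, seed=42):
    """Randomly distribute files into training and validation sets."""
    shuffled = list(files)
    random.Random(seed).shuffle(shuffled)  # fixed seed -> reproducible split
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

files = ["cam0_001.png", "cam0_002.png", "cam1_001.jpg",
         "notes.txt", "cam1_002.png"]
images = select_files(files, r"\.(png|jpg)$")   # suffix-based selection
train, val = split_dataset(images, train_ratio=0.8)
print(len(images), len(train), len(val))  # 4 3 1
```

The same pattern extends to prefix-based selection or folder layouts by adjusting the regular expression.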

Importing/creating annotations (ground truth)

  • From common databases such as Pascal VOC and MS COCO.
  • From user-defined text files and parsing scheme.
  • Bounding box visual editing for object detection.
  • Semi-Supervised Object Detection (SSOD) allowing automated generation of bounding boxes from a dataset containing a percentage of unlabeled images.

Visualizing/editing dataset

  • Image display and zoom.
  • Annotation display as overlay graphics on image.
  • Annotation selection, deletion and editing.
  • Manual editing of annotations on individual samples.

Training Model

  • Training on system GPU. See minimum requirements below.
  • Selection of device (when multiple devices available)
  • Choice of deep learning models for optimal accuracy.
  • Access to hyperparameters such as learning rate, number of epochs, batch size, etc., for customization of training execution.
  • Hyperparameters pre-set with default values commonly used.
  • Image augmentation available for artificially increasing the number of training samples via transformations such as rotation, warping, lighting, zoom, etc.
  • Training session cancelling and resuming.
  • Progress bar with training duration estimation.
  • Graph display of progress including accuracy and training loss at each iteration (epoch).
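To make the progress-graph bullet concrete, here is a generic, minimal training loop — gradient descent on a one-parameter model — that records the loss at each epoch, the kind of curve Astrocyte plots during training. This is a didactic sketch, not Astrocyte's training code.

```python
def train(xs, ys, lr=0.05, epochs=50):
    """Fit y = w * x by gradient descent, recording the per-epoch loss."""
    w = 0.0
    history = []
    for _ in range(epochs):
        # mean squared error and its gradient with respect to w
        loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad          # learning rate controls the step size
        history.append(loss)
    return w, history

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]       # underlying rule: y = 2x
w, history = train(xs, ys)
print(round(w, 2))               # converges to 2.0
print(history[-1] < history[0])  # loss decreases over the epochs -> True
```

The hyperparameters exposed here (learning rate, number of epochs) play the same role as those listed above; batch size enters once the loss is computed over subsets of the samples rather than the full set.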

Model Validation

  • Statistics on model training.
  • Metrics on model performance: accuracy, recall, mean average precision (mAP), intersection over union (IoU).
  • Model testing interface to perform validation of the model on the training, validation, entire, or a user-defined dataset, with the possibility of reshuffling samples.
  • Display of confusion matrix (graph showing intersection between prediction and ground truth). Interactive selection of individual images.
  • Display of heatmaps for visualization of hot regions in classification.
  • Inference on sample images for testing inside Astrocyte.
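The confusion matrix and the accuracy/recall metrics above can be sketched for the binary good/bad case in a few lines. The label names and the choice of "bad" as the positive class are illustrative assumptions.

```python
from collections import Counter

def confusion_matrix(truth, pred):
    """Count (ground truth, prediction) pairs for a classifier."""
    return Counter(zip(truth, pred))

def metrics(cm, positive="bad"):
    """Accuracy, recall and precision from a confusion matrix."""
    tp = cm[(positive, positive)]
    fn = sum(v for (t, p), v in cm.items() if t == positive and p != positive)
    fp = sum(v for (t, p), v in cm.items() if t != positive and p == positive)
    total = sum(cm.values())
    correct = sum(v for (t, p), v in cm.items() if t == p)
    return {
        "accuracy": correct / total,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

truth = ["good", "good", "bad", "bad", "good", "bad"]
pred  = ["good", "bad",  "bad", "bad", "good", "good"]
m = metrics(confusion_matrix(truth, pred))
print(m["accuracy"])  # 4/6 ≈ 0.667
print(m["recall"])    # 2/3 ≈ 0.667 (one defect was missed)
```

Recall is often the metric to watch in defect inspection: it measures how many true defects the model actually caught.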

Model Export-Import

  • Proprietary model format compatible with Sapera Processing and Sherlock.
  • Model contains all information required for performing inference: model architecture, trained weights, metadata such as image size and format.
  • Multiple model management. Models stored in Astrocyte internal storage.
  • Model can be imported into user application via Sapera Processing or Sherlock.

Integration with Sapera Processing and Sherlock

  • Both Sapera Processing and Sherlock include an inference tool for each of the supported model architectures.
  • Model files are imported into the inference tool and ready for execution on live video stream.
  • The inference tool can be coupled with other image processing tools such as blob analysis, pattern matching, barcode reading, etc.
  • To be used in conjunction with Sapera LT for acquiring images from Teledyne DALSA cameras and frame-grabbers.
  • Examples available with source code.


Licensing

  • SDK license for Astrocyte GUI and model export.
  • Runtime license for inference execution in Sapera Processing or Sherlock.
  • Runtime options for selected model architectures.
  • License keys on either a dongle or a Teledyne DALSA device.
  • 60-day evaluation license available.

System Requirements


Operating System: Windows 10 64-bit
GPU (minimum recommended): NVIDIA GeForce GTX 1070 with 8 GB RAM or equivalent


Support of AI in Sherlock is planned but not currently offered. Please contact Teledyne DALSA Sales about using AI models in Sherlock.



Documents

  • Astrocyte Datasheet (PDF)
  • Astrocyte User Manual (PDF)

Learn more

Questions? Need more information? Contact our sales team to learn more today.
