


A no-code AI training tool for quickly deploying AI models in machine vision solutions.

Teledyne DALSA Astrocyte empowers users to harness their own images of products, samples, and defects to train neural networks for a variety of tasks, such as anomaly detection, classification, object detection, segmentation, and noise reduction. With its highly flexible graphical user interface, Astrocyte lets users visualize and interpret models for performance and accuracy, and export them to files ready for runtime in the Teledyne DALSA Sapera and Sherlock vision software platforms.

Download Astrocyte

Free operation for 60 days.

Key Features

  • Graphical User Interface for rapid machine vision application development.
  • Automatic generation of bounding box annotations via Semi-Supervised Object Detection (SSOD).
  • Continual Learning (also known as Lifelong Learning) in classification for further learning at runtime.
  • Location of small defects in high-resolution images via tiling mechanism.
  • Easy integration with Sapera Processing and Sherlock vision software for runtime inference.
  • Access to hyperparameters for highly flexible training of AI models, including selection of neural network type.


Key Benefits

  • Train AI models quickly (under 10 minutes with good data).
  • Save time labelling images with automatically generated annotations.
  • Decrease training effort with pre-trained AI models (fewer samples required).
  • Visual tools for AI model assessment (heatmaps, loss function curves).
  • Full image data privacy – train and deploy AI models on local PC.
  • Live video acquisition from Teledyne and third-party cameras.

Application Examples

Astrocyte significantly improves quality, productivity, and efficiency in X-ray medical imaging

The small size and random placement of fibers on X-ray detectors make them challenging and time-consuming to find with traditional methods or human inspection.

With Astrocyte, the X-ray solutions team was able to rapidly identify all defects and even outperform its human operators.

Article originally published in Novus Light.

Tell Me More

Surface inspection on metal plates

Classification of good and bad metal sheets. Tiny scratches on the metal are detected, and the affected sheets are classified as bad samples. Astrocyte detects small defects in high-resolution images of rough texture; just a few tens of samples are required to train a model with good accuracy. Classification is used when both good and bad samples are available, while Anomaly Detection is used when only good samples are available.

Location/identification of wood knots

Localization and classification of various types of knots in wood planks. Astrocyte can robustly locate and classify knots as small as 10 pixels wide in high-resolution 2800 x 1024 images using the tiling mechanism, which preserves native resolution.
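Astrocyte's tiling mechanism is built into the tool, but the general idea of inspecting a high-resolution image at native resolution can be sketched in Python. This is an illustrative implementation, not Astrocyte's actual code; the tile size and overlap values are assumptions.

```python
import numpy as np

def tile_image(image, tile_size=512, overlap=64):
    """Split a high-resolution image into overlapping tiles at native resolution.

    Overlap ensures a small defect straddling a tile boundary still appears
    whole in at least one tile. Tile size and overlap are illustrative values.
    Returns a list of ((y0, x0), tile) pairs.
    """
    h, w = image.shape[:2]
    stride = tile_size - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp so edge tiles stay inside the image at full tile size
            y0 = max(min(y, h - tile_size), 0)
            x0 = max(min(x, w - tile_size), 0)
            tiles.append(((y0, x0), image[y0:y0 + tile_size, x0:x0 + tile_size]))
    return tiles

# A 2800 x 1024 wood-plank image, as in the knot example above
plank = np.zeros((1024, 2800), dtype=np.uint8)
tiles = tile_image(plank)
print(len(tiles), tiles[0][1].shape)  # -> 21 (512, 512)
```

Each 512 x 512 tile can then be fed to the model at its native pixel resolution, so a 10-pixel-wide knot is never shrunk away by downscaling.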

Detection/segmentation of vehicles

Detection and segmentation of various types of vehicles in outdoor scenes. Astrocyte provides output shapes where each pixel is assigned a class. Usage of blob tool on the segmentation output allows performing shape analysis on the vehicles.

Noise reduction on X-ray medical images

Denoising of high-noise X-ray medical images such as dental and mammography images. Astrocyte delivers a good output signal-to-noise ratio while preserving image sharpness.

Deep Learning Architectures 

Astrocyte supports the following deep learning architectures.


Classification

Classification involves predicting which class an item belongs to. Some classifiers are binary, resulting in a yes/no decision; others are multi-class and can categorize an item into one of several categories. Classification is used to solve problems like defect identification, character recognition, presence detection, and food sorting. Astrocyte supports the following classification neural networks: Resnet-18, Resnet-50, and Resnet-101. Astrocyte also supports continual classification, allowing further training at inference time.

Anomaly Detection

Anomaly Detection is a binary classifier dedicated to identifying good and bad samples. Unlike regular classification, Anomaly Detection can train on unbalanced datasets (i.e., a large number of good samples and a small number of bad samples). Anomaly Detection is used in any application involving identification of defects on a surface or in a scene. It produces heatmaps at runtime, which are useful for finding the location and shape of defects. Astrocyte supports the following anomaly detection neural networks: Alexnet, VGG16, and Resnet-18.
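To see how a heatmap yields defect location and shape, consider a minimal post-processing sketch: threshold the anomaly scores and take the bounding box of the hot pixels. This is an illustrative example, not Astrocyte's internal processing; the threshold value is an assumption.

```python
import numpy as np

def defect_bbox(heatmap, threshold=0.5):
    """Turn an anomaly heatmap into a rough defect bounding box.

    'heatmap' is an HxW array of anomaly scores in [0, 1], like the heatmaps
    an anomaly detector produces at runtime. Returns (xmin, ymin, xmax, ymax),
    or None if no pixel exceeds the threshold.
    """
    hot = np.argwhere(heatmap >= threshold)  # (row, col) of hot pixels
    if hot.size == 0:
        return None
    (y0, x0), (y1, x1) = hot.min(axis=0), hot.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)

# Synthetic heatmap with one hot region
hm = np.zeros((8, 8))
hm[2:5, 3:6] = 0.9
print(defect_bbox(hm))  # -> (3, 2, 5, 4)
```

In practice the thresholded mask itself already approximates the defect's shape; the bounding box is just a convenient summary.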

Object Detection

Object Detection involves localizing one or more objects of interest in an image. It combines the two tasks of localizing and classifying objects into one single execution. The output of Object Detection includes a bounding box and a class label for each object of interest. Object Detection is used to solve problems like presence detection, object tracking, defect localization, and sorting. Astrocyte supports the following object detection neural networks: SSD300, SSD512, SSDLite, and YOLOX.


Segmentation

Image segmentation involves dividing an input image into segments to simplify image analysis. Segments represent objects or parts of objects and are composed of groups of pixels. Image segmentation sorts pixels into larger components, eliminating the need to consider individual pixels as units of observation. It is a critical process in computer vision and is used for defect sorting/qualification, food sorting, shape analysis, etc. Astrocyte supports the following segmentation neural networks: DeepLabV3-Resnet-50, DeepLabV3-Resnet-101, and Unet.
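A segmentation model's output is a per-pixel class map. A minimal sketch (illustrative, not Astrocyte's API) of splitting such a map into one binary mask per class, ready for downstream blob or shape analysis:

```python
import numpy as np

def class_masks(label_map, class_names):
    """Split a per-pixel class map (as produced by a segmentation model)
    into one binary mask per class, ready for blob/shape analysis."""
    return {name: (label_map == i) for i, name in enumerate(class_names)}

# Toy 3x3 class map: 0 = background, 1 = car, 2 = truck
labels = np.array([[0, 0, 1],
                   [0, 2, 2],
                   [1, 1, 2]])
masks = class_masks(labels, ["background", "car", "truck"])
print(int(masks["car"].sum()))  # -> 3 pixels labeled "car"
```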

Noise Reduction

Image denoising aims to reconstruct a high-quality image from its degraded observation. It represents an important building block in real applications such as digital photography, medical image analysis, remote sensing, surveillance, and digital entertainment. Astrocyte supports the following noise reduction neural network: Residual Channel Attention Network (RCAN).

Astrocyte GUI

Creating Dataset

Generating image samples

  • Connect to a camera (Teledyne or third-party) or a frame grabber to acquire live video.
  • Save images while acquiring the live video stream (manually via click or automatically).

Importing image samples

  • File selection based on folder layout, prefix/suffix and regular expressions.
  • Image file formats: PNG, JPG, BMP, GIF and TIFF.
  • Automatic (random) or manual distribution of images into training and validation datasets.
  • Adjustable image size for optimizing memory usage.
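The automatic (random) distribution into training and validation sets is handled by the Astrocyte GUI; for readers who want to reproduce the idea outside the tool, a minimal sketch in Python (illustrative, not Astrocyte's actual code; the 80/20 split is an assumption) might look like:

```python
import random

def split_dataset(image_paths, train_fraction=0.8, seed=0):
    """Randomly distribute images into training and validation sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # seeded for reproducibility
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

train, val = split_dataset([f"sample_{i:03d}.png" for i in range(100)])
print(len(train), len(val))  # -> 80 20
```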

Importing/creating annotations (ground truth)

  • From common databases such as Pascal VOC and MS COCO.
  • From user-defined text files and parsing scheme.
  • Bounding box, polygon or brush visual editing for object detection.
  • Semi-Supervised Object Detection (SSOD) allows automated generation of bounding boxes from a dataset containing a percentage of unlabeled images.
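Pascal VOC, one of the annotation formats listed above, stores bounding boxes in a simple XML layout. A minimal parsing sketch (illustrative; Astrocyte's own importer handles this for you):

```python
import xml.etree.ElementTree as ET

def parse_voc(xml_text):
    """Parse bounding-box annotations from a Pascal VOC XML document."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "label": obj.findtext("name"),
            "xmin": int(bb.findtext("xmin")),
            "ymin": int(bb.findtext("ymin")),
            "xmax": int(bb.findtext("xmax")),
            "ymax": int(bb.findtext("ymax")),
        })
    return boxes

# Toy annotation: one "knot" box, as in the wood inspection example
sample = """<annotation>
  <object><name>knot</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>22</xmax><ymax>31</ymax></bndbox>
  </object>
</annotation>"""
print(parse_voc(sample))
```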

Visualizing/editing dataset

  • Image display and zoom.
  • Annotation display as overlay graphics on image.
  • Annotation selection, deletion and editing.
  • Manual editing of annotations on individual samples.
  • Merging of datasets via saving as a TAR file.

Training Model

  • Training on system GPU. See minimum requirements below.
  • Selection of device (when multiple devices are available).
  • Choice of deep learning models for optimal accuracy.
  • Selection of preprocessing level: native, scaling or tiling.
  • Support for rectangular input images (preserving aspect ratio).
  • Access to hyperparameters such as learning rate, number of epochs, batch size, etc., for customization of training execution.
  • Hyperparameters pre-set with default values commonly used.
  • Image augmentation available for artificially increasing the number of training samples via transformations such as rotation, warping, lighting, zoom, etc.
  • Training session cancelling and resuming.
  • Progress bar with training duration estimation.
  • Graph display of progress including accuracy and training loss at each iteration (epoch).
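Image augmentation artificially enlarges a training set by applying random transformations. A minimal sketch (illustrative only; it uses simple flips and 90-degree rotations as stand-ins for the rotation, warping, lighting, and zoom transforms Astrocyte offers):

```python
import numpy as np

def augment(image, rng):
    """Return a randomly transformed copy of a training image."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)   # random horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)   # random vertical flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # random 90-degree rotation
    return out

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)
augmented = [augment(img, rng) for _ in range(8)]
print(len(augmented), augmented[0].shape)  # -> 8 (4, 4)
```

Each call yields a geometrically transformed variant of the same sample, so one labeled image contributes several distinct training examples.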

Model Validation

  • Statistics on model training.
  • Metrics on model performance: accuracy, recall, mean average precision (mAP), intersection over union (IoU).
  • Model testing interface to perform validation of the model on either training, validation, entire, or user-defined dataset with possibility of reshuffling samples.
  • Display of confusion matrix (table showing the intersection between predictions and ground truth), with interactive selection of individual images.
  • Display of heatmaps for visualization of hot regions in classification.
  • Inference on sample images for testing inside Astrocyte.
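Among the metrics above, intersection over union (IoU) measures how well a predicted box overlaps the ground-truth box. A minimal reference implementation (illustrative; Astrocyte computes this for you):

```python
def iou(box_a, box_b):
    """Intersection over Union between two (xmin, ymin, xmax, ymax) boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Width/height of the overlapping region (clamped at 0 when disjoint)
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.14285714285714285 (1/7)
```

An IoU of 1.0 means a perfect match; a common convention treats a detection as correct when its IoU with the ground truth exceeds 0.5.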

Model Export-Import

  • Proprietary model format compatible with Sapera Processing and Sherlock.
  • Model contains all information required for performing inference: model architecture, trained weights, metadata such as image size and format.
  • Multiple model management. Models stored in Astrocyte internal storage.
  • Model can be imported into user application via Sapera Processing or Sherlock.

Integration with Sapera Processing and Sherlock

  • Both Sapera Processing and Sherlock include an inference tool for each of the supported model architectures.
  • Model files are imported into the inference tool and ready for execution on live video stream.
  • The inference tool can be coupled with other image processing tools such as blob analysis, pattern matching, barcode reading, etc.
  • To be used in conjunction with Sapera LT for acquiring images from Teledyne DALSA cameras and frame-grabbers.
  • Examples available with source code.


Licensing

  • Astrocyte requires either the Sapera AI SDK or the Sherlock AI SDK to operate.
  • If no license is present, Astrocyte runs in evaluation mode for 60 days.
  • AI inference is enabled in Sapera Processing with the Sapera Group 4 Runtime license.
  • AI inference is enabled in Sherlock with the Sherlock AI Runtime license.

System Requirements


Operating System   Windows 10 64-bit
GPU                Minimum recommended: NVIDIA GeForce GTX 1070 with 8 GB RAM, or equivalent




Documents

  • Astrocyte Datasheet (PDF)
  • Astrocyte User Manual (PDF)

Learn more

Questions? Need more information? Contact our sales team to learn more today.

Contact us