
Astrocyte

AI Training Made Simple and Cost Effective

No-Code AI training tool to quickly deploy AI models for machine vision solutions. Tackle tough inspection cases traditional methods can't handle.

Teledyne DALSA Astrocyte lets users train neural networks on their own images of products, samples, and defects for tasks such as anomaly detection, classification, object detection, and segmentation. Through its highly flexible graphical user interface, Astrocyte lets users visualize and interpret models for performance and accuracy, and export them to files ready for runtime in the Teledyne DALSA Sapera and Sherlock vision software platforms.



Download Astrocyte

Free to use for 60 days.

Why AI?

Traditional Machine Vision Tools

  • Confuse water droplets with surface damage
  • Can’t handle shade and perspective changes
  • Miss subtle surface damage

AI Classification Algorithm

  • Ignores water droplets
  • Is robust to variations in surface finish and perspective
  • Can detect a full range of defects

No AI expertise or code required

  • Train a model without having to learn about any AI parameters
  • Simply choose one of three tuning depth options (low, medium, high), and easily get an optimized AI model
  • Graphical user interface for rapid machine vision development
  • Train AI models quickly (under 10 minutes with good data)

Save time labelling images and training

  • Continual learning updates classification models on the factory floor without a full retraining cycle, saving hours per model update
  • Automatic labelling (via SSOD or pre-trained models) generates bounding boxes and labels
  • Import existing label sets (.txt, PASCAL VOC, MS COCO, KITTI)
  • Segmentation annotation with different shapes (polygon, rectangle, circle, torus tools)
  • Decrease training effort with pre-trained AI models (fewer samples required)

Combine with other Teledyne components

  • Leverage Teledyne’s Sherlock or Sapera Processing to get the best of rule-based algorithms combined with AI models for a full solution
  • Live video acquisition from Teledyne and third-party cameras

Key Features

  • Graphical user interface for rapid machine vision application development.
  • Automatic tuning of training hyperparameters for maximum ease of use by non-experts in AI (a manual mode is also available for experts).
  • Automatic generation of annotations via pre-trained models or semi-supervised training.
  • Masking of regions to exclude from inspection via ROI markers of multiple shapes.
  • Highly accelerated inference engine for optimal runtime speed on either GPU or CPU.
  • Continual Learning (also known as Lifelong Learning) in classification for further learning at runtime.
  • Location of small defects in high-resolution images via a tiling mechanism.
  • Easy integration with Sapera Processing and Sherlock vision software for runtime inference.
  • Assessment of AI models via visual tools such as heatmaps, loss-function curves, and the confusion matrix.
  • Full image data privacy: train and deploy AI models on a local PC.

Application Examples

Astrocyte significantly improves quality, productivity, and efficiency in X-ray medical imaging

The tiny size and random placement of fibers on X-ray detectors make them challenging and time-consuming to find with traditional methods or human inspection.

With Astrocyte, the X-ray solutions team was able to rapidly identify all defects and even outperform their operators.

Article originally published in Novus Light.

Tell Me More


Surface inspection on metal plates

Classification of good and bad metal sheets. Tiny scratches on the metal are detected and classified as bad samples; Astrocyte finds these small defects in high-resolution images of rough textures. Just a few tens of samples are required to train a model with good accuracy. Classification is used when both good and bad samples are available, while Anomaly Detection is used when only good samples are available.

Location/identification of wood knots

Localization and classification of various types of knots in wood planks. Astrocyte can robustly locate and classify small knots 10 pixels wide in high-resolution 2800 x 1024 images using the tiling mechanism, which preserves native resolution.
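
The tiling mechanism can be pictured with a minimal sketch (an illustration of the general idea, not Astrocyte's implementation): the high-resolution image is cut into fixed-size tiles at native resolution, each tile is inspected on its own, and the tile offsets map any detections back to full-image coordinates. The 512 x 512 tile size below is a hypothetical choice.

```python
import numpy as np

def tile_image(image: np.ndarray, tile_h: int = 512, tile_w: int = 512):
    """Split a high-resolution image into fixed-size tiles at native resolution.

    Returns (y_offset, x_offset, tile) tuples; the offsets let per-tile
    detections be mapped back to full-image coordinates. Border tiles are
    simply clipped to the image edges in this sketch (no padding).
    """
    height, width = image.shape[:2]
    tiles = []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            tiles.append((y, x, image[y:y + tile_h, x:x + tile_w]))
    return tiles

# A 2800 x 1024 plank image (width x height) yields 6 x 2 = 12 tiles of up to 512 x 512.
plank = np.zeros((1024, 2800), dtype=np.uint8)
print(len(tile_image(plank)))  # 12
```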



Detection/segmentation of vehicles

Detection and segmentation of various types of vehicles in outdoor scenes. Astrocyte outputs shapes in which each pixel is assigned a class. Applying a blob tool to the segmentation output allows shape analysis of the vehicles, as sketched below.
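
As a rough sketch of that blob-style shape analysis (a generic connected-component example using SciPy, not the Sapera or Sherlock blob tool itself), pixels of one class in the segmentation output can be grouped into regions and measured:

```python
import numpy as np
from scipy import ndimage

def class_blobs(class_map: np.ndarray, target_class: int):
    """Group connected pixels of one class into blobs and measure each one.

    class_map: 2-D array where each pixel holds a class index (segmentation output).
    Returns one dict per blob with its pixel area and bounding box (y0, x0, y1, x1).
    """
    labeled, _ = ndimage.label(class_map == target_class)
    blobs = []
    for label, region_slice in enumerate(ndimage.find_objects(labeled), start=1):
        region = labeled[region_slice]
        blobs.append({
            "area": int(np.count_nonzero(region == label)),
            "bbox": (region_slice[0].start, region_slice[1].start,
                     region_slice[0].stop, region_slice[1].stop),
        })
    return blobs

# Toy 6 x 8 class map with two separate regions of a (hypothetical) vehicle class 2.
demo = np.zeros((6, 8), dtype=np.int32)
demo[1:3, 1:4] = 2
demo[4:6, 5:8] = 2
print(class_blobs(demo, target_class=2))  # two blobs, 6 pixels each
```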

Deep Learning Architectures 

Astrocyte supports the following deep learning architectures.

Classification

Description
A generic classifier to identify the class of an image.

Typical Usage
Use in applications where multiple class identification is required. For example, it can be used to identify several classes of defects in industrial inspection. It can train in the field using continual learning.

Anomaly Detection

Description
A binary classifier (good/bad) trained on “good” images only.

Typical Usage
Use in defect inspection where simply finding defects is sufficient (no need to classify them). Useful on imbalanced datasets where many “good” images and only a few “bad” images are available. Does not require manual graphical annotations and is very practical on large datasets.
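
To make the “good images only” idea concrete, here is a minimal reconstruction-error sketch (a generic PCA-style approach for illustration; it does not describe Astrocyte's internal model): a low-rank model is fitted to flattened good samples, and a new sample is flagged as anomalous when its reconstruction error exceeds a threshold derived from the good samples themselves.

```python
import numpy as np

def reconstruction_error(samples: np.ndarray, mean: np.ndarray, components: np.ndarray):
    """Distance between samples and their projection onto the 'good' subspace."""
    centered = samples - mean
    reconstructed = centered @ components.T @ components
    return np.linalg.norm(centered - reconstructed, axis=1)

def fit_good_model(good: np.ndarray, n_components: int = 8):
    """Fit a PCA-style model on flattened 'good' images only (no defect labels needed)."""
    mean = good.mean(axis=0)
    _, _, vt = np.linalg.svd(good - mean, full_matrices=False)
    components = vt[:n_components]
    errors = reconstruction_error(good, mean, components)
    threshold = errors.mean() + 3 * errors.std()   # illustrative threshold rule
    return mean, components, threshold

def is_anomalous(sample: np.ndarray, mean, components, threshold) -> bool:
    """Binary good/bad decision: a large reconstruction error means 'bad'."""
    return bool(reconstruction_error(sample[None, :], mean, components)[0] > threshold)

# Toy usage with random stand-ins for flattened "good" images (one image per row).
rng = np.random.default_rng(0)
good_images = rng.normal(size=(200, 64))
model = fit_good_model(good_images)
print(is_anomalous(good_images[0], *model))         # a good sample: expected False
print(is_anomalous(good_images[0] + 10.0, *model))  # strongly shifted sample: expected True
```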

Object Detection

Description
An all-in-one localizer and classifier. Object detection finds the location of an object in an image and classifies it.

Typical Usage
Use in applications where the position of objects is important. For example, it can be used to provide the location and class of defects in industrial inspection.

Segmentation

Description 
A pixel-wise classifier. Segmentation associates each image pixel with a class. Connected pixels of the same class create identifiable regions in the image.  

Typical Usage 
Use in applications where the size and/or shape of objects are required. For example, it can be used to provide location, class, and shape of defects in industrial inspection. 

Astrocyte GUI

Creating Dataset

Generating image samples

  • Connect to a camera (Teledyne or third-party) or a frame-grabber to acquire live video
  • Save images while acquiring the live video stream (manually via a click or automatically)

Importing image samples

  • File selection based on folder layout, prefix/suffix and regular expressions.
  • Image file formats: PNG, JPG, BMP, GIF and TIFF.
  • Automatic (random) or manual distribution of images into training and validation datasets (see the split sketch after this list).
  • Adjustable image size for optimizing memory usage.
  • Creation of a mask via visual editing tools to mark portions of the image to be excluded.
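
A minimal sketch of the automatic (random) split referenced above (the 80/20 ratio, the fixed seed, and the folder path are hypothetical illustrations, not Astrocyte defaults):

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, train_fraction: float = 0.8, seed: int = 42):
    """Randomly split the image files in a folder into training and validation lists.

    A fixed seed keeps the split reproducible; the supported extensions mirror
    the formats listed above (PNG, JPG, BMP, GIF, TIFF).
    """
    extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif", ".tif", ".tiff"}
    images = sorted(p for p in Path(image_dir).iterdir() if p.suffix.lower() in extensions)
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_fraction)
    return images[:cut], images[cut:]

# Hypothetical usage:
# train_files, val_files = split_dataset("dataset/metal_plates")
```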

Importing/creating annotations (ground truth)

  • Manually create annotations with built-in visual editing tools: rectangle, circle, polygon, brush, …
  • Automatically create annotations using pre-built models.
  • Automatically create annotations using Semi-Supervised Object Detection (SSOD) applied to a partially annotated dataset.
  • Import annotations from user-defined text files with customizable parsing scheme.
  • Import annotations from common database formats such as Pascal VOC, MS COCO and KITTI.
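
As an illustration of the last item, a minimal reader for PASCAL VOC XML annotations might look like the following (a sketch of the public VOC format, not Astrocyte's importer; the file name is hypothetical):

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path: str):
    """Parse one PASCAL VOC XML file into (class_name, bounding_box) pairs.

    Bounding boxes are returned as (xmin, ymin, xmax, ymax) in pixels,
    which is how the VOC format stores them.
    """
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((name, (int(float(bb.findtext("xmin"))),
                             int(float(bb.findtext("ymin"))),
                             int(float(bb.findtext("xmax"))),
                             int(float(bb.findtext("ymax"))))))
    return boxes

# Hypothetical usage:
# read_voc_annotation("annotations/plank_0001.xml")
```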

Visualizing/editing dataset

  • Image display and zoom.
  • Annotation display as overlay graphics on image.
  • Annotation selection, deletion and editing.
  • Manual editing of annotations on individual samples.
  • Merging of datasets by saving them as a TAR file.

Training Model

  • Training on system GPU. See minimum requirements below.
  • Selection of device (when multiple devices available)
  • Choice of deep learning models for optimal accuracy.
  • Selection of preprocessing level: native, scaling or tiling.
  • Support of rectangular input images (preserving aspect ratio)
  • Access to hyperparameters such as learning rate, number of epochs, batch size, etc., for customization of training execution.
  • Hyperparameters pre-set to commonly used default values.
  • Image augmentation available for artificially increasing the number of training samples via transformations such as rotation, warping, lighting, and zoom (see the sketch after this list).
  • Training session cancelling and resuming.
  • Progress bar with training duration estimation.
  • Graph display of progress including accuracy and training loss at each iteration (epoch).
  • Automatic or manual setting of hyperparameters.
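
The augmentation item above can be sketched with a few SciPy transformations (illustrative ranges only; this is not Astrocyte's augmentation pipeline): each training image is randomly rotated, zoomed, and brightness-shifted to produce additional samples.

```python
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Create one augmented copy of a grayscale image.

    Random rotation, zoom, and brightness shift with illustrative ranges;
    note the zoom step changes the output size in this simple sketch.
    """
    out = ndimage.rotate(image, angle=rng.uniform(-10, 10), reshape=False, mode="nearest")
    out = ndimage.zoom(out, rng.uniform(0.9, 1.1), mode="nearest")
    out = out.astype(np.float32) + rng.uniform(-20, 20)   # brightness shift
    return np.clip(out, 0, 255).astype(np.uint8)

# Generate five extra samples from a single (blank, stand-in) image.
rng = np.random.default_rng(0)
base = np.zeros((256, 256), dtype=np.uint8)
extra_samples = [augment(base, rng) for _ in range(5)]
print(len(extra_samples), extra_samples[0].shape)
```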

Model Validation

  • Statistics on model training.
  • Metrics on model performance: accuracy, recall, mean average precision (mAP), and intersection over union (IoU); see the metric sketch after this list.
  • Model testing interface to validate the model on the training, validation, entire, or a user-defined dataset, with the option of reshuffling samples.
  • Display of confusion matrix (table comparing predictions against ground truth), with interactive selection of individual images.
  • Display of heatmaps for visualization of hot regions in classification.
  • Inference on sample images for testing inside Astrocyte.
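
The metrics named in the list reduce to short formulas; the sketch below shows intersection over union for two boxes and accuracy/recall computed from a 2 x 2 confusion matrix (standard definitions, shown here for illustration only, not Astrocyte code):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def accuracy_and_recall(tp, fp, fn, tn):
    """Accuracy and recall from the four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / float(tp + fp + fn + tn)
    recall = tp / float(tp + fn)
    return accuracy, recall

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))              # 25 / 175 ≈ 0.14
print(accuracy_and_recall(tp=40, fp=5, fn=10, tn=45))   # (0.85, 0.8)
```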

Model Export-Import

  • Proprietary model format compatible with Sapera Processing and Sherlock.
  • Model contains all information required for performing inference: model architecture, trained weights, metadata such as image size and format.
  • Management of multiple models, stored in Astrocyte's internal storage.
  • Model can be imported into user application via Sapera Processing or Sherlock.

Integration with Sapera Processing and Sherlock

  • Both Sapera Processing and Sherlock include an inference tool for each of the supported model architectures.
  • Model files are imported into the inference tool and ready for execution on live video stream.
  • The inference tool can be coupled with other image processing tools such as blob analysis, pattern matching, barcode reading, etc.
  • To be used in conjunction with Sapera LT for acquiring images from Teledyne DALSA cameras and frame-grabbers.
  • Examples available with source code.

Benchmarks

Inference time (ms) per model, by device:

Module              Dataset    Image Size        Input Size        RTX A2000   RTX 3070   RTX 3090   CPU1   CPU2   CPU3   CPU4
Anomaly Detection   Metal      2592 x 2048 x 1   1024 x 1024 x 1          37         21         15    665    500    764    302
Classification      Screw      768 x 512 x 1     768 x 512 x 1             5          3          2     64     50     50     54
Object Detection    Hardware   1228 x 920 x 3    512 x 512 x 3             7          4          4     19     17     34     23
Segmentation        Material   1500 x 1125 x 3   512 x 300 x 3            21         14         10    172    130    139    140

Module: AI model type
Dataset: Series of images on which the model was trained
Image Size: Size of original image
Input Size: Size of the image after resizing (just before entering the neural network)
Inference Time: Total execution time in milliseconds, including resizing and inference

CPU1: Intel Core-i7 7700K @ 4.2GHz
CPU2: Intel Core-i9 9900K @ 3.6GHz
CPU3: AMD Epyc 7252 @ 3.1GHz
CPU4: Intel Core-i9 12900K @ 3.2GHz

System Requirements

Common requirements (Inference and Training):
  • Operating system: Windows 10 or 11 (64-bit)
  • CPU: Intel® processor with EM64T technology
  • Memory: minimum 16 GB RAM (32 GB ideal)

Sapera Processing (Inference) GPU (optional):
  • An NVIDIA GPU for higher speed
  • Minimum 6 GB of GPU memory (recommended)
  • Driver version 516.31 or later
  • Suggestion: RTX 2000 and 3000 series

Astrocyte (Training) GPU:
  • An NVIDIA GPU
  • Minimum 8 GB of GPU memory
  • Driver 516.31 or later
  • Recommendations: minimum RTX 3070 or similar (8 GB); very good: RTX 3080 or similar (12 GB); best: RTX 3090 or similar (24 GB)

Additional software:
  • “Sapera AI SDK” license
  • (Optional) Sapera LT 8.71 or higher (for demos)

Videos

Downloads

  • Brochures: Astrocyte Brochure (PDF)
  • Datasheets: Astrocyte Datasheet (PDF)
  • Manuals: Astrocyte User Manual (PDF)

Learn more

Questions? Need more information? Contact our sales team to learn more today.

Contact us